Extending SANs Over TCP/IP


Extending storage area networks (SANs) over distance has become a necessity for enterprise networks. A single SAN island with one storage system cannot satisfy redundancy and disaster-recovery requirements. Enterprises need co-location facilities, redundant data centers and disaster-recovery sites to survive network outages, environmental events and other critical business interruptions. As a result, many enterprise customers architect disaster-recovery sites for data recovery and business continuity during anomalous situations and disasters. In addition, several draft standards and best-practice documents strongly recommend or require that financial institutions adhere to strict disaster-recovery guidelines.

As disaster-recovery solutions extend their reach to distant data centers, replicating, copying, migrating and vaulting data over TCP/IP from a main site to a remote site have become essential capabilities. With Cisco MDS switches, enterprises can design data-replication, data-copy or data-migration solutions using Cisco’s Fibre Channel over IP (FCIP) without building a separate transport network for Fibre Channel. Native Fibre Channel, by contrast, is limited to a few hundred kilometers by buffer-credit constraints and its requirement for a dedicated transport. FCIP can share existing bandwidth with Ethernet/IP traffic and avoids the cost of dedicated transports, such as optical networks. Cisco MDS SAN Extension solutions provide 32 MB of FCIP buffering, allowing disaster-recovery solutions to overcome the distance limitations imposed by Fibre Channel buffering. In a typical data-replication or data-copy environment, a storage system at the local site replicates its data to a remote storage system at a remote data center.

There are different methods of replication or data copying. These methods usually fall into one of the following categories:



  • Synchronous Data Replication: A data-replication method in which a write to the local storage system is, in turn, written by the local storage system to the remote storage system. The transaction is not complete from the host’s perspective until the data is written and acknowledged at both the local and remote storage systems. This type of replication ensures no data loss during an anomalous event. Synchronous replication requires very low latency, so FCIP is not always the first choice; optical solutions are generally preferred for the transport network. Nevertheless, deploying FCIP over optical transports carrying Ethernet at short distances is becoming more popular.
  • Asynchronous Data Replication: A data-replication method in which data on the remote storage array lags the local storage array by some amount of time. This data, although not in sync with the local storage, is still valid. In a disaster-recovery scenario some data may be lost, but only as much as the replication lags the real-time transactions, a lag that is configurable in many storage arrays. Because transactions are written to local storage without waiting for an acknowledgement from remote storage, the local host’s application performance is not hindered. Latency still matters with asynchronous replication, but far less than with synchronous replication, so asynchronous solutions are commonly deployed over FCIP.
  • Data Copying, Data Vaulting, etc.: A method in which data is copied to a remote storage system by a copy algorithm; data migration and tape backup fall into this category. It provides point-in-time recovery, but the recovery point is not deterministic: it depends on how fast the data is copied and when it is taken offline for copying. This category of data replication commonly uses FCIP.
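To see why synchronous replication is so latency-sensitive, consider propagation delay alone. Light travels through optical fiber at roughly 200,000 km/s, or about 5 µs per kilometer each way, so every synchronous write pays at least the round-trip propagation time before the remote acknowledgement can arrive. A small sketch (the 200 km distance is an illustrative assumption, not a figure from this article):

```python
# Approximate round-trip propagation delay for a synchronous replication link.
# Assumes ~200,000 km/s signal speed in optical fiber (a common rule of thumb).
FIBER_KM_PER_SEC = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay in milliseconds."""
    one_way_sec = distance_km / FIBER_KM_PER_SEC
    return one_way_sec * 2 * 1000

# Every synchronous write waits at least this long for the remote
# acknowledgement, before any switch, TCP or storage-controller
# overhead is added on top.
print(min_rtt_ms(200))  # hypothetical 200 km link
```

This floor grows linearly with distance, which is why synchronous replication is usually confined to metro distances while asynchronous replication and data copying are the common choices for long-haul FCIP.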


This article briefly introduces SAN extension using Cisco MDS switches with FCIP for the purpose of data replication. The topic is approached from a high-level point of view rather than a deep technical, marketing or comparative one.

Interconnecting SANs Using FCIP
Before delving into a discussion of TCP/IP and its effect on the throughput and latency of transporting storage over IP, let’s look at the basic configuration for interconnecting two Cisco MDS switches by building an extended inter-switch link (EISL) over an FCIP tunnel. An ISL is an inter-switch link in a storage fabric; an EISL is an extended ISL used for carrying multiple virtual SANs (VSANs). Cisco’s VSAN technology segments a physical fabric into multiple autonomous logical fabrics, so propagating multiple fabrics across a single EISL over IP has significant advantages.
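As a rough illustration of carrying multiple VSANs across one EISL, the following sketch defines two VSANs and allows both on a trunking FCIP interface. The VSAN numbers and names, and the interface number, are hypothetical values chosen for illustration, and exact syntax can vary by SAN-OS release:

```
! Hypothetical example -- VSAN 10/20 and interface fcip 1 are assumed values
vsan database
  vsan 10 name REPLICATION
  vsan 20 name BACKUP

interface fcip 1
  switchport mode E
  switchport trunk mode on
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20
```

With trunking enabled, the single FCIP link operates as an EISL and keeps each VSAN’s traffic and fabric services logically separate end to end.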

The first step in interconnecting SANs over FCIP is to configure the egress (outgoing) IP interfaces on the Cisco MDS switch. Aside from the management interface, any Ethernet interface on the IPS-4, IPS-8 or MPS/MDS9216i module is capable of FCIP with the correct license. The IP configuration of Gigabit Ethernet interfaces in SAN-OS is nearly identical to Cisco IOS. Here is an example of configuring an IP interface on a Cisco MDS switch:


MDS-1# config terminal
Enter configuration commands, one per line.  End with CNTL/Z.
MDS-1(config)# interface gigabitethernet 9/1
MDS-1(config-if)# ip address
MDS-1(config-if)# no shutdown

FCIP tunnels run on top of the Gigabit Ethernet interfaces. Before configuring an FCIP tunnel, the corresponding FCIP profile must be configured. An FCIP profile specifies the local IP address and the TCP parameters to be used on an FCIP tunnel. The FCIP interface configuration includes the peer IP address, the FCIP profile and other options such as TCP time-stamping, compression and write acceleration. Because FCIP profiles and tunnel interfaces are defined independently, multiple FCIP tunnels may share the same FCIP profile in a multipoint configuration.

The following example illustrates a minimum FCIP profile and tunnel configuration required to establish an EISL link between two SAN islands:




fcip enable

fcip profile 1
  ip address
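Putting the pieces together, a minimum working configuration on one switch typically pairs the profile with an FCIP interface that points at the peer. The following is a sketch only: the addresses 10.1.1.1 (local) and 10.1.1.2 (peer) and the profile and interface numbers are hypothetical, and the peer switch needs the mirror-image configuration:

```
! Sketch only -- addresses and numbering are hypothetical
fcip enable

fcip profile 1
  ip address 10.1.1.1

interface fcip 1
  use-profile 1
  peer-info ipaddr 10.1.1.2
  no shutdown
```

Once both sides are up, the FCIP interface forms the (E)ISL and the two fabrics join, subject to VSAN and zoning configuration.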



