Cisco MDS 9000 Family Data Mobility Manager Configuration Guide

Understanding DMM SAN Topologies

Table Of Contents

Understanding DMM SAN Topologies

Overview

FC-Redirect

DMM Topology Guidelines

Homogeneous SANs

Heterogeneous SANs

DMM Method 3 Topology

Supported Topologies in Method 3

Three-Fabric Configuration

Two-Fabric Configuration

One-Fabric Topology

Ports in a Server-Based Job


Understanding DMM SAN Topologies


Cisco MDS DMM is designed to support a variety of SAN topologies. The SAN topology influences the location of the SSM/MSM module and the DMM feature configuration. The following sections describe common SAN topologies and their implications for DMM:

Overview

FC-Redirect

DMM Topology Guidelines

Homogeneous SANs

Heterogeneous SANs

DMM Method 3 Topology

Ports in a Server-Based Job

Overview

Cisco DMM supports homogeneous SANs (all Cisco MDS switches), as well as heterogeneous SANs (a mixture of MDS switches and other vendor switches). In a heterogeneous SAN, you must connect the existing and new storage to Cisco MDS switches.

In both homogeneous and heterogeneous SANs, Cisco MDS DMM supports dual-fabric and single-fabric SAN topologies. Dual-fabric and single-fabric topologies both support single path and multipath configurations.

In a single path configuration, a migration job includes only one path (represented as an initiator/target port pair). In a multipath configuration, a migration job must include all paths (represented as two initiator/target port pairs).
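The path rule can be pictured with a small data sketch. The following Python snippet is illustrative only; the Path structure and the WWN values are hypothetical and are not part of DMM:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Path:
        """One migration path = one initiator (server HBA) / target (storage) port pair."""
        initiator_pwwn: str   # server HBA port WWN (hypothetical value below)
        target_pwwn: str      # existing storage port WWN (hypothetical value below)

    # Single path configuration: the job contains exactly one pair.
    single_path_job = [
        Path("10:00:00:00:c9:aa:aa:01", "50:06:01:60:bb:bb:01"),
    ]

    # Multipath configuration: the job must include every path to the LUN set,
    # typically one initiator/target pair per fabric (two pairs in total).
    multipath_job = [
        Path("10:00:00:00:c9:aa:aa:01", "50:06:01:60:bb:bb:01"),  # path through fabric 1
        Path("10:00:00:00:c9:aa:aa:02", "50:06:01:68:bb:bb:02"),  # path through fabric 2
    ]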

FC-Redirect

When a data migration job is in progress, all traffic (in both directions) sent between the server HBA port and the existing storage is intercepted and forwarded to the SSM/MSM, using the FC-Redirect capability.

FC-Redirect requirements for the SAN topology configuration include the following (an illustrative checklist sketch appears after the list):

The existing storage must be connected to a switch with FC-Redirect capability. FC-Redirect capability is available on MDS 9500 Series and MDS 9200 Series switches.

Server HBA ports may be connected to a switch with or without FC-Redirect capability.

The switches with FC-Redirect must be running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.

The server HBA port and the existing storage port must be zoned together. The default-zone policy must be configured as deny.

The SSM or MSM can be located anywhere in the fabric, as long as the FCNS database in the SSM or MSM switch has the required information about the server HBA ports and the existing storage ports. The SSM or MSM switch must be running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.
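The requirements above amount to a simple checklist. The following Python sketch walks that checklist; the function, its parameters, and the example values are hypothetical and are not a DMM or NX-OS API:

    def check_fc_redirect_prereqs(storage_switch_model, storage_switch_release_ok,
                                  ssm_switch_release_ok, zoned_together,
                                  default_zone_policy):
        """Return a list of unmet FC-Redirect prerequisites (empty list = ready)."""
        problems = []
        if not storage_switch_model.startswith(("MDS 95", "MDS 92")):
            problems.append("existing storage must connect to an FC-Redirect capable "
                            "switch (MDS 9500 or MDS 9200 Series)")
        if not storage_switch_release_ok:
            problems.append("storage-connected switch must run SAN-OS 3.2(1) or "
                            "NX-OS 4.1(1b) or a later release")
        if not ssm_switch_release_ok:
            problems.append("SSM/MSM switch must run SAN-OS 3.2(1) or "
                            "NX-OS 4.1(1b) or a later release")
        if not zoned_together:
            problems.append("server HBA port and existing storage port must be "
                            "zoned together")
        if default_zone_policy != "deny":
            problems.append("default-zone policy must be configured as deny")
        return problems

    # Example: a topology that meets every requirement returns an empty list.
    print(check_fc_redirect_prereqs("MDS 9509", True, True, True, "deny"))   # []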

The following examples show the server-to-storage packet flow when a data migration job is in progress. For clarity, the examples show the SSM or MSM and the existing storage connected to separate switches. The recommended practice is to connect the existing storage to the same switch as the SSM/MSM.

In Figure 3-1, the server HBA port is connected to switch A and the existing storage is connected to switch C. Both switches have FC-Redirect capability. The SSM/MSM is installed on switch B. All three switches are running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later.

Figure 3-1 Host Connected to FC-Redirect Switch

When the data migration job is started, FC-Redirect is configured on switch A to divert the server traffic to the SSM/MSM. FC-Redirect is configured on switch C to redirect the storage traffic to the SSM/MSM.

In Figure 3-2, the server HBA port is connected to switch A, which either does not have FC-Redirect capability or is not running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later. The existing storage is connected to switch C, which has FC-Redirect capability. The SSM/MSM is installed on switch B. Switches B and C are running SAN-OS 3.2(1) or NX-OS 4.1(1b) or later.

When the data migration job is started, FC-Redirect is configured on switch C to redirect the server and storage traffic to the SSM/MSM. This configuration introduces additional network latency and consumes additional bandwidth, because traffic from the server takes extra network hops (A to C, then C to B to reach the SSM/MSM, and B back to C to reach the storage). The recommended configuration (placing the SSM/MSM in switch C) avoids the increase in network latency and bandwidth consumption.

Figure 3-2 Host Not Connected to FC-Redirect Switch

DMM Topology Guidelines

When determining the provisioning and configuration requirements for DMM, note the following guidelines related to a SAN topology:

The existing and new storage must be connected to MDS switches.

Switches connected to the storage ports must be running MDS SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.

The SSM or MSM is supported on MDS 9500 series switches and MDS 9200 series switches. The switch must be running MDS SAN-OS 3.2(1) or NX-OS 4.1(1b) or later release.

DMM requires a minimum of one SSM or MSM in each fabric.

DMM does not support migration of logical volumes. For example, if the existing storage is a logical volume with three physical LUNs, DMM treats this as three LUN-to-LUN migration sessions.

If you plan to deploy DMM and FCIP write acceleration together, there are restrictions in the supported topologies. Contact Cisco for assistance with designing the DMM topology.

The minimum supported release for the MSM is NX-OS Release 4.1(1b).


Note In a storage-based migration, you may corrupt the storage if a new server port tries to access the storage ports after the migration has started (for example, if a server port is returned to service or a new server is brought online).


Homogeneous SANs

A homogeneous SAN contains only Cisco MDS switches. Most topologies fit the following categories:

Core-Edge—Hosts at the edge of the network, and storage at the core.

Edge-Core—Hosts and storage at the edge of the network, and ISLs between the core switches.

Edge-Core-Edge—Hosts and storage connected to opposite edges of the network and core switches with ISLs.

For all of the above categories, we recommend that you locate the SSM/MSM in the switch closest to the storage devices. Following this recommendation ensures that DMM introduces no additional network traffic during data migrations.

Figure 3-3 shows a common SAN topology, with servers at the edge of the network and storage arrays in the core.

Figure 3-3 Homogeneous SAN Topology

In a homogeneous network, you can locate the SSM/MSM on any DMM-enabled MDS switch in the fabric. We recommend installing the SSM/MSM in the switch connected to the existing storage. The new storage should be connected to the same switch as the existing storage. If the SSM/MSM is on a different switch from the storage, additional ISL traffic crosses the network during the migration (all traffic between the storage and the server is routed through the SSM/MSM).

Heterogeneous SANs

When planning Cisco MDS DMM data migration for a heterogeneous SAN, note the following guidelines:

The existing and new storage devices for the migration must be connected to MDS switches.

The path from the SSM/MSM to the storage-connected switch must be through a Cisco fabric.

Depending on the topology, you may need to make configuration changes prior to data migration.

DMM Method 3 Topology

DMM Method 3 is a derivative of DMM Method 2 (also called Asynchronous DMM). DMM Method 3 supports a dedicated migration fabric and is designed to address the problem of migrating data from an array port that is connected to a dedicated SAN separate from the production SAN.

Many IT organizations require data migration to a remote data center. Some organizations prefer to use a dedicated storage port (on the existing storage array) connected to a separate physical fabric. This fabric is called the migration or replication fabric because it is used for data migration as well as continuous data replication services.

The LUNs mapped to the existing storage port in the migration and remote SAN are also mapped to another storage port on the array that is connected to the production SAN and accessed by one or more servers. The servers may also access the storage from two production SANs for redundancy. In this topology, the migration SAN becomes the third SAN to which the existing storage array is connected. The new storage array is connected only to the migration SAN and may not have any ports on the production SAN(s). (See Figure 3-4.)

Figure 3-4 DMM Method 3 Topology

In the above topology, DMM Method 3 should be used to migrate data from the existing storage to the new storage in the replication and migration SAN. DMM Method 3 requires an SSM or MSM in each production SAN (with support for a maximum of two production SANs) and an SSM/MSM in the migration SAN. A DMM Method 3 job therefore uses three SSMs or MSMs, unlike Method 1 and Method 2 jobs, which use a maximum of two. In Method 3, the SSM/MSM in the migration SAN is responsible for executing the sessions in the DMM job and copying the data from the existing storage to the new storage. The SSM/MSMs in the production SANs are responsible for tracking the server writes to the existing storage. No server writes are expected in the migration SAN.

Server writes in the production SANs are logged by the SSM or MSM in each fabric, which maintains a Modified Region Log (MRL) for every LUN being migrated. This MRL is the same as the MRL maintained in DMM Method 2. The SSM/MSM in the migration SAN does not maintain an MRL for the LUN, because no server writes to the existing storage LUN are expected in the migration SAN. Instead, the SSM/MSM in the migration SAN retrieves the MRLs for a LUN from both production SANs and performs a union of the MRLs to create a superset of all blocks modified on the LUN through paths in either production SAN. The SSM or MSM then copies all the modified regions from the existing storage LUN to the new storage LUN in the migration SAN. This process is repeated until the administrator is ready to finish the DMM job and perform a cutover. The finish operation on a Method 3 job places all LUNs offline and performs a final pass over the combined MRL to synchronize the existing and new storage LUNs in each session. This cutover process is the same as the cutover operation in DMM Method 2.
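The MRL handling described above reduces to a union of modified block ranges. The following Python sketch illustrates the idea only; the range format and function name are hypothetical, and DMM's internal MRL representation is not exposed:

    def merge_mrls(*mrls):
        """Union the Modified Region Logs retrieved from each production SAN.

        Each MRL is a list of (start_block, end_block) ranges (end exclusive)
        recording regions of the existing storage LUN that servers have written.
        The result is the superset of modified blocks that the migration-SAN
        SSM/MSM must copy again from the existing LUN to the new LUN.
        """
        regions = sorted(r for mrl in mrls for r in mrl)
        merged = []
        for start, end in regions:
            if merged and start <= merged[-1][1]:      # overlaps or touches the previous range
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    # MRLs for one LUN, one from each production SAN
    mrl_fabric_a = [(0, 128), (1024, 1152)]
    mrl_fabric_b = [(64, 256), (4096, 4224)]
    print(merge_mrls(mrl_fabric_a, mrl_fabric_b))
    # [(0, 256), (1024, 1152), (4096, 4224)]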

Supported Topologies in Method 3

Three configurations are available when you configure a migration job using Method 3. The configurations are described in the following sections:

Three-Fabric Configuration

Two-Fabric Configuration

One-Fabric Topology

Three-Fabric Configuration

The three-fabric topology supports two production fabrics and one migration fabric. Each fabric has one VSAN, as shown in Figure 3-5.

Figure 3-5 Three-Fabric Topology

The production fabric consists of the following:

Two fabrics, Fabric A and Fabric B

Two VSANs, one in each fabric: VSAN 10 in Fabric A and VSAN 20 in Fabric B

Two DMM modules, one in each fabric: DMM Module 1 and DMM Module 2

Ports for the application server and the existing storage

Application server port and storage port in the same VSAN for each fabric

The VSANs in the two fabrics can have different numbers.

The migration fabric consists of the following:

One fabric, Fabric C

One VSAN, VSAN 15

One DMM module, DMM Module 3

Existing storage port and new storage port in the same VSAN

The migration fabric VSAN can have a different number from the production fabric VSAN.
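The three-fabric layout can be summarized as a small data sketch; the structure below simply restates the elements listed above and is not a DMM configuration format:

    # Illustrative summary of the three-fabric Method 3 topology (Figure 3-5).
    three_fabric_topology = {
        "production": {
            "Fabric A": {"vsan": 10, "dmm_module": "DMM Module 1",
                         "ports": ["application server port", "existing storage port"]},
            "Fabric B": {"vsan": 20, "dmm_module": "DMM Module 2",
                         "ports": ["application server port", "existing storage port"]},
        },
        "migration": {
            "Fabric C": {"vsan": 15, "dmm_module": "DMM Module 3",
                         "ports": ["existing storage port", "new storage port"]},
        },
    }
    # The VSAN numbers are examples only; production and migration VSANs may differ.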

Two-Fabric Configuration

The two-fabric configuration has one production fabric and one migration fabric; the production fabric carries one or two production VSANs.

This section covers the following sample two-fabric configurations:

Two-Fabric Topology, Type 1

Two-Fabric Topology, Type 2

Two-Fabric Topology, Type 3

Two-Fabric Topology, Type 1

Consider a two-fabric topology as shown in Figure 3-6. The topology has two fabrics: one production fabric and one migration fabric.

Figure 3-6 Two-Fabric Topology, Type 1

The production fabric consists of the following:

One fabric, Fabric A

One VSAN, VSAN 10 in Fabric A

One DMM module, DMM Module 1

Ports for the application server and the existing storage

Application server and existing storage ports in the same VSAN

The migration fabric consists of the following:

One fabric, Fabric C

One VSAN, VSAN 15 in Fabric C

One DMM module, DMM Module 2

Existing storage and new storage ports in the same VSAN

The migration fabric VSAN can have a different number from the production fabric VSAN.

Two-Fabric Topology, Type 2

Consider a two-fabric topology as shown in Figure 3-7. The topology has one production fabric carrying two VSANs and one migration fabric.

Figure 3-7 Two-Fabric Topology, Type 2

The production fabric consists of:

One fabric, Fabric A

Two VSANs, VSAN 10 and VSAN 20

Two DMM modules, DMM Module 1 for VSAN 10 and DMM Module 2 for VSAN 20

Ports for the application server and the existing storage

Application server port and existing storage port in each VSAN

The migration fabric consists of:

One fabric, Fabric C

One VSAN, VSAN 15

One DMM module, DMM Module 3

Existing storage port and new storage port in the same VSAN

The migration fabric VSAN number can be different from the production fabric VSAN number.

Two-Fabric Topology, Type 3

Consider a sample two-fabric topology as shown in Figure 3-8. The topology has one production fabric carrying two VSANs and one migration fabric. Each fabric has one DMM module.

Figure 3-8 Two-Fabric Topology, Type 3

The production fabric consists of:

One fabric, Fabric A

Two VSANs, VSAN 10 and VSAN 20

One DMM module, DMM Module 1

Ports for the application server and the existing storage

Application server port and existing storage port in the same VSAN

The migration fabric consists of:

One fabric, Fabric C

One VSAN, VSAN 15

One DMM module, DMM Module 2

Existing storage port and new storage port in the same VSAN

The migration VSAN number can be different from the production VSAN numbers.

One-Fabric Topology

In the single fabric configuration, there are two production VSANs and one migration VSAN in one fabric. This section covers the following topics:

One-Fabric Topology, Type 1

One-Fabric Topology, Type 2

One-Fabric Topology, Type 1

Consider a one-fabric topology as shown in Figure 3-9.

Figure 3-9 One-Fabric Topology, Type 1

The production VSAN consists of:

Two VSANs, VSAN 10 and VSAN 20

Two DMM modules, DMM Module 1 for VSAN 10 and DMM Module 2 for VSAN 20

Ports for the application server and the existing storage

Application server port and storage port in the same VSAN

The migration VSAN consists of:

One VSAN, VSAN 15

One DMM module, DMM Module 3

Existing storage port and new storage port in the same VSAN

One-Fabric Topology, Type 2

Consider a one-fabric topology as shown in Figure 3-10.

Figure 3-10 One-Fabric Topology, Type 2

The production VSAN consists of:

Two VSANs, VSAN 10 and VSAN 20

One DMM module for both the VSANs, DMM Module 1

Ports for the application server and the existing storage

Application server port and existing storage port in the same VSAN

The migration VSAN consists of:

One VSAN, VSAN 15

One DMM module, DMM Module 2


Note The migration VSAN and the production VSAN should have different DMM modules.


Existing storage port and new storage port in the same VSAN

Ports in a Server-Based Job

This section provides guidelines for configuring server-based migration jobs.

When creating a server-based migration job, you must include all possible paths from the host to the LUNs being migrated. All writes to a migrated LUN need to be mirrored in the new storage until the job is destroyed, so that no data writes are lost. Therefore, all active ports on the existing storage that expose the same set of LUNs to the server must be added to a single data migration job.

In a multipath configuration, two or more active storage ports expose the same set of LUNs to two HBA ports on the server (one initiator/target port pair for each path). Multipath configurations are supported in dual-fabric topologies (one path through each fabric) and in single-fabric topologies (both paths through the single fabric).

In a single-path configuration, only one active storage port exposes the LUN set to the server. The migration job includes one initiator and target port pair (DMM does not support multiple servers accessing the same LUN set).

The following sections describe how to apply the rules to various configurations:

Single LUN Set, Active-Active Array

Multiple LUN Set, Active-Active Arrays

Single LUN Set, Active-Passive Array

Single LUN Set, Active-Active Array

In the example shown in Figure 3-11, the server accesses three LUNs over Fabric 1 using storage port ES1. The server accesses the same LUNs over Fabric 2 using storage port ES2.

Both storage ports (ES1 and ES2) must be included in the same data migration job, as both ports are active and expose the same LUN set.

Figure 3-11 Single LUN Set, Active-Active Array

You create a data migration job with the following configuration:

Server Port    Existing Storage Port    New Storage Port
H1             ES1                      NS1
H2             ES2                      NS2



Note If the example in Figure 3-11 showed multipathing over a single fabric SAN, there would be no difference in the data migration job configuration.


Multiple LUN Set, Active-Active Arrays

In the example shown in Figure 3-12, the server accesses three LUNs over Fabric 1 using storage port ES1. The server accesses the same LUNs over Fabric 2 using storage port ES2. The server accesses three different LUNs over Fabric 1 using storage port ES3, and accesses the same LUNs over Fabric 2 using storage port ES4.

Figure 3-12 Multiple LUN Set, Active-Active Arrays

You need to create two data migration jobs, because the server has access to two LUN sets on two different storage ports. You need to include two storage ports in each data migration job, as they are active-active multipathing ports.
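A minimal sketch of this grouping rule follows; the function and the port/LUN values are hypothetical (they mirror Figure 3-12) and are not a DMM API:

    from collections import defaultdict

    def group_ports_into_jobs(port_lun_map):
        """Group active existing-storage ports by the LUN set they expose.

        Every port that exposes the same LUN set to the server belongs in the
        same migration job; a different LUN set means a separate job.
        """
        jobs = defaultdict(list)
        for port, luns in sorted(port_lun_map.items()):
            jobs[frozenset(luns)].append(port)
        return jobs

    # Active-active array from Figure 3-12: two LUN sets on two port pairs.
    ports = {
        "ES1": {1, 2, 3}, "ES2": {1, 2, 3},
        "ES3": {7, 8, 9}, "ES4": {7, 8, 9},
    }
    for lun_set, job_ports in group_ports_into_jobs(ports).items():
        print(job_ports, "-> one job with", len(lun_set), "sessions (one per LUN)")
    # ['ES1', 'ES2'] -> one job with 3 sessions (one per LUN)
    # ['ES3', 'ES4'] -> one job with 3 sessions (one per LUN)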

One migration job has the following configuration:

Server Port    Existing Storage    New Storage
H1             ES1                 NS1
H2             ES2                 NS2


This job includes three data migration sessions (for LUNs 1, 2, and 3).

The other migration job has the following configuration:

Server Port    Existing Storage    New Storage
H1             ES3                 NS3
H2             ES4                 NS4


This job includes three data migration sessions (for LUNs 7, 8, and 9).

Single LUN Set, Active-Passive Array

In an active-passive array, the LUNs exposed by a storage port may be active or passive.

Example 1: Each controller has two active ports

In the example shown in Figure 3-13, the server accesses a single LUN set. However, not all LUNs are active on a single storage port. The active-passive array in the example has two controllers, each with two ports. LUN 0 and LUN 1 are active on ES1 and ES2. LUN 2 and LUN 3 are active on ES3 and ES4.

Logically, the server sees two active LUN sets that are accessed from two different storage ports. Each storage port is paired for multipathing.

Figure 3-13 Example 1: Single LUN Set, Active-Passive Array

The server accesses LUN 0 and LUN 1 over Fabric 1 using storage port ES1. The server accesses the same LUNs over Fabric 2 using storage port ES2. The server accesses LUN 2 and LUN 3 over Fabric 1 using storage port ES3, and accesses the same LUNs over Fabric 2 using storage port ES4.

You need to create two data migration jobs, because the server has access to two LUN sets over two different storage ports. Each of the data migration jobs includes two storage ports, because both ports access the active LUNs on the storage.

Only the active LUNs and associated storage ports are included in each job (LUNs 0 and 1 in one job, and LUNs 2 and 3 in the other job).


Note You can use the Server Lunmap Discovery (SLD) tool to see the LUNs that are active on each port of an active-passive array.



Note In Cisco DMM, if a data migration job is configured for an Active-Passive array, only the paths on the active controller of the storage are included as part of the job. As a result, if a LUN Trespass has occurred due to a controller failover, the host I/Os on the new path to the storage are not captured by DMM and they are not applied to the new storage. If a LUN trespass or controller-failover occurs during migration, destroy the job and recreate it to perform the migration again to ensure that the old and new storage are synchronized.


One migration job has the following configuration:

Server Port    Existing Storage    New Storage
H1             ES1                 NS1
H2             ES2                 NS2


This job includes two data migration sessions (for LUNs 0 and 1).

The other migration job has the following configuration:

Server Port    Existing Storage    New Storage
H1             ES3                 NS3
H2             ES4                 NS4


This job includes two data migration sessions (for LUNs 2 and 3).

Example 2: Each controller has only one active port

In the example shown in Figure 3-14, the server accesses a single LUN set. However, not all LUNs are active on a single storage port. The active-passive array in the example has two controllers, each with a single port. LUN 0 and LUN 1 are active on ES1. LUN 2 and LUN 3 are active on ES2.

Logically, the server sees two active LUN sets that are accessed from different storage ports.

Figure 3-14 Example 2: Single LUN Set, Active-Passive Array

The server accesses LUN 0 and LUN 1 over Fabric 1 using storage port ES1. The server accesses LUN 2 and LUN 3 over Fabric 2 using storage port ES2.

You need to create two data migration jobs, because the server has access to two LUN sets over two different storage ports. Each of the data migration jobs includes the ports from a single fabric.

One migration job has the following configuration:

Server Port    Existing Storage    New Storage
H1             ES1                 NS1


The other migration job has the following configuration:

Server Port    Existing Storage    New Storage
H2             ES2                 NS2