VMDC DCI 1.0 DG
VMDC DCI Design

Table of Contents

VMDC DCI Design

Data Center Fabric Design

FabricPath Terminology

FabricPath Topologies

FabricPath “Typical Data Center” Model

Layer 3 Design

Services

Tenancy Models

LAN Extension Options for Multi-Site Topologies

OTV Design Considerations

Nexus 1000v Virtual Switch Metro Extensions

Compute

Storage

Storage Design Constraints

Zero RPO and Near-Zero RTO Using NetApp MetroCluster

MetroCluster Design with FCoE Frontend

Network Connectivity for Storage Access

SAN Design Details

Datastore Layout

Less Stringent RTO/RPO Protection Using NetApp SnapMirror

VMware Redundancy and Workload Mobility Options

VMware Workload Mobility Design

VMDC DCI Design

The Virtualized Multiservice Data Center (VMDC) architecture is based on the foundational design principles of modularity, high availability (HA), differentiated service support, secure multi-tenancy, and automated service orchestration, as shown in Figure 3-1. These design principles provide streamlined turn-up of new services, maximized service availability, resource optimization, facilitated business compliance, and support for self-service IT models. These benefits maximize operational efficiency and enable private and public cloud providers to focus on their core business objectives. This VMDC DCI release builds upon the design principles that have been previously validated and deployed at large scale in both enterprises and service providers. In addition, VMDC DCI extends these critical design principles to operate across multi-site topologies spanning metro and geo distances.

Figure 3-1 VMDC Design Principles

 

Modularity—Unstructured growth is at the root of many operational and CAPEX challenges for data center administrators. Defining standardized physical and logical deployment models is the key to streamlining operational tasks such as moves, adds and changes, and troubleshooting performance issues or service outages. VMDC reference architectures provide blueprints for defining atomic units of growth within the data center, called PoDs.

High Availability—The concept of public and private “Cloud” is based on the premise that the data center infrastructure transitions from a cost center to an agile, dynamic platform for revenue-generating services. In this context, maintaining service availability is critical. VMDC reference architectures are designed for optimal service resilience, with no single point of failure for the shared (“multi-tenant”) portions of the infrastructure. As a result, great emphasis is placed upon availability and recovery analysis during VMDC system validation. VMDC DCI extends the validated design to support business continuity and application workload mobility across multi-site topologies.

Differentiated Service—Generally, bandwidth is plentiful in the data center infrastructure. However, clients may need to remotely access their applications via the Internet or some other type of public or private WAN. Typically, WANs are bandwidth bottlenecks. VMDC provides an end-to-end QoS framework for service tuning based upon application requirements. VMDC DCI extends this end-to-end QoS framework across multi-site topologies.

Multi-tenancy—As data centers transition to Cloud models, and from cost centers to profit centers, services will naturally broaden in scope, stretching beyond physical boundaries in new ways. Security models must also expand to address vulnerabilities associated with increased virtualization. In VMDC, “multi-tenancy” is implemented using logical containers, also called “Cloud Consumers,” that are defined in these new, highly virtualized and shared infrastructures. These containers provide security zoning in accordance with Payment Card Industry (PCI), Federal Information Security Management Act (FISMA), and other business and industry standards and regulations. VMDC is certified for PCI and FISMA compliance. VMDC DCI extends multi-tenancy and security constructs across multi-site environments.

Service Orchestration—Industry pundits note that the difference between a virtualized data center and a “cloud” data center is the operational model. The benefits of the cloud – agility, flexibility, rapid service deployment, and streamlined operations – are achievable only with advanced automation and service monitoring capabilities. The VMDC reference architectures include service orchestration and monitoring systems in the overall system solution. This includes best-of-breed solutions from Cisco (for example, Cisco Intelligent Automation for Cloud) and partners, such as BMC and Zenoss.

The following sections provide design guidance to extend each element of the application environment across multi-site topologies. As shown in Figure 3-2, the extended application environment includes:

  • WAN Connectivity and Multi-site LAN Extensions
  • Data Center Fabric Networking to implement tenancy, network containers, and QoS
  • L4-L7 Services to implement physical/virtual services including security and load balancing
  • Hypervisors and Virtual Networking to implement workload migrations and virtual switching
  • Compute resources spanning multiple sites
  • Storage resources to implement multi-site clusters and data replication

Figure 3-2 DCI Extensions Across the Application Environment

 

Data Center Fabric Design

VMDC DCI leverages FabricPath as the Unified Data Center fabric. FabricPath brings the stability and scalability of routing to Layer 2 (L2), supporting the creation of simple, scalable, and efficient L2 domains that apply to many network scenarios. Because traffic forwarding leverages the Intermediate System to Intermediate System (IS-IS) protocol rather than Spanning Tree Protocol (STP), the bisection bandwidth of the network is expanded, facilitating data center-wide workload mobility.

See the brief primer on FabricPath technology for details.

FabricPath benefits include:

Simplified Network, Reducing Operating Expenses

  • FabricPath is simple to configure. The only necessary configuration consists of distinguishing core ports, which link the switches, from edge ports, to which end devices are attached (see the configuration sketch after this list). No parameters need to be tuned to achieve operational status, and switch addresses are assigned automatically.
  • One control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. Networks designed using FabricPath require less combined configuration than equivalent networks based on STP, further reducing the overall management needed for the solution.
  • Static network designs make assumptions about traffic patterns and the locations of servers and services. If, as often happens over time, those assumptions become incorrect, complex redesign can be necessary. A fabric switching system based on FabricPath can be easily expanded as needed with additional access nodes in a plug and play manner, with minimal operational impact.
  • Switches that do not support FabricPath can still be attached to the FabricPath fabric in a redundant way without resorting to STP.
  • FabricPath L2 troubleshooting tools provide parity with those currently available in the IP community for non-FabricPath environments. For example, the Ping and Traceroute features now offered at L2 with FabricPath can measure latency and test a particular path among the multiple equal-cost paths to a destination within the fabric.
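
As a concrete illustration of the core/edge port distinction described above, the following is a minimal FabricPath configuration sketch in NX-OS; the switch ID, interface numbers, and VLAN range are illustrative assumptions rather than values from the validated topology.

  ! Enable FabricPath (install step applies to the default VDC on Nexus 7000)
  install feature-set fabricpath
  feature-set fabricpath
  fabricpath switch-id 11

  ! Core ports: links toward other FabricPath switches
  interface ethernet 1/1-2
    switchport mode fabricpath

  ! Edge port: link toward servers or a classical Ethernet switch
  interface ethernet 1/10
    switchport mode trunk
    switchport trunk allowed vlan 100-110

  ! VLANs carried across the fabric are placed in FabricPath mode
  vlan 100-110
    mode fabricpath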

Reliability Based on Proven Technology

  • Although FabricPath offers a plug-and-play user interface, its control protocol is built on top of the powerful IS-IS routing protocol, an industry standard that provides fast convergence and is proven to scale in the largest service provider (SP) environments.
  • Loop prevention and mitigation is available in the data plane, helping ensure safe forwarding unmatched by any transparent bridging technology. FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a reverse-path forwarding (RPF) check is applied.

Efficiency and High Performance

  • With FabricPath, equal-cost multipath (ECMP) protocols used in the data plane can enable the network to find optimal paths among all the available links between any two devices. First-generation hardware supporting FabricPath can perform 16-way ECMP, which, when combined with 16-port 10 gigabits per second (Gbps) port-channels, represents bandwidth of up to 2.56 terabits per second (Tbps) between switches.
  • With FabricPath, frames are forwarded along the shortest path to their destination, reducing the latency of the exchanges between end stations compared to an STP-based solution.
  • At the edge of the fabric, FabricPath needs to learn only a subset of the MAC addresses present in the network, enabling massive scalability of the switched domain.

FabricPath Terminology

FabricPath comprises two types of nodes: spine nodes and leaf nodes. A spine node is one that connects to other switches in the fabric and a leaf node is one that connects to servers. These terms are useful in greenfield scenarios but may be vague for migration situations, where one has built a hierarchical topology and is accustomed to using traditional terminology to describe functional roles.

In this document, we expand our set of terms to correlate FabricPath nodes and functional roles to hierarchical network terminology:

  • Aggregation-Edge—A FabricPath node that sits at the “edge” of the fabric, corresponding to an aggregation node in a hierarchical topology.
  • Access-Edge—A FabricPath node that sits at the edge of the fabric, corresponding to an access node in a hierarchical topology.

These nodes may perform L2 and/or L3 functions. At times, we also refer to an L3 spine or an L3 edge node to clarify the location of Layer 2/Layer 3 boundaries and to distinguish between nodes performing Layer 3 functions and those performing L2-only functions.

FabricPath Topologies

FabricPath can be implemented in a variety of network designs, from full-mesh to ring topologies. In VMDC 3.0.X design and validation, the following DC design options, based on FabricPath, were considered:

  • Typical Data Center Design—This model represents a starting point for FabricPath migration, where FabricPath simply replaces older Layer 2 resilience and loop-avoidance technologies, such as virtual port channel (vPC) and STP. This design assumes that the existing hierarchical topology, featuring pairs of core, aggregation, and access switching nodes, remains in place and that FabricPath provides L2 multipathing.
  • Switched Fabric Data Center Design—This model represents horizontal expansion of the infrastructure to leverage improved resilience and bandwidth, characterized by a Clos architectural model.
  • Extended Switched Fabric Data Center Design—This model assumes further expansion of the data center infrastructure fabric for inter-PoD or inter-building communication.

These are discussed in detail in VMDC 3.0 documentation: The Design Guide is publicly available, while the Implementation Guide is available to partners, and to Cisco customers under NDA.

While the logical containers discussed in VMDC DCI may be implemented over traditional classical Ethernet (vPC) or FabricPath designs, this release is based on the Typical Data Center FabricPath design option previously validated in VMDC 3.0/3.0.1.

FabricPath “Typical Data Center” Model

A Typical Data Center design is a two-tier FabricPath design as shown in Figure 3-3. VMDC architectures are built around modular building blocks called PoDs. Each PoD uses a localized Services attachment model. In a classical Ethernet PoD, vPCs handle L2 switching, providing an active-active environment that does not depend on STP, but converges quickly after failures occur. In contrast, Figure 3-3 shows a VMDC PoD with FabricPath as a vPC replacement.

Figure 3-3 Typical Data Center Design

 

From a resilience perspective, a vPC-based design is sufficient at this scale, although there are other benefits of using FabricPath, including:
  • FabricPath is simple to configure and manage. There is no need to identify a pair of peers or configure port channels. Nevertheless, port channels can still be leveraged in FabricPath topologies if needed.
  • FabricPath is flexible. It does not require a particular topology, and functions even if the network is cabled for the classic triangle vPC topology. FabricPath can accommodate any future design.
  • FabricPath does not use or extend STP. Even a partial introduction of FabricPath benefits the network because it segments the span of STP.
  • FabricPath can be extended easily without degrading operations. Adding a switch or a link in a FabricPath-based fabric does not result in lost frames. Therefore, it is possible to start with a small network and extend it gradually, as needed.
  • FabricPath increases the pool of servers that are candidates for VM mobility and thereby enables more efficient server utilization.

Note Certain application environments, especially those that generate high levels of broadcast, may not tolerate extremely large Layer 2 environments.


Layer 3 Design

VMDC DCI will follow the design of VMDC 3.0/3.0.1 and will use a combination of dynamic and static routing to communicate reachability information across the Layer 3 portions of the infrastructure. In this design, dynamic routing is achieved using OSPF as the IGP. The Core routers are OSPF Area Border Routers (ABRs) connecting to OSPF Area 0 in the IP Core and the NSSA area within the data center. To scale IP prefix tables, aggregation-edge nodes are placed in stub areas, with the aggregation-edge node advertising a default route (Type 7) for reachability. Service appliances (ASA Firewall and Citrix SDX SLB) are physically connected directly to the aggregation-edge nodes; reachability to/from these appliances is communicated via static routes. In the case of clustered ASA firewalls, for traffic from the ASA(s) to the Nexus 7000 aggregation-edge nodes, a default static route points to the HSRP VIP on the Nexus 7000, while for traffic from the Nexus 7000 aggregation-edge to the ASA, a static route on the Nexus 7000 for server subnets points to the ASA outside IP interface address.
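
A simplified sketch of this routing arrangement on a Nexus 7000 aggregation-edge node is shown below. The NSSA area number (10) comes from the services discussion later in this chapter; the addresses, interface, and route-map/prefix-list names are illustrative assumptions, and per-tenant VRF configuration is omitted for brevity.

  feature ospf
  feature interface-vlan

  router ospf 1
    area 10 nssa

  ! Routed interface toward the core ABRs
  interface Vlan2001
    no shutdown
    ip address 10.10.1.2/24
    ip router ospf 1 area 10

  ! Static route for a server subnet behind the ASA outside interface
  ip route 10.100.10.0/24 10.10.20.10

  ! Redistribute the static appliance routes into the NSSA area
  ip prefix-list SERVER-SUBNETS seq 5 permit 10.100.10.0/24
  route-map STATIC-TO-OSPF permit 10
    match ip address prefix-list SERVER-SUBNETS
  router ospf 1
    redistribute static route-map STATIC-TO-OSPF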

Since VMDC DCI will use the “Typical Data Center” design, the Citrix SDX SLB appliance is configured in one-arm mode. This has several key benefits, especially in multi-site scenarios:

  • One-arm mode limits the extension of FabricPath VLANs to the appliances
  • One-arm mode keeps VLAN ARP entries off the SDX SLB
  • The port-channel attachment method allows for a separation of failure domains.
  • Source-NAT on the SDX SLB ensures symmetric routing and a return path for moved workloads. This is especially important for DCI designs that span multiple sites.

VRF-lite is implemented on the aggregation-edge nodes and provides a unique per-tenant VRF. This design secures and isolates private tenant applications and zones via dedicated routing and forwarding tables. Figure 3-4 shows the Layer 3 implementation for the Typical Data Center design and describes connections for a single tenant.

Figure 3-4 Layer 3 Connectivity Design

 

 

VMDC DCI uses the Typical Data Center design featuring a two-node Layer 3 spine (aka aggregation-edge nodes). In this model, active/active gateway routing is enabled through the use of vPC+ on the inter-Spine (FabricPath) peer-link. This creates a single emulated switch from both spine nodes. HSRP thus announces the virtual MAC of the emulated switch ID, enabling dual-active paths from each access-edge switch device, serving to optimize resiliency and throughput, while providing for efficient East/West routing.
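
A condensed vPC+ sketch for one of the two aggregation-edge nodes is shown below; the domain ID, emulated switch ID, peer-keepalive addressing, and SVI addressing are illustrative assumptions. The fabricpath switch-id configured under the vPC domain is what creates the emulated switch shared by both spines.

  feature vpc
  feature hsrp
  feature interface-vlan

  vpc domain 10
    peer-keepalive destination 192.168.0.2 source 192.168.0.1
    fabricpath switch-id 1000

  ! The vPC+ peer-link is a FabricPath core port
  interface port-channel1
    switchport mode fabricpath
    vpc peer-link

  ! HSRP gateway; with vPC+ both peers actively forward traffic sent to the vMAC
  interface Vlan100
    no shutdown
    ip address 10.100.0.2/24
    hsrp 1
      ip 10.100.0.1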

Services

Design considerations for the services components within the Cloud data center infrastructure are described below.

The Citrix NetScaler Software Load Balancer (SLB) and ASA 5585 firewall appliances are used in a Typical DC design to provide load balancing and front-end/first-tier firewalling. The VMDC DCI architecture utilizes clustered ASA Firewalls (Release 9.0+). This feature serves two functions: enhanced resiliency and capacity/throughput expansion. Up to eight Cisco ASA 5585-X or 5580 Adaptive Security Appliance firewall modules may be joined in a single cluster to deliver up to 128 Gbps of multiprotocol throughput (300 Gbps maximum) and more than 50 million concurrent connections. This is achieved via the Cisco Cluster Link Aggregation Control Protocol (cLACP), which enables multi-system ASA clusters to function and be managed as a single entity. This provides significant benefits in terms of streamlined operation and management, in that firewall policies pushed to the cluster get replicated across all units within the cluster, while the health, performance, and capacity statistics of the entire cluster may be managed from a single console.

Clustered ASA appliances can operate in routed, transparent, or mixed-mode. However, all members of the cluster must be in the same mode. Clustered ASA appliances in this system release will be deployed and validated in routed mode.

Transparent mode deployment considerations are discussed in the VMDC white paper.

Characteristics of the appliance-based service attachment as implemented in the Typical DC model include:

  • VMDC DCI uses a vPC attachment from clustered ASAs to Nexus 7000 aggregation-edge nodes to provide enhanced resiliency. More specifically, one vPC (across two clustered ASAs) to the N7k aggregation-edge nodes is utilized for data traffic, and multiple port-channels per ASA (to vPCs on the Nexus 7000 aggregation-edge nodes) are used for communication of cluster control link (CCL) traffic. Similarly, the Citrix SDX SLBs use vPC connections per SDX appliance to both redundant aggregation-edge nodes to provide SLB resiliency.
  • The Citrix SDX SLB is in “one-arm” mode to optimize traffic flows for load balanced and non-load balanced traffic. This limits the extension of FabricPath VLANs to the appliances, and keeps the VLAN ARP entries off the SDX. Source-NAT on the SDX SLB ensures symmetric routing and a return path for moved workloads extended across the multiple sites.
  • Active/Active failover between redundant (non-clustered) appliances is achieved through configuration of active/standby pairs on alternating (primary/secondary) contexts. In contrast, the clustered resilience functionality available on the ASA is such that every member of the cluster is capable of forwarding every traffic flow and can be active for all flows. All resiliency implementations are contained within a single data center, since neither Cisco ASA firewalls nor the Citrix SLB currently support clustering over a metro distance.
  • VMDC DCI follows current best-practice recommendations, using out-of-band links for FT state communication between redundant appliances. In the context of non-clustered, redundant ASA pairs, interface monitoring is activated to ensure proper triggering of failover; only one interface (inside or outside) must be monitored per FT failover group, though monitoring of both is possible. If it offers higher resilience characteristics, the management path between the redundant ASAs could also be used for monitoring. For clustered ASA appliances, the CCL (Cluster Control Link) communicates control plane information between cluster members, including flow re-direction controls. This design follows best practice recommendations for CCL high availability by employing vPCs on the redundant N7k aggregation-edge nodes from port-channels on each ASA in the cluster (a basic clustering configuration sketch follows this list).
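
A minimal ASA clustering sketch for one cluster member is shown below, assuming spanned EtherChannel mode for the data-plane vPC described above; the unit name, port-channel and interface numbers, and CCL addressing are illustrative assumptions rather than the validated configuration.

  ! Cluster control link (CCL) port-channel toward the N7k aggregation-edge vPC
  interface TenGigabitEthernet0/6
    channel-group 48 mode on
  interface TenGigabitEthernet0/7
    channel-group 48 mode on

  ! Use spanned EtherChannel mode for data interfaces across the cluster
  cluster interface-mode spanned force

  cluster group VMDC-FW
    local-unit asa-unit-1
    cluster-interface Port-channel48 ip 172.16.255.1 255.255.255.0
    priority 1
    health-check
    enable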

The appliances are statically routed, and redistributed into NSSA area 10. Key considerations for this service implementation include:

  • Disaster Recovery Implications—Disaster recovery must provide a complete replication of the network services resources and associated subnets at the recovery location for failover. Service orchestration will aid in the creation of recovery resources including L4-L7 services, network containers, and resource allocations across a range of application elements.
  • Resource allocation for compute, storage and network services applied to multi-tiered applications is less complex, in that it is pod-based. This should translate to simpler workflow and resource allocation algorithms for service automation systems.
  • Pod-based deployments represent a simple, clear-cut operational domain, for greater ease of troubleshooting and determination of dependencies in root-cause analysis.

Tenancy Models

A primary focus of VMDC DCI is to determine the impact of workload migrations on VMDC network containers and related L4-L7 services. Certain workload moves (Live Migrations) require that existing network connections remain intact, and that existing services remain stateful throughout the move. This will typically require some type of temporary “tromboning” back to the original data center for existing connections and related services. Other workload moves (Cold Migration) allow workloads and existing network connections to be terminated and restarted at a new location. In both cases, new network containers will be created at the recovery site, and external users will be redirected to the new site where workload has been moved.

From an architectural perspective, VMDC DCI remains aligned with tenancy models previously defined in VMDC 2.3 and VMDC 3.0/3.0.1 releases. A number of VMDC containers are presented in Figure 3-5.

Figure 3-5 VMDC Network Containers

 

Modifications to network containers for VMDC DCI include:

  • The introduction of Citrix SDX load balancer appliances to replace Cisco ACE load balancers across each container
  • The validation of Clustered ASA Firewall appliances within the Expanded Palladium Multi-zone container
  • Validation of network container performance across multi-site scenarios in which the application and services may reside at different locations
  • Validation of the migration strategy used to move complete tenants and related containers to a new site

A primary focus of VMDC DCI is to determine the impact of workload mobility on different network containers and their related L4-L7 services. The Expanded Palladium container was validated in this release and is typically implemented for Enterprise Private Clouds.

Expanded Palladium Multi-zone—The Expanded Palladium Multi-zone container implements separate front-end and back-end security zones, each of which may have a different set of network services applied. The original Palladium container aligns more closely with traditional zoning models in use in physical IT deployments. Private Cloud data centers employ an Expanded version of the Palladium container as described in Figure 3-6. The Expanded Palladium Multi-zone container supports additional capacity and many private zones, as described below.

  • A single, shared (multi-tenant) public zone, with multiple server VLANs and a single Citrix SDX context (or multiple contexts) for SLB. This is in the global routing table used by the Public Zone.
  • Multiple, private (unique per-tenant or user group) firewalled zones reachable via the public zone – i.e., the firewall “outside” interface is in the public zone. These private zones include a Citrix SDX SLB, and may have 1 to many VLANs.
  • VSG vPath security can be applied in a multi-tenant/shared fashion to the public zone.
  • VSG vPath security can be applied in dedicated fashion to each of the private zones, providing a second tier of policy enforcement, and back-end (East/West) zoning. Unique VLANs may be used per zone for VLAN-based isolation. However, in validation we assumed the desire to conserve VLANs would drive one to use a single VLAN with multiple security zones applied for policy-based isolation.

An alternative way to view this model is as a single, DC-wide “tenant” with a single front-end zone and multiple back-end zones for (East/West) application-based isolation.

Figure 3-6 Expanded Palladium Multi-Zone Container

 

LAN Extension Options for Multi-Site Topologies

There are a number of options available to implement Layer 2 extensions for data center interconnection. The specific choice of which DCI option best suits an SP or Enterprise depends on a number of factors, including: the number of interconnected sites, the number of VLANs/MACs extended, distance between sites, link type and available link bandwidth, operational complexity, L2 domain isolation, tenancy, and the cost of network interconnect links that support DCI traffic. Most of the current DCI choices fit into the three categories listed in Figure 3-7.

Figure 3-7 LAN Extension Options

 

The first DCI option includes Ethernet switching extensions over dark fiber using either VSS, vPC, or FabricPath. These models are typically implemented between two sites and may be contained to a campus distance. The second category includes a number of MPLS variants, including EoMPLS (previously validated in VMDC), VPLS, and E-VPN (routed VPLS, future availability). These MPLS options are typically well suited for large SP or Enterprise customers with an MPLS backbone, many sites, and large multi-tenant cloud environments. The third option includes extensions supported over any IP transport, such as OTV. OTV is well suited for Enterprise or SP style deployments with fewer sites and lower numbers of tenants and VLANs. One final option includes hypervisor-based overlays that could be used as DCI options, including VXLAN, NV-GRE, or STT. Most of these models are at various stages of development and have significant limitations that prevent full-scale deployments by large SPs or Enterprises in the near term. Most virtual overlay options in their current state are better suited for intra-site (within site) switching rather than inter-site (between sites) DCI extensions.

The current Cisco positioning of relevant DCI technologies to handle Intra-Site versus Inter-Site connectivity is summarized in Figure 3-8.

Figure 3-8 Intra-Site versus Inter-Site Connectivity

 

Based on a number of new capabilities included in the NX-OS 6.2 release, VMDC DCI validated OTV as the LAN extension option to support Private or Public Cloud deployments. Future VMDC releases will target VPLS or E-VPN DCI options to support larger Public Cloud deployments. OTV is a feature that allows Ethernet traffic from a local area network (LAN) to be tunneled over an IP network to create a “logical data center” spanning several data centers in different locations. OTV is well suited for Private Cloud Enterprise and SP customers.

OTV differentiated characteristics include:

  • Capability of extending Layer 2 LANs over any network by leveraging IP-encapsulated MAC routing.
  • Simplification of configuration and operation by enabling seamless deployment over existing network without redesign, requiring minimal configuration commands and providing single-touch site configuration for adding new data centers.
  • Increasing resiliency by preserving existing Layer 3 failure boundaries, providing automated multi-homing, and including built-in loop prevention.
  • Maximizing available bandwidth by using equal-cost multipath and optimal multicast replication (in deployments where the transport infrastructure is multicast enabled).

The VMDC DCI design interconnects FabricPath data centers with OTV LAN extensions to emulate a three site data center business continuity design and enable various workload mobility options (Figure 3-9). Future VMDC releases will validate VPLS or E-VPN LAN extensions integrated with vPC or FabricPath designs to support larger Public Cloud business models.

Figure 3-9 FabricPath and OTV Topology

 

OTV Design Considerations

The OTV implementation utilizes a dedicated Virtual Device Context (VDC) deployed at the aggregation layer of the VMDC DCI design. A dedicated OTV VDC will perform OTV functions while the Aggregation-VDC will provide SVI routing functions. The L2-L3 boundary is implemented on the Nexus 7000 aggregation device, as described in Figure 3-10. The Data Center core device (ASR 9K, ASR1K, or Nexus 7K) will only perform L3 functions. Spanning tree and L2 Broadcast domains will be isolated between data centers. OTV provides LAN extensions across geographic sites to connect distributed compute PoDs. OTV may also provide Intra-DC campus extensions if FabricPath is not sufficient.
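
The sketch below shows how the dedicated OTV VDC can be carved out on the Nexus 7000 and prepared for OTV; the VDC name, allocated interfaces, site VLAN, and site identifier are illustrative assumptions.

  ! In the default VDC: create the OTV VDC and allocate M-series ports to it
  vdc otv
    allocate interface Ethernet3/1-4

  ! In the OTV VDC: enable OTV and identify the local site
  feature otv
  otv site-vlan 99
  otv site-identifier 0x1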

Figure 3-10 OTV Edge Device in the Aggregation Layer

 

There are multiple ways to attach OTV VDCs to aggregation layer devices, each with varied levels of resiliency. VMDC DCI used the dual-homed VDC attachment design described in Figure 3-11. This attachment design provides the best resiliency, although it does consume more physical interfaces than the less resilient single-homed option. As described in Figure 3-11, logical port-channels are used for the Join interfaces and the Internal interfaces. Therefore, traffic recovery after a single-link failure event is based on port-channel re-hashing, and there is no need for Authoritative Edge Device (AED) re-election. In the event of a physical node (or VDC) failure, AED re-election is required, but the collateral impact is limited to a few seconds and only to 50% of the extended VLANs.

Figure 3-11 Dual Homed OTV VDC

 

Similarly, there are different options to load balance VLANs across dual-homed aggregation devices. VMDC DCI implements the most resilient model, site-based VLAN load balancing, described in Figure 3-12. In this model, the AED role is negotiated between the two OTV VDCs (on a per-VLAN basis). For a given VLAN, all traffic must be carried to the AED device. Traffic flows are optimized by leveraging resilient port-channels as Internal Interfaces. The AED encapsulates the original L2 frame into an IP packet and sends it back to the aggregation layer device. The aggregation layer device routes the IP packet toward the DC Core/WAN edge. L3 routed traffic bypasses the OTV VDC.

Figure 3-12 Per-VLAN Load Balancing

 

This release will validate OTV implementation over a multicast transport. The multicast topology example is provided in Figure 3-13. Unicast transport is also a supported option but was not implemented in this VMDC DCI release. MAC advertisements between OTV connected sites take on the following characteristics:

  • MAC addresses are advertised with their VLAN IDs, IP next hop and Site-ID
  • IP next hops are the addresses of Edge Devices’ Join interfaces
  • Each OTV update can contain multiple MAC addresses for different VLANs
  • When the MAC address ages out from the OTV Device MAC Table, an update is created and sent to the remote OTV Edge Devices (MAC Withdraw)
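
Within the OTV VDC, the overlay interface for the multicast transport can be configured as sketched below; the validated design uses port-channels for the join and internal interfaces, and the group addresses, port-channel number, and extended VLAN range shown here are illustrative assumptions.

  ! Join interface: port-channel in the IP transport used to source OTV traffic
  interface port-channel10
    ip address 10.200.1.1/30
    ip igmp version 3
    no shutdown

  interface Overlay1
    otv join-interface port-channel10
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/28
    otv extend-vlan 100-150
    no shutdown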

Figure 3-13 OTV Multicast Transport Topology

 

VMDC DCI utilizes FHRP filtering to ensure that egress traffic flows are routed to an HSRP group that is local to each data center. This model is described in Figure 3-14. FHRP localization is achieved via a combination of VACLs and MAC route filters. The result is that different data centers can share one HSRP group with one VIP, while each site has an active router, local to the site, that is used for first-hop routing.
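
One commonly documented way to implement this localization on the OTV VDCs combines a VACL that drops HSRP hellos on the extended VLANs with an OTV IS-IS filter that blocks advertisement of the HSRP virtual MACs. The sketch below follows that approach; the ACL/route-map names, VLAN range, and overlay name are illustrative assumptions.

  ! Drop HSRP hellos so they are not carried across the overlay
  ip access-list HSRP-IP
    10 permit udp any 224.0.0.2/32 eq 1985
    20 permit udp any 224.0.0.102/32 eq 1985
  ip access-list ALL-IP
    10 permit ip any any
  vlan access-map HSRP-LOCAL 10
    match ip address HSRP-IP
    action drop
  vlan access-map HSRP-LOCAL 20
    match ip address ALL-IP
    action forward
  vlan filter HSRP-LOCAL vlan-list 100-150

  ! Keep the HSRP virtual MACs out of OTV IS-IS advertisements
  mac-list HSRP-VMAC-DENY seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
  mac-list HSRP-VMAC-DENY seq 20 deny 0000.0c9f.f000 ffff.ffff.f000
  mac-list HSRP-VMAC-DENY seq 30 permit 0000.0000.0000 0000.0000.0000
  route-map STOP-HSRP permit 10
    match mac-list HSRP-VMAC-DENY
  otv-isis default
    vpn Overlay1
      redistribute filter route-map STOP-HSRP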

Figure 3-14 FHRP Filtering between Sites

 

There are a number of Nexus 7000 hardware limitations for the OTV implementation. These limitations are listed below and in Figure 3-15.

  • The OTV VDC must use only M-series ports for both internal and join interfaces. The recommendation is to allocate only M-series interfaces to the OTV VDC. All M-series modules are supported (M1-48, M1-32, M1-08, and M2 series).
  • F1 and F2 linecards do not support OTV natively; F1 and F2e linecards can, however, be used for OTV internal interfaces.

Figure 3-15 NEXUS 7000 OTV Configuration

 

As Enterprises and SPs extend their data centers for Business Continuity or Workload Mobility, it is likely that there will be overlapping VLAN allocations across data centers. Therefore, this release will implement a VLAN translation mechanism to overcome this issue, as described in Figure 3-16. This function will translate a local VLAN to a remote VLAN in a different site (a VLAN in the West site corresponds to a different VLAN in the East site).
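
With the NX-OS 6.2 OTV enhancements, this translation can be applied directly on the overlay interface, as sketched below; the VLAN numbers are illustrative assumptions.

  ! On the West site OTV VDC: map local VLAN 100 to VLAN 200 at the remote site
  interface Overlay1
    otv vlan mapping 100 to 200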

Figure 3-16 OTV VLAN Translation between Sites

 

Nexus 1000v Virtual Switch Metro Extensions

The Cisco Nexus 1000V Series Switches are virtual machine access switches running the Cisco NX-OS operating system and supporting various hypervisors, such as VMware vSphere and Microsoft Hyper-V. Operating inside the hypervisor, the Cisco Nexus 1000V Series provides policy-based virtual machine connectivity, mobile virtual machine security and network policy, and a non-disruptive operational model for server virtualization and networking teams. Cisco Nexus 1000V switches provide a consistent networking feature set and provisioning process all the way from the virtual machine access layer to the core of the data center network infrastructure. Virtual servers can use the same network configuration, security policy, diagnostic tools, and operational models as their physical server counterparts attached to dedicated physical network ports. Virtualization administrators can access predefined network policies that follow mobile virtual machines across sites to ensure proper connectivity, saving valuable administration time. This comprehensive set of capabilities helps operators deploy server virtualization faster and enables enhanced workload mobility across multiple sites.

VMDC DCI validates the performance and enhanced availability features of Nexus 1000v Distributed Virtual Switches that span metro data centers. Nexus 1000v VSMs and VEMs are now capable of extending across metro distances to support extended ESXi clusters and Live Workload mobility scenarios. Active and Backup VSMs can now operate in different data centers as described in Figure 3-17. VMDC DCI also maps different Nexus 1000v switches (and HA design options) to various categories of applications to achieve corresponding RPO/RTO targets. For example, applications that require near zero RPO/RTO and Live Workload mobility would utilize a Nexus 1000v that is distributed across metro data centers, to support non-disruptive moves. Other applications with less stringent RPO/RTO requirements would utilize a different Nexus 1000v that is contained to a single data center. It is important to map applications with different RPO/RTO requirements to different pairs of Nexus 1000v switches to optimize resiliency and cost.

Figure 3-17 NEXUS 1000v Metro Extensions

 

A Nexus 1000v configuration that spans multiple sites is similar to the single site setup except for the fact that the Nexus 1000v high availability VSM pair is distributed across the two sites. The new connectivity option is as shown in Figure 3-18.

Figure 3-18 Nexus 1000v Connectivity Across a Metro Distance

 

Both Nexus 1110 and Nexus 1000v VSM pairs communicate over the OTV link using the management and control/packet VLANs. In the case of a complete data center failure, the VSM in the second data center takes over the role of primary VSM (assuming the VSM role was “Secondary”). If the two data centers become segregated because of a communication failure, such as the network links going down, both VSMs become primary, resulting in a split-brain scenario. When data center communication resumes, Nexus 1000v pairs use the following rules (in order) to determine the new primary VSM (a configuration sketch of the VSM redundancy roles follows the list).

1. Module Count—The number of modules that are attached to the VSM.

2. vCenter Status—Status of the connection between the VSM and vCenter.

3. Last Configuration Time—The time when the last configuration was done on the VSM.

4. Last Standby-Active Switch—The time when the VSM last switched from standby to active state. (VSM with a longer active time gets higher priority).
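
On the VSMs themselves, the redundancy role that feeds into this election is set as sketched below; which physical site hosts the primary role is a deployment decision rather than a configuration requirement.

  ! VSM in data center 1
  system redundancy role primary

  ! VSM in data center 2
  system redundancy role secondary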

Additional details can be found in the N1Kv Configuration Guide.

Compute

The VMDC DCI compute architecture implements a high degree of server virtualization, driven by data center consolidation, the dynamic resource allocation requirements fundamental to a "cloud" model, and the need to maximize operational efficiencies while reducing capital expense (CAPEX). Therefore, the VMDC DCI architecture is based upon three key elements:

  • Hypervisor-Based Virtualization—As in previous system releases, VMware vSphere 5.1 plays a key role in this release, enabling the creation of virtual machines on physical servers by logically abstracting the server environment in terms of CPU, memory, and network touch points into multiple virtual software containers. In addition, vSphere and SRM play critical roles in demonstrating various workload mobility and business continuity scenarios. Future releases will demonstrate that the architecture is hypervisor agnostic, using Microsoft Hyper-V VMs within a Virtual Private Cloud Container. Microsoft Hyper-V support will be provided as a separate addendum to the VMDC DCI release.
  • Unified Computing System (UCS)—Unifying network, server, and I/O resources into a single, converged system, the Cisco UCS provides a highly resilient, low-latency unified fabric for the integration of lossless 10-Gigabit Ethernet and FCoE functions with x86 server architectures. The UCS provides a stateless compute environment that abstracts I/O resources and server personality, configuration, and connectivity, facilitating dynamic programmability. Hardware state abstraction makes it easier to move applications and operating systems across server hardware, which is fundamental for workload mobility and business continuity functions.
  • Multiple UCS systems were staged at each data center to house compute resources (tenant VMs and service nodes) for the purposes of testing multi-UCS logical segments, associated failure scenarios, and workload migrations.
  • The Cisco Nexus 1000V provides a feature-rich Distributed Virtual Switch, incorporating software-based VN-link technology to extend network visibility, QoS, and security policy to the virtual machine level of granularity. VMDC DCI validates multiple N1Kv designs in which the N1Kv is distributed across metro data centers supporting applications in extended clusters, and traditional N1Kv designs in which the DVS is contained in a single data center. Multiple N1Kv switches are used to support specific groupings of applications with various RPO/RTO requirements. The N1Kv 2.2 release will be leveraged to increase port and host capacity to 4k ports per VSM/128 hosts per VSM/300 ports (max.) per host.
  • The VMDC DCI system release uses VMware vSphere 5.1 as the compute virtualization operating system. Fundamental to the virtualized compute architecture is the notion of clusters; a cluster consists of two or more hosts with their associated resource pools, virtual machines, and datastores. Working with vCenter as a compute domain manager, vSphere advanced functionality, such as HA and DRS, is built around the management of cluster resources. vSphere supports cluster sizes of up to 32 servers when HA and/or DRS features are utilized. Clusters may be extended across metro data centers to support Live workload mobility or may be used as the target pool of an SRM Cold workload migration. VMDC DCI groups resources into clusters using criteria related to workload mobility and application RPO/RTO requirements. For example, applications that require “extended clusters” across metro data centers should utilize different resource pools than applications that are “siloed” to a single data center.

In general practice, however, the larger the scale of the compute environment and the higher the virtualization (VM, network interface, and port) requirement, the more advisable it is to use smaller cluster sizes to optimize performance and virtual interface port scale. Therefore, in VMDC large pod simulations, cluster sizes are limited to 16 servers; in smaller pod simulations, cluster sizes of 16 or 32 are used. As in previous VMDC releases, three compute profiles are created to represent large, medium, and small workload: “Large” has 1 vCPU/core and 16 GB RAM; “Medium” has .5 vCPU/core and 8 GB RAM; and “Small” has .25 vCPU/core and 4 GB of RAM.

The UCS compute architecture implemented the following functions in VMDC DCI:

  • Implement multiple UCS 5100 series chassis (5108s), each populated with up to eight (half-width) server blades.
  • Each server has dual 10 GigE attachments, providing redundant A and B sides of the internal UCS fabric.
  • The UCS is a fully redundant system, with two 2200 Series Fabric Extenders per chassis and two 6200 Series Fabric Interconnects per pod.
  • Internally, eight uplinks per Fabric Extender feed into dual Fabric Interconnects to pre-stage the system for the maximum bandwidth possible per server. This configuration means that each server has 20 GigE bandwidth for server-to-server traffic in the UCS fabric.
  • Each UCS 6200 Fabric Interconnect aggregates via redundant 10 GigE EtherChannel connections into the leaf or “access-edge” switch (Nexus 5500 or Nexus 6000). The number of uplinks provisioned will depend upon traffic engineering requirements. For example, to provide an eight-chassis system with an 8:1 oversubscription ratio of internal fabric bandwidth to FabricPath aggregation-edge bandwidth, a total of 160 G (16 x 10 G) of uplink bandwidth capacity must be provided per UCS system (see the worked example after this list).
  • The Nexus 1000V functions as the virtual access switching layer, providing per-VM policy and policy mobility.
  • In this system release, we will demonstrate the virtual machine host use case as part of the Expanded Palladium network container.
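
As a worked example of the 8:1 oversubscription figure cited in the list above, assuming eight chassis, eight half-width blades per chassis, and dual 10 GigE fabric attachments per blade:

  Internal fabric bandwidth: 8 chassis x 8 blades x 2 x 10 G = 1,280 G
  Uplink bandwidth at 8:1:   1,280 G / 8 = 160 G = 16 x 10 G per UCS system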

Storage

The storage architecture used in the VMDC DCI system follows the current storage best practices established in previous VMDC releases. Key design aspects of the VMDC storage architecture include:

  • Use of Cisco Data Center Unified Fabric to optimize and reduce LAN and SAN cabling costs
  • High availability through multi-level redundancy (link, port, fabric, Director, RAID)
  • Risk mitigation through fabric isolation (multiple fabrics, VSANs)
  • Datastore isolation through NPV/NPIV virtualization techniques, combined with zoning and LUN masking
  • Stretched datastores and backing storage for metro data center high availability
  • Datastore storage replication for geo data center disaster recovery and cold migration

VMDC DCI extends storage capabilities to support synchronous storage replication and asynchronous storage replication across multi-site topologies. VMDC validated designs continue to support a number of storage vendors. This VMDC DCI release was validated using NetApp products. NetApp MetroCluster implements synchronous storage replication with SyncMirror® across metro data centers, supporting applications with the most stringent RPO/RTO requirements. NetApp SnapMirror provides synchronous and semi-synchronous storage replication across metro distances, and asynchronous storage replication across metro and geo distances, supporting applications with less stringent RTO requirements and/or greater geographic distance protection.

Storage Design Constraints

The multi-site design must meet the following requirements, per NetApp MetroCluster and Cisco product technology guidelines:

  • The maximum supported distance for MetroCluster implementations is 100km for FC back-end storage environments and 200km for SAS back-end storage.
  • The maximum supported distance for an FCoE front-end storage environment between two Cisco Nexus 7000s with F2 line cards is 80 km.

Based on these design constraints, a stretched-site MetroCluster solution using a Cisco Nexus 7000 FCoE front-end is validated with a maximum distance of 80 km.

Zero RPO and Near-Zero RTO Using NetApp MetroCluster

VMDC DCI introduces NetApp MetroCluster storage extensions to provide synchronous storage replication and site resiliency across metro distances. NetApp MetroCluster is a cost-effective, integrated, high-availability and disaster recovery solution that protects against site failures resulting from human error, HVAC failures, power failures, building fire, architectural failures, and planned maintenance downtime.

NetApp highly available pairs couple two controllers to protect against single controller failures. NetApp disk shelves have built-in physical and software redundancies such as dual power supplies, dual shelf modules, multipath high availability cabling, and RAID-DP® (double parity). NetApp HA pairs and shelves protect against many data center failures but cannot protect against local site failure.

NetApp MetroCluster layers additional protection onto existing NetApp HA. MetroCluster enables synchronous data mirroring to achieve zero data loss, and automatic failover between data centers enables nearly 100% uptime. Thus, MetroCluster enables a zero recovery point objective (RPO) and a near-zero recovery time objective (RTO).

NetApp HA uses cluster failover (CFO) functionality to protect against controller failures. On failure of a NetApp controller, the surviving controller takes over the failed controller's data-serving operations, while continuing its own data-serving operations, described in Figure 3-19. Controllers in a NetApp HA pair use the cluster interconnect to monitor partner health and to mirror incoming data of recent writes not yet propagated to disk.

Figure 3-19 Failed Controller Operations

 

MetroCluster uses NetApp HA CFO functionality to automatically protect against controller failures. Additionally, MetroCluster layers local SyncMirror, cluster failover on disaster (CFOD), hardware redundancy, and geographical separation to achieve additional levels of availability.

Local SyncMirror synchronously mirrors data across the two halves of the MetroCluster configuration by writing data to two plexes: the local plex (on the local shelf) actively serving data and the remote plex (on the remote shelf) normally not serving data. In the event of a local shelf failure, the remote shelf seamlessly takes over data-serving operations. No data loss occurs because of synchronous mirroring.

CFOD protects against complete site disasters by:

  • Initiating a controller failover to the surviving controller
  • Serving the failed controller’s data by activating the data mirror
  • Continuing to serve its own data

Hardware redundancy is provided for all MetroCluster components. Controllers, storage, cables, switches (fabric MetroCluster), bridges, and adapters are all redundant.

Geographical separation is implemented by physically separating controllers and storage, creating two MetroCluster halves. For distances under 500m (campus distances), long cables are used to create stretch MetroCluster configurations, as illustrated in Figure 3-20.

Figure 3-20 Synchronous Data Mirroring with Stretch MetroCluster

 

For distances over 500m but under 200km/~125 miles (metro distances), a fabric is implemented across the two geographies, creating a fabric MetroCluster configuration, as shown in Figure 3-21. VMDC DCI implements fabric MetroCluster across a metro distance to support synchronous storage replication for the most business-critical applications that require stringent RPO/RTO targets.

Figure 3-21 Synchronous Data Mirroring with Fabric MetroCluster

 

Two MetroCluster design options are available. The first option uses traditional Fibre Channel “front-end” connections and MDS switches to the compute stack; the second option is less costly and includes FCoE “front-end” connections to the compute stack.

Since previous VMDC releases have validated most FC designs, this VMDC DCI release will implement and validate the FCoE MetroCluster design, providing customers a new MetroCluster deployment option. If customers require metro distances greater than 80 km, it is recommended that they use the traditional FC-based MetroCluster option.

MetroCluster Design with FC Frontend

The Fibre Channel option is based on best practice designs as specified in NetApp Technical Report TR-3548, Best Practices for MetroCluster Design and Implementation. Two “back-end” designs are specified, using either an MDS-9148 or an MDS-9222i FC switch. Alternative MDS switches can be integrated as needed. The fabric MetroCluster interconnect options are represented in Figure 3-22 and Figure 3-23. The maximum supported distance for the FC design is 160 km and is a function of link latency and buffer credits on MDS switch ports.

Figure 3-22 MetroCluster Design with FC Frontend

 

Figure 3-23 MetroCluster Design with FCoE Frontend

 

MetroCluster Design with FCoE Frontend

Figure 3-24 shows the Fibre Channel over Ethernet (FCoE) MetroCluster design. This design is based on a FlexPod validated system and includes FCoE “front-end” interfaces to the compute stack. In addition, Cisco Nexus 7000 VDCs are required to segment the “front-end” FCoE ports/traffic from IP ports/traffic. The “back-end” storage replication function is implemented with traditional FC and uses either an MDS-9148 or an MDS-9222i FC switch. Alternative MDS switches can be integrated as needed for the “back-end” connections. The maximum supported distance for the FCoE design is 80 km and is a function of link latency and queue depth on the Cisco Nexus 7000 F2 line card. Other FCoE switch options (such as Cisco Nexus 5K or Nexus 6K) and other Cisco Nexus 7000 line card options (such as M1, F1) do not have sufficient line card queue depth to support FCoE spanning 80km distances and are not recommended for metro distances.

Figure 3-24 NetApp Fabric MetroCluster Design with FCoE Frontend

 

Network Connectivity for Storage Access

During normal operations, each site primarily accesses its local controller and hence the local datastores. However, in case of a failure, the surviving controller takes over all storage presentation and serves data for all datastores over the configured storage protocols (NFS, Fibre Channel, iSCSI). In the event of a partial rather than complete site failure, hosts in the affected DC will have to access their storage across the metro link. The network configuration therefore must allow both LAN and SAN access across both data centers. To achieve redundancy and to support both IP and FC traffic, separate IP/LAN and FCoE connections are required on the Cisco Nexus 7000 switches. This is achieved by connecting the Cisco Nexus 7000s as shown in Figure 3-25.

Figure 3-25 Network and SAN connectivity

 

By utilizing the IP and FCoE links between the two DCs (shown above), an ESXi host in either DC can access all the datastores on each of the controllers. The IP links can be configured as a layer-2 or layer-3 link. Since OTV is being utilized to extend layer-2 across the sites in this VMDC DCI release, the IP links could be a layer-3 routed link reachable through multiple hops. The FCoE link between the devices is multi-hop enabled (VE port).

SAN Design Details

For SAN connectivity, NetApp FAS uses an FCoE connection to the Cisco Nexus 7000. In each Cisco Nexus 7000, a storage VDC is created, and the local controller connects to both the Cisco Nexus 7000s as shown in Figure 3-26. On each of the sites, Cisco Nexus 7000-A acts as a SAN-A switch and Cisco Nexus 7000-B acts as a SAN-B switch. To provide FC connectivity between the sites, FCoE connections are configured between the Cisco Nexus 7000 storage VDCs.

Figure 3-26 SAN Connectivity

 

As shown in Figure 3-26, two redundant paths are configured between the two sites for SAN resiliency. The ports between the Cisco Nexus 7000 switches are configured as FCoE VE ports to enable multi-hop FCoE. Using this configuration, every ESXi host has access to both the NetApp controllers. The boot policies used in boot-from-SAN configuration are very similar to single-site FlexPod infrastructure. The fabric path to the local controller becomes the preferred path, and the fabric path to the remote controller is set up as a secondary path. This protects against a failure within the local fabric that renders the local controller inaccessible (cables, switch component, controller component).
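
A condensed sketch of the storage-VDC configuration that produces these VE ports is shown below, assuming the FCoE feature set is already installed and enabled in the storage VDC; the VSAN, FCoE VLAN, and port-channel numbers are illustrative assumptions.

  ! Map a VSAN to an FCoE VLAN in the storage VDC
  vsan database
    vsan 20
  vlan 2000
    fcoe vsan 20

  ! Port-channel toward the remote site's storage VDC, carrying the FCoE VLAN
  interface port-channel100
    switchport mode trunk
    switchport trunk allowed vlan 2000

  ! Virtual Fibre Channel interface bound to the port-channel as a VE port
  interface vfc100
    bind interface port-channel100
    switchport mode E
    switchport trunk allowed vsan 20
    no shutdown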

Datastore Layout

As per VMware guidelines, virtual machine datastores are configured on both NetApp controllers. To avoid cross-DC traffic, DRS host affinity groups and rules are configured to keep VMs on the hosts located in the same site as the datastores. The boot LUNs for ESXi hosts are also configured on storage presented from the local FAS controller. Mirroring using SyncMirror is enabled, and both sites maintain synchronized copies of each other’s data as it is written. Figure 3-27 shows the datastore configuration for both the DCs.

Figure 3-27 Datastore Layout

 

For more information about NetApp MetroCluster and MetroCluster in a FlexPod configuration, refer to:

Less Stringent RTO/RPO Protection Using NetApp SnapMirror

SnapMirror provides array-based data replication between NetApp FAS controllers. Built on NetApp Snapshot™ technology, SnapMirror is an extremely efficient “thin replication” solution whereby only the 4KB blocks that have been changed or added since the previous update are replicated between systems. In addition, destination volumes can be thin provisioned and will only consume as much space as the source volume itself. SnapMirror is configured at the storage volume level and provides not only remote disaster recovery, but the capability to restore from the secondary system from any recovery points (SnapShots) created on the primary system; for example, if 100 SnapShots from the past 30 days are available on the primary, they are available on the secondary as well. SnapMirror supports the pre-seeding of destination targets using SnapMirror to tape. SnapMirror also supports the establishment of cascading replication between multiple systems for multi-hop protection. SnapMirror is easily configured using NetApp OnCommand® System Manager, Protection Manager, or the Data ONTAP® CLI.

SnapMirror relationships can be configured in three different modes, depending on the RPO requirements and the connectivity characteristics between sites. A single SnapMirror license per controller provides the ability to use any one or all of the replication modes as latency and bandwidth permit. SnapMirror Sync can meet RPO targets of zero data loss, with updates sent to the destination as they occur, but requires a very low RTT value between source and destination; higher latency between sites can result in higher effective latency for the applications running on the source/production volume. SnapMirror Semi-Sync meets RPO targets of minor data loss up to approximately 10 seconds, while being able to tolerate 2x-5x the latency over the replication network compared to SnapMirror Sync. SnapMirror Async meets RPO targets of a minute or more and is supported for effectively any geographic distance. Because greater distances, and therefore greater round-trip latency, will necessarily impact replication times, these factors need to be taken into consideration when designing the replication strategy (Table 3-1).

 

Table 3-1 Modes of Replication

  Mode of Replication   | RPO Requirements                   | Round Trip Time Between Primary and Secondary Sites | Effective Site Distances
  SnapMirror Sync       | Zero or near-zero                  | 2 ms                                                | Metro
  SnapMirror Semi-Sync  | Near-zero to minutes               | 5 ms – 10 ms                                        | Metro
  SnapMirror Async      | Any (minutes to hours, or higher)  | Any                                                 | Any (Metro or Geo)

For many protected workloads, SnapMirror Async is the appropriate replication mechanism, providing support for extended geographic distances, higher tolerance for network latency, higher RPO requirements, and granular control of those RPO requirements at the level of individual volumes. Each SnapMirror relationship, comprising a source and a destination volume, can be updated (replicated) on its own schedule as dictated by the RPO requirements of the customer’s data or application. In addition, each relationship can have its own parameters specified for rate limiting or network compression. SnapMirror Async provides the underlying storage replication used with VMware vCenter Site Recovery Manager in this VMDC DCI release.

SnapMirror network compression is a native feature of Data ONTAP which enables compression of over-the-wire data blocks during SnapMirror transfers. When enabled, free CPU cycles are used for a standard gzip algorithm to compress the data blocks on the source controller, and then to decompress the received data blocks on the destination controller. This compression does not affect data at rest.

SnapMirror relationships can be single (source A to destination B), multiple (source A to destination B, source A to destination C), or cascading (source A to destination B, destination B to destination C). Cascading relationships for multi-hop replication are supported by SnapMirror in several configurations ( Table 3-2 ).

 

Table 3-2 Supported SnapMirror Cascade Configurations

SnapMirror Cascade Configuration                          Supported
SnapMirror Sync/Semi-Sync -> SnapMirror Async             Yes
SnapMirror Async -> SnapMirror Async                      Yes
SnapMirror Sync/Semi-Sync -> SnapMirror Sync/Semi-Sync    No
SnapMirror Async -> SnapMirror Sync/Semi-Sync             No
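A cascade is built by making the secondary system’s destination volume the source of a further relationship; each /etc/snapmirror.conf entry resides on its respective destination controller. The following is a minimal sketch of a supported Sync -> Async chain using hypothetical controller and volume names.

    # On fasB (metro secondary): synchronous leg from the primary
    fasA:vm_vol   fasB:vm_vol   -   sync
    # On fasC (geo tertiary): asynchronous leg fed from fasB's mirror, daily at 02:00
    fasB:vm_vol   fasC:vm_vol   -   0 2 * *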

Because destination data replicated via SnapMirror is in a read-only state while the relationship is in effect, the effective RTO is necessarily higher than with SyncMirror and MetroCluster. Read-write access to the replicated volume can be provided using NetApp FlexClone technology, and this capability is a key enabler of the test scenarios run by Site Recovery Manager. In the case of a data migration or disaster recovery, the storage administrator breaks the SnapMirror relationship and the destination volume is automatically promoted to a read-write copy, which can then be mapped to and accessed by clients.
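The console sketch below illustrates, with hypothetical volume and Snapshot names, how a destination volume might be cloned read-write for a test and how it is promoted to read-write for an actual migration or recovery; in this design, SRM and the NetApp Storage Replication Adapter drive these operations automatically.

    # Create a read-write FlexClone of the mirror, backed by an existing Snapshot copy, for testing
    dstfiler> vol clone create vmds_test -s none -b vmds_mirror nightly.0
    # For an actual failover: finish any in-flight transfer, then break the mirror to make it read-write
    dstfiler> snapmirror quiesce vmds_mirror
    dstfiler> snapmirror break vmds_mirror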

For DR situations in which the primary storage is recovered, SnapMirror provides an efficient means of resynchronizing the primary and recovery sites. By simply reversing the SnapMirror relationships, SnapMirror can resynchronize the two sites, transferring only changed and new data from the DR site back to the primary site.
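A hedged sketch of that reversal, run on the original primary controller once it is back online (names hypothetical):

    # Pull only the changes made at the DR site back to the original primary volume
    srcfiler> snapmirror resync -S dstfiler:vmds_mirror srcfiler:vmds

Once the sites are synchronized, the relationship can be broken and resynchronized again in the original direction to return production to the primary site.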

For more information on NetApp SnapMirror, refer to:

VMware Redundancy and Workload Mobility Options

VMware vSphere 5.1 enables a number of VM redundancy and mobility capabilities. These features support live and cold VM migration as well as high availability and disaster recovery. Various options are available to achieve RPO/RTO targets, ranging from zero downtime and zero data loss to many minutes or hours of recovery time and data loss. Private and public cloud providers are increasingly using these hypervisor-based features to implement business continuity and workload mobility across metro and geo distant data centers. A brief description of the vSphere 5.1 capabilities is provided below; the VMDC DCI release uses a subset of these features, as highlighted later in this section.

VMware High Availability (HA) minimizes virtual machine downtime in a resource pool by monitoring hosts, virtual machines, or applications within virtual machines and, when a failure is detected, restarting the affected virtual machines on alternate hosts. Recovery time is typically within minutes. Resource pools for HA hosts can extend across metro distances.

Figure 3-28 VMware High Availability (HA)

 

VMware Fault Tolerance (FT) runs a secondary copy of a virtual machine on a secondary host and rapidly switches to that secondary copy in the event of failure of the primary host. Recovery time and data loss are typically near zero. Resource pools for FT hosts can extend across metro distances.


Note It is important to note that Microsoft Hyper-V DCI functionality will be covered in a separate addendum to this VMDC DCI release.


Figure 3-29 VMware Fault Tolerance (FT)

 

VMware vMotion and Storage vMotion allow running virtual machines and their storage to be migrated from one physical server to another with no downtime and no data loss. Resource pools for vMotion hosts can extend across metro distances.

Figure 3-30 VMware Live vMotion and Storage vMotion

 

VMware “Shared Nothing” vMotion allows running virtual machines and their storage to be migrated from one physical server to another without the need for a shared storage device. These are live migrations with no downtime or data loss. Resource pools for Shared Nothing vMotion hosts can extend across metro distances.

Figure 3-31 VMware “Shared Nothing” Live vMotion

 

vSphere Metro Storage Cluster (vMSC) is a certified configuration designed to ensure high availability of data using a storage architecture that provides logical or physical site resiliency. ESXi hosts within each site are configured with access to storage in both sites, and hosts from both sites are included within the same vSphere HA cluster. With the certified storage array providing data resiliency across sites, vSphere clusters provide the necessary HA at the compute, or hypervisor, level. To ensure that data access from each host does not take a suboptimal path across sites, DRS host affinity groups and rules are recommended to fence VMs so that each running instance remains local to its backing storage.

The multi-site design must meet the following requirements:

  • The maximum supported network latency between sites for the VMware® ESXi™ vMotion networks is 10ms round-trip time (RTT) with VMware vSphere® Enterprise Plus Edition™ licenses; with lower-edition licensing the maximum supported latency is 5ms round-trip time. (The RTT can be spot-checked from the ESXi shell, as shown in the sketch after this list.)
  • A minimum of 250 Mbps network bandwidth, configured with redundant links, is required for the ESXi vMotion network.
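Round-trip time on the vMotion network can be verified from the ESXi shell; the vmknic name and peer address below are hypothetical.

    # RTT from this host's vMotion vmknic to a host at the remote site
    ~ # vmkping -I vmk1 192.0.2.50
    # Optional: verify jumbo frames end to end (8972-byte payload, do-not-fragment)
    ~ # vmkping -I vmk1 -d -s 8972 192.0.2.50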

For more information on vMSC on NetApp, refer to:

vCenter Site Recovery Manager (SRM) is a disaster recovery management product. It uses vSphere Replication, and supports a broad set of storage replication products, to replicate virtual machines to a secondary site. It also provides a simple interface for setting up recovery plans that are coordinated across all infrastructure layers, replacing traditional, error-prone runbooks. Recovery plans can be tested nondisruptively as frequently as required to confirm that they meet business objectives. At the time of a site failover or workload migration, Site Recovery Manager automates both the failover and failback processes, providing fast and highly predictable recovery point objectives (RPOs) and recovery time objectives (RTOs). Recovery automation with Site Recovery Manager depends heavily on array or storage area network (SAN) replication to copy data between sites. SRM software runs on an SRM server or virtual machine at both the protected and recovery sites, and also requires a vCenter Server instance running at the recovery site.

Figure 3-32 shows a typical SRM environment with VMware vCenter Site Recovery Manager and NetApp FAS/V-Series Storage Systems.

Figure 3-32 vCenter Site Recovery Manager (SRM)

 

The Site Recovery Manager architecture consists of separate vCenter instances at each site. Although there might be only two vCenter instances in an environment, SRM also supports a shared recovery site model, in which multiple vCenter instances protect VMs into a single vCenter instance that all of the other sites share for recovery resources. Each vCenter instance manages a different set of ESX or ESXi hosts. In an SRM environment, the vCenter instance, or site, in which a VM is currently running is referred to as the protected site for that VM, and the site to which the VM’s data is replicated is referred to as the recovery site for that VM. When SRM manages failover and DR testing, failover and testing occur at the same granularity as the SnapMirror relationship; that is, if a FlexVol® volume is configured as a datastore, all VMs in that datastore are part of the same SRM protection group and therefore part of the same SRM recovery plan.

A typical SRM environment would consist of the following at each site:

  • A number of VMware hosts configured in the HA/DRS clusters
  • NetApp FAS or V-Series systems to provide storage for VMFS or NFS datastores
  • VMware vCenter Server
  • Site Recovery Manager Server
  • Microsoft® SQL Server® database
  • Various servers providing infrastructures services such as Active Directory® servers for authentication and DNS servers for name resolution

Figure 3-33 Typical SRM Environment

 

Figure 3-34 shows VMs at protected site 1 being replicated to recovery site 2. For simplicity, the figure shows replication and protection of VMs in one direction only, from site 1 to site 2. However, replication and protection of VMs can be performed in both directions, with different VMs in different datastores at each site configured to be recovered at the opposite site.

In an SRM environment, communication does not occur directly between the SRM servers; instead, SRM communication is proxied through the vCenter Server at each site, as shown by the blue arrowed lines. The same is true of communication with the NetApp storage arrays. At no time does the SRM server in site 1 communicate with the FAS/V-Series controller in site 2. If an action performed in the SRM interface at site 1 requires an operation on the FAS/V-Series controller at site 2, the SRA command is sent by proxy through the vCenter Servers to the SRM server at site 2. The SRM server at site 2 then communicates with its local NetApp controller and sends the response back to the SRM server in site 1, again by proxy through the vCenter Servers.

It’s important that the infrastructure services, such as authentication, name resolution, and VMware licensing, are active and available at both sites.

SnapMirror is used to replicate FlexVol volumes backing NFS or VMFS datastores from the primary site to the DR site.

VMware vSphere Replication provides a low-cost, hypervisor-based replication technique that copies virtual machine disks to a recovery site, independently of the underlying storage array, for use in a recovery process.

Figure 3-34 VMware vSphere Replication

 

VMware Workload Mobility Design

The VMDC DCI release validated live and cold workload mobility scenarios across the Active-Active metro design and Active-Backup metro/geo design.

In the Active-Active Metro design, live workload mobility, in which a running VM and its storage are moved from one metro data center to another, was implemented using:

  • VMware vMotion spanning metro data centers using a stretched ESXi cluster and a single vCenter Server management domain extended across the two metro data center sites.
  • NetApp fabric MetroCluster performed synchronous storage replication of virtual machine datastores across metro data centers.
  • Cisco Nexus 1000v Distributed Virtual Switches (DVS) span the metro sites to support live vMotion workload mobility. All vMotion vmknics on a host should share a single DVS, each vmknic's port group should be configured to use a different physical NIC as its active vmnic, and all vMotion vmknics should be on the same vMotion network.
  • For metro data centers with a 5ms RTT or less, any licensing edition of VMware vSphere is supported. For 5ms–10ms RTT, Enterprise Plus licensing is required to support metro vMotion.
  • vMotion performance will increase as additional network bandwidth is made available to the vMotion network. Consider provisioning 10Gb vMotion network interfaces for maximum vMotion performance.
  • Multiple vMotion vmknics can provide a further increase in the network bandwidth available to vMotion (see the configuration sketch after this list).
  • While a vMotion operation is in progress, ESXi opportunistically reserves CPU resources on both the source and destination hosts to ensure that the available network bandwidth can be fully utilized. ESXi attempts to use the full available network bandwidth regardless of the number of vMotion operations being performed. The amount of CPU reserved therefore depends on the number of vMotion NICs and their speeds: 10% of a processor core for each 1Gb network interface, 100% of a processor core for each 10Gb network interface, with a minimum total reservation of 30% of a processor core. Leaving some unreserved CPU capacity in a cluster therefore helps ensure that vMotion tasks get the resources required to fully utilize the available network bandwidth.
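As a configuration sketch, the ESXi 5.1 commands below add and tag a second vMotion vmknic on a standard vSwitch port group; the port group, interface name, and address are hypothetical, and in this design the vmknics attach to Nexus 1000v port profiles instead (vMotion can equally be enabled per vmknic through the vSphere Client).

    ~ # esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-B
    ~ # esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.0.2.22 --netmask 255.255.255.0 --type static
    ~ # esxcli network ip interface tag add --interface-name vmk2 --tagname VMotion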

In the Active-Standby Metro/Geo design, cold workload mobility, in which a stopped VM is moved from one metro/geo data center to another, was implemented using:

  • VMware Site Recovery Manager spanning metro and geo data centers using separate ESXi clusters and separate vCenter server instances at each data center site.
  • NetApp SnapMirror performs synchronous, semi-synchronous, or asynchronous storage replication of datastores between data centers to achieve RPO data loss targets that meet the application business requirements, but with a higher RTO than is provided by MetroCluster.
  • Cisco Nexus 1000v Distributed Virtual Switches (DVS) implement separate DVS switches at each data center to manage workloads used in cold migration and disaster recovery scenarios.
  • An SRM planned migration was used to invoke a cold workload migration. Planned migration performs an orderly shutdown of the virtual machines at the protected site, synchronizes the data with the failover site by ensuring complete replication of all data, and finally recovers the virtual machines at the failover site. Planned migration ensures an application-consistent migration to the secondary site with no data loss.
  • Site Recovery Manager supports configurations in which both sites run active virtual machines that Site Recovery Manager can recover at the other site. In an active-active SRM scenario, recovery plan workflows are configured in one direction, from site 1 to site 2, for the virtual machines protected at site 1, and in the opposite direction, from site 2 to site 1, for the virtual machines protected at site 2. The VMDC DCI system utilized an active/passive recovery scenario.