A stretched cluster is a cluster with ESX/ESXi hosts in different physical locations.
In an environment where the same Cisco Nexus 1000V instance spans two data centers, this configuration allows Virtual Ethernet Modules (VEMs) in different data centers to be part of the same vCenter Server cluster.
By choosing this configuration, you ensure that the VEMs in either data center (in a two-data-center environment) are part of the same Distributed Resource Scheduler (DRS) / VMware High Availability (VMW HA) / Fault Tolerance (FT) domain, which allows for multiple parallel virtual machine (VM) migration events.
The Split Cluster configuration is an alternative to the Stretched Cluster deployment. With this configuration, the deployment consists of one or more clusters at either physical site, with no cluster containing VEMs in multiple data centers. While this configuration allows for VM migration between physical data centers, these events are not automatically scheduled by DRS.
Prerequisites for Virtualized Workload Mobility (DC to DC vMotion)
Virtualized Workload Mobility (DC to DC vMotion) has the following prerequisite:
Layer 2 extension between the two physical data centers over the DCI link.
Guidelines and Limitations
Virtualized Workload Mobility (DC to DC vMotion) has the following guidelines and limitations:
The VSM HA pair must be located in the same site as its storage and the active vCenter Server.
Layer 3 control mode is preferred (see the configuration sketch that follows these guidelines).
If you are using Link Aggregation Control Protocol (LACP) on the VEM, use LACP offload (also covered in the sketch that follows these guidelines).
Provide Quality of Service (QoS) bandwidth guarantees for control traffic over the DCI link (see the marking sketch that follows these guidelines).
Limit the number of physical data centers to two.
A maximum latency of 5 ms is supported for VSM-VEM control traffic.
For the period of time that the VSM and VEM cannot communicate, the VEM continues to operate with its last known configuration. Once DCI link connectivity is restored and VSM-VEM communication is reestablished, the system should return to its previous operational state. This mode is no different from the headless mode of operation within a data center and has the following limitations for the headless VEM:
No new ports can be brought up on the headless VEM (for example, new VMs powering on or VMs arriving after vMotion).
No NetFlow data exports.
Ports that are shut down because DHCP snooping (DHCPS) or Dynamic ARP Inspection (DAI) rate limits are exceeded are not automatically brought up until the VSM reconnects.
Port security options, such as aging or learning secure MAC addresses and shutting down/recovering from port-security violations, are not available until the VSM reconnects.
The Cisco Discovery Protocol (CDP) does not function for the disconnected VEM.
IGMP joins/leaves are not processed until the VSM reconnects.
Queries on the BRIDGE-MIB and IF-MIB that are processed at the VSM return the last known status for hosts in headless mode.
If a VEM loses the connection to its VSM, vMotion migrations to that VEM are blocked. The VEM appears in vCenter Server with a degraded (yellow) status.
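The following sketch shows how Layer 3 control mode and LACP offload are typically enabled from the VSM. It is a minimal illustration, not a complete procedure: the prompt, management interface (mgmt0), VLAN ID, and port-profile name (l3-control) are placeholders, and the exact commands should be verified against the installation and system management guides for your Cisco Nexus 1000V release.

    n1000v# configure terminal
    n1000v(config)# svs-domain
    n1000v(config-svs-domain)# no control vlan
    n1000v(config-svs-domain)# no packet vlan
    n1000v(config-svs-domain)# svs mode L3 interface mgmt0
    n1000v(config-svs-domain)# exit
    ! A vEthernet port profile with the l3control capability carries the
    ! VSM-VEM control traffic on each VEM (name and VLAN are placeholders).
    n1000v(config)# port-profile type vethernet l3-control
    n1000v(config-port-prof)# capability l3control
    n1000v(config-port-prof)# vmware port-group
    n1000v(config-port-prof)# switchport mode access
    n1000v(config-port-prof)# switchport access vlan 10
    n1000v(config-port-prof)# no shutdown
    n1000v(config-port-prof)# state enabled
    n1000v(config-port-prof)# exit
    ! LACP offload moves LACP negotiation from the VSM to the VEMs so that
    ! port channels can stay up if the VSM becomes unreachable over the DCI link.
    n1000v(config)# feature lacp
    n1000v(config)# lacp offload
    n1000v(config)# end
    n1000v# copy running-config startup-config

In some releases, the LACP offload change takes effect only after the VEMs are reloaded; check the release notes for your software version.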
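Bandwidth guarantees for control traffic crossing the DCI link are ultimately enforced on the devices that carry that link, but the Cisco Nexus 1000V can classify and mark the control and packet traffic so that those devices can honor the guarantee. The sketch below is one possible marking approach, not a prescribed configuration: the class-map, policy-map, and port-profile names are placeholders, the CoS value is only an example, and the match protocol n1k_control and n1k_packet keywords should be confirmed against the QoS configuration guide for your release.

    n1000v(config)# class-map type qos match-any cm-n1k-control
    n1000v(config-cmap-qos)# match protocol n1k_control
    n1000v(config-cmap-qos)# match protocol n1k_packet
    n1000v(config-cmap-qos)# exit
    n1000v(config)# policy-map type qos pm-mark-control
    n1000v(config-pmap-qos)# class cm-n1k-control
    n1000v(config-pmap-c-qos)# set cos 6
    n1000v(config-pmap-c-qos)# exit
    n1000v(config-pmap-qos)# exit
    ! Apply the marking policy to the uplink port profile that faces the DCI
    ! path; "dci-uplink" is a placeholder name.
    n1000v(config)# port-profile type ethernet dci-uplink
    n1000v(config-port-prof)# service-policy type qos output pm-mark-control

The upstream and DCI switches then need a matching queuing or bandwidth policy for that marking so the guarantee holds end to end.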
Handling Additional Distance/Latency Between the VSM and VEM
In a network where there is a considerable distance between the VSM and VEM, latency becomes a critical factor.
While control traffic between the VSM and VEM experiences sub-millisecond latency within a data center, latency can increase to a few milliseconds between data centers, depending on the distance.
With an increased round-trip time, communication between the VSM and VEM takes longer. As you add VEMs and vEthernet interfaces, the time it takes to perform actions such as configuration commands, module insertions, port bring-up, and show commands increases because many of these tasks are serialized.
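A simple way to confirm that VSM-VEM control latency stays within the 5-ms guideline is to check the control mode and module state and then ping the Layer 3 control (vmk) address of a VEM in the remote data center from the VSM. The address below is a placeholder.

    n1000v# show svs domain
    ! Confirms the domain status and that Layer 3 control mode is in use.
    n1000v# show module
    ! Confirms that the VEMs in the remote data center are powered up and attached.
    n1000v# ping 10.10.20.15 count 20
    ! 10.10.20.15 is a placeholder for a remote VEM's Layer 3 control address.
    ! The reported round-trip times should remain below 5 ms.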
Migrating a VSM
This section describes how to migrate a VSM from one physical site to another.
If you are migrating a VSM on a Cisco Nexus 1010, see the Cisco Nexus 1010 Software Configuration Guide, Release 4.2(1)SP1(3).