The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
The VMDC solution provides design and implementation guidance for enterprises deploying private cloud services, and for service providers (SPs) building virtual private and public cloud services. The Cisco VMDC solution integrates various Cisco and third-party products that are part of the cloud computing ecosystem. Cisco's VMDC system defines an end-to-end architecture that an organization can reference when migrating to, or building out, virtualized multiservice data centers for new cloud-based service models such as Infrastructure as a Service (IaaS). Figure 1-1 shows the basic architectural framework for VMDC. The solution scope includes integrated compute, network, and storage components; a functional layered infrastructure; and service definitions for intra-DC, inter-DC, and automation and service assurance models.
Figure 1-1 Basic VMDC Architecture Framework
Refer to the Cisco Virtualized Multiservice Data Center site for additional details on VMDC.
Validated VMDC architectural systems range from traditional hierarchical classic Ethernet models to Clos-type FabricPath-based models. Although this document focuses on inserting Hyper-V into a specific FabricPath-based topology model called a "Typical Data Center" design (for FabricPath), the deployment considerations described in this document generally apply to all validated VMDC architectures.
A "Typical Data Center" design is a two-tier FabricPath design, as shown in Figure 1-2. All VMDC architectures are built around modular building blocks called pods. Each pod uses a localized services attachment model. In a pod, Virtual Port Channels (vPCs) handle Layer 2 (L2) switching between the Edge devices and the compute. This provides an active-active environment that does not depend on Spanning Tree Protocol (STP) and converges quickly after failures. Figure 1-2 shows a VMDC pod using FabricPath between the Edge and Aggregation/Spine devices. In previous VMDC releases, vPCs were used here as well; FabricPath now replaces them.
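As an illustration, enabling FabricPath on the links between the Edge and Aggregation/Spine devices in a pod might look like the following minimal NX-OS configuration sketch. The switch-id, VLAN range, and interface numbers are hypothetical and would depend on the actual deployment:

```
! Minimal FabricPath sketch (illustrative values only)
feature-set fabricpath

! Each FabricPath switch needs a unique switch-id
fabricpath switch-id 11

! VLANs extended across the fabric must be in fabricpath mode
vlan 100-199
  mode fabricpath

! Links toward the Aggregation/Spine devices become fabric ports
interface Ethernet1/1-2
  switchport mode fabricpath
```

Because FabricPath forwards on a routed Layer 2 topology rather than a spanning tree, all fabric links remain active and reconvergence after a link failure is handled by the FabricPath control plane rather than STP.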
Figure 1-2 VMDC 3.0.1 Typical Data Center Design
Hyper-V is used to implement hypervisor-based virtualization and enable the creation of VMs on physical servers. Hyper-V logically abstracts the server environment, in terms of CPU, memory, and network touch points, into multiple virtual software containers. Previous VMDC releases used VMware's hypervisor.
The Cisco Nexus 1000V Switch for Microsoft Hyper-V is a Layer 2 distributed virtual switch that extends Cisco networking benefits to Microsoft Windows Server 2012 Hyper-V deployments. It provides advanced switching features and is tightly integrated with the Hyper-V ecosystem.
Table 1-1 summarizes the capabilities and benefits of the Cisco Nexus 1000V Switch for Microsoft Hyper-V when used with Microsoft Hyper-V.
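For context, connectivity policy on the Nexus 1000V is defined on the Virtual Supervisor Module (VSM) as port profiles, which Microsoft System Center Virtual Machine Manager (SCVMM) then consumes when attaching VM network adapters. A minimal sketch is shown below; the profile name and VLAN ID are hypothetical:

```
! Hypothetical Nexus 1000V (VSM) port-profile sketch; name and VLAN are illustrative
port-profile type vethernet TenantVM-Profile
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
  publish port-profile
```

Publishing the port profile makes the policy visible to SCVMM, so the same network policy defined once on the VSM can be applied consistently to VMs across all Hyper-V hosts in the cluster.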
The Expanded Palladium tenancy model provides flexibility in placing server VLANs in different zones, public and private. VMDC 3.0.1 further refined this model for the private cloud use case: public virtual routing and forwarding instances (VRFs) are combined into one common public zone. The model assumes there is an "infrastructure" demilitarized zone (DMZ) above the common public zone, so no separate protected front-end zone (and VRF) is needed to accommodate per-tenant DMZs; this is the norm in enterprise environments. The public zone is shared across multiple user organizations, or "tenants" (infrastructure zone); it provides access to the public Internet and serves as a shared resource zone. Figure 1-3 shows a simplified, high-level version of this model.
Figure 1-3 Expanded Palladium Tenancy Model
Microsoft and VMware are both leading providers of cloud technologies. While their base technologies differ, their models exhibit the common functional components shown in Table 1-2.
However, as might be expected, Microsoft and VMware terminology differs. Figure 1-4 highlights key terms in the Microsoft and VMware hypervisor ecosystems.
Figure 1-4 Hypervisor Terminology Comparison
Microsoft and VMware also have different licensing practices, as summarized in Table 1-3.
There are three editions of VMware vSphere: Standard, Enterprise, and Enterprise Plus. To support VM management, each edition requires the purchase of a vCenter Server. For Nexus 1000V Switch for Microsoft Hyper-V support, an Enterprise Plus license is also required.
Refer to the VMware vSphere with Operations Management website for additional details.
Note If Self-Service and Service Management are required, the user should consider purchasing the vCloud Suite, which includes a license for Enterprise Plus.
Microsoft Private Cloud is available in a Standard and a Datacenter edition. The Standard edition limits the number of vCPUs and supported VMs, while the Datacenter edition imposes no such limits. The Nexus 1000V Switch for Microsoft Hyper-V is supported in both editions.
Refer to the Cisco Nexus 1000V Switch for Microsoft Hyper-V website for additional details on key benefits, features, and capabilities of Nexus 1000V with Microsoft Hyper-V.
Refer to the Microsoft Private Cloud website for additional details on key benefits, success stories, and how to evaluate or purchase Microsoft Hyper-V.
Refer to the Microsoft Private Cloud whitepaper for a comparative look at functionality, benefits, and economics.
Refer to VMware vSphere 5 vs. Microsoft Hyper-V 2012 for competitive performance results.
Both Microsoft and VMware can now manage multi-hypervisor environments.
Refer to the VMware vCenter Multi-Hypervisor Manager Documentation site to download the VMware vCenter Multi-Hypervisor Manager; documentation for this plug-in is also available there.
Microsoft System Center 2012 and SCVMM can manage multi-hypervisor environments. Refer to the Managing VMware Infrastructure in VMM site for additional guidance.
Microsoft Hyper-V and the Nexus 1000V Switch for Microsoft Hyper-V were tested in a VMDC 3.0.1 infrastructure. The system under test also leveraged the VMDC Virtual Management Infrastructure (VMI) for deploying the Nexus 1000V Switch for Microsoft Hyper-V Virtual Supervisor Module (VSM).
Figure 1-5 shows how the Microsoft Hyper-V compute environment connects into the VMDC network infrastructure, VMI, and storage area network (SAN).
Figure 1-5 VMDC Test Environment
Table 1-4 lists the system hardware components and their associated software versions.