The Virtualized Multiservice Data Center (VMDC) architecture is based on the foundational design principles of modularity, high availability (HA), differentiated service support, secure multi-tenancy, and automated service orchestration (Figure 1-1).
These design principles provide streamlined turn-up of new services, maximized service availability, resource optimization, facilitated business compliance, and support for self-service IT models. These benefits maximize operational efficiency and enable private and public cloud providers to focus on their core business objectives.
Figure 1-1 VMDC Design Principles
Modularity—Unstructured growth is at the root of many operational and CAPEX challenges for data center administrators. Defining standardized physical and logical deployment models is the key to streamlining operational tasks such as moves, adds, and changes, and troubleshooting performance issues or service outages. VMDC reference architectures provide blueprints for defining atomic units of growth within the data center, called points of delivery (PoDs).
High Availability—The concept of public and private “Cloud” is based on the premise that the data center infrastructure transitions from a cost center to an agile, dynamic platform for revenue-generating services. In this context, maintaining service availability is critical. VMDC reference architectures are designed for optimal service resilience, with no single point of failure for the shared (“multi-tenant”) portions of the infrastructure. As a result, great emphasis is placed upon availability and recovery analysis during VMDC system validation.
Differentiated Service—Generally, bandwidth is plentiful within the data center infrastructure. However, clients may need to remotely access their applications via the Internet or some other type of public or private WAN, and WANs are typically bandwidth bottlenecks. VMDC provides an end-to-end QoS framework for service tuning based upon application requirements. This release adds consideration of a set of tools for application visibility, control, and optimization, enhancing the ability to provide application-centric differentiated services.
Multi-Tenancy—As data centers transition to Cloud models, and from cost centers to profit centers, services will naturally broaden in scope, stretching beyond physical boundaries in new ways. Security models must also expand to address vulnerabilities associated with increased virtualization. In VMDC, “multi-tenancy” is implemented using logical containers, also called “Cloud Consumer” models, defined within these new, highly virtualized and shared infrastructures. These containers provide security zoning in accordance with Payment Card Industry (PCI), Federal Information Security Management Act (FISMA), and other business and industry standards and regulations. VMDC is certified for PCI and FISMA compliance.
Service Orchestration—Industry pundits note that the difference between a virtualized data center and a “cloud” data center is the operational model. The benefits of the cloud – agility, flexibility, rapid service deployment, and streamlined operations – are achievable only with advanced automation and service monitoring capabilities. The VMDC reference architectures include service orchestration and monitoring systems in the overall system solution. This includes best-of-breed solutions from Cisco (for example, Cisco Intelligent Automation for Cloud) and partners, such as BMC and Zenoss.
VMDC VSA 1.0.1 leveraged FabricPath as the Unified Data Center fabric. FabricPath combines the stability and scalability of routing with Layer 2 (L2) forwarding, supporting the creation of simple, scalable, and efficient L2 domains that apply to many network scenarios. Because traffic forwarding leverages the Intermediate System to Intermediate System (IS-IS) protocol, rather than Spanning Tree (STP), the bisection bandwidth of the network is expanded, facilitating data center-wide workload mobility.
For a brief primer on FabricPath technology, refer to:
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-687554.pdf
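As a rough illustration of the FabricPath approach described above, a minimal NX-OS enablement might look like the following sketch; the switch-id, VLAN range, and interface shown are illustrative placeholders, not values from this document:

```
! Hedged sketch: enabling FabricPath on a Nexus switch (NX-OS).
! Switch-id, VLAN range, and interface are placeholder values.
install feature-set fabricpath
feature-set fabricpath

! Assign a unique FabricPath switch-id per switch in the fabric.
fabricpath switch-id 11

! Mark the VLANs to be carried across the FabricPath core.
vlan 100-199
  mode fabricpath

! Core-facing ports forward via FabricPath (IS-IS) instead of STP.
interface ethernet 1/1
  switchport mode fabricpath
```

Because forwarding in the FabricPath core is computed by IS-IS rather than blocked by spanning tree, all equal-cost paths between switches remain active.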
In VSA 1.0.2 we return to an STP-based Layer 2 topology as an interim step for deploying VSA on the new Cisco Nexus 9000 switching systems. As of this writing, the Nexus 9000 systems support "standalone" mode, running NX-OS code for Layer 2 and Layer 3 data center functionality; however, FabricPath is not a supported L2 technology. Of the traditional STP protocol options supported on the Nexus 9000 systems—Spanning Tree Protocol (STP), IEEE 802.1w Rapid Spanning Tree (Rapid PVST+), and IEEE 802.1s Multiple Spanning Tree (MSTP)—we selected MSTP as the most scalable. We envision requiring a large number of transit VLANs through the Layer 2 domain for connections to per-tenant virtual routers (depending on the multi-tenant scale requirement), and MSTP decouples spanning tree instances from VLAN instances.
Note Certain application environments, especially those that generate high levels of broadcast, may not tolerate extremely large Layer 2 environments.
In this case, the VSA architecture mitigates high levels of broadcasts within the data center by logically bounding the MSTP L2 fabric with L3 devices: the PE/WAN edge router and multiple, per-tenant virtual routers.
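The instance-to-VLAN decoupling that motivated the MSTP choice can be sketched with a minimal NX-OS MST configuration; the region name, revision, VLAN ranges, and instance numbers below are illustrative placeholders rather than validated values:

```
! Hedged sketch: MST on NX-OS, mapping many transit VLANs to a few
! spanning-tree instances. All names and ranges are placeholders.
spanning-tree mode mst

spanning-tree mst configuration
  name VMDC-REGION
  revision 1
  ! Hundreds of per-tenant transit VLANs share only two instances,
  ! so spanning-tree state does not grow per VLAN.
  instance 1 vlan 100-1099
  instance 2 vlan 1100-2099
```

Because all switches in an MST region must agree on the region name, revision, and VLAN-to-instance mapping, the same configuration block would be applied consistently across the Layer 2 domain.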
Another minor change from previous VSA releases is the option to replace the clustered Cisco ASR 9000 WAN Edge/PE routers with redundant ASR 1000 routers. This change targets Private Cloud deployment cases, where fewer remote sites aggregate into the Data Center from the wide area backbone.
Finally, VSA 1.0.1 reintroduced the physical ASA security appliance as an alternative perimeter firewall to the CSR or ASA 1000V virtual firewalls, as part of a hybrid physical/virtual security deployment option. However, this was not a focus for VSA 1.0.2; although it remains a valid architectural option, it was not included in the scope of this incremental release.
Previous releases of VMDC addressed several methods of resiliently attaching redundant appliance or module-based service nodes to optimize service availability and efficient link path utilization, including EtherChannel, vPCs with Multi-Chassis EtherChannel on paired Virtual Switching Systems (VSSs), and vPCs on clustered (Cisco ASA) firewall appliances. However, service node implementation in VMDC VSA differs significantly from these releases in the following ways:
These characteristics provide for a "pay as you grow" model with significant CAPEX savings in upfront deployment costs. In terms of availability implications, dedication of resources to a specific tenant means that strict 1:1 redundancy may no longer be the default mode of operation for these forms of service nodes. Rather, administrators now have greater flexibility to fine-tune redundant services and methods for those tenants or organizations who have mission-critical applications with high availability requirements.
The VMDC architecture can support multiple virtual containers, referred to as cloud consumer models. These models are described in greater detail later in this document, and in previous release material:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/2.2/collateral/vmdcConsumerModels.pdf
Because this release is based on unique, dedicated per-tenant security, load balancing, and optimization services, VMDC VSA 1.0.2 focuses, for validation purposes, only on containers that do not feature shared (multi-tenant) security/services zones. High-level representations of these are highlighted in green in Figure 1-2.
Note The "Gold" container does not feature a shared zone, but is considered to be a subset of the "Expanded Gold" container.
As you move from left to right in Figure 1-2, the validated VMDC VSA containers, which are based upon real-world, commonly deployed N-tiered application and security models, become increasingly complex, growing from single to multiple security zones and policy enforcement points, and from single to multiple types of services. VMDC VSA features additional dedicated network service options, such as network analysis and optimization. Although not shown in Figure 1-2, these were validated as part of the Expanded Gold container in VSA 1.0. In VSA 1.0.1, we evolved the system architecture by extending these tenancy models down into the storage layer of the infrastructure, leveraging new storage abstraction and isolation technology in the form of NetApp's Storage Virtual Machines.
The following sections describe the network components used in the VMDC VSA solution (summarized in Table 1-1 ) and provide a snapshot of the intra-DC and overall system end-to-end network topology model validated in VMDC VSA 1.0.2 (Figure 1-3 and Figure 1-4).
Table 1-1 VMDC VSA 1.0.2 Solution Components

- WAN Edge/PE routing: Cisco ASR 9000, ASR 1000, ISR G2 3945, CSR
- Data center switching: Cisco Nexus 7009, 7004 (Nexus 7018 and Nexus 7010 not in SUT, but valid architectural options)
- Server load balancing: Citrix NetScaler VPX Server Load Balancer
- Compute: Cisco Unified Computing System (UCS): Cisco UCS 6296UP Fabric Interconnect, Cisco Fabric Extender 2208XP IO Module, UCS B200/230/440-M2 and B200-M3 Blade Servers, UCS M81KR Virtual Interface Card
- SAN switching modules: 1/2/4/8-Gbps 24-Port FC Module; 18/4-Port Multiservice Module; Sup-2A; 24-Port 8-Gbps FC Module; 18-Port 4-Gbps FC Module
- Storage: NetApp FAS 6250 (1)

1. Refer to NetApp storage array product family information: http://www.netapp.com/us/products/storage-systems/index.aspx
Figure 1-3 VMDC VSA 1.0.2 Intra-DC Topology
Figure 1-4 VMDC VSA 1.0 End-to-End Topology
The virtual service model may easily be utilized in scaled-down form at Enterprise remote sites to provide private cloud services as part of a Public Provider managed service offering. In this case, the remote sites in the preceding diagram would be centrally controlled via out-of-band management paths. The private clouds can be tailored to fit application and service requirements, ranging in size from a FlexPod or Vblock to a small C-Series chassis "pod-in-a-box" entry point (Figure 1-5).
Figure 1-5 Remote Site Private Cloud
The following release change summary is provided for clarity.
The following documents are available for reference and consideration.
http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns1050/landing_vmdc.html#~releases
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/2.6/vmdcm1f1wp.html
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/2.6/vmdctechwp.html
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/2.2/design_guide/vmdcDesign22.html
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/3.0.1/DG/VMDC_3.0.1_DG.html
http://www.cisco.com/c/en/us/solutions/enterprise/data-center-designs-data-center-interconnect/index.html
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/DCI/1-0/DG/DCI.html
http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/dz_cloudservice.html