
Cisco Intelligent Automation for Cloud

Cisco IAC 4.0 Configuring Multiple Nexus 1000v Virtual Switches

Document ID: 117687

Updated: Jun 05, 2014


Introduction

This document provides considerations and steps related to configuring multiple Nexus 1000v virtual switches for use with a single Cisco Intelligent Automation for Cloud deployment.

The following examples use two Datacenters sharing a single vCenter instance, although the approach and methodology are also applicable to multiple Datacenters using different vCenter instances running on different UCS PODs/clusters. In all cases, the intent is to support multiple Nexus 1000v virtual switches with one Cisco IAC and PNSC. Note that IAC workflow-driven configuration for multiple vCenter Datacenter instances is not supported in the current release, because the IAC Service Resource Container cannot currently span multiple vCenter Datacenters.

Requirements

Cisco IAC with multiple Nexus 1000v switches is supported with the product versions listed below (not necessarily the minimum versions). For the complete solution compatibility matrix, refer to the Cisco IAC Compatibility Matrix.

Cisco IAC Compatibility Matrix

image1.png

Configuration Approach

The approach detailed in this document is admin-driven rather than IAC workflow-driven, meaning setup configuration is done proactively so that the administrator can decide, when creating an Organization, which Datacenter to deploy to. The customer performing administration will want to decide where virtual network devices and virtual machines are deployed. More specifically, during the provisioning of Add Network, Add Compute and Network PODs, and Add Service Resource Container, the administrator is focused on which Datacenter an Organization and its subsequent virtual machines will be deployed to.

image26.png

Multiple Datacenters - Multiple vCenter Instances

Although the examples in this document use two Datacenters in a single vCenter instance, the same approach applies when each Datacenter has its own vCenter instance running on a separate UCS POD/cluster; in both cases, a single Cisco IAC and PNSC support multiple Nexus 1000v virtual switches.

Background Information

As previously mentioned, this deployment implements Cisco IAC with two Nexus 1000v switches under two different Datacenters in the same vCenter. The following illustration shows how all of the areas map together.

image2.png

For reference, although not described specifically in this document, here is a visual representation of how two UCS PODs, each with its own vCenter, can share one Cisco IAC and two Nexus 1000v virtual switches.

image3.png

Setup Considerations

The sample steps provided were for proof-of-concept testing, with the intention of providing a working example. The goal was to model a deployment supporting two Nexus 1000v virtual switches with one instance of Cisco IAC and PNSC. Provisioning was done only for Advanced Network Services-enabled Organizations and the VDCs under those Organizations.

Using a single Cisco IAC and PNSC, resources were discovered and registered; networks, PODs, and containers were built up proactively, with selections specific to each Datacenter. The end result is the ability to deploy an Organization into one Datacenter (IT/Support) and another Organization into the other Datacenter (IT/Support-II). Both of these Organizations exist under the same Tenant from the Cisco IAC and PNSC perspective, although this is not required.

Through the proactive setup of networks, PODs, and Containers, the administrator now has the choice of deploying Advanced Network Services devices to one Datacenter or the other.

Upon ordering a VDC, the appropriate Compute POD must be selected. There is flexibility in terms of which datastore and cluster to deploy VMs to per VDC (described later). Beyond this point, ordering VMs and using Virtual Network Services (Floating IP, Server Load Balancer Binding) have no dependencies in terms of being in a second Datacenter with a second Nexus1000v.

1.0 vCenter Setup

Two Datacenters (Support and Support-II) are in use within one vCenter. Each Datacenter has its own VSM, and each has a single ESXi host acting as the VEM for that VSM.

image4.png

1.1 Nexus1000v Setup

The Nexus 1000v must first be discovered by Cisco IAC. In order to do so, provide the SNMP username, authentication and privacy protocols, as well as SSH credentials.

An MD5 hash of a password can be generated on most Unix systems using the following command.

image5.png
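As a rough, hedged equivalent of the command shown above (assuming a Unix shell with OpenSSL available; the password shown is only a placeholder, not a value used in this proof of concept):

    # Generate an MD5 digest of the SNMP password (placeholder password shown)
    printf '%s' 'MySnmpPassword' | openssl dgst -md5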

The Nexus 1000v should have the following configuration related to SNMP to make it discoverable by Cisco IAC.

image6.png
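As a hedged sketch of what such a configuration might include on NX-OS (the user name, role, and hash values below are placeholders, not the values used in this proof of concept):

    ! SNMPv3 user with MD5 authentication and privacy, using localized keys
    snmp-server user iac-admin network-admin auth md5 0x<md5-hash> priv 0x<md5-hash> localizedkey
    ! Optional read-only community for basic reachability checks
    snmp-server community public group network-operator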

From Cisco IAC, run discovery and specify your Nexus1000v.

image7.png

1.2 Nexus 1000v Registration

After discovery, you need to register your N1kv from Setup -> Manage infrastructure. Registration gives the Device a Friendly Name, defines the Device Role and identifies the linkage to the PNSC it is currently integrated with.

The following is an example of what the registration form might look like in IAC for Nexus1000v.

image9.png

The following is an example of the policy-agent configuration at the bottom of the Nexus 1000v configuration, which should already be in place for integration of the Nexus 1000v and PNSC:

image10.png
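Depending on the Nexus 1000v and PNSC releases in use, this block is configured under either vnm-policy-agent or nsc-policy-agent; the following is a hedged sketch only, where the IP address, shared secret, and image file name are placeholders:

    vnm-policy-agent
      ! PNSC management IP (placeholder)
      registration-ip 192.0.2.10
      ! Must match the shared secret configured on PNSC
      shared-secret **********
      ! Policy agent image file (placeholder name)
      policy-agent-image bootflash:/<policy-agent-image-file>
      log-level info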

The following confirms integration with PNSC (PNSC can see both Nexus1kvs):

image11.png

1.3 Adding Networks to IAC

For Organization deployments which include Advanced Network Services, Cisco IAC must be made aware of which networks to use. The required networks (Infrastructure, Service, and Internet Transit) will be configured on each Nexus1kv. This means the Layer 2 domain (VLAN) for each of these networks exists within both Datacenters.

Each Nexus1kv has an uplink bound to vmnic(s) on the ESXi host which are in turn trunked to the physical switch fabric. In this way, the communication is host-to-host, inter-cluster, or even inter-datacenter, as long as the layer 2 domain is propagated and not isolated. The Enterprise, Load Balancer and Tenant networks are not mentioned here as they are dynamically created by Cisco IAC during Organization and VDC creation. User and Management networks are not relevant to this conversation.

When an Infrastructure, Service, or Internet Transit network is added, the network path determines which Nexus1kv is used and thus which Datacenter. This is an important point: rather than adding every instance where a network is known (for example, each ESXi host's vSwitch and each Nexus1kv), we specifically select which resource will be used to access this network.

The end result is that when a vNIC is assigned to a VM later on by the Cisco IAC workflow, it uses the network associated with the Nexus1kv in its Datacenter. This separation is required between Datacenters because the Nexus 1000v is deployed per Datacenter.

The following is an example of the vCenter integration of the two virtual switches in use in this document:

image13.png

Assuming the port-profile has been configured in the Nexus 1000v, the same network will exist and be selectable (after discovery by Cisco IAC) as a vCenter port-group. This vCenter port-group has a network path specific to the Nexus 1000v. IAC maintains these port-groups and network mappings in its database via the standards table in order to decide later the proper network to use on the vNIC assigned to the VM.
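For illustration, a minimal vEthernet port-profile of the kind that surfaces in vCenter as a port-group might look like the following hedged sketch (the profile name and VLAN ID are placeholders, not the values used in this proof of concept):

    port-profile type vethernet Infrastructure-Net
      switchport mode access
      switchport access vlan 100
      ! Publishes the profile to vCenter as a port-group of the same name
      vmware port-group
      no shutdown
      state enabled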

The following sections show the specific selections used in the proof of concept when performing Add Network.

1.4 Infrastructure Network – Add Networks to Cisco IAC

image14.png

1.5 Service Network – Add Networks to Cisco IAC

image15.png

1.6 Internet Transit Network – Add Networks to Cisco IAC

image16.png

1.7 Creating Network PODs

Network PODs are required to logically group physical and virtual network devices. In this case, we identify each Nexus1kv in its own Network POD and provide the range of VLANs to use. The VLAN ranges overlap, which is acceptable because IAC manages assigned networks and VLANs accordingly; however, separate Network PODs are needed because each specifies one of the Nexus 1000v virtual switches and maps to its own Compute POD and Service Resource Container, one for each Datacenter.

One important aspect to consider is that when Cisco IAC needs to create networks for tenants, enterprise transit, and a load balancer, it will create these networks within the Nexus1kv that has port-profiles (vCenter port-groups) connected to the Virtual Devices (CSR) of that Organization. For example, if the CSR in Datacenter A has been provisioned with an Infrastructure network for management and an Internet Transit network on Nexus1kv A, Cisco IAC will create the Tenant networks, as well as the Enterprise Transit and Load Balancer networks, in this same Nexus1kv.

Below are the Network POD settings used:

image17.png

1.8 Creating Compute PODs

The Compute POD identifies the underlying infrastructure type, in this case vCenter (vs. OpenStack or EC2). The POD also identifies the vCenter Datacenter and the UCS Manager (representing the B-Series hardware supporting this cluster/POD).

It should be noted that although both Compute PODs are using the same UCS Manager and same vCenter (Different Datacenters), any UCS Manager and vCenter which Cisco IAC has discovered are available for selection. In this way, a Nexus1kv in another Cluster/POD could be referenced and used.

Below are examples of the settings used during this proof of concept (note that this is the modify view of already-created Compute PODs):

image18.png

1.9 Service Resource Container

The Service Resource Container is the final step in identifying and assembling the associations of Compute, Storage and Network resources. It is worth taking notice that each Service Resource Container has been made from completely different selections for all items; this is intentional.

Since the Compute POD references the Network POD, this makes the virtual switch and tenant VLAN range known to the Service Resource Container. The Datacenter is identified with selection of the previously configured Compute POD.

Since the Datacenter could have multiple VMware clusters and datastores, the option is presented to make a selection for each. These will be used during deployment to determine the location of Virtual Network Devices.

The previously defined networks are also available for selection. This is an important step; recall that when the networks were added, only singular selections were made based on the network path that includes the Nexus 1000v belonging to the Datacenter.

For example:

image25.png

It is important to select networks corresponding to the network path including the Nexus1000v for the desired Datacenter since virtual machines in this Datacenter will only have access to the network path of its Nexus1000v.

Below are the Service Resource Containers assembled for each Datacenter; please note that it is also possible to specify Resource Pool CPU and Memory reservations, share size (CPU only) and limits.

image19.png

1.10 Add Public Subnet to Network POD

One final provisioning aspect to consider is the addition of Public Subnets. During Day0 of Cisco IAC configuration, the initial Network POD is added as well as a pool of public addresses. Public addresses are used for internet reachability to Virtual Machines in the Unprotected Public Zones, Virtual Machines in Protected Zones via Floating IP (Static NAT) and to Load Balancer Virtual IPs (VIPs).

Since a second Network POD was added corresponding to the second Nexus1000v, it is important to remember to add a range of public addresses for this Network POD before performing Create Organization.

image20.png

1.11 Create Organization

When creating an Organization, one of the items on the form is the selection of a Service Resource Container. The selection options are intentional choices which allow the administrator to select where and how to deploy the Advanced Network Services Virtual Devices (CSR, VSG, VPX), as well as which networks to connect them to.

The details of the previously assembled Container are conveniently presented, making it easy for the administrator to understand the overall mappings assembled earlier. Below are the selections made during Create Organization.

image21.png

The Virtual Network Devices will be deployed in a Resource Pool with the same name as the Service Resource Container as shown here:

image22.png

1.12 Create Virtual Datacenter

Once the Organization has been deployed successfully, the next step is to create a Virtual Data Center. Selections on the Create Virtual Datacenter form include a Compute POD, Cluster, and Datastore. The Service Resource Container also has these selections, which define where Virtual Network Devices are deployed during Organization creation. With Create Virtual Data Center, the selections determine where tenant VMs will be deployed in vCenter. These VMs are connected to new 'Tenant' networks added in public and/or private zones, as per the VDC.

Taking the second Datacenter (Support-II) as an example, a 2-Zone Gold VDC has been created. In this example, this VDC will hold VMs in the same Cluster and Datastore as the Virtual Network Devices. A new Resource Pool with the naming convention "Tenant"-"VDC" is created.

The Compute POD selected must correspond to one which has the Network POD of the Nexus1kv for the Datacenter you intend to use. This means the administrator must understand or remember which Network POD is associated with the chosen Compute POD. In our case, selecting the Compute POD which was also used in the Service Resource Container makes sense. The same Cluster and Datastore were also chosen for simplicity, although any Cluster and Datastore under this Datacenter would suffice.

image23.png

1.13 Additional Considerations

The data-uplink on the Nexus1kv trunks VLANs out to the fabric (via the physical vmnic) on each of the ESXi hosts. This needs to be manually configured to pass the specific VLANs of the infrastructure, service, internet transit and Network POD VLAN range (your subsequent tenant, load balancer and enterprise networks).

An example of the uplink is as follows:

image24.png
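In text form, a comparable Ethernet uplink port-profile might resemble the following hedged sketch (the profile name and VLAN list are placeholders; the screenshot above reflects the actual values used in this proof of concept):

    port-profile type ethernet data-uplink
      switchport mode trunk
      ! Allow the infrastructure, service, internet transit, and Network POD VLAN ranges
      switchport trunk allowed vlan 100,200,300,2000-2099
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled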

1.14 Local Template Registration Required

In vCenter, VM templates are local to a Datacenter. Templates intended for use in the second Datacenter must therefore be present in that Datacenter and registered with Cisco IAC so that they are available when ordering VMs from VDCs deployed there.

1.15 Compute POD Considerations

The Compute POD selected during VDC creation must have a Network POD association containing the Nexus 1000v Virtual Switch you intend to use. At the time of this writing, any discovered Compute POD can be selected; an enhancement to restrict these choices is tracked with Cisco bug ID CSCuo41679, "Create VDC Compute Pod drop-down options need to be more restrictive".

Summary: Multiple Compute PODs have been defined. For example, the "stress2 pod" Compute POD is associated with one Network POD, which has Nexus1kv A, while another Compute POD, "Stress2 compute POD Support II", is associated with another Network POD, which has Nexus1kv B.

Since the underlying Organization was created with a Service Resource Container referencing the Compute POD for Datacenter A, the CSR is already deployed referencing the network path of Nexus1kv A. If a VDC is then created for this Organization using "Stress2 compute POD Support II", the tenant networks for the VDC are provisioned in Nexus1kv B and will not be accessible to the CSR. The reason is that the CSR is in Datacenter A, corresponding to the Compute and Network POD that contain Nexus1kv A, while the VDC networks were created in Nexus1kv B, which is not reachable from Datacenter A.
