In HCS for Contact Center, virtualization of all UC applications and key third-party components uses Cisco Unified Computing System (UCS) hardware as the platform. The HCS virtualization integrates the UCS platform and SAN, and virtualizes the target UC applications. The following sections describe the deployment of the SP virtualization infrastructure.
Cisco UCS 6100 Series Fabric Interconnects are a core part of UCS, providing network connectivity and management capabilities for attached blades and chassis. The Cisco UCS 6100 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE) functions.
The interconnects provide the management and communication support for the Cisco UCS B-Series blades and the UCS 5100 Series blade server chassis. All chassis, and therefore all blades, attached to the interconnects become part of a single, highly available management domain. By supporting a unified fabric, the Cisco UCS 6100 Series provides LAN and SAN connectivity for all blades in its domain.
You will require the following connections for a working UCS:
Console connection on the 6100 Series switch.
At least one 10 Gbps connection between the 6100 Series switch and the Fabric Extender 2104 on the chassis.
At least one 10 Gbps connection on the 6100 Series switch for the northbound interface to a core router or switch (could be a port-channel connection).
At least one FCoE connection between the 6100 Series switch and a Multilayer Director Switch (MDS) switch.
Cluster link ports connected between the 6100 Series switches in a high availability deployment.
Basic Configuration for UCS
UCS Manager provides centralized management capabilities, creates a unified management domain, and serves as the central nervous system of the UCS. UCSM delivers embedded device-management software that manages the system end to end as a single logical entity through a GUI, a CLI, or an XML API.
UCS Manager resides on a pair of Cisco UCS 6100 Series fabric interconnects using a clustered, active-standby configuration for HA. The software participates in server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.
After the 6100 Series initial configuration, you can configure UCS from the GUI. Launch the GUI from a URL that points to the configured 6100 Series management IP address.
You must define the ports that are capable of passing Fibre Channel (FC) traffic as Fibre Channel uplink ports using the SAN configuration tab of the UCS Manager.
Any time the number of links between the 6100 Series switch and the blade server chassis changes, you must perform a chassis acknowledgment so that UCS Manager becomes aware of the link change and rebuilds its connectivity data.
Configure Server Management IP Address Pool
The UCSM server management IP address pool assigns an external IP address to each of the installed blade servers. UCS Manager uses the IP addresses in a management IP pool for external access to a server through the following:
Serial over LAN
Complete the following procedure to configure the server management IP address pool.
Choose Admin > Communication Services > Management IP Pool.
Right-click and select Create Block of IP Addresses.
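If you prefer the UCS Manager CLI, the same block can be created from the fabric interconnect command line. The following is a sketch only; the address range, gateway, and mask are example values that you must replace with site-specific ones:

UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 192.0.2.10 192.0.2.25 192.0.2.1 255.255.255.0
UCS-A /org/ip-pool/block # commit-buffer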
Configure UCS LAN
The enabled uplink Ethernet ports on the UCS 6100 Series switch forward traffic to the next layer in the network. You can configure LAN properties such as VLANs, MAC pools, and vNIC templates within the LAN view in the UCS Manager.
Complete the following procedures to create VLANs and MAC pools.
In the Cisco UCS, a named VLAN creates a connection to a specific external LAN. The VLAN isolates traffic to that external LAN, which includes broadcast traffic. The name that you assign to a VLAN ID adds a layer of abstraction that you can use to globally update all servers associated with service profiles that use the named VLAN. You do not need to reconfigure servers individually to maintain communication with the external LAN. Complete the following procedure to add VLANs.
Click the LAN tab and then right-click the VLANs.
Enter the name or designation of the VLANs being added and the VLAN IDs to use.
Choose how the named VLAN is accessible to the 6100 Series switches to complete the VLAN addition.
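A named VLAN can also be created from the UCS Manager CLI. This sketch uses an example VLAN name and ID:

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vlan100 100
UCS-A /eth-uplink/vlan # commit-buffer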
Create MAC Pools
A MAC pool is a collection of network identities, or MAC addresses, that are unique in Layer 2 (L2) and available to be assigned to a vNIC on a server. If you use MAC pools in service profiles, you do not have to manually configure the MAC addresses to be used by the server associated with the service profile.
To assign a MAC address to a server, you must include the MAC pool in a vNIC policy. The vNIC policy is then included in the service profile assigned to that server. Complete the following procedure to create a MAC pool.
Click the LAN tab.
Right-click Pools.
Select Create MAC Pool.
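As a sketch, an equivalent MAC pool can be created from the UCS Manager CLI; the pool name and address block below are examples only and must follow your own addressing plan:

UCS-A# scope org /
UCS-A /org # create mac-pool hcs-mac-pool
UCS-A /org/mac-pool # create block 00:25:B5:01:00:00 00:25:B5:01:00:C7
UCS-A /org/mac-pool/block # commit-buffer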
Configure UCS SAN
Each UCS 6120 fabric interconnect has an open slot to add expansion modules that add Fibre Channel ports for SAN connectivity. You can enable these ports and their attributes through the SAN scope of the UCS Manager.
Complete the following procedures to configure SAN properties such as VSANs, Fibre Channel uplink ports, World Wide Node Name (WWNN) pools, World Wide Port Name (WWPN) pools, and Virtual Host Bus Adapter (vHBA) templates, within the SAN view in the UCS Manager.
A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic to that external SAN, including broadcast traffic. The traffic on one named VSAN knows that the traffic on another named VSAN exists, but cannot read or access that traffic.
Like a VLAN name, the name that you assign to a VSAN ID adds a layer of abstraction that allows you to globally update all servers associated with service profiles that use the named VSAN. You do not need to reconfigure the servers individually to maintain communication with the external SAN. You can create more than one named VSAN with the same VSAN ID.
In a cluster configuration, you can configure a named VSAN to be accessible only to the FC uplinks on one fabric interconnect or to the FC uplinks on both fabric interconnects. Complete the following procedure to create a VSAN.
Click the SAN tab.
Right-click VSANs.
Configure the following to complete the VSAN configuration:
Enter a name for the VSAN.
Select how the VSAN interacts with the interconnect fabric(s).
Enter a VSAN ID.
Enter the FCoE VLAN.
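The equivalent UCS Manager CLI, shown here as a sketch with example values, creates the named VSAN with its VSAN ID and FCoE VLAN:

UCS-A# scope fc-uplink
UCS-A /fc-uplink # create vsan vsan600 600 600
UCS-A /fc-uplink/vsan # commit-buffer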
Associate VSAN with an FC Uplink Port
After you create a VSAN, you must associate it with a physical FC interface. Complete the following procedure to associate VSAN with an FC uplink port.
Click the Equipment tab.
Open the target FC port and select the desired VSAN from the drop-down list.
Click OK and save the changes.
Create WWNN Pools
A World Wide Node Name (WWNN) pool is one of two pools used by the FC vHBAs in the UCS. You can create separate pools for WWNNs assigned to the server and World Wide Port Names (WWPNs) assigned to the vHBA. The pool assigns WWNNs to servers. If you include a WWNN pool in a service profile, the associated server is assigned a WWNN from that pool.
Click the SAN tab.
Choose Pools, select WWNN Pools, and expand it.
Choose WWNN Pool node-default.
Right-click and select Create WWN Block.
Enter the pool size and click OK.
Create WWPN Pools
A WWPN pool is the second type of pool used by Fibre Channel vHBAs in the UCS. The WWPN pool assigns WWPNs to the vHBAs. If a pool of WWPNs is included in a service profile, the associated server is assigned a WWPN from that pool.
Click the SAN tab.
Choose Pools, select WWPN pools and expand it.
Choose WWPN Pool node-default.
Right-click Create WWPN Block.
Enter the pool size and click OK.
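Both WWN pools can also be created from the UCS Manager CLI. The following sketch uses example pool names and WWN ranges; replace them with values from your own addressing plan:

UCS-A# scope org /
UCS-A /org # create wwn-pool hcs-wwnn-pool node-wwn-assignment
UCS-A /org/wwn-pool # create block 20:00:00:25:B5:00:00:00 20:00:00:25:B5:00:00:3F
UCS-A /org/wwn-pool/block # commit-buffer

UCS-A# scope org /
UCS-A /org # create wwn-pool hcs-wwpn-pool port-wwn-assignment
UCS-A /org/wwn-pool # create block 20:00:00:25:B5:01:00:00 20:00:00:25:B5:01:00:3F
UCS-A /org/wwn-pool/block # commit-buffer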
Configure UCS Server
Cisco UCS Manager uses service profiles to provision servers and their I/O properties. Server, network, and storage administrators create the service profiles, which are centrally managed and stored in a database on the Cisco UCS 6100 Series fabric interconnects.
A service profile provides the following services:
Service profiles are the central concept of Cisco UCS; each service profile ensures that the associated server hardware is configured to support the applications it hosts.
The service profile maintains the server hardware configurations, interfaces, fabric connectivity, and server and network identity. This information is stored in a format that you can manage through Cisco UCS Manager.
A service profile provides the following advantages:
Simplifies the creation of service profiles and ensures consistent policies within the system for a given service or application, because service profile templates are used. This approach makes it easy to configure one server or 320 servers with thousands of virtual machines, decoupling scale from complexity.
Reduces the number of manual steps that need to be taken, helping reduce the chance for human error, improving consistency, and reducing server and network deployment times.
Dissociates hardware-specific attributes from the design. If a specific server in the deployment is replaced, the service profile associated with the old server is applied to the newly installed server, allowing near-seamless replacement of hardware if needed.
Configure the following on the MDS switches to place the UCS server blade vHBAs and the SAN port World Wide Names (PWWNs) in the same zone, and activate the zone set. The CLI configuration for MDS-A is as follows:
fcalias name scale-esxi-c5b1-vHBA0 vsan 600
member pwwn 20:00:00:25:b5:02:13:7e
fcalias name cx4-480-spb-b0 vsan 600
member pwwn 50:06:01:68:46:e0:1b:e0
fcalias name cx4-480-spa-a1 vsan 600
member pwwn 50:06:01:61:46:e0:1b:e0
zone name zone33 vsan 600
member fcalias cx4-480-spb-b0
member fcalias cx4-480-spa-a1
member fcalias scale-esxi-c5b1-vHBA0
zoneset name scale_zoneset vsan 600
member zone33
zoneset activate name scale_zoneset vsan 600
The CLI configuration for MDS-B is as follows:
fcalias name scale-esxi-c5b1-vHBA1 vsan 700
member pwwn 20:00:00:25:b5:02:13:6e
fcalias name cx4-480-spa-a0 vsan 700
member pwwn 50:06:01:60:46:e0:1b:e0
fcalias name cx4-480-spb-b1 vsan 700
member pwwn 50:06:01:69:46:e0:1b:e0
zone name zone33 vsan 700
member fcalias cx4-480-spa-a0
member fcalias cx4-480-spb-b1
member fcalias scale-esxi-c5b1-vHBA1
zoneset name scale_zoneset vsan 700
member zone33
zoneset activate name scale_zoneset vsan 700
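After activating the zone set on each MDS switch, you can verify that the vHBA and storage PWWNs are logged in and zoned together. These are standard MDS NX-OS show commands (shown for VSAN 600; repeat with VSAN 700 on MDS-B):

show zoneset active vsan 600
show zone status vsan 600
show flogi database vsan 600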
Complete the following procedure to configure SAN.
Create LUNs for boot. Cisco recommends using the lowest LUN numbers for boot and making sure the LUN numbers for shared storage are higher than the ones used for boot. An 8 GB boot LUN should be sufficient.
Register the hosts. Navigate to Storage System Connectivity Status and register each unknown vHBA.
Create a storage group.
ESX Boot from SAN
Complete the following procedures to configure for booting from SAN:
Before you deploy the Nexus 1000v, your system must meet the following requirements.
VMware vCenter Server 5.0 is installed.
All hosts must be running ESXi 5.0.
Two ESXi hosts are available to run the primary and standby VSM VM.
Each host should have at least two physical NICs.
The uplink should be a trunk port carrying all VLANs configured on the ESX host.
Ensure that the inter-switch trunk links carry all relevant VLANs, including the control, packet, and native VLANs.
On the host running the VSM VM, the control and packet VLANs are configured through the VMware switch and the VMNIC.
Add Hosts to vCenter
Complete the following procedure to add hosts to vCenter.
Add hosts using the Add Host Wizard in the vSphere client. Enter the IP address of the host and the username and password of the ESXi server, which were configured when the ESXi software was loaded on the host.
Assign a license to the Host.
Configure VSM host lockdown.
Review the options you have selected, and then click Finish to add the host.
What to Do Next
After you add a host, confirm by navigating to the path Home > Inventory > Hosts.
Set up VEM on each ESX Server
Complete the following procedure to configure the Virtual Ethernet Module (VEM) on each ESX server:
Access the ESX server SSH console.
Copy the .vib file from %Nexus%\VEM\ to the /tmp directory of the ESX server.
Enter the following command:
esxcli software vib install -v /tmp/cross_cisco-vem-v140-<version>.vib
After the successful installation, the following message appears:
/tmp # esxcli software vib install -v /tmp/cross_cisco-vem-v140-<version>.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v140-esx_<version>
VIBs Removed: Cisco_bootbank_cisco-vem-v131-esx_<version>
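You can confirm that the VEM is installed and running on the host before proceeding. These are standard ESXi commands; the exact output varies by release:

/tmp # esxcli software vib list | grep cisco-vem
/tmp # vem status -v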
Install Cisco Nexus VSM
Complete the following procedure to install the Cisco Nexus Virtual Supervisor Module (VSM).
Mount the Nexus 1000V ISO image to the local system.
Navigate to the %Nexus%\VSM\Installer_App folder using the command prompt.
Enter the following command to launch the Nexus 1000V Installation Management application:
java -jar Nexus1000V-install.jar
Enter the vCenter credentials and click Next.
Select the VSM host from the vCenter inventory and click Next.
Select OVA file to create VSM.
Select the OVA image from the location %Nexus%\VSM\Install.
Enter the virtual machine name.
Select a datastore.
Choose L2 in the Configure port groups step.
Select appropriate VLANs for port groups.
Configure VSM with the native VLAN ID and network settings.
To begin the installation, click Next.
Review the configuration. The system checks the configuration status.
Click Yes to migrate the host and its network to the DVS.
The system performs the DVS migration checks.
Verify that the Nexus1000v virtual machines are created in vCenter.
Copy the license file to bootflash and enter the following command to install the license:
install license bootflash:<license filename>.lic
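Once the license is installed, a quick check from the VSM confirms it. These are standard NX-OS show commands:

show license usage
show license brief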
Configure Cisco Nexus
Complete the following procedure to configure the Cisco Nexus 1000V switch for Cisco HCS for Contact Center.
Complete all configuration steps from configuration terminal mode (enter enable, then configure terminal).
Configure the Nexus port profile uplink:
port-profile type ethernet n1kv-uplink0
switchport mode trunk
switchport trunk allowed vlan <VLAN IDs>
channel-group auto mode on mac-pinning
system vlan <vlan ID> # Customer specific native vlan ID identified in the switch
no shutdown
state enabled # enable the port profile
Configure the public VM port profiles:
port-profile type vethernet Visible-VLAN
switchport mode access
switchport access vlan <vlan ID> # Customer specific public vlan ID defined in the switch
no shutdown
state enabled # enable the port profile
Configure the private VM port profiles:
port-profile type vethernet Private-VLAN
switchport mode access
switchport access vlan <vlan ID> # Customer specific private vlan ID defined in the switch
no shutdown
state enabled # enable the port profile
There are two active uplinks for the blades; one uplink carries the traffic and the other is for failover. However, traffic flows through only one uplink at a time, so NIC overriding is required. Use pinning IDs to implement NIC overriding, where pinning ID 0 refers to the first uplink and pinning ID 1 refers to the second uplink.
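The pinning described above can be sketched inside a vethernet port profile; the profile name here is an example taken from the earlier configuration:

port-profile type vethernet Visible-VLAN
pinning id 0 # pin this profile's traffic to the first uplink; use 1 for the second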
Add Second Customer Instance in Single Blade for 500 Agent Deployment
Perform the following procedure to add a second customer instance for a single blade in a 500 agent deployment model.
Create new Data Stores (if needed) and associate the corresponding LUNs.
Enter the following commands in the Nexus prompt to configure Nexus to add one more VLAN:
vlan <VLAN ID>
Create and configure a new VLAN on UCS Manager:
Log in to the UCS Manager console and click the LAN tab.
Navigate to Create VLANs.
Enter the following VLAN Details:
Fabric and Sharing type
Click the Servers tab and select the vNIC.
Select Ethernet and click Modify VLANs.
Verify the VLANs that you want to associate with the particular server.
Enter the following commands in the Nexus prompt to create a port profile for newly created VLAN:
port-profile type vethernet VM-<VLAN ID>
switchport mode access
switchport access vlan <VLAN ID>
no shutdown
state enabled # enable the port profile
Configure the following details to associate the second 500-agent virtual machines with the new VLAN:
Log in to the ESXi host using the VMware Infrastructure Client:
Select a VM.
Select Edit Settings.
Select Network Adapters.
Select the newly created VLAN from the list.
Create Two-Way External Trust
You must create a two-way trust between the service provider and the customer domain controllers for each customer instance for Unified CCDM. Before you create a two-way external trust, you must Create Conditional Forwarders and Create Forwarders in both the service provider domain controller and the customer domain controller.
Complete the following procedure to create a two-way external trust between the service provider domain controller and the customer domain controller.
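On Windows Server domain controllers, such a trust can also be created from the command line with the netdom tool. This is a sketch only; the domain names and accounts below are hypothetical and must be replaced with your service provider and customer values:

netdom trust customer.example.com /d:sp.example.com /add /twoway /ud:sp\Administrator /pd:* /uo:customer\Administrator /po:*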