- Preface
- New and Changed Information
- Overview of Cisco Unified Computing System
- Overview of Cisco UCS Manager
- Overview of Cisco UCS Manager GUI
- Configuring the Fabric Interconnects
- Configuring Ports and Port Channels
- Configuring Communication Services
- Configuring Authentication
- Configuring Organizations
- Configuring Role-Based Access Control
- Configuring DNS Servers
- Configuring System-Related Policies
- Managing Licenses
- Managing Virtual Interfaces
- Registering Cisco UCS Domains with Cisco UCS Central
- LAN Uplinks Manager
- VLANs
- Configuring LAN Pin Groups
- Configuring MAC Pools
- Configuring Quality of Service
- Configuring Network-Related Policies
- Configuring Upstream Disjoint Layer-2 Networks
- Configuring Named VSANs
- Configuring SAN Pin Groups
- Configuring WWN Pools
- Configuring Storage-Related Policies
- Configuring Fibre Channel Zoning
- Configuring Server-Related Pools
- Setting the Management IP Address
- Configuring Server-Related Policies
- Configuring Server Boot
- Deferring Deployment of Service Profile Updates
- Service Profiles
- Configuring Storage Profiles
- Managing Power in Cisco UCS
- Managing Time Zones
- Managing the Chassis
- Managing Blade Servers
- Managing Rack-Mount Servers
- Starting the KVM Console
- CIMC Session Management
- Managing the I/O Modules
- Backing Up and Restoring the Configuration
- Recovering a Lost Password
- Configuring vNIC Templates
- Configuring Ethernet Adapter Policies
- Ethernet and Fibre Channel Adapter Policies
- Creating an Ethernet Adapter Policy
- Configuring an Ethernet Adapter Policy to Enable eNIC Support for MRQS on Linux Operating Systems
- Configuring an Ethernet Adapter Policy to Enable Stateless Offloads with NVGRE
- Configuring an Ethernet Adapter Policy to Enable Stateless Offloads with VXLAN
- Deleting an Ethernet Adapter Policy
- Configuring the Default vNIC Behavior Policy
- Configuring LAN Connectivity Policies
- LAN and SAN Connectivity Policies
- Privileges Required for LAN and SAN Connectivity Policies
- Interactions between Service Profiles and Connectivity Policies
- Creating a LAN Connectivity Policy
- Creating a vNIC for a LAN Connectivity Policy
- Deleting a vNIC from a LAN Connectivity Policy
- Creating an iSCSI vNIC for a LAN Connectivity Policy
- Deleting an iSCSI vNIC from a LAN Connectivity Policy
- Deleting a LAN Connectivity Policy
- Configuring Network Control Policies
- Configuring Multicast Policies
- Configuring UDLD Link Policies
- Understanding UDLD
- UDLD Configuration Guidelines
- Creating a Link Profile
- Creating a UDLD Link Policy
- Modifying the UDLD System Settings
- Assigning a Link Profile to a Port Channel Ethernet Interface
- Assigning a Link Profile to an Uplink Ethernet Interface
- Assigning a Link Profile to a Port Channel FCoE Interface
- Assigning a Link Profile to an Uplink FCoE Interface
- Configuring VMQ Connection Policies
Configuring Network-Related Policies
This chapter includes the following sections:
- Configuring vNIC Templates
- Configuring Ethernet Adapter Policies
- Configuring the Default vNIC Behavior Policy
- Configuring LAN Connectivity Policies
- Configuring Network Control Policies
- Configuring Multicast Policies
- Configuring UDLD Link Policies
- Configuring VMQ Connection Policies
Configuring vNIC Templates
vNIC Template
This policy defines how a vNIC on a server connects to the LAN. This policy is also referred to as a vNIC LAN connectivity policy.
Cisco UCS Manager does not automatically create a VM-FEX port profile with the correct settings when you create a vNIC template. If you want to create a VM-FEX port profile, you must configure the target of the vNIC template as a VM.
You need to include this policy in a service profile for it to take effect.
Note | If your server has two Emulex or QLogic NICs (Cisco UCS CNA M71KR-E or Cisco UCS CNA M71KR-Q), you must configure vNIC policies for both adapters in your service profile to get a user-defined MAC address for both NICs. If you do not configure policies for both NICs, Windows still detects both of them in the PCI bus. Then, because the second NIC is not part of your service profile, Windows assigns it a hardware MAC address. If you then move the service profile to a different server, Windows sees additional NICs because one NIC did not have a user-defined MAC address. |
Creating a vNIC Template
This policy requires that one or more of the following resources already exist in the system:
What to Do Next
Include the vNIC template in a service profile.
Binding a vNIC to a vNIC Template
You can bind a vNIC associated with a service profile to a vNIC template. When you bind the vNIC to a vNIC template, Cisco UCS Manager configures the vNIC with the values defined in the vNIC template. If the existing vNIC configuration does not match the vNIC template, Cisco UCS Manager reconfigures the vNIC. You can only change the configuration of a bound vNIC through the associated vNIC template. You cannot bind a vNIC to a vNIC template if the service profile that includes the vNIC is already bound to a service profile template.
If the vNIC is reconfigured when you bind it to a template, Cisco UCS Manager reboots the server associated with the service profile.
Unbinding a vNIC from a vNIC Template
Deleting a vNIC Template
Configuring Ethernet Adapter Policies
Ethernet and Fibre Channel Adapter Policies
These policies govern the host-side behavior of the adapter, including how the adapter handles traffic. For example, you can use these policies to change default settings for the following:
- Queues
- Interrupt handling
- Performance enhancement
- RSS hash
- Failover in a cluster configuration with two fabric interconnects
Operating System Specific Adapter Policies
By default, Cisco UCS provides a set of Ethernet adapter policies and Fibre Channel adapter policies. These policies include the recommended settings for each supported server operating system. Operating systems are sensitive to the settings in these policies. Storage vendors typically require non-default adapter settings. You can find the details of these required settings on the support list provided by those vendors.
We recommend that you use the values in these policies for the applicable operating system. Do not modify any of the values in the default policies unless directed to do so by Cisco Technical Support.
However, if you are creating an Ethernet adapter policy for a Windows OS (instead of using the default Windows adapter policy), you must use the following formulas to calculate values that work with Windows:
- Completion Queues = Transmit Queues + Receive Queues
- Interrupt Count = (Completion Queues + 2) rounded up to nearest power of 2
For example, if Transmit Queues = 1 and Receive Queues = 8 then:
- Completion Queues = 1 + 8 = 9
- Interrupt Count = (9 + 2) rounded up to the nearest power of 2 = 16
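As an illustration, the two formulas can be expressed as a short script (a hypothetical helper, not part of Cisco UCS Manager):

```python
def windows_adapter_policy_values(transmit_queues: int, receive_queues: int):
    """Compute Completion Queues and Interrupt Count for a Windows
    Ethernet adapter policy, using the formulas stated above."""
    completion_queues = transmit_queues + receive_queues
    # Interrupt Count = (Completion Queues + 2) rounded up to the
    # nearest power of 2.
    target = completion_queues + 2
    interrupt_count = 1
    while interrupt_count < target:
        interrupt_count *= 2
    return completion_queues, interrupt_count

# Example from the text: Transmit Queues = 1 and Receive Queues = 8.
print(windows_adapter_policy_values(1, 8))  # (9, 16)
```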
- Accelerated Receive Flow Steering
- Interrupt Coalescing
- Adaptive Interrupt Coalescing
- RDMA Over Converged Ethernet for SMB Direct
- Guidelines and Limitations for SMB Direct with RoCE
Accelerated Receive Flow Steering
Accelerated Receive Flow Steering (ARFS) is hardware-assisted receive flow steering that can increase CPU data cache hit rate by steering kernel level processing of packets to the CPU where the application thread consuming the packet is running.
Using ARFS can improve CPU efficiency and reduce traffic latency. Each receive queue of a CPU has an interrupt associated with it. You can configure the Interrupt Service Routine (ISR) to run on a CPU. The ISR moves the packet from the receive queue to the backlog of one of the current CPUs, which processes the packet later. If the application is not running on this CPU, the CPU must copy the packet to non-local memory, which adds to latency. ARFS can reduce this latency by moving that particular stream to the receive queue of the CPU on which the application is running.
Guidelines and Limitations for Accelerated Receive Flow Steering
Interrupt Coalescing
Adapters typically generate a large number of interrupts that a host CPU must service. Interrupt coalescing reduces the number of interrupts serviced by the host CPU. This is done by interrupting the host only once for multiple occurrences of the same event over a configurable coalescing interval.
When interrupt coalescing is enabled for receive operations, the adapter continues to receive packets, but the host CPU does not immediately receive an interrupt for each packet. A coalescing timer starts when the first packet is received by the adapter. When the configured coalescing interval times out, the adapter generates one interrupt with the packets received during that interval. The NIC driver on the host then services the multiple packets that are received. Reduction in the number of interrupts generated reduces the time spent by the host CPU on context switches. This means that the CPU has more time to process packets, which results in better throughput and latency.
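The timer behavior described above can be pictured with a toy model (a sketch for intuition only; the function name and packet timings are invented for this example):

```python
# Toy model of receive-side interrupt coalescing, not adapter firmware:
# a coalescing timer starts at the first packet after the previous
# interrupt, and when the interval expires, one interrupt covers all
# packets received during that interval.
def interrupts_generated(packet_arrival_times, coalescing_interval):
    interrupts = 0
    window_end = None  # end time of the current coalescing window
    for t in sorted(packet_arrival_times):
        if window_end is not None and t >= window_end:
            interrupts += 1      # fire one interrupt for the window
            window_end = None
        if window_end is None:
            window_end = t + coalescing_interval  # timer (re)starts
    if window_end is not None:
        interrupts += 1          # flush the final window
    return interrupts

packets = [0.0, 0.01, 0.02, 1.0, 1.01]       # two bursts of packets
print(interrupts_generated(packets, 0.125))  # 2 (one per burst)
print(interrupts_generated(packets, 0.0))    # 5 (one per packet)
```

With a 125 ms interval, each burst of packets costs the host a single interrupt instead of one interrupt per packet, which is the context-switch reduction the text describes.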
Adaptive Interrupt Coalescing
Due to the coalescing interval, the handling of received packets adds to latency. For small packets with a low packet rate, this latency increases. To avoid this increase in latency, the driver can adapt to the pattern of traffic flowing through it and adjust the interrupt coalescing interval for a better response from the server.
Adaptive interrupt coalescing (AIC) is most effective in connection-oriented, low link utilization scenarios such as email servers, database servers, and LDAP servers. It is not suited for line-rate traffic.
Guidelines and Limitations for Adaptive Interrupt Coalescing
RDMA Over Converged Ethernet for SMB Direct
RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol, and hence, it allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows 2012 and later versions use RDMA for accelerating and improving the performance of SMB file sharing and Live Migration.
Cisco UCS Manager Release 2.2(4) supports RoCE for Microsoft SMB Direct. It sends additional configuration information to the adapter while creating or modifying an Ethernet adapter policy.
Guidelines and Limitations for SMB Direct with RoCE
- Microsoft SMB Direct with RoCE is supported only on Windows 2012 R2.
- Microsoft SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
- Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
- Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
- The maximum number of queue pairs per adapter is 8192.
- The maximum number of memory regions per adapter is 524288.
- If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.
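The numeric limits in this list lend themselves to a simple pre-check. The following sketch is hypothetical (the function and its messages are not a Cisco tool); it only encodes the limits stated above:

```python
def check_roce_config(roce_vnics_per_adapter, queue_pairs, memory_regions,
                      ucsm_version=(2, 2, 4)):
    """Check a proposed RoCE configuration against the guidelines above.
    Returns a list of violation messages (empty when the config is OK).
    Illustrative helper only; limits are those listed in this section."""
    problems = []
    if ucsm_version < (2, 2, 4):
        problems.append("RoCE for SMB Direct requires UCS Manager 2.2(4)")
    if roce_vnics_per_adapter > 4:
        problems.append("more than 4 RoCE-enabled vNICs per adapter")
    if queue_pairs > 8192:
        problems.append("more than 8192 queue pairs per adapter")
    if memory_regions > 524288:
        problems.append("more than 524288 memory regions per adapter")
    return problems

# A configuration within all limits produces no violations.
print(check_roce_config(roce_vnics_per_adapter=2, queue_pairs=4096,
                        memory_regions=100000))  # []
```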
Creating an Ethernet Adapter Policy
Tip | If the fields in an area are not displayed, click the Expand icon to the right of the heading. |
Configuring an Ethernet Adapter Policy to Enable eNIC Support for MRQS on Linux Operating Systems
Cisco UCS Manager includes eNIC support for the Multiple Receive Queue Support (MRQS) feature on Red Hat Enterprise Linux Version 6.x and SUSE Linux Enterprise Server Version 11.x.
Step 1 | Create an Ethernet adapter policy. |
Step 2 | Install an eNIC driver Version 2.1.1.35 or later. See the Cisco UCS Virtual Interface Card Drivers for Linux Installation Guide. |
Step 3 | Reboot the server. |
Configuring an Ethernet Adapter Policy to Enable Stateless Offloads with NVGRE
Cisco UCS Manager supports stateless offloads with NVGRE only with Cisco UCS VIC 1340 and/or Cisco UCS VIC 1380 adapters that are installed on servers running Windows Server 2012 R2 operating systems. Stateless offloads with NVGRE cannot be used with NetFlow, usNIC, VM-FEX, or VMQ.
Step 1 | In the Navigation pane, click the Servers tab. |
Step 2 | On the Servers tab, expand . |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. |
Step 4 | Right-click Adapter Policies and choose Create Ethernet Adapter Policy. For more information on creating an Ethernet adapter policy, see Creating an Ethernet Adapter Policy. |
Step 5 | Click OK to create the Ethernet adapter policy. |
Step 6 | Install an eNIC driver Version 3.0.0.8 or later. For more information, see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vic_drivers/install/Windows/b_Cisco_VIC_Drivers_for_Windows_Installation_Guide.html. |
Step 7 | Reboot the server. |
Configuring an Ethernet Adapter Policy to Enable Stateless Offloads with VXLAN
Cisco UCS Manager supports stateless offloads with VXLAN only with Cisco UCS VIC 1340 and/or Cisco UCS VIC 1380 adapters that are installed on servers running VMWare ESXi Release 5.5 and later releases of the operating system. Stateless offloads with VXLAN cannot be used with NetFlow, usNIC, VM-FEX, or VMQ.
Step 1 | In the Navigation pane, click the Servers tab. |
Step 2 | On the Servers tab, expand . |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. |
Step 4 | Right-click Adapter Policies and choose Create Ethernet Adapter Policy. |
Step 5 | Click OK to create the Ethernet adapter policy. |
Step 6 | Install an eNIC driver Version 2.1.2.59 or later. For more information, see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vic_drivers/install/ESX/2-0/b_Cisco_VIC_Drivers_for_ESX_Installation_Guide.html. |
Step 7 | Reboot the server. |
Deleting an Ethernet Adapter Policy
Configuring the Default vNIC Behavior Policy
Default vNIC Behavior Policy
The default vNIC behavior policy allows you to configure how vNICs are created for a service profile. You can choose to create vNICs manually, or you can allow them to be created automatically.
You can configure the default vNIC behavior policy to define how vNICs are created. This can be one of the following:
- None—Cisco UCS Manager does not create default vNICs for a service profile. All vNICs must be explicitly created.
- HW Inherit—If a service profile requires vNICs and none have been explicitly defined, Cisco UCS Manager creates the required vNICs based on the adapter installed in the server associated with the service profile.
Note | If you do not specify a default behavior policy for vNICs, HW Inherit is used by default. |
Configuring a Default vNIC Behavior Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the root node.
You can configure only the default vNIC behavior policy in the root organization. You cannot configure the default vNIC behavior policy in a sub-organization. |
Step 4 | Click Default vNIC Behavior. |
Step 5 | On the General Tab, in the Properties area, click one of the following radio buttons in the Action field:
|
Step 6 | Click Save Changes. |
Configuring LAN Connectivity Policies
LAN and SAN Connectivity Policies
Connectivity policies determine the connections and the network communication resources between the server and the LAN or SAN on the network. These policies use pools to assign MAC addresses, WWNs, and WWPNs to servers and to identify the vNICs and vHBAs that the servers use to communicate with the network.
Note | We do not recommend that you use static IDs in connectivity policies, because these policies are included in service profiles and service profile templates and can be used to configure multiple servers. |
Privileges Required for LAN and SAN Connectivity Policies
Connectivity policies enable users without network or storage privileges to create and modify service profiles and service profile templates with network and storage connections. However, users must have the appropriate network and storage privileges to create connectivity policies.
Privileges Required to Create Connectivity Policies
Connectivity policies require the same privileges as other network and storage configurations. For example, you must have at least one of the following privileges to create connectivity policies:
Privileges Required to Add Connectivity Policies to Service Profiles
After the connectivity policies have been created, a user with ls-compute privileges can include them in a service profile or service profile template. However, a user with only ls-compute privileges cannot create connectivity policies.
Interactions between Service Profiles and Connectivity Policies
You can configure the LAN and SAN connectivity for a service profile through any of the following methods:
- LAN and SAN connectivity policies that are referenced in the service profile
- Local vNICs and vHBAs that are created in the service profile
- Local vNICs and a SAN connectivity policy
- Local vHBAs and a LAN connectivity policy
Cisco UCS maintains mutual exclusivity between connectivity policies and local vNIC and vHBA configuration in the service profile. You cannot have a combination of connectivity policies and locally created vNICs or vHBAs. When you include a LAN connectivity policy in a service profile, all existing vNIC configuration is erased, and when you include a SAN connectivity policy, all existing vHBA configuration in that service profile is erased.
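The mutual exclusivity rule can be pictured with a toy model (a hypothetical class, not the Cisco UCS Manager object model): setting a LAN connectivity policy erases any locally created vNIC configuration.

```python
class ServiceProfile:
    """Toy model of the behavior described above: a service profile
    holds either locally created vNICs or a LAN connectivity policy,
    never both at the same time."""

    def __init__(self):
        self.local_vnics = []
        self.lan_connectivity_policy = None

    def add_local_vnic(self, name):
        self.local_vnics.append(name)

    def set_lan_connectivity_policy(self, policy_name):
        self.lan_connectivity_policy = policy_name
        # Including a LAN connectivity policy erases all existing
        # local vNIC configuration in the service profile.
        self.local_vnics.clear()

sp = ServiceProfile()
sp.add_local_vnic("eth0")
sp.set_lan_connectivity_policy("lan-pol-1")
print(sp.local_vnics)  # [] -- the local vNIC configuration is gone
```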
Creating a LAN Connectivity Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. |
Step 4 | Right-click LAN Connectivity Policies and choose Create LAN Connectivity Policy. |
Step 5 | In the Create LAN Connectivity Policy dialog box, enter a name and optional description. |
Step 6 | Do one of the following: |
Step 7 | To add vNICs, in the vNIC Table area, click Add on the table icon bar and complete the following fields in the Create vNIC dialog box: |
Step 8 | If you want to use iSCSI boot with the server, click the down arrows to expand the Add iSCSI vNICs bar and do the following: |
Step 9 | After you have created all the vNICs or iSCSI vNICs you need for the policy, click OK. |
What to Do Next
Include the policy in a service profile or service profile template.
Creating a vNIC for a LAN Connectivity Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the LAN Connectivity Policies node. |
Step 4 | Choose the policy to which you want to add a vNIC. |
Step 5 | In the Work pane, click the General tab. |
Step 6 | On the icon bar of the vNICs table, click Add. |
Step 7 | In the Create vNIC dialog box, enter the name, select a MAC Address Assignment, and check the Use vNIC Template check box if you want to use an existing vNIC template. You can also create a MAC pool from this area. |
Step 8 | Choose the Fabric ID, select the VLANs that you want to use, enter the MTU, and choose a Pin Group. You can also create a VLAN and a LAN pin group from this area. |
Step 9 | In the Operational Parameters area, choose a Stats Threshold Policy. |
Step 10 | In the Adapter Performance Profile area, choose an Adapter Policy, QoS Policy, and a Network Control Policy. You can also create an Ethernet adapter policy, QoS policy, and network control policy from this area. |
Step 11 | In the Connection Policies area, choose the Dynamic vNIC, usNIC, or VMQ radio button, then choose the corresponding policy. You can also create a dynamic vNIC, usNIC, or VMQ connection policy from this area. |
Step 12 | Click OK. |
Step 13 | Click Save Changes. |
Deleting a vNIC from a LAN Connectivity Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the LAN Connectivity Policies node. |
Step 4 | Select the policy from which you want to delete the vNIC. |
Step 5 | In the Work pane, click the General tab. |
Step 6 | In the vNICs table, do the following: |
Step 7 | If the Cisco UCS Manager GUI displays a confirmation dialog box, click Yes. |
Step 8 | Click Save Changes. |
Creating an iSCSI vNIC for a LAN Connectivity Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the LAN Connectivity Policies node. |
Step 4 | Choose the policy to which you want to add an iSCSI vNIC. |
Step 5 | In the Work pane, click the General tab. |
Step 6 | On the icon bar of the Add iSCSI vNICs table, click Add. |
Step 7 | In the Create iSCSI vNIC dialog box, complete the following fields: |
Step 8 | In the MAC Address Assignment drop-down list in the iSCSI MAC Address area, choose one of the following: |
Step 9 | (Optional) If you want to create a MAC pool that will be available to all service profiles, click Create MAC Pool and complete the fields in the Create MAC Pool wizard. For more information, see Creating a MAC Pool. |
Step 10 | Click OK. |
Step 11 | Click Save Changes. |
Deleting an iSCSI vNIC from a LAN Connectivity Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the LAN Connectivity Policies node. |
Step 4 | Choose the policy from which you want to delete the iSCSI vNIC. |
Step 5 | In the Work pane, click the General tab. |
Step 6 | In the Add iSCSI vNICs table, do the following: |
Step 7 | If the Cisco UCS Manager GUI displays a confirmation dialog box, click Yes. |
Step 8 | Click Save Changes. |
Deleting a LAN Connectivity Policy
If you delete a LAN connectivity policy that is included in a service profile, you will delete all vNICs and iSCSI vNICs from that service profile and disrupt LAN data traffic for the server associated with the service profile.
Configuring Network Control Policies
Network Control Policy
This policy configures the network control settings for the Cisco UCS domain, including the following:
- Whether the Cisco Discovery Protocol (CDP) is enabled or disabled
- How the virtual interface (VIF) behaves if no uplink port is available in end-host mode
- The action that Cisco UCS Manager takes on the remote Ethernet interface, vEthernet interface, or vFibre Channel interface when the associated border port fails
- Whether the server can use different MAC addresses when sending packets to the fabric interconnect
- Whether MAC registration occurs on a per-vNIC basis or for all VLANs
Action on Uplink Fail
By default, the Action on Uplink Fail property in the network control policy is configured with a value of link-down. For adapters such as the Cisco UCS M81KR Virtual Interface Card, this default behavior directs Cisco UCS Manager to bring the vEthernet or vFibre Channel interface down if the associated border port fails. For Cisco UCS systems using a non-VM-FEX capable converged network adapter that supports both Ethernet and FCoE traffic, such as Cisco UCS CNA M72KR-Q and the Cisco UCS CNA M72KR-E, this default behavior directs Cisco UCS Manager to bring the remote Ethernet interface down if the associated border port fails. In this scenario, any vFibre Channel interfaces that are bound to the remote Ethernet interface are brought down as well.
Note | If your implementation includes the types of non-VM-FEX capable converged network adapters mentioned in this section and the adapter is expected to handle both Ethernet and FCoE traffic, we recommend that you configure the Action on Uplink Fail property with a value of warning. Note that this configuration might result in an Ethernet teaming driver not being able to detect a link failure when the border port goes down. |
MAC Registration Mode
MAC addresses are installed only on the native VLAN by default, which maximizes the VLAN port count in most implementations.
Note | If a trunking driver is being run on the host and the interface is in promiscuous mode, we recommend that you set the MAC Registration Mode to All VLANs. |
Configuring Link Layer Discovery Protocol for Fabric Interconnect vEthernet Interfaces
Cisco UCS Manager Release 2.2(4) allows you to enable and disable LLDP on a vEthernet interface. You can also retrieve information about these LAN uplink neighbors. This information is useful while learning the topology of the LAN connected to the UCS system and while diagnosing any network connectivity issues from the Fabric Interconnect (FI). The FI of a UCS system is connected to LAN uplink switches for LAN connectivity and to SAN uplink switches for storage connectivity. When using Cisco UCS with Cisco Application Centric Infrastructure (ACI), the LAN uplinks of the FI are connected to ACI leaf nodes. Enabling LLDP on a vEthernet interface helps the Application Policy Infrastructure Controller (APIC) to identify the servers connected to the FI by using vCenter.
To permit the discovery of devices in a network, support for Link Layer Discovery Protocol (LLDP), a vendor-neutral device discovery protocol that is defined in the IEEE 802.1ab standard, is introduced. LLDP is a one-way protocol that allows network devices to advertise information about themselves to other devices on the network. LLDP transmits information about the capabilities and current status of a device and its interfaces. LLDP devices use the protocol to solicit information only from other LLDP devices.
You can enable or disable LLDP on a vEthernet interface based on the Network Control Policy (NCP) that is applied on the vNIC in the service profile.
Creating a Network Control Policy
MAC address-based port security for Emulex Converged Network Adapters (N20-AE0102) is not supported. When MAC address-based port security is enabled, the fabric interconnect restricts traffic to packets that contain the MAC address that it first learns. This is either the source MAC address used in the FCoE Initialization Protocol packet or the MAC address in an Ethernet packet, whichever is sent first by the adapter. This configuration can result in either FCoE or Ethernet packets being dropped.
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. |
Step 4 | Right-click the Network Control Policies node and select Create Network Control Policy. |
Step 5 | In the Create Network Control Policy dialog box, complete the required fields. |
Step 6 | In the LLDP area, do the following: |
Step 7 | In the MAC Security area, do the following to determine whether the server can use different MAC addresses when sending packets to the fabric interconnect: |
Step 8 | Click OK. |
Deleting a Network Control Policy
Configuring Multicast Policies
Multicast Policy
This policy is used to configure Internet Group Management Protocol (IGMP) snooping and IGMP querier. IGMP Snooping dynamically determines hosts in a VLAN that should be included in particular multicast transmissions. You can create, modify, and delete a multicast policy that can be associated to one or more VLANs. When a multicast policy is modified, all VLANs associated with that multicast policy are re-processed to apply the changes. By default, IGMP snooping is enabled and IGMP querier is disabled. For private VLANs, you can set a multicast policy for primary VLANs but not for their associated isolated VLANs due to a Cisco NX-OS forwarding implementation.
The following limitations apply to multicast policies on the Cisco UCS 6100 series fabric interconnect and the 6200 series fabric interconnect:
- If a Cisco UCS domain includes only 6100 series fabric interconnects, only the default multicast policy is allowed for local VLANs or global VLANs.
- If a Cisco UCS domain includes one 6100 series fabric interconnect and one 6200 series fabric interconnect:
  - Only the default multicast policy is allowed for a local VLAN on a 6100 series fabric interconnect.
  - On a 6200 series fabric interconnect, user-defined multicast policies can also be assigned along with the default multicast policy.
  - Only the default multicast policy is allowed for a global VLAN (as limited by the 6100 series fabric interconnect in the cluster).
- If a Cisco UCS domain includes only 6200 series fabric interconnects, any multicast policy can be assigned.
Creating a Multicast Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the root node. |
Step 4 | Right-click the Multicast Policies node and select Create Multicast Policy. |
Step 5 | In the Create Multicast Policy dialog box, specify the name and IGMP snooping information. |
Step 6 | Click OK. |
Modifying a Multicast Policy
Note | You cannot change the name of the multicast policy once it has been created. |
Deleting a Multicast Policy
Note | If you assigned a non-default (user-defined) multicast policy to a VLAN and then delete that multicast policy, the associated VLAN inherits the multicast policy settings from the default multicast policy until the deleted policy is re-created. |
LACP Policy
Link aggregation combines multiple network connections in parallel to increase throughput and to provide redundancy. The Link Aggregation Control Protocol (LACP) provides additional benefits for these link aggregation groups. Cisco UCS Manager enables you to configure LACP properties using an LACP policy.
You can configure the following for an LACP policy:
- Suspended individual: If you do not configure the ports on an upstream switch for LACP, the fabric interconnects treat all ports as uplink Ethernet ports and forward packets. You can place LACP ports in a suspended state to avoid loops. When you set suspend-individual on a port channel with LACP, any port in that port channel that does not receive PDUs from its peer port goes into a suspended state.
- Timer values: You can configure rate-fast or rate-normal. In the rate-fast configuration, the port expects to receive 1 PDU every 1 second from the peer port, with a timeout of 3 seconds. In the rate-normal configuration, the port expects to receive 1 PDU every 30 seconds, with a timeout of 90 seconds.
The system creates a default LACP policy at system startup. You can modify this policy or create a new one. You can also apply one LACP policy to multiple port channels.
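The two rate settings can be summarized in a small lookup; the sketch below is illustrative only (the dictionary and helper are invented for this example). In both configurations the timeout corresponds to three missed PDUs:

```python
# LACP rate settings as described above: the peer sends one PDU per
# interval, and the port times out after the stated number of seconds.
LACP_RATES = {
    "rate-fast":   {"pdu_interval_s": 1,  "timeout_s": 3},
    "rate-normal": {"pdu_interval_s": 30, "timeout_s": 90},
}

def missed_pdus_before_timeout(rate: str) -> int:
    """Timeout divided by PDU interval: how many consecutive PDUs can
    be missed before the port times out."""
    r = LACP_RATES[rate]
    return r["timeout_s"] // r["pdu_interval_s"]

print(missed_pdus_before_timeout("rate-fast"))    # 3
print(missed_pdus_before_timeout("rate-normal"))  # 3
```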
Creating a LACP Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. |
Step 4 | In the Work pane, click the LACP Policies tab, and click the + sign. |
Step 5 | In the Create LACP Policy dialog box, fill in the required fields. |
Step 6 | Click OK. |
Modifying a LACP Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the node for the organization where the policy is located. If the system does not include multitenancy, expand the root node. |
Step 4 | In the Work pane, click the LACP Policies tab, and click the policy that you want to edit. |
Step 5 | Click the Properties icon on the right. |
Step 6 | In the Properties dialog box, make the required changes and click Apply. |
Step 7 | Click OK. |
Configuring UDLD Link Policies
Understanding UDLD
UniDirectional Link Detection (UDLD) is a Layer 2 protocol that enables devices connected through fiber-optic or twisted-pair Ethernet cables to monitor the physical configuration of the cables and detect when a unidirectional link exists. All connected devices must support UDLD for the protocol to successfully identify and disable unidirectional links. When UDLD detects a unidirectional link, it marks the link as unidirectional. Unidirectional links can cause a variety of problems, including spanning-tree topology loops.
UDLD works with the Layer 1 mechanisms to determine the physical status of a link. At Layer 1, autonegotiation takes care of physical signaling and fault detection. UDLD performs tasks that autonegotiation cannot perform, such as detecting the identities of neighbors and shutting down misconnected interfaces. When you enable both autonegotiation and UDLD, the Layer 1 and Layer 2 detections work together to prevent physical and logical unidirectional connections and the malfunctioning of other protocols.
A unidirectional link occurs whenever traffic sent by a local device is received by its neighbor but traffic from the neighbor is not received by the local device.
Modes of Operation
UDLD supports two modes of operation: normal (the default) and aggressive. In normal mode, UDLD can detect unidirectional links due to misconnected interfaces on fiber-optic connections. In aggressive mode, UDLD can also detect unidirectional links due to one-way traffic on fiber-optic and twisted-pair links and to misconnected interfaces on fiber-optic links.
In normal mode, UDLD detects a unidirectional link when fiber strands in a fiber-optic interface are misconnected and the Layer 1 mechanisms do not detect this misconnection. If the interfaces are connected correctly but the traffic is one way, UDLD does not detect the unidirectional link because the Layer 1 mechanism, which is supposed to detect this condition, does not do so. In this case, the logical link is considered undetermined, and UDLD does not disable the interface. When UDLD is in normal mode, if one of the fiber strands in a pair is disconnected and autonegotiation is active, the link does not stay up because the Layer 1 mechanisms detect a physical problem with the link. In this case, UDLD does not take any action, and the logical link is considered undetermined.
UDLD aggressive mode is disabled by default. Configure UDLD aggressive mode only on point-to-point links between network devices that support UDLD aggressive mode. With UDLD aggressive mode enabled, when a port on a bidirectional link that has a UDLD neighbor relationship established stops receiving UDLD packets, UDLD tries to reestablish the connection with the neighbor and administratively shuts down the affected port. UDLD in aggressive mode can also detect a unidirectional link on a point-to-point link on which no failure between the two devices is allowed. It can also detect a unidirectional link when one of the following problems exists:
Methods to Detect Unidirectional Links
UDLD operates by using two mechanisms:
- Neighbor database maintenance
UDLD learns about other UDLD-capable neighbors by periodically sending a hello packet (also called an advertisement or probe) on every active interface to keep each device informed about its neighbors. When the switch receives a hello message, it caches the information until the age time (hold time or time-to-live) expires. If the switch receives a new hello message before an older cache entry ages out, the switch replaces the older entry with the new one.
UDLD clears all existing cache entries for the interfaces affected by the configuration change whenever an interface is disabled and UDLD is running, whenever UDLD is disabled on an interface, or whenever the switch is reset. UDLD sends at least one message to inform the neighbors to flush the part of their caches affected by the status change. The message is intended to keep the caches synchronized.
- Event-driven detection and echoing
UDLD relies on echoing as its detection mechanism. Whenever a UDLD device learns about a new neighbor or receives a resynchronization request from an out-of-sync neighbor, it restarts the detection window on its side of the connection and sends echo messages in reply. Because this behavior is the same on all UDLD neighbors, the sender of the echoes expects to receive an echo in reply.
If the detection window ends and no valid reply message is received, the link might shut down, depending on the UDLD mode. When UDLD is in normal mode, the link might be considered undetermined and might not be shut down. When UDLD is in aggressive mode, the link is considered unidirectional, and the interface is shut down.
If UDLD in normal mode is in the advertisement or in the detection phase and all the neighbor cache entries are aged out, UDLD restarts the link-up sequence to resynchronize with any potentially out-of-sync neighbors.
If you enable aggressive mode when all the neighbors of a port have aged out either in the advertisement or in the detection phase, UDLD restarts the link-up sequence to resynchronize with any potentially out-of-sync neighbor. UDLD shuts down the port if, after the fast train of messages, the link state is still undetermined.
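The two mechanisms above can be modeled in a short sketch: a neighbor cache whose entries age out and are replaced by fresh hellos, and a decision rule for when the detection window closes with no valid echo. This is a hypothetical Python illustration of the behavior described in the text, not switch code; all class and function names are invented for the example.

```python
import time

class UdldNeighborCache:
    """Toy model of the UDLD neighbor database described above.

    Entries age out after the advertised hold time; a new hello from a
    known neighbor replaces (refreshes) its older cache entry.
    """

    def __init__(self):
        self._entries = {}  # neighbor_id -> expiry timestamp

    def receive_hello(self, neighbor_id, hold_time, now=None):
        now = time.time() if now is None else now
        # A newer hello replaces the older entry before it ages out.
        self._entries[neighbor_id] = now + hold_time

    def expire(self, now=None):
        now = time.time() if now is None else now
        self._entries = {n: t for n, t in self._entries.items() if t > now}

    def neighbors(self):
        return sorted(self._entries)

def detection_result(valid_echo_received: bool, mode: str) -> str:
    """Outcome when the detection window ends, per the modes described above."""
    if valid_echo_received:
        return "bidirectional"
    # No valid reply: normal mode leaves the link undetermined,
    # aggressive mode declares it unidirectional and shuts the port down.
    return "undetermined" if mode == "normal" else "shutdown"
```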
UDLD Configuration Guidelines
The following guidelines and recommendations apply when you configure UDLD:
- A UDLD-capable interface cannot detect a unidirectional link if it is connected to a UDLD-incapable port of another switch.
- When configuring the mode (normal or aggressive), make sure that the same mode is configured on both sides of the link.
- UDLD should be enabled only on interfaces that are connected to UDLD-capable devices. The following interface types are supported:
Creating a Link Profile
Creating a UDLD Link Policy
Modifying the UDLD System Settings
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | On the LAN tab, expand . |
Step 4 | Expand the Link Protocol Policy node and click UDLD System Settings. |
Step 5 | In the Work pane, click the General tab. |
Step 6 | In the Properties area, modify the fields as needed. |
Step 7 | Click Save Changes. |
Assigning a Link Profile to a Port Channel Ethernet Interface
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the port channel node and click the Eth Interface where you want to assign a link profile. |
Step 4 | In the Work pane, click the General tab. |
Step 5 | In the Properties area, choose the link profile that you want to assign. |
Step 6 | Click Save Changes. |
Assigning a Link Profile to an Uplink Ethernet Interface
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand |
Step 3 | Click the Eth Interface where you want to assign a link profile. |
Step 4 | In the Work pane, click the General tab. |
Step 5 | In the Properties area, choose the link profile that you want to assign. |
Step 6 | Click Save Changes. |
Assigning a Link Profile to a Port Channel FCoE Interface
Step 1 | In the Navigation pane, click the SAN tab. |
Step 2 | On the SAN tab, expand |
Step 3 | Expand the FCoE port channel node and click the FCoE Interface where you want to assign a link profile. |
Step 4 | In the Work pane, click the General tab. |
Step 5 | In the Properties area, choose the link profile that you want to assign. |
Step 6 | Click Save Changes. |
Assigning a Link Profile to an Uplink FCoE Interface
Step 1 | In the Navigation pane, click the SAN tab. |
Step 2 | On the SAN tab, expand |
Step 3 | Click the FCoE interface where you want to assign a link profile. |
Step 4 | In the Work pane, click the General tab. |
Step 5 | In the Properties area, choose the link profile that you want to assign. |
Step 6 | Click Save Changes. |
Configuring VMQ Connection Policies
VMQ Connection Policy
Cisco UCS Manager enables you to configure a VMQ connection policy for a vNIC. VMQ provides improved network performance to the entire management operating system. Configuring a VMQ vNIC connection policy involves the following steps:
- Create a VMQ connection policy
- Create a static vNIC in a service profile
- Apply the VMQ connection policy to the vNIC
If you want to configure the VMQ vNIC on a service profile for a server, at least one adapter in the server must support VMQ. Make sure the servers have at least one of the following adapters installed:
The following are the supported Operating Systems for VMQ:
You can apply only one vNIC connection policy to a service profile at any given time. Make sure to select one of the three options (Dynamic, usNIC, or VMQ connection policy) for the vNIC. When a VMQ vNIC is configured on a service profile, make sure you have the following settings:
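The one-policy-at-a-time rule above can be expressed as a small validation sketch. This is a hypothetical Python illustration of the constraint, not Cisco UCS Manager code; the function and dictionary shape are invented for the example.

```python
# Illustrative check of the rule above: a vNIC carries exactly one
# connection policy type (Dynamic, usNIC, or VMQ) at any given time.

ALLOWED_POLICY_TYPES = {"dynamic", "usnic", "vmq"}

def select_connection_policy(vnic_policies: dict) -> str:
    """Given {policy_type: policy_name}, enforce the one-policy rule.

    Returns the single configured policy type, or raises ValueError if
    zero or more than one policy type is set.
    """
    chosen = {ptype for ptype, name in vnic_policies.items() if name}
    if len(chosen) != 1 or not chosen <= ALLOWED_POLICY_TYPES:
        raise ValueError("exactly one of Dynamic, usNIC, or VMQ must be set")
    return chosen.pop()
```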
Creating a VMQ Connection Policy
Step 1 | In the Navigation pane, click the LAN tab. |
Step 2 | On the LAN tab, expand . |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. |
Step 4 | Right-click the VMQ Connection Policies node and select Create VMQ Connection Policy. |
Step 5 | In the Create VMQ Connection Policy dialog box, specify the required information. The interrupt count must be at least the number of logical processors in the server, such as 32 or 64. |
Step 6 | Click OK. |
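The interrupt-count rule from Step 5 can be captured in a tiny validation sketch. This is an assumed helper for illustration only, not part of Cisco UCS Manager.

```python
# Hypothetical check of the rule above: the VMQ connection policy's
# interrupt count must be at least the number of logical processors
# in the server.

def validate_interrupt_count(configured: int, logical_processors: int) -> bool:
    """Return True if the configured interrupt count satisfies the rule."""
    return configured >= logical_processors

print(validate_interrupt_count(32, 32))  # True
print(validate_interrupt_count(16, 64))  # False
```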
Assigning Virtualization Preference to a vNIC
Step 1 | In the Navigation pane, click the Servers tab. |
Step 2 | On the Servers tab, expand . |
Step 3 | Click on the vNIC name to display properties on the work pane. |
Step 4 | In the Connection Policies section, click the VMQ radio button and select the VMQ connection policy from the drop-down list. In the Properties area, the Virtualization Preference for this vNIC changes to VMQ. |
Enabling VMQ and NVGRE Offloading on the same vNIC
Note | Currently, VMQ is not supported along with VXLAN on the same vNIC. |
Task | Description | See
---|---|---
Enable normal NVGRE offloading | Perform this task by setting the corresponding flags in the adapter profile that is associated with the given vNIC. | Configuring an Ethernet Adapter Policy to Enable Stateless Offloads with NVGRE
Enable VMQ | Perform this task by setting the appropriate connection policy when you add a vNIC to the service profile. |
Information About NetQueue
NetQueue improves traffic performance by providing a network adapter with multiple receive queues. These queues allow the data interrupt processing that is associated with individual virtual machines to be grouped.
Note | NetQueue is supported on servers running VMware ESXi operating systems. |
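The idea of grouping per-VM interrupt processing onto separate receive queues can be sketched as a simple hash-based assignment. This is an illustrative Python model of the concept, not VMware or Cisco code; the function name and MAC-hashing scheme are assumptions for the example.

```python
import zlib
from collections import defaultdict

def assign_netqueues(vm_macs, num_queues):
    """Illustrative sketch: map each VM's MAC address to one of the
    adapter's receive queues, so interrupt processing for each VM's
    traffic is grouped on its own queue."""
    queues = defaultdict(list)
    for mac in vm_macs:
        # Deterministic hash of the MAC selects the receive queue.
        queues[zlib.crc32(mac.encode()) % num_queues].append(mac)
    return dict(queues)
```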
Configuring NetQueue
Step 1 | Create a Virtual Machine Queue (VMQ) connection policy. |
Step 2 | Configure NetQueues in a service profile by selecting the VMQ connection policy. Use the following when you are configuring NetQueue: |
Step 3 | Enable the MSIX mode in the adapter policy for NetQueue. |
Step 4 | Associate the service profile with the server. |