The Cisco Unified Computing System™ helps address today's business challenges by streamlining data center resources, scaling service delivery, and radically reducing the number of devices requiring setup, management, power and cooling, and cabling. However, these features pose interesting challenges in designing and deploying a SAN. This document discusses these challenges and shows how the superior architecture and rich feature set of Cisco® MDS 9000 Family SAN switches can be used to build a highly scalable and available SAN for Cisco Unified Computing System deployments.
The first section discusses the unique characteristics of the Cisco Unified Computing System.
The second section discusses the effect of the Cisco Unified Computing System characteristics on the SAN design and presents considerations for a good SAN design.
The next section presents deployment guidelines to address the challenges posed.
The final section provides detailed procedures for installing and deploying a Cisco Unified Computing System SAN.
Characteristics of Cisco Unified Computing System
The Cisco Unified Computing System brings unique capabilities to scale and simplify server deployment, using features that are unique in the industry. Some of the main features that are relevant to a SAN design are discussed here.
High-Density Server Chassis
Generally, blade systems provide higher server density than traditional rack-mount servers. The Cisco Unified Computing System provides exceptional server density by increasing scalability through a design offering up to 320 discrete servers and thousands of virtual machines in a single highly available management domain.
All Servers Are SAN Ready
Cisco's unified fabric technology reduces costs by eliminating the need for multiple sets of adapters, cables, LAN and SAN switches, and high-performance computing networks. Hence, every server is capable of connecting to both the LAN and SAN using converged network adapters (CNAs) and unified fabric interconnects. The default SAN enablement of every server (both physical and virtual) provides SAN connectivity without the need to justify incremental investments in host bus adapters (HBAs).
Optimized for Server Virtualization
With Cisco Extended Memory Technology, the Cisco Unified Computing System provides a larger memory footprint. In addition, the servers offer the high performance of the newest generation of Intel processors. These two features dramatically increase the number of virtual servers hosted per physical server compared to traditional blade servers. In addition, server profiles and the stateless computing architecture of the Cisco Unified Computing System make the system well suited for highly scalable virtualization deployments.
SAN Design Considerations
This section discusses important considerations to address the effects of the Cisco Unified Computing System characteristics on SAN design. A list of decision variables that affect the SAN design is provided at the end of this section.
High Density and Unified Fabric Increase SAN Scalability Requirements
High density in the server rack means more servers attached to the network. With SAN-ready servers, the SAN-attach rate will increase with Cisco Unified Computing System deployments. This increase leads to more server logins and demands greater scalability and performance from the SAN infrastructure.
Virtualized Servers Need a SAN That Can Provide Predictable High Performance
Server virtualization brings agility to servers along with the capability to host multiple virtual servers on physical servers. However, the agility demands predictable behavior and performance from the underlying networks, including the SAN. When servers move from one physical resource to another, SAN connectivity to the storage may change depending on the location and connectivity of the physical servers. Hence, the SAN should perform in a highly predictable way that does not change depending on the location of the virtual machine.
Server SAN Connectivity Is Through Cisco UCS 6100 Series Fabric Interconnects
Network connectivity to Cisco Unified Computing System servers is provided through Cisco UCS 6100 Series Fabric Interconnects, which separate the LAN and SAN traffic and hand it off to the corresponding upstream LAN and SAN core and aggregator switches (Figure 1). For SAN connectivity, the fabric interconnect operates in the N-Port Virtualizer (NPV) mode. In this mode, the servers are assigned to uplinks connecting to the upstream switch either automatically or using pinning.
Figure 1. SAN Connectivity for the Cisco Unified Computing System
Oversubscription and Scalability Limits Affect Performance
When designing any SAN, it is important to understand the performance requirements of the applications and design the SAN to meet those requirements. Several factors influence the SAN design, including the number of servers, the number of targets, the type of application (transactional or large block), the bandwidth requirements of servers and targets, and the type of switching modules and switches. These factors drive the oversubscription, scalability, and performance requirements.
In addition to these parameters, for a Cisco Unified Computing System SAN deployment, other factors need to be considered because of the unique features of the environment. The high-density server complex combined with the virtualization-optimized architecture leads to more servers1 attached to each port on the director, increasing the number of servers serviced by switch ports and modules, as well as potentially to a higher fan-in ratio of servers to storage ports. Hence, the following factors need to be considered as well:
• Number of virtual servers per blade server
• Number of blade servers per Cisco Unified Computing System chassis
• Number of chassis served by each pair of Cisco UCS fabric interconnects
Higher Number of Servers Accessing the Storage Ports Requires Management
Because of the higher server density, each storage port may be servicing a larger number of servers, both physical and virtual. The physical servers are identified by a login on the storage port, but the virtual servers may not be. Storage arrays typically have upper limits on the number of logins they can support, and the additional load from virtual servers can easily go unnoticed. It is important to keep these numbers at a reasonable level by considering the number of ports on the storage arrays and the bandwidth of each connected port.
SAN Design Parameters
The preceding design considerations typically lead to the following decision variables, which are considered typical SAN design parameters.
Number of Uplinks Between Cisco UCS Fabric Interconnects and Cisco MDS 9000 Family SAN Switch
Using hot-pluggable expansion modules, up to eight2 Fibre Channel uplinks can be deployed. This translates to a maximum uplink bandwidth of 32 Gbps (each link provides a maximum of 4 Gbps3). Each blade server can have two 4-Gbps virtual HBAs (vHBAs): one for each of the physical SANs. A Cisco UCS 5100 Series Blade Server Chassis can host up to eight half-width (or four full-width) blades, so the total maximum bandwidth required for the servers in a chassis is 32 Gbps, assuming that all the servers are SAN attached and need the maximum permissible bandwidth. For example, for a fully populated eight-chassis Cisco Unified Computing System domain with all the servers needing SAN connectivity, the maximum SAN bandwidth required for the servers is 256 Gbps (8 chassis * 8 blades per chassis * 4 Gbps). The ratio of required server bandwidth to available uplink bandwidth is the oversubscription ratio. This ratio determines the number of uplinks needed, as well as the number of servers per chassis possible for a given Cisco UCS 6100 Series Fabric Interconnect. The typical oversubscription ratio ranges from 5:1 to 12:1 depending on the application.
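The arithmetic above can be sketched as a small sizing helper for one fabric (SAN A or SAN B). The function name and default values are illustrative assumptions mirroring the eight-chassis example, not fixed product limits:

```python
# Hypothetical sizing helper for one SAN fabric (SAN A or SAN B).
# The numbers mirror the eight-chassis example in the text and are
# assumptions to adjust for a real deployment.

def oversubscription_ratio(chassis, blades_per_chassis, vhba_gbps,
                           uplinks_per_fabric, uplink_gbps):
    """Return (required, available, ratio) for one fabric."""
    required_gbps = chassis * blades_per_chassis * vhba_gbps
    available_gbps = uplinks_per_fabric * uplink_gbps
    return required_gbps, available_gbps, required_gbps / available_gbps

required, available, ratio = oversubscription_ratio(
    chassis=8, blades_per_chassis=8, vhba_gbps=4,
    uplinks_per_fabric=8, uplink_gbps=4)
print(required, available, ratio)  # 256 32 8.0
```

An 8:1 result sits within the typical 5:1 to 12:1 range; fewer SAN-attached blades per chassis or more uplinks lowers the ratio.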
Type and Number of Cisco MDS 9000 Family Switches Required
The number of uplinks and the number of Cisco UCS 6100 Series Fabric Interconnects, along with the number of storage arrays, dictate the type and number of Cisco MDS 9000 Family switches required. The number also depends on the high-availability requirements of the overall deployment. Cisco MDS 9500 Series Multilayer Directors provide superior high availability compared to fixed-configuration fabric switches.
Type and Number of Switching Modules Required on Each Cisco MDS 9000 Family Switch
The type and number of the switching modules depends on the bandwidth requirements of the application and server and storage. The Cisco MDS 9000 Family provides very flexible options in terms of port cost and density to optimize deployment cost. The storage array ports may need higher connectivity speeds because of higher server density in a virtualized system.
Number of Dedicated-Bandwidth Ports Needed on Switching Modules
The bandwidth requirements on the uplinks, and hence on the corresponding ports on the Cisco MDS 9000 Family switch, are higher in a Cisco Unified Computing System deployment for all the reasons explained previously.
Typically, switching modules are optimized for a certain maximum bandwidth depending on the type of module and switch containing them. Depending on applications and connectivity, the capability to provide a certain bandwidth to certain SAN ports is critical. For example, storage ports traditionally need higher, dedicated bandwidth because there are more servers accessing storage ports. Likewise, since there are more servers behind a switch port connected to a Cisco UCS 6100 Series Fabric Interconnect, dedicated bandwidth may be required. The use of dedicated bandwidth will also help optimize available bandwidth across different classes of devices.
Like the Cisco Unified Computing System, the Cisco MDS 9000 Family is designed to provide scalability and performance while simplifying management. This section discusses how Cisco MDS 9000 Family design methodologies and features can be used to address the challenges posed in Cisco Unified Computing System SAN deployments.
Cisco MDS 9000 Family Virtual SANs for Managing the Large Number of Servers
Cisco virtual SANs (VSANs) provide the SAN designer with new tools to highly optimize the scalability, availability, security, and management of SAN deployments. VSANs provide the capability to create completely isolated fabric topologies, each with its own set of fabric services, on top of a scalable common physical infrastructure (Figure 2). VSANs can be used to achieve optimized SAN deployment for the Cisco Unified Computing System.
Figure 2. Optimizing Cisco Unified Computing System SAN Deployments Using VSANs
Increase Scalability of SAN While Maintaining Application Isolation
Because of its higher server density, the Cisco Unified Computing System aids consolidation of different business functions on a smaller physical infrastructure. However, the solution must provide the same level of security and isolation as provided by a solution based on discrete servers or by unconnected networks.
While the foundation of fabric security is embedded in the VSAN-capable hardware, the holistic approach to security provided by the Cisco MDS 9000 Family helps ensure security even in a highly consolidated environment by dividing the physical SAN into logical VSANs (Figure 3).
Figure 3. Maintaining Application Isolation While Increasing the Scalability of the SAN
Enhance Management Security for Servers Using Role-Based Access Control
VSANs provide the capability to segregate the servers into different groups along with the SAN services and hence provide security for management. Using role-based access control (RBAC), which is available on both the Cisco MDS 9000 Family and Cisco UCS Manager, end-to-end logical separation can be achieved.
Deliver Optimized SAN Performance to Servers Using Cisco MDS 9000 Family Quality of Service
The Cisco Unified Computing System helps consolidate many physical servers into a virtualized environment. The challenge is to deliver the right SAN performance to the right group of servers to optimize the overall SAN performance. The Cisco MDS 9000 Family's quality-of-service (QoS) feature can be used to provide the right SAN service level to all groups of servers. The QoS feature also has the flexibility to prioritize a specific flow, as shown in Figure 4, and to rate-limit a specific flow to a user-defined threshold.
Figure 4. SAN Traffic Management for Cisco Unified Computing System Servers
Cisco Unified Computing System Uplink Management
The uplinks for the servers can be configured in two ways depending on the requirements. Although automatic configuration makes deployment easy, manual configuration using pin groups provides more control in assigning certain servers to specific uplinks.
Figure 5. Cisco Unified Computing System Servers Can Be Automatically Assigned to Uplinks
With autoselection, the vHBAs (and hence the Cisco Unified Computing System servers) will be uniformly assigned to the available uplinks depending on the number of logins on each uplink (Figure 5). This approach should work for most deployments and is simple to deploy and manage.
However, if you need more control, you should use manual assignment through pin groups. With this approach, you can assign a group of uplinks for a group of servers, thereby dedicating a certain uplink bandwidth for a group of servers. You create a pin group using one Fibre Channel port on the expansion module. A Cisco Unified Computing System server, through its vHBA, can be configured to use a certain pin group (only one pin group is allowed) to access the Cisco MDS 9000 Family SAN, as shown in Figure 6. If multiple servers are assigned to the same pin group, they are load-balanced among the ports in the pin group just as in the case of automatic assignment.
Figure 6. Uplinks Can Be Assigned Using Pin Groups
If a link fails, the servers on the failed uplink experience traffic disruption. However, the servers will be automatically moved to other available links. In the case of a pin group, the automatic move may be limited by the number of available uplinks in a certain pin group.
If NPV is used on Cisco MDS 9000 Family fabric switches, the possibility of traffic disruption can be eliminated through the use of F-port PortChannels. Cisco UCS 6100 Series Fabric Interconnects and Cisco MDS 9000 Family fabric switches support the same NPV technologies, and the Cisco Unified Computing System will support F-port PortChannels in the future.
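As a sketch of this option when the NPV edge device is a Cisco MDS 9000 Family fabric switch, an F-port PortChannel can be configured on the upstream core switch roughly as follows. The interface and PortChannel numbers are examples only:

```
feature npiv
feature fport-channel-trunk

interface port-channel 1
  switchport mode F
  channel mode active

interface fc1/1 - 2
  channel-group 1 force
  no shutdown
```

A matching PortChannel must also be configured on the NPV edge switch; bundling the uplinks this way lets logins survive the failure of a single member link.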
For high availability, servers are dual-homed onto two parallel and physically separate SANs: SAN A and SAN B. In such a design, it is important to maintain the SAN fabric isolation from the server to the storage to achieve the best high-availability behavior. The same principles should also be followed in Cisco Unified Computing System deployments.
Figure 7 shows a typical scenario. In this deployment, a fabric interconnect is connected to both Cisco MDS 9000 Family switches in the core. For a given server, both vHBAs could be logged into one Cisco MDS 9000 Family switch in the core, depending on how NPV on the fabric interconnect load-balances the device logins. However, this creates a single point of failure if one of the Cisco MDS 9000 Family switches fails.
Figure 7. Connecting Fabric Interconnects to Cisco MDS 9000 Family Switches in Both Fabrics Creates a Single Point of Failure
It is important to assign the vHBAs to the correct Cisco UCS 6100 Series Fabric Interconnects (for example, the SAN A vHBA is connected to only the SAN A Cisco UCS fabric interconnect) and to connect the Cisco Unified Computing System uplinks to the corresponding Cisco MDS 9000 Family switch in the core, as shown in Figure 8.
Figure 8. Recommended SAN Connectivity for Fabric Interconnects
Zoning is a way to restrict the communication between the devices in a SAN. Although it may seem simplest to create fewer zones, perhaps one zone per application or application cluster, this approach may result in suboptimal scalability of SAN resources. You should follow best practices to get the most out of your SAN. For example, consider two Cisco Unified Computing System servers, S1 and S2 (in Figure 9), that need to communicate with the storage in the following way:
S1, S2 > D1, D2, D3
Figure 9. SAN Zoning for Cisco Unified Computing System Servers
A simple option is to create a zone with the following contents:
Z1: S1, S2, D1, D2, D3
Although this zone may work fine, such a model does not optimize the use of SAN resources. For example, although S1 and S2 do not need to communicate with each other, SAN resources will be provisioned to allow them to communicate. The same thing will happen with D1, D2, and D3. Table 1 shows two of the most widely used approaches to zoning.
Table 1. Common SAN Zone Assignments

Two-Member Zones (One Initiator and One Target):
Z1-1: S1, D1
Z1-2: S1, D2
Z1-3: S1, D3
Z1-4: S2, D1
Z1-5: S2, D2
Z1-6: S2, D3

One-Initiator Zones (One Initiator and Multiple Targets):
Z1-1: S1, D1, D2, D3
Z1-2: S2, D1, D2, D3
Two-member zones minimize the use of SAN switch resources, but they increase the number of zones needed, and there are upper limits to the number of zones supported. One-initiator zones reduce the number of zones needed, but they require more SAN switch resources. Hence, the decision is a trade-off between the size of each zone and the number of zones.
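The trade-off can be made concrete with a small sketch for the S1/S2 and D1-D3 example above, comparing zone count against the pairwise permit entries the switch must program (within a zone, every member pair is permitted):

```python
# Zoning trade-off sketch: zone count versus pairwise entries
# programmed in switch hardware (an assumption used for illustration;
# actual resource accounting is platform specific).

def pairs(n):
    """Number of member pairs in a zone of n members."""
    return n * (n - 1) // 2

def two_member_zoning(initiators, targets):
    zones = initiators * targets          # one zone per initiator-target pair
    entries = zones * pairs(2)            # each zone holds exactly one pair
    return zones, entries

def one_initiator_zoning(initiators, targets):
    zones = initiators                    # one zone per initiator
    entries = zones * pairs(1 + targets)  # includes target-to-target pairs
    return zones, entries

print(two_member_zoning(2, 3))     # (6, 6): more zones, fewer entries
print(one_initiator_zoning(2, 3))  # (2, 12): fewer zones, more entries
```

With 2 initiators and 3 targets, two-member zoning needs 6 zones but only 6 entries, while one-initiator zoning needs only 2 zones but programs 12 entries because the targets within each zone are also paired with each other.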
SAN Booting to Allow Server Mobility
Booting over a network (LAN or SAN) is a mature technology and an important step in moving toward stateless computing, which eliminates the static binding between a physical server and the OS and applications it is supposed to run. The OS and applications are decoupled from the physical hardware and reside on the network. The mapping between the physical server and the OS on the network is performed on demand when the server is deployed.
Some of the benefits of booting from a network are:
• Reduced server footprint because fewer components (no disk) and resources are needed
• Simplified disaster and server failure recovery
• Higher availability because of the absence of failure-prone local hard drives
• Rapid redeployment
• Centralized image management
With SAN booting, the image resides on the SAN, and the server communicates with the SAN through an HBA (Figure 10). The HBA's BIOS contains the instructions that enable the server to find the boot disk. A common practice is to have the boot disk exposed to the server as LUN ID 0.
Figure 10. SAN Booting for Cisco Unified Computing System Servers
The Cisco UCS M71KR-E Emulex CNA, Cisco UCS M71KR-Q QLogic CNA, and Cisco UCS M81KR Virtual Interface Card (VIC) are all capable of booting from a SAN.
Management of Virtual Servers in the SAN
Typically, virtual servers do not have an identity in the SAN: they do not log in to the SAN like physical servers do. However, if controlling and monitoring of the virtual servers is required, N-port ID virtualization (NPIV) can be used. This approach requires you to:
• Have a Fibre Channel adapter and SAN switch that support NPIV
• Enable NPIV on the virtual infrastructure, such as by using VMware ESX Raw Device Mapping (RDM)
• Assign virtual port worldwide names (pWWNs) to the virtual servers
• Provision the SAN switches and storage to allow access
By zoning the virtual pWWNs in the SAN to permit access, you can control virtual server SAN access just as with physical servers. In addition, you can monitor virtual servers and provide service levels just as with any physical server.
This section presents all the steps needed to boot a Cisco Unified Computing System server from a Cisco MDS 9000 Family SAN. The steps include the configuration needed on the Cisco MDS 9000 Family switches using Cisco Fabric Manager and on the Cisco UCS 6100 Series Fabric Interconnects using Cisco UCS Manager. This discussion assumes that the basic provisioning of IP addresses, server profiles, and so on is performed on the Cisco Unified Computing System. For the provisioning of bare-metal servers on the Cisco Unified Computing System, please refer to the Cisco Unified Computing System configuration guide (see For More Information at the end of this document). This discussion also assumes that the physical connections from the Cisco UCS fabric interconnect Fibre Channel ports to the corresponding Cisco MDS 9000 Family switch ports have been set up.
1. Enable NPIV on the Cisco MDS 9000 Family switch.
NPIV must be enabled on Cisco MDS 9000 Family switches on a switch-by-switch basis. It must be enabled on all Cisco MDS 9000 Family switches that will connect to the Cisco Unified Computing System. You can enable NPIV in the physical attributes settings in Cisco Fabric Manager.
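For reference, NPIV can also be enabled from the switch CLI; a minimal sketch (the switch prompt is shown only for context):

```
switch# configure terminal
switch(config)# feature npiv
switch(config)# end
switch# show npiv status
```

The `show npiv status` command confirms that the feature is enabled before the Cisco UCS uplinks are brought up.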
Install the Fibre Channel license on Cisco UCS Manager.
Before proceeding with Cisco Unified Computing System provisioning, make sure that the Fibre Channel expansion module has the required licensing to enable the Fibre Channel ports. Use the show license usage command on the Cisco UCS 6100 Series Fabric Interconnect to view the licensing information. (Enter connect nxos from the initial log-in prompt to access the show license usage command.)
Note that if the FC_FEATURES_PKG license is not available (No appears in the Ins column), the Fibre Channel ports on the expansion modules will not be available.
2. Bring up the Cisco UCS fabric interconnect uplinks for SAN connectivity.
The Cisco UCS fabric interconnect uplinks must be brought up before Cisco Fabric Manager can discover the Cisco Unified Computing System.
a. On the Cisco MDS 9000 Family side, make sure the port mode and the speed are both set to "auto." Set the rate mode to "dedicated" to allow maximum bandwidth for the Cisco Unified Computing System uplink. Set the Admin status to "up" to bring up the port.
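The equivalent CLI configuration on the Cisco MDS 9000 Family switch might look like the following sketch; fc1/1 is an example interface connected to a fabric interconnect uplink:

```
interface fc1/1
  switchport mode auto
  switchport speed auto
  switchport rate-mode dedicated
  no shutdown
```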
b. In Cisco UCS Manager, make sure that the corresponding port is enabled and that the VSAN matches the one on the Cisco MDS 9000 Family side.
3. Discover the Cisco Unified Computing System chassis from Cisco Fabric Manager.
Open Cisco Fabric Manager and point it to the seed switch for the SAN. At this point, Cisco Fabric Manager should be able to discover the Cisco Unified Computing System chassis. However, if the Cisco Unified Computing System does not have the same Simple Network Management Protocol Version 3 (SNMPv3) user as the Cisco MDS 9000 Family switches, it will be shown with a red cross mark4. The mark indicates that Cisco Fabric Manager could not fully discover the Cisco Unified Computing System and it will not be able to show the servers logged into the Cisco Unified Computing System.
To fully discover and manage the Cisco Unified Computing System chassis from Cisco Fabric Manager, create the same SNMPv3 users5 on Cisco UCS Manager and the Cisco MDS 9000 Family switches. Cisco Fabric Manager uses the login information to access both the Cisco MDS 9000 Family switches and the Cisco UCS fabric interconnects, so both must have the same set of users.
a. Use Cisco UCS Manager6 to create the SNMPv37 user as follows: On the Admin tab, set Filter to All, select Communication Management, and select Communication Services. In the SNMP Users area, fill in the information for the new SNMP user and then click OK.
b. Use Cisco Fabric Manager to create the SNMPv3 users for the Cisco MDS 9000 Family switches in the network by navigating through the Users tab.
4. Rediscover the Cisco Unified Computing System from Cisco Fabric Manager.
In Cisco Fabric Manager, open the File menu and return to the control panel. Go to the Fabrics tab and remove the current fabric. Go to the Open tab, select the same Cisco MDS 9000 Family switch (not the Cisco Unified Computing System because an NPV switch cannot be the seed switch), and click Discover to rediscover the switch with the newly created username and password. This process should discover both the Cisco MDS 9000 Family and the Cisco UCS 6100 Series Fabric Interconnect.
After discovery, the Cisco UCS 6100 Series Fabric Interconnect appears on the fabric as shown here8:
Note that you can invoke Cisco UCS Manager through Cisco Fabric Manager by right-clicking the Cisco Unified Computing System icon.
5. Configure VSANs on the Cisco Unified Computing System and Cisco MDS 9000 Family switches.
a. Create a VSAN on the Cisco Unified Computing System using Cisco UCS Manager.
b. Create VSANs on the Cisco MDS 9000 Family switches using Cisco Fabric Manager.
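On the Cisco MDS 9000 Family side, the VSAN can also be created and assigned from the CLI; the VSAN number matches the VSAN 1003 used later in this document, and the name and interface are illustrative:

```
vsan database
  vsan 1003 name UCS-SAN-A
  vsan 1003 interface fc1/1
```

The VSAN numbers configured here must match the VSAN configured on the corresponding Cisco UCS uplink ports.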
6. Configure Cisco UCS uplinks for SAN connectivity.
At this point, the uplink may be active.
However, if you need more control, you can create pin groups to dedicate uplinks to the servers. Create a pin group for each SAN (invoked from the SAN tree in Cisco UCS Manager). SAN connectivity depends on where each of the Fibre Channel ports on the expansion modules is connected.
7. Associate the uplinks with servers.
Servers access the uplinks through vHBAs. A vHBA can access all the available uplinks in a certain VSAN or be restricted to a pin group. The uplinks are assigned to the servers through vHBAs, and this association must be created during vHBA creation. Be sure to create the vHBAs and assign them the correct fabric ID and VSAN. If the pin group setting is left blank, the vHBA is associated with all available uplinks in the vHBA's VSAN. Note that the VSAN on the vHBA and the pin group setting must match for the server to come up.
8. Set up SAN booting for the Cisco Unified Computing System servers.
The SAN booting configuration process has three parts: storage array configuration, SAN zoning configuration, and Cisco Unified Computing System service profile configuration.
Storage Array Configuration
Provision a special LUN with the correct size to install the OS image. This LUN must be LUN 0 and will be used by the server to obtain the OS image. In addition, configure LUN masking so that the server has access to the LUN. This configuration is typically performed using the pWWN of the server: the corresponding vHBA's pWWN. The LUN masking procedure is specific to the storage array and is usually performed using the array's device manager or command-line interface (CLI).
SAN Zoning Configuration
The SAN controls the login of the devices and communication between these devices. On Cisco MDS 9000 Family switches, port security allows users to control the device login, and zoning controls the interdevice communication. By default, port security is disabled. If the default setting is used, you do not need to do anything. If port security is enabled, either add the vHBA pWWNs to the port security database or configure port security to allow the Cisco UCS server logins.
Zoning, however, is enabled by default, and hence Cisco Unified Computing System servers and corresponding targets need to be added to the zoning database so that a Cisco UCS 5100 Series blade's vHBA can access the target boot LUN on a SAN fabric. Please refer to step 9 for information about configuring zoning on the Cisco MDS 9000 Family.
Service Profile Configuration on Cisco UCS Manager
Follow these steps to enable SAN booting on Cisco UCS Manager:
a. During service profile configuration, for the first and second interfaces, configure the vHBA and assign a WWN9 to the vHBA from a preset WWN pool. Also assign the VSAN for the port. In the example here, vhba1 and vhba2 are created in VSAN 1003.
An incorrect choice of WWN format will result in login failure on the Cisco MDS 9000 Family switches. The recommended format for the WWN of a vHBA is 20:00:00:25:B5:xx:yy:zz. You can use the last three octets for sequential numbering and other information. For instance:
• xx = 00 indicates that this is a node WWN rather than a pWWN
• xx = 01 indicates that the vHBA is associated with fabric interconnect SAN A
• xx = 02 indicates that the vHBA is associated with fabric interconnect SAN B
• yy:zz = sequential numbers for the vHBA
This format also aids in tracing and troubleshooting server login problems.
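A small generator can sketch this convention; the function name and the fabric encoding (01 = SAN A, 02 = SAN B) follow the scheme described above and are otherwise assumptions:

```python
# Hypothetical generator for vHBA pWWNs in the recommended
# 20:00:00:25:B5:xx:yy:zz format described in the text.

def vhba_pwwn(fabric, index):
    """fabric: 1 for SAN A, 2 for SAN B; index: sequential vHBA number."""
    yy, zz = divmod(index, 256)  # spread the sequence across two octets
    return "20:00:00:25:B5:{:02X}:{:02X}:{:02X}".format(fabric, yy, zz)

print(vhba_pwwn(1, 1))    # 20:00:00:25:B5:01:00:01
print(vhba_pwwn(2, 257))  # 20:00:00:25:B5:02:01:01
```

Encoding the fabric in the xx octet makes it immediately clear from a login record which fabric interconnect a vHBA belongs to.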
b. Create the boot policy by right-clicking Boot Policies under Policies > root.
c. In the Create Boot Policy wizard, enter a name for the boot policy and a description. Then under vHBA, select Add SAN Boot. Add the vHBA vhba1, defined in step a.
d. In the Add SAN Boot window, add the boot target WWN. Be sure to type the name correctly, and double-check the target boot WWN that the SAN administrator provided.
e. Make sure the boot policy configuration looks similar to the following Create Boot Policy wizard screen in Cisco UCS Manager. Note that if high availability is needed, you can add a secondary boot device type for SAN booting.
f. Select the required service profile and assign the boot policy as shown in the following screen.
g. Because you are changing a service profile that needs to be associated with the blade, the blade will need to be rebooted. Cisco UCS Manager will do that in the background. Check the server status for the progress.
9. Configure the zoning for Cisco Unified Computing System servers.
Configure zoning using the Cisco Fabric Manager Zoning wizard, which can be invoked as shown here:
The Zonesets database screen shows the active zone sets, the zones, and all the end devices. Each end device is shown with a pWWN prefixed with a switch name; each Cisco Unified Computing System server is prefixed with a Cisco UCS 6100 Series switch name.
The device list also shows the storage ports that need to be zoned with the Cisco Unified Computing System servers. Create a zone by choosing Insert Zone from the Create Zone menu and add the zone to the active zone set. The active zone set must be reactivated before the changes are enforced.
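For reference, the same zoning can be performed from the CLI on the Cisco MDS 9000 Family switch; the zone name, zone set name, and pWWN values below are illustrative only:

```
zone name Z1-1 vsan 1003
  member pwwn 20:00:00:25:b5:01:00:01
  member pwwn 50:0a:09:81:88:4c:be:76

zoneset name ZS1 vsan 1003
  member Z1-1

zoneset activate name ZS1 vsan 1003
```

The first member is the server vHBA pWWN and the second is the storage target port; the final command activates the zone set so that the changes take effect.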
1Virtual servers log into the fabric only if N-port ID virtualization (NPIV) is enabled on the virtual infrastructure. NPIV is supported in VMware ESX (NPIV using raw device mapping (RDM) mode) and Hyper-V. Note, however, that virtual servers in any form-with or without explicit login-affect SAN scalability and oversubscription.
2This number applies to the Cisco UCS 6120XP 20-Port Fabric Interconnect with Cisco Unified Computing System Release 1.1(1j). The value is 16 uplinks on Cisco UCS 6140XP 40-Port Fabric Interconnect.
3As of Cisco Unified Computing System Release 1.1(1j).
4This mapping of the Cisco Unified Computing System on the Cisco Fabric Manager is supported from the latest release of the Cisco NX-OS Software. Prior versions showed the Cisco Unified Computing System as an NPV switch with devices hanging off the switch.
5Note that the default Cisco UCS Manager "admin" user does not have SNMP privileges; hence, a user name other than "admin" must be used.
6Depending on the version of the Cisco Unified Computing System, you may need to explicitly enable SNMP. Refer to the Cisco Unified Computing System configuration guide and release notes.
7SNMPv3 is supported on Cisco Unified Computing System starting from Release 1.0(2d).
8This feature is not available prior to Cisco NX-OS Software Release 5.0(1a).
9Follow Cisco's recommended virtual WWN format. Refer to the Cisco Unified Computing System configuration guide for more information.