N port virtualization (NPV) reduces the number of Fibre Channel domain IDs in SANs. Switches operating in the NPV mode do not join a fabric. They pass traffic between NPV core switch links and end devices, which eliminates the domain IDs for these edge switches.
NPV is supported by the following Cisco MDS 9000 switches and Cisco Nexus 5000 Series switches only:
Note NPV is available on these switches only while in NPV mode; it is not available when the switch is running in Fibre Channel switch mode.
N port identifier virtualization (NPIV) provides a means to assign multiple FC IDs to a single N port. This feature allows multiple applications on the N port to use different identifiers and allows access control, zoning, and port security to be implemented at the application level. Figure 7-1 shows an example application using NPIV.
You must globally enable NPIV for all VSANs on the MDS switch to allow the NPIV-enabled applications to use multiple N port identifiers.
Note All of the N port identifiers are allocated in the same VSAN.
Typically, Fibre Channel networks are deployed using a core-edge model with a large number of fabric switches connected to edge devices. Such a model is cost-effective because the per port cost for director class switches is much higher than that of fabric switches. However, as the number of ports in the fabric increases, the number of switches deployed also increases, and you can end up with a significant increase in the number of domain IDs (the maximum number supported is 239). This challenge becomes even more difficult when additional blade chassis are deployed in Fibre Channel networks.
NPV addresses the increase in the number of domain IDs needed to deploy a large number of ports by making a fabric or blade switch appear as a host to the core Fibre Channel switch, and as a Fibre Channel switch to the servers in the fabric or blade switch. NPV aggregates multiple locally connected N ports into one or more external NP links, which shares the domain ID of the NPV core switch among multiple NPV switches. NPV also allows multiple devices to attach to the same port on the NPV core switch, which reduces the need for more ports on the core.
Figure 7-2 Cisco NPV Fabric Configuration
While NPV is similar to N port identifier virtualization (NPIV), it does not offer exactly the same functionality. NPIV provides a means to assign multiple FC IDs to a single N port, and allows multiple applications on the N port to use different identifiers. NPIV also allows access control, zoning, and port security to be implemented at the application level. NPV makes use of NPIV to get multiple FCIDs allocated from the core switch on the NP port.
Figure 7-3 shows a more granular view of an NPV configuration at the interface level.
Figure 7-3 Cisco NPV Configuration–Interface View
A switch is in NPV mode after a user has enabled NPV and the switch has successfully rebooted. NPV mode applies to an entire switch. All end devices connected to a switch that is in NPV mode must log in as an N port to use this feature (loop-attached devices are not supported). All links from the edge switches (in NPV mode) to the NPV core switches are established as NP ports (not E ports), which are used for typical interswitch links. NPIV is used by the switches in NPV mode to log in to multiple end devices that share a link to the NPV core switch.
Note In-order data delivery is not required in NPV mode because the exchange between two end devices always takes the same uplink from the NPV device to the core. For traffic beyond the NPV device, core switches enforce in-order delivery where it is needed and configured.
After entering NPV mode, only the following commands are available:
An NP port (proxy N port) is a port on a device that is in NPV mode and connected to the NPV core switch using an F port. NP ports behave like N ports except that in addition to providing N port behavior, they also function as proxies for multiple, physical N ports.
An NP link is basically an NPIV uplink to a specific end device. NP links are established when the uplink to the NPV core switch comes up; the links are terminated when the uplink goes down. Once the uplink is established, the NPV switch performs an internal FLOGI to the NPV core switch, and then (if the FLOGI is successful) registers itself with the NPV core switch’s name server. Subsequent FLOGIs from end devices in this NP link are converted to FDISCs. For more details refer to the “Internal FLOGI Parameters” section.
Server links are uniformly distributed across the NP links. All the end devices behind a server link will be mapped to only one NP link.
When an NP port comes up, the NPV device first logs itself in to the NPV core switch and sends a FLOGI request that includes the following parameters:
After completing its FLOGI request, the NPV device registers itself with the fabric name server using the following additional parameters:
Note The BB_SCN of internal FLOGIs on NP ports is always set to zero. The BB_SCN is supported at the F-port of the NPV device.
Figure 7-4 shows the internal FLOGI flows between an NPV core switch and an NPV device.
Figure 7-4 Internal FLOGI Flows
Table 7-1 identifies the internal FLOGI parameters that appear in Figure 7-4.
One of the parameters in Table 7-1 is derived from the switch name and the NP port interface string. Note: If no switch name is available, the output displays "switch"; for example, switch: fc1/5.
Although fWWN-based zoning is supported for NPV devices, it is not recommended because:
Port numbers on NPV-enabled switches will vary depending on the switch model. For details about port numbers for NPV-eligible switches, see the Cisco NX-OS Family Licensing Guide.
NPV devices use only IP as the transport medium. CFS uses multicast forwarding for CFS distribution. NPV devices do not have ISL connectivity or an FC domain. To use CFS over IP, multicast forwarding must be enabled on the Ethernet IP switches along the path that physically connects the NPV switch. You can also manually configure static IP peers for CFS distribution over IP on NPV-enabled switches. For more information, see the Cisco MDS 9000 Family NX-OS System Management Configuration Guide.
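As a hedged illustration (verify the exact syntax for your release), CFS distribution over IPv4 is enabled on the NPV switch as follows; CFS over IP then uses its default multicast address, which the intervening Ethernet switches must be able to forward:

switch# configure terminal
switch(config)# cfs ipv4 distribute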
This section discusses the following aspects of load balancing:
Before Cisco MDS SAN-OS Release 3.3(1a), NPV supported only automatic selection of external links. When a server interface was brought up, the external interface with the minimum load was selected from the available links. There was no way to manually select which external links a server interface used. Also, when a new external interface was brought up, the existing load was not distributed automatically to the newly available external interface; the new interface was used only by the server interfaces that came up after it.
Beginning with Cisco MDS SAN-OS Release 3.3(1a) and NX-OS Release 4.1(1a), NPV supports traffic management, which allows you to select and configure the external interfaces that the server uses to connect to the core switches.
Note When NPV traffic management is configured, the server uses only the configured external interfaces. Any other available external interface is not used.
The NPV traffic management feature provides the following benefits:
Disruptive load balancing works independently of the automatic selection of interfaces and of any configured traffic map of external interfaces. When this feature is enabled, and whenever a new external interface comes up, it forces reinitialization of the server interfaces to achieve load balance. To avoid flapping the server interfaces too often, enable this feature once and then disable it once the desired load balance is achieved.
If disruptive load balance is not enabled, you need to manually flap the server interface to move some of the load to a new external interface.
By grouping devices into different NPV sessions based on VSANs, it is possible to support multiple VSANs on the NPV-enabled switch. The correct uplink must be selected based on the VSAN that the uplink is carrying.
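For example (a hedged sketch with a hypothetical VSAN and interface number), a server interface on the NPV-enabled switch is placed into a VSAN so that its logins are carried on an uplink for that VSAN:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10
switch(config-vsan-db)# vsan 10 interface fc1/2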
This section includes the guidelines and limitations for this feature:
Following are recommended guidelines and requirements when deploying NPV:
When deploying NPV traffic management, follow these guidelines:
When NPV is enabled, the following requirements must be met before you configure DPVM on the NPV core switch:
For details about DPVM configuration, see the Cisco MDS 9000 Family NX-OS Fabric Configuration Guide.
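The following is an illustrative sketch only (NX-OS syntax assumed; the pWWN and VSAN number are hypothetical) of adding a DPVM entry on the NPV core switch:

switch# configure terminal
switch(config)# feature dpvm
switch(config)# dpvm database
switch(config-dpvm-db)# pwwn 21:00:00:e0:8b:0a:5d:e7 vsan 2
switch(config-dpvm-db)# exit
switch(config)# dpvm activate
switch(config)# dpvm commit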
Port security is enabled on the NPV core switch on a per-interface basis. To enable port security on the NPV core switch for devices that log in through NPV, you must adhere to the following requirements:
Once these requirements are met, you can enable port security as you would in any other context. For details about enabling port security, see the Cisco MDS 9000 Family NX-OS Security Configuration Guide.
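As a minimal sketch (NX-OS syntax assumed; the VSAN number is hypothetical), port security on the NPV core switch is enabled and activated in the usual way:

switch# configure terminal
switch(config)# feature port-security
switch(config)# port-security activate vsan 1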
You must globally enable NPIV for all VSANs on the MDS switch to allow the NPIV-enabled applications to use multiple N port identifiers.
Note All of the N port identifiers are allocated in the same VSAN.
To enable or disable NPIV on the switch, follow these steps:
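The following is a minimal sketch of the procedure, assuming NX-OS command syntax (on earlier SAN-OS releases the equivalent command is npiv enable):

switch# configure terminal
switch(config)# feature npiv
switch(config)# no feature npiv

The first form enables NPIV for all VSANs on the switch; the no form returns the switch to the default (NPIV disabled).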
When you enable NPV, the system configuration is erased and the system reboots with the NPV mode enabled.
Note We recommend that you save the current configuration either on bootflash or on a TFTP server before enabling NPV (if the configuration is required for later use). Use the following commands to save either your non-NPV or NPV configuration:
switch# copy running bootflash:filename

The configuration can be reapplied later using the following command:

switch# copy bootflash:filename running-config
To configure NPV using the CLI, perform the following tasks:
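As a hedged sketch of the overall sequence (NX-OS command syntax is assumed and the interface numbers are hypothetical; enabling NPV erases the configuration and reboots the switch, as noted above):

switch# configure terminal
switch(config)# feature npv
(the switch erases its configuration and reboots in NPV mode; continue after the reboot)
switch(config)# interface fc1/1
switch(config-if)# switchport mode NP
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# interface fc1/2
switch(config-if)# switchport mode F
switch(config-if)# no shutdown

On the NPV core switch, NPIV must be enabled and the interface that connects to the NPV device must be configured as an F port.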
The NPV traffic management feature is enabled after configuring NPV. Configuring NPV traffic management involves configuring a list of external interfaces to the servers, and enabling or disabling disruptive load balancing.
A list of external interfaces is linked to the server interfaces when the server interface is down, or if the specified external interface list includes an external interface that is already in use.
To configure the list of external interfaces per server interface, perform the following tasks:
switch(config)# npv traffic-map server-interface svr-if-range external-interface fc ext-fc-if-range
Configures a list of external FC interfaces for each server interface. The server interfaces to be linked are specified in svr-if-range, and the external FC interfaces are specified in ext-fc-if-range.

switch(config)# npv traffic-map server-interface svr-if-range external-interface port-channel ext-pc-if-range
Configures a list of external PortChannel interfaces for each server interface. The server interfaces to be linked are specified in svr-if-range, and the external PortChannel interfaces are specified in ext-pc-if-range.

switch(config)# no npv traffic-map server-interface svr-if-range external-interface ext-if-range
Removes the mapping of the specified external interfaces from the specified server interfaces.

Note: When mapping both non-PortChannel interfaces and PortChannel interfaces to the server interfaces, include them separately in two steps.
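For example, the following hypothetical traffic map (interface numbers are illustrative only) restricts server interface fc1/6 to external uplinks fc1/1 and fc1/2:

switch(config)# npv traffic-map server-interface fc1/6 external-interface fc fc1/1-2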
Disruptive load balancing allows you to review the load on all the external interfaces and balance the load disruptively. This is done by moving servers that are using heavily loaded external interfaces to external interfaces with a lighter load.
To enable or disable the global policy for disruptive load balancing, perform the following tasks:
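A minimal sketch of both forms (verify the command for your release):

switch# configure terminal
switch(config)# npv auto-load-balance disruptive
switch(config)# no npv auto-load-balance disruptive

Enable the feature to force the reinitialization described above, and disable it again once the desired load balance is achieved.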
To display NPV configuration information, perform one of the following tasks:
For detailed information about the fields in the output from these commands, refer to the Cisco MDS NX-OS Command Reference.
To view all the NPV devices in all the VSANs that the aggregator switch belongs to, enter the show fcns database command.
For additional details (such as IP addresses, switch names, interface names) about the NPV devices you see in the show fcns database output, enter the show fcns database detail command.
If you need to contact support, enter the show tech-support npv command and save the output so that support can use it to troubleshoot, if necessary.
To display a list of the NPV devices that are logged in, along with VSANs, source information, pWWNs, and FCIDs, enter the show npv flogi-table command.
To display the status of the different servers and external interfaces, enter the show npv status command.
To display the NPV traffic map, enter the show npv traffic-map command.
To display the NPV internal traffic details, enter the show npv internal info traffic-map command.
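For example, a quick verification pass on an NPV edge switch might run the commands described above in sequence:

switch# show npv status
switch# show npv flogi-table
switch# show npv traffic-map
switch# show npv internal info traffic-map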