By default, Cisco Nexus 5000 Series switches operate in fabric mode. In this mode, the switch provides standard Fibre Channel switching capability and features.
In fabric mode, each switch that joins a SAN is assigned a domain ID. Each SAN (or VSAN) supports a maximum of 239 domain IDs, so the SAN has a limit of 239 switches. In a SAN topology with a large number of edge switches, the SAN may need to grow beyond this limit. NPV alleviates the domain ID limit by sharing the domain ID of the core switch among multiple edge switches.
In NPV mode, the edge switch relays all traffic from server-side ports to the core switch. The core switch provides F port functionality (such as login and port security) and all the Fibre Channel switching capabilities.
The edge switch appears as a Fibre Channel host to the core switch and as a regular Fibre Channel switch to its connected devices.
Figure 34-1 shows an interface-level view of an NPV configuration.
Figure 34-1 NPV Interface Configuration
In NPV mode, the edge switch relays all traffic to the core switch, which provides the Fibre Channel switching capabilities. The edge switch shares the domain ID of the core switch.
To convert a switch into NPV mode, you enable the NPV feature. Enabling this feature automatically triggers a switch reboot. You cannot configure NPV mode on a per-interface basis; NPV mode applies to the entire switch.
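As a minimal sketch, NPV mode is enabled from global configuration mode (prompts and confirmation text vary by software release):

switch# configure terminal
switch(config)# feature npv

After you confirm, the switch reboots in NPV mode; note that enabling NPV typically also erases the current configuration, so save anything you need first. Entering no feature npv returns the switch to fabric mode and likewise triggers a reboot.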
In NPV mode, a subset of fabric mode CLI commands and functionality is supported. For example, commands related to fabric login and name server registration are not required on the edge switch, because these functions are provided in the core switch. To display the fabric login and name server registration databases, you must enter the show flogi database and show fcns database commands on the core switch.
Server interfaces are F ports on the edge switch that connect to the servers. A server interface can support multiple end devices when the N port identifier virtualization (NPIV) feature is enabled. NPIV provides a means to assign multiple FC IDs to a single N port, which allows the server to assign unique FC IDs to different applications.
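As a sketch (the interface number is illustrative), NPIV is enabled globally and the server-facing port is configured as an F port:

switch(config)# feature npiv
switch(config)# interface fc1/3
switch(config-if)# switchport mode F
switch(config-if)# no shutdown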
Server interfaces are automatically distributed among the NP uplinks to the core switch. All of the end devices connected to a server interface are mapped to the same NP uplink.
In Cisco Nexus 5000 Series switches, server interfaces can be physical or virtual Fibre Channel interfaces.
NP Uplinks (External Interfaces)
All interfaces from the edge switch to the core switch are configured as proxy N ports (NP ports).
An NP uplink is a connection from an NP port on the edge switch to an F port on the core switch. When an NP uplink is established, the edge switch sends a fabric login message (FLOGI) to the core switch, and then (if the FLOGI is successful) it registers itself with the name server on the core switch. Subsequent FLOGIs from end devices connected to this NP uplink are converted to fabric discovery messages (FDISCs). For additional information about fabric login, see the “Information About Fabric Login” section.
Note In the switch CLI configuration commands and output displays, NP uplinks are called External Interfaces.
In Cisco Nexus 5000 Series switches, NP uplink interfaces must be native Fibre Channel interfaces.
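A minimal sketch of an NP uplink follows (interface numbers are illustrative). The edge-switch port is set to NP mode, and the corresponding core-switch port is an NPIV-enabled F port:

Edge switch:
switch(config)# interface fc2/1
switch(config-if)# switchport mode NP
switch(config-if)# no shutdown

Core switch:
core-switch(config)# feature npiv
core-switch(config)# interface fc1/10
core-switch(config-if)# switchport mode F
core-switch(config-if)# no shutdown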
When an NP port becomes operational, the switch first logs itself in to the core switch by sending a FLOGI request (using the port WWN of the NP port).
After completing the FLOGI request, the switch registers itself with the fabric name server on the core switch (using the symbolic port name of the NP port and the IP address of the edge switch).
Table 34-1 identifies port and node names in the edge switch used in NPV mode.
Table 34-1 Edge Switch FLOGI Parameters (parameter: derived from)

pWWN: The fWWN of the NP port on the edge switch.
nWWN: The VSAN-based sWWN of the edge switch.
symbolic port name: The edge switch name and the NP port interface string. (Note: If no switch name is available, the output reads "switch". For example, switch: fc1/5.)
IP address: The IP address of the edge switch.
symbolic node name: The edge switch name.
Note The buffer-to-buffer state change number (BB_SCN) of internal FLOGIs on an NP port is always set to zero. The BB_SCN is supported by the F port on the edge switch.
We do not recommend using fWWN-based zoning on the edge switch for the following reasons:
Zoning is not enforced at the edge switch (rather, it is enforced on the core switch).
Multiple devices attached to an edge switch log in through the same F port on the core switch, so they cannot be separated into different zones.
The same device might log in using different fWWNs on the core switch (depending on the NPV link it uses) and may need to be zoned using different fWWNs.
NPV supports automatic selection of NP uplinks. When a server interface is brought up, the NP uplink interface with the minimum load is selected from the available NP uplinks in the same VSAN as the server interface.
When a new NP uplink interface becomes operational, the existing load is not redistributed automatically to include the newly available uplink. Server interfaces that become operational after the NP uplink can select the new NP uplink.
In Release 4.0(1a)N2(1) and later software releases, NPV supports traffic maps. A traffic map allows you to specify the NP uplinks that a server interface can use to connect to the core switches.
Note When an NPV traffic map is configured for a server interface, the server interface must select only from the NP uplinks in its traffic map. If none of the specified NP uplinks are operational, the server interface remains in a non-operational state.
The NPV traffic map feature provides the following benefits:
Facilitates traffic engineering by allowing configuration of a fixed set of NP uplinks for a specific server interface (or range of server interfaces).
Ensures correct operation of the persistent FC ID feature, because a server interface will always connect to the same NP uplink (or one of a specified set of NP uplinks) after an interface reinitialization or switch reboot.
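For example (interface numbers are illustrative), the following traffic map restricts server interface fc1/3 to NP uplinks fc2/1 and fc2/2:

switch(config)# npv traffic-map server-interface fc1/3 external-interface fc2/1
switch(config)# npv traffic-map server-interface fc1/3 external-interface fc2/2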
Disruptive Load Balancing
In Release 4.0(0)N1(2a) and later software releases, NPV supports disruptive load balancing. When disruptive load balancing is enabled, NPV redistributes the server interfaces across all available NP uplinks when a new NP uplink becomes operational. To move a server interface from one NP uplink to another NP uplink, NPV forces reinitialization of the server interface so that the server performs a new login to the core switch.
Only server interfaces that are moved to a different uplink are reinitialized. A system message is generated for each server interface that is moved.
Note Redistributing a server interface causes traffic disruption to the attached end devices.
To avoid disruption of server traffic, you should enable this feature only after adding a new NP uplink, and then disable it again after the server interfaces have been redistributed.
If disruptive load balancing is not enabled, you can manually reinitialize some or all of the server interfaces to distribute server traffic to new NP uplink interfaces.
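A possible sequence after adding a new NP uplink is the following sketch (interface numbers are illustrative):

switch(config)# npv auto-load-balance disruptive
! ... wait for the system messages that report moved server interfaces ...
switch(config)# no npv auto-load-balance disruptive

Without disruptive load balancing, you can redistribute traffic manually by flapping selected server interfaces:

switch(config)# interface fc1/3
switch(config-if)# shutdown
switch(config-if)# no shutdown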
NPV Traffic Management Guidelines
When deploying NPV traffic management, follow these guidelines:
Use NPV traffic management only when automatic traffic engineering does not meet your network requirements.
You do not need to configure traffic maps for all server interfaces. By default, NPV will use automatic traffic management.
Server interfaces configured to use a set of NP uplink interfaces cannot use any other available NP uplink interfaces, even if none of the configured interfaces are available.
When disruptive load balancing is enabled, a server interface may be moved from one NP uplink to another. Moving a server interface between NP uplinks requires the edge switch to log in to the core switch again, causing traffic disruption.
To link a set of servers to a specific core switch, associate the server interfaces with a set of NP uplink interfaces that all connect to that core switch.
Configure Persistent FC IDs on the core switch and use the Traffic Map feature to direct server interface traffic onto NP uplinks that all connect to the associated core switch.
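For example (the VSAN and interface numbers are illustrative), persistent FC IDs are enabled on the core switch, and a traffic map on the edge switch pins the server interface to an uplink that terminates on that core switch:

Core switch:
core-switch(config)# fcdomain fcid persistent vsan 1

Edge switch:
switch(config)# npv traffic-map server-interface fc1/3 external-interface fc2/1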
Guidelines and Limitations
When configuring NPV, note the following guidelines and limitations:
In-order data delivery is not required in NPV mode because the exchange between two end devices always takes the same uplink from the edge switch to the core. Upstream of the edge switch, core switches will enforce in-order delivery if configured.
You can configure zoning for end devices that are connected to edge switches using all available member types on the core switch. For fWWN, sWWN, domain, or port-based zoning, use the fWWN, sWWN, domain, or port of the core switch in the configuration commands.
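As a sketch, a pWWN-based zone for end devices behind an NPV edge switch is configured on the core switch in the usual way (the zone name, VSAN, and WWNs are illustrative):

core-switch(config)# zone name NPV_HOST_ZONE vsan 1
core-switch(config-zone)# member pwwn 10:00:00:00:c9:76:fd:31
core-switch(config-zone)# member pwwn 50:06:01:60:10:60:14:f5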
Port tracking is not supported in NPV mode.
Port security is supported on the core switch for devices logged in through the NPV switch. Port security is enabled on the core switch on a per-interface basis. To enable port security on the core switch for devices that log in through an NPV switch, you must adhere to the following requirements:
– The internal FLOGI must be in the port security database so that the port on the core switch will allow communications and links.
– All the end device pWWNs must also be in the port security database.
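A sketch of the corresponding core-switch configuration follows (exact syntax may vary by release; the WWNs and interface number are illustrative):

core-switch(config)# feature port-security
core-switch(config)# port-security database vsan 1
! internal FLOGI of the edge switch (pWWN of the NP port)
core-switch(config-port-security)# pwwn 20:01:00:0d:ec:aa:bb:01 interface fc1/10
! end device that logs in through the NPV switch
core-switch(config-port-security)# pwwn 10:00:00:00:c9:76:fd:31 interface fc1/10
core-switch(config-port-security)# exit
core-switch(config)# port-security activate vsan 1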
Edge switches can connect to multiple core switches. In other words, different NP ports can be connected to different core switches.
NPV uses a load-balancing algorithm to automatically assign end devices in a VSAN to one of the NP uplinks (in the same VSAN) upon initial login. If there are multiple NP uplinks in the same VSAN, you cannot assign an end device to a specific NP uplink.
If a server interface goes down and then returns to service, the interface is not guaranteed to be assigned to the same NP uplink.
The server interface is only operational when its assigned NP uplink is operational.
Both servers and targets can be connected to the switch when in NPV mode.
Fibre Channel switching is not performed in the edge switch; all traffic is switched in the core switch.
NPV supports NPIV-capable module servers. This capability is called nested NPIV.
Only F, NP, and SD ports are supported in NPV mode.
Verifying NPV
Note For each server interface, the External Interface value displays the assigned NP uplink.
To display the status of the server interfaces and the NP uplink interfaces, enter the show npv status command:
switch# show npv status
npiv is enabled

External Interfaces:
====================
Interface: fc2/1, VSAN: 1, FCID: 0x1c0000, State: Up
Interface: fc2/2, VSAN: 1, FCID: 0x040000, State: Up
Interface: fc2/3, VSAN: 1, FCID: 0x260000, State: Up
Interface: fc2/4, VSAN: 1, FCID: 0x1a0000, State: Up
Number of External Interfaces: 4

Server Interfaces:
==================
Interface: vfc3/1, VSAN: 1, NPIV: No, State: Up
Number of Server Interfaces: 1
Note To view fcns database entries for NPV edge switches, you must enter the show fcns database command on the core switch.
To view all the NPV edge switches, enter the show fcns database command on the core switch:
core-switch# show fcns database
For additional details (such as IP addresses, switch names, interface names) about the NPV edge switches that you see in the show fcns database output, enter the show fcns database detail command on the core switch:
core-switch# show fcns database detail
Verifying NPV Traffic Management
To display the NPV traffic map, enter the show npv traffic-map command:
switch# show npv traffic-map
NPV Traffic Map Information:
To display the NPV internal traffic details, enter the show npv internal info traffic-map command.
To display the disruptive load-balancing status, enter the show npv status command:
switch# show npv status
npiv is enabled
disruptive load balancing is enabled

External Interfaces:
====================
Interface: fc2/1, VSAN: 2, FCID: 0x1c0000, State: Up