Fibre Channel interfaces use buffer credits to ensure that all frames are delivered to their destinations.
This section includes the following topics:
Buffer-to-buffer credits (BB_credits) are a flow-control mechanism to ensure that Fibre Channel switches do not run out of buffers, so that switches do not drop frames. BB_credits are negotiated on a per-hop basis.
The receive BB_credit (fcrxbbcredit) value may be configured for each Fibre Channel interface. In most cases, you do not need to modify the default configuration.
The receive BB_credit values depend on the module type and the port mode, as follows:
Note In the Cisco MDS 9100 Series switches, the groups of ports on the left outlined in white are in dedicated rate mode. The other ports are host-optimized. Each group of four host-optimized ports has the same features as the 32-port switching module.
Note Because Generation 1 modules do not support as many buffer-to-buffer credits as advanced 8-Gbps modules support, you cannot configure an ISL on E or TE ports between a Generation 1 module, such as the 16-port 1-, 2-Gbps Fibre Channel Switching Module (DS-X9016), and an advanced 8-Gbps module, such as the 48-port 8-Gbps Advanced Fibre Channel module (DS-X9248-256K9) or the 32-port 8-Gbps Advanced Fibre Channel module (DS-X9232-256K9).
Regardless of the configured receive BB_credit value, additional buffers, called performance buffers, improve switch port performance. Instead of relying on the built-in switch algorithm, you can manually configure the performance buffer value for specific applications (for example, forwarding frames over FCIP interfaces).
Note Performance buffers are not supported on the Cisco MDS 9148 Fabric Switch, Cisco MDS 9124 Fabric Switch, the Cisco Fabric Switch for HP c-Class BladeSystem, and the Cisco Fabric Switch for IBM BladeCenter.
For each physical Fibre Channel interface in any switch in the Cisco MDS 9000 Family, you can specify the amount of performance buffers allocated in addition to the configured receive BB_credit value.
The default performance buffer value is 0. Setting the value to 0 selects the built-in algorithm; if you do not specify a performance buffer value, 0 is used automatically.
In the architecture of 4-Gbps, 8-Gbps, and 16-Gbps modules, receive buffers shared by a set of ports are called buffer groups. The receive buffer groups are organized into global and local buffer pools.
The receive buffers allocated from the global buffer pool to be shared by a port group are called a global receive buffer pool. Global receive buffer pools include the following buffer groups:
Note The 48-port and 24-port 8-Gbps modules have dual global buffer pools. In the 48-port modules, each buffer pool supports 24 ports; in the 24-port modules, each buffer pool supports 12 ports.
Figure 4-1 shows the allocation of BB_credit buffers on 24-port and 48-port 4-Gbps line cards.
Figure 4-1 Receive Buffers for Fibre Channel Ports in a Global Buffer Pool
Figure 4-2 shows the default BB_credit buffer allocation model for 48-port 8-Gbps switching modules. The minimum BB_credits required to bring up a port is two buffers.
Figure 4-2 BB_Credit Buffer Allocation in 48-Port 8-Gbps Switching Modules
Figure 4-3 shows the default BB_credit buffer allocation model for 24-port 8-Gbps switching modules. The minimum BB_credits required to bring up a port is two buffers.
Figure 4-3 BB_Credit Buffer Allocation in 24-Port 8-Gbps Switching Modules
Figure 4-4 shows the default BB_credit buffer allocation model for 4/44-port 8-Gbps host-optimized switching modules. The minimum BB_credits required to bring up a port is two buffers.
Figure 4-4 BB_Credit Buffer Allocation in 4/44-Port 8-Gbps Switching Modules
Figure 4-5 shows the default BB_credit buffer allocation model for 24-port 4-Gbps switching modules. The minimum BB_credits required to bring up a port is two buffers.
Figure 4-5 BB_Credit Buffer Allocation in 24-Port 4-Gbps Switching Modules
Note The default BB_credit buffer allocation is the same for all port speeds.
This section describes how buffer credits are allocated to Cisco MDS 9000 switching modules, and includes the following topics:
When you configure port mode to auto or E on a 4-Gbps module, one of the ports will not come up for the following configuration:
When you configure port mode to auto or E on an 8-Gbps module, one or two of the ports will not come up for the following configuration:
When you configure port mode to auto or E for all ports in the global buffer pool, you need to reconfigure buffer credits on one or more of the ports. The total number of buffer credits configured for all the ports in the global buffer pool should be reduced by 64.
Table 4-1 lists the BB_credit buffer allocation for the 48-port 16-Gbps Fibre Channel Switching Module (DS-X9448-768K9).
Note The Cisco MDS 9700 Series line card is a full-rate card.
The following guidelines apply to BB_credit buffers on the 48-port 16-Gbps Fibre Channel Switching Modules:
Note In MDS 9700 Series switching modules, a total of 49,800 buffers are available across the 12 port groups. Each port group comprises 4 ports, and there are 2 port groups per ASIC. Each port group has a total of 4150 buffers. These buffers can be allocated to any combination of ports by using the extended buffer configuration. See the show port-resource module module_number command for details about the buffers supported by each port group.
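For example, to check how the buffers are distributed on the module in slot 3 (the slot number is illustrative only), run the command mentioned in the note above:

switch# show port-resource module 3

The output lists each port group on the module together with its available dedicated buffers, its total bandwidth, and the rate mode, bandwidth, and BB_credit allocation of each member port.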
Table 4-2 lists the BB_credit buffer allocation for the 48-port 8-Gbps Advanced Fibre Channel switching module.
The following guidelines apply to BB_credit buffers on 32/48-port Advanced 8-Gbps Fibre Channel switching modules:
Each port group on the 32/48-port Advanced 8-Gbps Fibre Channel switching module consists of four ports (32-port module) or six ports (48-port module). The ports in shared rate mode in a port group can have a maximum bandwidth oversubscription of 1.5:1, considering that each port group has 32-Gbps bandwidth. In the case of the 32-port version, each port group of 4 ports has sufficient bandwidth (32 Gbps) to handle line-rate traffic without any oversubscription.
The following example configurations are supported by the 48-port Advanced 8-Gbps Fibre Channel switching modules:
Table 4-3 lists the BB_credit buffer allocation for the 48-port 8-Gbps Fibre Channel switching module.
The following guidelines apply to BB_credit buffers on 48-port 8-Gbps Fibre Channel switching modules:
Each port group on the 48-port 8-Gbps Fibre Channel switching module consists of six ports. The ports in shared rate mode in a port group can have a maximum bandwidth oversubscription of 10:1 considering that each port group has 12.8-Gbps bandwidth.
The following example configurations are supported by the 48-port 8-Gbps Fibre Channel switching modules:
Table 4-4 lists the BB_credit buffer allocation for the 24-port 8-Gbps Fibre Channel switching module.
1. When connected to Generation 1 modules, reduce the maximum BB_credit allocation to 250.
The following guidelines apply to BB_credit buffers on 24-port 8-Gbps Fibre Channel switching modules:
Each port group on the 24-port 8-Gbps Fibre Channel switching module consists of three ports. The ports in shared rate mode in a port group can have a maximum bandwidth oversubscription of 10:1 considering that each port group has 12.8-Gbps bandwidth.
The following example configurations are supported by the 24-port 8-Gbps Fibre Channel switching modules:
Table 4-5 lists the BB_credit buffer allocation for the 4/44-port 8-Gbps Fibre Channel switching module.
The following guidelines apply to BB_credit buffers on 4/44-port 8-Gbps Fibre Channel switching modules:
Each port group on the 4/44-port 8-Gbps Fibre Channel switching module consists of 12 ports. The ports in shared rate mode in a port group can have a maximum bandwidth oversubscription of 10:1 considering that each port group has 12.8-Gbps bandwidth.
The following example configurations are supported by the 4/44-port 8-Gbps Fibre Channel switching modules:
Table 4-6 lists the BB_credit buffer allocation for 48-port 4-Gbps Fibre Channel switching modules.
The following considerations apply to BB_credit buffers on 48-port 4-Gbps Fibre Channel switching modules:
Each port group on the 48-port 4-Gbps Fibre Channel switching module consists of 12 ports. The ports in shared rate mode have bandwidth oversubscription of 2:1 by default. However, some configurations of the shared ports in a port group can have maximum bandwidth oversubscription of 4:1 (considering that each port group has 12.8-Gbps bandwidth).
The following example configurations are supported by the 48-port 4-Gbps Fibre Channel switching modules:
Figure 4-6 Example Speed and Rate Configuration on a 48-Port 4-Gbps Switching Module
Note For detailed configuration steps of this example, see “Configuration Example for 48-Port 4-Gbps Module Interfaces” section.
Figure 4-7 Example Speed and Rate Configuration on a 48-Port 4-Gbps Switching Module
Table 4-7 lists the BB_credit buffer allocation for 24-port 4-Gbps Fibre Channel switching modules.
The following considerations apply to BB_credit buffers on 24-port 4-Gbps Fibre Channel switching modules:
Each port group on the 24-port 4-Gbps Fibre Channel switching module consists of six ports. The ports in shared rate mode have a bandwidth oversubscription of 2:1 by default. However, some configurations of the shared ports in a port group can have a maximum bandwidth oversubscription of 4:1 (considering that each port group has 12.8-Gbps bandwidth).
The following example configurations are supported by the 24-port 4-Gbps Fibre Channel switching modules:
Note For detailed configuration steps of this example, see the “Configuration Example for 24-Port 4-Gbps Module Interfaces” section.
Figure 4-8 Example Speed and Rate Configuration on a 24-Port 4-Gbps Switching Module
Table 4-8 lists the BB_credit buffer allocation for 18-port 4-Gbps multiservice modules.
The following considerations apply to BB_credit buffers on 18-port 4-Gbps Fibre Channel switching modules:
Table 4-9 lists the BB_credit buffer allocation for 12-port 4-Gbps switching modules.
The following considerations apply to BB_credit buffers on 12-port 4-Gbps switching modules:
Note Extended BB_credits are allocated across all ports on the switch. That is, they are not allocated by port group.
Note By default, the ports in the 12-port 4-Gbps switching modules come up in 4-Gbps dedicated rate mode but can be configured in 1-Gbps or 2-Gbps dedicated rate mode. Shared rate mode is not supported.
Table 4-10 lists the BB_credit buffer allocation for 4-port 10-Gbps switching modules.
Maximum BB_credit buffers on one of the ports require the Enterprise license.
Ports on the 4-port 10-Gbps module cannot operate in FL port mode.
Note The ports in the 4-port 10-Gbps switching module only support 10-Gbps dedicated rate mode. FL port mode and shared rate mode are not supported.
The following considerations apply to BB_credit buffers on 4-port 10-Gbps switching modules:
Note Extended BB_credits are allocated across all ports on the switch. That is, they are not allocated by port group.
This section describes how buffer credits are allocated to Cisco MDS 9000 Fabric switches, and includes the following topics:
Table 4-11 lists the BB_credit buffer allocation for the 96-port 16-Gbps Fabric switch.
Note The Cisco MDS 9396S is a full-rate fabric switch.
The following guidelines apply to BB_credit buffers on the 96-port 16-Gbps Fabric switch:
Note In MDS 9396S Fabric Switches, a total of 99,600 buffers are available across the 24 port groups. Each port group comprises 4 ports, and there are 2 port groups per ASIC. Each port group has a total of 4150 buffers. These buffers can be allocated to any combination of ports by using the extended buffer configuration. See the show port-resource module module_number command for details about the buffers supported by each port group.
Table 4-12 lists the BB_credit buffer allocation for 40/48-port 16-Gbps Cisco MDS 9250i and 9148S Fabric switches.
Table 4-12 40/48-Port 16-Gbps Switching Module BB_Credit Buffer Allocation
Note The Cisco MDS 9148S and Cisco MDS 9250i are full-rate fabric switches.
The following guidelines apply to BB_credit buffers on the 40/48-port 9250i/9148S Fabric switches:
Note The ports that are moved to out-of-service need not be licensed.
Table 4-13 lists the BB_credit buffer allocation for 48-port 8-Gbps Fabric switches.
The following considerations apply to BB_credit buffers on 48-port 8-Gbps Fabric switches:
Note The ports that are moved to out-of-service need not be licensed.
Table 4-14 lists the BB_credit buffer allocation for MDS 9134 Fabric switches.
Table 4-15 lists the BB_credit buffer allocation for MDS 9124 Fabric switches.
Table 4-16 lists the BB_credit buffer allocation for 18-port 4-Gbps Multiservice Modular switches.
To facilitate BB_credits for long-haul links, the extended BB_credits feature allows you to configure receive buffers above the maximum value on all 4-Gbps, 8-Gbps, and advanced 8-Gbps switching modules. When necessary, you can reduce the buffers on one port and assign them to another port, exceeding the default maximum. The minimum extended BB_credits per port is 256 and the maximum is 4095.
Note Extended BB_credits are not supported on the Cisco MDS 9148 Fabric Switch, Cisco MDS 9134 Fabric Switch, Cisco MDS 9124 Fabric Switch, the Cisco Fabric Switch for HP c-Class BladeSystem, and the Cisco Fabric Switch for IBM BladeCenter.
In general, you can configure any port in a port group to dedicated rate mode. To do this, you must first release the buffers from the other ports before configuring larger extended BB_credits for a port.
Note The ENTERPRISE_PKG license is required to use extended BB_credits on 4-Gbps, 8-Gbps, and advanced 8-Gbps switching modules. Also, extended BB_credits are not supported by ports in shared rate mode.
All ports on the 4-Gbps and 8-Gbps switching modules support extended BB_credits. There are no limitations for how many extended BB_credits you can assign to a port (except for the maximum and minimum limits). If necessary, you can take interfaces out of service to make more extended BB_credits available to other ports.
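A minimal sketch of this approach, assuming two ports in the same port group; the interface numbers, the credit value, and the use of the out-of-service interface command are illustrative and should be verified against the command reference for your release:

switch# configure terminal
switch(config)# interface fc2/2
switch(config-if)# shutdown
switch(config-if)# out-of-service
switch(config-if)# exit
switch(config)# interface fc2/1
switch(config-if)# switchport fcrxbbcredit extended 3000

Taking fc2/2 out of service releases its buffers so that fc2/1 can be assigned a larger extended BB_credit value.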
You can use the extended BB_credits flow control mechanism in addition to BB_credits for long-haul links.
The BB_credits feature allows you to configure up to 255 receive buffers on Generation 1 switching modules. To facilitate BB_credits for long-haul links, you can configure up to 3,500 receive BB_credits on a Fibre Channel port on a Generation 1 switching module.
To use this feature on Generation 1 switching modules, you must meet the following requirements:
Figure 4-9 Port Group Support for the Extended BB_Credits Feature
The port groups that support extended credit configurations are as follows:
– Any one port in ports 1 to 4 (identified as Group 1).
– Any one port in ports 5 to 8 (identified as Group 2).
– Any one port in ports 9 to 12 (identified as Group 3).
Note The last two Fibre Channel ports (port 13 and port 14) and the two Gigabit Ethernet ports do not support the extended BB_credits feature.
– If you assign less than 2,400 extended BB_credits to any one port in a port group, the remaining three ports in that port group can retain up to 255 BB_credits based on the port mode.
Note The receive BB_credit value for the remaining three ports depends on the port mode. The default value is 16 for the Fx mode and 255 for E or TE modes. The maximum value is 255 in all modes. This value can be changed as required without exceeding the maximum value of 255 BB_credits.
– If you assign more than 2,400 (up to a maximum of 3,500) extended BB_credits to the port in a port group, you must disable the other three ports.
– Explicitly disable this feature if you need to nondisruptively downgrade to Cisco SAN-OS Release 1.3 or earlier. When you disable this feature, the existing extended BB_credit configuration is completely erased.
Note The extended BB_credit configuration takes precedence over the receive BB_credit and performance buffer configurations.
To use this feature on 4-Gbps or 8-Gbps switching modules, you must meet the following requirements:
Note Extended BB_credits are not supported on the Cisco MDS 9124 Fabric Switch, Cisco MDS 9134 Fabric Switch, the Cisco Fabric Switch for HP c-Class BladeSystem, and the Cisco Fabric Switch for IBM BladeCenter.
Although the Fibre Channel standards require low bit error rates, bit errors do occur. Over time, the corruption of receiver-ready messages, known as R_RDY primitives, can lead to a loss of credits, which can eventually cause a link to stop transmitting in one direction. The Fibre Channel standards provide a feature for two attached ports to detect and correct this situation. This feature is called buffer-to-buffer credit recovery.
Buffer-to-buffer credit recovery functions as follows: the sender and the receiver agree to send checkpoint primitives to each other, starting from the time that the link comes up. The sender sends a checkpoint every time it has sent the specified number of frames, and the receiver sends a checkpoint every time it has sent the specified number of R_RDY primitives. If the receiver detects lost credits, it can retransmit them and restore the credit count on the sender.
The buffer-to-buffer credit recovery feature can be used on any non-arbitrated loop link. This feature is most useful on unreliable links, such as MANs or WANs, but can also help on shorter, high-loss links, such as a link with a faulty fiber connection.
Note The buffer-to-buffer credit recovery feature is not compatible with the distance extension (DE) feature, also known as buffer-to-buffer credit spoofing. If you use intermediate optical equipment, such as DWDM transceivers or Fibre Channel bridges, on ISLs between switches that use DE, then buffer-to-buffer credit recovery on both sides of the ISL needs to be disabled.
The BB_SC_N field (word 1, bits 15-12) specifies the buffer-to-buffer state change (BB_SC) number. The BB_SC_N field indicates that the sender of the port login (PLOGI), fabric login (FLOGI), or ISL (E or TE port) frame is requesting that 2^BB_SC_N frames be sent between two consecutive BB_SC send primitives, and that the same number of R_RDY primitives be sent between two consecutive BB_SC receive primitives.
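For example, with a negotiated BB_SC_N value of 8, a BB_SC send primitive is transmitted after every 2^8 = 256 frames and a BB_SC receive primitive after every 256 R_RDY primitives, so each side can detect whether any frames or R_RDYs were lost in the intervening interval.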
For 4-Gbps and 8-Gbps modules, BB_SCN is enabled by default on ISLs (E or TE ports). This can cause the ISLs to fail if they run over optical equipment that uses distance extension (DE), also known as buffer-to-buffer credit spoofing.
On a 4-Gbps module, one port will not come up when the following configuration is applied to all ports:
On an 8-Gbps module, one or two ports will not come up when the following configuration is applied to the first half of the ports, the second half of the ports, or all ports:
When you configure the port mode to auto or E and the rate mode to dedicated for all ports in the global buffer pool, you need to reconfigure buffer credits on one or more ports to a value other than the default.
Note If you use distance extension (buffer-to-buffer credit spoofing) on ISLs between switches, the BB_SCN parameter on both sides of the ISL needs to be disabled.
You can also configure the receive data field size for Fibre Channel interfaces. With the default data field size of 2112 bytes, the frame length is 2148 bytes.
This section includes the following topics:
To configure BB_credits for a Fibre Channel interface, follow these steps:
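A minimal sketch of the CLI sequence, assuming interface fc1/1 and a receive BB_credit value of 64 (both illustrative):

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport fcrxbbcredit 64

You can then verify the setting with the show int fc1/1 command, as shown in the example that follows.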
This example shows the output of the show int fc1/1 command:
16 receive B2B credit remaining
To configure performance buffers for a Fibre Channel interface, follow these steps:
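A minimal sketch, assuming a module that supports performance buffers; the interface number and the value 145 are illustrative, and the performance-buffers keyword should be verified against the command reference for your release:

switch# configure terminal
switch(config)# interface fc2/1
switch(config-if)# switchport fcrxbbcredit performance-buffers 145

To return the port to the built-in algorithm, set the value back to the default.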
Note Use the show interface bbcredit command to display performance buffer values and other BB_credit information.
To configure extended BB_credits for an MPS-14/2 module interface, for a 4-Gbps switching module interface (not including the Cisco MDS 9124 Fabric Switch), or for an interface in a Cisco MDS 9216i switch, follow these steps:
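A minimal sketch, assuming the ENTERPRISE_PKG license is installed and the port is in dedicated rate mode; the interface number and credit value are illustrative:

switch# configure terminal
switch(config)# interface fc2/1
switch(config-if)# switchport fcrxbbcredit extended 2000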
Buffer-to-buffer credit recovery on ISLs (E or TE ports) is enabled by default.
To use buffer-to-buffer credit recovery on a port, follow these steps:
– Selects the interface and enters interface configuration submode.
– Disables (default) buffer-to-buffer credit recovery on the interface.
To use the BB_SC_N field during PLOGI or FLOGI, follow these steps:
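A minimal sketch of this procedure, assuming that the switchport fcbbscn interface command controls use of the BB_SC_N field (verify the exact keyword against the command reference for your release); the interface number is illustrative:

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport fcbbscn

Use the no form of the command to disable the setting, for example on ISLs that traverse distance-extension equipment.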
To configure the receive data field size, follow these steps:
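A minimal sketch using the switchport fcrxbufsize interface command, assuming the maximum data field size of 2112 bytes; the interface number is illustrative:

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport fcrxbufsize 2112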
To display BB_credit configuration information, perform one of the following tasks:
– Displays the BB_credit configuration for all the interfaces.
– Displays the BB_credit configuration for the specified interfaces.
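For example, the following invocations correspond to the two tasks listed above; the interface number is illustrative:

switch# show interface bbcredit
switch# show interface fc2/1 bbcredit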
For detailed information about the fields in the output from these commands, refer to the Cisco NX-OS Command Reference.
To display the BB_credit information, use the show interface bbcredit command (see Example 4-1 and Example 4-2).
Example 4-1 Displays BB_credit Information
Example 4-2 Displays BB_credit Information for a Specified Fibre Channel Interface