Guidelines for Installing the 20-Port 100Gbps Line Card

Before installing the 20-port 100Gbps Line Card, use the guidelines in the following sections.


Note

The 20-port 100Gbps Line Card is supported with Cisco IOS XR release 6.2.2 and later.
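
To confirm the installed software release, you can check the first lines of the show version output (trimmed, illustrative output shown):

RP/0/RP0/CPU0:router# show version
Cisco IOS XR Software, Version 6.2.2
...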

Verify Power Requirements

The NCS 6008 LCC system with the 20-Port 100Gbps Line Card requires additional power. Use the following information to determine the proper number of power modules to install.


Note

The NC6-FANTRAY-2 supports a “power save mode” when there are no 20-Port 100Gbps Line Cards installed. In a system with no 20-Port 100Gbps Line Cards, the output of the show environment power command displays “Power Allocated Watts” as 1000W for the NC6-FANTRAY-2. When a 20-Port 100Gbps Line Card is installed and detected by the system, the power save mode is canceled and the “Power Allocated Watts” changes to 2000W.
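
You can confirm the current allocation from the Sysadmin VM with the show environment power command. The excerpt below is illustrative only; it is trimmed to the fan tray row (location, card type, and Power Allocated Watts), and the exact column layout and values depend on your hardware and software release:

sysadmin-vm:0_RP0# show environment power
...
0/FT0     NC6-FANTRAY-2     2000
...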
Table 1. Allocated Power

System with 10-Port       Allocated    System with 20-Port       Allocated    Difference    Difference (maximum
100Gbps Line Card         Power (W)    100Gbps Line Card         Power (W)    (per unit)    units per system)(1)
------------------------------------------------------------------------------------------------------------------
NCS6-RP                   250          NCS6-RP                   250          --            --
NC6-FANTRAY               1000         NC6-FANTRAY-2             2000         1000          2000
NC6-FC                    150          NC6-FC2-U                 455          305           1830
Impedance card            250          Impedance card            250          --            --
(no line card)                         (no line card)
NC6-10X100G-M-K           1350         NC6-20X100GE-M-C          2030         680           5440
NC6-10X100G-L-K           1350         NC6-20X100GE-L-C          1930         580           4640
NC6-60X10GE-M-S           1320         N/A                       --           --            --

(1) Maximum units per system: 2 x RP, 2 x Fantray-2, 6 x FC, 8 x line card

The total power allocated to support the 20-Port 100Gbps Multi-Service Line Card (NC6-20X100GE-M-C) in a fully loaded chassis is 23470 W. A fully loaded redundant system requires 46940 W (16 x 3000 W AC power modules or 23 x 2100 W DC power modules).

The total power allocated to support the 20-Port 100Gbps Lean Core Line Card (NC6-20X100GE-L-C) in a fully loaded chassis is 22670 W. A fully loaded redundant system requires 45340 W (16 x 3000 W AC power modules or 22 x 2100 W DC power modules).
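
As a check, the 23470 W figure for the Multi-Service Line Card can be reproduced from the per-unit allocations and maximum unit counts in Table 1:

2 x NCS6-RP             2 x  250 =   500 W
2 x NC6-FANTRAY-2       2 x 2000 =  4000 W
6 x NC6-FC2-U           6 x  455 =  2730 W
8 x NC6-20X100GE-M-C    8 x 2030 = 16240 W
                        Total    = 23470 W

Doubling this for a fully redundant system gives 46940 W, which requires 16 AC power modules (46940 / 3000 ≈ 15.6, rounded up) or 23 DC power modules (46940 / 2100 ≈ 22.4, rounded up).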

Remove Line Card Slice Configurations

The 10-Port 100Gbps Line Card with CPAK optics modules and the 20-port 100Gbps Line Card with QSFP28 and CPAK optics modules support a multi-slice architecture.

The 10-Port 100Gbps Line Card supports 5 slices (0 – 4). Each slice controls two 100 GE ports that can be configured to operate at 100 GE, 10X10 GE, or OTU4 (OTN).

The 20-port 100Gbps Line Card supports 5 slices (0 – 4). Each slice controls four 100 GE ports. However, only slice 0 (ports 0 - 3) and slice 1 (ports 4 - 7) support 10X10 GE breakout or OTU4 (OTN).

Because the port-to-slice mapping differs between the two cards, a 10GE breakout or OTN configuration made on a 10-Port 100Gbps Line Card would be applied to a different set of ports on the 20-port 100Gbps Line Card after migration. Remove any such slice configurations before installing the new card.

Use the show running-config | include hw-module command to display any line card slice configurations.

This example shows that no line card slice configurations are present; no changes are needed:


RP/0/RP0/CPU0:router# show running-config | include hw-module
Fri May  5 14:31:41.277 PDT
Building configuration...
RP/0/RP0/CPU0:router#

This example shows line card slice configurations are present:


RP/0/RP0/CPU0:router# show running-config | include hw-module
Wed May  3 15:23:42.163 PDT
Building configuration...
hw-module location 0/7/CPU0 slice 0 breakout 10G
hw-module location 0/7/CPU0 slice 1 framer-mode OTU4

Use the no hw-module location rack/slot/CPU slice slice_number {breakout 10G | framer-mode OTU4} command to remove the slice configuration.
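
For example, to remove the slice configurations shown in the previous output (the location and slice values are taken from that example and will differ in your configuration):

RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# no hw-module location 0/7/CPU0 slice 0 breakout 10G
RP/0/RP0/CPU0:router(config)# no hw-module location 0/7/CPU0 slice 1 framer-mode OTU4
RP/0/RP0/CPU0:router(config)# commit
RP/0/RP0/CPU0:router(config)# end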


Note

Failure to remove slice configurations before installing the 20-port 100Gbps Line Card will result in an inconsistency alarm. Use the clear configuration inconsistency command to clear the inconsistency alarm and remove the failed configuration. Refer to the clear configuration inconsistency command in the System Management Command Reference for Cisco NCS 6000 Series Routers.
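
For example, from EXEC mode:

RP/0/RP0/CPU0:router# clear configuration inconsistency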

Install Universal Fabric Cards


Note

This procedure must be completed for each fabric plane.

The 20-port 100Gbps Line Card requires the NC6-FC2-U Universal Fabric Card (UFC).

To replace a legacy fabric card with a Universal Fabric Card, perform the following steps:

Before you begin

All card slots must be covered to ensure proper airflow and cooling within the chassis. Install impedance carriers (NC6-LC-BLANK2) in all unused slots to maintain airflow and system EMC and safety compliance. See the Installing an Impedance Carrier section.


Note

Mixed fabric card operation is not supported beyond the short migration window.

Procedure


Step 1

From SysAdmin VM configuration mode, shut down the fabric plane.

Example:


sysadmin-vm:0_RP0# config
sysadmin-vm:0_RP0(config)# controller fabric plane 0 shutdown
sysadmin-vm:0_RP0(config)# commit
sysadmin-vm:0_RP0(config)# exit

Step 2

Use the show controller fabric plane all detail command to verify that the fabric plane Admin State and Plane State are down.

Example:


sysadmin-vm:0_RP0# show controller fabric plane all detail

Plane Admin Plane  Plane  up->dn  up->mcast Total   Down    PPU
Id    State State  Mode   counter   counter Bundles Bundles State
-----------------------------------------------------------------
0     DN    DN     SC           0          0     16       0    NA
1     UP    UP     SC           0          0     16       0    NA 
2     UP    UP     SC           0          0     16       0    NA
3     UP    UP     SC           0          0     16       0    NA 
4     UP    UP     SC           0          0     16       0    NA
5     UP    UP     SC           0          0     16       0    NA

Step 3

From SysAdmin VM mode, power off the fabric card.

Example:


sysadmin-vm:0_RP0# hw-module location 0/FC0 shutdown
Mon Dec  5  23:54:02.366 UTC
Shutdown hardware module ? [no,yes] yes
0/RP0/ADMIN0:Apr 13 16:45:55.724 : shelf_mgr[2973]: %INFRA-SHELF_MGR-6-USER_ACTION : User root(127.0.0.1)
 requested CLI action 'graceful card shutdown' for location 0/FC0
0/RP0/ADMIN0:Apr 13 16:46:05.755 : shelf_mgr[2973]: %INFRA-SHELF_MGR-4-CARD_SHUTDOWN : Shutting down card 0/FC0
result Card graceful shutdown request on 0/FC0 succeeded.
sysadmin-vm:0_RP0:l7# 0/RP0/ADMIN0:Apr 13 16:46:06.908 : shelf_mgr[2973]: %INFRA-SHELF_MGR-6-HW_EVENT : 
 Rcvd HW event HW_EVENT_POWERED_OFF, event_reason_str 'power_zone:0 off' for card 0/FC0

Step 4

Use the show platform location command to verify that the fabric card is powered off.

Example:


sysadmin-vm:0_RP0# show platform location 0/FC0
Mon Dec  5  23:54:02.366 UTC
Location  Card Type              HW State      SW State         Config State
----------------------------------------------------------------------------
0/FC0     NC6-FC                 POWERED_OFF   N/A              NSHUT

Step 5

Remove the legacy fabric card following the steps in the Removing a Fabric Card section.

Step 6

Install the UFC following the steps in the Installing a Fabric Card section.

Note 
Do not connect any cables to the UFC.

Wait for the UFC to power up and become operational.


sysadmin-vm:0_RP0:l7# 0/RP0/ADMIN0:Apr 12 17:53:44.661 : shelf_mgr[3284]: %INFRA-SHELF_MGR-6-HW_EVENT :
 Rcvd HW event HW_EVENT_OK, event_reason_str 'remote card ok' for card 0/FC0
0/RP0/ADMIN0:Apr 12 17:53:44.661 : shelf_mgr[3284]: %INFRA-SHELF_MGR-6-CARD_HW_OPERATIONAL :
 Card: 0/FC0 hardware state going to Operational

Step 7

Use the show platform location command to verify that the fabric card is operational.

Example:


sysadmin-vm:0_RP0# show platform location 0/FC0
Mon Dec  5  23:54:02.366 UTC
Location  Card Type              HW State      SW State         Config State
----------------------------------------------------------------------------
0/FC0     NC6-FC2-U              OPERATIONAL   OPERATIONAL      NSHUT

Step 8

Use the show hw-module fpd command to verify the status of all FPDs.

Verify that no FPD component requires an upgrade (indicated by NEED UPGD in the Status field) and that the Running and Programmed fields display the same version. For any FPD component with a status of NEED UPGD, use the upgrade hw-module location location fpd command. For any FPD component with a status of RLOAD REQ, use the hw-module location location reload command.
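
The following example is trimmed and illustrative only; FPD device names, hardware versions, and the exact column layout vary by card and software release (the FPD name and version fields are shown as placeholders):

sysadmin-vm:0_RP0# show hw-module fpd
...
Location   Card type        HWver  FPD device       ATR  Status    Running  Programd
--------------------------------------------------------------------------------------
0/FC0      NC6-FC2-U        x.x    <fpd-name>            CURRENT   x.xx     x.xx
...
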
Step 9

From SysAdmin VM configuration mode, unshut the fabric plane.

Example:


sysadmin-vm:0_RP0# config
sysadmin-vm:0_RP0(config)# no controller fabric plane 0 shutdown
sysadmin-vm:0_RP0(config)# commit
sysadmin-vm:0_RP0(config)# exit

Step 10

Use the show controller fabric plane all detail command to verify that the Admin State and Plane State are up.

Example:


sysadmin-vm:0_RP0# show controller fabric plane all detail

Plane Admin Plane  Plane  up->dn  up->mcast Total   Down    PPU
Id    State State  Mode   counter   counter Bundles Bundles State
-----------------------------------------------------------------
0     UP    UP     SC           0          0     16       0    NA
1     UP    UP     SC           0          0     16       0    NA 
2     UP    UP     SC           0          0     16       0    NA
3     UP    UP     SC           0          0     16       0    NA 
4     UP    UP     SC           0          0     16       0    NA
5     UP    UP     SC           0          0     16       0    NA

Step 11

Repeat Step 1 through Step 10 for each remaining fabric plane and its corresponding fabric card (0/FC1 through 0/FC5) until all six fabric planes have been migrated.


After all six fabric planes have been migrated, the following log message is displayed:


. . .

%DRIVER-CCC-4-CHASSIS_COMPLETED_MIGRATION : Chassis completed migration. Currently in: 2T fabric mode

. . .

What to do next

After you have migrated all six fabric planes, install the 20-port 100Gbps Line Card following the steps in the Installing a Line Card section.