Cisco ASR 9000 Series Aggregation Services Router Interface and Hardware Component Configuration Guide, Release 4.2.x
Configuring the nV Edge System on the Cisco ASR 9000 Series Router

Table of Contents

Configuring the nV Edge System on the Cisco ASR 9000 Series Router

Contents

Prerequisites for Configuration

Overview of Cisco ASR 9000 nV Edge Architecture

Inter Rack Links on Cisco ASR 9000 Series nV Edge System

Failure Detection in Cisco ASR 9000 Series nV Edge System

Scenarios for High Availability

Benefits of Cisco ASR 9000 Series nV Edge System

Restrictions of the Cisco ASR 9000 Series nV Edge System

Implementing a Cisco ASR 9000 Series nV Edge System

Configuring Cisco ASR 9000 nV Edge System

Single Chassis to Cluster Migration

Configuration Examples for nV Edge System

nV Edge System Configuration: Example

IRL (inter-rack-link) Interface Configuration

Cisco nV Edge IRL Link Support on 10 GigE Interfaces

Additional References

Related Documents

Standards

MIBs

RFCs

Technical Assistance

Configuring the nV Edge System on the Cisco ASR 9000 Series Router

This module describes the configuration of the nV Edge system on the Cisco ASR 9000 Series Aggregation Services Routers.

Feature History for Configuring nV Edge System on Cisco ASR 9000 Series Router

Release
Modification

Release 4.2.1

  • Support for nV Edge system was included on the Cisco ASR 9000 Series Router.

Prerequisites for Configuration

You must be in a user group associated with a task group that includes the proper task IDs. The command reference guides include the task IDs required for each command. If you suspect user group assignment is preventing you from using a command, contact your AAA administrator for assistance.

Before configuring the nV Edge system, you must have this hardware and software installed in your chassis:

  • Hardware: Cisco ASR 9000 Series SPA Interface Processor-700 and Cisco ASR 9000 Enhanced Ethernet line cards are supported. Cisco ASR 9000 Enhanced Ethernet line card 10 GigE links are used as IRLs (inter-rack links).
  • Software: Cisco IOS XR Software Release 4.2.1 or later on the Cisco ASR 9000 Series Router.

For more information on hardware requirements, see Cisco ASR 9000 Series Aggregation Services Router Hardware Installation Guide .

Overview of Cisco ASR 9000 nV Edge Architecture

A Cisco ASR 9000 Series nV Edge consists of two or more Cisco ASR 9000 Series Router chassis that are combined to form a single logical switching or routing entity. You can operate two Cisco ASR 9000 Series Router platforms as a single virtual Cisco ASR 9000 Series system. Effectively, you can logically link two physical chassis with a shared control plane, as if the chassis were two route switch processors (RSPs) within a single chassis. See Figure 40. The blue lines on top show the internal EOBC interconnection, and the red lines at the bottom show the data-plane interconnection.

As a result, you can double the bandwidth capacity of single nodes and eliminate the need for complex protocol-based high-availability schemes. Hence, you can achieve failover times of less than 50 milliseconds for even the most demanding services and scalability needs.

Figure 40 Cisco ASR 9000 nV Edge Architecture


Note: In Cisco IOS XR Software Release 4.2.1, the scalability of a cluster is limited to two chassis.


Figure 41 EOBC Links on a Cisco ASR 9000 nV Edge System

As illustrated in Figure 41, the two physical chassis are linked using a Layer 1 1-Gbps connection, with RSPs communicating using a Layer 1 or Layer 2 Ethernet out-of-band channel (EOBC) extension to create a single virtual control plane. Each RSP has two EOBC ports; with redundant RSPs, there are four connections between the chassis.

The Cisco Virtualized Network Architecture combines the nV Edge system with the satellite devices to offer the Satellite nV architecture. For more information on Satellite nV models, see Configuring the Satellite Network Virtualization (nV) System on the Cisco ASR 9000 Series Router chapter.

Inter Rack Links on Cisco ASR 9000 Series nV Edge System

The IRL (inter-rack link) connections carry traffic that enters on one chassis and exits through an interface on the other chassis of the nV Edge system. Each IRL must be a 10 GigE link with a direct Layer 1 connection. The IRLs are used for forwarding packets whose ingress and egress interfaces are on separate racks. There can be a maximum of 16 such links between the chassis. A minimum of two links is required, and they should be on two separate line cards so that forwarding survives the failure of a single line card. See Cisco ASR 9000 nV Edge Architecture.
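Putting these requirements together, a minimal IRL setup might use two 10 GigE ports on separate line cards, each configured with the nv edge interface configuration illustrated later in this chapter. This is a sketch; the interface locations are placeholders, and the same configuration must be applied to the corresponding ports on the other rack:

interface TenGigE0/1/0/0
 nv
  edge
   interface
!
interface TenGigE0/7/0/0
 nv
  edge
   interface
!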


Note: For more information on QoS on IRLs, see Cisco ASR 9000 Series Aggregation Services Router Modular QoS Configuration Guide.


Failure Detection in Cisco ASR 9000 Series nV Edge System

In the Cisco ASR 9000 Series nV Edge system, when the Primary DSC node fails, the RSP in the Backup DSC node becomes Primary. It executes the duties of the master RSP that hosts the active set of control plane processes. In a normal scenario of nV Edge System where the Primary and Backup DSC nodes are hosted on separate racks, the failure detection for the Primary DSC happens through communication between the racks.

These mechanisms are used to detect RSP failures across rack boundaries:

  • FPGA state information detected by the peer RSP in the same chassis is broadcast over the control links. This information is sent if any state change occurs and periodically every 200ms.
  • The UDLD state of the inter rack control or data links are sent to the remote rack, with failures detected at an interval of 500ms.
  • A keep-alive message is sent between RSP cards through the inter rack control links, with a failure detection time of 10 seconds.

A split brain is a condition in which the inter-rack links between the routers in a Cisco ASR 9000 Series nV Edge system fail, causing the nodes on both routers to act as the primary node. To detect and avoid this condition, split-brain detection messages are sent between the racks at 200-ms intervals across the inter-rack data links.

Scenarios for High Availability

These are some sample scenarios for failure detection:

1. Single RSP Failure in the Primary DSC node - The Standby RSP within the same chassis initially detects the failure through the backplane FPGA. In the event of a failure detection, this RSP transitions to the active state and notifies the Backup DSC node about the failure through the inter-chassis control link messaging.

2. Failure of Primary DSC node and the Standby peer RSP - There are multiple cases where this scenario can occur, such as power-cycle of the Primary DSC rack or simultaneous soft reset of both RSP cards within the Primary rack.

a. The remote rack failure is initially detected by UDLD failure on the inter rack control link. The Backup DSC node checks the UDLD state on the inter rack data link. If the rack failure is confirmed by failure of the data link as well, then the Backup DSC node becomes active.

b. UDLD failure detection occurs every 500ms but the time between control link and data link failure can vary since these are independent failures detected by the RSP and line cards. A windowing period of up to 2 seconds is needed to correlate the control and data link failures and to allow split brain detection messages to be received. The keep-alive messaging between RSPs acts as a redundant detection mechanism, if the UDLD detection fails to detect a reset RSP card.

3. Failure of Inter Rack Control links (Split Brain) - This failure is initially detected by the UDLD protocol on the inter rack control links. In this case, the Backup DSC continues to receive UDLD and keep-alive messages through the inter rack data link. As discussed in Scenario 2, a windowing period of two seconds is allowed to correlate the control and data link failures. If the data link has not failed, or split-brain packets are received across the Management LAN, then the Backup DSC rack reloads to avoid the split brain condition.

Benefits of Cisco ASR 9000 Series nV Edge System

The Cisco ASR 9000 Series nV Edge system architecture offers these benefits:

1. The Cisco ASR 9000 Series nV Edge System appears as a single switch or router to the neighboring devices.

2. You can logically link two physical chassis with a shared control plane, as if the chassis were two route switch processors (RSPs) within a single chassis. As a result, you can double the bandwidth capacity of single nodes and eliminate the need for complex protocol-based high-availability schemes.

3. You can achieve failover times of less than 50 milliseconds for even the most demanding services and scalability needs.

4. You can manage the cluster as a single entity rather than two entities. Better resiliency is available due to chassis protecting one another.

5. Cisco nV technology allows you to extend Cisco ASR 9000 Series Router system capabilities beyond the physical chassis with remote virtual line cards. These small form-factor (SFF) Cisco ASR 9000v cards can aggregate hundreds of Gigabit Ethernet connections at the access and aggregation layers.

6. You can scale up to thousands of Gigabit Ethernet interfaces without having to separately provision hundreds or thousands of access platforms. This helps you to simplify the network architecture and reduce the operating expenses (OpEx).

7. The multi-chassis capabilities of Cisco IOS XR Software are employed. These capabilities are extended to allow for enhanced chassis resiliency including data plane, control plane, and management plane protection in case of complete failure of any chassis in the Cisco ASR 9000 Series nV Edge System.

8. You can reduce the number of pseudowires required for achieving pseudowire redundancy.

9. The nV Edge system allows seamless addition of new chassis. There would be no disruption in traffic or control session flap when a chassis is added to the system.

Restrictions of the Cisco ASR 9000 Series nV Edge System

These are some of the restrictions for the Cisco ASR 9000 nV Edge system:

  • The first generation Cisco ASR 9000 Ethernet linecards are not supported.
  • Dissimilar chassis types cannot be connected to form an nV Edge system.
  • SFP-GE-S= is the only Cisco supported SFP that is allowed for all inter rack connections.
  • TenGigE SFPs are not supported on EOBC ports.
  • The nV Edge control plane links have to be direct physical connections and no network or intermediate routing or switching devices are allowed in between.
  • The nV Edge system does not support mixed speed links.
  • The nV Edge system does not support the ISM or CGN blade.

Implementing a Cisco ASR 9000 Series nV Edge System

This section explains the implementation of Cisco ASR 9000 Series nV Edge System.

Configuring Cisco ASR 9000 nV Edge System

To bring up the Cisco ASR 9000 nV cluster, perform the steps outlined in the following subsection.

Single Chassis to Cluster Migration

Consider two individual chassis running a Cisco IOS XR Software Release 4.2.x image; these steps refer to them as rack0 and rack1. If they are already running Cisco IOS XR Software Release 4.2.1 or later, you can skip the first two steps.

1. You must turbo boot each chassis independently with the Cisco IOS XR Software Release 4.2.1.

2. Upgrade the field programmable devices (FPDs). This step is required because the Cisco ASR 9000 Series nV Edge requires at least the RSP ROMMON version corresponding to Cisco IOS XR Software Release 4.2.1.

3. Collect information: You need the chassis serial number of each rack that is to be added to the cluster. On a running system, you can obtain it from the show inventory chassis command. On a system at ROMMON, you can obtain the serial number from bpcookie .
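For example, the serial number can be read as follows (the prompts shown are illustrative and the output is omitted; the chassis serial number appears in the SN field of the chassis entry):

RP/0/RSP0/CPU0:router# show inventory chassis

On a system at ROMMON:

rommon B1> bpcookie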

4. To set up the admin configuration on rack0, enter:

(admin config) # nv edge control serial <rack 0 serial> rack 0
(admin config) # nv edge control serial <rack 1 serial> rack 1
(admin config) # commit

 

5. Reload Rack 0.

6. Boot the RSPs (if you have two) in Rack 1 into the ROMMON mode. Change the ROMMON variables using these commands:

unset CLUSTER_RACK_ID
unset CLUSTER_NO_BOOT
unset BOOT
sync

7. Power down Rack 1.

8. Physically connect the routers. Connect the inter-chassis control links on the front panel of the RSP cards (labeled SFP+ 0 and SFP+ 1) together. Rack0-RSP0 connects to Rack1-RSP0, and similarly for RSP1. You can verify the connections once Rack 1 is up using the show nv edge control switch interface loc 0/RSP0/CPU0 command.

RP/0/RSP0/CPU0:ios# show nv edge control switch interface loc 0/RSP0/CPU0
 
Priority lPort Remote_lPort UDLD STP
======== ===== ============ ==== ========
0 0/RSP0/CPU0/12 1/RSP0/CPU0/12 UP Forwarding
1 0/RSP0/CPU0/13 1/RSP1/CPU0/13 UP Blocking
2 0/RSP1/CPU0/12 1/RSP1/CPU0/12 UP On Partner RSP
3 0/RSP1/CPU0/13 1/RSP0/CPU0/13 UP On Partner RSP
 

Note: No explicit configuration is required for the inter-chassis control links; they are enabled by default.


9. Bring up Rack 1.

10. You must also connect your inter-chassis data links, and configure each link as an inter-chassis data-link interface using the nv edge interface configuration command under the 10 GigE interface (10 GigE only). Ensure that this configuration is applied on both sides of the inter-chassis data link (on rack0 and rack1).

If a Bundle-Ether interface is used as the inter-chassis data link:

You must include lacp system mac h.h.h in the global configuration mode.

You must configure mac-addr h.h.h on the Bundle-Ether interface.


Note: A static MAC address on the bundle is necessary whether the bundle members are on the same chassis or on different chassis.
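Taken together, a bundle-based data-link configuration might look like this sketch. The MAC addresses, bundle number, and member interface location are placeholders, and the command forms follow those given above:

lacp system mac 0201.0203.0405
!
interface Bundle-Ether1
 mac-addr 0201.0203.0406
 nv
  edge
   interface
 !
!
interface TenGigE0/1/0/2
 bundle id 1 mode active
!

The same sketch, with the matching bundle and member configuration, would be applied on the other rack.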



Note: You can verify the inter-chassis data link operation using the show nv edge data forwarding command.


11. After rack0 and rack1 come up fully, with all the RSPs and line cards in the XR-RUN state, the show dsc and show redundancy summary commands should show outputs similar to those in the nV Edge System Configuration: Example section.
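For example, the final verification can be run from EXEC mode as follows (the prompt is illustrative and outputs are omitted here):

RP/0/RSP0/CPU0:cluster_router# show dsc
RP/0/RSP0/CPU0:cluster_router# show redundancy summary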

Configuration Examples for nV Edge System

This section contains the following examples:

nV Edge System Configuration: Example

The following example shows a sample configuration for setting up the connectivity of a Cisco ASR 9000 Series nV Edge System.

IRL (inter-rack-link) Interface Configuration

interface TenGigE0/1/1/1
 nv
  edge
   interface
!

Cisco nV Edge IRL Link Support on 10 GigE Interfaces

In this case, te0/2/0/0 and te1/2/0/0 provide the inter-rack data link:

 
RP/0/RSP0/CPU0:cluster_router# show runn interface te1/2/0/0

interface TenGigE1/2/0/0
 nv
  edge
   data
    interface
!

RP/0/RSP0/CPU0:cluster_router# show runn interface te0/2/0/0

interface TenGigE0/2/0/0
 nv
  edge
   data
    interface
!
 
 

Additional References

The following sections provide references to related documents.

Related Documents

Related Topic
Document Title

Cisco IOS XR master command reference

Cisco IOS XR Master Commands List

Satellite System software upgrade and downgrade on Cisco IOS XR Software

Cisco ASR 9000 Series Aggregation Services Router Getting Started Guide

Cisco IOS XR interface configuration commands

Cisco ASR 9000 Series Aggregation Services Router Interface and Hardware Component Command Reference

Satellite QoS configuration information for Cisco IOS XR Software

Cisco ASR 9000 Series Aggregation Services Router Modular Quality of Service Configuration Guide

Layer-2 and L2VPN features on the Satellite system

Cisco ASR 9000 Series Aggregation Services Router L2VPN and Ethernet Services Configuration Guide

Layer-3 and L3VPN features on the Satellite system

Cisco ASR 9000 Series Aggregation Services Router MPLS Layer 3 VPN Configuration Guide

Multicast features on the Satellite system

Cisco ASR 9000 Series Aggregation Services Router Multicast Configuration Guide

Broadband Network Gateway features on the Satellite system

Cisco ASR 9000 Series Aggregation Services Router Broadband Network Gateway Configuration Guide

Information about user groups and task IDs

Configuring AAA Services on Cisco IOS XR Software module of Cisco IOS XR System Security Configuration Guide

Standards

Standards
Title

No new or modified standards are supported by this feature, and support for existing standards has not been modified by this feature.

MIBs

MIBs
Description

CISCO-RF-MIB

Provides DSC chassis active/standby node pair information. In an nV Edge scenario, it provides DSC primary/backup RP information and switchover notification.

ENTITY-STATE-MIB

Provides redundancy state information for each node.

CISCO-ENTITY-STATE-EXT-MIB

Extension to ENTITY-STATE-MIB that defines notifications (traps) on redundancy status changes.

CISCO-ENTITY-REDUNDANCY-MIB

Defines redundancy group types such as the Node Redundancy group type and the Process Redundancy group type.

To locate and download MIBs for selected platforms using Cisco IOS XR Software, use the Cisco MIB Locator found at the following URL:

http://cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml

RFCs

RFCs
Title

None

N.A

Technical Assistance

Description
Link

The Cisco Technical Support website contains thousands of pages of searchable technical content, including links to products, technologies, solutions, technical tips, and tools. Registered Cisco.com users can log in from this page to access even more content.

http://www.cisco.com/techsupport