Cisco IOS Mobile Wireless Radio Access Networking Configuration Guide, Release 15.1
MGX-RPM-1FE-CP Back Card with the Cisco IOS IP-RAN Feature Set

MGX-RPM-1FE-CP Back Card with the Cisco IOS IP-RAN Feature Set

Table Of Contents

MGX-RPM-1FE-CP Back Card with the
Cisco IOS IP-RAN Feature Set

Feature Overview

Supported Platforms

Determining Platform Support Through Cisco Feature Navigator

Availability of Cisco IOS Software Images

Configuration Tasks

Verifying the Version of IOS Software

Configuring the FE Interface

Configuring the FE Interface IP Address

Setting the Speed and Duplex Mode

Configuring Routing Protocol Attributes

Configuring PIM

Enabling the FE Interface

Configuring Multilink Interfaces

Configuring the Loopback Interface

Configuring Multilink PPP

Configuring IP Address Assignment

Configuring PPP Multiplexing

Configuring ACFC and PFC Handling During PPP Negotiation

Configuring RTP/UDP Compression

Configuring the RTP/UDP Compression Flow Expiration Timeout Duration

Configuring Routing Protocol Attributes

Configuring PIM

Configuring Virtual Templates

Configuring the IP Address

Configuring Multilink PPP

Enabling Link Quality Monitoring (LQM)

Configuring the Switch Interface and PVCs

Configuring the IP Address

Configuring the PVC

Saving the Configuration

Verifying the Configuration

Monitoring and Maintaining the MGX-RPM-1FE-CP Back Card

Enabling Remote Management of the MGX-RPM-1FE-CP Back Card

Related Documentation


MGX-RPM-1FE-CP Back Card with the
Cisco IOS IP-RAN Feature Set


This document contains the following sections:

Feature Overview

Supported Platforms

Configuration Tasks

Monitoring and Maintaining the MGX-RPM-1FE-CP Back Card

Enabling Remote Management of the MGX-RPM-1FE-CP Back Card

Related Documentation

Feature Overview

The MGX-RPM-1FE-CP (one-port Fast Ethernet co-processor) back card is an MGX8850/RPM-PR back card that offloads the following processes from the Route Processor Module (RPM-PR):

Compression/decompression of Real-time Transport Protocol (RTP)/User Datagram Protocol (UDP) headers (cRTP/cUDP)

Multiplexing/demultiplexing of Point-to-Point Protocol (PPP) frames

The MGX-RPM-1FE-CP back card is designed to be used with an MGX8850 that is equipped with one or more RPM-PRs and that terminates some number of T1 lines. Each MGX-RPM-1FE-CP back card has a termination capacity of up to eight T1s (four per Multilink PPP [MLP] bundle). The MGX-RPM-1FE-CP is supported only with MLP encapsulation.

The MGX-RPM-1FE-CP back card contains one Fast Ethernet (100Base-TX) interface. The interface has an RJ-45 connector that is used to connect the card to a Category 5 unshielded twisted pair (UTP) cable. Both half- and full-duplex operation are supported.

PPP Multiplexing/Demultiplexing

Encapsulated PPP frames contain several bytes of header information, which adds considerable overhead to a network that is used to transport PPP frames.

RFC 3153, PPP Multiplexing, describes a way to overcome this overhead. On the sending end, a multiplexor concatenates multiple PPP frames (subframes) into a single, multiplexed frame (superframe). One header is included in the superframe and the individual PPP subframes are separated by delimiters. On the receiving end, a demultiplexor uses the delimiters to separate the individual PPP subframes.

The MGX-RPM-1FE-CP back card conforms to this specification and acts as both a multiplexor and a demultiplexor.

RTP/UDP Header Compression

RTP is a protocol used for carrying packetized audio and video traffic over an IP network. RTP, described in RFC 1889, is not intended for data traffic, which uses TCP or UDP. Instead, RTP provides end-to-end network transport functions intended for applications with real-time requirements (such as audio, video, or simulation data) over multicast or unicast network services.

In an RTP frame, there are at least 12 bytes of RTP header, combined with 20 bytes of IP header and 8 bytes of UDP header, creating a 40-byte IP/UDP/RTP header. By comparison, the RTP packet payload is approximately 20 to 160 bytes for audio applications that use compressed payloads. Given this ratio, it is very inefficient to transmit the IP/UDP/RTP header without compressing it.

Figure 3 RTP Header Compression

RFCs 2508 and 2509 describe a method for compressing not only the RTP header, but also the associated UDP and IP headers. Using this method, the 40 bytes of header information is compressed into approximately 2 to 4 bytes, as shown in Figure 3. Because the frames are compressed on a link-by-link basis, the delay and loss rate are lower, resulting in improved performance.

The MGX-RPM-1FE-CP back card offloads both the compression and decompression of RTP frames from the RPM-PR.


Note The MGX-RPM-1FE-CP back card can be used to perform UDP/IP compression only, in which case the header is reduced from 28 bytes to 2 to 4 bytes.


MGX-RPM-1FE-CP Back Card in an IP-RAN

The MGX-RPM-1FE-CP back card offloads the compression/decompression of RTP/UDP headers and the multiplexing/demultiplexing of PPP frames.

The supported use of the MGX-RPM-1FE-CP back card is within an IP-RAN of a mobile wireless network. In mobile wireless networks, radio coverage over a geographical space is provided by a network of radios and supporting electronics (Base Transceiver Station [BTS]) distributed over a wide area. Each radio and supporting electronics represents a "cell." In traditional networks, the radio signals or radio data frames collected in each cell are forwarded over a T1 (or similar low-speed, leased) line to a centralized Base Station Controller (BSC) where they are processed.

With the blurring of the lines between voice and data, several alternatives have arisen. One alternative is to replace the T1s with a cell-based AAL2/ATM approach to deliver the frames. This alternative works well because the frame sizes within a wireless network match up nicely with the frame sizes used within an ATM network (10-20 bytes).

Another alternative is to encapsulate the radio frames in UDP frames and transport them over an IP network using header compression and packet multiplexing. This alternative provides better bandwidth efficiency than AAL2 and thus greater backhaul capacity. In this alternative, the MGX 8850 is used as a leased line termination and aggregation device. To enable the delivery of the aggregated backhaul IP traffic to and from a routed IP network, the MGX is equipped with RPM-PR blades (which terminate and originate the frames) and MGX-RPM-1FE-CP back cards (which compress and multiplex the frames).

The nature of UDP or RTP header compression is such that compressed packets must be decompressed prior to routing. Also, to optimize network bandwidth, the frames must be multiplexed/compressed before they are sent across the T1 line (and decompressed/demultiplexed before they are sent across the FE interface).

Frames arriving at an FE interface of the MGX-RPM-1FE-CP back card are transferred to the RPM-PR. After the routing decision has been made, the frames are sent to the multiplexing/compression engine, where the PPP frames are multiplexed and the UDP and RTP headers are compressed. The resulting frames are then sent back to the RPM-PR for transmission over the appropriate T1 interface.

Conversely, frames arriving at a T1 interface of the MGX8850 are transferred to the RPM-PR and then to the decompression/demultiplexing engine. Once the UDP and RTP headers are decompressed and the PPP frames are demultiplexed, the resulting frames are sent back to the RPM-PR so that a routing decision can be made. They are then forwarded to the FE interface.

Multilink PPP (MLP) provides a standardized method for spreading traffic across multiple WAN links, while providing multivendor interoperability, packet fragmentation and proper sequencing, and load-balancing on both inbound and outbound traffic. When used in conjunction with Multilink PPP, the MGX-RPM-1FE-CP back card allows customers to increase channel capacity up to eight T1s.

This solution requires the following components:

MGX8850

RPM-PR

MGX-RPM-1FE-CP back card

Frame Relay Service Module (FRSM) card

BTS router (MWR 1941-DC)

The solution uses Open Shortest Path First (OSPF) as the routing protocol and requires MLP for transmission of the packets between the aggregation node (MGX8850) and the BTS. It requires you to configure the following:

The FE interface to support OSPF. Enable multicast routing and indicate a Protocol Independent Multicast (PIM) mode.

One or more PPP multilink interfaces with PPP mux and RTP header compression attributes.

A virtual template for each of the multilink groups.

A PVC under the switch subinterface that references the virtual template.

In addition, you must configure a connection between the PVC and the FRSM as well as a connection between the FRSM and the BTS router.

Supported Platforms

The Cisco MGX-RPM-1FE-CP Back Card is supported in the Cisco MGX 8850/RPM-PR platform.

Determining Platform Support Through Cisco Feature Navigator

Cisco IOS software is packaged in feature sets that support specific platforms. To get updated information regarding platform support for this feature, access Cisco Feature Navigator. Cisco Feature Navigator dynamically updates the list of supported platforms as new platform support is added for the feature.

Cisco Feature Navigator is a web-based tool that enables you to determine which Cisco IOS software images support a specific set of features and which features are supported in a specific Cisco IOS image. You can search by feature or release. Under the release section, you can compare releases side by side to display both the features unique to each software release and the features in common.

To access Cisco Feature Navigator, you must have an account on Cisco.com. If you have forgotten or lost your account information, send a blank e-mail to cco-locksmith@cisco.com. An automatic check will verify that your e-mail address is registered with Cisco.com. If the check is successful, account details with a new random password will be e-mailed to you. Qualified users can establish an account on Cisco.com by following the directions at http://www.cisco.com/register.

Cisco Feature Navigator is updated regularly when major Cisco IOS software releases and technology releases occur. For the most current information, go to the Cisco Feature Navigator home page at the following URL:

http://www.cisco.com/go/fn

Availability of Cisco IOS Software Images

Platform support for particular Cisco IOS software releases is dependent on the availability of the software images for those platforms. Software images for some platforms may be deferred, delayed, or changed without prior notice. For updated information about platform support and availability of software images for each Cisco IOS software release, refer to the online release notes or, if supported, Cisco Feature Navigator.

Configuration Tasks

To configure the MGX-RPM-1FE-CP back card, you must first access the RPM-PR command line interface (CLI). The RPM-PR CLI can be accessed using any of the following three methods:

Console port on the front of the RPM-PR

The RPM-PR has an RJ-45 connector on the front of the card module. If you configure the RPM-PR on site, connect a console terminal (an ASCII terminal or a PC running terminal emulation software) directly to the console port on your RPM-PR using an RS-232 to RJ-45 cable for CLI access. The console port is the only way to access the RPM-PR CLI when the card module is first installed into an MGX 8850 chassis.

Change card (cc) command from another MGX 8850 card

After initial configuration, you can also configure the RPM-PR through the PXM. You can access the RPM-PR CLI by using the cc (change card) command from any of the other cards in the MGX 8850 switch. The ATM switch interface on the RPM-PR must be enabled before you can use the cc command.

Telnet from a workstation, PC, or another router

After initial configuration, you can also configure the RPM-PR remotely via telnet. After the RPM-PR is installed and has PVCs to other RPM-PRs or routers in the network, you can telnet to the RPM-PR CLI remotely from these other devices.

For more information about accessing the RPM-PR CLI and the basic Cisco IOS command structure, please see the RPM Installation and Configuration Guide.

Configuration of the MGX-RPM-1FE-CP back card requires the following:

Verifying the Version of IOS Software

Configuring the FE Interface

Configuring Multilink Interfaces

Configuring Virtual Templates

Configuring the Switch Interface and PVCs

Saving the Configuration

Verifying the Configuration

Verifying the Version of IOS Software

The MGX-RPM-1FE-CP back card requires Cisco IOS Release 12.2(8)MC1 or a later Cisco IOS Release 12.2 MC release on the corresponding RPM-PR. To verify the version of IOS software, use the show version command.

The show version command displays the configuration of the system hardware, the software version, the names and sources of configuration files, and the boot images.

Configuring the FE Interface

To configure the FE interface of the MGX-RPM-1FE-CP back card, complete the following tasks:

Configuring the FE Interface IP Address

Setting the Speed and Duplex Mode

Configuring Routing Protocol Attributes

Configuring PIM

Enabling the FE Interface

Configuring the FE Interface IP Address

To configure the FE interface, use the following commands, beginning in global configuration mode:

 
Command
Purpose

Step 1 

RPM(config)# interface fastethernet slot/port

Specifies the port adapter type and the location of the interface to be configured.

Note The slot is the slot of the MGX8850 where the RPM-PR resides (upper=1, lower=2). The port is the number of the port on the back card.

Step 2 

RPM(config-if)# ip address ip-address subnet-mask

Assigns an IP address and subnet mask to the interface.

Setting the Speed and Duplex Mode

The MGX-RPM-1FE-CP back card can run in full- or half-duplex mode and at 100 Mbps or 10 Mbps. It also has an autonegotiation feature that allows the card to negotiate the speed and duplex mode with the corresponding interface on the other end of the connection.

Autonegotiation is the default setting for the speed and transmission mode. However, when using the MGX-RPM-1FE-CP back card in a wireless IP-RAN solution, do not use autonegotiation. You must explicitly configure a speed of 100 Mbps and either full- or half-duplex transmission mode.

To configure the speed and duplex operation, use the following commands while in interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# duplex [auto | half | full]

Specifies duplex operation.

Step 2 

RPM(config-if)# speed [auto | 100 | 10]

Specifies speed.

Configuring Routing Protocol Attributes

When used in the IP-RAN solution, the MGX-RPM-1FE-CP back card must be configured to support the OSPF routing protocol.

To configure OSPF routing protocol attributes, use the following commands while in interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# ip ospf message-digest-key key-id md5 key

Enables OSPF Message Digest 5 (MD5) authentication.

Step 2 

RPM(config-if)# ip ospf hello-interval seconds

Specifies the interval between hello packets that the Cisco IOS software sends on the interface.

Step 3 

RPM(config-if)# ip ospf dead-interval seconds

Sets the interval at which hello packets must not be seen before neighbors declare the router down.

Configuring PIM

Because the MGX-RPM-1FE-CP back card is used in a multicast PPP environment, you should configure the Protocol Independent Multicast (PIM) mode of the FE interface.

To configure the PIM mode, use the following command while in interface configuration mode:

Command
Purpose

RPM(config-if)# ip pim {sparse-mode | sparse-dense-mode | dense-mode [proxy-register {list access-list | route-map map-name}]}

Enables PIM on an interface.


Enabling the FE Interface

To enable the FE interface, use the following command while in interface configuration mode:

Command
Purpose

RPM(config-if)# no shutdown

Enables the interface.
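
Taken together, the FE interface tasks above might look like the following sample configuration. The slot/port number, IP address, OSPF key and timer values, and PIM mode are illustrative placeholders; substitute values appropriate for your network.

```
! Sample FE interface configuration (illustrative values)
RPM(config)# interface fastethernet 1/1
RPM(config-if)# ip address 10.10.10.1 255.255.255.0
RPM(config-if)# speed 100
RPM(config-if)# duplex full
RPM(config-if)# ip ospf message-digest-key 1 md5 mykey
RPM(config-if)# ip ospf hello-interval 5
RPM(config-if)# ip ospf dead-interval 20
RPM(config-if)# ip pim sparse-mode
RPM(config-if)# no shutdown
```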


Configuring Multilink Interfaces

To configure the multilink interfaces to be used in conjunction with the MGX-RPM-1FE-CP back card, complete the following tasks:

Configuring the Loopback Interface

Configuring Multilink PPP

Configuring IP Address Assignment

Configuring PPP Multiplexing

Configuring RTP/UDP Compression

Configuring Routing Protocol Attributes

Configuring PIM

Configuring the Loopback Interface

The loopback interface is a software-only, virtual interface that emulates an interface that is always up. The interface-number is the number of the loopback interface that you want to create or configure. There is no limit on the number of loopback interfaces you can create.

Because the multilink interface is a virtual interface, you should create a loopback interface for the multilink interface to enable IP processing on the interface without having to assign an explicit IP address.

To configure a loopback interface for the multilink interface, use the following commands, beginning in global configuration mode:

 
Command
Purpose

Step 1 

RPM(config)# interface loopback number

Creates a loopback interface for the multilink interface.

Step 2 

RPM(config-if)# ip address ip-address subnet-mask

Assigns an IP address and subnet mask to the interface.

Step 3 

RPM(config-if)# exit

Exits interface configuration mode.

Configuring Multilink PPP

As higher-speed services are deployed, Multilink PPP (MLP) provides a standardized method for spreading traffic across multiple WAN links, while providing multivendor interoperability, packet fragmentation and proper sequencing, and load balancing on both inbound and outbound traffic. The MGX-RPM-1FE-CP back card used in conjunction with the Multilink PPP feature provides customers with the ability to increase channel capacity to up to eight T1s.

A multilink interface is a special virtual interface that represents a multilink PPP bundle. The multilink interface coordinates the configuration of the bundled links and presents a single object for the aggregate links. However, the individual PPP links that are aggregated together must also be configured. Therefore, to enable Multilink PPP on multiple serial interfaces, first set up the multilink interface, and then configure each of the serial interfaces and add them to the same multilink interface.

To set up the multilink interface, use the following commands, beginning in global configuration mode:

 
Command
Purpose

Step 1 

RPM(config)# interface multilink number

Specifies the multilink interface to be configured.

Step 2 

RPM(config-if)# ppp multilink

Enables multilink PPP operation.

Step 3 

RPM(config-if)# no ppp multilink fragmentation¹


or

RPM(config-if)# ppp multilink fragment disable²

Disables PPP multilink fragmentation.

Step 4 

RPM(config-if)# multilink-group group-number¹

or

RPM(config-if)# ppp multilink group group-number²

Specifies an identification number for the multilink interface.

Step 5 

RPM(config-if)# ip unnumbered loopback number

Enables IP processing on the multilink interface without assigning an explicit IP address to the interface, where number is the number of the loopback interface that you configured in Configuring the Loopback Interface.

¹ Cisco IOS Release 12.2(15)MC2a or prior.

² Cisco IOS Release 12.3(11)T or later.
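
A minimal sketch of the loopback and multilink interface setup described above, using the 12.2(15)MC2a-or-prior command syntax. The interface numbers and IP address are illustrative placeholders.

```
! Sample loopback and multilink interface configuration
! (12.2(15)MC2a-or-prior syntax; illustrative values)
RPM(config)# interface loopback 1
RPM(config-if)# ip address 10.10.11.1 255.255.255.0
RPM(config-if)# exit
RPM(config)# interface multilink 1
RPM(config-if)# ppp multilink
RPM(config-if)# no ppp multilink fragmentation
RPM(config-if)# multilink-group 1
RPM(config-if)# ip unnumbered loopback 1
```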

Configuring IP Address Assignment

A point-to-point interface must be able to provide a remote node with its IP address through the IP Control Protocol (IPCP) address negotiation process. The IP address can be obtained from a variety of sources. The address can be configured through the command line, entered with an EXEC-level command, provided by TACACS+ or the Dynamic Host Configuration Protocol (DHCP), or from a locally administered pool.

IP address pooling uses a pool of IP addresses from which an incoming interface can provide an IP address to a remote node through the IPCP address negotiation process. IP address pooling also enhances configuration flexibility by allowing multiple types of pooling to be active simultaneously.

To configure IP address assignment, use the following command while in interface configuration mode:

Command
Purpose
RPM(config-if)# peer default ip address {ip-address | dhcp | 
pool [pool-name]}

Specifies an IP address, an address from a specific IP address pool, or an address from the Dynamic Host Configuration Protocol (DHCP) mechanism to be returned to a remote peer connecting to this interface.
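
For example, to hand a fixed address to the peer during IPCP negotiation (the address shown is an illustrative placeholder):

```
! Assign a specific peer address on the multilink interface
RPM(config-if)# peer default ip address 10.10.12.2
```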


Configuring PPP Multiplexing

To enable and control the multiplexing of PPP frames, use the following commands while in multilink interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# ppp mux

Enables PPP multiplexing.

Step 2 

RPM(config-if)# ppp mux delay integer

Sets the maximum time delay.

Step 3 

RPM(config-if)# ppp mux subframe length integer

Sets the maximum length of the subframe.

Step 4 

RPM(config-if)# ppp mux frame integer

Sets the maximum length of the superframe.

Step 5 

RPM(config-if)# ppp mux subframe count integer

Sets the maximum number of subframes in a superframe.

Step 6 

RPM(config-if)# ppp mux pid integer

Sets the default PPP protocol ID.
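
The PPP multiplexing steps above might be applied as follows. All numeric values here are illustrative assumptions, not documented defaults; 33 (0x0021, the PPP protocol ID for IP) is shown as a plausible default protocol ID.

```
! Sample PPP multiplexing configuration (illustrative values)
RPM(config-if)# ppp mux
RPM(config-if)# ppp mux delay 800
RPM(config-if)# ppp mux subframe length 64
RPM(config-if)# ppp mux frame 254
RPM(config-if)# ppp mux subframe count 15
RPM(config-if)# ppp mux pid 33
```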

Configuring ACFC and PFC Handling During PPP Negotiation

With Cisco IOS Release 12.2(15)MC1 and later, Address and Control Field Compression (ACFC) and Protocol Field Compression (PFC) handling during PPP negotiation can be configured. By default, ACFC/PFC handling is not enabled.

To configure ACFC handling during PPP negotiation, use the following commands while in multilink interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)#  ppp acfc remote {apply | reject | ignore}

Configures how the router handles the ACFC option in configuration requests received from a remote peer, where:

apply—ACFC options are accepted and ACFC may be performed on frames sent to the remote peer.

reject—ACFC options are explicitly ignored.

ignore—ACFC options are accepted, but ACFC is not performed on frames sent to the remote peer.

Step 2 

RPM(config-if)# ppp acfc local {request | forbid}

Configures how the router handles ACFC in its outbound configuration requests, where:

request—The ACFC option is included in outbound configuration requests.

forbid—The ACFC option is not sent in outbound configuration requests, and requests from a remote peer to add the ACFC option are not accepted.

To configure PFC handling during PPP negotiation, use the following commands while in multilink interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)#  ppp pfc remote {apply | reject | ignore}

Configures how the router handles the PFC option in configuration requests received from a remote peer, where:

apply—PFC options are accepted and PFC may be performed on frames sent to the remote peer.

reject—PFC options are explicitly ignored.

ignore—PFC options are accepted, but PFC is not performed on frames sent to the remote peer.

Step 2 

RPM(config-if)# ppp pfc local {request | forbid}

Configures how the router handles PFC in its outbound configuration requests, where:

request—The PFC option is included in outbound configuration requests.

forbid—The PFC option is not sent in outbound configuration requests, and requests from a remote peer to add the PFC option are not accepted.
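
For example, to both accept ACFC/PFC from the remote peer and request them in outbound configuration requests (one possible policy, shown only as a sketch):

```
! Accept and request ACFC and PFC during PPP negotiation
RPM(config-if)# ppp acfc remote apply
RPM(config-if)# ppp acfc local request
RPM(config-if)# ppp pfc remote apply
RPM(config-if)# ppp pfc local request
```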

Configuring RTP/UDP Compression

Enabling RTP/UDP compression (cRTP/cUDP) on both ends of a low-bandwidth serial link can greatly reduce the network overhead if there is a lot of RTP traffic on that slow link. This compression is especially beneficial when the RTP payload size is small (for example, compressed audio payloads of 20-50 bytes).


Note Before you can enable RTP header compression, you must configure a serial line that uses PPP encapsulation.


To configure RTP header compression when using Cisco IOS Release 12.2(15)MC2a or prior, use the following commands while in multilink interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# ip rtp header-compression


Enables RTP header compression for serial encapsulations.

Step 2 

RPM(config-if)# ip rtp compression-connections 
number

Configures the total number of RTP header compression connections on an interface. By default, a total of 16 RTP compression connections on an interface is supported.

To configure RTP header compression when using Cisco IOS Release 12.3(11)T or later, use the following commands while in multilink interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# ip rtp header-compression ignore-id

Enables RTP header compression for serial encapsulations and suppresses IP ID checking during RTP compression.

Step 2 

RPM(config-if)# ip rtp compression-connections 
number

Configures the total number of RTP header compression connections on an interface. By default, a total of 16 RTP compression connections on an interface is supported.


Note The MGX-RPM-1FE-CP back card supports up to 150 RTP header compression connections on a T1 interface and up to 1000 connections per MLP bundle regardless of whether the bundle contains one T1 interface or four.
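
As a sketch, the 12.3(11)T-or-later steps might look like the following; 150 connections is used here only because it matches the per-T1 limit noted above.

```
! Sample cRTP configuration (12.3(11)T-or-later syntax)
RPM(config-if)# ip rtp header-compression ignore-id
RPM(config-if)# ip rtp compression-connections 150
```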


Configuring the RTP/UDP Compression Flow Expiration Timeout Duration

To minimize traffic corruption, cUDP flows expire after a user-defined period of time during which no packets are passed. When this duration of inactivity occurs on a flow at the compressor, the compressor sends a full header upon receiving a packet for that flow, or, if no new packet is received for that flow, makes the context ID (CID) for the flow available for new use. When a packet is received at the decompressor after the duration of inactivity has been exceeded, the packet is dropped and a context state message is sent to the compressor requesting a flow refresh.

The default expiration timeout is 5 seconds. The recommended value is 8 seconds.


Caution Failure of performance/latency scripts could occur if the expiration timeout duration is not changed to the recommended 8 seconds.

To configure the duration of the cUDP flow expiration timeout, use the following command while in multilink interface configuration mode:

Command
Purpose
RPM(config-if)# ppp iphc max-time seconds

Specifies the duration of inactivity, in seconds, that when exceeded causes the cUDP flow to expire.
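
Setting the recommended 8-second expiration timeout described above would look like:

```
! Set the cUDP flow expiration timeout to the recommended value
RPM(config-if)# ppp iphc max-time 8
```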


Configuring Routing Protocol Attributes

When used in the IP-RAN solution, the multilink interface must be configured to support the OSPF routing protocol.

To configure OSPF routing protocol attributes, use the following commands while in interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# ip ospf message-digest-key key-id 
md5 key 

Enables OSPF Message Digest 5 (MD5) authentication.

Step 2 

RPM(config-if)# ip ospf hello-interval seconds 

Specifies the interval between hello packets that the Cisco IOS software sends on the interface.

Step 3 

RPM(config-if)# ip ospf dead-interval seconds 

Sets the interval at which hello packets must not be seen before neighbors declare the router down.

Configuring PIM

Because the MGX-RPM-1FE-CP back card is used in a multicast PPP environment, you should configure the Protocol Independent Multicast (PIM) mode of the multilink interface.

To configure the PIM mode, use the following command while in interface configuration mode:

Command
Purpose
RPM(config-if)# ip pim {sparse-mode | sparse-dense-mode | 
dense-mode [proxy-register {list access-list | route-map 
map-name}]} 

Configures PIM on an interface, where:

sparse-mode—Enables sparse mode of operation.

sparse-dense-mode—Treats the interface in either sparse mode or dense mode of operation, depending on which mode the multicast group operates in.

dense-mode—Enables dense mode of operation.

proxy-register—(Optional) Enables proxy registering on the interface of a designated router (DR) (leading toward the bordering dense mode region) for multicast traffic from sources not connected to the DR.

list access-list—(Optional) Defines the extended access list number or name.

route-map map-name—(Optional) Defines the route map.


Configuring Virtual Templates

To configure the virtual templates to be used in conjunction with the MGX-RPM-1FE-CP back card, complete the following tasks:

Configuring the IP Address

Configuring Multilink PPP

Enabling Link Quality Monitoring (LQM)

Configuring the IP Address

No IP address should be associated with the virtual template.

To configure no IP address, use the following commands, beginning in global configuration mode:

 
Command
Purpose

Step 1 

RPM(config)# interface virtual-template number

Specifies the virtual template interface to be configured.

Step 2 

RPM(config-if)# no ip address

Indicates that no IP address is associated with the virtual template.

Configuring Multilink PPP

To associate the virtual template with a multilink group, use the following commands while in interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# ppp multilink

Enables multilink PPP operation.

Step 2 

RPM(config-if)# ppp multilink queue depth qos number

Specifies link queueing parameters. This command sets the maximum depth for link queues when a bundle has non-FIFO queuing. The possible values are 2 through 255.

Step 3 

RPM(config-if)# multilink-group group-number¹

or

RPM(config-if)# ppp multilink group group-number²

Specifies an identification number for the multilink interface.

¹ Cisco IOS Release 12.2(15)MC2a or prior.

² Cisco IOS Release 12.3(11)T or later.

Enabling Link Quality Monitoring (LQM)

Link Quality Monitoring (LQM) is available on all serial interfaces running PPP. LQM monitors the link quality, and if the quality drops below a configured percentage, the router shuts down the link. The percentages are calculated for both the incoming and outgoing directions. The outgoing quality is calculated by comparing the total number of packets and bytes sent with the total number of packets and bytes received by the destination node. The incoming quality is calculated by comparing the total number of packets and bytes received with the total number of packets and bytes sent by the destination peer.

When LQM is enabled, Link Quality Reports (LQRs) are sent, in place of keepalives, every keepalive period. All incoming keepalives are responded to properly. If LQM is not configured, keepalives are sent every keepalive period and all incoming LQRs are responded to with an LQR.


Note LQR is specified in RFC 1989, PPP Link Quality Monitoring, by William A. Simpson of Computer Systems Consulting Services.


To enable LQM on the interface, use the following command while in interface configuration mode:

Command
Purpose
RPM(config-if)# ppp quality percentage

Sets the link quality threshold.

The percentage argument specifies the link quality threshold. That percentage must be maintained, or the link is deemed to be of poor quality and taken down.
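
The virtual template tasks in this section can be sketched as follows, using the 12.2(15)MC2a-or-prior multilink-group syntax. The template number, queue depth, group number, and quality threshold are illustrative placeholders.

```
! Sample virtual template configuration (illustrative values)
RPM(config)# interface virtual-template 1
RPM(config-if)# no ip address
RPM(config-if)# ppp multilink
RPM(config-if)# ppp multilink queue depth qos 16
RPM(config-if)# multilink-group 1
RPM(config-if)# ppp quality 80
```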


Configuring the Switch Interface and PVCs

To configure the switch interface and the permanent virtual circuits (PVCs) to be used in conjunction with the MGX-RPM-1FE-CP back card, complete the following tasks:

Configuring the IP Address

Configuring the PVC

Configuring the IP Address

No IP address should be associated with the switch interface. To configure no IP address, use the following commands, beginning in global configuration mode:

 
Command
Purpose

Step 1 

RPM(config)# interface switch number

Specifies the switch interface to be configured.

Step 2 

RPM(config-if)# no ip address

Indicates that no IP address is associated with the switch interface.

Configuring the PVC

To configure a permanent virtual circuit (PVC) on a switch subinterface, use the following commands while in interface configuration mode:

 
Command
Purpose

Step 1 

RPM(config-if)# interface Switch number.subinterface 
point-to-point

Specifies the switch subinterface.

Step 2 

RPM(config-if)# pvc vpi/vci 

Specifies the PVC to be configured.

Step 3 

RPM(config-if)# encapsulation aal5 encap 
[virtual-template number]

Specifies the ATM adaptation layer (AAL) and encapsulation type for the PVC and associates the PVC with a virtual template.
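For example, the following commands configure PVC 1/42 on switch subinterface 1.1 with AAL5 Cisco PPP encapsulation associated with virtual template 1 (the subinterface number, VPI/VCI pair, encapsulation type, and virtual-template number are all illustrative):

RPM(config-if)# interface Switch 1.1 point-to-point
RPM(config-subif)# pvc 1/42
RPM(config-if-atm-vc)# encapsulation aal5ciscoppp virtual-template 1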

Saving the Configuration

To save the configuration, use the following command while in privileged EXEC mode:

Command
Purpose
RPM# copy running-config startup-config

Writes the new configuration to nonvolatile memory.


Verifying the Configuration

To verify the configuration of the PPP multiplexing and the cRTP/cUDP compression on the MGX-RPM-1FE-CP back card, enter the following command:

RPM# show running-config

Monitoring and Maintaining the MGX-RPM-1FE-CP Back Card

The following privileged EXEC commands can be used to monitor and maintain multilink and FE interfaces, and to view information about the PPP mux and header compression configuration:

Command
Purpose
RPM# clear counters fastethernet slot/port

Clears interface counters.

RPM# clear ip rtp header-compression

Clears RTP header compression structures and statistics.

RPM# clear ppp mux interface

Clears the PPP multiplexing interface counters.

RPM# show ppp multilink

Displays MLP and multilink bundle information.

RPM# show ppp multilink interface number

Displays multilink information for the specified interface.

RPM# show interfaces fastethernet slot/port

Displays the status of the FE interface.

RPM# show ppp mux interface interface

Displays statistics for PPP frames that have passed through a given multilink interface.

RPM# show ip rtp header-compression

Displays RTP header compression statistics.

RPM# show controllers fastethernet slot/port

Displays information about the initialization block, transmit ring, receive ring, and errors for the Fast Ethernet controller chip.


Enabling Remote Management of the MGX-RPM-1FE-CP Back Card

You can use Cisco network management applications, such as CiscoWorks for Mobile Wireless (CW4MW), to monitor and manage aspects of the MGX-RPM-1FE-CP back card.

To enable remote network management of the MGX-RPM-1FE-CP back card, do the following:


Step 1 At the privileged prompt, enter the following command to access configuration mode:

RPM# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
RPM(config)#

Step 2 At the configuration prompt, enter the following command to assign a host name to each of the network management workstations:

RPM(config)# ip host hostname ip-address

Where hostname is the name assigned to the Operations and Maintenance (O&M) workstation and ip-address is the address of the network management workstation.

Step 3 Enter the following command to log messages to a syslog server host:

RPM(config)# logging hostname

Where hostname is the name assigned to the CW4MW workstation with the ip host command.
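For example, if the CW4MW workstation were reachable at 10.1.1.100 (the host name and IP address are illustrative):

RPM(config)# ip host cw4mw 10.1.1.100
RPM(config)# logging cw4mw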

Step 4 Enter the following commands to create a loopback interface for O&M:

RPM(config)# interface loopback number
RPM(config-if)# ip address ip-address subnet-mask
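For example (the loopback number, IP address, and mask are illustrative):

RPM(config)# interface loopback 0
RPM(config-if)# ip address 10.1.2.1 255.255.255.255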

Step 5 Exit interface configuration mode:

RPM(config-if)# exit

Step 6 At the configuration prompt, enter the following command to specify the recipient of a Simple Network Management Protocol (SNMP) notification operation:

RPM(config)# snmp-server host hostname [traps | informs] [version {1 | 2c | 3 [auth | 
noauth | priv]}] community-string [udp-port port] [notification-type]

Where hostname is the name assigned to the CW4MW workstation with the ip host command in Step 2.
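For example, to send SNMP version 2c traps to the CW4MW workstation using the public community string (the host name and community string are illustrative):

RPM(config)# snmp-server host cw4mw traps version 2c public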

Step 7 Enter the following commands to specify the public and private SNMP community names:

RPM(config)# snmp-server community public RO
RPM(config)# snmp-server community private RW

Step 8 Enter the following command to enable the sending of SNMP traps:

RPM(config)# snmp-server enable traps

Step 9 Enter the following command to specify the loopback interface from which SNMP traps should originate:

RPM(config)# snmp-server trap-source loopback number

Where number is the number of the loopback interface you configured for the O&M in Step 4.

Step 10 At the configuration prompt, press Ctrl-Z to exit configuration mode.

Step 11 Write the new configuration to nonvolatile memory as follows:

RPM# copy running-config startup-config


Related Documentation

The following documents contain important information related to the MGX-RPM-1FE-CP back card:

Cisco MGX-RPM-1FE-CP Back Card Installation and Configuration Note

Release Notes for the MGX-RPM-1FE-CP Back Card

Cisco MGX 8850 Hardware Installation Guide