Configuring Ethernet Local Management Interface at a Provider Edge
The advent of Ethernet as a metropolitan-area network (MAN) and WAN technology imposes a new set of Operation, Administration, and Management (OAM) requirements on Ethernet's traditional operations, which had centered on enterprise networks only. The expansion of Ethernet technology into the domain of service providers, where networks are substantially larger and more complex than enterprise networks and the user base is wider, makes operational management of link uptime crucial. More important, the ability to isolate and respond to a failure in a timely manner becomes mandatory for normal day-to-day operations, and OAM capability translates directly to the competitiveness of the service provider.
The “Configuring Ethernet Local Management Interface at a Provider Edge” module provides general information about configuring an Ethernet Local Management Interface (LMI), an OAM protocol, on a provider edge (PE) device.
Prerequisites for Configuring Ethernet Local Management Interface at a Provider Edge
Ethernet Operation, Administration, and Management (OAM) must be operational in the network.
For Ethernet OAM to operate, the provider edge (PE) side of a connection must be running Ethernet Connectivity Fault Management (CFM) and Ethernet Local Management Interface (LMI).
All VLANs used on a PE device to connect to a customer edge (CE) device must also be created on that CE device.
To use nonstop forwarding (NSF) and In Service Software Upgrade (ISSU), stateful switchover (SSO) must be configured and working properly.
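As a sketch of this prerequisite, SSO can be enabled with the standard redundancy configuration on a dual Route Processor platform (the verification command output varies by platform):

Device# configure terminal
Device(config)# redundancy
Device(config-red)# mode sso
Device(config-red)# end
Device# show redundancy states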
Restrictions for Configuring Ethernet Local Management Interface at a Provider Edge
Ethernet Local Management Interface (LMI) is not supported on routed ports, EtherChannel port channels, ports that belong to an EtherChannel, private VLAN ports, IEEE 802.1Q tunnel ports, Ethernet over Multiprotocol Label Switching (MPLS) ports, or Ethernet Flow Points (EFPs) on trunk ports.
Ethernet LMI cannot be configured on VLAN interfaces.
Information About Configuring Ethernet Local Management Interface at a Provider Edge
An Ethernet virtual circuit (EVC) as defined by the Metro Ethernet Forum is a port-level point-to-point or multipoint-to-multipoint Layer 2 circuit. EVC status can be used by a customer edge (CE) device to find an alternative path into the service provider network or, in some cases, to fall back to a backup path over Ethernet or another alternative service such as ATM.
Ethernet LMI Overview
Ethernet Local Management Interface (LMI) is an Ethernet Operation, Administration, and Management (OAM) protocol between a customer edge (CE) device and a provider edge (PE) device. Ethernet LMI provides CE devices with the status of Ethernet virtual circuits (EVCs) for large Ethernet metropolitan-area networks (MANs) and WANs and provides information that enables CE devices to autoconfigure. Specifically, Ethernet LMI runs on the PE-CE User-Network Interface (UNI) link and notifies a CE device of the operating state of an EVC and the time when an EVC is added or deleted. Ethernet LMI also communicates the attributes of an EVC.
Ethernet LMI interoperates with Ethernet Connectivity Fault Management (CFM), an OAM protocol that runs within the provider network to collect OAM status. Ethernet CFM runs at the provider maintenance level (user provider edge [UPE] to UPE at the UNI). Ethernet LMI relies on the OAM Ethernet Infrastructure (EI) to interwork with CFM to learn the end-to-end status of EVCs across CFM domains.
Ethernet LMI is disabled globally by default. When Ethernet LMI is enabled globally, all interfaces are automatically enabled. Ethernet LMI can also be enabled or disabled at the interface to override the global configuration. The last Ethernet LMI command issued is the command that has precedence. No EVCs, Ethernet service instances, or UNIs are defined, and the UNI bundling service is bundling with multiplexing.
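For example, the following sketch (the interface name is illustrative) enables Ethernet LMI globally and then overrides the global setting on one interface; because the interface-level command is issued last, it takes precedence on that port:

Device(config)# ethernet lmi global
Device(config)# interface gigabitethernet 0/0/1
Device(config-if)# no ethernet lmi interface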
Ethernet CFM Overview
Ethernet Connectivity Fault Management (CFM) is an end-to-end per-service-instance (per VLAN) Ethernet layer Operation, Administration, and Management (OAM) protocol that includes proactive connectivity monitoring, fault verification, and fault isolation. End-to-end CFM can be from provider edge (PE) device to PE device or from customer edge (CE) device to CE device. For more information about Ethernet CFM, see
“Configuring Ethernet Connectivity Fault Management in a Service Provider Network” in the
Carrier Ethernet Configuration Guide.
OAM Manager Overview
The OAM manager is an infrastructure element that streamlines interaction between Operation, Administration, and Management (OAM) protocols. The OAM manager requires two interworking OAM protocols, Ethernet Connectivity Fault Management (CFM) and Ethernet Local Management Interface (LMI). No interactions are required between Ethernet LMI and the OAM manager on the customer edge (CE) side. On the User Provider-Edge (UPE) side, the OAM manager defines an abstraction layer that relays data collected from Ethernet CFM to the Ethernet LMI device.
Ethernet LMI and the OAM manager interaction is unidirectional, from the OAM manager to Ethernet LMI on the UPE side of the device. An information exchange results from an Ethernet LMI request or is triggered by the OAM manager when it receives notification from the OAM protocol that the number of UNIs has changed. A change in the number of UNIs may cause a change in Ethernet virtual circuit (EVC) status.
The OAM manager calculates EVC status given the number of active user network interfaces (UNIs) and the total number of associated UNIs. You must configure CFM to notify the OAM manager of all changes to the number of active UNIs or to the remote UNI ID for a given service provider VLAN (S-VLAN) domain.
The information exchanged is as follows:
EVC name and availability status (active, inactive, partially active, or not defined)
Remote UNI name and status (up, disconnected, administratively down, excessive frame check sequence [FCS] failures, or not reachable)
Remote UNI counts (the total number of expected UNIs and the number of active UNIs)
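The CFM-to-OAM-manager association described above can be sketched as follows (the domain name, service name, EVC name, maintenance level, and VLAN ID are illustrative): a CFM domain and service are tied to an EVC, and the EVC is configured to use CFM as its OAM protocol.

Device(config)# ethernet cfm domain Provider level 4
Device(config-ecfm)# service customer1 evc evc50 vlan 100
Device(config-ecfm-srv)# exit
Device(config-ecfm)# exit
Device(config)# ethernet evc evc50
Device(config-evc)# oam protocol cfm domain Provider
Device(config-evc)# uni count 2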
Benefits of Ethernet LMI at a Provider Edge
Communication of end-to-end status of the Ethernet virtual circuit (EVC) to the customer edge (CE) device
Communication of EVC and user network interface (UNI) attributes to a CE device
Competitive advantage for service providers
HA Features Supported by Ethernet LMI
In access and service provider networks using Ethernet technology, high availability (HA) is a requirement, especially on Ethernet operations, administration, and management (OAM) components that manage Ethernet virtual circuit (EVC) connectivity. End-to-end connectivity status information is critical and must be maintained on a hot standby Route Processor (RP) (a standby RP that has the same software image as the active RP and supports synchronization of line card, protocol, and application state information between RPs for supported features and protocols).
End-to-end connectivity status is maintained on the customer edge (CE), provider edge (PE), and access aggregation PE (uPE) network nodes based on information received by protocols such as Ethernet Local Management Interface (LMI), Connectivity Fault Management (CFM), and 802.3ah. This status information is used to either stop traffic or switch to backup paths when an EVC is down.
Metro Ethernet clients (E-LMI, CFM, 802.3ah) maintain configuration data and dynamic data, which is learned through protocols. Every transaction involves either accessing or updating data in the various databases. If the database is synchronized across active and standby modules, the modules are transparent to clients.
The Cisco infrastructure provides component application programming interfaces (APIs) that are helpful in maintaining a hot standby RP. Metro Ethernet HA clients (E-LMI, HA/ISSU, CFM HA/ISSU, 802.3ah HA/ISSU) interact with these components, update the database, and trigger necessary events to other components.
These HA features provide the following benefits:
Elimination of network downtime for Cisco software image upgrades, resulting in higher availability
Elimination of resource scheduling challenges associated with planned outages and late night maintenance windows
Accelerated deployment of new services and applications and faster implementation of new features, hardware, and fixes due to the elimination of network downtime during upgrades
Reduced operating costs due to outages while the system delivers higher service levels due to the elimination of network downtime during upgrades
NSF SSO Support in Ethernet LMI
The redundancy configurations stateful switchover (SSO) and nonstop forwarding (NSF) are supported in Ethernet Local Management Interface (LMI) and are automatically enabled. A switchover from an active to a standby Route Processor (RP) or a standby Route Switch Processor (RSP) occurs when the active RP or RSP fails, is removed from the networking device, or is manually taken down for maintenance. The primary function of Cisco NSF is to continue forwarding IP packets following an RP or RSP switchover. NSF also interoperates with the SSO feature to minimize network downtime following a switchover.
For detailed information about the SSO and NSF features, see the
High Availability Configuration Guide.
ISSU Support in Ethernet LMI
In Service Software Upgrade (ISSU) allows you to perform a Cisco software upgrade or downgrade without disrupting packet flow. Ethernet Local Management Interface (LMI) performs updates of the parameters within the Ethernet LMI database to the standby route processor (RP) or standby route switch processor (RSP). This checkpoint data requires ISSU capability to transform messages from one release to another. All the components that perform active processor to standby processor updates using messages require ISSU support. ISSU is automatically enabled in Ethernet LMI.
ISSU lowers the impact that planned maintenance activities have on network availability by allowing software changes while the system is in service. For detailed information about ISSU, see the
High Availability Configuration Guide.
How to Configure Ethernet Local Management Interface at a Provider Edge
For Ethernet Local Management Interface (LMI) to function with Connectivity Fault Management (CFM), you must configure Ethernet virtual circuits (EVCs), Ethernet service instances including untagged Ethernet flow points (EFPs), and Ethernet LMI customer VLAN mapping. Most of the configuration occurs on the provider edge (PE) device on the interfaces connected to the customer edge (CE) device. On the CE device, you need only enable Ethernet LMI on the connecting interface. Also, you must configure operations, administration, and management (OAM) parameters; for example, EVC definitions on PE devices on both sides of a metro network.
CFM and OAM interworking requires an inward facing Maintenance Entity Group End Point (MEP).
When you configure, change, or remove a user network interface (UNI) service type, Ethernet virtual circuit (EVC), Ethernet service instance, or customer edge (CE)-VLAN configuration, all configurations are checked to ensure that the configurations match (UNI service type with EVC or Ethernet service instance and CE-VLAN configuration). The configuration is rejected if the configurations do not match.
Perform this task to configure the OAM manager on a provider
edge (PE) device.
13. Repeat Steps
3 through 12 to define other CFM domains that you want OAM manager to monitor.
Configures the Ethernet virtual circuit (EVC) operations, administration, and management (OAM) protocol as CFM for the CFM domain maintenance level as configured in Steps 3 and 4.
If the CFM domain does not exist, this command is rejected, and an error message is displayed.
Device(config-evc)# uni count 3
Sets the User Network Interface (UNI) count for the EVC.
If this command is not issued, the service defaults to a point-to-point service. If a value of 2 is entered, point-to-multipoint service becomes an option. If a value of 3 or greater is entered, the service is point-to-multipoint.
If you enter a number greater than the number of endpoints, the UNI status is partially active even if all endpoints are up. If you enter a UNI count less than the number of endpoints, the status might be active, even if all endpoints are not up.
Returns to global configuration mode.
Repeat Steps 3 through 12 to define other CFM domains that you want OAM manager to monitor.
Device(config)# interface gigabitethernet 0/0/2
Specifies the physical interface connected to the CE device and enters interface configuration mode.
Device(config-if)# service instance 400 ethernet 50
Configures an Ethernet service instance on the interface and enters Ethernet service instance configuration mode. The Ethernet service instance identifier is a per-interface service identifier and does not map to a VLAN.
Configures an Ethernet LMI customer VLAN-to-EVC map for a particular UNI. To specify both VLAN IDs and untagged VLANs in the map, specify the VLAN IDs first and then specify the untagged keyword as follows: ethernet lmi ce-vlan map 100,200,300,untagged. Also, if the untagged keyword is not specified in the map configuration, the main interface line protocol on the customer edge (CE) device will be down.
Device(config-if-srv)# ethernet lmi interface
Configures Ethernet Local Management Interface (LMI) on a UNI.
Device(config-if-srv)# encapsulation dot1q 2
Defines the matching criteria to map 802.1Q frames ingress on an interface to the appropriate service instance.
Device(config-if-srv)# bridge-domain 1
Binds the service instance to a bridge domain instance.
The order in which the global and interface configuration commands are issued determines the configuration. The last command that is issued has precedence.
Perform this task to enable Ethernet Local Management Interface (LMI) on a device or on an interface.
Command or Action
Enables privileged EXEC mode.
Enter your password if prompted.
Device# configure terminal
Enters global configuration mode.
Device(config)# interface ethernet 1/3
Defines an interface to configure as an Ethernet LMI interface and enters interface configuration mode.
Device(config-if)# ethernet lmi interface
Configures Ethernet LMI on the interface.
When Ethernet LMI is enabled globally, it is enabled on all interfaces unless you disable it on specific interfaces. If Ethernet LMI is disabled globally, you can use this command to enable it on specified interfaces.
Device(config-if)# ethernet lmi n393 10
Configures Ethernet LMI parameters for the UNI.
Returns to privileged EXEC mode.
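Putting the steps above together, a minimal interface-level configuration might look like the following (the interface and the n393 counter value reuse the examples shown in the steps):

Device# configure terminal
Device(config)# interface ethernet 1/3
Device(config-if)# ethernet lmi interface
Device(config-if)# ethernet lmi n393 10
Device(config-if)# end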
Displaying Ethernet LMI and OAM Manager Information
Perform this task
to display Ethernet Local Management Interface (LMI) or Operation,
Administration, and Management (OAM) manager information. After step 1, all the
steps are optional and can be performed in any order.
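For example, assuming a PE device with Ethernet LMI enabled, commands such as the following display LMI and EVC state (exact keywords can vary by platform and release):

Device# show ethernet lmi evc
Device# show ethernet lmi statistics
Device# show ethernet service evc
Device# show ethernet service instance detail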
The following is sample output from the show ethernet service instance command using the detail keyword:
Device# show ethernet service instance detail
Service Instance ID: 400
Associated Interface: GigabitEthernet0/0/2
Associated EVC: 50
Pkts In Bytes In Pkts Out Bytes Out
0 0 0 0
Configuration Examples for Ethernet Local Management Interface at a Provider Edge
Example: Ethernet OAM Manager on a PE Device Configuration
This example shows
a sample configuration of Operation, Administration, and Management (OAM)
manager, Connectivity Fault Management (CFM), and Ethernet Local Management
Interface (LMI) on a provider edge (PE) device. In this example, a bridge
domain is specified.
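A configuration along the following lines illustrates the pieces working together. The interface, service instance, EVC, VLAN, and bridge-domain values reuse those shown earlier in this module; the CFM domain name, service name, and maintenance level are illustrative assumptions.

Device# configure terminal
Device(config)# ethernet cfm domain provider level 4
Device(config-ecfm)# service s1 evc 50 vlan 2
Device(config-ecfm-srv)# exit
Device(config-ecfm)# exit
Device(config)# ethernet evc 50
Device(config-evc)# oam protocol cfm domain provider
Device(config-evc)# uni count 3
Device(config-evc)# exit
Device(config)# interface gigabitethernet 0/0/2
Device(config-if)# service instance 400 ethernet 50
Device(config-if-srv)# encapsulation dot1q 2
Device(config-if-srv)# bridge-domain 1
Device(config-if-srv)# ethernet lmi ce-vlan map 2
Device(config-if-srv)# end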