
Table Of Contents

White Paper

DPT Ring Architecture and Terminology

DPT/SRP Features

Spatial Reuse

Fairness

Ring Resiliency and Restoration

DPT Applications

IntraPoP Connectivity

DPT IP Internetworking Hierarchy

Metro IP Access Rings

Campus Networking

Spatial Reuse Protocol (SRP)

SRP Operations

SRP-fa Overview

SRP-fa Rules

DPT/SRP Performance Simulation

Scenario 1: Metro Access Aggregation Ring

Scenario 2: DPT OC-12 Ring With 128 Nodes

Scenario 3: DPT OC-12 Ring with Large Link Delay Variations under Link Failure

Scenario 4: DPT Ring versus Ethernet Ring

Scenario 5: VoIP across DPT Hierarchy

Summary

References


White Paper


Dynamic Packet Transport Technology and Performance

With the explosion in IP-based customer demand for applications, connectivity, and services, there is tremendous focus on the efficiency and scalability of IP-based optical networking infrastructures. Given the growth projections for data traffic, fueled by the explosive growth of Web-based e-commerce, voice over IP (VoIP), and IP VPN data services, there is a trend toward data optimization as the basis for next-generation network designs. Historically, the old-world design included statically provisioned time slots and preprovisioned bandwidth for service-affecting conditions. For example, a typical long-distance four-fiber bidirectional line switched ring (BLSR) keeps half of the ring capacity idle in case of a failover condition.

Cisco Dynamic Packet Transport (DPT) is an emerging new-world optical ring networking technology that allows the full fiber ring bandwidth to be utilized. DPT provides a potential evolution from multilayered infrastructure equipment to intelligent network services based on a Layer 3 (IP/Multiprotocol Label Switching [MPLS]) services layer and an optical transport layer. Cisco DPT technology utilizes a new media access control (MAC) layer protocol called the Spatial Reuse Protocol (SRP), which is designed to support scalable and optimized IP packet aggregation and transport in local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs). By using a fairness algorithm and destination packet removal, SRP enables global bandwidth usage fairness and local spatial reuse on DPT fiber rings. SRP can scale to large numbers of nodes while providing guaranteed delivery of high-priority IP packets with bounded end-to-end delay requirements.

Currently, the SRP protocol is being considered as part of the standardization effort in the IEEE Resilient Packet Ring Study Group (RPRSG) and the IETF IP over Packet Transport Ring (IPoPTR) BoF.

This paper discusses the following:

Brief description of the DPT/SRP architecture and terminology

Discussion of the key features of DPT/SRP packet optimized transport technology

Examination of DPT-transport-based IP networking in LAN, MAN, and WAN applications

Analysis of the MAC protocol that underlies the DPT transport technology: SRP and its fairness algorithm

Study, using the OPNET SRP simulation model, of the usage fairness of DPT/SRP fiber ring networks under arbitrary IP packet traffic, including general TCP/IP packet delay properties for low-priority packets and high-priority voice packets traversing hierarchical DPT network rings; the simulation study also compares DPT/SRP performance with Ethernet switched rings and examines DPT/SRP network usage fairness and convergence under network link failure

DPT Ring Architecture and Terminology

DPT/SRP uses a bidirectional ring consisting of two symmetric counter-rotating fiber rings, each of which can be concurrently utilized to pass both data and control packets. The DPT ring architecture is depicted in Figure 1.

Figure 1 DPT Ring

To distinguish between the two rings, one is referred to as the "inner" ring and the other as the "outer" ring. DPT operates by sending data packets in one direction (downstream) and by sending the corresponding control packets in the opposite direction (upstream) on the other fiber. Thus, DPT can utilize both fibers concurrently to maximize bandwidth for packet transport and to accelerate control signal propagation for adaptive bandwidth utilization and for self-healing purposes.

DPT/SRP Features

Spatial Reuse

One of the key features of a DPT ring is its bandwidth efficiency. By utilizing the spatial reuse capability of SRP, DPT rings can greatly increase the overall aggregate bandwidth. Previous data ring technologies such as FDDI and Token Ring used source stripping: packets circulated around the entire ring, consuming bandwidth on every segment, before being removed by the sender. In contrast, SRP uses destination stripping: the destination node strips packets from the ring, freeing the full bandwidth on other segments of the ring for use by other packets. Because every ring node can source packets onto the ring concurrently, this greatly increases the usable ring bandwidth.

Figure 2 outlines how spatial reuse works. In this example, node 4 is sending traffic to node 7, node 5 to node 6, and node 1 to node 3. Having the destination node strip unicast data from the ring allows downstream nodes full access to the ring bandwidth. In this example, node 1 has full-bandwidth access to node 3 while other traffic is simultaneously transmitted on other parts of the ring; a simple illustration of the per-segment effect follows Figure 2.

Figure 2 Spatial Reuse
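As a rough, hypothetical illustration of this effect (the node count, flow set, and C program below are not part of the OPNET models used later in this paper), the following sketch counts how many of the Figure 2 flows cross each ring segment. With destination stripping, only the segments between a flow's source and destination are occupied; with source stripping, every flow would occupy every segment.

/*
 * Hypothetical spatial-reuse illustration (not Cisco code and not the OPNET
 * model): the three unicast flows from Figure 2 (node 4 -> 7, node 5 -> 6,
 * node 1 -> 3) are assumed to travel the same direction around an 8-node
 * ring. With destination stripping a flow occupies only the segments between
 * its source and destination; with source stripping it would occupy them all.
 */
#include <stdio.h>

#define NODES 8

struct flow { int src, dst; };

int main(void)
{
    struct flow flows[] = { {4, 7}, {5, 6}, {1, 3} };
    int nflows = (int)(sizeof flows / sizeof flows[0]);
    int dest_strip[NODES] = {0};    /* flows crossing segment n -> n+1 */

    for (int f = 0; f < nflows; f++)
        for (int n = flows[f].src; n != flows[f].dst; n = (n + 1) % NODES)
            dest_strip[n]++;        /* stop at the destination: it strips the packet */

    printf("segment   destination stripping   source stripping\n");
    for (int n = 0; n < NODES; n++)
        printf(" %d -> %d          %d flow(s)             %d flow(s)\n",
               n, (n + 1) % NODES, dest_strip[n], nflows);
    return 0;
}

Running this shows segments such as 0 -> 1, 3 -> 4, and 7 -> 0 carrying no traffic at all under destination stripping, which is exactly the capacity that spatial reuse makes available to additional flows.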

Fairness

Each node on the DPT ring executes a distributed copy of an algorithm called the SRP fairness algorithm (SRP-fa) designed to ensure the following:

Global fairness—each node gets its fair share of ring bandwidth by controlling the rates at which packets are forwarded onto the ring so that no nodes can act as bandwidth hogs, creating either starvation or excessive delay conditions

Local optimization—ensures that ring nodes maximize the spatial reuse properties of the ring so that they can utilize more than their fair share on local ring segments as long as other ring nodes are not adversely impacted due to traffic locality

Scalability—the SRP-fa is designed for highly efficient and scalable bandwidth control to handle rings with large numbers of routers (up to 128 nodes), running at high speeds (OC-48c/STM-16c and OC-192c/STM-64c), over widely distributed geographic areas

A more detailed presentation of the SRP-fa and its simulation studies is given in "SRP-fa Overview" and "DPT/SRP Performance Simulation".

Ring Resiliency and Restoration

The DPT ring utilizes Intelligent Protection Switching (IPS) to provide proactive performance monitoring, rapid self-healing, and IP service restoration after ring node or fiber facility events and faults. A ring fiber link failure is shown in Figure 3.

Figure 3 Intelligent Protection Switching

IPS provides:

Proactive performance monitoring and fault detection and isolation. IPS allows operation over SONET/SDH or dark fiber and wavelength division multiplexing (WDM)

50-ms self-healing via ring wrapping after Layer 1 fault/event detection, without Layer 3 routing protocol reconvergence

Optimal rehoming packet path selection after ring wraps without requiring dedicated protection bandwidth

Protection switching hierarchy for cases of multiple concurrent faults or events

Multilayer awareness—IPS monitors and handles events at Layers 1, 2, and 3 instead of just Layer 1 and provides additional packet-optimized capabilities such as packet pass-through mode to avoid ring wraps in the case of service-impacting Layer 3 events

Plug-and-play operation—IPS does not require extensive provisioning and configuration operations involving ring node name/address and topology map construction

DPT Applications

DPT technology enables a wealth of revenue-producing and cost-saving LAN-, MAN-, and WAN-based applications.

IntraPoP Connectivity

A key issue facing large-scale IP service providers is robust, high-performance intraPoP connectivity and cost-effective scaling of the PoP for continued IP traffic and services growth. Current architectural alternatives are depicted in Figure 4.

Figure 4 Alternative IntraPoP Connectivity

The key issues with these architectures include:

Scaling to higher bandwidth (622 Mbps now and 2.4 Gbps soon thereafter)

Proactive performance monitoring and rapid self-healing and restoration in case of service-impacting faults

Port count explosion due to dual-homing requirements and growth in number of access routers

High complexity resulting from multiple technologies and increased number of network elements (such as intermediate switches) and bandwidth-inefficient load-balancing schemes

The DPT ring provides an ideal solution for this challenging environment with the following characteristics:

Scalable bandwidth—DPT rings start at 622 Mbps (before statistical multiplexing and spatial reuse factors are included) and readily scale to 2.5-Gbps and 10-Gbps solutions

Eliminates the complexity associated with intermediate Layer 2 switching solutions

Substantially reduces port counts—each router simply requires a DPT ring card

Provides native proactive performance monitoring, dual-homing, self-healing, and load-balancing capabilities

DPT IP Internetworking Hierarchy

DPT rings provide excellent support for both local-access aggregation and MAN/WAN connectivity via ring hierarchy. Access aggregation rings are utilized to terminate large numbers of customer access pipes and aggregate them up to high-speed routers acting as traffic consolidation and distribution points. These routers are then interconnected via higher-speed distribution rings and provide mesh connectivity to the Internet backbone as depicted in Figure 5.

Figure 5 DPT Ring Hierarchy

Metro IP Access Rings

Another important application of DPT technology will be shared metro/suburban IP access rings, as depicted in Figure 6. These rings will provide access to multiple tenants in high-rise business and residential buildings as well as suburban business parks. A router in the building basement will provide access for multiple building tenants to a range of robust, high-bandwidth IP services, including virtual private networks (VPNs) and Internet access as well as low-cost voice and video over IP services sold by aggressive IP service providers.

Figure 6 Metro Access Ring

Campus Networking

Ring architectures have long been an important component of enterprise campus network design, with FDDI rings in campus and building backbones and, increasingly, SONET/SDH rings and SONET/SDH managed bandwidth services for top-tier distributed campuses.

DPT rings provide a logical evolution from FDDI rings for intrabuilding backbones and data center interconnection by cost-effectively introducing a major bandwidth upgrade while retaining the benefits of the self-healing ring (Figure 7).

Figure 7 Campus Ring

Spatial Reuse Protocol (SRP)

On a DPT ring, every node executes a distributed copy of the SRP-fa in its MAC layer. To better illustrate the SRP operations and its fairness algorithm, a high-level view of an SRP node is depicted in Figure 8.

Figure 8 High-level View of SRP Node

SRP Operations

The following are the main functions of SRP:

Receive operations: incoming packets entering a node are copied to a receive buffer if the destination address (DA) of the packet matches the node. If a DA-matched packet is also a unicast, the packet is stripped from the ring and passed to the appropriate higher-layer processes. A packet is placed into the transit buffer (TB) for forwarding to the next node if it passes the time-to-live (TTL) and cyclic redundancy check (CRC) tests and either of the following is true:

- The DA of the packet does not match the address of the node
- The packet is a multicast and the source address (SA) of the packet does not match the address of the node

Transmit operations: a packet sent from the node is either a forwarded packet from the TB or transmit data originating from the node via the Tx buffer. High-priority forwarded packets are always sent first. High-priority transmit packets may be sent as long as the low-priority transit buffer (LPTB) is not full. A set of usage counters monitors the rate at which low-priority transmit data and forwarded data are sent. Low-priority data may be sent as long as the usage counter does not exceed the allowed usage governed by the SRP-fa rules and the LPTB has not exceeded its low-priority threshold.
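The receive-side decision just described can be summarized in a few lines of C. This is a simplified, hypothetical sketch of the accept/strip/forward choice only; the field, type, and function names are invented for illustration, and real SRP packets carry additional MAC fields (ring identifier, priority, and so on) defined in the SRP draft.

/* Simplified, hypothetical sketch of the receive decision described above.
 * Field, type, and function names are invented for illustration; a real SRP
 * header also carries ring ID, priority, and other MAC fields. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t da;        /* destination address */
    uint64_t sa;        /* source address */
    bool     multicast;
    uint8_t  ttl;
    bool     crc_ok;
} srp_packet;

typedef enum { DROP, ACCEPT_AND_STRIP, ACCEPT_AND_FORWARD, FORWARD } rx_action;

static rx_action srp_receive(const srp_packet *p, uint64_t my_addr)
{
    if (!p->crc_ok || p->ttl == 0)
        return DROP;                  /* fails the TTL/CRC tests */

    if (p->multicast) {
        if (p->sa == my_addr)
            return DROP;              /* our own multicast came back: strip it */
        return ACCEPT_AND_FORWARD;    /* copy to the host and keep forwarding */
    }

    if (p->da == my_addr)
        return ACCEPT_AND_STRIP;      /* unicast to us: strip it from the ring */

    return FORWARD;                   /* transit traffic: place it in the TB */
}

int main(void)
{
    srp_packet p = { .da = 0x7, .sa = 0x4, .multicast = false, .ttl = 32, .crc_ok = true };
    printf("unicast for 0x7 arriving at 0x7: action %d (1 = accept and strip)\n",
           (int)srp_receive(&p, 0x7));
    return 0;
}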

SRP-fa Overview

When an SRP node experiences congestion, it advertises the value of its transmit usage counter to upstream nodes via the opposite ring. The usage counter is run through a low-pass filter function to stabilize the feedback. Upstream nodes adjust their transmit rates so as not to exceed the advertised value, and they propagate the received advertised value to their own immediate upstream neighbor. A node that receives an advertised value and is itself congested propagates the minimum of its own transmit usage and the advertised usage.
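The low-pass filter referred to here is the simple recursive average used in the pseudo-code later in this paper, lp = ((LP - 1) * lp + sample) / LP. The short C sketch below (an arbitrary step input and an assumed filter constant of 64) shows how it smooths a bursty usage count into a stable advertised value.

/* Minimal sketch of the recursive low-pass filter used to smooth the usage
 * count: lp = ((LP - 1) * lp + sample) / LP. The step input and LP = 64 are
 * arbitrary values chosen for illustration. */
#include <stdio.h>

#define LP 64

int main(void)
{
    long lp_usage = 0;

    for (int interval = 0; interval < 512; interval++) {
        long usage = (interval < 256) ? 10000 : 0;    /* bursty step input */
        lp_usage = ((LP - 1) * lp_usage + usage) / LP;
        if (interval % 64 == 63)
            printf("interval %3d: raw %5ld, filtered %5ld\n",
                   interval, usage, lp_usage);
    }
    return 0;
}

The filtered value climbs toward the raw usage over a few hundred intervals and decays just as smoothly, which is what keeps the advertised feedback stable.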

Congestion is detected when the depth of the low-priority transit buffer reaches a congestion threshold.

Usage packets are generated periodically to carry the advertised values and also act as keepalives informing the upstream node that a valid data link exists.

The SRP-fa applies only to low-priority packets. High-priority packets do not follow the SRP-fa rules and can be transmitted at any time as long as there is sufficient transit buffer space. High-priority traffic can be rate limited with features such as committed access rate (CAR) before it is sourced onto the ring.

SRP-fa Rules

A node can transmit four types of packets:

High-priority packets from the high-priority TB

Low-priority packets from the low-priority TB

High-priority packets from the host Tx high-priority first-in, first-out (FIFO) buffer

Low-priority packets from the host Tx low-priority FIFO

High-priority packets from the transit buffer are always sent first. High-priority packets from the host are sent as long as the low-priority transit buffer is not full. Low-priority packets from the host are sent as long as the LPTB has not crossed the low-priority threshold and the SRP-fa rules allow it (my_usage < allow_usage). If nothing else can be sent, low-priority packets from the LPTB are sent.
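The selection order described above can be sketched as a simple priority decision. The buffer and flag names below are invented for illustration; the usage counters and thresholds are the ones defined in the SRP-fa pseudo-code that follows.

/* Hypothetical sketch of the transmit selection order described above.
 * Buffer and flag names are invented; the counters and thresholds are the
 * ones defined in the SRP-fa pseudo-code below. */
#include <stdio.h>

typedef enum {
    SEND_HP_TRANSIT, SEND_HP_HOST, SEND_LP_HOST, SEND_LP_TRANSIT, SEND_NOTHING
} tx_choice;

static tx_choice srp_select_transmit(int hp_tb_pkts,       /* packets waiting in the high-priority TB */
                                     int lp_tb_pkts,       /* packets waiting in the LPTB             */
                                     int lp_tb_depth,      /* LPTB depth in octets                    */
                                     int lp_tb_size,       /* LPTB capacity in octets                 */
                                     int lp_tb_threshold,  /* TB_LO_THRESHOLD                         */
                                     int hp_host_pkts,
                                     int lp_host_pkts,
                                     int my_usage_ok)      /* result of the SRP-fa check              */
{
    if (hp_tb_pkts > 0)
        return SEND_HP_TRANSIT;                            /* transit high priority always first */
    if (hp_host_pkts > 0 && lp_tb_depth < lp_tb_size)
        return SEND_HP_HOST;                               /* host high priority while LPTB not full */
    if (lp_host_pkts > 0 && lp_tb_depth < lp_tb_threshold && my_usage_ok)
        return SEND_LP_HOST;                               /* host low priority if SRP-fa allows it */
    if (lp_tb_pkts > 0)
        return SEND_LP_TRANSIT;                            /* otherwise drain the LPTB */
    return SEND_NOTHING;
}

int main(void)
{
    /* Example: only host low-priority traffic is waiting and the SRP-fa allows it. */
    printf("selected source: %d (2 = host low priority)\n",
           (int)srp_select_transmit(0, 0, 1000, 65536, 32768, 0, 3, 1));
    return 0;
}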

SRP-fa pseudo-code

A more precise definition of the fairness algorithm follows, using the variables and constants defined in Tables 1 through 3.

Table 1  SRP-fa Variables

lo_tb_depth: Low-priority transit buffer depth

my_usage: Count of octets transmitted by the host

lp_my_usage: my_usage run through a low-pass filter

my_usage_ok: Flag indicating that the host is allowed to transmit

allow_usage: The fair amount each node is allowed to transmit

fwd_rate: Count of octets forwarded from upstream

lp_fwd_rate: fwd_rate run through a low-pass filter

congested: The node cannot transmit host traffic without the TB filling beyond its congestion threshold point

rev_usage: The usage value passed along to the upstream neighbor by usage_pkt


Table 2  SRP-fa Constants

MAX_ALLOWANCE: Configurable value for the maximum allowed usage for this node

DECAY_INTERVAL: 8,000 octet times at OC-12; 32,000 octet times at OC-48

AGECOEFF: Aging coefficient for my_usage and fwd_rate (= 4)

LP_FWD: Low-pass filter constant for fwd_rate (= 64)

LP_MU: Low-pass filter constant for my_usage (= 512)

LP_ALLOW: Low-pass filter constant for the allow_usage auto-increment (= 64)

NULL_RCVD_INFO: All ones in the rcvd_usage field

TB_LO_THRESHOLD: TB depth at which no more low-priority host traffic can be sent

MAX_LRATE: AGECOEFF * DECAY_INTERVAL


Table 3  Variables Updated Every Cycle

my_usage: Incremented by 1 for every octet transmitted by the host (does not include data transmitted from the transit buffer)

fwd_rate: Incremented by 1 for every octet that enters the transit buffer


The following is evaluated each cycle:

if ((my_usage < allow_usage)
    && !((lo_tb_depth > 0) && (fwd_rate < my_usage))
    && (my_usage < MAX_ALLOWANCE))
    my_usage_ok = true;    // true means OK to send host packets

The following is evaluated when a usage packet is received:

if ((usage_pkt.SA == my_SA) &&
    ((usage_pkt.RI == my_RingID) || (node_state == wrapped)))
    rcvd_usage = NULL_RCVD_INFO;
else
    rcvd_usage = usage_pkt.usage;

The following is evaluated every decay interval:

congested = (lo_tb_depth > TB_LO_THRESHOLD/2);

// Note: lp values must be calculated prior to the decrement of the non-lp values.
lp_my_usage = ((LP_MU-1) * lp_my_usage + my_usage) / LP_MU;
my_usage = my_usage - min(allow_usage/AGECOEFF, my_usage/AGECOEFF);
lp_fwd_rate = ((LP_FWD-1) * lp_fwd_rate + fwd_rate) / LP_FWD;
fwd_rate = fwd_rate - fwd_rate/AGECOEFF;

if (rcvd_usage != NULL_RCVD_INFO)
    allow_usage = rcvd_usage;
else
    allow_usage += (MAX_LRATE - allow_usage) / LP_ALLOW;

if (congested)
{
    if (lp_my_usage < rcvd_usage)
        rev_usage = lp_my_usage;
    else
        rev_usage = rcvd_usage;
}
else if ((rcvd_usage != NULL_RCVD_INFO) &&
         (lp_fwd_rate > allow_usage))
    rev_usage = rcvd_usage;
else
    rev_usage = NULL_RCVD_INFO;

if (rev_usage > MAX_LRATE)
    rev_usage = NULL_RCVD_INFO;

DPT/SRP Performance Simulation

This section presents five simulation scenarios designed to explore and demonstrate different aspects of DPT/SRP scalability, convergence, and real-time IP service support under various networking and traffic conditions.

The first scenario is a metro DPT ring with a large number of nodes covering 300 km. In this scenario, DPT/SRP technology exhibits superior scalability and fast usage convergence in aggregating a large number of nodes with highly bursty traffic.

The second scenario is a large DPT ring with the maximum of 128 nodes and a ring distance of about 500 km.

The third scenario is a DPT ring with very large hop-delay variation that experiences a link failure, ring wrap, link restoration, and ring unwrap. In this scenario, DPT provides consistent and fast convergence for TCP applications, with fast service restoration and almost no service degradation.

The fourth scenario directly compares a DPT/SRP ring with a competing Ethernet switched ring running the Spanning Tree Protocol (STP). DPT/SRP outperforms this popular competing technology with outstanding convergence and fairness.

The fifth simulation scenario is designed to demonstrate DPT support for VoIP traffic in the presence of a significant amount of arbitrary low-priority IP traffic across multiple DPT rings.

Scenario 1: Metro Access Aggregation Ring

This scenario shows a metro aggregation access ring with 33 nodes. The ring covers more than 300 km, with each fiber link segment contributing a delay of 50 µs. To simulate metro access aggregation to an uplink backbone network, each node starts transmitting to a common destination, node_0, at 0.5-second intervals. The DPT ring is shown in Figure 9.

Figure 9 Metro Access Ring with 33 Nodes

Each node sends traffic at a peak rate of OC-12 with mean on and off periods of 200 ms and 800 ms, respectively; both periods are exponentially distributed. The average traffic rate is therefore about 125 Mbps. Within each traffic source, packet arrivals during the on period are also exponentially distributed, and packet size is exponentially distributed with a mean of 512 bytes. The traffic source profile is depicted in Figure 10, and the traffic transmitted onto the ring by each node is shown in Figure 11.
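As a sanity check on the quoted average, peak rate times the on fraction gives 622.08 Mbps x 200/(200 + 800), or about 124 Mbps, consistent with the 125 Mbps figure. The C sketch below (an assumed random seed and a plain exponential sampler, not the OPNET source model itself) estimates the same long-run average from randomly drawn on/off periods.

/* Rough sanity check of the on/off source (not the OPNET model itself):
 * exponential on/off periods with means of 200 ms and 800 ms and a peak
 * rate of one OC-12 (~622 Mbps). The long-run average should approach
 * 622.08 * 200 / (200 + 800), roughly 124 Mbps. The seed is arbitrary. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double exp_rand(double mean)
{
    /* inverse-transform sample of an exponential distribution */
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return -mean * log(u);
}

int main(void)
{
    const double peak_mbps = 622.08;    /* OC-12c line rate */
    const double mean_on_ms = 200.0, mean_off_ms = 800.0;
    double t = 0.0, busy = 0.0;

    srand(1);
    while (t < 1.0e6) {                 /* simulate about 1000 seconds */
        double on  = exp_rand(mean_on_ms);
        double off = exp_rand(mean_off_ms);
        busy += on;
        t    += on + off;
    }
    printf("long-run average rate: %.1f Mbps (expected about 124.4)\n",
           peak_mbps * busy / t);
    return 0;
}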

The results clearly show the fast convergence and excellent fairness in a metro aggregation access DPT/SRP ring.

Figure 10 Traffic Source Profiles

Figure 11 Traffic on the Ring

Scenario 2: DPT OC-12 Ring With 128 Nodes

This scenario presents the ultimate challenge for the SRP-fa in scalable and cost-effective DPT internetworking solutions. There are 128 nodes in a DPT ring covering a total distance of more than 500 km. Each fiber link accounts for a 20 µs delay, for a total ring delay of 2560 µs. A portion of this large ring is shown in Figure 12.

Figure 12 OC-12c/STM-4 DPT Ring with 128 Nodes

As in Scenario 1, each node starts transmitting to a common destination, node 0, at 0.5-second intervals. The transmission peak rate is OC-12c/STM-4. The on and off periods are 100 ms and 900 ms, respectively, and are exponentially distributed. The average transmission rate is about 62.2 Mbps. Packet arrival is exponentially distributed during the on period, and packet size is exponentially distributed with a mean of 512 bytes. A sampled traffic source profile is depicted in Figure 13.

Figure 13 Sampled Traffic Source Profiles

The throughput from the sampled ring is shown in Figure 14. Even with 128 nodes, DPT/SRP can achieve rapid and excellent convergence and fairness.

Figure 14 Sampled Ring Throughput

Scenario 3: DPT OC-12 Ring with Large Link Delay Variations under Link Failure

In intraPoP connectivity, some access nodes are often co-located with uplink aggregation nodes while other access nodes may be sited thousands of kilometers away. One such DPT ring is shown in Figure 15.

There are six nodes in the ring, three of which (San Francisco 1, 2, and 3) are located in one building. The fiber links between them have a delay of 1 µs. Nodes Berkeley and Oakland have a 20 µs delay between them and to the San Francisco nodes. Node San Diego is farther away on the ring, with a 5 ms fiber delay to the other nodes.
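For reference, light propagates in fiber at roughly 2 x 10^8 m/s, or about 5 µs per kilometer one way. The small sketch below converts the link delays above into rough fiber lengths; the 5 µs/km figure is an approximation, and the resulting distances are derived values, not taken from the simulation topology.

/* Back-of-the-envelope conversion between one-way fiber delay and fiber
 * length, assuming propagation at ~2e8 m/s (about 5 us per km). The derived
 * distances are illustrative; they are not taken from the simulation. */
#include <stdio.h>

int main(void)
{
    const double us_per_km = 5.0;                       /* ~1 / (2e8 m/s) */
    const double delays_us[] = { 1.0, 20.0, 5000.0 };   /* 1 us, 20 us, 5 ms */

    for (int i = 0; i < 3; i++)
        printf("%8.1f us of delay  ~ %7.1f km of fiber\n",
               delays_us[i], delays_us[i] / us_per_km);
    return 0;
}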

Figure 15 OC-12c/STM-4 DPT Access Aggregation Ring

As shown in Figure 15, there are three inner-ring traffic streams: from San Diego, San Francisco3, and San Francisco1, each to San Francisco2. Each stream is highly bursty and utilizes about 44 percent of the OC-12c/STM-4 bandwidth. In addition, there are two outer-ring traffic streams, from Oakland and Berkeley to San Francisco2. The Oakland stream utilizes about 34 percent of the OC-12c/STM-4 bandwidth and the Berkeley stream about 26 percent. The stream transmission times are depicted in Figure 16.

Figure 16 Ring Traffic

At four seconds into the simulation, the inner-ring fiber link between San Francisco1 and San Francisco2 fails. DPT IPS wraps both San Francisco1 and San Francisco2 within 50 ms and reroutes the traffic streams along the outer ring to San Francisco2. At six seconds into the simulation, the fiber link failure is cleared and the ring unwraps.

The traffic throughput on the ring is also shown in Figure 16. The total received traffic on San Francisco2 is shown in Figure 17.

Figure 17 Total Received Traffic on San Francisco2

As shown in the simulation results, DPT/SRP-fa exhibits highly robust resilience and achieves fast convergence and excellent fairness even under link failure and ring wrap situations. This guarantees minimum interruption and consistent traffic dynamics for Layer 3 services.

Scenario 4: DPT Ring versus Ethernet Ring

This scenario compares competing metro access aggregation ring technologies. A DPT/SRP ring was compared with a competing Ethernet switched ring technology in an equal setting, as shown in Figures 18 and 19.

Figure 18 Ethernet Ring

The two rings aggregate similar Ethernet LAN TCP/UDP traffic. To create bandwidth starvation and competition, the Ethernet ring link segment between switches node 3 and node 4 was purposely left broken. Thus, STP blocks path selection between node 3 and node 4 and forces an unnecessary multihop path with reduced bandwidth multiplication. The Ethernet ring links are limited to 10BaseT.

Figure 19 OC-12c/STM-4 DPT Ring

SRP chooses the inner or outer ring to ensure a minimum-hop-count path and increase bandwidth multiplication. For the sake of comparison, only the outer ring was used.
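A minimal sketch of minimum-hop ring selection is shown below. The ascending/descending direction assignment and the node numbering are assumptions for illustration; in practice the choice is driven by SRP topology information rather than this toy function.

/* Hypothetical sketch of minimum-hop ring selection on an N-node ring.
 * The direction assignment (outer = ascending node IDs) and numbering are
 * assumptions for illustration; the real choice is driven by SRP topology
 * information. */
#include <stdio.h>

#define NODES 6

static const char *choose_ring(int src, int dst)
{
    int outer_hops = (dst - src + NODES) % NODES;   /* ascending direction  */
    int inner_hops = (src - dst + NODES) % NODES;   /* descending direction */
    return (outer_hops <= inner_hops) ? "outer" : "inner";
}

int main(void)
{
    printf("node 1 -> node 2: use the %s ring\n", choose_ring(1, 2));
    printf("node 1 -> node 5: use the %s ring\n", choose_ring(1, 5));
    return 0;
}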

By enabling similar TCP/UDP server and client applications in both rings, the TCP/UDP traffic patterns shown in Figures 18 and 19 were created.

Figure 20 Ethernet Ring Throughput

Ethernet ring throughput and DPT/SRP ring throughput are shown in Figure 20 and Figure 21, respectively.

Because of the lack of a fairness mechanism in its Layer 2 access control, the Ethernet ring exhibits poor to no fairness in the presence of unresponsive UDP traffic. In contrast, the dynamic bandwidth sharing and statistical multiplexing algorithm of SRP ensures fairness and fast convergence across all applications.

Figure 21 DPT/SRP Ring Throughput

Scenario 5: VoIP across DPT Hierarchy

A real-time service such as VoIP requires not only a stringent end-to-end delay guarantee but also well-bounded delay variation. DPT/SRP provides real-time service support with high-priority transmission in coordination with Layer 3 class of service (CoS). However, because low-priority traffic is dynamically bandwidth-shared and statistically multiplexed, real-time traffic traversing DPT aggregation and distribution rings may raise concerns about end-to-end delay and jitter.

To address these concerns, a VoIP application was simulated across a DPT hierarchy in the presence of significant low-priority traffic on the rings. The simulated DPT hierarchy, consisting of an OC-12c/STM-4 aggregation ring, an OC-48c/STM-16 backbone ring, and an OC-12c/STM-4 distribution ring, is depicted in Figures 22, 23, and 24, respectively.

VoIP calls originate in the Gigabit Ethernet LANs shown in Figures 22 and 24, with each LAN assumed to host 200 callers. The voice traffic is encoded with G.711. The aggregated voice traffic and an individual voice source are profiled in Figure 25.
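To put the voice load in perspective: G.711 produces a 64-kbps media stream per call, so with an assumed 20-ms packetization interval (the interval is not stated in the scenario) each call carries 160 bytes of payload plus roughly 40 bytes of RTP/UDP/IP headers per packet, about 80 kbps per direction, and 200 simultaneous callers per LAN generate on the order of 16 Mbps of voice traffic. A small sketch of that arithmetic follows.

/* G.711 bandwidth estimate for one LAN of callers. The 20-ms packetization
 * interval and the RTP/UDP/IP header sizes are assumptions for illustration;
 * the scenario states only that each LAN hosts 200 G.711 callers. */
#include <stdio.h>

int main(void)
{
    const int    callers       = 200;
    const int    codec_bps     = 64000;                                    /* G.711 */
    const double pkt_ms        = 20.0;                                     /* assumed packetization */
    const int    payload_bytes = (int)(codec_bps / 8 * pkt_ms / 1000.0);   /* 160 B per packet */
    const int    header_bytes  = 40;                                       /* RTP(12)+UDP(8)+IP(20) */
    const double pps           = 1000.0 / pkt_ms;                          /* 50 packets/s per call */
    const double call_bps      = (payload_bytes + header_bytes) * 8 * pps;

    printf("per-call rate: %.0f kbps\n", call_bps / 1000.0);
    printf("per-LAN rate:  %.1f Mbps for %d callers\n",
           callers * call_bps / 1.0e6, callers);
    return 0;
}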

Overall voice and background traffic flows are depicted in Figure 26.

Figure 22 Pittsburgh OC-12c/STM-4 DPT Ring

Figure 23 OC-48c/STM-16 DPT Backbone

Figure 24 Boston OC-12c/STM-4 DPT Ring

The reference VoIP traffic is shown from source to destination. There are a total of nine background traffic streams: five carry low-priority traffic and four carry high-priority traffic. The background traffic distribution is shown in Figure 26. Each background stream accounts for about 45 percent of its ring bandwidth. The packet size for the high-priority background traffic is 128 bytes.

Figure 25 Voice Traffic

Figure 26 Traffic Flows

There are three simulation runs, with low-priority packet sizes of 1.5 KB, 4 KB, and 9 KB, respectively.

The instantaneous packet end-to-end delay measurements for the three runs are depicted in Figure 27. The Probability Density Functions (PDF) of the measurements are depicted in Figure 28.

Figure 27 End-to-End Delay Per Voice Packet

Figure 28 PDF of Voice Traffic End-to-End Delay

Summary

By using OPNET modeling and simulation, the scalability of the DPT/SRP-fa has been stretched to its limits and validated. The simulations demonstrate the dynamic traffic multiplexing and priority scheduling capabilities of DPT, including strong delay guarantees and jitter control in support of IP-enhanced real-time services. In all, Cisco DPT/SRP is a highly promising technology for cost-effective and efficient IP internetworking. Leveraging its IP-packet-optimized MAC protocol, SRP, DPT brings superior scalability, fast convergence, high bandwidth efficiency, and robust resilience to LAN, MAN, and WAN internetworking and provides excellent support for enhanced IP services such as VoIP and video.

References

White paper, "Dynamic Packet Transport Technology and Applications Overview," http://www.cisco.com/warp/public/cc/techno/media/wan/sonet/dpt/dpta_wp.htm, July 2000.

D. Tsiang and G. Suwala, "The Cisco SRP MAC Layer Protocol," Internet Draft, draft-tsiang-srp-02.txt, May 2000.

B. J. Lee, "SRP Fairness Algorithm Simulation Results," http://www.ieee802.org/rprsg/public/presentations/mar2000/index.html, March 2000.

B. J. Lee and D. Xie, "RPR Solution (Rationale and Performance)," http://www.ieee802.org/rprsg/public/presentations/may2000/index.html, May 2000.

IP over Packet Transport Rings (IPoPTR) BOF, http://www.ietf.org/ietf/00jul/ipoptr-agenda.txt, 48th IETF, Pittsburgh, PA, August 2000.

Charles Mujie, "SRP Technology White Paper," February 2000.

