Data compression reduces the size of data frames to be transmitted over
a network link. Reducing the size of a frame reduces the time required to
transmit the frame across the network. Data compression provides a coding
scheme at each end of a transmission link that allows characters to be removed
from the frames of data at the sending side of the link and then replaced
correctly at the receiving side. Because the condensed frames occupy less
bandwidth, a greater volume of data can be transmitted in the same amount of time.
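Reducing frame size translates directly into reduced serialization delay. As a rough illustration (the 64 kbps link rate and frame size here are assumed values, and the 2:1 ratio matches the conservative average cited later in this document):

```python
# Toy illustration (not a Cisco tool): how shrinking a frame shrinks
# the time needed to serialize it onto a fixed-rate link.

def transmit_time_ms(frame_bytes: int, link_bps: int) -> float:
    """Time to clock one frame onto the link, in milliseconds."""
    return frame_bytes * 8 / link_bps * 1000

FRAME_BYTES = 1500    # a typical full-size frame (assumed)
LINK_BPS = 64_000     # e.g. a 64 kbps serial link (assumed)
RATIO = 2             # conservative 2:1 compression ratio

print(f"uncompressed: {transmit_time_ms(FRAME_BYTES, LINK_BPS):.1f} ms")
print(f"compressed:   {transmit_time_ms(FRAME_BYTES // RATIO, LINK_BPS):.1f} ms")
```

Halving the frame halves its serialization time, which is the benefit the paragraph above describes.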
We refer to the data compression schemes used in internetworking
devices as lossless compression algorithms. These schemes reproduce the
original bit streams exactly, with no degradation or loss. This feature is
required by routers and other devices to transport data across the network. The
two most commonly used compression algorithms on internetworking devices are
the Stacker compression and the Predictor data compression algorithms.
For more information on document conventions, see the
Cisco Technical Tips Conventions.
There are no specific prerequisites for this document.
This document is not restricted to specific software and hardware versions.
Data compression can be broadly classified into hardware and software
compression. Software compression, in turn, can be of two types:
CPU-intensive or memory-intensive.
Stacker compression is based on the Lempel-Ziv compression algorithm.
The Stacker algorithm uses an encoded dictionary that replaces a continuous
stream of characters with codes; the symbols represented by the codes are
stored in memory in a dictionary-style list. Because the relationship between a
code and the original symbol varies as the data varies, this approach is more
responsive to the variations in the data. This flexibility is particularly
important for LAN data, because many different applications can be transmitting
over the WAN at any one time. In addition, as the data varies, the dictionary
changes to accommodate and adapt to the varying needs of the traffic. Stacker
compression is more CPU-intensive and less memory-intensive.
To configure Stacker compression, issue the compress stac command from
interface configuration mode.
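Stacker itself is proprietary, but the adaptive-dictionary idea behind it can be sketched with a generic, textbook LZ78-style coder (an illustration of the technique, not Cisco's implementation):

```python
# A minimal LZ78-style dictionary coder: repeated character sequences are
# replaced by short codes, and the code-to-phrase dictionary grows and
# adapts as the data varies. (Stacker is proprietary; this is a generic
# textbook sketch of the Lempel-Ziv family it belongs to.)

def lz78_encode(data: str) -> list[tuple[int, str]]:
    dictionary = {"": 0}           # phrase -> code
    phrase = ""
    out = []
    for ch in data:
        if phrase + ch in dictionary:
            phrase += ch           # keep extending the known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:
        out.append((dictionary[phrase], ""))
    return out

def lz78_decode(codes: list[tuple[int, str]]) -> str:
    dictionary = {0: ""}
    out = []
    for code, ch in codes:
        phrase = dictionary[code] + ch
        out.append(phrase)
        dictionary[len(dictionary)] = phrase   # rebuild the same dictionary
    return "".join(out)

sample = "ABABABABAB"
codes = lz78_encode(sample)
assert lz78_decode(codes) == sample    # lossless round trip
```

The decoder reconstructs the identical dictionary from the code stream itself, which is what makes the scheme lossless with no degradation.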
The Predictor compression algorithm tries to predict the next sequence
of characters in a data stream by using an index to look up a sequence in the
compression dictionary. It then examines the next sequence in the data stream
to see if it matches. If it does, that sequence replaces the looked-up sequence
in the dictionary. If there is no match, the algorithm locates the next
character sequence in the index and the process begins again. The index updates
itself by hashing a few of the most recent character sequences from the input
stream. No time is spent trying to compress already compressed data. The
compression ratio obtained with Predictor is not as good as that of other
compression algorithms, but it remains one of the fastest available. Predictor
is more memory-intensive and less CPU-intensive.
To configure Predictor compression, issue the compress predictor command
from interface configuration mode.
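The guess-table mechanism that RFC 1978 describes for PPP Predictor can be sketched briefly. The toy below only counts how many bytes would be predicted (a real coder packs flag bits and literals into the output); the hash mirrors the one in the RFC's sample code:

```python
# Simplified sketch of the Predictor idea (after RFC 1978): a hash of the
# recent input indexes a guess table; when the table's guess matches the
# next byte, the coder needs only a flag bit instead of the byte itself.
# This toy counts predicted bytes rather than emitting real output.

GUESS_TABLE_SIZE = 1 << 16

def predictor_stats(data: bytes) -> tuple[int, int]:
    """Return (predicted, total) byte counts for one pass over the data."""
    table = bytearray(GUESS_TABLE_SIZE)
    h = 0
    predicted = 0
    for b in data:
        if table[h] == b:
            predicted += 1          # hit: a real coder emits just a flag bit
        else:
            table[h] = b            # miss: emit the literal, update the guess
        h = ((h << 4) ^ b) & (GUESS_TABLE_SIZE - 1)   # roll the hash forward
    return predicted, len(data)

hits, total = predictor_stats(b"abcabcabcabcabcabc" * 20)
print(f"{hits}/{total} bytes predicted")
```

Repetitive traffic quickly fills the table and predicts well; the per-byte work is just one lookup and one hash update, which is why Predictor is fast but memory-hungry.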
Cisco internetworking devices use the Stacker and Predictor data
compression algorithms. The Compression Service Adapter (CSA) only supports the
Stacker algorithm. The Stacker method is the most versatile, because it runs on
any supported point-to-point layer-2 encapsulation. Predictor supports only PPP.
There are no industry-standard compression specifications, but Cisco
IOS® software supports several third-party compression algorithms, including
Hi/fn Stac Lempel-Ziv (LZS), Predictor, and Microsoft Point-to-Point
Compression (MPPC). These compress data on a per-connection basis or at the
network trunk level.
Compression can take place on an entire-packet, header-only, or
payload-only basis. The success of these solutions is easy to measure via
compression ratio and platform latency.
Cisco IOS software supports the following data compression products:
FRF.9, for Frame Relay compression
Link Access Procedure, Balanced (LAPB) payload compression using LZS or Predictor
High-Level Data Link Control (HDLC) using LZS
X.25 payload compression of encapsulated traffic
Point-to-Point Protocol (PPP) using LZS, Predictor, and Microsoft
Point-to-Point Compression (MPPC)
However, compression is not always appropriate, and its effectiveness can be
affected by the following factors:
No Standards: Although Cisco IOS software supports
several compression algorithms, they are proprietary and not necessarily
interoperable with other vendors' implementations.
Note: Both ends of a compression transaction must support the same
compression algorithm.
Data Type: The same compression algorithm yields
different compression ratios depending upon the type of data undergoing
compression. Certain data types are inherently more compressible than others;
highly compressible data can realize up to a 6:1 compression ratio. Cisco
conservatively averages Cisco IOS compression ratios at 2:1.
Already Compressed Data: Trying to compress already
compressed data, such as JPEG or MPEG files, can take longer than transmitting
the data without any compression at all.
Processor Usage: Software compression solutions
consume valuable processor cycles in the router. Routers must also support
other functions such as management, security, and protocol translation;
compressing large amounts of data can slow down overall router performance.
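Both the data-type caveat and the already-compressed-data caveat are easy to demonstrate with a general-purpose Lempel-Ziv-family compressor such as zlib, standing in here for a router's lossless algorithm:

```python
# Illustrative only: zlib is not Stacker or Predictor, but it shows the same
# two effects: compression ratios depend on the data type, and recompressing
# already-compressed output yields little or no further gain.
import os
import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 200
random_like = os.urandom(len(text))      # stands in for incompressible data

text_out = zlib.compress(text)
random_out = zlib.compress(random_like)
second_pass = zlib.compress(text_out)    # compressing compressed data

print(f"text ratio:   {len(text) / len(text_out):.1f}:1")
print(f"random ratio: {len(random_like) / len(random_out):.2f}:1")
print(f"first pass:   {len(text_out)} bytes, second pass: {len(second_pass)} bytes")
```

Repetitive text compresses dramatically, random-looking data barely at all, and a second pass over compressed output spends CPU cycles for no benefit.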
The highest compression ratio is usually reached with highly
compressible text files. Compressing data can cause performance degradation
because it is software, not hardware compression. While configuring
compression, use caution with smaller systems that have less memory and slower
processors.
CSA performs hardware-assisted, high-performance compression for Cisco
Internetwork Operating System (Cisco IOS) compression services. It is
available for all Cisco 7500 series, 7200 series, and RSP7000-equipped 7000
series routers.
CSA provides high performance compression at the central site. It is
able to receive multiple compression streams coming from remote Cisco routers
using Cisco IOS software-based compression. CSA maximizes router performance by
offloading compression algorithms from the central processing engines of the
RSP7000, 7200, and 7500 (using distributed compression), allowing them to
remain dedicated to routing and other specialized tasks.
When used in the Cisco 7200 Series router, the CSA can offload
compression at any interface. If used on the VIP2, it offloads compression at
the adjacent port adapter on the same VIP only.
The compression network module dramatically raises the compression
bandwidth of the Cisco 3600 series by off-loading the intensive processing that
compression requires from the main CPU. It uses a dedicated, optimized
co-processor design that supports full-duplex compression and decompression. The
compression is at the link layer (Layer 2) and is supported for PPP and Frame
Relay.
Low-speed WAN compression can often be supported by the Cisco IOS
software executing on the main Cisco 3600 series CPU. For the Cisco 3620, this
bandwidth is well below T1/E1 rates and for the Cisco 3640, it approaches T1
rates. However, you cannot achieve these rates if the Cisco 3600 system has
other processor-intensive tasks to execute as well. The compression network
module off-loads the main CPU so that it can handle other tasks while raising
the compression bandwidth on both the Cisco 3620 and the Cisco 3640 to 2 E1
full duplex (2 x 2.048 Mbps full duplex). You can use this bandwidth for a
single channel or circuit, or spread it across as many as 128. Examples range from
an E1 or T1 leased line to 128 ISDN B channels or Frame Relay virtual circuits.
The Data Compression Advanced Integration Module (AIM) for the Cisco
3660 Series uses either of the two available Cisco 3660 internal AIM slots,
ensuring that external slots remain available for components such as integrated
analog voice/fax, digital voice/fax, ATM, channel service unit/digital service
units (CSU/DSUs), and analog and digital modems.
Data compression technology maximizes bandwidth and increases WAN link
throughput by reducing frame size and thereby allowing more data to be
transmitted over a link. While software-based compression capabilities can
support fractional T1/E1 rates, hardware based compression off-loads the
platform's main processor to deliver even higher levels of throughput. With a
compression ratio of up to 4:1, the Data Compression AIM supports 16 Mbps of
compressed data throughput without imposing additional traffic latency, enough
to keep four T1 or E1 circuits full of compressed data in both directions
simultaneously. The Data Compression AIM supports the LZS and Microsoft
Point-to-Point Compression (MPPC) algorithms.
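The stated throughput can be cross-checked with simple arithmetic (the circuit rates are standard; "16 Mbps" appears to be a rounded figure):

```python
# Sanity-checking the AIM throughput claim: four circuits, both directions.
E1_BPS = 2_048_000   # one E1 circuit
T1_BPS = 1_544_000   # one T1 circuit

for name, rate in (("T1", T1_BPS), ("E1", E1_BPS)):
    required = 4 * 2 * rate    # 4 circuits x 2 directions
    print(f"4 x {name} full duplex: {required / 1e6:.3f} Mbps")
```

Four E1s in both directions come to about 16.4 Mbps, consistent with the roughly 16 Mbps of compressed throughput the module is said to sustain.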
The Data Compression AIM for the Cisco 2600 Series uses the Cisco 2600's
internal Advanced Integration Module slot, so that external slots remain
available for components such as integrated CSU/DSUs and analog modems.
The Data Compression AIM supports 8 Mbps of compressed data throughput
without imposing additional traffic latency, and it supports the LZS and
Microsoft Point-to-Point Compression (MPPC) algorithms.