Cisco IP Contact Center Enterprise Edition Releases 5.0 and 6.0 Solution Reference Network Design (SRND)
Bandwidth Provisioning and QoS Considerations


Table Of Contents

Bandwidth Provisioning and QoS Considerations

IPCC Network Architecture Overview

Network Segments

UDP Heartbeat and TCP Keep-Alive

IP-Based Prioritization and QoS

Traffic Flow

Public Network Traffic Flow

Private Network Traffic Flow

Bandwidth and Latency Requirements

Network Provisioning

Quality of Service

QoS Planning

Public Network Marking Requirements

Configuring QoS on IP Devices

Performance Monitoring

Bandwidth Sizing

IPCC Private Network Bandwidth

IPCC Public Network Bandwidth

Bandwidth Requirements for CTI OS Agent Desktop

CTI OS Client/Server Traffic Flows and Bandwidth Requirements

Best Practices and Options for CTI OS Server and CTI OS Agent Desktop

Bandwidth Requirements for Cisco Agent Desktop Release 6.0

Call Control Bandwidth Usage

Silent Monitoring Bandwidth Usage

Service Placement Recommendations

Quality of Service (QoS) Considerations

Cisco Desktop Component Port Usage

Integrating Cisco Agent Desktop Release 6.0 into a Citrix Thin-Client Environment


Bandwidth Provisioning and QoS Considerations


This chapter presents an overview of the IPCC Enterprise network architecture, deployment characteristics of the network, and provisioning requirements of the IPCC network. Essential network architecture concepts are introduced, including network segments, keep-alive (heartbeat) traffic, flow categorization, IP-based prioritization and segmentation, and bandwidth and latency requirements. Provisioning guidelines are presented for network traffic flows between remote components over the WAN, including recommendations on how to apply proper Quality of Service (QoS) to WAN traffic flows. For a more detailed description of the IPCC architecture and various component internetworking, see Architecture Overview.

Cisco IP Contact Center (IPCC) has traditionally been deployed using private, point-to-point leased-line network connections for both its private (duplexed controller, side-to-side) and its public (Peripheral Gateway to Central Controller) WAN network structure. Dedicated private facilities, redundant IP routers, and appropriate priority queuing provide the IPCC application with optimal network performance characteristics and with the route diversity required by its fault-tolerant failover mechanisms.

Enterprises deploying networks that carry multiple traffic classes naturally prefer to keep their existing infrastructure rather than build an additional, dedicated network. Converged networks offer both cost and operational efficiency, and support for such networks is a key aspect of Cisco Powered Networks.

Beginning with IPCC Enterprise Release 5.0, application layer Quality of Service (QoS) packet marking on the IPCC public path is supported from within the IPCC application, thus simplifying WAN deployment in a converged network environment when that network is enabled for QoS. QoS deployment on the public network enables remote Peripheral Gateways (PGs) to share a converged network while still guaranteeing that the stringent latency, bandwidth, and prioritization requirements of ICM/IPCC real-time traffic are met. This chapter presents recommendations for configuring QoS for the traffic flows over the WAN. The public network that connects the remote PGs to the Central Controller is the main focus.

Historically, two QoS models have been used: Integrated Services (IntServ) and Differentiated Services (DiffServ). The IntServ model relies on the Resource Reservation Protocol (RSVP) to signal and reserve the desired QoS for each flow in the network. Scalability becomes an issue with IntServ because state information of thousands of reservations has to be maintained at every router along the path. DiffServ, in contrast, categorizes traffic into different classes, and specific forwarding treatments are then applied to the traffic class at each network node. As a coarse-grained, scalable, and end-to-end QoS solution, DiffServ is more widely used and accepted. IPCC applications are not aware of RSVP and, therefore, IPCC does not support IntServ. The QoS considerations in this chapter are based on DiffServ.

Adequate bandwidth provisioning is a critical component in the success of IPCC deployments. Bandwidth guidelines and examples are provided in this chapter to help with provisioning the required bandwidth.

IPCC Network Architecture Overview

IPCC is a distributed, resilient, and fault-tolerant network application that relies heavily on a network infrastructure with sufficient performance to meet the real-time data transfer requirements of the product. A properly designed IPCC network is characterized by proper bandwidth, low latency, and a prioritization scheme favoring specific UDP and TCP application traffic. These design requirements are necessary to ensure both the fault-tolerant message synchronization of specific duplexed Cisco Intelligent Contact Management (ICM) nodes (Central Controller and Peripheral Gateway) as well as the delivery of time-sensitive system status data (agent states, call statistics, trunk information, and so forth) across the system. Expeditious delivery of PG data to the Central Controller is necessary for accurate call center state updates and fully accurate real-time reporting data.

In an IP Telephony environment, WAN and LAN traffic can be grouped into the following categories:

Voice and video traffic

Voice calls (voice carrier stream) consist of Real-Time Transport Protocol (RTP) packets that contain the actual voice samples between various endpoints such as PSTN gateway ports, IP IVR Q-points (ports), and IP phones.

Call control traffic

Call control consists of packets belonging to one of several protocols (H.323, MGCP, SCCP, or TAPI/JTAPI), according to the endpoints involved in the call. Call control functions include those used to set up, maintain, tear down, or redirect calls. For IPCC, control traffic includes routing and service control messages required to route voice calls to peripheral targets (such as agents, skill groups, or services) and other media termination resources (such as IP IVR ports) as well as the real-time updates of peripheral resource status.

Data traffic

Data traffic can include normal traffic such as email, web activity, and CTI database application traffic sent to the agent desktops, such as screen pops and other priority data. IPCC priority data includes data associated with non-real-time system states, such as events involved in reporting and configuration updates.

This chapter focuses primarily on the types of data flows and bandwidth used between a remote Peripheral Gateway (PG) and the ICM Central Controller (CC), on the network path between sides A and B of a PG or of the Central Controller, and on the CTI flows between the desktop application and CTI OS and/or Cisco Agent Desktop servers. Guidelines and examples are presented to help estimate required bandwidth and, where applicable, provision QoS for these network segments.

The flows discussed in this chapter cover the latter two of the three traffic groups listed above. Because media (voice and video) streams are maintained primarily between Cisco CallManager and its endpoints, voice and video provisioning is not addressed here.

For bandwidth estimates for the voice RTP stream generated by the calls to IPCC agents and the associated call control traffic generated by the various protocols, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at

http://www.cisco.com/go/srnd

Data traffic consisting of various HTTP, email, and other non-IPCC mission critical traffic will vary according to the specific integration and deployment model used, and this type of traffic is not addressed in this chapter. For information on proper network design for data traffic, refer to the Network Infrastructure and Quality of Service (QoS) documentation available at

http://www.cisco.com/go/srnd

Network Segments

The fault-tolerant architecture employed by IPCC requires two independent communication networks. The private network (or dedicated path) carries traffic necessary to maintain and restore synchronization between the systems and to allow clients of the Message Delivery Subsystem (MDS) to communicate. The public network carries traffic between each side of the synchronized system and foreign systems. The public network is also used as an alternate network by the fault-tolerance software to distinguish between node failures and network failures.


Note The terms public network and visible network are used interchangeably throughout this document.


A third network, the signaling access network, may be deployed in ICM systems that also interface directly with the carrier network (PSTN) and that deploy the Hosted ICM/IPCC architecture. The signaling access network is not addressed in this chapter.

Figure 8-1 illustrates the fundamental network segments for an IPCC Enterprise system with two PGs (with sides A and B co-located) and two geographically separated CallRouter servers.

Figure 8-1 Example of Public and Private Network Segments for an IPCC Enterprise System

The following notes apply to Figure 8-1:

The private network carries ICM traffic between duplexed sides of the CallRouter or a PG pair. This traffic consists primarily of synchronized data and control messages, and it also conveys the state transfer necessary to re-synchronize duplexed sides when recovering from an isolated state. When a router process and its logger process are deployed on separate nodes, most communication between them is also over the private network.

When deployed over a WAN, the private link is critical to the overall responsiveness of the Cisco ICM, and it must meet aggressive latency requirements. The private link must provide sufficient bandwidth to handle simultaneous synchronizer and state transfer traffic, with enough bandwidth in reserve for the additional data transferred during recovery operations. IP routers in the private network typically use priority queuing (based on the ICM private high/non-high IP addresses and, for UDP heartbeats, port numbers) to ensure that high-priority ICM traffic does not experience excessive queuing delay.

The public network carries traffic between the Central Controller and call centers (PGs and AWs). The public network can also serve as a Central Controller alternate path, used to determine which side of the Central Controller should retain control in the event that the two sides become isolated from one another. The public network is never used to carry synchronization control traffic.

Remote call centers connect to each Central Controller side via the public network. Each WAN link to a call center must have adequate bandwidth to support the PGs and AWs at the call center. The IP routers in the public network use IP-based priority queuing or QoS to ensure that ICM traffic classes are processed within acceptable tolerances for both latency and jitter.

Call centers (PGs and AWs) local to one side of the Central Controller connect to the local Central Controller side via the public Ethernet, and to the remote Central Controller side over public WAN links. This arrangement requires that the public WAN provide connectivity between side A and side B. Bridges may optionally be deployed to isolate PGs from the AW LAN segment to enhance protection against LAN outages.

To achieve the required fault tolerance, the private WAN link must be fully independent from the public WAN links (separate IP routers, network segments or paths, and so forth). Independent WAN links ensure that any single point of failure is truly isolated between public and private networks. Additionally, public network WAN segments traversing a routed network must be deployed so that PG-to-CallRouter route diversity is maintained throughout the network. Be sure to avoid routes that result in common path selection (and, thus, a common point of failure) for the multiple PG-to-CallRouter sessions (see Figure 8-1).

UDP Heartbeat and TCP Keep-Alive

The primary purpose of the UDP heartbeat design is to detect if a circuit has failed. Detection can be made from either end of the connection, based on the direction of heartbeat loss. Both ends of a connection send heartbeats at periodic intervals (typically every 100 or 400 milliseconds) to the opposite end, and each end looks for analogous heartbeats from the other. If either end misses 5 heartbeats in a row (that is, if a heartbeat is not received within a period that is 5 times the period between heartbeats), then the side detecting this condition assumes that something is wrong and the application closes the socket connection. At this point, a TCP Reset message is typically generated from the closing side. Loss of heartbeats can have various causes: for example, the network failed, the process sending the heartbeats failed, the machine on which the sending process resides was shut down, or the UDP packets were not properly prioritized.
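
The detection logic amounts to a simple receive-side watchdog. The following sketch (a Python illustration only, not the ICM implementation; the socket handling is simplified) shows the idea for a 400-ms heartbeat interval and the hard-coded limit of 5 missed heartbeats:

import socket
import time

HEARTBEAT_INTERVAL = 0.4   # seconds; 400 ms on the public network, 100 ms on the private network
MISSED_LIMIT = 5           # consecutive missed heartbeats that declare a circuit failure

def watch_heartbeats(sock):
    """Close the connection if no heartbeat arrives within 5 heartbeat intervals."""
    sock.settimeout(HEARTBEAT_INTERVAL)
    last_heard = time.monotonic()
    while True:
        try:
            sock.recv(64)                  # any heartbeat datagram resets the timer
            last_heard = time.monotonic()
        except socket.timeout:
            pass                           # no heartbeat during this interval
        if time.monotonic() - last_heard > MISSED_LIMIT * HEARTBEAT_INTERVAL:
            sock.close()                   # circuit assumed failed; the application closes its session
            return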

There are several parameters associated with heartbeats. In general, you should leave these parameters set to their system default values. Some of these values are specified when a connection is established, while others can be specified by setting values in the Microsoft Windows 2000 registry. The two values of most interest are:

The amount of time between heartbeats

The number of missed heartbeats (currently hard-coded as 5) that the system uses to determine whether a circuit has apparently failed

The default value for the heartbeat interval is 100 milliseconds between the central sites, meaning that one site can detect the failure of the circuit or the other site within 500 ms. Prior to ICM Release 5.0, the default heartbeat interval between a central site and a peripheral gateway was 400 ms, meaning that the circuit failure threshold was 2 seconds in this case.

In ICM Releases 5.0 and 6.0, as a part of the ICM QoS implementation, the UDP heartbeat is replaced by a TCP keep-alive message in the public network connecting a Central Controller to a Peripheral Gateway. (An exception is that, when an ICM Release 5.0 or 6.0 Central Controller talks to a PG that is prior to Release 5.0, the communication automatically reverts to the UDP mechanism.) Note that the UDP heartbeat remains unchanged in the private network connecting duplexed sites.

The TCP keep-alive feature, provided in the TCP stack, detects a dead connection and causes the detecting side (server or client) to terminate it. It operates by sending probe packets (keep-alive packets) across a connection after the connection has been idle for a certain period; the connection is considered down if no keep-alive response is heard from the other side. Microsoft Windows 2000 allows keep-alive parameters to be specified on a per-connection basis. For ICM public connections, the keep-alive timeout is set to 5 * 400 ms, meaning that a failure can be detected after 2 seconds, as was the case with the UDP heartbeat prior to Release 5.0.
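
For illustration, the following sketch shows how a per-connection keep-alive can be set through the Windows socket API (Python syntax; the 400-ms values mirror the timing described above, but this is not the ICM code, and the exact options ICM uses may differ):

import socket

def enable_keepalive(sock, first_probe_ms=400, probe_interval_ms=400):
    """Turn on TCP keep-alive for one connection (Windows-specific ioctl)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # SIO_KEEPALIVE_VALS takes (on/off, idle time before the first probe in ms, interval between probes in ms)
    sock.ioctl(socket.SIO_KEEPALIVE_VALS, (1, first_probe_ms, probe_interval_ms))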

The reasons for moving to TCP keep-alive are as follows:

The use of UDP heartbeats creates deployment complexities in a firewall environment. The dynamic port allocation for heartbeat communications makes it necessary to open a large range of port numbers, thus defeating the original purpose of the firewall device.

In a converged network, the algorithms that routers use to handle congestion affect TCP and UDP differently. As a result, the delays and loss experienced by UDP heartbeat traffic can, in some cases, bear little relation to the conditions experienced by the TCP connections that carry the actual data.

IP-Based Prioritization and QoS

Simply stated, traffic prioritization is needed because it is possible for large amounts of low-priority traffic to get in front of high-priority traffic, thereby delaying delivery of high-priority packets to the receiving end. In a slow network flow, the amount of time a single large (for example, 1500-byte) packet consumes on the network (and delays subsequent packets) can exceed 100 ms. This delay would cause the apparent loss of one or more heartbeats. To avoid this situation, a smaller Maximum Transmission Unit (MTU) size is used by the application for low-priority traffic, thereby allowing a high-priority packet to get on the wire sooner. (MTU size for a circuit is calculated from within the application as a function of the circuit bandwidth, as configured at PG setup.)
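
The effect of packet size on a slow link is easy to quantify. The sketch below computes the serialization delay of a single packet; the link speeds shown are examples only:

def serialization_delay_ms(packet_bytes, link_kbps):
    """Time (in milliseconds) that one packet occupies the link."""
    return packet_bytes * 8 / link_kbps    # bits divided by bits-per-millisecond

# A full 1500-byte packet ties up a slow link long enough to delay heartbeats:
for kbps in (56, 64, 128, 256):
    print(kbps, "kbps:", round(serialization_delay_ms(1500, kbps), 1), "ms")
# 56 kbps: 214.3 ms   64 kbps: 187.5 ms   128 kbps: 93.8 ms   256 kbps: 46.9 ms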

A network that is not prioritized correctly almost always leads to call time-outs and problems from loss of heartbeat as the application load increases or (worse) as shared traffic is placed on the network. A secondary effect often seen is application buffer pool exhaustion on the sending side, due to extreme latency conditions.

ICM applications use three priorities - high, medium, and low. However, prior to QoS, the network effectively recognized only two priorities identified by source and destination IP address (high-priority traffic was sent to a separate IP destination address) and, in the case of UDP heartbeats, by specific UDP port range in the network. The approach with IP-based prioritization is to configure IP routers with priority queuing in a way that gives preference to TCP packets with a high-priority IP address and to UDP heartbeats over the other traffic.

A QoS-enabled network applies prioritized processing (queuing, scheduling, and policing) to packets based on QoS markings as opposed to IP addresses. ICM Release 6.0 provides marking capability of both Layer-3 DSCP and Layer-2 802.1p (using the Microsoft Windows Packet Scheduler) for public network traffic. Traffic marking implies that configuring dual IP addresses on the public Network Interface Controller (NIC) is no longer necessary if the public network is aware of QoS markings.

Traffic Flow

This section briefly describes the traffic flows for the public and private networks.

Public Network Traffic Flow

The active PG continuously updates the Central Controller call routers with state information related to agents, calls, queues, and so forth, at the respective call center sites. This type of PG-to-Central Controller traffic is real-time traffic. The PGs also send up historical data each half hour on the half hour. The historical data is low-priority, but it must complete its journey to the central site within the half hour (to get ready for the next half hour of data).

When a PG starts, its configuration data is supplied from the central site so that it can know which agents, trunks, and so forth it has to monitor. This configuration download can be a significant network bandwidth transient.

In summary, traffic flows from PG to Central Controller can be classified into the following distinct flows:

High-priority traffic — Includes routing and Device Management Protocol (DMP) control traffic. It is sent in TCP with the public high-priority IP address.

Heartbeat traffic — UDP messages with the public high-priority IP address and in the port range of 39500 to 39999. Heartbeats are transmitted at 400-ms intervals bidirectionally between the PG and the Central Controller. The UDP heartbeat traffic does not exist unless the Central Controller talks to a PG that is prior to Release 5.0.

Medium-priority traffic — Includes real-time traffic and configuration requests from the PG to the Central Controller. The medium-priority traffic is sent in TCP with the public high-priority IP address.

Low-priority traffic — Includes historical data traffic, configuration traffic from the Central Controller, and call close notifications. The low-priority traffic is sent in TCP with the public non-high-priority IP address.

Administrative Workstations (AWs) are typically deployed at ACD sites, and they share the physical WAN/LAN circuits that the PGs use. When this is the case, network activity for the AW must be factored into the network bandwidth calculations. This document does not address bandwidth sizing for AW traffic.

Private Network Traffic Flow

Traffic destined for the critical Message Delivery Subsystem (MDS) client (Router or OPC) is copied to the other side over the private link.

The private traffic can be summarized as follows:

High-priority traffic — Includes routing, MDS control traffic, and other traffic from MDS client processes such as the PIM, CTI Server, Logger, and so forth. It is sent in TCP with the private high-priority IP address.

Heartbeat traffic — UDP messages with the private high-priority IP address and in the port range of 39500 to 39999. Heartbeats are transmitted at 100-ms intervals bidirectionally between the duplexed sides.

Medium-priority and low-priority traffic — For the Central Controller, this traffic includes shared data sourced from routing clients as well as (non-route control) call router messages, including call router state transfer (independent session). For the OPC (PG), this traffic includes shared non-route control peripheral and reporting traffic. This class of traffic is sent in TCP sessions designated as medium-priority and low-priority, respectively, with the private non-high priority IP address.

State transfer traffic — State synchronization messages for the Router, OPC, and other synchronized processes. It is sent in TCP with a private non-high-priority IP address.

Bandwidth and Latency Requirements

The amount of traffic sent between the Central Controllers (call routers) and Peripheral Gateways is largely a function of the call load at that site, although transient boundary conditions (for example, startup configuration load) and specific configuration sizes also affect the amount of traffic. A rule of thumb that works well for ICM software releases prior to 5.0 in steady-state operation is 1,000 bytes (8 kb) of data is typically sent from a PG to the Central Controller for each call that arrives at a peripheral. Therefore, if a peripheral is handling 10 calls per second, we would expect to need 10,000 bytes (80 kb) of data per second to be communicated to the Central Controller. The majority of this data is sent on the low-priority path. The ratio of low to high path bandwidth varies with the characteristics of the deployment (most significantly, the degree to which post-routing is performed), but generally it is roughly 10 to 30 percent. Each post-route request generates between 200 and 300 additional bytes of data on the high-priority path. Translation routes incur per-call data flowing in the opposite direction (CallRouter to PG), and the size of this data is fully dependent upon the amount of call context presented to the desktop.

A site that has an ACD as well as a VRU has two peripherals, and the bandwidth requirement calculations should take both peripherals into account. As an example, a site that has 4 peripherals, each taking 10 calls per second, should generally be configured to have 320 kbps of bandwidth. The 1,000 bytes per call is a rule of thumb, but the actual behavior should be monitored once the system is operational to ensure that enough bandwidth exists. (ICM meters data transmission statistics at both the CallRouter and PG sides of each path.)
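
The pre-5.0 rule of thumb reduces to a simple calculation. The sketch below reproduces the four-peripheral example; it is a rough steady-state estimate only:

BYTES_PER_CALL = 1000     # rule-of-thumb PG-to-Central Controller data per call (pre-5.0)

def pg_to_cc_kbps(peripherals, calls_per_second_each):
    """Approximate steady-state public-network bandwidth, in kbps."""
    bytes_per_second = peripherals * calls_per_second_each * BYTES_PER_CALL
    return bytes_per_second * 8 / 1000

print(pg_to_cc_kbps(4, 10))   # 4 peripherals at 10 calls per second each -> 320.0 kbps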

Again, the rule of thumb and example described here apply to ICM releases prior to 5.0, and they are stated here for reference purposes only. Two bandwidth calculators are supplied for ICM Releases 5.0 and 6.0, and they can project bandwidth requirements far more accurately. See Bandwidth Sizing for more details.

As with bandwidth, specific latency requirements must be guaranteed in order for the ICM to function as designed. The side-to-side private network of duplexed CallRouter and PG nodes has a maximum one-way latency of 100 ms (50 ms preferred). The PG-to-CallRouter path has a maximum one-way latency of 200 ms in order to perform as designed. Meeting or exceeding these latency requirements is particularly important in an environment using ICM post-routing and/or translation routes.

As discussed previously, ICM bandwidth and latency design is fully dependent upon an underlying IP prioritization scheme. Without proper prioritization in place, WAN connections will fail. The Cisco ICM support team has custom tools (for example, Client/Server) that can be used to demonstrate proper prioritization and to perform some level of bandwidth utilization modeling for deployment certification.

Depending upon the final network design, an IP queuing strategy will be required in a shared network environment to achieve ICM traffic prioritization concurrent with other non-ICM traffic flows. This queuing strategy is fully dependent upon traffic profiles and bandwidth availability. As discussed earlier, success in a shared network cannot be guaranteed unless the stringent bandwidth, latency, and prioritization requirements of the product are met.

Network Provisioning

This section covers:

Quality of Service

Bandwidth Sizing

Bandwidth Requirements for CTI OS Agent Desktop

Bandwidth Requirements for Cisco Agent Desktop Release 6.0

Quality of Service

This section covers:

QoS Planning

Public Network Marking Requirements

Configuring QoS on IP Devices

Performance Monitoring

QoS Planning

In planning QoS, a question often arises about whether to mark traffic in the application or at the network edge. Marking traffic in the application avoids the need for access lists to classify traffic in IP routers and switches, and it might be the only option if traffic flows cannot be differentiated by IP address, port, or other TCP/IP header fields. As mentioned earlier, ICM currently supports DSCP markings on the public network connection between the Central Controller and the PG. Additionally, when deployed with Microsoft Windows Packet Scheduler, ICM offers shaping and 802.1p marking. The shaping functionality mitigates the bursty nature of ICM transmissions by smoothing transmission peaks over a given time period, thereby smoothing network usage. The 802.1p capability, a LAN QoS mechanism, allows high-priority packets to enter the network ahead of low-priority packets in a congested Layer-2 network segment.

Traffic can be marked or remarked on edge routers and switches if it is not marked at its source, or if QoS trust is disabled. Trust is typically disabled to prevent non-priority users in the network from falsely setting the DSCP or 802.1p values of their packets to inflated levels so that they receive priority service. For classification criteria definitions on edge routers and switches, see Table 8-1.

Public Network Marking Requirements

The ICM QoS markings are set in compliance with Cisco IP Telephony recommendations but can be overwritten if necessary. Table 8-1 shows the default markings of public network traffic, latency requirement, IP address, and port associated with each priority flow.

For details about Cisco IP Telephony packet classifications, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at

http://www.cisco.com/go/srnd


Note Cisco has begun to change the marking of voice control protocols from DSCP 26 (PHB AF31) to DSCP 24 (PHB CS3). However, many products still mark signaling traffic as DSCP 26 (PHB AF31). Therefore, in the interim, Cisco recommends that you reserve both AF31 and CS3 for call signaling.


Table 8-1 Public Network Traffic Markings (Default) and Latency Requirements

Priority   IP Address and Port                                                    Latency Requirement   DSCP / 802.1p Using Packet Scheduler   DSCP / 802.1p Bypassing Packet Scheduler
High       High-priority public IP address and high-priority connection port      200 ms                AF31 / 3                               AF31 / 3
Medium     High-priority public IP address and medium-priority connection port    1,000 ms              AF31 / 3                               AF21 / 2
Low        Non-high-priority public IP address and low-priority connection port   5 seconds             AF11 / 1                               AF11 / 1


Configuring QoS on IP Devices

This section presents some representative QoS configuration examples. For details about Cisco campus network design, switch selection, and QoS configuration commands, refer to the Cisco Enterprise Campus documentation available at

http://www.cisco.com/en/US/netsol/ns340/ns394/ns431/networking_solutions_packages_list.html


Note The terms public network and visible network are used interchangeably throughout this document.


Configuring 802.1q Trunks on IP Switches

If 802.1p is an intended feature and the 802.1p tagging is enabled on the visible network NIC card, the switch port into which the ICM server plugs must be configured as an 802.1q trunk, as illustrated in the following configuration example:

switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan [data/native VLAN #]
switchport voice vlan [voice VLAN #]
switchport priority extend trust
spanning-tree portfast

Configuring QoS trust

Assuming ICM DSCP markings are trusted, the following commands enable trust on an IP switch port:

mls qos 
    interface mod/port 
        mls qos trust dscp 

Configuring QoS Class to Classify Traffic

If the ICM traffic comes with two marking levels, AF31 for high and AF11 for non-high, the following class maps can be used to identify the two levels:

class-map match-all ICM_Visible_High
    match ip dscp af31
class-map match-all ICM_Visible_NonHigh
    match ip dscp af11

Configuring QoS Policy to Act on Classified Traffic

The following policy map puts ICM high-priority traffic into the priority queue with a minimum and maximum bandwidth guarantee of 500 kbps. The non-high-priority traffic is allocated a minimum bandwidth of 250 kbps.

policy-map Queuing_T1
    class ICM_Visible_High
        priority 500
    class ICM_Visible_NonHigh
        bandwidth 250

Apply QoS Policy to Interface

The following commands apply the QoS policy to an interface in the outbound direction:

interface mod/port 
    service-policy output Queuing_T1

Performance Monitoring

Once the QoS-enabled processes are up and running, the Microsoft Windows Performance Monitor (PerfMon) can be used to track the performance counters associated with the underlying links. For details on using PerfMon for this purpose, refer to the Cisco ICM Enterprise Edition Administration Guide, available at

http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/coreicm6/config60/index.htm

Bandwidth Sizing

This section briefly describes bandwidth sizing for the public (visible) and private networks.

IPCC Private Network Bandwidth

Because IPCC typically dedicates a network segment to the private path flows (both Central Controller and PG), a specific bandwidth calculation for that segment is usually not required, except when clustering IPCC over the WAN. Cisco therefore does not provide a bandwidth calculator for this purpose. A rule of thumb is to provide a minimum of a T1 link for the Central Controller private path and a minimum of a T1 link for the PG private path.

IPCC Public Network Bandwidth

Special tools are available to help calculate the bandwidth needed for the following public network links:

ICM Central Controller to Cisco CallManager PG

A tool is accessible to Cisco partners and Cisco employees for computing the bandwidth needed between the ICM Central Controller and Cisco CallManager. This tool is called the ACD/CallManager Peripheral Gateway to ICM Central Controller Bandwidth Calculator, and it is available (with proper login authentication) through the Cisco Steps to Success Portal at

http://tools.cisco.com/s2slv2/viewProcessFlow.do?method=browseStepsPage&modulename=browse&stepKeyId=55|EXT-AS-107287|EXT-AS-107288|EXT-AS-107301&isPreview=null&prevTechID=null&techName=IP%20Communications

ICM Central Controller to IP IVR or ISN PG

A tool is accessible to Cisco partners and Cisco employees for computing the bandwidth needed between the ICM Central Controller and the IP IVR PG. This tool is called the VRU Peripheral Gateway to ICM Central Controller Bandwidth Calculator, and it is available (with proper login authentication) through the Cisco Steps to Success Portal at

http://tools.cisco.com/s2slv2/viewProcessFlow.do?method=browseStepsPage&modulename=browse&stepKeyId=55|EXT-AS-107287|EXT-AS-107288|EXT-AS-107301&isPreview=null&prevTechID=null&techName=IP%20Communications

At this time, no tool exists that specifically addresses communications between the ICM Central Controller and the ISN PG. Testing has shown, however, that the tool for calculating bandwidth needed between the ICM Central Controller and the IP IVR PG will also produce accurate measurements for ISN if you perform the following substitution in one field:

For the field labeled Average number of RUN VRU script nodes, substitute the number of ICM script nodes that interact with ISN.

Bandwidth Requirements for CTI OS Agent Desktop

This section addresses the traffic and bandwidth requirements between CTI OS Agent Desktop and the CTI OS server. These requirements are important in provisioning the network bandwidth and QoS required between the agents and the CTI OS server, especially when the agents are remote over a WAN link. Even if the agents are local over Layer 2, it is important to account for the bursty traffic that occurs periodically because this traffic presents a challenge to bandwidth and QoS allocation schemes and can impact other mission-critical traffic traversing the network.

CTI OS Client/Server Traffic Flows and Bandwidth Requirements

CTI OS (Releases 4.6.2, 5.x, and 6.x) sends agent skill group statistics automatically every 10 seconds to all agents. This traffic presents a challenge to bandwidth and QoS allocation schemes in the case of centralized call processing with remote IPCC agents over a WAN link.

The statistics are carried in the same TCP connection as agent screen pops and control data. Additionally, transmission is synchronized across all agents logged into the same CTI OS server. This transmission results in an order-of-magnitude traffic spike every 10 seconds, affecting the same traffic queue as the agent control traffic.

The network bandwidth requirements increase linearly as a function of agent skill group membership. The 10-second skill group statistics are the most significant sizing criterion for network capacity, while the effect of system call control traffic is a relatively small component of the overall network load.

CTI OS provides a bandwidth calculator that examines bandwidth requirements for communications between the CTI OS Server and the CTI OS Desktop. It calculates Total Bandwidth, Agent Bandwidth, and Supervisor Bandwidth requirements. This calculator does not take into account RTP and multimedia messages; it calculates the bandwidth based only on the control flow between the CTI OS Server and the CTI OS Client. If one site has multiple CTI OS Servers and each server has dedicated agents, then the bandwidth calculation must be done separately for each CTI OS Server and the results added together to derive the total bandwidth for the whole site. The CTI OS Bandwidth Calculator is available at

http://www.cisco.com/univercd/cc/td/doc/product/icm/bandcalc/index.htm

Best Practices and Options for CTI OS Server and CTI OS Agent Desktop

To mitigate the bandwidth demands, use any combination of the following options:

Configure Fewer Statistics

CTI OS allows the system administrator to specify, in the registry, the statistics items that are sent to all CTI OS clients. The choice of statistics affects the size of each statistics packet and, therefore, the network traffic. Configuring fewer statistics will decrease the traffic sent to the agents. The statistics cannot be specified on a per-agent basis at this time. For more information on agent statistics, refer to the CTI OS System Manager's Guide, available at

http://www.cisco.com/univercd/cc/td/doc/product/icm/icmentpr/icm60doc/icm6cti/ctios60/

Install Another CTI OS Server at the Remote Branch

The bandwidth required between the CTI OS server at the central site and the CTI OS server at the remote site in this scenario is a fraction of the bandwidth that would be required if each remote agent had to access the one central CTI OS server every time. This bandwidth (between the CTI OS server at the central site and the CTI OS server at the remote site) can be calculated as follows:

(3000 bytes) * (Calls per second) = 24 kbps * (Calls per second)

For example, if the call center (all agents, not just remote ones) handles 3600 BHCA (which equates to 1 call per second), then the WAN link bandwidth required to any remote branch, regardless of the number of remote agents, would be only 24 kbps. This traffic flow should be prioritized and marked as AF21 or AF11. Any other traffic traversing the link should be added to the bandwidth calculations as well and should be marked with proper classification.
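
Expressed as a small calculation (an illustrative sketch of the formula above, not an official sizing tool):

def remote_ctios_server_kbps(bhca):
    """WAN bandwidth between the central and remote CTI OS servers, in kbps."""
    calls_per_second = bhca / 3600.0
    return 24 * calls_per_second          # 3000 bytes per call = 24 kbps at 1 call per second

print(remote_ctios_server_kbps(3600))     # 3600 BHCA -> 24.0 kbps, regardless of the number of remote agents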

Turn Off Statistics on a Per-Agent Basis

You can turn off statistics on a per-agent basis by using different connection profiles. For example, if remote agents use a connection profile with statistics turned off, these client connections would have no statistics traffic at all between the CTI OS Server and the Agent or Supervisor Desktop. This option could eliminate the need for a separate CTI OS Server in remote locations.

A remote supervisor or selected agents might still receive statistics by using a different connection profile with statistics enabled, if that more limited statistics traffic is acceptable for the remote site.

In the case where remote agents have their skill group statistics turned off but the supervisor would like to see the agent skill group statistics, the supervisor could use a different connection profile with statistics turned on. In this case, the volume of traffic sent to the supervisor would be considerably less. For each skill group and agent (or supervisor), the packet size for a skill-group statistics message is fixed, so an agent in two skill groups would get two packets, and a supervisor observing five skill groups would get five packets. For example, assume 10 agents at the remote site and one supervisor, all configured with the same two skill groups (in IPCC, the supervisor sees the statistics for every skill group to which any agent on the team belongs). If only the supervisor has statistics turned on for the two skill groups and the agents have statistics turned off, this approach reduces skill-group statistics traffic by roughly 90 percent.

Also, at the main location, if agents want to have their skill-group statistics turned on, they could do so without impacting the traffic to the remote location if the supervisor uses a different connection profile. Again, in this case no additional CTI OS servers would be required.

In the case where there are multiple remote locations, assuming only supervisors need to see the statistics, it would be sufficient to have only one connection profile for all remote supervisors.

Turn Off All Skill Group Statistics in CTI OS

If skill group statistics are not required, turn them all off. Doing so eliminates all statistics traffic between the CTI OS Server and the Agent or Supervisor Desktop.

Bandwidth Requirements for Cisco Agent Desktop Release 6.0

This section describes the bandwidth requirements for the Cisco Agent Desktop and Supervisor Desktop applications and the network on which they run. All call scenarios and data presented in this section were tested using the Cisco Agent Desktop software phone (softphone). The reported bandwidth usage represents the total number of bytes sent for the specific scenario. It includes bandwidth for call control and any CTI events returned from the CTI service. By default, all communication between Cisco Desktop applications and the CTI OS server occurs through server port 42028.

Call Control Bandwidth Usage

This section lists bandwidth usage data for the following types of call control communications between Cisco Agent Desktop and the CTI OS Server:

Heartbeats and Skill Statistics

Agent State Change

Typical Call Scenario

Heartbeats and Skill Statistics

Table 8-2 shows the bandwidth usage between Cisco Agent Desktop and the CTI OS and Cisco Agent Desktop servers for heartbeats and skill statistics. This type of data is passed to and from logged-in agents at set intervals, regardless of what the agent is doing. The refresh interval for these skill group statistics was the default setting of 10 seconds. This refresh interval can be configured in CTI OS. Skill group statistics were also configured in CTI OS as described in the Cisco Agent Desktop Installation Guide, available at

http://www.cisco.com

Table 8-2 Bandwidth Usage for Heartbeats and Skill Statistics (Bytes Per Second)

                                    To Cisco Agent Desktop       From Cisco Agent Desktop
Server                              1 Skill      5 Skills        1 Skill      5 Skills
CTI OS                              49           234             7            28
Cisco Agent Desktop Base            2            2               2            2
Cisco Agent Desktop Recording       0            0               0            0
Cisco Agent Desktop VoIP Monitor    0            0               0            0
Total                               51           236             9            30


Bandwidth from CTI OS to Cisco Agent Desktop:

2.1 Bps + (Number of skills * 46.4 Bps)

Bandwidth from Cisco Agent Desktop to CTI OS:

1.1 Bps + (Number of skills * 5.4 Bps)

Example

If there are 25 remote agents with 10 skills per agent, the number of bytes per second (Bps) sent from the CTI OS server to those desktops across the WAN can be calculated as follows:

25 agents * (2.1 Bps + (10 skills * 46.4 Bps)) = 11,652.5 Bps

11,652.5 Bps * 8 bits per byte = 93,220 average bits per second = 93.22 kilobits per second (kbps)
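
The two formulas above can be combined into a short sizing sketch; it simply restates the arithmetic of the example:

def ctios_to_desktops_bps(agents, skills_per_agent):
    """Heartbeat and skill-statistics traffic from the CTI OS server to all desktops (bytes per second)."""
    return agents * (2.1 + skills_per_agent * 46.4)

def desktops_to_ctios_bps(agents, skills_per_agent):
    """Heartbeat and skill-statistics traffic from all desktops back to the CTI OS server (bytes per second)."""
    return agents * (1.1 + skills_per_agent * 5.4)

down = ctios_to_desktops_bps(25, 10)
print(down, round(down * 8 / 1000, 2))    # 11652.5 bytes per second -> 93.22 kbps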

Agent State Change

Table 8-3 lists the total bytes of data sent when an agent changes state from Ready to Not Ready and enters a reason code.

Table 8-3 Bytes of Data Used for Agent State Change

                                    To Cisco Agent Desktop       From Cisco Agent Desktop
Server                              1 Skill      5 Skills        1 Skill      5 Skills
CTI OS                              2043         6883            523          739
Cisco Agent Desktop Base            268          268             638          638
Cisco Agent Desktop Recording       0            0               0            0
Cisco Agent Desktop VoIP Monitor    0            0               0            0
Total                               2311         7151            1161         1377


Example

If there are 25 remote agents with 5 skills per agent, each of whom changes agent state one time, the total number of bytes sent from the CTI OS server to Cisco Agent Desktop is:

25 * 6883 = 172,075 bytes

Typical Call Scenario

Table 8-4 lists the total bytes of data required for a typical call scenario. For this call scenario, Cisco Agent Desktop is used to perform the following functions:

Transition an agent from the work ready state.

Answer an incoming ACD call using the softphone controls.

Put the agent in a work ready state.

Hang up the call using the softphone controls.

Select wrap-up data.

This scenario includes presenting Expanded Call Context (ECC) variables to the agent. Each ECC variable is 20 bytes in length, assuming a worst-case scenario.

Table 8-4 Bytes of Data Used for a Typical Call Scenario

                                    To Cisco Agent Desktop                       From Cisco Agent Desktop
                                    1 Skill             5 Skills                 1 Skill             5 Skills
Service                             1 ECC     5 ECCs    1 ECC     5 ECCs         1 ECC     5 ECCs    1 ECC     5 ECCs
CTI OS                              19683     20199     30804     31263          2371      2371      2749      2749
Cisco Agent Desktop Base            4274      5882      4674      5942           6716      6832      6726      6894
Cisco Agent Desktop Recording       0         0         0         0              0         0         0         0
Cisco Agent Desktop VoIP Monitor    0         0         0         0              0         0         0         0
Total                               23957     26081     35478     37205          9087      9203      9475      9643


Example

Assume there are 25 remote agents with 5 skills and 5 ECC variables, who each answer 20 calls in the busy hour. Also assume a full-duplex network, and use the larger of the To/From bandwidth numbers, which is 37,205 bytes in this case.

37,205 bytes per call * 25 agents * 20 calls per hour = 18,602,500 bytes per hour.

(18,602,500 bytes per hour) / (3600 seconds per hour) = 5,167 bytes per second (Bps)


Note Access to LDAP is not included in the calculation because both Cisco Agent Desktop and Cisco Supervisor Desktop read their profiles only once, at startup, and then cache it. The numbers in this example are not based on calls in progress, but on calls attempted or completed. The amount of bandwidth used is per call, and does not depend on the length of the call (a 1-minute call and a 10-minute call typically use the same amount of bandwidth, excluding voice traffic). The example does not take into account the additional traffic generated if calls are transferred, held, or conferenced.
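
The busy-hour arithmetic in the example can be captured in a short sizing sketch (the per-call figure comes from Table 8-4; the sketch is illustrative only):

def call_scenario_bps(agents, calls_per_hour_each, bytes_per_call):
    """Average bytes per second for the typical call scenario."""
    return agents * calls_per_hour_each * bytes_per_call / 3600.0

# 25 agents, 20 calls per hour each, 5 skills / 5 ECCs (37,205 bytes per call from Table 8-4)
print(round(call_scenario_bps(25, 20, 37205)))   # about 5,167 Bps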


Be sure to mark RTP packets for monitoring, recording, and playback, in addition to other required RTP and signaling marking. For details on traffic marking, refer to the Cisco IP Telephony Solution Reference Network Design (SRND) guide, available at

http://www.cisco.com/go/srnd

Silent Monitoring Bandwidth Usage

Starting with Cisco Agent Desktop Release 4.6, desktop monitoring was introduced as a new feature. Instead of using a centralized VoIP Monitor service, each Cisco Agent Desktop installation includes a miniature VoIP service called the Desktop Monitor service. This service is responsible for all silent monitoring and recording requests for the agent logged in on that desktop.

The bandwidth requirements for the Desktop Monitor service are identical to those of the VoIP Monitor service from the standpoint of monitor requests, but the number of requests sent to the Desktop Monitor service is much lower.

It is possible to have multiple silent monitoring requests for the same agent extension from different Cisco Agent Desktop supervisors. In that case, each monitor request requires the bandwidth of an additional call for the desktop. Unlike the VoIP Monitor service, the maximum number of recording requests that can be sent to the Desktop Monitor is one.

The maximum number of simultaneous monitoring and recording requests is 21 (one monitoring request from each of the 20 allowed supervisors per team, plus one recording request). In practice, there are usually no more than 3 to 5 simultaneous monitoring/recording requests at any one time.

For the purposes of this discussion, 5 simultaneous monitoring/recording sessions are used to calculate the average bandwidth requirements for a single Cisco Agent Desktop installation to support desktop monitoring.

Silent Monitoring IP Traffic Flow

Figure 8-2 shows a main office and a remote office. The main office contains the various Cisco Desktop services and the switch shared with the remote office. Both the main office and the remote office have Cisco Agent Desktop agents and supervisors. In this diagram, all agents and supervisors belong to the same logical contact center (LCC) and are on the same team.

Figure 8-2 Contact Center Diagram

In the main office, agents and supervisors use IP phones. In the remote office, agents and supervisors use media termination softphones.

Bandwidth Requirements for Monitor Services to Cisco Supervisor Desktop

The amount of traffic between the monitor services and the monitoring supervisor is equal to the bandwidth of one IP phone call (two RTP streams of data). (Monitor services refers to both the VoIP Monitor service and the Desktop Monitor service.)

When calculating bandwidth, you must use the size of the RTP packet plus the additional overhead of the networking protocols used to transport the RTP data through the network.

G.711 packets carrying 20 ms of speech data require 64 kbps of network bandwidth. (See Table 8-5.) These packets are encapsulated by four layers of networking protocols (RTP, UDP, IP, and Ethernet). Each of these protocols adds its own header information to the G.711 data. As a result, the G.711 data, once packed into an Ethernet frame, requires 87.2 kbps of bandwidth per data stream as it travels over the network. An IP phone call consists of two streams, one from A to B and one from B to A. For an IP phone call using the G.711 codec, both streams require 87.2 kbps.
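
The 87.2 kbps figure can be reconstructed from the per-packet overheads. The sketch below assumes 20-ms G.711 samples (160 payload bytes, 50 packets per second) and typical RTP/UDP/IP/Ethernet header sizes (the Ethernet figure includes the frame check sequence):

PAYLOAD_BYTES = 160                     # 20 ms of G.711 speech (64 kbps codec)
RTP, UDP, IP, ETHERNET = 12, 8, 20, 18  # header bytes added by each layer
PACKETS_PER_SECOND = 50                 # one packet every 20 ms

frame_bytes = PAYLOAD_BYTES + RTP + UDP + IP + ETHERNET      # 218 bytes on the wire
stream_kbps = frame_bytes * PACKETS_PER_SECOND * 8 / 1000
print(stream_kbps)        # 87.2 kbps per RTP stream
print(stream_kbps * 2)    # 174.4 kbps for the two streams of a monitored call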

Table 8-5 Bandwidth Requirements for Two Streams of Data

CODEC                              Average kbps Per Monitoring Supervisor    Maximum kbps Per Monitoring Supervisor (1)
G.711                              174.4                                     174.4
G.711 with silence suppression     61                                        174.4
G.729                              62.4                                      62.4
G.729 with silence suppression     21.8                                      62.4

1 Maximum instantaneous bandwidth. When silence suppression is used on a physical channel that has fixed capacity, you must consider this metric because, when a voice signal is present, all of the maximum bandwidth is needed.


For full-duplex connections, the bandwidth speed applies to both incoming and outgoing traffic. (For instance, for a 100-Mbps connection, there is 100 Mbps of upload bandwidth and 100 Mbps of download bandwidth.) Therefore, an IP phone call consumes the bandwidth equivalent of a single stream of data. In this scenario, a G.711 IP phone call with no silence suppression requires 87.2 kbps of the available bandwidth.

Monitor services send out two streams for each monitored call, both going from the service to the requestor. This means that, for each monitor session, the bandwidth requirement is for two streams (174.4 kbps with the G.711 codec).

If a VoIP Monitor service is used to monitor an agent's extension, this bandwidth is required between the VoIP Monitor service and the supervisor's computer. In Figure 8-2, if supervisor A monitors agent A, this bandwidth is required on the main office LAN. If supervisor A monitors agent B at the remote office, another VoIP Monitor service is needed in the remote office (not shown in Figure 8-2). The bandwidth requirement also applies to the WAN link.

If desktop monitoring is used, the bandwidth requirements are between the agent's desktop and the supervisor's desktop. If supervisor A monitors agent A, this bandwidth is required on the main office LAN. If supervisor A monitors agent B in the remote office, the bandwidth requirement also applies to the WAN link.

Bandwidth Requirements for Monitor Service to Recording and Statistics Service

The Recording and Statistics service is used to record agent conversations. See Table 8-5 for the bandwidth requirements between the Recording and Statistics service and the monitor service.

Bandwidth Requirements for Recording Service to Cisco Supervisor Desktop

Cisco Agent Desktop Release 6.0 introduced a new Recording Service. This service sends RTP streams to supervisors for recording playback. The bandwidth used for the RTP streams is identical to silent monitoring. See Table 8-5 for details.

Bandwidth Requirements for Desktop Monitor

If a VoIP Monitor service is used to monitor or record a call, the bandwidth requirement on the service's network connection is two streams of voice data.

If a Desktop Monitor service is used, the additional load of the IP phone call is added to the bandwidth requirement because the IP phone call comes to the same agent where the Desktop Monitor service is located.

In either case, the bandwidth requirement is the bandwidth between the monitor service and the requestor:

VoIP Monitor service to supervisor

Agent desktop to supervisor

VoIP Monitor service to Recording and Statistics service

Agent desktop to Recording and Statistics service

Table 8-6 and Table 8-7 display the percentage of total bandwidth available that is required for simultaneous monitoring sessions handled by a single Desktop Monitor service.
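
Under the assumptions used in this section (G.711 with no silence suppression, 87.2 kbps per stream, two streams per monitoring or recording session, plus the agent's own call for desktop monitoring), the table values can be approximated with a short sketch. It is a sanity check only; small rounding differences from the published tables are possible:

CALL_KBPS = 87.2          # the agent's own G.711 IP phone call (one stream of upload bandwidth)
SESSION_KBPS = 174.4      # two RTP streams sent per monitoring or recording session

def desktop_monitor_upload_pct(sessions, link_kbps):
    """Percentage of upload bandwidth used by the agent's call plus n monitoring/recording sessions."""
    return 100.0 * (CALL_KBPS + sessions * SESSION_KBPS) / link_kbps

print(round(desktop_monitor_upload_pct(0, 10000), 1))   # call only on 10 Mbps: 0.9 percent
print(round(desktop_monitor_upload_pct(5, 10000), 1))   # 5 sessions on 10 Mbps: 9.6 percent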

The following notes also apply to the bandwidth requirements for the Desktop Monitor service shown in Table 8-6 and Table 8-7:

The bandwidth values are calculated based on the best speed of the indicated connections. A connection's true speed can differ from the maximum stated due to various other factors.

The bandwidth requirements are based on upload speed. Download speed affects only the incoming stream for the IP phone call.

The data represents the codecs without silence suppression. With silence suppression, the amount of bandwidth used may be lower.

The data shown does not address the quality of the speech of the monitored call. If the bandwidth requirements approach the total bandwidth available and other applications must share access to the network, latency (packet delay) of the voice packets can affect the quality of the monitored speech. However, latency does not affect the quality of recorded speech.

The data represents only the bandwidth required for monitoring and recording. It does not include the bandwidth requirements for other Cisco Agent Desktop modules as outlined in other sections of this document.

Table 8-6 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.711 Codec and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps   10 Mbps   1.544 Mbps   640 kbps   256 kbps   128 kbps   64 kbps   56 kbps
Call only                 0.1        0.9       5.6          13.6       34.1       68.1       NS (1)    NS
1                         0.3        2.6       16.8         40.9       NS         NS         NS        NS
2                         0.4        4.4       28.1         68.1       NS         NS         NS        NS
3                         0.6        6.1       39.3         95.4       NS         NS         NS        NS
4                         0.8        7.8       50.5         NS         NS         NS         NS        NS
5                         1.0        9.6       61.7         NS         NS         NS         NS        NS
6                         1.1        11.3      72.9         NS         NS         NS         NS        NS
7                         1.3        13.1      84.2         NS         NS         NS         NS        NS
8                         1.5        14.8      95.4         NS         NS         NS         NS        NS
9                         1.7        16.6      NS           NS         NS         NS         NS        NS
10                        1.8        18.3      NS           NS         NS         NS         NS        NS

1 NS = not supported. The bandwidth of the connection is not large enough to support the number of simultaneous monitoring sessions.


Table 8-7 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.729 Codec and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps   10 Mbps   1.544 Mbps   640 kbps   256 kbps   128 kbps   64 kbps   56 kbps
Call only                 0.0        0.3       2.0          4.9        12.2       24.4       48.8      55.7
1                         0.1        0.9       6.0          14.6       36.6       73.1       NS (1)    NS
2                         0.2        1.6       10.0         24.4       60.9       NS         NS        NS
3                         0.2        2.2       14.1         34.1       85.3       NS         NS        NS
4                         0.3        2.8       18.1         43.9       NS         NS         NS        NS
5                         0.3        3.4       22.1         53.6       NS         NS         NS        NS
6                         0.4        4.1       26.1         63.4       NS         NS         NS        NS
7                         0.5        4.7       30.1         73.1       NS         NS         NS        NS
8                         0.5        5.3       34.1         82.9       NS         NS         NS        NS
9                         0.6        5.9       38.1         92.6       NS         NS         NS        NS
10                        0.7        6.6       42.2         NS         NS         NS         NS        NS

1 NS = not supported. The bandwidth of the connection is not large enough to support the number of simultaneous monitoring sessions.


Bandwidth Requirements for VoIP Monitor Service

The following notes apply to the bandwidth requirements for the VoIP Monitor service, as listed in Table 8-8 and Table 8-9:

Because the VoIP Monitor service is designed to handle a larger load, the number of monitoring sessions is higher than for the Desktop Monitor service.

The bandwidth requirements are based on upload speed. Download speed affects only the incoming stream for the IP phone call.

Some of the slower connection speeds are not shown in Table 8-8 and Table 8-9 because they are not supported for a VoIP Monitor service.

The values in Table 8-8 and Table 8-9 are calculated based on the best speed of the indicated connections. A connection's true speed can differ from the maximum stated due to various other factors.

The data represents the codecs without silence suppression. With silence suppression, the amount of bandwidth used may be lower.

The data shown does not address the quality of the speech of the monitored call. If the bandwidth requirements approach the total bandwidth available and other applications must share access to the network, latency (packet delay) of the voice packets can affect the quality of the monitored speech. However, latency does not affect the quality of recorded speech.

The data represents only the bandwidth required for monitoring and recording. It does not include the bandwidth requirements for other Cisco Agent Desktop modules as outlined in other sections of this document.

Table 8-8 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.711 Codec and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps   10 Mbps   1.544 Mbps
1                         0.2        1.7       11.2
5                         0.9        8.7       56.1
10                        1.7        17.4      NS (1)
15                        2.6        26.2      NS
20                        3.5        34.9      NS
25                        4.4        43.6      NS
30                        5.2        52.3      NS
35                        6.1        61.0      NS
40                        7.0        69.8      NS
45                        7.8        78.5      NS
50                        8.7        87.2      NS

1 NS = not supported. The bandwidth of the connection is not large enough to support the number of simultaneous monitoring sessions.


Table 8-9 Percentage of Available Upload Bandwidth Required for Simultaneous Monitoring Sessions with G.729 Codec and No Silence Suppression

Number of Simultaneous    Percentage of Available Upload Bandwidth Required
Monitoring Sessions       100 Mbps   10 Mbps   1.544 Mbps
1                         0.1        0.6       4.0
5                         0.3        3.1       20.1
10                        0.6        6.2       40.2
15                        0.9        9.4       60.2
20                        1.2        12.5      80.3
25                        1.6        15.6      NS (1)
30                        1.9        18.7      NS
35                        2.2        21.8      NS
40                        2.5        25.0      NS
45                        2.8        28.1      NS
50                        3.1        31.2      NS

1 NS = not supported. The bandwidth of the connection is not large enough to support the number of simultaneous monitoring sessions.
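
For rough planning at connection speeds other than those listed, the per-session load can also be estimated directly. The following is a minimal sketch of that estimate, assuming each monitored call forwards two RTP streams and that one G.711 stream consumes roughly 87.2 kbps and one G.729 stream roughly 31.2 kbps on the wire (payload plus RTP/UDP/IP and Layer 2 overhead). These per-stream figures are assumptions chosen to approximate Table 8-8 and Table 8-9, not values published for the VoIP Monitor service.

# Rough estimate of the upload bandwidth used by the VoIP Monitor service for
# simultaneous monitoring sessions. The per-stream rates (kbps on the wire,
# including RTP/UDP/IP and Layer 2 overhead) are assumptions chosen to
# approximate Table 8-8 and Table 8-9.
STREAM_KBPS = {"G.711": 87.2, "G.729": 31.2}

def monitor_upload_percent(sessions, codec, link_kbps):
    """Return the percentage of upload bandwidth required, or None where the
    link cannot carry the monitored RTP traffic (shown as NS in the tables)."""
    required_kbps = sessions * 2 * STREAM_KBPS[codec]  # two RTP streams per monitored call
    percent = 100.0 * required_kbps / link_kbps
    return round(percent, 1) if percent <= 100.0 else None

# Example: 5 G.711 sessions over a T1 (1.544 Mbps) -> roughly 56 percent,
# which is close to the 56.1 shown in Table 8-8.
print(monitor_upload_percent(5, "G.711", 1544))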


Bandwidth Requirements for Cisco Supervisor Desktop to Cisco Desktop Base Services

In addition to the bandwidth requirements discussed in the preceding sections, there is traffic from Cisco Supervisor Desktop to the Cisco Agent Desktop Base Services.

For each agent on the supervisor's team, approximately 2 kilobytes (kB) of traffic per call is sent between Cisco Supervisor Desktop and the Chat service, as shown in Table 8-10.

Table 8-10 Cisco Supervisor Desktop Bandwidth for a Typical Agent Call

Service | To Cisco Supervisor Desktop (bytes) | From Cisco Supervisor Desktop (bytes)
CTI OS | 0 | 0
Cisco Agent Desktop Base | 1650 | 550
Cisco Agent Desktop Recording | 0 | 0
Cisco Agent Desktop VoIP Monitor | 0 | 0
Total | 1650 | 550


The same typical call scenario was used to capture bandwidth measurements for both Cisco Agent Desktop and Cisco Supervisor Desktop. See Typical Call Scenario for more details.

If there are 10 agents on the supervisor's team and each agent takes 20 calls an hour, the traffic is:

10 agents * 20 calls per hour = 200 calls per hour

200 calls * 1650 bytes per call = 330,000 bytes per hour

(330,000 bytes per hour) / (3600 seconds per hour) = 92 bytes per second (Bps)

92 Bps * 8 bits per byte = 733 bits per second (bps)
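
The same arithmetic can be written as a small helper for sizing other team configurations. This is an illustrative sketch only; the function name and parameters are not part of the product, and the 1650-byte default is the per-call traffic to Cisco Supervisor Desktop from Table 8-10.

def supervisor_call_traffic_bps(agents, calls_per_hour_per_agent, bytes_per_call=1650):
    """Average rate, in bits per second, of call-event traffic sent to
    Cisco Supervisor Desktop for one supervisor's team."""
    calls_per_hour = agents * calls_per_hour_per_agent
    bytes_per_second = calls_per_hour * bytes_per_call / 3600.0
    return bytes_per_second * 8

# 10 agents taking 20 calls per hour each -> approximately 733 bps
print(round(supervisor_call_traffic_bps(10, 20)))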

There is additional traffic sent if the supervisor is viewing reports or if a silent monitor session is started or stopped.

Agent Detail Report

This report is refreshed automatically every 30 seconds. Table 8-11 lists the bandwidth usage per report request.

Table 8-11 Bandwidth Usage for Agent Detail Report (Average Bytes per Report)

Service | To Cisco Supervisor Desktop | From Cisco Supervisor Desktop
CTI OS | 0 | 0
Cisco Agent Desktop Base | 220 | 200
Cisco Agent Desktop Recording | 0 | 0
Cisco Agent Desktop VoIP Monitor | 0 | 0
Total | 220 | 200


Bandwidth for a supervisor viewing the Agent Detail Report is:

220 bytes per request * 2 requests per minute = 440 bytes per minute

(440 bytes per minute) / (60 seconds per minute) = 7 bytes per second (Bps)
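
The same pattern applies to any automatically refreshed report: bytes per refresh multiplied by refreshes per minute, divided by 60 seconds. The sketch below is illustrative only; the function name is an assumption, and the 220-byte figure is taken from Table 8-11.

def report_refresh_bps(bytes_per_refresh, refreshes_per_minute=2):
    """Average bytes per second for a report that refreshes automatically
    (two refreshes per minute corresponds to the 30-second refresh interval)."""
    return bytes_per_refresh * refreshes_per_minute / 60.0

# Agent Detail Report: 220 bytes per refresh -> about 7 Bps
print(round(report_refresh_bps(220)))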

Team Agent Statistics Report

This report is transferred once, when the supervisor opens it; there is no automatic refresh. The supervisor can refresh the report manually. Table 8-12 lists the bandwidth usage per report request.

Table 8-12 Bandwidth Usage for Team Agent Statistics Report (Average Bytes per Report)

Service | To Cisco Supervisor Desktop | From Cisco Supervisor Desktop
CTI OS | 0 | 0
Cisco Agent Desktop Base | 250 per agent | 200
Cisco Agent Desktop Recording | 0 | 0
Cisco Agent Desktop VoIP Monitor | 0 | 0
Total | 250 per agent | 200


Team Skill Statistics Report

This report is refreshed automatically every 30 seconds. Table 8-13 lists the bandwidth usage per report request.

Table 8-13 Bandwidth Usage for Team Skill Statistics Report (Average Bytes per Report)

Service | To Cisco Supervisor Desktop | From Cisco Supervisor Desktop
CTI OS | 0 | 0
Cisco Agent Desktop Base | 250 per skill | 200
Cisco Agent Desktop Recording | 0 | 0
Cisco Agent Desktop VoIP Monitor | 0 | 0
Total | 250 per skill | 200


Bandwidth for a supervisor viewing the Team Skill Statistics Report with 10 skills in the team is:

250 bytes per skill * 10 skills * 2 requests per minute = 5000 bytes per minute

(5000 bytes per minute) / (60 seconds per minute) = 83 bytes per second (Bps)
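
The same formula covers reports whose size scales with the team, such as the Team Skill Statistics Report. The short sketch below is illustrative only; the 250-byte-per-skill figure is taken from Table 8-13.

# Team Skill Statistics Report with 10 skills, refreshed twice per minute.
bytes_per_refresh = 250 * 10                  # 250 bytes per skill, 10 skills
print(round(bytes_per_refresh * 2 / 60.0))    # -> 83 Bps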

Start and Stop Silent Monitoring Requests

Requests to start or stop silent monitor sessions result in a one-time bandwidth usage per request, as listed in Table 8-14.

Table 8-14 Bandwidth Usage to Start or Stop Silent Monitoring (Average Bytes per Request)

Service | Start: To Cisco Supervisor Desktop | Start: From Cisco Supervisor Desktop | Stop: To Cisco Supervisor Desktop | Stop: From Cisco Supervisor Desktop
CTI OS | 0 | 0 | 0 | 0
Cisco Agent Desktop Base | 775 | 450 | 0 | 0
Cisco Agent Desktop Recording | 0 | 0 | 0 | 0
Cisco Agent Desktop VoIP Monitor | 275 | 500 | 150 | 325
Total | 1050 | 950 | 150 | 325


Service Placement Recommendations

Table 8-15 summarizes recommendations for service placements that help minimize bandwidth to the desktop. These recommendations apply to deployments with centralized call processing and remote agents.

Table 8-15 Service Placement Recommendations

Service | Location | Reason
Cisco Agent Desktop Base Services | Central | Traffic to/from centralized IPCC components outweighs the traffic to/from desktops.
VoIP Monitor Service | Remote, near agents | The service must be able to span the agent voice traffic. This is a requirement, not a recommendation, for silent monitoring and recording.
Recording Service | Close to agents and supervisor if in one location; central if not | No CTI traffic. Heavy traffic to/from desktops and from the VoIP Monitor service(s).
Cisco Desktop Supervisor | With the agents | Close to the VoIP Monitor service.


For multiple remote locations, each remote location must have a VoIP Monitor service. Multiple VoIP Monitor services are supported in a single logical contact center. The Recording and Statistics service can be moved to the central location if the WAN connections are able to handle the traffic. If not, each site should have its own logical contact center and installation of the Cisco Desktop software.

Quality of Service (QoS) Considerations

When considering which traffic flows are mission-critical and need to be put in a priority queue, rank them in the following order of importance:

1. Customer experience

2. Agent experience

3. Supervisor experience

4. Administrator experience

With this ranking for the service-to-service flows, traffic between the Enterprise Service and the CTI service (call events) is the most critical. Based on the service placement recommendations in Table 8-15, both services should reside in the central location. However, QoS considerations must still be applied.

This traffic should be classified as AF31, similar to voice call control and signaling traffic. The traffic from Cisco Agent Desktop to and from the CTI service (call events, call control) should also be prioritized and classified as AF31.

For IP Phone Agent, communications between the IP Phone Agent service and the CTI service are also important because they affect how quickly agents can change their agent state. This traffic should also be classified as AF31.


Note Cisco has begun to change the marking of voice control protocols from DSCP 26 (PHB AF31) to DSCP 24 (PHB CS3). However, many products still mark signaling traffic as DSCP 26 (PHB AF31). Therefore, in the interim, Cisco recommends that you reserve both AF31 and CS3 for call signaling.


The traffic from Cisco Agent Desktop to and from the Chat service (agent information, call status) is less critical and should be classified as AF21 or AF11.
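
When configuring these markings on network equipment or verifying them in packet captures, the PHB names correspond to fixed DSCP values (AF31 = DSCP 26, CS3 = DSCP 24, AF21 = DSCP 18, AF11 = DSCP 10). The short sketch below simply derives those values from the standard AF and CS formulas; it is illustrative and not part of the IPCC or Cisco Agent Desktop configuration.

# DSCP values for the per-hop behaviors referenced above.
# Assured Forwarding: DSCP = 8 * class + 2 * drop precedence.
# Class Selector:     DSCP = 8 * class.
def af_dscp(af_class, drop_precedence):
    return 8 * af_class + 2 * drop_precedence

def cs_dscp(cs_class):
    return 8 * cs_class

print({"AF31": af_dscp(3, 1),   # 26 - call events and call control
       "CS3":  cs_dscp(3),      # 24 - newer call signaling marking
       "AF21": af_dscp(2, 1),   # 18 - Chat service traffic
       "AF11": af_dscp(1, 1)})  # 10 - Chat service traffic (lower priority)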

Cisco Desktop Component Port Usage

For details on network usage, refer to the Cisco Contact Center Product Port Utilization Guide, available at

http://www.cisco.com/univercd/cc/td/doc/product/icm/port_uti/index.htm

The Desktop application tags only the RTP packets that are sent from the Desktop Monitor software, VoIP Monitor Service, or Recording Service for silent monitoring, recording, and recording playback.

Integrating Cisco Agent Desktop Release 6.0 into a Citrix Thin-Client Environment

For guidance on installing Cisco Agent Desktop Release 6.0 applications in a Citrix thin-client environment, refer to the documentation for Integrating CAD 6.0 into a Citrix Thin-Client Environment, available at

http://www.cisco.com/application/pdf/en/us/partner/products/ps427/c1244/cdccont_0900aecd800e9db4.pdf