Cisco Unified Contact Center Enterprise Solution Reference Network Design, Release 9.x
Bandwidth Provisioning and QoS Considerations

This chapter presents an overview of the Unified CCE network architecture, deployment characteristics of the network, and provisioning requirements of the Unified CCE network. Essential network architecture concepts are introduced, including network segments, keep-alive (heartbeat) traffic, flow categorization, IP-based prioritization and segmentation, and bandwidth and latency requirements. Provisioning guidelines are presented for network traffic flows between remote components over the WAN, including recommendations on how to apply proper Quality of Service (QoS) to WAN traffic flows. For a more detailed description of the Unified CCE architecture and various component internetworking, see Architecture Overview.

Cisco Unified CCE has traditionally been deployed over private, point-to-point leased-line connections for both its private (side-to-side links between the duplexed Central Controller or Peripheral Gateway sides) and public (Peripheral Gateway to Central Controller) WAN networks. Optimal network performance characteristics (and the route diversity required by the fault-tolerant fail-over mechanisms) are provided to the Unified CCE application only through dedicated private facilities, redundant IP routers, and appropriate priority queuing.

Enterprises deploying networks that carry multiple traffic classes naturally prefer to use their existing infrastructure rather than add an incremental, dedicated network. Converged networks offer both cost and operational efficiencies, and support for them is a key aspect of Cisco Powered Networks.

Provided that the latency and bandwidth requirements inherent in the real-time nature of this product are satisfied, Cisco supports Unified CCE deployments in a converged QoS-aware public network as well as in a converged QoS-aware private network environment. This chapter presents QoS marking, queuing, and shaping recommendations for both the Unified CCE public and private network traffic.

Historically, two QoS models have been used: Integrated Services (IntServ) and Differentiated Services (DiffServ). The IntServ model relies on the Resource Reservation Protocol (RSVP) to signal and reserve the desired QoS for each flow in the network. Scalability becomes an issue with IntServ because state information of thousands of reservations has to be maintained at every router along the path. DiffServ, in contrast, categorizes traffic into different classes, and specific forwarding treatments are then applied to the traffic class at each network node. As a coarse-grained, scalable, and end-to-end QoS solution, DiffServ is more widely used and accepted. Unified CCE applications are not aware of RSVP, and the QoS considerations in this chapter are based on DiffServ.

Adequate bandwidth provisioning and implementation of QoS are critical components in the success of Unified CCE deployments. Bandwidth guidelines and examples are provided in this chapter to help with provisioning the required bandwidth.

Unified CCE Network Architecture Overview

Unified CCE is a distributed, resilient, and fault-tolerant network application that relies heavily on a network infrastructure with sufficient performance to meet the real-time data transfer requirements of the product. A properly designed Unified CCE network is characterized by proper bandwidth, low latency, and a prioritization scheme favoring specific UDP and TCP application traffic. These design requirements are necessary to ensure both the fault-tolerant message synchronization of specific duplexed Unified CCE nodes (Central Controller and Peripheral Gateways) as well as the delivery of time-sensitive system status data (routing messages, agent states, call statistics, trunk information, and so forth) across the system. Expeditious delivery of PG data to the Central Controller is necessary for accurate call center state updates and fully accurate real-time reporting data.

In a Cisco Unified Communications deployment, WAN and LAN traffic can be grouped into the following categories:
  • Voice and video traffic

    Voice calls (voice carrier stream) consist of Real-Time Transport Protocol (RTP) packets that contain the actual voice samples between various endpoints such as PSTN gateway ports, Unified IP IVR Q-points (ports), and IP phones. This traffic includes voice streams of silently monitored and recorded agent calls.

  • Call control traffic

    Call control consists of packets belonging to one of several protocols (H.323, MGCP, SCCP, or TAPI/JTAPI), according to the endpoints involved in the call. Call control functions include those used to set up, maintain, tear down, or redirect calls. For Unified CCE, control traffic includes routing and service control messages required to route voice calls to peripheral targets (such as agents, skill groups, or services) and other media termination resources (such as Unified IP IVR ports) as well as the real-time updates of peripheral resource status.

  • Data traffic

    Data traffic can include normal traffic, such as email and web activity, as well as CTI database application traffic sent to the agent desktops, such as screen pops and other priority data. Unified CCE priority data includes data associated with non-real-time system states, such as events involved in reporting and configuration updates.

This topic focuses primarily on the types of data flows and bandwidth used between a remote Peripheral Gateway (PG) and the Unified CCE Central Controller (CC), on the network path between Sides A and B of a PG or of the Central Controller, and on the CTI flows between the desktop application and CTI OS and Cisco Agent Desktop servers. Guidelines and examples are presented to help estimate required bandwidth and to help implement a prioritization scheme for these WAN segments.

The flows discussed in this chapter encapsulate call control and data traffic. Because media (voice and video) streams are maintained primarily between Cisco Unified Communications Manager and its endpoints, voice and video provisioning are not addressed here.

For bandwidth estimates for the voice RTP stream generated by the calls to Unified CCE agents and the associated call control traffic generated by the various protocols, see the Cisco Unified Communications Solution Reference Network Design (SRND) Guide at http://www.cisco.com/en/US/products/sw/voicesw/ps556/tsd_products_support_series_home.html.

Data traffic and other mission-critical traffic will vary according to the specific integration and deployment model used. For information about proper network design for data traffic, see the Network Infrastructure and Quality of Service (QoS) documentation at http://www.cisco.com/en/US/netsol/ns742/networking_solutions_program_category_home.html.

Network Segments

The fault-tolerant architecture employed by Unified CCE requires two independent communication networks. The private network (using a separate path) carries traffic necessary to maintain and restore synchronization between the systems and to allow clients of the Message Delivery Subsystem (MDS) to communicate. The public network carries traffic between each side of the synchronized system and foreign systems. The public network is also used as an alternate network by the fault-tolerance software to distinguish between node failures and network failures.


Note


The terms public network and visible network are used interchangeably throughout this document.


A third network, the signaling access network, may be deployed in Unified CCE systems that also interface directly with the carrier network (PSTN) and that deploy the Hosted Unified CCH/Unified CCE architecture. The signaling access network is not addressed in this chapter.


Note


Cisco Unified CCH is deprecated. Use Cisco HCS for Contact Center instead.


The figure below illustrates the fundamental network segments for a Unified CCE system with a duplexed PG and a duplexed Central Controller (with Sides A and B geographically separated).

Figure 1. Example of Public and Private Network Segments for a Unified CCE System

The following notes apply to the figure above:
  • The private network carries Unified CCE traffic between duplexed sides of the Central Controller or a Peripheral Gateway. This traffic consists primarily of synchronized data and control messages, and it also conveys the state transfer necessary to re-synchronize duplexed sides when recovering from an isolated state. When deployed over a WAN, the private network is critical to the overall responsiveness of Cisco Unified CCE. It must meet aggressive latency requirements and, therefore, either IP-based priority queuing or QoS must be used on the private network links.
  • The public network carries traffic between the Central Controller and call centers (PGs and Administration & Data Servers). The public network can also serve as a Central Controller alternate path, used to determine which side of the Central Controller retains control if the two sides become isolated from one another. The public network is never used to carry synchronization control traffic. Public network WAN links must also have adequate bandwidth to support the PGs and Administration & Data Servers at the call center. The IP routers in the public network must use either IP-based priority queuing or QoS to ensure that Unified CCE traffic classes are processed within acceptable tolerances for both latency and jitter.
  • Call centers (PGs and Administration & Data Servers) local to one side of the Central Controller connect to the local Central Controller side through the public Ethernet and to the remote Central Controller side over public WAN links. This arrangement requires that the public WAN provide connectivity between Side A and Side B. Bridges may optionally be deployed to isolate PGs and Administration & Data Servers from the Central Controller LAN segment to enhance protection against LAN outages.
  • To achieve the required fault tolerance, the private WAN link must be fully independent from the public WAN links (separate IP routers, network segments or paths, and so forth). Independent WAN links ensure that a single point of failure is truly isolated between the public and the private networks. Deploy public network WAN segments that traverse a routed network so that you maintain PG-to-CC (Central Controller) route diversity throughout the network. Avoid routes that result in common path selection (and, thus, a common point of failure) for the multiple PG-to-CC sessions.

IP-Based Prioritization and QoS

For each of the WAN links in Figure 1, a prioritization scheme is required. Two such prioritization schemes are supported: IP-based prioritization and QoS. Traffic prioritization is needed because it is possible for large amounts of low-priority traffic to get in front of high-priority traffic, thereby delaying delivery of high-priority packets to the receiving end. In a slow network flow, the amount of time a single large (for example, 1500-byte) packet consumes on the network (and delays subsequent packets) can exceed 100 ms. This delay would cause the apparent loss of one or more heartbeats. To avoid this situation, a smaller Maximum Transmission Unit (MTU) size is used by the application for low-priority traffic, thereby allowing a high-priority packet to get on the wire sooner. (MTU size for a circuit is calculated from within the application as a function of the circuit bandwidth, as configured at PG setup.)

A network that is not prioritized correctly almost always leads to call time-outs and problems from loss of heartbeats as the application load increases or (worse) as shared traffic is placed on the network. A secondary effect often seen is application buffer pool exhaustion on the sending side, due to extreme latency conditions.

Unified CCE applications use three priorities: high, medium, and low. However, prior to QoS, the network effectively recognized only two priorities identified by source and destination IP address (high-priority traffic was sent to a separate IP destination address) and, in the case of UDP heartbeats, by specific UDP port range in the network. The approach with IP-based prioritization is to configure IP routers with priority queuing in a way that gives preference to TCP packets with a high-priority IP address and to UDP heartbeats over the other traffic. When using this prioritization scheme, 90% of the total available bandwidth is granted to the high-priority queue.
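The following is a minimal sketch of one legacy way to implement IP-based priority queuing on a Cisco IOS router. The access-list number, high-priority IP address, and interface are illustrative assumptions only; a real deployment must reference the actual high-priority addresses configured on the Unified CCE nodes.
! High-priority TCP traffic to and from the high-priority public address
access-list 110 permit tcp any host 10.1.1.10
access-list 110 permit tcp host 10.1.1.10 any
! UDP heartbeats
access-list 110 permit udp any any range 39500 39999
priority-list 1 protocol ip high list 110
priority-list 1 default normal
interface Serial0/0
    priority-group 1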

A QoS-enabled network applies prioritized processing (queuing, scheduling, and policing) to packets based on QoS markings rather than on IP addresses. Unified CCE can mark its private and public network traffic with Layer-3 DSCP values. When the traffic is marked in Unified CCE, configuring dual IP addresses on each Network Interface Controller (NIC) is no longer necessary because the network is QoS-aware. However, if the traffic is marked at the network edge instead, the dual-IP configuration is still required so that packets can be differentiated by access control lists based on IP addresses. For details, see Where to Mark Traffic.


Note


Layer-2 802.1p marking is also possible if the Microsoft Windows Packet Scheduler is enabled (for PG/Central Controller traffic only). However, this option is deprecated: the Windows Packet Scheduler is not well suited to Unified CCE, and support for it will be removed in future versions. 802.1p markings are not widely used, nor are they required when DSCP markings are available.


UDP Heartbeat and TCP Keep-Alive

The primary purpose of the UDP heartbeat design is to detect whether a circuit has failed. Detection can be made from either end of the connection, based on the direction of heartbeat loss. Both ends of a connection send heartbeats at periodic intervals (typically every 100 or 400 milliseconds) to the opposite end, and each end looks for analogous heartbeats from the other. If either end misses five heartbeats in a row (that is, if a heartbeat is not received within a period that is five times the interval between heartbeats), then the side detecting this condition assumes something is wrong and the application closes the socket connection. At that point, a TCP Reset message is typically generated from the closing side. Loss of heartbeats can be caused by various factors, such as failure of the network, failure of the process sending the heartbeats, shutdown of the machine on which the sending process resides, improper prioritization of the UDP packets, and so forth.

There are several parameters associated with heartbeats. In general, leave these parameters set to their system default values. Some of these values are specified when a connection is established, while others can be specified by setting values in the Microsoft Windows 2008 registry. The two values of most interest are:
  • The amount of time between heartbeats
  • The number of missed heartbeats (currently hard-coded as 5) that the system uses to determine whether a circuit has apparently failed

The default value for the heartbeat interval is 100 milliseconds between the duplexed sides, meaning that one side can detect the failure of the circuit or the other side within 500 ms. The default heartbeat interval between a central site and a peripheral gateway is 400 ms, meaning that the circuit failure threshold is 2 seconds in this case.

As part of the Unified CCE QoS implementation, the UDP heartbeat is replaced by a TCP keep-alive message in the public network connecting a Central Controller to a Peripheral Gateway. In Unified CCE 7.x and later releases, a consistent heartbeat or keep-alive mechanism is enforced for both the public and private network interface. When QoS is enabled on the network interface, a TCP keep-alive message is sent; otherwise UDP heartbeats are retained.

The TCP keep-alive feature, provided by the TCP stack, detects an inactive connection and causes the server or client side to terminate it. It operates by sending probe packets (keep-alive packets) across a connection after the connection has been idle for a certain period; the connection is considered down if a keep-alive response from the other side is not received. Microsoft Windows 2008 allows keep-alive parameters to be specified on a per-connection basis. For Unified CCE public connections, the keep-alive timeout is set to 5 * 400 ms, meaning that a failure can be detected after 2 seconds, as was the case with the UDP heartbeat.

The reasons for moving to TCP keep-alive with QoS enabled are as follows:
  • In a converged network, the algorithms used by routers to handle network congestion can affect TCP and UDP differently. As a result, the delays and congestion experienced by UDP heartbeat traffic can, in some cases, bear little correspondence to the conditions experienced by the TCP connections they are meant to monitor.
  • The use of UDP heartbeats creates deployment complexities in a firewall environment. The dynamic port allocation for heartbeat communications makes it necessary to open a large range of port numbers, thus defeating the original purpose of the firewall device.

HSRP-Enabled Network

In a network where Hot Standby Router Protocol (HSRP) is deployed on the default gateways that are configured on the Unified CCE servers, follow these requirements:

  • Set the HSRP hold time and its associated processing delay lower than five times the heartbeat interval (100 ms on the private network and 400 ms on the public network). This setting avoids a Unified CCE private network communication outage during an HSRP active router switch-over. (A configuration sketch follows this list.)

    Note


    If HSRP convergence delays exceed the private or public network outage detection thresholds, an HSRP fail-over can itself be detected as a network outage and result in an application fail-over. If the HSRP configuration has primary and secondary designations and the primary-path router fails over, HSRP reinstates the primary path when possible. That reinstatement can lead to a second private network outage detection.


    For this reason, do not use primary and secondary designations when HSRP convergence delays approach 500 ms for the private network or 2 seconds for the public network. Convergence delays below the detection thresholds (which result in HSRP fail-overs that are transparent to the application) do not mandate a preferred path configuration, and this transparent behavior is preferable. Keep the routers symmetrical if path values and costs are identical. However, if available bandwidth and cost favor one path (and the path transition is transparent), then designating a primary path and router is advised.

  • The Unified CCE fault-tolerant design requires the private network to be physically separate from the public network. Therefore, do not configure HSRP to fail-over one type of network traffic to the other network link.
  • The bandwidth requirement for Unified CCE must be guaranteed at all times with HSRP, otherwise the system behavior is unpredictable. For example, if HSRP is initially configured for load sharing, ensure that sufficient bandwidth for Unified CCE remains on the surviving links in the worst-case failure situations.
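The following minimal HSRP sketch illustrates hold-time tuning for a Unified CCE private network default gateway. The interface, addresses, and group number are illustrative assumptions; the timers must be validated against the 5 x 100 ms (private) or 5 x 400 ms (public) detection thresholds described above.
interface GigabitEthernet0/1
    description Default gateway for Unified CCE private network (illustrative)
    ip address 192.168.10.2 255.255.255.0
    standby 1 ip 192.168.10.1
    ! Hello 100 ms, hold 300 ms: hold time stays below the 500 ms private threshold
    standby 1 timers msec 100 msec 300
    ! No priority/preempt pairing, so a recovered router does not force a second path transition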

RSVP

Cisco Unified Communications Manager provides support for Resource Reservation Protocol (RSVP) between endpoints within a cluster. As a protocol for call admission control, RSVP is used by the routers in the network to reserve bandwidth for calls.

RSVP traces the path between two RSVP agents that reside on the same LAN as the phones. The RSVP agent is a software media termination point (MTP) that runs on Cisco IOS routers. The RSVP agents are controlled by Unified CM and are inserted into the media stream between the two phones when a call is made. The reservation from the RSVP agent of the originating phone traverses the network to the RSVP agent of the destination phone and reserves bandwidth along the path. Because the network routers, rather than Unified CM, keep track of bandwidth usage, multiple phone calls can traverse the same RSVP-controlled link even if the calls are controlled by multiple Unified CMs.

The figure below shows a scenario in which two different Unified CM clusters provide service to phones at the same remote site. This may occur if a Unified CM cluster is assigned to handle an IP call center. In the scenario, two users at the same office are serviced by different clusters. RSVP offloads the bandwidth calculation responsibilities of Unified CM to the network routers.

Figure 2. Service to phones at a remote site

For more information about Unified CM RSVP, see the Cisco Unified Communications SRND.

Traffic Flow

This section briefly describes the traffic flows for the public and private networks.

Public Network Traffic Flow

The active PG continuously updates the Central Controller call routers with state information related to agents, calls, queues, and so forth, at the respective call center sites. This type of PG-to-Central Controller traffic is real-time traffic. The PGs also send historical data at each 15-minute or half-hour interval, based on the configuration of the PG. The historical data is low priority, but it must complete its journey to the central site before the start of the next interval (so the system is ready for the next interval of data).

When a PG starts, its configuration data is supplied from the central site so that the PG knows which agents, trunks, and so forth, it has to monitor. This configuration download can impose a significant transient load on the network.

In summary, traffic flows from PG to Central Controller can be classified into the following distinct flows:
  • High-priority traffic — Includes routing and Device Management Protocol (DMP) control traffic. It is sent in TCP with the public high-priority IP address.
  • Heartbeat traffic — UDP messages with the public high-priority IP address and in the port range of 39500 to 39999. Heartbeats are transmitted at 400-ms intervals bi-directionally between the PG and the Central Controller. The UDP heartbeat is replaced with TCP keep-alive if QoS is enabled on the public network interface through the Unified CCE setup.
  • Medium-priority traffic — Includes real-time traffic and configuration requests from the PG to the Central Controller. The medium-priority traffic is sent in TCP with the public high-priority IP address.
  • Low-priority traffic — Includes historical data traffic, configuration traffic from the Central Controller, and call close notifications. The low-priority traffic is sent in TCP with the public non-high-priority IP address.

Private Network Traffic Flow

Traffic destined for the critical Message Delivery Subsystem (MDS) client (Router or OPC) is copied to the other side over the private link.

The private traffic can be summarized as follows:
  • High-priority traffic — Includes routing, MDS control traffic, and other traffic from MDS client processes such as the PIM, CTI Server, Logger, and so forth. It is sent in TCP with the private high-priority IP address.
  • Heartbeat traffic — UDP messages with the private high-priority IP address and in the port range of 39500 to 39999. Heartbeats are transmitted at 100-ms intervals bi-directionally between the duplexed sides. The UDP heartbeat is replaced with TCP keep-alive if QoS is enabled on the private network interface through the Unified CCE setup.
  • Medium-priority and low-priority traffic — For the Central Controller, this traffic includes shared data sourced from routing clients as well as (non-route control) call router messages, including call router state transfer (independent session). For the OPC (PG), this traffic includes shared non-route control peripheral and reporting traffic. This class of traffic is sent in TCP sessions designated as medium priority and low priority, respectively, with the private non-high priority IP address.
  • State transfer traffic — State synchronization messages for the Router, OPC, and other synchronized processes. It is sent in TCP with a private non-high-priority IP address.

Bandwidth and Latency Requirements

The amount of traffic sent between the Central Controllers (call routers) and Peripheral Gateways is largely a function of the call load at that site, although transient boundary conditions (for example, startup configuration load) and specific configuration sizes also affect the amount of traffic. Bandwidth calculators and sizing formulas can project bandwidth requirements far more accurately than simple rules of thumb. See Bandwidth Requirements for Unified CCE Public and Private Networks for more details.

A site that has an ACD as well as a VRU has two peripherals, and the bandwidth requirement calculations need to take both peripherals into account. As a rule of thumb, plan for approximately 1,000 bytes of PG-to-Central Controller traffic per call; as an example, a site that has four peripherals, each handling 10 calls per second, is generally configured with 320 kbps of bandwidth. Because 1,000 bytes per call is only a rule of thumb, monitor the actual behavior once the system is operational to ensure that enough bandwidth exists. (Unified CCE meters data transmission statistics at both the Central Controller and PG sides of each path.)
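Working through that example with the 1,000-bytes-per-call rule of thumb:

4 peripherals * 10 calls per second * 1,000 bytes per call * 8 bits per byte = 320,000 bps = 320 kbps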

As with bandwidth, specific latency requirements must be guaranteed for Unified CCE to function as designed. The side-to-side private network of duplexed Central Controller and PG nodes has a maximum one-way latency of 100 ms (50 ms preferred). The PG-to-CC path has a maximum one-way latency of 200 ms to perform as designed. Meeting or exceeding these latency requirements is particularly important in an environment using Unified CCE post-routing and/or translation routes.

As discussed previously, Unified CCE bandwidth and latency design is fully dependent on an underlying IP prioritization scheme. Without proper prioritization in place, WAN connections will fail. The Cisco Unified CCE support team has custom tools (for example, Client/Server) that can be used to demonstrate proper prioritization and to perform some level of bandwidth utilization modeling for deployment certification.

Depending on the final network design, an IP queuing strategy is required in a shared network environment to achieve Unified CCE traffic prioritization concurrent with other non-DNP traffic flows. This queuing strategy is fully dependent on traffic profiles and bandwidth availability, and success in a shared network cannot be guaranteed unless the stringent bandwidth, latency, and prioritization requirements of the product are met.

In general, the Agent Greeting feature requires shorter latency across the system. For example, the PG-to-CC path has a maximum one-way latency of 50 ms to support the Agent Greeting feature as designed.

Quality of Service

This section covers the planning and configuration issues to consider when moving to a Unified CCE QoS solution.

Where to Mark Traffic

In planning QoS, a question often arises about whether to mark traffic in Unified CCE or at the network edge. Each option has its pros and cons. Marking traffic in Unified CCE removes the need to define access lists for classifying traffic on IP routers and switches.


Note


While Cisco allows the Microsoft Packet Scheduler with Unified CCE 8.5, its use is deprecated, and future releases will remove this option.


There are several disadvantages to marking traffic in Unified CCE. First, it is hard to make changes. For instance, to change the marking values for the public network traffic, you have to make changes on all the PGs. For a system with more than 30 PGs, for example, all those changes would require quite a lot of work. Second, QoS trust has to be enabled on access-layer routers and switches, which could open the network to malicious packets with inflated marking levels.


Note


In Windows 2008, you can use the Group Policy Editor to create a QoS policy that applies Layer-3 DSCP markings to packets. You can also administer these policies through the Active Directory Domain Controller, which may simplify administration. For more information, see the appropriate Microsoft documentation.


In contrast, marking traffic at the network edge allows for centralized and secured marking policy management, and there is no need to enable trust on access-layer devices. A little overhead is needed to define access lists to recognize Unified CCE packets. For access-list definition criteria on edge routers or switches, see Table 1, Table 2, and Table 3. Do not use port numbers in the access lists for recognizing Unified CCE traffic (although they are provided in the tables for reference purposes) because port numbers make the access lists extremely complex and you would have to modify the access lists every time a new customer instance is added to the system.


Note


A typical Unified CCE deployment has three IP addresses configured on each NIC, and the Unified CCE application uses two of them. Because port numbers are not used in the access lists, use the third IP address for remote monitoring tools such as PCAnywhere or VNC, so that the remote monitoring traffic is not marked as if it were Unified CCE traffic.


How to Mark Traffic

The default Unified CCE QoS markings can be overridden if necessary. The tables below show the default markings, latency requirement, IP address, and port associated with each priority flow for the public and private network traffic respectively, where i# stands for the customer instance number. Notice that in the public network the medium-priority traffic is sent with the high-priority public IP address and marked the same as the high-priority traffic, while in the private network it is sent with the non-high-priority private IP address and marked the same as the low-priority traffic.

For details about Cisco Unified Communications packet classifications, see the Cisco Unified Communications System Solution Reference Network Design (SRND) Guide at http://www.cisco.com/en/US/docs/voice_ip_comm/uc_system/design/guides/UCgoList.html.


Note


Cisco has begun to change the marking of voice control protocols from DSCP 26 (PHB AF31) to DSCP 24 (PHB CS3). However, many products still mark signaling traffic as DSCP 26 (PHB AF31). Therefore, in the interim, reserve both AF31 and CS3 for call signaling.


Table 1 Public Network Traffic Markings (Default) and Latency Requirements

Priority

Server-Side IP Address and Port

One-Way Latency Requirement

DSCP / 802.1p Marking

High

IP address: Router's high-priority public IP address

TCP port:
  • 40003 + (i# * 40) for DMP high-priority connection on A
  • 41003 + (i# * 40) for DMP high-priority connection on B

UDP port: 39500 to 39999 for UDP heartbeats if QoS is not enabled on Unified CCE

200 ms

AF31 / 3

Medium

IP address: Router's high-priority public IP address

TCP port:
  • 40017 + (i# * 40) for DMP medium-priority connection on A
  • 41017 + (i# * 40) for DMP medium-priority connection on B

1000 ms

AF31 / 3

Low

IP address: Router's non-high-priority public IP address

TCP port:
  • 40002 + (i# * 40) for DMP low-priority connection on A
  • 41002 + (i# * 40) for DMP low-priority connection on B

5 seconds

AF11 / 1

Table 2 Router Private Network Traffic Markings (Default) and Latency Requirements

Priority

Server-Side IP Address and Port

One-Way Latency Requirement

DSCP / 802.1p Marking

High

IP address: Router's high-priority private IP address

TCP port: 41005 + (i# * 40) for MDS high-priority connection

UDP port: 39500 to 39999 for UDP heartbeats if QoS is not enabled on Unified CCE

100 ms (50 ms preferred)

AF31 / 3

Medium

IP address: Router's non-high-priority private IP address

TCP port: 41016 + (i# * 40) for MDS medium-priority connection

1000 ms

AF11/1

Low

IP address: Router's non-high-priority private IP address

TCP port:
  • 41004 + (i# * 40) for MDS low-priority connection
  • 41022 + (i# * 40) for CIC StateXfer connection
  • 41021 + (i# * 40) for CLGR StateXfer connection
  • 41023 + (i# * 40) for HLGR StateXfer connection
  • 41020 + (i# * 40) for RTR StateXfer connection

1000 ms

AF11/1

Table 3 PG Private Network Traffic Markings (Default) and Latency Requirements

Priority

Server-Side IP Address and Port

One-Way Latency Requirement

DSCP / 802.1p Marking

High

IP address: PG high-priority private IP address

TCP port:
  • 43005 + (i# * 40) for MDS high-priority connection of PG no.1
  • 45005 + (i# * 40) for MDS high-priority connection of PG no.2

UDP port: 39500 to 39999 for UDP heartbeats if QoS is not enabled on Unified CCE

100 ms (50 ms preferred)

AF31/3

Medium

IP address: PG's non-high-priority private IP address

TCP port:
  • 43016 + (i# * 40) for MDS medium-priority connection of PG no.1
  • 45016 + (i# * 40) for MDS medium-priority connection of PG no.2

1000 ms

AF11/1

Low

IP address: PG's non-high-priority private IP address

TCP port:
  • 43004 + (i# * 40) for MDS low-priority connection of PG no.1
  • 45004 + (i# * 40) for MDS low-priority connection of PG no.2
  • 43023 + (i# * 40) for OPC StateXfer of PG no.1
  • 45023 + (i# * 40) for OPC StateXfer of PG no.2

1000 ms

AF11/1

QoS Configuration

This section presents some QoS configuration examples for the various devices in a Unified CCE system.

Configuring QoS on Unified CCE Router and PG

The QoS setup on the Unified CCE Router and PG is necessary only if the marking is done in the Unified ICM and is trusted by the network. For details, see the Installation Guide for Cisco Unified ICM/CCE Enterprise & Hosted Editions.

Configuring QoS on Cisco IOS Devices

This section presents some representative QoS configuration examples. For details about campus network design, switch selection, and QoS configuration commands, see the Enterprise QoS Solution Reference Network Design (SRND).


Note


The marking values, bandwidth figures, and queuing policies in the examples below are provided for demonstration purposes only. Do not copy and paste the examples without making the corresponding changes for the real working system.


Configuring 802.1q Trunks on IP Switches

If 802.1p is an intended feature and the 802.1p tagging is enabled on the NIC for the visible network, the switch port into which the Unified CCE server plugs must be configured as an 802.1q trunk, as illustrated in the following configuration example:
switchport mode trunk 
switchport trunk encapsulation dot1q 
switchport trunk native vlan [data/native VLAN #] 
switchport voice vlan [voice VLAN #] 
switchport priority-extend trust 
spanning-tree portfast

Configuring QoS Trust

Assuming Unified CCE DSCP markings are trusted, the following commands enable trust on an IP switch port:
mls qos 
    interface mod/port 
        mls qos trust dscp 

Configuring Queuing Policy to Act on Marked Traffic

Using the public (visible) network as an example, the class map below identifies two marking levels, AF31 for high-priority traffic (which actually includes medium-priority public network traffic because it is marked the same as the high-priority traffic by default) and AF11 for low-priority traffic:
class-map match-all ICM_Public_High
    match ip dscp af31
class-map match-all ICM_Public_Low
    match ip dscp af11
If the link is dedicated to Unified CCE Public traffic only, the policy map puts ICM_Public_High traffic into the priority queue with the minimum and maximum bandwidth guarantee of 500 kbps, and it puts ICM_Public_Low traffic into the normal queue with a minimum bandwidth of 250 kbps:
policy-map ICM_Public_Queuing
    class ICM_Public_High
        priority 500
    class ICM_Public_Low
        bandwidth 250

You can also use the priority percent and bandwidth percent commands to assign bandwidth on a percentage basis. If the link is dedicated to Unified CCE traffic only, assign 90% of the link bandwidth to the priority queue, as in the sketch below.
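The following percentage-based variant is illustrative only; it reuses the class names defined above, and the max-reserved-bandwidth setting on the interface may need to be raised to permit these allocations.
policy-map ICM_Public_Queuing_Pct
    class ICM_Public_High
        priority percent 90
    class ICM_Public_Low
        bandwidth percent 5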

If it is a shared link, use the sizing tools introduced in the section on Bandwidth Provisioning to calculate the bandwidth requirement at each priority level, and add it to the allocation for non-CCE traffic in the same queue. For example, suppose the link is shared with Unified CM ICCS traffic and RTP traffic that require 600 kbps and 400 kbps respectively, and the link also carries the private traffic in case of fail-over, with the high-priority and low-priority private Unified CCE traffic requiring 200 kbps and 100 kbps respectively. The configuration is:
policy-map Converged_Link_Queuing
    class RTP
      priority 400
    class ICCS
        bandwidth 600
    class ICM_Public_High
        bandwidth 500
    class ICM_Public_Low
        bandwidth 250
    class ICM_Private_High
        bandwidth 200
    class ICM_Private_Low
        bandwidth 100


Finally, the queuing policy is applied to the outgoing interface:
interface mod/port 
    service-policy output ICM_Public_Queuing

Configuring Marking Policy to Mark Traffic

As discussed earlier, rather than marking traffic in Unified CCE, another option is to mark traffic at the network edge. First, define access lists to recognize Unified CCE traffic flows:
access-list 100 permit tcp host Public_High_IP any
access-list 100 permit tcp any host Public_High_IP
access-list 101 permit tcp host Public_NonHigh_IP any
access-list 101 permit tcp any host Public_NonHigh_IP
Second, classify the traffic using a class map:
class-map match-all ICM_Public_High
    match access-group 100
class-map match-all ICM_Public_Low
    match access-group 101
Third, define the marking policy using a policy map:
policy-map ICM_Public_Marking
    class ICM_Public_High
        set ip dscp af31
    class ICM_Public_Low
        set ip dscp af11
Finally, apply the marking policy to the incoming interface:
interface mod/port 
    service-policy input ICM_Public_Marking
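To verify that packets are matching the classes and being remarked as expected, a command along the following lines can be used (the interface designation is illustrative):
show policy-map interface mod/port input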

QoS Performance Monitoring

Once the QoS-enabled processes are up and running, the Microsoft Windows Performance Monitor (PerfMon) can be used to track the performance counters associated with the underlying links. For details on using PerfMon for this purpose, see the Administration Guide for Cisco Unified ICM/​Contact Center Enterprise & Hosted.

Bandwidth Provisioning

This section discusses bandwidth provisioning considerations for the Unified CCE system.

Bandwidth Requirements for Unified CCE Public and Private Networks

This section briefly describes bandwidth sizing for the public (visible) and private networks.

Public Network Bandwidth

Special tools are available to help calculate the bandwidth needed for the following public network links:

  • Unified CCE Central Controller to Unified CM PG

    A tool is accessible to Cisco partners and Cisco employees for computing the bandwidth needed between the Unified CCE Central Controller and Unified CM. This tool is called the ACD/CallManager Peripheral Gateway to Unified CCE Central Controller Bandwidth Calculator, and it is available (with proper login authentication) through the Steps to Success Portal.

  • Unified CCE Central Controller to Unified IP IVR or Unified CVP PG

    A tool is accessible to Cisco partners and Cisco employees for computing the bandwidth needed between the Unified CCE Central Controller and the IP IVR PG. This tool is called the VRU Peripheral Gateway to Unified Central Controller Bandwidth Calculator, and it is also available through the Steps to Success Portal.

At this time, no tool exists that specifically addresses communications between the Unified CCE Central Controller and the Cisco Unified Customer Voice Portal (Unified CVP) PG. Testing has shown, however, that the tool for calculating bandwidth needed between the Unified CCE Central Controller and the Unified IP IVR PG will also produce accurate measurements for Unified CVP if you perform the following substitution in one field:

For the field labeled Average number of RUN VRU script nodes, substitute the number of Unified CCE script nodes that interact with Unified CVP.

Private Network Bandwidth

The following table is a worksheet to assist with computing the link and queue sizes for the private network. Definitions and examples follow the table.


Note


Minimum link size in all cases is 1.5 Mbps (T1).


Table 4 Worksheet for Calculating Private Network Bandwidth
Component Effective BHCA Multiplication Factor Calculated Link Multiplication Factor Calculated Queue  
Router + Logger   * 30   * 0.8   Total Router + Logger High-Priority Queue Bandwidth
Unified CM PG   * 100   * 0.9   Add these numbers together and total in the box below to get the PG High-Priority Queue Bandwidth
Unified IP IVR PG   * 60   * 0.9  
Unified CVP PG   * 120   * 0.9  
Unified IP IVR or Unified CVP Variables   * ((Number of Variables * Average Variable Length)/40)   * 0.9  
    Total Link Size       Total PG High-Priority Queue Bandwidth

If one dedicated link is used between sites for private communications, add all link sizes together and use the Total Link Size at the bottom of the table above. If separate links are used, one for Router/Logger Private and one for PG Private, use the first row for Router/Logger requirements and the bottom three (out of four) rows added together for PG Private requirements.

Effective BHCA (effective load) on all similar components that are split across the WAN is defined as follows:
Router + Logger
This value is the total BHCA on the call center, including conferences and transfers. For example, 10,000 BHCA ingress with 10% conferences or transfers are 11,000 effective BHCA.
Unified CM PG
This value includes all calls that come through Unified CCE Route Points controlled by Unified CM and/or that are ultimately transferred to agents. This assumes that each call comes into a route point and is eventually sent to an agent. For example, 10,000 BHCA ingress calls coming into a route point and being transferred to agents, with 10% conferences or transfers, are 11,000 effective BHCA.
Unified IP IVR PG
This value is the total BHCA for call treatment and queuing. For example, 10,000 BHCA ingress calls, with all of them receiving treatment and 40% being queued, are 14,000 effective BHCA.
Unified CVP PG
This value is the total BHCA for call treatment and queuing coming through a Unified CVP. 100% treatment is assumed in the calculation. For example, 10,000 BHCA ingress calls, with all of them receiving treatment and 40% being queued, are 14,000 effective BHCA.
Unified IP IVR or Unified CVP Variables
This value represents the number of Call and ECC variables and the variable lengths associated with all calls routed through the Unified IP IVR or Unified CVP, whichever technology is used in the implementation.
Example of a Private Bandwidth Calculation
The table below shows an example calculation for a combined dedicated private link with the following characteristics:
  • BHCA coming into the contact center is 10,000.
  • 100% of calls are treated by Unified IP IVR and 40% are queued.
  • All calls are sent to agents unless abandoned. 10% of calls to agents are transfers or conferences.
  • There are four Unified IP IVRs used to treat and queue the calls, with one PG pair supporting them.
  • There is one Unified CM PG pair for a total of 900 agents.
  • Calls have ten 40-byte Call Variables and ten 40-byte ECC variables.
Table 5 Example Calculation for a Combined Dedicated Private Link
Component Effective BHCA Multiplication Factor Calculated Link Multiplication Factor Calculated Queue
Router + Logger 11,000 * 30 330,000 * 0.8 264,000 Total Router + Logger High-Priority Queue Bandwidth
Unified CM PG 11,000 * 100 1,100,000 * 0.9 990,000 Add these three numbers together and total in the box below to get the PG High-Priority Queue Bandwidth
Unified IP IVR PG 14,000 * 60 840,000 * 0.9 756,000
Unified CVP PG 0 * 120 0 * 0.9 0
Unified IP IVR or Unified CVP Variables 14,000 * ((Number of Variables * Average Variable Length)/40) 280,000 * 0.9 252,000
    Total Link Size 2,550,000   1,998,000 Total PG High-Priority Queue Bandwidth
For the combined dedicated link in this example, the results are as follows:
  • Total Link Size = 2,550,000 bps
  • Router/Logger high-priority bandwidth queue of 264,000 bps
  • PG high-priority queue bandwidth of 1,998,000 bps
If this example were implemented with two separate links, Router/Logger private and PG private, the link sizes and queues would be as follows:
  • Router/Logger link of 330,000 bps (actual minimum link is 1.5 Mb, as defined earlier), with high-priority bandwidth queue of 264,000 bps
  • PG link of 2,220,000 bps, with high-priority bandwidth queue of 1,998,000 bps
When using Multilink Point-to-Point Protocol (MLPPP) for private networks, set the following attributes for the MLPPP link:
  • Use per-destination load balancing instead of per-packet load balancing.

    Note


    You must have two separate multilinks with one link each for per-destination load balancing.


  • Enable Point-to-Point Protocol (PPP) fragmentation to reduce serialization delay. (A configuration sketch follows this list.)
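The following minimal MLPPP sketch reflects these recommendations for one of the two single-member bundles. The interface names, addresses, and the 10 ms fragment delay are illustrative assumptions only.
interface Multilink1
    ip address 10.0.0.1 255.255.255.252
    ppp multilink
    ! Fragmentation plus interleaving reduces serialization delay for priority packets
    ppp multilink fragment delay 10
    ppp multilink interleave
!
interface Serial0/0/0
    encapsulation ppp
    ppp multilink
    ppp multilink group 1
! Per-destination load balancing is the CEF default; do not configure
! ip load-sharing per-packet on these interfaces.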

Bandwidth Requirements for Unified CCE Clustering Over the WAN

For details about Unified CCE clustering over the WAN, see IPT: Clustering Over the WAN.

Bandwidth must be guaranteed across the highly available (HA) WAN for all Unified CCE private, public, CTI, and Unified Communications Manager intra-cluster communication signaling (ICCS). Moreover, bandwidth must be guaranteed for any calls going across the highly available WAN. Minimum total bandwidth required across the highly available WAN for all Unified CCE signaling is 2 Mbps.

In addition to the bandwidth requirements for the private and public networks, this section adds bandwidth analysis for the connections from Unified IP IVR or Unified CVP PG to Unified IP IVR or Unified CVP, CTI Server to CTI OS, and Unified CM intra-cluster communication signaling (ICCS).

Unified IP IVR or Unified CVP PG to Unified IP IVR or Unified CVP

At this time, no tool exists that specifically addresses communication between the Unified IP IVR or Unified CVP PG and the Unified IP IVR or Unified CVP. However, the tool mentioned in the previous section produces a fairly accurate measurement of bandwidth needed for this communication. Bandwidth consumed between the Unified CCE Central Controller and Unified IP IVR or Unified CVP PG is very similar to the bandwidth consumed between the Unified IP IVR or Unified CVP PG and the Unified IP IVR or Unified CVP.

The VRU Peripheral Gateway to Unified CCE Central Controller Bandwidth Calculator tool is available (with proper login authentication) through the Cisco Steps to Success Portal.

If the Unified IP IVR or Unified CVP PGs are split across the WAN, the total bandwidth required is double what the tool reports: once for Unified CCE Central Controller to Unified IP IVR or Unified CVP PG, and once for Unified IP IVR or Unified CVP PG to Unified IP IVR or Unified CVP.

CTI Server to CTI OS

The worst case for bandwidth utilization across the WAN link between the CTI OS and CTI Server occurs when the CTI OS is remote from the CTI Server. Use a bandwidth queue to guarantee availability for this worst case.

For this model, the following simple formula can be used to compute worst-case bandwidth requirements:
  • With no Expanded Call Context (ECC) or Call Variables:

    BHCA * 20 = bps

  • With ECC and/or Call Variables

    BHCA * (20 + ((Number of Variables * Average Variable Length) / 40)) = bps

Example: With 10,000 BHCA and 20 ECC variables with an average length of 40 bytes:

10,000 * (20 + ((20 * 40) / 40)) = 10,000 * 40 = 400,000 bps = 400 kbps

CTI Server to Finesse

Use the Finesse Bandwidth Calculator to determine the bandwidth required where Finesse connects to the CTI server over a WAN link.

Unified Communications Manager Intra-Cluster Communication Signaling (ICCS)

The bandwidth required for Intra-Cluster Communication Signaling (ICCS) between Unified Communications Manager subscriber nodes is significantly higher when Unified CCE is deployed, due to the number of call redirects and additional CTI/JTAPI communications encompassed in the intra-cluster communications. The following formulae may be used to calculate the required bandwidth for the ICCS and database traffic between Unified CM subscriber nodes when they are deployed with Unified CCE:

  • Unified Communications Manager releases prior to 6.1
    • Intra-Cluster Communications Signaling (ICCS)

      BHCA * 200 = bps

      This is the bandwidth required between each site where Unified Communications Manager subscribers are connected to Voice Gateways, agent phones, and Agent PGs.
    • Database and other communications

      644 kbps for each subscriber remote from the publisher
  • Unified CM Release 6.1 and later releases
    • Intra-Cluster Communications Signaling (ICCS)

      Total Bandwidth (Mbps) = ((Total BHCA) / 10,000) * [2.25 + (0.006 * Delay)], where Delay = round-trip-time delay in msec. (A worked example follows this list.)

      This is the bandwidth required between each Unified Communications Manager subscriber that is connected to Voice Gateways, agent phones, and Agent PGs.

    • Database and other communications

      1.544 Mbps for each subscriber remote from the publisher

      The BHCA value to use for the ICCS formulae above is the total BHCA for all calls coming into the contact center.
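As an illustrative example (the call volume and delay figures are assumptions, not recommendations), a contact center with 20,000 total BHCA and a 40 ms round-trip delay requires:

(20,000 / 10,000) * [2.25 + (0.006 * 40)] = 2 * 2.49 = 4.98 Mbps of ICCS bandwidth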

These bandwidth requirements assume proper design and deployment based on the recommendations contained throughout this document. Inefficient design (for example, if ingress calls to Site 1 are treated in Site 2) will cause additional intra-cluster communications, possibly exceeding the defined bandwidth requirements.

Bandwidth Requirements for Gateway PG to System PG

This section provides some basic guidelines for provisioning bandwidth for the connection between the gateway PG and the system PG.

Bandwidth Requirements for Unified CCE Gateway PG to Central Controller

No special considerations are necessary for the PG-to-CC connection beyond those that apply to other TDM PGs.

If agent reporting is not used, uncheck the Enable Agent Reporting check box on the Agent Distribution tab of the PG Explorer to avoid sending unnecessary data over the link. For more information, see Bandwidth and Latency Requirements.

Bandwidth Requirements for Unified CCE Gateway PG to System PG

The figure below illustrates the connection between the parent PG/PIM and the child system PG.

Figure 3. Connection Between Gateway PG and System PG


Note


Do not deploy the gateway PG remote from the system PG that it is monitoring.


The following factors affect the amount of data coming over the link once it is initialized:
  • Message sizes can vary depending on their content (such as the size of extensions, agent IDs, and call data). A Route Request with no data, for example, can be a very small message. If all call variables and ECC variables are populated with large values, this will drastically affect the size of the message.
  • Call scenarios can cause great variation in the number of messages per call that are transmitted over the line. A simple call scenario might cause 21 messages to be transmitted over the line. More complex call scenarios involving queuing, holds and retrieves, conferences, or transfers add greatly to the number of messages per call.
  • The more skill groups to which an agent belongs, the more messages are transmitted over the line. In a simple call scenario, each additional skill group adds two messages per call. These messages are approximately 110 bytes each, depending on field sizes.
Bandwidth Calculation for Basic Call Flow

A basic call flow (simple ACD call with no hold, retrieve, conference, or transfer) with a single skill group will typically generate 21 messages; plan for a minimum of approximately 2700 bytes of required bandwidth for it.

In a basic call flow, there are four places where call variables and ECC data can be sent. Thus, if you use call data and/or ECC variables, they will all be sent four times during the call flow. Using a lot of call data could easily increase (by double, triple or more) the 2700 bytes of estimated bandwidth per call.


Note


Call variables used on the child PG are transmitted to the parent PG regardless of their use or the setting of the MAPVAR parameter. For example, if call variables 1 through 8 are used on the child PG but are never referenced on the parent PG (and assume MAPVAR = EEEEEEEEEE, meaning Export all but Import nothing), they are still transmitted to the parent PG, where the filtering takes place; therefore bandwidth is still required. In the reverse situation, bandwidth is spared: if the map setting is MAPVAR = IIIIIIIIII (Import all but Export nothing), call variable data is not transmitted to the child PG on a ROUTE_SELECT response.


Basic Call Flow Example
Assume a call rate of 300 simple calls per minute (5 calls per second) and the agents are all in a single skill group with no passing of call variables or ECC data. The required bandwidth in this case is:

5 * 2700 = 13,500 bytes per second = 108 kbps of required bandwidth


Note


A more complex call flow or a call flow involving call data could easily increase this bandwidth requirement.


Bandwidth Requirements for Finesse Client to Finesse Server

The most expensive operation from a network perspective is the agent or supervisor login. This operation involves the web page load and includes the CTI login and the display of the initial agent state. After the desktop web page loads, the required bandwidth is significantly less.

The number of bytes transmitted at the time an agent logs in is approximately 1.1 megabytes. Because of the additional gadgets on the supervisor desktop (Team Performance, Queue Statistics), this number is higher for a supervisor login – approximately 1.5 megabytes. Cisco does not mandate a minimum bandwidth for the login operations. You must determine how long you want the login to take and determine the required bandwidth accordingly. To help you with this calculation, Cisco Finesse provides a bandwidth calculator to estimate the bandwidth required to accommodate the client login time.
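As an illustrative calculation based on the figures above (the link speed is an assumption chosen for demonstration only): an agent login transfers approximately 1.1 megabytes, or about 8.8 megabits; if 1 Mbps is available to that client, the login transfer alone takes roughly 9 seconds, before any other traffic on the link is considered.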

Note that during failover, agents are redirected to the alternate Finesse server and required to log in again. For example, if you configure your bandwidth so that login takes 5 minutes and a client failover event occurs, agents will take 5 minutes to successfully log in to the alternate Finesse server.

After login is complete, the most intensive operation for both an agent and a supervisor is making an outbound call to a route point. For the supervisor, updates to the Team Performance and Queue Statistics gadgets may be occurring concurrently. You can use the Cisco Finesse bandwidth calculator to calculate the total bandwidth required for connections between all Finesse clients and the Finesse server.


Note


The Cisco Finesse bandwidth calculator does not include the bandwidth required for any third-party gadgets in the Finesse container or any other applications running on the agent desktop client.


Other applications at the remote client location may compete for total bandwidth to that remote client.

The bandwidth listed in the bandwidth calculator must be available for Finesse after you account for the bandwidth used by other applications, including voice traffic that may share this bandwidth. The performance of the Finesse interface, and potentially the quality of voice sharing this bandwidth, may degrade if sufficient bandwidth is not continuously available.

Auto Configuration

If auto configuration is used, it is possible that the entire agent, skill group, and route-point configuration can be transmitted from the child PG to the parent PG. If not much bandwidth is available, it could take considerable time for this data to be transmitted.

The table below lists the approximate number of bytes (worst case) that are transmitted for each of the data entities. If you know the size of the configuration on a child PG, you can calculate the total number of bytes of configuration data that is transmitted. Note that the values in the table below are worst-case estimates that assume transmitting only one item per record, with each field having the maximum possible size (which is extremely unlikely).

Table 6 Bytes Transmitted per Data Item Under Worst-Case Conditions

Data Item Transmitted                 Size
Agent                                 500 bytes
Call type                             250 bytes
Skill group                           625 bytes
Device (route point and so forth)     315 bytes

For example, if the child PG has 100 agents, 10 call types, 5 skill groups, and 20 route points, then the amount of configuration data transmitted can be estimated as follows:

100 agents * 500 bytes = 50,000 bytes

10 call types * 250 bytes = 2500 bytes

5 skill groups * 625 bytes = 3125 bytes

20 route points * 315 bytes = 6300 bytes

50,000 + 2500 + 3125 + 6300 = 61,925 bytes

The total amount of data (approximate maximum) transmitted for this configuration is 61,925 bytes.
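
A minimal sketch of the same worst-case estimate, using the per-item sizes from Table 6 (the function and dictionary names are illustrative):

    WORST_CASE_BYTES = {"agent": 500, "call_type": 250, "skill_group": 625, "device": 315}

    def config_sync_bytes(agents, call_types, skill_groups, devices):
        """Worst-case configuration data sent from a child PG to the parent PG."""
        return (agents * WORST_CASE_BYTES["agent"]
                + call_types * WORST_CASE_BYTES["call_type"]
                + skill_groups * WORST_CASE_BYTES["skill_group"]
                + devices * WORST_CASE_BYTES["device"])

    print(config_sync_bytes(100, 10, 5, 20))  # 61925 bytes, matching the example above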

Options for Gateway PG and Unified CCE

To mitigate the bandwidth demands, use any combination of the following options:

  • Use fewer call and ECC variables on the child PG.

    Certain messages transmit call data from the child Unified CCE system to the parent. Reducing the size and quantity of variables used will reduce the data transmitted for these events. (See the note in Bandwidth Requirements for Unified CCE Gateway PG to System PG.)

  • Use the MAPVAR = IIIIIIIIII and MAPECC = IIIIIIIIII peripheral configuration parameters.

    If you do not use the MAPVAR and MAPECC options (which means that the settings default to MAPVAR = BBBBBBBBBB and MAPECC = BBBBBBBBBB), then for every ROUTE_SELECT sent to the child, all Call and ECC variables used on the parent are also sent to the child. If you use the I (Import) or N (None) option for MAPVAR, MAPECC, or both, the Gateway PG does not send these variables over the link to the child system. If many call variables or ECC variables are used on the parent, these parameter settings can save some bandwidth.


Note


Eliminating Import of data (that is, not using the I or B setting) does not save any bandwidth because, even though the Gateway PG does not import the data, the child Unified CCE system still transmits it.


Outbound Option Bandwidth Provisioning and QoS Considerations

In many Outbound Option deployments, all components are centralized; therefore, there is no WAN network traffic to consider.

For some deployments, if the outbound call center is in one country (for example, India) and the customers are in another country (for example, US), then the WAN network structure must be considered in a Unified CCE environment under the following conditions:
  • In a distributed Outbound Option deployment, when the Voice Gateways are separated from the Outbound Option Dialer servers by a WAN.
  • When using Unified CVP deployments for transfer to an IVR campaign, and the Unified CVP servers are separated from the Outbound Option Dialer servers by a WAN. Provide Unified CVP with its own Cisco Unified SIP Proxy server in the local cluster to reduce the WAN traffic.
  • When using IP IVR deployments for transfer to an IVR campaign, and the IP IVR is separated from the Outbound Option Dialer servers by a WAN. Provide IP IVR with its own Unified CM cluster to reduce the WAN traffic.
  • When deploying a SIP Dialer solution for transfer to an IVR campaign, and the Cisco Unified SIP Proxy servers for the SIP Dialers are separated from the Outbound Option Dialer servers by a WAN.
  • When the third-party recording server is separated from the Outbound Option Dialer servers by a WAN. Configure the recording server local to the Voice Gateways.

Adequate bandwidth provisioning is an important component in the success of the Outbound Option deployments.

Distributed SIP Dialer Deployment

SIP is a text-based protocol; therefore, the packets used are larger than with H.323. The typical SIP outbound call flow uses an average of 12,500 bytes per call that is transferred to an outbound agent. The average hit call signaling bandwidth usage is:

Hit Call Signaling Bandwidth = (12,500 bytes/call) (8 bits/byte) = 100,000 bits per call = 100 Kb per call

The typical SIP outbound call flow uses about 6,200 bytes per call that is disconnected by the outbound dialer. Those outbound calls can be the result of a busy signal, a ring with no answer, an invalid number, and so forth. The average non-hit call signaling bandwidth usage is:

Non-Hit Call Signaling Bandwidth = (6,200 bytes/call) (8 bits/byte) = 49,600 bits per call = 49.6 Kb per call

Codec Bandwidth = 80 Kbps per call for g.711 Codec,
or 26 Kbps per call for g.729 Codec

Agent-Based Campaign – No SIP Dialer Recording

The figure below shows an example of the distributed Outbound SIP Dialer deployment for an agent-based campaign.

Figure 4. Distributed Outbound SIP Dialer Deployment for an Agent-Based Campaign

The average WAN bandwidth usage in this case is:

WAN Bandwidth = Calls Per Second *
(Hit Rate * (Codec Bandwidth * Average Call Duration + Hit Call Signaling Bandwidth)
+ (1 – Hit Rate) * Non-Hit Call Signaling Bandwidth)
= Kbps

Example 1
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the agent-based campaign, and a WAN link with g.711 codec and average call duration of 40 seconds, the bandwidth usage is:

60 * (20% * (80 * 40 + 100) + (1 – 20%)*49.6) = 41980.8 kbps = 41.98 Mbps

Example 2
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the agent-based campaign, and a WAN link with g.729 codec and average call duration of 40 seconds, the bandwidth usage is:

60 * (20% * (26 * 40 + 100) + (1 – 20%)*49.6) = 16060.8 kbps = 16.06 Mbps
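
A minimal Python sketch of the formula above that reproduces Examples 1 and 2 (the signaling and codec constants come from this section; the function and parameter names are illustrative):

    HIT_SIGNALING_KB = 100       # Kb per call transferred to an outbound agent (SIP)
    NON_HIT_SIGNALING_KB = 49.6  # Kb per call disconnected by the dialer (SIP)
    CODEC_KBPS = {"g711": 80, "g729": 26}

    def agent_campaign_wan_kbps(cps, hit_rate, codec, avg_call_duration_s):
        """Average WAN bandwidth for an agent-based campaign without SIP Dialer recording."""
        per_call = (hit_rate * (CODEC_KBPS[codec] * avg_call_duration_s + HIT_SIGNALING_KB)
                    + (1 - hit_rate) * NON_HIT_SIGNALING_KB)
        return cps * per_call

    print(agent_campaign_wan_kbps(60, 0.20, "g711", 40))  # ~41980.8 kbps (Example 1)
    print(agent_campaign_wan_kbps(60, 0.20, "g729", 40))  # ~16060.8 kbps (Example 2)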

Agent-Based Campaign – SIP Dialer Recording

The average WAN bandwidth usage in this case is:

WAN Bandwidth = Calls Per Second *
     (Codec Bandwidth * Average Call Duration
     + Hit Rate * Hit Call Signaling Bandwidth
     + (1 - Hit Rate) * Non-Hit Call Signaling Bandwidth)
     = Kbps

Example 3
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the agent campaign, and a WAN link with g.711 codec and average call duration of 40 seconds, the bandwidth usage is:

60 * (80 * 40 + 20% *100 + (1 – 20%)*49.6) = 199180.8 kbps = 199.18 Mbps

Example 4
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the agent campaign, and a WAN link with g.729 codec and average call duration of 40 seconds, the bandwidth usage is:

60 * (26 * 40 + 20% *100 + (1 – 20%)*49.6) = 67660.8 kbps = 67.66 Mbps

Transfer-To-IVR Campaign – No SIP Dialer Recording

The following figures show examples of the distributed Outbound SIP Dialer deployment for transfer to an IVR campaign.
Figure 5. Distributed Outbound SIP Dialer Deployment for Transfer to an IVR Campaign Using Cisco Unified CVP

Figure 6. Distributed Outbound SIP Dialer Deployment for Transfer to an IVR Campaign Using Cisco Unified IP IVR

The average WAN bandwidth usage in this case is:

WAN Bandwidth = Calls Per Second * Hit Rate *
  Hit Call Signaling Bandwidth + Calls Per Second * (1 - Hit Rate) *
  Non-Hit Call Signaling Bandwidth
  = Kbps

Example 5
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the transfer-to-IVR campaign, and a WAN link with g.711 codec, the bandwidth usage is:

60 * 20% * 100 + 60 * (1 – 20%) * 49.6 = 3600 kbps = 3.6 Mbps
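
A corresponding sketch for the transfer-to-IVR case without recording, where only call signaling crosses the WAN (signaling constants as stated earlier in this section; names are illustrative):

    def transfer_to_ivr_wan_kbps(cps, hit_rate, hit_kb=100, non_hit_kb=49.6):
        """Average WAN bandwidth for a transfer-to-IVR campaign without SIP Dialer recording."""
        return cps * (hit_rate * hit_kb + (1 - hit_rate) * non_hit_kb)

    print(transfer_to_ivr_wan_kbps(60, 0.20))  # ~3580 kbps, roughly the 3.6 Mbps of Example 5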

Transfer-To-IVR Campaign – SIP Dialer Recording

The average WAN bandwidth usage in this case is:

WAN Bandwidth = Calls Per Second * (Codec Bandwidth * Average Call Duration
     + Hit Rate * Hit Call Signaling Bandwidth
     + (1 - Hit Rate) * Non-Hit Call Signaling Bandwidth)
     = Kbps

Example 6
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the transfer-to-IVR campaign, and a WAN link with g.711 codec and average call duration of 40 seconds, the bandwidth usage is:

60 * (80 * 40 + 20% *100 + (1 – 20%)*49.6) = 199180.8 kbps = 199.18 Mbps

Example 7
With call throttling of 60 cps on the SIP Dialer, a 20% hit rate for the transfer-to-IVR campaign, and a WAN link with g.729 codec and average call duration of 40 seconds, the bandwidth usage is:

60 * (26 * 40 + 20% *100 + (1 – 20%)*49.6) = 67660.8 kbps = 67.66 Mbps

Distributed SCCP Dialer Deployment


Note


The SCCP Dialer is deprecated in Unified CCE release 10.0(1).


Call control signaling uses H.323 over the WAN between the Voice Gateway and the Unified CM to which the SCCP dialers are connected. The typical H.323 outbound call flow uses an average of 4,000 bytes per call that is transferred to an outbound agent. The average hit call signaling bandwidth usage is:

Hit Call Signaling Bandwidth = (4,000 bytes/call) (8 bits/byte) = 32,000 bits per call = 32 Kb per call

The typical H.323 outbound call flow uses about 1,000 bytes per call that is disconnected by the outbound dialer. Those outbound calls can be the result of a busy signal, a ring with no answer, an invalid number, and so forth. The average non-hit call signaling bandwidth usage is:

Non-Hit Call Signaling Bandwidth = (1,000 bytes/call) (8 bits/byte) = 8,000 bits per call = 8 Kb per call
        Codec Bandwidth = 80 Kbps per call for g.711 Codec,
        or 26 Kbps per call for g.729 Codec

The figure below shows an example of the distributed Outbound SCCP Dialer deployment for an agent-based campaign.

Figure 7. Distributed Outbound SCCP Dialer Deployment for an Agent-Based Campaign

The following figures show examples of the distributed Outbound SCCP Dialer deployment for transfer to an IVR campaign.

Figure 8. Distributed Outbound SCCP Dialer Deployment for Transfer to an IVR Campaign Using Cisco Unified CVP

Figure 9. Distributed Outbound SCCP Dialer Deployment for Transfer to an IVR Campaign Using Cisco Unified IP IVR

The average WAN bandwidth usage in this case is:

WAN Bandwidth = Calls Per Second * Number of SCCP Dialers *
      (Codec Bandwidth * Average Call Duration
     + Hit Rate * Hit Call Signaling Bandwidth
     + (1 - Hit Rate) * Non-Hit Call Signaling Bandwidth)
     = Kbps

Example 8

For two SCCP Dialers with call throttling of 5 cps, a 20% hit rate for the agent campaign, and a WAN link with g.711 codec and average call duration of 40 seconds, the bandwidth usage is:

5*2 * (80 * 40 + 20% *100 + (1 – 20%)*49.6) = 33196.8 kbps = 33.20 Mbps

Example 9

For two SCCP Dialers with call throttling of 5 cps each, a 20% hit rate for the transfer-to-IVR campaign, and a WAN link with g.729 codec and average call duration of 40 seconds, the bandwidth usage is:

5 * 2 * (26 * 40 + 20% *100 + (1 – 20%)*49.6) = 11276.8 kbps = 11.28 Mbps

Bandwidth Requirements and QoS for Agent and Supervisor Desktops

There are many factors to consider when assessing the traffic and bandwidth requirements for Agent and Supervisor Desktops in a Unified CCE environment. While the VoIP packet stream bandwidth is the predominant contributing factor to bandwidth usage, other factors such as call control, agent state signaling, Silent Monitoring, recording, and statistics must also be considered.

VoIP packet stream bandwidth requirements are derived directly from the voice codec deployed (g.729, g.711, and so forth) and can range from 4 kbps to 64 kbps per voice stream. Therefore, the contact center's call profile must be well understood because it defines the number of straight calls (incoming or outgoing), consultative transfers, and conference calls, and consequently the number of VoIP packet streams, that are active on the network. The number of VoIP packet streams is typically slightly greater than one per agent, to account for held calls, Silent Monitoring sessions, active recordings, consultative transfers, and conference calls.

Call control, agent state signaling, Silent Monitoring, recording, and statistics bandwidth requirements can collectively represent as much as 25% to 50% of total bandwidth utilization. While VoIP packet stream bandwidth calculations are fairly straightforward, these other factors depend heavily on implementation and deployment details and are therefore discussed further in the sections below.
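
As an illustration of how these factors combine, the sketch below adds a non-RTP overhead share on top of the VoIP stream bandwidth. The 1.2 streams-per-agent factor and the 35% overhead share are example values chosen from the ranges described above, not Cisco-published constants.

    def desktop_site_bandwidth_kbps(agents, per_stream_kbps=26,
                                    streams_per_agent=1.2, overhead_share=0.35):
        """Rough site bandwidth: VoIP streams plus call control, signaling, and statistics.

        overhead_share is the fraction of total bandwidth consumed by non-RTP traffic
        (the text above cites 25% to 50%).
        """
        voip_kbps = agents * streams_per_agent * per_stream_kbps
        return voip_kbps / (1 - overhead_share)

    print(round(desktop_site_bandwidth_kbps(100)))  # ~4800 kbps for 100 agents using g.729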

Because WAN links are usually the lowest-speed circuits in a Cisco Unified Communications network, attention must be given not only to bandwidth, but also to reducing packet loss, delay, and jitter where voice traffic is sent across these links. g.729 is the preferred codec for use over the WAN because the g.729 method for sampling audio introduces the least latency (only 30 ms) in addition to any other delays caused by the network. The g.729 codec also provides good voice quality with good compression characteristics, resulting in a relatively low (8 kbps) bandwidth utilization per stream.

Consider the following QoS factors:
  • Total delay budget for latency, taking into account WAN latency, serialization delays for any local area networks traversed, and any forwarding latency in the network devices.
  • Impact of routing protocols. For example, Enhanced Interior Gateway Routing Protocol (EIGRP) offers quick convergence with conservative use of bandwidth, and EIGRP convergence has a negligible impact on call processing and Unified CCE agent logins.
  • The method used for silently monitoring and recording agent calls, because the method used dictates the bandwidth load on a given network link.
  • Cisco Unified Mobile Agent deployments, where QoS mechanisms help optimize WAN bandwidth utilization.
  • Use of advanced queuing and scheduling techniques in the distribution and core layers as well.

Bandwidth Requirements for CTI OS Agent Desktop

This section addresses the traffic and bandwidth requirements between CTI OS Agent Desktop and the CTI OS server. These requirements are important in provisioning the network bandwidth and QoS required between the agents and the CTI OS server, especially when the agents are remote over a WAN link. Even if the agents are local over Layer 2, it is important to account for the bursty traffic that occurs periodically because this traffic presents a challenge to bandwidth and QoS allocation schemes and can impact other mission-critical traffic traversing the network.

CTI-OS Client/Server Traffic Flows and Bandwidth Requirements

The network bandwidth requirements increase linearly as a function of agent skill group membership. The skill group statistics are the most significant sizing criterion for network capacity, while the effect of system call control traffic is a relatively small component of the overall network load. CTI OS Security also affects the network load: when CTI OS Security is enabled, the bandwidth requirement increases significantly because of the OpenSSL overhead.

The following table shows the types of messages for each CTI OS application.

Table 7 Messaging Type By CTI OS Application

CTI OS Agent Desktop:
  Agent state changes
  Call Control
  Call status information
  Chat messages
  Agent and skill-group statistics

CTI OS Supervisor Desktop:
  Agent state changes
  Call Control
  Call status information
  Monitoring agent states
  Silent Monitoring
  Chat messages
  Agent and skill-group statistics

All Agents Monitor Application:
  Agent state changes for all agents

Silent Monitoring Bandwidth Usage

Silent Monitoring provides supervisors with a means of listening in on agent calls in Unified CCE call centers that use CTI OS. Voice packets sent to and received by the monitored agent’s IP hardware phone are captured from the network and sent to the supervisor desktop. At the supervisor desktop, these voice packets are decoded and played on the supervisor's system sound card.

Silent Monitoring of an agent consumes roughly the same network bandwidth as an additional voice call. If a single agent requires bandwidth for one voice call, then the same agent being silently monitored would require bandwidth for two concurrent voice calls.

To calculate the total network bandwidth required for your call load, you would then multiply the number of calls by the per-call bandwidth figure for your particular codec and network protocol.
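
A minimal sketch of that rule of thumb (the per-call bandwidth figure is a parameter you would take from your codec and network protocol; names are illustrative):

    def monitored_load_kbps(active_calls, monitored_calls, per_call_kbps=80):
        """Each silently monitored call adds roughly one extra call's worth of bandwidth."""
        return (active_calls + monitored_calls) * per_call_kbps

    # Example: 50 active g.711 calls, 5 of which are silently monitored:
    print(monitored_load_kbps(50, 5))  # 4400 kbps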

CTI OS Server Bandwidth Calculator

CTI OS provides a bandwidth calculator that examines the CTI OS Server-to-CTI OS Desktop bandwidth, as illustrated in the figure below. It calculates the total, agent, and supervisor bandwidth requirements with CTI OS Security turned on or off.

Figure 10. CTI OS Server-to-CTI OS Desktop Communication

Bandwidth Reductions for CTI OS Server and CTI OS Agent Desktop

To mitigate the bandwidth demands, use any combination of the following options.

Configure fewer statistics

CTI OS allows the system administrator to specify, in the registry, the statistics items that are sent to all CTI OS clients. The choice of statistics affects the size of each statistics packet and, therefore, the network traffic. Configuring fewer statistics decreases the traffic sent to the agents. Currently, the statistics cannot be specified on a per-agent basis. For more information about agent statistics, see the CTI OS System Manager's Guide for Cisco Unified ICM/Contact Center Enterprise & Hosted.

Turn off statistics on a per-agent basis

You can turn off statistics on a per-agent basis by using different connection profiles. For example, if Unified Mobile Agents use a connection profile with statistics turned off, these client connections have no statistics traffic between the CTI OS Server and the Agent or Supervisor Desktop. This option could eliminate the need for a separate CTI OS Server in remote locations.

If more limited statistics traffic is acceptable for the remote site, a remote supervisor or selected agents can still receive statistics by logging in through a different connection profile with statistics enabled.

If Unified Mobile Agents have their skill group statistics turned off, but the supervisor needs to see the agent skill group statistics, the supervisor could use a different connection profile with statistics turned on. In this case, the volume of traffic sent to the supervisor is considerably less. For each skill group and agent (or supervisor), the packet size for a skill-group statistics message is fixed. So an agent in two skill groups would get two packets, and a supervisor observing five skill groups would get five packets. Assume there are 10 agents at a remote site and one supervisor, all with the same two skill groups configured. In Unified CCE, the supervisor sees all the statistics for the skill groups to which any agent in the agent team belongs. If only the supervisor has statistics turned on to observe the two skill groups and agents have statistics turned off, then this approach reduces skill-group statistics traffic by 90%.
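
The following sketch illustrates that proportion by counting one fixed-size statistics packet per skill group per recipient at each refresh, as described above (function and variable names are illustrative):

    def skill_stats_packets_per_refresh(recipients_with_stats, skill_groups_each):
        """Skill-group statistics packets sent per refresh interval."""
        return recipients_with_stats * skill_groups_each

    everyone = skill_stats_packets_per_refresh(10 + 1, 2)      # 10 agents plus 1 supervisor
    supervisor_only = skill_stats_packets_per_refresh(1, 2)    # supervisor profile only
    print(round(1 - supervisor_only / everyone, 2))  # 0.91, roughly the 90% reduction above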

Also, at the main location, if agents want to have their skill-group statistics turned on, they could do so without impacting the traffic to the remote location if the supervisor uses a different connection profile. Again, in this case no additional CTI OS servers are required.

In the case where there are multiple remote locations, assuming only supervisors must see the statistics, it is sufficient to have only one connection profile for all remote supervisors.

Turn off all skill group statistics in CTI OS

If skill group statistics are not required, turn them all off. Doing so eliminates all statistics traffic between the CTI OS Server and the Agent or Supervisor Desktop.

Bandwidth Requirements for Cisco Agent Desktop

This section presents some design considerations for provisioning network bandwidth, providing security and access to corporate data stores, and ensuring Quality of Service (QoS) for Unified CCE installations that include the Cisco Agent Desktop (CAD) product.

Silent Monitoring Bandwidth Usage

The Silent Monitoring feature of the CAD desktop software, which includes listening to a live call, recording an agent call, and listening to a recorded call, has the largest bandwidth requirements for the CAD product. Properly configuring this feature is especially important for Unified Mobile Agents who are connected to the main site by a WAN connection.

To access the Silent Monitoring feature, a request is sent to a VoIP provider. The VoIP provider captures from the network, or reads from disk, the voice streams representing the call (two voice streams per call) and sends them back to the requestor. The requestor receives the streams and either decodes them for listening or stores them to disk. The bandwidth requirements detailed in this section are for the network links between the requestor and provider.

Silent Monitoring Requestors
There are two possible requestors in the CAD software:
  • Cisco Supervisor Desktop
  • Recording and Playback service

Cisco Supervisor Desktops send Silent Monitoring requests when the supervisor wants to listen to an agent's call in real time or listen to a call that was recorded earlier. The Recording and Playback service sends recording requests when a supervisor or agent wants to record a call. For listening to or recording a live call, the VoIP provider captures the voice streams and sends them to the requestor. On the supervisor's desktop, these streams are decoded and played through the supervisor's desktop sound card. For recording, the Recording and Playback service receives the voice streams and saves them to disk.

A Unified CCE installation may have one or two Recording services.

Silent Monitoring Providers
There are three possible VoIP providers in the CAD software:
  • Cisco Agent Desktop
  • VoIP Monitor service
  • Recording & Playback service

The Cisco Agent Desktop application contains a module referred to as the Desktop Monitor service, which runs on the agent’s desktop. It is responsible for processing Silent Monitoring requests only for the agent logged into the CAD application on the desktop. It captures voice packets sent to the phone or IP Communicator software phone associated with the logged-in agent. The phone must be a Cisco Unified IP Phone 7910, 7940, 7960, or 7970 connected in series with the agent desktop on the network. These phones are supported because they contain an additional network port that allows the phone to be connected to a network and also to an agent’s computer. They also support the ability of hubs and switches to propagate network traffic through this additional port. This capability is what allows the Desktop Monitor service to see the phone conversations on the agent’s phone.

By default, this service is active on all agent desktops when the application is started. After initial installation of the CAD servers, all agents are already configured to use the Desktop Monitor service for the Silent Monitoring feature.

A VoIP Monitor service is able to handle multiple requests for Silent Monitoring simultaneously. It captures packets directly from the switch through the switch's Switched Port Analyzer (SPAN) configuration. An installation may have up to five VoIP Monitor services on different machines. Off-board VoIP services may be installed at remote office locations. In some instances, this service may be required due to network complexity and capacity planning. Agents must be explicitly configured to use a VoIP Monitor service if this is the method desired for Silent Monitoring for that agent’s device.


Note


Cisco Unified IP Phone Agents who do not have a desktop must be configured to use a VoIP Monitor service for the Silent Monitoring feature.


The Recording and Playback service may also provide the two streams representing a phone call when a supervisor plays back a recorded agent call. In this case, the streams have already been stored on disk from an earlier recording session. The Recording and Playback service reads the raw data files from the disk and sends the RTP streams over the network to the supervisor's desktop, where they are played through the sound card.

As this description indicates, the Recording and Playback service may be either the requestor (for recording a live call) or a provider (for playing back a recorded call).

A VoIP Monitor service and a Recording and Playback service are usually installed along with the CAD base services. Additional VoIP Monitor services and a second Recording and Playback service may be installed on other servers.

The figure below shows a representative Unified CCE installation supporting a remote office over a WAN. Both the main office and the remote office have a VoIP Monitor service on-site.

Figure 11. VoIP Monitor Service at Main and Remote Sites

When you locate the requestors and providers, you can determine where the bandwidth is required for the Silent Monitoring feature. The following notes regarding bandwidth apply:

  • Although an administrator can assign a specific VoIP service to an agent device, the Recording service that is used when calls are recorded is determined at the time the request is made. The same rule applies if two Recording services are installed to load-balance the installation. In some cases, the provider and requestor may be separated by a WAN and would require the bandwidth on the WAN. If a second Recording and Playback service is to be installed, install it on a server at the main office (on the LAN with the CAD base services).
  • If the VoIP provider is a VoIP Monitor service, if the requestor is a Recording service, and if these services reside on the same machine, then there is no additional bandwidth used on the network to record the call.

Regardless of which component is the requestor and which is the VoIP provider, the bandwidth requirement between these two points is the bandwidth of the IP call being monitored or recorded. For purposes of calculating total bandwidth, you can think of each monitoring or recording session as a new phone call. Therefore, to calculate bandwidth to support the Silent Monitoring feature, you can use the same calculations used to provision the network to handle call traffic, with one exception: the voice stream provided by the VoIP provider consists of two streams in the same direction. Whereas a normal IP phone call has one stream going to the phone and one stream coming from the phone, the VoIP provider sends both streams toward the requestor. Keep this difference in mind when provisioning upload and download speeds for your WANs.
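
A short sketch of the directional point: because the VoIP provider sends both streams of a monitored or recorded call in the same direction, the provider-to-requestor path needs two streams' worth of bandwidth per session (the per-stream figure is a parameter; names are illustrative).

    def provider_to_requestor_kbps(simultaneous_sessions, per_stream_kbps=80):
        """Bandwidth from the VoIP provider toward the requestor: two streams per session."""
        return simultaneous_sessions * 2 * per_stream_kbps

    # Example: 10 simultaneous monitoring or recording sessions of g.711 calls over the WAN:
    print(provider_to_requestor_kbps(10))  # 1600 kbps from provider to requestor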

To determine the bandwidth requirements for these voice streams, see the Cisco Unified Communications Solution Reference Network Design (SRND) Guide at http://www.cisco.com/en/US/products/sw/voicesw/ps556/tsd_products_support_series_home.html.

Cisco Agent Desktop Applications Bandwidth Usage

The CAD desktop applications include:
  • Cisco Agent Desktop
  • Cisco Supervisor Desktop
  • Cisco Desktop Administrator
  • Cisco Desktop Monitoring Console

These applications also require a certain amount of bandwidth, although far less than the Silent Monitoring feature. In addition, the communication across the network is bursty. In general, bandwidth usage is low when the agents are not performing any actions. When features or actions are requested, bandwidth increases for the time it takes to perform the action (usually less than one second) and then drops back to the steady-state level. From a provisioning standpoint, you must consider the probability of all the CAD agents performing a particular action at the same time. It can be more helpful to characterize the call center, determine the maximum number of simultaneous actions (in the worst case) to establish instantaneous bandwidth requirements, and then decide what amount of delay is tolerable for a percentage of the requested actions.

For example, the raw bandwidth requirement for 1000 CAD agents logging in simultaneously is about 6.4 kilobytes per second and the login time is about 9 seconds (with no network delay) for each agent. If the WAN link did not have this much bandwidth, logins would take longer as packets were queued before being sent and received. If this queuing delay caused the login attempts to take twice as long (18 seconds in this case), would this delay be acceptable? If not, provision more bandwidth.
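
A sketch of that trade-off, scaling the 9-second baseline login time by the ratio of required to available bandwidth (the 6.4 KB/s and 9-second figures come from the example above; the assumption of linear scaling with queuing is a simplification):

    def estimated_login_seconds(available_kb_per_s, required_kb_per_s=6.4, baseline_s=9):
        """Estimate CAD login time when the WAN link offers less than the raw requirement."""
        if available_kb_per_s >= required_kb_per_s:
            return baseline_s
        return baseline_s * required_kb_per_s / available_kb_per_s

    print(estimated_login_seconds(3.2))  # 18.0 seconds, the doubled-time case described above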

Each of these applications communicates with the base CAD services running on server machines. In addition, the agent desktop application communicates with the CTI server through the CTI OS server for call control actions and state changes. The table below lists the types of messages for each application.

Table 8 Messaging Type By CAD Application

Cisco Agent Desktop:
  Login/logoff
  Agent state changes
  Call Control
  Call status information
  Desktop monitoring and recording
  Chat messages
  Team performance messages
  Report generation
  Real-time data refresh

Cisco Supervisor Desktop:
  Login/logoff
  Agent state changes
  Call status updates
  Report generation
  Silent Monitoring
  Call recording
  Call playback
  Chat messages
  Team performance messages
  Real-time data refresh

Cisco Desktop Administrator:
  Configuration information retrieval and storage
  Configuration data refresh

Cisco Desktop Monitoring Console:
  Service discovery
  SNMP Get messages

Cisco Agent Desktop Bandwidth Usage

CAD agents are able to log in and log off, change their agent state, handle calls, and send reporting information to the base servers. The bandwidth requirements for these activities are fairly small but can add up when many agents are considered.

The table below (Table 9) shows the average bandwidth requirements for various numbers of agents. This information is derived from bandwidth testing and extrapolation of bandwidth data. Because many variables can affect bandwidth, a configuration that resulted in higher bandwidth usage was chosen to provide near worst-case scenarios. If the agent's WAN link meets or exceeds the bandwidth requirements shown in this table, Cisco Agent Desktop can run without delays in message passing.

The following configuration parameters affect bandwidth and apply to both Table 9 and Table 10:
  • Number of skills per agent: 10
  • Number of agents per team: 20
  • Number of teams: 50
  • Number of agent state changes per agent per hour: 10 (Not including state changes due to handling calls)
  • Calls per agent per hour: 60
  • Team performance messages per team per hour: 8
  • Chat messages sent or received per hour: 20
  • Average chat message size (in bytes): 40
  • Number of calls recorded per hour: 10

Note


The bandwidth requirements shown do not include the bandwidth of the RTP streams for the call, recording, or monitoring sessions, but include only the messaging needed to start and stop the sessions.


Table 9 Average Bandwidth Requirements for Cisco Agent Desktop

Number of Agents    Average Download Bandwidth (Kilobytes per second)    Average Upload Bandwidth (Kilobytes per second)
1                   0.02                                                 0.003
100                 1.7                                                  0.1
200                 3.4                                                  0.3
300                 5.0                                                  0.4
500                 8.4                                                  0.7
600                 10.0                                                 0.8
700                 11.7                                                 1.0
800                 13.4                                                 1.1
900                 15.1                                                 1.3
1000                16.8                                                 1.4

Cisco Supervisor Desktop Bandwidth Usage

A Cisco Supervisor Desktop receives events for all the agents of the team that the supervisor is logged into. This information includes state changes, call handling, login/logoff, and so forth. The more agents, skills, and calls there are, the more data is sent to supervisors. In addition, particular reports are automatically refreshed periodically to provide real-time data while the supervisor is viewing the report. Refreshing reports requires additional bandwidth.

Table 10 uses the same basic configuration parameters used to determine the bandwidth numbers in Table 9. In addition, Table 10 takes into account the fact that the Team Skill Statistics report is being viewed and refreshed.

Table 10 Average Bandwidth Requirements for Cisco Supervisor Desktop

Number of Agents    Average Download Bandwidth (Kilobytes per second)    Average Upload Bandwidth (Kilobytes per second)
1                   0.02                                                 0.003
100                 1.3                                                  0.1
200                 2.5                                                  0.3
300                 3.7                                                  0.4
400                 5.0                                                  0.5
500                 6.2                                                  0.6
600                 7.5                                                  0.8
700                 8.7                                                  0.9
800                 10.0                                                 1.0
900                 11.2                                                 1.1
1000                12.4                                                 1.3

Cisco Desktop Administrator Bandwidth Usage

The bandwidth requirements for Cisco Desktop Administrator are very small and are seen only when an administrator is actively changing configurations. In general, the bandwidth used by Cisco Desktop Administrator is negligible from a provisioning standpoint.

Cisco Desktop Monitoring Console Bandwidth Usage

The bandwidth requirements for the Cisco Desktop Monitoring Console are very small and short-lived. In general, the bandwidth used by the Cisco Desktop Monitoring Console is negligible from a provisioning standpoint.

Best Practices and Recommendations for Cisco Agent Desktop Service Placement

In a Unified CCE installation using Cisco Agent Desktop, all CAD services except the VoIP Monitor service and the Recording and Playback service must coreside with the PG. You can install the VoIP Monitor Service and Recording and Playback Service on other servers (off-board).

VoIP Monitor Server

A single VoIP Monitor Service can support up to 114 simultaneous Silent Monitoring sessions. Additional VoIP Monitor Services increase the SPAN-based monitoring capacity of the installation.

You can have a maximum of five VoIP Monitor servers in a CAD installation. Only one VoIP Monitor Service may exist on a single server.

The main load on a VoIP Monitor Service is the amount of network traffic that is sent to the VoIP Monitor Service for the devices assigned to that service, not the number of simultaneous monitoring sessions. When Switched Port Analyzer (SPAN) is configured to send traffic from a device to a particular VoIP Monitor Service, the service's packet sniffer monitors network traffic even without active monitoring sessions. The amount of traffic monitored limits the number of devices that you can assign to a VoIP Monitor Service.

If a VoIP Monitor Service coresides with the CAD base services on the PG, it supports the network traffic of up to 100 agents. You can dedicate a third NIC as the SPAN destination port in this environment, although it is not necessary. If more than 100 agents are configured to use a single VoIP Monitor Service, you must move that service off-board to another server. A single VoIP Monitor Service supports the network traffic of 400 agent phones if you use a 100 Megabit NIC to connect to the switch, or 1000 agent phones if you use a Gigabit NIC.


Note


If the switch does not support ingress and egress traffic on the same switch port, then you must use a dedicated NIC to support SPAN services.


Recording and Playback Server

You can have a maximum of two Recording and Playback Services in a CAD installation. As with the VoIP Monitor Service, only one of these services can exist on a single computer.

If the Recording and Playback Service coresides with CAD base services on the PG, it supports up to 32 simultaneous recording sessions. If you require more recording and playback sessions, move the Recording and Playback Service to another server. The Recording and Playback Service can coexist with an off-board VoIP Monitor Service. An off-board Recording and Playback Service supports up to 80 simultaneous recordings.

The Recording and Playback Service converts copies of a call’s RTP packets to RAW files and stores these files for playback using the Cisco Supervisor Desktop. Either the VoIP Monitor server (SPAN capture) or the Cisco Agent Desktop (Desktop capture) directs these RTP packets to the Recording and Playback server. So in a SPAN capture environment, a recording consumes a monitoring session and a recording and playback session.

A second Recording and Playback Service does not increase the recording capacity, but it does provide some load balancing and redundancy. When both Recording and Playback servers are active, the recording client alternates between the two servers and stores the recording files first on one server, then the other.

Bandwidth Requirements for an Administration and Data Server and Reporting

For more information about the bandwidth requirement for Cisco Unified Intelligence Center, see the latest version of the Cisco Unified Intelligence Center Bill of Materials.

Bandwidth Requirements for Cisco EIM/WIM

The bandwidth requirements for Cisco EIM/WIM integrations are documented in the Cisco EIM/WIM SRND document.

Bandwidth and Latency Requirements for the User List Tool

In deployments in which an Administration Client is remote (connected over a WAN) from the domain controller and Administration & Data Server, specific network bandwidths and latencies are required to achieve reasonable performance in the User List Tool. Reasonable performance is defined as less than 30 seconds to retrieve users. This information is provided to set expectations and to encourage upgrading to later Cisco Unified CCE releases, in which changes were made to enhance the performance of the tool under these conditions.

There are several other ways to improve performance of the User List Tool. Moving an Administration & Data Server and a domain controller local to the Administration Client can greatly enhance performance, as shown by the LAN row in the table below. Reducing the latency or increasing the bandwidth of your WAN connection also improves performance.

The following data points describe scenarios in which the User List Tool can retrieve users within 30 seconds in Unified CCE 7.2(3) or later releases. Additionally, laboratory testing has determined that the tool cannot perform reasonably for any number of users on networks with a one-way latency greater than 50 ms.

Table 11 Latency and Bandwidth Requirements for the User List Tool

Maximum One-Way Latency (ms)    Available Bandwidth       Number of Users Supported
Negligible                      LAN                       8000
15                              3.4 Mbits and higher      4000
15                              2 Mbits                   500
15                              256 Kbits                 500
50                              64 Kbits and higher       25