Installing and Configuring Cisco HCS for Contact Center 9.2(1)
Design Considerations


Deployment Model Considerations

Cisco HCS for Contact Center supports a subset of the deployment options described in the Unified Contact Center Enterprise Solution Reference Network Design (SRND).

The following figures illustrate the deployment options available to Cisco HCS for Contact Center and identify the options that are not supported.


Note


This is not an exhaustive list. As a rule, if an option or feature is not mentioned in this document, it is not supported in this deployment.


  

  

Figure 1. Cisco HCS for Contact Center 500 and 1000 agent deployment and Solution Reference Network Design

Figure 2. Cisco HCS for Contact Center for 4000 agent deployment and Solution Reference Network Design



  

  

The following figure shows the logical view of Cisco Hosted Collaboration Solution for Contact Center.

Figure 3. Cisco HCS for Contact Center Logical View



Operating Considerations

This section describes the features, configuration limits, and call flows for the Cisco HCS for Contact Center core and optional components.

Peripheral Gateways

The following table describes the deployment of the Peripheral Gateways.

Table 1 Peripheral Gateway for 500 and 1000 Agent Deployment
Number of Peripheral Gateways: Two Peripheral Gateways are supported in this deployment.

PIMs:

  • One generic PG with the following PIMs:

    • One CUCM PIM

    • Four VRU PIMs

  • One Media Routing (MR) PG with the following PIMs:

    • One Media Routing PIM for Multichannel

    • One Media Routing PIM for Outbound

Notes:

  • The Unified CCE Call Server contains one generic PG and one Media Routing PG.

  • Two of the four VRU PIMs connect to the two Unified CVPs on Side A; the other two connect to the two Unified CVPs on Side B.

Table 2 Peripheral Gateway for 4000 Agent Deployment
Number of Peripheral Gateways: Five Peripheral Gateways are supported in this deployment.

PIMs:

  • Two CUCM PGs, with one CUCM PIM in each PG

  • One VRU PG, with sixteen VRU PIMs (eight are optional)

  • Two Media Routing (MR) PGs, with one Media Routing PIM for Multichannel and two Media Routing PIMs for Outbound (one in each MR PG)

Notes:

  • There are three PG machines on each side of the core blades, containing the following:

    • Unified CCE Agent PG1 contains one CUCM PG, one MR PG with two PIMs, and one Dialer.

    • Unified CCE Agent PG2 contains one CUCM PG, one MR PG with one PIM, and one Dialer.

    • Unified CCE VRU PG1 contains sixteen PIMs across both sides. Eight connect to the eight Unified CVPs on Side A; the other eight connect to the eight Unified CVPs on Side B.

Agent and Supervisor Capabilities

The following table lists the agent and supervisor capabilities.

Table 3 Agent and Supervisor Capabilities
 

HCS for Contact Center Deployment

Notes
Call Flows All transfers, conferences, and direct agent calls use ICM script.
Agent Greeting Supported
Whisper Announcement Supported
Outbound Dialer Supported
Mobile Agent

Nailed-up mode is supported. Configure it on the Unified CCE CTI OS component.

Call-by-call mode is not supported in this deployment.

Silent Monitoring

  • Unified CM-based (BiB)

  • SPAN for Mobile Agent

You can configure either Unified CM-based or SPAN-based but not both. If you configure Unified CM-based silent monitoring, you cannot monitor mobile agents.

A separate server is required for SPAN-based silent monitoring.

Recording

A third-party recording server is required.

CRM Integration

You can integrate with CRM in several ways. You can use:

  • Cisco Finesse gadgets to build a custom CRM-integrated desktop. For example, a Cisco Finesse gadget that fits in a CRM browser-based desktop.
  • Cisco Finesse Web API, CTI OS APIs, or the CTI Server protocol to integrate into a CRM application.
  • Existing CRM connectors. Connectors are available from Cisco for SAP. Each connector has its own capacity limits:
    • SAP supports up to 250 agents and supervisors, with a maximum of 3 CPS, and requires its own server. It supports Unified CM BiB-based recording and silent monitoring. It does not support Mobile Agent, Outbound, or Multichannel.

Desktop

Cisco Finesse

Supports Outbound feature (Progressive and Predictive only), Mobile Agent, SPAN-based silent monitoring, and Unified CM-based silent monitoring.

Cisco Computer Telephony Integration Option (CTI OS) Desktop:

  • .NET

  • Java CIL

  • Win32

Supports Agent Greeting, Whisper Announcement, Outbound, Mobile Agent, SPAN-based silent monitoring, and Unified CM-based silent monitoring.

Desktop Customization

  • Cisco Finesse API

  • CTI OS Toolkit (the CTI OS desktops listed above, under Desktop)

Voice Infrastructure

The following table lists the voice infrastructure.



Table 4 Voice Infrastructure

Voice Infrastructure

HCS for Contact Center Deployment

Notes

Music on Hold

Unicast

Unified CM Subscriber source only

Multicast is not supported.

This sizing applies to agent node only, for both agent and back-office devices, with all agent devices on the same node pair.

Proxy

No

High Availability (HA) and load balancing are achieved using these solution components:

  • Time Division Multiplexing (TDM) gateway and Unified CM, which use the SIP Options heartbeat mechanism to perform HA.
  • Unified CVP servers, which use the SIP server group and SIP Options heartbeat mechanism to perform HA and load balancing.
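The SIP server group behavior described above can be sketched as a small model; this is an illustrative view of heartbeat-based health tracking with round-robin selection, not Unified CVP's actual implementation (the class and parameter names are invented for the sketch).

```python
import time


class SipServerGroup:
    """Tracks element health from periodic OPTIONS-style heartbeats and
    returns the next healthy element (round-robin) for load balancing."""

    def __init__(self, elements, heartbeat_timeout=3.0):
        self.elements = list(elements)
        self.timeout = heartbeat_timeout
        # No heartbeat seen yet: treat every element as down until proven up.
        self.last_ok = {e: float("-inf") for e in self.elements}
        self._rr = 0

    def record_heartbeat(self, element, responded):
        # A 200 OK to the OPTIONS ping marks the element as reachable.
        if responded:
            self.last_ok[element] = time.monotonic()

    def healthy(self):
        now = time.monotonic()
        return [e for e in self.elements if now - self.last_ok[e] <= self.timeout]

    def next_element(self):
        # Round-robin over healthy elements only; unreachable elements are
        # skipped, which is how failover happens without a proxy in front.
        up = self.healthy()
        if not up:
            return None
        self._rr = (self._rr + 1) % len(up)
        return up[self._rr]
```

A caller would ping each element periodically with `record_heartbeat` and route every new call through `next_element`, so a CVP server that stops answering OPTIONS simply drops out of rotation.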

Ingress Gateways

ISR G2 Cisco Unified Border Element with combination VXML

The 3925E and 3945E are the supported gateways.

For SPAN-based silent monitoring, the ingress gateway is spanned.

You must configure the gateway MTPs for codec pass-through because the Mobile Agent in HCS is configured to use G.729, while the other HCS components support all codecs.

Protocol

Session Initiation Protocol (SIP) over TCP

SIP over UDP, H.323, and Media Gateway Control Protocol (MGCP) are not supported.

Proxy /Cisco Unified SIP Proxy (CUSP)

Not supported

Outbound Option: The Outbound Dialer can connect to only one physical gateway. See Configuration Limits.

Codec

  • IVR: G.711ulaw and G.711alaw
  • Agents: G.711ulaw, G.711alaw, and G.729r8

G.722, iSAC, and iLBC are not supported.

Media Resources

Gateway-based:

  • Conference bridges
  • Transcoders and Universal Transcoders
  • Hardware and IOS Software Media Termination Points.

Unified CM-based media resources (Cisco IP Voice Media Streaming Application) are not supported:

  • Conference bridges
  • MTPs

Phones

CTI Controlled with BiB:

  • 99xx series: 9951 and 9971 are supported.

  • 89xx series: 8941, 8945, and 8961 are supported.

  • 797x: 7975 is supported.

  • 796x: 7961G, 7962G, and 7965G are supported.

  • 794x: 7941G, 7942G, and 7945G are supported.

  • 69xx series: 6921, 6941, 6945, and 6961 are supported.

  • Cisco IP Communicator

  • Jabber for Windows 9.2

HCS for CC supports all phones that Unified CCE supports, provided the phone supports Built-in-Bridge (BiB) and CTI-controlled features under SIP control.

SCCP is not supported.

Administration Guidelines

The following table lists the administration tools.



Table 5 Administration
 

HCS for Contact Center Deployment

Notes

Provisioning

Not supported: Cisco Agent Desktop Admin

Service Creation Environment

Supported:

  • Unified CCE Script Editor
  • CVP Call Studio

Serviceability

Supported:

  • Unified System Command Line Interface (CLI)
  • RTMT Analysis Manager Diagnosis

Not supported: RTMT Analysis Manager Analyze Call Path

IVR and Queuing

The following table describes the IVR and call queuing to help optimize inbound call management.



Table 6 IVR and Queuing
 

HCS for Contact Center Deployment

Notes

Voice Response Unit (VRU)

Supported: Unified CVP Comprehensive Model Type 10

Not supported:

  • Unified CVP VRU types other than Type 10
  • Cisco IP IVR
  • Third-party IVRs

Caller Input

Supported:

  • DTMF
  • Automatic Speech Recognition and Text-to-Speech (ASR/TTS)

Dual Tone Multi-Frequency (DTMF)

Supported:

  • RFC2833
  • Keypad Markup Language (KPML)

Video

Not supported.

CVP Media Server

Supported: Third-party Microsoft Internet Information Services (IIS), coresident on the Unified CVP Server

Not supported: Tomcat

Reporting

The following table contains information on the reporting.

Table 7 Reporting
 

HCS for Contact Center Deployment

Notes

Tool

Cisco Unified Intelligence Center is the only supported reporting application.

Note   

Unified Intelligence Center historical reporting data and call detail data are pulled from the Logger database in the 500 and 1000 agent deployment models, and from the AW-HDS-DDS server in other deployments.

Not supported with reporting from Logger:

  • Exony VIM
  • Third-party reporting applications

Supported with reporting from AW-HDS-DDS:

  • Exony VIM
  • Third-party reporting applications
  • Custom reporting

Database

Historical and call detail data is stored on the Unified CCE Data Server for the 500 and 1000 agent deployments, and on the Unified AW-HDS-DDS server for other deployments.

Retention

The logger database retention period is 400 days (13 months) of historical summary data and 35 days (five weeks) of detailed TCD and RCD records.

If you require longer retention periods, add a single Historical Data Server (HDS) to the deployment. See the following table for the HDS minimum requirements.

Note   

This applies only to the 500 and 1000 agent deployment models; the retention values are the defaults for other deployments.

Data beyond the configured retention time is purged automatically at 12:30 AM and uses the time zone setting of the core server.

Because the purge has a performance impact on the Logger, follow Cisco guidelines and run it during off-peak hours or a maintenance window.

You can change the automatic purge schedule through the command line interface if the default schedule does not fall within your off-peak hours.

Customers who install the External AW-HDS-DDS on separate servers can point Cisco Unified Intelligence Center to either the logger or the External AW-HDS-DDS, but not to both.
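As a rough illustration of the purge scheduling described above, the following sketch computes the next 12:30 AM run in the core server's time zone and checks a record against a retention window. The time zone name, function names, and default retention value are examples for the sketch, not product settings.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo


def next_purge(now, core_tz="America/New_York"):
    """Return the next 12:30 AM purge run in the core server's time zone.
    The time zone name here is only an example."""
    local = now.astimezone(ZoneInfo(core_tz))
    run = local.replace(hour=0, minute=30, second=0, microsecond=0)
    if run <= local:
        # 12:30 AM already passed today; the next run is tomorrow.
        run += timedelta(days=1)
    return run


def is_purgeable(record_date, now, retention_days=35):
    # Detail (TCD/RCD) records default to 35 days of retention.
    return (now - record_date).days > retention_days
```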

Reports

Each supervisor can run four concurrent Real-Time reports and two historical reports:

  • Real-Time reports contain 100 rows.
  • Historical reports contain 2000 rows.

Table 8 HDS Minimum Requirements for the 500 and 1000 Agent Deployment Models

Virtual Machine vCPU RAM (GB) Disk (GB) CPU Reservation (MHz) RAM Reservation (MB)
Unified CCE HDS 1 2 80 (OS), 512* (Database) 2048

* Size the database drive to accommodate the required retention period. For more information about HDS sizing, refer to Virtualization of Unified CCE.
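The footnote's sizing rule can be approximated with a back-of-the-envelope calculation. The daily growth and overhead figures below are placeholder assumptions for the sketch, not Cisco sizing data; use the Virtualization of Unified CCE sizing guidance for real numbers.

```python
def hds_database_gb(daily_detail_gb, retention_days, overhead=0.25):
    """Rough HDS database-drive estimate: daily detail-record growth times
    the retention window, plus an overhead factor for indexes and temp
    space. Both inputs are deployment-specific assumptions."""
    return daily_detail_gb * retention_days * (1 + overhead)
```

For example, roughly 1 GB of detail data per day kept for 400 days would suggest a database drive on the order of 500 GB under these assumptions.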

Third-Party Integration

The following table contains third-party integration information.



Table 9 Third-Party Integration

Option

Notes

Recording

All recording applications that are supported by Unified CCE are supported on HCS for CC. For details, see the Recording section in Agent and Supervisor Capabilities.

Wallboards

All Wallboard applications that are supported by Unified CCE are supported on HCS for CC.
Note    Unified Intelligence Center can also be used for Wallboards.

Workforce Management

All Workforce Management applications that are supported by Unified CCE are supported on HCS for CC. Access to real-time or historical data requires the AW-HDS-DDS.

Database Integration

Unified CVP VXML Server is supported.

ICM DB Lookup is supported.

Automated Call Distributor (ACD)

None

Interactive Voice Response (IVR)

  • Unified IP IVR is not supported.
  • No third-party IVRs are supported.

Configuration Limits

Agents, Supervisors, Teams, Reporting Users

Group Resource 500 Agent Deployment (One PG) 1000 Agent Deployment (One PG) 4000 Agent Deployment (Two PGs)
Agents Agents (Active/Configured) * 500/3000 1000/6000 4000/24000 (4000/12000 with Finesse)

Agents with Trace ON 50 100 * 400
Agent Desk Settings * 500 1000 4000
Mobile Agents (Active/Configured) 125/750 (included in maximum 500 concurrent agents) 250/1500 (included in maximum 1000 concurrent agents) 1000/6000 (included in maximum 4000 concurrent agents)
Outbound Agents 125 (included in maximum 500 concurrent agents) 250 (included in maximum 1000 concurrent agents) 500 (included in maximum 4000 concurrent agents)
Agents per team 50 50 * 50
Skills per agent 15 * (5 with Finesse) 15 * (5 with Finesse) 15
Agents per skill group No limit No limit No limit
Attributes per agent * 50 50 50
Supervisors Supervisors (Active/Configured) * 50/300 100/600 400/2400
Teams (Active/Configured) * 50/300 (50/120 with Finesse) 100/600 (100/120 with Finesse) 400/2400 (200/240 with Finesse)

Supervisors per Team 10 * 10 * 10
Teams per supervisor 20 * 20 * 20
Agents per supervisor 20 20 20
Reporting Number of Reporting users 50/300 100/600 400/2400

Note


The Finesse sizing limits apply to Agents and Teams.


Group Resource 500 Agent Deployment (One PG) 1000 Agent Deployment (One PG) 4000 Agent Deployment (Two PGs)
Outbound Dialer per system 1 1 2
Number of Campaigns (Agent/IVR based) 50 100 100
Campaign skill groups per campaign 20 20 20
Skills per agent 15 15 15
Dialing Modes Preview, Direct Preview, Progressive, and Predictive (all deployments)

Total Numbers of Agents 125 250 500
Port Throttle 5 10 10
Precision Queues Precision Queues* 2000 2000 2000
Precision Queue steps* 5000 5000 5000
Precision Queue term per Precision Queue* 10 10 10
Precision steps per Precision Queue* 10 10 10
Unique attributes per Precision Queue* 5 5 5
General Attributes* 10000 10000 10000
Bucket Intervals 500 1000 4000
Call Types * 250/500 500/1000 2000/4000
Routing Scripts 250/500 500/1000 2000/4000
Network VRU Scripts * 500 1000 4000
Reason Codes 100 100 100
Skill Groups 3000 3000 * 3000 *
Persistent Enabled Expanded Call Variables * 5 5 5
Persistent Enabled Expanded Call Variable Arrays 0 5 0
Nonpersistent Expanded Call Variables(Bytes)* 2000* 2000* 2000*
Bulk Jobs 200 200 200
CTI All event Clients 9 (Includes CTIOS, Finesse and 5 other CTI All Event Clients) 9 (Includes CTIOS, Finesse and 5 other CTI All Event Clients) 18 (Includes CTIOS, Finesse and 5 other CTI All Event Clients per PG)
Dialed Number Dialed Number (External Voice) 1000 1000 4000
Dialed Number (Internal Voice) 1000 1000 4000
Dialed Number (Multichannel) 500 500 2000
Dialed Number (Outbound Voice) 500 500 2000
Load VRU Ports 900 1800 7200
Inbound Calls per second 5 8 35
Agent Load 30 calls per hour per agent 30 calls per hour per agent 30 calls per hour per agent
Reskilling Dynamic (operations/hr.) 120 120 120

Note


  1. An asterisk (*) indicates that the configuration limit for that resource can be enforced through Unified CCDM.

  2. For the SIP Outbound Dialer in an HCS for Contact Center deployment, only one gateway can be connected. A maximum of 500 dialer ports can be configured in the ICM and the IOS gateway.
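A provisioning layer such as Unified CCDM enforces limits like those in the tables above. The following sketch shows one way such a check could look; the dictionary values come from the agent and team rows above, but the function itself is illustrative and not a CCDM API.

```python
# Active and configured limits drawn from the configuration-limit tables;
# the dict layout is just one way to encode them for a sanity check.
LIMITS = {
    "500":  {"agents_active": 500,  "agents_configured": 3000,  "teams_active": 50},
    "1000": {"agents_active": 1000, "agents_configured": 6000,  "teams_active": 100},
    "4000": {"agents_active": 4000, "agents_configured": 24000, "teams_active": 400},
}


def check_config(deployment, **requested):
    """Return a list of limit violations for the requested configuration."""
    limits = LIMITS[deployment]
    return [
        f"{key}: {value} exceeds limit {limits[key]}"
        for key, value in requested.items()
        if value > limits[key]
    ]
```

For example, requesting 600 active agents against the 500-agent deployment would be flagged, while 900 against the 1000-agent deployment passes.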


Optional Component Considerations

Unified WIM and EIM Considerations

This section describes considerations for Unified WIM and EIM.

Cisco RSM Considerations

Platform Capabilities

Call Flow: The supervisor can monitor only agents who are in the talking state.

Desktop: CTI OS

Voice Codec: G.729 (RTP) between agent and RSM; G.711 (RTSP) between RSM and the VXML gateway.

Concurrent Monitoring Sessions: 80

Monitored Calls (per minute): 17

Maximum Configured Agents per PG: 6000

SimPhone Start Line Number Range: Four to fifteen digits

Call Flows

The call flows in the following figures represent units of call flow functionality. You can combine these call flow units in any order in the course of a call.

Figure 4. Basic Call Flow with IVR and Queue to an Agent



Figure 5. Consult Call Flow with IVR and Queue to a Second Agent



Figure 6. Blind Transfer Call Flow with IVR and Queue to a Second Agent




Note


Conference call flows are the same as consult call flows. Both conference the call with the agents rather than holding them during the consult. Hold/resume, alternate/reconnect, and consult/conference call flows invoke the Session Initiation Protocol (SIP) ReINVITE procedure to move the media streams. The conference-to-IVR call flow is similar to the conference-with-no-agent-available call flow.


The following table shows the SIP trunk call flow.

Table 10 SIP Trunk Call Flow
Call Flow Logical Call Routing
New call from CUBE(SP) Caller --> CUBE(SP) --> CUBE(E) --> Unified CVP --> Unified Communications Manager
New call from Unified Communications Manager (internal help desk) Caller --> Unified Communications Manager --> CUBE(E) --> Unified CVP
Post-routed call from agent to agent Agent 1 --> Unified Communications Manager --> Unified CVP --> Unified Communications Manager --> Agent 2

Note


All new calls always enter the Cisco IOS gateway (CUBE-E or TDM-IP gateway) and are associated with the Unified CVP survivability service.

The following table shows the TDM gateway (Local PSTN breakout) call flow.

Table 11 TDM gateway (Local PSTN breakout) Call Flow
Call Flow Logical Call Routing
New call from local PSTN gateway Caller --> TDM-IP --> Unified CVP --> Unified Communications Manager
New IVR-based call Caller --> TDM-IP --> Unified CVP --> CUBE(E) or VXML gateway
New agent-based call Caller --> TDM-IP --> Unified CVP --> Unified Communications Manager --> Agent 1


Note


  • All new calls always enter the Cisco IOS gateway (CUBE-E or TDM-IP gateway) and are associated with the Unified CVP survivability service.

The following table lists the supported system call flows.


Note


  1. Configure the TDM gateway at the CPE. See Configure the Adjacencies.
  2. Configure the TDM gateway at the shared layer, similar to the PSTN configuration.

Table 12 Supported System Call Flows

System Call Flows Supported
Conference to IVR Yes
Bridged transfer Yes
Router requery Yes
Postroute using Unified CVP Yes
Prerouting No
Translation route with third-party VRU No
ICM routing to devices other than Cisco HCS Unified CCE No

Domain and Active Directory Considerations

The Unified CCE uses Active Directory (AD) to control users' access rights to perform setup, configuration, and reporting tasks. AD also grants permissions for different components of the system software to interact; for example, it grants permissions for a Distributor to read the Logger database. For more information, see Staging Guide for Cisco Unified Contact Center Enterprise.

To meet these requirements, Cisco HCS for Contact Center must have its own set of Windows Server 2008 R2 Standard domain controllers configured in native mode. For more information, see Call Flows. The domain controller must meet the minimum requirements shown in the following table.

Table 13 Domain Controller Minimum Requirements

Virtual Machine vCPU RAM (GB) Disk C (GB) CPU Reservation (MHz) RAM Reservation (MB)
Cisco HCS Domain Controller 1 4 40 1400 512


Note


Use 2 vCPUs for larger directories.


Cisco HCS for Contact Center supports two AD deployment models: AD at the customer premises and AD at the service provider premises.

The following figure shows the Cisco HCS for Contact Center AD deployment.

Figure 7. Cisco HCS for Contact Center AD Deployment



AD at Customer Premises

In the AD at the customer premises model, the service provider requests that the customer add entries to the customer AD so that the service provider can sign in to the systems deployed in the domain. The service provider should be a local machine administrator and belong to the setup group for components that must be installed and managed in the Cisco HCS for Contact Center environment. To run the Domain Manager, the service provider must be a domain administrator, or a domain user with domain read and write permissions to create Organizational Units (OU) and groups.

The end-customer use of the Cisco HCS for Contact Center solution is limited if the customer premises AD is inaccessible to the Cisco HCS for Contact Center Virtual Machines. Cisco strongly advises service providers to work with end customers to ensure that they understand the potential service limitations when they use the AD at the customer premises model.

Cisco HCS for Contact Center also supports a deployment where the Cisco HCS for Contact Center components are associated with the AD at the service provider premises, and the CTI OS client desktops are part of the customer premises corporate AD. Consider the following for the AD in this deployment:

  • The instance administrator account is created in the service provider domain.

  • The instance administrator uses the Unified CCDM and Unified Intelligence Center to create agents, supervisors, and reporting users in the service provider domain.

  • The instance administrator configures all supervisors and reporting users.

AD at Service Provider Premises

In the AD at the service provider premises model, the service provider must have a dedicated AD for each customer instance. Each customer AD needs to be updated with Cisco HCS for Contact Center servers and accounts. The service provider administrator needs to be added to each customer AD to manage the Contact Center environment.

You can use overlapping IP addresses for each customer deployment. For example, Cisco Unified Border Element — Enterprise, Unified CCE, and Unified CVP should be able to overlap IP addresses across customers. When you use overlapping IP addresses, the static Network Address Translation (NAT) provides access from the management system to each Cisco HCS for Contact Center environment.


Note


You must create a two-way external trust between each customer AD and Service provider Management AD to integrate customer instance with Unified CCDM. You must also open the ports in the ASA firewall. Refer to the Install and Configure ASA Firewall and NAT section.


For information about opening ports and configuration, see http://support.microsoft.com/kb/224196.

Storage, Blade Placement, and IOPS Considerations

Storage, Blade Placement, and IOPS Considerations for HCS Shared Management Components

SAN Configuration for HCS Shared Management Components

The HCS deployment requires 1.2 TB of SAN storage for the shared management components. The following table contains the SAN configuration for HCS Shared Management Components. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.


Table 14 SAN Configuration for the Management Components

C1 - RAID5, Datastore-C1, 600 GB:

  • Drive C: Unified CCDM Database Server Side A; Unified CCDM Web Server Side A

  • Drive D: Unified CCDM Database Server Side A; Unified CCDM Web Server Side A

C2 - RAID5, Datastore-C2, 600 GB:

  • Drive C: Unified CCDM Database Server Side B; Unified CCDM Web Server Side B

  • Drive D: Unified CCDM Database Server Side B; Unified CCDM Web Server Side B

Blade Placement Requirement for HCS Shared Management Components

The HCS deployment requires a single high-density (B230) blade for the shared management components.

The following tables contain the blade placement for the shared management components, chassis 1 and 2.

Table 15  Blade Placement for shared management components - Chassis 1
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)
C1 Unified CCDM Database Server, Side A 8 32 100 200 15000 20480
C1 Unified CCDM Web Server, Side A 8 32 100 60 11000 12288
Table 16  Blade Placement for shared management components - Chassis 2
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)
C2 Unified CCDM Database Server, Side B 8 32 100 200 15000 20480
C2 Unified CCDM Web Server, Side B 8 32 100 60 11000 12288

IOPS Requirement for HCS Shared Management Components

The following tables contain the IOPS 95th percentile value to design the SAN array and the IOPS average value to monitor the SAN array.

Table 17  IOPS, Disk Read, and Disk Write - Chassis 1
Blade Virtual Machine IOPS Disk Read Kbytes/sec Disk Write Kbytes / sec
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average
C1 Unified CCDM Database Server, Side A 5900 1050 775 300 50 75 1400 250 175
C1 Unified CCDM Web Server, Side A 900 650 565 100 50 65 200 150 125
Table 18  IOPS, Disk Read, and Disk Write - Chassis 2
Blade Virtual Machine IOPS Disk Read Kbytes/sec Disk Write Kbytes / sec
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average
C2 Unified CCDM Database Server, Side B 5900 1050 775 300 50 75 1400 250 175
C2 Unified CCDM Web Server, Side B 900 650 565 100 50 65 200 150 125
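The 95th-percentile and average figures in these tables can be reproduced from raw monitoring samples. The sketch below uses a nearest-rank 95th percentile, which may differ slightly from the method your monitoring tool applies.

```python
def percentile_95(samples):
    """Nearest-rank 95th percentile, as commonly used for SAN sizing.
    Exact percentile definitions vary by monitoring tool."""
    ordered = sorted(samples)
    rank = max(1, round(0.95 * len(ordered)))
    return ordered[rank - 1]


def summarize(samples):
    # Design the array for the 95th percentile; watch the average day to day.
    return {
        "peak": max(samples),
        "p95": percentile_95(samples),
        "average": sum(samples) / len(samples),
    }
```

Feeding per-interval IOPS samples for a virtual machine into `summarize` yields the peak, 95th-percentile, and average columns used above.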

Storage, Blade Placement, and IOPS Considerations for HCS Core Components

SAN Configuration for 500 Agent Deployment for Core Components

The 500 agent deployment requires 3.25 TB of SAN storage for the core components.

The following table contains the SAN configuration for the 500 agent deployment. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.



Table 19 SAN Configuration for the 500 Agent Deployment

A1 - RAID5, Datastore-A1, 425 GB:

  • Drive C: Unified CCE Call Server Side A; Unified Intelligence Center Publisher; Cisco Finesse Publisher

A2 - RAID5, Datastore-A2, 1100 GB:

  • Drive C: Unified CCE Data Server Side A; Unified CVP Reporting Server

  • Drive D: Unified CCE Data Server Side A; Unified CVP Reporting Server

A3 - RAID5, Datastore-A3, 350 GB:

  • Drive C: Unified CVP Call Server 1A; Unified Communications Manager Publisher; Unified Communications Manager Subscriber 1A

B1 - RAID5, Datastore-B1, 425 GB:

  • Drive C: Unified CCE Call Server Side B; Unified Intelligence Center Subscriber; Cisco Finesse Subscriber

B2 - RAID5, Datastore-B2, 650 GB:

  • Drive C: Unified CCE Data Server Side B

  • Drive D: Unified CCE Data Server Side B

B3 - RAID5, Datastore-B3, 300 GB:

  • Drive C: Unified CVP Call Server 1B; Unified CVP OAMP Server; Unified Communications Manager Subscriber 1B

Blade Placement Requirement for 500 Agent Deployment Core Components

The 500 agent deployment requires two high-density B230M2-VCDL1 blades for the mandatory components and the Unified CVP Reporting Server.

The following tables contain the blade placement for the 500 agent deployment, chassis 1 and 2.


Note


The vCPU is oversubscribed, but the overall CPU MHz and memory are not oversubscribed for the blade.


Table 20 Blade Placement for the 500 Agent Deployment - Chassis 1
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)
A1 Unified CCE Call Server Side A 2 4 80 — 3300 4096
A1 Unified CCE Data Server Side A 2 6 80 512 3400 6144
A1 Unified CVP Call Server No. 1 4 4 150 — 1200 4096
A1 Unified Intelligence Center Publisher 2 6 146 — 800 6144
A1 Cisco Finesse Publisher 2 8 146 — 2750 8192
A1 Unified CM Publisher 2 3 60 — 800 3072
A1 Unified CM Subscriber 2 3 60 — 800 3072
A1 Unified CVP Reporting Server 4 4 80 300 2500 4096

Table 21 Blade Placement for the 500 Agent Deployment - Chassis 2
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)
B1 Unified CCE Call Server Side B 2 4 80 — 3300 4096
B1 Unified CCE Data Server Side B 2 6 80 512 3400 6144
B1 Unified CVP Server No. 2 4 4 150 — 1200 4096
B1 Unified Intelligence Center Subscriber 2 6 146 — 800 6144
B1 Cisco Finesse Subscriber 2 8 146 — 2750 8192
B1 Unified CM Subscriber Side B 2 3 60 — 800 3072
B1 Unified CVP OAMP 2 4 40 — 1200 4096

IOPS Requirement for 500 Agent Deployment Core Components

The following tables contain the required IOPS (Input/Output Operations Per Second). Use the IOPS 95th percentile value to design the SAN array and the IOPS average value to monitor the SAN array.

Table 22 IOPS, Disk Read, and Disk Write - Chassis 1
Blade Virtual Machine IOPS Disk Read Kbytes/sec Disk Write Kbytes / sec
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

A1 Unified CCE Call Server Side A 217.25 75.32 58.48 6592 139.4 75.86 14160 4054.4 2437.47
A1 Unified CCE Data Server Side A 2042.6 978.26 268.45 5581 2891.3 731.79 27410 11849.35 3007.67
A1 Unified CVP Server No. 1 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
A1 Unified Intelligence Center Publisher 781.4 628.34 460.17 466 433.1 74.32 7758 6446.3 5727.44
A1 Cisco Finesse Publisher 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
A1 Unified CM Publisher 172.65 72.31 58.32 1068 5 9.11 1860 1775.1 1218.23
A1 Unified CM Subscriber 172.65 72.31 58.32 1068 5 9.11 1860 1775.1 1218.23
A1 Unified CVP Reporting Server 1250 984 329 3126 2068.35 764.24 9166 5945.3 2210.38

Note


  1. Monitor SAN performance for IOPS and disk usage. If usage exceeds thresholds, redeploy disk resources during the service window.

  2. The IOPS values for Unified Communications Manager in the preceding table are based on the BHCA values. These values may differ for different scenarios. For more information, see IOPS values for Unified Communication Manager.


Table 23 IOPS, Disk Read, and Disk Write - Chassis 2
Blade Virtual Machine IOPS Disk Read Kbytes/sec Disk Write Kbytes / sec
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

B1 Unified CCE Call Server Side B 217.25 75.32 58.48 6592 139.4 75.86 14160 4054.4 2437.47
B1 Unified CCE Data Server Side B 2042.6 978.26 268.45 5581 2891.3 731.79 27410 11849.35 3007.67
B1 Unified CVP Server No. 2 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
B1 Unified Intelligence Center Subscriber 781.4 628.34 460.17 466 433.1 74.32 7758 6446.3 5727.44
B1 Cisco Finesse Subscriber 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
B1 Unified CM Subscriber Side B 172.65 72.31 58.32 1068 5 9.11 1860 1775.1 1218.23
B1 Unified CVP OAMP 64.02 54.92 42.99 2426.4 16.524 5.02 1254.2 310.8 287.23

Note


  1. Monitor SAN performance for IOPS and disk usage. If usage exceeds thresholds, redeploy disk resources during the service window.

  2. The IOPS values for Unified Communications Manager in the preceding table are based on the BHCA values. These values may differ for different scenarios. For more information, see IOPS values for Unified Communication Manager.


SAN Configuration for 1000 Agent Deployment for Core Components

The 1000 agent deployment requires 4.5 TB of SAN storage for the core components. The following table contains the SAN configuration for the 1000 agent deployment. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.

Table 24 SAN Configuration for the 1000 Agent Deployment

A1 - RAID5, Datastore-A1, 575 GB:

  • Drive C: Unified CCE Call Server Side A; Unified CVP Call Server 1A; Unified Intelligence Center Publisher; Cisco Finesse Publisher

A2 - RAID5, Datastore-A2, 1500 GB:

  • Drive C: Unified CCE Data Server Side A; Unified CVP Reporting Server

  • Drive D: Unified CCE Data Server Side A; Unified CVP Reporting Server

A3 - RAID5, Datastore-A3, 550 GB:

  • Drive C: Unified CVP Call Server 2A; Unified Communications Manager Publisher; Unified Communications Manager Subscriber 1A

B1 - RAID5, Datastore-B1, 575 GB:

  • Drive C: Unified CCE Call Server Side B; Unified CVP Call Server 1B; Unified Intelligence Center Subscriber; Cisco Finesse Subscriber

B2 - RAID5, Datastore-B2, 900 GB:

  • Drive C: Unified CCE Data Server Side B (operating system drive)

  • Drive D: Unified CCE Data Server Side B (database drive)

B3 - RAID5, Datastore-B3, 400 GB:

  • Drive C: Unified CVP Call Server 2B; Unified CVP OAMP Server; Unified Communications Manager Subscriber 1B

Blade Placement Requirement for 1000 Agent Deployment Core Components

The 1000 agent deployment requires one high-density B230M2-VCS1 blade per chassis for the mandatory core components. The following tables contain the blade placement for the 1000 agent deployment, chassis 1 and 2.


Note


The vCPUs are oversubscribed, but the overall CPU MHz and memory are not oversubscribed for the blade.
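This can be checked arithmetically from the reservations in Table 25; the sketch below assumes a blade with two 10-core 2.4 GHz sockets (those capacity figures are assumptions for illustration, not taken from this document):

```python
# (vCPU, CPU reservation MHz) per VM on blade A1, from Table 25.
vms = [
    (4, 5000), (4, 5100),  # CCE Call Server, CCE Data Server
    (4, 1800), (4, 1800),  # CVP Call Servers No. 1 and No. 2
    (4,  900), (4, 8000),  # CUIC Publisher, Finesse Publisher
    (2, 3600), (2, 3600),  # CM Publisher, CM Subscriber
    (4, 2500),             # CVP Reporting Server
]

BLADE_CORES = 20       # assumption: two 10-core sockets
BLADE_MHZ = 20 * 2400  # assumption: 2.4 GHz per core

vcpus = sum(v[0] for v in vms)  # 32 vCPUs on 20 physical cores
mhz = sum(v[1] for v in vms)    # 32300 MHz reserved of 48000 MHz
print(vcpus > BLADE_CORES)      # True: vCPUs are oversubscribed
print(mhz <= BLADE_MHZ)         # True: CPU MHz is not oversubscribed
```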


Table 25 Blade Placement for the 1000 Agent Deployment - Chassis 1
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

A1

Unified CCE Call Server Side A 4 8 80 --- 5000 8192
Unified CCE Data Server Side A 4 8 80 750 5100 8192
Unified CVP Call Server No. 1 Side A 4 4 150 --- 1800 4096
Unified CVP Call Server No. 2 Side A 4 4 150 --- 1800 4096
Unified Intelligence Center Publisher 4 6 146 --- 900 6144
Cisco Finesse Publisher 4 8 146 --- 8000 8192
Unified CM Publisher 2 6 110 --- 3600 6144
Unified CM Subscriber Side A 2 6 110 --- 3600 6144
Unified CVP Reporting Server 4 4 80 438 2500 4096

Table 26 Blade Placement for the 1000 Agent Deployment - Chassis 2
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

B1

Unified CCE Call Server Side B 4 8 80 --- 5000 8192
Unified CCE Data Server Side B 4 8 80 750 5100 8192
Unified CVP Call Server Side B No. 1 4 4 150 --- 1800 4096
Unified CVP Call Server Side B No. 2 4 4 150 --- 1800 4096
Unified Intelligence Center Subscriber 4 6 146 --- 900 6144
Cisco Finesse Subscriber 4 8 146 --- 8000 8192
Unified CM Subscriber Server Side B 2 6 110 --- 3600 6144
Unified CVP OAMP 2 4 40 --- 1200 4096

IOPS Requirement for 1000 Agent Deployment Core Components

The following tables contain the IOPS (input/output operations per second) required for the 1000 agent deployment. Use the IOPS 95th percentile value to design the SAN array and the IOPS average value to monitor the SAN array.
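For reference, the peak, 95th-percentile, and average columns in the tables below can be derived from raw IOPS samples with a nearest-rank percentile; a minimal illustrative sketch (the function name and sample data are hypothetical):

```python
def iops_stats(samples):
    """Return (peak, 95th percentile, average) for raw IOPS samples."""
    ordered = sorted(samples)
    # Nearest-rank method: smallest value covering 95% of the samples.
    rank = -(-95 * len(ordered) // 100)  # ceiling division
    return max(ordered), ordered[rank - 1], sum(ordered) / len(ordered)

peak, p95, avg = iops_stats(list(range(1, 21)))  # 20 hypothetical samples
print(peak, p95, avg)  # 20 19 10.5
```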

Table 27 IOPS, Disk Read, and Disk Write - Chassis 1
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

A1

Unified CCE Call Server Side A 250.25 81.84 71.58 9646.8 156.07 106.292 28204.8 10128.83 6366
Unified CCE Data Server Side A 2244.98 1082.76 312.40 56522.25 18588.62 4271.13 245150.46 18371.31 6317.53
Unified CVP Call Server No. 1 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Call Server No. 2 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified Intelligence Center Publisher 781.4 628.34 460.17 466 433.1 74.32 7758 6446.3 5727.44
Cisco Finesse Publisher 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
Unified CM Publisher 215 128 107
Unified CM Subscriber Side A 215 128 107
Unified CVP Reporting Server 1250 984 329 3126 2068.35 764.24 9166 5945.3 2210.38


Note


  1. Monitor SAN performance for IOPS and disk usage. If usage exceeds thresholds, redeploy disk resources during the service window.

  2. The IOPS values for Unified Communication Manager in the preceding table are based on the BHCA values. These values may differ for different scenarios. For more information, see IOPS values for Unified Communication Manager.


Table 28 IOPS, Disk Read, and Disk Write - Chassis 2
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

B1

Unified CCE Call Server Side B 250.25 81.84 71.58 9646.8 156.07 106.292 28204.8 10128.83 6366
Unified CCE Data Server Side B 2244.98 1082.76 312.40 56522.25 18588.62 4271.13 245150.46 18371.31 6317.53
Unified CVP Call Server Side B No. 1 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Call Server Side B No. 2 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified Intelligence Center Subscriber 781.4 628.34 460.17 466 433.1 74.32 7758 6446.3 5727.44
Cisco Finesse Subscriber 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
Unified CM Subscriber Server Side B 215 128 107
Unified CVP OAMP 64.02 54.92 42.99 2426.4 16.524 5.02 1254.2 310.8 287.23


Note


  1. Monitor SAN performance for IOPS and disk usage. If usage exceeds thresholds, redeploy disk resources during the service window.

  2. The IOPS values for Unified Communication Manager in the preceding table are based on the BHCA values. These values may differ for different scenarios. For more information, see IOPS values for Unified Communication Manager.


SAN Configuration for 4000 Agent Deployment for Core Components

The 4000 agent deployment requires 10 TB of SAN storage for the core components. The following table contains the SAN configuration for the 4000 agent deployment. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.
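Summing the per-datastore allocations in Table 29 cross-checks the stated capacity; a minimal sketch (sizes transcribed from the table below):

```python
# Datastore allocations (GB) transcribed from Table 29: A1..A8 and B1..B8.
side_a = [850, 1300, 500, 550, 400, 600, 600, 700]
side_b = [900, 1300, 550, 450, 400, 400, 600, 700]

total_gb = sum(side_a) + sum(side_b)
print(total_gb)  # 10800 GB, covering the stated 10 TB requirement
```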

Table 29 SAN Configuration for the 4000 Agent Deployment
RAID Group VM Datastore Disk Drive Virtual Machine
A1 - RAID5 Datastore-A1 850 GB C:

Unified CCE Rogger 1A (Operating System Drive)

Unified CVP Reporting Server (Operating System Drive)

D:

Unified CCE Rogger 1A (Database Drive)

Unified CVP Reporting Server (Database Drive)

A2 - RAID5 Datastore-A2 1300 GB C:

Unified CCE AW-HDS-DDS 1A (Operating System Drive)

Unified CCE AW-HDS-DDS 2A (Operating System Drive)

D:

Unified CCE AW-HDS-DDS 1A (Database Drive)

Unified CCE AW-HDS-DDS 2A (Database Drive)

A3 - RAID5 Datastore-A3 500 GB C:

Unified CCE Agent PG 1A

Unified CVP Call Server 1A

Unified CVP Call Server 2A

A4 - RAID5 Datastore-A4 550 GB C:

Unified CCE Agent PG 2A

Unified CCE VRU PG 1A

Unified Intelligence Center Publisher

Cisco Finesse Publisher 1

A5 - RAID5 Datastore-A5 400 GB C:

Unified CVP Call Server 3A

Unified CVP Call Server 4A

A6 - RAID5 Datastore-A6 600 GB C:

Unified Communication Manager Publisher

Unified Communication Manager Subscriber 1A

Unified Communication Manager Subscriber 2A

A7 - RAID5 Datastore-A7 600 GB C:

Unified Communication Manager Subscriber 3A

Unified Communication Manager Subscriber 4A

Cisco Finesse Publisher 2

A8 - RAID5 Datastore-A8 700 GB C:

Unified CVP Call Server 5A

Unified CVP Call Server 6A

Unified CVP Call Server 7A

Unified CVP Call Server 8A

B1 - RAID5 Datastore-B1 900 GB C:

Unified CCE Rogger 1B (Operating System Drive)

Unified CVP OAMP Server

Unified CVP Reporting Server Side B (Operating System Drive)

D:

Unified CCE Rogger 1B (Database Drive)

Unified CVP Reporting Server Side B (Database Drive)

B2 - RAID5 Datastore-B2 1300 GB C:

Unified CCE AW-HDS-DDS 1B (Operating System Drive)

Unified CCE AW-HDS-DDS 2B (Operating System Drive)

D:

Unified CCE AW-HDS-DDS 1B (Database Drive)

Unified CCE AW-HDS-DDS 2B (Database Drive)

B3 - RAID5 Datastore-B3 550 GB C:

Unified CCE Agent PG 1B

Unified CCE VRU PG 1B

Unified Intelligence Center Subscriber

Cisco Finesse Subscriber 1

B4 - RAID5 Datastore-B4 450 GB C:

Unified CCE Agent PG 2B

Unified CVP Call Server 1B

Unified CVP Call Server 2B

B5 - RAID5 Datastore-B5 400 GB C:

Unified Communication Manager Subscriber 1B

Unified Communication Manager Subscriber 2B

B6 - RAID5 Datastore-B6 400 GB C:

Unified CVP Call Server 3B

Unified CVP Call Server 4B

B7 - RAID5 Datastore-B7 600 GB C:

Unified Communication Manager Subscriber 3B

Unified Communication Manager Subscriber 4B

Cisco Finesse Subscriber 2

B8 - RAID5 Datastore-B8 700 GB C:

Unified CVP Call Server 5B

Unified CVP Call Server 6B

Unified CVP Call Server 7B

Unified CVP Call Server 8B

Blade Placement Requirement for 4000 Agent Deployment Core Components

The 4000 agent deployment requires four pairs of high-density B230M2-VCS1 blades for the mandatory core components. The fourth pair of blades, which hosts the CCB and CVP Reporting Servers, is optional; it is required when the sum of calls at agents and in the IVR exceeds 3600.
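The sizing rule above reduces to a simple threshold check; a small illustrative helper (the function and parameter names are hypothetical, the 3600-call threshold comes from the text):

```python
def blade_pairs_required(agent_calls: int, ivr_calls: int) -> int:
    """Three blade pairs are mandatory; the fourth is needed above 3600 calls."""
    return 4 if agent_calls + ivr_calls > 3600 else 3

print(blade_pairs_required(2000, 1600))  # 3 (exactly 3600: fourth pair not needed)
print(blade_pairs_required(2000, 1601))  # 4 (exceeds 3600: fourth pair required)
```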

The following tables contain the blade placement for the 4000 agent deployment.


Note


The vCPUs are oversubscribed, but the overall CPU MHz and memory are not oversubscribed for the blade.


Table 30 Blade Placement for the 4000 Agent Deployment - Chassis 1 Blade 1
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

A1

Unified CCE Rogger Side A 4 6 80 150 5400 6144
Unified CCE Agent PG 1 Side A 2 6 80 __ 3600 6144
Unified CCE Agent PG 2 Side A 2 6 80 __ 3600 6144
Unified CCE VRU PG 1 Side A 2 2 80 __ 1800 2048
Unified CCE AW-HDS-DDS 1 Side A 4 8 80 500 3600 8192
Unified CCE AW-HDS-DDS 2 Side A 4 8 80 500 3600 8192

Table 31 Blade Placement for the 4000 Agent Deployment - Chassis 1 Blade 2
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

A2

Unified Intelligence Center Publisher 4 6 146 __ 900 6144
Unified CVP Server 1 Side A 4 4 150 __ 1800 4096
Unified CVP Server 2 Side A 4 4 150 __ 1800 4096
Unified CVP Server 3 Side A 4 4 150 __ 1800 4096
Unified CVP Server 4 Side A 4 4 150 __ 1800 4096

Table 32 Blade Placement for the 4000 Agent Deployment - Chassis 1 Blade 3
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

A3

Cisco Finesse Publisher 1 Side A 4 8 146 __ 8000 8192
Unified CM Publisher 2 6 110 __ 3600 6144
Unified CM Subscriber 1 Side A 2 6 110 __ 3600 6144
Unified CM Subscriber 3 Side A 2 6 110 __ 3600 6144
Unified CM Subscriber 5 Side A 2 6 110 __ 3600 6144
Unified CM Subscriber 7 Side A 2 6 110 __ 3600 6144
Cisco Finesse Publisher 2 Side A 4 8 146 __ 8000 8192
Table 33 Blade Placement for the 4000 Agent Deployment - Chassis 1 Blade 4
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

A4

Unified CVP Server 5 Side A 4 4 150 __ 1800 4096
Unified CVP Server 6 Side A 4 4 150 __ 1800 4096
Unified CVP Server 7 Side A 4 4 150 __ 1800 4096
Unified CVP Server 8 Side A 4 4 150 __ 1800 4096
Unified CVP Reporting Server Side A 4 4 80 438 6600 4096
Table 34 Blade Placement for the 4000 Agent Deployment - Chassis 2 Blade 1
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

B1

Unified CCE Rogger Side B 4 6 80 150 5400 6144
Unified CCE Agent PG 1 Side B 2 6 80 __ 3600 6144
Unified CCE Agent PG 2 Side B 2 6 80 __ 3600 6144
Unified CCE VRU PG 1 Side B 2 2 80 __ 1800 2048
Unified CCE AW-HDS-DDS 1 Side B 4 8 80 500 3600 8192
Unified CCE AW-HDS-DDS 2 Side B 4 8 80 500 3600 8192
Table 35 Blade Placement for the 4000 Agent Deployment - Chassis 2 Blade 2
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

B2

Unified Intelligence Center Subscriber 4 6 146 __ 900 6144
Unified CVP Server 1 Side B 4 4 150 __ 1800 4096
Unified CVP Server 2 Side B 4 4 150 __ 1800 4096
Unified CVP Server 3 Side B 4 4 150 __ 1800 4096
Unified CVP Server 4 Side B 4 4 150 __ 1800 4096
Table 36 Blade Placement for the 4000 Agent Deployment - Chassis 2 Blade 3
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

B3

Unified CVP OAMP 2 4 40 __ 1200 4096
Cisco Finesse Subscriber 1 Side B 4 8 146 __ 8000 8192
Cisco Finesse Subscriber 2 Side B 4 8 146 __ 8000 8192
Unified CM Subscriber 2 Side B 2 6 110 __ 3600 6144
Unified CM Subscriber 4 Side B 2 6 110 __ 3600 6144
Unified CM Subscriber 6 Side B 2 6 110 __ 3600 6144
Unified CM Subscriber 8 Side B 2 6 110 __ 3600 6144
Table 37 Blade Placement for the 4000 Agent Deployment - Chassis 2 Blade 4
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

B4

Unified CVP Server 5 Side B 4 4 150 __ 1800 4096
Unified CVP Server 6 Side B 4 4 150 __ 1800 4096
Unified CVP Server 7 Side B 4 4 150 __ 1800 4096
Unified CVP Server 8 Side B 4 4 150 __ 1800 4096
Unified CVP Reporting Server Side B 4 4 80 438 6600 4096

IOPS Requirement for 4000 Agent Deployment Core Components

The following tables contain the IOPS (input/output operations per second) required for the 4000 agent deployment. Use the IOPS 95th percentile value to design the SAN array and the IOPS average value to monitor the SAN array.

Table 38 IOPS, Disk Read, and Disk Write - Chassis 1 Blade 1
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

A1

Unified CCE Rogger Side A 633.85 580.02 203.02 3153 626.25 328.81 40552 9137.3 3722.76
Unified CCE Agent PG 1 Side A 908.5 106.45 54.07 35787 31.4 484.83 59250 7405.6 2490.91
Unified CCE Agent PG 2 Side A 908.5 106.45 54.07 35787 31.4 484.83 59250 7405.6 2490.91
Unified CCE VRU PG 1 Side A 130.65 106.35 63.7 2516 800.95 154.83 4595 4187.2 2432.77
Unified CCE AW-HDS-DDS 1 Side A 1115 898.54 428.94 2732 781.65 485.68 30154 7703.8 2905.42
Unified CCE AW-HDS-DDS 2 Side A 1115 898.54 428.94 2732 781.65 485.68 30154 7703.8 2905.42
Table 39 IOPS, Disk Read, and Disk Write - Chassis 1 Blade 2
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

A2

Unified Intelligence Center Publisher 781.4 628.34 460.17 466 433.1 74.32 7758 6446.3 5727.44
Unified CVP Server 1 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 2 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 3 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 4 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Table 40 IOPS, Disk Read, and Disk Write - Chassis 1 Blade 3
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

A3

Cisco Finesse Publisher 1 Side A 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
Unified CM Publisher 215 128 107
Unified CM Subscriber 1 Side A 215 128 107
Unified CM Subscriber 3 Side A 215 128 107
Unified CM Subscriber 5 Side A 215 128 107
Unified CM Subscriber 7 Side A 215 128 107
Cisco Finesse Publisher 2 Side A 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
Table 41 IOPS, Disk Read, and Disk Write - Chassis 1 Blade 4
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

A4

Unified CVP Server 5 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 6 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 7 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 8 Side A 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Reporting Server Side A 1250 984 329 3126 2068.35 764.24 9166 5945.3 2210.38
Table 42 IOPS, Disk Read, and Disk Write - Chassis 2 Blade 1
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

B1

Unified CCE Rogger Side B 633.85 580.02 203.02 3153 626.25 328.81 40552 9137.3 3722.76
Unified CCE Agent PG 1 Side B 908.5 106.45 54.07 35787 31.4 484.83 59250 7405.6 2490.91
Unified CCE Agent PG 2 Side B 908.5 106.45 54.07 35787 31.4 484.83 59250 7405.6 2490.91
Unified CCE VRU PG 1 Side B 130.65 106.35 63.7 2516 800.95 154.83 4595 4187.2 2432.77
Unified CCE AW-HDS-DDS 1 Side B 1115 898.54 428.94 2732 781.65 485.68 30154 7703.8 2905.42
Unified CCE AW-HDS-DDS 2 Side B 1115 898.54 428.94 2732 781.65 485.68 30154 7703.8 2905.42
Table 43 IOPS, Disk Read, and Disk Write - Chassis 2 Blade 2
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

B2

Unified Intelligence Center Subscriber 781.4 628.34 460.17 466 433.1 74.32 7758 6446.3 5727.44
Unified CVP Server 1 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 2 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 3 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 4 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Table 44 IOPS, Disk Read, and Disk Write - Chassis 2 Blade 3
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

B3

Unified CVP OAMP 64.02 54.92 42.99 2426.4 16.524 5.02 1254.2 310.8 287.23
Cisco Finesse Subscriber 1 Side B 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
Cisco Finesse Subscriber 2 Side B 53.55 48.21 29.68 4 0 0.02 1488 1429.15 920.23
Unified CM Subscriber 2 Side B 215 128 107
Unified CM Subscriber 4 Side B 215 128 107
Unified CM Subscriber 6 Side B 215 128 107
Unified CM Subscriber 8 Side B 215 128 107
Table 45 IOPS, Disk Read, and Disk Write - Chassis 2 Blade 4
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average

B4

Unified CVP Server 5 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 6 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 7 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Server 8 Side B 637 62 25 2450 1401.7 582.12 4433 4354.1 2328.12
Unified CVP Reporting Server Side B 1250 984 329 3126 2068.35 764.24 9166 5945.3 2210.38

Storage, Blade Placement, and IOPS Considerations for HCS Optional Components

SAN Configurations of 120 Multimedia Agent Deployment for Unified WIM and EIM

The HCS deployment requires 1 TB of SAN storage for the HCS optional components. The following table contains the SAN configuration for the 120 multimedia agent deployment for Unified WIM and EIM. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.

RAID Group VM Datastore Disk Drive Virtual Machine
D1-RAID5 Datastore-D1 275 GB C

Unified WIM and EIM DB Server

D

Unified WIM and EIM DB Server

D2-RAID5 Datastore-D2 600 GB C

Unified WIM and EIM File Server

Unified WIM and EIM Services Server

Unified WIM and EIM Messaging Server

Unified WIM and EIM Application Server

Unified WIM and EIM Web Server

Cisco Media Blender

Blade Placement Requirement of 120 Multimedia Agent Deployment for Unified WIM and EIM

The following table contains the blade placement of the 120 multimedia agent deployment for Unified WIM and EIM components.

Table 46 Blade Placement of 120 Multimedia Agent Deployment for Unified WIM and EIM components
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

Unified WIM and EIM File Server 2 2 80 __ 4400 2048
Unified WIM and EIM DB Server 2 4 80 150 4400 4096
Unified WIM and EIM Services Server 2 4 80 __ 4400 4096
Unified WIM and EIM Messaging Server 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 2 2 80 __ 4400 2048
Unified WIM and EIM Web Server 1 1 80 __ 2200 1024
Cisco Media Blender 2 2 80 __ 4400 2048


Note


The optional component OVAs support specification-based hardware and are supported on UCS B-Series blades and C-Series servers.


IOPS Requirement of 120 Multimedia Agent Deployment for Unified WIM and EIM

The following table contains the IOPS for the 120 multimedia agent deployment for Unified WIM and EIM components.

Table 47 IOPS, Disk Read, and Disk Write
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average
  Unified WIM and EIM File Server 88.45 2.70 1.79 2744 0.00 35.10 304 12.00 8.58
  Unified WIM and EIM DB Server 863.45 195.99 75.28 61712 2259.35 704.11 49943 1347.65 844.04
  Unified WIM and EIM Services Server 117.55 3.19 1.69 5113 2.05 45.25 742 9.05 6.99
  Unified WIM and EIM Messaging Server 93.95 5.89 1.66 9331 176.55 56.95 17783 29.15 64.25
  Unified WIM and EIM Application Server 477.05 39.81 5.21 9925 855.05 124.57 10008 18.20 59.09
  Unified WIM and EIM Web Server 140.30 15.10 4.65 2421 300.80 71.05 864 14.00 10.20
  Cisco Media Blender 480.30 9.87 5.61 390 2.10 1.44 30400 78.75 155.41

SAN Configurations of 240 Multimedia Agent Deployment for Unified WIM and EIM

The HCS deployment requires 1.2 TB of SAN storage for the HCS optional components. The following table contains the SAN configuration of the 240 multimedia agent deployment for Unified WIM and EIM components. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.

RAID Group VM Datastore Disk Drive Virtual Machine
D1-RAID5 Datastore-D1 275 GB C

Unified WIM and EIM DB Server

D

Unified WIM and EIM DB Server

D2-RAID5 Datastore-D2 800 GB C

Unified WIM and EIM File Server

Unified WIM and EIM Services Server

Unified WIM and EIM Messaging Server 1

Unified WIM and EIM Messaging Server 2

Unified WIM and EIM Application Server

Unified WIM and EIM Web Server 1

Unified WIM and EIM Web Server 2

Cisco Media Blender

Blade Placement Requirement of 240 Multimedia Agent Deployment for Unified WIM and EIM

The following table contains the blade placement of the 240 multimedia agent deployment for Unified WIM and EIM components.

Table 48 Blade Placement 240 Multimedia Agent Deployment for Unified WIM and EIM components
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

Unified WIM and EIM File Server 2 2 80 __ 4400 2048
Unified WIM and EIM DB Server 4 4 80 150 8800 4096
Unified WIM and EIM Services Server 2 4 80 __ 4400 4096
Unified WIM and EIM Messaging Server 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 1 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 2 2 2 80 __ 4400 2048
Unified WIM and EIM Web Server 1 1 1 80 __ 2200 1024
Unified WIM and EIM Web Server 2 1 1 80 __ 2200 1024
Cisco Media Blender 2 2 80 __ 4400 2048


Note


The optional component OVAs support specification-based hardware and are supported on UCS B-Series blades and C-Series servers.


IOPS Requirement of 240 Multimedia Agent Deployment for Unified WIM and EIM

The following table contains the IOPS of the 240 multimedia agent deployment for Unified WIM and EIM components.

Table 49 IOPS, Disk Read, and Disk Write
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average
  Unified WIM and EIM File Server 31.65 4.86 1.80 2769 1.10 18.02 15068 62.00 73.57
  Unified WIM and EIM DB Server 997.45 531.84 153.39 54009 15074.85 1453.60 79320 1726.20 1037.57
  Unified WIM and EIM Services Server 115.00 8.18 1.76 3935 94.25 39.26 25311 25.50 137.82
  Unified WIM and EIM Messaging Server 93.95 5.89 1.66 9331 176.55 56.95 19070 52.05 66.03
  Unified WIM and EIM Application Server 1 494.35 39.81 5.21 18045 855.05 124.57 32960 37.30 160.83
  Unified WIM and EIM Application Server 2 494.35 39.81 5.21 18045 855.05 124.57 32960 37.30 160.83
  Unified WIM and EIM Web Server 1 140.30 15.10 7.84 2421 300.80 117.11 864 18.00 11.97
  Unified WIM and EIM Web Server 2 140.30 15.10 7.84 2421 300.80 117.11 864 18.00 11.97
  Cisco Media Blender 480.30 9.87 5.61 390 2.10 1.44 30400 78.75 155.41

SAN Configuration for Unified WIM and EIM Distributed Server Deployment

The HCS deployment requires 2.4 TB of SAN storage for the HCS optional components. The following table contains the SAN configuration for the Unified WIM and EIM distributed server deployment components. In this table, the C drive is the active primary partition used for the operating system and applications, and the D drive is a secondary partition used for the database.

RAID Group VM Datastore Disk Drive Virtual Machine
D1-RAID5 Datastore-D1 1100 GB C

Unified WIM and EIM DB Server

Unified WIM and EIM Reporting Server

D

Unified WIM and EIM DB Server

Unified WIM and EIM Reporting Server

D2-RAID5 Datastore-D2 1300 GB C

Unified WIM and EIM File Server

Unified WIM and EIM Services Server

Unified WIM and EIM Messaging Server 1

Unified WIM and EIM Messaging Server 2

Unified WIM and EIM Messaging Server 3

Unified WIM and EIM Messaging Server 4

Unified WIM and EIM Messaging Server 5

Unified WIM and EIM Application Server

Unified WIM and EIM Web Server 1

Unified WIM and EIM Web Server 2

Unified WIM and EIM Web Server 3

Unified WIM and EIM Web Server 4

Unified WIM and EIM Web Server 5

Cisco Media Blender

Blade Placement Requirement of Unified WIM and EIM Distributed Server Deployment

The following table contains the blade placement of the Unified WIM and EIM distributed server deployment components.

Table 50 Blade Placement of Unified WIM and EIM Distributed Server Deployment components
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

Unified WIM and EIM File Server 2 2 80 __ 4400 2048
Unified WIM and EIM DB Server 8 16 80 438 8710 16384
Unified WIM and EIM Reporting Server 8 16 80 438 8710 16384
Unified WIM and EIM Services Server 2 8 80 __ 4400 8192
Unified WIM and EIM Messaging Server 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 1 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 2 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 3 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 4 2 2 80 __ 4400 2048
Unified WIM and EIM Application Server 5 2 2 80 __ 4400 2048
Unified WIM and EIM Web Server 1 1 1 80 __ 2200 1024
Unified WIM and EIM Web Server 2 1 1 80 __ 2200 1024
Unified WIM and EIM Web Server 3 1 1 80 __ 2200 1024
Unified WIM and EIM Web Server 4 1 1 80 __ 2200 1024
Unified WIM and EIM Web Server 5 1 1 80 __ 2200 1024
Cisco Media Blender 1 2 2 80 __ 4400 2048
Cisco Media Blender 2 2 2 80 __ 4400 2048


Note


The optional component OVAs support specification-based hardware and are supported on UCS B-Series blades and C-Series servers.


IOPS Requirement of Unified WIM and EIM Distributed Server Deployment

The following table contains the IOPS of the Unified WIM and EIM distributed server deployment components.

Table 51 IOPS, Disk Read, and Disk Write
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average
  Unified WIM and EIM File Server 22 19.1 14.44 5 1 0.15 19 18 14
  Unified WIM and EIM DB Server 5127 4948.3 3670.23 171 26.6 10.6 5125 4866 3658.8
  Unified WIM and EIM Reporting Server 5127 4948.3 3670.23 171 26.6 10.6 5125 4866 3658.8
  Unified WIM and EIM Services Server 69 20.4 6.5 7 1.3 0.29 68 20.4 6.19
  Unified WIM and EIM Messaging Server 4 1.3 1.14 0 0 0 4 1.3 1.14
  Unified WIM and EIM Application Server 1 1523 433.12 75.1 512.2 65.04 16.93 1087 368.82 57.88
  Unified WIM and EIM Application Server 2 1523 433.12 75.1 512.2 65.04 16.93 1087 368.82 57.88
  Unified WIM and EIM Application Server 3 1523 433.12 75.1 512.2 65.04 16.93 1087 368.82 57.88
  Unified WIM and EIM Application Server 4 1523 433.12 75.1 512.2 65.04 16.93 1087 368.82 57.88
  Unified WIM and EIM Application Server 5 1523 433.12 75.1 512.2 65.04 16.93 1087 368.82 57.88
  Unified WIM and EIM Web Server 1 22.6 16.32 8.59 14.8 8.4 1.74 12.2 8.84 6.73
  Unified WIM and EIM Web Server 2 22.6 16.32 8.59 14.8 8.4 1.74 12.2 8.84 6.73
  Unified WIM and EIM Web Server 3 22.6 16.32 8.59 14.8 8.4 1.74 12.2 8.84 6.73
  Unified WIM and EIM Web Server 4 22.6 16.32 8.59 14.8 8.4 1.74 12.2 8.84 6.73
  Unified WIM and EIM Web Server 5 22.6 16.32 8.59 14.8 8.4 1.74 12.2 8.84 6.73
  Cisco Media Blender 1 12.25 7.45 5.51 47 0 0.52 384 289 177.47
  Cisco Media Blender 2 12.25 7.45 5.51 47 0 0.52 384 289 177.47

SAN Configurations for Cisco Remote Silent Monitoring

The HCS deployment requires 100 GB of SAN storage for Cisco Remote Silent Monitoring. The following table contains the SAN configuration for Cisco Remote Silent Monitoring. In this table, the C drive is the active primary partition used for the operating system and applications.

RAID Group VM Datastore Disk Drive Virtual Machine
D3-RAID5 Datastore-D3 100 GB C

Cisco Remote Silent Monitoring

Blade Placement Requirement for Cisco Remote Silent Monitoring

The following table contains the blade placement for Cisco Remote Silent Monitoring.

Table 52 Blade Placement for Cisco Remote Silent Monitoring
Blade Virtual Machine vCPU RAM (GB) Disk C (GB) Disk D (GB) CPU Reservation (MHz) RAM Reservation (MB)

Cisco Remote Silent Monitoring 2 4 50 __ 2130 4096


Note


The optional component OVAs support specification-based hardware and are supported on UCS B-Series blades and C-Series servers.


IOPS Requirement for Cisco Remote Silent Monitoring

The following table contains the IOPS for Cisco Remote Silent Monitoring.

Table 53 IOPS, Disk Read, and Disk Write
Blade Virtual Machine IOPS Disk Read KB/s Disk Write KB/s
Peak 95th Percentile Average Peak 95th Percentile Average Peak 95th Percentile Average
  Cisco Remote Silent Monitoring 32 4.25 2.38 1054 0 8.63 718 18 14.44

Core Component High Availability Considerations

This section describes the High Availability considerations for the Cisco HCS for Contact Center core components.

The following table shows the failover scenarios for the HCS for Contact Center components, the impact on active and new calls, and the postrecovery actions.

Table 54 HCS for Contact Center Failover
Component Failover scenario New call impact Active call impact Post recovery action
Unified CM Visible network failure

Disrupts new calls while the phones route to the backup subscriber. Processes the calls when the routing completes.

In-progress calls remain active, with no supplementary services such as conference or transfer.

After the network of the primary subscriber becomes active, the phones align to the primary subscriber.

Call manager service in Unified CM primary subscriber failure Disrupts new calls while the phones route to the backup subscriber. Processes the calls when the routing completes. In-progress calls remain active, with no supplementary services such as conference or transfer.

After the call manager service in the Unified CM primary subscriber recovers, all idle phones route back to the primary subscriber.

Unified CM CTI Manager service on primary subscriber failure Disrupts new calls while the phones route to the backup subscriber. Processes the calls when the routing completes. In-progress calls remain active, with no supplementary services such as conference or transfer.

After the Unified CM CTI Manager service on primary subscriber recovers, peripheral gateway side B remains active and uses the CTI Manager service on the Unified CM backup subscriber. The peripheral gateway does not switch over.

Gateway Primary gateway is unreachable New calls redirect to the backup gateway. In-progress calls become inactive. After the primary gateway restores, calls (active and new) route back to the primary gateway.
MRCP ASR/TTS Primary server is not accessible New calls redirect to the backup ASR/TTS server In-progress calls remain active and redirect to the backup ASR/TTS server. After the primary server restores, calls (active and new) route back to the primary ASR/TTS server.
Blade Blade failover Disrupts new calls while backup server components become active. In-progress calls become inactive. After the backup server components restore, calls (active and new) route back to the primary server.
WAN Link Unified CM call survivability during WAN link failure. New calls redirect to Survivable Remote Site Telephony (SRST). In-progress calls redirect to Survivable Remote Site Telephony (SRST). After the WAN link restores, calls route back to Unified Communications Manager.
Unified CVP call survivability during WAN link failure. A combination of services from a TCL script (survivability.tcl) and SRST functions handles survivability for new calls.

The TCL script redirects the new calls to a configurable destination.

Note    The destination choices for the TCL script are configured as parameters in the Cisco IOS Gateway configuration.
The new calls can also be redirected to alternative destinations, including SRST, *8 TNT, or hookflash. For transfers to the SRST call agent, the most common target is an SRST alias or a Basic ACD hunt group.
A combination of services from a TCL script (survivability.tcl) and SRST functions handles survivability for in-progress calls.

The TCL script redirects the in-progress calls to a configurable destination.

Note    The destination choices for the TCL script are configured as parameters in the Cisco IOS Gateway configuration.
The in-progress calls can also be redirected to alternative destinations, including SRST, *8 TNT, or hookflash. For transfers to the SRST call agent, the most common target is an SRST alias or a Basic ACD hunt group.
After the WAN link restores, calls route back to Unified CVP.

VMware High Availability

High availability (HA) provides failover protection against hardware and operating system failures within your virtualized Cisco HCS for Contact Center environment.

The following lists the VMware HA considerations for deploying Cisco HCS for Contact Center with VMware HA enabled:

  • Cisco HCS does not support VMware Distributed Resource Scheduler (DRS).

  • Select the Admission Control Policy: Specify a failover host. When an ESXi host fails, all of the VMs on this host fail over to the reserved HA backup host. The failover host Admission Control Policy avoids resource fragmentation. The Cisco HCS for Contact Center deployment models assume a specific VM colocation within a Cisco HCS for Contact Center instance deployment. This VM colocation requirement guarantees system performance, and it is tested for specific Cisco HCS for Contact Center application capacity requirements.

  • Select VM monitoring status options: VM Monitoring Only.

  • Select Host Isolation response: Shut down for all the virtual machines.

  • Configure the Cisco HCS for Contact Center virtual machines with the VM restart priority shown in the following table.

Table 55 Virtual Machine Settings

Virtual Machine

VM Restart Priority

Cisco Unified Intelligence Center Low
Contact Center Domain Manager Low
Unified CVP Reporting Server Low
Unified CCE Call Server Medium
Cisco Finesse Medium
Unified CVP Servers High
Unified CCE Database Server High
  • HA is not required because the Cisco HCS for Contact Center applications are highly available by design.

  • HA Backup Hosts must be in the same cluster, but not in the same physical chassis as the Contact Center blades.

  • For more information about high availability, see the VMware vSphere Availability Guide for ESXi 5.0.


Note


Because the Router and PGs are co-located in the 500 and 1000 agent deployment models, an unlikely dual (public and private) network failure could result in serious routing degradation. The Cisco Hosted Collaboration Solution for Contact Center does not tolerate a dual concurrent network failure, so you may need to intervene to restore the system's full functionality.


Network Link High Availability

The following lists considerations for when the network link between the Cisco HCS for Contact Center setup and Active Directory fails:

  • Call traffic is not impacted during the link failure.

  • The virtual machines in the domain restrict sign in using the domain controller credentials. You can sign in using cached credentials.

  • If you stop Unified CCE services before the link fails, you must restore the link before starting the Unified CCE components.

  • You will not be able to access the local PG Setup or sign in to the Unified CCE Web Setup.

  • If the link fails while the Cisco HCS services are active, access to Unified CCE Web Setup, configuration tools, and Script Editor fails.

  • Although the Unified CCDM allows login to the portal, access to the reporting page fails.

  • The administrator and superusers can access or configure any attribute except the Reporting Configuration in the Cisco Unified Intelligence Center OAMP portal.

  • Agent supervisors cannot sign in to the Cisco Unified Intelligence Center Reporting portal; however, supervisors who are already signed in can access the reports.

Unified CCE High Availability

In the 500 and 1000 agent deployment models, the Unified CCE Call Server contains the Unified CCE Router, Unified CCE PG, CG, and CTI OS Server, and the Database Server contains the Logger, the Unified CCE Administration Server, and the Real-Time Data Server. In the 4000 agent deployment, the Unified CCE Rogger contains the Router and Logger, and the Unified PG server contains the PG, CG, and CTI OS Server.

This section describes how high availability works for each component within a Unified CCE Call Server and Unified Database Server or within CCE Rogger and PG servers.

Agent PIM

Connect Side A of the Agent PIM to one subscriber and Side B to the other subscriber. Each of the Unified CM subscribers A and B must run a local instance of CTI Manager. When PG (PIM) side A fails, PG (PIM) side B becomes active. Agents' calls in progress continue, but with no third-party call control (conference, transfer, and so forth) available from their agent desktop softphones. Agents who are not on calls may notice their CTI desktop disable the agent state or third-party call control buttons during the failover to the Side B PIM. After the failover completes, the agent desktop buttons are restored. When PG side A recovers, PG side B remains active and uses the CTI Manager on Unified CM Subscriber B. The PIM does not fail back to Side A, and call processing continues on PG Side B.

VRU PIM

When the VRU PIM fails, all calls in progress or queued in Unified CVP are dropped. Upon failover, the redundant (duplex) VRU PIM side connects to Unified CVP and begins processing new calls. When the failed VRU PIM side recovers, the currently running VRU PIM continues to operate as the active VRU PIM; there is no fail-back.

CTI Server

CTI Server is redundant and resides on the Unified CCE Call server or PG. When the CTI Server fails, the redundant CTI server becomes active and begins processing call events. Both CTI OS and Unified Finesse Servers are clients of the CTI Server and are designed to monitor both CTI Servers in a duplex environment and maintain the agent state during failover processing. Agents (logged in to either CTI OS desktops or Cisco Finesse) see their desktop buttons dim during the failover to prevent them from attempting to perform tasks while the CTI Server is down. The buttons are restored as soon as the redundant CTI Server is restored and the agent can resume tasks without having to log in again to the desktop application.

CTI OS Server

CTI OS Server is a software component that runs co-resident on the Unified CCE Call Server or PG. Unlike the PG processes that run in hot-standby mode, both CTI OS Server processes run in active mode all the time. The CTI OS Server processes are managed by Node Manager, which monitors each process running as part of the CTI OS service and automatically restarts abnormally terminated processes. When a CTI OS client loses its connection to CTI OS server side A, it automatically connects to CTI OS server side B. During this transition, the buttons of the CTI Toolkit Agent Desktop are disabled; they return to the operational state as soon as the desktop connects to CTI OS server B. Node Manager restarts CTI OS server A.

Unified CCE Call Router

The Call Router software runs in synchronized execution. Both of the redundant systems run the same memory image of the current state across the system and update this information by passing the state events between the servers on the private connection. If one of the Unified CCE Call Routers fails, the surviving server detects the failure after missing five consecutive TCP keepalive messages on the private LAN. During Call Router failover processing, any Route Requests that are sent to the Call Router from a peripheral gateway (PG) are queued until the surviving Call Router is in active simplex mode. Any calls in progress in the Unified CVP or at an agent are not impacted.
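The five-missed-keepalive detection rule described above can be sketched as follows; the class name, method names, and counter are illustrative assumptions, not the actual Router implementation.

```python
# Sketch (not Cisco source): how a duplexed Call Router side might declare
# its peer failed after five consecutive missed keepalives on the private LAN.
MISS_THRESHOLD = 5  # consecutive missed TCP keepalives before failover

class PeerMonitor:
    def __init__(self, threshold=MISS_THRESHOLD):
        self.threshold = threshold
        self.missed = 0
        self.peer_failed = False

    def on_keepalive(self):
        # Any received keepalive resets the consecutive-miss counter.
        self.missed = 0

    def on_keepalive_timeout(self):
        self.missed += 1
        if self.missed >= self.threshold:
            # The surviving side goes active simplex; queued route requests
            # from the PGs are serviced once simplex mode is reached.
            self.peer_failed = True
        return self.peer_failed

monitor = PeerMonitor()
for _ in range(4):
    monitor.on_keepalive_timeout()
assert not monitor.peer_failed       # four misses: still duplex
monitor.on_keepalive_timeout()
assert monitor.peer_failed           # fifth consecutive miss: failover
```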

Unified CCE Logger

If one of the Unified CCE Logger and Database Servers fails, there is no immediate impact except that the local Call Router is no longer able to store data from call processing. The redundant Logger continues to accept data from its local Call Router. When the Logger server is restored, the Logger contacts the redundant Logger to determine how long it was off-line. The Loggers maintain a recovery key that tracks the date and time of each entry recorded in the database and these keys are used to restore data to the failed Logger over the private network. Additionally, if the Unified Outbound Option is used, the Campaign Manager software is loaded on Logger A only. If that platform is out of service, any outbound calling stops until the Logger is restored to operational status.
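The recovery-key mechanism can be illustrated with a minimal sketch; the data layout and function name are assumptions for illustration only, not the actual Logger schema.

```python
# Sketch (assumed structures): recovery keys track the date and time of each
# entry a Logger has recorded, so a restored Logger can request from its
# redundant peer only the rows it missed while off-line.
def rows_to_restore(peer_rows, failed_logger_last_key):
    # peer_rows: list of (recovery_key, row) kept by the surviving Logger,
    # with keys increasing in date/time order.
    return [row for key, row in peer_rows if key > failed_logger_last_key]

peer = [(1, "call-1"), (2, "call-2"), (3, "call-3"), (4, "call-4")]
# The failed Logger last recorded key 2, so it restores everything newer.
missing = rows_to_restore(peer, failed_logger_last_key=2)
assert missing == ["call-3", "call-4"]
```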

Unified CCE Administration and Data Server

The Unified Contact Center Enterprise Administration and Data Server provides the user interface for making configuration and scripting changes. Unlike the other Unified Contact Center Enterprise system components, this component does not support redundant or duplex operation.

Unified CVP High Availability

Unified CVP high availability describes the behavior of the following Unified CVP solution components.

Unified CVP Call Server

The Unified CVP Call Server component provides the following independent services:

Unified CVP SIP Service

The Unified CVP SIP Service handles all incoming and outgoing SIP messaging and SIP routing. If the SIP service fails, the following conditions apply to call disposition:

  • Calls in progress - If the Unified CVP SIP Service fails after the caller is transferred (including transfers to an IP phone or VoiceXML gateway), then the call continues normally until a subsequent transfer activity (if applicable) is required from the Unified CVP SIP Service.

  • New calls - New calls are directed to an alternate Unified CVP Call Server.

Unified CVP IVR Service

The Unified CVP IVR Service creates the VoiceXML pages that implement the Unified CVP micro-applications, based on Run VRU Script instructions received from Cisco Unified Intelligent Contact Management (ICM). If the Unified CVP IVR Service fails, the following conditions apply to the call disposition:

  • Calls in progress - Calls in progress are routed by default to an alternate location by survivability on the originating gateway.

  • New calls - New calls are directed to an in-service Unified CVP IVR Service.

Unified CVP Media Server

Store the audio files locally in flash memory on the VoiceXML gateway or on an HTTP or TFTP file server. Audio files stored locally are highly available. However, HTTP or TFTP file servers provide the advantage of centralized administration of audio files. If the media server fails, the following conditions apply to the call disposition:
  • Calls in progress - Calls in progress recover automatically. The high-availability configuration techniques make the failure transparent to the caller.

  • New calls - New calls are directed transparently to the backup media server, and service is not affected.

Unified CVP VXML Server

The Unified CVP VXML Server executes advanced IVR applications by exchanging VoiceXML pages with the VoiceXML gateways' built-in voice browser. If the Unified CVP VXML Server fails, the following conditions apply to the call disposition:

  • Calls in progress - Calls in progress in an ICM-integrated deployment can be recovered using scripting techniques.

  • New calls - New calls are directed transparently to an alternate Unified CVP VXML Server.

Cisco VoiceXML Gateway

The Cisco VoiceXML gateway parses and renders VoiceXML documents obtained from one or several sources. If the VoiceXML gateway fails, the following conditions apply to the call disposition:
  • Calls in progress - Calls in progress are routed by default to an alternate location by survivability on the ingress gateway.

  • New calls - New calls find an alternate VoiceXML gateway.

Unified CVP Reporting Server

The Reporting Server does not perform database administrative and maintenance activities such as backups or purges. However, the Unified CVP provides access to such maintenance tasks through the Operations Console. The Single Reporting Server does not necessarily represent a single point of failure, because data safety and security are provided by the database management system, and temporary outages are tolerated due to persistent buffering of information on the source components.

Unified CM

The Unified CVP Call Server recognizes that the Unified CM has failed, assumes the call should be preserved, and maintains the signaling channel to the originating gateway. In this way, the originating gateway has no knowledge that Unified CM has failed.

Additional activities in the call (such as hold, transfer, or conference) are not possible. After the parties go on-hook, the phone routes to another Unified CM server.

New calls are directed to an alternate Unified CM server in the cluster.

Unified CM High Availability

Visible Network Failure

Cisco HCS supports two subscribers, Subscriber A and Subscriber B, each running a local instance of CTI Manager to provide JTAPI services for the Unified CCE peripheral gateways. If the public network of Subscriber A fails, the peripheral gateway side connected to Subscriber B becomes active. Because the visible network is down, the remote Unified CM subscriber at Side A cannot send the phone registration event to the remote peripheral gateway. This results in the failure of active calls. After the network of Subscriber A becomes active, the phones align to Subscriber A.

Call Manager Service in Subscriber Failure

When the Call Manager service in Unified CM Subscriber A fails, all registered phones route to Unified CM Subscriber B. Calls in progress on agent phones remain active, but with no phone services such as conference or transfer. Peripheral gateway side A remains active because its CTI Manager connection to Subscriber A is not lost.

Subscriber B registers all dialed numbers and phones and therefore the call processing continues. When call manager service in Unified CM Subscriber A recovers, all idle phones route back to Subscriber A.

Unified CTI Manager Service Failover

If the Unified CM CTI Manager service on Subscriber A fails, peripheral gateway side A detects the failure of the CTI Manager service and induces a failover to peripheral gateway side B. Peripheral gateway side B registers all dialed numbers and phones with the Unified CM CTI Manager service on Subscriber B, and call processing continues. Calls in progress remain active, but no third-party call control (conference, transfer, and so on) is available from the agent desktop softphones. When the Unified CM CTI Manager service on Subscriber A recovers, peripheral gateway side B remains active and uses the CTI Manager service on Unified CM Subscriber B. The peripheral gateway does not fail back.

Gateway High Availability

If the primary gateway is unreachable, the CUBE redirects the calls to the backup gateway. Active calls fail. After the primary gateway becomes accessible, calls are directed back to the primary gateway.

MRCP ASR/TTS High Availability

The VoiceXML gateway uses gateway configuration parameters to locate the primary and backup ASR/TTS servers. The backup server is invoked only if the primary server is not accessible; this is not a load-balancing mechanism. Each new call attempts to connect to the primary server. If failover occurs, the backup server is used for the duration of the call; the next new call attempts to connect to the primary server.
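The primary-first, no-load-balancing selection rule can be sketched as follows; the function and server names are illustrative assumptions, not Cisco IOS gateway configuration syntax.

```python
# Sketch of the ASR/TTS server selection rule (names are illustrative).
def server_for_new_call(primary_up):
    # Every NEW call tries the primary first; the backup is not used for
    # load balancing, only when the primary is unreachable.
    return "primary" if primary_up else "backup"

def server_for_active_call(current, primary_up):
    # Once a call has failed over, it stays on the backup for its duration.
    if current == "backup":
        return "backup"
    return "primary" if primary_up else "backup"

assert server_for_new_call(primary_up=False) == "backup"
# A call that failed over stays on the backup even after the primary recovers.
assert server_for_active_call("backup", primary_up=True) == "backup"
# The next new call attempts the primary again.
assert server_for_new_call(primary_up=True) == "primary"
```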

Cisco Finesse High Availability

Cisco Finesse high availability affects the following components:

CTI

Prerequisites for CTI high availability

The prerequisites for CTI high availability are as follows:

  1. The Unified CCE is deployed in Duplex mode.

  2. The backup CTI server is configured through the Finesse Administration Console.

When Cisco Finesse loses connection to the primary CTI server, it tries to reconnect five times. If the number of connection attempts exceeds the retry threshold, Cisco Finesse then tries to connect to the backup CTI server the same number of times. Cisco Finesse keeps repeating this process until it makes a successful connection to the CTI server.
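The reconnection sequence above can be sketched as follows; the function shape and outcome lists are illustrative assumptions, not the Finesse implementation.

```python
# Sketch (assumed names): Finesse retries the current CTI server up to five
# times, then tries the other server the same number of times, alternating
# until one connection succeeds.
RETRY_THRESHOLD = 5

def connect_attempts(outcomes, servers=("primary", "backup")):
    """outcomes: dict server -> list of bools, one per attempt.
    Returns (server, total_attempts) for the first success."""
    attempts = {s: 0 for s in servers}
    total = 0
    while True:
        for server in servers:
            for _ in range(RETRY_THRESHOLD):
                total += 1
                idx = attempts[server]
                attempts[server] += 1
                ok = outcomes[server][idx] if idx < len(outcomes[server]) else False
                if ok:
                    return server, total

# Primary fails 5 times; backup succeeds on its second attempt: 7 attempts total.
server, n = connect_attempts({"primary": [False] * 5, "backup": [False, True]})
assert (server, n) == ("backup", 7)
```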

A loss of connection to the primary CTI server can occur for the following reasons:
  • Cisco Finesse misses three consecutive heartbeats from the connected CTI server.

  • Cisco Finesse encounters a failure on the socket opened to the CTI server.


Note


New calls and existing calls are not impacted during CTI failover.


During failover, Cisco Finesse does not handle client requests. Requests made during this time receive a 503 Service Unavailable error message. Call control, call data, or agent state actions that occur during CTI failover are published as events to the Agent Desktop following CTI server reconnection.

If an agent makes or answers a call and ends that call during failover, the corresponding events are not published following CTI server reconnection.

Additionally, CTI failover may cause abnormal behavior on the Cisco Finesse desktop due to incorrect call notifications from Unified CCE. If, during failover, an agent or supervisor is in a conference call, or signs in after being on an active conference with devices not associated with another agent or supervisor, the following desktop behaviors may occur:
  • The desktop does not reflect all participants in a conference call.

  • The desktop does not reflect that the signed-in agent or supervisor is in an active call.

  • Cisco Finesse receives inconsistent call notifications from the Unified CCE.

Despite these behaviors, the agent or supervisor can continue to perform normal operations on the phone, and normal desktop behavior resumes after the agent or supervisor drops off the conference call.

AWDB

Prerequisites for AWDB high availability

The prerequisites for Administrative Workstation Database (AWDB) high availability are as follows:

  • The secondary AWDB is configured.

  • The secondary AWDB host is configured through the Finesse Administration Console.

The following example describes how AWDB failover occurs:
  • When an agent or supervisor makes a successful API request (such as a sign-in request or call control request), their credentials are cached in Cisco Finesse for 30 minutes from the time of the request. Therefore, after an authentication, that user remains authenticated for 30 minutes, even if both AWDBs are down. Cisco Finesse attempts to reauthenticate the user only after the cache expires.

  • AWDB failover occurs when Cisco Finesse loses connection to the primary server; Cisco Finesse then tries to reconnect to the secondary server. If it cannot connect to any of the AW servers and the cache has expired, it returns a 401 Unauthorized HTTP error message.

Cisco Finesse repeats this process for every API request until it connects to one of the AW servers.
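The 30-minute credential cache described above can be sketched as follows; the class and method names are assumptions for illustration, not the Finesse implementation.

```python
# Sketch (assumed behavior): credentials are cached for 30 minutes after a
# successful request, so the user stays authenticated even if both AWDBs
# are down, until the cache entry expires.
CACHE_TTL_SECONDS = 30 * 60

class CredentialCache:
    def __init__(self):
        self._cache = {}  # user -> expiry timestamp (seconds)

    def record_success(self, user, now):
        # A successful API request refreshes the 30-minute window.
        self._cache[user] = now + CACHE_TTL_SECONDS

    def is_authenticated(self, user, now):
        return self._cache.get(user, 0) > now

cache = CredentialCache()
cache.record_success("agent1", now=0)
assert cache.is_authenticated("agent1", now=29 * 60)      # still cached
assert not cache.is_authenticated("agent1", now=31 * 60)  # expired: reauthenticate
```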

During failover, Cisco Finesse does not process requests, but clients still receive events.


Note


New calls and existing calls are not impacted during AWDB failover.


Cisco Finesse Client

With a two-node Cisco Finesse setup (primary and secondary Cisco Finesse server), if the primary server goes out of service, agents who are signed-in to that server are redirected to the sign-in page of the secondary server.

Client failover can occur for the following reasons:
  • The Cisco Tomcat Service goes down.

  • The Cisco Finesse Web application Service goes down.

  • The Cisco Notification Service goes down.

  • Cisco Finesse loses connection to both CTI servers.

Desktop Behavior

If the Cisco Finesse server fails, the agents logged in to that server are put into a NOT READY or pending NOT READY state. Agents remain otherwise unaffected as they migrate to the backup side.

If a client disconnects, the agent is put into a NOT READY state with reason code 255. If the agent does not reconnect within the timeout period, the agent is forced to log out.

Cisco RSM High Availability

The following table shows the Cisco RSM High Availability.



Table 56 Cisco RSM High Availability
Component Failover/Failure Scenario New Call Impact Active Call Impact Postrecovery Action
RSM Server RSM server (hardware) fails Attempts to contact the RSM server fail Active monitoring sessions terminate and supervisor is directed to the main menu Supervisor can monitor calls after the RSM server becomes active
CTI OS Server CTI OS Server Failure Supervisor can monitor new calls without any failure Active monitoring sessions will continue normally Failover is seamless
CTI Active CTI Gateway process fails Supervisor cannot establish new monitoring sessions until the secondary CTI process becomes active Active monitoring sessions continue normally After the CTI Gateway becomes active the supervisor can establish new monitoring sessions
VLEngine VLEngine fails Supervisor can establish new monitoring sessions when VLEngine becomes active Active monitoring sessions terminate and supervisor is directed to the main menu After the VLEngine becomes active the supervisor can establish new monitoring sessions
PhoneSim PhoneSim fails Supervisor can monitor new calls when PhoneSim becomes active Active monitoring sessions continue normally After the PhoneSim becomes active the supervisor can establish new monitoring sessions
Unified CM Active Subscriber fails New calls cannot be established until the secondary subscriber becomes active Active monitoring sessions continue normally After the secondary subscriber becomes active the supervisor can establish new monitoring sessions
JTAPI JTAPI gateway fails Supervisor can establish new calls without any failure Active monitoring sessions continue normally Failover is seamless
Unified CVP Active CVP fails New calls cannot be established until the Unified CVP becomes active Active monitoring sessions terminate After the Unified CVP becomes active the supervisor can establish new monitoring sessions

Unified WIM and EIM High Availability

The following table describes Cisco Unified WIM and EIM high availability during the failover of Unified CCE processes.

Component Failover scenario New session (Web Callback/ Delayed callback/ Chat/ Email) impact Active session (Web Callback/ Delayed callback/ Chat/ Email) impact Post recovery action
PG Unified Communications Manager PG Failover

Web Callback - The new call is lost, because there is no longest-available agent during the PG failure.

Delayed Callback - The new call reaches the customer and the agent after the PG on the other side becomes active and the customer-specified delay elapses.

Chat - The new chat initiated by the customer reaches the agent after the other side of the PG becomes active.

Email - The new Email sent by the customer reaches the agent.

Active Web Callback, Delayed Callback, Chat, and Email sessions continue uninterrupted. The agent receives the call, chat, or email after the PG becomes active and the agent logs in again.
PG MR PG Failover

Web Callback - The new call is established between the customer and the agent after the PG becomes active.

Delayed Callback - The new call reaches the customer and the agent after the PG on the other side becomes active and the customer-specified delay elapses.

Chat - The new chat initiated by the customer reaches the agent once the other side of the PG becomes active.

Email - The new Email sent by the customer reaches the agent.

Active Web Callback, Delayed Callback, Chat, and Email sessions continue uninterrupted. The agent receives the call, chat, or email after the PG becomes active.
CG CTI Failover

Web Callback - The new call cannot be placed and the customer receives the message, "System cannot assign an Agent to the request."

Delayed Callback - The new call reaches the customer and the agent after the CG on the other side becomes active and the customer-specified delay elapses.

Chat - The new chat initiated by the customer reaches the agent after the other side of the CG process becomes active.

Email - The new Email sent by the customer reaches the agent.

Active Web Callback, Delayed Callback, Chat, and Email sessions continue uninterrupted. The agent receives the call, chat, or email after the process becomes active.
CTI OS CTI OS Server Failure

Web Callback - The new call is established without any impact.

Delayed Callback - The new call is established without any impact after the customer-specified delay elapses.

Chat - The new chat reaches the agent without any impact.

Email - The new Email sent by the customer reaches the agent.

Active Web Callback, Delayed Callback, Chat, and Email sessions continue uninterrupted. Failover is seamless.
Router Router fails

Web Callback - The new call is established through the other side of the router process.

Delayed Callback - The new call is established through the other side of the router process after the customer-specified delay elapses.

Chat - The new chat reaches the agent through the other side of the router process.

Email - The new Email sent by the customer reaches the agent through the other side of the router process.

Active Web Callback, Delayed Callback, Chat, and Email sessions continue uninterrupted. The agent gets the call, chat, or email through the other side of the router process.

Congestion Control Considerations

Congestion Control protects the Central Controller from overload. When you enable Congestion Control, new calls that exceed the Calls Per Second (CPS) capacity of the contact center are rejected or treated by the routing clients at the call entry point. This prevents an overload on the Call Router and ensures call-processing throughput when the system is subjected to overload.

Deployment Types

After upgrading or installing the system, configure the system to a valid deployment type. If a supported deployment type is not set, the PGs and NICs cannot connect to the Central Controller and process calls.

The following table lists the supported deployment types with guidelines for selecting a valid deployment type.

Table 57 Supported Congestion Control Deployment Types
Deployment Type Code Deployment Name Guidelines for Selection

11

HCS-CC 1000 Agents

This deployment type is automatically set as part of the install for the HCS-CC 1000 agents deployment type and is unavailable for user selection.

12

HCS-CC 500 Agents

This deployment type is automatically set as part of the install for the HCS-CC 500 agents deployment type and is unavailable for user selection.

Congestion Treatment Mode

There are five options available to handle the calls that are rejected or treated due to congestion in the system. Contact center administrators can choose any of the following options to handle the calls:
  • Treat call with Dialed Number Default Label - The calls to be rejected due to congestion are treated with the default label of the dialed number on which the new call has arrived.

  • Treat call with Routing Client Default Label - The calls to be rejected due to congestion are treated with the default label of the routing client on which the new call arrived.

  • Treat call with System Default Label - The calls to be rejected due to congestion are treated with the system default label set in Congestion Control settings.

  • Terminate call with a Dialog Fail or RouteEnd - Terminates the new call dialog with a dialog failure.

  • Treat call with a Release Message to the Routing Client - Terminates the new call dialog with release message.

The treatment options are set either at the routing client or at the global system congestion settings level. If the treatment mode is not selected at the routing client, the system congestion settings are applied for treating the calls.
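This fallback from routing-client settings to system-level settings can be sketched as follows; the setting names and dictionary shape are illustrative, not actual configuration keys.

```python
# Sketch (illustrative names): the routing-client treatment mode takes
# precedence; when it is unset, the system-level congestion settings apply.
SYSTEM_SETTINGS = {"treatment_mode": "treat_with_system_default_label"}

def effective_treatment(routing_client_settings):
    mode = routing_client_settings.get("treatment_mode")
    return mode if mode is not None else SYSTEM_SETTINGS["treatment_mode"]

# Routing client has its own mode: it wins.
assert effective_treatment({"treatment_mode": "terminate_with_dialog_fail"}) \
    == "terminate_with_dialog_fail"
# No routing-client mode: fall back to the system congestion settings.
assert effective_treatment({}) == "treat_with_system_default_label"
```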

Congestion Control Levels and Thresholds

The Congestion Control algorithm works in three levels; each level has an onset and an abatement value. The congestion level can rise directly from any level to any higher level. However, the congestion level reduces only one level at a time.

The following table shows the percentage of the CPS capacity for different congestion levels.



Table 58 Congestion levels and capacities
Congestion Level Congestion Level Threshold (Percent of Capacity)
Level1Onset 110%
Level1Abate 90%
Level1Reduction 10%
Level2Onset 130%
Level2Abate 100%
Level2Reduction 30%
Level3Onset 150%
Level3Abate 100%
Level3Reduction Variable reduction from 100% to 10%
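A minimal sketch of these level transitions, using the onset and abate thresholds from Table 58; the rise/abate logic itself is an assumption based on the description above, not Cisco source.

```python
# Sketch: congestion can jump from any level directly to any higher level
# when an onset threshold is crossed, but abates only one level at a time.
ONSET = {1: 110, 2: 130, 3: 150}   # percent of CPS capacity
ABATE = {1: 90, 2: 100, 3: 100}

def next_level(level, load_pct):
    # Rising: jump straight to the highest level whose onset is crossed.
    for lvl in (3, 2, 1):
        if load_pct >= ONSET[lvl] and lvl > level:
            return lvl
    # Abating: step down a single level when below the abate threshold.
    if level > 0 and load_pct < ABATE[level]:
        return level - 1
    return level

assert next_level(0, 155) == 3   # rise: jumps from level 0 straight to 3
assert next_level(3, 95) == 2    # abate: one level at a time (95 < 100)
assert next_level(2, 95) == 1    # 95 < Level2Abate (100)
assert next_level(1, 95) == 1    # 95 >= Level1Abate (90): hold at level 1
```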

Congestion Control Configuration

Configure the congestion control settings using the Congestion Settings Gadget and the Routing Client Explorer tool. Use the Congestion Settings Gadget to set the system level congestion control. Use the Routing Client Explorer tool to select the Routing Client level treatment options.

After you select the deployment type, the system starts computing the various metrics related to congestion control and system capacity, and generates the real-time report. However, the system cannot reject or treat calls until you turn on the Congestion Enabled option in the Congestion Settings Gadget.

Real Time Capacity Monitoring

The System Capacity Real Time report provides congestion level information to the user. The report provides the following views:
  • Congestion Information View

  • Rejection Percentage View

  • Key Performance Indicators View

  • System Capacity View

UCS Network Considerations

This section lists Cisco Unified Computing System (UCS) network considerations. The following figures show the UCS 5108 chassis for the supported deployments.

Figure 8. UCS 5108 Chassis for 500 and 1000 Agent Deployment



Figure 9. UCS 5108 Chassis for 4000 Agent Deployment

Data Center Design

Unified CCE on UCS B-Series Blades

The blades use a Cisco Nexus 1000v switch, a vSwitch, and an Active/Active VMNIC. The Cisco Nexus 1000v is the switching platform that provides connectivity of the private VLAN to the virtual machine. The vSwitch controls the traffic for the private and public VLANs. A single vSwitch is used with two VMNICs in Active/Active state.

Redundant Pair of Cisco UCS 2100 Series Fabric Extenders

The Cisco UCS 2100 Series Fabric Extenders provide connectivity between the blades within the chassis to the fabric interconnect.

Redundant Pair of Cisco UCS 6100 Series Fabric Interconnects

The Cisco UCS 6100 Series Fabric Interconnects provide connectivity from the fabric extenders in each chassis to the external network (for example, a switch or SAN).

Billing Considerations

Complete the following procedure to determine the number of phones registered to Cisco HCS for Contact Center for billing purposes.

Procedure
From the CLI of the Call Manager Publisher virtual machine, run the following query exactly as shown with no new line characters:

run sql select count(*) from applicationuserdevicemap as appuserdev, applicationuser as appuser, device as dev where appuserdev.fkapplicationuser = appuser.pkid and appuserdev.fkdevice = dev.pkid and dev.tkmodel != 73 and appuser.name = "pguser"

Note   

If you configured the application username to a name other than pguser, you must update appuser.name in the above query. This query is based on the supported Cisco HCS for Contact Center deployment, which only requires that you add CTI route points and phones to the application user. If this is not the case, you may need to modify the query.


License Considerations

Each Cisco HCS for Contact Center license includes:

  • Premium agent capabilities
  • Cisco Unified Intelligence Center Premium

  • One Unified CVP IVR or queuing treatment

  • One Unified CVP redundant IVR or queuing treatment

One Unified CVP IVR or queuing treatment license is defined as a call that receives treatment at a VoiceXML browser for queuing or self service by a Unified CVP call server.

One Unified CVP redundant IVR or queuing treatment license is defined as a call that receives treatment on the secondary Unified CVP call server residing on the secondary side for redundancy purposes.


Note


Both Unified CVP call servers are active and can process calls, so at times the system can handle more calls; however, Cisco supports a maximum of one IVR or queue treatment port per agent license.


While each HCS for Contact Center license provides a Unified CVP port for self-service or redundancy, current deployment limitations result in slightly lower capacity when the system runs at 100% of licensed capacity (for example, 500 agents licensed on a 500 agent deployment model, or 1000 agents licensed on a 1000 agent deployment model).

For example, a 500 agent deployment model with 500 agent licenses includes:

  • 500 calls receiving IVR or queue treatment and 400 callers talking to agents

  • 400 calls in queue receiving IVR or queue treatment and at the same time another 500 callers talking to 500 agents

  • 450 calls receiving IVR or queue treatment and 450 agents talking

For example, a 1000 agent deployment model with 1000 agent licenses includes:

  • 1000 calls receiving IVR or queue treatment and 800 callers talking to agents

  • 800 calls in queue receiving IVR and at the same time another 1000 callers talking to 1000 agents

  • 900 agents talking and 900 calls receiving IVR or queue treatment

For example, a 4000 agent deployment model with 4000 agent licenses includes:

  • 4000 calls receiving IVR or queue treatment and 3200 callers talking to agents

  • 3200 calls in queue receiving IVR and at the same time another 4000 callers talking to 4000 agents

  • 3600 agents talking and 3600 calls receiving IVR or queue treatment

Core Component Integrated Options Considerations

Agent Greeting Design Considerations

The following sections list the functional limitations for Agent Greeting and Whisper Announcement.

Agent Greeting has the following limitations:

  • Agent Greeting is not supported with outbound calls made by an agent. The announcement plays for inbound calls only.

  • Only one Agent Greeting file plays per call.

  • Supervisors cannot listen to agents’ recorded greetings.

  • Agent Greetings do not play when the router selects the agent through a label node.

  • The default CTI OS Toolkit agent desktop includes the Agent Greeting buttons. If Agent Greeting is not configured, the Agent Greeting buttons do not work. If you use the default desktop but do not plan to use Agent Greeting, you should remove the Agent Greeting button.

  • Silent Monitoring (CTI OS and Unified CM-based) is supported with Agent Greeting with the following exception: For Unified-CM based Silent Monitoring, supervisors cannot hear the greetings. If a supervisor clicks the Silent Monitor button on the CTI OS desktop while a greeting is playing, a message displays stating that a greeting is playing and to try again shortly.

Whisper Announcement Design Considerations

Whisper announcement has the following limitations:
  • Whisper Announcement is not supported with outbound calls made by an agent. The announcement plays only for inbound calls.

  • For Whisper Announcement to work with agent-to-agent calls, use the SendToVRU node before the call is sent to the agent. The transferred call must be sent to Unified CVP before it is sent to another agent so that Unified CVP can control the call and play the Whisper Announcement, regardless of which node sends the call to Unified CVP.

  • Whisper Announcements do not play when the router selects the agent through a label node.

  • Whisper Announcement is not supported with SIP Refer Transfers.

  • Only one Whisper Announcement file plays per call.

  • Silent Monitoring (CTI OS and Unified CM-based) is supported with Whisper Announcement with the following exception: for Unified-CM-based Silent Monitoring, supervisors cannot hear the announcements themselves. The Silent Monitor button on the supervisor desktop is disabled while an announcement is playing.

Local Trunk Design Considerations

The following figure shows these two options, Cisco Unified Border Element—Enterprise at the customer premise and TDM gateway at the customer premise.
Figure 10. CUBE(E) or TDM Gateway at the Customer Premise


Note


The CUBE(E) can also be used as TDM gateway and VXML gateway for the Local PSTN Break Out (LBO).


CUBE-Enterprise at Customer Premise

Consider the following if you use the Cisco Unified Border Element - Enterprise at the customer premise:

  • Cisco Unified Border Element - Enterprise gateway and the Cisco VXML gateway reside at the customer premise and calls are queued at the customer premise.

  • The Cisco Unified Border Element - Enterprise and VXML gateway can be co-located on the same ISR, or located on different ISRs for cases where the number of IVR ports to agent ratio is small.

  • Cisco Unified Border Element - Enterprise Integrated Services Router (ISR) provides the security, routing, and Digital Signal Processors (DSPs) for transcoders.

  • Use redundant Cisco Unified Border Element - Enterprise and Cisco VXML ISRs for failover and redundancy.

  • WAN bandwidth must be sized appropriately for calls from CUBE(SP) to CUBE - Enterprise at the customer premise.

  • Cisco Unified Border Element Enterprise supports flow-through mode. Flow-around mode is not supported.

TDM Gateway at Customer Premise

You can route PSTN calls using local gateway trunks if you prefer to keep your E1/T1 PSTN.

Consider the following if you use the TDM gateway at the customer premise:

  • Both the Cisco TDM gateway and the Cisco VXML gateway reside at the customer premise.

  • PSTN delivery is at the local customer premise.

  • The media stays local at the customer premise for the local PSTN breakout. The IVR call leg is deterministically routed to the local VXML gateway and only uses the centralized resources in spill-over scenarios.

  • When media is delivered to a different site, Cisco Unified Communications Manager location-based call admission control limits the number of calls over the WAN link.

  • Calls local to a customer premise use the G.711 codec. Calls going over the WAN link can use the G.729 codec to optimize the WAN bandwidth.

  • ASR/TTS server for local breakout is at the customer premise and resides on a UCS or bare metal server.

  • CUBE(E) can also be used as an alternative for both TDM gateway and VXML gateway.

  • A new call for HCS for Contact Center must originate from the TDM gateway to anchor the call to the survivability service. Cisco Unified Communications Domain Manager 8.1(x) provisions the Contact Center dialed number to route the calls to Unified CM.

    Note


    You need to manually modify the call routing from TDM gateway for the session target to route the call directly to Unified CVP.

    Caller > TDM > CVP


  • If a new call originates from Unified CM, standard call routing logic uses CUBE-Enterprise to anchor the call to the survivability service.

Internal DN from Unified CM > CUBE-Enterprise > Unified CVP

Location-Based Call Admission Control

Location-based Call Admission Control (LBCAC) maximizes local branch resources, keeping a call within the branch whenever possible and limiting the number of calls that go over the WAN. Unified CVP supports queue-at-the-edge, a simpler and more effective configuration. Using the queue-at-the-edge functionality, the originating call from a specific branch office is routed to a local VXML Gateway based on priority. That is, it always chooses a local branch agent if possible.

Figure 11. Location-based Call Admission Control




Note


Multi-cluster Enhanced Location Call Admission Control (EL-CAC) is not supported.


LBCAC Concept Definitions

To configure local trunk, you must understand the following concepts:

  • Phantom Location - A default location with unlimited bandwidth, used to enable correct bandwidth calculations when calls are hairpinned over a SIP trunk or when a SIP call is queued at the local branch. Assign the Phantom location to the gateway or trunk for Unified CVP.

  • SiteID - The SiteID is a string of numbers that Unified CCE appends to the label so that you can configure the dial plan to route the call to a specific destination, such as the branch VXML gateway, the egress gateway, or the Unified CM node. The SiteID can be inserted at the front of the label, at the end, or not at all. This configuration is separate from the Unified CM location configuration and is specific to Unified CVP. The SiteID indicates the real location of the call and allows the bandwidth to be deducted from the correct location.

For information on local trunk configuration, see Configure Local Trunk.

Core Component Bandwidth, Latency and QOS Considerations

Unified CCE Bandwidth, Latency and QOS Considerations

Agent Desktop to Unified CCE Call Servers/ Agent PG

There are many factors to consider when assessing the traffic and bandwidth requirements between Agent or Supervisor Desktops and Unified CCE Call Servers/Agent PG. While the VoIP packet stream bandwidth is the predominant contributing factor to bandwidth usage, you must also consider other factors such as call control, agent state signaling, silent monitoring, recording, and statistics.

The amount of bandwidth required for CTI Desktop to CTI OS Server messaging is (0.5 x n) + (16 x cps), where n is the number of CTI clients and cps is the number of calls per second.

For example, for a 500 agent deployment, the approximate bandwidth for each contact center (data center) and remote site at 1 call per second is (0.5 x 500) + (16 x 1) = 266 kbps.

For a 1000 agent deployment, the approximate bandwidth for each contact center (data center) and remote site at 8 calls per second is (0.5 x 1000) + (16 x 8) = 628 kbps.
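The formula can be applied directly; this minimal sketch evaluates it literally for the example call rates given above:

```python
def ctios_bandwidth_kbps(num_clients: int, calls_per_second: float) -> float:
    """Approximate CTI Desktop to CTI OS Server messaging bandwidth in kbps.

    Formula from the design guide: (0.5 x n) + (16 x cps), where n is the
    number of CTI clients and cps is the number of calls per second.
    """
    return 0.5 * num_clients + 16 * calls_per_second

# 500-agent site at 1 call per second
print(ctios_bandwidth_kbps(500, 1))   # 266.0 kbps
# 1000-agent site at 8 calls per second
print(ctios_bandwidth_kbps(1000, 8))  # 628.0 kbps
```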

Cisco supports limiting the latency between the server and agent desktop to 400 ms round-trip time for CTI OS (preferably less than 200 ms round-trip time).

Unified CCE Data Server to Unified CCE Call Server for 500 and 1000 Agent Deployment Model

Unified CCE Central Controllers (Routers and Loggers) require a separate network path or link to carry the private communications between the two redundant sides. Latency across the private separate link must not exceed 100 ms one way (200 ms round-trip), but 50 ms (100 ms round-trip) is preferred.

Private Network Bandwidth Requirements for Unified CCE

The following table is a worksheet to assist with computing the link and queue sizes for the private network. Definitions and examples follow the table.


Note


Minimum link size in all cases is 1.5 Mbps (T1).
Table 59 Worksheet for Calculating Private Network Bandwidth

Component | Effective BHCA | Multiplication Factor | Recommended Link | Multiplication Factor | Recommended Queue
Router + Logger | ____ | * 30 | ____ | * 0.8 | ____
Unified CM PG | ____ | * 100 | ____ | * 0.9 | ____
Unified CVP PG | ____ | * 120 | ____ | * 0.9 | ____
Unified CVP Variables | ____ | * ((Number of Variables * Average Variable Length)/40) | ____ | * 0.9 | ____

The Router + Logger value in the Recommended Queue column is the Total Router + Logger High-Priority Queue Bandwidth. Add the remaining three Recommended Queue values to get the Total PG High-Priority Queue Bandwidth, and add the Recommended Link column to get the Total Link Size.

If one dedicated link is used between sites for private communications, add all link sizes together and use the Total Link Size at the bottom of the table above. If separate links are used, one for Router/Logger Private and one for PG Private, use the first row for Router/Logger requirements and the bottom three (out of four) rows added together for PG Private requirements.

Effective BHCA (effective load) on all similar components that are split across the WAN is defined as follows:

Router + Logger

This value is the total BHCA on the call center, including conferences and transfers. For example, 10,000 BHCA ingress with 10% conferences or transfers is 11,000 effective BHCA.

Unified CM PG

This value includes all calls that come through Unified CCE Route Points controlled by Unified CM and/or that are ultimately transferred to agents. This assumes that each call comes into a route point and is eventually sent to an agent. For example, 10,000 BHCA ingress calls coming into a route point and being transferred to agents, with 10% conferences or transfers, is 11,000 effective BHCA.

Unified CVP PG

This value is the total BHCA for call treatment and queuing coming through a Unified CVP. 100% treatment is assumed in the calculation. For example, 10,000 BHCA ingress calls, with all of them receiving treatment and 40% being queued, is 14,000 effective BHCA.

Unified CVP Variables

This value represents the number of Call and ECC variables and the variable lengths associated with all calls routed through the Unified CVP, whichever technology is used in the implementation.

Example of a Private Bandwidth Calculation
The table below shows an example calculation for a combined dedicated private link with the following characteristics:
  • BHCA coming into the contact center is 10,000.
  • 100% of calls are treated by Unified CVP and 40% are queued.
  • All calls are sent to agents unless abandoned. 10% of calls to agents are transfers or conferences.
  • There are four Unified CVPs used to treat and queue the calls, with one PG pair supporting them.
  • There is one Unified CM PG pair for a total of 900 agents.
  • Calls have ten 40-byte Call Variables and ten 40-byte ECC variables.
Table 60 Example Calculation for a Combined Dedicated Private Link

Component | Effective BHCA | Multiplication Factor | Recommended Link | Multiplication Factor | Recommended Queue
Router + Logger | 11,000 | * 30 | 330,000 | * 0.8 | 264,000
Unified CM PG | 11,000 | * 100 | 1,100,000 | * 0.9 | 990,000
Unified CVP PG | 0 | * 120 | 0 | * 0.9 | 0
Unified CVP Variables | 14,000 | * ((20 * 40)/40) = * 20 | 280,000 | * 0.9 | 252,000

Total Link Size: 330,000 + 1,100,000 + 0 + 280,000 = 1,710,000 bps
Total Router + Logger High-Priority Queue Bandwidth: 264,000 bps
Total PG High-Priority Queue Bandwidth: 990,000 + 0 + 252,000 = 1,242,000 bps

For the combined dedicated link in this example, the results are as follows:
  • Total Link Size = 1,710,000 bps
  • Router/Logger high-priority bandwidth queue of 264,000 bps
  • PG high-priority queue bandwidth of 1,242,000 bps
If this example were implemented with two separate links, Router/Logger private and PG private, the link sizes and queues are as follows:
  • Router/Logger link of 330,000 bps (actual minimum link is 1.5 Mb, as defined earlier), with high-priority bandwidth queue of 264,000 bps
  • PG link of 1,380,000 bps, with high-priority bandwidth queue of 1,242,000 bps
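The worksheet arithmetic above can be sketched in Python. This is a minimal illustration using the Table 60 example inputs; the per-component multipliers and the 1.5 Mbps minimum link size come from the worksheet:

```python
def private_link_sizing(bhca_router_logger, bhca_cm_pg, bhca_cvp_pg,
                        bhca_cvp_vars, num_vars, avg_var_len):
    """Compute private-network link and high-priority queue sizes in bps,
    per the worksheet in Table 59."""
    MIN_LINK_BPS = 1_500_000  # minimum link size in all cases is 1.5 Mbps (T1)
    rows = [
        # (component, effective BHCA, link factor, queue factor)
        ("Router + Logger",       bhca_router_logger, 30, 0.8),
        ("Unified CM PG",         bhca_cm_pg, 100, 0.9),
        ("Unified CVP PG",        bhca_cvp_pg, 120, 0.9),
        ("Unified CVP Variables", bhca_cvp_vars,
         (num_vars * avg_var_len) / 40, 0.9),
    ]
    link = {name: bhca * factor for name, bhca, factor, _ in rows}
    queue = {name: link[name] * q for name, _, _, q in rows}
    total_link = max(sum(link.values()), MIN_LINK_BPS)
    rl_queue = queue["Router + Logger"]
    pg_queue = sum(v for k, v in queue.items() if k != "Router + Logger")
    return total_link, rl_queue, pg_queue

# Example inputs: 11,000 effective BHCA on Router/Logger and CM PG,
# 14,000 on CVP Variables, twenty 40-byte variables per call.
total, rl_q, pg_q = private_link_sizing(11_000, 11_000, 0, 14_000, 20, 40)
# total ≈ 1,710,000 bps; rl_q ≈ 264,000 bps; pg_q ≈ 1,242,000 bps
```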
When using Multilink Point-to-Point Protocol (MLPPP) for private networks, set the following attributes for the MLPPP link:
  • Use per-destination load balancing instead of per-packet load balancing.
  • Enable Point-to-Point Protocol (PPP) fragmentation to reduce serialization delay.

Note


You must have two separate multilinks with one link each for per-destination load balancing.


Unified CVP Bandwidth, Latency and QOS Considerations

Bandwidth Considerations for Unified CVP

The ingress and VoiceXML gateways are separated from the servers that provide them with media files, VoiceXML documents, and call control signaling. Therefore, you must consider the bandwidth requirements for Unified CVP.

For example, assume that all calls to a branch begin with 1 minute of IVR treatment followed by a single transfer to an agent that lasts 1 minute. Each branch has 20 agents, and each agent handles 30 calls per hour, for a total of 600 calls per hour per branch. The average call rate is therefore 0.166 calls per second (cps) per branch.

Note that even a small change in these variables can have a large impact on sizing. Remember that 0.166 calls per second is an average for the entire hour. Typically, calls do not come in uniformly across an entire hour, and there are usually peaks and valleys within the busy hour. You should find the busiest traffic period, and calculate the call arrival rate based on the worst-case scenario.

VoiceXML Document Types

On average, a VoiceXML document between the Unified CVP Call Server or Unified CVP VXML Server and the gateway is 7 kilobytes. You can calculate the bandwidth used by approximating the number of prompts that are used per call, per minute. The calculation, for this example, is as follows:

7000 bytes x 8 bits = 56,000 bits per prompt

(0.166 call/second) x (56,000 bit/prompt) x (no. of prompts/call) = bps per branch
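The same arithmetic as a sketch in Python. The 7 KB average document size and 0.166 cps are from the example above; the 10 prompts per call is an assumed illustration value:

```python
def vxml_bandwidth_bps(calls_per_second: float, prompts_per_call: int,
                       doc_bytes: int = 7000) -> float:
    """VoiceXML document bandwidth per branch, in bits per second.

    Each prompt fetch transfers one VoiceXML document (7 KB on average).
    """
    bits_per_prompt = doc_bytes * 8  # 7000 bytes -> 56,000 bits per prompt
    return calls_per_second * bits_per_prompt * prompts_per_call

# 0.166 cps with an assumed 10 prompts per call
print(round(vxml_bandwidth_bps(0.166, 10)))  # 92960 bps per branch
```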

Media File Retrieval

You can store the media file prompts locally in flash memory on each router. This method eliminates bandwidth considerations, but maintainability becomes an issue because you must replace the prompts on every router. If you store the prompts on an HTTP media server (or an HTTP cache engine), the gateway can locally cache voice prompts after it first retrieves them. The HTTP media server can cache many, if not all, prompts, depending on the number and size of the prompts. The refresh period for the prompts is defined on the HTTP media server. Therefore, the bandwidth used is limited to the initial load of the prompts at each gateway, plus periodic updates after the refresh interval expires. If the prompts are not cached at the gateway, a significant Cisco IOS performance degradation (as much as 35% to 40%) occurs, in addition to the extra bandwidth usage.

Assume that there are a total of 50 prompts, with an average size of 50 KB and a refresh interval of 15 minutes.

The bandwidth usage is:

(50 prompts) x (50,000 bytes/prompt) x (8 bits/byte) = 20,000,000 bits

(20,000,000 bits) / (900 secs) = 22.2 average kbps per branch
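The refresh calculation as a sketch, using the example's 50 prompts of 50 KB and a 15-minute (900-second) refresh interval:

```python
def prompt_refresh_kbps(num_prompts: int, prompt_bytes: int,
                        refresh_interval_secs: int) -> float:
    """Average bandwidth to re-fetch cached prompts over one refresh interval."""
    total_bits = num_prompts * prompt_bytes * 8
    return total_bits / refresh_interval_secs / 1000  # average kbps

# 50 prompts x 50 KB, refreshed every 15 minutes
print(round(prompt_refresh_kbps(50, 50_000, 900), 1))  # 22.2 average kbps
```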

QOS Considerations for Unified CVP

The Unified CVP Call Server marks the QoS DSCP for SIP messages.

Table 61 Unified CVP QoS
Component Port Queue PHB DSCP Max latency Round Trip
Media Server TCP 80 CVP-data AF11 10 1 sec
Unified CVP Call Server (SIP) TCP 5060 Call Signaling CS3 24 200 ms
Unified CVP IVR service TCP 8000 CVP-data AF11 10 1 sec
Unified CVP VXML Server TCP 7000 CVP-data AF11 10 1 sec
Ingress Gateway SIP TCP 5060 Call Signaling CS3 24 200 ms
VXML Gateway SIP TCP 5060 Call Signaling CS3 24 200 ms

Note


Unified CCE and Unified CVP provide a Layer 3 QoS marking, not a Layer 2 marking.


As a general rule, activate QoS at the application layer and trust it in the network.

Unified CM Bandwidth, Latency and QOS Considerations

Agent Phones to Unified Communications Manager Cluster

The amount of bandwidth that is required for phone-to-Unified Communications Manager signaling is 150 bps x n, where n is the number of phones.

For example, for a 500 agent deployment model, the approximate required bandwidth for each contact center site is 150 x 500 phones = 75 kbps.

For a 1000 agent deployment model, the approximate required bandwidth for each contact center site is 150 x 1000 phones = 150 kbps.
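The signaling formula as a sketch:

```python
def phone_signaling_kbps(num_phones: int) -> float:
    """Phone-to-Unified CM signaling bandwidth: 150 bps per phone, in kbps."""
    return 150 * num_phones / 1000

print(phone_signaling_kbps(500))   # 75.0 kbps
print(phone_signaling_kbps(1000))  # 150.0 kbps
```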

Unified IC Bandwidth, Latency and QOS Considerations

Reporting Bandwidth

Definition of Sizing

The following parameters combine to affect the responsiveness and performance of Cisco Unified Intelligence Center on the desktop:

  • Real-time reports — Number of simultaneous real-time reports run by a single user.

  • Refresh rate/real time — How often the data on a real-time report refreshes. If you have a Premium license, you can change the refresh rate by editing the Report Definition. The default refresh rate for Unified Intelligence Center Release 9.1(1) is 15 seconds.

  • Cells per report — The number of columns that are retrieved and displayed in a report.

  • Historical report — Number of historical reports run by a single user per hour.

  • Refresh rate/historical — The frequency with which report data are refreshed on a historical report.

  • Rows per report — Total number of rows on a single report.

  • Charts per dashboard — Number of charts (pie, bar, line) in use concurrently on a single dashboard.

  • Gauges per dashboard — Number of gauges (speedometer) in use concurrently on a single dashboard.

Network Bandwidth Requirements

The exact bandwidth requirement differs based on the sizing parameters used, such as the number of rows, the refresh frequency, and the number of columns present in each report.

You can use the Bandwidth Calculator to calculate the bandwidth requirements for your Unified Intelligence Center implementation. (Use the same Microsoft Excel file for Releases 9.0 and 8.5.)

Two examples for bandwidth calculation (50 and 100 users):
Profile 1 (500 agent deployment)
  Total concurrent users: 50
  Number of Real Time Reports: 2
  Real Time Report Interval (in seconds): 15
  Number of Average Rows per RT Report: 50
  Number of Average Columns per RT Report: 10
  Number of Historical Reports: 1
  Historical Report Interval (in seconds): 1800
  Number of Average Rows per Historical Report: 800
  Number of Average Columns per Historical Report: 10
  Number of Nodes on side A: 1
  Number of Nodes on side B: 0
  Network bandwidth requirement (in Kbps): Unified Intelligence Center–AW/HDS: 1,283; Client–Unified Intelligence Center: 1,454; Unified Intelligence Center–Unified Intelligence Center: N/A; Unified Intelligence Center–Unified Intelligence Center for each node: N/A

Profile 2 (1000 agent deployment)
  Total concurrent users: 100
  Number of Real Time Reports: 2
  Real Time Report Interval (in seconds): 15
  Number of Average Rows per RT Report: 50
  Number of Average Columns per RT Report: 20
  Number of Historical Reports: 1
  Historical Report Interval (in seconds): 1800
  Number of Average Rows per Historical Report: 200
  Number of Average Columns per Historical Report: 20
  Number of Nodes on side A: 1
  Number of Nodes on side B: 2
  Network bandwidth requirement (in Kbps): Unified Intelligence Center–AW/HDS: 1,783; Client–Unified Intelligence Center: 4,554; Unified Intelligence Center–Unified Intelligence Center: 1,934.54; Unified Intelligence Center–Unified Intelligence Center for each node: 967.27

Cisco Finesse Bandwidth, Latency and QOS Considerations

The most expensive operation from a network perspective is the agent or supervisor login. This operation involves the web page load and includes the CTI login and the display of the initial agent state. After the desktop web page loads, the required bandwidth is significantly less.

The number of bytes transmitted at the time an agent logs in is approximately 2.8 megabytes. Because of the additional gadgets on the supervisor desktop (Team Performance, Queue Statistics), this number is higher for a supervisor login - approximately 5.2 megabytes. Cisco does not mandate a minimum bandwidth for the login operations. You must determine how long you want the login to take and determine the required bandwidth accordingly. To help you with this calculation, Cisco Finesse provides a bandwidth calculator to estimate the bandwidth required to accommodate the client login time. Note that during failover, agents are redirected to the alternate Finesse server and required to log in again. For example, if you configure your bandwidth so that login takes 5 minutes and a client failover event occurs, agents will take 5 minutes to successfully log in to the alternate Finesse server.


Note


The Cisco Finesse bandwidth calculator does not include the bandwidth required for any third-party gadgets in the Finesse container or any other applications running on the agent desktop client.


Silent Monitoring Bandwidth, Latency and QOS Considerations

With Silent Monitoring, supervisors can listen to agent calls in Unified CCE call centers that use CTI OS. Voice packets sent to and received by the monitored agent's IP hardware phone are captured from the network and sent to the supervisor desktop, where they are decoded and played on the supervisor's sound card. Silently monitoring an agent consumes approximately the same network bandwidth as an additional voice call: if a single agent requires bandwidth for one voice call, the same agent being silently monitored requires bandwidth for two concurrent voice calls. To calculate the total network bandwidth required for your call load, multiply the number of calls by the per-call bandwidth figure for your particular codec and network protocol.
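This calculation can be sketched as follows. The 80 kbps per-call figure below is a placeholder assumption, not a Cisco number; substitute the per-call bandwidth for your codec and network protocol:

```python
def monitored_bandwidth_kbps(num_calls: int, per_call_kbps: float,
                             monitored_calls: int) -> float:
    """Total voice bandwidth: each silently monitored call consumes
    approximately one additional call's worth of bandwidth."""
    return (num_calls + monitored_calls) * per_call_kbps

# e.g. 100 active calls, 10 of them silently monitored,
# at an assumed 80 kbps per call
print(monitored_bandwidth_kbps(100, 80.0, 10))  # 8800.0 kbps
```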

Cisco RSM Bandwidth, Latency and QOS Considerations

RSM Peer Purpose Protocols Used Data Format Relative Bandwidth Requirements Link Latency Requirements
VRU Service Requests and Responses TCP (HTTP) Textual Minimal < 500 ms avg.
VRU Requested Voice Data from PhoneSim to VRU TCP (HTTP) G711, chunked transfer mode encoding High (about 67 to 87 kbps per session) < 400 ms avg.
Unified CM Issuance of Agent Phone Monitoring TCP (JTAPI) Binary (JTAPI stream) Minimal < 300 ms avg.
CTI OS Server (PG) Environment Events and Supervisor Logins TCP (CTI OS) Binary (CTI OS stream) Minimal < 300 ms avg.
Agent Phones Simulated Phone Signaling TCP or UDP (SIP) Textual Minimal < 400 ms avg.
Agent Phones Monitored Phone Voice Data UDP (RTP) Binary (G.711) High (about 67 to 87 kbps per session) < 400 ms avg

Cisco WIM and EIM Bandwidth, Latency and QOS Considerations

The minimum network bandwidth required for an agent connecting to the Cisco Interaction Manager servers at login is 384 kilobits/second or greater. After login, in a steady state, an average bandwidth of 40 kilobits/second or greater is required.

Attachments up to 50 KB are supported within this required bandwidth. For attachments larger than 50 KB, the agent user interface may be temporarily slow while the attachment downloads.

Firewall Hardening Considerations

This section describes the specific ports that must be allowed from the Contact Center and customer networks. Restrict access to only the ports required for the services that must be exposed, and to specific hosts or networks wherever possible. For an inventory of the ports used across the Hosted Collaboration Solution for Contact Center applications, see the following documentation:


Note


Refer to Step 2g in section Configure Interfaces in the Context for configuring required ports in ASA.


a-Law Codec Support Considerations

HCS for Contact Center supports the G.711 a-law codec: the SIP carrier sends the capability as G.711 a-law and G.729, the prompts at the VXML gateway should be G.711 a-law, and the agents must support both G.711 a-law and G.729. a-law supports the following features for Cisco HCS:
  • Agent Greeting

  • Whisper Announcement

  • Call Manager Based Silent Monitoring

  • Outbound (SIP Dialer)

  • Courtesy Callback

  • Post Call Survey

  • Mobile Agents


Note


ASR/TTS is not supported with a-law.


For information on the core component configurations for a-law codec support, see Configure a-Law Codec.

Back-Office Phone Support Considerations

Following are the considerations for the back-office phone support on the same Unified CM for HCS for Contact Center:
  • You must meet the minimum OVA requirements of Unified CM for a 500 or 1000 agent deployment, as described in Open Virtualization Format Files.
  • Replacing a Contact Center agent phone already pre-sized in the OVA (defined in Configuration Limits) with a regular back-office phone does not require re-sizing the OVA.
  • If you plan to use the Unified CM for all agents plus additional back-office phones, or want to increase the OVA size, follow the guidelines for Specification-Based Hardware Support and size appropriately.

Supported Gadgets for HCS

To access the gadgets, on the database server, click Start and navigate to All Programs > Cisco Unified CCE Tools > Administration Tools, and open Unified CCE Web Administration. The following table shows the CRUD operations supported by the HCS gadgets.

Gadget Create Read Update Delete
Agent   x x (only attribute assignment)  
Attribute x x x x
Precision queue x x x x
Reason Code x x x x
Bucket interval x x x x
Network VRU script x x x x
congestion control   x x  
deploymenttype   x x  
serviceability   x    

x indicates that the operation is supported