Cisco Packaged Contact Center Enterprise Installation and Upgrade Guide, Release 10.5(1)
Prepare Customer Site Servers


Perform all the procedures in this section on the Side A and the Side B servers.

Prepare Cisco UCS C-Series Customer Site Servers

Configure RAID for the C240 M3S TRC#1

For each array created using this procedure, use the following settings:
  • Stripe size: 128KB
  • Read Policy: Read Ahead Always
  • Write Policy: Write Back with BBU
Procedure
    Step 1   Power on the UCS server, making sure that Quiet Boot is disabled in BIOS.
    Step 2   Press Ctrl-H during the initial startup sequence to enter the MegaRAID BIOS configuration utility.
    Step 3   Click Start.
    Step 4   Select Configuration Wizard on the left panel. Click New Configuration. Then click Next.
    Step 5   At the prompt to clear the configuration, click Yes.
    Step 6   Select Manual Configuration. Then click Next.
    Step 7   On the next screen, in the left panel, add the first eight drives to create Drive Group0 as follows:
    1. Select drives 1 - 8.
    2. Click Add to Array.
    3. Click Accept DG.
    Step 8   Add the remaining eight drives to create Drive Group1 as follows:
    1. On the left panel, select drives 9 - 16.
    2. Click Add to Array.
    3. Click Accept DG.
    4. Click Next to accept the Drive Group.
    Step 9   Add Drive Group0 to a span as follows:
    1. Select Drive Group0.
    2. Click Add to Span.
    3. Click Next.
    Step 10   Configure RAID for Drive Group0 as follows:
    1. For RAID Level, select RAID 5.
    2. For Stripe Size, select 128KB.
    3. For Read Policy, select Read Ahead Always.
    4. For Write Policy, select Write Back with BBU.
    5. Click Update Size to finalize the RAID volume and to determine the size of the resulting volume. The size resolves to 1.903 TB.
    6. Click Accept to accept the virtual drive definition, VD0.
    7. Click Next.
    Step 11   Click Back to add the second RAID 5 array as follows:
    1. Select Drive Group1.
    2. Click Add to Span.
    3. Click Next.
    Step 12   At the RAID Selection screen:
    1. For RAID Level, select RAID 5.
    2. For Stripe Size, select 128KB.
    3. For Read Policy, select Read Ahead Always.
    4. For Write Policy, select Write Back with BBU.
    5. Click Update Size. The size resolves to 1.903 TB.
    6. Click Accept to accept the virtual drive definition, VD1.
    Step 13   Click Yes at the BBU warning screen.
    Step 14   Click Next at the Virtual Drive Definition screen to indicate that you have finished defining virtual drives.
    Step 15   Click Accept at the Configuration Preview screen to accept the RAID configuration.
    Step 16   Click Yes to save the configuration.
    Step 17   Click Yes to start drive configuration.
    Step 18   Click Home to exit the wizard when both virtual drives report their status as Optimal.
    Step 19   Click Exit.

    After RAID configuration is complete on the drives, the system may try to initialize (format) the new RAID array. When this happens, the current initialization progress can be seen from the Web BIOS screen. Wait for the background initialization to complete before proceeding with any of the subsequent server configuration steps such as installing ESXi.

    You can check background initialization progress on either the Web BIOS Home screen or Virtual Drives screen.


    Install VMware vSphere ESXi

    Packaged CCE uses the standard installation procedures in the VMware documentation (available on the VMware site) for the ESXi version supported with the release of Packaged CCE you are installing.

    Packaged CCE has no unique requirements other than that ESXi should be installed on the first drive, as the default boot drive for the server.

    Add Datastores to the Host Server

    After installing vSphere ESXi, add the remaining datastores. Refer to the vSphere Storage Guide for the vSphere ESXi version in your deployment, available at https://www.vmware.com/support/pubs/.


    Note


    Required datastores are dictated by the hardware platform used. Cisco UCS C-Series servers require a fixed and validated configuration, whereas UCS B-Series may have a variable number of LUNs on the SAN provisioned to meet Packaged CCE IOPS requirements.


    Add Customer ESXi Host to the vCenter

    Refer to the vCenter Server and Host Management documentation at https://www.vmware.com/support/pubs/.


    Note


    vCenter is required for Golden Templates only.


    Customers without vCenter can install the vSphere Client on management desktops to administer the Packaged CCE servers.

    Run RAID Config Validator Utility

    After you set up RAID configuration and add the datastores, run this utility to ensure your datastore configuration is correct.

    To run the utility, Java 6 (any update) must be installed. Java 7 and later releases are not supported.

    Procedure
      Step 1   Download the Packaged CCE RAID Config Validator utility from the Packaged CCE Download Software > Deployment Scripts page at http://software.cisco.com/download/type.html?mdfid=284360381&i=rm. Extract the zip file locally.
      Step 2   Open the Windows command prompt and change to the directory where you downloaded the file. Then run the tool with this command: java -jar PackagedCCEraidConfigValidator-10.5.jar <IP address of the Side A server> <username> <password>
      For example:
      C:\Users\Administrator\Desktop>java -jar PackagedCCEraidConfigValidator-10.5.jar xx.xx.xxx.xxx userName password
      Messages appear on the monitor to show that the validation is starting.

      You see an indication of a valid or invalid configuration.

      Errors include:
      • Non-supported server found or used.
      • Incorrect number of datastores found.
      • Incorrect sizes set for the datastores.
      Step 3   Repeat Step 2, entering the IP address of the Side B server.

      What to Do Next

      If the utility reports an invalid configuration, you must recreate the RAID configuration. To do this, you need to reset the RAID configuration, re-install ESXi, and then re-run the RAID Config Validator utility to revalidate the configuration.

      Prepare Cisco UCS B-Series Customer Site Servers

      The configuration steps in this section require that the customer site UCS B-Series be installed, configured and operational.

      For additional information and guidance on UCS B-Series installation and configuration, please refer to UCS B-Series documentation (http://www.cisco.com/c/en/us/products/servers-unified-computing/product-listing.html) or your Cisco Data Center Unified Computing Authorized Technology Provider.

      This section includes only specific configuration requirements for Packaged CCE deployments on the UCS B-Series platform. Customers may have varying design and configuration needs due to their data center requirements and infrastructure. However, all configurations must meet the Packaged CCE requirements for high availability. For example, the design must not create the potential for a single point of failure, which can adversely impact the operation of Cisco call processing applications.


      Note


      All UCS hardware configuration examples in this section use the Cisco UCS Manager GUI. UCS Manager CLI or API may also be used.


      Fabric Interconnect Requirements

      Ethernet Mode

      The Fabric Interconnects' Ethernet Mode must be set to End Host.



      Ethernet Uplinks

      Cisco UCS Fabric Interconnect Ethernet uplinks (Uplink Ports) for Packaged CCE are required to be 10G, with each Fabric Interconnect cross-connected to two common-L2 data center switches. The uplinks can be in a single-link, Port-Channel (EtherChannel), vPC or VSS (MEC) uplink topology.

      If any Port-Channel uplinks are used, a corresponding Port-Channel must be created in UCS Manager, with a Port-Channel ID that matches the ID configured on the data center switch.

      If Port-Channel uplinks to data center switches are used, the UCS B Series Fabric Interconnects support only Link Aggregation Control Protocol (LACP). Ensure that the data center switch and Port-Channels are configured for, and support, LACP. This requirement also applies to vPC and VSS Port-Channels.
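      As an illustration only, the data-center-switch side of an LACP Port-Channel uplink might be configured as in the following Cisco IOS sketch. The Port-Channel ID, interface names, and trunk settings here are assumptions for the example, not values from this guide; adapt them to your data center design.

      ```
      ! Logical Port-Channel facing UCS Fabric Interconnect A
      interface Port-channel10
       description Uplink to UCS Fabric Interconnect A
       switchport mode trunk
      !
      ! Member link; "mode active" enables LACP, which the
      ! Fabric Interconnects require for Port-Channel uplinks
      interface TenGigabitEthernet1/0/1
       switchport mode trunk
       channel-group 10 mode active
      ```

      The Port-Channel ID (10 here) must match the ID of the Port-Channel created in UCS Manager for the same uplink.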

      FC Mode

      Both End Host and Switching modes are supported. End Host is the default for FC and FCoE NPIV with a supported FC switch. Switching mode requires FC Zoning to be configured in the Fabric Interconnects. Refer to UCS Fabric Interconnect documentation for more information on these modes and use cases, and to specific SAN switch and SAN controller vendor documentation as necessary. UCS Fabric Interconnect documentation is available at http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-6200-series-fabric-interconnects/index.html.

      FC Storage Port and FCoE Uplinks

      Packaged CCE supports all FC and FCoE connected SAN topologies supported by the UCS Fabric Interconnects, provided that all storage redundancy, latency, IO, and bandwidth requirements are met.


      Note


      If direct-attach SAN is used, qualified direct-attach FC and FCoE storage vendors are currently limited to EMC, Hitachi Data Systems, and NetApp. Please refer to the latest Cisco UCS hardware compatibility list for the most current qualified vendors and models, at http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html.


      QoS System Class and QoS Policy

      Unified CM and CCE applications set L3 QoS DSCP (AF/CS), which is not handled by the Fabric Interconnects; Fabric Interconnects are not L3 aware. Packaged CCE does not require specific QoS System Class or QoS Policy settings for VMware vSwitches.

      Cisco UCS B-Series Blade Requirements

      Cisco UCS Manager uses Pools, Policies, and Templates, which are collected in a Service Profile Template and applied to a blade as a Service Profile.

      Packaged CCE does not have any specific requirements for the blade Service Profile or Service Profile Templates, other than that the vNIC and vHBA configurations conform to the network VLAN and FC/FCoE VSAN requirements (see vNIC Requirements and vHBA Requirements).

      For consistent and verifiable configuration and conformance of server configurations, use vNIC, vHBA and Service Profile Templates.

      For more detail on UCS blade configuration and service profiles and templates, refer to the appropriate Cisco UCS Manager documentation.

      vNIC Requirements

      Packaged CCE requires that you configure a minimum of two vNIC Ethernet interfaces on the UCS B-series blade. You must assign each of these two interfaces to alternate Fabric Interconnects for redundancy.



      Do not enable Fabric Failover for any Packaged CCE host vNIC interfaces.



      The VMware VMKernel and Management interface is allowed to share the same vNICs with Packaged CCE.

      This table is an example of collapsed vNIC interfaces for all VLANs:

      vNIC   VLANs                                 Fabric
      eth0   PCCE Visible (Active)                 A
             PCCE Private (Standby)
             VMware Kernel & Management (Active)
             Default VLAN (Active)
             Other Management (Active)
      eth1   PCCE Visible (Standby)                B
             PCCE Private (Active)
             VMware Kernel & Management (Standby)
             Default VLAN (Standby)
             Other Management (Standby)

      Active and Standby denote the reference design for traffic flow through these vNICs, as aligned to Fabric Interconnects and controlled in the VMware layer. See the UCS B Series Networking section for more details.


      Note


      Networks other than the Packaged CCE Visible and Private networks are not required to be set to Active/Standby, as shown in the table. They can be set to Active/Active (no override), or assigned as needed to distribute load evenly across the infrastructure.


      vHBA Requirements

      You must configure a minimum of two vHBA FC interfaces on the UCS B-series blade. You must assign each of these two interfaces to alternate Fabric Interconnects for redundancy.



      These FC vHBAs can be used for either FC or FCoE connected SAN. Cisco UCS best practice is to use a different VSAN for each Fabric Interconnect (A/B) path to the SAN, but a common VSAN is also supported.

      Common or separate vHBA interfaces may be used for the Packaged CCE datastores and the ESXi Boot from SAN storage path.

      Packaged CCE Application IOPS for SAN Provisioning

      This section details the Packaged CCE application IO requirements to be used for Storage Area Network (SAN) provisioning. Use these data points to size and provision the LUNs that are mapped to vSphere datastores hosting the Packaged CCE applications. Partners and customers should work closely with their SAN vendor to size LUNs to these requirements.

      Packaged CCE on UCS B-Series does not require a fixed or set number of LUNs/Datastores. Customers may use as few as a single LUN, or a one-to-one mapping of application VM to LUN, provided that the Packaged CCE application IOPS throughput and latency requirements are met. LUN design varies from vendor to vendor, and from SAN model to model. Work closely with your SAN vendor to determine the best solution that meets the requirements given here.

      The IOPS provided in this topic are for Packaged CCE on-box components only. For any off-box applications, refer to each application's documentation for IOPS requirements.

      Requirements and restrictions for SAN LUN Provisioning include the following:

      • VMware vSphere Boot from SAN LUN may not be shared with any Packaged CCE application VMs. Consult VMware and SAN vendor best practices for boot from SAN.
      • Thin provisioned LUNs are supported. They must start with sufficient space to house the total required space of all Packaged CCE application VMs, because those VMs' vDisks do not support Thin Provisioning.
      • Data de-duplication is not supported on the SAN.
      • RAID 0 and RAID 1 are not supported for the SAN's disk arrays used to house the LUNs created. RAID 0 lacks redundancy, and RAID 1 negatively impacts application performance. RAID levels 5, 6, and 10 are most common. Other advanced RAID levels offered by your SAN vendor are supported, provided that application IOPS, throughput, and latency requirements are met.
      • Tiered storage is supported.
      • 7200 RPM or slower drives are not supported for Packaged CCE use in a SAN due to poor latency. The one exception to this requirement is if the drive is used in a Tiered storage pool with 10,000/15,000 RPM drives and with SSD tiers in the same pool.
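      As a rough planning aid (not a procedure from this guide), the per-application peak IOPS figures from the tables in this section can be summed for the VMs that will share a LUN, with headroom added before comparing against the LUN's rated IOPS. The helper function, the 20 percent margin, and the example application mix below are all illustrative assumptions; take the real values from the IOPS table for your agent count and from your SAN vendor.

      ```python
      # Hypothetical sizing aid: sum per-application peak IOPS for VMs that
      # will share one LUN. All names and figures are illustrative.

      def lun_iops_required(apps, margin=1.2):
          """Return the IOPS a shared LUN must sustain for the given apps.

          `apps` maps application name to its peak IOPS requirement;
          `margin` adds headroom (assumption: 20 percent).
          """
          return sum(apps.values()) * margin

      # Example mix using peak IOPS from the 1000-agent table.
      apps = {
          "Unified CCE Data Server": 5525,
          "Unified Intelligence Center": 1205,
      }
      print(round(lun_iops_required(apps)))  # 8076
      ```

      A LUN rated below that figure (under this assumed margin) would be a candidate for splitting the VMs across separate LUNs.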

      Note


      In the following IOPS and KBps tables:

      • Numbers given for 95th Pct, Average, and Peak are totals of Read + Write.
      • Requirements are per instance of the given application.
      • The totals given for any application VM with multiple vDisks include all of those devices; deploy the devices on the same LUN/Datastore, with sufficient resources to meet those requirements.
      • Unified CVP Reporting Server IOPS does not include on-box VXML reporting being enabled. If VXML reporting is enabled, see the Unified CVP Reporting Server IOPS requirements on the "Virtualization for Cisco Unified Customer Voice Portal" wiki page, available at http://docwiki.cisco.com/wiki/Virtualization_for_Cisco_Unified_Customer_Voice_Portal.

      IOPS and KBps for Up to 1000 Agents

      Table 1 IOPS for up to 1000 agents

      Packaged CCE Component                      95th Pct   Average     Peak   Write %
      Unified CCE Call Server                           53        47      268        90
      Unified CCE Data Server                         3107      1020     5525        70
      Unified CVP Server                                30        21      450        80
      Finesse Server                                    32        24      531        90
      Unified CVP OAMP Server                           80        22      100        60
      Unified Intelligence Center                      648       444     1205        90
      Unified Communications Manager Publisher          52        43      740        90
      Unified Communications Manager Subscriber         44        37      898        90
      Unified CVP Reporting Server                     370        76     7890        50

      Table 2 KBps for up to 1000 agents

      Packaged CCE Component                      95th Pct   Average     Peak   Write %
      Unified CCE Call Server                         2518      2090    31013        90
      Unified CCE Data Server                       193448     38833   520999        55
      Unified CVP Server                               158       136    46695        80
      Finesse Server                                  1375      1056     5731        90
      Unified CVP OAMP Server                          146        43      201        60
      Unified Intelligence Center                     3671      2480    10120        90
      Unified Communications Manager Publisher         621       541     6285        90
      Unified Communications Manager Subscriber       1221      1068     4547        90
      Unified CVP Reporting Server                    1341       584    34181        50

      IOPS and KBps for Up to 500 Agents

      Table 3 IOPS for up to 500 agents

      Packaged CCE Component                      95th Pct   Average     Peak   Write %
      Unified CCE Call Server                           36        32       74        90
      Unified CCE Data Server                         2011       428     4941        70
      Unified CVP Server                                20        14      300        80
      Finesse Server                                    26        20      434        90
      Unified CVP OAMP Server                           80        22      100        60
      Unified Intelligence Center                      412       327      990        90
      Unified Communications Manager Publisher          45        38      622        90
      Unified Communications Manager Subscriber         38        33      781        90
      Unified CVP Reporting Server                     192        41     4078        50

      Table 4 KBps for up to 500 agents

      Packaged CCE Component                      95th Pct   Average     Peak   Write %
      Unified CCE Call Server                         1401      1154    18687        90
      Unified CCE Data Server                        81903     10170   219675        55
      Unified CVP Server                               125       121    37356        80
      Finesse Server                                   781       512     4701        90
      Unified CVP OAMP Server                          146        43      201        60
      Unified Intelligence Center                     2553      1910     8096        90
      Unified Communications Manager Publisher         394       325     4726        90
      Unified Communications Manager Subscriber        521       446     4371        90
      Unified CVP Reporting Server                     973       193    24314        50

      IOPS and KBps for Up to 250 Agents

      Table 5 IOPS for up to 250 agents

      Packaged CCE Component                      95th Pct   Average     Peak   Write %
      Unified CCE Call Server                           28        24       68        90
      Unified CCE Data Server                         1134       237     3517        70
      Unified CVP Server                                10         7      150        80
      Finesse Server                                    25        18      447        90
      Unified CVP OAMP Server                           80        22      100        60
      Unified Intelligence Center                      304       246      792        90
      Unified Communications Manager Publisher          39        35      480        90
      Unified Communications Manager Subscriber         29        27      679        90
      Unified CVP Reporting Server                     101        27     2719        50

      Table 6 KBps for up to 250 agents

      Packaged CCE Component                      95th Pct   Average     Peak   Write %
      Unified CCE Call Server                          861       643     7402        90
      Unified CCE Data Server                        28139      4147   208389        55
      Unified CVP Server                                71        58    20545        80
      Finesse Server                                   633       401     4149        90
      Unified CVP OAMP Server                          146        43      201        60
      Unified Intelligence Center                     2166      1486     6747        90
      Unified Communications Manager Publisher         319       262     3990        90
      Unified Communications Manager Subscriber        372       307     3999        90
      Unified CVP Reporting Server                     471       208    23002        50

      NTP and Time Synchronization

      Packaged CCE requires that all parts of the solution have the same time. While time drift occurs naturally, it is critical to configure NTP to keep solution components synchronized.

      To prevent time drifts on Live Data reports, the NTP settings on the Data Server VMs, the Call Server VMs, and on the Cisco Unified Intelligence Center Publisher and Subscriber VMs must be synchronized.

      For Cisco UCS B-series servers, you also must set the time zone and NTP Time Server using the UCS Manager. See Set Time Zone and NTP Time Server for Cisco UCS B-Series Servers for more information.

      Windows Active Directory Domain

      The Windows Active Directory PDC emulator master for the forest in which the Packaged CCE domain resides (whether same, parent, or peer) must be properly configured to use an external time source. This external time source should be a trusted and reliable NTP provider. If one is already configured for the customer's forest, it must be used (and usable) as the same source for all other applications detailed in this section for the Packaged CCE solution.

      See the following references for properly configuring Windows Active Directory Domain for NTP external time source:

      Microsoft Windows Server Domains do not automatically recover or fail over the authoritative internal time source for the domain when the PDC emulator master server is lost, whether to hardware failure or otherwise. The article Time Service Configuration on the DC with PDC Emulator FSMO Role describes how to configure the new target server as the authoritative internal time source for the domain. It also covers the manual intervention needed to recover and seize or reassign the PDC FSMO role to another domain controller.

      Windows Components in the Domain

      Windows hosts in the domain are automatically configured to sync their time within the domain forest hierarchy, either directly from the PDC emulator master holding the authoritative internal time source or chained from it.

      Windows Components Not in the Domain

      Use the following steps to set NTP time source for a Windows Server that is not joined to a domain:
      1. In the Command Prompt window, type the following line and press ENTER: w32tm /config /manualpeerlist:PEERS /syncfromflags:MANUAL

        Note


        Replace PEERS with a comma-separated list of NTP servers.
      2. Restart the w32time service: net stop w32time && net start w32time.
      3. Sync the w32time service with the peers: w32tm /resync.
      4. Use the following Service Control command to ensure that the w32time service starts properly on any reboot of the server: sc triggerinfo w32time start/networkon stop/networkoff.

      ESXi Hosts

      All Packaged CCE ESXi hosts (including those for optional components) must point to the same NTP server(s) used by the Windows domain PDC emulator master as their external time source.

      Cisco Integrated Service Routers

      Cisco IOS Voice Gateways must be configured to use the same NTP source for the solution in order to provide accurate time for logging and debugging. See Basic System Management Configuration Guide, Cisco IOS Release 15M&T: Setting Time and Calendar Services.

      VOS Components

      Components such as Unified Intelligence Center, Finesse, SocialMiner, and Unified Communications Manager must point to the same NTP servers as the domain's authoritative internal time source.

      CLI commands for NTP Servers

      While NTP servers are typically specified at install time, here are a few commands you can use from the platform CLI of the components listed above to list, add, and remove NTP servers:
      • To list the existing NTP servers: utils ntp servers list
      • To add an NTP server: utils ntp server add <host or IP address to add>
      • To delete an existing NTP server: utils ntp server delete, then enter the row number of the server to delete and press Enter.

      Set Time Zone and NTP Time Server for Cisco UCS B-Series Servers

      Set the time zone and NTP Time server for UCS B-series server in the UCS Manager.

      Procedure
        Step 1   From the Admin tab in UCS Manager, select Time Zone Management.
        Step 2   Select the Time Zone from the drop-down menu.
        Step 3   Click Add NTP Time Server.
        Step 4   Enter the IP address of the NTP Time Server, and click OK.
        Step 5   Click Save.