For each array
created using this procedure, use the following settings:
Read Ahead Always
Write Back with BBU
Power on the UCS
server, making sure that Quiet Boot is disabled in BIOS.
Press Ctrl-H during the initial startup sequence to enter the MegaRAID BIOS configuration utility.
Click Configuration Wizard on the left panel. Click New Configuration. Then click Next.
At the prompt to clear the configuration, click Yes.
Select Manual Configuration. Then click Next.
On the next screen, in the left panel, add the first eight drives (drives 1 - 8) to create Drive Group0.
Add the remaining eight drives to create Drive Group1 as follows: On the left panel, select drives 9 - 16. Then accept the Drive Group.
Add Drive Group0 to a span.
Configure the RAID settings for Drive Group0 as follows:
Set the read policy to read ahead = always.
Set the write policy to write back with bbu.
Click Update Size to finalize the RAID volume and to determine the size of the resulting volume. It resolves to 1.903 TB.
Accept the virtual drive definition, VD0.
Click Back to add the second RAID 5 array as follows:
At the RAID settings for the second virtual drive, set the read policy to read ahead = always and the write policy to write back with bbu.
Click Update Size. The size resolves to 1.903 TB.
Accept the virtual drive definition, VD1.
Click Yes at the BBU warning screen.
Click Next at the Virtual Drive Definition screen to indicate that you have finished defining virtual drives.
Click Accept at the Configuration Preview screen to accept the RAID configuration.
Click Yes to save the configuration.
Click Yes to start drive configuration.
Click Home to exit the Wizard when both drives report their status as Optimal.
After the RAID configuration is complete on the drives, the system may try to initialize (format) the new RAID array. When this happens, the current initialization progress can be seen on the Web BIOS screen. Wait for the background initialization to complete before proceeding with any of the subsequent server configuration steps, such as installing ESXi.
You can check background initialization progress on either the Web BIOS Home screen or the Virtual Drives screen.
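If the LSI/Broadcom StorCLI utility is installed on the server (an assumption, since this procedure uses only the WebBIOS screens), the virtual drive state and background initialization progress can also be checked from a command line, for example:

storcli /c0/vall show        (lists the virtual drives and their state, which should report Optimal)
storcli /c0/vall show bgi    (shows background initialization progress for the virtual drives)

The controller index /c0 is a placeholder and depends on how the MegaRAID controller enumerates on your server.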
Packaged CCE uses standard installation procedures, which you can find in the VMware documentation (on the VMware site) for the ESXi version supported with the release of Packaged CCE you are installing.
Packaged CCE has no
unique requirements other than that ESXi should be installed on the first
drive, as the default boot drive for the server.
Required datastores are dictated by the hardware platform used. Cisco
UCS C-Series servers require a fixed and validated configuration, whereas UCS
B-Series may have a variable number of LUNs on the SAN provisioned to meet
Packaged CCE IOPS requirements.
Open a Windows command prompt and change to the directory where you downloaded the file. Then type this command to run the tool:
java -jar PackagedCCEraidConfigValidator-10.5.jar <IP Address of the Side A server> <username> <password>
Messages appear on the monitor to show that the validation is in progress. You see an indication of a valid or invalid configuration.
Non-supported server found
Incorrect number of datastores
Incorrect sizes set for the datastores
Repeat Step 2, entering the IP address of the Side B server in Step 3.
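For example, with hypothetical management IP addresses of 10.10.10.21 and 10.10.10.22 for the Side A and Side B servers and the root account (all placeholder values), the two validation runs would look like this:

java -jar PackagedCCEraidConfigValidator-10.5.jar 10.10.10.21 root <password>    (validates the Side A server)
java -jar PackagedCCEraidConfigValidator-10.5.jar 10.10.10.22 root <password>    (validates the Side B server)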
What to Do Next
If the utility
reports an invalid configuration, you must recreate the RAID configuration. To
do this, you need to reset the RAID configuration, re-install ESXi, and then
re-run the RAID Config Validator utility to revalidate the configuration.
Prepare Cisco UCS
B-Series Customer Site Servers
The steps in this section require that the customer site UCS B-Series servers be installed, configured, and operational.
This section includes only specific configuration requirements for Packaged CCE deployments on the UCS B-Series platform. Customers may have varying design and configuration needs due to their data center requirements and infrastructure. However, all configurations must meet the Packaged CCE requirements for high availability. For example, the design must not create the potential for a single point of failure, which can adversely impact the operation of Cisco call processing.
All UCS hardware
configuration examples in this section use the Cisco UCS Manager GUI. UCS
Manager CLI or API may also be used.
The Fabric Interconnects' Ethernet Mode must be set to End Host.
Cisco UCS Fabric
Interconnect Ethernet uplinks (Uplink Ports) for Packaged CCE are required to
be 10G, with each Fabric Interconnect cross-connected to two common-L2 data
center switches. The uplinks can be in a single-link, Port-Channel
(EtherChannel), vPC or VSS (MEC) uplink topology.
If Port-Channel uplinks are used, a corresponding Port-Channel must be created in UCS Manager, where the ID of the Port-Channel matches that on the data center switch. When Port-Channel uplinks to data center switches are used, the UCS B-Series Fabric Interconnects support only Link Aggregation Control Protocol (LACP). Ensure that the data center switch and Port-Channels are configured for, and support, LACP. This requirement also applies to vPC and VSS Port-Channels.
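As an illustration only, the matching Port-Channel on an upstream Cisco data center switch might be configured for LACP as sketched below. The Port-Channel ID, interface names, and trunk settings are placeholders; the requirement is simply that the data center side runs LACP and that the Port-Channel ID matches the one defined in UCS Manager.

! Hypothetical upstream switch configuration (IOS/NX-OS style)
interface port-channel 10
 description Uplink to UCS Fabric Interconnect A
 switchport mode trunk
!
interface Ethernet1/1
 description 10G member link to Fabric Interconnect A
 switchport mode trunk
 ! "mode active" enables LACP, which the Fabric Interconnects require
 channel-group 10 mode active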
Unified CM and CCE
applications set L3 QoS DSCP (AF/CS), which is not handled by the Fabric
Interconnects; Fabric Interconnects are not L3 aware. Packaged CCE does not
require specific QoS System Class or QoS Policy settings for VMware vSwitches.
Cisco UCS Manager uses Pools, Policies, and Templates, which are collected in a Service Profile Template and applied to a blade as a Service Profile.
Packaged CCE does not have any specific requirements for the blade Service Profile or Service Profile Templates, other than the vNIC and vHBA requirements for conforming to network VLAN and FC/FCoE VSAN requirements (see vNIC Requirements and vHBA Requirements).
For consistent and verifiable server configuration and conformance, use vNIC, vHBA, and Service Profile Templates.
For more detail on UCS blade configuration and service profiles and
templates, refer to the appropriate Cisco UCS Manager documentation.
Packaged CCE requires that you configure a minimum of two vNIC Ethernet interfaces on the UCS B-Series blade. You must assign each of these two interfaces to alternate Fabric Interconnects for redundancy.
Do not enable Fabric Failover for any Packaged CCE host vNIC interfaces.
The VMware VMkernel and Management interface is allowed to share the same vNICs with Packaged CCE.
This table is an example of collapsed vNIC interfaces for all VLANs:
[Table: example of collapsed vNIC interfaces, including Kernel & Management (Active) and Kernel & Management (Standby) assignments]
Active and Standby are denoted to show the reference design for traffic flow through these vNICs as aligned to Fabric Interconnects, as controlled in the VMware layer. See the UCS B Series Networking section for more details.
VLANs other than the Packaged CCE Visible and Private networks are not required to be set to Active/Standby, as shown in the table. They can be set to Active/Active (no override), or assigned as needed to distribute load evenly across the vNICs.
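As one way to express this Active/Standby alignment in the VMware layer, the teaming policy of a standard vSwitch port group can be set with esxcli. This is only a sketch; the port group names, and the assumption that the two UCS vNICs appear to ESXi as vmnic0 (Fabric A) and vmnic1 (Fabric B), are placeholders.

# VM Kernel & Management: active on Fabric A, standby on Fabric B
esxcli network vswitch standard portgroup policy failover set -p "Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
# Packaged CCE Visible network active on Fabric A; Private network active on Fabric B
esxcli network vswitch standard portgroup policy failover set -p "PCCE Visible" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set -p "PCCE Private" --active-uplinks=vmnic1 --standby-uplinks=vmnic0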
You must configure a minimum of two vHBA FC interfaces on the UCS B-Series blade. You must assign each of these two interfaces to alternate Fabric Interconnects for redundancy.
These FC vHBAs can be used for either FC or FCoE connected SAN. Cisco UCS best practice is to use a different VSAN for each Fabric Interconnect (A/B) path to the SAN, but a common VSAN is also supported.
Common (as depicted)
or separate vHBA interfaces may be used for Packaged CCE datastores and ESXi
Boot from SAN storage path.
Application IOPS for SAN Provisioning
This section details the Packaged CCE application IO requirements to be used for Storage Area Network (SAN) provisioning. You must use these data points to properly size and provision LUNs to be mapped to datastores in vSphere that then host the Packaged CCE applications. Partners and customers should work closely with their SAN vendor to size LUNs to these requirements.
Packaged CCE on UCS B-Series does not require a fixed or set number of LUNs/Datastores. Instead, customers may use as few as a single LUN, or a 1 to 1 mapping of application VM to LUN, provided that the Packaged CCE application IOPS, throughput, and latency requirements are met. Any given LUN design will vary from vendor to vendor, and from SAN model to model. Work closely with your SAN vendor to determine the best solution to meet the requirements given here.
The IOPS provided
in this topic are for Packaged CCE on-box components only. For any off-box
applications, refer to each application's documentation for IOPS requirements.
Restrictions for SAN LUN Provisioning include the following:
Boot from SAN LUN may not be shared with any Packaged CCE application VMs.
Consult VMware and SAN vendor best practices for boot from SAN.
Thin provisioned LUNs are supported. They must start with sufficient space to house the total required space of all Packaged CCE application VMs, as those VMs' vDisks do not support Thin Provisioning.
De-duplication is not supported on the SAN.
RAID 0 or RAID 1 are not supported for the SAN's disk arrays used to house the LUNs created. RAID 0 lacks redundancy, and RAID 1 negatively impacts application performance.
RAID levels 5,
6, 10 are most common. Other advanced RAID levels offered by your SAN vendor
are supported, provided that application IOPS, throughput, and latency
requirements are met.
7200 RPM or slower drives are not supported for Packaged CCE use in a SAN due to poor latency. The one exception to this requirement is if the drive is used in a tiered storage pool with 10,000/15,000 RPM drives and with SSD tiers in the pool.
In the following IOPS and KBps tables:
Values given for 95th Pct, Average, and Peak are totals of Read + Write.
Values are per instance of the given application.
Any application VM that has multiple vDisks is inclusive of those multiple devices in the sum total values given, and those devices should be deployed on the same LUN/Datastore with sufficient resources to meet those requirements.
Packaged CCE requires that all parts of the solution have the same time. While time drift occurs naturally, it is critical to configure NTP to keep solution components synchronized.
To prevent time
drifts on Live Data reports, the NTP settings on the Data Server VMs, the Call
Server VMs, and on the Cisco Unified Intelligence Center Publisher and
Subscriber VMs must be synchronized.
The Windows Active Directory PDC emulator master for the forest in which the Packaged CCE domain resides (whether same, parent, or peer) must be properly configured to use an external time source. This external time source should be a trusted and reliable NTP provider and, if already configured for the customer's forest, must be used (and usable) as the same source for all other applications as detailed in this section for the Packaged CCE solution.
See the following references for properly configuring the Windows Active Directory domain to use an NTP external time source:
Windows Server Domains do not automatically recover or fail over the authoritative internal time source for the domain when the PDC emulator master server is lost, due to hardware failure or otherwise. This article, Time Service Configuration on the DC with PDC Emulator FSMO Role, describes how you must additionally configure the new target server to be the authoritative internal time source for the domain. It also covers manual intervention to recover and seize or reassign the PDC FSMO role to another domain controller.
Components in the Domain
Windows hosts in the domain are automatically configured to synchronize their time with a PDC emulator, whether directly with the PDC emulator master that holds the authoritative internal time source or chained from it in the domain forest hierarchy.
Components Not in the Domain
Use the following steps to set the NTP time source for a Windows Server that is not joined to a domain.
In a Command Prompt window, type the following line and press ENTER:
w32tm /config /manualpeerlist:PEERS /syncfromflags:MANUAL
Replace PEERS with a comma-separated list of NTP servers.
Restart the time service: net stop w32time && net start w32time.
Synchronize the time service with the peers (see the example command sequence after these steps).
Use the following Service Control command to ensure proper start of the w32time service on any reboot of the server:
sc triggerinfo w32time start/networkon stop/networkoff
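As a consolidated sketch of these steps, the full sequence might look like the following; the pool.ntp.org hosts are placeholder peers only, and you should substitute the NTP servers required for your site:

rem Point the Windows Time service at the placeholder peers, then restart it
w32tm /config /manualpeerlist:"0.pool.ntp.org,1.pool.ntp.org" /syncfromflags:MANUAL
net stop w32time && net start w32time
rem Force an immediate synchronization with the configured peers
w32tm /resync
rem Start and stop the service automatically as the network comes up or goes down
sc triggerinfo w32time start/networkon stop/networkoff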
All Packaged CCE ESXi hosts (including those for optional components) must point to the same NTP server(s) used by the Windows domain PDC emulator master as their external time source.
Components such as Unified Intelligence Center, Finesse, SocialMiner, and Unified Communications Manager must point to the same NTP servers as the domain authoritative internal time source.
CLI commands for NTP
While NTP servers are typically specified at install time, here are a few commands you can use from the platform CLI of the components listed above to list, add, and remove NTP servers. From the platform CLI:
To list existing NTP servers:
utils ntp server list
To add an additional NTP server:
utils ntp server add <host or IP address to add>
To delete an existing NTP server:
utils ntp server delete (enter the row number of the item to delete, and then press Enter)
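For example, a platform CLI session that lists the configured servers and adds a hypothetical NTP server at 10.10.10.5 might look like this:

admin: utils ntp server list
admin: utils ntp server add 10.10.10.5
admin: utils ntp server delete    (the CLI then prompts for the row number of the server to remove)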