MediaSense may only be deployed on a VMware hypervisor, which must be running on one of the following platforms:
A UCS B-series or C-series server with Fibre Channel-attached SAN storage (for at least the first two disks) or with directly attached hard disk drives
A UCS-E module running within a Cisco ISR-G2 router (subject to the stated minimum performance)
(See the Compatibility matrix below for versions and model numbers.)
For C-series servers, be sure to include either the battery backup or Super Cap
RAID controller option. If one of these is not present or not operational, the
write cache is disabled on these controllers. When the write cache is disabled,
write throughput is significantly reduced. (See the Compatibility matrix for
detailed disk requirements.)
The primary and
secondary servers must be based on identical hardware, or at least have
identical specifications for CPU speed and disk I/O throughput. They must also
be using the same version of VMware ESXi. Any asymmetry causes accumulating
database latency in one server or the other. Expansion servers do not need to
be running on identical hardware.
A MediaSense cluster may be deployed on up to five rack-mounted servers or up to
two UCS-E modules, depending on the capacity and degree of redundancy required.
(Here, "server" refers to a virtual machine, not necessarily a physical
machine.) There are three types of servers:
Primary server: Supports all database operations as well as media operations.
Secondary server: Supports all database operations as well as media operations.
Provides high availability for the database.
Expansion server: Provides additional capacity for media operations, but not for
database operations. Expansion servers are only used in 7-vCPU deployments, and
are never used in UCS-E module deployments.
The following diagram shows a primary-only deployment model.
Figure 1. Primary-Only Deployment Model
Deployments that require database redundancy can deploy a secondary server.
Figure 2. Secondary
Server Deployment Model
When additional recording capacity is required, expansion servers are deployed.
Figure 3. Expansion
Servers Deployment Model
Expansion servers are not supported in deployments that do not use the full
7-vCPU template.
All servers (including UCS-E servers) run the same installed software; they differ only in
function and capacity. The primary server is always the first server to be
installed and is identified as such during the installation process. Secondary
and expansion servers are identified during the initial web-based setup process
for those nodes (after installation is complete).
Recorded media is always stored on the disks that are attached to the server which initially
captured the media.
UCS-E two-server clusters may be deployed with both blades in the same ISR-G2
router or with one blade in each of two ISR-G2 routers. The latter arrangement
is typically preferred from a fault isolation perspective, but is not required. A MediaSense
cluster must be UCS-E-based or rack-mount server-based. It cannot consist of a
combination of the two.
There are two
reasons for expanding system capacity: to be able to handle more simultaneous
calls and to increase the retention period of your recordings.
If your goal is to
handle more calls, then you can add nodes to your cluster. Each node adds both
simultaneous activity capacity (the ability to perform more parallel recording,
monitoring, download, and playback activities) and storage capacity. Servers
may be added to a cluster at any time, but there is no ability to remove
servers from a cluster.
Once your cluster
has reached its maximum size (5 nodes for 7 vCPU systems, 2 nodes for
everything else), your only option is to add a new cluster. If you do, you must
arrange your clusters with approximately equal numbers of nodes. If that is not
possible, then you should at least arrange the call distribution so that an
approximately equal number of calls is directed to each cluster, possibly
leaving the larger cluster underutilized. For more information, see
Very Large Deployments.
There is no
capability to remove a node from a cluster once it has been added.
If your intention
is to achieve a longer retention period for your recordings, you have two
options. You can add nodes, as described above, or you can provision more
storage space per node. Each node can handle up to 12 TB of recording storage
provided that the underlying hardware can support it. UCS-B/C servers with SAN
can handle the storage capacity easily, but servers using direct attached
storage (DAS) and low-end UCS-E servers are more limited.
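As a rough way to reason about this trade-off, the following sketch estimates retention from provisioned storage, average concurrent recording load, and per-hour storage consumption. The input values, including the per-hour figure, are illustrative placeholders rather than MediaSense sizing specifications; substitute numbers from your own sizing data.

    # Rough retention estimate. All input values below are illustrative
    # placeholders, not MediaSense sizing figures.
    def retention_days(total_storage_tb: float,
                       avg_concurrent_recordings: float,
                       gb_per_recorded_hour: float,
                       recording_hours_per_day: float = 24.0) -> float:
        """Estimate how many days of recordings fit in the provisioned storage."""
        total_storage_gb = total_storage_tb * 1024
        gb_consumed_per_day = (avg_concurrent_recordings
                               * gb_per_recorded_hour
                               * recording_hours_per_day)
        return total_storage_gb / gb_consumed_per_day

    # Example: 12 TB on one node, 40 concurrent sessions around the clock,
    # assuming roughly 0.06 GB per recorded hour per session (placeholder).
    print(f"{retention_days(12, 40, 0.06):.0f} days of retention")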
As with nodes in a
cluster, there is no capability to remove storage capacity from a node once it
has been installed.
Deployments that require capacity that exceeds that of a single MediaSense cluster must deploy
multiple independent MediaSense clusters. In such deployments, there is
absolutely no interaction across clusters; no cluster is aware of the others.
Load balancing of incoming calls does not extend across clusters. One cluster
might reach capacity and block subsequent recording requests, even if another
cluster has plenty of remaining space.
API clients need
to be designed so that they can access all of the deployed clusters
independently. They can issue queries and subscribe to events from multiple
clusters simultaneously, and even specify that multiple clusters should send
their events to the same SWS URL. This permits a single server application to
receive all events from multiple clusters. Some MediaSense partner applications
already have this capability. MediaSense's built-in Search and Play application
does not have this capability.
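As an illustration of that design pattern, the following sketch registers the same callback URL with several MediaSense clusters. The host names, port, resource path, and payload fields are placeholders (take the actual subscription request format from the MediaSense API reference); only the overall pattern, one subscription per cluster pointing at a single callback URL, is the point here.

    # Sketch: subscribe one callback URL to events from several MediaSense
    # clusters. The path, port, and payload below are placeholders; consult
    # the MediaSense API reference for the actual subscription request.
    import requests

    CLUSTERS = ["mediasense-a.example.com", "mediasense-b.example.com"]  # placeholder hosts
    CALLBACK_URL = "https://recorder-app.example.com/mediasense/events"  # placeholder URL
    SUBSCRIBE_PATH = "/ora/eventService/event/subscribe"                 # placeholder path

    def subscribe_all(api_user: str, api_password: str) -> None:
        for host in CLUSTERS:
            # Each cluster is addressed independently; there is no cross-cluster API.
            response = requests.post(
                f"https://{host}:8440{SUBSCRIBE_PATH}",
                json={"requestParameters": {"uri": CALLBACK_URL}},  # placeholder body
                auth=(api_user, api_password),
                timeout=10,
            )
            response.raise_for_status()
            print(f"Subscribed to events from {host}")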
The following list describes whether multiple call controllers can share one
MediaSense cluster and whether one call controller can use multiple MediaSense
clusters:
Multiple Unified Communications Manager clusters to one MediaSense cluster: Supported.
Multiple Unified Border Element devices to one MediaSense cluster: Supported.
One Unified Communications Manager cluster to multiple MediaSense clusters: Only supported with the restrictions described below.
One Unified Border Element to multiple MediaSense clusters: Supported.
For Unified Communications Manager built-in-bridge or network-based recording, phones must
be partitioned among MediaSense clusters so that any given phone is recorded by
one specific MediaSense cluster only; its recordings are never captured by a
server in another cluster. (There is, however, an option for full cluster
failover, such as in the case of a site failure.)
In some deployments, there is a need for multiple Unified Communications Manager
clusters, possibly tied together in an SME network, to share one or a small
number of MediaSense clusters. In this arrangement, there is a chance that two
calls from different Unified Communications Manager clusters carry the same
pair of xRefCi values when they reach MediaSense. This would cause those two
calls to be incorrectly recorded or not recorded at all. (The statistical
probability of this sort of collision is incredibly small and need not be
considered.) This arrangement, therefore, is fully supported.
If your application uses the startRecord API function that asks
MediaSense to make an outbound call to a specified phone number and begin
recording it, only a single Unified Communications Manager node may be
configured as the call controller for this type of outbound call.
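A minimal sketch of such a request appears below. The resource path and parameter names are placeholders for whatever the MediaSense API reference specifies for startRecord; the outbound call itself is placed through the single Unified Communications Manager node configured as the call controller.

    # Sketch: ask MediaSense to place an outbound call to a number and record
    # it. Path, port, and parameter names are placeholders; see the MediaSense
    # API reference for the actual startRecord request format.
    import requests

    def start_outbound_recording(cluster_host: str, number: str,
                                 api_user: str, api_password: str) -> None:
        response = requests.post(
            f"https://{cluster_host}:8440/ora/startRecord",   # placeholder path
            json={"requestParameters": {"address": number}},  # placeholder body
            auth=(api_user, api_password),
            timeout=10,
        )
        response.raise_for_status()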
Calls Managed by
Unified Border Element Dial Peer
For Unified Border
Element recording, the situation is more flexible. One or more Unified Border
Element systems may direct their forked media to one or more MediaSense
clusters. There is no chance of a call identifier collision because MediaSense
uses Unified Border Element's Cisco-GUID for this purpose, and that GUID is
globally unique. However, because MediaSense clusters do not share the load
with each other, Unified Border Element must
distribute the load with its recording invitations. This can be accomplished
within each Unified Border Element by configuring multiple recording dial peers
to different clusters so that each one is selected at random for each new call.
Alternatively, each Unified Border Element can be configured with a preference
for one particular MediaSense cluster, and other mechanisms (such as PSTN
percentage allocation) can be used to distribute the calls among different
Unified Border Element devices.
If your goal is to provide failover among MediaSense clusters rather than load
balancing, see the discussion of MediaSense cluster failover.
Cisco provides an
OVA virtual machine template in which the minimum size storage partitions and
the required CPU and memory reservations are built in. VMware ESXi enforces
these minimums and ensures that other VMs do not compete with MediaSense for
these resources. However, ESXi does not enforce disk throughput minimums.
Therefore, you must still ensure that your disks are engineered such that they
can provide the specified IOPS and bandwidth to each MediaSense VM.
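Because ESXi will not flag an undersized disk subsystem, it is worth making this check explicit when planning a host. The sketch below totals a per-VM IOPS requirement across the MediaSense VMs sharing a datastore and compares it with the array's sustained capability; the per-VM figure shown is a placeholder, and the real requirement should be taken from the Compatibility Matrix.

    # Sketch: confirm that a datastore can sustain the disk I/O needed by the
    # MediaSense VMs placed on it. The per-VM IOPS figure is a placeholder,
    # not a Cisco specification.
    IOPS_PER_MEDIASENSE_VM = 900  # placeholder; use the documented requirement

    def datastore_is_sufficient(num_mediasense_vms: int,
                                array_sustained_iops: float,
                                other_workload_iops: float = 0.0) -> bool:
        required = num_mediasense_vms * IOPS_PER_MEDIASENSE_VM + other_workload_iops
        return array_sustained_iops >= required

    # Example: two co-resident MediaSense VMs plus 300 IOPS of other workload
    # on an array engineered for 2,500 sustained IOPS.
    print(datastore_is_sufficient(2, 2500, other_workload_iops=300))  # True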
The OVA template contains a drop-down list that allows you to select one of the
VMware virtual machine template options supported in each release for
MediaSense servers. These template options specify (among other things) the
memory, CPU, and disk footprints for a virtual machine. For each virtual
machine you install, you must select the appropriate template option on which
to begin your installation. You are required to use the Cisco-provided
templates in any production deployment. For low volume lab use, it is possible
to deploy on lesser virtual equipment, but the system constrains its own
capacity and displays warning banners indicating that you are running on
unsupported hardware.
All disks must
be "thick" provisioned. Thin provisioning is not supported.
Every MediaSense node has a
210 GB media partition by default. Additional MediaSense vDisks can be added to
increase storage up to the specified limit.
The template options provided are summarized below.
Each template option provisions 80 GB for each of the first two vDisks plus a
media vDisk. Recording storage may be expanded up to 12 Terabytes per primary,
secondary, or expansion node, and the maximum total recording storage may be
either split across the cluster or localized to one node. Expansion nodes are
supported only with the 7 vCPU templates, and all expansion servers must use
the expansion template option.
The primary, secondary, and expansion OVA template options provision the minimum 210 GB by
default. Additional space may be added before or after MediaSense software
installation. Once provisioned, recording space may never be reduced. The total
amount of media storage across all nodes may not exceed 60 Terabytes on 5-node
clusters or 24 Terabytes on 2-node clusters, and is further limited on UCS-E
deployments depending on the available physical storage space.
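Those limits lend themselves to a simple pre-deployment check. The sketch below validates a planned per-node media allocation against the 210 GB default, the 12 Terabyte per-node ceiling, and the 60 Terabyte (5-node) or 24 Terabyte (2-node) cluster-wide ceilings quoted above; it is a planning aid written for this guide, not a Cisco tool, and it does not model the additional UCS-E physical-storage limits.

    # Sketch: validate a planned media-storage layout against the limits
    # quoted above (210 GB default per node, 12 TB per node, 60 TB per
    # 5-node cluster, 24 TB per 2-node cluster). Planning aid only.
    MIN_NODE_GB = 210
    MAX_NODE_GB = 12 * 1024
    MAX_CLUSTER_GB = {2: 24 * 1024, 5: 60 * 1024}

    def check_storage_plan(node_media_gb: list[float]) -> list[str]:
        problems = []
        for index, gb in enumerate(node_media_gb, start=1):
            if gb < MIN_NODE_GB:
                problems.append(f"node {index}: below the {MIN_NODE_GB} GB default media partition")
            if gb > MAX_NODE_GB:
                problems.append(f"node {index}: exceeds the 12 TB per-node limit")
        # The guide quotes cluster-wide limits for 2-node and 5-node clusters.
        cluster_limit = MAX_CLUSTER_GB[2] if len(node_media_gb) <= 2 else MAX_CLUSTER_GB[5]
        if sum(node_media_gb) > cluster_limit:
            problems.append("cluster total exceeds the documented cluster-wide limit")
        return problems

    # Example: a 5-node cluster with 12 TB on two nodes and the default elsewhere.
    print(check_storage_plan([12288, 12288, 210, 210, 210]) or "plan is within limits")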
The primary and secondary node 2 vCPU and 4 vCPU templates are suitable for UCS-E blade
deployments, although they can also be used on larger systems. Most supported
UCS-E blades have more physical disk space available than the VM template
allocates; the unused space may be used for recorded media or uploaded media.
For rack-mounted servers, two storage alternatives for recorded data are currently supported:
Physically attached disks
Fibre Channel-attached SAN
Network Attached Storage (NAS) is not supported in any MediaSense configuration, and SAN
storage is not supported on UCS-E configurations.
Depending on the
hardware model and options purchased, any single node can offer up to 12 TB of
storage with a maximum of 60 TB of storage across five servers. It is not
necessary for all servers to be configured with the same number or type of
virtual disks. (See the
Compatibility Matrix for
detailed specifications and other storage configuration requirements.)
This section is
applicable to UCS C-series servers only.
MediaSense must be
configured with RAID-10 for the database and OS disks and either RAID-10 or
RAID-5 for media storage. Using RAID-5 results in hardware savings. It is
slower, but fast enough for media storage.
Installing ESXi on the disk drives is not generally a recommended practice,
because the RAID-10 group would then have to hold the ESXi hypervisor as well
as the MediaSense application. All of the TRC configurations for UCS C-series
servers include an internal SD card that is large enough to contain ESXi. It is
therefore required that ESXi be installed on the SD card and that the
MediaSense application be installed on the remaining disk drives.
MediaSense Virtual Machines per Server
As of Release
9.0(1), MediaSense is no longer required to be the only virtual machine running
on a physical server. Other applications can share the server, as long as
MediaSense is able to reserve the minimum resources that it requires.
You can also
deploy multiple MediaSense VMs on a single host. When doing so, however, ensure
that the primary and secondary nodes do not reside on the same physical host.
For example, if your servers can support two MediaSense VMs, then you might lay
out a 5-node cluster as shown in the figure.
Figure 4. Five Node Cluster with Two MediaSense VMs per Server
If your servers
can support three MediaSense VMs, then you can lay them out as shown in this
figure.
Figure 5. Three MediaSense Virtual Machines per Server
You can determine how many MediaSense VMs a particular server model will support by
referring to the
UC Virtualization Wiki
and by using the number of physical CPU cores as a guide. Models with 8 or more
physical cores can support 1 MediaSense VM; models with 14 or more physical
cores can support 2 MediaSense VMs, and models with 20 or more physical cores
can support 3 MediaSense VMs.
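That guideline reduces to a small lookup, sketched below using exactly the thresholds quoted in this section; per-model support should still be confirmed against the UC Virtualization Wiki.

    # Core-count guideline quoted above: 8+ physical cores -> 1 MediaSense VM,
    # 14+ -> 2, 20+ -> 3. Confirm individual models against the UC
    # Virtualization Wiki before relying on this.
    def max_mediasense_vms(physical_cores: int) -> int:
        if physical_cores >= 20:
            return 3
        if physical_cores >= 14:
            return 2
        if physical_cores >= 8:
            return 1
        return 0

    # Example: a 16-core host supports two co-resident MediaSense VMs, so the
    # primary and secondary nodes must still be placed on different hosts.
    assert max_mediasense_vms(16) == 2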
All servers within a cluster must be in a single campus network. A campus network
is defined as a network in which the maximum round-trip delay between any pair
of MediaSense servers is less than 2 milliseconds. (Some Metropolitan Area
Networks (MANs) may fit this definition as well.)
Other components, however, may connect to the MediaSense cluster over a WAN,
with the following caveats:
In Unified Communications Manager deployments, media forking phones may be
connected to MediaSense over a WAN.
SIP Trunks from Unified
Communications Manager may also be routed over a WAN to MediaSense, but calls
may experience additional clipping at the beginning of recordings due to the
increased round-trip delay.
The connection between
Unified Border Element and MediaSense can be routed over a WAN but affected
calls may experience additional clipping at the beginning of recordings due to
the increased round-trip delay.
The AXL connection between
Unified Communications Manager and MediaSense can be routed over a WAN, but API
and administrator sign-in times may be delayed.
From a high availability
standpoint, API sign-in has a dependency on the AXL link. If that link
traverses a WAN which is unstable, clients may have trouble signing in to the
API service or performing media output requests such as live monitoring,
playback, and recording session download. This applies to remote branch
deployments as well as centralized deployments, and to Unified Border Element
deployments as well as Unified Communications Manager deployments.