Cisco Collaboration Systems Release

Unified Communications (UC), Non-UC, and Third-Party Virtual Machines (VMs) Co-residency Troubleshooting TechNote

Document ID: 113520

Updated: May 11, 2012



This document clarifies aspects of the application co-residency support policy for virtualized Cisco Unified Communications (UC)/Collaboration applications. This tech note applies to all UC on UCS and other virtualization hardware options, including UCS Tested Reference Configuration, UCS Specs-based, and HP/IBM Specs-based.



Readers of this document should have knowledge of these topics:

  • UC on UCS solution (Cisco Unified Communications on Cisco Unified Computing System)

  • UCS Tested Reference Configuration hardware

  • Specs-based hardware (UCS, HP or IBM)

  • Virtualization of Cisco Collaboration applications

  • VMware vSphere software

  • Cisco Unified Computing System hardware

Note: See the "Related Information" section of this document for web page links.

Components Used

The information in this document is based on these software and hardware versions:

The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.


Refer to Cisco Technical Tips Conventions for more information on document conventions.

Co-residency and “Quality of Service”

A key principle of both network convergence and virtualization is the sharing of hardware resources.

  • A converged IP network shares network hardware among multiple traffic streams (voice, video, storage access, other data).

  • A virtualized server (or virtualization host) shares compute, storage and network hardware among multiple application virtual machines (VMs).

In both cases, when hardware resources are finite, quality of service is required to protect UC from non-UC applications:

  • QoS in routing and switching network hardware to ensure voice/video network traffic gets the needed bandwidth and protection from delay and jitter.

  • Adherence to UC virtualization rules (e.g. physical/virtual hardware sizing, co-residency policy, etc.) to ensure UC VMs get the needed CPU, memory, storage capacity and storage/network performance.
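The sizing rule above can be illustrated with a short script that checks whether the combined reservations of all co-resident VMs oversubscribe a host. This is a minimal sketch, not a Cisco tool; the host and VM figures are hypothetical examples, not official OVA specifications.

```python
# Hedged sketch: verify that co-resident VMs' combined vCPU and memory
# reservations fit within a host's physical capacity, per the general rule
# that UC VMs must not be starved for resources. All numbers are examples.

def fits_on_host(host, vms):
    """Return True if the VMs' combined reservations fit the host."""
    total_vcpu = sum(vm["vcpu"] for vm in vms)
    total_mem_gb = sum(vm["mem_gb"] for vm in vms)
    return total_vcpu <= host["pcores"] and total_mem_gb <= host["mem_gb"]

host = {"pcores": 16, "mem_gb": 96}            # hypothetical host
vms = [
    {"name": "CUCM-pub",  "vcpu": 4, "mem_gb": 8},
    {"name": "CUC",       "vcpu": 4, "mem_gb": 8},
    {"name": "3rd-party", "vcpu": 8, "mem_gb": 32},
]

print(fits_on_host(host, vms))  # True: 16 vCPU <= 16 cores, 48 GB <= 96 GB
```

A real deployment would compare against the applicable Cisco OVA requirements and VMware admission-control settings rather than these illustrative figures.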

It is impossible for Cisco to test every combination of hardware and application for VM co-residency, particularly for 3rd party app VMs whose behavior may be unpredictable or not clearly defined. Therefore, Cisco only guarantees Cisco UC app VM performance when the VM is installed on a UCS Tested Reference Configuration, and then only when all conditions in the co-residency policy are followed (see the "Related Information" section for links).

For other environments, uncertainty can be reduced by pre-deployment testing, baselining, following general principles of virtualization, and following the rules of Cisco UC virtualization. However, Cisco cannot guarantee that VMs will never be starved for resources or never have performance problems.

Key Support Considerations for Non-UC and 3rd party Virtual Machines

To enable Cisco TAC to effectively provide support when Cisco UC VMs run co-resident with non-UC/3rd-party app VMs, customers must ensure one of the following:

  • Non-UC/3rd party VMs are non-critical and can be temporarily powered down if required to facilitate troubleshooting.

  • If no VMs are non-critical, then spare capacity must be provisioned on virtualization hosts or physical servers for relocation (temporary or permanent) of VMs as solutions to application performance problems. Spare capacity is already a recommended design best practice for redundancy or to provide temporary staging of VMs when maintenance is required on hardware or software. Examples of “spare capacity” are extra “empty” physical servers (to provide “hot-standby” or temporary staging), or existing blade/rack-mount servers not fully utilized.
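The spare-capacity practice described above can be sanity-checked with a simple placement calculation: given the VMs that must be evacuated from a host, confirm the remaining hosts have room to absorb them. This is an illustrative sketch only (a greedy first-fit on vCPU count); real planning would also weigh memory, storage, and vendor co-residency rules. All host names and sizes are hypothetical.

```python
# Hedged sketch: check whether spare hosts can absorb VMs relocated off an
# overloaded or in-maintenance host. Greedy first-fit on vCPU only; a real
# assessment would also consider memory, storage capacity, and IOPS.

def can_absorb(spare_hosts, evacuated_vms):
    """Place each VM on the first spare host with enough free cores.
    Returns True if every VM finds a home."""
    free = [h["free_cores"] for h in spare_hosts]
    for vm in sorted(evacuated_vms, key=lambda v: -v["vcpu"]):
        for i, cores in enumerate(free):
            if vm["vcpu"] <= cores:
                free[i] -= vm["vcpu"]
                break
        else:
            return False  # no spare host could take this VM
    return True

spare = [{"free_cores": 8}, {"free_cores": 4}]   # hypothetical spare capacity
to_move = [{"vcpu": 4}, {"vcpu": 4}, {"vcpu": 4}]
print(can_absorb(spare, to_move))  # True: two VMs fit host 1, one fits host 2
```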

To enable Cisco TAC to effectively provide support when running Cisco UC VMs co-resident with non-UC/3rd-party app VMs, Cisco may require the following activities from the customer for problem diagnosis or resolution:

  • Changes to either the software workload or the physical hardware, to troubleshoot or resolve application performance problems. Examples of when these changes might be required include a UC VM receiving insufficient CPU, memory, network bandwidth, disk capacity, or storage IOPS from the hardware.

  • Examples of what these changes look like in an actual deployment are listed below.

    • Software: temporary power-down of non-critical VMs to facilitate performance troubleshooting

    • Software: move critical VMs and/or non-critical VMs to alternate virtualization host / physical server as temporary or permanent solution.

      • Temporarily reduce the number of virtual machines running on a host if Cisco deems necessary for troubleshooting purposes.

      • Permanently reduce the number of virtual machines running on a host if Cisco determines the host is overloaded.

      • Splitting a dense UC app VM into multiple less-dense VMs, then moving those less-dense VMs to an alternate host; for example, splitting a CUCM 10K-user OVA into multiple CUCM 7.5K-user OVAs, then relocating some of those 7.5K-user OVAs.

    • These approaches allow reducing the software workload on an overloaded virtualization host / physical server, so that the workload is no longer starved for hardware resources.

  • Hardware: additions/upgrades to "fix" an overloaded host as an alternative to powering-down VMs or moving VMs.

    • E.g. adding more physical disks to increase storage capacity and/or provide IOPS

    • E.g. adding more physical memory or more physical CPU cores

    • E.g. adding physical NIC interfaces to address LAN congestion.

    • These approaches allow "upgrading" the overloaded hardware to accommodate the resource-starved software workload.
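The OVA-splitting step above comes down to simple arithmetic: how many smaller-capacity VMs are needed to cover the user load of one dense VM. A minimal sketch, using the document's own 10K/7.5K-user example (other sizes would come from the official OVA specifications):

```python
# Illustrative arithmetic only: replacing one dense UC app VM with enough
# smaller-capacity VMs to carry the same user load, e.g. one 10,000-user
# CUCM OVA split into 7,500-user OVAs.
import math

def ovas_needed(total_users, users_per_ova):
    """Smallest number of less-dense OVAs that covers the user load."""
    return math.ceil(total_users / users_per_ova)

print(ovas_needed(10_000, 7_500))  # 2 smaller VMs cover 10K users
```

The resulting smaller VMs can then be spread across hosts with free capacity, which is what makes the split useful for relieving an overloaded host.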

Cisco's provision of support is contingent upon the customer maintaining a current and fully paid support contract with Cisco.

Related Information
