COS provides distributed, resilient, high-performance storage and retrieval of binary large object (blob) data. Object storage is distributed across clusters of hardware systems, or nodes. Each storage cluster is resilient against hard drive failure within a node and against node failure within the cluster. Nodes can be added to or removed from a cluster as needed to provide for changes in cluster capacity.
The primary interface for managing COS content is the OpenStack Swift API, with enhancements to improve quality of service when accessing large media objects. COS includes the COS Service Manager (SM) web GUI, which uses REST APIs to simplify COS setup and management. COS also includes a command-line interface (CLI) for remote or programmatic management of content. In addition, COS provides an authentication and authorization service based on the OpenStack Swauth API.
Through its various management interfaces, COS provides access to large media objects, maintains high quality of service, supports cluster management, and coordinates the replication of data across sites to improve resiliency and optimize the physical location of stored data.
COS Release 3.8.2 includes the following new features and enhancements:
Official support for installation on UCS C3260 single-node hardware
Support for small object scaling in cloud DVR (cDVR) applications
COS API extensions to support interaction with Cisco Virtual Media Recorder (VMR)
COS file gateway multi-tenant NFS/CIFS with AD/LDAP
Verified support for scaling up to 50 nodes per COS cluster
Incorporates other minor enhancements and bug fixes
Related Software Products
COS 3.8.2 can be implemented as a managed service of the Cisco Media Origination System (MOS) Platform. In this configuration, COS content is managed through the MOS Service Manager (SM) web GUI.
COS 3.8.2 can also work together with Cisco Videoscape Distribution Suite Video Recording (VDS-VR) through Release 4.1.5 to serve as the storage archive for recorded video programming.
Table 1 provides an overview of the COS features. For full descriptions of these features, see the Cisco Cloud Object Store Release 3.8.1 User Guide.
Table 1 Overview of COS Features
Cisco UCS and CDE Server Support
Supports installation on the following hardware:
– Cisco UCS C3260-4U5 Dual Node Rack Server with 56 x 10 TB hard drives (560 TB total storage), giving 28 drives (280 TB) to each server node
– Cisco UCS C3260-4U4 Single Node Rack Server with 56 x 6 TB hard drives (336 TB total storage), giving all 56 drives to one server node
– Cisco UCS C3260-4U3 Dual Node Rack Server with 56 x 6 TB hard drives (336 TB total storage), giving 28 drives (168 TB) to each server node
– Cisco UCS C3160-4U2 Rack Server with 54 x 6 TB hard drives (324 TB total storage)
– Cisco UCS C3160-4U1 Rack Server with 54 x 4 TB hard drives (216 TB total storage)
– Cisco Content Delivery Engine CDE465-4R4 with 36 x 6 TB hard drives (216 TB total storage)
COS provides a pre-installation script to configure the C3260 server for single or dual node service.
Automated Node Configuration
A single configuration file for all COS nodes can be created in the COS Service Manager GUI (described below) and stored on the PAM or on an FTP or HTTP server, and then downloaded by the COS initialization routine (cosinit) during installation.
A single downloadable configuration file eliminates the need to configure nodes individually, whether manually or via the COS Service Manager GUI.
COS 3.8.2 lets you specify the URL of a configuration file to be used at installation to automatically configure the node according to a predefined template.
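The download step that cosinit performs at installation can be sketched as follows. This is illustrative only: the function name, destination path, and file contents are placeholders, not the actual cosinit implementation.

```python
import urllib.request

def fetch_node_config(url, dest="/tmp/cos_node.conf"):
    """Download a COS node configuration file from an HTTP or FTP URL.

    Hypothetical sketch of the fetch cosinit performs internally;
    the destination path is illustrative.
    """
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(dest, "wb") as f:
        f.write(data)
    return dest
```

Because a single file configures every node from one template, the URL can be baked into an installation script rather than configured per node.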
Intel Preboot Execution Environment (PXE) Support
PXE can be used to download a network bootstrap program (NBP) to remotely install a COS client over a network.
Improved TCP Transmission
COS 3.8.2 includes optimizations to improve TCP transmit performance.
Small Object Support
For cloud DVR and similar applications, COS 3.8.2 provides Small Object Support to efficiently manage storage of many small files representing media segments.
COS Service Manager GUI
Lets you quickly and easily access many COS deployment, monitoring, and alarm functions.
Displays storage, network bandwidth, session count, and alarms for individual COS disks, nodes, services, and interfaces.
Includes a graphical display of deployment statistics and trends related to disk, service, and interface status.
Supports configuration of COS node service interface from the GUI.
Supports setting of resiliency policies on a per-node basis from the GUI.
Includes the COS Configuration Wizard, which guides you through the steps for configuring a COS cluster and (optionally) generating a configuration profile.
High Availability (HA)
COS supports HA as implemented in MOS, providing redundancy for the PAM VMs. The PAM uses both Cisco and third-party components to support HA.
Swauth API
Simple Auth Service API for authentication of Swift operations.
Based on the open-source Swauth middleware API.
Used to manage accounts, users, and account service endpoints.
Swift Object Store API
An implementation of a subset of the continually evolving OpenStack Swift API.
Command executions are authenticated using auth tokens provided by the Swauth service.
Used to create and manage containers and objects for persistent storage in a COS cluster.
Supports archiving of content from Cisco or ARRIS recorders using DataDirect Networks (DDN) Web Object Scaler (WOS) archive objects.
COS 3.8.2 includes support for a Fanout API to enable interactions with other Cisco applications in the Virtualized Video Processing (V2P) suite.
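The Swauth login and Swift object flow described above follows the standard OpenStack conventions: a GET to /auth/v1.0 exchanges credentials for a token and storage URL, and object requests then target /v1/AUTH_account/container/object. The sketch below only constructs the requests; the endpoint, account, and credential values are placeholders, and COS-specific extensions such as the Fanout API are not shown.

```python
def swauth_login_request(auth_endpoint, user, key):
    """Build the Swauth GET /auth/v1.0 request that exchanges
    credentials for an auth token and a storage URL.
    (Placeholder endpoint and credentials.)"""
    return {
        "method": "GET",
        "url": f"{auth_endpoint}/auth/v1.0",
        "headers": {"X-Auth-User": user, "X-Auth-Key": key},
    }

def swift_put_object_request(storage_url, token, container, obj):
    """Build a Swift PUT request that stores an object in a container.
    storage_url already encodes the account (e.g. .../v1/AUTH_<account>)."""
    return {
        "method": "PUT",
        "url": f"{storage_url}/{container}/{obj}",
        "headers": {"X-Auth-Token": token},
    }
```

In practice the login response carries the token in the X-Auth-Token header and the account's storage URL in X-Storage-Url; the client reuses both for subsequent container and object operations.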
Object Store Metadata Resiliency
Metadata resiliency is provided by a distributed and replicated Cassandra document database.
Each COS node participates in the persistence of a subset of the Cassandra database.
Manual administrative intervention is required on node failure.
Object Store Data Resiliency
Data is resilient to both hard drive and COS node failures.
Local Erasure Coding (LEC), or local COS node data resiliency, is provided by local software RAID.
Note By default, LEC is enabled and is configured for two drive failures. We recommend using this default configuration for resiliency.
Distributed erasure coding (DEC) provides data resiliency across nodes, protecting stored content from loss due to node failure.
COS cluster data resiliency is provided by object replication (mirroring). The PAM section of the GUI allows for configuration of both local and remote mirror copies.
Note When configuring local mirroring for resiliency, we recommend using no more than one local mirror copy.
Supports configuration of mixed resiliency policies (local erasure with remote mirroring) via the GUI.
Alarms are available for loss of storage.
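The recovery idea behind erasure coding can be illustrated with a toy single-parity (XOR) scheme, conceptually similar to RAID 5: one parity block allows any one lost data block to be rebuilt. COS's actual LEC and DEC coding parameters are not documented here, so this example is purely conceptual.

```python
def xor_parity(blocks):
    """Compute an XOR parity block over equal-length data blocks."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover_missing(blocks, parity):
    """Rebuild the single missing block (passed as None) by XOR-ing
    the parity block with every surviving data block."""
    missing = parity
    for b in blocks:
        if b is not None:
            missing = bytes(x ^ y for x, y in zip(missing, b))
    return missing
```

Production erasure codes (e.g. Reed-Solomon) generalize this to tolerate multiple simultaneous failures, which is what allows LEC's default two-drive-failure protection.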
Management Interface Bonding
Supports defining two node management interface ports as a primary-backup pair.
Service Load Balancing
COS cluster load balancing is provided by DNS round-robin of an FQDN to multiple physical IPv4 addresses hosted by COS nodes.
Optimal load balancing is provided by extensions to the Swift API through the implementation of HTTP redirect.
Remote smoothing facilitates load balancing by moving content to a new node when it is added to a cluster.
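A client can observe the round-robin behavior by resolving the cluster FQDN to its full set of A records. This sketch uses the standard Python resolver; the cluster FQDN itself is a placeholder.

```python
import socket

def cluster_addresses(fqdn):
    """Return the deduplicated set of IPv4 addresses DNS publishes
    for a cluster FQDN. With round-robin DNS, each COS node's
    service interface address appears in this list."""
    _, _, addrs = socket.gethostbyname_ex(fqdn)
    return sorted(set(addrs))
```

Successive lookups rotate the order of the returned addresses, spreading new client sessions across nodes; the HTTP-redirect extension then refines placement per request.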
Endpoint and Cluster Support
Each COS service instance can have its own endpoint, cluster, and asset redundancy policy.
Each COS endpoint and deployment can be enabled and disabled individually and dynamically, and each has its own AppStatus message for reporting SLA status.
If an endpoint is enabled or disabled, only the network interfaces of the COS nodes attached to the endpoint or cluster are added to or removed from the DNS.
COS AIC Client Management
The COS application instance controller (AIC) Client process is monitored by the monit process that runs on each COS node; if the AIC Client is not running, monit restarts it.
The COS AIC Client process creates a PID file that is added to the monit script so it can be monitored and restarted.
Command-line scripts support stopping and restarting the AIC Client process manually, bypassing the normal automatic restart process.
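The liveness check that monit performs against the AIC Client's PID file can be sketched as follows; the PID-file path and function name are illustrative, not the actual monit implementation.

```python
import os

def pid_running(pid_file):
    """Return True if the process named in a PID file is alive.

    Mirrors the check a monitor makes before deciding to restart a
    service: read the PID, then probe it with signal 0, which tests
    for existence without delivering a signal.
    """
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)  # raises OSError if no such process
        return True
    except (OSError, ValueError):
        return False
```

A stale or malformed PID file is treated the same as a dead process, which is what triggers the automatic restart path.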
Node Decommissioning Paused for Maintenance Mode
If a node is in the process of being decommissioned and any node in its cluster is placed in Maintenance mode, the decommissioning process is paused.
Notes on Cisco UCS C3260 Support
This release extends support for the UCS C3260 platform, which supports up to two compute nodes and up to 56 storage disks per chassis.
COS 3.8.2 provides a pre-installation script to enable setup of one or two COS nodes on a UCS C3260 before proceeding with installation of COS software on each COS node configured.
– If a single node is configured, we recommend configuring the node with either 28 or 56 disks installed.
– If two nodes are configured, we recommend installing all 56 disks. The pre-installation script will assign 28 disks to each node.
Following installation, you must select one of three available storage bundles for each node during cosinit:
– UCS C3260-4U5 (28 disks per node): Select this bundle if you configured a dual COS node setup with 28 x 10 TB disks each.
– UCS C3260-4U4 (56 disks per node): Select this bundle if you configured a single COS node with 56 disks.
– UCS C3260-4U3 (28 disks per node): Select this bundle if you configured a single COS node with 28 disks installed, or a dual COS node setup with 28 x 6 TB disks each.
Note Knowing which storage bundle is configured allows the system to more accurately report disk issues, such as bad or missing disks, after the node is up and running.
In a dual node setup, the web GUI displays the status of only those disks assigned to a particular node:
– Node1 will list Cisco Disk 01-28.
– Node2 will list Cisco Disk 29-56.
On each COS node, eth0 and eth1 are bonded to a bond0 management interface. This differs from the UCS-C3160, where eth0 and eth3 are bonded to a bond0 management interface.
For full details, see Deploying COS in the Cisco Cloud Object Store Release 3.8.1 User Guide.
Note COS 3.8.2 does not support automatic failover of Cassandra working sets in the event of COS node failure. Manual administrative action is required to recover a lost COS node in the event that a COS node cannot be returned to service in a timely manner.
Table 2 lists the hardware models that fully support installation of COS 3.8.2.
Table 2 COS 3.8.2 Supported Hardware

Model            | Max HDD Capacity | Max Total Storage | SSDs Used by OS and COS | Other SSDs *
Cisco UCS C3260  | 56 x 10 TB       | 560 TB            | 4 x 480 GB              |
Cisco UCS C3260  | 56 x 6 TB        | 336 TB            | 2 x 480 GB              |
Cisco UCS C3260  | 56 x 6 TB        | 336 TB            | 4 x 480 GB              |
Cisco UCS C3160  | 54 x 6 TB        | 324 TB            | 2 x 400 GB              | 1 x 120 GB
Cisco UCS C3160  | 54 x 4 TB        | 216 TB            | 2 x 400 GB              | 1 x 120 GB
Cisco CDE465-4R4 | 36 x 6 TB        | 216 TB            | 2 x 480 GB              |
* Used by previous COS releases to store a crash partition. Not used by COS 3.8.2. See Crash Partition Location for details.
Note You can convert a C3160 to a C3260 in the field. For details, see Migrating a Cisco UCS C3160 Server to a Cisco UCS C3260 Server in the Cisco UCS C3260 Rack Server Installation and Service Guide.
For hardware installation instructions and related details, see the following:
Cisco UCS C3260 Rack Server Installation and Service Guide
Cisco UCS C3160 Rack Server Installation and Service Guide
COS 3.8.2 can operate as a managed service of Cisco MOS, in which case it uses certain MOS HTTP interface components as well as the MOS Document Store for system management. See the Cisco Media Origination System User Guide for your MOS release.
Note COS 3.8.2 has been tested for compatibility with MOS Release 2.7. Later releases of COS are expected to be compatible with later versions of MOS. Contact Cisco for the latest information.
COS 3.8.2 supports a Swift and Swauth API environment, and also supports an HTTP-based API for cluster management.
COS 3.8.2 does not come pre-installed on compatible UCS or CDE hardware. Instead, COS software is provided as a downloadable ISO image that includes the base (CentOS) distribution of Linux along with all of the additional rpm packages needed by a COS node. For installation instructions, see the Cisco Cloud Object Store Release 3.8.1 User Guide.
Note COS Release 3.8.2 does not support upgrade or downgrade from COS 3.5.2 or any earlier release.
Crash Partition Location
When installed on a C3160, COS release 3.5.1 created a crash partition on one of the SSDs at the rear of the chassis. With COS Release 3.8.2, the location of the crash partition depends on the node hardware, as follows:
When installed on a C3260, COS 3.8.2 creates a crash partition along with other system partitions on the software RAID SSDs at the rear of the chassis.
When installed on a C3160, COS 3.8.2 creates a crash partition along with other system partitions on the RAID system drives, which are the SSDs in chassis slots 55 and 56.
These locations assume a fresh installation and not an upgrade (not supported in COS 3.8.2 in any case).
When starting CServer for the first time, enter the command service cserver start at the CLI prompt as shown in the following example:
[root@cos-node ~]# service cserver start
Starting CServer using the command service cserver start -C (or -c) results in removal of all content previously stored on the drives in the node. Do not add the -C (or -c) option unless you intend to wipe all existing content from the drives.
Upgrading to a Newer COS Build
The following upgrade procedure applies when updating from a COS 3.7.0 pre-release build (3.7.0-b189 or later) or any COS 3.8.1 build.
For a single-node installation, or for a multi-node cluster using only a mirroring or local erasure coding (LEC) resiliency policy, perform the following upgrade steps on the node to be updated.
For multi-node clusters that have a Distributed Erasure Coding (DEC) resiliency policy enabled, shut down COS on all nodes prior to updating the nodes, and bring nodes back online only after updating all nodes.
Step 1 Obtain the full ISO image cos_full-3.8.2-b5-x86_64.iso from the COS software download page on the Cisco website.
Step 2 Mount the full ISO image and locate the cos_repo ISO, cos_repo-3.8.2-b5-x86_64.iso, within it.
Step 3 Place a copy of the cos_repo ISO in the root directory, /, of the COS server to be updated.
Step 4 If the cserver service is running on the node, shut it down.
service cserver stop
Note For multi-node clusters with DEC resiliency policy, perform step 4 on all nodes before proceeding to step 5.
Step 5 Remove all cos*local.repo files located at /etc/yum.repos.d.
rm -f /etc/yum.repos.d/cos*local.repo
Step 6 Mount the cos_repo ISO at /mnt/cdrom.
mount -o loop /cos_repo-3.8.2-b5-x86_64.iso /mnt/cdrom
Step 7 Set up the local COS yum repository using the provided script.
Step 8 Clean the yum database.
yum clean all
Step 9 Update the COS installation.
yum -y update
Step 10 Reboot the node.
Note For multi-node clusters with DEC resiliency policy, perform step 10 on each node after all nodes have been updated.
Caveats describe unexpected behavior in COS software releases. Severity 1 caveats are the most serious; severity 2 caveats are less serious; severity 3 caveats are moderate, and only selected severity 3 caveats are included in the caveats document.
Caveat numbers and brief descriptions for the Cisco COS 3.8.2 release are listed in this section.
Open Caveats for Cisco COS Release 3.8.2
Table 3 lists the open issues in the COS 3.8.2 release.
Step 2 At the Log In screen, enter your registered Cisco.com username and password; then, click Log In. The Bug Search page opens.
Note If you do not have a Cisco.com username and password, you can register for them at http://tools.cisco.com/RPF/register/register.do.
Step 3 To search for a specific bug, enter the bug ID in the Search For field, and press Enter.
Step 4 To search for bugs in the current release, specify the following criteria:
Select the Model/SW Family Product Category drop-down list box, then enter Cisco Videoscape Distribution Suite for Television or select the name from the Select from list option.
Select Cisco Videoscape Distribution Suite for Television from the list that displays.
The Cloud Object Store type displays in the Software Type drop-down list box.
Advanced Filter Options—Define custom criteria for an advanced search by choosing one or more filters from the available categories and selecting an appropriate value from each drop-down list. After each selection, the results page automatically reloads below the filters pane. Multiple filters behave like an AND condition.
– Modified Date—Select one of these options to filter bugs: Last Week, Last 30 days, Last 6 months, Last year, or All.
– Status—Select Fixed, Open, Other, or Terminated.
Select Fixed to view fixed bugs. To filter fixed bugs, uncheck the Fixed check box and select the appropriate suboption (Resolved or Verified) that appears below the Fixed check box.
Select Open to view all open bugs. To filter the open bugs, uncheck the Open check box and select the appropriate suboptions that appear below the Open check box.
Select Other to view any bugs that are duplicates of another bug.
Select Terminated to view terminated bugs. To filter terminated bugs, uncheck the Terminated check box and select the appropriate suboption (Closed, Junked, or Unreproducible) that appears below the Terminated check box. Select multiple options as required.
– Severity—Select the severity level.
– Rating—Select the bug’s quality rating: 5 Stars (excellent), 4 or more Stars (good), 3 or more Stars (medium), 2 or more Stars (moderate), 1 or more Stars (poor), or No Stars.
– Support Cases—Select whether the bug Has Support Cases or No Support Cases.
– Bug Type—Select whether the bug is Employee Visible & Customer Visible or Customer Visible Only.
Step 5 The Bug Toolkit displays the list of bugs based on the specified search criteria.
Step 6 You can save or email the current search by clicking the corresponding option.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
This product contains watermarking technology that is licensed from Verimatrix, Inc., and such functionality should not be used or distributed further by you without any additional license(s) required from Verimatrix, Inc.
Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and coincidental.