Open Media Distribution (OMD) Introduction

Cisco Open Media Distribution (OMD) is a suite of products that enables service providers to efficiently distribute and cache multiscreen video to managed and unmanaged devices on managed and unmanaged networks. The Cisco OMD suite consists of Cisco Media Streamer and Cisco Media Broadcaster. Cisco Media Streamer is a content delivery platform deployed within service provider networks that cost-effectively delivers immersive multiscreen video experiences from a CDN to managed and unmanaged devices across telco, cable, and mobile access networks. Cisco Media Broadcaster provides service providers with a scalable and cost-effective live adaptive bit rate (ABR) streaming solution for the in-home primary screen.

Content Delivery Network (CDN) and Cisco Media Streamer Overview

This section provides an overview of what a content delivery network (CDN) is and describes the Cisco Media Streamer solution, which is part of the Cisco Open Media Distribution (OMD) Suite.

What is a CDN?

A CDN is a system of distributed servers that delivers content to a user based on the geographic location of the user, the origin of the requested content, and a content delivery service. The goal of a CDN is to deliver streaming content efficiently over HTTP/HTTPS, serving content to end users with high availability and high performance while minimizing the traffic on the network.

At its core, a CDN performs two essential functions:

  • It caches content at the edge of the network, closer to end users, to reduce the IP video traffic that needs to traverse the core network, delivering a higher quality experience to viewers.

  • It positions multiservice, multiprotocol content streaming capabilities at the network edge, allowing the operator to adapt video content for virtually any IP video device close to the user that is consuming it.

According to the Cisco Visual Networking Index, by 2020 there will be 11 billion connected video devices, and 82 percent of IP traffic will be video. To meet that demand, today’s CDN infrastructures must evolve to scale cost-effectively, accelerate feature velocity, and deliver simplified and open management tools. By adopting a cloud architecture and agile software development methodology and using best-in-class open-source software, Cisco Media Streamer provides an open and flexible CDN platform that service providers can deploy to deliver the multiscreen Internet video quality that consumers expect.

Cisco Media Streamer Overview

The Cisco Media Streamer content delivery platform is designed to deliver immersive multiscreen video experiences to managed and unmanaged devices across telco, cable, and mobile access networks. Media Streamer scales cost-effectively to distribute terabits per second (Tbps) of live, on-demand, and time-shifted video. It enables service providers to compete with over-the-top (OTT) video offerings and generate revenue from wholesale CDN services within their infrastructure. Media Streamer is the foundational IP delivery platform for Cisco’s Infinite Video Platform, which provides comprehensive consumer video experiences.

Media Streamer includes all the core elements of management, request routing, load balancing, caching, and analytics to deliver HTTP and HTTPS content at scale and to easily integrate into your network and middleware. Media Streamer builds on Cisco’s more than 10 years of CDN expertise.

Media Streamer caches and delivers web content, software, and streaming media with support for media players using the following HTTP streaming protocols: Apple HTTP Live Streaming (HLS), Microsoft HTTP Smooth Streaming (HSS), Adobe HTTP Dynamic Streaming (HDS), and MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH). Media Streamer supports video on demand (VoD), live video, time-shifted TV (TSTV), progressive download, secure download, and small object caching from a common high-performance HTTP cache. Media Streamer uses sophisticated cache-selection algorithms based on client location, cache availability, cache load, and the content requested.
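
As a rough illustration only, the following Python sketch scores candidate caches by distance, load, and content affinity; the field names, weights, and scoring formula are assumptions for this example, not the actual Media Streamer algorithm.

  import math

  def score(cache, client, requested_url):
      """Lower is better. The weights here are illustrative only."""
      distance = math.hypot(cache["lat"] - client["lat"],
                            cache["lon"] - client["lon"])   # client proximity
      load_penalty = cache["load"] * 100                    # load is 0.0-1.0
      # Content affinity: prefer a cache that likely already holds the object.
      affinity_bonus = -50 if requested_url in cache["recently_served"] else 0
      return distance + load_penalty + affinity_bonus

  def select_cache(caches, client, requested_url):
      healthy = [c for c in caches if c["available"]]
      return min(healthy, key=lambda c: score(c, client, requested_url))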

The primary components of a Cisco Media Streamer solution are:

  • OMD Director: Cisco OMD Director is a cloud-based CDN management system that provides integrated provisioning, monitoring, analytics, alerting, and role-based management. Cisco OMD Director is implemented to be virtualized and further optimized with microservices in containers. OMD Director is the primary user interface of Cisco Media Streamer.

  • OMD Director Portal: An OMD Director portal is optional and does not replace the primary OMD Director Master Controller and Worker nodes. It is intended for customers with downstream Resellers or Content Providers who require access to CDN analytics. An OMD Director portal instance is a separate setup from the primary OMD Director installation, with its own user database. It provides a Director GUI that is limited to working with the CDN analytics dashboards provided by Insights. You cannot manage or view the CDN configuration from an OMD Director portal instance.

  • Media Streamer Core Components:

    • OMD Core Traffic Server: A Traffic Server is an HTTP/S proxy cache that is deployed as either an “edge” or “mid-tier” cache server to form a two-tiered CDN hierarchy. Traffic Servers are fast, scalable, extensible using plug-ins, and HTTP/1.1 compliant.

    • OMD Core Traffic Router: A Traffic Router is a CDN load balancer that redirects HTTP(S) client requests to an edge cache to ensure that the end user is connected to the optimal cache. The Traffic Router determines which cache to redirect a request to based on client proximity, cache load, and content affinity. The Traffic Router is authoritative for the CDN domain, and it implements both DNS and HTTP routing.

    • OMD Core Traffic Ops: Traffic Ops is an open source management system used to configure advanced settings of the Media Streamer CDN.

  • OMD Insights: OMD Insights is an application that provides CDN application level insights including operations, content popularity, user consumption, and quality of service (QoS). OMD Insights is based on Splunk and aggregates the log data from all of the Traffic Servers and Traffic Routers.

  • OMD Monitor: OMD Monitor is an application that provides in-depth server monitoring, threshold crossing, and alarming based on CPU utilization, port utilization, temperature, disk I/O, and other detailed server metrics.

Key Media Streamer Terminology

This section describes some of the key terms you need to be familiar with when provisioning and managing a Media Streamer CDN.

  • Delivery service: A software structure in Media Streamer that maps an Origin Server to Traffic Servers using an FQDN. It defines a URL that is used to represent an Origin Server and which cache groups can serve content from that server. The Delivery Service also contains configuration parameters that dictate how content is ingested, distributed, and delivered to client devices.

  • Cache group: A cache group is a logical grouping of caches (Traffic Servers) used to provide high availability. A cache group has a single set of geographic coordinates even if the caches that make up the group are in different physical locations. To provide site-level redundancy, caches in a cache group should be in separate physical locations. Cache groups are defined to contain either edge caches or mid-tier caches. A cache group serves a particular part of the network as defined in the coverage zone file.

  • Cache: Media Streamer uses a two-tiered CDN Traffic Server (cache) hierarchy:

    • Edge caches: Provide edge caching, content streaming, and download to subscriber IP devices. Traffic routers redirect client requests to edge caches based on geolocation, server availability, server load, and server cache content to provide efficient system-wide load balancing. Edge caches are organized into cache groups. Each edge cache group is configured with a single mid-tier parent cache group and optionally a secondary mid-tier parent cache group for failover.

    • Mid caches: Provide content ingest and storage functionality. When an edge cache does not contain the content requested by the client, the edge cache will proxy the request to a mid-cache server, based on the parent cache group assigned to the edge cache. If the mid-cache server does not contain the content, it is responsible for fetching the content from the Origin Server. Mid-tier caches are also organized into cache groups. Mid-tier cache groups may serve (be a parent to) multiple edge cache groups.
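
The request flow through this two-tier hierarchy, including failover to a secondary parent, can be sketched as follows. This is a conceptual model only; the class and field names are assumptions, not Traffic Server internals.

  from dataclasses import dataclass, field
  from typing import Callable, Optional

  @dataclass
  class CacheGroup:
      healthy: bool = True
      cache: dict = field(default_factory=dict)
      primary_parent: Optional["CacheGroup"] = None
      secondary_parent: Optional["CacheGroup"] = None
      fetch_from_origin: Callable[[str], bytes] = lambda url: b""

  def fetch(edge: CacheGroup, url: str) -> bytes:
      """Serve from the edge cache, else walk up the cache hierarchy."""
      if url in edge.cache:
          return edge.cache[url]                      # edge cache hit
      for parent in (edge.primary_parent, edge.secondary_parent):
          if parent is None or not parent.healthy:
              continue                                # fail over to secondary
          if url not in parent.cache:                 # mid-tier miss: go to origin
              parent.cache[url] = parent.fetch_from_origin(url)
          edge.cache[url] = parent.cache[url]         # fill the edge cache
          return edge.cache[url]
      raise RuntimeError("no mid-tier parent cache group available")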

Key Functions of Media Streamer

The following functions are the key functions that Media Streamer provides:

  • Ingest and Distribution:

    • Dynamic: When an edge cache receives a client request, the cache server checks its local cache for the content. If the edge cache does not contain the content, it requests the content from a mid-tier cache. The mid-tier cache checks its local cache for the content. If the mid-tier cache does not contain the content, it will request the content from the Origin Server.

    • Preposition: Typically content is not stored on the cache until a client requests the content. However, in some situations you may want to put content on the cache before it is requested. That is what prepositioning enables you to do. When you configure prepositioning, you define what content to put on the selected caches ahead of time by using an Ingest Manifest file (see the sketch after this list).

  • Delivery: The Traffic Router handles client requests for content and determines the best Cache Group to deliver the content based on client proximity, cache load, and content affinity.

  • Management: OMD Director is the primary Media Streamer management interface. It is a centralized system management application that interacts directly with all of the key components of Media Streamer to enable remote monitoring, management, and diagnostics of the components of the Media Streamer solution from a single tool.
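
The following sketch shows what prepositioning might look like, assuming a hypothetical Ingest Manifest that is simply a list of URLs to prefetch; the real manifest format is defined by Media Streamer and is not reproduced here.

  import urllib.request

  def preposition(manifest_path, cache_store):
      """Fetch each URL listed in a (hypothetical) ingest manifest ahead of
      any client request and place the content in the local cache store."""
      with open(manifest_path) as f:
          urls = [line.strip() for line in f if line.strip()]
      for url in urls:
          with urllib.request.urlopen(url) as resp:
              cache_store[url] = resp.read()   # cached before the first request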

The following figure shows a typical Cisco Media Streamer deployment.

Figure 1. Media Streamer Typical Deployment

Cisco OMD Director Overview

Cisco Open Media Distribution (OMD) Director is a cloud-based element management system that provides provisioning, monitoring, analytics, alerting, and role-based management for Cisco Media Streamer. OMD Director is a common management system for both Cisco Media Streamer and Cisco Media Broadcaster (multicast ABR live distribution to managed homes). When Cisco Media Broadcaster is deployed with the Cisco Media Streamer solution, a common OMD Director instance is used to provision, monitor, and analyze both the unicast CDN of Media Streamer and the distribution of multicast streams by Media Broadcaster.


Note

For more information on Cisco Media Broadcaster, see Cisco Media Broadcaster Overview.


Cisco OMD Director is the primary user interface for the Cisco Media Streamer solution. It provides a unified interface to seamlessly manage the different components of the Media Streamer platform. OMD Director enables the administrator to rapidly provision new CDNs and caches, and integrates monitoring, alerts, and real-time analytics into a single management tool.

Cisco OMD Director is implemented to be virtualized and further optimized with microservices in containers. Microservices provide modularity between components of the system, enabling individual microservices to be replaced, upgraded, or scaled independently of one another. Microservices never share data directly via a database; they communicate only through their external APIs, and each microservice is responsible for maintaining its own data store.

OMD Director is made up of the following microservices, which are packaged as Docker containers:

  • GUI: A user-facing application that makes calls to the provisioning and monitoring services to enable operators to perform various configuration tasks, monitor the data being reported by various system components, and troubleshoot the CDN.

  • Provisioning: This microservice is responsible for the software installation and configuration of all servers under the control of OMD Director, which includes the instantiation of new virtual machines (VMs) as needed for the initial system setup or elasticity purposes. It provides backend support for CDN configuration and provisioning operations available via the GUI.

  • CDN Manager: The CDN Manager microservice contains much of the business logic inside of OMD Director. This microservice is responsible for the initial setup of the CDN and ongoing updates to CDN topology and delivery services. The CDN Manager microservice communicates with potentially many downstream services such as the Provisioning microservice, Traffic Ops, Sensu, Site24, Insights, and more to accomplish these tasks.

  • Monitoring: This microservice is a set of utilities for evaluating the health and performance of the CDN. It exposes an API that aggregates data from the CDN application, Server Health and Element Level Monitoring, Service Level Agreement (SLA) Monitoring, Business Intelligence Analytics, and Quality of Experience Monitoring. While these sources are external to OMD Director, it is the responsibility of OMD Director to integrate them into a single coherent view of network health.

  • Alarms: This microservice stores and manages alarms and notifications for OMD Director. It reads monitoring data written to an Influx database by clients and generates alerts if the values exceed configured thresholds (see the sketch after this list). It also manages generated alarms and allows users to adjust alarm thresholds.

  • User Management: This microservice is responsible for securing access to the OMD Director APIs. It authenticates user requests and controls which operations users can execute and it maintains an audit log of all operations performed within the User Management microservice.

  • Backup: This microservice is responsible for running the scheduled backups for the OMD Director databases and performing OMD Director database restores upon request.

  • HA: This microservice is responsible for supporting the high availability (HA) functionality in a high availability OMD Director configuration.
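
The threshold check performed by the Alarms microservice can be pictured with a sketch like the following. The metric names and threshold values are assumptions for illustration; the real service reads its input from the Influx database rather than from an in-memory list.

  THRESHOLDS = {"cpu_utilization": 90.0, "disk_io_wait": 25.0}  # illustrative

  def evaluate(metric_name, recent_points):
      """Return an alert if the latest value exceeds its configured threshold."""
      limit = THRESHOLDS.get(metric_name)
      if limit is None or not recent_points:
          return None
      latest = recent_points[-1]
      if latest > limit:
          severity = "critical" if latest > limit * 1.1 else "warning"
          return {"metric": metric_name, "value": latest,
                  "threshold": limit, "severity": severity}
      return None

  print(evaluate("cpu_utilization", [42.0, 88.5, 95.2]))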

With containers comes the need to manage them. The individual microservices must be started, stopped, upgraded, and monitored. Docker Engine provides CLI tools to manage containers, but only on a single host. OMD Director uses Kubernetes to manage the Docker containers across the nodes in the OMD Director installation.

OMD Director High Availability

OMD Director supports an Active/Standby HA configuration and supports inter-site redundancy. OMD Director HA provides the following:

  • Prevents the loss of any configuration data due to a hardware, software, or network failure

  • Minimizes the loss of monitoring data, such as time series metrics, alarms, and notifications, due to a hardware, software, or network failure

  • Makes sure alarm emails continue to be sent

  • Continues to support streaming capabilities

In an OMD Director HA configuration, an OMD Director instance runs in either primary or backup mode. If either the primary or the backup instance loses its connection or fails, the remaining OMD Director instance transitions into a detached state.

  • Primary: You can only make configuration changes from the primary OMD Director instance. The primary instance is the authoritative instance for all data. Changes to all configurations made from the primary OMD Director instance are replicated to the backup instance.

  • Backup: While logged in to the OMD Director instance that is running in backup mode, you can only view the configuration, you cannot make any changes to the configuration. Any buttons or links that would allow configuration changes will be disabled on the backup OMD Director instance. If you are logged in to the backup OMD Director instance, the header will show “This OMD Director is currently running in backup mode”.

  • Detached: While an OMD Director instance is running in a detached state, you can view the configuration and you can acknowledge, clear, and comment on alarms. The buttons or links for any actions that are not allowed will be disabled. If you are logged in to an OMD Director instance that is running in the detached state, the header will show “This OMD Director is currently running in detached mode”.

If the primary OMD Director instance fails, you must manually configure the backup OMD Director instance as the new primary OMD Director instance. For information on this procedure, see OMD Director Manual Failover Procedure.

OMD Director Portal

An OMD Director portal instance, which consists of an OMD Director master portal node and an OMD Director worker portal node, is intended for customers with downstream Resellers or Content Providers who require access to CDN analytics. An OMD Director portal instance is a separate setup from the primary OMD Director installation, with its own user database. It provides a Director GUI that is limited to working with the CDN analytics dashboards provided by Insights. You cannot manage or view the CDN configuration from an OMD Director portal instance.

Cisco OMD Monitor Overview

Cisco OMD Monitor is an application that provides in-depth server monitoring, threshold crossing, and alarming based on CPU utilization, port utilization, disk I/O, and other detailed server metrics for a CDN solution. OMD Monitor enables you to monitor your CDN caching performance, including the ability to view key analytics information for all of the CDN edge caches and mid caches, system-level statistics for individual cache servers, and CDN alarms that have been generated.

With Cisco OMD Monitor, a Monitor Control node contains the necessary services to provide the following functions:

  • Collect metrics and check results from a network of Monitor clients

  • Store one month of metric data in an InfluxDB database that is available for querying

  • Generate emails for warning and critical alerts

  • Provide an Uchiwa GUI to manage and check metrics results and alerts

  • Provide a Grafana GUI that displays configurable dashboards with graphs of metrics sourced from an InfluxDB
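
The metric store behind these GUIs can also be queried directly over the standard InfluxDB 1.x HTTP API, as in the following sketch. The host, database, and measurement names are assumptions, not documented OMD Monitor names.

  import requests  # third-party; pip install requests

  def query_metrics(host, database, influxql):
      """Run an InfluxQL query against an InfluxDB 1.x /query endpoint."""
      resp = requests.get(f"http://{host}:8086/query",
                          params={"db": database, "q": influxql})
      resp.raise_for_status()
      return resp.json()

  # Hypothetical query: mean CPU over the last hour from a 'metrics' database.
  result = query_metrics("monitor-node.example.com", "metrics",
                         "SELECT MEAN(value) FROM cpu_percent WHERE time > now() - 1h")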

OMD Monitor Deployment Options

A single OMD Monitor Node can be deployed to collect metrics for deployments that have fewer than 50 clients and do not require high availability. Deploying two or more OMD Monitor Nodes provides a High Availability (HA) solution; however, it is recommended that you deploy three or more OMD Monitor Control Nodes to support HA in a production environment.

When multiple OMD Monitor Nodes are deployed, they work together to form a Highly Available network and any node can be accessed to view metrics. When there are two or more OMD Monitor Nodes in a network, if any one of the nodes loses connectivity, then any of the remaining nodes can be used to monitor network behavior with no service downtime and no loss of data. When connectivity is restored, the reconnected OMD Monitor Node rejoins the HA Monitor network with no operational intervention.

In an HA OMD Monitor deployment, all of the OMD Monitor Nodes communicate with all of the other Monitor Nodes. The clients will send their metrics to any of the OMD Monitor Nodes, as depicted in Figure 2. The clients will roughly distribute evenly across the available OMD Monitor Nodes, independent of data center location, and will periodically switch the Node to which they are sending metrics. Because all of the OMD Monitor Nodes are in communication with each other, all Nodes know about the status of every client in the network.
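
One way to picture this client behavior is the simple rotation sketched below: each client sends to any available Monitor Node and periodically switches targets. This is a conceptual model, not the actual Monitor client implementation.

  import random
  import time

  def metric_sender(nodes, get_metrics, switch_interval=300):
      """Send metrics to a randomly chosen Monitor Node, switching targets
      periodically so clients spread roughly evenly across the nodes."""
      while True:
          node = random.choice(nodes)          # any node can accept metrics
          deadline = time.time() + switch_interval
          while time.time() < deadline:
              send(node, get_metrics())        # hypothetical transport call
              time.sleep(10)

  def send(node, payload):
      print(f"sending {payload} to {node}")    # stand-in for the real sender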

Figure 2. OMD High Level Overview

OMD Insights Overview

Currently, CDNs are used to rapidly and cost-effectively deliver a wide variety of content to numerous endpoints. Each CDN generates huge volumes of data, on the order of gigabytes to terabytes. Given the size of a CDN, a well-defined solution is required to handle the following:

  • Log management

  • Capacity planning

  • Metric derivation to understand QoS and QoE

  • Forecasting exercises

  • Infrastructure management

OMD Insights can help provide these features. OMD Insights is a comprehensive analytics solution that provides meaningful CDN insights, thereby enabling CDN operators and Content Providers to optimize content delivery by accurately measuring QoS and QoE. OMD Insights is based on Splunk and aggregates the log data from all of the Traffic Servers and Traffic Routers.

OMD Insights provides deep analytics of the OMD environment based on mining data from the HTTP log entries of each Traffic Server. This enables OMD Insights to provide comprehensive CDN operational analytics, including analysis of throughput, utilization, and efficiency of video content delivery. It also provides insight into viewer trends. OMD Insights captures real-time data from CDN logs to provide simplified access to actionable data and analytics, so that operations teams can proactively monitor activity and take action before problems occur. It also delivers comprehensive dashboards and trending reports that capture valuable information for CDN capacity planning and delivery optimization. In general, OMD Insights provides transaction logging and application-level analytics.
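
As a simplified illustration of the log mining involved, the sketch below extracts a few fields from a squid-style HTTP access log line of the kind Apache Traffic Server produces by default. The exact OMD log format may differ, so treat the field positions as assumptions.

  def parse_access_line(line):
      """Parse 'timestamp elapsed client result/status bytes method url ...'."""
      fields = line.split()
      cache_result, status = fields[3].split("/")
      return {
          "timestamp": float(fields[0]),
          "elapsed_ms": int(fields[1]),
          "client_ip": fields[2],
          "cache_result": cache_result,     # e.g. TCP_HIT or TCP_MISS
          "http_status": int(status),
          "bytes": int(fields[4]),
          "method": fields[5],
          "url": fields[6],
      }

  line = "1700000000.123 12 203.0.113.9 TCP_HIT/200 65536 GET http://cdn.example.com/seg1.ts"
  print(parse_access_line(line))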

Cisco OMD Insights provides the following:

  • Centralized log repository

  • Configuration of the reporting topology

  • Precanned reports and dashboards to help with forecasting exercises

  • Near real-time dashboards

Splunk is a powerful analytics engine that can turn machine data into valuable insights regardless of the format of the data. OMD Insights uses the power of Splunk to enable customers to get value from their CDN data.

Cisco OMD Insights performs the following essential steps of any analytics solution:

  • Gets logs

  • Stores logs

  • Maintains logs

  • Makes insights available

OMD Insights Functional Structure

The following are the key components of OMD Insights:

  • Deployment Server (DS): Distributes applications for other analytics nodes.

  • Cluster Master (MAS): Monitors the slave application changes and pushes the applications to slave nodes.

  • Summary Search Head (SSH): Schedules and tracks most of the jobs.

  • Search Head (SH): Provides the main GUI or front end for OMD Insights, which hosts the predefined dashboards and reports. It scans connected indexers for logs containing keywords being searched and fetches the results.

  • Indexers (IDX): Store raw and summarized log data. These are also called slaves or peers.

  • Universal Forwarder (UF): Forwards log data to the indexers.

The following are the software modules that comprise Cisco OMD Insights:

  • OpenCDN_IDX: The application for the “Indexer” nodes that stores raw and summarized data

  • OpenCDN_Summarization: The application for the Summary Search Head that schedules and tracks a majority of the jobs

  • OpenCDN_SH: This application runs on the Search Head node and serves as the front end (façade) of the OMD Insights application

  • OpenCDN_DS: The application for the Deployment server that distributes applications for other analytics nodes

  • OpenCDN_Analytics: This application runs on the Search Head node to provide the predefined dashboards and reports.

  • OpenCDN_appnormalize: This is a generic application that contains configurations for search-time field extraction, commonly used macros, external data lookup files, and other details as required. This application is deployed on all analytics nodes.

  • OMD_Lookups: This application contains the CSV and binary (mmdb) files used for lookup.

  • OMD_Search_Customizations: This application contains configurations that are deployment specific.

  • OpenCDN_MAS: The application for the Cluster Master node that monitors the slave application changes and pushes the applications to slave nodes

  • OpenCDN_UF_EC: The application for the Universal Forwarder on the Edge Caches (EC) that will forward log data to the indexers

  • OpenCDN_UF_MC: The application for the Universal Forwarder on the Mid Caches (MC) that will forward log data to the indexers

  • OMD_UF_TR: The application for the Universal Forwarder on the Traffic Routers (TR) that will forward log data to the indexers

Figure 3. OMD Insights System Flow

OMD Insights HA

OMD Insights supports HA with automatic failover. HA for OMD Insights requires the following:

  • Multisite indexer clustering configured

  • Two OMD Insights Search Heads. These can act in Active-Passive mode or Active-Active mode.


    Note

    Even when Active-Active mode is enabled, the custom search artifacts created on one OMD Insights Search Head will not be available on the other OMD Insights Search Head.


  • Two OMD Insights Cluster Masters. One will act as the Primary and the other will act as the Standby.

  • Two OMD Insights Deployment Servers: One will act as the Primary and the other will act as the Standby.

  • Two Summary Search Heads: One will act as the Primary and the other will act as the Standby.

When you configure the OMD Insights installation to install and use HA, OMD Insights uses the following components:

  • Load Balancer for OMD Insights Cluster Master: HAProxy load balancer will be installed on all of the OMD Insights cluster peer nodes: Indexers, Search Heads, and Summary Search Heads. This load balancer will be configured with the IP addresses of the primary and standby Cluster Masters. The standby Cluster Master will be configured as a backup so that requests will only be redirected to the standby when the primary node is down.

  • Load Balancer for OMD Insights Deployment Server: The HAProxy load balancer that is installed on the Indexers, Search Heads, and Summary Search Heads will also be configured with the IP addresses of the primary and standby Deployment Servers. The standby Deployment Server will be configured as a backup so that requests will only be redirected to the standby when the primary node is down.

  • keepalived: keepalived is installed on all of the OMD Insights nodes except for the OMD Insights Indexers. keepalived is used by the primary and standby nodes to communicate with each other. One of the nodes will be in the MASTER (primary) state and the other node will be in the BACKUP (standby) state.

    A virtual IP address is shared between the nodes; this IP address must be separate for each set of nodes. The node that is in the MASTER state will own the VIP and will unicast VRRP messages to the backup node. When there is an application or node failure of the master node, the VRRP advertisements will stop. The backup (standby) node will detect the absence of the VRRP messages and transition to the MASTER state. Emails will be triggered when there is a state transition.

    A custom script periodically checks the status of the Splunk master node. The script checks every 90 seconds to see if the Splunk service is active. If the check fails three consecutive times, this is considered a failure of the master, and the backup node will transition to the MASTER state (see the sketch below). The script waits for three consecutive misses before transitioning the backup to master to prevent transitions from occurring when Splunk restarts due to configuration changes.
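
    A sketch of that check logic follows, assuming the Splunk state is probed with a hypothetical is_splunk_active() helper; the real script may use a different probe command.

      import subprocess
      import time

      CHECK_INTERVAL = 90   # seconds between checks
      MAX_MISSES = 3        # consecutive failures before declaring a failure

      def is_splunk_active():
          """Hypothetical probe; the production script may differ."""
          r = subprocess.run(["systemctl", "is-active", "--quiet", "splunk"])
          return r.returncode == 0

      def watch_master(trigger_failover):
          misses = 0
          while True:
              misses = 0 if is_splunk_active() else misses + 1
              if misses >= MAX_MISSES:   # three misses rules out a mere restart
                  trigger_failover()     # backup node transitions to MASTER
                  return
              time.sleep(CHECK_INTERVAL)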

Figure 4. OMD Insights HA

Cisco Media Broadcaster Overview

The Cisco Media Broadcaster solution transforms unicast video into multicast data that is distributed across network routers to achieve broadcast-efficient video distribution, providing service providers with a cost-effective, scalable solution for streaming HTTP live adaptive bit rate (ABR) video to in-home primary screens. Cisco Media Broadcaster integrates with an existing ABR delivery system without the need to modify the ABR clients or the video headend components, such as the encoder, packager, and content delivery network (CDN).

The Media Broadcaster solution ingests unicast video from the origin servers or CDN, converts the unicast video stream into a multicast data stream, and distributes it to the home router or set-top box (STB) of the managed homes. The home router or STB converts the multicast data stream back to unicast video. The Media Broadcaster solution is implemented to be fully transparent to both the ABR client and the ABR headend.

Because the Media Broadcaster solution is independent of the existing video delivery infrastructure, it can be integrated with any existing video headend or CDN provider. The solution is highly available and scalable and can support millions of viewers at once. The Media Broadcaster solution offers the following benefits:

  • Efficient live ABR video distribution to stream to millions of homes using one IP multicast session

  • Cost reduction through convergence to an all-ABR headend and all-ABR clients

  • Transparent deployment with no change to the ABR video headend infrastructure, CDN, or ABR players

  • Reliable transmission with Forward Error Correction (FEC) or unicast retransmit

  • Optimized network capacity utilization for significant reduction of CDN resources compared to unicast ABR

  • Support for IPv4, IPv6, IGMPv3, and MLDv2 to be compatible with existing multicast standards and networks

  • Flexible customer premises equipment (CPE) strategy with the ability to integrate the MC-Receiver client into an STB or home gateway (HGW)

  • Rapid channel change using unicast client buffer filling from CDN or MC-receiver cache

The primary components of a Cisco Media Broadcaster solution are:

  • OMD Director: Cisco OMD Director is the primary user interface for the Cisco Media Broadcaster solution. It provides a unified interface to seamlessly manage the different components of the Media Broadcaster platform. OMD Director enables the administrator to rapidly configure Broadcasting Groups, including channel lineups and filtering rules, and integrates monitoring into a single management tool.

  • Multicast Controller (MC-Controller): The MC-Controller receives the configuration data for channels and overall system settings (including MC-Receiver configuration and multicast configuration) from OMD Director or a northbound interface, and communicates this configuration data to the other components in the system. The MC-Controller also collects info messages, including statistics and status information from MC-Receiver devices.

  • Multicast Sender (MC-Sender): The MC-Sender is responsible for retrieving ABR content from the CDN and delivering it to the MC-Receivers as multicast streams.

  • Multicast Receiver (MC-Receiver): The MC-Receiver is located on a customer premises equipment (CPE) device that resides in the managed home of the service provider (either a gateway or an STB) and consists of the proxy service and the Multicast Receiver.

  • OMD Insights: OMD Insights is an application that provides deep analytics of the Media Broadcaster environment. It is based on Splunk and aggregates data from logs located on the MC-Controllers and MC-Senders. This enables OMD Insights to provide comprehensive Media Broadcaster operational analytics, including analysis of fetch times, network utilization, and efficiency of video content delivery.

  • OMD Monitor: OMD Monitor is an application that provides in-depth server monitoring, threshold crossing, and alarming based on CPU utilization, port utilization, temperature, disk I/O, and other detailed server metrics.


Note

All of the components in the Cisco Media Broadcaster solution are designed specifically to interoperate with any CDN, provided that the content is formatted correctly.


The following diagram shows the high-level Media Broadcaster topology and main components. In it, the MC-Sender and MC-Controller reside on separate systems, which provides scalability and reliability. This is the recommended configuration for production Cisco Media Broadcaster deployments. For lab deployments and proof of concept deployments, the MC-Sender and MC-Controller can reside on the same system to save resources.

MC-Controller

The MC-Controller is an application that runs on a Linux operating system (bare metal, virtualized, or containerized) on the service provider premises. The main functions of the MC-Controller are:

  • Distributes the configuration data to the Multicast Sender (MC-Sender) and Multicast Receiver (MC-Receiver).

  • Collects stream status messages from MC-Receivers. These messages include channel viewing and streaming statistics.

  • Communicates the channel lineup, which defines which streams to distribute on which multicast addresses, to the MC-Senders.

  • Communicates to the MC-Receivers which streams to join and the associated multicast addresses (channel lineup).

  • Provides two approaches to determine when to multicast a stream: policy-based or popularity-based. The policy-based approach statically configures a stream to multicast; it does not depend on a certain number of viewers requesting the stream. The popularity-based approach enables streams to start and stop multicasting based on crossing configured concurrent viewer thresholds (see the sketch after this list).

  • Exports data to a Cisco Media Broadcaster element management system.
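
The popularity-based decision can be sketched conceptually as follows; the start/stop thresholds and field names are hypothetical, and the real MC-Controller signals the MC-Senders with StartMulticastReq and StopMulticastReq messages.

  START_THRESHOLD = 10   # begin multicasting at this many concurrent viewers
  STOP_THRESHOLD = 3     # stop multicasting below this many (hysteresis)

  def update_channel(channel, concurrent_viewers):
      """Start or stop multicast for one channel based on its viewership."""
      if channel["policy"] == "static":
          channel["multicasting"] = True   # policy-based: always multicast
          return
      if not channel["multicasting"] and concurrent_viewers >= START_THRESHOLD:
          channel["multicasting"] = True   # would trigger StartMulticastReq
      elif channel["multicasting"] and concurrent_viewers < STOP_THRESHOLD:
          channel["multicasting"] = False  # would trigger StopMulticastReq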

The MC-Controller contains the following subcomponents:

  • Broadcaster Configuration Manager (BCM): The BCM retrieves and writes configuration data to the underlying data store (etcd) and versions every stored system configuration. It is the backend to OMD Director, which is the Media Broadcaster configuration management GUI, and is responsible for serving the configuration data to the MCM.

  • Multicast Controller Manager (MCM): The MCM polls the BCM at intervals to retrieve configuration data. It is the management component of an MC-Controller and controls the entire Cisco Media Broadcaster system. It also aggregates viewership data received by the SSR.

  • Data Store Service (etcd): The etcd service stores all of the configuration data for the BCM.

  • Stream Status Receiver (SSR): The SSR handles all of the incoming Stream Status messages and Config Requests from the MC-Receivers, also referred to as the Embedded Multicast Client (EMC). To handle the load of incoming Stream Status messages, there will be many instances of the SSR.

MC-Sender

The MC-Sender is an application that runs on a Linux operating system (bare metal, virtualized, or containerized) on the service provider premises. The primary functions of the MC-Sender are:

  • Acquires a live ABR stream from the CDN or origin server using unicast HTTP/TCP requests and encapsulates the video data and FEC for multicast distribution.

  • Transmits the data packets as a multicast stream to the multicast address group.

  • Paces the multicast video distribution rate.

  • Can optionally provide QoS marking (DSCP) on the multicast data.

The MC-Sender contains the following subcomponents:

  • Multicast Server Interface (MSI): The interface that the MC-Sender uses to communicate with the MC-Controller.

  • Multicast Sender Manager (MSM): Creates, destroys, and configures individual MSA instances to convert a single ABR profile to a multicast stream.

  • Multicast Sender Application (MSA): Acts as an HTTP client that pulls ABR content (manifests and segments) from the CDN and then sends out the segments using NACK-Oriented Reliable Multicast (NORM) to all of the gateways. Each MSA delivers only one stream. Because each MC-Sender is capable of delivering many streams, an MC-Sender will have many MSAs.

MC-Receiver

The MC-Receiver, also referred to as the Embedded Multicast Client (EMC), is an application that can be integrated into the STB or home router that resides in the managed home of the service provider and runs in the CPE middleware operating system. The primary functions of the MC-Receiver are:

  • Streams HTTP/TCP unicast video to HTTP ABR media player.

  • Implements a proxy cache toward the HTTP media player.

  • Acquires (ingests) ABR video using multicast or unicast, with transparent switching, and fills the proxy cache.

  • Caches reliable fragmented HTTP video (HLS, DASH, and HSS).

  • Provides packet error repair using in-band FEC when possible, and falls back to unicast HTTP retransmit from the CDN.

  • Streams a common stream or multiple unique streams to multiple in-home media player clients.

The MC-Receiver intercepts ABR fragment requests from the end clients (iPads, iPhones, and STBs) and does one of the following:

  • If the MC-Receiver has already received the requested content via multicast and still has it in its local cache, it will serve that content back to the end clients (iPads, iPhones, and STBs) without forwarding the request upstream to the CDN.

  • If the MC-Receiver does not have a copy of the content in its cache, then it proxies the request upstream to be fulfilled by the CDN. When the response is returned from the CDN, the MC-Receiver will cache this content in its local cache so that it can provide the content for future requests from downstream clients for the same content.
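
That decision reduces to a short cache-or-proxy routine, sketched below with hypothetical stand-ins for the MC-Receiver internals.

  import urllib.request

  local_cache = {}   # fragments already received via multicast or cached earlier

  def handle_fragment_request(url):
      """Serve a player's fragment request from cache, else proxy to the CDN."""
      if url in local_cache:
          return local_cache[url]                 # served without touching the CDN
      with urllib.request.urlopen(url) as resp:   # unicast fallback to the CDN
          data = resp.read()
      local_cache[url] = data                     # cache for later requests
      return data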

The MC-Receiver consists of the following components:

  • Proxy Service: Intercepts requests from downstream end-user ABR clients and serves them using data from the local cache of the CPE (if available). If the content is not in the local cache of the CPE, the proxy requests the content from the CDN using unicast. The Proxy service receives its configuration from the MC-Controller, which determines which streams can be delivered using multicast. For any ABR client request that can be delivered using multicast, the Proxy service will start a Multicast Receiver for that stream. When the ABR client stops watching the stream, the Proxy service will stop the Multicast Receiver for that stream. The Proxy Service can be thought of as part of the Control Plane.

  • Multicast Receiver: The Multicast Receiver uses NORM with FEC to receive multicast content from the MC-Sender. After the content is received, the proxy service stores it in the local cache on the CPE. The Multicast Receiver will leave a multicast stream when no requests from any player have been received for a certain amount of time. The Proxy service determines when a Multicast Receiver needs to be started and stopped. The Multicast Receiver can be thought of as part of the Data Plane.

Channel Lineup

The content delivered by a Cisco Media Broadcaster deployment is defined using a channel lineup. A channel lineup defines where to find the content and what address and bit rate are used for multicasting the content. The channel lineup is stored in the MC-Controller and is communicated to the MC-Sender via a StartMulticastReq message. When the MC-Sender receives a StartMulticastReq message, it sends an HTTP request for the manifest to the CDN. After receiving the manifest file, the MC-Sender starts to request the segments that are defined in the manifest and transmits them via reliable multicast (NORM) over the video network interface. The MC-Sender continues this cycle of fetching the latest manifest, requesting the segments it defines, and multicasting them for a given video stream until the MC-Controller sends a StopMulticastReq message.
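
A rough sketch of that loop, with hypothetical helpers for manifest parsing and NORM transmission, looks like this:

  import time
  import urllib.request

  def multicast_channel(manifest_url, send_norm, stop_requested, poll_interval=2.0):
      """Fetch the latest manifest, multicast any new segments, and repeat
      until the MC-Controller requests a stop (StopMulticastReq)."""
      sent = set()
      while not stop_requested():
          with urllib.request.urlopen(manifest_url) as resp:
              manifest = resp.read().decode()
          for seg_url in parse_segment_urls(manifest):    # hypothetical parser
              if seg_url in sent:
                  continue
              with urllib.request.urlopen(seg_url) as seg:
                  send_norm(seg.read())                   # hypothetical NORM sender
              sent.add(seg_url)
          time.sleep(poll_interval)

  def parse_segment_urls(manifest_text):
      """Hypothetical: extract segment URLs from an HLS-style manifest."""
      return [l for l in manifest_text.splitlines() if l and not l.startswith("#")]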

Channel lineups are configured in OMD Director and can also be imported into OMD Director using a CSV file.

For more information on configuring the channel lineups, see the Cisco Media Broadcaster User Guide.

Broadcasting Groups

A Broadcasting Group identifies which MC-Controllers and MC-Senders work together to provide multicasting services for a group of channels. All of the MC-Controllers and MC-Senders in a broadcasting group will use the same filtering rules and server specific settings. Broadcasting groups are part of the high availability (HA) solution of Media Broadcaster.

Media Broadcaster HA Architecture

Cisco Media Broadcaster provides a fully redundant solution for the MC-Controllers and MC-Senders. Media Broadcaster supports an Active/Standby HA solution for the MC-Controller components. For the MC-Senders, all MC-Senders run simultaneously but handle different multicast streams. If an MC-Sender becomes unavailable, all of the multicast streams that were running on that MC-Sender are automatically redistributed by the MC-Controller to the remaining MC-Senders based on the bandwidth usage of each MC-Sender.

MC-Controller HA Solution

Each MC-Controller consists of a Broadcaster Configuration Manager (BCM), a Multicast Controller Manager (MCM), and many instances of the Stream Status Receiver (SSR). Cisco Media Broadcaster provides a fully redundant solution for each of these components.

HA for Broadcaster Configuration Manager

keepalived is installed on the BCM instances of all of the MC-Controllers in the Broadcasting group. keepalived, using Virtual Router Redundancy Protocol (VRRP), elects one of the BCM instances as the primary (master) and the remaining BCM instances will be standby (backup). The primary BCM and the standby BCMs use keepalived to communicate with each other to maintain an HA environment. If keepalived determines that the master (active) BCM instance is not available, it will elect a backup (standby) BCM instance as master.

A virtual IP address, also referred to as the floating IP address, is shared between the BCM instances in a Broadcasting group. This virtual IP address needs to be different for each Broadcasting group and must be different from the virtual IP address of any of the other MC-Controller components (such as MCM). keepalived will bind the floating IP address to the management interface of the BCM instance that is in the master state. The floating IP must be reachable by all the BCM instances in the Broadcasting group and must be on the same subnet as the IP addresses assigned to the management interface of the BCM instances.

The BCM instance that is the master will unicast VRRP messages to the backup BCMs. When there is an application or node failure of the master BCM, the VRRP advertisements will stop. The backup (standby) BCM instances will detect the absence of the VRRP messages and perform an election to choose a new master. keepalived will then bind the floating IP address to the management interface of the new master BCM instance.

HA for Multicast Controller Manager

keepalived is installed on the MCM instances of all of the MC-Controllers in the Broadcasting group. keepalived, using Virtual Router Redundancy Protocol (VRRP), elects one of the MCM instances as the primary (master) and the remaining MCM instances will be standby (backup). The primary MCM and the standby MCMs use keepalived to communicate with each other to maintain an HA environment.

A virtual IP address, also referred to as the floating IP address, is shared between the MCM instances in a Broadcasting group. This virtual IP address needs to be different for each Broadcasting group and must be different from the virtual IP address of any of the other MC-Controller components (such as BCM). keepalived will bind the floating IP address to the management interface of the MCM instance that is in the master state. The floating IP must be reachable by all the MCM instances in the Broadcasting group and must be on the same subnet as the IP addresses assigned to the management interface of the MCM instances.

keepalived monitors the master MCM by checking the response received from the master MCM health check API. If the response code from this health check API is not 200, keepalived signals that the master (active) MCM instance is not available and will elect a backup (standby) MCM instance as master. keepalived will then bind the floating IP address to the management interface of the new master MCM instance. The new master MCM will start the viewership monitoring cycle so that the new master MCM can start collecting viewership data and manage dynamic channels after one monitoring period has passed.

The active MCM will always point to the floating IP address of the BCM. This makes sure that if the active BCM instance goes down, the active MCM will keep receiving configuration data because keepalived will automatically select a new BCM instance from the backup instances.

HA for Stream Status Receiver

SSR is a service that handles the Stream Status and Config Request messages from the MC-Receivers. To handle the load of incoming Stream Status messages, there will be many instances of the SSR running on one MC-Controller. httpd distributes the incoming Stream Status and Config Request messages to the SSR instances to process.

In a Media Broadcaster HA deployment, HAProxy is used to load balance the Stream Status and Config Request messages that come from the MC-Receivers across the SSRs (via httpd) that are running on all of the MC-Controllers in the Broadcasting group. An HAProxy instance running keepalived will run on each MC-Controller in the Broadcasting group. keepalived, using Virtual Router Redundancy Protocol (VRRP), will elect one of the HAProxy instances as the primary (master) and the remaining HAProxy instances will be standby (backup). The primary HAProxy and the standby HAProxies use keepalived to communicate with each other to maintain an HA environment.

A virtual IP address, also referred to as the floating IP address, is shared between the HAProxy instances in a Broadcasting group. This virtual IP address needs to be different for each Broadcasting group and must be different from the virtual IP address of any of the other MC-Controller components (such as MCM). keepalived will bind the floating IP address to the video interface of the HAProxy instance that is in the master state. The floating IP must be reachable by all the HAProxy instances in the Broadcasting group and must be on the same subnet as the IP addresses assigned to the video interface of the HAProxy instances. Because the messages from the MC-Receivers come from the public Internet, the floating IP address for the HAProxy service must be accessible from the public Internet.

The HAProxy instance that is the master will unicast VRRP messages to the backup HAProxies. When there is an application or node failure of the master HAProxy, the VRRP advertisements will stop. The backup (standby) HAProxy instances will detect the absence of the VRRP messages and perform an election to choose a new master. keepalived will then bind the floating IP address to the video interface of the new master HAProxy instance.

The MC-Receivers will send the Stream Status and Config Request messages to the virtual (floating) IP address that is assigned to the HAProxy instance.

MC-Sender HA Solution

The goal of HA for the MC-Senders is to provide high availability for the multicasting streams that are being serviced by the MC-Senders. All MC-Senders in a Broadcasting group run simultaneously, but handle different multicast streams. If an MC-Sender becomes unavailable, all of the multicasting streams that were running on that MC-Sender are automatically redistributed by the active MCM to the remaining MC-Senders based on the implemented policy. The current policy is based on the bandwidth usage of each MC-Sender, and the multicasting stream will be assigned to the MC-Sender with the lowest bandwidth usage.

When the channel lineup is created, each channel is assigned a unique source IP address that is used regardless of which MC-Sender multicasts the stream for that channel. This means that if the MC-Sender that is streaming the channel changes, the change is transparent to the MC-Receivers. When an MC-Sender is assigned a channel by the MCM and begins multicasting the stream, the MC-Sender adds the source address for that channel to the multicast network interface (video interface). The MC-Sender uses OSPF to advertise this source address to the routers in the multicast core and access network. When the MC-Sender stops multicasting the stream for the channel, it removes the source address from the interface and OSPF stops advertising the address.
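
The address handling on stream start and stop might look like the following sketch, which assumes the Linux ip command and leaves OSPF advertisement of the address to the local routing daemon; the interface name is an assumption.

  import subprocess

  VIDEO_IF = "eth1"   # assumed name of the multicast (video) interface

  def start_channel(source_ip):
      """Bind the channel's source address before multicasting begins."""
      subprocess.run(["ip", "addr", "add", f"{source_ip}/32", "dev", VIDEO_IF],
                     check=True)
      # The routing daemon then advertises the /32 source address via OSPF.

  def stop_channel(source_ip):
      """Release the address so OSPF stops advertising it."""
      subprocess.run(["ip", "addr", "del", f"{source_ip}/32", "dev", VIDEO_IF],
                     check=True)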

Deployment Requirements and Limitations

To deploy Cisco Media Broadcaster, there are certain requirements that must be met in the proposed deployment network:

  • Multicast v1: The access network (that is, the network between the MC-Senders and the gateways) must support traffic sent over multicast v1 all the way to the gateways. This is necessary to propagate the traffic from the MC-Senders to all of the downstream gateways.

  • HLS, DASH, and HSS: Currently the Cisco Media Broadcaster supports HLS, DASH, and HSS ABR formats.

MC-Receiver Operating Modes

To operate correctly, the MC-Receiver must function as a proxy through which all media player requests must pass. This is necessary so that the proxy can intercept content requests and have the opportunity to fulfill the requests from the content in its local cache.

Explicit Proxy Mode

In explicit proxy mode, the MC-Receiver will start up and listen for connections on a specified port (the default port is 8123). To use explicit proxy mode, the end clients (ABR players) must be configured to retrieve the ABR content through a proxy located at the IP address of the MC-Receiver on port 8123. The ABR player will then make a request to the CDN for the content via the MC-Receiver proxy. When the MC-Receiver receives this request, it first checks its own cache to see if it can provide the content before forwarding the request upstream to the CDN.
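
For example, a test client could be pointed at the MC-Receiver proxy as in the sketch below; the receiver address and content URL are assumptions, and requests is a third-party library.

  import requests  # pip install requests

  MC_RECEIVER_PROXY = "http://192.168.1.1:8123"   # assumed CPE address, default port

  # Route the player's content requests through the MC-Receiver proxy.
  resp = requests.get("http://cdn.example.com/live/channel1/manifest.m3u8",
                      proxies={"http": MC_RECEIVER_PROXY})
  print(resp.status_code, len(resp.content))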


Note

If an end client ABR player does not have this proxy configuration set up correctly, its requests for content will pass through the CPE as normal and bypass the MC-Receiver software, so the MC-Receiver cannot provide any of the benefits of using Cisco Media Broadcaster. You configure explicit proxy mode in OMD Director.


Transparent Proxy Mode

In transparent proxy mode, the MC-Receiver sets iptables rules so that all traffic sent to port 80 on the gateway device is automatically redirected to port 8123. This enables the proxy to try to service requests using its local cache before forwarding them upstream. The proxy functionality behaves as described in the section above. You configure transparent proxy mode in OMD Director.
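
The redirect rule the MC-Receiver installs is conceptually equivalent to the following sketch, which invokes iptables from Python purely for illustration.

  import subprocess

  def install_transparent_redirect(proxy_port=8123):
      """Redirect all TCP port 80 traffic on the gateway to the local proxy."""
      subprocess.run(
          ["iptables", "-t", "nat", "-A", "PREROUTING",
           "-p", "tcp", "--dport", "80",
           "-j", "REDIRECT", "--to-ports", str(proxy_port)],
          check=True)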