Cisco IP Fabric for Media White Paper

Updated: May 20, 2021

Prerequisites

This document assumes the reader is familiar with the functioning of a broadcast production facility and with the IP transformation underway in the media and broadcasting industry, where production and other use cases that have relied on Serial Digital Interface (SDI) infrastructure are moving to an IP infrastructure. The reader must also be familiar with the Society of Motion Picture and Television Engineers (SMPTE) 2022-6 and 2110 standards and have a basic understanding of Precision Time Protocol (PTP). Per the 2110 and 2022-6 specifications, the traffic on the IP fabric is User Datagram Protocol (UDP) multicast, so the reader must have a good understanding of IP unicast and multicast routing and switching.

This document is applicable to Cisco® NX-OS Software Release 9.2 and Cisco Data Center Network Manager (DCNM) 11 and newer.

Introduction

Today, the broadcast industry uses an SDI router and SDI cables to transport video and audio signals. The SDI cables can carry only a single unidirectional signal. As a result, a large number of cables, frequently stretched over long distances, are required, making it difficult and time-consuming to expand or change an SDI-based infrastructure.

Cisco IP Fabric for Media helps you migrate from an SDI router to an IP-based infrastructure (Figures 1 and 2). In an IP-based infrastructure, a single cable has the capacity to carry multiple bidirectional traffic flows and can support different flow sizes without requiring changes to the physical infrastructure.

An IP-based infrastructure with Cisco Nexus® 9000 Series Switches:

      Supports various types and sizes of broadcasting equipment endpoints with port speeds up to 100 Gbps

      Supports the latest video technologies, including 4K and 8K ultra HD

      Allows for a deterministic network with zero packet loss, ultra-low latency, and minimal jitter

      Supports the AES67 and SMPTE-2059-2 PTP profiles

SDI router

Figure 1.            

SDI router

IP fabric

Figure 2.            

IP fabric

The Society of Motion Picture and Television Engineers (SMPTE) 2022-6 standard defines the way that SDI is encapsulated in an IP frame. SMPTE 2110 defines how video, audio, and ancillary data are carried over IP. Similarly, Audio Engineering Society (AES) 67 defines the way that audio is carried over IP. All these flows are typically User Datagram Protocol (UDP) and IP multicast flows. A network built to carry these flows must provide zero-drop transport with low latency and minimal jitter.

Endpoints and IP gateways

In a broadcast production facility, endpoints include cameras, microphones, multi-viewers, switchers, playout servers, and so on. Endpoints have either an SDI interface or an IP interface. Endpoints with an IP interface can be connected directly to a network switch. However, for endpoints that have an SDI interface, an IP Gateway (IPG) is needed to convert SDI to IP (2110/2022-6) and vice versa. In the latter case, the IP gateway is connected to the network switch, with the endpoints connected to the IP gateway (Figure 3).

IP endpoints and gateways

Figure 3.            

IP endpoints and gateways

Broadcast controller

In an SDI environment, the broadcast controller manages the cross points of the SDI router (Figure 4). When an operator triggers a ‘take’, which involves switching the destination from source A to source B using a control panel, the panel communicates with the broadcast controller, signaling the intent to make the switch. The broadcast controller reprograms the cross points on the SDI router to switch the destination from source A to source B.

With an IP infrastructure, there are several options on how the broadcast controller integrates with the network. In most common deployments, when an operator triggers a ‘take’ on a control panel, the panel communicates with the broadcast controller, signaling the intent to make the switch. The broadcast controller then communicates directly with the IP endpoint or IP gateway to trigger an Internet Group Management Protocol (IGMP) leave and join toward the IP network. The network then delivers the new flow to the destination and removes the old. This type of switching is called destination timed switching (Figure 5).

In some deployments, the broadcast controller uses APIs exposed by the network or network controller to instruct the network to switch a destination from source A to source B, without requiring the destination to trigger an IGMP join as the signaling mechanism. The Advanced Media Workflow Association (AMWA) defines the IS-04, IS-05, and IS-06 specifications, which describe how a broadcast controller, endpoints, and the network/network controller communicate with one another to accomplish broadcast workflows in an IP environment.

Broadcast controller in an SDI environment

Figure 4.            

Broadcast controller in an SDI environment

Broadcast controller in an IP environment

Figure 5.            

Broadcast controller in an IP environment

Cisco Nexus 9000 for IP Fabric for Media

Nexus 9000 Series Switches deliver proven high performance and density, low latency, and exceptional power efficiency in a range of form factors. The series also performs line-rate multicast replication with minimal jitter. Each switch can operate as a Precision Time Protocol (PTP) boundary clock and can support the SMPTE 2059-2 and AES67 profiles.

Table 1.        Cisco Nexus 9000 models supporting Cisco IP Fabric for Media

Part number

Nexus 9300-FX3

Nexus 9300-FX2

Nexus 9300-GX

Nexus 9300-GX2A*

Nexus 9300-GX2B*

Nexus 9300-FX

Nexus 9300-FXP

Nexus 9300-EX

9364C and 9332C

9200

9500-R with N9K-X9636C-R and RX

9500-R with N9K-X9636Q-R

9500-R2 with N9K-X9624D-R2*

* Refer to the release notes for the software release version that supports the platform

Designing the IP fabric

There are multiple design options available to deploy an IP Fabric for Media based on the use case.

      A layer 3 spine and leaf – provides a flexible and scalable architecture that is suitable for studio deployments (Figures 6 and 8).

      A single switch with all endpoints and IPGs connected to this switch – provides the simplicity needed in an outside broadcasting TV van (OBVAN) and small studio deployment (Figure 7).

Spine and leaf with endpoints and IPGs connected to the leaf

Figure 6.            

Spine and leaf with endpoints and IPGs connected to the leaf

Single switch with endpoints and IPGs connected to the switch

Figure 7.            

Single switch with endpoints and IPGs connected to the switch

Spine and leaf with endpoints and IPGs connected to both spine and leaf

Figure 8.            

Spine and leaf with endpoints and IPGs connected to both spine and leaf

Why use a layer 3 spine and leaf design

Spine and leaf CLOS architecture has proven to be flexible and scalable and is widely deployed in modern data center designs. No matter where the receiver is connected, the path always involves a single hop through the spine, thereby providing deterministic latency.

Although a layer 2 network design may seem simple, it has a very large failure domain. A misbehaving endpoint could potentially storm the network with traffic that is propagated to all devices in the layer 2 domain. Also, in a layer 2 network, traffic is always flooded to the multicast router or querier, which can cause excessive traffic to be sent to the router or querier, even when there are no active receivers. This results in non-optimal and non-deterministic use of bandwidth.

Layer 3 multicast networks contain the fault domain and forward traffic across the network only when there are active receivers, thereby promoting optimal use of bandwidth. This also allows granular filtering policies to be applied to a specific port instead of to all devices, as happens in a layer 2 domain.

Building blocks of a layer 3 IP fabric

Various IP protocols are needed to enable the network to carry media flows. As most media flows are UDP multicast flows, the fabric must be configured with protocols that transport multicast (Figure 9). The protocols that come into play include:

      Protocol Independent Multicast (PIM): PIM enables routing multicast between networks.

      Interior Gateway Protocol (IGP): IGP, like Open Shortest Path First (OSPF), is needed to enable unicast routing in the IP fabric. PIM relies on the unicast routing information provided by the IGP to determine the path to the source.

      Internet Group Management Protocol (IGMP): IGMP is a protocol in which the destination (receiver) signals the intent to join a source or leave a source.

      Multicast Source Discovery Protocol (MSDP): MSDP is required for the Rendezvous Points (RPs) to synchronize source information when running Any-Source Multicast (ASM with IGMPv2).

Along with these protocols, the network must be configured with Quality of Service (QoS) to provide better treatment to media flows (multicast) over file-based flows (unicast).

Building blocks of a media fabric

Figure 9.            

Building blocks of a media fabric

Cisco Non-Blocking Multicast (NBM)

In an IP network, when multiple paths exist between the source and destination, for every operator request to switch or create a new flow, the protocol that sets up the flow path (PIM) chooses one of the available paths using a hash. The hash does not consider bandwidth, which may not always result in equal distribution of load across the available paths.

In IT data centers, Equal-Cost Multipath (ECMP) routing is extremely efficient because most traffic is Transmission Control Protocol (TCP)-based, with millions of flows, and the load distribution is more likely to be uniform across all available paths. However, in a media data center that typically carries uncompressed video along with audio and ancillary flows, ECMP may not always be efficient. There is a possibility that all video flows hash onto the same path, oversubscribing that path.

While PIM is extremely efficient and very mature, it lacks the ability to use bandwidth as a parameter when setting up a flow path. Cisco developed the Non-Blocking Multicast (NBM) process on NX-OS that makes PIM intelligent. NBM brings bandwidth awareness to PIM. NBM and PIM can work together to provide an intelligent and efficient network that prevents oversubscription and provides bandwidth availability for multicast delivery.

PIM with ECMP based on hash (link oversubscription may occur)

Figure 10.         

PIM with ECMP based on hash (link oversubscription may occur)

PIM with ECMP and NBM (assures non-oversubscribed multicast transport)

Figure 11.         

PIM with ECMP and NBM (assures non-oversubscribed multicast transport)

NBM modes: NBM active and NBM passive

As discussed in the previous section, NBM brings bandwidth awareness to the network. The goal of NBM is to ensure that flows are load balanced and that all paths are utilized, as well as to prevent oversubscription during flow setup or when flows need to be rebalanced in the event of a link failure. In NBM active mode (the default operation of NBM), the responsibility for bandwidth management lies with the Nexus switches themselves. Another method of achieving the same outcome is through the use of a Software-Defined Networking (SDN) controller that can instruct the network to route traffic along a certain path. To enable the use of an SDN controller, NBM can work in “passive” mode. With NBM passive, the network itself makes no decision on how flows are routed during flow setup, nor on how flows must be recovered in the event of a failure. NBM passive simply exposes an API through which the SDN controller can instruct the network on what needs to be done during flow setup as well as during flow recovery in the event of a failure.

NBM active and NBM passive operation

Figure 12.         

NBM active and NBM passive operation

Designing a non-blocking spine and leaf (CLOS) fabric

SDI routers are non-blocking in nature. A single Ethernet switch, such as a Nexus 9000 or 9500 switch, is also non-blocking. A CLOS architecture provides flexibility and scalability; however, a few design considerations must be followed to ensure that a CLOS architecture remains non-blocking.

In an ideal scenario, the sender leaf (first-hop router) sends one copy of the flow to one of the spine switches. The spine creates “N” copies, one for each receiver leaf switch that has interested receivers for that flow. The receiver leaf (last-hop router) creates “N” copies of the flow, one per local receiver connected on the leaf. At times, especially when the system is at its peak capacity, you could encounter a scenario where a sender leaf has replicated a flow to a certain spine, but the receiver leaf cannot get traffic from that spine as its link bandwidth to that spine is completely occupied by other flows. When this happens, the sender leaf must replicate the flow to another spine. This results in the sender leaf using twice the bandwidth for a single flow.

To ensure the CLOS network remains non-blocking, a sender leaf must have enough bandwidth to replicate all of its local senders to all spines. By following this guideline, the CLOS network can be non-blocking.

Bandwidth of all senders connected to a leaf must be equal to the bandwidth of the links going from that leaf to each of the spines. Bandwidth of all receivers connected to a leaf must be equal to the aggregate bandwidth of all links going to all spines from that leaf.

For example: A two-spine design using N9k-C93180YC-FX, with 6x100G uplinks and 300 Gb going to each spine can support 300 Gb of senders and 600 Gb of receivers connected to the leaf.
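
Expressed as a worked calculation (a check of the example above, assuming the six 100-Gbps uplinks are split evenly, three per spine):

Sender budget per leaf $= 3 \times 100\ \text{Gbps} = 300\ \text{Gbps}$ (uplink bandwidth to one spine)

Receiver budget per leaf $= 6 \times 100\ \text{Gbps} = 600\ \text{Gbps}$ (aggregate uplink bandwidth to both spines)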

In a broadcasting facility, most of the endpoints are unidirectional – cameras, microphones, multiviewers, etc. In addition, there are more receivers than senders (a typical ratio is 4:1), and, when a receiver no longer needs a flow, it leaves the flow, freeing up the bandwidth. Hence, the network can be designed with the placement of senders and receivers such that the CLOS architecture becomes non-blocking.

Design example

The number and type of leaf and spine switches required in your IP fabric depend on the number and type of endpoints in your broadcasting center.

Follow these steps to help determine the number of leaf switches you need:

Count the number of endpoints (cameras, microphones, gateway, production switchers, etc.) in your broadcasting center. For example, assume that your requirements are as follows:

      Number of 40-Gbps ports required for IPGs: 40

      Number of 10-Gbps ports required for cameras: 150

      Number of 1-Gbps/100M ports required for audio consoles: 50

The uplink bandwidth from a leaf switch to a spine switch must be equal to or greater than the bandwidth provisioned to endpoints.

      The 9336FX2 can be used as a leaf switch for 40-Gbps endpoints. Each supports up to 25 x 40-Gbps endpoints and requires 10 x 100-Gbps uplinks.

      The 93180YC-FX can be used as a leaf switch for 10-Gbps endpoints. Each supports up to 48 x 10-Gbps endpoints and requires 6 x 100-Gbps uplinks.

      The 9348FXP can be used as a leaf switch for 1G/100M endpoints. Each supports up to 48 x 1/10GBASE-T endpoints with 2 x 100-Gbps uplinks.

      40 x 40-Gbps endpoints would require 2 x 9336FX2 leaf switches with 20 x 100-Gbps uplinks.

      160 x 10-Gbps endpoints would require 4 x 93180YC-FX leaf switches with 24 x 100-Gbps uplinks.

      70 x 1-Gbps endpoints would require 2 x 9348FXP leaf switches with 4 x 100-Gbps uplinks. (Not all uplinks are used.)

      The total number of uplinks required is 48 x 100 Gbps (a worked tally follows this list).

      The 9500 with a N9K-X9636C-R line card or a 9336FX2 can be used as a spine.

      With a 9336FX2 switch, each switch supports up to 36 x 100-Gbps ports. Two spine switches with 24 x 100-Gbps ports per spine can be used (Figure 13), leaving room for future expansion.

      With 9508 and N9K-X9636C-R line cards, each line card supports 36 x 100-Gbps ports. Two line cards with a single spine switch can be used (Figure 14), leaving room for future expansion.
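
A quick tally of the uplink counts above (simple arithmetic on the figures already listed; a worked check, not additional guidance):

Leaf uplinks: $2 \times 10 + 4 \times 6 + 2 \times 2 = 20 + 24 + 4 = 48$ links of 100 Gbps

Spine ports with two 9336FX2 spines: $48 / 2 = 24$ x 100-Gbps ports per spine, within the 36 ports available per switch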

Network topology with a Nexus 9336FX2 Switch as the spine

Figure 13.         

Network topology with a Nexus 9336FX2 Switch as the spine

Network topology with the Nexus 9508 Switch as the spine

Figure 14.         

Network topology with the Nexus 9508 Switch as the spine

As most deployments utilize network redundancy and hitless merge on destinations (2022-7, for example), the same network is replicated twice and the endpoints are dual-homed to each network (Figure 15).

Redundant IP network deployment

Figure 15.         

Redundant IP network deployment

Securing the fabric – NBM active mode

In an IP fabric, an unauthorized device could be plugged into the network and compromise production flows. The network must be designed to only accept flows from an authorized source and send flows to an authorized destination. Also, given the network has limited bandwidth, a source must not be able to utilize more bandwidth than what it is authorized to use.

The NBM process provides host policies with which the network can restrict which multicast flows a source can transmit and which multicast flows a destination or receiver can subscribe to or join. The NBM active process also provides flow policies, in which the bandwidth required for a flow or group of flows is specified. NBM utilizes the information in the flow policy to reserve end-to-end bandwidth when a flow request is made and also programs a policer on the sender switch (first hop) that restricts the source to transmitting the flow only at the rate defined by the policy. If a source transmits at a higher rate, the flow is policed, thereby protecting the network bandwidth and other flows on the fabric.

Host (endpoint) interface bandwidth protection – NBM active mode

NBM ensures an endpoint interface is not oversubscribed by only allowing flows that do not exceed the interface bandwidth. As an example, if the flow policy for groups 239.1.1.1 to 239.1.1.10, used by 3G HD video, is set to 3.3 Gbps and the source is connected to a 10-Gbps interface, only the first three flows transmitted by the source are accepted. Even if the actual bandwidth utilized is less than the link capacity, NBM reserves the bandwidth specified in the flow policy. The fourth flow would exceed 10 Gbps and hence is rejected.

On the receiver or destination side, the same logic applies. When a receiver tries to subscribe to more traffic than the link capacity allows, the request is denied.
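
The arithmetic behind the 3.3-Gbps example above:

$3 \times 3.3\ \text{Gbps} = 9.9\ \text{Gbps} \le 10\ \text{Gbps}$, whereas $4 \times 3.3\ \text{Gbps} = 13.2\ \text{Gbps} > 10\ \text{Gbps}$, so the fourth flow is rejected.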

Note:     This logic only applies when endpoints are connected using a layer 3 interface. Host interface bandwidth tracking does not apply when endpoints are connected using a layer 2 trunk or access interface.

Configuring Non-Blocking Multicast (NBM) – active mode

Prior to configuring NBM, the IP fabric must be configured with a unicast routing protocol such as OSPF, along with PIM and, where applicable, Multicast Source Discovery Protocol (MSDP).

Configuring OSPF, PIM, MSDP, and fabric and host links

! OSPF configuration on SPINE and LEAF

feature ospf

 

router ospf 100

interface Ethernet1/1

  ip router ospf 100 area 0.0.0.0

  ip pim sparse-mode

 

! PIM Configuring on SPINE(s)

feature pim

 

interface loopback100

  !loopback used as RP. Configure same loopback with same IP on all SPINES

  ip address 123.123.123.123/32

  ip router ospf 100 area 0.0.0.0

  ip pim sparse-mode

! A multicast RP is needed only when ASM (any source multicast) is used. If the facility uses SSM, RP is not needed for those multicast groups

ip pim rp-address 123.123.123.123 group-list <asm group list>

ip pim prune-on-expiry

ip pim ssm range none

ip pim spt-threshold infinity group-list spt

 

route-map spt permit 10

  match ip multicast group <asm range >

 

interface ethernet1/1

 ip address 1.1.1.1/30

 ip pim sparse-mode

! NOTE: “ip pim ssm range none” does not disable source-specific multicast (SSM). SSM is still supported for any range where receivers send IGMPv3 reports.

! PIM Configuring on Leaf

feature pim

! A multicast RP is needed only when ASM (any source multicast) is used. If the facility uses SSM, RP is not needed for those multicast groups

ip pim rp-address 123.123.123.123 group-list <asm range>

ip pim prune-on-expiry

ip pim ssm range none

ip pim spt-threshold infinity group-list spt

 

route-map spt permit 10

  match ip multicast group <asm range>

 

interface ethernet1/49

 ip address 1.1.1.1/30

 ip pim sparse-mode

 

! MSDP is needed only when ASM (any source multicast) is used. If the facility uses SSM, MSDP is not needed for those multicast groups

!  Configuring MSDP on SPINES (RP)

! Configuration on Spine 1

feature msdp

 

interface loopback0

 ip pim sparse-mode

 ip address 77.77.77.1/32

 ip router ospf 100 area 0.0.0.0

 

ip msdp originator-id loopback0

ip msdp peer 77.77.77.2 connect-source loopback0

ip msdp sa-policy 77.77.77.2 msdp-mcast-all out

ip msdp mesh-group 77.77.77.2 spine-mesh

 

route-map msdp-mcast-all permit 10

 match ip multicast group 224.0.0.0/4

 

! Configuration on Spine 2

feature msdp

 

interface loopback0

 ip pim sparse-mode

 ip address 77.77.77.2/32

 ip router ospf 100 area 0.0.0.0

 

ip msdp originator-id loopback0

ip msdp peer 77.77.77.1 connect-source loopback0

ip msdp sa-policy 77.77.77.1 msdp-mcast-all out

ip msdp mesh-group 77.77.77.1 spine-mesh

 

route-map msdp-mcast-all permit 10

 match ip multicast group 224.0.0.0/4

! Configuring fabric link – links between network switches.

! When multiple links exist between switches, configure them as individual point-to-point layer-3 links.

! Do not bundle the links in port-channel.

interface Ethernet1/49

 ip address x.x.x.x/y

 ip router ospf 100 area 0.0.0.0

 ip pim sparse-mode

 no shutdown

! Configuring host (endpoint) link – links between the network switch and endpoint.

! Endpoints, which are typically sources and destinations, can be connected using a layer 3 interface.

! Or connected using layer 2 trunk/access interface with Switch Virtual Interface (SVI) on the switch.

! Layer 3 interface towards endpoint

 

interface Ethernet1/1

 ip address x.x.x.x/y

 ip router ospf 100 area 0.0.0.0

 ip ospf passive-interface

 ip pim sparse-mode

 ip igmp version 3

 ip igmp immediate-leave

 ip igmp suppress v3-gsq

 no shutdown

 

! Layer 2 interface (trunk or access) towards endpoint

 

interface Ethernet1/1

 switchport

 switchport mode <trunk|access>

 switchport access vlan 10

 switchport trunk allowed vlan 10,20

 spanning-tree port type edge trunk

 

interface vlan 10

 ip address x.x.x.x/y

 ip router ospf 100 area 0.0.0.0

 ip ospf passive-interface

 ip pim sparse-mode

 ip igmp version 3

 ip igmp immediate-leave

 no shutdown

 

vlan configuration 10

 ip igmp snooping fast-leave

Configuring NBM – active mode

Before NBM can be enabled, the network must be preconfigured with IGP, PIM, and MSDP (when applicable). NBM configuration must be completed before connecting sources and destinations to the network. Failing to do so could result in NBM not computing the bandwidth correctly. As a best practice, keep the endpoint-facing interface administratively down, complete NBM configuration, and re-enable the interfaces.

!enable feature nbm

feature nbm

 

!enable nxapi

feature nxapi

nxapi http port 80

 

! Configure NBM to operate in pim-active mode

nbm mode pim-active

 

! Carve TCAM needed for NBM to program QOS and flow policers. Reload required post TCAM carving

hardware access-list tcam region ing-racl 256

hardware access-list tcam region ing-l3-vlan-qos 256

hardware access-list tcam region ing-nbm 1536

 

! Defining ASM range

! This is needed in a multi-spine deployment to ensure efficient load balancing of ASM flows. SSM flow ranges do not need to be defined in this CLI

!ASM flow is the multicast range where destinations or receivers use IGMPv2 join

 

nbm flow asm range 238.0.0.0/8 239.0.0.0/8

 

! define flow policies

! flow polices describe flow parameters such as bandwidth and DSCP(QOS)

! flow policies must be defined on all switches in the fabric and must be the same

 

! default flow policy applies to multicast groups that do not have a specific policy

! default flow policy is set to 0 and can be modified if needed

 

nbm flow bandwidth 0 kbps

 

! User defined custom flow policy

nbm flow-policy

  !policy <NAME>

   !bandwidth <bandwidth_reservation>

   !dscp <value>

   !ip group-range first_multicast_ip_address to last_multicast_ip_address

  policy Ancillary

    bandwidth 1000 kbps

    dscp 18

    ip group-range 239.1.40.0 to 239.1.40.255

  policy Audio

    bandwidth 2000 kbps

    dscp 18

    ip group-range 239.1.30.0 to 239.1.30.255

  policy Video_1.5

    bandwidth 1600000 kbps

    dscp 26

    ip group-range 239.1.20.1 to 239.1.20.255

 

 

 

 

 

! Verify flow policy

 

N9K# show nbm flow-policy

--------------------------------------------------------------------------------

| Group Range                     | BW (Kbps)  | DSCP | QOS | Policy Name

--------------------------------------------------------------------------------

| 239.1.40.0-239.1.40.255         | 1000       | 18   | 7   | Ancillary

| 239.1.30.0-239.1.30.255         | 2000       | 18   | 7   | Audio

| 239.1.20.1-239.1.20.255         | 1600000    | 26   | 7   | Video_1.5

--------------------------------------------------------------------------------

Policy instances printed here = 3

Total Policies Defined = 3

 

! NBM host policy can be applied to senders (sources), receivers (local), or pim (external receivers)

! NBM default host policy is set to permit all and can be modified to deny if needed

! 224.0.0.0/4 matches all multicast addresses and can be used as a match-all for multicast

 

nbm host-policy

  sender

    default deny

! <seq_no.> host <sender_ip> group <multicast_group> permit|deny

    10 host 192.168.105.2 group 239.1.1.1/32 permit

    1000 host 192.168.105.2 group 239.1.1.2/32 permit

    1001 host 192.168.101.2 group 239.1.1.0/24 permit

    1002 host 192.168.101.3 group 225.0.4.0/24 permit

    1003 host 192.168.101.4 group 224.0.0.0/4 permit

 

nbm host-policy

receiver

    default deny

!<seq_no.> host <receiver_ip> source <> group <multicast_group> permit|deny

    100 host 192.168.101.2 source 192.205.38.2 group 232.100.100.0/32 permit

    10001 host 192.168.101.2 source 0.0.0.0 group 239.1.1.1/32 permit

    10002 host 192.168.102.2 source 0.0.0.0 group 239.1.1.0/24 permit

    10003 host 192.168.103.2 source 0.0.0.0 group 224.0.0.0/4 permit   

 

! Verify Sender policies configured on the switch

N9K# show nbm host-policy all sender

Default Sender Policy: Deny

Seq Num       Source           Group            Mask  Action

10            192.168.105.2    233.0.0.0        8           Allow

1000          192.168.101.2    232.0.0.0        24          Allow

1001          192.168.101.2    225.0.3.0        24          Allow

1002          192.168.101.2    225.0.4.0        24          Allow

1003          192.168.101.2    225.0.5.0        24          Allow

 

! Verify Sender policies applied to local senders attached to that switch

N9K# show nbm host-policy applied sender all

Default Sender Policy: Deny

Applied host policy for Ethernet1/31/4

Seq Num       Source           Group            Mask  Action

20001         192.26.1.47      235.1.1.167      32          Allow

Total Policies Found = 1

! Verify Receiver policies configured on the switch

N9k# show nbm host-policy all receiver local

 Default Local Receiver Policy: Allow

Seq Num       Source           Group            Mask  Reporter         Action

10240         192.205.38.2     232.100.100.9    32          192.168.122.2    Allow

10496         192.205.52.2     232.100.100.1    32          192.168.106.2    Allow

12032         0.0.0.0          232.100.100.32   32          192.169.113.2    Allow

12288         0.0.0.0          232.100.100.38   32          192.169.118.2    Allow

12544         0.0.0.0          232.100.100.44   32          192.169.123.2    Allow

 

N9k# show nbm host-policy applied receiver local all

 Default Local Receiver Policy: Allow

Interface        Seq Num       Source           Group            Mask  Action

Ethernet1/1      10240      192.205.38.2    232.100.100.9          32        Allow

Total Policies Found = 1

NBM passive mode

If an SDN controller is used to program the flow path, NBM must work in pim-passive (passive) mode. Below are the configurations needed to enable SDN control. Flow setup using SDN control is only available via API (no CLIs are exposed). Details of the APIs are available on the Cisco developer website.

Configuring OSPF, PIM, and fabric and host links

! OSPF configuration on SPINE and LEAF

feature ospf

 

router ospf 100

interface Ethernet1/1

  ip router ospf 100 area 0.0.0.0

  ip pim sparse-mode

 

! PIM Configuring on SPINE(s)

feature pim

 

ip pim ssm range none

 

interface ethernet1/1

 ip address 1.1.1.1/30

 ip pim sparse-mode

 ip pim passive

! PIM Configuring on Leaf

feature pim

ip pim ssm range none

 

interface ethernet1/49

 ip address 1.1.1.1/30

 ip pim sparse-mode

 ip pim passive

 

! Configuring fabric link – links between network switches.

! When multiple links exist between switches, configure them as individual point-to-point layer-3 links.

! Do not bundle the links in port-channel.

interface Ethernet1/49

 ip address x.x.x.x/y

 ip router ospf 100 area 0.0.0.0

 ip pim sparse-mode

 ip pim passive

 no shutdown

! Configuring host (endpoint) link – links between the network switch and endpoint.

! Endpoints, which are typically sources and destinations, can be connected using a layer-3 interface.

! Or connected using layer-2 trunk/access interface with Switch Virtual Interface (SVI) on the switch.

! Layer 3 interface towards endpoint

 

interface Ethernet1/1

 ip address x.x.x.x/y

 ip router ospf 100 area 0.0.0.0

 ip ospf passive-interface

 ip pim sparse-mode

 ip pim passive

 no shutdown

 

! Layer 2 interface (trunk or access) towards endpoint

 

interface Ethernet1/1

 switchport

 switchport mode <trunk|access>

 switchport access vlan 10

 switchport trunk allowed vlan 10,20

 spanning-tree port type edge trunk

 

interface vlan 10

 ip address x.x.x.x/y

 ip router ospf 100 area 0.0.0.0

 ip ospf passive-interface

 ip pim sparse-mode

 ip pim passive

 

vlan configuration 10

 ip igmp snooping fast-leave

Configuring NBM – passive mode

Before NBM can be enabled, the network must be preconfigured with an IGP and PIM.

!enable feature nbm

feature nbm

 

!enable nxapi

feature nxapi

nxapi http port 80

 

! Configure NBM to operate in pim-passive mode

nbm mode pim-passive

 

! Carve TCAM needed for NBM to program QOS and flow policers. Reload required post TCAM carving

hardware access-list tcam region ing-racl 256

hardware access-list tcam region ing-l3-vlan-qos 256

hardware access-list tcam region ing-nbm 1536

File (unicast) and live (multicast) on the same IP fabric

The flexibility of IP allows co-existence of file and live traffic on the same fabric. Using QoS, live traffic is always prioritized over file-based workflows. When NBM active mode programs a multicast flow, it places the flow in a high-priority queue. Using user-defined QoS policies, file-based traffic can be placed in lower-priority queues. If there is contention for bandwidth, the QoS configuration always ensures that live traffic wins over file-based workflows.

NBM also allows reservation of a certain amount of bandwidth for unicast workflows in the fabric. By default, NBM assumes all bandwidth can be utilized for multicast traffic.

Use the “nbm reserve unicast fabric bandwidth X” command, available globally or at the interface level, to reserve bandwidth for unicast traffic if needed.
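
As a minimal sketch, assuming the reserved value is expressed as a percentage of link bandwidth (verify the exact range for your software release), a global reservation could look like this:

! Reserve a share of fabric bandwidth for unicast (file-based) traffic – sketch only; value assumed to be a percentage
nbm reserve unicast fabric bandwidth 10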

When operating in NBM passive mode, the SDN controller is responsible for accounting for the unicast bandwidth, if any.

The following QoS policies must be applied on all switches to ensure multicast (live) is prioritized over unicast (file):

ip access-list pmn-ucast

 10 permit ip any 0.0.0.0 31.255.255.255

 20 permit ip any 128.0.0.0 31.255.255.255

 30 permit ip any 192.0.0.0 31.255.255.255

 

ip access-list pmn-mcast

 10 permit ip any 224.0.0.0/4

 

class-map type qos match-all pmn-ucast

 match access-group name pmn-ucast

class-map type qos match-any pmn-mcast

 match access-group name pmn-mcast

 

policy-map type qos pmn-qos

 class pmn-ucast

  set qos-group 0

 class pmn-mcast

  set qos-group 7

 

interface ethernet 1/1-54

 service-policy type qos input pmn-qos

Multi-site and remote production – NBM active

Multi-site is a feature that extends NBM across different IP fabrics. It enables reliable transport of flows across sites (Figure 16). An IP fabric enabled with PIM and NBM can connect to any other PIM-enabled fabric. The other fabric could have NBM enabled or could be any IP network that is configured with PIM only. This feature enables use cases such as remote production or connecting the production network with playout.

Multi-site network

Figure 16.         

Multi-site network

For multi-site to function, unicast routing must be extended across the fabrics. Unicast routing provides source reachability information to PIM. When NBM is enabled on a fabric, the network switch that interconnects with external sites is enabled with the “nbm external-link” command on the WAN links (Figure 17). A fabric can have multiple such border switches for redundancy and multiple links on each border switch.

The other end of the link must have PIM enabled. If the other network is also enabled with NBM, then the “nbm external-link” CLI must be enabled. If it is a PIM network without NBM, no additional CLI needs to be configured. Simply enable PIM on the links. The border switches in the NBM fabric will form PIM adjacency with the external network device.
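
A minimal sketch of a border-switch interface facing the external site, reusing the OSPF instance from the earlier examples (the interface ID and addressing are placeholders):

! WAN-facing link on the NBM border switch (sketch)
interface Ethernet1/53
 ip address x.x.x.x/y
 ip router ospf 100 area 0.0.0.0
 ip pim sparse-mode
 nbm external-link
 no shutdown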

NBM external link

Figure 17.         

NBM external link

Multi-site and NBM active host policy (PIM policy)

To restrict what traffic can leave the fabric, NBM exposes a PIM host policy with which one can enforce which multicast flows can exit the fabric. If the PIM (remote-receiver) policy restricts a flow and the fabric gets a request to set up that flow on an external link, the request is denied.

nbm host-policy

pim

    default deny

!<seq_no.> source <local_source_ip> group <multicast_group> permit|deny

    100 source 192.168.1.1 group 239.1.1.1/32 permit

    101 source 0.0.0.0 group 239.1.1.2/32 permit

    102 source 0.0.0.0 group 230.0.0.0/8 permit

Multi-site and MSDP

When all receivers use IGMPv3 and SSM, no additional configuration is needed to exchange flows between fabrics. However, when using PIM Any-Source Multicast (ASM) with IGMPv2, a full mesh MSDP session must be established between the RPs across the fabrics (Figure 18).

Multi-site and MSDP for any-source multicast (IGMPv2)

Figure 18.         

Multi-site and MSDP for any-source multicast (IGMPv2)

If these sites are not directly connected, as shown below (Figure 19), MSDP sessions can be created between each site and the CORE, instead of a full mesh across sites. Between the sites and the CORE, BGP must be used as the routing protocol. The BGP next hop and MSDP peering must use the same IP address. The reason BGP is required is that MSDP does an RPF check on the RP address originating the MSDP SA messages, and the unicast reachability to the RP must be learned over BGP. If each site runs an IGP like OSPF, then mutual redistribution of routes between BGP and OSPF is needed to establish unicast route advertisement across the sites.

Multi-site and MSDP with CORE router type deployment

Figure 19.         

Multi-site and MSDP with CORE router type deployment

! sample configuration on the CORE

feature msdp

ip msdp peer 192.168.1.0 connect-source Ethernet1/54 remote-as 65001

router bgp 65005

neighbor 192.168.1.0 remote-as 65001

address-family ipv4 unicast

 

! sample configuration on border leaf

feature msdp

ip msdp peer 192.168.1.1 connect-source Ethernet1/49 remote-as 65005

router bgp 65001

neighbor 192.168.1.1 remote-as 65005

address-family ipv4 unicast

Data Center Network Manager (DCNM) for media fabric

NBM active mode provides multicast transport and security with host and flow policies. DCNM works with NBM to provide visibility and analytics for all the flows in the fabric. DCNM can also be used to provision the fabric, including configuring the IGP (OSPF), PIM, and MSDP, using Professional Media Network (PMN) templates and Power-On Auto-Provisioning (POAP). DCNM can further be used to manage host and flow policies, the ASM range, unicast bandwidth reservation, and external links for multi-site.

DCNM uses NX-API to push policies and configurations to the switch, and the NBM process uses NX-OS streaming telemetry to stream state information to DCNM (Figure 20). DCNM collects information from individual switches in the fabric, collates it, and presents how flows traverse the fabric. The configuration that follows Figure 20 shows what is required on the switch to enable telemetry.

To summarize, DCNM can help with:

      Fabric configuration using POAP to help automate configuration

      Topology and host discovery to dynamically discover the topology and host connectivity

      Flow and host policy manager

      End-to-end flow visualization with flow statistics

      The API gateway for the broadcast controller

      Network health monitoring

DCNM and NBM interaction

Figure 20.         

DCNM and NBM interaction

!Telemetry configuration on all network switches

 

feature telemetry

telemetry

  destination-profile

    use-vrf management

  destination-group 200

    ip address DCNM_IP/VIP port 50051 protocol gRPC encoding GPB

  sensor-group 200

    path sys/nbm/show/appliedpolicies depth unbounded

    path sys/nbm/show/stats depth unbounded

  sensor-group 201

    path sys/nbm/show/flows query-condition rsp-subtree-filter=eq(nbmNbmFlow.bucket,"1")&rsp-subtree=full

  sensor-group 202

    path sys/nbm/show/flows query-condition rsp-subtree-filter=eq(nbmNbmFlow.bucket,"2")&rsp-subtree=full

  sensor-group 203

    path sys/nbm/show/flows query-condition rsp-subtree-filter=eq(nbmNbmFlow.bucket,"3")&rsp-subtree=full

  sensor-group 204

    path sys/nbm/show/flows query-condition rsp-subtree-filter=eq(nbmNbmFlow.bucket,"4")&rsp-subtree=full

  sensor-group 205

    path sys/nbm/show/endpoints depth unbounded

  subscription 201

    dst-grp 200

    snsr-grp 200 sample-interval 60000

    snsr-grp 201 sample-interval 30000

    snsr-grp 205 sample-interval 30000

  subscription 202

    dst-grp 200

    snsr-grp 202 sample-interval 30000

  subscription 203

    dst-grp 200

    snsr-grp 203 sample-interval 30000

  subscription 204

    dst-grp 200

    snsr-grp 204 sample-interval 30000

DCNM and NBM passive mode

With NBM passive mode, DCNM can be used for network configuration provisioning. DCNM should be used in read-only mode, where it is not managing any policies. See the “DCNM server properties” section later in this document for details.

Cisco DCNM media controller installation

For the steps to install the DCNM media controller, see https://www.cisco.com/c/en/us/support/cloud-systems-management/prime-data-center-network-manager/products-installation-guides-list.html.

The recommended approach is to set up the DCNM media controller in native high-availability mode.

Fabric configuration using Power-On Auto Provisioning (POAP)

POAP automates the process of upgrading software images and installing configuration files on Cisco Nexus switches that are being deployed in the network. When a Cisco Nexus switch with the POAP feature boots and does not find a startup configuration, the switch enters POAP mode, sends a DHCP discover message to obtain a temporary IP address from the DCNM Dynamic Host Configuration Protocol (DHCP) server, and bootstraps itself with its interface IP address, gateway, and DCNM Domain Name System (DNS) server IP addresses. It also obtains the IP address of the DCNM server to download the configuration script that is executed on the switch to download and install the appropriate software image and device configuration file (Figure 21).

POAP process

Figure 21.         

POAP process

The DCNM controller ships with configuration templates: the Professional Media Network (PMN) fabric spine template and the PMN fabric leaf template. The POAP definition can be generated using these templates as the baseline. Alternatively, you can generate a startup configuration for a switch and use it during POAP definition.

When using POAP, follow these steps:

      Create a DHCP scope for temporary IP assignment

      Upload the switch image to the DCNM image repository

      Generate the switch configuration using the startup configuration or a template

These steps are described in Figures 22 through 27.

DCNM > Configure > POAP

Figure 22.         

DCNM > Configure > POAP

DCNM > Configure > POAP > DHCP scope

Figure 23.         

DCNM > Configure > POAP > DHCP scope

DCNM > Configure > POAP > Images and configurations

Figure 24.         

DCNM > Configure > POAP > Images and configurations

DCNM > Configure > POAP > POAP definitions

Figure 25.         

DCNM > Configure > POAP > POAP definitions

DCNM > Configure > POAP > POAP definitions

Figure 26.         

DCNM > Configure > POAP > POAP definitions

Generate a configuration using a template: DCNM > Configure > POAP > POAP definitions > POAP wizard

Figure 27.         

Generate a configuration using a template: DCNM > Configure > POAP > POAP definitions > POAP wizard

Topology discovery

The DCNM media controller automatically discovers the topology when the fabric is provisioned using POAP. If the fabric is provisioned through the CLI, the switches need to be manually discovered by DCNM. Figures 28 and 29 show the steps required to discover the fabric.

DCNM > Inventory > Discover switches > LAN switches

Figure 28.         

DCNM > Inventory > Discover switches > LAN switches

DCNM > Media controller > Topology

Figure 29.         

DCNM > Media controller > Topology

Host discovery

NBM discovers an endpoint or host in one of these three ways:

      When the host sends an Address Resolution Protocol (ARP) request for its default gateway (the switch)

      When the sender host sends a multicast flow

      When a receiver host sends an IGMP join message

      Host discovered via Address Resolution Protocol (ARP):

    Role: Is empty - nothing is displayed in this field

    DCNM displays the MAC address of the host

    DCNM displays the switch name and interface on the switch where the host is connected

      Host discovered by traffic transmission (source or sender)

    Role: Sender

    DCNM displays the multicast group, switch name, and interface

    If the interface is “empty”, see “fault reason”, which indicates the reason

      Host discovered by IGMP report (receivers)

    Role: Dynamic, static, or external

      Dynamic receiver – receivers that send an IGMP report

      Static receiver – a receiver added using an API or “ip igmp static-oif” on the switch (see the sketch after this list)

      External receiver – a receiver outside the fabric

      DCNM displays the multicast group, switch name, and interface

      If the interface is “empty”, see the “fault reason”, which indicates the reason
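
As a minimal sketch, a static receiver can be added on the endpoint-facing interface with the “ip igmp static-oif” command mentioned above (the interface and group are placeholders):

! Statically join multicast group 239.1.1.1 on an endpoint-facing layer 3 interface (sketch)
interface Ethernet1/1
 ip igmp static-oif 239.1.1.1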

Figures 30 and 31 show the media controller topology and the discovered host results.

DCNM > Media controller > Topology

Figure 30.         

DCNM > Media controller > Topology

DCNM > Media controller > Host > Discovered host

Figure 31.         

DCNM > Media controller > Host > Discovered host

Host alias

A host alias is used to provide a meaningful name to an endpoint or host. The alias can be referenced in place of an IP address throughout the DCNM GUI (Figure 32).

DCNM > Media controller > Host > Host alias

Figure 32.         

DCNM > Media controller > Host > Host alias

Host policies

The default host policy must be deployed before custom policies are configured. Policy modification is permitted. Policies must be un-deployed before they are deleted (Figure 33).

DCNM > Media controller > Host > Host policies

Figure 33.         

DCNM > Media controller > Host > Host policies

Applied host policies

Host policies created on DCNM are pushed to all switches in the fabric. The NBM process on the switch only applies relevant policies based on endpoints or hosts directly connected to the switch. Applied host policies provide visibility of where a given policy is applied – on which switch and which interface on the switch (Figure 34).

DCNM > Media controller > Host > Applied host policies

Figure 34.         

DCNM > Media controller > Host > Applied host policies

Flow policy

The default policy is set to 0 Gbps. The default policy must be deployed before any custom flow policy is configured and deployed. Flow policy modification is permitted, but the flows using the policy could be impacted during policy changes. A flow policy must be un-deployed before it is deleted (Figure 35).

DCNM > Media controller > Flow > Flow policies

Figure 35.         

DCNM > Media controller > Flow > Flow policies

Flow alias

Operators can find it difficult to track applications using IP addresses. The flow alias provides the ability to give a meaningful name to a multicast flow (Figure 36).

DCNM > Media controller > Flow > Flow alias

Figure 36.         

DCNM > Media controller > Flow > Flow alias

Flow visibility and bandwidth tracking

One of the broadcast industry’s biggest concerns in moving to IP is maintaining the capability to track the flow path. DCNM provides end-to-end flow visibility on a per-flow basis. The flow information can be queried from the DCNM GUI or through an API (see Figures 37 and 38).

One can view bandwidth utilization per link through the GUI or an API.

DCNM > Media controller > Topology > Multicast group

Figure 37.         

DCNM > Media controller > Topology > Multicast group

DCNM > Media controller > Topology and double-click link

Figure 38.         

DCNM > Media controller > Topology and double-click link

Flow statistics and analysis

DCNM maintains a real-time per-flow rate monitor. It can provide the bit rate of every flow in the system. If a flow exceeds the rate defined in the flow policy, the flow is policed, and the policed rate is also displayed. Flow information can be exported and stored for offline analysis (Figure 39).

DCNM > Media controller > Flow > Flow status

Figure 39.         

DCNM > Media controller > Flow > Flow status

ASM range and unicast reservation

ASM range and unicast bandwidth reservation can be configured and deployed from DCNM (Figure 40).

DCNM > Media controller > Global > Config

Figure 40.         

DCNM > Media controller > Global > Config

External link on a border leaf for multi-site

The external link configuration on a border leaf can be configured using DCNM (Figure 41).

DCNM> Media controller> Global > Config

Figure 41.         

DCNM> Media controller> Global > Config

Events and notification

The DCNM media controller logs events that can be subscribed to using Advanced Message Queuing Protocol (AMQP). The events are also logged and can be viewed through the GUI (Figure 42). Every activity that occurs is logged: a new sender coming online, a link failure, a switch reload, a new host policy pushed out, etc.

DCNM > Media controller > Events

Figure 42.         

DCNM > Media controller > Events

NBM policies ownership with DCNM

Host policies, flow policies, the ASM range, unicast bandwidth reservation, and NBM external links can be configured either on the switch using the CLI or provisioned using DCNM. A design-time decision must be made as to how these configurations are provisioned. When DCNM is used for provisioning these configurations, the CLI must not be used. DCNM takes complete ownership of host policies, flow policies, the ASM range, unicast bandwidth reservation, and NBM external links. When a switch is discovered, DCNM overwrites any such configuration on the switch with what is defined on DCNM. The same happens when a switch reloads and comes back online; DCNM re-writes all policies, the ASM range, unicast bandwidth reservation, and external links. This is the default behavior of DCNM; it assumes ownership of all policy and global configuration. However, this behavior can be altered by modifying the DCNM server properties discussed later in this guide.

DCNM and switch connectivity options

DCNM can be installed as a VM (ova) or on bare metal (ISO). It has three network interfaces: Eth0, Eth1, and Eth2.

Eth0 is used to access the DCNM GUI or any external application, such as the broadcast controller, to communicate with DCNM. Eth1 is used for communication with the OOB management 0 interface of the Cisco Nexus switch. Eth2 is used when communication to a Nexus switch is done via in band (front-panel port).

In most deployments, Eth0 and Eth1 are in the same network along with the management interface of the Nexus switch. POAP works only on the Eth1 interface. Most deployments use Eth1 and the OOB management interface for network switch to DCNM communication (Figure 43).

OOB management connectivity option

Figure 43.         

OOB management connectivity option

In a few cases where in-band communication is the preferred choice for switch-to-DCNM communication (Figure 44), Eth2 is used. In this case, the routing table on DCNM must be modified to use Eth2 as the interface for in-band connectivity to the switch. There is a built-in CLI utility on DCNM that has to be used to set in-band connectivity. Use:

      appmgr setup inband – to configure Eth2 IP

      appmgr setup inband-route – to configure a static route on DCNM CentOS towards the switch in-band IP

      appmgr remove inband-route – to remove routes

In-band connectivity option

Figure 44.         

In-band connectivity option

When a switch is added into DCNM, DCNM configures the SNMP server on the switch that, by default, points to Eth0 IP (VIP) of DCNM. When using in-band, the SNMP server must be manually configured on the switch to point to Eth2 IP. This configuration can be added to all switches using CLI or a template shipped in DCNM.

snmp-server host dcnm-Eth2-ip traps version 2c public udp-port 2162

In addition, DCNM server property must be modified when using in-band. The property “trap.registaddress” must be set to DCNM ETH2 IP (or VIP when using Native HA).

DCNM server properties

DCNM server properties for PMN (IP Fabric for Media) can be accessed using the DCNM GUI. Navigate to server properties using DCNM > Administration > Server properties.

DCNM must be restarted every time a server property is changed in order to take effect.

appmgr restart dcnm —for Standalone deployment

appmgr restart ha-apps —for Native HA deployment

The PMN server properties in DCNM are:

      pmn.hostpolicy.multicast-ranges.enabled (set to false by default)

    By default, the host policy assumes a /32 mask for the multicast group IP

    Setting this flag to “true” enables the use of a mask for the group with the user specifying a sequence number for each policy

    All user-defined policies must be deleted and re-applied when this option is changed

      pmn.deploy-on-import-reload.enabled (set to true by default)

    DCNM assumes all ownership of the host policy, flow policy, ASM range, unicast reserve bandwidth, and external links

    When a switch is imported into DCNM, or when a switch reloads and comes back up, DCNM deletes all of these policies on the switch and re-applies the policies and configuration defined on DCNM

    The flag must be set to “false” if policies are configured on the switch using CLI

      trap.registaddress

    Sets the IP address the switch uses when sending SNMP traps

    By default, it is set to Eth0 IP. If the switch communicates with DCNM on Eth1 or Eth2 IP, this field must be populated with DCNM’s Eth1/Eth2 IP

If running DCNM in high-availability mode, this field must be populated with the respective interface VIP.

Precision Time Protocol (PTP) for time synchronization

Clock synchronization is extremely important in a broadcasting facility. All endpoints and IPGs that convert SDI to 2110 have to be synchronized to ensure they are able to switch between signals, convert signals from IP back to SDI, etc. If the clocks are not in sync, data samples can be lost, causing audio splats or the loss of video pixels.

PTP can be used to distribute the clock across the Ethernet fabric. PTP provides nanosecond accuracy and ensures all endpoints remain synchronized.

PTP works in a primary-secondary topology. In a typical PTP deployment, a PTP Grand Master (GM) is used as the reference. The GM is connected to the network switch. The network switch can be configured to act as a PTP boundary clock or a PTP transparent clock. In a boundary clock implementation, the switch synchronizes to the GM and acts as a primary for the devices connected to the switch. In a transparent clock implementation, the switch simply corrects the timing information in the PTP packet to include the transit delay as the packet traverses the switch. The PTP session is between the secondary and the GM (Figure 45).

Transparent clock versus boundary clock

Figure 45.         

Transparent clock versus boundary clock

To be able to scale, the PTP boundary clock is the preferred implementation of PTP in an IP fabric. This distributes the overall load across all the network switches instead of putting all the load on the GM, which can support only a limited number of secondaries.

It is always recommended to use two PTP GMs for redundancy. The same GM pair can be used to distribute the clock to a redundant fabric in a 2022-7 type of deployment.

There are two PTP profiles utilized in the broadcasting industry: AES67 and SMPTE 2059-2. A Nexus switch acting as a boundary clock supports both the 2059-2 and AES67 profiles, along with IEEE 1588v2.

Common rates that work across all profiles include:

      Sync interval: -3 (0.125 s, or 8 packets per second)

      Announce interval: 0 (1 per second)

      Delay request minimum interval: -2 (0.25 s, or 4 per second)

Example:

feature ptp

! ptp source IP can be any IP. If switch has a loopback, use the loopback IP as PTP source

ptp source 1.1.1.1

 

interface Ethernet1/1

  ptp

  ptp delay-request minimum interval smpte-2059-2 -2

  ptp announce interval smpte-2059-2 0

  ptp sync interval smpte-2059-2 -3

Grandmaster and passive clock connectivity

Figure 46.         

Grandmaster and passive clock connectivity

PTP implementation with a redundant network

Figure 47.         

PTP implementation with a redundant network

For further details on designing PTP for Media Networks, refer to the PTP design guide.

To integrate PTP monitoring on DCNM, the following additions are needed to the telemetry configuration:

telemetry

  destination-group 200

    ip address <dcnm_ip> port 50051 protocol gRPC encoding GPB

  sensor-group 300

    data-source NX-API

    path "show ptp brief"

    path "show ptp parent"

  sensor-group 301

    data-source NX-API

    path "show ptp corrections"

  subscription 300

    dst-grp 200

    snsr-grp 300 sample-interval 30000

    snsr-grp 301 sample-interval 10000

NBM and VRF

Using the concept of virtual routing and forwarding (VRF), network administrators can create multiple logical fabrics within the same physical network fabric. This is done by separating physical interfaces into different VRFs. Examples of such deployments include creating a 2022-7 network on the same physical topology (Figure 48) or running NBM in one VRF and non-NBM multicast in another. Lastly, a deployment model with NBM mode pim-active in one VRF and NBM mode pim-passive in another VRF is also possible (refer to the software release notes for the supporting release).

Before you associate an NBM VRF, create the VRF routing context (using the vrf context vrf-name command) and complete the unicast routing and PIM configurations.

nbm vrf vrf-name
  nbm mode pim-active|pim-passive
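
As an illustration, the following minimal sketch shows a 2022-7 style deployment with two NBM VRFs. The VRF names (red and blue), RP addresses, IP addresses, and interface numbers are purely illustrative; the unicast routing configuration is omitted, and the sketch assumes the PIM and NBM features are already enabled.

! Create the VRF routing contexts and PIM RPs (values are illustrative)
vrf context red
  ip pim rp-address 10.0.0.1 group-list 224.0.0.0/4
vrf context blue
  ip pim rp-address 10.0.1.1 group-list 224.0.0.0/4

! Enable NBM in each VRF
nbm vrf red
  nbm mode pim-active
nbm vrf blue
  nbm mode pim-active

! Assign endpoint-facing interfaces to the VRFs
interface Ethernet1/1
  no switchport
  vrf member red
  ip address 192.168.1.1/24
  ip pim sparse-mode

interface Ethernet1/2
  no switchport
  vrf member blue
  ip address 192.168.2.1/24
  ip pim sparse-mode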

Creating a 2022-7 type deployment using VRF

Figure 48.         

Creating a 2022-7 type deployment using VRF

Media Flow Analytics with RTP flow monitoring

To simplify the detection of packet loss on RTP or EDI flows, carrying either compressed or uncompressed media, Cisco Nexus 9000 series switches (FX/FXP/FX3/GX/GX2A/GX2B) can perform deep packet inspection and trigger a notification when a loss is detected. These notifications can be streamed to DCNM using telemetry. Details on this feature and how to configure it can be found in the Media Flow Analytics white paper.

The following configuration is needed on the Cisco Nexus 9000 switch to stream RTP/EDI flow-monitoring data to DCNM.

 

telemetry
  destination-group 200
    ip address <dcnm_ip> port 50051 protocol gRPC encoding GPB
  sensor-group 500
    data-source NX-API
    path "show flow rtp details"
    path "show flow rtp errors active"
    path "show flow rtp errors history"
  subscription 500
    dst-grp 200
    snsr-grp 500 sample-interval 30000

Integration between the broadcast controller and the network

The IP fabric is only a part of the entire solution. The broadcast controller is another important component that is also responsible for the overall functioning of the facility. With IP deployments, the broadcast controller can interface with the IP fabric to push host and flow policies as well as other NBM configurations such as ASM range, unicast bandwidth reservation, and external link. The broadcast controller does this by interfacing with DCNM or by directly interfacing with the network switch using a network API exposed by the Nexus Operating System (OS). The broadcast controller can also subscribe to notifications from the network layer and present the information to the operator.

The integration between the broadcast controller and the network helps simplify day-to-day operations and provides a complete view of the endpoints and the network in a single pane of glass.

Deployments in which there is no integration between the broadcast controller and the network are also supported and provide complete functionality. In such deployments, the NBM policies and configuration are provided directly through the DCNM GUI or the switch CLI. In addition, both DCNM and NX-OS on the switch expose APIs that enable policy and configuration provisioning using scripts or any other automation.

For a list of DCNM APIs, visit https://developer.cisco.com/site/data-center-network-manager/?version=11.0(1)

For a list of NBM APIs (IP Fabric for Media), visit https://developer.cisco.com/site/nxapi-dme-model-reference-api/
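
As a minimal sketch of the direct-to-switch option, the following commands enable NX-API on a Nexus switch so that a broadcast controller or automation script can push configuration and retrieve state over HTTPS; the port shown is the default and is given only as an example.

feature nxapi
nxapi https port 443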

Designing the control network

The control network can be divided into two segments. One segment is the fabric control network, which includes the network between the IP fabric and the DCNM. The other is the endpoint control network, which enables the broadcast controller to communicate with the endpoints and the DCNM media controller.

Figure 49 shows the logical network connectivity between the broadcast controller, endpoints, and DCNM.

The control network typically carries unicast control traffic between controllers and endpoints.

Control network

Figure 49.         

Control network

Deployment examples

The solution offers a flexible and scalable spine-and-leaf deployment in addition to a single-modular-chassis deployment. IP provides flexibility and the ability to move flows across studios that may be geographically distributed. It enables the move to Ultra HD (UHD) and beyond, the use of the same fabric for various media workflows, and other use cases such as resource sharing and remote production.

OBVAN: Deploying an IP fabric inside an outside broadcast production truck

OBVANs are mini studios and production rooms inside a truck that cover live events such as sports and concerts. Different events are covered in different formats, one may be HD and another UHD, and at every event location the endpoints are cabled up and later moved. The truck therefore requires operational simplicity and a dynamic infrastructure. A single modular switch, such as a Cisco Nexus 9508-R or 9504-R, is suitable for a truck (Figure 50).

OBVAN deployment

Figure 50.         

OBVAN deployment

Studio deployment

A studio deployment requires an infrastructure that is flexible and scalable. With SDI, several cables often have to be stretched across long distances, making the infrastructure very rigid. With IP, a single modular chassis can be used; however, the challenge of running many cables to one switch location remains. To provide flexibility, studio designs are therefore often deployed using a spine-and-leaf architecture. With this architecture, a leaf switch can be placed at each studio location, with one or two 100-Gbps fiber uplinks connecting the leaf to the spine. This model is similar to how a typical IT infrastructure is designed. The ability to move any flow across any link enables resource sharing, which means a few production control rooms can be used to control multiple studios at different times. The primary control room can also be connected to the same fabric. The spine-and-leaf model also scales: when a new studio is deployed, a leaf switch can simply be added to serve that facility (Figure 51).

Flexible spine-and-leaf studio deployment

Figure 51.         

Flexible spine-and-leaf studio deployment

Remote production and multi-site

IP simplifies the transport of flows across sites and locations. This enables remote production, a use case in which a production room at the main site produces an event being captured at a remote site. This can be accomplished by interconnecting the remote leaf to the central location over a service-provider link. The same architecture can also be used to interconnect an Outside Broadcast (OB) truck to a studio and move flows from the OB truck to the studio (Figures 52 and 53).

Remote leaf

Figure 52.         

Remote leaf

In large broadcast facilities that have affiliates across the country, the fabrics can be interconnected and flows can be transported across the facilities.

Multi-site deployment

Figure 53.         

Multi-site deployment
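
As a minimal sketch of such an interconnect, the link toward the remote site is marked as an NBM external link. The interface number and IP address below are illustrative, the unicast routing toward the remote site is omitted, and the complete multi-site configuration is described in the IP Fabric for Media solution guide.

! Link toward the remote site or service-provider network (values are illustrative)
interface Ethernet1/50
  no switchport
  ip address 172.16.10.1/30
  ip pim sparse-mode
  nbm external-link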

Live production and file workflow on the same IP fabric

The primary benefit of moving to IP is enabling production in higher definition. IP also helps consolidate different resources onto a single IP infrastructure. In deployments today, encoders that convert uncompressed video to a compressed format typically have an SDI interface connected to an SDI router, from which they receive uncompressed flows, and an IP interface connected to an IP fabric for compressed workflows. With production now done in IP, the same encoder can subscribe to an uncompressed 2110 stream, compress it, and transmit it back as a compressed stream on the same IP fabric. Other media assets that are virtualized and running on servers can simply be connected to the IP fabric, and IP storage can be plugged into the fabric as well. Using QoS, one type of traffic can easily be prioritized over another (Figure 54).

Converged fabric for media

Figure 54.         

Converged fabric for media
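
As an example of how such prioritization might be expressed, the following is a minimal QoS classification sketch for a converged fabric; the class names, DSCP values, and qos-group assignments are illustrative and must be aligned with the queuing policy actually deployed on the fabric.

! Classify uncompressed media and file-workflow traffic (values are illustrative)
class-map type qos match-all UNCOMPRESSED-MEDIA
  match dscp 34
class-map type qos match-all FILE-WORKFLOW
  match dscp 10

policy-map type qos MEDIA-QOS
  class UNCOMPRESSED-MEDIA
    set qos-group 7
  class FILE-WORKFLOW
    set qos-group 1

interface Ethernet1/10
  service-policy type qos input MEDIA-QOS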

Conclusion

The broadcast media and entertainment industry is going through a massive transformation with the move to IP. The move is happening now, and it is happening quickly. The industry brings unique challenges and requirements due to the nature of the workloads carried on the IP infrastructure. Along with multicast transport, there is a need to build a secure fabric with visibility into flows and fabric health. Cisco’s IP Fabric for Media addresses all of these requirements by offering both a flexible, scalable spine-and-leaf fabric and a single-modular-switch deployment. With the Cisco NBM feature, the solution offers reliable multicast transport as well as complete control over who is permitted to participate in the fabric. The multi-site feature enables remote production and makes it possible to move any workload anywhere. With open APIs and the flexibility to integrate with DCNM or directly with the switch, any third-party broadcast controller can interface with the network, abstracting its complexity and giving the end operator an unchanged experience with IP.

For more information

      Cisco Nexus 9000 Series NX-OS IP Fabric for Media Solution Guide, Release 9.x -  https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/9-x/ip_fabric_for_media/solution/guide/b_Cisco_Nexus_9000_Series_IP_Fabric_for_Media_Solution_Guide_9x.html.

      Cisco DCNM Media Controller User Guide, Release 11.0(1) - https://www.cisco.com/c/en/us/td/docs/dcn/dcnm/1151/configuration/ipfm/cisco-dcnm-ipfm-configuration-guide-1151.html.

      Cisco Nexus 9200 Platform Switches Data Sheet: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-735989.html.

      Cisco Nexus 9300-EX/FX/FX2/FX3/GX/GX2AB Platform Switches Data Sheet: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-736651.html.

      Cisco Nexus 9500 R-Series Data Sheet: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/datasheet-c78-738321.html.

 

 

 
