Multiple Site Model
The Cisco IPICS multiple site model consists of a single Cisco IPICS server that provides services for two or more sites and that uses the IP WAN to transport multicast IP voice traffic between the sites. The IP WAN also carries call control signaling between the central site and the remote sites.
Multicast may be enabled between sites, but it is not required. Multiple sites connected by a multicast-enabled WAN are in effect a topologically different case of the single site model, because there is only one multicast domain. The main difference between multiple site model deployments is whether the connecting core network is a service provider network that employs Multiprotocol Label Switching (MPLS). If it is, MPLS with multicast VPNs is deployed to produce a single multicast domain between sites. Multiple sites with no native multicast support between them can employ Multicast over Generic Routing Encapsulation (GRE) tunnels. In either case, IPSec VPNs can also be configured between sites to secure inter-site traffic.
Figure 8-2 illustrates a typical Cisco IPICS multiple site deployment, with a Cisco IPICS server at the central site and an IP WAN to connect all the sites.
Figure 8-2 Multiple Site Model
In the multiple site model, connectivity options for the IP WAN include the following:
- Leased lines
- Frame Relay
- Asynchronous Transfer Mode (ATM)
- ATM and Frame Relay Service Inter-Working (SIW)
- MPLS Virtual Private Network
- Voice and Video Enabled IP Security Protocol (IPSec) VPN (V3PN)
Routers that reside at the edges of the WAN require quality of service (QoS) mechanisms, such as priority queuing and traffic shaping, to protect the voice traffic from the data traffic across the WAN, where bandwidth is typically scarce.
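The following is a minimal sketch of such a WAN-edge QoS policy, using the Cisco IOS Modular QoS CLI. The class and policy names, the interface, the shaping rate, the priority percentage, and the assumption that Cisco IPICS voice traffic is marked DSCP EF are all illustrative; adjust them to the actual deployment.
class-map match-any IPICS-VOICE
 match ip dscp ef
!
policy-map WAN-EDGE-QUEUING
 class IPICS-VOICE
  priority percent 33
 class class-default
  fair-queue
!
policy-map WAN-EDGE-SHAPE
 class class-default
  shape average 1536000
  service-policy WAN-EDGE-QUEUING
!
interface Serial0/2
 service-policy output WAN-EDGE-SHAPE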
This section includes these topics:
- MPLS with Multicast VPNs
- Multicast over GRE
- VPN Termination for Mobile Clients
MPLS with Multicast VPNs
MPLS does not support native multicast in an MPLS VPN. This section discusses a technique for enabling multicast across an MPLS core. It assumes that the unicast MPLS core and the VPN have been configured and are operating properly, and that you are familiar with IP multicast and MPLS. For additional information about these topics, refer to the related IP multicast and MPLS documentation on Cisco.com.
Figure 8-3 illustrates the topology that is discussed in this section.
Figure 8-3 MPLS with Multicast VPNs
The following terms apply to MPLS:
- Customer Edge Router (CE)—Router at the edge of a customer network that has interfaces to at least one Provider Edge (PE) router.
- Data Multicast Distribution Tree (MDT)—Tree created dynamically when active sources in the network send to active receivers located behind separate PE routers. A data MDT connects only the PE routers that are attached to CE routers with active sources or with receivers of traffic from active sources, or that are directly attached to such sources or receivers.
- Default-MDT—Tree created by the multicast virtual private network (MVPN) configuration. The Default-MDT is used for customer control plane and low-rate data plane traffic. It uses Multicast VPN Routing and Forwarding instances (MVRFs) to connect all of the PE routers in a particular multicast domain (MD). One Default-MDT exists in every MD, whether or not there is an active source in the respective customer network.
- LEAF—Describes the recipient of multicast data. The source is thought of as the root and the destination as the leaf.
- Multicast domain (MD)—Collection of MVRFs that can exchange multicast traffic
- Multicast VPN Routing and Forwarding instance (MVRF)—Used by a PE router to determine how to forward multicast traffic for a particular VPN across the MPLS core.
- Provider Router (P)—Router in the core of the provider network that has interfaces only to other P routers and other PE routers
- Provider Edge Router (PE)—Router at the edge of the provider network that has interfaces to other P and PE routers and to at least one CE router
- PIM-SSM—PIM Source Specific Multicast
MVPN Basic Concepts
The following basic concepts are key to understanding MVPN:
- A service provider has an IP network with its own unique IP multicast domain (P-Network).
- The MVPN customer has an IP network with its own unique IP multicast domain (C-Network).
- The Service Provider MVPN network forwards the customer IP multicast data to remote customer sites. To do so, the service provider encapsulates customer traffic (C-packets) inside P-packets at the service provider PE. The encapsulated P-packet is then forwarded to remote PE sites as native multicast inside the P-Network.
- During the process of forwarding encapsulated P-packets, the P-Network has no knowledge of the C-Network traffic. The PE is the device that participates in both networks. (There may be more than one Customer Network per PE.)
VPN Multicast Routing
A PE router in an MVPN network has several routing tables. There is one global unicast/multicast routing table and a unicast/multicast routing table for each directly connected MVRF.
Multicast domains are based on the principle of encapsulating multicast packets from a VPN in multicast packets to be routed in the core. Because multicast is used in the core network, PIM must be configured in the core. PIM-SM, PIM-SSM, and PIM-BIDIR are supported inside the provider core for MVPN. PIM-SM or PIM-SSM is the recommended PIM option in the provider core, because PIM-BIDIR is not supported on all platforms. PIM-SM, PIM-SSM, PIM-BIDIR, and PIM dense mode are supported inside the MVPN. MVPN leverages Multicast Distribution Trees (MDTs). An MDT is sourced by a PE router and has a multicast destination address. PE routers that have sites of the same MVPN send traffic onto a default MDT and join it to receive traffic.
In addition, the Default-MDT is a tree that is always on and that transports PIM control traffic, dense-mode traffic, and RP-tree (*,G) traffic. All PE routers configured with the same default-MDT receive this traffic.
Data MDTs are trees that are created on demand and that are joined only by the PE routers that have interested receivers for the traffic. Data MDTs can be created based on either a traffic-rate threshold or a source-group pair. Default-MDTs must have the same group address for all VPN Routing and Forwarding instances (VRFs) that make up an MVPN. Data MDTs may have the same group address if PIM-SSM is used. If PIM-SM is used, they must have different group addresses, because using the same address could result in a PE router receiving unwanted traffic.
Configuring the Provider Network for MVPN
This section provides an example of how to configure a provider network for MVPN.
The following steps enable an MVPN in the provider network and refer to the topology illustrated in Figure 8-3. In these steps, the customer VPN is called “ipics.”
Step 1 Choose the PIM mode for the provider network.
Cisco recommends PIM-SSM as the protocol in the core. PIM-SSM requires additional BGP configuration for source discovery (the source-discovery attribute): a route distinguisher (RD) type is used to advertise the source of the MDT together with the MDT group address. PIM-SM has been the most widely deployed multicast protocol and has been used for both sparsely and densely populated application requirements. PIM-SSM is based on PIM-SM but operates without the initial shared tree and the subsequent cutover to the shortest path tree. Either PIM-SSM or PIM-SM is suitable for the default MDT.
When bidirectional PIM support becomes available on all relevant hardware, it will be the recommendation for the default MDT. For the data MDT, either PIM-SM or PIM-SSM is suitable. PIM-SSM is simpler to deploy than PIM-SM because it does not require a rendezvous point, and the provider network is a known and stable group of multicast devices. Cisco recommends the use of PIM-SSM for provider core deployment. This configuration example uses PIM-SSM in the core.
Step 2 Choose the VPN group addresses used inside the provider network:
The default PIM-SSM range is 232/8. However, this address range is designed for global use in the Internet. For use within a private domain, you should instead use an address from the administratively scoped multicast range (as recommended in RFC 2365). Using a private address range makes it simpler to filter on boundary routers. Cisco recommends using 239.232/16, because addresses in this range are easily recognizable as both private addresses and SSM addresses by the 232 in the second octet. In the design discussed in this document, the range is divided between the default-MDT and the data MDT. (The data MDT is discussed in the “VPN Multicast Routing” section.) Default-MDTs use 239.232.0.0 through 239.232.0.255, and data MDTs use 239.232.1.0 through 239.232.1.255. This address range provides support for up to 255 MVRFs per PE router.
Step 3 Configure the provider network for PIM-SSM.
The following commands enable a basic PIM-SSM service.
- On all P and PE routers, configure these commands globally:
ip pim ssm range multicast_ssm_range
ip access-list standard multicast_ssm_range
permit 239.232.0.0 0.0.1.255
- On all P interfaces and PE interfaces that face the core, enable PIM sparse mode.
- On each PE router, enable PIM sparse mode on the loopback interface that is used to source the BGP session.
A consolidated sketch of these commands appears after this list.
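The following is a minimal single-router sketch of Step 3. The interface names are illustrative, and the use of ip pim sparse-mode on the core-facing and loopback interfaces, along with global ip multicast-routing, is an assumption consistent with a PIM-SSM core rather than an exact configuration from this guide.
ip multicast-routing
ip pim ssm range multicast_ssm_range
!
ip access-list standard multicast_ssm_range
 permit 239.232.0.0 0.0.1.255
!
interface FastEthernet0/0
 ! core-facing interface (illustrative name)
 ip pim sparse-mode
!
interface Loopback0
 ! sources the BGP session (PE routers only)
 ip pim sparse-mode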
Step 4 Configure the MDT on the VRF.
- To configure the MDT on the VRF, add the mdt default command to the VRF definition on all PE routers for the VRF ipics (see the sketch after this list).
- To enable multicast routing for the VRF, configure this command:
ip multicast-routing vrf ipics
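A minimal sketch of the VRF definition with the default MDT follows. The group address 239.232.0.0 is an assumption taken from the default-MDT range chosen in Step 2, and the route distinguisher 65019:1 is taken from the BGP output shown later in this section; adjust both to match the actual deployment.
ip vrf ipics
 rd 65019:1
 mdt default 239.232.0.0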
Step 5 Configure the PIM mode inside the VPN.
The PIM mode inside the VPN depends on what type of PIM the VPN customer is using. Cisco provides automatic discovery of the group-mode used inside the VPN via auto-rp or bootstrap router (BSR), which requires no additional configuration. Optionally, a provider may choose to provide the RP for the customer by configuring the PE router as an RP inside the VPN. In the topology discussed in this section, the VPN customer provides the RP service and the PE routers will automatically learn the group-to-rendezvous point (RP) via auto-rp.
Configure all PE-CE interfaces for sparse-dense mode, which ensures that either auto-rp or BSR messages are received and forwarded, and which allows the PE to learn the group-to-rendezvous point (RP) mapping inside the VPN. To do so, configure the following command on all customer-facing interfaces:
ip pim sparse-dense-mode
Verifying the Provider Network for MVPN
After you complete the configuration as described in the “Configuring the Provider Network for MVPN” section, use the following procedure to verify that the configuration is correct:
Step 1 Verify BGP updates.
BGP provides for source discovery when SSM is used, which is known as a BGP-MDT update. To verify that all BGP-MDT updates have been received correctly on the PE routers, take either of these actions:
- Use the show ip pim mdt bgp command:
Peer (Route Distinguisher + IPv4) Next Hop
2:65019:1:10.32.73.248 10.32.73.248 (PE-2 Loopback)
2:65019:1:10.32.73.250 10.32.73.250 (PE-3 Loopback)
2:65019:1 indicates the RD-type (2) and RD (65019:1) that is associated with this update. The remaining output is the address that is used to source the BGP session.
- Use the show ip bgp vpnv4 all command:
PE1#show ip bgp vpnv4 all
BGP table version is 204, local router ID is 10.32.73.247
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 65019:1 (default for vrf ipics)
*>i10.32.72.48/28 10.32.73.248 0 100 0 ?
Route Distinguisher: 2:65019:1
*> 10.32.73.247/32 0.0.0.0 0 ?
*>i10.32.73.248/32 10.32.73.248 0 100 0 ?
*>i10.32.73.250/32 10.32.73.250 0 100 0 ?
Step 2 Verify the global mroute table.
Use the show ip mroute mdt-group-address command to verify that there is a (Source, Group) entry for each PE router. Because PIM-SSM is used, the source is the loopback address used to source the BGP session and the group is the MDT group address that is configured. Without traffic, only default-MDT entries are visible.
PE1#show ip mroute 239.232.0.0
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Interface state: Interface, Next-Hop or VCD, State/Mode
(10.32.73.247, 239.232.0.0), 1w0d/00:03:26, flags: sTZ
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 1w0d/00:02:47
(10.32.73.248, 239.232.0.0), 1w0d/00:02:56, flags: sTIZ
  Incoming interface: FastEthernet0/0, RPF nbr 10.32.73.2
  Outgoing interface list:
    MVRF ipics, Forward/Sparse, 1w0d/00:01:30
(10.32.73.250, 239.232.0.0), 1w0d/00:02:55, flags: sTIZ
  Incoming interface: FastEthernet0/0, RPF nbr 10.32.73.2
  Outgoing interface list:
    MVRF ipics, Forward/Sparse, 1w0d/00:01:29
Verify that the s flag is set on each (S,G) entry, which indicates that the group is used in SSM mode. Verify that the Z flag is set, which indicates that this PE router is a leaf of the multicast tunnel. When the router is a leaf of a multicast tunnel, it must perform additional lookups to determine which MVRF to forward the traffic to, because it is essentially a receiver for this traffic. Verify that the I flag is set for the remote PE (S,G) entries. This flag indicates that the router understands that it is joining an SSM group, as though an IGMPv3 host had requested to join that particular channel.
Step 3 Verify PIM neighbors in the global table.
Use the show ip pim neighbor command on all PE and P routers to verify that the PIM neighbors are set up properly in the global table.
Neighbor Address Interface Uptime/Expires Ver DR Prio/Mode
10.32.73.2 FastEthernet0/0 1w4d/00:01:21 v2 1 / DR
10.32.73.70 Serial0/2 1w4d/00:01:29 v2 1 / S
Step 4 Verify PIM neighbors inside the VPN.
Use the show ip pim vrf ipics neighbor command on all PE routers to verify that the CE router is seen as a PIM neighbor and that the remote PE routers are seen as PIM neighbors over the tunnel.
PE1#show ip pim vrf ipics neighbor
Neighbor Address Interface Uptime/Expires Ver DR Prio/Mode
10.32.73.66 Serial0/0 1w3d/00:01:18 v2 1 / S
10.32.73.248 Tunnel0 3d17h/00:01:43 v2 1 / S
10.32.73.250 Tunnel0 1w0d/00:01:42 v2 1 / DR S
Step 5 Verify the VPN group-to-rendezvous point (RP).
The main customer site has been configured to use auto-rp within the VPN. VPN IPICS is using the multicast range 18.104.22.168 - 79 for channels and VTGs. The following commands are configured on the customer router that provides the RP service:
ip pim send-rp-announce Loopback0 scope 16 group-list multicast_range
ip pim send-rp-discovery scope 16
ip access-list standard multicast_range
permit 22.214.171.124 0.0.0.15
Use the show ip pim vrf ipics rp mapping command to verify that the PE router correctly learned the RP mapping information from the VPN.
PE1#show ip pim vrf ipics rp map
RP 10.32.72.248 (?), v2v1
Info source: 10.32.73.62 (?), elected via Auto-RP
Uptime: 1w3d, expires: 00:02:54
This output shows that the PE router has correctly learned the group-to-rendezvous point (RP) mapping that is used inside the VPN. The default-MDT reaches all PE routers, and the multicast replication is performed in the core of the provider network. With only a default-MDT configured, traffic goes to all PE routers, regardless of whether they want to receive it.
Optimizing Traffic Forwarding: Data MDT
Data MDT is designed to optimize traffic forwarding. A data MDT is a multicast tree that is constructed on demand. The conditions to create a data MDT are based either on a traffic-load threshold measured in kbps or on an access list that specifies certain sources inside the VPN. A data MDT is created only by the PE router that has the source connected to its site. The data MDT conditions do not have to be configured; however, when no conditions are set, a data MDT is created for each (S,G) inside the VPN. Each data MDT requires resources from the router, so it is recommended that you not create one just because a source exists. A non-zero threshold is recommended, because it requires an active source to trigger the creation of the data MDT. The maximum number of Multicast VPN Routing/Forwarding (MVRF) entries is 256.
To configure the data MDT under the VRF, use one of the ranges described in Step 2 in the “Configuring the Provider Network for MVPN” section. A maximum of 256 addresses is allowed per VRF. This limitation is an implementation choice, not a protocol limitation. Because SSM is used, the data MDT address range may be the same on all PE routers for the same VPN. Use an inverse mask to specify the number of addresses used for the data MDT, as shown in the following command:
mdt data 239.232.1.0 0.0.0.255 threshold 1
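For reference, the following sketch shows the data MDT in the context of the full VRF definition, under the same addressing assumptions used earlier in this section (the default-MDT group 239.232.0.0 is an assumption for illustration):
ip vrf ipics
 mdt default 239.232.0.0
 mdt data 239.232.1.0 0.0.0.255 threshold 1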
Verifying Correct Data MDT Operation
Data MDTs create mroute entries in the global table. There also are specific commands for verifying functionality of the sending and receiving PE router. To verify the data MDT operation, there must be multicast traffic between sites that exceeds the configured threshold. An easy way to test the data MDT is to statically join a multicast group in one site and then ping that group from another site, as shown in the following example:
interface Loopback0
 ip address 10.32.72.248 255.255.255.255
 ip igmp join-group 188.8.131.52
Then, from a router in the other site, ping the same group:
ping 188.8.131.52 size 500 repeat 100
To verify the data MDT operation, perform the following procedure:
Step 1 Verify the sending PE router.
Use the show ip pim vrf ipics mdt send command on the sending PE router (PE2) to verify the setup of a data MDT.
PE2#show ip pim vrf ipics mdt send
MDT-data send list for VRF: ipics
(source, group) MDT-data group ref_count
(10.32.72.244, 220.127.116.11) 18.104.22.168 1
(10.32.73.74, 22.214.171.124) 126.96.36.199 1
Step 2 Verify the receiving PE router.
Use the show ip pim vrf ipics mdt receive detail command on the receiving PE router (PE1) to verify that this router is receiving on a data MDT.
PE1#show ip pim vrf ipics mdt receive
Joined MDT-data [group : source] for VRF: ipics
[188.8.131.52 : 10.32.73.248] ref_count: 1
[184.108.40.206 : 10.32.73.248] ref_count: 1
At this point, if everything is correctly configured, the sites in VPN IPICS can transfer multicast traffic by using the MVPN, and all sites are now in the same multicast domain. Therefore, all channels and users on the Cisco IPICS server can be configured with the same location.
A multicast island is a site in which multicast is enabled. A multi-site deployment can consist of several multicast islands that connect to each other over unicast-only connections. See Figure 8-4.
Figure 8-4 Multicast Islands
Multicast over GRE
Multicast over GRE provides multicast support between islands. This section provides an overview of how to configure multicast over GRE. Figure 8-5 illustrates a Cisco IPICS deployment with multicast over GRE.
Figure 8-5 Multicast over a GRE Tunnel
A tunnel is configured between the gateway in Site 1 and the gateway in Site 2, sourced from their respective Loopback0 interfaces. The ip pim sparse-dense-mode command is configured on the tunnel interfaces, and multicast routing is enabled on the gateway routers. Sparse-dense mode configuration on the tunnel interfaces allows sparse-mode or dense-mode packets to be forwarded over the tunnel, depending on the RP configuration for the group.
The following examples show the configuration that is required to implement multicast over GRE between Site 1 and Site 2. Use the same approach between Site 1 and Site 3, and between Site 2 and Site 3. In each case, the tunnel is sourced from the local gateway Loopback0 interface and its destination is the remote gateway Loopback0 address.
Site 1 gateway (Loopback0 address 220.127.116.11):
interface Tunnel0
 ip address 192.168.3.1 255.255.255.252
 ip pim sparse-dense-mode
 tunnel source Loopback0
 tunnel destination 18.104.22.168
Site 2 gateway (Loopback0 address 22.214.171.124):
interface Tunnel0
 ip address 192.168.3.2 255.255.255.252
 ip pim sparse-dense-mode
 tunnel source Loopback0
 tunnel destination 126.96.36.199
When you configure PIM sparse mode over a tunnel, make sure to follow these guidelines:
- For successful RPF verification of multicast traffic flowing over the shared tree (*,G) from the RP, configure the ip mroute rp-address nexthop command for the RP address, pointing to the tunnel interface.
For example, assume that Site 1 has the RP (RP address 10.1.1.254). In this case, the mroute on the gateway in Site 2 would be the ip mroute 10.1.1.254 255.255.255.255 tunnel 0 command, which ensures a successful RPF check for traffic flowing over the shared tree.
- For successful RPF verification of multicast (S,G) traffic flowing over the Shortest Path Tree (SPT), configure the ip mroute source-address nexthop command for the multicast sources, pointing to the tunnel interface on each gateway router.
In this case, when SPT traffic flows over the tunnel interface, an ip mroute 10.1.1.0 255.255.255.0 tunnel 0 command is configured on the Site 2 gateway and an ip mroute 10.1.2.0 255.255.255.0 tunnel 0 command is configured on the Site 1 gateway. This configuration ensures successful RPF verification for incoming multicast packets over the Tu0 interface. The sketch that follows this list consolidates these static mroutes.
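Taken together, and assuming the example RP address (10.1.1.254) and source subnets (10.1.1.0/24 in Site 1 and 10.1.2.0/24 in Site 2) used above, the static mroutes are as follows:
Site 2 gateway:
ip mroute 10.1.1.254 255.255.255.255 Tunnel0
ip mroute 10.1.1.0 255.255.255.0 Tunnel0
Site 1 gateway:
ip mroute 10.1.2.0 255.255.255.0 Tunnel0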
Bandwidth Considerations when using Multicast over GRE
Cisco IPICS can operate with either the G.711 or the G.729 codec. Table 8-1 lists the bandwidth requirements for a voice call over unicast connection trunks, based on the codec used, the payload size, and whether cRTP, VAD, or both are configured.
Table 8-1 Bandwidth Considerations for Unicast Connection Trunks (column headings: Full Rate Bandwidth (kbps), Bandwidth with cRTP (kbps), Bandwidth with VAD (kbps), Bandwidth with cRTP and VAD (kbps), listed per codec and payload size)
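As background on how such per-call figures are typically derived (a general VoIP calculation, not values taken from Table 8-1): bandwidth per call = (codec payload bytes + 40 bytes of IP/UDP/RTP header + Layer 2 overhead) * 8 bits * packets per second. For example, G.729 with a 20-ms payload sends 50 packets per second of 20 payload bytes each, so the Layer 3 bandwidth is (20 + 40) * 8 * 50 = 24 kbps before Layer 2 overhead. cRTP compresses the 40-byte header to 2 to 4 bytes, and VAD suppresses packets during silence, which is why both options reduce the values in the table.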
Bandwidth consumption across a tunnel depends on the number of active channels and VTG users that are communicating between the sites.
The following cases are examples of how to calculate bandwidth use across a tunnel.
Case 1: Active channel in Site 1 and Site 2.
All users in Site 1 are using one channel, and all users in Site 2 are using another channel. No multicast voice flows across the tunnel.
Case 2: Active channel has n users in Site 1 and m users in Site 2.
In this case, the channel's multicast traffic must cross the tunnel in both directions. Call bandwidth is the per-call bandwidth value from Table 8-1.
Bandwidth 1 = Call bandwidth * n (flow from Site 1 to Site 2)
Bandwidth 2 = Call bandwidth * m (flow from Site 2 to Site 1)
Total bandwidth = Bandwidth 1 + Bandwidth 2
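For example, assuming a per-call bandwidth of roughly 26.4 kbps (an illustrative G.729 figure; use the exact value from Table 8-1 for the codec and options in use), a channel with n = 5 users in Site 1 and m = 3 users in Site 2 would consume approximately 26.4 * 5 = 132 kbps from Site 1 to Site 2 and 26.4 * 3 = 79.2 kbps from Site 2 to Site 1, for a total of about 211 kbps across the tunnel.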
Depending on the number of active channels, the number of active users per channel, and whether the channel spans multiple sites, the bandwidth usage could be significant.
IPSec VPNs can be implemented over multicast GRE tunnels. See Figure 8-6.
Figure 8-6 IPSec over Multicast GRE Tunnels
There are a number of ways to configure IPSec over GRE tunnels. Refer to the appropriate Cisco documentation.
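As one illustrative approach (a minimal sketch only, with placeholder names and an assumed pre-shared key; it is not the only or the recommended method), an IPSec profile can be applied directly to the GRE tunnel with tunnel protection:
crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 14
crypto isakmp key <pre-shared-key> address 0.0.0.0 0.0.0.0
!
crypto ipsec transform-set GRE-TSET esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile GRE-PROT
 set transform-set GRE-TSET
!
interface Tunnel0
 tunnel protection ipsec profile GRE-PROT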
A multicast singularity is a restrictive case of the multicast island scenario. Between sites, multicast routing is not enabled. Within a site, multicast is enabled only on Cisco IPICS specific devices: UMS, LMR gateways, and Cisco Unified IP Phones. These Cisco IPICS devices reside in a multicast singularity, as shown in Figure 8-7.
Figure 8-7 Multicast Singularities
The singularities can be connected by using multicast over GRE tunnels (as shown in Figure 8-8).
Figure 8-8 Multicast Singularities with GRE Tunnels
The configuration of the multicast over GRE tunnel is identical to the multicast island scenario, except that the tunnel must be configured between the routers that connect the multicast singularities rather than between the site gateway routers, because the gateway routers are not enabled for multicast.
The following rules apply to a multicast singularity:
1. All UMSs and LMR gateways must reside in a multicast singularity. That is, these devices must be on directly connected multicast enabled LANs.
2. All users within the multicast singularity can use a Cisco Unified IP Phone because they are in the multicast enabled zone.
3. Users outside the multicast singularity can use the mobile client.
4. Users outside the multicast singularity cannot use the Cisco Unified IP Phone because this device supports only multicast.
It is possible to have multiple multicast singularities within the same site, and these singularities can be connected with multicast over GRE tunnels. Whether to deploy this way depends on the policies of the organization.
VPN Termination for Mobile Clients
Cisco IPICS Mobile Clients expand the types of devices that can access the network and provide access to the network from virtually anywhere on the Internet.
For information about the ports and transport protocols that Cisco IPICS mobile clients use, see Table 2-2.
In a secure campus network, mobile clients work over WiFi. As the network expands and access shifts to 3G/4G and LTE, an additional level of protection is required. Cisco offers that protection by using the Cisco AnyConnect Mobile VPN Client and Cisco Adaptive Security Appliance (ASA) platforms to create a VPN tunnel between the endpoints and Cisco IPICS.
The VPN tunnel encapsulates and encrypts the traffic and provides the added advantage of overcoming issues with NAT traversal through the carrier network. A Cisco IPICS session that runs over a VPN tunnel is viewed by the service provider as a data call, not a voice call, because the tunneled payload is a data service running on the mobile client.