vDRA

Additional Directors to Handle Gy/Sy Traffic in vDRA

Feature Summary and Revision History

Table 1. Summary Data

Applicable Product(s) or Functional Area

vDRA

Applicable Platform(s)

VNF

Default Setting

Enabled – Configuration Required

Related Changes in This Release

Not Applicable

Related Documentation

Not Applicable

Table 2. Revision History

Revision Details

Release

First introduced

25.1.0

Feature Description

In the current deployment, various Virtual IPs (VIPs) manage different types of interface traffic, including Gx, Rx, Sd, Gy, and Sy.

To address the challenges associated with managing Gy/Sy traffic, the deployment is enhanced with two additional directors dedicated exclusively to handling Gy/Sy interface traffic.

Implementation Details:

  • The two new directors are configured to exclusively manage Gy/Sy interface traffic.

  • Existing directors will continue to handle Gx, Rx, Sd, and other traffic types, ensuring balanced and efficient traffic management.

  • The separation of IPC channels for Gy/Sy traffic prevents bottlenecks and enables the system to manage incoming traffic more effectively.

Steps to Add New Directors

Use the following steps to add new directors:

  1. Install New Virtual Machines:

    • Update the setup artifacts with the information for the additional directors.

    • Install the two new virtual machines to accommodate these directors.

  2. Configure VIPs for New Directors:

    This can be achieved in two ways.

    • Existing VIP configuration: If a dedicated VIP configuration is already present for Gy and Sy, update it by replacing the old director IPs with the new director IPs. Use the following CLI command to configure the VIP:

      network dra-distributor <NAME>
      service <EXISTING-SERVICE-NAME> virtual-router-id <ID>
      interface <INTERFACE-NAME> service-ip <VIRTUAL-IP>
      service-port <PORT> host <DISTRIBUTOR-IP>
      priority <PRIORITY>
      real-server <DIRECTOR-IP>

      Here is the sample configuration:

      admin@orchestrator[WPS-DRA-master](config)# network dra-distributor client service GySy virtual-router-id 10 interface ens160 service-ip 172.XX.XX.102 service-port 3868 host 172.XX.XX.104 priority 20
      admin@orchestrator[WPS-DRA-master](config-host-172.XX.XX.104)# exit
      admin@orchestrator[WPS-DRA-master](config-service-GySy)# host 172.XX.XX.109 priority 10
      admin@orchestrator[WPS-DRA-master](config-host-172.XX.XX.109)# exit
      admin@orchestrator[WPS-DRA-master](config-service-GySy)# real-server 172.XX.XX.103
      admin@orchestrator[WPS-DRA-master](config-real-server-172.XX.XX.103)# exit
      admin@orchestrator[WPS-DRA-master](config-service-GySy)# real-server 172.XX.XX.108
      admin@orchestrator[WPS-DRA-master](config-real-server-172.XX.XX.108)# commit
      Commit complete.
    • New VIP configuration: Add a new VIP configuration with the following CLI command:

      network dra-distributor <NAME>
      service <NEW-SERVICE-NAME> virtual-router-id <ID>
      interface <INTERFACE-NAME> service-ip <VIRTUAL-IP>
      service-port <PORT> host <DISTRIBUTOR-IP>
      priority <PRIORITY>
      real-server <DIRECTOR-IP>

      Here is the sample configuration:

      admin@orchestrator[WPS-DRA-master](config)# network dra-distributor client service GySy virtual-router-id 10 interface ens160 service-ip 172.XX.XX.102 service-port 3868 host 172.XX.XX.104 priority 20
      admin@orchestrator[WPS-DRA-master](config-host-172.XX.XX.104)# exit
      admin@orchestrator[WPS-DRA-master](config-service-GySy)# host 172.XX.XX.109 priority 10
      admin@orchestrator[WPS-DRA-master](config-host-172.XX.XX.109)# exit
      admin@orchestrator[WPS-DRA-master](config-service-GySy)# real-server 172.XX.XX.103
      admin@orchestrator[WPS-DRA-master](config-real-server-172.XX.XX.103)# exit
      admin@orchestrator[WPS-DRA-master](config-service-GySy)# real-server 172.XX.XX.108
      admin@orchestrator[WPS-DRA-master](config-real-server-172.XX.XX.108)# commit
      Commit complete.

      Update PB configuration: Add the configuration for the new VIP to the Policy Builder (PB) page.

  3. Include in VMDK Upgrade Sets: Ensure the new directors are included in the Virtual Machine Disk (VMDK) upgrade sets for consistency and future upgrades.

For more information on the configuration, see the DRA Distributor Configuration chapter in the CPS vDRA Configuration Guide.

Weave Replacement with Docker Overlay Network Driver in vDRA

Feature Summary and Revision History

Table 3. Summary Data

Applicable Product(s) or Functional Area

CPS vDRA

Applicable Platform(s)

Not Applicable

Default Setting

Enabled – Configuration Required

Related Changes in This Release

Not Applicable

Related Documentation

  • CPS vDRA Configuration Guide

  • CPS vDRA Operations Guide

Table 4. Revision History

Revision Details

Release

First introduced

25.1.0

Feature Description

This feature outlines the transition from Weave, a third-party software, to a new Container Network Interface (CNI) solution for vDRA. This transition is necessitated by the shutdown of Weaveworks, the provider of Weave software, which was essential for enabling communication between containers across Virtual Machines (VMs) in the vDRA solution.

Prerequisites

Before you migrate from Weave software to Docker Overlay, complete the following:

  • Verify that the system and all container services are fully operational and healthy.

  • Ensure that the cps.pem file is present both in /home/cps/ and in /data/keystore/ inside the orchestrator container.

  • Execute the network refresh-overlay-config CLI command before starting the network migration to back up the existing overlay-scripts folder and re-create the latest files, as shown in the example after this list.
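
For example, assuming the same orchestrator CLI prompt shown in the samples earlier in this chapter (the hostname is illustrative), the command is run as follows; the output is omitted here:

  admin@orchestrator[WPS-DRA-master]# network refresh-overlay-config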

The Weave and Docker Overlay networks cannot coexist in the same site for communication among containers across VMs: VMs and containers running on the Docker Overlay network cannot reach containers on the Weave network. Hence, Weave remains the default network when upgrading to the 25.1.0 version. Migration from Weave to Docker Overlay can be initiated only after the site is completely upgraded to 25.1.0.

Upgrading to 25.1.0:

  • Weave is the default network when the site is upgraded to 25.1.0.

  • Migration from Weave to Overlay network can be done only after upgrading the site to 25.1.0.

  • Initiate the migration from Weave to Overlay at each site sequentially.

Downgrading from 25.1.0:

  • Initiate the migration from Overlay to Weave network at each site sequentially.

  • Verify that all the VMs are running on the Weave network and that the system is 100% up.

  • Initiate the downgrade to 24.2.0.

Configure Docker Overlay Using CLI Command

The feature allows different VNFs to be migrated between Weave and Docker Overlay without service disruption. Use the following CLI commands to enable and disable the network options:

  • network migrate-to-overlay true - To enable the Docker Overlay network.

  • network migrate-to-weave true - To enable the Weave network.

  • network detach-weave true - To detach the Weave network after migrating to Overlay network.

  • network detach-overlay true - To detach the Docker Overlay network after migrating to Weave network.

  • network migration-status - To verify the current migration status.

  • network refresh-overlay-config - To back up the existing overlay-scripts folder and re-create the latest Overlay script configuration files.
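
A representative Weave-to-Overlay migration sequence using these commands is sketched below; the prompt is illustrative and the command output is omitted. Confirm the migration status before detaching the Weave network:

  admin@orchestrator[WPS-DRA-master]# network migrate-to-overlay true
  admin@orchestrator[WPS-DRA-master]# network migration-status
  (wait until the status shows that the migration is complete)
  admin@orchestrator[WPS-DRA-master]# network detach-weave true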


Note


  • During migration, the traffic must be switched to the other site.

  • By default, the Weave network is enabled.

  • When you execute a migration CLI command, the currently active network is disabled.

  • The Weave software is not completely disabled. It remains operational for particular scripts and commands, such as weave status connections.

  • During CNI migration, running containers are re-created when the network is enabled or disabled.
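
Similarly, for the downgrade path described earlier, a sketch of the reverse Overlay-to-Weave sequence (illustrative prompt, output omitted) is:

  admin@orchestrator[WPS-DRA-master]# network migrate-to-weave true
  admin@orchestrator[WPS-DRA-master]# network migration-status
  (verify that all VMs are back on the Weave network and the system is 100% up)
  admin@orchestrator[WPS-DRA-master]# network detach-overlay true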


For more information, refer to the following guides:

  • CPS vDRA Configuration Guide

  • CPS vDRA Operations Guide