Enable Geo Redundancy Solution

This chapter contains the following topics:

Geo redundancy inventory file guidelines

You must ensure accuracy while preparing the inventory file (.yaml) for geo redundancy activation.

Here are some guidelines to assist you.

  1. Add these fields to the inventory file for each cluster's unified endpoint:

    data_vip: <>
    data_vip_mask: <>
    management_vip: <>
    management_vip_mask: <>
  2. When enabling geo redundancy on a setup with is_skip_peer_check_enabled set to true, update each cluster's unified endpoint connectivity details based on the deployment model (IPv4, IPv6, or dual stack). For dual stack deployments, IPv6 addresses are given preference.

    If the cluster's unified endpoint type is set to FQDN, ensure that these fields are populated in the inventory file according to the setup configuration.

    management_fqdn
    data_fqdn

    If the cluster's unified endpoint type is set to IP, ensure that these fields are populated in the inventory file according to the setup configuration.

    Table 1. Unified endpoint connectivity details

    Field                           Supported deployment model
    data_vip_ipv4: <>               IPv4, dual stack
    data_vip_ipv4_mask: <>          IPv4, dual stack
    management_vip_ipv4: <>         IPv4, dual stack
    management_vip_ipv4_mask: <>    IPv4, dual stack
    data_vip_ipv6: <>               IPv6, dual stack
    data_vip_ipv6_mask: <>          IPv6, dual stack
    management_vip_ipv6: <>         IPv6, dual stack
    management_vip_ipv6_mask: <>    IPv6, dual stack

  3. For IPv4 deployment, ensure the values of these fields match in the cluster's unified endpoint:

    • data_vip must match data_vip_ipv4

    • data_vip_mask must match data_vip_ipv4_mask

    • management_vip must match management_vip_ipv4

    • management_vip_mask must match management_vip_ipv4_mask

  4. For dual stack deployment, ensure the values of these fields match in the cluster's unified endpoint, as shown in the example after this list:

    • data_vip must match data_vip_ipv6

    • data_vip_mask must match data_vip_ipv6_mask

    • management_vip must match management_vip_ipv6

    • management_vip_mask must match management_vip_ipv6_mask
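
For example, in a dual stack deployment, a cluster's unified endpoint section might look like the following fragment. This is an illustrative sketch only: the addresses and masks are placeholders, and the surrounding structure of the file is defined by the inventory template for your setup.

# Illustrative dual stack unified endpoint fragment (placeholder values).
# Per guideline 4, data_vip and management_vip repeat the *_ipv6 values.
data_vip: 2001:db8:10::100
data_vip_mask: 64
management_vip: 2001:db8:20::100
management_vip_mask: 64
data_vip_ipv4: 192.168.7.100
data_vip_ipv4_mask: 24
management_vip_ipv4: 192.168.6.100
management_vip_ipv4_mask: 24
data_vip_ipv6: 2001:db8:10::100
data_vip_ipv6_mask: 64
management_vip_ipv6: 2001:db8:20::100
management_vip_ipv6_mask: 64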

Geo redundancy workflow (Day 0)

This topic explains the workflow to enable geo redundancy on Day 0. The workflow provides a high-level description of the tasks required to install and enable geo redundancy in Crosswork Network Controller.


Note


The recommended Day 0 setup for enabling geo redundancy is an empty Crosswork cluster (without any applications, devices, or data gateways onboarded).


The following table describes the stages to install and enable the geo redundancy mode on Crosswork Network Controller.

Table 2. Geo redundancy workflow (Day 0)

Step

Action

1. Install the active cluster in AZ1.

Install the cluster using your preferred method (vCenter vSphere UI, OVF tool, or Docker installer).

Verify that the installation was successful, and log in to the Cisco Crosswork UI.

2. Install the standby cluster in AZ2.

(Optional) 3. Install the arbiter VM in AZ3.

Note

 

Skip this step if you do not want to use the auto-arbitration functionality.

Follow the instructions in Deploy an arbiter VM.

4. Validate the Crosswork Inventory.

If you installed the Crosswork cluster manually, you must import a cluster inventory file (.yaml) into the Crosswork UI. For more information, see the Import Cluster Inventory topic.

Important

 

If you skip this step, geo redundancy enablement will fail.

5. Create a backup of your Crosswork clusters.

Follow the instructions in the Manage Backups chapter of the Cisco Crosswork Network Controller 7.1 Administration Guide.

Note

 

Importing the cross cluster inventory template cannot be undone; without a backup taken before the template is loaded, there is no way to roll the system back.

6. Perform the connectivity checks.

Follow the instructions in the Connectivity Checks topic.

7. Prepare and upload the cross cluster inventory file.

Follow the instructions in the Prepare the cross cluster inventory topic.

See Sample cross cluster inventory templates for example scenarios that fit your requirement.

8. Enable geo redundancy.

Follow the instructions in the Enable Geo Redundancy topic.

9. Install and enroll Crosswork Data Gateway, and onboard devices.

  1. Choose the deployment profile for the Data Gateway VM. See Crosswork Cluster VM Requirements.

    Note

     

    If you are redeploying the same Data Gateway with Crosswork Network Controller, delete the previous Data Gateway entry from the Virtual Machine table under Data Gateway Management. For information on how to delete a Data Gateway VM, see Delete Crosswork Data Gateway from the Crosswork Cluster.

  2. Review the installation parameters to ensure that you have all the required information to install the Data Gateway. See Crosswork Data Gateway Parameters and Deployment Scenarios.

    Note

     

    Use an FQDN, for example, geomanagement.cw.cisco, as the unified multi-cluster domain name. Ensure this FQDN is reachable from both clusters and points to the Active Crosswork VIP in the Geo-HA DNS server. If these conditions are not met, Data Gateway instance enrollment will fail.

  3. Install the Data Gateway using your preferred method.

  4. Verify the Data Gateway enrollment. See Crosswork Data Gateway Authentication and Enrollment.

    Note

     

    Use an FQDN such as geomanagement.cw.cisco that is reachable from both clusters and points to the Active Crosswork VIP in the Geo-HA DNS server; otherwise, the enrollment will fail.

  5. After the installation is complete, perform the postinstallation procedure. See Crosswork Data Gateway Post-installation Tasks.

  6. Repeat steps 1 to 5 in the workflow to install Data Gateways on both standby and active sites.

  7. After you verify that the Data Gateway VM has enrolled with Crosswork Network Controller, create a common Data Gateway pool for active and standby sites. For more information, see the Create a pool in the geo redundancy-enabled sites section in the Cisco Crosswork Network Controller 7.1 Administration Guide.

  8. Assign the Data Gateways to AZ2. For more information, see the Assign Data Gateways to geo redundancy-enabled sites section in the Cisco Crosswork Network Controller 7.1 Administration Guide.

  9. Edit the pool to add the new Data Gateways. For more information, see the Edit or delete a Data Gateway pool section in the Cisco Crosswork Network Controller 7.1 Administration Guide.

10. Configure the cross cluster settings.

Follow the instructions in the Configure Cross Cluster Settings and Configure cross cluster notification settings topics.

11. Complete an on-demand sync operation successfully.

On the Cross Cluster window, select Actions > Synchronize to initiate the sync operation.

12. Install the Crosswork Applications on the active cluster.

Follow the instructions in the Install Crosswork Network Controller applications topic.

Once geo redundancy is enabled, a Geo Redundancy tile is added to the Application management window. This tile is built-in and cannot be upgraded, uninstalled, or deactivated.

Warning

 
  • Parallel installation of applications on the active and standby clusters should be avoided. Complete the installation on the active cluster before proceeding with the installation on the standby cluster.

  • Applications should not be installed during a periodic or on-demand sync operation. Ensure there is sufficient time for the installation to complete before initiating a sync, and verify that no sync operation is in progress before installing an application. It is recommended to temporarily disable periodic sync when installing applications.

13. Install the Crosswork Applications on the standby cluster.

Note

 

Applications on the standby site remain in a degraded state until the first sync completes.

14. Verify that the geo redundancy was successfully enabled on all the AZs.

Perform these checks:

  1. In the Cross Cluster Health Status, ensure the operational state is Connected.

  2. In the Cross Cluster Health Status, ensure that active cluster state is Healthy.

  3. In the Cross Cluster Health Status, ensure that standby cluster state is Healthy.

  4. In the Cross Cluster Health Status, ensure that arbiter VM state is Healthy.

  5. In the Cross Cluster Health Status, ensure the High Availability state is AVAILABLE.

  6. Verify that the heartbeat count between the clusters is incrementing and that no failures are observed over a 30-minute period.

  7. Confirm the completion of one successful sync between the clusters.

For more information, see the View Cross Cluster Status topic.

Connectivity Checks

Perform the following connectivity checks before enabling geo redundancy:


Important


  • Static routes are not required for cross cluster connectivity.

  • Mesh connectivity is required between Crosswork Network Controller, Crosswork Data Gateway, NSO, and data interface components across the Availability Zones (AZ).

  • L2/L3 connectivity is supported.
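
As a quick sanity check of the mesh connectivity noted above, you can probe the peer AZ's addresses from each site before running the SCP and DNS checks that follow. This is an illustrative sketch with placeholder addresses; substitute your peer cluster's management and data VIPs, plus any NSO or Data Gateway addresses you want to verify.

#!/bin/bash
# Illustrative mesh reachability probe -- run from each AZ.
# Replace the placeholder addresses with the peer AZ's management
# and data VIPs (and any NSO / Data Gateway addresses to verify).
PEER_ADDRS="192.168.5.100 192.168.7.100"

for addr in $PEER_ADDRS; do
  if ping -c 3 -W 2 "$addr" > /dev/null 2>&1; then
    echo "OK:   $addr is reachable"
  else
    echo "FAIL: $addr is not reachable"
  fi
done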


SCP Connectivity

Use SCP to copy a file from Availability Zone 1 (AZ1) to Availability Zone 2 (AZ2), and from AZ2 to AZ1, between the corresponding Crosswork VMs and Crosswork Orchestrator pods to confirm connectivity between both clusters.

# Perform these actions from AZ1 to AZ2 and AZ2 to AZ1
 
cw-admin@192-168-6-101-hybrid:~$ sudo su
[sudo] password for cw-admin:
root@192-168-6-101-hybrid:/home/cw-admin# kubectl exec -it -n=kube-system robot-orch-76856487-562w6 -- bash
robot-orch-76856487-562w6:~# touch t.txt
robot-orch-76856487-562w6:~# scp t.txt cw-admin@YOUR_PEER_CLUSTER_MGMT_VIP:/home/cw-admin/
(cw-admin@192.168.5.100) Password:
t.txt   
 
robot-orch-76856487-562w6:~# scp t.txt cw-admin@YOUR_PEER_CLUSTER_DATA_VIP:/home/cw-admin/
(cw-admin@192.168.5.100) Password:
t.txt

DNS Connectivity

Test DNS resolution against the system-wide DNS server.


Note


Ensure validation of IPv4 and/or IPv6 address mapping based on the deployment type (IPv4, IPv6, or dual stack).


### Internal authoritative resolution
 
dig @your_dns_server_ip  your_name.cw.cisco
 
; <<>> DiG 9.10.6 <<>> @172.28.122.84 your_name.cw.cisco
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8167
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;your_name.cw.cisco.        IN  A
 
;; ANSWER SECTION:
your_name.cw.cisco. 5   IN  A   192.168.6.100
 
;; Query time: 126 msec
;; SERVER: 172.28.122.84#53(172.28.122.84)
;; WHEN: Fri Jun 30 23:47:51 PDT 2023
;; MSG SIZE  rcvd: 67
 
### External forwarding and resolution
 
 dig @your_dns_server_ip  ntp.esl.cisco.com
 
; <<>> DiG 9.10.6 <<>> @172.28.122.84 ntp.esl.cisco.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43986
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;ntp.esl.cisco.com.     IN  A
 
;; ANSWER SECTION:
ntp.esl.cisco.com.  1   IN  A   171.68.38.66
 
;; Query time: 311 msec
;; SERVER: 172.28.122.84#53(172.28.122.84)
;; WHEN: Fri Jun 30 23:46:37 PDT 2023
;; MSG SIZE  rcvd: 62

Verify that the DNS TTL observed from your VM is less than 60 seconds (< 60s).

cw-user@admin-M-C2EM ~ %  dig +nocmd +noall +answer @your_dns_server_ip  your_fqdn
geomanagement.cw.cisco. 60  IN  A   192.168.6.100
 
For the IPv4 check:
 
dig +nocmd +noall +answer @your_dns_server_ip your_fqdn
 
For the IPv6 check:
 
dig +nocmd +noall +answer AAAA @your_dns_server_ip your_fqdn

Prepare the cross cluster inventory

This topic explains the steps to download and prepare the cross cluster inventory file.

Procedure


Step 1

Log in to the Crosswork cluster that will function as the active cluster.

Step 2

From the main menu, choose Administration > Geo Redundancy Manager. The Geo Redundancy Manager window is displayed.

Step 3

Click on sample file to download the sample template (.yaml file) for the cross cluster inventory.

Step 4

Fill in the template file with the relevant information for the active and standby clusters and the unified cross cluster. For more information, see the examples in Sample cross cluster inventory templates.

Note

 

In a dual stack configuration, IPv6 addresses are preferred for cross-cluster inventory, even if both IPv4 and IPv6 addresses are present. IPv4 addresses are supported in a single stack configuration.
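
The sample file downloaded in Step 3 is the authoritative schema for this template. Purely as an illustration of the kind of information it gathers for the active cluster, the standby cluster, and the unified cross cluster, a hypothetical fragment might look like the sketch below; every key and value shown is a placeholder, not the actual template layout.

# Hypothetical sketch only -- use the sample file downloaded from the
# Geo Redundancy Manager window as the authoritative schema.
active_cluster:
  management_fqdn: az1-mgmt.cw.cisco        # placeholder
  data_fqdn: az1-data.cw.cisco              # placeholder
standby_cluster:
  management_fqdn: az2-mgmt.cw.cisco        # placeholder
  data_fqdn: az2-data.cw.cisco              # placeholder
unified_cross_cluster:
  management_fqdn: geomanagement.cw.cisco   # unified FQDN used in examples in this chapter
  data_fqdn: geodata.cw.cisco               # placeholder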


Enable Geo Redundancy

This topic explains the procedure to enable geo redundancy from Crosswork Network Controller UI.

Geo redundancy can be configured at any time after the active cluster is built. The process varies slightly if the standby cluster is not built and activated within six hours of the active cluster; these variations are clearly noted in this procedure.


Tip


Click the How it works? link to view a visual representation of how geo redundancy is enabled.


Before you begin

Ensure you have met all the requirements specified in Geo Redundancy Requirements.

Procedure


Step 1

Log in to the Crosswork Network Controller cluster that will function as the active cluster.

Step 2

From the main menu, choose Administration > Geo Redundancy Manager. The Geo Redundancy Manager window is displayed.

Step 3

Click Import inventory file, and the Import Inventory File dialog box is displayed. Click Browse and select the cross cluster inventory file that you prepared. Verify the contents of the template file.

Step 4

In this step, you configure the server to be used with geo redundancy. This step cannot be undone; you should already have made a backup of your cluster before proceeding. To activate geo redundancy on the server, click Enroll. A service interruption alert is displayed. Click Proceed to continue.

The progress can be viewed from the Jobs window, or by clicking the Details icon.

Step 5

After inventory upload is completed in the first cluster, the same process must be repeated in the second cluster. Log in to the Crosswork Network Controller cluster that will function as the standby cluster, and repeat the actions in steps 2 to 4.

Attention

 

Enable Pairing mode if the standby cluster is activated more than six hours after the active cluster. Pairing mode remains active for six hours from the time it is enabled, after which it automatically disables. You cannot disable it manually.

After the standby cluster is configured, the Job status will be displayed as Completed on both clusters. Once the inventory upload is successfully completed on both clusters, the status will be updated in the Geo Redundancy Manager window.

Step 6

(Optional) Log in to the Crosswork Network Controller VM that will function as the arbiter VM, and repeat the actions in steps 2 to 4.

After the arbiter VM is configured, the Job status will be displayed as Completed on all clusters. Once the inventory upload is successfully completed on all clusters, the status will be updated in the Geo Redundancy Manager window.


What to do next

To continue with the activation, see View Cross Cluster Status.

Configure Cross Cluster Settings

Configuring cross cluster settings is important to ensure secure data transfer between clusters, facilitate reliable backup and recovery, and maintain data compliance.

This topic explains how to configure the cross cluster settings.


Note


The default values shown in the Cross Cluster Configuration UI are the recommended settings.


Before you begin

  • If you choose to enable auto-arbitration, you must have the traffic redirect script ready for upload. The redirect script must be an independent, pre-compiled binary that does not rely on external libraries.

  • You must log in as an admin user to configure cross cluster settings. Only admin users have the necessary permissions to make these changes. Users with a local role, even if they have read, write, and delete permissions, cannot configure cross cluster settings.

Procedure


Step 1

From the main menu, choose Administration > Cross Cluster. The Cross Cluster window is displayed. Click on the Configurations tab.

Step 2

The Configurations window is displayed, with the first step, 1 - Storage settings, highlighted. Fill in all the fields provided for the SCP host server.

To add an additional SCP host, select the Additional SCP host checkbox. This is necessary only if the current SCP host is not highly available across both AZs. Adding an additional SCP host allows the peer cluster to be used for SCP storage.

Note

 

After an SCP host is configured, you can view the used and free space available on the server.

Figure 1. Storage settings

Step 3

Click Next. The Configurations window is displayed, with the next step, 2 - Sync settings, highlighted. Data synchronization ensures high availability, consistency, load balancing, and data compliance between geo redundant clusters.

Enable the Sync slider button to set an auto-sync schedule, and set the sync times. Optionally, you can enable Enable force sync and Enable read only mode for sync as per your requirement. Click the tooltip icon next to each option to learn more.

Note

 

It is recommended to sync at least once every 8 hours.

Figure 2. Sync settings

Step 4

Click Next. The Configurations window is displayed, with the next step, 3 - DNS settings, highlighted. Add the details for the Authoritative DNS server and Port.

Note

 
  • The DNS server should be configured with the same management FQDN and data FQDN displayed on the UI.

  • The DNS record TTL for the FQDN must be less than 60 seconds (< 60s).

Figure 3. DNS settings
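
For example, on a BIND-style authoritative server, the corresponding records might look like the fragment below. The names and addresses are placeholders (only geomanagement.cw.cisco echoes the examples used elsewhere in this chapter); what matters is that both FQDNs resolve and carry a TTL below the 60-second limit.

; Illustrative zone fragment (placeholder names and addresses).
; A TTL of 30 seconds satisfies the < 60s requirement above.
geomanagement.cw.cisco.   30   IN   A   192.168.6.100
geodata.cw.cisco.         30   IN   A   192.168.7.100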

Step 5

Click Next. The Configurations window is displayed, with the final step, 4 - Arbitration settings, highlighted. Set relevant values for the Heartbeat time interval and Failure detection wait period fields.

Step 6

Enable the Auto-arbitration slider to enable the auto arbitration mode (recommended option). This starts the cluster leader election process and updates the leader information. Click Add traffic redirect script to add the redirect script. This script is used to update the DNS entry for the FQDN provided.

In the Add file dialog box, choose either URL or SCP as your preferred protocol. Based on your selection, fill in the additional fields with the required information. Click Add to proceed.

Figure 4. Traffic redirect script added
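
The contents of the redirect script depend entirely on your DNS infrastructure, and (as noted in the prerequisites) the uploaded artifact must be a standalone, pre-compiled binary. Purely to illustrate the operation such a binary has to perform, the equivalent DNS change can be expressed as an RFC 2136 dynamic update; the nsupdate sequence below is shown for illustration only, with placeholder server, zone, key, and record values.

# Illustration only: the DNS update a traffic redirect binary must
# perform, shown here with nsupdate. All values are placeholders.
nsupdate -k /path/to/tsig.key <<'EOF'
server 172.28.122.84
zone cw.cisco
update delete geomanagement.cw.cisco. A
update add geomanagement.cw.cisco. 30 A 192.168.6.100
send
EOF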

Step 7

Click Save to save the changes.


Configure cross cluster notification settings

This topic explains how to configure the cross cluster notification settings.

Procedure


Step 1

From the main menu, choose Administration > Cross Cluster. The Cross Cluster window is displayed.

Step 2

Click the Notification settings tab; the Add notification destination window is displayed.

Step 3

Click Add Criteria to go to the Create Notification Policies window.

Enter relevant values for the following fields, and save the policy.

  • Policy name

  • Criteria

  • Notification destination

For more information, see Create Notification Policy for System Events in the Cisco Crosswork Network Controller 7.1 Administration Guide.


View Cross Cluster Status

This topic explains how to view the cross cluster status after successfully enabling geo redundancy.

Procedure


Step 1

From the main menu, choose Administration > Cross Cluster. The Cross Cluster window is displayed.

The cross cluster health status is displayed along with the cluster leader status, high availability state, heartbeats round trip time, failed heartbeats, and last active cluster change time. You can also view the status of the active and standby clusters along with the operational state and last sync status.

Figure 5. Cross cluster with active, standby, and arbiter AZs

Note

 

By design, the Arbiter name is not displayed on the Cross Cluster page. The Arbiter VM is consistently shown as "Arbiter," regardless of the value of the cluster-id parameter.

Scroll further down to see the data store replication states. The Data stores table shows all the replicated data stores, along with their lag information.

  • Each data store type has a corresponding sync type. Postgres and Timescale data stores support live synchronization, while Gluster is updated via periodic sync.

    Table 3. Data store types

    Data store type    Sync type
    Postgres           Streaming
    Timescale          Streaming
    Gluster            Periodic

  • The Replication role displays the role of the cluster. For example, the value for an active cluster will be Active.

  • The Replication state displays the status of the data store.

  • The Lag(size) value indicates the lag between the active and standby clusters.

Figure 6. Cross Cluster window - Replication state

Step 2

You can perform the following operations on the Cross Cluster window.

  1. Click the Details icon to see a visual representation of the heartbeat trend.

    Figure 7. Cross Cluster window - heartbeat trend
  2. Click the name of the active or standby cluster to view the cluster details.

  3. Click on Lag(size) for a data store to view a detailed graph of the replication summary.

    Figure 8. Data store Lag trend

Step 3

You can select and perform the following optional operations from the Actions drop-down menu.

  1. Click Actions > Set cluster role to override the switchover process and assign the cluster roles manually. For more information, see Perform switchover manually.

  2. Click Actions > Showtech request, and the Showtech Request pop-up window is displayed. Enter the relevant SCP host details and click Export to download the showtech logs.

  3. Click Actions > Synchronize to initiate an on-demand sync operation.

    Important

     

    Do not perform the Synchronize operation before completing the other configurations (such as storage, DNS, and sync settings). Once a sync is initiated, it cannot be stopped midway.

  4. Click Actions > Repair system to address any sync issues that require the system to kickstart the replication or bootstrap the standby cluster from the active side. This operation will attempt to automatically repair the applications and database.

  5. Click Actions > Test traffic redirect to execute the traffic redirect script. This operation creates a job to validate the arbitration executable. The redirect script is used to update the DNS entry for the FQDN provided.

  6. Click Actions > Switchover to initiate the switchover process. This is a one-click control that performs the three steps of the switchover process. It automates setting the two roles and updating the DNS in a single function. For more information, see Perform switchover manually.


Geo redundancy workflow (Day N)

This topic outlines the high-level workflow for the tasks required to enable geo redundancy on Day N when Crosswork Network Controller (version 7.0) is operating in a standalone cluster.

A Crosswork cluster is considered a "Day N" scenario if it has the following configured:

  • AZ1 is deployed.

  • Applications are installed.

  • Devices are configured.

  • Crosswork Data Gateway for AZ1 is enrolled.

  • Providers are added.

  • Collection jobs are running.

A backend check is conducted to verify the cluster's eligibility for enabling geo redundancy. If any checks fail, geo redundancy cannot be enabled, and alarms will be generated to notify you.

Once geo redundancy is enabled on Day N, perform a sync from the active to the standby cluster before installing any applications.


Important


See the Release Notes for Crosswork Network Controller 7.1.0 for the NSO and SR-PCE versions compatible with Crosswork Network Controller. The process to upgrade NSO or SR-PCE is not covered in this document; for installation instructions, refer to the relevant product documentation.


Table 4. Geo redundancy workflow (Day N)

Step

Action

1. Install the standby cluster in AZ2.

Install the cluster using your preferred method (vCenter vSphere UI, OVF tool, or Docker installer).

Verify that the installation was successful, and log in to the Cisco Crosswork UI.

(Optional) 2. Install the arbiter VM in AZ3.

Note

 

Skip this step if you do not want to use the auto-arbitration functionality.

Follow the instructions in Deploy an arbiter VM.

3. Validate the Crosswork Inventory.

If you installed the Crosswork cluster manually, you must import a cluster inventory file (.yaml) into the Crosswork UI. For more information, see the Import Cluster Inventory topic.

Important

 

If you skip this step, geo redundancy enablement will fail.

4. Create a backup of your Crosswork clusters.

Follow the instructions in the Manage Backups chapter of the Cisco Crosswork Network Controller 7.1 Administration Guide.

Note

 

Importing the cross cluster inventory template cannot be undone; without a backup taken before the template is loaded, there is no way to roll the system back.

5. Perform the connectivity checks.

Follow the instructions in the Connectivity Checks topic.

6. Prepare and upload the cross cluster inventory file for a geo redundancy setup without the arbiter VM.

Follow the instructions in the Prepare the cross cluster inventory topic.

See Sample cross cluster inventory templates for relevant example scenarios that fit your requirement.

7. Enable geo redundancy.

Follow the instructions in the Enable Geo Redundancy topic.

8. Install and enroll Crosswork Data Gateway, and onboard devices.

You can choose one of these approaches for AZ2 setup:

Follow this workflow to deploy a new Data Gateway.

  1. Choose the deployment profile for the Data Gateway VM. See Crosswork Cluster VM Requirements.

    Note

     

    If you are redeploying the same Data Gateway with Crosswork Network Controller, delete the previous Data Gateway entry from the Virtual Machine table under Data Gateway Management. For information on how to delete a Data Gateway VM, see Delete Crosswork Data Gateway from the Crosswork Cluster.

  2. Review the installation parameters to ensure that you have all the required information to install the Data Gateway. See Crosswork Data Gateway Parameters and Deployment Scenarios.

    Note

     

    Use an FQDN, for example, geomanagement.cw.cisco, as the unified multi-cluster domain name. Ensure this FQDN is reachable from both clusters and points to the Active Crosswork VIP in the Geo-HA DNS server. If these conditions are not met, Data Gateway instance enrollment will fail.

  3. Install the Data Gateway using your preferred method.

  4. Verify the Data Gateway enrollment. See Crosswork Data Gateway Authentication and Enrollment.

    Note

     

    Use an FQDN such as geomanagement.cw.cisco that is reachable from both clusters and points to the Active Crosswork VIP in the Geo-HA DNS server; otherwise, the enrollment will fail.

  5. After the installation is complete, perform the postinstallation procedure. See Crosswork Data Gateway Post-installation Tasks.

  6. Repeat steps 1 to 5 in the workflow to install Data Gateways on the standby site.

  7. Assign the Data Gateways to the standby site. For more information, see the Assign Data Gateways to geo redundancy-enabled sites section in the Cisco Crosswork Network Controller 7.1 Administration Guide.

  8. Add the new Data Gateways to the existing pool. For more information, see the Edit or delete a Data Gateway pool section in the Cisco Crosswork Network Controller 7.1 Administration Guide.

9. Configure the cross cluster settings.

Follow the instructions in the Configure Cross Cluster Settings and Configure cross cluster notification settings topics.

10. Complete an on-demand sync operation successfully.

On the Cross Cluster window, select Actions > Synchronize to initiate the sync operation.

11. Install the Crosswork Applications on the active cluster.

Follow the instructions in the Install Crosswork Network Controller applications topic.

Once geo redundancy is enabled, a Geo Redundancy tile is added to the Application management window. This tile is built-in and cannot be upgraded, uninstalled, or deactivated.

Warning

 
  • Parallel installation of applications on the active and standby clusters should be avoided. Complete the installation on the active cluster before proceeding with the installation on the standby cluster.

  • Applications should not be installed during a periodic or on-demand sync operation. Ensure there is sufficient time for the installation to complete before initiating a sync, and verify that no sync operation is in progress before installing an application. It is recommended to temporarily disable periodic sync when installing applications.

12. Install the Crosswork Applications on the standby cluster.

Note

 

Applications on the standby site remain in a degraded state until the first sync completes.

13. Verify that the geo redundancy was successfully enabled on the active and standby clusters.

Perform these checks:

  1. In the Cross Cluster Health Status, ensure the operational state is Connected.

  2. In the Cross Cluster Health Status, ensure that active cluster state is Healthy.

  3. In the Cross Cluster Health Status, ensure that standby cluster state is Healthy.

  4. In the Cross Cluster Health Status, ensure the High Availability state is AVAILABLE.

  5. Verify that the heartbeat count between the clusters is incrementing and that no failures are observed over a 30-minute period.

  6. Confirm the completion of one successful sync between the clusters.

For more information, see the View Cross Cluster Status topic.

Onboard the arbiter VM

14. Update the cross cluster inventory file with details of the arbiter VM.

Follow the instructions in the Prepare the cross cluster inventory topic.

See Sample cross cluster inventory templates for relevant example scenarios that fit your requirement.

15. Import the cross cluster inventory file again, and enable geo redundancy on the arbiter VM.

Perform these steps:

  1. Import the active cluster inventory.

  2. Import the standby cluster inventory.

  3. Enable geo redundancy on the arbiter VM.

For more information, see the Enable Geo Redundancy topic.

16. Configure the cross cluster settings.

Follow the instructions in the Configure Cross Cluster Settings and Configure cross cluster notification settings topics.

17. Verify that the geo redundancy was successfully enabled on all the AZs.

Perform these checks:

  1. In the Cross Cluster Health Status, ensure the operational state is Connected.

  2. In the Cross Cluster Health Status, ensure that active cluster state is Healthy.

  3. In the Cross Cluster Health Status, ensure that standby cluster state is Healthy.

  4. In the Cross Cluster Health Status, ensure that arbiter VM state is Healthy.

  5. In the Cross Cluster Health Status, ensure the High Availability state is AVAILABLE.

  6. Verify that the heartbeat count between the clusters is incrementing and that no failures are observed over a 30-minute period.

  7. Confirm the completion of one successful sync between the clusters.

For more information, see the View Cross Cluster Status topic.

Deploy an arbiter VM

The arbiter VM is deployed as a single VM using the Small profile. The arbiter VM requires a neutral third site, on a different subnet from the two sites that host the workload.

An arbiter VM requires a resource footprint of 8 vCPUs, 48 GB of RAM, and 650 GB of storage.


Important


After deploying the arbiter VM, ensure that the Crosswork inventory is onboarded before proceeding.


Install via the vCenter UI

For installation instructions, see Install Crosswork Network Controller using the vCenter vSphere UI.

  1. You must set these parameters during the deployment:

    • Select the virtual disk format as Thin provision.

    • Set Datafs Disk Size to 100.

    • Select VM type as Hybrid.

    • Set Cluster seed node to True.

    • Set Enable Skip Auto Install Feature to True.

    • Set Product specific definition as <![CDATA[{"product_image_id": "CNC", "attributes": {"is_arbiter": "true"}}]]>.

    • Set Default Application Resource Profile to Small.

    • Set Default Infra Resource Profile to Small.

  2. Once the deployment is completed, right-click on the VM and select Edit Settings. The Edit Settings dialog box is displayed. Under the Virtual Hardware tab, update these attributes:

    • CPU: change to 8 (for Small profile).

    • Memory: change to 48 GB (for Small profile).

Install using OVF tool

For installation instructions, see Install Crosswork Network Controller via the OVF Tool.

Here is a sample template:

# env variables
 
ProductDefinition='<![CDATA[{"product_image_id": "CNC", "attributes": {"is_arbiter": "true"}}]]>'
numberOfCpus=8
memorySize=49152
EnableSkipAutoInstallFeature="True"
ddatafs=100
 
 
  ovftool --acceptAllEulas --skipManifestCheck --X:injectOvfEnv --datastore="${vmDatastore[$i]}" --diskMode="${vmDiskMode[$i]}" \
    --numberOfCpus:"*"=$numberOfCpus  --memorySize:"*"=$memorySize \
    --coresPerSocket:"*"=1 \
    --viCpuResource=-1:$viCpuResource:-1 --viMemoryResource=-1:$memorySize:-1 \
    --overwrite --noSSLVerify --allowExtraConfig \
    --name="${vmName[$i]}" \
    --prop:"ManagementIPv4Address=${vmMngIP[$i]}" \
    --prop:"ManagementIPv4Gateway=$mngGw" \
    --prop:"ManagementIPv4Netmask=$mngMask" \
    --prop:"ManagementVIP=$mngVIP" \
    --prop:"ManagementVIPName=$mngVIPName" \
    --prop:"DataIPv4Address=${vmDataIP[$i]}" \
    --prop:"DataIPv4Gateway=$dataGw" \
    --prop:"DataIPv4Netmask=$dataMask" \
    --prop:"DataVIP=$dataVIP" \
    --prop:"DataVIPName=$dataVIPName" \
    --net:"Management Network=$mngNet" \
    --net:"Data Network=$dataNet" \
    --prop:"DNSv4=$dns" \
    --prop:"NTP=$ntp" --prop:"Domain=$domain" \
    --prop:"Disclaimer=Cisco..." \
    --prop:"ddatafs=$ddatafs" --prop:"logfs=$logfs" \
    --prop:"Timezone=$timezone" \
    --prop:"DefaultInfraResourceProfile=$DefaultInfraResourceProfile" \
    --prop:"DefaultApplicationResourceProfile=$DefaultApplicationResourceProfile" \
    --prop:"EnableSkipAutoInstallFeature=$EnableSkipAutoInstallFeature" \     
    --prop:"ProductDefinition=$ProductDefinition" \
    --prop:"CWUsername=cw-admin" \
    --prop:"CWPassword=$pass" \
    --prop:"VMType=${vmType[$i]}" --prop:"IsSeed=$isSeed" --prop:"InitNodeCount=$nodes" --prop:"InitMasterCount=$masterCount" \
    --prop:"IgnoreDiagnosticsCheckFailure=True" \
    ${CW_IMAGE} vi://"${vcenter_user}:${vcenter_pass}"@"$url"

Install using Docker installer

For installation instructions, see Install Crosswork Network Controller using the Docker installer tool.

Points to remember:

  1. You must set these parameters in the tfvars file:

    • VMSize="Small"

    • EnableSkipAutoInstallFeature = "True"

    • ManagerDataFsSize = 485


      Note


      Installation using the Docker installer requires a disk size of 485 GB. To use a smaller disk size, deploy the arbiter VM using the vCenter UI or the OVF tool.


  2. We recommend creating a product-specific definition (JSON file) for the arbiter VM parameter and passing it alongside the auto-action file.

    For example, the definition file product.json defines the is_arbiter attribute.

    {
      "product_image_id": "CNC",
      "attributes": {
        "is_arbiter": "true"
      }
    }

    Run command:

    ./cw-installer.sh install -m /data/<template file name> -o /data/<.ova file> -c /data/product.json -y 
    
    ##Note the "-c /data/product.json" option

    Note


    If the is_arbiter attribute was not provided at deployment time, it can be added later when the arbiter VM is geo-enabled.


Update an Application After Enabling Geo Redundancy

This topic describes the procedure that you must perform to update the application after enabling geo redundancy.

Table 5. Workflow to Update the Application

Step

Action

1. Complete an on-demand sync operation successfully.

On the Cross Cluster window, select Actions > Synchronize to initiate the sync operation.

2. Disable the sync operation.

From the Sync settings window, drag the Sync slider button to disable the sync operation.

3. Install the application updates on the Active cluster.

Follow the instructions in the Install Crosswork Applications topic.

Once geo redundancy is enabled, a Geo Redundancy tile is added to the Application management window. This tile is built-in and cannot be upgraded, uninstalled, or deactivated.

Warning

 
  • Parallel installation of applications on the active and standby clusters should be avoided. Complete the installation on the active cluster before proceeding with the installation on the standby cluster.

  • Applications should not be installed during a periodic or on-demand sync operation. Ensure that there is sufficient time for the installation to complete before initiating a sync operation, and verify that no sync operation is in progress before installing an application. We recommend temporarily disabling periodic sync when installing applications.

4. Install the application updates on the Standby cluster.

5. Enable the sync operation in Cross Cluster.

From the Sync settings window, enable the Sync slider button to set an auto-sync schedule, and set the sync times.

6. Perform the sync operation.

After geo redundancy is enabled on both the Active and Standby clusters, update the sync settings and perform the first sync either manually or allow it to occur at the scheduled time. Any further application files should be installed only after the first sync is completed.

Geo redundancy scenarios

There are many scenarios with expected system behaviors that you should be aware of when geo redundancy is enabled.

Application installation

Table 6. Application installation scenarios

  • Scenario: Application, version, or cluster compute resource (such as nodes, CPU, memory, and disk) mismatch between the active and standby clusters prior to enabling geo redundancy.

    Expected system behavior: An equivalency check done prior to geo redundancy enablement identifies any mismatch between the active and standby clusters (in terms of applications or versions) and prevents enablement. To proceed, ensure that applications and versions match on both clusters.

  • Scenario: Application or version mismatch between the active and standby clusters after enabling geo redundancy.

    Expected system behavior: Any configured sync operation fails until the mismatch is corrected.

  • Scenario: Installing an application or patch while a sync is in progress.

    Expected system behavior: A sync operation can be configured as a periodic event or initiated on demand. While a sync operation is in progress, application installation is not allowed.

  • Scenario: Installing an application or patch while no sync is in progress.

    Expected system behavior: When no sync is in progress, application installation is allowed.

Backup and restore

Table 7. Backup and restore scenarios

  • Scenario: Taking a backup on the active Crosswork cluster.

    Expected system behavior: This operation is allowed. Take a backup of the active cluster so that you have a point-in-time backup to roll back to in case the synced data is corrupted.

  • Scenario: Taking a data-only backup on the standby Crosswork cluster.

    Expected system behavior: This operation is not permitted.

  • Scenario: Performing a restore operation on the standby cluster.

    Expected system behavior: This operation is not permitted.

  • Scenario: Performing a restore operation on the active cluster.

    Expected system behavior: A restore operation is not allowed while auto-arbitration is enabled; you must disable auto-arbitration first. To restore a previous backup, perform the restore on the active cluster. The standby cluster syncs on the next sync cadence.

  • Scenario: Configuring the remote VM destination for backup and restore.

    Expected system behavior: The remote destination for backup and restore must be reconfigured after a switchover, as the destination settings are specific to the cluster.

Password update

Follow this sequence when updating a password on a geo redundant cluster:

  1. Update the password on the active cluster.

  2. Wait for the sync operation to complete; the password update is then pushed to the standby cluster.

  3. Update the inventory file on the active cluster.

Standby cluster behavior

Accessibility of infrastructure and application features in the standby cluster is as follows:


Note


Read operations may be unavailable during synchronization.


Table 8. Standby cluster behavior

  • Platform: APIs are read-only, except where noted. Application Management, Node Lifecycle, and Showtech support both read and write operations.

  • Geo manager standby: Read and write operations are supported.

  • Crosscluster (geo): Read and write operations are supported for APIs.

  • AAA: Login and logout operations are supported.

  • Inventory: Only read operations are supported for persisted data.

  • Collection: Only read operations are supported for configuration.

  • Topology visualization: Only read operations are supported. Only UI APIs are exposed.

  • Grouping: Only read operations are supported. Both UI and NBI APIs are exposed.

  • Views: Only read operations are supported. Only UI APIs are exposed.

  • Geo server: Only read operations are supported. Only UI APIs are exposed.

  • Alarms: Read and write operations are supported for system alarms. Only read operations are supported for other alarms.

  • Topology: Only read operations are supported. All topology APIs are exposed.

  • CLMS: Read operations are not supported.

  • DG manager: Only read operations are supported. You can view data gateways, pools, data gateway instances, data destination lists, data destination details, system packages, custom packages, global parameters, resources, and vitals.

  • Device management ZTP: Only read operations are supported. All ZTP REST APIs are read-only; any attempt to write results in a backend error. These APIs cover ZTP devices, profiles, configuration files, serial numbers, vouchers, software images, and the ZTP dashlet.