Install Crosswork Cluster on VMware vCenter

This chapter contains the following topics:

Installation Overview

The Crosswork cluster for Crosswork Network Controller can be installed using the following methods:

  • Cluster Installer Tool: The cluster installer tool is a day-0 installation tool that deploys the Crosswork cluster with user-specified parameters supplied via a template file. The tool runs from a Docker container, which can be hosted on any Docker-capable platform, including a regular PC or laptop. The Docker container contains a template file that you edit to provide the deployment-specific data.

  • Manual Installation (via the VMware UI): This option is available for deployments that cannot use the installer tool.

The installer tool method is the preferred option as it is faster and easier to use.

Installation Parameters

This section explains the essential parameters that must be specified during the installation of the Crosswork cluster. Please ensure you have the relevant information for each parameter listed in the table and verify that your environment meets all prerequisite requirements.

The settings recommended in the table represent the least complex configuration. If you encounter network conflicts or wish to implement more advanced security settings (e.g., self-signed certificates), please work with the Cisco Customer Experience team to ensure you are prepared to make the necessary changes for your cluster.


Important


  • Please use the latest template file that comes with the Crosswork installer tool.

  • Secure ZTP and Secure Syslog require the Crosswork cluster to be deployed with FQDN.



Important


In case of a dual stack deployment, you need to configure both the IPv4 and IPv6 values for the Management, Data, and DNS parameters (a hypothetical excerpt follows this list):

  • ManagementIPv4Address, ManagementIPv6Address

  • ManagementIPv4Netmask, ManagementIPv6Netmask

  • ManagementIPv4Gateway, ManagementIPv6Gateway

  • ManagementVIPv4, ManagementVIPv6

  • DataIPv4Address, DataIPv6Address

  • DataIPv4Netmask, DataIPv6Netmask

  • DataIPv4Gateway, DataIPv6Gateway

  • DataVIPv4, DataVIPv6

  • DNSv4, DNSv6
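For illustration, a minimal dual stack excerpt of a template file might look like the following (a hypothetical sketch; the parameter names come from the list above and all addresses are placeholders):

ClusterIPStack        = "DUALSTACK"
ManagementVIPv4       = "192.0.2.10"
ManagementVIPv6       = "2001:db8:10::10"
ManagementIPv4Netmask = "255.255.255.0"
ManagementIPv6Netmask = "64"
ManagementIPv4Gateway = "192.0.2.1"
ManagementIPv6Gateway = "2001:db8:10::1"
DNSv4                 = "192.0.2.53"
DNSv6                 = "2001:db8:10::53"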


Table 1. General parameters

Parameter Name

Description

ClusterName

Name of the Crosswork cluster.

ClusterIPStack

The IP stack protocol: IPv4, IPv6, or DUALSTACK.

ManagementIPAddress

The Management IP address of the VM (IPv4 and/or IPv6).

ManagementIPNetmask

The Management IP subnet, in dotted decimal format for IPv4 or prefix length for IPv6.

ManagementIPGateway

The Gateway IP on the Management Network (IPv4 and/or IPv6). The address must be reachable, otherwise the installation will fail.

ManagementVIP

The Management Virtual IP address (IPv4 and/or IPv6) for the cluster.

ManagementVIPName

Name of the Management Virtual IP for the cluster. This is an optional parameter used to reach Crosswork cluster Management VIP via DNS name. If this parameter is used, the corresponding DNS record must exist in the DNS server.

DataIPAddress

The Data IP address of the VM (IPv4 and/or IPv6).

DataIPNetmask

The Data IP subnet, in dotted decimal format for IPv4 or prefix length for IPv6.

DataIPGateway

The Gateway IP on the Data Network (IPv4 and/or IPv6). The address must be reachable, otherwise the installation will fail.

DataVIP

The Data Virtual IP address (IPv4 and/or IPv6) for the cluster.

DataVIPName

Name of the Data Virtual IP for the cluster. This is an optional parameter used to reach Crosswork cluster Data VIP via DNS name. If this parameter is used, the corresponding DNS record must exist in the DNS server.

DNS

The IP address of the DNS server (IPv4 and/or IPv6). The address must be reachable, otherwise the installation will fail.

NTP

NTP server address or name. The address must be reachable, otherwise the installation will fail.

DomainName

The domain name used for the cluster.

CWusername

Username to log into Cisco Crosswork.

CWPassword

Password to log into Cisco Crosswork.

Use a strong VM Password (at least 8 characters long, including upper and lower case letters, numbers, and at least one special character). Avoid passwords similar to dictionary words (for example, "Pa55w0rd!") or relatable words. While they satisfy the criteria, such passwords are weak and will be rejected, resulting in failure to set up the VM.

VMSize

VM size for the cluster. If left empty, the default value ("Large") is selected.

VMName

Name of the VM. A unique VM name is required for each node on the cluster (Hybrid or Worker).

NodeType

Indicates the type of VM. Choose either "Hybrid" or "Worker".

Note

 

The Crosswork cluster requires at least three VMs operating in a hybrid configuration.

IsSeed

Choose "True" if this is the first VM being built in a new cluster.

Choose "False" for all other VMs, or when rebuilding a failed VM.

This parameter is optional when installing using the cluster installer tool.

InitNodeCount

Total number of nodes in the cluster including Hybrid and Worker nodes. The default value is 3. Set this to match the number of VMs (nodes) you are going to deploy. For more information on VM count, see Table 1.

This parameter is optional when installing using the cluster installer tool.

InitLeaderCount

Total number of Hybrid nodes in the cluster. The default value is 3.

This parameter is optional when installing using the cluster installer tool.

BackupMinPercent

Minimum percentage of the data disk space to be used for the size of the backup partition. The default value is 35 (valid range is from 1 to 80).

Please use the default value unless recommended otherwise.

Note

 

The final backup partition size will be calculated dynamically. This parameter defines the minimum.

ManagerDataFsSize

Refers to the data disk size for Hybrid nodes (in gigabytes). This is an optional parameter; if not explicitly specified, the default value is 485 (valid range is 485 to 8000).

Please use the default value unless recommended otherwise.

WorkerDataFsSize

Refers to the data disk size for Worker nodes (in gigabytes). This is an optional parameter; if not explicitly specified, the default value is 485 (valid range is 485 to 8000).

Please use the default value unless recommended otherwise.

ThinProvisioned

Set to "false" for production deployments.

EnableHardReservations

Determines the enforcement of VM CPU and Memory profile reservations. This is an optional parameter and the default value is true, if not explicitly specified.

If set to true, the VM's resources are reserved exclusively. In this state, the installation will fail if there are insufficient CPU cores, memory, or CPU cycles.

If set to false (only set for lab installations), the VM's resources are provided on a best-effort basis. In this state, insufficient CPU cores can impact performance or cause installation failure.

RamDiskSize

Size of the RAM disk, in GB.

This parameter is only used for lab installations (value must be at least 2). When a non-zero value is provided for RamDiskSize, the HSDatastore value is not used.

OP_Status

This optional parameter is used (uncommented) to import the inventory after a manual deployment of the Crosswork cluster.

The parameter refers to the state for this VM. To indicate a running status, the value must be 2 (#OP_Status = 2). For more information, see Import Cluster Inventory.

SchemaVersion

The configuration Manifest schema version. This indicates the version of the installer to use with this template.

Schema version should map to the version packaged with the sample template in the cluster installer tool on cisco.com. You should always build a new template from the default template provided with the release you are deploying, as template requirements may change from one release to the next.

LogFsSize

Log partition size (in gigabytes). The minimum value is 20 GB and the maximum value is 1000 GB. We recommend using the default value.

Timezone

Enter the timezone. Input is a standard IANA time zone (for example, "America/Chicago").

If left blank, the default value (UTC) is selected.

This is an optional parameter.

Note

 
The timestamp in Kafka log messages represents the NSO server time. To avoid any mismatch between the Crosswork server time and the NSO event time, ensure you update the NSO server time before changing the Timezone parameter in Crosswork.
EnableSkipAutoInstallFeature

Any pods marked as skip auto install will not be brought up until a dependent application/pod explicitly asks for it.

If left blank, the default value ("False") is selected.

EnforcePodReservations

Enforces minimum resource reservations for the pod.

If left blank, the default value ("True") is selected.

K8sServiceNetwork

The network address for the Kubernetes service network. By default, the CIDR range is fixed to '10.96.0.0/16'. If you wish to change this default value, please work with the Cisco Customer Experience team.

K8sPodNetwork

The network address for the Kubernetes pod network. By default, the CIDR range is fixed to '10.244.0.0/16'. If you wish to change this default value, please work with the Cisco Customer Experience team.

DefaultApplicationResourceProfile

Resource profile for application pods. If left blank, resource profile defaults to the deployment's VM profile (recommended option).

DefaultInfraResourceProfile

Resource profile for infra pods. If left blank, resource profile defaults to the deployment's VM profile (recommended option).

IsRunDiagnoticsScriptForCheck

Used to enable/disable execution of the diagnostic script. The values are "true" (default value) and "false".

We recommend using the default value.

IgnoreDiagnoticsCheckFailure

Used to set the system response in case of a diagnostic check failure.

If set to "true" (default value), the diagnostic check is ignored and installation will continue. If set to "false", the installation is terminated.

We recommend using the default value.

Note

 
  • The log files (diagnostic_stdout.log and diagnostic_stderr.log) can be found at /var/log. The result from each diagnostic execution is kept in a file at /home/cw-admin/diagnosis_report.txt.

  • Use the diagnostic all command to invoke the diagnostics manually on day N.

  • Use the diagnostic history command to view previous test reports (see the sketch after this list).
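A minimal usage sketch, run from a cluster VM as the cw-admin user (the commands and paths are those named above; the exact output format may differ):

diagnostic all                             # run the full diagnostic suite on demand (day N)
diagnostic history                         # list previous test reports
cat /home/cw-admin/diagnosis_report.txt    # view the latest diagnosis report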

Table 2. VMware template parameters

Parameter Name

Description

VCenterAddress

The vCenter IP or host name.

VCenterUser

The username needed to log into vCenter.

VCenterPassword

The password needed to log into vCenter.

DCname

The name of the Data Center resource to use.

Example: DCname = "WW-DCN-Solutions"

MgmtNetworkName

The name of the vCenter network to attach to the VM's Management interface.

This network must already exist in VMware or the installation will fail.

DataNetworkName

The name of the vCenter network to attach to the VM's Data interface.

This network must already exist in VMware or the installation will fail.

Host

The ESXi host, or ONLY the vCenter cluster/resource group name, where the VM is to be deployed.

The first option is to use the host IP or name (all hosts should be under the data center). If the hosts are under a cluster in the data center, provide only the cluster name (all hosts within the cluster will be picked up).

The alternative is to use a resource group. In this case, a full path must be provided.

Example: Host = "Main infrastructure/Resources/00_trial"

Datastore

The datastore name available to be used by this host or resource group.

The first option is to use the host IP or name; the alternative is to use a resource group.

Example: Datastore = "SDRS-DCNSOL-prodexsi/bru-netapp-01_FC_Prodesx_ds_15"

HSDatastore

The high speed datastore available for this host or resource group.

When not using a high speed datastore, set it to the same value as Datastore.

DCfolder

The resource folder name on vCenter. To be used if you do not have root access as a VMware user, or when you need to create VMs in separate folders for maintenance purposes. You must provide the complete path as value for the DCfolder.

Example: DCfolder = "/WW-DCN-Solutions/vm/00_trial"

Please contact your VMware administrator for any queries regarding the complete folder path.

Leave empty if not used.

Cw_VM_Image

The name of Crosswork cluster VM image in vCenter.

This value is set as an option when running the cluster installer tool and does not need to be set in the template file.

HostedCwVMs

The IDs of the VMs to be hosted by the ESXi host or resource.

After you have decided on the installation parameter values for Crosswork Network Controller, choose the method you prefer and begin your deployment:

Automate Application Installation Using Auto Action Functionality

The auto action functionality is an optional method that enables you to automate the installation and activation of applications as needed during the cluster installation using a day-0 installer. Designed to simplify the installation process, this option can be configured with the Docker installer, direct OVA installation, and the OVF tool.

To enable auto action, you must configure a definition file (JSON format) that lists the tar bundles to be imported and activated. The JSON file is submitted alongside the day-0 installer and overrides the default file bundled in the OVA during installation.


Note


The auto action functionality is supported only for day-0 deployments of non-geo HA cluster builds. Please ensure that the file path specified in the auto action file (the location on your local machine where the files are downloaded) is accessible from the Crosswork cluster.
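For example, you can confirm that a CAPP bundle URL referenced in the definition file is reachable before starting the installation (a sketch; the URL is a placeholder taken from the samples below):

curl -IL https://example.com/path/to/cw-na-cncessential-7.0.0-240831.tar.gz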


The auto action definition file customizes two CAPP actions:

  • Add to repository (add_to_repository_requests): The auto action functionality supports HTTP, HTTPS, and SCP protocols with basic authentication (HTTP/HTTPS) to add the application file (CAPP file) to the repository.

    Sample definition file demonstrating all supported options for adding to the repository:

    {
      "auto_action": {
        "add_to_repository_requests": [
          {
            "file_location": {
              "uri_location": {
                "uri": "https://example.com/path/to/cw-na-element-management-functions-7.0.0.tar.gz",
                "basic_auth": {
                  "username": "user",
                  "password": "xxxx"
                }
              }
            }
          },
          {
            "file_location": {
              "uri_location": {
                "uri": "https://example.com/path/to/cw-na-hi-7.0.0.tar.gz"
              }
            }
          },
          {
            "file_location": {
              "scp_location": {
                "remote_file": "/example/cw-na-hi-7.0.0.tar.gz",
                "ssh_config": {
                  "remote_host": "x.x.x.x",
                  "username": "root",
                  "password": "xxxxx",
                  "port": 22
                }
              }
            }
          }
        ]
      }
    }
  • Activate (install_activate_requests): The CAPP files to be installed and activated are identified using the version and id parameters.

    Example:

    "auto_action": {
            "install_activate_requests": [
                {
                    "package_identifier": {
                        "_comment": "Part of advantage capp",
                        "version": "7.0.0",
                        "id": "capp-coe"
                    }
                },
            ],

Using Docker Installer

While installing via the Docker installer, if the JSON file is successfully validated, the installation proceeds. If there are syntax errors in the file, you are prompted with an error message and the installation is halted. Once the errors are corrected, you can retry the installation.

Syntax to execute the auto action file:

./cw-installer.sh install -p -m /data/<template file name> -a <path to json def file> -o /data/<.ova file>

Example:

./cw-installer.sh install -m /data/deployment.tfvars -a https://example.com/path/to/crosswork_auto_action.json -o /data/signed-cw-na-platform-7.0.0-114-release-240831.ova
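Because syntax errors halt the installation, it is worth validating the JSON locally before running the installer. A quick check, assuming python3 is available on the host (the file name is illustrative):

python3 -m json.tool crosswork_auto_action.json > /dev/null && echo "JSON OK"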

Using vCenter UI or OVF tool

While installing via the vCenter UI, the CDATA JSON content is validated in the backend. If there are syntax errors in the data, the auto action instructions are skipped and Crosswork Cluster is installed as per the regular installation workflow.

vCenter and the OVF tool do not support the direct upload of JSON format files due to issues with handling special characters. To resolve this, you must compress the JSON file and enclose it in CDATA format.

<![CDATA[{auto action json compressed content}]]>

Note


The CDATA example below has line breaks for readability. For a production deployment, provide the CDATA without any line breaks.


Example:

<![CDATA[{"auto_action":{"add_to_repository_requests":[{"file_location":{"uri_location":{"uri":"<file path>/<filename.tar.gz>"}}}],
"install_activate_requests":[{"package_identifier":{"id":"capp-coe","version":"7.0.0"}}]}}]]>

Sample auto action definition file


Note


Make sure to replace the placeholder values (e.g., https://example.com/path/to/cw-na-cncessential-7.0.0-240831.tar.gz, username, password, id, and version) with actual values relevant to your environment.


{
    "auto_action": {
        "add_to_repository_requests": [
            {
                "file_location": {
                    "uri_location": {
                        "uri": "https://example.com/path/to/cw-na-cncessential-7.0.0-240831.tar.gz"
                    }
                }
            },
            {
                "file_location": {
                    "uri_location": {
                        "uri": "https://example.com/path/to/cw-na-cncadvantage-7.0.0-240831.tar.gz"
                    }
                }
            },
            {
                "file_location": {
                    "uri_location": {
                        "uri": "https://example.com/path/to/cw-na-cncaddon-7.0.0-240831.tar.gz"
                    }
                }
            }
        ],
        "install_activate_requests": [
            {
                "package_identifier": {
                    "_comment": "Part of essentials capp",
                    "version": "7.0.0",
                    "id": "capp-common-ems-services"
                }
            },
            {
                "package_identifier": {
                    "_comment": "Part of advantage capp",
                    "version": "7.0.0",
                    "id": "capp-cat"
                }
            },
            {
                "package_identifier": {
                    "_comment": "Part of advantage capp",
                    "version": "7.0.0",
                    "id": "capp-coe"
                }
            },
            {
                "package_identifier": {
                    "_comment": "Part of advantage capp",
                    "version": "7.0.0",
                    "id": "capp-aa"
                }
            },
            {
                "package_identifier": {
                    "_comment": "Part of add on capp",
                    "version": "7.0.0",
                    "id": "capp-ca"
                }
            },
            {
                "package_identifier": {
                    "_comment": "Part of add on capp",
                    "version": "7.0.0"",
                    "id": "capp-hi"
                }
            }
        ]
    }
}  

Install Cisco Crosswork on VMware vCenter using the Cluster Installer Tool

This section explains the procedure to install Cisco Crosswork on VMware vCenter using the cluster installer tool.


Note


The time taken to create the cluster can vary based on the size of your deployment profile and the performance characteristics of your hardware.


Before you begin

Pointers to know when using the cluster installer tool:

  • Make sure that your environment meets all the vCenter requirements specified in Installation Prerequisites for VMware vCenter.

  • If you intend to use a dual stack configuration for your deployment, make sure that the host machine running the Docker installer meets the following requirements:

    • It must have an IPv6 address from the same prefix as the Crosswork Management IPv6 network, or be able to route to that network. To verify this, try pinging the Gateway IP of the Management IPv6 network from the host (see the pre-check sketch after this list). To utilize the host's IPv6 network, use the parameter --network host when running the Docker installer.

    • Confirm that the provided IPv6 network CIDR and gateway are valid and reachable.

  • The edited template in the /data directory will contain sensitive information (VM passwords and the vCenter password). The operator needs to manage access to this content. Store the templates used for your install in a secure environment or edit them to remove the passwords after the install is complete.

  • The install.log, install_tf.log, and .tfstate files will be created during the install and stored in the /data directory. If you encounter any trouble with the installation, provide these files to the Cisco Customer Experience team when opening a case.

  • The install script is safe to run multiple times. Upon error, you can correct the input parameters and rerun the script. Note that running the cluster installer tool multiple times may result in the deletion and re-creation of VMs.

  • In case you are using the same cluster installer tool for multiple Crosswork cluster installations, it is important to run the tool from different local directories so that the deployment state files remain independent. The simplest way to do this is to create a local directory for each deployment on the host machine and map each one to the container accordingly.

  • Docker version 19 or higher is required when using the cluster installer tool option. For more information on Docker, see https://docs.docker.com/get-docker/

  • In order to change install parameters or to correct parameters following installation errors, it is important to distinguish whether the installation has managed to deploy the VMs or not. Deployed VMs are evidenced by the output of the installer similar to: vsphere_virtual_machine.crosswork-IPv4-vm["1"]: Creation complete after 2m50s [id=4214a520-c53f-f29c-80b3-25916e6c297f]

  • In case of deployed VMs, changes to the Crosswork VM settings or the Data Center host for a deployed VM are NOT supported. To change a setting using the installer when the deployed VMs are present, the clean operation needs to be run and the cluster redeployed. For more information, see Delete the VM using the Cluster Installer Tool.

  • A VM redeployment will delete the VM's data, hence caution is advised. We recommend you perform VM parameter changes from the Crosswork UI, or alternatively one VM at a time.

  • Installation parameter changes that occur prior to any VM deployment, e.g. an incorrect vCenter parameter, can be performed by applying the change and simply re-running the install operation.

  • If you want to use the auto action functionality, the definition file (JSON format) must be specified while executing the OVA file. For more information, see Automate Application Installation Using Auto Action Functionality.
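A quick pre-check sketch for the Docker host (the IPv6 address is a placeholder; substitute your Management IPv6 gateway):

docker --version          # the cluster installer tool requires Docker 19 or higher
ping6 -c 3 2001:db8::1    # dual stack only: confirm the Management IPv6 gateway is reachable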

Known limitations:

The following caveats apply when installing the Crosswork cluster using the cluster installer tool.

  • The vCenter host VMs defined must use the same network names (vSwitch) across all hosts in the data center.

  • vCenter storage folders, or datastores organized under a virtual folder structure, are currently not supported. Please ensure that the datastores referenced are not grouped under a folder.

  • Any VMs that are not created by the day 0 installer (for example, manually brought up VMs), cannot be changed either by the day 0 installer or via the Crosswork UI later. Similarly, VMs created via the Crosswork UI cannot be modified using the day 0 installer. When making modifications after the initial deployment of the cluster, ensure that you capture the inventory information.

  • The vCenter UI provides a service where a user accessing via IPv4 can upload images to an IPv6 ESXi host. The cluster installer tool cannot use this service. Follow either of the following workarounds for IPv6 ESXi hosts:

    1. Install the OVA template image manually (for more information, see Manual Installation of Cisco Crosswork using vCenter vSphere UI).

    2. Run the cluster installer tool from an IPv6-enabled machine. To do this, configure the Docker daemon to map an IPv6 address into the Docker container.


Note


The cluster installer tool will deploy the software and power on the virtual machines. If you wish to power on the virtual machines yourself, use the manual installation.


Procedure


Step 1

In your vCenter data center, go to Host > Configure > Networking > Virtual Switches and select the virtual switch. In the virtual switch, select Edit > Security, and configure the following DVS port group properties:

  • Set Promiscuous mode as Reject

  • Set MAC address changes as Reject

Confirm the settings and repeat the process for each virtual switch used in the cluster.

Step 2

In your Docker capable machine, create a directory where you will store everything you will use during this installation.

Note

 

If you are using a Mac, please ensure that the directory name is in lower case.
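For example (a sketch; the directory name cw-install is illustrative and kept in lower case for Mac compatibility):

mkdir -p ~/cw-install
cd ~/cw-install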

Step 3

Download the installer bundle (.tar.gz file) and the OVA file from cisco.com to the directory you created previously. For the purpose of these instructions, we will use the file names signed-cw-na-platform-installer-7.0.0-85-release700-240823.tar.gz and signed-cw-na-platform-7.0.0-85-release700-240823.ova, respectively.

Attention

 

The file names mentioned in this topic are sample names and may differ from the actual file names on cisco.com.

Step 4

Use the following command to extract the installer bundle:

tar -xvf signed-cw-na-platform-installer-7.0.0-85-release700-240823.tar.gz

The contents of the installer bundle are extracted to a new directory (e.g. signed-cw-na-platform-installer-7.0.0-85-release700). This new directory contains the installer image (cw-na-platform-installer-7.0.0-85-release700-240823.tar.gz) and the files necessary to validate the image.

Step 5

Change to the new directory created in the previous step, and review the contents of the README file to understand everything that is in the package and how it will be validated in the following steps.

Step 6

Verify the signature of the installer image:

  1. Ensure you have Python installed. If not, go to python.org and download the version of Python that is appropriate for your workstation.

  2. Use python --version to find out the version of python on your machine.

  3. Depending on the python version use one of these commands to validate the file.

    If you are using python 2.x, use the following command to validate the file:

    python cisco_x509_verify_release.py -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file>
    -v dgst -sha512

    If you are using python 3.x, use the following command to validate the file:

    python cisco_x509_verify_release.py3 -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file>
    -v dgst -sha512
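For example, using the Python 3 variant with the sample installer bundle from Step 3 (the certificate and signature file names are illustrative; use the .cer and .signature files included in the bundle):

python cisco_x509_verify_release.py3 -e crosswork.cer -i cw-na-platform-installer-7.0.0-85-release700-240823.tar.gz -s cw-na-platform-installer-7.0.0-85-release700-240823.tar.gz.signature -v dgst -sha512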

Step 7

Use the following command to load the installer image file into your Docker environment.

docker load -i <.tar.gz file>

For example:

docker load -i cw-na-platform-installer-7.0.0-85-release700-240823.tar.gz

Step 8

Run the docker image list or docker images command to get the image ID (which is needed in the next step).

For example:

docker images

The result will be similar to the following (the value we need is in the IMAGE ID column):

My Machine% docker images
REPOSITORY                         TAG                                                   IMAGE ID        CREATED      SIZE
dockerhub.cisco.com/cw-installer   cw-na-platform-installer-7.0.0-85-release700-240823   a4570324fad30   7 days ago   276MB

Note

 

Pay attention to the "CREATED" time stamp in the table presented when you run docker images, as you might have other images present from the installation of prior releases. If you wish to remove these, the docker image rm {image id} command can be used.

Step 9

Launch the Docker container using the following command:

docker run --rm -it -v `pwd`:/data {image id of the installer container}

To run the image loaded in our example, the command would be:

docker run --rm -it -v `pwd`:/data a4570324fad30

Note

 
  • You do not have to enter that full value. In this case, "docker run --rm -it -v `pwd`:/data a45" was adequate. Docker requires enough of the image ID to uniquely identify the image you want to use for the installation.

  • In the above command, we are using the backtick (`). Do not use the single quote or apostrophe (') as the meaning to the shell is very different. By using the backtick (recommended), the template file and OVA will be stored in the directory where you are on your local disk when you run the commands, instead of inside the container.

  • When deploying an IPv6 or dual stack cluster, the installer needs to run on an IPv6-enabled container/VM. This requires additionally configuring the Docker daemon before running the installer, using the following method:

    • Linux hosts (ONLY): Run the Docker container in host networking mode by adding the "--network host" flag to the Docker run command line.

      docker run --network host <remainder of docker run options>
  • Centos/RHEL hosts, by default, enforce a strict SELinux policy which does not allow the installer container to read from or write to the mounted data volume. On such hosts, run the Docker command with the Z option on the volume mount as shown below:

    docker run --rm -it -v `pwd`:/data:Z <remainder of docker options>

Step 10

Navigate to the directory with the VMware template.

cd /opt/installer/deployments/7.0.0/vcentre

Step 11

Copy the template file found under /opt/installer/deployments/7.0.0/vcentre/deployment_template_tfvars to the /data folder using a different name.

For example: cp deployment_template_tfvars /data/deployment.tfvars

For the rest of this procedure, we will use deployment.tfvars in all the examples.

Step 12

Edit the template file located in the /data directory in a text editor to match your planned deployment. Refer to the Installation Parameters table for details on the required and optional fields and their proper settings. The Sample manifest templates for VMware vCenter section includes examples that you can reference for proper formatting. The examples are more compact due to the removal of descriptive comments.

Step 13

From the /opt/installer directory, run the installer.

./cw-installer.sh install -p -m /data/<template file name> -o /data/<.ova file> -y

For example:

./cw-installer.sh install -p -m /data/deployment.tfvars -o /data/signed-cw-na-platform-7.0.0-85-release700-240823.ova -y

Important

 

If you want to use the auto action functionality, the definition file (JSON format) must be specified while executing the OVA file in the following format:

./cw-installer.sh install -p -m /data/<template file name> -a <path to json def file> -o /data/<.ova file>

Step 14

Read, and then enter "yes" if you accept the End User License Agreement (EULA). Otherwise, exit the installer and contact your Cisco representative.

Step 15

Enter "yes" when prompted to confirm the operation.

Note

 

It is not uncommon to see some warnings like the following during the install:

Warning: Line 119: No space left for device '8' on parent controller '3'.
Warning: Line 114: Unable to parse 'enableMPTSupport' for attribute 'key' on element 'Config'.

If the install process proceeds to a successful conclusion (see sample output below), these warnings can be ignored.

Sample output:

cw_cluster_vms = <sensitive>
INFO: Copying day 0 state inventory to CW
INFO: Waiting for deployment status server to startup on 10.90.147.66. Elapsed time 0s, retrying in 30s
Crosswork deployment status available at http://{VIP}:30602/d/NK1bwVxGk/crosswork-deployment-readiness?orgId=1&refresh=10s&theme=dark 
Once deployment is complete login to Crosswork via: https://{VIP}:30603/#/logincontroller 
INFO: Cw Installer operation complete.

Note

 

If the installation fails due to a timeout, you should try rerunning the installation (step 13) without the -p option. This will deploy the VMs serially rather than in parallel.

If the installer fails for any other reason (for example, mistyped IP address), correct the error and rerun the install script.

If the installation fails (with or without the -p), open a case with Cisco and provide the .log files that were created in the /data directory (and the local directory where you launched the installer docker container), to Cisco for review. The two most common reasons for the install to fail are: (a) password that is not adequately complex, and (b) errors in the template file.
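For example, to rerun the installation serially after a timeout, use the Step 13 command without the -p option:

./cw-installer.sh install -m /data/deployment.tfvars -o /data/signed-cw-na-platform-7.0.0-85-release700-240823.ova -y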


What to do next

Sample manifest templates for VMware vCenter

This topic contains manifest template examples for various scenarios of Crosswork cluster deployment.


Note


In case you are using resource pools, please note that individual ESXi host targeting is not allowed and vCenter is responsible for assigning the VM to a host in the resource pool. If vCenter is not configured with resource pools, then the exact ESXi host path must be passed.


Example 1 to deploy a cluster (3 hybrid, 2 workers) on a single host

The following example deploys a Crosswork cluster containing 3 Hybrid nodes (IDs 0, 1, 2) and 2 Worker nodes (IDs 3, 4), all assigned to a single ESXi host.

*******
vCenter Example
********
 
# In case of IPv6, specify ClusterIPstack as IPv6 and continue specifying IPv6 Configuration. 

ClusterIPStack = "IPv4"
ManagementVIP = "172.25.87.94"
ManagementIPNetmask = "255.255.255.192"
ManagementIPGateway = "172.25.87.65"
DataVIP = "192.168.123.94"
DataIPNetmask = "255.255.255.0"
DataIPGateway = "0.0.0.0"
DNS = "171.70.168.183"
DomainName = "cisco.com"
CWPassword = "Password!!"
VMSize = "Large"
NTP = "ntp.cisco.com"
CloneTimeOut = 90
ThinProvisioned = true
BackupMinPercent = 50
EnableHardReservations = false
ManagerDataFsSize     = 450
WorkerDataFsSize      = 450
 
CwVMs = {
  "0" = {
    VMName = "vm0",
    ManagementIPAddress = "172.25.87.82",
    DataIPAddress = "192.168.123.82",
    NodeType = "Hybrid"
  },
  "1" = {
    VMName = "vm1",
    ManagementIPAddress = "172.25.87.83",
    DataIPAddress = "192.168.123.83",
    NodeType = "Hybrid"
  },
  "2" = {
    VMName = "vm2",
    ManagementIPAddress = "172.25.87.84",
    DataIPAddress = "192.168.123.84",
    NodeType = "Hybrid"
  },
  "3" = {                                
 
    VMName = "vmworker0",                      
    ManagementIPAddress = "172.25.87.85",
    DataIPAddress = "192.168.123.84",   
    NodeType = "Worker"                  
  }, 
  "4" = {                                
 
    VMName = "vmworker1",                      
    ManagementIPAddress = "172.25.87.86",
    DataIPAddress = "192.168.123.86",   
    NodeType = "Worker"                  
  },   
 
}
 
 
/********* vCentre Resource Data with Cw VM assignment *********/
 
VCentreDC = {
  VCentreAddress = "172.25.87.90",
  VCentreUser = "administrator@vsphere.local",
  VCentrePassword = "******",
  DCname = "dc-cr",
  MgmtNetworkName = "VM Network",
  DataNetworkName = "DPortGroup10",
  VMs = [
    {
      HostedCwVMs = ["0", "1", "2", "3", "4"],
      Host = "172.25.87.93",
      Datastore = "datastore3",
      HSDatastore = "datastore3",
    },
  ]
}

Example 2 to deploy a cluster (3 hybrid nodes) on 2 hosts

The following example deploys a Crosswork cluster with the hosts specified:

/********************************************
* Cw Cluster deployment input data TEMPLATE *
*              vcentre version              *
*              EDIT BEFORE USE              *
*                    v4.2.0                 *
*********************************************/
#See at the end of the file for a configured sample

/********* Crosswork Cluster Data  *********/

  # The name of the Crosswork Cluster.
  ClusterName      = "CW-Cluster-01"

  # Provide  name of Cw VM image in vcentre or leave empty
  # When empty the image name will be populated from the uploaded image
  Cw_VM_Image = "cw-na-platform-4.3.0-88-release-220809"    # Line added automatically by installer.

  # The IP stack protocol: IPv4 or IPv6
  ClusterIPStack        = "IPv4"

  # The Management Virtual IP for the cluster
  ManagementVIP     = "10.90.147.66"

  # Optional: The Management Virtual IP host-name
  ManagementVIPName = ""

  # The Management IP subnet in dotted decimal format for ipv4 or prefix length for ipv6
  ManagementIPNetmask = "255.255.255.192"

  # The Gateway IP on the Management Network
  ManagementIPGateway = "10.90.147.65"

  # The Data Virtual IP for the cluster. Use 0.0.0.0 or ::0 to disable
  DataVIP           = "192.168.5.66"

  # Optional: The Data Virtual IP host-name
  DataVIPName = ""

  # The Data IP subnet in dotted decimal format for ipv4 or prefix length for ipv6
  # Provide any regular mask when not in use
  DataIPNetmask       = "255.255.255.0"

  # The Gateway IP on the Data Network
  DataIPGateway       = "192.168.5.1"

  #  The IP address of the DNS server
  DNS                 = "171.70.168.183"

  # The domain name to use for the cluster
  DomainName            = "cisco.com"

  # Sets the cw-admin user ssh login password for all VMs in the cluster
  # The password MUST be of min length 8 and strong
  CWPassword            = "Password!!"

  # Sets the VM size for the cluster. The only supported option is Large.
  VMSize                = "Large"

  # NTP server address or name
  NTP                   = "ntp.esl.cisco.com"

  # Configuration Manifest schema version
  SchemaVersion                   = "6.0.0"

  # Data disk size for Manager/Hybrid nodes in GB. Min 450 Max 8000
  ManagerDataFsSize = 450
  # Data disk size for Worker nodes in GB. Min 450 Max 8000
  WorkerDataFsSize = 450

  // Thin or thick provisioning for all disks. Set to true for thin provisioning, false for thick
  ThinProvisioned = true

  # Log partition size in GB. Min 10 Max 1000
  LogFsSize = 10

  # Minimum percentage of the data disk space to be used for the size of the backup partition
  # Note: The final backup partition size will be calculated dynamically. This parameter defines the minimum.
  # Valid range 1 - 80
  BackupMinPercent = 50

  # Enforces VM profile reservations as "hard"
  EnableHardReservations = true

  # FOR DEMO USE ONLY - NOT TO BE USED IN PRODUCTION DEPLOYMENTS
  # Ram disk size in GB
  RamDiskSize           = 10 


/********* Crosswork VM Data Map *********/
# Configure named entries for each Cw VM.
# Number of Hybrid VMs minimum: 3; maximum: 3
# Number of Worker VMs minimum: 0; maximum: 3

CwVMs = {
    # Seed VMs' data.
    # IMPORTANT: A VM with id "0" MUST be present in the initial day0 install manifest and its role MUST be
    # set to either MASTER or HYBRID.
  "0" = {

    # This VM's name
    VMName                = "CW_Node_0",

    # This VMs' management IP address
    ManagementIPAddress = "10.90.147.67",

    # This VMs' data IP address. Use 0.0.0.0 or ::0 to disable
    DataIPAddress       = "192.168.5.67",

    # This Cw VM's type - use "Hybrid" for initial install
    NodeType               = "Hybrid",

    # The state for this VM; 2 = running. Only uncomment when doing a manual inventory import
    #Op_Status = 2
  },

   # Second VMs' data
  "1" = {

    # This VM's name
    VMName                = "CW_Node_1",

    # This VMs' management IP address
    ManagementIPAddress = "10.90.147.68",

    # This VMs' data IP address
    DataIPAddress       = "192.168.5.68",

    # This Cw VM's type - use "Hybrid" for initial install
    NodeType               = "Hybrid",

    # The state for this VM; 2 = running. Only uncomment when doing a manual inventory import
    #Op_Status = 2
  },

  "2" = {

    # This VM's name
    VMName                = "CW_Node_2",
    ManagementIPAddress = "10.90.147.69",
    DataIPAddress       = "192.168.5.69",

    # This Cw VM's type - use "Hybrid" for initial install
    NodeType               = "Hybrid",

    # The state for this VM; 2 = running. Only uncomment when doing a manual inventory import
    #Op_Status = 2
  }
}



/********* vcentre Resource Data with Cw VM assignment *********/

VCentreDC = {

  # The vcentre IP or host name
  VCentreAddress = "10.88.192.244",

  # The username to use for logging into vcentre
  VCentreUser = "Cisco_User",

  # The vcentre password for the user
  VCentrePassword = "Password",

  # The name of the Data Centre resource to use
  DCname = "Cisco-CX-Lab",

  # The name of the vcentre network to attach to the Cw VM Management interface
  # NOTE: Escape any special characters using their URL escape codes, eg use "%2f" instead of "/"
  MgmtNetworkName = "VM Network",

  # The name of the vcentre network to attach to the Cw VM Data interface.
  # Leave empty if not used.
  # NOTE: Escape any special characters using their URL escape codes, eg use "%2f" instead of "/"
  DataNetworkName = "Crosswork-Internal",

  # The resource folder name on vcentre. Leave empty if not used.
  DCfolder = "",

  # List of the vcentre host resources along with the VM names
  # that each resource will host. Add additional stanzas, separated by a ',',
  # for each additional ESXi host or resource
  VMs = [{

    # The ESXi host, or ONLY the vcentre cluster/resource group name.
    Host = "10.90.147.99",

    # The datastore name available to be used by this host or resource group.
    Datastore = "Datastore-1",

    # The high speed datastore available for this host or resource group.
    # Set to same value as Datastore if unsure.
    HSDatastore = "Datastore-1"

    # The ids of the VMs to be hosted by the above ESXi host or resource. These have to match the Cw VM
    # ids specified in the Cw VM map. Separate multiple VMs on the given
    # host with a ',', eg ["0","1"].
    HostedCwVMs = ["0","1"]

    },
    {
    Host = "10.90.147.93"
    Datastore = "Datastore-2"
    HSDatastore = "Datastore-2"
    HostedCwVMs =["2"]
    } 
  ]
}

Example 3 to deploy a cluster (3 hybrid, 2 workers) in a Resource Group

The following example deploys a Crosswork cluster using resource groups:

/********************************************
* Cw Cluster deployment input data TEMPLATE *
*              vcentre version              *
*              EDIT BEFORE USE              *
*                    v6.0.0                 *
*********************************************/
#See at the end of the file for a configured sample

/********* Crosswork Cluster Data  *********/

  # The name of the Crosswork cluster.
  ClusterName      = "CW-cluster-01"

  # Provide  name of Cw VM image in vcentre or leave empty
  # When empty the image name will be populated from the uploaded image
  Cw_VM_Image = "cw-na-platform-6.0.0-414-develop-230926"    # Line added automatically by installer.

  # The IP stack protocol: IPv4 or IPv6
  ClusterIPStack        = "IPv4"

  # The Management Virtual IP for the cluster
  ManagementVIP     = "10.201.240.158"

  # Optional: The Management Virtual IP host-name
  ManagementVIPName = ""

  # The Management IP subnet in dotted decimal format for ipv4 or prefix length for ipv6
  ManagementIPNetmask = "255.255.255.224"

  # The Gateway IP on the Management Network
  ManagementIPGateway = "10.201.240.129"

  # The Data Virtual IP for the cluster. Use 0.0.0.0 or ::0 to disable
  DataVIP           = "192.168.77.158"

  # Optional: The Data Virtual IP host-name
  DataVIPName = ""

  # The Data IP subnet in dotted decimal format for ipv4 or prefix length for ipv6
  # Provide any regular mask when not in use
  DataIPNetmask       = "255.255.255.0"

  # The Gateway IP on the Data Network
  DataIPGateway       = "192.168.77.1"

  #  The IP address of the DNS server
  DNS                 = "172.18.108.43 172.18.108.34"

  # The domain name to use for the cluster
  DomainName            = "cisco.com"

  # Kubernetes Service Network Customization - The default network '10.96.0.0'.
  # NOTE: The CIDR range is fixed '/16', no need to enter.
  #       Only IPv4 is supported, IPv6 customization is NOT supported.
  K8sServiceNetwork = "10.96.0.0"

  # Kubernetes Pod Network Customization - The default network '10.244.0.0'.
  # NOTE: The CIDR range is fixed '/16', no need to enter.
  #       Only IPv4 is supported, IPv6 customization is NOT supported.
  K8sPodNetwork = "10.244.0.0"

  # Sets the cw-admin user ssh login password for all VMs in the cluster
  # The password MUST be of min length 8 and strong
  CWPassword            = "Password"

  # Sets the VM size for the cluster. The only supported option is Large.
  VMSize                = "Large"

  # NTP server address or name
  NTP                   = "ntp.esl.cisco.com"

  # Configuration Manifest schema version
  SchemaVersion                   = "6.0.0"

  # Data disk size for Manager/Hybrid nodes in GB. Min 450 Max 8000
  ManagerDataFsSize = 450
  # Data disk size for Worker nodes in GB. Min 450 Max 8000
  WorkerDataFsSize = 450

  // Thin or thick provisioning for all disks. Set to true for thin provisioning, false for thick
  ThinProvisioned = true

  # Log partition size in GB. Min 10 Max 1000
  LogFsSize = 10

  # Minimum percentage of the data disk space to be used for the size of the backup partition
  # Note: The final backup partition size will be calculated dynamically. This parameter defines the minimum.
  # Valid range 1 - 80
  BackupMinPercent = 50

  # Enforces VM profile reservations as "hard"
  EnableHardReservations = "false"

  # FOR DEMO USE ONLY - NOT TO BE USED IN PRODUCTION DEPLOYMENTS
  # Ram disk size in GB
  RamDiskSize           = 0

  # Pods that are marked as skip auto install will not be brought up until a dependent application/pod explicitly asks for it
  EnableSkipAutoInstallFeature = "False"

  # DEMO/DEV USE ONLY - Enforce pod minimum resource reservations. Default and for production use is True
  EnforcePodReservations = "True"

  # Optional: Provide a standard IANA time zone. Default value is Etc/UTC if not specified
  Timezone = ""

/********* Crosswork VM Data Map *********/
# Configure named entries for each Cw VM.
# Number of Hybrid VMs minimum: 3; maximum: 3
# Number of Worker VMs minimum: 0; maximum: 3

CwVMs = {
  "0" = {
    VMName                = "cw-vm-0",
    ManagementIPAddress = "10.201.240.130",
    DataIPAddress       = "192.168.77.130",
    NodeType               = "Hybrid",
    #Op_Status = 2
  },
  "1" = {
    VMName                = "cw-vm-1",
    ManagementIPAddress = "10.201.240.131",
    DataIPAddress       = "192.168.77.131",
    NodeType               = "Hybrid",
    #Op_Status = 2
  },
  "2" = {
    VMName                = "cw-vm-2",
    ManagementIPAddress = "10.201.240.132",
    DataIPAddress       = "192.168.77.132",
    NodeType               = "Hybrid",
    #Op_Status = 2
  },
  "3" = {
    # This VM's name
    VMName                = "cw-worker-3",
    ManagementIPAddress = "10.201.240.133",
    DataIPAddress       = "192.168.77.133",
    NodeType               = "Worker",
                                                                                       
    # The state for this VM; 2 = running. Only uncomment when doing a manual inventory import
    #Op_Status = 2
  },
  "4" = {
    # This VM's name
    VMName                = "cw-worker-4",
    ManagementIPAddress = "10.201.240.134",
    DataIPAddress       = "192.168.77.134",
    NodeType               = "Worker",
    #Op_Status = 2
  }  
}



/********* vcentre Resource Data with Cw VM assignment *********/

VCentreDC = {
  VCentreAddress = "10.88.192.244",
  VCentreUser = "Cisco_User",
  VCentrePassword = "Password",
  DCname = "rcdn5-spm-dc-01",
  MgmtNetworkName = "Management Network",
  DataNetworkName = "Data Network",
  DCfolder = "" 
  VMs = [{
    Host = "{path to resource Group}",                                                                                    
    Datastore = "iSCSI-DataStore",                                                                                        
    HSDatastore = "iSCSI-DataStore",                                                                                      
    HostedCwVMs = ["0","1","2","3","4"],                                                                                                  
    }
  ]
}

Manual Installation of Cisco Crosswork using vCenter vSphere UI

This section explains how to build the cluster using the vCenter user interface. This same procedure can be used to add or replace nodes if necessary.

The manual installation workflow is broken into two parts. In the first part, you create a template. In the second part, you deploy the template as many times as needed to build the cluster of 3 Hybrid nodes (typically) along with any Worker nodes that your environment requires.

  1. Build the OVF template

  2. Deploy the template


Note


If the cluster has already been installed (no matter the method used), the template file will already exist unless it was deleted. In this case, you can go directly to deploying the template (the second part of this procedure).


Manual installation is preferred in any of the following situations:

  • Owing to your data center configuration, you cannot deploy the cluster using the installer tool.

  • You need to add nodes to the existing cluster.

  • You need to replace a failed node.

  • You want to migrate a node to a new host machine.


Important


Anytime the configuration of the cluster is changed manually—whether to install the Crosswork cluster, add nodes, or move nodes to new hosts using the procedures detailed in this section—you must import the cluster inventory file (.tfvars file) to the Crosswork UI. You must set the parameter OP_Status = 2 to enable manual import of the inventory. For more information, see Import Cluster Inventory.
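For example, in the .tfvars inventory file, the status line in each deployed VM's entry is uncommented before the import (a sketch based on the sample templates in this chapter; values are illustrative):

  "0" = {
    VMName              = "CW_Node_0",
    ManagementIPAddress = "10.90.147.67",
    DataIPAddress       = "192.168.5.67",
    NodeType            = "Hybrid",
    Op_Status           = 2    # 2 = running
  }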


Build the OVF template

Before you begin

Procedure


Step 1

Download the latest available Cisco Crosswork platform image file (*.ova) to your system.

Step 2

With VMware ESXi running, log into the VMware vSphere Web Client. On the left navigation pane, choose the ESXi host or cluster where you want to deploy the VM.

Step 3

In the vSphere UI, go to Host > Configure > Networking > Virtual Switches and select the virtual switch. In the virtual switch, select Edit > Security, and configure the following DVS port group properties:

  • Set Promiscuous mode as Reject

  • Set MAC address changes as Reject

Confirm the settings and repeat the process for each virtual switch used in the cluster.

Step 4

Review and confirm that your network settings meet the requirements.

Ensure that the networks that you plan to use for Management network and Data network are connected to each host where VMs will be deployed.

Step 5

Choose Actions > Deploy OVF Template.

Caution

 

The default VMware vCenter deployment timeout is 15 minutes. If vCenter times out during deployment, the resulting VM will not be bootable. To prevent this, we recommend that you document the choices (such as IP address, gateway, DNS server, etc.) so that you can enter the information quickly and avoid any issues with the VMware configuration.

Step 6

The VMware Deploy OVF Template window appears, with the first step, 1 - Select an OVF template, highlighted. Click Choose Files to navigate to the location where you downloaded the OVA image file and select it. Once selected, the file name is displayed in the window.

Step 7

Click Next. The Deploy OVF Template window is refreshed, with 2 - Select a name and folder now highlighted. Enter a name and select the respective data center for the Cisco Crosswork VM you are creating.

We recommend that you include the Cisco Crosswork version and build number in the name, for example: Cisco Crosswork 7.0 Build 152.

Step 8

Click Next. The Deploy OVF Template window is refreshed, with 3 - Select a compute resource highlighted. Select the host or cluster for your Cisco Crosswork VM.

Step 9

Click Next. The VMware vCenter Server validates the OVA. Network speed will determine how long validation takes. After the validation is complete, the Deploy OVF Template window is refreshed, with 4 - Review details highlighted.

Step 10

Review the OVF template that you are deploying. Note that this information is gathered from the OVF, and cannot be modified.

Step 11

Click Next. The Deploy OVF Template window is refreshed, with 5 - License agreements highlighted. Review the End User License Agreement and if you agree, click the I accept all license agreements checkbox. Otherwise, contact your Cisco Experience team for assistance.

Step 12

Click Next. The Deploy OVF Template window is refreshed, with 6 - Configuration highlighted. Crosswork supports only the following deployment configurations: IPv4 Network, IPv6 Network, and Dual Stack Network. Select your preferred deployment configuration.

Figure 1. Select a deployment configuration

Step 13

Click Next. The Deploy OVF Template window is refreshed, with 7 - Select Storage highlighted. Choose the relevant option from the Select virtual disk format drop-down list.

Note

 

For production deployment, choose the Thick Provision Eager Zeroed option because this will preallocate disk space and provide the best performance. For lab purposes, we recommend the Thin Provision option because it saves disk space.

From the table, choose the datastore you want to use, and review its properties to ensure there is enough available storage.

Figure 2. Select Storage

Step 14

Click Next. The Deploy OVF Template window is refreshed, with 8 - Select networks highlighted. From the Destination Network drop-down list, select the proper networks for the Management Network and the Data Network.

Figure 3. Select a deployment configuration

Important

 

Admin Network and NBI Network are not applicable for Crosswork Network Controller deployments. You should leave these fields with the default values.

Step 15

Click Next. The Deploy OVF Template window is refreshed, with 9 - Customize template highlighted.

Note

 

As you are creating a template now, enter the IP information for the first node.

  1. Expand the Management Network settings. Provide information for the IPv4, IPv6 or dual stack deployment (as per your selection).

  2. Expand the Data Network settings. Provide information for the IPv4, IPv6 or dual stack deployment (as per your selection).

    Figure 4. Customize template settings
  3. Expand the Deployment Credentials settings. Enter relevant values for the VM Username and Password.

    Note

     

    Use a strong VM Password (at least 8 characters long, including upper and lower case letters, numbers, and at least one special character). Avoid passwords similar to dictionary words (for example, "Pa55w0rd!") or relatable words. While they satisfy the criteria, such passwords are weak and will be rejected, resulting in failure to set up the VM.

  4. Expand the DNS and NTP Servers settings. According to your deployment configuration (IPv4, IPv6, or dual stack), the fields that are displayed differ. Provide information in the following fields:

    • DNS IP Address: The IP addresses of the DNS servers you want the Cisco Crosswork server to use. Separate multiple IP addresses with spaces.

    • DNS Search Domain: The name of the DNS search domain.

    • NTP Servers: The IP addresses or host names of the NTP servers you want to use. Separate multiple IPs or host names with spaces.

    • Timezone: Enter the timezone details. Default value is UTC.

    Figure 5. Customize template - DNS and NTP Servers

    Note

     

    The DNS and NTP servers must be reachable using the network interfaces you have mapped on the host. Otherwise, the configuration of the VM will fail.

  5. The default Disk Configuration settings should work for most environments. Change the settings only if you are instructed to by the Cisco Customer Experience team.

  6. Expand Crosswork Configuration and enter your legal disclaimer text (users will see this text if they log into the CLI).

  7. Expand Crosswork Cluster Configuration. Provide relevant values for the following fields:

    Figure 6. Customize template - Crosswork Cluster Configuration (part 1)
    • VM Type:

      • Choose Hybrid if this is one of the 3 Hybrid nodes.

      • Choose Worker if this is a Worker node.

    • Cluster Seed node:

      • Choose True if this is the first VM being built in a new cluster.

      • Choose False for all other VMs, or when rebuilding a failed VM.

    • Crosswork Management Cluster Virtual IP: Enter the Management Virtual IP address.

    • Crosswork Management Cluster Virtual IP Name: Enter the Management Virtual IP DNS name.

    • Crosswork Data Cluster Virtual IP: Enter the Data Virtual IP address.

    • Crosswork Data Cluster Virtual IP Name: Enter the Data Virtual IP DNS name.

    • Initial node count: Set to the default value, which is 3.

    • Initial leader node count: Set to the default value, which is 3.

    Figure 7. Customize template - Crosswork Cluster Configuration (part 2)
    • Location of VM: Enter the location of the VM.

    • K8 Orchestrator: Enforces minimum resource reservations for the pod. If left blank, the default value ("True") is selected.

    • Kubernetes Service Network: The network address for the Kubernetes service network. By default, the CIDR range is fixed to '/16'.

    • Kubernetes Pod Network: The network address for the Kubernetes pod network. By default, the CIDR range is fixed to '/16'.

    Figure 8. Customize template - Crosswork Cluster Configuration (part 3)
    • Installation type:

      • For new cluster installation: Do not select the check box.

      • Replacing a failed VM: Select the check box.

      • Adding a new worker node to an existing cluster: Do not select the check box.

    • Enable Skip Auto Install Feature: Any pods marked as skip auto install will not be brought up until a dependent application/pod explicitly asks for it. If left blank, the default value ("False") is selected.

    • Auto Action Manifest Definition: Auto action functionality enables you to customize the installation of applications along with the cluster installation. For more information, see Automate Application Installation Using Auto Action Functionality.

      • If you plan to use the auto action functionality, enter manifest definition details. You must compress or minify the auto action JSON file and enclose it in CDATA format. The format is <![CDATA[{auto action json compressed content}]]>.

        Sample auto action CDATA:

        <![CDATA[{"auto_action":{"add_to_repository_requests":[{"file_location":{"uri_location":{"uri":"file:///example.com/path/to/cw-na-cncadvantage-7.0.0-240831.tar.gz"}}}],
        "install_activate_requests":[{"package_identifier":{"id":"capp-coe","version":"7.0.0"}}]}}]]>
      • If you do not plan to use this functionality, leave the field blank.

    • Default Application Resource Profile: Use the default value (Empty).

    • Default Infra Resource Profile: Use the default value (Empty).

    Figure 9. Customize template - Crosswork Cluster Configuration (part 4)
    • CA Private Key: Use the default value (Empty).

    • CA Public Key: Use the default value (Empty).

    • Use NonDefault Calico Bgp Port: Leave the checkbox unchecked.

    • Ignore Diagnose Failure: Use the default value (True).

    • Enable Diagnostics Script Check run: Use the default value (True).

    Figure 10. Customize template - Crosswork Cluster Configuration (part 5)

Step 16

Click Next. The Deploy OVF Template window is refreshed, with 10 - Ready to Complete highlighted.

Step 17

Review your settings and then click Finish if you are ready to begin deployment. Wait for the deployment to finish before continuing. To check the deployment status:

  1. Open a VMware vCenter client.

  2. In the Recent Tasks tab of the host VM, view the status of the Deploy OVF template and Import OVF package jobs.

Step 18

To finalize the template creation, select the host, right-click the newly installed VM, and select Template > Convert to Template. A prompt confirming the action is displayed. Click Yes to confirm. The template is created under the VMs and Templates tab in the vSphere Client UI.

This is the end of the first part of the manual installation workflow. In the second part, use the newly created template to build the cluster VMs.


Deploy the template

Procedure


Step 1

To build a VM, right-click on the template and select New VM from This Template.

Note

 

If the template is no longer present, go back and create the template. For more information, see Build the OVF template.

Step 2

The VMware Deploy From Template window appears, with the first step, 1 - Select a name and folder, highlighted. Enter a name and select the respective data center for the VM.

Note

 

If this is a new VM, the name must be unique and cannot be the same name as the template. If this VM is replacing an existing VM (for example, CW-VM-0), give the VM a unique temporary name (for example, CW-VM-0-New).

Step 3

Click Next. The Deploy From Template window will refresh, highlighting the 2 - Select a compute resource section. Select the host for your Cisco Crosswork VM.

Step 4

Click Next. The Deploy From Template window will refresh, highlighting the 3 - Select Storage section. Choose the Same format as source option as the virtual disk format (recommended).

The recommended configuration for the nodes uses a combination of high-speed (typically SSD-based) and normal (typically HDD-based) storage. If you are following the recommended configuration, follow the steps for two data stores. Otherwise, follow the steps for using a single data store.

  • If you are using two data stores (regular and high speed):

    • Enable Configure per disk option.

    • Select the same (regular) data store as the Storage setting for disks 1 through 5. This data store must have at least 916 GB of space.

    • Select the host's high-speed (SSD) data store as the Storage setting for disk 6. The high-speed data store must have at least 50 GB of space.

    • Click Next.

      Figure 11. Select Storage - Configure per disk
  • If you are using a single data store: Select the data store you wish to use, and click Next.

  • If your data center uses shared storage: Configure all the drives to utilize the shared storage, and click Next.

Step 5

The Deploy From Template window will refresh, highlighting the 4 - Select clone options section with the following checkboxes visible on the screen. Unless you have been given specific instructions to make modifications, click Next.

  • Customize the operating system: Check this box if you want to customize the operating system to avoid conflicts when deploying the VM. This step is optional.

    • If you check this box, the Deploy From Template window will refresh, highlighting the Customize guest OS section. Make the necessary changes, and click Next.

  • Customize this virtual machine's hardware: Check this box if you want to modify the IP settings or resource settings of this VM. This step is optional.

    • If you check this box, the Deploy From Template window will refresh, highlighting the Customize hardware section. Make the necessary changes, and click Next.

  • Power on virtual machine after creation: Leave this checkbox unselected.

Step 6

Click Next. The Deploy From Template window will refresh, highlighting the 5 - Customize vApp properties section. The vApp properties are prepopulated with the values entered during the template creation. Some of the values will need to be updated with the appropriate values for each node being deployed.

Tip

 
  • It is recommended to change only the fields that are unique to each node. Leave all other fields at the default values.

  • If this VM is being deployed to replace a failed VM, or to migrate the VM to a new host, the IP and other settings must match the machine being replaced.

  • Set the node type (Hybrid/Worker).

  • Management Network settings: Enter correct IP values for each VM in the cluster.

  • Data Network settings: Enter correct IP values for each VM in the cluster.

  • Deployment Credentials: Enter the same deployment credentials for each VM in the cluster.

  • DNS and NTP Servers: Enter correct values for the DNS and NTP servers.

  • Disk Configuration: Leave at the default settings unless directed otherwise by the Cisco Customer Experience team.

  • Crosswork Configuration: Enter the disclaimer message.

  • Crosswork Cluster Configuration:

    • VM Type: Select Hybrid or Worker.

    • Cluster Seed node:

      • Choose True if this is the first VM being built in a new cluster.

      • Choose False for all other VMs, or when rebuilding a failed VM or moving the VM to a new host.

    • Crosswork Management Cluster Virtual IP: The Virtual IP remains the same for each cluster node.

    • Crosswork Data Cluster Virtual IP: The Virtual IP remains the same for each cluster node.

Step 7

Click Next. The Deploy From Template window will refresh, highlighting the 6 - Ready to complete section. Review your settings and then click Finish if you are ready to begin deployment.

Step 8

For the newly created VM, confirm that the resource settings allocated for the VM match those specified in Identify the Resource Footprint.

Step 9

Repeat from Step 1 to Step 8 to deploy the remaining VMs in the cluster.

Important

 

When deploying the cluster for the first time, make sure the IP addresses and Seed node settings are correct. When replacing or migrating a node make sure the settings match the original VM.

Step 10

Choose the relevant action:

  • If you are deploying a new VM, power on the VM selected as the cluster seed node. After a delay of a few minutes, power on the remaining nodes. To power on, expand the host’s entry, click the Cisco Crosswork VM, and then choose Actions > Power > Power On.
  • If this VM is replacing an existing VM, perform the following:
    • Power down the existing VM.

    • Change the name of the original VM (for example, change it to CW-VM-0-Old).

    • Change the name of the replacement VM (for example, from CW-VM-0-New) to match the name of the original VM (for example, CW-VM-0).

    • Power on the new VM. Monitor the health of the cluster using the Crosswork UI.

    • Once the cluster is healthy and stable, delete the original VM (now named CW-VM-0-Old).

Step 11

The time taken to create the cluster can vary based on the size of your deployment profile and the performance characteristics of your hardware. See Monitor Cluster Activation to learn how to check the status of the installation.

Note

 

If you are running this procedure to replace a failed VM, you can check the status from the Cisco Crosswork GUI: go to Administration > Crosswork Manager and click the cluster tile to view the Crosswork Cluster status.

Note

 

If you are using the process to build a new Worker node, the node will automatically register itself with the existing Kubernetes cluster. For more information on how the resources are allocated to the Worker node, see the Rebalance Cluster Resources topic in the Cisco Crosswork Network Controller 7.0 Administration Guide.


What to do next

Return to the installation workflow: Install Cisco Crosswork Network Controller on VMware vCenter

Monitor Cluster Activation

This section explains how to monitor and verify whether the installation has completed successfully. As the installer builds and configures the cluster, it reports progress. The installer prompts you to accept the license agreement and then asks whether you want to continue the install. After you confirm, the installation progresses, and any errors are logged in either installer.log or installer_tf.log. If the VMs are built and able to boot, errors in applying the operator-specified configuration are logged on the VM in /var/log/firstboot.log.
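To follow these logs as the installation progresses, here is a minimal sketch, assuming the install logs are written under the /data directory mounted into the installer container (as described later in Troubleshoot the Cluster):

# On the machine running the installer, follow the installer logs
tail -f /data/installer.log /data/installer_tf.log

# On a cluster VM (once it boots and you can log in), follow the first-boot log
sudo tail -f /var/log/firstboot.log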


Note


During installation, Cisco Crosswork creates a special administrative ID (the virtual machine (VM) administrator), cw-admin, with the password that you provided in the manifest template. (If the installer is unable to apply the password, it creates the administrative ID with the default password cw-admin.) The first time you log in using this administrative ID, you will be prompted to change the password.

The administrative username is reserved and cannot be changed. Data center administrators use this ID to log into and troubleshoot the Crosswork application VM.


The following is a list of critical steps in the process that you can watch for to be certain that things are progressing as expected:

  1. The installer uploads the Crosswork image file (.ova file) to the vCenter data center.


    Note


    When run, the installer uploads the .ova file into vCenter if it is not already present, and converts it into a VM template. After the installation is completed successfully, you can delete the template file from the vCenter UI (located under VMs and Templates) if the image is no longer needed.


  2. The installer creates the VMs, and displays a success message (e.g. "Creation Complete") after each VM is created.


    Note


    For VMware deployments, this activity can also be monitored from the vSphere UI.


  3. After each VM is created, it is powered on (either automatically when the installer completes, or after you power on the VMs during the manual installation). The parameters specified in the template are applied to the VM, and it is rebooted. The VMs are then registered by Kubernetes to form the cluster.

  4. Once the cluster is created and becomes accessible, a success message (e.g. "Crosswork Installer operation complete") will be displayed and the installer script will exit and return you to a prompt on the screen.

You can monitor startup progress using the following methods:

  • Using the browser-accessible dashboard:

    1. While the cluster is being created, monitor the setup process from a browser-accessible dashboard.

    2. The URL for this Grafana dashboard (in the format http://{VIP}:30602/d/NK1bwVxGk/crosswork-deployment-readiness?orgId=1&refresh=10s&theme=dark) is displayed once the installer completes. This URL is temporary and will be available only for a limited time (around 30 minutes).

    3. At the end of the deployment, the Grafana dashboard will report a "Ready" status. If the URL is inaccessible, use the SSH console described in this section to monitor the installation process.

      Figure 12. Crosswork Deployment Readiness
  • Using the console:

    1. Check the progress from the console of one of the hybrid VMs or by using SSH to the Virtual IP address.

    2. In the latter case, log in using the cw-admin user name and the password you assigned to that account in the install template.

    3. Switch to the super user using the sudo su - command.

    4. Run the kubectl get nodes command (to see whether the nodes are ready) and the kubectl get pods command (to see the list of active pods).

    5. Repeat the kubectl get pods command until you see robot-ui in the list of active pods.

    6. At this point, you can try to access the Cisco Crosswork UI.
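    The console check above, condensed into shell form (run from an SSH session to the Management Virtual IP, logged in as cw-admin):

    sudo su -           # switch to the super user
    kubectl get nodes   # all nodes should report Ready
    kubectl get pods    # repeat until robot-ui appears among the active pods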

After the Cisco Crosswork UI becomes accessible, you can also monitor the status from the UI. For more information, see Log into the Cisco Crosswork UI.

Failure Scenario

In the event of a failure scenario (listed below), contact the Cisco Customer Experience team and provide the installer.log, installer_tf.log, and firstBoot.log files (there will be one per VM) for review:

  • Installation is incomplete

  • Installation is completed, but the VMs are not functional

  • Installation is completed, but you are directed to check /var/log/firstBoot.log or /opt/robot/bin/firstBoot.log file.

What to do next:

Return to the installation workflow: Install Cisco Crosswork Network Controller on VMware vCenter

Log into the Cisco Crosswork UI

Once the cluster activation and startup have been completed, you can check if all the nodes are up and running in the cluster from the Cisco Crosswork UI.


Note


For the supported browser versions, see the Compatibility Information section in the Cisco Crosswork Network Controller 7.0 Release Notes.


Perform the following steps to log into the Cisco Crosswork UI and check the cluster health:


Note


If the Cisco Crosswork UI is not accessible during installation, access the host's console from the VMware or AWS UI to confirm whether there was any problem in setting up the VM. When logging in, if you are directed to review the firstboot.log file, check the file to determine the problem. If you are able to identify the error, rectify it and restart the node(s). If you require assistance, contact the Cisco Customer Experience team.


Procedure


Step 1

Launch one of the supported browsers.

Step 2

In the browser's address bar, enter:

https://<Crosswork Management Network Virtual IP (IPv4)>:30603/

or

https://[<Crosswork Management Network Virtual IP (IPv6)>]:30603/

Note

 

Note that the IPv6 address in the URL must be enclosed in square brackets.

Note

 

You can also log into the Crosswork UI using the Crosswork FQDN.

The Log In window opens.

Note

 

When you access Cisco Crosswork for the first time, some browsers display a warning that the site is untrusted. When this happens, follow the prompts to add a security exception and download the self-signed certificate from the Cisco Crosswork server. After you add a security exception, the browser accepts the server as a trusted site in all future login attempts. If you want to use a CA signed certificate, see the Manage Certificates topic in the Cisco Crosswork Network Controller 7.0 Administration Guide.
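If you want to inspect the certificate the server presents before trusting it, one way is to query it from your workstation. This is a sketch assuming openssl is installed and 192.0.2.10 stands in for your Management Virtual IP:

# Show the subject, issuer, and validity dates of the presented certificate
openssl s_client -connect 192.0.2.10:30603 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates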

Step 3

Log into the Cisco Crosswork as follows:

  1. Enter the Cisco Crosswork administrator username admin and the default password admin.

  2. Click Log In.

  3. When prompted to change the administrator's default password, enter the new password in the fields provided and then click OK.

    Note

     

    Use a strong VM Password (minimum 8 characters long, including upper & lower case letters, numbers, and one special character). Avoid using passwords similar to dictionary words (for example, "Pa55w0rd!") or relatable words.

The Crosswork Manager window is displayed.

Step 4

Click on the Crosswork Health tab, and click the Crosswork Platform Infrastructure tab to view the health status of the microservices running on Cisco Crosswork.

Step 5

(Optional) Change the name assigned to the admin account (by default, it is "John Smith") to something more relevant.

Step 6

In case of manual installation: After logging into the Crosswork UI, ensure the cluster is healthy. Download the cluster inventory sample (.tfvars file) from the Crosswork UI and update it with information about the VMs in your cluster, along with the data center parameters. Then, import the file back into the Crosswork UI. For more information, see Import Cluster Inventory.


What to do next

Return to the installation workflow: Install Cisco Crosswork Network Controller on VMware vCenter

Import Cluster Inventory

If you have installed your cluster manually using the vCenter UI, you must import an inventory file (.tfvars file) to Cisco Crosswork to reflect the details of your cluster. A sample inventory file can be downloaded from the Crosswork UI.

Note


If the manual installation was performed to replace a failed VM, you must delete the original VM after importing the cluster inventory file.



Attention


Crosswork cannot deploy or remove VM nodes in your cluster until you complete this operation.



Note


Please uncomment the "OP_Status" parameter while importing the cluster inventory file manually. If you fail to do this, the status of the VM will incorrectly appear as "Initializing" even after the VM becomes functional. 


Procedure


Step 1

From the main menu, choose Administration > Crosswork Manager.

Step 2

On the Crosswork Summary tab, click the System Summary tile to display the Cluster Management window. Ensure the cluster is healthy.

Step 3

Choose Actions > Import Cluster Inventory to display the Import Cluster Inventory dialog box.

Step 4

(Optional) Click Download sample template file to download the template. Update the file with information about the VMs in your cluster, along with the data center parameters. For more details on the installation parameters, see Installation Parameters.

Step 5

Click Browse and select the cluster inventory file.

Step 6

Click Import to complete the operation.


Troubleshoot the Cluster

By default, the installer displays progress data on the command line. The install log is fundamental to identifying problems, and it is written to the /data directory.

Table 3. General scenarios

Scenario

Possible Resolution

Certificate Error

The ESXi hosts that will run the Crosswork application and the data gateway VM must have NTP configured, or the initial handshake may fail with "certificate not valid" errors.
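As a quick check, you can confirm NTP on an ESXi host from an SSH session to that host. This is a sketch; the exact esxcli syntax can vary by ESXi release:

# Show the configured NTP servers and whether the service is enabled
esxcli system ntp get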

Image upload takes a long time or upload is interrupted.

The image upload duration depends on the link and datastore performance and can be expected to take around 10 minutes or more. If an upload is interrupted, the user needs to manually remove the partially uploaded image file from vCenter via the vSphere UI.

vCenter authorization

The vCenter user needs to have authorization to perform the actions as described in Installation Prerequisites for VMware vCenter.

Floating VIP address is not reachable

The VRRP protocol requires unique router_id advertisements to be present on the network segment. By default, Crosswork uses the ID 169 on the management and ID 170 on the data network segments. A symptom of conflict, if it arises, is that the VIP address is not reachable. Remove the conflicting VRRP router machines or use a different network.
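To look for a conflicting VRRP speaker, you can capture VRRP advertisements (IP protocol 112) on the suspect segment. A sketch, assuming a Linux host attached to that segment and eth0 as the interface name:

# Capture VRRP advertisements; look for virtual router IDs 169 or 170
# coming from machines other than the Crosswork nodes
sudo tcpdump -ni eth0 ip proto 112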

Crosswork VM is not allowing the admin user to log in

OR

The following error is displayed:

Error: Invalid value for variable on cluster_vars.tf line 113:

This was checked by the validation rule at cluster_vars.tf:115,3-13.

Error: expected length of name to be in the range (1 - 80), got
with data.vsphere_virtual_machine.template_from_ovf, on main.tf line 32, in data "vsphere_virtual_machine" "template_from_ovf":
32: name = var.Cw_VM_Image

Mon Aug 21 18:52:47 UTC 2023: ERROR: Installation failed. Check installer and the VMs' log by accessing via console and viewing /var/log/firstBoot.log

This happens when the password is not complex enough. Create a strong password, update the configuration manifest and redeploy.

Use a strong VM Password (8 characters long, including upper & lower case letters, numbers, and at least one special character). Avoid using passwords similar to dictionary words (for example, "Pa55w0rd!") or relatable words. While they satisfy the criteria, such passwords are weak and will be rejected resulting in failure to setup the VM.

Deployment fails with: Failed to validate Crosswork cluster initialization.

The cluster's seed VM is either unreachable, or one or more of the cluster VMs failed to be configured properly.

  1. Check whether the VM is reachable, and collect the following logs: /var/log/firstBoot.log and /var/log/vm_setup.log.

  2. Check the status of the other cluster nodes.

The VMs are deployed but the Crosswork cluster is not being formed.

After a successful deployment, an operator logged in to the VIP or any cluster IP address can run the following command to get the status of the cluster:
sudo kubectl get nodes
A healthy output for a 3-node cluster is:
NAME                  STATUS   ROLES    AGE   VERSION
172-25-87-2-hybrid.cisco.com   Ready    master   41d   v1.16.4
172-25-87-3-hybrid.cisco.com   Ready    master   41d   v1.16.4
172-25-87-4-hybrid.cisco.com   Ready    master   41d   v1.16.4

In case of a different output, collect the following logs: /var/log/firstBoot.log and /var/log/vm_setup.log

In addition, for any cluster nodes not displaying the Ready state, collect:
sudo kubectl describe node <name of node>

The following error is displayed while uploading the image:

govc: The provided network mapping between OVF networks and the system network is not supported by any host.

The Distributed Switch (DSwitch) on the vCenter is misconfigured. Check whether it is operational and mapped to the ESXi hosts.

VMs deploy but install fails with Error: timeout waiting for an available IP address

The most likely cause is an issue with the provided VM parameters or with network reachability. Access the VM through the vCenter console, then review and collect the following logs: /var/log/firstBoot.log and /var/log/vm_setup.log.

When deploying on a vCenter, the following error is displayed towards the end of the VM bringup:

Error processing disk changes post-clone: disk.0: ServerFaultCode: NoPermission: RESOURCE (vm-14501:2000), ACTION (queryAssociatedProfile): RESOURCE (vm-14501), ACTION (PolicyIDByVirtualDisk)

Enable Profile-driven storage query permissions for the vCenter user at the root level (that is, for all resources) of the vCenter.

On running or cleaning, installer reports Error: cannot locate virtual machine with UUID "xxxxxxx": virtual machine with UUID "xxxxxxxx" not found

The installer uses the tfstate file stored as /data/crosswork-cluster.tfstate to maintain the state of the VMs it has operated upon. If a VM is removed outside of the installer (that is, through the vCenter UI), this state goes out of synchronization.

To resolve, remove the /data/crosswork-cluster.tfstate file.

Scenario

In a cluster with five or more nodes, the databases move to hybrid nodes during a node reload scenario. Users will see the following alarm:

“The robot-postgres/cw-timeseries-db pods are currently running on hybrid nodes. Please relocate them to worker nodes if they're available and healthy."

Resolution

To resolve the alarm, invoke the move API to move the databases to worker nodes.

Use the following API requests to place the services. The Place Services request returns a job ID that can be queried to confirm the job has completed.

[Place Services]

Request
curl --request POST --location 'https://<Vip>:30603/crosswork/platform/v2/placement/move_services_to_nodes' \
--header 'Content-Type: application/json' \
--header 'Authorization: <your-jwt-token>' \
--data '{
    "service_placements": [
        {
            "service": {
                "name": "robot-postgres",
                 "clean_data_folder": true
            },
            "nodes": [
                {
                    "name": "fded-1bc1-fc3e-96d0-192-168-5-114-worker.cisco.com"
                },
                {
                    "name": "fded-1bc1-fc3e-96d0-192-168-5-115-worker.cisco.com"
                }
            ]
        },
        {
            "service": {
                "name": "cw-timeseries-db",
                 "clean_data_folder": true
            },
            "nodes": [
                {
                    "name": "fded-1bc1-fc3e-96d0-192-168-5-114-worker.cisco.com"
                },
                {
                    "name": "fded-1bc1-fc3e-96d0-192-168-5-115-worker.cisco.com"
                }
            ]
        }
    ]
}'
 
 
Response
 
{
    "job_id": "PJ5",
    "result": {
        "request_result": "ACCEPTED",
        "error": null
    }
}

[GetJobs]

Request
curl --request POST --location 'https://<Vip>:30603/crosswork/platform/v2/placement/jobs/query' \
--header 'Content-Type: application/json' \
--header 'Authorization: <your-jwt-token>' \
--data '{"job_id":"PJ5"}'
 
Response
 
{
    "jobs": [
        {
            "job_id": "PJ1",
            "job_user": "admin",
            "start_time": "1714651535675",
            "completion_time": "1714652020311",
            "progress": 100,
            "job_status": "JOB_COMPLETED",
            "job_context": {},
            "job_type": "MOVE_SERVICES_TO_NODES",
            "error": {
                "message": ""
            },
            "job_description": "Move Services to Nodes"
        }
    ],
    "query_options": {
        "pagination": {
            "page_token": "1714650688679",
            "page_size": 200
        }
    },
    "result": {
        "request_result": "ACCEPTED",
        "error": null
    }
}

[GetEvents]

Request
 
curl --request POST --location 'https://<Vip>:30603/crosswork/platform/v2/placement/events/query' \
--header 'Content-Type: application/json' \
--header 'Authorization: <your-jwt-token>' \
--data '{}'

Response
 
{
    "events": [
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Operation done",
            "event_time": "1714725461179"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Moving replica pod , to targetNodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com 
                    fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com]",
            "event_time": "1714725354163"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "robot-postgres - Cleaning up target nodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com 
                       fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com] for stale data folder",
            "event_time": "1714725346515"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Replica pod not found for service robot-postgres",
            "event_time": "1714725346507"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Started moving leader and replica pods for service robot-postgres",
            "event_time": "1714725346504"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": " robot-postgres - Source and target nodes are not subsets, source nodes 
                    [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com] , 
            target nodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com]",
            "event_time": "1714725346293"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Verified cw-timeseries-db location on target nodes",
            "event_time": "1714725345692"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Moved leader pod cw-timeseries-db-0, to targetNodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com 
                        fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com]",
            "event_time": "1714725345280"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "cw-timeseries-db-0 is ready",
            "event_time": "1714725345138"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "cw-timeseries-db-0 is ready",
            "event_time": "1714725241401"
        },
         
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Checking for cw-timeseries-db-0 pod is ready",
            "event_time": "1714725211296"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Moving leader pod cw-timeseries-db-0, to targetNodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com 
                        fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com]",
            "event_time": "1714725211256"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "cw-timeseries-db-1 is ready",
            "event_time": "1714725132896"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Checking for cw-timeseries-db-1 pod is ready",
            "event_time": "1714725131684"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Moving replica pod cw-timeseries-db-1, to targetNodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com 
                       fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com]",
            "event_time": "1714725128203"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "cw-timeseries-db - Cleaning up target nodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com 
                        fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com] for stale data folder",
            "event_time": "1714725119505"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "Started moving leader and replica pods for service cw-timeseries-db",
            "event_time": "1714725117684"
        },
        {
            "event_tags": [
                {
                    "tag_type": "JOB_ID_EVENT",
                    "tag_value": "PJ5"
                }
            ],
            "message": "cw-timeseries-db - Source and target nodes are not subsets, source nodes 
                       [fded-1bc1-fc3e-96d0-192-168-6-111-hybrid.cisco.com fded-1bc1-fc3e-96d0-192-168-6-113-hybrid.cisco.com] , 
            target nodes [fded-1bc1-fc3e-96d0-192-168-6-115-worker.cisco.com fded-1bc1-fc3e-96d0-192-168-6-116-worker.cisco.com]",
            "event_time": "1714725115883"
        }
    ],
    "query_options": {
        "pagination": {
            "page_token": "1714725115883",
            "page_size": 200
        }
    },
    "result": {
        "request_result": "ACCEPTED",
        "error": null
    }
}
Table 4. Installer tool scenarios

Scenario

Possible Resolution

Missing or invalid parameters

The installer provides a clue as to the issue; however, in the case of errors in the manifest file's HCL syntax, these clues can be misleading. If you see "Type errors", check the formatting of the configuration manifest.

The manifest file can also be passed as a simple JSON file. Use the following converter to validate/convert: https://www.hcl2json.com/
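For illustration, here is the same (hypothetical) setting expressed in both notations; the actual parameter names and values come from your template file:

# HCL form, as in the template file
ClusterName = "cw-cluster-01"

# Equivalent JSON form
{ "ClusterName": "cw-cluster-01" }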

Error conditions such as:

Error: Error locking state: Error acquiring the state lock: resource temporarily unavailable

Error: error fetching virtual machine: vm not found

Error: Invalid index

These errors are common when re-running the installer after an initial run is interrupted (Control C, or TCP timeout, etc). Remediation steps are:

  1. Run the clean operation (./cw-installer.sh clean -m <your manifest here>) OR remove the VM files manually from the vCenter.

  2. Remove the state file (rm /data/crosswork-cluster.tfstate).

  3. Retry the installation (./cw-installer.sh install -m <your manifest here>).
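The recovery sequence condensed into shell form (the manifest file name is a placeholder, and the install verb is assumed to mirror the clean verb shown above):

./cw-installer.sh clean -m /data/mymanifest.tfvars     # remove the partial deployment
rm /data/crosswork-cluster.tfstate                     # clear the stale state file
./cw-installer.sh install -m /data/mymanifest.tfvars   # retry the installation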

The VMs take a long time to deploy

The time needed to clone the VMs during the installation will be determined by the workload on the disk drives used by the host machines. Running the install serially (without the [-p] flag) will lessen this load while increasing the time needed to deploy the VMs.

Installer reports plan to add more resources than the current number of VMs

Other than the Crosswork cluster VMs, the installer tracks other meta-resources. Thus, when doing an installation of, say a 3-VM cluster, the installer may report a "plan" to add more resources than the number of VMs.

On running or cleaning, installer reports Error: cannot locate virtual machine with UUID "xxxxxxx": virtual machine with UUID "xxxxxxxx" not found

The installer uses the tfstate file stored as /data/crosswork-cluster.tfstate to maintain the state of the VMs it has operated upon. If a VM is removed outside of the installer (that is, through the vCenter UI), this state goes out of synchronization.

To resolve, remove the /data/crosswork-cluster.tfstate file.

Encountered one of the following errors during execution:

Error 1:

% docker run --rm -it -v `pwd`:/data a45
docker: invalid reference format: repository name must be lowercase.
See 'docker run --help'.

Error 2:

docker: Error response from daemon: Mounts denied: approving /Users/Desktop: file does not exist ERRO[0000] error waiting for container: context canceled

Move the files to a directory whose path is all lowercase (no spaces or other special characters). Then navigate to that directory and rerun the installer.
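A sketch of the workaround (the /tmp/cwinstall path and <installer-image> tag are placeholders):

# Copy the installer files to an all-lowercase path with no spaces
mkdir -p /tmp/cwinstall && cp -r . /tmp/cwinstall
cd /tmp/cwinstall

# Rerun the container, mounting the current directory as /data
docker run --rm -it -v "$(pwd)":/data <installer-image>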
Table 5. Dual stack scenarios

Scenario

Possible Resolution

During deployment, the following error message is displayed:

ERROR: No valid IPv6 address detected for IPv6 deployment.

If you intend to use a dual stack configuration for your deployment, make sure that the host machine running the Docker installer meets the following requirements:

  • It must have an IPv6 address from the same prefix as the Crosswork Management IPv6 network, or be able to route to that network. To verify this, try pinging the Gateway IP of the Management IPv6 network from the host. To utilize the host's IPv6 network, use the --network host parameter when running the Docker installer (see the sketch after this list).

  • Confirm that the provided IPv6 network CIDR and gateway are valid and reachable.
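A sketch of both checks (the gateway address and <installer-image> tag are placeholders):

# Confirm the Management IPv6 gateway is reachable from the Docker host
ping -6 -c 3 <management-ipv6-gateway>

# Run the installer with host networking so it can use the host's IPv6 stack
docker run --rm -it --network host -v "$(pwd)":/data <installer-image>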

During deployment, the following error message is displayed:

ERROR: seed v4 host empty

Ensure you use the approved Docker version (19 or higher) to run the deployment.

During deployment, the following error message is displayed:

ERROR: Installation failed. Check installer and the VMs' log by accessing via console and viewing /var/log/firstBoot.log

Common reasons for failed installation are:

  • Incorrect IPv4 or IPv6 Gateway IP for either Management or Data interfaces.

  • Unreachable IPv4 or IPv6 Gateway IP for either Management or Data interfaces.

  • Errors in mapping the vCenter networks in the MgmtNetworkName and DataNetworkName parameters in the .tfvars file.

Check the firstBoot.log file for more information, and contact the Cisco Customer Experience team for any assistance.

What to do next:

Return to the installation workflow: Install Cisco Crosswork Network Controller on VMware vCenter