Install Cisco Crosswork Network Controller on a Single VM


Introduction

This chapter explains the requirements and processes to install Crosswork Network Controller on a single VM or node. Cisco Crosswork Network Controller enables you to proactively manage your end-to-end networks by providing automation solutions that ensure faster innovation, optimal user experience, and operational excellence.

Crosswork Network Controller, when deployed on a single VM, is available in these tiers:

  • Essentials: includes Element Management Functions and embedded Collectors.

  • Advantage: includes the Essentials package, Optimization Engine, Active Topology, Service Health, and embedded NSO.

Installation requirements

VMware requirements

  • Hypervisor and vCenter supported:

    • VMware vCenter Server 8.0 (U2c or later) and ESXi 8.0 (U2b or later)

    • VMware vCenter Server 7.0 (U3p or later) and ESXi 7.0 (U3p or later)

  • Crosswork Network Controller VM must be hosted on hardware with Hyper Threading disabled.

  • Ensure that profile-driven storage is enabled by the vCenter admin user. Query permissions for the vCenter user at the root level (for all resources) of the vCenter.

  • We also recommend that you enable vCenter storage control.

  • The networks required for the Crosswork Management and Data networks must be built and configured in the data centers, and must allow low-latency L2 communication (RTT <= 10 ms).

  • Ensure the user account you use for accessing vCenter has the following privileges:

    • VM (Provisioning): Clone VM on the VM you are cloning.

    • VM (Provisioning): Customize on the VM or VM folder if you are customizing the guest operating system.

    • VM (Inventory): Create from the existing VM on the data center or VM folder.

    • VM (Configuration): Add a new disk on the data center or VM folder.

    • Resource: Assign a VM to a resource pool on the destination host or resource pool.

    • Datastore: Allocate space on the destination datastore or datastore folder.

    • Network: Assign the network to which the VM will be assigned.

    • Profile-driven storage (Query): This permission setting needs to be allowed at the root of the data center tree level.

KVM requirements

The following requirements are mandatory if you are planning to install Crosswork Network Controller on RHEL KVM.

Table 1. Host bare metal requirements (per bare metal server)

| Component | XLarge profile | Large profile | Small profile¹ |
|-----------|----------------|---------------|----------------|
| Processor | Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz or later | Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz or later | Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz or later |
| NIC | 2 x 10 Gbps NICs | 2 x 10 Gbps NICs | 2 x 10 Gbps NICs |
| OS | Red Hat Enterprise Linux 9.4 | Red Hat Enterprise Linux 9.4 | Red Hat Enterprise Linux 9.4 |
| RAM | 128 GB | 96 GB | 48 GB |
| CPU | 24 vCPUs | 12 vCPUs | 8 vCPUs |
| Storage | 1 TB | 1 TB | 650 GB |

¹ Used only for the deployment of the arbiter VM in a geo HA setup.

Note


  • Ensure the networks required for the Crosswork Management and Data networks are built and configured in the data centers. These networks must allow low-latency L2 communication with a round-trip time (RTT) of 10 ms or less.

  • The same network names used for the Management and Data networks must be configured on the RHEL bare metal host machine that hosts the Crosswork Network Controller VM.

  • Crosswork Network Controller VM must run on hardware with Hyper Threading disabled to ensure consistent, real-time performance for CPU-intensive workloads, as Hyper Threading may cause resource contention and unpredictable performance.
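Since the host requirements above call for Hyper-Threading to be disabled, it is worth confirming this on the RHEL host before deploying. A quick, generic check (not a Crosswork-specific command) is to verify that lscpu reports one thread per core:

# Hyper-Threading is disabled when this reports "Thread(s) per core: 1"
lscpu | grep -i 'thread(s) per core'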


VM requirements

Table 2. Network requirements

Requirement

Description

Network Connections

For production deployments, we recommend that you use dual interfaces, one for the Management network and one for the Data network.

For optimal performance, the Management and Data networks should use links configured at a minimum of 10 Gbps with a latency of less than 10 milliseconds.

IP Addresses

4 IPv4 and/or IPv6 addresses: a Management and a Data IP address for the Hybrid node being deployed, and two additional IP addresses to be used as Virtual IP (VIP) addresses (one for the Management network and one for the Data network).

Note

 
  • The IP addresses must be able to reach the gateway address for the network, or the installation will fail.

  • When deploying with IPv6 or dual stack, the installation needs to run on an IPv6-enabled container/VM.

  • At this time, your IP allocation is permanent and cannot be changed without re-deployment. For more information, contact the Cisco Customer Experience team.

Interfaces

Crosswork is deployed on a single VM with 2 interfaces.

  • No. of NICs: 2

  • vNIC0: Management Traffic (for accessing the interactive console and passing the Control/Data information between servers).

  • vNIC1: Device Access Traffic (for device access and data collection).

Note

 

Due to security policies, traffic received on a vNIC from subnets assigned to a different vNIC is dropped. For example, in a setup with two vNICs, all device traffic (incoming and outgoing) must be routed through vNIC1.

NTP Server

The IPv4 and/or IPv6 addresses or host names of the NTP server you plan to use. If you want to enter multiple NTP servers, separate them with spaces. These should be the same NTP servers you use to synchronize the Crosswork application VM clock, devices, clients, and servers across your network.

Ensure that the NTP servers are reachable on the network before attempting installation. The installation will fail if the servers cannot be reached.

DNS Servers

The IPv4 and/or IPv6 addresses of the DNS servers you plan to use. These should be the same DNS servers you use to resolve host names across your network.

Ensure that the DNS servers are reachable on the network before attempting installation. The installation will fail if the servers cannot be reached.

DNS Search Domain

The search domain you want to use with the DNS servers, for example, cisco.com. You can have only one search domain.

Backup Server

Cisco Crosswork will back up the configuration of the system to an external server using SCP. The SCP server storage requirements vary slightly, but you must have at least 25 GB of storage.

FQDN (Optional)

The installation process supports using either a VIP (Virtual IP address) or an FQDN (Fully Qualified Domain Name) to access the VM.

If you choose to use the FQDN, you will need one for the Management and one for the Data network.

Note

 

If you choose to supply the FQDNs during the initial installation, the DNS server must be populated with them before the VM is powered on; otherwise, the installation script will fail to complete the environment setup.
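Several of the requirements above (gateway, DNS, and NTP reachability) cause the installation to fail if unmet, so a quick pre-check from a host on the Management network can save a re-deployment. The addresses below are illustrative placeholders:

# Confirm the network gateway responds
ping -c 3 192.0.2.1

# Confirm the DNS server answers queries
dig @192.0.2.53 cisco.com +short

# Query (without setting) the clock from the NTP server, if ntpdate is available
ntpdate -q ntp.example.com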

Port requirements

Table 3. Ports used by Crosswork single VM deployment on the management network

| Port | Protocol | Used for | Direction |
|------|----------|----------|-----------|
| 30602 | TCP | Monitoring the installation (Crosswork Network Controller) | Inbound |
| 30603 | TCP | Crosswork Network Controller web user interface (NGINX server listens for secure connections on port 443) | Inbound |
| 30604 | TCP | Classic Zero Touch Provisioning (Classic ZTP) on the NGINX server | Inbound |
| 30653 | TCP | Raft peer cluster communication port | Inbound |
| 30617 | TCP | Secure Zero Touch Provisioning (Secure ZTP) on the ZTP server | Inbound |
| 30620 | TCP | Receiving plug-and-play HTTP traffic on the ZTP server | Inbound |
| 7 | TCP/UDP | Discovering endpoints using ICMP | Outbound |
| 22 | TCP | Initiating SSH connections with managed devices | Outbound |
| 22 | TCP | Remote SSH connection | Inbound |
| 53 | TCP/UDP | Connecting to DNS | Outbound |
| 123 | UDP | Network Time Protocol (NTP) | Outbound |
| 830 | TCP | Initiating NETCONF | Outbound |
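Once the VM is deployed, you can spot-check that key inbound management ports from Table 3 are open; for example, with nc against the Management VIP (the address is a placeholder):

# 30602 serves the deployment-status dashboard, 30603 the web UI
for port in 30602 30603; do
  nc -zv 192.0.2.100 "$port"
done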

When configuring the ports for Embedded Collectors, ensure that the ports mentioned in the following table are configured on the device. For example, if the port used for sending traps was previously set to 1062, change it to a port that is within the acceptable range for a single VM deployment. The acceptable range is provided with the port number in Table 4.

Table 4. Ports used by Crosswork single VM deployment on the Device Network

| Port | Protocol | Used for | Direction |
|------|----------|----------|-----------|
| 161 | UDP | SNMP Collector | Outbound |
| 31062 (accepted range: 30160–31560) | UDP | SNMP Collector (traps) | Inbound |
| 22 | TCP | CLI Collector | Outbound |
| 30614 (accepted range: 30160–31560) | TLS | Syslog Collector. This is the default value; you can change it after installation from the Cisco Crosswork UI. | Inbound |
| 30898 (accepted range: 30160–31560) | TCP | Syslog Collector | Inbound |
| 30514 (accepted range: 30160–31560) | UDP | Syslog Collector | Inbound |
| 30621 | TCP | FTP (available on the data interface only). The additional ports used for file transfer are 31121 (TCP), 31122 (TCP), and 31123 (TCP). This port is available only when the supported application is installed on Cisco Crosswork and the FTP settings are enabled. | Inbound |
| 30622 | TCP | SFTP (available on the data interface only). This port is available only when the supported application is installed on Cisco Crosswork and the SFTP settings are enabled. | Inbound |
| Site specific² | TCP | gNMI collector | Outbound |
| Site specific³ | Site specific | Kafka and gRPC destination | Outbound |

² For default port information of a device, see the platform-specific documentation. Ensure that the port number on the device is the same as that configured on Device Management > Network Devices > Edit Device.

³ You cannot modify the port numbers of system-created destinations, as they are created with predefined ports. To modify user-defined destination ports, edit the port number from Administration > Data Collector(s) Global Settings > Data destinations > Edit destination.
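As a quick functional check of the collector ports in Table 4, you can send a test syslog message from a Linux host to the VM's Data VIP using the util-linux logger command (the VIP is a placeholder; 30514 is the default UDP syslog port):

# Send one UDP syslog message to the Syslog Collector
logger --server 198.51.100.100 --udp --port 30514 "Crosswork syslog reachability test"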

Installation Parameters

This section explains the important parameters that must be specified while installing the Crosswork VM. Ensure that you have the relevant information for each of the parameters listed in the tables.


Attention


Please use the latest template file that comes with the Crosswork build file.



Note


In case of a dual-stack deployment, you must configure both the IPv4 and IPv6 values for the Management, Data, and DNS parameters. An illustrative template fragment follows this list.

  • ManagementIPv4Address, ManagementIPv6Address

  • ManagementIPv4Netmask, ManagementIPv6Netmask

  • ManagementIPv4Gateway, ManagementIPv6Gateway

  • ManagementVIPv4, ManagementVIPv6

  • DataIPv4Address, DataIPv6Address

  • DataIPv4Netmask, DataIPv6Netmask

  • DataIPv4Gateway, DataIPv6Gateway

  • DataVIPv4, DataVIPv6

  • DNSv4, DNSv6
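Purely as an illustrative sketch, a dual-stack fragment of the deployment template might look like the following (all addresses are documentation placeholders; always start from the template shipped with your release):

# Dual-stack values for the Management and Data networks (placeholder addresses)
ManagementIPv4Address = "192.0.2.10"
ManagementIPv6Address = "2001:db8::10"
ManagementVIPv4       = "192.0.2.100"
ManagementVIPv6       = "2001:db8::100"
DataIPv4Address       = "198.51.100.10"
DataIPv6Address       = "2001:db8:1::10"
DataVIPv4             = "198.51.100.100"
DataVIPv6             = "2001:db8:1::100"
DNSv4                 = "192.0.2.53"
DNSv6                 = "2001:db8::53"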


Table 5. General parameters

Parameter Name

Description

ClusterIPStack

The IP stack protocol: IPv4 or IPv6

ManagementIPAddress

The Management IP address of the VM (IPv4 or IPv6).

ManagementIPNetmask

The Management IP subnet in dotted decimal format (IPv4 or IPv6).

ManagementIPGateway

The Gateway IP on the Management Network (IPv4 or IPv6). The address must be reachable, otherwise the installation will fail.

ManagementVIP

The Management Virtual IP for the Crosswork VM.

DataIPAddress

The Data IP address of the VM (IPv4 or IPv6).

DataIPNetmask

The Data IP subnet in dotted decimal format (IPv4 or IPv6).

DataIPGateway

The Gateway IP on the Data Network (IPv4 or IPv6). The address must be reachable, otherwise the installation will fail.

DataVIP

The Data Virtual IP for the Crosswork VM.

DNS

The IP address of the DNS server (IPv4 or IPv6). The address must be reachable, otherwise the installation will fail.

NTP

NTP server address or name. The address must be reachable, otherwise the installation will fail.

DomainName

The domain name used for the VM.

CWPassword

Password to log into Cisco Crosswork. When setting up a VM, ensure the password is strong and meets the following criteria:

  • It must be at least 8 characters long and include uppercase and lowercase letters, numbers, and at least one special character.

  • The following special characters are not allowed: backslash (\), single quote ('), or double quote (").

  • Avoid using passwords that resemble dictionary words (e.g., "Pa55w0rd!") or relatable words. While such passwords may meet the specified criteria, they are considered weak and will be rejected, resulting in a failure to set up the VM.

VMSize

Size of the VM. Crosswork Network Controller supports both the "XLarge" and "Large" profiles.

For more information, see Resource footprint for single VM deployments.

VMName

Name of the VM.

NodeType

Indicates the type of VM. Choose Hybrid.

IsSeed

Set to "True".

InitNodeCount

Set value to 1.

InitMasterCount

Set value to 1.

BackupMinPercent

Minimum percentage of the data disk space to be used for the size of the backup partition. The default value is 35 (valid range is from 1 to 80).

Please use the default value unless recommended otherwise.

Note

 

The final backup partition size will be calculated dynamically. This parameter defines the minimum.

ThinProvisioned

Set to false for production deployments.

SchemaVersion

The configuration Manifest schema version. This indicates the version of the installer to use with this template.

Schema version should map to the version packaged with the sample template in the installer tool on cisco.com. You should always build a new template from the default template provided with the release you are deploying, as template requirements may change from one release to the next.

LogFsSize

Log partition size (in GB). The minimum value is 20 GB and the maximum value is 1000 GB.

If left blank, the default value (20 GB) is selected.

EnableSkipAutoInstallFeature

Pods marked as "skip auto install" will not be brought up unless explicitly requested by a dependent application or pod. By default, the value is set as "False".

For single VM deployment, you must set the value as "True".

Note

 
  • If left blank, the default value ("False") is automatically selected.

  • This parameter accepts a string value, so be sure to enclose the value in double quotes.

EnforcePodReservations

Enforces minimum resource reservations for the pod. If left blank, the default value ("True") is selected.

This parameter accepts a string value, so be sure to enclose the value in double quotes.

K8sServiceNetwork

The network address for the Kubernetes service network. By default, the CIDR range is fixed to '/16'.

K8sPodNetwork

The network address for the Kubernetes pod network. By default, the CIDR range is fixed to '/16'.

IgnoreDiagnosticsCheckFailure

Used to set the system response in case of a diagnostic check failure.

If set to "False" (default value), the installation will terminate if the diagnostic check reports an error. If set to "True", the diagnostic check will be ignored, and the installation will continue.

We recommend that you use the default value. This parameter accepts a string value, so be sure to enclose the value in double quotes.

Note

 
  • The log files (diagnostic_stdout.log and diagnostic_stderr.log) can be found at /var/log. The result from each diagnostic execution is kept in a file at /home/cw-admin/diagnosis_report.txt.

  • Use the diagnostic all command to invoke the diagnostics manually on day N.

  • Use the diagnostic history command to view previous test reports.

ManagementVIPName

Name of the Management Virtual IP for the Crosswork VM. This is an optional parameter used to reach Crosswork Management VIP via DNS name. If this parameter is used, the corresponding DNS record must exist in the DNS server.

DataVIPName

Name of the Data Virtual IP for the Crosswork VM. This is an optional parameter used to reach Crosswork Data VIP via DNS name. If this parameter is used, the corresponding DNS record must exist in the DNS server.

EnableHardReservations

Determines the enforcement of VM CPU and Memory profile reservations. This is an optional parameter and the default value is "True", if not explicitly specified. This parameter accepts a string value, so be sure to enclose the value in double quotes.

If set as "True", the VM's resources are provided exclusively. In this state, the installation will fail if there are insufficient CPU cores, memory or CPU cycles.

If set as "False" (only set for lab installations), the VM's resources are provided on best efforts. In this state, insufficient CPU cores can impact performance or cause installation failure.

ManagerDataFsSize

This parameter is applicable only when installing with the Docker installer tool.

Refers to the data disk size for the Crosswork node (in GB). This is an optional parameter; the default value is 485 (valid range is from 485 to 8000) if not explicitly specified.

Please use the default value unless recommended otherwise.

RamDiskSize

Size of the RAM disk.

This parameter is only used for lab installations (value must be at least 2). When a non-zero value is provided for RamDiskSize, the HSDatastore value is not used.

Timezone

Enter the timezone. Input is a standard IANA time zone (for example, "America/Chicago"). If left blank, the default value (UTC) is selected. This parameter accepts a string value, so be sure to enclose the value in double quotes.

This is an optional parameter.

Table 6. VMware template parameters

Parameter Name

Description

VCenterAddress

The vCenter IP or host name.

VCenterUser

The username needed to log into vCenter.

VCenterPassword

The password needed to log into vCenter.

DCname

The name of the Data Center resource to use.

Example: DCname = "WW-DCN-Solutions"

MgmtNetworkName

The name of the vCenter network to attach to the VM's Management interface.

This network must already exist in VMware or the installation will fail.

DataNetworkName

The name of the vCenter network to attach to the VM's Data interface.

This network must already exist in VMware or the installation will fail.

Host

The ESXi host, or only the vCenter VM/resource group name, where the VM is to be deployed.

The primary option is to use the host IP or name (all the hosts should be under the data center). If the hosts are grouped under a VM/resource group in the data center, provide only that group name (all hosts within it will be picked up).

The subsequent option is to use a resource group. In this case, provide the full path.

Example: Host = "Main infrastructure/Resources/00_trial"

Datastore

The datastore name available to be used by this host or resource group.

The primary option is to use host IP or name. The subsequent option is to use a resource group.

Example: Datastore = "SDRS-DCNSOL-prodexsi/bru-netapp-01_FC_Prodesx_ds_15"

HSDatastore

The high speed datastore available for this host or resource group.

When not using a high-speed datastore, set it to the same value as Datastore.

Cw_VM_Image

The name of Crosswork VM image in vCenter.

This value is set as an option when running the installer tool and does not need to be set in the template file.

HostedCwVMs

The ID of the VM to be hosted by the ESXi host or resource.
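Putting Table 6 together, a minimal vCenter section of the template might look like this sketch (the values reuse the examples given above where the document provides them; everything else is a placeholder):

VCenterAddress  = "vcenter.example.com"
VCenterUser     = "administrator@vsphere.local"
VCenterPassword = "********"
DCname          = "WW-DCN-Solutions"
MgmtNetworkName = "MgmtNet"
DataNetworkName = "DataNet"
Host            = "Main infrastructure/Resources/00_trial"
Datastore       = "SDRS-DCNSOL-prodexsi/bru-netapp-01_FC_Prodesx_ds_15"
HSDatastore     = "SDRS-DCNSOL-prodexsi/bru-netapp-01_FC_Prodesx_ds_15"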

Install Crosswork Network Controller using the vCenter vSphere UI

This topic explains how to deploy Crosswork Network Controller on a single VM using the vCenter user interface.

This is the recommended method for installing Crosswork Network Controller on a single VM.

Procedure


Step 1

Download the latest available Cisco Crosswork platform image file (*.ova) to your system.

Step 2

With VMware ESXi running, log into the VMware vSphere Web Client. On the left navigation pane, choose the ESXi host where you want to deploy the VM.

Step 3

In the vSphere UI, go to Host > Configure > Networking > Virtual Switches and select the virtual switch for the Management Network that will be used to access the UI of the VM. In the virtual switch, select Edit > Security, and configure the following DVS port group properties:

  • Set Promiscuous mode as Reject

  • Set MAC address changes as Reject

Confirm the settings and repeat the process for the virtual switch that will be used for the Data Network.

Step 4

Review and confirm that your network settings meet the requirements.

Ensure that the networks that you plan to use for the Management and Data networks are connected to the host. If you need assistance, contact your Cisco Customer Experience team.

Step 5

Choose Actions > Deploy OVF Template.

Caution

 

The default VMware vCenter deployment timeout is 15 minutes. If vCenter times out during deployment, the resulting VM will not be bootable. To prevent this, we recommend that you document the choices (such as IP address, gateway, DNS server, etc.) so that you can enter the information quickly and avoid any issues with the VMware configuration.

Step 6

The VMware Deploy OVF Template window appears, with the first step, 1 - Select an OVF template, highlighted. Click Choose Files to navigate to the location where you downloaded the OVA image file and select it. Once selected, the file name is displayed in the window.

Step 7

Click Next. The Deploy OVF Template window is refreshed, with 2 - Select a name and folder now highlighted. Enter a name and select the respective data center for the Cisco Crosswork VM you are creating.

We recommend that you include the Cisco Crosswork version and build number in the name, for example: Cisco Crosswork 7.1 Build 152.

Step 8

Click Next. The Deploy OVF Template window is refreshed, with 3 - Select a compute resource highlighted. Select the host for your Cisco Crosswork VM.

Step 9

Click Next. The VMware vCenter Server validates the OVA. Network speed will determine how long validation takes. After the validation is complete, the Deploy OVF Template window is refreshed, with 4 - Review details highlighted.

Step 10

Review the OVF template that you are deploying. This information is gathered from the OVF, and cannot be modified.

Note

 

You may see alerts regarding the OVF package containing advanced configuration options and/or about trusted certificates. These are common and you can safely select the "Ignore" option.

Step 11

Click Next. The Deploy OVF Template window is refreshed, with 5 - License agreements highlighted. Review the End User License Agreement and if you agree, click the I accept all license agreements checkbox. Otherwise, contact your Cisco Experience team for assistance.

Step 12

Click Next. The Deploy OVF Template window is refreshed, with 6 - Configuration highlighted. Choose the desired deployment configuration.

Figure 1. Select a deployment configuration

Step 13

Click Next. The Deploy OVF Template window is refreshed, with 7 - Select Storage highlighted. Choose the relevant option from the Select virtual disk format drop-down list. From the table, choose the datastore you want to use, and review its properties to ensure there is enough available storage.

Figure 2. Select Storage

Note

 

For production deployment, choose the Thick Provision Eager Zeroed option because this will preallocate disk space and provide the best performance. For lab purposes, we recommend the Thin Provision option because it saves disk space.

Step 14

Click Next. The Deploy OVF Template window is refreshed, with 8 - Select networks highlighted. From the Destination Network drop-down list, select the proper networks for the Management Network and the Data Network.

Figure 3. Select networks

Step 15

Click Next. The Deploy OVF Template window is refreshed, with 9 - Customize template highlighted.

  1. Expand the Management Network settings. Provide information for the IPv4 and/or IPv6 deployment (as per your selection) such as IP address, IP netmask, IP gateway, virtual IP address, and virtual IP DNS name.

  2. Expand the Data Network settings. Provide information for the IPv4 and/or IPv6 deployment (as per your selection) such as IP address, IP netmask, IP gateway, virtual IP address, and virtual IP DNS name.

  3. Expand the Deployment Credentials settings. Enter relevant values for the VM Username and Password.

    Note

     

    Avoid using passwords that resemble dictionary words (for example, 'Pa55w0rd!') or easily guessable patterns. While such passwords might meet the initial criteria, they are considered weak and could cause the VM setup to fail without a clear explanation. To ensure a successful installation, use a complex password with a minimum of 8 characters that combines uppercase and lowercase letters, numbers, and special characters in a non-predictable sequence.

  4. Expand the DNS and NTP Servers settings. According to your deployment configuration (IPv4 and/or IPv6), the fields that are displayed differ. Provide information in the following fields:

    • DNS IP Address: The IP addresses of the DNS servers you want the Cisco Crosswork server to use. Separate multiple IP addresses with spaces.

    • NTP Servers: The IP addresses or host names of the NTP servers you want to use. Separate multiple IPs or host names with spaces.

    • DNS Search Domain: The name of the DNS search domain.

    • Timezone: Enter the timezone details. Default value is UTC.

    Note

     

    The DNS and NTP servers must be reachable using the network interfaces you have mapped on the host. Otherwise, the configuration of the VM will fail.

  5. Expand the Disk Configuration settings. Provide relevant values for these fields:

    • Logfs Disk Size

    • Datafs Disk Size

    • Corefs Partition Size

    • High Speed Disk Size

    • Minimum backup partition size

    The default disk configuration settings should work for most environments. Change the settings only if you are instructed to by the Cisco Customer Experience team.

  6. Expand Crosswork Configuration and enter your legal disclaimer text (users will see this text if they log into the CLI).

  7. Expand Crosswork Cluster Configuration. Provide relevant values for these fields:

    • VM Type: Choose Hybrid.

    • Cluster Seed node: Choose True.

    • Crosswork Management Cluster Virtual IP: Enter virtual IP of the management network.

    • Crosswork Management Cluster Virtual IP Name: Enter DNS hostname of virtual IP interface of the management network.

    • Crosswork Data Cluster Virtual IP: Enter virtual IP of the data network.

    • Crosswork Data Cluster Virtual IP Name: Enter DNS hostname of virtual IP interface of the data network.

    • Initial hybrid node count: Set to 1.

    • Initial total node count: Set to 1.

    • Location of VM: Enter the location of VM.

    • Disclaimer: Enter your legal disclaimer text (users will see this text if they log into the CLI).

    • Installation type: Not applicable to single VM deployment. Do not select any checkbox.

    • Enable Skip Auto Install Feature: Set to True.

    • Auto Action Manifest Definition: Use the default value (Empty).

    • Product specific definition: Enter the product specific definition.

      Note

       

      While deploying an arbiter VM, set the value as <![CDATA[{"product_image_id": "CNC", "attributes": {"is_arbiter": "true"}}]]>.

    • Ignore Diagnostic Failure?: Use the default value (False).


Step 16

Click Next. The Deploy OVF Template window is refreshed, with 10 - Ready to Complete highlighted.

Step 17

Review your settings and then click Finish if you are ready to begin deployment. Wait for the deployment to finish before continuing. To check the deployment status:

  1. Open a VMware vCenter client.

  2. In the Recent Tasks tab of the host VM, view the status of the Deploy OVF template and Import OVF package jobs.

Step 18

Once the deployment is completed, right-click on the VM and select Edit Settings. The Edit Settings dialog box is displayed. Under the Virtual Hardware tab, update these attributes:

Table 7. VM attributes

| VM profile | CPU | Memory |
|------------|-----|--------|
| XLarge | 24 | 128 GB |
| Large | 12 | 96 GB |
| Small (used only for deploying an arbiter VM) | 8 | 48 GB |

For more information, see Resource footprint for single VM deployments.

Click OK to save the changes.

Step 19

Power on the Crosswork VM. To power on, expand the host’s entry, click the Cisco Crosswork VM, and then choose Actions > Power > Power On.

The time taken to create the VM can vary based on the size of your deployment profile and the performance characteristics of your hardware.


Install Crosswork Network Controller via the OVF Tool

This topic explains how to deploy Crosswork Network Controller on a single VM using the OVF tool. You must modify the list of mandatory and optional parameters in the script as per your requirements and run the OVF tool.

Follow these steps to deploy the Crosswork Network Controller VM using the OVF tool:

Before you begin

  • In your vCenter data center, go to Host > Configure > Networking > Virtual Switches and select the virtual switch. In the virtual switch, select Edit > Security, and ensure that the following DVS port group properties are as shown:

    • Set Promiscuous mode as Reject

    • Set MAC address changes as Reject

    Confirm the settings and repeat the process for each virtual switch used by Crosswork.

  • Ensure you are using the OVF tool version 4.4 or higher.

Procedure


Step 1

On the machine where you have the OVF tool installed, use the following command to confirm that you have OVF tool version 4.4 or higher:

ovftool --version

Step 2

Create the script file (see example in this step) and provide relevant information as per your target environment (such as IP addresses, gateway, netmask, password, VCENTER_PATH, etc.).

Note

 

The file names mentioned in this topic are sample names and may differ from the actual file names on cisco.com.

Important

 

This is a sample script for deploying an XLarge VM profile. If you need to deploy a Large or Small VM profile, please replace the XLarge values with the corresponding values for your desired profile.

  • XLarge

    
    --numberOfCpus:"*"=24 --viCpuResource=:50000: \
    --memorySize:"*"=131072 --viMemoryResource=:131072: \
    
  • Large

    
    --numberOfCpus:"*"=12  --memorySize:"*"=98304 \
    --viCpuResource=-1:18000:-1 --viMemoryResource=-1:98304:-1 \
  • Small

    Note

     

    Only used to deploy an arbiter VM.

    
    --numberOfCpus:"*"=8  --memorySize:"*"=49152 \
    
    ProductDefinition="<![CDATA[{"product_image_id": "CNC", "attributes": {"is_arbiter": "true"}}]]>"
    EnableSkipAutoInstallFeature="True"
    ddatafs=100
    
cat svm_install.sh
#!/usr/bin/env bash
Host="X.X.X.X"
DM="thick"
DS="DS36"
Deployment="cw_ipv4"
DNSv4="10.10.0.99"
NTP="ntp.cisco.com"
Timezone="US/Pacific"
EnforcePodReservations="True"
EnableSkipAutoInstallFeature="True"
Domain="cisco.com"
Disclaimer="ACCESS IS MONITORED"
VM_NAME="svmEMS"
DataNetwork="DataNet"
ManagementNetwork="MgmtNet"
DataIPv4Address="x.x.x.x"
DataIPv4Gateway="x.x.x.x"
DataIPv4Netmask="x.x.x.x"
ManagementIPv4Address="x.x.x.x"
ManagementIPv4Gateway="x.x.x.x"
ManagementIPv4Netmask="x.x.x.x"
K8sServiceNetworkV4="10.75.0.0"
K8sPodNetworkV4="10.225.0.0"
Password="CLI Password"
Username="cw-admin"
ManagementVIP="x.x.x.x"
DataVIP="x.x.x.x"
VMType="Hybrid"
IsSeed="True"
InitNodeCount="1"
InitMasterCount="1"
 
SVM_OVA_PATH=$1
 
VCENTER_LOGIN="Administrator%40vsphere%2Elocal:Password%40123%21@x.x.x.x"
VCENTER_PATH="DC1/host"
 
ovftool --version
ovftool --acceptAllEulas --skipManifestCheck --X:injectOvfEnv -ds=$DS \
--numberOfCpus:"*"=24 --viCpuResource=:50000: \
--memorySize:"*"=131072 --viMemoryResource=:131072: \
--diskMode=$DM --overwrite --powerOffTarget --powerOn --noSSLVerify \
--allowExtraConfig \
--deploymentOption=$Deployment \
--prop:"DNSv4=${DNSv4}" \
--prop:"NTP=${NTP}" \
--prop:"Timezone=${Timezone}" \
--prop:"EnforcePodReservations=${EnforcePodReservations}" \
--prop:"EnableSkipAutoInstallFeature=${EnableSkipAutoInstallFeature}" \
--prop:"Domain=${Domain}" \
--prop:"Disclaimer=${Disclaimer}" \
--name=$VM_NAME \
--net:"Data Network=${DataNetwork}" \
--net:"Management Network=${ManagementNetwork}" \
--prop:"DataIPv4Address=${DataIPv4Address}" \
--prop:"DataIPv4Gateway=${DataIPv4Gateway}" \
--prop:"DataIPv4Netmask=${DataIPv4Netmask}" \
--prop:"ManagementIPv4Address=${ManagementIPv4Address}" \
--prop:"ManagementIPv4Gateway=${ManagementIPv4Gateway}" \
--prop:"ManagementIPv4Netmask=${ManagementIPv4Netmask}" \
--prop:"K8sServiceNetworkV4=${K8sServiceNetworkV4}" \
--prop:"K8sPodNetworkV4=${K8sPodNetworkV4}" \
--prop:"CWPassword=${Password}" \
--prop:"CWUsername=${Username}" \
--prop:"ManagementVIP=${ManagementVIP}" \
--prop:"DataVIP=${DataVIP}" \
--prop:"VMType=${VMType}" \
--prop:"IsSeed=${IsSeed}" \
--prop:"InitNodeCount=${InitNodeCount}" \
--prop:"InitMasterCount=${InitMasterCount}" \
$SVM_OVA_PATH \
vi://$VCENTER_LOGIN/$VCENTER_PATH/$Host

Step 3

Download the OVA and install scripts from cisco.com. For the purpose of these instructions, we use the file name signed-cw-na-cnc-advantage-svm-7.1.0-48-release710-250625.ova.

Use the following command to extract the files from the tar bundle:

tar -xvzf signed-cw-na-cnc-advantage-svm-7.1.0-48-release710-250625.ova

The OVA is extracted:

svm]# ls -al
-rw-r--r--   1 root root 15416145920 Mar 28 11:12 cw-na-cnc-advantage-svm-7.1.0-48-release710-250625.ova
-rwxr-xr-x   1 root root        2324 Apr  2 14:06 svm_install.sh

Step 4

Use the following command to make the scripts executable:

chmod +x {filename}

For example:

chmod +x svm_install.sh

Step 5

Execute the script with the OVA file name as parameter:

svm]# ./svm_install.sh cw-na-cnc-advantage-svm-7.1.0-48-release710-250625.ova
VMware ovftool 4.4.0 (build-16360108)
Opening OVA source: cw-na-cnc-advantage-svm-7.1.0-48-release710-250625.ova
<Removed some output >
Completed successfully

The time taken to create the VM can vary based on the size of your deployment profile and the performance characteristics of your hardware.


Install Crosswork Network Controller using the Docker installer tool

This section explains the procedure to install Crosswork Network Controller on a single VM using the docker installer tool. This method is the least recommended compared to using the vCenter UI or the OVF tool for installation.

Before you begin

  • Make sure that your environment meets all the vCenter requirements specified in Installation requirements.

  • The edited template in the /data directory contains sensitive information (VM passwords and the vCenter password). The operator needs to manage access to this content. Store the templates used for your install in a secure environment or edit them to remove the passwords.

  • The install.log, install_tf.log, and .tfstate files will be created during the install and stored in the /data directory. If you encounter any trouble with the installation, provide these files to the Cisco Customer Experience team when opening a case.

  • The install script is safe to run multiple times. If an error occurs, you can correct the input parameters and re-run. You must remove the install.log, install_tf.log, and .tfstate files before each re-run. Running the installer tool multiple times may result in the deletion and re-creation of VMs.

  • In case you are using the same installer tool for multiple Crosswork installations, it is important to run the tool from different local directories, allowing the deployment state files to be independent. The simplest way to do so is to create a local directory for each deployment on the host machine and map each one to the container accordingly.

  • Docker version 19 or higher is required to use the installer tool. For more information on Docker, see https://docs.docker.com/get-docker/.

  • In order to change install parameters or to correct parameters following installation errors, it is important to distinguish whether the installation has managed to deploy the VM or not. A deployed VM is evidenced by installer output similar to:

    vsphere_virtual_machine.crosswork-IPv4-vm["1"]: Creation complete after 2m50s [id=4214a520-c53f-f29c-80b3-25916e6c297f]
  • If you do not have Python installed, go to python.org and download the version of Python that is appropriate for your workstation.

Known limitations:

  • The vCenter host VMs defined must use the same network name (vSwitch) across all hosts in the data center.

  • vCenter storage folders, or datastores organized under a virtual folder structure, are not currently supported. Ensure that the datastores referenced are not grouped under a folder.

Procedure


Step 1

In your Docker-capable machine, create a directory where you will store everything you will use during this installation.

Note

 

If you are using a Mac, ensure that the directory name is in lower case.

Step 2

Download the installer bundle (.tar.gz file) and the OVA file from cisco.com to the directory you created previously. For the purpose of these instructions, we will use the file names cnc-advantage-single-node-docker-deployment-7.1.0-48.tar.gz and cnc-advantage-single-node-deployment-7.1.0-48.ova.

Attention

 

The file names mentioned in this topic are sample names and may differ from the actual file names on cisco.com.

Step 3

Use the following command to extract the installer bundle:

tar -xvf cnc-advantage-single-node-docker-deployment-7.1.0-48.tar.gz

The contents of the installer bundle are unzipped to a new directory (e.g. cnc-advantage-single-node-docker-deployment-7.1.0-48). The extracted files will contain the installer image (cw-na-cnc-advantage-svm-installer-7.1.0-48-releasecnc710-250606.tar.gz) and files necessary to validate the image.

Step 4

Review the contents of the README file to understand everything that is in the package and how it will be validated in the following steps.

Step 5

Use the following command to verify the signature of the installer image:

Note

 

Use python --version to find out the version of python on your machine.

If you are using Python 2.x, use the following command to validate the file:

python cisco_x509_verify_release.py -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file>
-v dgst -sha512

If you are using Python 3.x, use the following command to validate the file:

python3 cisco_x509_verify_release.py3 -e <.cer file> -i <.tar.gz file> -s <.tar.gz.signature file>
-v dgst -sha512

Step 6

Use the following command to load the installer image file into your Docker environment.

docker load -i <.tar.gz file>

For example:

docker load -i cw-na-cnc-advantage-svm-installer-7.1.0-48-releasecnc710-250606.tar.gz

Step 7

Run the docker image list or docker images command to get the image ID (which is needed in the next step).

For example:

docker images

The result will be similar to the following (the IMAGE ID column contains the value you need):

My Machine% docker images
REPOSITORY                        TAG                                                 IMAGE ID             CREATED        SIZE
dockerhub.cisco.com/cw-installer  cw-na-cnc-advantage-svm-7.1.0-48-releasecnc710-250606   a4570324fad30  7 days ago     276MB

Note

 

Pay attention to the "CREATED" time stamp in the table presented when you run docker images, as you might have other images present from the installation of prior releases. If you wish to remove these, the docker image rm {image id} command can be used.

Step 8

Launch the Docker container using the following command:

docker run --rm -it -v `pwd`:/data {image id of the installer container}

To run the image loaded in our example, use the following command:

docker run --rm -it -v `pwd`:/data a4570324fad30

Note

 
  • You do not have to enter that full value. In this case, "docker run --rm -it -v `pwd`:/data a45" was adequate. Docker requires enough of the image ID to uniquely identify the image you want to use for the installation.

  • In the above command, we are using the backtick (`). Do not use the single quote or apostrophe (') as the meaning to the shell is very different. By using the backtick (recommended), the template file, and OVA will be stored in the directory where you are on your local disk when you run the commands, instead of inside the container.

  • When deploying an IPv6 setup, the installer needs to run on an IPv6-enabled container/VM. This requires additionally configuring the Docker daemon before running the installer, using the following method:

    • Linux hosts (ONLY): Run the Docker container in host networking mode by adding the "--network host" flag to the Docker run command line.

      docker run --network host <remainder of docker run options>
  • CentOS/RHEL hosts, by default, enforce a strict SELinux policy which does not allow the installer container to read from or write to the mounted data volume. On such hosts, run the Docker volume command with the Z option as shown below:

    docker run --rm -it -v `pwd`:/data:Z <remainder of docker options>

Note

 

The Docker command provided will use the current directory to read the template and the OVA files, and to write the log files used during the install. If you encounter either of the following errors, move the files to a directory whose path is entirely lowercase (no spaces or other special characters), then navigate to that directory and rerun the installer.

Error 1:

% docker run --rm -it -v `pwd`:/data a45
docker: invalid reference format: repository name must be lowercase.
See 'docker run --help'

Error 2:

docker: Error response from daemon: Mounts denied: approving /Users/Desktop: file does not exist
ERRO[0000] error waiting for container: context canceled

Step 9

Navigate to the directory with the VMware template.

cd /opt/installer/deployments/7.1.0/vcentre

Step 10

Copy the template file found under /opt/installer/deployments/7.1.0/vcentre/deployment_template_tfvars to the /data folder using a different name.

For example: cp deployment_template_tfvars /data/deployment.tfvars

For the rest of this procedure, we will use deployment.tfvars in all the examples.

Step 11

Edit the template file located in the /data directory in a text editor, to match your planned deployment (for reference, see Crosswork Network Controller VM templates for single VM deployments).

Step 12

From the /opt/installer directory, run the installer.

./cw-installer.sh install -m /data/<template file name> -o /data/<.ova file>

For example:

./cw-installer.sh install -m /data/deployment.tfvars -o /data/cnc-advantage-single-node-deployment-7.1.0-48.ova

Step 13

Read, and then enter "yes" if you accept the End User License Agreement (EULA). Otherwise, exit the installer and contact your Cisco representative.

Step 14

Enter "yes" when prompted to confirm the operation.

Note

 

It is not uncommon to see some warnings like the following during the install:

Warning: Line 119: No space left for device '8' on parent controller '3'.
Warning: Line 114: Unable to parse 'enableMPTSupport' for attribute 'key' on element 'Config'.

If the install process proceeds to a successful conclusion (see sample output below), these warnings can be ignored.

Sample output:

cw_vms = <sensitive>
INFO: Copying day 0 state inventory to CW
INFO: Waiting for deployment status server to startup on 10.90.147.66. Elapsed time 0s, retrying in 30s
Crosswork deployment status available at http://{VIP}:30602/d/NK1bwVxGk/crosswork-deployment-readiness?orgId=1&refresh=10s&theme=dark 
Once deployment is complete login to Crosswork via: https://{VIP}:30603/#/logincontroller 
INFO: Cw Installer operation complete.

Note

 

If the installation fails, open a case with Cisco and provide the .log files that were created in the /data directory (and the local directory where you launched the installer Docker container), to Cisco for review. The two most common reasons for the install to fail are: (a) password that is not adequately complex, and (b) errors in the template file. If the installer fails for any errors in the template (for example, mistyped IP address), correct the error and rerun the install script.


Install Crosswork Network Controller VM using CLI

This section provides the high-level workflow for installing Crosswork Network Controller VM on KVM via CLI.

Table 8. Installation workflow

Step

Action

1. Ensure you have performed the preliminary checks.

See Preliminary checks for details.

2. Set up and validate the KVM environment.

See Set up and validate KVM on RHEL.

3. Configure network bridges and SRIOV

See Configure network bridges and SRIOV.

4. Install Crosswork Network Controller VM on KVM.

See Install Crosswork Network Controller VM on KVM using CLI.

Known limitations

  • If you are using a non-root user ID for the deployment of nodes on the bare metal servers, ensure that the particular user ID has been added to the sudoers list (i.e., /etc/sudoers).

Preliminary checks

  1. Virtualization: Ensure that your system supports virtualization. This is typically enabled in the BIOS. To check, use these commands:

    • For Intel CPUs: grep -wo 'vmx' /proc/cpuinfo

    • For AMD CPUs: grep -wo 'svm' /proc/cpuinfo

  2. KVM modules: Ensure that the KVM modules are loaded: lsmod | grep kvm
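For example, on an Intel host the two checks might look like this (output abridged and illustrative; any non-empty result indicates support):

$ grep -wo 'vmx' /proc/cpuinfo | sort -u
vmx
$ lsmod | grep kvm
kvm_intel             380928  0
kvm                  1146880  1 kvm_intel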

Set up and validate KVM on RHEL

Follow these steps to set up KVM on RHEL.

Procedure


Step 1

Refresh repositories and install updates. This command updates all the packages on your system to their latest versions.

sudo dnf update -y

Step 2

Reboot the system after all the updates are installed successfully.

sudo reboot

Step 3

Install virtualization tools.

  1. Install virt-install and virt-viewer.

    sudo dnf install virt-install virt-viewer -y

    virt-install is a command-line tool for creating virtual machines.

    virt-viewer is a lightweight UI for interacting with VMs.

  2. Install the libvirt virtualization daemon, which is necessary for managing VMs.

    sudo dnf install -y libvirt
  3. Install virt-manager, a graphical interface for managing VMs.

    sudo dnf install virt-manager -y
  4. Install additional virtualization tools for managing VMs.

    sudo dnf install -y virt-top libguestfs-tools

Step 4

Start and enable libvirtd virtualization daemon.

  1. Start the libvirtd daemon.

    sudo systemctl start libvirtd
  2. Enable the libvirtd daemon.

    sudo systemctl enable libvirtd
  3. Verify that the daemon is running.

    sudo systemctl status libvirtd

Step 5

Add users to the required groups, for example, libvirt and qemu. In the following commands, replace your_username with the actual username.

sudo usermod --append --groups libvirt your_username
sudo usermod --append --groups qemu your_username

Step 6

Ensure that IOMMU is enabled. If it is not, run the first command to enable it; the second command verifies that IOMMU is active.

sudo grubby --update-kernel=ALL --args=intel_iommu=on
dmesg | grep -i IOMMU

Step 7

Check IOMMU and validate the setup. Ensure that all checks show as PASS.

virt-host-validate

If the IOMMU check is not PASS, then use the following commands to enable it.

sudo grubby --update-kernel=ALL --args=intel_iommu=on
sudo reboot
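Typical virt-host-validate output looks like the following (abridged and illustrative; the exact list of checks varies by RHEL version):

$ virt-host-validate
  QEMU: Checking for hardware virtualization                 : PASS
  QEMU: Checking if device /dev/kvm exists                   : PASS
  QEMU: Checking for device assignment IOMMU support         : PASS
  QEMU: Checking if IOMMU is enabled by kernel               : PASS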

Configure network bridges and SRIOV

Crosswork needs a 10G interface for all data-layer communications to support functionality at scale. You may choose any networking configuration that can provide 10G throughput.

The following sections explain how to enable bridging and SRIOV network configuration.


Note


For KVM deployment, configure either network bridges or SRIOV, but not both.


Configure network bridges

A network bridge acts like a virtual network switch, allowing multiple network interfaces to communicate as if they are on the same physical network.

Follow these steps to configure network bridges.

Procedure

Step 1

Create a new network connection of type "bridge" with the interface name intMgmt and assign it the connection name intMgmt.

nmcli connection add type bridge ifname intMgmt con-name intMgmt

Step 2

Add a new bridge-port connection, associating the physical network interface <interface1> with the previously created bridge intMgmt.

nmcli connection add type bridge-port ifname <interface1> controller intMgmt

Step 3

Assign IP address to the bridge.

nmcli connection modify intMgmt ipv4.addresses <IPv4-address>/<subnet-mask>

Step 4

Bring up the intMgmt network connection.

nmcli connection up intMgmt

Step 5

Create another network bridge connection with the interface name intData and assign it the connection name intData.

nmcli connection add type bridge ifname intData con-name intData

Step 6

Add a bridge-port connection, associating the physical network interface <interface2> with the previously created bridge intData.

nmcli connection add type bridge-port ifname <interface2> controller intData

Step 7

Assign IP address to intData.

nmcli connection modify intData ipv4.addresses <IPv4-address>/<subnet-mask>

Step 8

Bring up the intData network connection.

nmcli connection up intData
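After both bridges are up, you can confirm that they are active and that the physical interfaces are attached as ports (generic verification commands, not Crosswork-specific):

# intMgmt and intData should be listed as active bridge connections
nmcli connection show --active

# The physical interfaces should show their bridge as their controlling connection
nmcli device status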

Configure SRIOV

SRIOV allows a single physical network interface to be shared among multiple VMs by creating multiple Virtual Functions (VFs).

Follow these steps to configure SRIOV.

Procedure

Step 1

Open the rc.local file in the vi editor.

vi /etc/rc.d/rc.local

Step 2

Set the number of VFs for the network interfaces based on your requirement. For instance, in a Crosswork Network Controller single VM installation, you need a minimum of two network interfaces: one for management and the other for data. Two VFs are configured for each interface by default. You may also configure additional VFs for future scalability needs.

For example, to set the number of VFs to 2 for each <interface1> and <interface2>, use these commands. In this example, <interface1> refers to the management interface and <interface2> refers to the data interface.

echo 2 > /sys/class/net/<interface1>/device/sriov_numvfs
echo 2 > /sys/class/net/<interface2>/device/sriov_numvfs

Step 3

Change the permissions of the rc.local file to make it executable.

chmod +x /etc/rc.d/rc.local

Step 4

If any of the interfaces are configured over a VLAN, set the VLAN IDs on the interfaces.

ip link set <interface1> vf 0 vlan <vlanid>
ip link set <interface2> vf 1 vlan <vlanid>

Step 5

Save the changes and reboot the system.

Step 6

List all the PCI devices for all the virtual functions in a tree format. This is useful for verifying the setup and ensuring that the VFs are correctly recognized by the KVM hypervisor.

virsh nodedev-list --tree

In this procedure, since we set the number of VFs as 2 in Step 2, two VFs for each management interface and data interface are created. As a result, a total of four PCI devices are generated: two for management and two for data.
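The VF entries in the tree look similar to the following abridged, illustrative listing (your PCI addresses will differ):

$ virsh nodedev-list --tree
...
  +- pci_0000_01_00_0
  |   +- pci_0000_01_10_0
  |   +- pci_0000_01_10_2
  +- pci_0000_01_00_1
  |   +- pci_0000_01_10_1
  |   +- pci_0000_01_10_3
...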

This PCI device information is used during the installation process with SRIOV (Step 4 of Install Crosswork Network Controller VM on KVM using CLI).


Install Crosswork Network Controller VM on KVM using CLI

Follow these steps to install Crosswork on KVM.


Note


The time taken to create the VM can vary based on the size of your deployment profile and the performance characteristics of your hardware.


Before you begin

Ensure that you have completed the preliminary checks, set up and validated KVM on RHEL, and configured either network bridges or SRIOV.

Procedure


Step 1

As a first step, prepare the configuration ISO file (built from ovf-env.xml) for the Crosswork Network Controller VM. For more information, see Example 2: Deploy Crosswork Network Controller VM on KVM (single VM deployment).

  1. Update the ovf-env.xml file as per your needs (an illustrative sketch of the file follows this list). For more information on the parameters, see Installation Parameters.

    $ cat ovf-env.xml
  2. Generate the ISO file.

    $ mkisofs -R -relaxed-filenames -joliet-long -iso-level 3 -l -o cnc1.iso ovf-env.xml

    Note

     

    In the above command, "cnc1" is the host name of the Cisco Crosswork Network Controller VM.
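The authoritative property set comes from the files shipped with your release; purely as an illustrative sketch, an OVF environment file carrying a few of the parameters from Installation Parameters might look like this (all values are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
  <PropertySection>
    <!-- Placeholder values; use the parameter names from your release template -->
    <Property oe:key="ManagementIPv4Address" oe:value="192.0.2.10"/>
    <Property oe:key="ManagementIPv4Netmask" oe:value="255.255.255.0"/>
    <Property oe:key="ManagementIPv4Gateway" oe:value="192.0.2.1"/>
    <Property oe:key="ManagementVIP" oe:value="192.0.2.100"/>
    <Property oe:key="VMType" oe:value="Hybrid"/>
    <Property oe:key="IsSeed" oe:value="True"/>
  </PropertySection>
</Environment>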

Step 2

Download the Crosswork Network Controller VM qcow2 tar file and extract it.

tar -xvf cnc-advantage-single-node-deployment-7.1.0-48-qcow2.tar.gz

This command creates three qcow2 files:

  • cnc-advantage-single-node-deployment-7.1.0-48_dockerfs.qcow2

  • cnc-advantage-single-node-deployment-7.1.0-48_extrafs.qcow2

  • cnc-advantage-single-node-deployment-7.1.0-48_rootfs.qcow2

Step 3

Navigate to the required installation folder and create three disks.

cd cnc1/
qemu-img create -f qcow2 disk3 20G
qemu-img create -f qcow2 disk4 485G
qemu-img create -f qcow2 disk6 15G
ls -1
cw_dockerfs.vmdk.qcow2
cw_extrafs.vmdk.qcow2
cw_rootfs.vmdk.qcow2
disk3
disk4
disk6

Step 4

Install the Crosswork Network Controller VM using network bridge or SRIOV.

In this example, "cnc1" is the host name of the Cisco Crosswork Network Controller VM.

  • Using network bridges:

    virt-install --boot uefi --boot hd,cdrom --connect qemu:///system --virt-type kvm --name cnc1 --ram 98304 --vcpus 12 --os-type linux --disk path=cnc-advantage-single-node-deployment-7.1.0-48_rootfs.qcow2,format=qcow2,bus=scsi --disk path=cnc-advantage-single-node-deployment-7.1.0-48_dockerfs.qcow2,format=qcow2,bus=scsi --disk path=disk3,format=qcow2,bus=scsi --disk path=disk4,format=qcow2,bus=scsi --disk path=cnc-advantage-single-node-deployment-7.1.0-48_extrafs.qcow2,format=qcow2,bus=scsi --disk path=disk6,format=qcow2,bus=scsi --disk=cnc1.iso,device=cdrom,bus=scsi --import --network bridge=intMgmt,model=virtio --network bridge=intData,model=virtio --noautoconsole --os-variant ubuntu22.04 --graphics vnc,listen=0.0.0.0
  • Using SRIOV:

    virt-install --boot uefi --boot hd,cdrom --connect qemu:///system --virt-type kvm --name cnc1 --ram 98304 --vcpus 12 --cpu host-passthrough --disk path=cw_rootfs.vmdk.qcow2,format=qcow2,bus=scsi --disk path=cw_dockerfs.vmdk.qcow2,format=qcow2,bus=scsi --disk path=disk3,format=qcow2,bus=scsi --disk path=disk4,format=qcow2,bus=scsi --disk path=cw_extrafs.vmdk.qcow2,format=qcow2,bus=scsi --disk path=disk6,format=qcow2,bus=scsi --disk=cnc1.iso,device=cdrom,bus=scsi --import --network none --host-device=pci_0000_01_10_0 --host-device=pci_0000_01_10_1 --os-variant ubuntu-lts-latest &
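Whichever variant you use, you can confirm that the VM was created and follow its first boot with standard virsh commands (illustrative; the console is available only if a serial console is configured in the guest):

# Confirm the VM is defined and running
virsh list --all

# Attach to the console to follow the first boot (exit with Ctrl+])
virsh console cnc1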