Before creating and deploying a cluster, configure one environment and one deployer. A cluster has an environment field
that references its corresponding environment.
clusters:
  "<SMI cluster name>":
    type: "opshub"
    # Optional `size` field. Supported values are `small` and `normal`.
    # The default value is `small` if not specified.
    size: small
    environment: <environment of the vCenter hosting the SMI cluster>
    gateway: <gateway IP address>
    username: <username for the SMI cluster>
    # SSH private-key file, with the path relative to the staging directory.
    # If this line is missing, an SSH private key is auto-generated and saved inside .sec/
    private-key-file: <path and filename of the SSH private key>
    # The following two fields are for multi-node clusters only.
    primary-vip: <virtual IP address for the management network in CIDR format>
    vrouter-id: <VRRP ID for the management network>
    # ingress-hostname supports only '.' and alphanumeric characters.
    ingress-hostname: "smartphy.example.com"
    # pod-subnet is optional. If you do not specify it, "192.168.0.0/16" is assigned by default.
    pod-subnet: <IP address range for Kubernetes pods in CIDR format, for example "192.168.120.0/24">
    # service-subnet is optional. If you do not specify it, "10.96.0.0/12" is assigned by default.
    service-subnet: <IP address range for Kubernetes services in CIDR format, for example "10.96.120.0/24">
    # docker-bridge-subnet is optional. If you do not specify it, "172.17.0.0/16" is assigned by default.
    docker-bridge-subnet: [<IP address range for the Docker bridge in CIDR format, for example "172.17.120.0/24">]
    # For multi-node clusters only
    nodes:
      - host: <ESXi host 1 IP address>
        addresses: [ <CONTROL-PLANE 1 IP>, <ETCD 1 IP>, <INFRA 1 IP>, <OPS 1 IP> ]
        datastore: <vCenter datastore for host 1>
        # datastore-folder is optional and accepts a subfolder structure. If you do not specify
        # a folder, the cluster and deployer are created under the root folder of the VM datastore.
        datastore-folder: "my-folder/cluster"
      - host: <ESXi host 2 IP address>
        addresses: [ <CONTROL-PLANE 2 IP>, <ETCD 2 IP>, <INFRA 2 IP>, <OPS 2 IP> ]
        datastore: <vCenter datastore for host 2>
        # datastore-folder is optional and accepts a subfolder structure. If you do not specify
        # a folder, the cluster and deployer are created under the root folder of the VM datastore.
        datastore-folder: "my-folder/cluster"
      - host: <ESXi host 3 IP address>
        addresses: [ <CONTROL-PLANE 3 IP>, <ETCD 3 IP>, <INFRA 3 IP>, <OPS 3 IP> ]
        datastore: <vCenter datastore for host 3>
        # datastore-folder is optional and accepts a subfolder structure. If you do not specify
        # a folder, the cluster and deployer are created under the root folder of the VM datastore.
        datastore-folder: "my-folder/cluster"
    apps:
      - smartphy:
          nodes:
            - host: <ESXi host 1 IP address>
              nics: <vCenter network for CIN>
              ops:
                interfaces:
                  - addresses: [ <OPS 1 IP> ]
                    vip: [ <LIST of virtual IPs for the CIN network in CIDR format> ]
                    vrouter-id: <VRRP ID for the CIN network>
                    routes:
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
            - host: <ESXi host 2 IP address>
              nics: <vCenter network for CIN>
              ops:
                interfaces:
                  - addresses: [ <OPS 2 IP> ]
                    vip: [ <LIST of virtual IPs for the CIN network in CIDR format> ]
                    vrouter-id: <VRRP ID for the CIN network>
                    routes:
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
            - host: <ESXi host 3 IP address>
              nics: <vCenter network for CIN>
              ops:
                interfaces:
                  - addresses: [ <OPS 3 IP> ]
                    vip: [ <LIST of virtual IPs for the CIN network in CIDR format> ]
                    vrouter-id: <VRRP ID for the CIN network>
                    routes:
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
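Each route entry under routes pairs a list of destination subnets with a next-hop IP address. A quick sanity check of these values before running the deployer can catch malformed entries early. The following is a minimal sketch using only Python's standard ipaddress module; the route values and the helper name are illustrative, not part of the deployer.

```python
import ipaddress

def validate_routes(routes):
    """Check that every dest entry is a valid CIDR subnet and every nhop a plain IP."""
    for route in routes:
        for dest in route["dest"]:
            # strict=True rejects CIDRs with host bits set, e.g. 10.0.0.5/24
            ipaddress.ip_network(dest, strict=True)
        # The next hop must be a bare IP address, not a CIDR range.
        ipaddress.ip_address(route["nhop"])
    return True

# Hypothetical CIN route entries, mirroring the structure in the YAML above.
routes = [
    {"dest": ["10.10.0.0/24", "10.20.0.0/24"], "nhop": "10.1.1.1"},
    {"dest": ["172.30.0.0/16"], "nhop": "10.1.1.2"},
]
print(validate_routes(routes))  # True
```

A ValueError from either ipaddress call pinpoints the malformed value before the deployer ever sees it.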
# For Single-Node cluster only
clusters:
  "cicd-smi-aio":
    type: "opshub"
    environment: "cicd-vcenter"
    username: "build"
    # private-key-file must exist in the staging/install directory.
    # The file path is relative to the staging/install directory.
    private-key-file: "cmts.pem"
    # pod-subnet is optional. If you do not specify it, "192.168.0.0/16" is assigned by default.
    pod-subnet: "192.168.120.0/24"
    # service-subnet is optional. If you do not specify it, "10.96.0.0/12" is assigned by default.
    service-subnet: "10.96.120.0/24"
    # docker-bridge-subnet is optional. If you do not specify it, "172.17.0.0/16" is assigned by default.
    docker-bridge-subnet: ["172.17.120.0/24"]
    gateway: "172.22.80.1"
    ingress-hostname: "smartphy.example.com"
    nodes:
      - host: <ESXi host IP address>
        addresses: [ <AIO VM IP address> ]
        datastore: <vCenter datastore for the host>
        # datastore-folder is optional and accepts a subfolder structure. If you do not specify
        # a folder, the cluster and deployer are created under the root folder of the VM datastore.
        datastore-folder: "my-folder/cluster"
    apps:
      - smartphy:
          nodes:
            - host: <ESXi host IP address>
              nics: [ <vCenter network for CIN> ]
              control-plane:
                interfaces:
                  - addresses: [ <LIST of IP addresses for the CIN network in CIDR format> ]
                    routes:
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
                      - { dest: [ <LIST of destination subnets> ], nhop: <next hop IP> }
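The pod, service, and Docker-bridge subnets should be disjoint address ranges. A minimal sketch, using only Python's standard ipaddress module, that checks the example subnets above for overlap (the helper name is ours):

```python
import ipaddress
from itertools import combinations

# Subnets from the single-node example above.
subnets = {
    "pod-subnet": "192.168.120.0/24",
    "service-subnet": "10.96.120.0/24",
    "docker-bridge-subnet": "172.17.120.0/24",
}

def find_overlaps(named_subnets):
    """Return every pair of named subnets whose address ranges overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in named_subnets.items()}
    return [(a, b) for a, b in combinations(nets, 2) if nets[a].overlaps(nets[b])]

print(find_overlaps(subnets))  # [] -- the example subnets are disjoint
```

Running the same check against your management network range as well is a cheap way to rule out address collisions before deployment.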
Field descriptions:

<cluster name>
  Cluster name.

type
  Use opshub for the Cisco Smart PHY cluster.

size
  small or normal. When the size is not specified, the default value is small.

environment
  Reference to the vCenter environment.

gateway
  Gateway for the cluster nodes.

username
  Username for the cluster.

private-key-file
  SSH private-key file, with the path relative to the staging directory.
  If the line is missing, the SSH private key is auto-generated and saved inside .sec/.

primary-vip
  Primary virtual IP address in CIDR format (multi-node only).

vrouter-id
  VRRP ID for the management network (multi-node only).

ingress-hostname
  Fully Qualified Domain Name (FQDN) assigned to the cluster. Only alphanumeric
  characters and the period (.) are allowed. Your authoritative DNS server must be
  configured to resolve the specified FQDN and the following subdomain:
  Alternatively, if your authoritative DNS server supports wildcards, you can
  configure the DNS to resolve the specified FQDN and a wildcard record covering
  the subdomains listed here.
  If you do not specify an FQDN:
  - The cluster IP address is used to generate an FQDN using nip.io as the domain
    and top-level domain (TLD). For example, if the IP address of the cluster is
    10.0.0.2, the generated FQDN is 10.0.0.2.nip.io. The subdomains listed here
    are also used.
  - Your DNS servers must allow resolution of the nip.io domain. If resolution of
    nip.io is blocked, you cannot access the cluster.

host
  ESXi IP address where the VMs are hosted.

service-subnet
  Service subnet range used to configure Kubernetes and Calico, in CIDR format.
  The default value is 10.96.0.0/12.

pod-subnet
  Pod subnet used to configure Kubernetes and Calico, in CIDR format. The default
  value is 192.168.0.0/16.

docker-bridge-subnet
  IP address range for the Docker bridge, in CIDR format. This field is optional.
  If you do not specify it, 172.17.0.0/16 is assigned by default.

apps
  Application to be installed on top of the platform; in this case, smartphy.

addresses
  IP addresses assigned to the control-plane, etcd, infra, and docsis or
  operations nodes, respectively.

CIN configuration:

vip
  Virtual IP address in CIDR format.

vrouter-id
  VRRP ID for the CIN.

addresses
  CIN IP addresses in CIDR format.

nics
  vCenter NICs for the CIN.

For single-node clusters:

host
  ESXi IP address where the VM is hosted.

control-plane
  Cisco Smart PHY CIN configuration.
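The nip.io fallback for ingress-hostname is mechanical: nip.io resolves any hostname of the form <ip>.nip.io back to <ip>, so the generated FQDN simply appends .nip.io to the cluster IP. A small sketch of that relationship, plus the character restriction on ingress-hostname (the helper names are ours, not part of the deployer):

```python
import re

def generated_fqdn(cluster_ip: str) -> str:
    """FQDN the cluster falls back to when no ingress-hostname is given."""
    return f"{cluster_ip}.nip.io"

def valid_ingress_hostname(name: str) -> bool:
    """ingress-hostname allows only alphanumeric characters and periods."""
    return re.fullmatch(r"[A-Za-z0-9.]+", name) is not None

print(generated_fqdn("10.0.0.2"))                       # 10.0.0.2.nip.io
print(valid_ingress_hostname("smartphy.example.com"))   # True
print(valid_ingress_hostname("smart_phy.example.com"))  # False: underscore not allowed
```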
Guidelines for configuring a cluster:
-
The cluster name can contain only lowercase letters, digits, and hyphens (-).
-
The private-key-file field, when present, must refer to the SSH private-key file. This file must be in the staging
directory and must not be accessible (read/write/execute) to other users.
If the private-key-file line is missing, the deployer script generates an SSH private key for the cluster and places
it in the .sec subdirectory under the staging directory. The filename is <cluster-name>_auto.pem.
-
For multi-node clusters, configure the virtual IP address of the Smart PHY cluster and the VRRP ID (vrouter-id at the
cluster level) for the management network. The management network supports only IPv4. The vrouter-id can take values
from 1 through 254.
-
If multiple clusters share the same management subnet, the VRRP ID of each cluster must be unique within that subnet.
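The naming and VRRP guidelines above can be checked mechanically before editing the cluster configuration. This is a sketch under the stated rules; the function names and sample data are ours:

```python
import re

def valid_cluster_name(name: str) -> bool:
    """Cluster names may contain only lowercase letters, digits, and hyphens."""
    return re.fullmatch(r"[a-z0-9-]+", name) is not None

def valid_vrouter_id(vrid: int) -> bool:
    """vrouter-id must fall in the VRRP range 1-254."""
    return 1 <= vrid <= 254

def unique_vrids_per_subnet(clusters):
    """Clusters sharing a management subnet must each use a distinct VRRP ID."""
    seen = set()
    for subnet, vrid in clusters.values():
        if (subnet, vrid) in seen:
            return False
        seen.add((subnet, vrid))
    return True

print(valid_cluster_name("cicd-smi-aio"))  # True
print(valid_cluster_name("Smi_Cluster"))   # False: uppercase and underscore
print(unique_vrids_per_subnet({
    "cluster-a": ("10.0.0.0/24", 10),
    "cluster-b": ("10.0.0.0/24", 11),  # same subnet, different VRID: allowed
}))  # True
```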
Note: If the cluster ingress hostname is smartphy.example.com, you can access the opscenter CLI and RESTCONF
endpoints as follows:
-
Use opscenter.smartphy.example.com/cee-data/restconf to access the cee-data opscenter RESTCONF endpoint.
-
Use opscenter.smartphy.example.com/cee-data/cli to access the cee-data opscenter CLI.
-
Use opscenter.smartphy.example.com/opshub-data/restconf to access the opshub-data opscenter RESTCONF endpoint.
-
Use opscenter.smartphy.example.com/opshub-data/cli to access the opshub-data opscenter CLI.
-
Use opscenter.smartphy.example.com/smartphy-data/restconf to access the smartphy-data opscenter RESTCONF endpoint.
-
Use opscenter.smartphy.example.com/smartphy-data/cli to access the smartphy-data opscenter CLI.
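The endpoints above all follow one pattern: prefix the ingress hostname with opscenter. and append /<namespace>/restconf or /<namespace>/cli. The pattern can be sketched as follows (the helper name is ours, not a deployer API):

```python
def opscenter_endpoints(ingress_hostname,
                        namespaces=("cee-data", "opshub-data", "smartphy-data")):
    """Derive the RESTCONF and CLI endpoint paths for each ops-center namespace."""
    base = f"opscenter.{ingress_hostname}"
    return {ns: {"restconf": f"{base}/{ns}/restconf",
                 "cli": f"{base}/{ns}/cli"}
            for ns in namespaces}

eps = opscenter_endpoints("smartphy.example.com")
print(eps["smartphy-data"]["cli"])  # opscenter.smartphy.example.com/smartphy-data/cli
```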
Configure TLS Certificate
When a Cisco Operations Hub cluster is deployed, a self-signed certificate is configured by default. You can replace
the self-signed certificate with a CA-signed certificate through the Deployer CLI. Use the following commands as an
example to configure a CA-signed TLS certificate.
product example deployer# config terminal
Entering configuration mode terminal
product example deployer(config)# clusters {k8s-cluster-name}
product example deployer(config-clusters-******)# secrets tls opshub-data cert-api-ingress ?
Possible completions:
  certificate   Path to the PEM-encoded public key certificate.
  private-key   Private key associated with the given certificate.
  <cr>
product example deployer(config-clusters-******)# secrets tls nginx-ingress default-ssl-certificate ?
Possible completions:
  certificate   Path to the PEM-encoded public key certificate.
  private-key   Private key associated with the given certificate.
  <cr>
product example deployer(config-clusters-******)# commit
product example deployer(config-clusters-******)# end
product example deployer# clusters <cluster-name> actions sync run force-vm-redeploy false