Catalyst Center High Availability Guide, Release 2.3.7.x
This guide details how Catalyst Center implements high availability (HA).
Note: For a description of disaster recovery functionality in Catalyst Center, see the "Implement Disaster Recovery" chapter in the Cisco Catalyst Center Administrator Guide.
Catalyst Center high availability
Catalyst Center’s HA framework reduces the downtime caused by failures and makes your network more resilient. The HA framework provides near real-time synchronization of changes across your cluster nodes, giving your network the redundancy it needs to withstand node and service failures.
Supported synchronization types include:
- database changes (such as updates related to configuration, performance, and monitoring data), and
- file changes (such as report configurations, configuration templates, the TFTP root directory, administration settings, licensing files, and the key store).
This guide describes:
- the requirements to use HA,
- the deployment process,
- administration best practices, and
- Catalyst Center's response to failure scenarios.
Note: Catalyst Center provides HA support for both Automation and Assurance functionality.
High availability requirements
To enable HA, your production environment must meet these requirements:
- Your cluster consists of three Catalyst Center appliances with the same machine profile (for example, three third-generation large appliances). See Supported appliances.
- Your secondary appliances run the same version of Catalyst Center as the primary appliance.
- Your cluster's appliances belong to the same network and reside in the same site.
  Note: This requirement applies to all multinode cluster deployments. The Catalyst Center appliance does not support the distribution of nodes across multiple networks or sites.
- Your cluster's round-trip time (RTT) is 10 milliseconds or less.
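A quick way to check the RTT requirement is a simple ping test from one appliance to the other two cluster members. This is a minimal sketch; the IP addresses shown are placeholders for your own nodes' addresses.

```bash
# Minimal sketch: measure round-trip time from one cluster node to the other two.
# 198.51.100.21 and 198.51.100.22 are placeholder addresses; substitute your own nodes' addresses.
ping -c 10 198.51.100.21   # the "round-trip min/avg/max" summary should report an average of 10 ms or less
ping -c 10 198.51.100.22
```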
Supported appliances
Table 1 lists the Catalyst Center appliances that support HA.
| Machine profile | Machine profile alias | Cisco part number | Number of cores |
|---|---|---|---|
| medium | medium | Second-generation: | 44 |
| | | Third-generation: | 32 |
| t2_large | large | Second-generation: | 56 |
| | | Third-generation: | |
| t2_2xlarge | extra large | Second-generation: | 112 |
| | | Third-generation: | 80 |
Important: Catalyst Center 2.3.7.9 adds support for mixed three-node clusters that have HA enabled. A valid mixed cluster meets these requirements:
High availability functionality
Catalyst Center supports a three-node cluster configuration that provides both software and hardware HA:
- Software HA: If a service fails, Catalyst Center restarts that service on the same node or on a different cluster node.
- Hardware HA: Uses multiple appliances, disk drives (within each appliance's RAID configuration), and power supplies so that the failure of one of these components can be tolerated until the faulty component is restored or replaced.
Clustering and database replication
Catalyst Center supports distributed processing and database replication among multiple nodes. Clustering provides HA by sharing resources and features across the cluster.
Security replication
In a multinode environment, Catalyst Center replicates the security features of one node to two other nodes, including any X.509 certificates or trustpools. After joining these nodes to form a three-node cluster, Catalyst Center shares the GUI user credentials across the nodes. Catalyst Center does not share the CLI user credentials since each node has different credentials.
System upgrade
In a multinode cluster, you can trigger an upgrade of the whole cluster in the Catalyst Center GUI. The GUI represents the entire cluster, not just a single node. An upgrade that is triggered in the GUI automatically updates all cluster nodes.
Note: After you initiate a system upgrade to update Catalyst Center's core infrastructure, Catalyst Center enters maintenance mode. In maintenance mode, Catalyst Center is unavailable until the upgrade process completes. Take this into account when scheduling a system upgrade.
Confirm a successful system upgrade
You can confirm that your system upgrade was successful by completing these steps in the GUI.
Procedure
Step 1: From the main menu, choose .
Step 2: In the System Update area, verify that the latest system package is installed.
High availability deployment
This section provides best practices for deploying and administering an HA-enabled cluster in your production environment.
Deployment recommendations
Catalyst Center supports three-node clusters. The odd number of nodes provides the quorum that is necessary to perform any operation in a distributed system. Rather than treating them as three separate nodes, Catalyst Center views the cluster as one logical entity that is accessed through a virtual IP address.
Follow these guidelines when deploying HA:
- When setting up a three-node cluster, do not span a LAN across slow links; doing so increases the likelihood of network failures and prolongs service recovery times. When configuring the cluster interface on a three-node cluster, ensure that all the cluster nodes reside in the same subnet.
- Do not overload a single interface with management, data, and HA responsibilities, because doing so can negatively impact HA operation. At a minimum, use the Cluster and Enterprise interfaces to keep cluster and enterprise traffic separate.
- In the appliance configuration wizards, Catalyst Center prepopulates the Services Subnet and Cluster Services Subnet fields with link-local (169.x.x.x) subnets. We recommend that you use the default subnets. If you choose to specify different subnets, they must conform to the IETF RFC 1918 and RFC 6598 specifications for private networks.
  For details, see RFC 1918 (Address Allocation for Private Internets) and RFC 6598 (IANA-Reserved IPv4 Prefix for Shared Address Space).
- Enable HA during off-hours, because Catalyst Center enters maintenance mode and is unavailable until it finishes redistributing services.
Deploy a cluster
To deploy Catalyst Center on a three-node cluster with HA enabled, complete this procedure:
Procedure
Step 1: Configure Catalyst Center on the first node in your cluster. Refer to the Installation Guide topic that is specific to the configuration wizard you want to use and your appliance type:
Step 2: Configure Catalyst Center on the second node in your cluster. Refer to the Installation Guide topic that is specific to the configuration wizard you want to use and your appliance type:
Step 3: Configure Catalyst Center on the third node in your cluster. Refer to the secondary appliance configuration topic that you viewed in the previous step.
Step 4: Activate HA on your cluster:
Administer a cluster
The topics in this section cover the administrative tasks that you must complete when HA is enabled in your production environment.
Run maglev commands
To change the IP address, static route, DNS server, or maglev user password that is currently configured for a Catalyst Center appliance, run the sudo maglev-config update CLI command.
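The following is a minimal sketch of that workflow, assuming SSH access to the appliance as the maglev user (Catalyst Center appliances accept SSH to the management IP address on port 2222; the IP address shown is a placeholder).

```bash
# Minimal sketch: update appliance settings from the maglev CLI.
# 198.51.100.20 is a placeholder management IP address; substitute your own.
ssh -p 2222 maglev@198.51.100.20
# Launch the configuration wizard, where you can change the IP address,
# static route, DNS server, or maglev user password.
sudo maglev-config update
```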
Common cluster node operations
Table 1 describes the most common operations that you will complete when managing the nodes in your cluster.
| Task | Action |
|---|---|
| From the CLI, shut down all nodes in a three-node cluster. | Run the sudo shutdown -h now command on all of the nodes at the same time. When powering the nodes back on, be sure to power on all of the nodes at the same time through Cisco IMC. |
| Shut down or disconnect one node for maintenance (in situations where you are not just rebooting the node). | Run these commands: After performing maintenance on the node, complete these steps: |
| Reboot one or more nodes after making changes that may require a reboot. | Run the sudo shutdown -r now command on the relevant nodes. |
| Prepare a node for Return Merchandise Authorization (RMA). | |
| Update an appliance's Cisco IMC firmware. | Perform these actions: |
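As a quick reference, this is a minimal sketch of the shutdown and reboot commands from the table, run from the maglev CLI on each relevant node.

```bash
# Minimal sketch: the shutdown and reboot commands referenced in the table above.
sudo shutdown -h now   # power off a node; for a full three-node shutdown, run this on all nodes at the same time
sudo shutdown -r now   # reboot a node after making changes that require a restart
```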
Replace a failed node
To replace a node that has failed, complete these tasks:
- Remove the failed node from your cluster.
- Replace the failed node with another node.
Remove the failed node
When a node fails because of a hardware failure, remove it from the cluster. For assistance with this task, contact the Cisco TAC.
Warning: A two-node cluster (a transient configuration that is not supported for normal use) results when one of these situations occurs:
While a two-node cluster is active, you cannot remove either of its nodes.
Add a replacement node
After removing the failed node, add a replacement node to the cluster. Set aside at least 30 minutes for this task.
Procedure
Step 1: On the replacement node, install the same software version that the other nodes in the cluster are running.
Step 2: After the installation is complete, enter the magctl node display command. The replacement node should show the Ready status.
Step 3: Redistribute services to the replacement node by activating HA on your cluster:
Step 4: Verify that the services have been redistributed by entering the magctl appstack status command. The replacement node should show a Running status.
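The following is a minimal sketch of the verification commands in Steps 2 and 4, run from the maglev CLI; the grep filter is illustrative only.

```bash
# Minimal sketch: verify the replacement node and the redistributed services.
magctl node display                        # the replacement node should show the Ready status
magctl appstack status                     # services on the replacement node should show a Running status
magctl appstack status | grep -v Running   # illustrative filter: list anything that is not yet Running
```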
Minimize failure and outage impact
In a typical three-node Catalyst Center cluster, each node connects to a single cluster switch through the node’s cluster port interface. Connectivity with the cluster switch requires two transceivers and a fiber optic cable—any of which can fail. The cluster switch can also fail due to a power loss or a manual restart, causing an outage of your Catalyst Center cluster and a loss of all controller functionality.
To minimize the impact of a failure or outage on your cluster, follow at least one of these guidelines:
- Perform management operations such as software upgrades, configuration reloads, and power cycling during noncritical periods, because these operations can cause a cluster outage.
- Connect your cluster nodes to a switch that supports the in-service software upgrade (ISSU) feature. ISSU allows you to upgrade the switch's system software while the switch continues to forward traffic, using nonstop forwarding (NSF) with stateful switchover (SSO) to perform the upgrade with no downtime.
- Connect your cluster nodes to a switch stack, which allows you to connect each cluster node to a different member of the switch stack joined using Cisco StackWise. Because the cluster is connected to multiple switches, the impact of one switch going down is mitigated.
High availability failure scenarios
Nodes can fail due to issues related to:
- software,
- network access, and
- hardware.
When a failure occurs, Catalyst Center normally detects it within two minutes and attempts to resolve the failure automatically. Failures that persist for longer than two minutes can require user intervention.
Table 1 describes failure scenarios that your cluster can encounter and how Catalyst Center responds in each scenario. Pay attention to the table's first column, which indicates the scenarios that require action from you to restore your cluster's operation.
Note: For a cluster to operate, Catalyst Center's HA implementation requires at least two cluster nodes to be up at any given time.
| Requires User Action | Failure Scenario | HA Behavior |
|---|---|---|
| Yes | Any node in the cluster goes down. | Immediately perform an Automation backup. See the "Backup and Restore" chapter in the Cisco Digital Network Architecture Center Administrator Guide. |
| No | A node fails, becomes unreachable, or experiences a service failure for less than two minutes. | After the node is restored: |
| No | A node fails, becomes unreachable, or experiences a service failure for longer than two minutes. | After the node is restored, the following actions take place: |
| Yes | Two nodes fail or are unreachable. | The cluster is broken, and the GUI is not accessible until connectivity is restored. |
| Yes | A node fails and needs to be removed from a cluster. | Contact the Cisco TAC for assistance. |
| No | All the nodes lose connectivity with one another. | The GUI is not accessible until connectivity is restored. After connectivity is restored, operations resume and the data shared by the cluster members is synced. |
| Yes | A backup is scheduled and a node goes down because of a hardware failure. | Contact the Cisco TAC for a replacement node, as well as for assistance with joining the new node to the cluster and restoring services on the two remaining nodes. |
| Yes | A red banner in the GUI indicates that a node is down: " | Because the node is down, Assurance data collection and processing stop and that data is not available. If the node comes back up, your Assurance functionality is restored. If the node is down because of a hardware failure, do the following: |
| Yes | A red banner in the GUI indicates that a node is down, but the banner eventually changes to yellow with this message: " | The system is still usable. Investigate why the node is down, and bring it back up. |
| Yes | A failure occurs while upgrading a cluster. | Contact the Cisco TAC for assistance. |
| No | An appliance port fails. | |
| Yes | Appliance hardware fails. | Replace the failed hardware component (such as a fan, power supply, or disk drive). Because an appliance contains multiple instances of these components, the failure of a single component can be tolerated temporarily. While the RAID controller syncs a newly added disk drive with the appliance's other drives, I/O performance might be degraded. |
Service redistribution times
Table 1 shows the time required to redistribute services to the remaining two nodes when a node goes down in an HA environment.
| Event | Duration of node unavailability |
|---|---|
| Catalyst Center detects a node is down | Two minutes |
| Start of service redistribution | One minute |
| Completion of service redistribution | 25 minutes |
The time required to redistribute services varies. It depends on the number of pods on the node that went down, as well as the time required to relocate those pods to other nodes. The time required for services to reach the Ready state after redistribution depends on their respective liveness and readiness probes.
Service redistribution is not affected by a cluster's VIP, because no pods are mapped to the VIP and Kubernetes is unaware of it. However, if the node that went down was the leader for the controller manager or kube-scheduler service, redistribution can be slightly delayed. Before a new leader can take over and perform the switchover, either:
- the lease held by the previous leader must expire, or
- the renewal deadline must pass.
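For context, this delay is governed by standard Kubernetes leader-election settings. The following is an illustration of the upstream kube-controller-manager and kube-scheduler flags and their Kubernetes default values; Catalyst Center's internal tuning may differ.

```bash
# Illustration only: upstream Kubernetes leader-election flags and their default values.
# Catalyst Center may use different values internally.
--leader-elect=true
--leader-elect-lease-duration=15s   # how long a lease is valid before another candidate can acquire it
--leader-elect-renew-deadline=10s   # how long the current leader keeps renewing before it gives up leadership
--leader-elect-retry-period=2s      # how often candidates retry acquiring or renewing the lease
```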
Pod behavior during a node failure
This topic describes how pods move if a node fails and becomes unreachable or experiences a service failure that lasts longer than five minutes:
- StatefulSet: The pod provides data storage. This type of pod is node-bound and uses a local persistent volume (LPV). When the node is down, all of the StatefulSet pods on that node move to the Pending state.
  Examples of StatefulSet pods include MongoDB, Elasticsearch, and PostgreSQL.
- DaemonSet: By design, the pod is strictly node-bound.
  Examples of DaemonSet pods include agent, broker-agent, and keepalived.
- Stateless deployment:
  - The pod, which does not have its own datastore, uses a StatefulSet for data storage and retrieval.
  - Deployment scale varies. Some deployments have one pod instance, such as spf-service-manager-service; some have two pod instances, such as apic-em-inventory-manager-service; and some have three pod instances, such as kong, platform-ui, and collector-snmp.
  - Single-instance stateless pods are free to move across nodes based on the current state of the cluster.
  - Two-instance stateless pods can move across nodes, but no two instances of the same pod can run on the same node.
  - Three-instance stateless pods have node antiaffinity, meaning that no two instances can run on the same node.
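To observe this behavior during a node outage, you can inspect pod states from a surviving node's maglev CLI. This is a minimal sketch; the grep patterns are illustrative and simply narrow the command's output.

```bash
# Minimal sketch: observe pod state and placement during a node outage.
magctl appstack status | grep -i pending    # StatefulSet pods bound to the failed node remain Pending
magctl appstack status | grep -i mongodb    # example: check where a StatefulSet pod such as MongoDB is scheduled
```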