The following table identifies the high-level tasks that are required to configure a multiple-node cluster.
| Step | Task | Related Information |
| --- | --- | --- |
| 1. | Install a minimum of four Cisco ICFP virtual appliances. The role that is assigned to each appliance during installation depends on whether you use VMware or OpenStack. | |
| 2. | Configure two primary nodes. | |
| 3. | Configure two or more service nodes. | |
| 4. | Configure additional storage. | |
| 5. | Configure the two primary nodes for HA. | |
| 6. | (OpenStack only) Configure VIP access. | |
| 7. | Configure a load balancer for the service nodes in the cluster. | Your load balancer documentation |
To use a Cisco ICFP virtual appliance that was installed with the Standalone Mode role in a multiple-node cluster, you must first configure it as a primary node or a service node by using the ShellAdmin console. This procedure describes how to configure a standalone node as a primary node. To configure a standalone node as a service node, see Configuring a Service Node.
Install a Cisco ICFP virtual appliance using the Standalone Mode role.
To use a Cisco ICFP virtual appliance that was installed with the Standalone Mode role in a multiple-node cluster, you must first configure it as a primary node or a service node by using the ShellAdmin console. This procedure describes how to configure a standalone node as a service node. To configure a standalone node as a primary node, see Configuring a Primary Node.
Install a Cisco ICFP virtual appliance using the Standalone Mode role.
Obtain the IP address of a primary node in the cluster or the virtual IP address (VIP) of an HA pair in the cluster.
Back up any data in the virtual appliance database that you want to keep. When the virtual appliance is reconfigured as a service node, the existing data is deleted.
The default disk size of 100 GB is not sufficient for running Cisco ICFP in a multiple-node cluster. As a result, you must add disk space before configuring the cluster. You can use either NFS or a Cinder volume, as described in the following topics:
If you did not configure an NFS server for a Cisco ICFP virtual appliance when you installed it, you can configure the appliance for NFS by using the ShellAdmin console.
Note | We recommend that you configure additional storage for all Cisco ICFP nodes. If additional storage is not configured, all VM images that are uploaded from Cisco Intercloud Fabric are stored on the node's local disk. If the node fails, one or both of the following can occur:
If NFS is not available, you can configure a Cinder volume as described in Configuring a Cinder Volume. |
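For reference, the following is a minimal sketch of the kind of NFS mount the appliance relies on. It is illustrative only: the ShellAdmin console performs the actual configuration, and the server address, export path, and mount point shown here are assumptions, not values from this guide.

$ showmount -e 10.0.0.5                                 # list exports available on the NFS server (placeholder address)
$ mkdir -p /mnt/icfp-images                             # hypothetical mount point
$ mount -t nfs 10.0.0.5:/exports/icfp /mnt/icfp-images  # mount the export
$ df -h /mnt/icfp-images                                # confirm the share is mounted with the expected capacity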
The default disk size of 100 GB for the Cisco ICFP virtual appliance is not sufficient for configuring Cisco ICFP in a multiple-node cluster. If you do not have access to an NFS server, you can increase the disk size by creating additional Cinder volumes. Cinder volumes that you create are formatted as physical disks and then combined to form a logical volume that can be mounted on the VM in a specific location.
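As a rough illustration of what this involves, the following sketch uses the cinder and nova CLI clients of this OpenStack era to create and attach volumes. The volume names, sizes, and the server ID are assumptions; the ShellAdmin console handles the formatting and logical-volume steps described above.

$ cinder create --display-name icfp-data-1 100        # create a 100 GB volume (name and size are placeholders)
$ cinder create --display-name icfp-data-2 100        # create additional volumes as needed
$ nova volume-attach <icfp-vm-id> <volume-id> auto    # attach each volume to the Cisco ICFP VM (IDs are placeholders)
# After the volumes are attached, the appliance formats them and combines them
# into a logical volume, as described above.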
Configure a Cisco ICFP virtual appliance as a service node by using the ShellAdmin console. For more information, see Configuring a Service Node.
If you have not already done so, configure the root user password for the Cisco ICFP service node. For more information, see the "Using Cisco ICFP ShellAdmin Commands" chapter in the Cisco Intercloud Fabric for Provider Administrator Guide.
Collect the following information:
After you deploy Cisco ICFP virtual appliances, you can configure them for high availability (HA) by using the ShellAdmin console.
When configuring HA:
Deploy or configure two Cisco ICFP virtual appliances as primary nodes:
To deploy a Cisco ICFP virtual appliance with the Primary Mode role, see Deployment Workflows.
To configure an existing Cisco ICFP virtual appliance as a primary node, see Configuring a Primary Node.
Identify a virtual IP (VIP) address for the HA pair.
Determine which node will be the active node and which node will be the standby node.
On the node that will be the standby node, move any existing data that you want to save to another location.
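Before starting the procedure, a quick optional sanity check of the addressing can save a failed HA setup. The following sketch is illustrative only; the addresses match the sample output later in this procedure and stand in for your own values.

$ ping -c 3 123.45.1.60   # the chosen VIP: expect no response while the address is still unassigned
$ ping -c 3 123.45.1.62   # from the intended active node, confirm that the standby node is reachable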
Step 1 | Using SSH, log in to the ShellAdmin console of the node that will be the active node for the HA pair. |
Step 2 | At the ShellAdmin prompt, choose Setup HA. A warning is displayed stating that the contents of the database on the standby node will be deleted. |
Step 3 | When prompted, enter Y to configure the node for HA. |
Step 4 | Enter A to configure the node as the active node. |
Step 5 | When prompted, enter Y to configure the node as the active node. Cisco ICFP detects and displays the IP address of the current node. |
Step 6 | Enter Y to confirm the node IP address. |
Step 7 | Enter the standby node IP address. |
Step 8 | Enter the VIP to use for the HA pair. Information similar to the following is displayed:
--------------------------------------------
HA Configuration Information:
--------------------------------------------
This node will be configured as active node
Active Node IP address: 123.45.1.61
Standby Node IP address: 123.45.1.62
Virtual IP address: 123.45.1.60
--------------------------------------------
Proceed with setting up HA with above configuration [y/n]: |
Step 9 | Enter Y to confirm the configuration and continue, or N to change the values. If you choose to continue, Cisco ICFP displays progress messages while it configures the active node for HA. |
Step 10 | While Cisco ICFP configures the active node for HA, log in to the ShellAdmin console of the node that will be the standby node for the HA pair. |
Step 11 | At the ShellAdmin prompt, choose Setup HA. |
Step 12 | Enter Y to configure the node for HA. |
Step 13 | Enter B to configure the node as the standby node. |
Step 14 | When prompted, enter Y to configure the node as the standby node. Cisco ICFP detects and displays the IP address of the current node. |
Step 15 | Enter Y to confirm the node IP address. |
Step 16 | Enter the active node IP address. |
Step 17 | Enter the VIP to use for the HA pair. Information similar to the following is displayed:
--------------------------------------------
HA Configuration Information:
--------------------------------------------
This node will be configured as standby node
Active Node IP address: 123.45.1.61
Standby Node IP address: 123.45.1.62
Virtual IP address: 123.45.1.60
--------------------------------------------
Proceed with setting up HA with above configuration [y/n]: |
Step 18 | Enter Y to confirm the configuration. Cisco ICFP displays progress messages while it configures the standby node for HA and synchronizes the database information on both nodes. |
Step 19 | When prompted, press Enter to return to the ShellAdmin menu. |
For OpenStack environments, continue with Configuring VIP Access for HA in OpenStack.
After the Cisco ICFP primary nodes are configured for HA, the virtual IP address (VIP) is used in the event of failover. However, OpenStack Neutron does not allow a host to accept packets whose destination IP address does not match the host's own IP address. As a result, packets sent to the VIP do not reach the node to which the VIP is assigned. To allow the packets to reach the HA pair, the VIP must be added as an allowed address for both nodes (active and standby) in the HA pair.
This procedure describes how to configure VIP access on the nodes in the HA pair by using the OpenStack neutron port-update command. For more information, see the OpenStack documentation at docs.openstack.org.
Step 1 | Obtain a list of networks by entering the following command:
$ neutron net-list
Information similar to the following is displayed:
+--------------------------------------+----------------------+-----------------------------------------------------+
| id                                   | name                 | subnets                                             |
+--------------------------------------+----------------------+-----------------------------------------------------+
| 2d84eaa4-8b81-4dc8-9897-dd8ef4719f8b | public-direct-600    | 3e0b77fe-fc66-4913-bc58-7f62d4ab247a 10.203.28.0/23 |
|                                      |                      | 5c2f73a9-4e2f-498c-8244-6aefe5129fdd 10.203.50.0/23 |
|                                      |                      | ba29165f-c88a-496a-9adc-99ee90407ebe 10.203.24.0/23 |
|                                      |                      | d5b69780-aefb-42a6-8ba5-aaf405fb36a0 10.203.30.0/24 |
| b5d8d461-74d7-45a4-a1f0-f7ac96586bd5 | Net1                 | c0921b42-2896-4b32-b33e-f54db9e5a3d6 192.168.0.0/24 |
| ca80ff29-4f29-49a5-aa22-549f31b09268 | public-floating-601  | 0cfde3f1-e28b-4b87-8095-e0014b0ee573                |
|                                      |                      | 348a808d-ce64-43bc-a9d9-c20e52d2ac06                |
|                                      |                      | 3784170e-5d7f-48b4-b63d-aab4a0fef769                |
| ff95095f-89f0-4005-b709-70a75212d73c | icfp-ha-123-network  | 1099b814-05d9-4da0-93d1-06167db4891f 192.168.1.0/24 |
+--------------------------------------+----------------------+-----------------------------------------------------+ |
Step 2 | Obtain a list of ports on the network on which the active and standby nodes in the HA pair are deployed by entering the following command:
$ neutron port-list -- --network_id=net_id
where net_id is the identifier for the required network. In this example, the network name is icfp-ha-123-network.
$ neutron port-list -- --network_id=ff95095f-89f0-4005-b709-70a75212d73c
Information similar to the following is displayed:
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 4a439cf1-b95e-49ba-a8d6-0b03a8142dd2 |      | fa:16:3e:f6:f8:a9 | {"subnet_id": "1099b814-05d9-4da0-93d1-06167db4891f", "ip_address": "192.168.1.12"} |
| 93d0a69a-7bb8-4719-9ed7-63c10accd78b |      | fa:16:3e:1f:7f:d2 | {"subnet_id": "1099b814-05d9-4da0-93d1-06167db4891f", "ip_address": "192.168.1.11"} |
| 9d626a64-ee7c-410b-ae00-661dd275de79 |      | fa:16:3e:61:81:4b | {"subnet_id": "1099b814-05d9-4da0-93d1-06167db4891f", "ip_address": "192.168.1.14"} |
| cf56fd7b-2896-4e06-b520-1d2258ad6158 |      | fa:16:3e:ab:27:ca | {"subnet_id": "1099b814-05d9-4da0-93d1-06167db4891f", "ip_address": "192.168.1.13"} |
| d7457d29-44ba-46ef-b47a-4b94c9199902 |      | fa:16:3e:ad:d0:e9 | {"subnet_id": "1099b814-05d9-4da0-93d1-06167db4891f", "ip_address": "192.168.1.15"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ |
Step 3 | In the output of the previous step, locate the port ID for the active node. |
Step 4 | Update the port so that it accepts traffic from the VIP by entering the following command:
$ neutron port-update active-port-id --allowed_address_pairs list=true type=dict ip_address=vip
where active-port-id is the port ID of the active node (identified in the previous step) and vip is the virtual IP address of the HA pair.
For example, if the IP address of the active node is 192.168.1.11 and the VIP is 192.168.1.10, the command resembles the following:
$ neutron port-update 93d0a69a-7bb8-4719-9ed7-63c10accd78b --allowed_address_pairs list=true type=dict ip_address=192.168.1.10 |
Step 5 | View the port details and confirm that the allowed_address_pairs field lists the VIP by entering the following command:
$ neutron port-show active-port-id
where active-port-id is the identifier for the port configured in the previous step.
Using the current example, the command and results resemble the following:
$ neutron port-show 93d0a69a-7bb8-4719-9ed7-63c10accd78b
+-----------------------+-------------------------------------------------------------------------------------+
| Field                 | Value                                                                               |
+-----------------------+-------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                |
| allowed_address_pairs | {"ip_address": "192.168.1.10", "mac_address": "fa:16:3e:1f:7f:d2"}                 |
| device_id             | b7b8eeb5-70ad-49ac-a3b4-6d8a144293a2                                                |
| device_owner          | compute:alln01-1-csi                                                                |
| extra_dhcp_opts       |                                                                                     |
| fixed_ips             | {"subnet_id": "1099b814-05d9-4da0-93d1-06167db4891f","ip_address": "192.168.1.11"} |
| id                    | 93d0a69a-7bb8-4719-9ed7-63c10accd78b                                                |
| mac_address           | fa:16:3e:1f:7f:d2                                                                   |
| name                  |                                                                                     |
| network_id            | ff95095f-89f0-4005-b709-70a75212d73c                                                |
| security_groups       | f995d22f-edb8-47c0-9aff-6339a15fb5be                                                |
| status                | ACTIVE                                                                              |
| tenant_id             | b1436740f8db42e39904ee9779f67eb8                                                    |
+-----------------------+-------------------------------------------------------------------------------------+ |
Step 6 | Configure the standby node to accept VIP traffic by entering the following command:
$ neutron port-update standby-port-id --allowed_address_pairs list=true type=dict ip_address=vip
where standby-port-id is the port ID of the standby node (from the port list in Step 2) and vip is the virtual IP address of the HA pair. |
Step 7 | View the port details for the standby node and confirm that the allowed_address_pairs field lists the VIP:
$ neutron port-show standby-port-id |
Step 8 | (Optional) Complete the following steps to configure the VIP so that it is accessible from an external network and so that the VIP uses a floating IP address (see the sketch after this procedure): |
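The exact sub-steps depend on your environment, but the usual OpenStack pattern for exposing a VIP through a floating IP is sketched below. This is an illustrative, hedged example only: the network names are taken from the earlier sample output, and the placeholder IDs in angle brackets are assumptions, not values from this guide.

# 1. Create a dedicated port that owns the VIP on the tenant network.
$ neutron port-create --fixed-ip ip_address=192.168.1.10 icfp-ha-123-network
# 2. Allocate a floating IP on the external network (network name is an assumption).
$ neutron floatingip-create public-floating-601
# 3. Associate the floating IP with the VIP port (IDs are placeholders).
$ neutron floatingip-associate <floating-ip-id> <vip-port-id>

With such an association in place, external clients send traffic to the floating IP, Neutron forwards it to the VIP, and the allowed-address-pairs configuration above lets whichever node currently holds the VIP accept it.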
Cisco ICFP enables you to move from a standalone configuration to a cluster. Doing so involves moving the database contents from the existing standalone node to the active HA node in the cluster, as described in this procedure.
After moving the database contents, you can configure and test the cluster setup without modifying or affecting the standalone setup. For more information about configuring a multiple-node cluster, see Workflow for Configuring Clusters.
Step 1 | In the ShellAdmin console for the standalone node, back up the existing database as follows: |
Step 2 | Deploy or configure two primary nodes by using any of the following methods: |
Step 3 | Restore the backed-up database from Step 1 onto one of the primary nodes: |
Step 4 | In the ShellAdmin console, configure the two primary nodes as an HA pair. For more information, see Configuring HA. |
Step 5 | Configure service nodes for the cluster. For more information, see Configuring a Service Node. |
Cisco ICFP enables you to configure an HA pair and then restore a database from an existing standalone node to the HA pair.
Note | You must stop and start services in the sequence described in this procedure to successfully restore the database on the HA pair. |
Step 1 | Stop the VIP service on the current standby node in the HA pair as follows: |
Step 2 | Stop the VIP service on the current active node in the HA pair as follows: Stopping the VIP service on the active node in an HA pair automatically stops the Infrastructure Manager services if they are running. |
Step 3 | On the active node in the HA pair, restore the database backup obtained from the standalone node as follows: |
Step 4 | Restart the VIP service on the active node as follows: Starting the VIP service on the active node in an HA pair automatically starts the Infrastructure Manager services on that node. |
Step 5 | Restart the VIP service on the standby node in the HA pair as follows: |
After configuring Cisco ICFP for HA, you can view the configuration details, check the status of the active and standby nodes, and view detailed replication status.
Step 1 | Log in to the ShellAdmin console for one of the nodes in the HA pair. |
Step 2 | At the prompt, choose Display HA Status. Information similar to the following is displayed:
Configured HA role for this node is: Active
Current HA role for this node is: Active
HA Configuration properties for this node are:
ACTIVE_IP_ADDRESS=123.16.1.30
STANDBY_IP_ADDRESS=123.16.1.3
VIRTUAL_IP_ADDRESS=123.16.1.25
IP address of this node is: 123.16.1.30
Checking if Virtual IP Address is reachable...OK
Virtual IP Address service status on this node...OK
Checking DB replication from 123.16.1.30 to 123.16.1.3...OK
Checking DB replication from 123.16.1.3 to 123.16.1.30...OK
Do you want to view detailed replication status ? [y/n] |
Step 3 | To view detailed information, enter Y. Information similar to the following is displayed:
Slave_IO_State : Waiting for master to send event
Master_Host : 123.16.1.3
Master_User : replicator
Master_Port : 3306
Connect_Retry : 60
Master_Log_File : mysql-bin.000002
Read_Master_Log_Pos : 645644
Relay_Log_File : mysqld-relay-bin.000004
Relay_Log_Pos : 361
Relay_Master_Log_File : mysql-bin.000002
Slave_IO_Running : Yes
Slave_SQL_Running : Yes
Replicate_Do_DB :
Replicate_Ignore_DB :
… |
Step 4 | Use the arrow keys to scroll through the information. Enter Q to stop viewing the detailed information, and then press Enter to return to the menu. |
After configuring Cisco ICFP for HA, Cisco ICFP checks HA status every five minutes. Any warning or failure messages that are issued are included in the log file for syslog messages. This log file commonly resides in /var/log/ with the name messages. To view these messages, log in as root and use a text editor as described in this procedure.
Step 1 | In the ShellAdmin console, choose Log in as Root. |
Step 2 | Enter Y to confirm the login request, and enter the root account password at the prompt. |
Step 3 | Enter the following command to view the contents of the log file:
vi /directory-path/filename
where directory-path is the location of the log file and filename is the name of the log file.
For example, you might enter the following:
vi /var/log/messages |
Step 4 | To identify messages that pertain to HA, look for entries that contain the string icfpp-ha, as shown in the following example:
Jul 3 03:29:01 icfpp-ha-primary rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="3946" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Jul 8 03:45:01 icfpp-ha-primary rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="3946" x-info="http://www.rsyslog.com"] rsyslogd was HUPed |
Step 5 | Address any HA-related messages as needed. |
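As an alternative to scrolling through the file in vi, you can filter the syslog file directly. The commands below assume the default /var/log/messages location mentioned above.

$ grep icfpp-ha /var/log/messages            # show only entries that contain the HA string
$ tail -f /var/log/messages | grep icfpp-ha  # watch for new HA messages as they arrive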