Stretched Cluster Upgrade

Overview

This section provides information related to upgrading a Cisco HyperFlex Stretched Cluster. The procedure for performing a Stretched Cluster upgrade is similar to the regular HyperFlex cluster upgrade procedure.

Upgrade Guidelines for Stretched Cluster

  • Only split upgrade of the HX Data Platform is supported. Upgrade of UCS firmware is not supported.

  • Manual cluster bootstrap is required for upgrade from a pre-3.5 release to 3.5(1a).

    Auto bootstrap is supported for upgrade from 3.5(1a) to later releases.

  • HyperFlex Witness node version 1.0.2 is supported with release 3.5(1a) and later. An upgrade of the HyperFlex Witness node is not required when upgrading stretched clusters to 3.5(1a) or later releases.

HX Data Platform Software Versions for HyperFlex Witness Node

HyperFlex Release    Witness Node Version
-----------------    --------------------
3.5(2h)              1.0.8
3.5(2g)              1.0.6
3.5(2f)              1.0.6
3.5(2e)              1.0.4
3.5(2d)              1.0.3
3.5(2c)              Release Deferred
3.5(2b)              1.0.3
3.5(2a)              1.0.3
3.5(1a)              1.0.2


Note

Cisco HyperFlex Release 3.5(2f) requires that stretched clusters upgrade the Witness VM to version 1.0.6. For details on how to upgrade the Witness VM, see Upgrading a Witness VM.



Note

Older versions of witness VMs are supported when the cluster is upgraded to the latest HXDP version.


Upgrading HyperFlex Stretched Cluster Using HX Connect

Follow these steps when upgrading a HyperFlex Stretched Cluster from a current HX Data Platform version of 3.0(1x) or later releases.


Warning

Do NOT use Cisco HyperFlex Release 3.5(2c) with Stretched Clusters. Please wait for Cisco HyperFlex Release 3.5(2d). For more information, refer to the Software Advisory for CSCvp90129: Stretched cluster nodes that experience failures may become unavailable.



Note

If the HyperFlex package update is interrupted by a power failure or a reboot of the node that is being upgraded, then depending on the state of the system, the controller VM must be re-imaged or manual intervention is required to fix the issue. For more details, contact Cisco TAC.


Before you begin

  • Complete pre-upgrade validation checks. See Upgrade Prerequisites for more details.

  • Download the latest Cisco HX Data Platform Upgrade Bundle for upgrading existing clusters from previous releases, from Software Download.

  • Complete steps 1 to 6 in the Online Upgrade Process Workflow. See Online Upgrade Process Workflow for more details.

    • Upgrade Cisco UCS Infrastructure.

    • Bootstrap to upgrade Cisco HX Data Platform plug-in.

    • Disable the snapshot schedule on the bootstrapped storage controller VM.

  • If DRS is enabled, the VMs are automatically migrated to other hosts with vMotion.


    Note

    If DRS is not enabled and the VMs of the node are not migrated with vMotion, all the VMs on the node are automatically shut down. For more information, see VMware Documentation for Migration with vMotion.


Procedure


Step 1

Log in to HX Connect.

  1. Enter the HX Storage Cluster management IP address in a browser. Navigate to https://<storage-cluster-management-ip>.

  2. Enter the administrative username and password.

  3. Click Login.

Step 2

In the Navigation pane, select Upgrade.

Step 3

On the Select Upgrade Type page, select HX Data Platform and complete the following fields:

UI Element    Essential Information

Drag the HX file here or click to browse
    Upload the latest Cisco HyperFlex Data Platform Upgrade Bundle (.tgz package file) for upgrading existing clusters from previous releases, available from Download Software - HyperFlex HX Data Platform.
    Sample file name format: storfs-packages-3.5.2a-31601.tgz.

Current version
    Displays the current HyperFlex Data Platform version.

Current cluster details
    Lists the HyperFlex cluster details such as the HyperFlex version and the cluster upgrade state.

Bundle version
    Displays the HyperFlex Data Platform version of the uploaded bundle.

(Optional) Checksum field
    The MD5 checksum number is stored in a separate text file in the /tmp directory where the upgrade package was downloaded. This optional step helps you verify the integrity of the uploaded upgrade package bundle.
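The checksum comparison can be scripted with `md5sum -c`. A minimal sketch; a stand-in file is created here so the snippet is self-contained, and on a real system you would point the paths at the downloaded storfs-packages .tgz and its MD5 text file in /tmp:

```shell
# Create a stand-in for the downloaded bundle so this sketch runs anywhere;
# replace these paths with the real .tgz and its MD5 text file in /tmp.
workdir=$(mktemp -d)
cd "$workdir"
printf 'demo payload' > storfs-packages-demo.tgz
md5sum storfs-packages-demo.tgz > storfs-packages-demo.tgz.md5

# The actual integrity check: md5sum -c recomputes the hash and compares it
# against the value recorded in the .md5 text file.
if md5sum -c storfs-packages-demo.tgz.md5 > /dev/null 2>&1; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - re-download the bundle"
fi
```

A mismatch here usually means a truncated or corrupted download; re-download the bundle rather than uploading it.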

Step 4

Enter the vCenter credentials.

UI Element              Essential Information

User Name field         Enter the vCenter <admin> username.

Admin Password field    Enter the vCenter <admin> password.

Step 5

Click Upgrade to begin the cluster upgrade process.

Step 6

The Validation Screen on the Upgrade Progress page displays the progress of the checks performed. Fix validation errors, if any. Confirm that the upgrade is complete.


Upgrading a Witness VM

Before you begin

  • Upgrade HyperFlex Stretched Cluster.

  • The upgraded HyperFlex Stretched Cluster must be in healthy state. To check the health state of Stretched Cluster after upgrade, run the following command:
    root@StCtlVM:~# stcli cluster info | grep healthy
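The health check above can be wrapped in a small polling helper so the Witness VM upgrade does not begin while the cluster is still converging. A hedged sketch, assuming it runs on a storage controller VM (where `stcli` exists) and that the word "healthy" appears in the output only when the cluster is healthy, as in the documented grep; the attempt count and interval are arbitrary:

```shell
# Poll 'stcli cluster info' until the output reports a healthy cluster.
# Assumption: "healthy" appears in the output only for a healthy cluster
# (this matches the grep used in the documented check above).
wait_for_healthy() {
  tries=${1:-20}                 # number of attempts (default 20)
  while [ "$tries" -gt 0 ]; do
    if stcli cluster info 2>/dev/null | grep -q healthy; then
      echo "cluster healthy"
      return 0
    fi
    tries=$((tries - 1))
    sleep 30                     # wait between polls
  done
  echo "cluster did not become healthy in time" >&2
  return 1
}
```

Call `wait_for_healthy` (optionally with an attempt count) from the controller VM before proceeding with the Witness VM steps.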

Procedure


Step 1

Log in to the Witness VM using SSH and execute the following command to stop the exhibitor service.

root@WitnessVM:~# service exhibitor stop
Step 2

Copy the exhibitor.properties file from the /usr/share/exhibitor/ directory to a remote machine from which you can retrieve it later.

scp root@<Witness-VM-IP>:/usr/share/exhibitor/exhibitor.properties user@<Remote-Machine>:/directory/exhibitor.properties
Step 3

Log out from the Witness VM. Power off and rename the Witness VM to WitnessVM.old.

Note

Confirm that the IP address of the old Witness VM is unreachable, using the ping command.
Step 4

Deploy a new Witness VM and configure the same IP address as the old Witness VM.

Note 

If the IP address is not reachable, the Witness OVA deployment may contain stale entries in the /var/run/network directory. You must manually remove these entries and reboot the VM to have the assigned IP address become reachable on the network.

To reboot the VM, open the VM console in vCenter/vSphere and execute the following command:

rm -rf /var/run/network/*
reboot
Step 5

Log in to the new Witness VM using SSH and execute the following command to stop the exhibitor service.

root@WitnessVM:~# service exhibitor stop
Step 6

Copy the exhibitor.properties file from the remote machine (copied in Step 2) to the /usr/share/exhibitor/ path of the new Witness VM.

scp /directory/exhibitor.properties root@<Witness-VM-IP>:/usr/share/exhibitor/exhibitor.properties
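Before re-enabling exhibitor, it is worth confirming that the restored file is identical to the backup. A self-contained sketch using stand-in files; on the real hosts you would compare the remote /usr/share/exhibitor/exhibitor.properties against your saved copy:

```shell
# Stand-ins for the backed-up and restored copies of exhibitor.properties;
# on real hosts, fetch the remote file and compare it against the backup.
workdir=$(mktemp -d)
printf 'demo exhibitor property file\n' > "$workdir/backup.properties"
cp "$workdir/backup.properties" "$workdir/restored.properties"   # stands in for the scp in this step

# cmp -s is silent and exits 0 only on a byte-for-byte match.
if cmp -s "$workdir/backup.properties" "$workdir/restored.properties"; then
  echo "files match"
else
  echo "files differ - re-copy exhibitor.properties"
fi
```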
Step 7

Verify that the following symlinks are preserved in the new Witness VM:

root@Cisco-HX-Witness-Appliance:~# cd /etc/exhibitor/
root@Cisco-HX-Witness-Appliance:/etc/exhibitor# ls -al
total 8
drwxr-xr-x 2 root root 4096 Sep 11 13:00 .
drwxr-xr-x 88 root root 4096 Sep 11 12:55 ..
lrwxrwxrwx 1 root root 41 Sep 11 13:00 exhibitor.properties -> /usr/share/exhibitor/exhibitor.properties
lrwxrwxrwx 1 root root 37 Jul 24 16:49 log4j.properties -> /usr/share/exhibitor/log4j.properties

If the symlinks are not available, execute the following command:

root@Cisco-HX-Witness-Appliance:/etc/exhibitor# ln -s /usr/share/exhibitor/exhibitor.properties exhibitor.properties
root@Cisco-HX-Witness-Appliance:/etc/exhibitor# ln -s /usr/share/exhibitor/log4j.properties log4j.properties
root@Cisco-HX-Witness-Appliance:/etc/exhibitor# ls -al
total 8
drwxr-xr-x 2 root root 4096 Sep 11 13:00 .
drwxr-xr-x 88 root root 4096 Sep 11 12:55 ..
lrwxrwxrwx 1 root root 41 Sep 11 13:00 exhibitor.properties -> /usr/share/exhibitor/exhibitor.properties
lrwxrwxrwx 1 root root 37 Jul 24 16:49 log4j.properties -> /usr/share/exhibitor/log4j.properties
Step 8

Start the exhibitor service by executing the following command:

root@Cisco-HX-Witness-Appliance:~# service exhibitor start
exhibitor start/running, process <ID>