Installation

Initial Setup in Cluster Installation for RDM

The following issues can occur during initial setup in a cluster installation with raw device mapping (RDM):

Issue: In ISO installation, I/O error on shared LUN.
Resolution: Click Ignore; the installation then proceeds normally.

Issue: High Availability or another RDM-related feature is not ready.
Resolution: Make sure that multipath is not enabled.

Issue: Cannot convert standalone mode to an HA cluster.
Resolution: Check whether multipath is enabled. If it is, contact the Cisco Technical Assistance Center to disable multipath.

Validations on Node A

Issue: No shared storage devices detected during first node installation.
Resolution: Check that the RDM and shared disk have been added with the required configuration. See the Installation and Upgrade Guide for the Cisco UCS Central release you are using.

Issue: Failed to write on disk.
Resolution: The RDM may have persistent-write or LUN-ownership issues. Verify the following:

  • The RDM has the same specifications as described in "Adding and Setting up an RDM Shared Storage on VMware" in the Installation and Upgrade Guide.

  • SCSI filtering is disabled (Hyper-V).

  • The path selection policy for the RDM hard disk is set to Fixed (VMware).

Validations on Node B

Issue: Peer node unreachable.
Resolution: Verify the following:

  • Installation is complete on node A.

  • Network connectivity between both nodes is active.

Issue: Expected shared storage device not found.
Resolution: Make sure that you configure the same shared storage device (the same LUN) on both nodes.

Issue: Node cannot be added to the cluster.
Resolution: Verify the following:

  • Both nodes are at the same Cisco UCS Central release version.

  • The IP address configured on the second node matches the peer node IP entered during first node setup.

  • The username and password for the peer node are correct.

Issue: You enabled multipath while setting up HA.
Resolution: Contact the Cisco Technical Assistance Center.
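The VMware checks above can be partly automated. The sketch below parses the text output of `esxcli storage nmp device list` on the ESXi host and flags devices whose path selection policy is not Fixed; the device names and sample output are illustrative, not captured from a real host:

```python
import re

def non_fixed_devices(esxcli_output: str) -> list[tuple[str, str]]:
    """Return (device, policy) pairs whose path selection policy is not Fixed.

    Parses text in the shape produced by `esxcli storage nmp device list`:
    an unindented device name followed by indented attribute lines.
    """
    results = []
    device = None
    for line in esxcli_output.splitlines():
        if line and not line[0].isspace():
            device = line.strip()  # unindented line starts a new device block
        else:
            m = re.search(r"Path Selection Policy:\s*(\S+)", line)
            if m and m.group(1) != "VMW_PSP_FIXED":
                results.append((device, m.group(1)))
    return results

# Illustrative sample output; the NAA identifiers are placeholders.
sample = """\
naa.60000000000000000000000000000001
   Device Display Name: Shared RDM LUN
   Path Selection Policy: VMW_PSP_RR

naa.60000000000000000000000000000002
   Device Display Name: Local Disk
   Path Selection Policy: VMW_PSP_FIXED
"""

print(non_fixed_devices(sample))
# → [('naa.60000000000000000000000000000001', 'VMW_PSP_RR')]
```

Any device reported here with a round-robin or MRU policy should be switched to Fixed before retrying the installation.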

Cluster State Issues

The following issues can occur in the cluster state:

Issue: Node state is DOWN.
Resolution: Verify the following:

  • The peer node is powered up.

  • Network connectivity between both nodes is active.

Issue: Management services state is DOWN. This means that one or more services are down on the node.
Resolution: Use the show pmon state command in local management to verify the process states:

UCSC# connect local-mgmt
UCSC(local-mgmt)# show pmon state

Issue: High Availability is not ready; no devices found for quorum.
Resolution: For HA, you must have at least one Cisco UCS Manager domain registered in Cisco UCS Central. Check the current registration status of the registered domains.

Issue: I/O error in quorum devices.
Resolution: Check the availability of the Cisco UCS domains with quorum, using either one or both cluster nodes.

Issue: I/O error in shared storage.
Resolution: Verify that the LUN is unaltered. Only the VMs on the cluster nodes can share the LUN.
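When reviewing show pmon state output, the services of interest are those not in the running state. The sketch below extracts them from captured output; the column layout and service names are an illustrative approximation, not a verbatim capture of the command's output:

```python
def down_services(pmon_output: str) -> list[str]:
    """Return the names of services whose reported state is not 'running'.

    Expects whitespace-separated columns with the service name first and
    its state second; header and unrelated lines are skipped.
    """
    known_states = ("running", "stopped", "failed")
    down = []
    for line in pmon_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] in known_states and parts[1] != "running":
            down.append(parts[0])
    return down

# Illustrative sample; service names and columns are placeholders.
sample = """\
SERVICE NAME         STATE     RETRY
svc_sam_dme          running   0
svc_sam_controller   failed    3
httpd                running   0
"""

print(down_services(sample))
# → ['svc_sam_controller']
```

A non-empty result identifies which services to investigate or restart before the management services state can return to UP.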

Network Access

The following issues relate to accessing the network after installation:

Issue: Cisco UCS Central GUI or CLI is not accessible through the virtual IP.
Resolution: Verify that the election completed successfully and that a primary node was selected. The virtual IP can take up to 5 minutes after the election to become available.

Issue: VM IPs are not reachable.
Resolution: Run the reset-network <IP> <Mask> <Gateway> command in local management from the affected VM's console in the vSphere Client:

UCSC# connect local-mgmt
UCSC(local-mgmt)# reset-network <IP> <Mask> <Gateway>
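Because the virtual IP can take several minutes to come up after an election, it is convenient to poll it rather than retry by hand. The sketch below is a generic TCP reachability poll, not a Cisco tool; the host and port are placeholders for the virtual IP and the management port (for example, 443 for the GUI):

```python
import socket
import time

def wait_for_vip(host: str, port: int, timeout_s: float = 300.0,
                 interval_s: float = 5.0) -> bool:
    """Poll until a TCP connection to host:port succeeds or the timeout expires.

    The 300-second default matches the up-to-5-minute delay noted above.
    Returns True as soon as a connection succeeds, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # interval_s doubles as the per-attempt connect timeout.
            with socket.create_connection((host, port), timeout=interval_s):
                return True
        except OSError:
            time.sleep(interval_s)
    return False
```

For example, `wait_for_vip("192.0.2.10", 443)` (a documentation-range placeholder address) returns True once the GUI port on the virtual IP starts accepting connections.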