Changing the IP Address, Hostname, Domain Name and Node Name for Cisco Unified Presence Release 8.6(5)

Change Server Node Name

Table Of Contents

Change Server Node Name

Procedure Overview

Procedure Workflow

Update Cisco Unified Presence Node Name

Verify Database Replication

Verify Updates on Cisco Unified Communications Manager


Procedure Overview

This procedure allows you to modify the node name that is associated with a Cisco Unified Presence server or group of servers. It changes the node name as it appears in the Cluster Topology window of the Cisco Unified Presence Administration GUI.


Caution Use this procedure only to change the node name of a Cisco Unified Presence server when no network-level changes are required. If changes to the network IP address, hostname, or domain name are required, complete the relevant procedure in this document instead.


Caution Changing the node name on any server in a Cisco Unified Presence cluster results in server restarts and interruptions to presence services and other system functions. Because of this impact on the system, you must perform this node name change procedure during a scheduled maintenance window.

This procedure supports the following node name change scenarios:

IP address to hostname

IP address to Fully Qualified Domain Name (FQDN)

hostname to IP address

hostname to FQDN

FQDN to hostname

FQDN to IP address


Note See the Deployment Guide for Cisco Unified Presence for more information about node name recommendations.
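Before you change a node name to a hostname or FQDN, it is also worth confirming that the name resolves in DNS from the servers themselves. A minimal check from the Administration CLI, using server1.example.com as a placeholder for the planned node name (utils network host performs a DNS lookup on the Cisco Unified OS platform):

utils network host server1.example.com

If the lookup fails, correct DNS before you begin the node name change.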


Procedure Workflow

The following table lists the tasks for modifying the node name that is associated with a Cisco Unified Presence server or group of servers. Perform the tasks in the exact order shown.

If you are performing this procedure across multiple clusters, you must complete the changes sequentially, on one cluster at a time.

Table 7-1 Workflow to modify the node name

Step  Task
----  ----
1     Complete the Pre-Change Tasks on all nodes that are to be updated.

      This task includes several prerequisite steps, including a list of services to shut down before making the change (see the service-listing example after this table).

      Some of these steps may apply only to the publisher node, so you can skip them when you run through the procedure for subscriber nodes.

2     Update Cisco Unified Presence Node Name from the Cisco Unified Presence Administration GUI.

3     Verify Database Replication from the Administration CLI.

      After the node name updates are complete, you must verify database replication.

4     Verify Updates on Cisco Unified Communications Manager.

      You must ensure that the Application Server entries for the servers that have been updated reflect the new node name in the Cisco Unified Communications Manager Administration GUI.

5     Complete the Post-Change Tasks list on the updated node.

      Perform a series of steps to ensure that the node is operational again.
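The pre-change and post-change task lists involve stopping and later restarting feature services on each node. If you want to record the state of all services before you begin, the platform provides a service listing; a minimal check from the Administration CLI (the exact set of services shown varies by release):

utils service list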


Update Cisco Unified Presence Node Name

If multiple servers within a cluster are being modified, you must complete the following procedure sequentially for each of these servers.

If the publisher node is being modified, you must complete this procedure for the subscriber nodes first, before completing the procedure on the publisher node.

Before You Begin

Complete the pre-change tasks. See Pre-Change Tasks.

Procedure


Step 1 Sign in to the Cisco Unified Presence Administration GUI on the server.

Step 2 Navigate to System > Cluster Topology.

Step 3 Choose the server from the tree view in the left-hand pane of the Cluster Topology page.

In the right-hand pane, you should see the Node Configuration section and the Fully Qualified Domain Name/IP Address field.

Step 4 Update the Fully Qualified Domain Name/IP Address field with the new node name.

Step 5 Select Save.

Step 6 If multiple servers within a cluster are being modified, repeat this procedure for each server.


What To Do Next

Verify Database Replication.

Verify Database Replication

You must verify that the new node name has replicated across the cluster.

Before You Begin

Update the Cisco Unified Presence node name. See Update Cisco Unified Presence Node Name.


Note Use the validation mechanisms listed below to verify that the new node name(s) have been replicated across the cluster and that database replication is operational.


Procedure


Step 1 To validate that the new node name has been correctly replicated, run the following command from the Administration CLI on all nodes in the cluster:

run sql select name from ProcessNode

The following example shows the command output:

admin:run sql select name from ProcessNode
name                  
===================== 
EnterpriseWideData    
server1.example.com 
server2.example.com 
server3.example.com 
server4.example.com

Verify that there is an entry for each node in the cluster that specifies the new node name. No old node name should appear in the output. Proceed as follows:

a. If any new node names are missing, or if there are references to old node names, proceed to Step 2 to further validate database replication.

b. If the output is as expected, the validation has passed and you can skip the remaining steps in this procedure.
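If you only need to confirm a single renamed node, you can narrow the query with a standard WHERE clause; a minimal sketch, using server1.example.com as a placeholder for the new node name:

run sql select name from ProcessNode where name = 'server1.example.com'

A single matching row confirms that the renamed entry is present in the database on that node.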

Step 2 If the new node name is not correctly listed in Step 1, verify that database replication is in a correct state in the cluster by running the following command from the Administration CLI on the publisher node:

utils dbreplication runtimestate

The following example shows the command output:

admin:utils dbreplication runtimestate

DDB and Replication Services: ALL RUNNING

DB CLI Status: No other dbreplication CLI is running...

Cluster Replication State: BROADCAST SYNC Completed on 1 servers at: 2012-09-26-15-18
     Last Sync Result: SYNC COMPLETED 257 tables sync'ed out of 257
     Sync Errors: NO ERRORS

DB Version: ccm9_0_1_10000_9000
Number of replicated tables: 257
Repltimeout set to: 300s

Cluster Detailed View from gwydlvm020105 (4 Servers):

                              PING           REPLICATION   REPL.   DBver&   REPL.   REPLICATION SETUP
SERVER-NAME  IP ADDRESS       (msec)   RPC?  STATUS        QUEUE   TABLES   LOOP?   (RTMT) & details
-----------  --------------   ------   ----  -----------   -----   ------   -----   -----------------
server1      192.168.10.201   0.038    Yes   Connected     0       match    Yes     (2) PUB Setup Completed
server2      192.168.10.202   0.248    Yes   Connected     0       match    Yes     (2) Setup Completed
server3      192.168.10.203   0.248    Yes   Connected     0       match    Yes     (2) Setup Completed
server4      192.168.10.204   0.248    Yes   Connected     0       match    Yes     (2) Setup Completed

Proceed as follows:

a. Verify that the output shows a replication status of Connected and a replication setup value of (2) Setup Completed for each node. This means that the replication network within the cluster is up, and you can proceed to Step 3 to repair any mismatches between nodes in the cluster.

b. If the replication status and replication setup value are not as expected, the replication network within the cluster is broken and you must proceed to Step 5 to attempt to reestablish replication.
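As a supplementary diagnostic, the Administration CLI also provides the utils dbreplication status command, which generates a fuller replication report on the publisher node; the report file named in the command response can then be read with file view activelog. This check is optional and is not required by this procedure:

utils dbreplication status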

Step 3 Run the following command from the Administration CLI on the publisher node to attempt to repair replication:

utils dbreplication repair all

The following example shows the command output.

admin:utils dbreplication repair all
 -------------------- utils dbreplication repair --------------------

Replication Repair is now running in the background.
Use command 'utils dbreplication runtimestate' to check its progress

Output will be in file cm/trace/dbl/sdi/ReplicationRepair.2013_03_06_12_33_57.out

Please use "file view activelog cm/trace/dbl/sdi/ReplicationRepair.2013_03_06_12_33_57.out"
command to see the output

Note Depending on the size of the database, it may take several minutes to repair database replication.
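Because the repair output file name is timestamped, it differs on each run. To locate the most recent file before viewing it, you can list the directory from the Administration CLI; a minimal example (the detail option includes file dates and sizes):

file list activelog cm/trace/dbl/sdi detail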


Proceed to Step 4 to monitor the progress of the replication repair.

Step 4 Run the following command from the Administration CLI on the publisher node to check the progress of replication repair:

utils dbreplication runtimestate

The following example shows the output when replication repair is complete. The Cluster Replication State lines show the final status of the repair:

admin:utils dbreplication runtimestate

DB and Replication Services: ALL RUNNING

Cluster Replication State: Replication repair command started at: 2013-03-06-12-33
     Replication repair command COMPLETED 269 tables processed out of 269
     No Errors or Mismatches found.

     Use 'file view activelog cm/trace/dbl/sdi/ReplicationRepair.2013_03_06_12_33_57.out'
     to see the details

DB Version: ccm8_6_4_98000_192
Number of replicated tables: 269

Cluster Detailed View from PUB (2 Servers):

                          PING           REPLICATION   REPL.   DBver&   REPL.   REPLICATION SETUP
SERVER-NAME  IP ADDRESS   (msec)   RPC?  STATUS        QUEUE   TABLES   LOOP?   (RTMT) & details
-----------  -----------  ------   ----  -----------   -----   ------   -----   -----------------
server1      10.53.56.17  0.052    Yes   Connected     0       match    Yes     (2) PUB Setup Completed
server2      10.53.56.14  0.166    Yes   Connected     0       match    Yes     (2) Setup Completed

Proceed as follows:

a. If replication repair runs to completion without any errors or mismatches, return to Step 1 to validate that the new node name is now correctly replicated.

b. If errors or mismatches are found, there may be a transient mismatch between servers. Return to Step 3 to run the replication repair again.


Note If mismatches or errors are still being reported after several attempts to repair replication, contact your Cisco Support Representative to resolve the issue.


Step 5 Run the following command from the Administration CLI on the publisher node to attempt to reestablish replication:

utils dbreplication reset all

The following example shows the command output:

admin:utils dbreplication reset all
This command will try to start Replication reset and will return in 1-2 minutes.
Background repair of replication will continue after that for 1 hour.
Please watch RTMT replication state. It should go from 0 to 2. When all subs
have an RTMT Replicate State of 2, replication is complete.
If Sub replication state becomes 4 or 1, there is an error in replication setup.
Monitor the RTMT counters on all subs to determine when replication is complete.
Error details if found will be listed below
OK [10.53.56.14]

Note Depending on the size of the database, it may take from several minutes to more than an hour for replication to be fully reestablished.


Proceed to Step 6 to monitor the progress of the replication reestablishment.

Step 6 Monitor the progress of the attempt to reestablish database replication in Step 5 by running the following command from the Administration CLI on the publisher node:

utils dbreplication runtimestate

The following example shows the command output:

admin:utils dbreplication runtimestate

DDB and Replication Services: ALL RUNNING

DB CLI Status: No other dbreplication CLI is running...

Cluster Replication State: BROADCAST SYNC Completed on 1 servers at: 2012-09-26-15-18
     Last Sync Result: SYNC COMPLETED 257 tables sync'ed out of 257
     Sync Errors: NO ERRORS

DB Version: ccm9_0_1_10000_9000
Number of replicated tables: 257
Repltimeout set to: 300s

Cluster Detailed View from gwydlvm020105 (4 Servers):

                              PING           REPLICATION   REPL.   DBver&   REPL.   REPLICATION SETUP
SERVER-NAME  IP ADDRESS       (msec)   RPC?  STATUS        QUEUE   TABLES   LOOP?   (RTMT) & details
-----------  --------------   ------   ----  -----------   -----   ------   -----   -----------------
server1      192.168.10.201   0.038    Yes   Connected     0       match    Yes     (2) PUB Setup Completed
server2      192.168.10.202   0.248    Yes   Connected     0       match    Yes     (2) Setup Completed
server3      192.168.10.203   0.248    Yes   Connected     0       match    Yes     (2) Setup Completed
server4      192.168.10.204   0.248    Yes   Connected     0       match    Yes     (2) Setup Completed

Replication is considered to be reestablished when all nodes show a replication status of Connected and a replication setup value of (2) Setup Completed. Proceed as follows:

a. If replication is reestablished, return to Step 1 to validate that the new node name is now correctly replicated.

b. If replication does not recover, contact your Cisco Support Representative to resolve this issue.


Caution Do not proceed beyond this point if database replication is broken.


What To Do Next

Verify Updates on Cisco Unified Communications Manager

Verify Updates on Cisco Unified Communications Manager

Verify that the Application Server entry for this server has been updated to reflect the new node name on the Cisco Unified Communications Manager Administration GUI.

You must complete this procedure for each node name that has been changed.

Before You Begin

Ensure that database replication is operational on all nodes. See Verify Database Replication.

Procedure


Step 1 Sign in to the Cisco Unified Communications Manager Administration GUI and navigate to System > Application Server.

Step 2 On the Find and List Application Servers page, click Find if required.

Step 3 Ensure that an entry exists for the updated node name in the list of Application Servers. If there is no entry, add an entry for the new node name.
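As a supplementary cross-check, and assuming you have Administration CLI access to the Cisco Unified Communications Manager publisher, you can also query the application server entries directly; a minimal sketch (the applicationserver table name is an assumption and may vary by release):

run sql select name from applicationserver

Each updated Cisco Unified Presence node name should appear in the results.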


What To Do Next

Complete the post-change task list on all applicable nodes within the cluster. See Post-Change Tasks.