Managing RAID
Last Updated: June 27, 2011
This chapter provides information about managing Redundant Array of Inexpensive Disks (RAID). It contains the following sections:
•Entering the RAID Management Command Environment
•Hot-Swapping the Faulty RAID 1 Disk Drive
•Commands for Managing RAID
•Troubleshooting RAID
Entering the RAID Management Command Environment
To add or modify RAID configuration, you must enter the RAID management command environment.
Note The RAID management CLI is different from the Cisco IOS CLI. For a list of frequently used commands that you can use to manage RAID, see the "Commands for Managing RAID" section.
Before you begin, do the following:
•Shut down all virtual machines.
•Back up the datastore.
•Enable Remote Tech Support (SSH) through the VMware vSphere Hypervisor™ DCUI or the vSphere Client.
To enter the RAID management command environment, complete the following steps:
Step 1 Use SSH client software to log in to the VMware vSphere Hypervisor™. Do the following:
a. For the hostname, enter the IP address or the DNS hostname of the VMware vSphere Hypervisor™.
b. If prompted to accept the server's host key, click Yes.
c. Use root for the username.
d. When prompted for a password, do the following:
–If you are a first-time user, press Enter to leave the password empty.
–If you have configured a password, enter that password, and then press Enter.
The following Tech Support Shell prompt appears:
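The exact prompt string can vary by release; a typical Tech Support Shell prompt looks like the following (illustrative):

```
~ #
```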
Step 2 At the Tech Support Shell prompt, enter the promise-raid-cli command, and then press Enter. This places you in the RAID management command environment:
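For example (the prompt strings are illustrative):

```
~ # promise-raid-cli
raid-cli>
```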
Step 3 Use the appropriate RAID management command(s) as shown in the following example:
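For example, to list the logical RAID volumes (representative output; your volumes and sizes will differ):

```
raid-cli> logdrv -a list
ID RAID Disks Sectors Size(MB)  DiskID Name
0  JBOD 1     64      476940.02 (1)    JBOD on port 01 (00)
1  JBOD 1     64      476940.02 (2)    JBOD on port 02 (00)
```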
For a list of frequently used commands that you can use to manage RAID, see the "Commands for Managing RAID" section.
Step 4 Use the exit command to exit from the RAID management command environment:
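A sketch of this step; the prompt strings are illustrative:

```
raid-cli> exit
~ #
```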
Step 5 Use the exit command to exit from the Tech Support Shell:
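For example; exiting the Tech Support Shell closes the SSH session:

```
~ # exit
```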
Related Topics
•Understanding RAID Options, page 4-1
•Commands for Managing RAID
•Determining the Location of the Physical SATA Drive
Hot-Swapping the Faulty RAID 1 Disk Drive
Note Hot-Swapping is supported in RAID 1 mode only.
If you chose the RAID 1 option during installation and one of the disk drives fails, you can replace the faulty disk drive with a new one.
To hot-swap the faulty RAID 1 disk drive, complete the following steps:
Step 1 Determine the location of the faulty disk drive. See the "Determining the Location of the Physical SATA Drive" section.
Step 2 Remove the faulty disk drive.
Step 3 Insert a new disk drive. The rebuild process starts automatically on the new disk drive.
The rebuild process might take approximately two hours to complete. You can continue to perform normal system operations during the rebuild process.
Caution
Do not unplug the functional disk drive during the rebuild process. If you do, you will lose data.
If the rebuild process does not start, see the "Rebuild Process Does Not Start" section to resolve the problem.
Step 4 (Optional) To check the rebuild status, use the rb -a list command from the RAID management command environment, as shown in the following example:
raid-cli> rb -a list
Rebuild is in progress 10% in logical drive with ID 0 on controller #0!
You can also check the rebuild status in syslog.
Related Topics
•Understanding RAID Options, page 4-1
•Entering the RAID Management Command Environment
•Rebuild Process Does Not Start
Determining the Location of the Physical SATA Drive
To determine the location of the physical SATA drive, complete the following steps:
Step 1 From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click a datastore, and then choose Properties. The Properties dialog box opens.
e. From the Properties dialog box, click the Manage Paths... button. From the Paths pane, look at the Runtime Name column. The Runtime Name column provides information about the location of datastore1 as shown in the following example:
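Runtime Name values follow the vmhbaN:C0:T1:L0 pattern; the adapter number shown here (vmhba1) is illustrative and can differ on your system:

```
vmhba1:C0:T1:L0
```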
where T1 is the location of the datastore. T is the SCSI target, and 1 is the ID number of the JBOD volume where the datastore is located. T1 maps to the JBOD volume ID 1 in Step 2.
Step 2 From the RAID management command environment, use the logdrv command, as shown in the following example:
ID RAID Disks Stripe Size(MB) DiskID Name
0 JBOD 1 N/A 476940.02 (1) JBOD on port 01 (00)
1 JBOD 1 N/A 476940.02 (2) JBOD on port 02 (00)
T1 from Step 1 maps to the JBOD logical volume 1 (ID column). This JBOD logical volume 1 (ID column) is built on top of the physical SATA drive 2 (DiskID column). DiskID 2 indicates that it is the second physical SATA drive.
Note The DiskID number that is displayed from the RAID management CLI output does not match with the Disk number that is displayed on the front panel of the Cisco SRE Service Module:
•Disk 0 on the front panel of the Cisco SRE Service Module is represented as DiskID 1 in the RAID management CLI output.
•Disk 1 on the front panel of the Cisco SRE Service Module is represented as DiskID 2 in the RAID management CLI output.
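The lookup described above can also be scripted. The following is a minimal sketch, not part of the Cisco SRE-V CLI: it parses a saved copy of the logdrv output to map the target number from the Runtime Name (the 1 in T1) to the DiskID and the matching front-panel Disk number. The sample output and the helper itself are illustrative.

```shell
# Hypothetical helper, not part of the Cisco SRE-V CLI: given the target
# number from the vSphere Runtime Name (for example, the 1 in T1) and a
# saved copy of the logdrv output, print the backing DiskID and the
# corresponding front-panel Disk number (front-panel Disk = DiskID - 1).
logdrv_out='ID RAID Disks Stripe Size(MB) DiskID Name
0 JBOD 1 N/A 476940.02 (1) JBOD on port 01 (00)
1 JBOD 1 N/A 476940.02 (2) JBOD on port 02 (00)'

target=1  # the "1" in a runtime name such as vmhba1:C0:T1:L0

result=$(printf '%s\n' "$logdrv_out" | awk -v id="$target" '
    $1 == id {                  # row whose logical volume ID matches
        gsub(/[()]/, "", $6)    # DiskID column, strip the parentheses
        print "DiskID " $6 ", front-panel Disk " ($6 - 1)
    }')
echo "$result"
```

Running the sketch against the sample output above prints the DiskID for logical volume 1 and the front-panel drive you would physically pull.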
Commands for Managing RAID
You can use the RAID management CLI to add or modify RAID configuration, to view RAID status, and to migrate between RAID levels.
Note The RAID management CLI is different from the Cisco IOS CLI.
Table 7-1 provides a list of frequently used commands that you can use to manage RAID. A comprehensive list of all of the RAID Management commands is available online. You can either use the help command to access all of the RAID commands, or you can prepend a command with help to display details about that command. For example, the help logdrv command provides the syntax and options for the logdrv command.
Table 7-1 RAID Management Commands
phydrv -a list (or phydrv)

Example:
raid-cli> phydrv -a list
ID CH Size        Model                  Serial               F/W
1  0  476940.02MB Hitachi HTE545050B9A30 100726PBN40317EASNPE PB4OC64G
2  1  476940.02MB Hitachi HTE545050B9A30 100726PBN40317EA189E PB4OC64G

Purpose: Displays information about physical drives.
Note The DiskID number displayed in the RAID management CLI output does not match the Disk number displayed on the front panel of the Cisco SRE Service Module: Disk 0 on the front panel is DiskID 1 in the CLI output, and Disk 1 on the front panel is DiskID 2.

logdrv -a list (or logdrv)

Example:
raid-cli> logdrv -a list
ID RAID Disks Sectors Size(MB)  DiskID Name
0  JBOD 1     64      476940.02 (1)    JBOD on port 01 (00)
1  JBOD 1     64      476940.02 (2)    JBOD on port 02 (00)

Purpose: Displays information about logical RAID volumes. The same DiskID-to-front-panel note as for phydrv applies.

logdrv -a clear -i 0 (or logdrv -a clear)

Example:
raid-cli> logdrv -a clear -i 0

Purpose: Removes all of the logical volumes from the system.

logdrv -a del -i 0 -l 1 (or logdrv -a del -l 1)

Example:
raid-cli> logdrv -a del -i 0 -l 1

Purpose: Removes the second logical volume.
Note Logical volume IDs start at 0, so -l 1 refers to the second logical volume.

logdrv -a add -p 1,2 -e 0,0 -z "raid=raid0,name=RAID0,init=quick"

Example:
raid-cli> logdrv -a add -p 1,2 -e 0,0 -z "raid=raid0,name=RAID0,init=quick"
ID RAID  Disks Sectors Size(MB)  DiskID Name
0  RAID0 2     64      953752.00 (1,2)  RAID0

Purpose: Creates a simple block-level striping RAID 0 volume using the full capacity (a single full extent on each drive) across the first and second SATA drives. The -z option can also be used to change the stripe block size.
Note Cisco SRE-V supports a maximum of two logical volumes.

logdrv -a add -p 1,2 -e 0,0 -z "raid=raid1,name=RAID1,init=quick"

Example:
raid-cli> logdrv -a add -i 0 -p 1,2 -e 0,0 -z "name=RAID1,raid=raid1"
ID RAID  Disks Sectors Size(MB)  DiskID Name
0  RAID1 2     64      476876.00 (1,2)  RAID1

Purpose: Creates a simple 1:1 mirrored RAID 1 volume using the full capacity of both SATA drives.
Note You can create a maximum of two logical volumes.

logdrv -a add -i 0 -p 1,2 -z "raid=JBOD,init=quick"

Example:
raid-cli> logdrv -a add -i 0 -p 1,2 -z "raid=JBOD,init=quick"
ID RAID Disks Sectors Size(MB)  DiskID Name
0  JBOD 1     64      476940.02 (1)    JBOD on port 01 (00)
1  JBOD 1     64      476940.02 (2)    JBOD on port 02 (00)

Purpose: Removes the RAID metadata on both SATA drives and makes them non-RAID drives. If a datastore is not created on top of a JBOD volume, that JBOD volume might disappear after a reboot.

logdrv -a list -v (or logdrv -v)

Example:
raid-cli> logdrv -a list -v
Array ID : 0
Array name : RAID0
Array size : 953752.00 MB
Array stripe size in number of blocks : 64
Array sector size : 512
Array raid mode : RAID0
Array write cache mode : Write Through
Number of disks in array : 2
Disk members with ID in array : (1,2)
Array activity status : Idle
Array functional status : Online
Driver Cache Mode: Write Thru
Driver Lookahead threshold: 0
Driver Consolidation: enabled

Purpose: Displays details about the RAID configuration.

event -a list -v -c 10

Example:
raid-cli> event -a list -v -c 10
Time: Jan 29, 2011 06:19:00
EventID: 0x90001
Event Description: Logical drive "RAID0" deleted
Time: Jan 29, 2011 06:18:40
EventID: 0x90000
Event Description: Logical drive "RAID0" created
Time: Jan 29, 2011 06:18:30
EventID: 0x90001
Event Description: Logical drive "RAID0" deleted

Purpose: Displays the oldest 10 events (if available) from the event queue.
Related Topics
•Understanding RAID Options, page 4-1
•Entering the RAID Management Command Environment
Troubleshooting RAID
This section contains the following:
•Cannot View Datastores
•Rebuild Process Does Not Start
Cannot View Datastores
Problem
After disk migration, reboot, or Cisco SRE-V software upgrade, you are unable to view the datastores in Inventory > Configuration > Storage.
Solution
To resolve this problem, do the following:
1. Rescan the system a couple of times. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the Datastores pane (right pane), choose Rescan All... (located on the upper right corner). The Rescan dialog box opens.
d. Click OK.
2. If rescanning the system does not resolve the problem, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. Click the Add Storage... button. The Select Storage Type wizard page opens.
d. From the right pane, choose Disk/LUN, and then click Next. The Select Disk/LUN wizard page opens.
e. From the right pane, choose a disk, and then click Next. The Select VMFS Mount Options wizard page opens.
f. From the right pane, choose the Assign a New Signature radio button, and then click Next.
g. Choose Free Space, and then click Finish.
The missing datastores display with a modified name, which you can change. For example, datastore_R0 displays as snap-XXXXX-datastore_R0, and datastore_R1 displays as snap-XXXXX-datastore_R1. To change the name of a datastore, right-click it, and then choose Rename.
Related Topics
•Upgrading the Cisco SRE-V Software, page 4-9
Rebuild Process Does Not Start
Problem
After hot-swapping the faulty RAID 1 disk drive, the rebuild process does not start.
Probable Cause
The RAID configuration shows a new JBOD logical volume, which you must delete.
Solution
To resolve this problem, do the following:
1. From the RAID management CLI, use the logdrv command to view RAID configuration.
If a valid partition table exists on the newly inserted drive, the RAID configuration appears unbalanced, as shown in the following example:
ID RAID Disks Sectors Size(MB) DiskID Name
0 RAID1 2 64 476876.00 (-,2) RAID1
1 JBOD 1 64 476940.02 (1) JBOD on port 01 (00)
where (-,2) RAID1 represents an unbalanced array; and ID 1 shows a new JBOD logical volume. JBOD represents non-RAID, which you must delete.
2. Use the logdrv -a del -l <ID number of the JBOD> command to delete the JBOD logical volume:
raid-cli> logdrv -a del -l 1
Note The -l in the command is a lowercase "L" and stands for logical.
After the JBOD logical volume is deleted, the rebuild process starts automatically on the new disk drive.
The rebuild process might take approximately two hours to complete. You can continue to perform normal system operations during the rebuild process.
Caution
Do not unplug the functional disk drive during the rebuild process. If you do, you will lose data.
3. (Optional) To check the rebuild status, use the rb -a list command, as shown in the following example:
raid-cli> rb -a list
Rebuild is in progress 10% in logical drive with ID 0 on controller #0!
You can also check the rebuild status in syslog.
Related Topic
•Hot-Swapping the Faulty RAID 1 Disk Drive