Managing RAID
This chapter provides information about managing Redundant Array of Inexpensive Disks (RAID). It contains the following sections:
•Entering the RAID Management Command Environment
•Migrating from Non-RAID Mode to RAID 0 Mode
•Migrating from Non-RAID Mode to RAID 1 Mode
•Hot-Swapping the Faulty RAID 1 Disk Drive
•Commands for Managing RAID
•Troubleshooting RAID
Entering the RAID Management Command Environment
To add or modify RAID configuration, you must enter the RAID management command environment.
Note The RAID management CLI is different from the Cisco IOS CLI. For a list of frequently used commands that you can use to manage RAID, see the "Commands for Managing RAID" section.
Before you begin, do the following:
•Shut down all virtual machines.
•Back up the datastore.
•If you are removing a disk volume, you must first remove the datastore, and then remove the disk volume.
To enter the RAID management command environment, complete the following steps:
Step 1 Enter the Cisco SRE-V command environment. See the "Entering the Cisco SRE-V Command Environment" section.
Step 2 From the Console Manager interface, use the hypervisor set disk maintenance command to move the system into disk maintenance mode:
SRE-Module# hypervisor set disk maintenance
When prompted, confirm that you want to continue with the operation. The command is executed and the system reboots.
Note In disk maintenance mode, the datastores and virtual machines are maintained but the system logs and temporary files that are not stored in the datastore are deleted.
Step 3 Enter the RAID management command environment. From the Console Manager interface, use the raid setup command:
You are about to enter RAID management CLI console.
It is strongly recommended to shut down all the VMs and backup the datastore before
changing any RAID configuration.
A manual reboot is required after changing any RAID configuration.
Step 4 Use the appropriate RAID management commands.
For a list of frequently used commands that you can use to manage RAID, see the "Commands for Managing RAID" section.
Step 5 Use the exit command to exit from the RAID management CLI console.
Step 6 Rescan the system. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the Datastores pane (right pane), choose Rescan All... (located on the upper right corner). The Rescan dialog box opens.
d. Click OK.
Step 7 From the Console Manager interface, use the hypervisor unset disk maintenance command to move the system out of disk maintenance mode:
SRE-Module# hypervisor unset disk maintenance
When prompted, confirm that you want to continue with the operation. The command is executed and the system reboots.
Related Topics
•Understanding RAID Options
•Commands for Managing RAID
•Determining the Location of the Physical SATA Drive
Migrating from Non-RAID Mode to RAID 0 Mode
Note •We recommend that you export the data to a remote network data storage system, and then import it after the migration. If you do not have a remote network data storage system in place, follow this procedure.
•The migration process can take several hours.
Use this procedure if you want to migrate from non-RAID mode to RAID 0 mode and you have done the following:
•Upgraded from Cisco SRE-V 1.0 to Cisco SRE-V 1.1 using the upgrade procedure. See the "Upgrading the Cisco SRE-V Software" section.
•Installed Cisco SRE-V 1.1 (clean install) and selected non-RAID mode during the installation process. See the "Installing the Cisco SRE-V Software—Clean Install" section.
For information about RAID options, see the "Understanding RAID Options" section.
Basic Workflow
The Cisco SRE 900 Service Module contains two physical drives with two datastores. Do the following:
1. Move all of the user data and virtual machines into one datastore. For example, move the data and virtual machines from datastore1 to a temporary data repository, datastore2.
2. Delete the datastore from which you moved data, which in this example is datastore1, and then convert the physical drive that contained datastore1 to RAID 0.
3. Create a new datastore, for example, datastore_R0, and then move the user data from the temporary data repository (datastore2) to the new datastore (datastore_R0).
4. Delete the temporary data repository (datastore2) and then convert the physical drive that contained the temporary data repository (datastore2) to RAID 0.
Procedure
To migrate from non-RAID mode to RAID 0 mode, complete the following steps:
Step 1 Shut down and unregister all virtual machines.
Step 2 From the Console Manager interface, use the hypervisor set disk maintenance command to move the system into disk maintenance mode:
SRE-Module# hypervisor set disk maintenance
When prompted, confirm that you want to continue with the operation. The command is executed and the system reboots.
Note In disk maintenance mode, the datastores and virtual machines are maintained but the system logs and temporary files that are not stored in the datastore are deleted.
Step 3 Move all of the user data and virtual machines into one datastore. This datastore will serve as a temporary repository for data. For example, move the data and virtual machines from datastore1 to a temporary data repository, datastore2. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the datastore from which you want to move the user data (for example, datastore1), and then choose Browse Datastore...
e. Choose the folder or folders that you want to move, and then click the Move-File icon (located in the tool bar). Click Yes in the confirmation dialog box. The Move Items To... dialog box opens. Do the following:
–From the Datastores pane (left pane), choose the datastore into which you want to move the data.
–From the Destination Folder pane (right pane), choose the target folder into which you want to move the data.
–Click the Move button. The data moves from datastore1 to datastore2.
Step 4 Determine which JBOD volume you want to migrate to RAID 0. In this example, the JBOD volume to migrate to RAID 0 is the one that contains datastore1. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the datastore from which you moved the user data (datastore1), and then choose Properties. The Properties dialog box opens.
e. From the Properties dialog box, click the Manage Paths... button. From the Paths pane, look at the Runtime Name column. The Runtime Name column provides information about the location of datastore1, as shown in the following example:
vmhba0:C0:T0:L0
where T0 is the location of datastore1. T is the SCSI target and 0 is the ID number of the JBOD volume. T0 maps to JBOD volume ID 0 in Step 6.
For more information about locating the physical SATA drive, see the "Determining the Location of the Physical SATA Drive" section.
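The Runtime Name-to-volume mapping described above can be sketched in Python. This helper is illustrative only (it is not part of the Cisco SRE-V or vSphere tooling); it simply pulls the SCSI target number out of a Runtime Name string such as vmhba0:C0:T0:L0:

```python
import re

def scsi_target(runtime_name):
    """Extract the SCSI target number (the JBOD volume ID) from a
    vSphere Runtime Name such as 'vmhba0:C0:T0:L0'."""
    match = re.search(r":T(\d+):", runtime_name)
    if match is None:
        raise ValueError("unexpected Runtime Name: %s" % runtime_name)
    return int(match.group(1))

# The target number embedded in the Runtime Name is the JBOD volume ID.
print(scsi_target("vmhba0:C0:T0:L0"))  # → 0
```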
Step 5 Delete the datastore from which you moved data, which in this example is datastore1. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the datastore from which you moved data (datastore1), and then choose Delete. The datastore is deleted when the Recent Tasks pane displays Completed.
Step 6 Migrate the JBOD volume (JBOD 0) to RAID 0. In Step 4, you determined that the JBOD volume you must migrate to RAID 0 is the one that contained datastore1, which is located at T0. Do the following:
a. Delete the JBOD volume. Do the following:
–From the RAID management CLI, use the logdrv command to determine the ID number of the JBOD volume that you want to delete, as shown in the following example:
ID RAID Disks Sectors Size(MB) DiskID Name
0 JBOD 1 64 476940.02 (1) JBOD on port 01 (00)
1 JBOD 1 64 476940.02 (2) JBOD on port 02 (00)
The JBOD volume ID 0 maps to the SCSI Target T0 (location of the datastore1), which you determined in Step 4. This is the JBOD volume that you must delete.
Make a note of the DiskID number. The DiskID number helps you determine the location of the physical SATA drive. For more information about locating the physical SATA drive, see the "Determining the Location of the Physical SATA Drive" section.
–Use the logdrv -a del -l <ID number of the JBOD> command to delete the JBOD volume:
raid-cli> logdrv -a del -l 0
Note -l in the command is lower case "L" and stands for logical.
b. Rescan the system. From the vSphere Client GUI Home page, do the following:
–Choose Inventory > Configuration.
–From the Hardware pane (left pane), choose Storage.
–From the Datastores pane (right pane), choose Rescan All... (located on the upper right corner). The Rescan dialog box opens.
–Click OK.
c. Create a new single-drive RAID 0 volume on the SATA drive with full disk capacity. From the RAID management CLI console, use the
logdrv -a add -p 1 -e 0 -z "raid=raid0,name=RAID0,init=quick" command:
raid-cli> logdrv -a add -p 1 -e 0 -z "raid=raid0,name=RAID0,init=quick"
where -p 1 is the physical SATA drive, which you determined in step a, and -e 0 indicates full disk capacity.
d. Rescan the system.
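The logdrv listing used in the step above can also be interpreted programmatically. The following sketch (an illustrative helper, not part of the product) parses the example output shown earlier to map each logical volume ID to the DiskID numbers it is built on:

```python
import re

# Example output copied from the logdrv command above.
LOGDRV_OUTPUT = """\
ID RAID Disks Sectors Size(MB) DiskID Name
0 JBOD 1 64 476940.02 (1) JBOD on port 01 (00)
1 JBOD 1 64 476940.02 (2) JBOD on port 02 (00)"""

def volume_disk_ids(output):
    """Map each logical volume ID to the DiskID numbers it is built on."""
    mapping = {}
    for line in output.splitlines()[1:]:          # skip the header row
        fields = line.split()
        vol_id = int(fields[0])
        disk_ids = re.findall(r"\d+", fields[5])  # DiskID field, e.g. '(1)' or '(1,2)'
        mapping[vol_id] = [int(d) for d in disk_ids]
    return mapping

print(volume_disk_ids(LOGDRV_OUTPUT))  # → {0: [1], 1: [2]}
```

For example, if Step 4 identified SCSI target T0, volume ID 0 here is the volume to delete, and its DiskID tells you which physical drive it occupies.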
Step 7 Create a new datastore. For example, create datastore_R0 on the new RAID 0 volume that has vmhba0:C0:T0:L0 in the Runtime Name field. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. Click the Add Storage... button. The Add Storage wizard opens and walks you through the steps to create the new datastore (datastore_R0).
Step 8 Move the user data from the temporary data repository (datastore2) to the new datastore (datastore_R0). For procedure to move data, see Step 3.
Step 9 Delete the temporary data repository (datastore2). For procedure to delete the datastore, see Step 5.
Step 10 Migrate the other JBOD to RAID 0. In this example, migrate JBOD 1 to RAID 0. Do the following:
a. Delete the other remaining JBOD logical volume. From the RAID management CLI, use the logdrv -a del -l <ID number of the JBOD> command:
raid-cli> logdrv -a del -l 1
Note -l in the command is lower case "L" and stands for logical.
b. Rescan the system.
c. Direct RAID 0/raw-drive to RAID 0 migration. From the RAID management CLI, do the following:
–Use the logdrv command to identify the RAID 0 volume ID.
–Use the migrate -a start -i 0 -l 0 -p 2 -s "raid=raid0" command to merge the unused SATA drive (parameter -p 2) to the RAID 0 volume (parameter -l 0).
Note The migration process can take several hours.
d. (Optional) To determine the migration status, do one of the following:
–From the RAID management CLI, use the bga or migrate command:
or
–From the vSphere Client GUI Home page, do the following:
Choose System Logs.
From the drop-down menu, choose Server log [/var/log/messages].
Click the Show All button to view the migration status.
e. Use the exit command to exit from the RAID management CLI console.
f. Rescan the system.
Note After the migration is completed, the rescan reports the expanded disk capacity, which can be approximately 1 TB.
Step 11 Expand the new datastore (datastore_R0). From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the new datastore (datastore_R0), and then choose Properties. The Properties dialog box opens.
e. From the Properties dialog box, click the Increase... button. The Increase Storage wizard opens and walks you through the steps to expand the datastore.
Step 12 From the Console Manager interface, use the hypervisor unset disk maintenance command to move the system out of disk maintenance mode:
SRE-Module# hypervisor unset disk maintenance
When prompted, confirm that you want to continue with the operation. The command is executed and the system reboots.
Step 13 After completing these steps, if you are unable to view the datastores in Inventory > Configuration > Storage, see the "Cannot View Datastores" section to resolve the problem.
Related Topics
•Entering the Cisco SRE-V Command Environment
•Entering the RAID Management Command Environment
•Understanding RAID Options
•Determining the Location of the Physical SATA Drive
•Commands for Managing RAID
Migrating from Non-RAID Mode to RAID 1 Mode
Note •We recommend that you export the data to a remote network data storage system, and then import it after the migration. If you do not have a remote network data storage system in place, follow this procedure.
•The migration process can take several hours.
Use this procedure if you want to migrate from non-RAID mode to RAID 1 mode and you have done the following:
•Upgraded from Cisco SRE-V 1.0 to Cisco SRE-V 1.1 using the upgrade procedure. See the "Upgrading the Cisco SRE-V Software" section.
•Installed Cisco SRE-V 1.1 (clean install) and selected non-RAID mode during the installation process. See the "Installing the Cisco SRE-V Software—Clean Install" section.
For information about RAID options, see the "Understanding RAID Options" section.
Basic Workflow
The Cisco SRE 900 Service Module contains two physical drives with two datastores. To convert the two physical drives, which are in non-RAID mode, to RAID 1 mode, you must first convert one physical drive to RAID 0, and then convert both physical drives to RAID 1:
Non-RAID (two physical drives) > RAID 0 (one physical drive) > RAID 1 (two physical drives)
Do the following:
1. Move all of the user data and virtual machines into one datastore. For example, move the data and virtual machines from datastore1 to a temporary data repository, datastore2.
2. Delete the datastore from which you moved data, which in this example is datastore1, and then convert the physical drive that contained datastore1 to RAID 0.
3. Create a new datastore, for example, datastore_R1, and then move the user data from the temporary data repository (datastore2) to the new datastore (datastore_R1).
4. Delete the temporary data repository (datastore2) and then convert both the physical drives (the drive that you converted to RAID 0 and the other remaining physical drive) to RAID 1.
Procedure
To migrate from non-RAID to RAID 1 mode, complete the following steps:
Step 1 Shut down and unregister all virtual machines.
Step 2 From the Console Manager interface, use the hypervisor set disk maintenance command to move the system into disk maintenance mode:
SRE-Module# hypervisor set disk maintenance
When prompted, confirm that you want to continue with the operation. The command is executed and the system reboots.
Note In disk maintenance mode, the datastores and virtual machines are maintained but the system logs and temporary files that are not stored in the datastore are deleted.
Step 3 Move all of the user data and virtual machines into one datastore. This datastore will serve as a temporary repository for data. For example, move the data and virtual machines from datastore1 to a temporary data repository, datastore2. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the datastore from which you want to move the user data (for example, datastore1), and then choose Browse Datastore...
e. Choose the folder or folders that you want to move, and then click the Move-File icon (located in the tool bar). Click Yes in the confirmation dialog box. The Move Items To... dialog box opens. Do the following:
–From the Datastores pane (left pane), choose the datastore into which you want to move the data.
–From the Destination Folder pane (right pane), choose the target folder into which you want to move the data.
–Click the Move button. The data moves from datastore1 to datastore2.
Step 4 Determine which JBOD volume you want to migrate to RAID 0. In this example, the JBOD volume to migrate to RAID 0 is the one that contains datastore1. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the datastore from which you moved the user data (datastore1), and then choose Properties. The Properties dialog box opens.
e. From the Properties dialog box, click the Manage Paths... button. From the Paths pane, look at the Runtime Name column. The Runtime Name column provides information about the location of datastore1, as shown in the following example:
vmhba0:C0:T1:L0
where T1 is the location of datastore1. T is the SCSI target and 1 is the ID number of the JBOD volume. T1 maps to JBOD volume ID 1 in Step 6.
For more information about locating the physical SATA drive, see the "Determining the Location of the Physical SATA Drive" section.
Step 5 Delete the datastore from which you moved data, which in this example is datastore1. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click the datastore from which you moved data (datastore1), and then choose Delete. The datastore is deleted when the Recent Tasks pane displays Completed.
Step 6 Migrate the JBOD volume to RAID 0. In Step 4, you determined that the JBOD volume you must migrate to RAID 0 is the one that contained datastore1, which is located at T1. Do the following:
a. Delete the JBOD volume. Do the following:
–From the RAID management CLI, use the logdrv command to determine the ID number of the JBOD volume that you want to delete, as shown in the following example:
ID RAID Disks Sectors Size(MB) DiskID Name
0 JBOD 1 64 476940.02 (1) JBOD on port 01 (00)
1 JBOD 1 64 476940.02 (2) JBOD on port 02 (00)
The JBOD volume ID 1 maps to the SCSI Target T1 (location of the datastore1), which you determined in Step 4. This is the JBOD volume that you must delete.
Make a note of the DiskID number. The DiskID number helps you determine the location of the physical SATA drive. For more information about locating the physical SATA drive, see the "Determining the Location of the Physical SATA Drive" section.
–Use the logdrv -a del -l <ID number of the JBOD> command to delete the JBOD volume:
raid-cli> logdrv -a del -l 1
Note -l in the command is lower case "L" and stands for logical.
b. Rescan the system. From the vSphere Client GUI Home page, do the following:
–Choose Inventory > Configuration.
–From the Hardware pane (left pane), choose Storage.
–From the Datastores pane (right pane), choose Rescan All... (located on the upper right corner). The Rescan dialog box opens.
–Click OK.
c. Create a new single-drive RAID 0 volume on the SATA drive with full disk capacity.
From the RAID management CLI console, use the
logdrv -a add -p 2 -e 0 -z "raid=raid0,name=RAID1,init=quick" command:
raid-cli> logdrv -a add -p 2 -e 0 -z "raid=raid0,name=RAID1,init=quick"
where -p 2 is the physical SATA drive, which you determined in step a, and -e 0 indicates full disk capacity.
d. Rescan the system.
Step 7 Create a new datastore. For example, create datastore_R1 on the new RAID 0 volume that has vmhba0:C0:T1:L0 in the Runtime Name field. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. Click the Add Storage... button. The Add Storage wizard opens and walks you through the steps to create the new datastore (datastore_R1).
Step 8 Move the user data from the temporary data repository (datastore2) to the new datastore (datastore_R1). For procedure to move data, see Step 3.
Step 9 Delete the temporary data repository (datastore2). For procedure to delete the datastore, see Step 5.
Step 10 Migrate the single-drive RAID 0 and the remaining JBOD volume to RAID 1. Do the following:
a. Delete the other JBOD logical volume. From the RAID management CLI, use the
logdrv -a del -l <ID number of the JBOD> command:
raid-cli> logdrv -a del -l 0
Note -l in the command is lower case "L" and stands for logical.
b. Direct RAID 0/raw-drive to RAID 1 migration. From the RAID management CLI, do the following:
–Use the logdrv command to identify the RAID 0 volume ID.
–Use the migrate -a start -i 0 -l 1 -p 1 -s "raid=raid1" command to merge the unused SATA drive (parameter -p 1) into the RAID 0 volume (parameter -l 1).
Note The migration process can take several hours.
c. (Optional) To determine the migration status, do one of the following:
–From the RAID management CLI, use the bga or migrate command.
or
–From the vSphere Client GUI Home page, do the following:
Choose System Logs.
From the drop-down menu, choose Server log [/var/log/messages].
Click the Show All button.
d. Use the exit command to exit from the RAID management CLI console.
Step 11 From the Console Manager interface, use the hypervisor unset disk maintenance command to move the system out of disk maintenance mode:
SRE-Module# hypervisor unset disk maintenance
When prompted, confirm that you want to continue with the operation. The command is executed and the system reboots.
Step 12 After completing these steps, if you are unable to view the datastores in Inventory > Configuration > Storage, see the "Cannot View Datastores" section to resolve the problem.
Related Topics
•Entering the Cisco SRE-V Command Environment
•Entering the RAID Management Command Environment
•Understanding RAID Options
•Determining the Location of the Physical SATA Drive
•Commands for Managing RAID
Hot-Swapping the Faulty RAID 1 Disk Drive
Note Hot-Swapping is supported in RAID 1 mode only.
If you chose the RAID 1 option during installation and one of the disk drives fails, you can replace the faulty disk drive with a new disk drive.
To hot-swap the faulty RAID 1 disk drive, complete the following steps:
Step 1 Determine the location of the faulty disk drive. See the "Determining the Location of the Physical SATA Drive" section.
Step 2 Remove the faulty disk drive.
Step 3 Insert a new disk drive. The rebuild process starts automatically on the new disk drive.
The rebuild process might take approximately two hours to complete. You can continue to perform normal system operations during the rebuild process.
Caution Make sure that you do not unplug the functional disk drive during the rebuild process. If you do, you will lose data.
If the rebuild process does not start, see the "Rebuild Process Does Not Start" section to resolve the problem.
Step 4 (Optional) To check the rebuild status, do the following:
a. Enter the Cisco SRE-V command environment. See the "Entering the Cisco SRE-V Command Environment" section.
b. Enter the RAID management command environment. From the Console Manager interface, use the raid setup command:
You are about to enter RAID management CLI console.
It is strongly recommended to shut down all the VMs and backup the datastore before
changing any RAID configuration.
A manual reboot is required after changing any RAID configuration.
c. From the RAID management CLI, use the rb -a list command, as shown in the following example:
Rebuild is in progress 10% in logical drive with ID 0 on controller #0!
You can also check the rebuild status in syslog.
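The completion percentage can be pulled out of the rb -a list status line shown above; for example, for scripting a periodic check. The following is an illustrative sketch (not part of the product tooling):

```python
import re

def rebuild_percent(status_line):
    """Extract the completion percentage from an 'rb -a list' status line.
    Returns None if the line reports no rebuild in progress."""
    match = re.search(r"(\d+)%", status_line)
    return int(match.group(1)) if match else None

print(rebuild_percent(
    "Rebuild is in progress 10% in logical drive with ID 0 on controller #0!"
))  # → 10
```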
Related Topics
•Understanding RAID Options
•Entering the RAID Management Command Environment
•Rebuild Process Does Not Start
Determining the Location of the Physical SATA Drive
To determine the location of the physical SATA drive, complete the following steps:
Step 1 From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the right pane, click the Datastores button.
d. Right-click a datastore, and then choose Properties. The Properties dialog box opens.
e. From the Properties dialog box, click the Manage Paths... button. From the Paths pane, look at the Runtime Name column. The Runtime Name column provides information about the location of the datastore, as shown in the following example:
vmhba0:C0:T1:L0
where T1 is the location of the datastore. T is the SCSI target and 1 is the ID number of the JBOD volume where the datastore is located. T1 maps to JBOD volume ID 1 in Step 2.
Step 2 From the RAID management CLI, use the logdrv command, as shown in the following example:
ID RAID Disks Stripe Size(MB) DiskID Name
0 JBOD 1 N/A 476940.02 (1) JBOD on port 01 (00)
1 JBOD 1 N/A 476940.02 (2) JBOD on port 02 (00)
T1 from Step 1 maps to the JBOD logical volume 1 (ID column). This JBOD logical volume 1 (ID column) is built on top of the physical SATA drive 2 (DiskID column). DiskID 2 indicates that it is the second physical SATA drive.
Note The DiskID number that is displayed in the RAID management CLI output does not match the Disk number that is displayed on the front panel of the Cisco SRE Service Module:
•Disk 0 on the front panel of the Cisco SRE Service Module is represented as DiskID 1 in the RAID management CLI output.
•Disk 1 on the front panel of the Cisco SRE Service Module is represented as DiskID 2 in the RAID management CLI output.
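The off-by-one relationship above (CLI DiskID numbers start at 1, front-panel Disk numbers start at 0) can be expressed as a small illustrative helper:

```python
def front_panel_disk(disk_id):
    """Convert a RAID management CLI DiskID (1-based) to the Disk number
    printed on the front panel of the Cisco SRE Service Module (0-based)."""
    if disk_id < 1:
        raise ValueError("DiskID numbers start at 1")
    return disk_id - 1

print(front_panel_disk(1))  # → 0 (DiskID 1 is front-panel Disk 0)
print(front_panel_disk(2))  # → 1 (DiskID 2 is front-panel Disk 1)
```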
Commands for Managing RAID
You can use the RAID management CLI to add or modify RAID configuration, to view RAID status, and to migrate between RAID levels.
Note The RAID management CLI is different from the Cisco IOS CLI.
Table 9-1 provides a list of frequently used commands that you can use to manage RAID. A comprehensive list of all of the RAID Management commands is available online. You can either use the help command to access all of the RAID commands, or you can prepend a command with help to display details about that command. For example, the help logdrv command provides the syntax and options for the logdrv command.
Table 9-1 RAID Management Commands
phydrv -a list (or phydrv)
Displays information about physical drives.
raid-cli> phydrv -a list
ID CH Size        Model                  Serial               F/W
1  0  476940.02MB Hitachi HTE545050B9A30 100726PBN40317EASNPE PB4OC64G
2  1  476940.02MB Hitachi HTE545050B9A30 100726PBN40317EA189E PB4OC64G
Note The DiskID number that is displayed in the RAID management CLI output does not match the Disk number that is displayed on the front panel of the Cisco SRE Service Module: Disk 0 on the front panel is represented as DiskID 1 in the CLI output, and Disk 1 on the front panel is represented as DiskID 2.

logdrv -a list (or logdrv)
Displays information about logical RAID volumes.
raid-cli> logdrv -a list
ID RAID Disks Sectors Size(MB)  DiskID Name
0  JBOD 1     64      476940.02 (1)    JBOD on port 01 (00)
1  JBOD 1     64      476940.02 (2)    JBOD on port 02 (00)
Note The DiskID-to-front-panel mapping described for phydrv also applies to this output.

logdrv -a clear -i 0 (or logdrv -a clear)
Removes all of the logical volumes from the system.
raid-cli> logdrv -a clear -i 0

logdrv -a del -i 0 -l 1 (or logdrv -a del -l 1)
Removes the second logical volume. Note Logical volume IDs start at 0.
raid-cli> logdrv -a del -i 0 -l 1

logdrv -a add -p 1,2 -e 0,0 -z "raid=raid0,name=RAID0,init=quick"
Creates a simple block-level striping RAID 0 volume using the full capacity (a single full extent on each drive) across both the first and second SATA drives. The -z option can also be used to change the stripe block size. Note Cisco SRE-V supports a maximum of two logical volumes.
raid-cli> logdrv -a add -p 1,2 -e 0,0 -z "raid=raid0,name=RAID0,init=quick"
ID RAID  Disks Sectors Size(MB)  DiskID Name
0  RAID0 2     64      953752.00 (1,2)  RAID0

logdrv -a add -p 1,2 -e 0,0 -z "raid=raid1,name=RAID1,init=quick"
Creates a simple 1:1 mirrored RAID 1 volume using the full capacity of both SATA drives. Note You can create a maximum of two logical volumes.
raid-cli> logdrv -a add -i 0 -p 1,2 -e 0,0 -z "name=RAID1,raid=raid1"
ID RAID  Disks Sectors Size(MB)  DiskID Name
0  RAID1 2     64      476876.00 (1,2)  RAID1

logdrv -a add -i 0 -p 1,2 -z "raid=JBOD,init=quick"
Removes the RAID metadata from both SATA drives and makes them non-RAID drives. If a datastore is not created on top of a JBOD volume, that JBOD volume might disappear after a reboot.
raid-cli> logdrv -a add -i 0 -p 1,2 -z "raid=JBOD"
ID RAID Disks Sectors Size(MB)  DiskID Name
0  JBOD 1     64      476940.02 (1)    JBOD on port 01 (00)
1  JBOD 1     64      476940.02 (2)    JBOD on port 02 (00)

logdrv -a list -v (or logdrv -v)
Displays details about the RAID configuration.
raid-cli> logdrv -a list -v
Array ID : 0
Array name : RAID0
Array size : 953752.00 MB
Array stripe size in number of blocks : 64
Array sector size : 512
Array raid mode : RAID0
Array write cache mode : Write Through
Number of disks in array : 2
Disk members with ID in array : (1,2)
Array activity status : Idle
Array functional status : Online
Driver Cache Mode: Write Thru
Driver Lookahead threshold: 0
Driver Consolidation: enabled

event -a list -v -c 10
Displays the oldest 10 events (if available) from the event queue.
raid-cli> event -a list -v -c 10
Time: Jan 29, 2011 06:19:00
EventID: 0x90001
Event Description: Logical drive "RAID0" deleted
Time: Jan 29, 2011 06:18:40
EventID: 0x90000
Event Description: Logical drive "RAID0" created
Time: Jan 29, 2011 06:18:30
EventID: 0x90001
Event Description: Logical drive "RAID0" deleted
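The event listing above follows a regular three-line-per-event shape (Time, EventID, Event Description), so it can be grouped into records for scripting. The following sketch is illustrative only and parses the example output shown above:

```python
# Example output copied from the event -a list -v -c 10 command above.
EVENT_OUTPUT = """\
Time: Jan 29, 2011 06:19:00
EventID: 0x90001
Event Description: Logical drive "RAID0" deleted
Time: Jan 29, 2011 06:18:40
EventID: 0x90000
Event Description: Logical drive "RAID0" created"""

def parse_events(output):
    """Group 'event -a list -v' output into one dict per event.
    A new record starts at each 'Time:' line."""
    events = []
    for line in output.splitlines():
        key, _, value = line.partition(": ")
        if key == "Time":
            events.append({})
        events[-1][key] = value
    return events

for event in parse_events(EVENT_OUTPUT):
    print(event["EventID"], event["Event Description"])
```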
Related Topics
•Understanding RAID Options
•Entering the RAID Management Command Environment
Troubleshooting RAID
This section contains the following:
•Cannot View Datastores
•Rebuild Process Does Not Start
Cannot View Datastores
Problem
After disk migration, reboot, or Cisco SRE-V software upgrade, you are unable to view the datastores in Inventory > Configuration > Storage.
Solution
To resolve this problem, do the following:
1. Rescan the system a couple of times. From the vSphere Client GUI Home page, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. From the Datastores pane (right pane), choose Rescan All... (located on the upper right corner). The Rescan dialog box opens.
d. Click OK.
2. If rescanning the system does not resolve the problem, do the following:
a. Choose Inventory > Configuration.
b. From the Hardware pane (left pane), choose Storage.
c. Click the Add Storage... button. The Select Storage Type wizard page opens.
d. From the right pane, choose Disk/LUN, and then click Next. The Select Disk/LUN wizard page opens.
e. From the right pane, choose a disk, and then click Next. The Select VMFS Mount Options wizard page opens.
f. From the right pane, choose the Assign a New Signature radio button, and then click Next.
g. Choose Free Space, and then click Finish.
The missing datastores display with a modified name, which you can change. For example, datastore_R0 displays as snap-XXXXX-datastore_R0, and datastore_R1 displays as snap-XXXXX-datastore_R1. To change the name of a datastore, right-click the datastore, and then choose Rename.
Related Topics
•Migrating from Non-RAID Mode to RAID 0 Mode
•Migrating from Non-RAID Mode to RAID 1 Mode
•Upgrading the Cisco SRE-V Software
Rebuild Process Does Not Start
Problem
After hot-swapping the faulty RAID 1 disk drive, the rebuild process does not start.
Probable Cause
The RAID configuration shows a new JBOD logical volume, which you must delete.
Solution
To resolve this problem, do the following:
1. Enter the Cisco SRE-V command environment. See the "Entering the Cisco SRE-V Command Environment" section.
2. Enter the RAID management command environment. From the Console Manager interface, use the raid setup command:
You are about to enter RAID management CLI console.
It is strongly recommended to shut down all the VMs and backup the datastore before
changing any RAID configuration.
A manual reboot is required after changing any RAID configuration.
3. (Optional) To view RAID events, enter the event -a list -v -c 10 command:
raid-cli> event -a list -v -c 10
Event Description: Disk 1 plugged in
4. Use the logdrv command to view RAID configuration.
When a valid partition table appears on the newly inserted drive, you might notice that the RAID configuration is unbalanced, as shown in the following example:
ID RAID Disks Sectors Size(MB) DiskID Name
0 RAID1 2 64 476876.00 (-,2) RAID1
1 JBOD 1 64 476940.02 (1) JBOD on port 01 (00)
where (-,2) RAID1 represents an unbalanced array, and ID 1 shows a new JBOD (non-RAID) logical volume, which you must delete.
5. Use the logdrv -a del -l <ID number of the JBOD> command to delete the JBOD logical volume:
raid-cli> logdrv -a del -l 1
Note -l in the command is lower case "L" and stands for logical.
After the JBOD logical volume is deleted, the rebuild process starts automatically on the new disk drive.
The rebuild process might take approximately two hours to complete. You can continue to perform normal system operations during the rebuild process.
Caution Make sure that you do not unplug the functional disk drive during the rebuild process. If you do, you will lose data.
6. (Optional) To check the rebuild status, use the rb -a list command, as shown in the following example:
Rebuild is in progress 10% in logical drive with ID 0 on controller #0!
You can also check the rebuild status in syslog.
Related Topic
•Hot-Swapping the Faulty RAID 1 Disk Drive