You can create a storage profile both at an org level and at a service-profile level. A service profile can have a dedicated storage profile as well as a storage profile at an org level.
In UCS M-Series Modular Servers, servers in a chassis can use storage that is centralized in that chassis. You can select and configure the disks to be used for storage. A logical collection of these physical disks is called a disk group. Disk groups allow you to organize local disks. The storage controller controls the creation and configuration of disk groups.
A disk group configuration policy defines how a disk group is created and configured. The policy specifies the RAID level to be used for the disk group. It also specifies either a manual or an automatic selection of disks for the disk group, and roles for disks. You can use a disk group policy to manage multiple disk groups. However, a single disk group can be managed only by one disk group policy.
A hot spare is an unused extra disk that can be used by a disk group in the case of failure of a disk in the disk group. Hot spares can be used only in disk groups that support a fault-tolerant RAID level.
A disk group can be partitioned into virtual drives. Each virtual drive appears as an individual physical device to the Operating System.
All virtual drives in a disk group must be managed by using a single disk group policy.
A virtual drive can be in one of the following configuration states:
Applying—Creation of the virtual drive is in progress.
Applied—Creation of the virtual drive is complete, or virtual disk policy changes are configured and applied successfully.
Failed to apply—Creation, deletion, or renaming of a virtual drive has failed due to errors in the underlying storage subsystem.
Orphaned—The service profile that contained this virtual drive is deleted.
Not in use—The service profile that contained this virtual drive is in the disassociated state.
A virtual drive can have one of the following operating conditions:
Optimal—The virtual drive operating condition is good. All configured drives are online.
Degraded—The virtual drive operating condition is not optimal. One of the configured drives has failed or is offline.
Note: This state does not occur if you select the always write back mode.
Partially degraded—The operating condition in a RAID 6 virtual drive is not optimal. One of the configured drives has failed or is offline. RAID 6 can tolerate up to two drive failures.
Offline—The virtual drive is not available to the RAID controller. This is essentially a failed state.
Unknown—The state of the virtual drive is not known.
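The state model above can be summarized in a short sketch. The enum members below are illustrative Python identifiers, not values returned by the UCSM API; the helper encodes only what this section states, namely that Offline is essentially a failed state while the other known conditions leave the drive available.

```python
from enum import Enum

class ConfigState(Enum):
    """Configuration states of a virtual drive, as described above."""
    APPLYING = "applying"            # creation in progress
    APPLIED = "applied"              # creation or policy change succeeded
    FAILED_TO_APPLY = "failed-to-apply"
    ORPHANED = "orphaned"            # owning service profile was deleted
    NOT_IN_USE = "not-in-use"        # owning service profile is disassociated

class OperState(Enum):
    """Operating conditions of a virtual drive."""
    OPTIMAL = "optimal"
    DEGRADED = "degraded"
    PARTIALLY_DEGRADED = "partially-degraded"  # RAID 6 with one failed drive
    OFFLINE = "offline"
    UNKNOWN = "unknown"

def available_to_controller(state: OperState) -> bool:
    """Offline is essentially a failed state; Optimal, Degraded, and
    Partially degraded drives remain available. Unknown is treated
    conservatively as unavailable."""
    return state in {OperState.OPTIMAL, OperState.DEGRADED,
                     OperState.PARTIALLY_DEGRADED}
```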
The RAID level of a disk group describes how the data is organized on the disk group for the purpose of ensuring availability, redundancy of data, and I/O performance.
Striping—Segmenting data across multiple physical devices. This improves performance by increasing throughput due to simultaneous device access.
Mirroring—Writing the same data to multiple devices to accomplish data redundancy.
Parity—Storing redundant data on an additional device for the purpose of error correction in the event of device failure. Parity does not provide full redundancy, but it allows for error recovery in some scenarios.
Spanning—Allows multiple drives to function like a larger one. For example, four 20 GB drives can be combined to appear as a single 80 GB drive.
RAID 0 Striped—Data is striped across all disks in the array, providing fast throughput. There is no data redundancy, and all data is lost if any disk fails.
RAID 1 Mirrored—Data is written to two disks, providing complete data redundancy if one disk fails. The maximum array size is equal to the available space on the smaller of the two drives.
RAID 5 Striped Parity—Data is striped across all disks in the array. Part of the capacity of each disk stores parity information that can be used to reconstruct data if a disk fails. RAID 5 provides good data throughput for applications with high read request rates.
RAID 5 distributes parity data blocks among the disks that are part of a RAID-5 group and requires a minimum of three disks.
RAID 6 Striped Dual Parity—Data is striped across all disks in the array and two sets of parity data are used to provide protection against failure of up to two physical disks. In each row of data blocks, two sets of parity data are stored.
Other than the addition of a second parity block, RAID 6 is identical to RAID 5. A minimum of four disks is required for RAID 6.
RAID 10 Mirrored and Striped—RAID 10 uses mirrored pairs of disks to provide complete data redundancy and high throughput rates through block-level striping. RAID 10 combines mirroring (without parity) with block-level striping. A minimum of four disks is required for RAID 10.
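The capacity arithmetic implied by these descriptions can be made concrete. The following sketch applies standard RAID capacity math (it is not UCSM output) and encodes the minimum disk counts stated above; the function name and units are illustrative.

```python
def raid_usable_capacity(raid_level: int, disk_sizes_gb: list[float]) -> float:
    """Usable capacity, in GB, for the RAID levels described above.

    Arrays are limited by their smallest member disk, so the math
    uses min(disk_sizes_gb) for every member.
    """
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    if raid_level == 0:        # striping only, no redundancy
        return n * smallest
    if raid_level == 1:        # mirrored pair
        if n != 2:
            raise ValueError("RAID 1 uses exactly two disks")
        return smallest
    if raid_level == 5:        # striped parity: one disk's worth of parity
        if n < 3:
            raise ValueError("RAID 5 requires a minimum of three disks")
        return (n - 1) * smallest
    if raid_level == 6:        # striped dual parity: two disks' worth
        if n < 4:
            raise ValueError("RAID 6 requires a minimum of four disks")
        return (n - 2) * smallest
    if raid_level == 10:       # mirrored pairs with block-level striping
        if n < 4 or n % 2:
            raise ValueError("RAID 10 requires an even number of disks, minimum four")
        return (n // 2) * smallest
    raise ValueError(f"unsupported RAID level: {raid_level}")

# Capacity-wise this matches the spanning example above: four 20 GB
# drives striped as RAID 0 present 80 GB.
assert raid_usable_capacity(0, [20, 20, 20, 20]) == 80
```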
When you specify a disk group configuration, and do not specify the local disks in it, Cisco UCS Manager determines the disks to be used based on the criteria specified in the disk group configuration policy. Cisco UCS Manager can make this selection of disks in multiple ways.
When all qualifiers match for a set of disks, then disks are selected sequentially according to their slot number. Regular disks and dedicated hot spares are selected by using the lowest numbered slot.
The following is the disk selection process:
Iterate over all local LUNs that require the creation of a new virtual drive. Iteration is based on the following criteria, in order:
Note: If you specify Any as the type of drive, the first available drive is selected. After this drive is selected, subsequent drives will be of a compatible type. For example, if the first drive is SATA, all subsequent drives will be SATA. Cisco UCS Manager Release 2.5 supports only SATA and SAS. Cisco UCS Manager Release 2.5 does not support RAID migration.
Select dedicated hot spares by using the same method as normal disks. Disks are only selected if they are in an Unconfigured Good state.
If a provisioned LUN has the same disk group policy as a deployed virtual drive, then try to deploy the new virtual drive in the same disk group. Otherwise, try to find new disks for deployment.
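The slot-ordered selection described above can be modeled in a few lines. This is an illustrative simulation, not Cisco UCS Manager's implementation; the Disk fields and state strings are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Disk:
    slot: int
    drive_type: str   # "SAS" or "SATA"
    state: str        # for example "unconfigured-good"

def select_disks(disks: list[Disk], needed: int,
                 drive_type: str = "any") -> list[Disk]:
    """Slot-ordered selection as described above: only Unconfigured Good
    disks qualify, the lowest-numbered slots are taken first, and if the
    requested type is 'any' the first selected drive fixes the type for
    all subsequent drives (SAS and SATA cannot be mixed)."""
    chosen: list[Disk] = []
    for disk in sorted(disks, key=lambda d: d.slot):
        if disk.state != "unconfigured-good":
            continue
        if drive_type == "any":
            drive_type = disk.drive_type   # lock in the first type seen
        if disk.drive_type != drive_type:
            continue
        chosen.append(disk)
        if len(chosen) == needed:
            return chosen
    raise RuntimeError("not enough qualifying disks for the disk group")
```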
Some modifications to the LUN configuration are supported even when the LUNs are already deployed on an associated server.
The following are the types of modifications that can be performed:
Creation of a new virtual drive.
Deletion of an existing virtual drive, which is in the orphaned state.
Removing a LUN causes a warning to be displayed. Ensure that you take action to avoid losing data.
Some modifications to existing LUNs are not possible without destroying the original virtual drive and creating a new one, which causes all data to be lost. These types of modifications are not supported:
RAID-level changes that do not support reconstruction. For example, RAID 5 to RAID 1.
Shrinking the size of a virtual drive.
RAID-level changes that support reconstruction, but where there are other virtual drives present on the same drive group.
Disk removal when there is not enough space left on the disk group to accommodate the virtual drive.
Explicit change in the set of disks used by the virtual drive.
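A configuration checker that mirrors this list might look like the following sketch. The RECONSTRUCTABLE set of RAID-level transitions is an assumption for illustration (consult your controller documentation); only the rejected cases come from the list above.

```python
from dataclasses import dataclass

# RAID-level transitions assumed reconstructable in place; this set is
# illustrative, not taken from this guide.
RECONSTRUCTABLE = {(0, 5), (0, 6), (5, 6)}

@dataclass
class LunConfig:
    raid_level: int
    size_gb: int
    disk_slots: frozenset[int]

def modification_supported(old: LunConfig, new: LunConfig,
                           other_vds_in_group: bool) -> bool:
    """Reject the destructive modifications listed above."""
    if new.size_gb < old.size_gb:
        return False  # shrinking a virtual drive
    if new.disk_slots != old.disk_slots:
        return False  # explicit change in the set of disks used
    if new.raid_level != old.raid_level:
        if (old.raid_level, new.raid_level) not in RECONSTRUCTABLE:
            return False  # e.g. RAID 5 to RAID 1: no reconstruction path
        if other_vds_in_group:
            return False  # other virtual drives share the drive group
    return True
```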
The following behavior occurs when this sequence of events takes place:
1. The LUN is successfully deployed, which means that a virtual drive that uses the slot is created.
2. You remove a disk from the slot, possibly because the disk failed.
3. You insert a new working disk into the same slot.
For non-redundant virtual drives (RAID 0), when a physical drive is removed, the state of the virtual drive is Inoperable. When a new working drive is inserted, the new physical drive goes to an Unconfigured Good state.
For non-redundant virtual drives, there is no way to recover the virtual drive. You must delete the virtual drive and re-create it.
For redundant virtual drives (RAID 1, RAID 5, RAID 6, RAID 10) with no hot spare drives assigned, virtual drive mismatch, virtual drive member missing, and local disk missing faults appear until you insert a working physical drive into the same slot from which the old physical drive was removed.
If the size of the new physical drive is greater than or equal to that of the old drive, the storage controller automatically uses the new drive for the virtual drive. The new drive goes into the Rebuilding state. After the rebuild is complete, the virtual drive goes back into the Online state.
For redundant virtual drives (RAID 1, RAID 5, RAID 6, RAID 10) with hot spare drives assigned, when a drive fails, or when you remove a drive, the dedicated hot spare drive, if available, goes into the Rebuilding state with the virtual drive in the Degraded state. After rebuilding is complete, that drive goes to the Online state.
Cisco UCSM raises a disk missing and virtual drive mismatch fault because although the virtual drive is operational, it does not match the physical configuration that Cisco UCSM expects.
If you insert a new disk in the slot with the missing disk, automatic copy back starts from the earlier hot spare disk to the newly inserted disk. After the copy back, the hot spare disk is restored. In this state, all faults are cleared.
If automatic copy back does not start, and the newly inserted disk remains in the Unconfigured Good, JBOD, or Foreign Configuration state, remove the new disk from the slot, reinsert the earlier hot spare disk into the slot, and import foreign configuration. This initiates the rebuilding process and the drive state becomes Online. Now, insert the new disk in the hot spare slot and mark it as hot spare to match it exactly with the information available in Cisco UCSM.
If a hot spare drive is replaced, the new hot spare drive will go to the Unconfigured Good, Unconfigured Bad, JBOD, or Foreign Configuration state.
Cisco UCSM will raise a virtual drive mismatch or virtual drive member mismatch fault because the hot spare drive is in a state different from the state configured in Cisco UCSM.
You must manually clear the fault. To do this, you must perform the following actions:
If you insert new physical drives into unused slots, neither the storage controller nor Cisco UCSM will make use of the new drive even if the drive is in the Unconfigured Good state and there are virtual drives that are missing good physical drives.
The drive will simply go into the Unconfigured Good state. To make use of the new drive, you will need to modify or create LUNs to reference the newly inserted drive.
When you use UCSM to create a virtual drive, UCSM assigns a unique ID that can be used to reliably identify the virtual drive for further operations. UCSM also provides the flexibility to provide a name to the virtual drive at the time of service profile association. Any virtual drive without a service profile or a server reference is marked as an orphan virtual drive.
In addition to a unique ID, a name is assigned to the drive. Names can be assigned in two ways:
When configuring a virtual drive, you can explicitly assign a name that can be referenced in storage profiles.
If you have not preprovisioned a name for the virtual drive, UCSM generates a unique name for the virtual drive.
You can rename virtual drives that are not referenced by any service profile or server.
A LUN is dereferenced when it is no longer used by any service profile. This can occur as part of the following scenarios:
The LUN is no longer referenced from the storage profile
The storage profile is no longer referenced from the service profile
The server is disassociated from the service profile
The server is decommissioned
When the LUN is no longer referenced, but the server is still associated, re-association occurs.
When the service profile that contained the LUN is disassociated, the LUN state is changed to Not in use.
When the service profile that contained the LUN is deleted, the LUN state is changed to Orphaned.
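These two outcomes can be stated as a one-line rule. This tiny sketch is illustrative; the state strings come from this guide, while the function name is made up.

```python
def lun_state_after_dereference(service_profile_deleted: bool) -> str:
    """A disassociated profile leaves the LUN 'Not in use'; deleting the
    profile leaves it 'Orphaned'."""
    return "Orphaned" if service_profile_deleted else "Not in use"

assert lun_state_after_dereference(service_profile_deleted=False) == "Not in use"
assert lun_state_after_dereference(service_profile_deleted=True) == "Orphaned"
```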
Cisco UCS Manager does not support a combination of SAS and SATA drives in storage configurations.
Cisco UCS Manager Release 2.5 supports only a stripe size of 64 KB or more. A stripe size of less than 64 KB results in a failure when a service profile is associated.
In Cisco UCS Manager Release 2.5, the storage controller allows 64 virtual drives per controller and 2 virtual drives per server.
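These limits are easy to validate before association. A minimal sketch follows, using the Release 2.5 values quoted above; the function and message strings are illustrative.

```python
# Release 2.5 limits quoted above.
MAX_VDS_PER_CONTROLLER = 64
MAX_VDS_PER_SERVER = 2
MIN_STRIPE_KB = 64

def validate_storage_config(stripe_kb: int, vds_on_controller: int,
                            vds_on_server: int) -> list[str]:
    """Collect violations that would cause service profile association to fail."""
    errors = []
    if stripe_kb < MIN_STRIPE_KB:
        errors.append(f"stripe size {stripe_kb} KB is below the {MIN_STRIPE_KB} KB minimum")
    if vds_on_controller > MAX_VDS_PER_CONTROLLER:
        errors.append("more than 64 virtual drives on the controller")
    if vds_on_server > MAX_VDS_PER_SERVER:
        errors.append("more than 2 virtual drives for the server")
    return errors
```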
Configuring Storage Profiles
Configuring a disk group involves the following:
You can configure the disks in a disk group policy automatically or manually.
You can create storage profile policies from the Storage tab in the Navigation pane. You can also configure the default storage profile that is specific to a service profile from the Servers tab.
Step 1 | In the Navigation pane, click the Storage tab.
Step 2 | On the Storage tab, expand Storage Profiles.
Step 3 | Expand the node for the organization where you want to create the storage profile. If the system does not include multitenancy, expand the root node.
Step 4 | Right-click the organization and select Create Storage Profile.
Step 5 | In the Create Storage Profile dialog box, specify the storage profile Name. You can provide an optional Description for this storage profile.
Step 6 | (Optional) In the Storage Items area, Create Local LUNs and add them to this storage profile.
Step 7 | Click OK.
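If you automate UCSM rather than click through the GUI, the same object can be created with the Cisco ucsmsdk Python SDK. This is a minimal sketch, assuming the LstorageProfile class from the SDK's object model; verify class and property names against your SDK version, and replace the host name and credentials, which are placeholders.

```python
# Minimal sketch using the Cisco ucsmsdk Python SDK; verify class and
# property names against your SDK version.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lstorage.LstorageProfile import LstorageProfile

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()

# Create the storage profile under the root org, as in Steps 3-5 above.
profile = LstorageProfile(parent_mo_or_dn="org-root",
                          name="demo-storage-profile",
                          descr="Created from the Python SDK")
handle.add_mo(profile, modify_present=True)
handle.commit()
handle.logout()
```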
Step 1 | In the Navigation pane, click the Storage tab.
Step 2 | On the Storage tab, expand Storage Profiles.
Step 3 | Expand the node for the organization that contains the storage profile that you want to delete.
Step 4 | Right-click the storage profile that you want to delete and select Delete.
Step 5 | Click Yes in the confirmation box that appears.
You can create local LUNs within a storage profile policy from the Storage tab in the Navigation pane. You can also create local LUNs within the default storage profile that is specific to a service profile from the Servers tab.
Step 1 | In the Navigation pane, click the Storage tab.
Step 2 | On the Storage tab, expand Storage Profiles.
Step 3 | Expand the node for the organization that contains the storage profile within which you want to create a local LUN.
Step 4 | In the Work pane, click the General tab.
Step 5 | In the Actions area, click Create Local LUN.
Step 6 | In the Create Local LUN dialog box, complete the following fields:
Step 7 | (Optional) Click Create Disk Group Policy to create a new disk group policy for this local LUN.
Step 8 | Click OK.
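A hedged SDK equivalent follows. The LstorageDasScsiLun class exists in the ucsmsdk object model, but the DN format ("profile-<name>") and the size and local_disk_policy_name property values shown here are assumptions to verify against your SDK version.

```python
# Minimal sketch: add a local LUN to an existing storage profile with
# ucsmsdk. Property names and the parent DN format are assumptions.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lstorage.LstorageDasScsiLun import LstorageDasScsiLun

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()

lun = LstorageDasScsiLun(
    parent_mo_or_dn="org-root/profile-demo-storage-profile",  # assumed DN
    name="demo-lun",
    size="100",                          # assumed: size in GB
    local_disk_policy_name="demo-dgp")   # disk group policy, as in Step 7
handle.add_mo(lun, modify_present=True)
handle.commit()
handle.logout()
```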
You can change the order in which local LUNs are made visible to the server. This operation reboots the server.
Step 1 | In the Navigation pane, click the Storage tab.
Step 2 | On the Storage tab, expand Storage Profiles.
Step 3 | Expand the node for the organization that contains the storage profile within which you want to reorder local LUNs.
Step 4 | Expand Local LUNs for the storage profile that you want and select the LUN that you want to reorder.
Step 5 | In the Work pane, click the General tab.
Step 6 | In the Properties area, change the Order of the local LUN.
Step 7 | Click Save Changes.
Step 1 | In the Navigation pane, click the Storage tab.
Step 2 | On the Storage tab, expand Storage Profiles.
Step 3 | Expand the node for the organization that contains the storage profile from which you want to delete a local LUN.
Step 4 | Expand Local LUNs for the storage profile that you want and select the LUN that you want to delete.
Step 5 | Right-click the LUN that you want to delete and select Delete. A confirmation dialog box appears.
Step 6 | Click Yes.
You can associate a storage profile with an existing service profile or a new service profile. Creating a Service Profile with the Expert Wizard provides more information about associating a storage profile with a new service profile.
Step 1 | In the Navigation pane, click the Servers tab.
Step 2 | On the Servers tab, expand Service Profiles.
Step 3 | Expand the node for the organization that contains the service profile that you want to associate with a storage profile.
Step 4 | Choose the service profile that you want to associate with a storage profile.
Step 5 | In the Work pane, click the Storage tab.
Step 6 | Click the LUN Configuration subtab.
Step 7 | In the Actions area, click Modify Storage Profile. The Modify Storage Profile dialog box appears.
Step 8 | Click the Storage Profile Policy tab.
Step 9 | To associate an existing storage profile with this service profile, select the storage profile that you want to associate from the Storage Profile drop-down list, and click OK. The details of the storage profile appear in the Storage Items area.
Step 10 | To create a new storage profile and associate it with this service profile, click Create Storage Profile, complete the required fields, and click OK. Creating a Storage Profile provides more information on creating a new storage profile.
Step 11 | (Optional) To dissociate the service profile from a storage profile, select No Storage Profile from the Storage Profile drop-down list, and click OK.
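Scripted association can be sketched with the SDK as well. LstorageProfileBinding and its storage_profile_name property are taken from the ucsmsdk object model but should be verified against your SDK version; the service profile DN is a placeholder.

```python
# Minimal sketch: associate an existing storage profile with a service
# profile via ucsmsdk. Class and property names are assumptions to verify.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lstorage.LstorageProfileBinding import LstorageProfileBinding

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()

binding = LstorageProfileBinding(
    parent_mo_or_dn="org-root/ls-demo-sp",       # service profile DN (placeholder)
    storage_profile_name="demo-storage-profile")  # profile created earlier
handle.add_mo(binding, modify_present=True)
handle.commit()
handle.logout()
```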
Storage profiles can be defined under an org and as a dedicated storage profile under a service profile. Thus, a service profile inherits local LUNs from both possible storage profiles, and can have a maximum of 2 such local LUNs. You can display the details of all local LUNs inherited by a service profile by using the following procedure:
Step 1 | In the Navigation pane, click the Servers tab.
Step 2 | On the Servers tab, expand Service Profiles.
Step 3 | Expand the node for the organization that contains the service profile that you want to display.
Step 4 | Choose the service profile whose inherited local LUNs you want to display.
Step 5 | In the Work pane, click the Storage tab.
Step 6 | Click the LUN Configuration subtab, and then click the Local LUNs tab.
Step 1 | In the Navigation pane, click the Equipment tab.
Step 2 | On the Equipment tab, expand the chassis node that contains the server.
Step 3 | Choose the server for which you want to display detailed information about all the LUNs that it uses.
Step 4 | In the Work pane, click the General tab.
Step 5 | Expand the Storage Details area. Details of the LUNs that the server uses appear in the LUN References table.
Step 1 | In the Navigation pane, click the Equipment tab.
Step 2 | On the Equipment tab, expand the chassis node that contains the disks.
Step 3 | In the Work pane, click the Storage tab.
Step 4 | Click the Disks subtab.
Step 5 | Right-click the disk that you want and select one of the following operations:
The following operations can be performed only on orphaned virtual drives:
Step 1 | In the Navigation pane, click the Equipment tab.
Step 2 | On the Equipment tab, expand the chassis node that contains the orphaned virtual drive.
Step 3 | In the Work pane, click the Storage tab.
Step 4 | Click the LUNs subtab.
Step 5 | Right-click the virtual drive that you want and select Rename Referenced LUN.
Step 6 | In the Rename Referenced LUN dialog box that appears, enter the new LUN Name.
Step 7 | Click OK.
Local LUN Operations in a Service Profile
Step 1 | In the Navigation pane, click the Servers tab.
Step 2 | On the Servers tab, expand Service Profiles.
Step 3 | In the Work pane, click the Storage tab.
Step 4 | Click the LUN Configuration tab.
Step 5 | In the Local LUNs subtab, right-click the LUN for which you want to preprovision a LUN name and select Pre-Provision LUN Name.
Step 6 | In the Set Pre-Provision LUN Name dialog box, enter the LUN name.
Step 7 | Click OK.
Step 1 | In the Navigation pane, click the Servers tab.
Step 2 | On the Servers tab, expand Service Profiles.
Step 3 | In the Work pane, click the Storage tab.
Step 4 | Click the LUN Configuration tab.
Step 5 | In the Local LUNs subtab, right-click the LUN that you want to claim and select Claim Orphan LUN.
Step 6 | In the Claim Orphan LUN dialog box that appears, select an orphaned LUN to claim ownership.
Step 7 | Click OK.
Step 1 | In the Navigation pane, click the Servers tab.
Step 2 | On the Servers tab, expand Service Profiles.
Step 3 | In the Work pane, click the Storage tab.
Step 4 | Click the LUN Configuration tab.
Step 5 | In the Local LUNs subtab, right-click the LUN that you want to deploy or undeploy and select Set Admin State.
Step 6 | In the Set Admin State dialog box that appears, select Online to deploy a LUN or Undeployed to undeploy a LUN.
Step 7 | Click OK.
Step 1 | In the Navigation pane, click the Servers tab.
Step 2 | On the Servers tab, expand Service Profiles.
Step 3 | In the Work pane, click the Storage tab.
Step 4 | Click the LUN Configuration tab.
Step 5 | In the Local LUNs subtab, right-click the LUN for which you want to rename the referenced LUN, and select Rename Referenced LUN.
Step 6 | In the Rename Referenced LUN dialog box that appears, enter the new name of the referenced LUN.
Step 7 | Click OK.