- Preface
- New and Changed Information
- Overview of Cisco Unified Computing System
- Overview of Cisco UCS Manager
- Overview of Cisco UCS Manager GUI
- Configuring the Fabric Interconnects
- Configuring Ports and Port Channels
- Configuring Communication Services
- Configuring Authentication
- Configuring Organizations
- Configuring Role-Based Access Control
- Configuring DNS Servers
- Configuring System-Related Policies
- Managing Licenses
- Managing Virtual Interfaces
- Registering Cisco UCS Domains with Cisco UCS Central
- LAN Uplinks Manager
- VLANs
- Configuring LAN Pin Groups
- Configuring MAC Pools
- Configuring Quality of Service
- Configuring Network-Related Policies
- Configuring Upstream Disjoint Layer-2 Networks
- Configuring Named VSANs
- Configuring SAN Pin Groups
- Configuring WWN Pools
- Configuring Storage-Related Policies
- Configuring Fibre Channel Zoning
- Configuring Server-Related Pools
- Setting the Management IP Address
- Configuring Server-Related Policies
- Configuring Server Boot
- Deferring Deployment of Service Profile Updates
- Service Profiles
- Configuring Storage Profiles
- Managing Power in Cisco UCS
- Managing Time Zones
- Managing the Chassis
- Managing Blade Servers
- Managing Rack-Mount Servers
- Starting the KVM Console
- CIMC Session Management
- Managing the I/O Modules
- Backing Up and Restoring the Configuration
- Recovering a Lost Password
- Storage Profiles
- Disk Groups and Disk Group Configuration Policies
- RAID Levels
- Automatic Disk Selection
- Supported LUN Modifications
- Unsupported LUN Modifications
- Disk Insertion Handling
- Virtual Drive Naming
- LUN Dereferencing
- Controller Constraints and Limitations
- Configuring Storage Profiles
- Configuring a Disk Group Policy
- Creating a Storage Profile
- Deleting a Storage Profile
- Configuring Local LUNs
- PCH SSD Controller Definition
- Associating a Storage Profile with an Existing Service Profile
- Displaying Details of All Local LUNs Inherited By a Service Profile
- Importing Foreign Configurations for a RAID Controller on a Blade Server
- Importing Foreign Configurations for a RAID Controller on a Rack Server
- Configuring Local Disk Operations on a Blade Server
- Configuring Local Disk Operations on a Rack Server
- Configuring Virtual Drive Operations
- Boot Policy for Local Storage
- Local LUN Operations in a Service Profile
Configuring Storage Profiles
This part contains the following chapters:
- Storage Profiles
- Disk Groups and Disk Group Configuration Policies
- RAID Levels
- Automatic Disk Selection
- Supported LUN Modifications
- Unsupported LUN Modifications
- Disk Insertion Handling
- Virtual Drive Naming
- LUN Dereferencing
- Controller Constraints and Limitations
- Configuring Storage Profiles
Storage Profiles
You can create a storage profile both at an org level and at a service-profile level. A service profile can have a dedicated storage profile as well as a storage profile at an org level.
Disk Groups and Disk Group Configuration Policies
You can select and configure the disks to be used for storage. A logical collection of these physical disks is called a disk group. Disk groups allow you to organize local disks. The storage controller controls the creation and configuration of disk groups.
A disk group configuration policy defines how a disk group is created and configured. The policy specifies the RAID level to be used for the disk group. It also specifies either a manual or an automatic selection of disks for the disk group, and roles for disks. You can use a disk group policy to manage multiple disk groups. However, a single disk group can be managed only by one disk group policy.
A hot spare is an unused extra disk that a disk group can use to replace a failed disk in the group. Hot spares can be used only in disk groups that support a fault-tolerant RAID level. In addition, a disk can be allocated as a global hot spare, which means that it can be used by any disk group.
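As an illustration of the relationships described above, the following Python sketch models a disk group configuration policy and the disk groups it manages. The class and field names are hypothetical and are not part of any Cisco UCS API.

```python
# Hypothetical illustration of the disk group concepts described above;
# class and field names are not part of any Cisco UCS API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DiskGroupConfigPolicy:
    name: str
    raid_level: str                                 # for example "RAID5"
    manual_disk_slots: Optional[List[int]] = None   # None means automatic disk selection
    num_dedicated_hot_spares: int = 0
    use_global_hot_spares: bool = False

@dataclass
class DiskGroup:
    policy: DiskGroupConfigPolicy                   # a disk group is managed by exactly one policy
    disk_slots: List[int] = field(default_factory=list)
    dedicated_hot_spare_slots: List[int] = field(default_factory=list)

# One policy can manage several disk groups, but each group has only one policy.
policy = DiskGroupConfigPolicy(name="dg-raid5", raid_level="RAID5",
                               num_dedicated_hot_spares=1)
group_a = DiskGroup(policy=policy, disk_slots=[1, 2, 3], dedicated_hot_spare_slots=[4])
group_b = DiskGroup(policy=policy, disk_slots=[5, 6, 7])
```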
Virtual Drives
A disk group can be partitioned into virtual drives. Each virtual drive appears as an individual physical device to the operating system.
All virtual drives in a disk group must be managed by using a single disk group policy.
Configuration States
- Applying—Creation of the virtual drive is in progress.
- Applied—Creation of the virtual drive is complete, or virtual disk policy changes are configured and applied successfully.
- Failed to apply—Creation, deletion, or renaming of a virtual drive has failed due to errors in the underlying storage subsystem.
- Orphaned—The service profile that contained this virtual drive is deleted or the service profile is no longer associated with a storage profile.
Deployment States
Operability States
- Optimal—The virtual drive operating condition is good. All configured drives are online.
- Degraded—The virtual drive operating condition is not optimal. One of the configured drives has failed or is offline.
- Cache-degraded—The virtual drive has been created with a write policy of write back mode, but the BBU has failed, or there is no BBU.
Note
This state does not occur if you select the always write back mode.
- Partially degraded—The operating condition in a RAID 6 virtual drive is not optimal. One of the configured drives has failed or is offline. RAID 6 can tolerate up to two drive failures.
- Offline—The virtual drive is not available to the RAID controller. This is essentially a failed state.
- Unknown—The state of the virtual drive is not known.
Presence States
RAID Levels
The RAID level of a disk group describes how the data is organized on the disk group for the purpose of ensuring availability, redundancy of data, and I/O performance.
- Striping—Segmenting data across multiple physical devices. This improves performance by increasing throughput due to simultaneous device access.
- Mirroring—Writing the same data to multiple devices to accomplish data redundancy.
- Parity—Storing redundant data on an additional device for the purpose of error correction in the event of device failure. Parity does not provide full redundancy, but it allows for error recovery in some scenarios.
- Spanning—Allows multiple drives to function like a larger one. For example, four 20 GB drives can be combined to appear as a single 80 GB drive.
The supported RAID levels include the following:
- RAID 0 Striped—Data is striped across all disks in the array, providing fast throughput. There is no data redundancy, and all data is lost if any disk fails.
- RAID 1 Mirrored—Data is written to two disks, providing complete data redundancy if one disk fails. The maximum array size is equal to the available space on the smaller of the two drives.
- RAID 5 Striped Parity—Data is striped across all disks in the array. Part of the capacity of each disk stores parity information that can be used to reconstruct data if a disk fails. RAID 5 provides good data throughput for applications with high read request rates.
RAID 5 distributes parity data blocks among the disks that are part of a RAID 5 group and requires a minimum of three disks.
- RAID 6 Striped Dual Parity—Data is striped across all disks in the array, and two sets of parity data are used to provide protection against the failure of up to two physical disks. In each row of data blocks, two sets of parity data are stored.
Other than the addition of a second parity block, RAID 6 is identical to RAID 5. A minimum of four disks is required for RAID 6.
- RAID 10 Mirrored and Striped—RAID 10 uses mirrored pairs of disks to provide complete data redundancy and high throughput rates through block-level striping. RAID 10 is mirroring without parity and block-level striping. A minimum of four disks is required for RAID 10.
- RAID 50 Striped Parity and Striped—Data is striped across multiple striped parity disk sets to provide high throughput and multiple disk failure tolerance.
- RAID 60 Striped Dual Parity and Striped—Data is striped across multiple striped dual parity disk sets to provide high throughput and greater disk failure tolerance.
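To make these trade-offs concrete, the following Python sketch estimates the minimum disk count and approximate usable capacity for several of the RAID levels above, assuming equally sized disks. RAID 50 and RAID 60 are nested combinations of these levels and are omitted. This sketch is illustrative only and is not part of Cisco UCS Manager.

```python
# Illustrative only: approximate usable capacity and minimum disk count for
# the RAID levels described above, assuming N equally sized disks.
def raid_usable_capacity(level: str, num_disks: int, disk_size_gb: float) -> float:
    rules = {
        # level: (minimum disks, usable-capacity function)
        "RAID0":  (1, lambda n: n * disk_size_gb),        # striping, no redundancy
        "RAID1":  (2, lambda n: disk_size_gb),            # mirrored pair
        "RAID5":  (3, lambda n: (n - 1) * disk_size_gb),  # one disk's worth of parity
        "RAID6":  (4, lambda n: (n - 2) * disk_size_gb),  # two disks' worth of parity
        "RAID10": (4, lambda n: (n // 2) * disk_size_gb), # mirrored pairs, striped
    }
    min_disks, usable = rules[level]
    if num_disks < min_disks:
        raise ValueError(f"{level} requires at least {min_disks} disks")
    return usable(num_disks)

print(raid_usable_capacity("RAID5", 4, 900))   # 2700.0 GB usable out of 3600 GB raw
print(raid_usable_capacity("RAID6", 6, 900))   # 3600.0 GB usable
```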
Automatic Disk Selection
When you specify a disk group configuration, and do not specify the local disks in it, Cisco UCS Manager determines the disks to be used based on the criteria specified in the disk group configuration policy. Cisco UCS Manager can make this selection of disks in multiple ways.
When all qualifiers match for a set of disks, then disks are selected sequentially according to their slot number. Regular disks and dedicated hot spares are selected by using the lowest numbered slot.
The following is the disk selection process:
- Iterate over all local LUNs that require the creation of a new virtual drive. Iteration is based on the following criteria, in order:
- Select regular disks depending on the minimum number of disks and minimum disk size. Disks are selected sequentially starting from the lowest numbered disk slot that satisfies the search criteria.
Note
If you specify Any as the type of drive, the first available drive is selected. After this drive is selected, subsequent drives will be of a compatible type. For example, if the first drive was SATA, all subsequent drives would be SATA.
- Select dedicated hot spares by using the same method as normal disks. Disks are only selected if they are in an Unconfigured Good state.
- If a provisioned LUN has the same disk group policy as a deployed virtual drive, then try to deploy the new virtual drive in the same disk group. Otherwise, try to find new disks for deployment.
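The slot-ordered selection rules above can be summarized in code. The following Python sketch is a simplified, hypothetical model of that selection; it is not the actual Cisco UCS Manager implementation, and all names are illustrative.

```python
# Simplified, hypothetical model of the automatic disk selection described above.
# Disks are chosen in ascending slot order; this is not UCS Manager code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LocalDisk:
    slot: int
    size_gb: float
    drive_type: str          # for example "SATA", "SAS", "SSD"
    state: str               # for example "unconfigured-good"

def select_disks(disks: List[LocalDisk], min_disks: int, min_size_gb: float,
                 drive_type: str = "Any", num_hot_spares: int = 0):
    candidates = sorted(disks, key=lambda d: d.slot)          # lowest numbered slot first
    chosen: List[LocalDisk] = []
    chosen_type: Optional[str] = None if drive_type == "Any" else drive_type

    for disk in candidates:
        if disk.state != "unconfigured-good" or disk.size_gb < min_size_gb:
            continue
        if chosen_type is None:
            chosen_type = disk.drive_type                     # first match fixes the type
        if disk.drive_type != chosen_type:
            continue                                          # later disks must be compatible
        chosen.append(disk)

    if len(chosen) < min_disks + num_hot_spares:
        raise RuntimeError("not enough qualifying disks for this disk group policy")

    regular = chosen[:min_disks]                              # regular disks first
    hot_spares = chosen[min_disks:min_disks + num_hot_spares] # then dedicated hot spares
    return regular, hot_spares
```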
Supported LUN Modifications
Some modifications to the LUN configuration are supported even when the LUNs are already deployed on an associated server.
The following are the types of modifications that can be performed:
- Creation of a new virtual drive.
- Deletion of an existing virtual drive that is in the orphaned state.
- Non-disruptive changes to an existing virtual drive. These changes can be made on an existing virtual drive without loss of data and without performance degradation.
Removing a LUN causes a warning to be displayed. Ensure that you take action to avoid loss of data.
Unsupported LUN Modifications
Some modifications to existing LUNs are not possible without destroying the original virtual drive and creating a new one. All data is lost during such modifications, and they are not supported. Unsupported modifications include the following:
- RAID level changes that do not support reconstruction. For example, RAID 5 to RAID 1.
- Shrinking the size of a virtual drive.
- RAID level changes that support reconstruction, but where there are other virtual drives present on the same drive group.
- Disk removal when there is not enough space left on the disk group to accommodate the virtual drive.
- Explicit change in the set of disks used by the virtual drive.
Disk Insertion Handling
When the following sequence of events takes place:
- The LUN is created in one of the following ways:
- The LUN is successfully deployed, which means that a virtual drive that uses the slot is created.
- You remove a disk from the slot, possibly because the disk failed.
- You insert a new working disk into the same slot.
The behavior that follows depends on how the affected virtual drive is configured, as described in the following topics:
- Non-Redundant Virtual Drives
- Redundant Virtual Drives with No Hot Spare Drives
- Redundant Virtual Drives with Hot Spare Drives
- Replacing Hot Spare Drives
- Inserting Physical Drives into Unused Slots
Non-Redundant Virtual Drives
For non-redundant virtual drives (RAID 0), when a physical drive is removed, the state of the virtual drive is Inoperable. When a new working drive is inserted, the new physical drive goes to an Unconfigured Good state.
For non-redundant virtual drives, there is no way to recover the virtual drive. You must delete the virtual drive and re-create it.
Redundant Virtual Drives with No Hot Spare Drives
For redundant virtual drives (RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, RAID 60) with no hot spare drives assigned, virtual drive mismatch, virtual drive member missing, and local disk missing faults appear until you insert a working physical drive into the same slot from which the old physical drive was removed.
If the new physical drive's size is greater than or equal to that of the old drive, the storage controller automatically uses the new drive for the virtual drive. The new drive goes into the Rebuilding state. After the rebuild is complete, the virtual drive goes back into the Online state.
Redundant Virtual Drives with Hot Spare Drives
For redundant virtual drives (RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, RAID 60) with hot spare drives assigned, when a drive fails, or when you remove a drive, the dedicated hot spare drive, if available, goes into the Rebuilding state with the virtual drive in the Degraded state. After rebuilding is complete, that drive goes to the Online state.
Cisco UCSM raises a disk missing and virtual drive mismatch fault because although the virtual drive is operational, it does not match the physical configuration that Cisco UCSM expects.
If you insert a new disk in the slot with the disk missing, automatic copy back starts from the earlier hot spare disk to the newly inserted disk. After copy back, the hot spare disk is restored. In this state, all faults are cleared.
If automatic copy back does not start, and the newly inserted disk remains in the Unconfigured Good, JBOD, or Foreign Configuration state, remove the new disk from the slot, reinsert the earlier hot spare disk into the slot, and import foreign configuration. This initiates the rebuilding process and the drive state becomes Online. Now, insert the new disk in the hot spare slot and mark it as hot spare to match it exactly with the information available in Cisco UCSM.
Replacing Hot Spare Drives
If a hot spare drive is replaced, the new hot spare drive will go to the Unconfigured Good, Unconfigured Bad, JBOD, or Foreign Configuration state.
Cisco UCSM will raise a virtual drive mismatch or virtual drive member mismatch fault because the hot spare drive is in a state different from the state configured in Cisco UCSM.
You must manually clear the fault. To do this, you must perform the following actions:
Inserting Physical Drives into Unused Slots
If you insert new physical drives into unused slots, neither the storage controller nor Cisco UCSM will make use of the new drive even if the drive is in the Unconfigured Good state and there are virtual drives that are missing good physical drives.
The drive will simply go into the Unconfigured Good state. To make use of the new drive, you will need to modify or create LUNs to reference the newly inserted drive.
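The replacement behavior described in the preceding topics can be summarized as a small decision function. The Python sketch below is only a restatement of the documented behavior, with hypothetical names; it is not Cisco UCS Manager logic.

```python
# Summary of the disk insertion handling described above (illustrative only).
def drive_replacement_outcome(raid_redundant: bool, has_hot_spare: bool,
                              new_drive_size_ok: bool,
                              inserted_into_unused_slot: bool) -> str:
    if inserted_into_unused_slot:
        # New drives in unused slots stay Unconfigured Good until a LUN references them.
        return "drive stays Unconfigured Good; modify or create LUNs to use it"
    if not raid_redundant:
        # RAID 0: the virtual drive is Inoperable and cannot be recovered.
        return "delete the virtual drive and re-create it"
    if has_hot_spare:
        # The hot spare rebuilds the data; inserting a new disk triggers copy back.
        return "hot spare rebuilds; copy back restores the hot spare after insertion"
    if new_drive_size_ok:
        # Same-slot replacement of equal or larger size rebuilds automatically.
        return "new drive rebuilds automatically; virtual drive returns to Online"
    return "faults persist until a suitable drive is inserted in the same slot"
```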
Virtual Drive Naming
When you use UCSM to create a virtual drive, UCSM assigns a unique ID that can be used to reliably identify the virtual drive for further operations. UCSM also gives you the flexibility to provide a name for the virtual drive at the time of service profile association. Any virtual drive without a service profile or a server reference is marked as an orphan virtual drive.
In addition to a unique ID, a name is assigned to the drive. Names can be assigned in two ways:
- When configuring a virtual drive, you can explicitly assign a name that can be referenced in storage profiles.
- If you have not preprovisioned a name for the virtual drive, UCSM generates a unique name for the virtual drive.
You can rename virtual drives that are not referenced by any service profile or server.
LUN Dereferencing
A LUN is dereferenced when it is no longer used by any service profile. This can occur as part of the following scenarios:
- The LUN is no longer referenced from the storage profile
- The storage profile is no longer referenced from the service profile
- The server is disassociated from the service profile
- The server is decommissioned
When the LUN is no longer referenced, but the server is still associated, re-association occurs.
When the service profile that contained the LUN is deleted, the LUN state is changed to Orphaned.
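The dereferencing outcomes above amount to a simple rule, sketched here in Python purely for illustration; the function and names are hypothetical.

```python
# Illustrative summary of the LUN dereferencing behavior described above.
def dereferenced_lun_outcome(service_profile_deleted: bool,
                             server_still_associated: bool) -> str:
    if service_profile_deleted:
        return "Orphaned"                 # the LUN loses its service profile reference
    if server_still_associated:
        return "re-association occurs"    # LUN no longer referenced, server still associated
    return "dereferenced"
```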
Controller Constraints and Limitations
- For Cisco UCS C240, C220, C24, and C22 servers, the storage controller allows 24 virtual drives per server. For all other servers, the storage controller allows 16 virtual drives per server.
- In Cisco UCS Manager Release 2.2(4), blade servers do not support drives with a block size of 4K, but rack-mount servers support such drives. If a drive with a block size of 4K is inserted into a blade server, discovery fails and the following error message appears: Unable to get Scsi Device Information from the system.
Configuring Storage Profiles
Configuring a Disk Group Policy
Configuring a disk group involves specifying the RAID level and selecting the disks for the disk group. You can configure the disks in a disk group policy automatically or manually.
Creating a Storage Profile
You can create storage profile policies from the Storage tab in the Navigation pane. You can also configure the default storage profile that is specific to a service profile from the Servers tab.
Step 1 | In the Navigation pane, click Storage. |
Step 2 | Expand |
Step 3 | Expand the node for the organization where you want to create the storage profile. If the system does not include multitenancy, expand the root node. |
Step 4 | Right-click the organization and select Create Storage Profile. |
Step 5 | In the Create Storage Profile dialog box, specify the storage profile Name. You can provide an optional Description for this storage profile. |
Step 6 | (Optional) In the Storage Items area, Create Local LUNs and add them to this storage profile. |
Step 7 | Click OK. |
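If you prefer to script this procedure instead of using the GUI, the equivalent operation can be sketched with the Cisco UCS Python SDK (ucsmsdk). The hostname and credentials below are placeholders, and the LstorageProfile module path and parameters follow standard ucsmsdk conventions but are assumptions; verify them against your SDK version and the UCS Manager object model.

```python
# Sketch of creating an org-level storage profile with the Cisco UCS Python SDK
# (ucsmsdk). The LstorageProfile class path and parameters are assumptions based
# on the lstorageProfile managed object; verify against your SDK version.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lstorage.LstorageProfile import LstorageProfile

handle = UcsHandle("ucsm-vip.example.com", "admin", "password")  # placeholder values
handle.login()

profile = LstorageProfile(parent_mo_or_dn="org-root",
                          name="DataCenter-StgProf",
                          descr="Storage profile created programmatically")
handle.add_mo(profile, modify_present=True)
handle.commit()

handle.logout()
```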
Creating a Specific Storage Profile
Step 1 | Expand . |
Step 2 | Expand the node for the organization that contains the service profile for which you want to create a specific storage profile. If the system does not include multitenancy, expand the root node. |
Step 3 | Choose the service profile for which you want to create a specific storage profile. |
Step 4 | In the Work pane, click the tab. |
Step 5 | In the Actions area, click Modify Storage Profile. |
Step 6 | In the Modify Storage Profile dialog box, click the Specific Storage Profile tab. |
Step 7 | Click Create Specific Storage Profile. |
Step 8 | (Optional) In the Specific Storage Profile area, complete the Description field to set the description of the storage profile. Each service profile can have only one specific storage profile. Hence, the name of this storage profile is provided by default. |
Step 9 | In the Storage Items area, Create Local LUNs and add them to this storage profile. |
Step 10 | Click OK. |
Step 11 | If a confirmation dialog box displays, click Yes. |
Deleting a Storage Profile
Command or Action | Purpose
---|---
Configuring Local LUNs
You can create local LUNs within a storage profile policy from the Storage tab in the Navigation pane. You can also create local LUNs within the default storage profile that is specific to a service profile from the Servers tab.
Step 1 | In the Navigation pane, click Storage. |
Step 2 | Expand . |
Step 3 | Expand the node for the organization that contains the storage profile within which you want to create a local LUN. |
Step 4 | In the Work pane, click the General tab. |
Step 5 | In the Actions area, click Create Local LUN. |
Step 6 | In the Create Local LUN dialog box, complete the following fields: |
Step 7 | (Optional) Click Create Disk Group Policy to create a new disk group policy for this local LUN. |
Step 8 | Click OK. |
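A local LUN inside an existing storage profile can be scripted in the same way with ucsmsdk. The LstorageDasScsiLun class path, its fields, and the profile DN format below are assumptions taken from the UCS Manager object model (lstorageDasScsiLun); verify them against your SDK version before use.

```python
# Sketch of adding a local LUN to an existing storage profile with ucsmsdk.
# The LstorageDasScsiLun class path, fields, and the profile DN format are
# assumptions based on the lstorageDasScsiLun managed object; verify before use.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lstorage.LstorageDasScsiLun import LstorageDasScsiLun

handle = UcsHandle("ucsm-vip.example.com", "admin", "password")  # placeholder values
handle.login()

lun = LstorageDasScsiLun(parent_mo_or_dn="org-root/profile-DataCenter-StgProf",  # assumed DN format
                         name="Boot-LUN",
                         size="100",                          # size in GB, as a string
                         local_disk_policy_name="dg-raid1")   # existing disk group policy
handle.add_mo(lun, modify_present=True)
handle.commit()

handle.logout()
```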
Deleting Local LUNs
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Storage. | |
Step 2 | Expand | |
Step 3 | Expand the node for the organization that contains the storage profile from which you want to delete a local LUN. | |
Step 4 | Expand Local LUNs for the storage profile that you want and select the LUN that you want to delete. | |
Step 5 | Right-click the LUN that you want to delete and select Delete. | A confirmation dialog box appears. |
Step 6 | Click Yes. |
PCH SSD Controller Definition
The Cisco UCS Manager Platform Controller Hub (PCH) Solid State Drive (SSD) Controller Definition provides local storage configuration in storage profiles, where you can configure all the disks in a single RAID or JBOD disk array.
The PCH Controller Definition configuration provides the following features:
- Ability to configure a single LUN RAID across two internal SSDs connected to the onboard PCH controller
- A way to configure the controller in two modes: AHCI (JBOD) and SWRAID (RAID)
- Ability to configure the PCH storage device in an Embedded Local LUN and Embedded Local Disk boot policy so that precise control of the boot order is achieved even when other bootable local storage devices are present in the server. Do not use the Local LUN or the Local JBOD options to boot from PCH disks.
- Scrub policy support for the internal SSD drives. This is applicable only for the SWRAID mode. This does not apply to the AHCI and NORAID PCH controller modes.
- Firmware upgrade support for the internal SSD drives. Disk firmware upgrade is supported only when the PCH controller is in SWRAID mode. It is not supported for AHCI mode.
You configure PCH controller SSDs in a storage profile policy. You enable or disable protect configuration, which saves the LUN configuration even after a service profile disassociation. You choose a controller mode. The PCH controller configuration supports only two RAID options: RAID0 and RAID1. Use the No RAID configuration option for AHCI mode, in which all the disks connected to the controller are configured as JBOD disks. The configuration deployment happens as part of the storage profile association to a service profile process.
Cisco UCS Manager supports the following internal SSDs:
Embedded RAID Hub Controllers are split into two controllers: SATA and sSATA (Secondary SATA). Cisco UCS Manager support for the PCH Controller Definition is limited to the first SATA controller and the two internal SSDs that are embedded into the riser and connected to the first SATA controller. The first SATA controller controls the two SSDs in the internal riser and also the front panel drives in slots 1 to 4. The sSATA controller controls only the front panel drives in slots 5 to 8. The CPU sees these controllers as two independent devices, and there are two different sets of PCI definitions for the SATA and sSATA controllers. Cisco UCS Manager support is added only for the first SATA controller, which manages the internal SSDs.
For the PCH Controller Definition configuration, a Cisco UCS Manager boot policy provides two new devices to select: PCH LUN and PCH Disk. EmbeddedLocalLun represents the boot device in SWRAID mode, and EmbeddedLocalDisk represents the boot devices in AHCI mode.
The system uses the same scrub policy to scrub supported SSDs. If scrub is Yes, configured LUNs are destroyed as part of disassociation or re-discovery. If scrub is No, configured LUNs are saved during disassociation and re-discovery.
Cisco UCS Manager supports firmware upgrade for the internal SSDs only when the PCH Controller is in SWRAID mode. It is not supported in the AHCI mode.
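The mode and RAID-level rules above can be captured in a small validation sketch. The Python below is purely illustrative; the class and field names are hypothetical and are not Cisco UCS Manager classes.

```python
# Hypothetical model of a PCH Controller Definition and its valid RAID options,
# as described above. Names are illustrative, not UCS Manager classes.
from dataclasses import dataclass

VALID_RAID_BY_MODE = {
    "SWRAID": {"RAID0", "RAID1"},   # single-LUN RAID across the two internal SSDs
    "AHCI":   {"NoRAID"},           # all disks on the controller behave as JBOD
}

@dataclass
class PchControllerDefinition:
    name: str
    protect_configuration: bool     # keep the LUN configuration after disassociation
    controller_mode: str            # "SWRAID" or "AHCI"
    raid_type: str                  # "RAID0", "RAID1", or "NoRAID"

    def validate(self) -> None:
        allowed = VALID_RAID_BY_MODE.get(self.controller_mode)
        if allowed is None:
            raise ValueError(f"unknown controller mode {self.controller_mode!r}")
        if self.raid_type not in allowed:
            raise ValueError(f"{self.raid_type} is not valid in {self.controller_mode} "
                             f"mode; allowed: {sorted(allowed)}")

PchControllerDefinition("pch-boot", True, "SWRAID", "RAID1").validate()
```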
- Creating a Storage Profile PCH Controller Definition
- Modifying a Service Profile PCH Controller Definition
- Deleting a Storage Profile PCH Controller Definition
- PCH Controller Definition Configuration Troubleshooting
Creating a Storage Profile PCH Controller Definition
The PCH Controller Definition provides a storage configuration in Storage Profiles where you can configure internal SSDs connected to a PCH controller. You create a name for the controller definition, specify whether you want the storage profile to retain the configuration even if the storage profile is disassociated from the service profile, and choose the RAID level to indicate the controller mode.
Step 1 | In the Navigation pane, click the Storage tab. |
Step 2 | Right-click Storage Profiles. |
Step 3 | Choose Create Storage Profile from the pop-up menu, or click the Storage Profile link on the Getting Started tab. |
Step 4 | In the Navigation pane, right-click a specific Storage Profile and choose Show Navigator from the pop-up menu. |
Step 5 | In the Create Storage Profile dialog box, click the Controller Definitions tab and configure the following information: |
Step 6 | Type a storage profile Name. The name can be no longer than 32 characters. |
Step 7 | (Optional) Type a Description for this storage profile. |
Step 8 | Click [+] at the right of the dialog box to display the Create PCH Controller Definition dialog box. |
Step 9 | In the Create PCH Controller Definition dialog box, configure the following information: |
Step 10 | Click OK. The new PCH Controller Definition appears in the Navigation pane. |
Modifying a Service Profile PCH Controller Definition
Step 1 | In the Navigation pane, click the Storage tab. |
Step 2 | Expand Storage Profiles to the specific storage profile name that you want. |
Step 3 | Expand Controller Definitions and click the specific controller definition that you want. |
Step 4 | On the General tab, modify the following information: |
Step 5 | Click OK. The system displays whether it saved the modified PCH Controller Definition successfully. |
Deleting a Storage Profile PCH Controller Definition
Step 1 | In the Navigation pane, click the Storage tab. |
Step 2 | Expand Storage Profiles. |
Step 3 | Expand PCH Controller Definitions. |
Step 4 | In the Navigation pane, click the specific Controller Definition that you want. |
Step 5 | In the General tab Actions area, click Delete. |
Step 6 | Confirm whether you want to delete the definition. The system displays whether it deleted the definition successfully. If not, see PCH Controller Definition Configuration Troubleshooting. |
Step 7 | If successfully deleted, click OK. |
PCH Controller Definition Configuration Troubleshooting
PCH Controller Definition Creation
Unsuccessful PCH Controller Definition configuration exists under the following situations:
- You try to configure a controller definition for an unsupported server model
- You try to use the legacy local disk configuration policy and also configure the PCH storage in a storage profile
- You try to configure the same controller by using the storage profile controller definition and also by using the storage profile Local LUN configuration interface
- The Protect Configuration checkbox is ON and you configured the RAID Type differently than the deployed configuration in SWRAID mode
- The Protect Configuration checkbox is ON and the RAID Type does not match the present controller mode
Warning | Any configuration change in the PCH storage configuration (such as a controller mode change, a RAID level change, or a controller qualifier change) for an already associated server triggers a PNUOS boot, causing downtime for the host OS. |
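For reference, the error conditions listed above can be expressed as a simple pre-association check. The Python sketch below is illustrative only, with hypothetical parameter names; it is not Cisco UCS Manager logic.

```python
# Illustrative pre-association checks mirroring the unsuccessful configuration
# situations listed above. All names are hypothetical.
from typing import List

def pch_definition_config_errors(server_model_supported: bool,
                                 uses_legacy_local_disk_policy: bool,
                                 controller_also_in_local_lun_config: bool,
                                 protect_configuration_on: bool,
                                 raid_type_matches_deployed_swraid: bool,
                                 raid_type_matches_controller_mode: bool) -> List[str]:
    errors: List[str] = []
    if not server_model_supported:
        errors.append("controller definition configured for an unsupported server model")
    if uses_legacy_local_disk_policy:
        errors.append("legacy local disk configuration policy used together with "
                      "PCH storage in a storage profile")
    if controller_also_in_local_lun_config:
        errors.append("same controller configured through both the controller "
                      "definition and the Local LUN interface")
    if protect_configuration_on and not raid_type_matches_deployed_swraid:
        errors.append("Protect Configuration is ON but the RAID Type differs from "
                      "the deployed SWRAID configuration")
    if protect_configuration_on and not raid_type_matches_controller_mode:
        errors.append("Protect Configuration is ON but the RAID Type does not match "
                      "the present controller mode")
    return errors
```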
Boot Policy
A configuration error occurs for any of the following cases:
- You select PCH Disk in the boot policy, but the primary or secondary target path slot number does not match any of the inventoried internal SSD slot numbers.
- You select both PCH LUN and PCH Disk at the same time in the boot policy.
Firmware
For an incompatible software combination, there will not be any configuration error at the time of association. However, the storage configuration for the PCH SSD controller might fail or might not be deployed during association if you do not use the supported software combinations. Also, booting from the PCH SSD controller internal SSD might fail at the end of association for an incompatible software combination.
Associating a Storage Profile with an Existing Service Profile
You can associate a storage profile with an existing service profile or a new service profile. Creating a Service Profile with the Expert Wizard in the Cisco UCS Manager GUI Configuration Guide, Release 2.2 provides more information about associating a storage profile with a new service profile.
Step 1 | In the Navigation pane, click Servers. |
Step 2 | Expand . |
Step 3 | Expand the node for the organization that contains the service profile that you want to associate with a storage profile. |
Step 4 | Choose the service profile that you want to associate with a storage profile. |
Step 5 | In the Work pane, click the Storage tab. |
Step 6 | Click the LUN Configuration subtab. |
Step 7 | In the Actions area, click Modify Storage Profile. The Modify Storage Profile dialog box appears. |
Step 8 | Click the Storage Profile Policy tab. |
Step 9 | To associate an existing storage profile with this service profile, select the storage profile that you want to associate from the Storage Profile drop-down list, and click OK. The details of the storage profile appear in the Storage Items area. |
Step 10 | To create a new storage profile and associate it with this service profile, click Create Storage Profile, complete the required fields, and click OK. Creating a Storage Profile provides more information on creating a new storage profile. |
Step 11 | (Optional) To dissociate the service profile from a storage profile, select No Storage Profile from the Storage Profile drop-down list, and click OK. |
Displaying Details of All Local LUNs Inherited By a Service Profile
Storage profiles can be defined under an org and as a dedicated storage profile under a service profile. Thus, a service profile inherits local LUNs from both possible storage profiles and can have a maximum of two such local LUNs. You can display the details of all local LUNs inherited by a service profile as follows:
Step 1 | In the Navigation pane, click Servers. |
Step 2 | Expand . |
Step 3 | Expand the node for the organization that contains the service profile that you want to display. |
Step 4 | Choose the service profile whose inherited local LUNs you want to display. |
Step 5 | In the Work pane, click the Storage tab. |
Step 6 | Click the LUN Configuration subtab, and then click the Local LUNs tab. |
Importing Foreign Configurations for a RAID Controller on a Blade Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server of the RAID controller for which you want to import foreign configurations. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the Controller subtab. | |
Step 6 | In the Actions area, click Import Foreign Configuration. |
Importing Foreign Configurations for a RAID Controller on a Rack Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server of the RAID controller for which you want to import foreign configurations. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the Controller subtab. | |
Step 6 | In the Actions area, click Import Foreign Configuration. |
Configuring Local Disk Operations on a Blade Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server for which you want to configure local disk operations. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the Disks subtab. | |
Step 6 | Right-click the disk that you want and select one of the following operations: |
Configuring Local Disk Operations on a Rack Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server for which you want to configure local disk operations. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the Disks subtab. | |
Step 6 | Right-click the disk that you want and select one of the following operations: |
Configuring Virtual Drive Operations
The following operations can be performed only on orphaned virtual drives:
- Deleting an Orphan Virtual Drive on a Blade Server
- Deleting an Orphan Virtual Drive on a Rack Server
- Renaming an Orphan Virtual Drive on a Blade Server
- Renaming an Orphan Virtual Drive on a Rack Server
Deleting an Orphan Virtual Drive on a Blade Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server for which you want to delete an orphan virtual drive. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the LUNs subtab. | |
Step 6 | Right-click the virtual drive that you want and select Delete Orphaned LUN. | A confirmation dialog box appears. |
Step 7 | Click Yes. |
Deleting an Orphan Virtual Drive on a Rack Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server for which you want to delete an orphan virtual drive. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the LUNs subtab. | |
Step 6 | Right-click the virtual drive that you want and select Delete Orphaned LUN. | A confirmation dialog box appears. |
Step 7 | Click Yes. |
Renaming an Orphan Virtual Drive on a Blade Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server for which you want to rename an orphan virtual drive. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the LUNs subtab. | |
Step 6 | Right-click the virtual drive that you want and select Rename Referenced LUN. | |
Step 7 | In the Rename Referenced LUN dialog box that appears, enter the new LUN Name. | |
Step 8 | Click OK. |
Renaming an Orphan Virtual Drive on a Rack Server
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Equipment. | |
Step 2 | Expand . | |
Step 3 | Choose the server for which you want to rename an orphan virtual drive. | |
Step 4 | In the Work pane, click the Inventory tab and then the Storage subtab. | |
Step 5 | Click the LUNs subtab. | |
Step 6 | Right-click the virtual drive that you want and select Rename Referenced LUN. | |
Step 7 | In the Rename Referenced LUN dialog box that appears, enter the new LUN Name. | |
Step 8 | Click OK. |
Boot Policy for Local Storage
You can specify the primary boot device for a storage controller as a local LUN or a JBOD disk. Each storage controller can have one primary boot device. However, in a storage profile, you can set only one device as the primary boot LUN.
Configuring the Boot Policy for a Local Device
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Servers. | |
Step 2 | Expand . | |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. | |
Step 4 | Select the boot policy that you want to configure. | |
Step 5 | In the Work pane, click the General tab. | |
Step 6 | Click the down arrows to expand the Local Devices area. | |
Step 7 | Click Add Local LUN to configure the boot order of the local LUN. | |
Step 8 | To configure the local LUN as the primary boot device, select Primary. | |
Step 9 | In the LUN Name field, enter the name of the LUN to be configured as the primary boot device. | |
Step 10 | Click OK. |
Configuring the Boot Policy for a Local JBOD Device
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Servers. | |
Step 2 | Expand . | |
Step 3 | Expand the node for the organization where you want to create the policy. If the system does not include multitenancy, expand the root node. | |
Step 4 | Select the boot policy that you want to configure. | |
Step 5 | In the Work pane, click the General tab. | |
Step 6 | Click the down arrows to expand the Local Devices area. | |
Step 7 | Click Add Local JBOD to configure the local JBOD device as the primary boot device. | JBOD is supported only on the following servers: |
Step 8 | In the Disk Slot Number field, enter the slot number of the JBOD disk to be configured as the primary boot device. | |
Step 9 | Click OK. |
Local LUN Operations in a Service Profile
Preprovisioning a LUN Name
Preprovisioning a LUN name can be done only when the admin state of the LUN is Undeployed. If a LUN with this name exists and is orphaned, it is claimed by the service profile. If no such LUN exists, a new LUN is created with the specified name.
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Servers. | |
Step 2 | Expand . | |
Step 3 | In the Work pane, click the Storage tab. | |
Step 4 | Click the LUN Configuration tab. | |
Step 5 | In the Local LUNs subtab, right-click the LUN for which you want to preprovision a LUN name and select Pre-Provision LUN Name. | |
Step 6 | In the Set Pre-Provision LUN Name dialog box, enter the LUN name. | |
Step 7 | Click OK. |
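The rule described at the start of this procedure can be restated as a short sketch. The function and names below are purely illustrative and are not part of Cisco UCS Manager.

```python
# Illustrative restatement of the preprovisioning rule described above.
def preprovision_lun(existing_luns: dict, lun_name: str, service_profile: str) -> str:
    lun = existing_luns.get(lun_name)
    if lun is not None and lun.get("state") == "orphaned":
        lun["claimed_by"] = service_profile          # an orphaned LUN with this name is claimed
        return f"claimed existing LUN {lun_name}"
    existing_luns[lun_name] = {"state": "undeployed", "claimed_by": service_profile}
    return f"created new LUN {lun_name}"             # otherwise a new LUN is created
```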
Claiming an Orphan LUN
Claiming an orphan LUN can be done only when the admin state of the LUN is Undeployed. You can explicitly change the admin state of the LUN to Undeployed for claiming an orphan LUN.
If the LUN name is empty, set a LUN name before claiming it.
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Servers. | |
Step 2 | Expand . | |
Step 3 | In the Work pane, click the Storage tab. | |
Step 4 | Click the LUN Configuration tab. | |
Step 5 | In the Local LUNs subtab, right-click the LUN that you want to claim and select Claim Orphan LUN. | |
Step 6 | In the Claim Orphan LUN dialog box that appears, select an orphaned LUN. | |
Step 7 | Right-click the LUN and select Set Admin State. | |
Step 8 | In the Set Admin State dialog box that appears, select Undeployed to undeploy a LUN and claim ownership. | |
Step 9 | Click OK. |
Deploying and Undeploying a LUN
You can deploy or undeploy a LUN. If the admin state of a local LUN is Undeployed, the reference of that LUN is removed and the LUN is not deployed.
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Servers. | |
Step 2 | Expand . | |
Step 3 | In the Work pane, click the Storage tab. | |
Step 4 | Click the LUN Configuration tab. | |
Step 5 | In the Local LUNs subtab, right-click the LUN that you want to deploy or undeploy and select Set Admin State. | |
Step 6 | In the Set Admin State dialog box that appears, select Online to deploy a LUN or Undeployed to undeploy a LUN. | |
Step 7 | Click OK. |
Renaming a Service Profile Referenced LUN
Command or Action | Purpose | |
---|---|---|
Step 1 | In the Navigation pane, click Servers. | |
Step 2 | Expand . | |
Step 3 | In the Work pane, click the Storage tab. | |
Step 4 | Click the LUN Configuration tab. | |
Step 5 | In the Local LUNs subtab, right-click the LUN for which you want to rename the referenced LUN, and select Rename Referenced LUN. | |
Step 6 | In the Rename Referenced LUN dialog box that appears, enter the new name of the referenced LUN. | |
Step 7 | Click OK. |