Cisco UCS C240 M3 Server Installation and Service Guide
RAID Controller Considerations

Table of Contents

RAID Controller Considerations

Supported RAID Controllers and Required Cables

LSI Nytro MegaRAID 8110-4i Considerations

Mixing Drive Types in RAID Groups

Battery Backup Units

Factory-Default Option ROM Settings

RAID Controller Migration

Embedded MegaRAID Controller

Notes on Supported Embedded MegaRAID Levels

Installing a SCU Upgrade ROM Module For Embedded RAID SAS Support

Installing a Software RAID Key Module for Embedded RAID 5 Support

Enabling the Embedded RAID Controller in the BIOS

Disabling the Embedded RAID Controller in the BIOS

Launching the LSI Embedded RAID Configuration Utility

Installing LSI MegaSR Drivers For Windows and Linux

Downloading the LSI MegaSR Drivers

Microsoft Windows Driver Installation

Linux Driver Installation

RAID Controller Cabling

Cable Routing

Cisco UCS C240 Server Cabling Instructions

Backplane and Expander Options

SFF 24-Drive Backplane With Expander Cabling

SFF 16-Drive Backplane, No Expander

LFF 12-Drive Backplane With Expander

Restoring RAID Configuration After Replacing a RAID Controller

For More Information

Supported RAID Controllers and Required Cables

This server supports the RAID controller options, cable requirements, and RAID backup units shown in Table C-1.


Caution Do not mix controller types in the server. Do not use the embedded MegaRAID controller and a hardware RAID controller card at the same time. This is not supported and could result in data loss.


Note This server supports up to two PCIe-style RAID controllers. Do not mix controller types in the server.



Note The SAS expander is required for the SFF 24-drive option and the LFF 12-drive option.
The SFF 16-drive option does not use the SAS expander.



Note The embedded RAID option is available only with the SFF 16-drive backplane. It does not operate through an expander.


 

Table C-1 Cisco UCS C240 RAID Options

Controller: Embedded RAID(2)
  Style: Onboard
  Maximum drives: 4 SATA internal (default); SFF/no expander: 8 SAS internal(3)
  SCPM(1): No
  RAID levels: 0, 1, 5(4), 10
  Required cables: 8 drives, SFF/no expander: kit of 4 UCSC-CABLE2

Controller: Cisco UCS RAID SAS 2008M-8i (PID UCSC-RAID-MZ-C240)
  Style: Mezzanine
  Maximum drives: SFF/expander: 16 internal; LFF/expander: 12 internal; SFF/no expander: 8 internal
  SCPM: No
  RAID levels: 0, 1, 1E, 10
  Required cables: 16 drives, SFF/expander: kit pair UCSC-CABLE6; 12 drives, LFF/expander: kit pair UCSC-CABLE4; 8 drives, SFF/no expander: kit of 4 UCSC-CABLE2

Controller: Cisco UCS RAID SAS 2008M-8i (PID UCSC-RAID-11-C240; includes RAID 5 and 50)
  Style: Mezzanine
  Maximum drives: SFF/expander: 16 internal; LFF/expander: 12 internal; SFF/no expander: 8 internal
  SCPM: No
  RAID levels: 0, 1, 1E, 5, 10, 50
  Required cables: 16 drives, SFF/expander: kit pair UCSC-CABLE6; 12 drives, LFF/expander: kit pair UCSC-CABLE4; 8 drives, SFF/no expander: kit of 4 UCSC-CABLE2

Controller: LSI MegaRAID SAS 9266CV-8i
  Style: PCIe
  Maximum drives: SFF/expander: 24 internal; LFF/expander: 12 internal; SFF/no expander: 8 internal, or 16 internal with dual controllers
  SCPM: Yes
  RAID levels: 0, 1, 5, 6, 10, 50, 60
  Required cables: 24 drives, SFF/expander: kit pair UCSC-CABLE6; 12 drives, LFF/expander: kit pair UCSC-CABLE4; 8 drives, no expander: kit of 4 UCSC-CABLE2; 16 drives, no expander: kit of 4 UCSC-CABLE2

Controller: LSI MegaRAID SAS 9271-8i
  Style: PCIe
  Maximum drives: SFF/expander: 24 internal; LFF/expander: 12 internal; SFF/no expander: 8 internal, or 16 internal with dual controllers
  SCPM: No
  RAID levels: 0, 1, 5, 6, 10, 50, 60
  Required cables: 24 drives, SFF/expander: kit pair UCSC-CABLE6; 12 drives, LFF/expander: kit pair UCSC-CABLE4; 8 drives, no expander: kit of 4 UCSC-CABLE2; 16 drives, no expander: kit of 4 UCSC-CABLE2

Controller: LSI MegaRAID SAS 9271CV-8i
  Style: PCIe
  Maximum drives: SFF/expander: 24 internal; LFF/expander: 12 internal; SFF/no expander: 8 internal, or 16 internal with dual controllers
  SCPM: Yes
  RAID levels: 0, 1, 5, 6, 10, 50, 60
  Required cables: 24 drives, SFF/expander: kit pair UCSC-CABLE6; 12 drives, LFF/expander: kit pair UCSC-CABLE4; 8 drives, no expander: kit of 4 UCSC-CABLE2; 16 drives, no expander: kit of 4 UCSC-CABLE2

Controller: LSI Nytro MegaRAID 8110-4i(5)
  Style: PCIe
  Maximum drives: SFF/expander: 24 internal; LFF/expander: 12 internal
  SCPM: Yes
  RAID levels: 0, 1, 5, 6, 10, 50, 60
  Required cables: 24 drives, SFF/expander: kit pair UCSC-CABLE6; 12 drives, LFF/expander: kit pair UCSC-CABLE4

Controller: LSI MegaRAID SAS 9285CV-8e
  Style: PCIe
  Maximum drives: 8 external
  SCPM: Yes
  RAID levels: 0, 1, 5, 6, 10, 50, 60
  Required cables: Not sold by Cisco

Controller: LSI MegaRAID SAS 9286CV-8e
  Style: PCIe
  Maximum drives: 8 external
  SCPM: Yes
  RAID levels: 0, 1, 5, 6, 10, 50, 60
  Required cables: Not sold by Cisco

(1) SCPM = SuperCap power module (RAID backup unit). See Battery Backup Units.

(2) The embedded RAID controller must be enabled in the BIOS.

(3) Embedded RAID SAS drive control requires an optional SCU ROM upgrade chip installed on the motherboard.

(4) Embedded RAID 5 support requires an optional software key.

(5) See LSI Nytro MegaRAID 8110-4i Considerations.
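The cable-kit column of Table C-1 follows directly from the backplane/expander combination. The following shell sketch is a hypothetical lookup helper, not a Cisco tool; the combination labels are invented shorthand for illustration.

```shell
# Map each backplane/expander combination to its cable kit, per Table C-1.
# (Labels such as "SFF-24-expander" are hypothetical shorthand.)
backplane="SFF-24-expander"
case "$backplane" in
  SFF-24-expander) echo "kit pair UCSC-CABLE6" ;;   # SFF drives through the expander
  LFF-12-expander) echo "kit pair UCSC-CABLE4" ;;   # LFF drives through the expander
  SFF-16-direct)   echo "kit of 4 UCSC-CABLE2" ;;   # direct-connect, no expander
esac
```

For the value shown, this prints "kit pair UCSC-CABLE6", matching the SFF 24-drive rows of the table.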

LSI Nytro MegaRAID 8110-4i Considerations

Note the following restrictions regarding support of the LSI Nytro MegaRAID 8110-4i card in this server:

  • This card is not supported with the SFF 16-drive direct-connect backplane version of the server.
  • This card is supported only in slot 3 of the server.
  • This card is supported only in dual-CPU configurations.
  • This card is supported only with hard disk drives (not solid state drives).
  • This card cannot coexist with any installed GPU card.
  • This card cannot coexist with multiple RAID controllers.

Mixing Drive Types in RAID Groups

Table C-2 lists which combinations of hard disk drive (HDD) and solid state drive (SSD) types are technically allowed in a RAID group. For the best performance, also follow the best-practice recommendations below.

 

Table C-2 Drive Type Mixing in RAID Groups

SAS HDD + SATA HDD: Yes

SAS SSD + SATA SSD: Yes

HDD + SSD: No

Best Practices For Mixing Drive Types in RAID Groups

For the best performance, follow these guidelines:

  • Use either all SAS or all SATA drives in a RAID group.
  • Use the same capacity for each drive in the RAID group.
  • Never mix HDDs and SSDs in the same RAID group.

Battery Backup Units

This server supports installation of up to two RAID battery backup units (BBUs) or SuperCap power modules (SCPMs). The units mount to clips on the removable air baffle (see Figure 3-37).


Note The iBBU09 battery backup unit (BBU) has been phased out by Cisco and replaced with the SuperCap power module (SCPM). If you are replacing a BBU, order the SCPM for the replacement (UCS-RAID-CV-SC=). Cards that used the BBU are compatible with the SCPM.


On sudden power loss, the SCPM offloads the disk write-back cache from DRAM to NAND flash, where it is retained for approximately three years.

For RAID backup unit replacement instructions, see Replacing the LSI RAID Battery Backup Unit or SuperCap Power Module.

Factory-Default Option ROM Settings

Table C-3 describes the factory-default option ROM (OPROM) settings for card slots in various configurations. The server version and the number of CPUs affect the OPROM settings.


Note If an option is listed as “not allowed” in Table C-3, that is because it is not supported in the particular configuration described in that table row. See the footnotes below the table for more information.


For additional information about RAID controller support, see Supported RAID Controllers and Required Cables.

 

Table C-3 Cisco UCS C240 Factory-Default Option ROM Settings

Server version: C240 SFF 24 HDD / C240 LFF 12 HDD; CPUs: 1
  Embedded SW RAID: Not allowed(6)
  Mezzanine RAID controller: Not allowed(7)
  PCIe RAID controller 1: Installed: PCIe slot 3 enabled(8)
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Requires expander.

Server version: C240 SFF 24 HDD / C240 LFF 12 HDD; CPUs: 2
  Embedded SW RAID: Not allowed
  Mezzanine RAID controller: Installed: connector enabled
  PCIe RAID controller 1: Not allowed(9)
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Requires expander.

Server version: C240 SFF 24 HDD / C240 LFF 12 HDD; CPUs: 2
  Embedded SW RAID: Not allowed
  Mezzanine RAID controller: Not allowed(10)
  PCIe RAID controller 1: Installed: PCIe slot 4 enabled
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Requires expander.

Server version: C240 SFF 16 HDD; CPUs: 1
  Embedded SW RAID: Enabled
  Mezzanine RAID controller: Not allowed(11)
  PCIe RAID controller 1: Not allowed
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Maximum 8 drives.

Server version: C240 SFF 16 HDD; CPUs: 1
  Embedded SW RAID: Not allowed
  Mezzanine RAID controller: Not allowed
  PCIe RAID controller 1: Installed: PCIe slot 3 enabled
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Maximum 8 drives.

Server version: C240 SFF 16 HDD; CPUs: 2
  Embedded SW RAID: Enabled
  Mezzanine RAID controller: Not allowed
  PCIe RAID controller 1: Not allowed
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Maximum 8 drives.

Server version: C240 SFF 16 HDD; CPUs: 2
  Embedded SW RAID: Not allowed
  Mezzanine RAID controller: Installed: connector enabled
  PCIe RAID controller 1: Not allowed
  PCIe RAID controller 2: Not allowed
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Maximum 8 drives.

Server version: C240 SFF 16 HDD; CPUs: 2
  Embedded SW RAID: Not allowed
  Mezzanine RAID controller: Not allowed
  PCIe RAID controller 1: Installed: PCIe slot 4 enabled
  PCIe RAID controller 2: Absent
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Maximum 8 drives.

Server version: C240 SFF 16 HDD; CPUs: 2
  Embedded SW RAID: Not allowed
  Mezzanine RAID controller: Not allowed
  PCIe RAID controller 1: Installed: PCIe slot 4 enabled
  PCIe RAID controller 2: Installed: PCIe slot 3 enabled
  LOM ports: Always enabled
  Comments: All other OPROM is disabled. Maximum 16 drives(12).

(6) The embedded SW RAID controller is supported only with the 16 HDD direct-connect backplane. It is not supported with an expander.

(7) In a single-CPU configuration, the mezzanine card slot is not supported.

(8) In a single-CPU configuration, PCIe slots 4 and 5 are not supported.

(9) You cannot mix controller types in a server. A PCIe-style controller cannot be used when a mezzanine-style controller is used.

(10) You cannot mix controller types in a server. A mezzanine-style controller cannot be used when a PCIe-style controller is used.

(11) You cannot use the embedded SW RAID and HW RAID (mezzanine or PCIe card) at the same time.

(12) Control of all 16 drives requires two PCIe-style RAID controllers.

RAID Controller Migration

This server supports hardware RAID (mezzanine and PCIe controller cards) and embedded software RAID. See Table C-4 for which migrations are allowed and a summary of migration steps.

 

Table C-4 RAID Controller Migration

Starting RAID controller: None (no drives); Onboard SCU Storage Support is disabled in the BIOS.

  Migrate to HW RAID: Allowed.
  1. Install the card.
  2. Install the cables.

  Migrate to SW RAID: Allowed.
  1. Install the desired upgrade modules on the motherboard.
  2. Enable SCU storage support in the BIOS.
  3. Install the cables.

Starting RAID controller: Embedded SW RAID; Onboard SCU Storage Support is enabled in the BIOS.


Caution Data migration from SW RAID to HW RAID is not supported and could result in data loss.


  Migrate to HW RAID: Allowed only before there is data on the drives; data migration is not supported.
  1. Disable SCU storage support in the BIOS.
  2. Install the card.
  3. Install the cables.

  Migrate to SW RAID: Not applicable.

Starting RAID controller: HW RAID; Onboard SCU Storage Support is disabled in the BIOS.

  Migrate to HW RAID: Not applicable.

  Migrate to SW RAID: Not allowed.

Embedded MegaRAID Controller


Note VMware ESX/ESXi and other virtualized environments are not supported for use with the embedded MegaRAID controller. Hypervisors such as Hyper-V, Xen, and KVM are also not supported.



Note The embedded RAID option is available only with the SFF 16-drive backplane. It does not operate through an expander.


This server includes an embedded MegaRAID controller with two mini-SAS connectors on the motherboard.


Note You cannot downgrade from using a RAID controller card to using the embedded controller (see RAID Controller Migration). Instructions for installing upgrade modules and enabling the embedded controller in the BIOS are included here for those upgrading a server with no RAID controller or drives.



Caution Data migration from SW RAID (embedded RAID) to HW RAID (a controller card) is not supported and could result in data loss. Migrations from SW RAID to HW RAID are supported only before there is data on the drives, or the case in which there are no drives in the server (see RAID Controller Migration).

  • You can migrate from using the embedded controller to using a RAID card only before there is data on the drives. In this case, you must disable the embedded controller. See Disabling the Embedded RAID Controller in the BIOS.
  • The required drivers for this controller are already installed and ready to use with the LSI SWRAID Configuration Utility. However, if you will use this controller with Windows or Linux, you must download and install additional drivers for those operating systems. See Installing LSI MegaSR Drivers For Windows and Linux.


Notes on Supported Embedded MegaRAID Levels

The following RAID levels are supported by the embedded MegaRAID controller.

  • RAID 0—You can configure a RAID 0 virtual drive (VD) using one or more physical drives (PDs). This level supports up to eight VDs and PDs.
  • RAID 1—A RAID 1 VD is configured from only two PDs. This level supports up to eight PDs (four RAID arrays) and eight VDs.
  • RAID 5—You can configure a RAID 5 VD using three or more PDs. This level supports up to eight PDs and eight VDs.
  • RAID 10—This is a spanned VD; that is, RAID 0 is implemented on two or more RAID 1 VDs. This level supports up to eight PDs (two to four RAID 1 volumes spanned) and one VD.

Note None of these RAID levels require drives of the same size. The smallest drive in the array determines the size of the VD.



Note An array can be divided into multiple VDs of the same RAID level, except when using RAID 10. Mixed arrays are not permitted. For example, you cannot configure a three-drive array into RAID 0 and RAID 5 VDs. Unlike RAID 0, 1, and 5, you cannot create multiple RAID 10 VDs from the same array. A single RAID 10 VD uses up the entire array.
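As a quick arithmetic illustration of the drive-size note above, the sketch below assumes three drives of 300, 450, and 600 GB in one RAID group; the smallest drive (300 GB) sets each drive's contribution. The drive sizes are hypothetical examples, not values from this guide.

```shell
# The smallest drive in the array determines the per-drive contribution.
smallest=300   # GB, smallest drive in the group
drives=3
echo "RAID 0 usable: $((drives * smallest)) GB"          # striping across all drives
echo "RAID 5 usable: $(( (drives - 1) * smallest )) GB"  # one drive's worth of parity
```

This prints "RAID 0 usable: 900 GB" and "RAID 5 usable: 600 GB"; the 150 and 300 GB of extra capacity on the larger drives go unused.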


Installing a SCU Upgrade ROM Module For Embedded RAID SAS Support

The SCU Upgrade ROM module contains a chip on a small circuit board. This module attaches to a motherboard header. This chip upgrades the standard four-drive SATA support to add SAS support for up to eight drives.


Note The Cisco PID UCSC-RAID-ROM5= includes the SCU upgrade ROM module.
The Cisco PID UCSC-RAID-ROM55= includes the SCU upgrade ROM module and the RAID 5 key.


To install a SCU upgrade ROM module, follow these steps:


Step 1 Locate the header labeled “PCH UPGRD SKU ROM”; it might be under cables that are routed along the chassis wall (see Figure C-1).

Step 2 Align the connector on the SCU upgrade ROM module with the pins on the header, then gently push the connector onto the pins.

Step 3 Replace the top cover.

Step 4 Replace the server in the rack, replace cables, and then power on the server by pressing the Power button.

Step 5 Continue with either Installing a Software RAID Key Module for Embedded RAID 5 Support or Enabling the Embedded RAID Controller in the BIOS.

Figure C-1 SCU Upgrade ROM and RAID 5 Key Header Locations on Motherboard

 

1  SCU upgrade ROM header, labeled PCH UPGRD SKU ROM (adds SAS drive support)

2  Software RAID 5 key header, labeled SW RAID KEY (adds RAID 5 support)

 

Installing a Software RAID Key Module for Embedded RAID 5 Support

The software RAID key module contains a chip on a small circuit board. This module attaches to a motherboard header. This chip upgrades SAS support to add RAID 5 support (RAID 0, 1, 5, and 10 for up to eight drives).


Note You must have the SCU upgrade ROM module installed before you can use this module.


To install a RAID 5 software key module, follow these steps:


Step 1 Locate the header that is labeled “SW RAID KEY” (see Figure C-1).

Step 2 Install the RAID 5 software key module onto the pins of the header.

Step 3 Replace the top cover.

Step 4 Replace the server in the rack, replace cables, and then power on the server by pressing the Power button.


 

Enabling the Embedded RAID Controller in the BIOS

When you order the server with this controller, the controller is enabled in the BIOS at the factory.


Note The default setting in the BIOS for the embedded controller is Disabled. When you order the server with the embedded controller, the BIOS setting is Enabled at the factory. However, if a server is reset to defaults, this BIOS setting is reverted to Disabled. Use the procedure below to re-enable the embedded controller.


Use the following procedure to enable the embedded RAID controller.


Step 1 Boot the server and press F2 when prompted to enter the BIOS Setup utility.

Step 2 Select the Advanced tab, then South Bridge.

Step 3 Set Onboard SCU Storage Support to Enabled.

Step 4 Press F10 to save your changes and exit the utility.


 

Disabling the Embedded RAID Controller in the BIOS


Caution Data migration from SW RAID to HW RAID is not supported and could result in data loss. Migrations from SW RAID to HW RAID are supported only before there is data on the drives, or the case in which there are no drives in the server.

If you migrate from using this embedded controller to a RAID controller card, you must disable the embedded controller in the server BIOS (see caution above).

Use the following procedure to disable the embedded RAID controller.


Step 1 Boot the server and press F2 when prompted to enter the BIOS Setup utility.

Step 2 Select the Advanced tab, then South Bridge.

Step 3 Set Onboard SCU Storage Support to Disabled.

Step 4 Press F10 to save your changes and exit the utility.


 

Launching the LSI Embedded RAID Configuration Utility

Launch the utility by pressing Ctrl+M when you see the prompt during system boot.

For more information about using the Embedded MegaRAID software to configure your disk arrays, see the LSI Embedded MegaRAID Software User Guide.

Installing LSI MegaSR Drivers For Windows and Linux


Note The required drivers for this controller are already installed and ready to use with the LSI SWRAID Configuration Utility. However, if you will use this controller with Windows or Linux, you must download and install additional drivers for those operating systems.


This section explains how to install the LSI MegaSR drivers for the following supported operating systems:

  • Microsoft Windows Server
  • Red Hat Enterprise Linux (RHEL)
  • SuSE Linux Enterprise Server (SLES)

For the specific supported OS versions, see the Hardware and Software Interoperability Matrix for your server release.


Downloading the LSI MegaSR Drivers

The MegaSR drivers are included in the C-series driver ISO for your server and OS. Download the drivers from Cisco.com:


Step 1 Find the drivers ISO file download for your server online and download it to a temporary location on your workstation:

a. See the following URL: http://www.cisco.com/cisco/software/navigator.html

b. Click Unified Computing and Servers in the middle column.

c. Click Cisco UCS C-Series Rack-Mount Standalone Server Software in the right-hand column.

d. Click your model of server in the right-hand column.

e. Click Unified Computing System (UCS) Drivers.

f. Click the release number that you are downloading.

g. Click Download to download the drivers ISO file.

h. Verify the information on the next page, then click Proceed With Download.

i. Continue through the subsequent screens to accept the license agreement and then browse to a location where you want to save the drivers ISO file.


 

Microsoft Windows Driver Installation

This section explains the steps to install the LSI MegaSR driver in a Windows installation.


Windows Server 2008R2 Driver Installation

Perform the following steps to install the LSI MegaSR device driver in a new Windows Server 2008R2 operating system. The Windows operating system automatically adds the driver to the registry and copies the driver to the appropriate directory.


Step 1 Create a RAID drive group using the LSI SWRAID Configuration utility before you install this driver for Windows. Launch this utility by pressing Ctrl+M when LSI SWRAID is shown during the BIOS POST.

Step 2 Download the Cisco UCS C-Series drivers ISO, as described in Downloading the LSI MegaSR Drivers.

Step 3 Prepare the drivers on a USB thumb drive:

a. Burn the ISO image to a disc.

b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:

/<OS>/Storage/Intel/C600/

c. Expand the Zip file, which contains the folder with the MegaSR driver files.

d. Copy the expanded folder to a USB thumb drive.

Step 4 Start the Windows driver installation using one of the following methods:

  • To install from local media: Connect an external USB DVD drive to the server and then insert the first Windows install disc into the drive. Skip to Step 6.
  • To install from remote ISO: Log in to the server’s CIMC interface and continue with the next step.

Step 5 Launch a Virtual KVM console window and select the Virtual Media tab.

a. Click Add Image and browse to select your remote Windows installation ISO file.

b. Select the check box in the Mapped column for the media that you just added, then wait for mapping to complete.

Step 6 Power cycle the server.

Step 7 Press F6 when you see the F6 prompt during bootup. The Boot Menu window opens.

Step 8 On the Boot Menu window, select the physical disc or virtual DVD and press Enter. The Windows installation begins when the image is booted.

Step 9 Press Enter when you see the prompt, “Press any key to boot from CD.”

Step 10 Observe the Windows installation process and respond to prompts in the wizard as required for your preferences and company standards.

Step 11 When Windows prompts you with “Where do you want to install Windows,” install the drivers for embedded MegaRAID:

a. Click Load Driver. You are prompted by a Load Driver dialog to select the driver to be installed.

b. Connect the USB thumb drive that you prepared in Step 3 to the target server.

c. On the Windows Load Driver dialog that you opened in Step a, click Browse.

d. Use the dialog to browse to the location of the drivers folder on the USB thumb drive, and click OK.

Windows loads the drivers from the folder and when finished, the driver is listed under the prompt, “Select the driver to be installed.”

e. Click Next to install the drivers.


 

Updating the Windows Driver

Perform the following steps to update the LSI MegaSR driver for Windows or to install this driver on an existing system booted from a standard IDE drive.


Step 1 Click Start, point to Settings, and then click Control Panel.

Step 2 Double-click System, click the Hardware tab, and then click Device Manager. Device Manager starts.

Step 3 In Device Manager, double-click SCSI and RAID Controllers, right-click the device for which you are installing the driver, and then click Properties.

Step 4 On the Driver tab, click Update Driver to open the Update Device Driver wizard, and then follow the wizard instructions to update the driver.


 

Linux Driver Installation

This section explains the steps to install the embedded MegaRAID device driver in a Red Hat Enterprise Linux installation or a SuSE Linux Enterprise Server installation.


Obtaining the Driver Image File

See Downloading the LSI MegaSR Drivers for instructions on obtaining the drivers. The Linux driver is offered in the form of dud-[driver version].img, which is the boot image for the embedded MegaRAID stack.


Note The LSI MegaSR drivers that Cisco provides for Red Hat Linux and SUSE Linux are for the original GA versions of those distributions. The drivers do not support updates to those OS kernels.


Preparing Physical Installation Diskettes For Linux

This section describes how to prepare physical Linux installation diskettes from the driver image files, using either the Windows operating system or the Linux operating system.


Note Alternatively, you can mount the dud.img file as a virtual floppy disk, as described in the installation procedures.


Preparing Physical Installation Diskettes With the Windows Operating System:

Under Windows, you can use the RaWrite floppy image-writer utility to create disk images from image files. Perform the following steps to build installation diskettes.


Step 1 Download the Cisco UCS C-Series drivers ISO, as described in Downloading the LSI MegaSR Drivers and save it to your Windows system that has a diskette drive.

Step 2 Extract the dud.img file:

a. Burn the ISO image to a disc.

b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:

/<OS>/Storage/Intel/C600/

c. Expand the Zip file, which contains the folder with the driver files.

Step 3 Copy the driver update disk image dud-[driver version].img and the rawrite.exe file to a directory.


Note RaWrite is not included in the driver package.


Step 4 If necessary, use this command to change the file name of the driver update disk to a name with fewer than eight characters: copy dud-[driver version].img dud.img

Step 5 Open the DOS command prompt and navigate to the directory where rawrite.exe is located.

Step 6 Type the following command to create the installation diskette: rawrite

Step 7 Press Enter.

You are prompted to enter the name of the boot image file.

Step 8 Type the following: dud.img

Step 9 Press Enter.

You are prompted for the target diskette.

Step 10 Insert a floppy diskette into the floppy drive and type: A:

Step 11 Press Enter.

Step 12 Press Enter again to start copying the file to the diskette.

Step 13 After the command prompt returns and the floppy disk drive LED goes out, remove the diskette.

Step 14 Label the diskette with the image name.


 

Preparing Installation Diskettes With a Linux Operating System:

Under Red Hat Linux and SuSE Linux, you can use the dd utility to write a disk image file to a diskette. Perform the following steps to create the driver update disk:


Step 1 Download the Cisco UCS C-Series drivers ISO, as described in Downloading the LSI MegaSR Drivers and save it to your Linux system that has a diskette drive.

Step 2 Extract the dud.img file:

a. Burn the ISO image to a disc.

b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:

/<OS>/Storage/Intel/C600/

c. Expand the Zip file, which contains the folder with the driver files.

Step 3 Copy the driver update disk image dud-[driver version].img to your Linux system.

Step 4 Insert a blank floppy diskette into the floppy drive.

Step 5 Confirm that the files are in the selected directory.

Step 6 Create the driver update diskette using the following command:

dd if=dud-[driver version].img of=/dev/fd0

Step 7 After the command prompt returns and the floppy disk drive LED goes out, remove the diskette.

Step 8 Label the diskette with the image name.
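The diskette-creation steps above come down to a single dd invocation. The sketch below writes to a scratch file instead of /dev/fd0 so that the commands can be checked on a machine without a floppy drive, and it generates a stand-in image because the real dud image is on the drivers ISO; the version number in the image name is a placeholder.

```shell
# Stand-in for the real dud-[driver version].img extracted from the ISO:
# a blank 1.44 MB (1440 KiB) floppy image.
dd if=/dev/zero of=dud-1.0.img bs=1024 count=1440

# Write the image out sector by sector. On real hardware, replace
# floppy.img with /dev/fd0 to write the physical diskette.
dd if=dud-1.0.img of=floppy.img bs=512
```

After the second command completes, floppy.img (or the diskette) holds a byte-for-byte copy of the driver update image.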


 

Installing the Red Hat Linux Driver

For the specific supported OS versions, see the Hardware and Software Interoperability Matrix for your server release.

This section describes the fresh installation of the Red Hat Enterprise Linux device driver on systems with the embedded MegaRAID stack.


Step 1 Create a RAID drive group using the LSI SWRAID Configuration utility before you install this driver for the OS. Launch this utility by pressing Ctrl+M when LSI SWRAID is shown during the BIOS POST.

Step 2 Prepare the dud.img file, either on a physical diskette (see Preparing Physical Installation Diskettes For Linux) or as an image file that you map as a virtual floppy in Step 5.

Step 3 Extract the dud.img file:

a. Burn the ISO image to a disc.

b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:

/<OS>/Storage/Intel/C600/

c. Copy the dud-<driver version>.img file to a temporary location on your workstation.

Step 4 Start the Linux driver installation using one of the following methods:

  • To install from local media: Connect an external USB DVD drive to the server and then insert the first RHEL install disc into the drive.
    Then continue with Step 6.
  • To install from remote ISO: Log in to the server’s CIMC interface. Then continue with the next step.

Step 5 Launch a Virtual KVM console window and select the Virtual Media tab.

a. Click Add Image and browse to select your remote RHEL installation ISO file.

b. Click Add Image again and browse to select your dud.img file.

c. Select the check boxes in the Mapped column for the media that you just added, then wait for mapping to complete.

Step 6 Power cycle the server.

Step 7 Press F6 when you see the F6 prompt during bootup. The Boot Menu window opens.

Step 8 On the Boot Menu window, select the physical disc or virtual DVD and press Enter.

The RHEL installation begins when the image is booted.

Step 9 Type one of the following commands at the boot prompt:

  • For RHEL 5.x (32- and 64-bit), type:
    Linux dd blacklist=isci blacklist=ahci noprobe=<ata drive number>
  • For RHEL 6.x (32- and 64-bit), type:
    Linux dd blacklist=isci blacklist=ahci nodmraid noprobe=<ata drive number>

Note The noprobe values depend on the number of drives. For example, to install RHEL 5.7 on a RAID 5 configuration with three drives, enter:
Linux dd blacklist=isci blacklist=ahci noprobe=ata1 noprobe=ata2 noprobe=ata3
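Because the noprobe arguments repeat once per drive, the boot line can be generated rather than typed. This is a hypothetical helper, not part of the Cisco tooling; it reproduces the three-drive RAID 5 case from the note above.

```shell
# Build the RHEL 5.x boot line for an n-drive RAID group (n=3 here,
# matching the three-drive example in the note).
n=3
opts=""
i=1
while [ "$i" -le "$n" ]; do
  opts="$opts noprobe=ata$i"   # one noprobe entry per drive
  i=$((i + 1))
done
echo "Linux dd blacklist=isci blacklist=ahci$opts"
```

This prints "Linux dd blacklist=isci blacklist=ahci noprobe=ata1 noprobe=ata2 noprobe=ata3", identical to the boot line shown in the note.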


Step 10 Press Enter.

The prompt asks whether you have a driver disk.

Step 11 Use the arrow key to select Yes, and then press Enter.

Step 12 Select fd0 to indicate that you have a floppy diskette with the driver on it.

Step 13 Do one of the following actions:

  • If you prepared the IMG file on a physical diskette in Step 2: Connect an external USB diskette drive to the target server, insert the diskette in the A:/ drive, and press Enter.
  • If you mapped the IMG file as a virtual floppy in Step 5: Select the location of the virtual floppy.

The installer locates and loads the driver for your device. The following message appears:

Loading megasr driver...

Step 14 Follow the Red Hat Linux installation procedure to complete the installation.

Step 15 Reboot the system.


 

Installing the SUSE Linux Enterprise Server Driver

For the specific supported OS versions, see the Hardware and Software Interoperability Matrix for your server release.

This section describes the installation of the SuSE Linux Enterprise Server driver on a system with the embedded MegaRAID stack.

Use the following procedure to install the SLES drivers.


Step 1 Create a RAID drive group using the LSI SWRAID Configuration utility before you install this driver for the OS. Launch this utility by pressing Ctrl+M when LSI SWRAID is shown during the BIOS POST.

Step 2 Prepare the dud.img file, either on a physical diskette (see Preparing Physical Installation Diskettes For Linux) or as an image file that you map as a virtual floppy in Step 5.

Step 3 Extract the dud.img file:

a. Burn the ISO image to a disc.

b. Browse the contents of the drivers folders to the location of the embedded MegaRAID drivers:

/<OS>/Storage/Intel/C600/

c. Copy the dud-<driver version>.img file to a temporary location on your workstation.

Step 4 Start the Linux driver installation using one of the following methods:

  • To install from local media: Connect an external USB DVD drive to the server and then insert the first SLES install disc into the drive. Skip to Step 6.
  • To install from remote ISO: Log in to the server’s CIMC interface and continue with the next step.

Step 5 Launch a Virtual KVM console window and select the Virtual Media tab.

a. Click Add Image and browse to select your remote SLES installation ISO file.

b. Click Add Image again and browse to select your dud.img file.

c. Select the check box in the Mapped column for the media that you just added, then wait for mapping to complete.

Step 6 Power cycle the server.

Step 7 Press F6 when you see the F6 prompt during bootup. The Boot Menu window opens.

Step 8 On the Boot Menu window, select the physical disc or virtual DVD and press Enter. The SLES installation begins when the image is booted.

Step 9 When the first SLES screen appears, select Installation on the menu.

Step 10 Type one of the following in the Boot Options field:

  • For SLES 11 and SLES 11 SP1 (32- and 64-bit), type: brokenmodules=ahci
  • For SLES 11 SP2 (32- and 64-bit), type: brokenmodules=ahci brokenmodules=isci

Step 11 Press F6 for the driver and select Yes .

Step 12 Do one of the following actions:

  • If you prepared the IMG file on a physical diskette in Step 2: Connect an external USB diskette drive to the target server, insert the diskette, and press Enter .
  • If you mapped the IMG file as a virtual floppy in Step 5: Select the location of the virtual floppy.

“Yes” appears under the F6 Driver heading.

Step 13 Press Enter to select Installation.

Step 14 Press OK .

The following message appears: LSI Soft RAID Driver Updates added.

Step 15 At the menu, select the driver update medium and press the Back button.

Step 16 Continue and complete the installation process by following the prompts.
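After the installation completes and the system boots from the installed OS, you can confirm that the embedded MegaRAID driver is in use. This is a sketch, not output from this server; it assumes a standard SLES shell and LSI's megasr module name for the embedded RAID driver.

```shell
# The megasr module should be listed if the LSI embedded RAID driver loaded
lsmod | grep -i megasr

# Show module details (version, supported devices) for the loaded driver
modinfo megasr
```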


 

RAID Controller Cabling

This section includes the following topics:

Cable Routing

The RAID controller connectors in this server are shown in Figure C-2. The red line indicates the recommended cable routing path from the backplane to the possible controller locations.

Figure C-2 RAID Controller Connectors

 

1 Drive backplane (only the SFF 16-drive option uses a direct connection to the backplane)

2 Expander (only the SFF 24-drive and LFF 12-drive options require the expander)

3 RAID backup unit mounting locations on the removable air baffle (not shown)

4 Embedded RAID connectors on the motherboard

5 Mezzanine card SAS connectors (if present)

6 PCIe risers for LSI MegaRAID cards

Backplane and Expander Options

The server is orderable in three versions, each with a different front panel/backplane configuration:

  • Cisco UCS C240 (small form-factor (SFF) drives, with 24-drive backplane and expander).
    Holds up to twenty-four 2.5-inch hard drives or solid state drives.
  • Cisco UCS C240 (small form-factor (SFF) drives, with 16-drive backplane, no expander).
    Holds up to sixteen 2.5-inch hard drives or solid state drives.
  • Cisco UCS C240 (large form-factor (LFF) drives, with 12-drive backplane and expander).
    Holds up to twelve 3.5-inch hard drives.

Note The SAS expander is required for the SFF 24-drive option and the LFF 12-drive option.
The SFF 16-drive option does not use the SAS expander.



Note This server supports up to two PCIe-style RAID controllers. However, do not mix controller types in the server.



Note The embedded RAID option is available only with the SFF 16-drive backplane. It does not operate through an expander.


SFF 24-Drive Backplane With Expander Cabling

The cable connections required for each type of controller are as follows:

Mezzanine-Style Card

This option can control up to 16 drives.

The required UCSC-CABLE6 cable kit has two cables. Cable 1 controls drives 1–8 and cable 2 controls drives 9–16.

1. Connect cable 1 from connector SAS1 on the card to the SAS1 connector on the expander.

2. Connect cable 2 from connector SAS2 on the card to the SAS2 connector on the expander.

PCIe-Style Card

This option can control up to 24 drives.

The required UCSC-CABLE6 cable kit has two cables. Cable 1 controls drives 1–12 and cable 2 controls drives 13–24.

1. Connect cable 1 from connector SAS1 on the card to the SAS1 connector on the expander.

2. Connect cable 2 from connector SAS2 on the card to the SAS2 connector on the expander.

SFF 16-Drive Backplane, No Expander

The SFF 16-drive option does not use a SAS expander, so connections from the controller are made directly to the backplane. The cable connections required for each type of controller are as follows:

Embedded RAID

This option can control up to eight drives.

The required UCSC-CABLE2 cable kit has four cables. Cable 1 controls drives 1–4 and cable 2 controls drives 5–8. (With this embedded RAID option, only two of the four cables in the kit are used.)

1. Connect cable 1 from connector SASPORT 1 on the motherboard to the SAS1 connector on the backplane.

2. Connect cable 2 from connector SASPORT 2 on the motherboard to the SAS2 connector on the backplane.

Mezzanine-Style Card

This option can control up to eight drives.

The required UCSC-CABLE2 cable kit has four cables. Cable 1 controls drives 1–4 and cable 2 controls drives 5–8.

1. Connect cable 1 from the card connector SAS1 to the SAS1 connector on the backplane.

2. Connect cable 2 from the card connector SAS2 to the SAS2 connector on the backplane.

PCIe-Style Card

This option can control up to 8 drives with one controller; you can control up to 16 drives with two identical PCIe-style controllers and the four cables that are included in the UCSC-CABLE2 kit.

The required UCSC-CABLE2 cable kit has four cables.

  • Cable 1 controls drives 1–4 and cable 2 controls drives 5–8.
  • With a second PCIe-style controller in the server, cable 3 controls drives 9–12 and cable 4 controls drives 13–16.

Make the following connections to your first controller card to control up to eight drives:

1. Connect cable 1 from the first card SAS1 connector to the SAS1 connector on the backplane.

2. Connect cable 2 from the first card SAS2 connector to the SAS2 connector on the backplane.

Make the following connections to your second controller card to control 9 to 16 drives:

1. Connect cable 1 from the second card SAS1 connector to the SAS3 connector on the backplane.

2. Connect cable 2 from the second card SAS2 connector to the SAS4 connector on the backplane.

LFF 12-Drive Backplane With Expander

Mezzanine-Style Card

This option can control up to 12 drives.

The required UCSC-CABLE4 cable kit has two cables. Cable 1 controls drives 1–6 and cable 2 controls drives 7–12.

1. Connect cable 1 from connector SAS1 on the card to the SAS1 connector on the expander.

2. Connect cable 2 from connector SAS2 on the card to the SAS2 connector on the expander.

PCIe-Style Card

This option can control up to 12 drives.

The required UCSC-CABLE4 cable kit has two cables. Cable 1 controls drives 1–6 and cable 2 controls drives 7–12.

1. Connect cable 1 from connector SAS1 on the card to the SAS1 connector on the expander.

2. Connect cable 2 from connector SAS2 on the card to the SAS2 connector on the expander.
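All three backplane options above follow the same pattern: each cable carries one contiguous block of drives, and only the block size changes (4 drives per cable for UCSC-CABLE2, 6 for UCSC-CABLE4, and 8 or 12 for UCSC-CABLE6 depending on whether the controller is mezzanine- or PCIe-style). As a quick sanity check when tracing cables, the rule can be written as a small shell function; this is an illustration only, not a Cisco tool.

```shell
# cable_for_drive DRIVE DRIVES_PER_CABLE
# Prints the 1-based cable number that controls the given 1-based drive number.
cable_for_drive() {
  echo $(( ($1 - 1) / $2 + 1 ))
}

cable_for_drive 7 6    # LFF 12-drive backplane: drive 7 is on cable 2
cable_for_drive 13 12  # SFF 24-drive backplane, PCIe-style card: drive 13 is on cable 2
```

For example, with the SFF 16-drive backplane and two PCIe-style cards (4 drives per cable), drive 14 falls in the fourth block, matching the cable 4 assignment described above.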

Restoring RAID Configuration After Replacing a RAID Controller

When you replace a RAID controller, the RAID configuration that is stored in the controller is lost. Use the following procedure to restore your RAID configuration to your new RAID controller.


Step 1 Replace your RAID controller. See Replacing a PCIe Card.

Step 2 If this was a full chassis swap, return all drives to the drive bays in the same order in which they were installed in the old chassis.

Step 3 Reboot the server and watch for the prompt to press F.


Note For newer RAID controllers, you are not prompted to press F. Instead, the RAID configuration is imported automatically. In this case, skip to Step 6.


Step 4 Press F when you see the following on-screen prompt:

Foreign configuration(s) found on adapter.
Press any key to continue, or ‘C’ to load the configuration utility,
or ‘F’ to import foreign configuration(s) and continue.
 

Step 5 Press any key (other than C) to continue when you see the following on-screen prompt:

All of the disks from your previous configuration are gone. If this is
an unexpected message, then please power off your system and check your cables
to ensure all disks are present.
Press any key to continue, or ‘C’ to load the configuration utility.
 

Step 6 Watch the subsequent screens for confirmation that your RAID configuration was imported correctly.

  • If you see the following message, your configuration was successfully imported. The LSI virtual drive is also listed among the storage devices.
N Virtual Drive(s) found on host adapter.
 
  • If you see the following message, your configuration was not imported. This can happen if you do not press F quickly enough when prompted. In this case, reboot the server and try the import operation again when you are prompted to press F.
0 Virtual Drive(s) found on host adapter.
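If the foreign configuration needs to be imported or verified after the OS is running, LSI's MegaCLI utility can perform the same operation from the command line. This is a hedged sketch, not output from this procedure: it assumes MegaCLI is installed (the binary name varies by platform, for example MegaCli64 on 64-bit Linux) and that the replaced controller is adapter 0.

```shell
# Scan adapter 0 for foreign (carried-over) configurations
MegaCli64 -CfgForeign -Scan -a0

# Preview what would be imported, then import all foreign configurations
MegaCli64 -CfgForeign -Preview -a0
MegaCli64 -CfgForeign -Import -a0

# Confirm the virtual drives are present again
MegaCli64 -LDInfo -Lall -a0
```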
 


 

For More Information

For more information about using the LSI utilities, see the help documentation built into each utility.

For basic information about RAID and for using the utilities for the RAID controller cards supported in Cisco servers, see the Cisco UCS Servers RAID Guide .

For more information about using the Embedded MegaRAID software to configure your disk arrays, see the LSI Embedded MegaRAID Software User Guide .

Full LSI documentation is also available:

  • LSI MegaRAID SAS Software User’s Guide (for LSI MegaRAID)

http://www.cisco.com/en/US/docs/unified_computing/ucs/3rd-party/lsi/mrsas/userguide/LSI_MR_SAS_SW_UG.pdf

  • LSI SAS2 Integrated RAID Solution User Guide (for LSI SAS 2008)

http://www.cisco.com/en/US/docs/unified_computing/ucs/3rd-party/lsi/irsas/userguide/LSI_IR_SAS_UG.pdf