Cisco UCS S3260 System Storage Management

Storage Server Features and Components Overview

Storage Server Features

The following table summarizes the Cisco UCS S3260 system features:

Table 1. Cisco UCS S3260 System Features

Feature

Description

Chassis

Four rack unit (4RU) chassis

Processors

  • Cisco UCS S3260 M3 server nodes: Two Intel Xeon E5-2600 v2 Series processors inside each server node.

  • Cisco UCS S3260 M4 server nodes: Two Intel Xeon E5-2600 v4 Series processors inside each server node.

  • Cisco UCS S3260 M5 server nodes: Two Intel Skylake 2S-EP processors inside each server node.

Memory

Up to 16 DIMMs inside each server node.

Multi-bit error protection

This system supports multi-bit error protection.

Storage

The system has the following storage options:

  • Up to 56 top-loading 3.5-inch drives

  • Up to four 3.5-inch, rear-loading drives in the optional drive expander module

  • Up to four 2.5-inch, rear-loading SAS solid state drives (SSDs)

  • One 2.5-inch, NVMe drive inside the server node

    Note 

    This is applicable for S3260 M4 servers only.

  • Two 7 mm NVMe drives inside the server node

    Note 

    This is applicable for S3260 M5 servers only.

  • Two 15 mm NVMe drives supported in the I/O expander

Disk Management

The system supports up to two storage controllers:

  • One dedicated mezzanine-style socket for a Cisco storage controller card inside each server node

RAID Backup

The supercap power module (SCPM) mounts to the RAID controller card.

PCIe I/O

The optional I/O expander provides two x8 PCIe Gen 3 expansion slots.

Release 3.2(3) and later supports the following for S3260 M5 servers:

  • Intel X550 dual-port 10GBase-T

  • Qlogic QLE2692 dual-port 16G Fibre Channel HBA

  • N2XX-AIPCI01 Intel X520 Dual Port 10Gb SFP+ Adapter

Network and Management I/O

The system can have one or two system I/O controllers (SIOCs). These provide rear-panel management and data connectivity.

  • Two 40-Gb SFP+ ports on each SIOC.

  • One 10/100/1000 Ethernet dedicated management port on each SIOC.

Each server node has one rear-panel KVM connector that can be used with a KVM cable, which provides two USB connectors, one VGA DB-15 connector, and one serial DB-9 connector.

Power

Two or four power supplies, 1050 W each (hot-swappable and redundant as 2+2).

Cooling

Four hot-swappable internal fan modules that provide front-to-rear cooling. Each fan module contains two fans.

In addition, there is one fan in each power supply.

Front Panel Features

The following image shows the front panel features for the Cisco UCS S3260 system:

Figure 1. Front Panel Features


1. Operations panel

2. System Power button/LED

3. System unit identification button/LED

4. System status LED

5. Fan status LED

6. Temperature status LED

7. Power supply status LED

8. Network link activity LED

9. Pull-out asset tag (not visible under front bezel)

10. Internal-drive status LEDs

Rear Panel Features

The following image shows the rear panel features for the Cisco UCS S3260 system:

Figure 2. Rear Panel Features



1. Server bay 1:

   • (Optional) I/O expander, as shown (with Cisco UCS S3260 M4 and M5 server nodes only)

   • (Optional) server node

   • (Optional) drive expansion module

2. Server bay 2:

   • (Optional) server node (Cisco UCS S3260 M4 and M5 shown)

   • (Optional) drive expansion module

3. System I/O controller (SIOC):

   • SIOC 1 is required if you have a server node in server bay 1

   • SIOC 2 is required if you have a server node in server bay 2

4. Power supplies (four, redundant as 2+2)

5. 40-Gb SFP+ ports (two on each SIOC)

6. Chassis Management Controller (CMC) Debug Firmware Utility port (one on each SIOC)

7. 10/100/1000 dedicated management port, RJ-45 connector (one on each SIOC)

8. Not used at this time

9. Not used at this time

10. Solid state drive bays (up to four 2.5-inch SAS SSDs):

    • SSDs in bays 1 and 2 require a server node in server bay 1

    • SSDs in bays 3 and 4 require a server node in server bay 2

11. Cisco UCS S3260 M4 server node label (M4 SVRN)

    Note: This label identifies a Cisco UCS S3260 M4 and M5 server node. The Cisco UCS S3260 M3 server node does not have a label.

12. KVM console connector (one on each server node). Used with a KVM cable that provides two USB, one VGA, and one serial connector.

13. Server node unit identification button/LED

14. Server node power button

15. Server node reset button (resets the chipset in the server node)

Storage Server Components

Server Nodes

The Cisco UCS S3260 system consists of one or two server nodes, each with two CPUs; 128, 256, or 512 GB of DIMM memory; and either a RAID card with up to 4 GB of cache or a pass-through controller. The server nodes can be one of the following:

  • Cisco UCS S3260 M3 Server Node

  • Cisco UCS S3260 M4 Server Node—This node might include an optional I/O expander module that attaches to the top of the server node.

  • Cisco UCS S3260 M5 Server Node—This node might include an optional I/O expander module that attaches to the top of the server node.

Disk Slots

The Cisco UCS S3260 chassis has 4 rows of 14 disk slots on the HDD motherboard and 4 additional disk slots on the HDD expansion tray. The following image shows the disk arrangement for the 56 top-accessible, hot-swappable 3.5-inch 6 TB or 4 TB 7200 rpm NL-SAS drives. A disk slot has two SAS ports, and each is connected to a SAS expander in the chassis.

Figure 3. Cisco UCS S3260 Top View


The following image shows the Cisco UCS S3260 chassis with the 4 additional disk slots on the HDD expansion tray.

Figure 4. Cisco UCS S3260 with the HDD Expansion Tray (Rear View)


If you have two server nodes with two SIOCs, you will have the following functionality:

  1. The top server node works with the left SIOC (Server Slot 1 with SIOC1).

  2. The bottom server node works with the right SIOC (Server Slot 2 with SIOC2).

Beginning with release 3.1(3), the Cisco UCS S3260 system supports Server SIOC Connectivity functionality. If you have one server node with two SIOCs, you can use this functionality to configure the data path through both the primary and auxiliary SIOCs when the chassis has a single server and dual SIOCs.

SAS Expanders

The Cisco UCS S3260 system has two SAS expanders that run in redundant mode and connect the disks at the chassis level to storage controllers on the servers. The SAS expanders provide two paths between a storage controller and a disk, thereby enabling high availability. They provide the following functionality:

  • Manage the pool of hard drives.

  • Configure disk zoning of the hard drives to storage controllers on the servers.

Beginning with release 3.2(3a), Cisco UCS Manager can enable single path access to a disk by configuring a single DiskPort per disk slot. This ensures that the server discovers only a single device and avoids a multi-path configuration.

The following table describes how the ports in each SAS expander are connected to the disks based on the type of deployment.

Port range    Connectivity
1-56          Top-accessible disks
57-60         Disks in the HDD expansion tray


Note

The number of SAS uplinks between the storage controller and the SAS expander can vary based on the type of controller equipped in the server.


Storage Enclosures

A Cisco UCS S3260 system has the following types of storage enclosures:

Chassis Level Storage Enclosures
  • HDD motherboard enclosure—The 56 dual port disk slots in the chassis comprise the HDD motherboard enclosure.

  • HDD expansion tray—The four additional dual-port disk slots in the Cisco UCS S3260 system comprise the HDD expansion tray.


    Note

    The HDD expansion tray is a field-replaceable unit (FRU). The disks remain unassigned upon insertion and can be assigned to storage controllers. For detailed steps on how to perform disk zoning, see Disk Zoning Policies.
Server level Storage Enclosures

Server level storage enclosures are dedicated enclosures that are pre-assigned to the server. These can be one of the following:

  • Rear Boot SSD enclosure—This enclosure contains two 2.5-inch disk slots on the rear panel of the Cisco UCS S3260 system. Each server has two dedicated disk slots. These disk slots support SATA SSDs.

  • Server board NVMe enclosure—This enclosure contains one PCIe NVMe controller.


Note

In the Cisco UCS S3260 system, even though disks can be physically present on the two types of enclosures described above, all the disks are viewed from the host OS as part of one SCSI enclosure. They are connected to SAS expanders that are configured to run as a single SES enclosure.


Storage Controllers

Mezzanine Storage Controllers

The following table lists the storage controller type, firmware type, modes, sharing and OOB support for the various storage controllers.

Table 2.

Storage Controller Type    Firmware Type       Modes            Sharing    OOB Support
UCSC-S3X60-R1GB            Mega RAID           HW RAID, JBOD    No         Yes
UCS-C3K-M4RAID             Mega RAID           HW RAID, JBOD    No         Yes
UCSC-S3X60-HBA             Initiator Target    Pass through     Yes        Yes
UCS-S3260-DHBA             Initiator Target    Pass through     Yes        Yes
UCS-S3260-DRAID            Mega RAID           HW RAID, JBOD    No         Yes

Other storage controllers
SW RAID Controller—The servers in the Cisco UCS S3260 system support two dedicated internal SSDs embedded into the PCIe riser that is connected to the SW RAID Controller. This controller is supported on the Cisco C3000 M3 servers.

NVMe Controller—This controller is used by servers in the Cisco UCS S3260 system for inventory and firmware updates of NVMe disks.

For more details about the storage controllers supported in the various server nodes, see the related service note:

Cisco UCS S3260 Storage Management Operations

The following summarizes the storage management operations that you can perform with the Cisco UCS Manager integrated Cisco UCS S3260 system.

Operation: Disk Sharing for High Availability

Description: The SAS expanders in the Cisco UCS S3260 system can manage the pool of drives at the chassis level. To share disks for high availability, perform the following:

  1. Creating disk zoning policies.

  2. Creating disk slots and assigning ownership.

  3. Associating disk zoning policies to the chassis profile.

See: "Disk Zoning Policies" section in this guide.

Operation: Storage Profiles, Disk Groups, and Disk Group Configuration Policies

Description: You can use Cisco UCS Manager storage profiles and disk group policies to define storage disks, disk allocation, and management in the Cisco UCS S3260 system.

See: "Storage Profiles" section in the Cisco UCS Manager Storage Management Guide, Release 3.2.

Operation: Storage Enclosure Operations

Description: You can swap the HDD expansion tray with a server, or remove the tray if it was previously inserted.

See: "Removing Chassis Level Storage Enclosures" section in this guide.

Disk Sharing for High Availability

Disk Zoning Policies

You can assign disk drives to the server nodes using disk zoning. Disk zoning can be performed on the controllers in the same server or on the controllers on different servers. Disk ownership can be one of the following:
Unassigned

Unassigned disks are those not visible to the server nodes.

Dedicated

If this option is selected, you will need to set the values for the Server, Controller, Drive Path, and Slot Range for the disk slot.


Note

A disk is visible only to the assigned controller.


Beginning with release 3.2(3a), Cisco UCS Manager can enable single path access to a disk by configuring a single DiskPort per disk slot for Cisco UCS S3260 M5 and higher servers. Setting a single path configuration ensures that the server discovers the disk drive only through the single drive path chosen in the configuration. Single path access is supported only for the Cisco UCS S3260 Dual Pass Through Controller (UCS-S3260-DHBA).

Once single path access is enabled, you cannot downgrade to any release earlier than 3.2(3a). To downgrade, disable this feature and assign all the disk slots to both disk ports by configuring the disk path of the disk slots to Path Both in the disk zoning policy, as sketched below.
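
The following is a minimal CLI sketch of reverting a disk slot to Path Both before such a downgrade. The policy name dzp1 and slot number 1 are placeholders, and the scope disk-slot command is an assumption (this guide only shows create disk-slot); the set drivepath command itself is documented in the Creating Disk Slots and Assigning Ownership procedure later in this section.


UCS-A# scope org
UCS-A /org # scope disk-zoning-policy dzp1
UCS-A /org/disk-zoning-policy # scope disk-slot 1
UCS-A /org/disk-zoning-policy/disk-slot # set drivepath path-both
UCS-A /org/disk-zoning-policy/disk-slot* # commit-buffer
UCS-A /org/disk-zoning-policy/disk-slot # 

Repeat the drive path change for every slot in the policy before attempting the downgrade.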

Shared

Shared disks are those assigned to more than one controller. They are specifically used when the servers are running in a cluster configuration, and each server has its storage controllers in HBA mode.


Note

Shared mode cannot be used under certain conditions when dual HBA controllers are used.


Chassis Global Hot Spare

If this option is selected, you will need to set the value for the Slot Range for the disk.
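
As a hedged illustration, the following sketch designates a slot as a chassis global hot spare. The policy name dzp1 and slot number 5 are placeholders; the commands are those documented in the Creating a Disk Zoning Policy and Creating Disk Slots and Assigning Ownership procedures later in this section.


UCS-A# scope org
UCS-A /org # scope disk-zoning-policy dzp1
UCS-A /org/disk-zoning-policy # create disk-slot 5
UCS-A /org/disk-zoning-policy/disk-slot* # set ownership chassis-global-hot-spare
UCS-A /org/disk-zoning-policy/disk-slot* # commit-buffer
UCS-A /org/disk-zoning-policy/disk-slot # 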


Important

Disk migration and claiming orphan LUNs: To migrate a disk zoned to a server (Server 1) to another server (Server 2), you must mark the virtual drive (LUN) as transport ready or perform a hide virtual drive operation. You can then change the disk zoning policy assigned for that disk. For more information on virtual drive management, see the Disk Groups and Disk Configuration Policies section of the Cisco UCS Manager Storage Management Guide.


Creating a Disk Zoning Policy

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope org org-name

Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name .

Step 2

UCS-A /org # create disk-zoning-policy disk-zoning-policy-name

Creates a disk zoning policy with the specified name.

Step 3

UCS-A /org/disk-zoning-policy* # commit-buffer

Commits the transaction to the system configuration.

Example

The following example creates the dzp1 disk zoning policy:


UCS-A# scope org
UCS-A /org # create disk-zoning-policy dzp1
UCS-A /org/disk-zoning-policy*# commit-buffer
UCS-A /org/disk-zoning-policy# 

Creating Disk Slots and Assigning Ownership

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope org org-name

Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name .

Step 2

UCS-A /org # scope disk-zoning-policy disk-zoning-policy-name

Enters the disk zoning policy.

Step 3

UCS-A /org/disk-zoning-policy # create disk-slot slot-id

Creates a disk slot with the specified slot number.

Step 4

UCS-A /org/disk-zoning-policy/disk-slot* # set ownership {chassis-global-hot-spare | dedicated | shared | unassigned}

Specifies the disk ownership to be one of the following:

  • chassis-global-hot-spare—Chassis Global Hot Spare

  • dedicated—Dedicated

    Beginning with release 3.2(3a), Cisco UCS Manager can enable single path access to a disk by configuring a single DiskPort per disk slot. This ensures that the server discovers only a single device and avoids a multi-path configuration.

    Drive Path options are:

    • path-both (Default) - Drive path is zoned to both the SAS expanders.

    • path-0 - Drive path is zoned to SAS expander 1.

    • path-1 - Drive path is zoned to SAS expander 2.

    Use the following command to set the drive path:

    set drivepath {path-0 | path-1 | path-both}

  • shared—Shared

    Note 

    Shared mode cannot be used under certain conditions when dual HBA controllers are used. To view the conditions for Shared mode with a dual HBA controller, see Table 3.

  • unassigned—Unassigned

Step 5

UCS-A /org/disk-zoning-policy/disk-slot* # create controller-ref server-id sas controller-id

Creates a controller reference for the specified server slot.

Step 6

UCS-A /org/disk-zoning-policy/disk-slot # commit-buffer

Commits the transaction.

Table 3. Limitations for Shared Mode for Dual HBA Controller

Server             HDD Tray    Controller    Shared Mode Support
Cisco UCS S3260    No          Dual HBA      Not supported
Cisco UCS S3260    HDD Tray    Dual HBA      Not supported
Pre-Provisioned    HDD Tray    Dual HBA      Not supported

Example

The following example creates disk slot 1, sets the ownership as shared, creates controller references for server slots 1 and 2, and commits the transaction:

UCS-A# scope org
UCS-A /org # scope disk-zoning-policy test
UCS-A /org/disk-zoning-policy* # create disk-slot 1
UCS-A /org/disk-zoning-policy/disk-slot* # set ownership shared
UCS-A /org/disk-zoning-policy/disk-slot* # create controller-ref 1 sas 1
UCS-A /org/disk-zoning-policy/disk-slot* # create controller-ref 2 sas 1
UCS-A /org/disk-zoning-policy/disk-slot* #commit-buffer
UCS-A /org/disk-zoning-policy/disk-slot # 
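
For comparison, the following is a sketch of zoning a slot as dedicated over a single drive path, combining the set ownership, set drivepath, and create controller-ref commands from Steps 4 and 5. The slot, server, and controller numbers are placeholders, and single path access assumes a UCS-S3260-DHBA controller on an M5 or later server, as noted in Disk Zoning Policies.


UCS-A# scope org
UCS-A /org # scope disk-zoning-policy test
UCS-A /org/disk-zoning-policy # create disk-slot 10
UCS-A /org/disk-zoning-policy/disk-slot* # set ownership dedicated
UCS-A /org/disk-zoning-policy/disk-slot* # set drivepath path-0
UCS-A /org/disk-zoning-policy/disk-slot* # create controller-ref 1 sas 1
UCS-A /org/disk-zoning-policy/disk-slot* # commit-buffer
UCS-A /org/disk-zoning-policy/disk-slot # 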


Associating Disk Zoning Policies to Chassis Profile

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope org org-name

Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name .

Step 2

UCS-A /org # create chassis-profile chassis-profile-name

Creates a chassis profile with the specified name.

Step 3

UCS-A /org/chassis-profile* # set disk-zoning-policy disk-zoning-policy-name

Sets the specified disk zoning policy.

Step 4

UCS-A /org/chassis-profile* # commit-buffer

Commits the transaction.

Step 5

UCS-A /org/chassis-profile # associate chassis chassis-id

Associates the disks in the disk zoning policy to the chassis with the specified chassis number.

Example

The following example creates the ch1 chassis profile, sets the disk zoning policy all56shared, commits the transaction, and associates the disks in the all56shared policy with chassis 3:

UCS-A# scope org
UCS-A /org # create chassis-profile ch1
UCS-A /org/chassis-profile* # set disk-zoning-policy all56shared
UCS-A /org/chassis-profile* # commit-buffer
UCS-A /org/chassis-profile # associate chassis 3
UCS-A /org/chassis-profile # 


Disk Migration

Before you migrate a disk zoned to one server to another server, you must mark the virtual drive (LUN) as transport ready or perform a hide virtual drive operation. This ensures that all references from the service profile are removed prior to disk migration. For more information on virtual drives, refer to the "Virtual Drives" section in the Cisco UCS Manager Storage Management Guide, Release 3.2.

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope chassis chassis-num

Enters chassis mode for the specified chassis.

Step 2

UCS-A /chassis# scope virtual-drive-container virtual-drive-container-num

Enters the virtual drive container with the specified number.

Step 3

UCS-A /chassis/virtual-drive-container# scope virtual-drive virtual-drive-num

Enters the virtual drive for the specified virtual drive container.

Step 4

UCS-A /chassis/virtual-drive-container/virtual-drive# set admin-state admin-state

Specifies one of the following admin states for the virtual drive:

  • clear-transport-ready — Sets the state of the virtual drive to no longer be transport ready.

  • delete — Deletes the virtual drive.

  • hide— Choose this option for the safe migration of the virtual drive from one server to another.

    Note 

    All virtual drives on a disk group must be marked as hidden before migrating or unassigning the disks from a server node.

  • transport-ready — Choose this option for the safe migration of the virtual drive from one server to another.

    Note 

    When a virtual drive is marked as transport ready, the storage controller will disable all IO operations on the drive. In addition, after zoning the virtual drive and importing the foreign configuration, the virtual drive will be operational.

Step 5

UCS-A /chassis/virtual-drive-container/virtual-drive# commit-buffer

Commits the transaction to the system configuration.

Example

The following example sets the state of the virtual drive 1001 in the virtual drive container 1 to transport ready:


UCS-A# scope chassis
UCS-A /chassis# scope virtual-drive-container 1
UCS-A /chassis/virtual-drive-container# scope virtual-drive 1001
UCS-A /chassis/virtual-drive-container/virtual-drive# set admin-state transport-ready
UCS-A /chassis/virtual-drive-container/virtual-drive# commit-buffer
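
To use the hide operation instead, the following sketch sets the same virtual drive to the hide admin state listed in Step 4; apart from the admin-state value, it mirrors the example above (container 1 and virtual drive 1001 are the same placeholder values).


UCS-A# scope chassis
UCS-A /chassis# scope virtual-drive-container 1
UCS-A /chassis/virtual-drive-container# scope virtual-drive 1001
UCS-A /chassis/virtual-drive-container/virtual-drive# set admin-state hide
UCS-A /chassis/virtual-drive-container/virtual-drive# commit-buffer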

Storage Enclosure Operations

Removing Chassis Level Storage Enclosures

You can remove the storage enclosure corresponding to the HDD expansion tray in Cisco UCS Manager after it is physically removed. You cannot remove server level or any other chassis level storage enclosures.

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope chassis chassis-id

Enters chassis mode for the specified chassis.

Step 2

UCS-A /chassis # remove storage-enclosure storage-enclosure-name

Removes the chassis level storage enclosure with the specified name.

Example

The following example removes storage enclosure 25 from chassis 2:


UCS-A# scope chassis 2
UCS-A /chassis# remove storage-enclosure 25
UCS-A /chassis# 

SAS Expander Configuration Policy

Creating SAS Expander Configuration Policy

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope org org-name

Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name .

Step 2

UCS-A /org # create sas-expander-configuration-policy sas-expander-configuration-policy-name

Creates a SAS expander configuration policy with the specified policy name.

Step 3

(Optional) UCS-A /org/sas-expander-configuration-policy* # set descr description

(Optional)

Provides a description for the policy.

Step 4

(Optional) UCS-A /org/sas-expander-configuration-policy* # set 6g-12g-mixed-mode disabled|enabled|no-change

(Optional)
Note 

Enabling or disabling 6G-12G Mixed Mode causes a system reboot.

  • Disabled—Connection Management is disabled in this policy, and the SAS expander uses only 6G speeds even if 12G is available.

  • Enabled—Connection Management is enabled in this policy, and it intelligently shifts between 6G and 12G speeds based on availability.

  • No Change (Default)—The pre-existing configuration is retained.

Step 5

UCS-A /org/sas-expander-configuration-policy* # commit-buffer

Commits the transaction to the system configuration.

Example

The following example creates the secp1 SAS expander configuration policy:


UCS-A# scope org
UCS-A /org # create sas-expander-configuration-policy secp1
UCS-A /org/sas-expander-configuration-policy*# set 6g-12g-mixed-mode enabled 
UCS-A /org/sas-expander-configuration-policy*# commit-buffer
UCS-A /org/sas-expander-configuration-policy# 

Deleting a SAS Expander Configuration Policy

Procedure

  Command or Action Purpose
Step 1

UCS-A# scope org org-name

Enters organization mode for the specified organization. To enter the root organization mode, type / as the org-name .

Step 2

UCS-A /org # delete sas-expander-configuration-policy sas-expander-configuration-policy-name

Deletes a SAS expander configuration policy with the specified policy name.

Step 3

UCS-A /org* # commit-buffer

Commits the transaction to the system configuration.

Example

The following example deletes the secp1 SAS expander configuration policy:


UCS-A# scope org
UCS-A /org # delete sas-expander-configuration-policy secp1
UCS-A /org*# commit-buffer
UCS-A /org # 