From release Cisco VIM 3.0.0, the expand-storage command is available to add disks to a storage node, so as to expand an already deployed Ceph storage cluster in the OpenStack pod.
Note: The expand-storage command is supported only on storage nodes with a combination of HDD and SSD drives.
You must install disk drives based on how the node was originally deployed. For example, if a storage node has a 1 SSD : 4 HDD ratio, insert disks in that same ratio to expand the storage. The expand-storage command looks for blocks of disks in that ratio during the expansion, and installs one block at a time.
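Before inserting disks, it can help to confirm the node's current SSD-to-HDD mix. One hedged way, assuming shell access to the storage node, is to read the ROTA flag from lsblk (0 = non-rotational/SSD, 1 = rotational/HDD). The sample output below is inlined for illustration only:

```shell
# Illustrative sample of `lsblk -d -n -o NAME,ROTA` from a 1 SSD / 4 HDD node;
# on a real node, capture it instead with: lsblk_out=$(lsblk -d -n -o NAME,ROTA)
lsblk_out="sda 0
sdb 1
sdc 1
sdd 1
sde 1"
# ROTA=0 marks non-rotational (SSD) devices, ROTA=1 rotational (HDD) devices
ssds=$(echo "$lsblk_out" | awk '$2 == 0 {n++} END {print n+0}')
hdds=$(echo "$lsblk_out" | awk '$2 == 1 {n++} END {print n+0}')
echo "SSD:HDD ratio = ${ssds}:${hdds}"
```

The device names and counts here are hypothetical; the point is only the ROTA-based counting.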
Workflow
- Use ciscovim list-nodes to find a node with the role block_storage.
- Insert a block of disks, matching your original storage deployment, into the target node.
- Log in to the CIMC of the target node and navigate to the storage panel.
- Verify that no disks report errors.
- If any disk reports a foreign configuration, clear the configuration.
- Enable JBOD if a RAID controller is used.
- Run cloud-sanity and ensure that no failure occurs.
- Run osdmgmt check-osds to record the state and number of the OSDs currently installed.
- Run expand-storage with the name of the target storage node.
- Run osdmgmt check-osds again to check the state and number of the OSDs.
- Compare the outputs of the two osdmgmt check-osds runs and verify that the new disks were added as additional OSDs.
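The CLI portion of the workflow above can be sketched as a dry run; commands are echoed rather than executed, so the sequence can be reviewed before running it on a management node. The cloud-sanity arguments shown are an assumption; confirm them with ciscovim help cloud-sanity.

```shell
# Dry-run sketch of the expansion workflow for one target node.
node="i13-27-test"
run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute

run ciscovim list-nodes                                      # find block_storage nodes
run ciscovim cloud-sanity create test all                    # assumed syntax; verify locally
run ciscovim osdmgmt create check-osds                       # baseline OSD state/count
run ciscovim expand-storage "$node" --setupfile setup_data.yaml
run ciscovim osdmgmt create check-osds                       # post-expansion OSD state/count
```

The CIMC checks (disk errors, foreign configuration, JBOD) remain manual steps in the CIMC storage panel.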
The expand-storage command runs in the same manner as other ciscovim commands, with several steps executed in sequence. The command execution stops if any step fails. You can view the logs once the command execution is complete. The steps of the expand-storage command are:
- Hardware validations
- Baremetal
- CEPH for expansion
- VMTP
Command Help
$ ciscovim help expand-storage
usage: ciscovim expand-storage --setupfile SETUPFILE [-y] <node>
Expand storage node capacity
Positional arguments:
<node> Expand Storage capacity of a storage node
Optional arguments:
--setupfile SETUPFILE <setupdata_file>. Mandatory for any POD management
operation.
-y, --yes Yes option to perform the action
Workflow command examples:
To expand the storage of node i13-27-test, first get the current number and state of the OSDs in the cluster.
$ ciscovim list-nodes
+-------------+--------+---------------+---------------+
| Node Name | Status | Type | Management IP |
+-------------+--------+---------------+---------------+
| i13-20 | Active | control | 15.0.0.7 |
| i13-21 | Active | control | 15.0.0.8 |
| i13-22 | Active | control | 15.0.0.5 |
| i13-23 | Active | compute | 15.0.0.6 |
| i13-24 | Active | compute | 15.0.0.10 |
| i13-25 | Active | block_storage | 15.0.0.11 |
| i13-26 | Active | block_storage | 15.0.0.9 |
| i13-27-test | Active | block_storage | 15.0.0.4 |
+-------------+--------+---------------+---------------+
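To pick expansion candidates from this output, the block_storage rows can be filtered with standard text tools. A minimal sketch, with a few rows of the table above inlined for illustration (on a management node, pipe the live ciscovim list-nodes output instead):

```shell
# Extract the names of block_storage nodes from `ciscovim list-nodes` output.
list_nodes_out='| i13-23      | Active | compute       | 15.0.0.6      |
| i13-25      | Active | block_storage | 15.0.0.11     |
| i13-26      | Active | block_storage | 15.0.0.9      |
| i13-27-test | Active | block_storage | 15.0.0.4      |'
# Split on "|"; field 4 is the Type column, field 2 the Node Name column
storage_nodes=$(echo "$list_nodes_out" | awk -F'|' '$4 ~ /block_storage/ {gsub(/ /, "", $2); print $2}')
echo "$storage_nodes"
```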
ciscovim osdmgmt show check-osds --id <id>
+--------------------+-------------+---------------+-----------+---------+
| Message | Host | Role | Server | State |
+--------------------+-------------+---------------+-----------+---------+
| Overall OSD Status | i13-25 | block_storage | 15.0.0.11 | Optimal |
| | i13-26 | block_storage | 15.0.0.9 | Optimal |
| | i13-27-test | block_storage | 15.0.0.4 | Optimal |
| | | | | |
| Number of OSDs | i13-25 | block_storage | 15.0.0.11 | 10 |
| | i13-26 | block_storage | 15.0.0.9 | 10 |
| | i13-27-test | block_storage | 15.0.0.4 | 12 |
+--------------------+-------------+---------------+-----------+---------+
+-------------+--------+--------+----+------------+-----------+-----------
| Host | OSDs | Status | ID | HDD Slot | Path | Mount
+-------------+--------+--------+----+------------+-----------+-----------
.
omitted for doc
.
| i13-27-test | osd.2 | up | 2 | 4 (JBOD) | /dev/sda1 |
| | osd.5 | up | 5 | 3 (JBOD) | /dev/sdb1 |
| | osd.8 | up | 8 | 6 (JBOD) | /dev/sdc1 |
| | osd.11 | up | 11 | 2 (JBOD) | /dev/sdd1 |
| | osd.14 | up | 14 | 5 (JBOD) | /dev/sde1 |
| | osd.19 | up | 19 | 9 (JBOD) | /dev/sdi1 |
| | osd.24 | up | 24 | 10 (JBOD) | /dev/sdj1 |
| | osd.27 | up | 27 | 8 (JBOD) | /dev/sdl1 |
| | osd.28 | up | 28 | 12 (JBOD) | /dev/sdm1 |
| | osd.29 | up | 29 | 11 (JBOD) | /dev/sdn1 |
| | osd.30 | up | 30 | 13 (JBOD) | /dev/sdo1 |
| | osd.31 | up | 31 | 17 (JBOD) | /dev/sdp1 |
+-------------+--------+--------+----+------------+-----------+
Run the expand-storage command
# ciscovim expand-storage i13-27-test --setupfile setup_data.yaml
Perform the action. Continue (Y/N)Y
Monitoring StorageMgmt Operation
. . . . Cisco VIM Runner logs
The logs for this run are available in
<ip>:/var/log/mercury/05f068de-86fd-479c-afda-c54b14ffdd3e
############################################
Cisco Virtualized Infrastructure Manager
############################################
[1/3][VALIDATION: INIT] [ / ] 0min 0sec
Management Node Validations!
.
.
Omitted for doc
.
.
[1/3][VALIDATION: Starting HW Validation, takes time!!!] [ DONE! ]
Ended Installation [VALIDATION] [Success]
[2/3][CEPH: Checking for Storage Nodes] [ DONE! ]
[2/3][CEPH: Creating Ansible Inventory] [ DONE! ]
.
.
Omitted for doc
.
.
[2/3][CEPH: Waiting for server to come back first try] [ DONE! ]
Ended Installation [CEPH] [Success]
VMTP Starts
/home/vmtp/.ssh/id_rsa already exists.
.
.
Omitted for doc
.
.
[3/3][VMTP: INIT] [ DONE! ]
Ended Installation [VMTP] [Success]
The logs for this run are available in
<ip>:/var/log/mercury/05f068de-86fd-479c-afda-c54b14ffdd3e
===========
Check the OSDs
ciscovim osdmgmt create check-osds
+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| action | check-osds |
| command | create |
| created_at | 2019-01-07T19:00:23.575530+00:00 |
| id | adb56a08-fdc5-4810-ac50-4ea6c6b38e3f |
| locator | False |
| osd | None |
| result | |
| servers | None |
| status | not_run |
| updated_at | None |
+------------+--------------------------------------+
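The create call returns an operation id that the later show call needs. The id can be scraped from the Field/Value table; a minimal sketch, with a few rows inlined for illustration (on a management node, pipe the live ciscovim osdmgmt create check-osds output instead):

```shell
# Capture the operation id from the create output's Field/Value table.
create_out='| action     | check-osds                           |
| id         | adb56a08-fdc5-4810-ac50-4ea6c6b38e3f |
| status     | not_run                              |'
# Match the row whose field column is exactly "id" and strip spaces from its value
osd_check_id=$(echo "$create_out" | awk -F'|' '{gsub(/ /, "", $2)} $2 == "id" {gsub(/ /, "", $3); print $3}')
echo "$osd_check_id"
# then: ciscovim osdmgmt show check-osds --id "$osd_check_id"
```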
ciscovim osdmgmt list check-osds
+--------------------------------------+------------+----------+----------
| ID | Action | Status | Created |
+--------------------------------------+------------+----------+----------
| cd108b85-2678-4aac-b01e-ee05dcd6fd02 | check-osds | Complete | 2019-01-
| adb56a08-fdc5-4810-ac50-4ea6c6b38e3f | check-osds | Complete | 2019-01-|
+--------------------------------------+------------+----------+----------
ciscovim osdmgmt show check-osds --id <id>
+--------------------+-------------+---------------+-----------+---------+
| Message | Host | Role | Server | State |
+--------------------+-------------+---------------+-----------+---------+
| Overall OSD Status | i13-25 | block_storage | 15.0.0.11 | Optimal |
| | i13-26 | block_storage | 15.0.0.9 | Optimal |
| | i13-27-test | block_storage | 15.0.0.4 | Optimal |
| | | | | |
| Number of OSDs | i13-25 | block_storage | 15.0.0.11 | 10 |
| | i13-26 | block_storage | 15.0.0.9 | 10 |
| | i13-27-test | block_storage | 15.0.0.4 | 16 |
+--------------------+-------------+---------------+-----------+---------+
+-------------+--------+--------+----+------------+-----------+-----------
| Host | OSDs | Status | ID | HDD Slot | Path | Mount
+-------------+--------+--------+----+------------+-----------+-----------
.
omitted for doc
.
| i13-27-test | osd.2 | up | 2 | 4 (JBOD) | /dev/sda1 |
| | osd.5 | up | 5 | 3 (JBOD) | /dev/sdb1 |
| | osd.8 | up | 8 | 6 (JBOD) | /dev/sdc1 |
| | osd.11 | up | 11 | 2 (JBOD) | /dev/sdd1 |
| | osd.14 | up | 14 | 5 (JBOD) | /dev/sde1 |
| | osd.19 | up | 19 | 9 (JBOD) | /dev/sdi1 |
| | osd.24 | up | 24 | 10 (JBOD) | /dev/sdj1 |
| | osd.27 | up | 27 | 8 (JBOD) | /dev/sdl1 |
| | osd.28 | up | 28 | 12 (JBOD) | /dev/sdm1 |
| | osd.29 | up | 29 | 11 (JBOD) | /dev/sdn1 |
| | osd.30 | up | 30 | 13 (JBOD) | /dev/sdo1 |
| | osd.31 | up | 31 | 17 (JBOD) | /dev/sdp1 |
| | osd.32 | up | 32 | 15 (JBOD) | /dev/sdq1 |
| | osd.33 | up | 33 | 14 (JBOD) | /dev/sdr1 |
| | osd.34 | up | 34 | 16 (JBOD) | /dev/sds1 |
| | osd.35 | up | 35 | 7 (JBOD) | /dev/sdt1 |
+-------------+--------+--------+----+------------+-----------+
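Comparing the two check-osds runs for the target node confirms the expansion: i13-27-test went from 12 to 16 OSDs (osd.32 through osd.35 are new). With a 1 SSD / 4 HDD block, only the HDDs appear as new OSDs; the SSD typically serves as the journal device, though that depends on the deployment. The arithmetic as a sketch:

```shell
# OSD counts for i13-27-test taken from the two check-osds outputs above
before=12
after=16
added=$((after - before))
echo "New OSDs added: $added"   # the four HDDs of one inserted 1 SSD / 4 HDD block
```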