Cisco BTS 10200 Softswitch

Multiple EMS Spindle Installation Guide

Table Of Contents

Multiple EMS Spindle Installation Guide

Understanding the Multiple EMS Spindle Installation

Prerequisites

Provisioning

Installation and Disk Replacement Procedures

Element Management System External Disk Replacement

Element Management System External Disk Installation

Element Management System External Disk Deactivation


Multiple EMS Spindle Installation Guide


Revised: February 13, 2008

This document describes the Multiple Element Management System (EMS) Spindle installation for Release 6.0 of the Cisco BTS 10200 Softswitch and explains how to use it.

Understanding the Multiple EMS Spindle Installation

The Multiple EMS Spindle installation introduces additional EMS spindles into the BTS 10200 system. The added spindles reduce the loss of provisioning throughput caused by input and output (I/O) contention, an issue that has been a source of customer concern.

The Multiple EMS Spindle installation enables the BTS 10200 to optionally support additional EMS spindles in a seamless, transparent manner with respect to existing (and future) functionality. The number of spindles that can be added depends on the host. The following combination is supported:

1280/1290—Four external drives via a single external drive chassis per EMS host

An installation script converts the BTS 10200 to use the additional spindles, including the creation of file systems on them. The current directory structure is used as is, with various subdirectories becoming mount points. The script recognizes the host type and the number of additional spindles and performs the conversion accordingly. A corresponding uninstall script removes the additional spindles from use. The multiple spindle install and uninstall scripts are independent of any existing BTS install and uninstall scripts. The install script needs to be rerun only after a fresh OS installation.
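
For illustration only, overlaying an existing subdirectory with a ZFS file system generally takes the following form; the dataset name and mount point shown here are hypothetical, and the installation script performs the equivalent steps for you:

<hostname># zfs create -o mountpoint=/opt/oradata Orapool/oradata   # hypothetical dataset and mount point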

With a 1280/1290, the following combination of added external chassis and drives (one chassis and four drives added per EMS host) is supported as shown in Table 1.

Table 1 Supported Combinations

Qty   Part Number          Description
2     XTA3120R01A0T146Z    Sun StorageTek 3120 Rack Ready, 146 GB (2 x 73 GB 10K rpm disks), Ultra320 SCSI JBOD, 2 AC power supplies, RoHS-5 Compliant
4     XTA-SC1NC-73G10K     Drive in SE3000 carrier, 73 GB 10K RPM, U320 SCSI LVD, RoHS-6 Compliant


In addition, two of the SCSI cables shown in Table 2 are required (either 0.8, 2, 4, or 10 meters, to accommodate local requirements).

Table 2 SCSI Cables

Qty   Part Number   Description
2     X1132A-Z      SCSI Cable, 0.8 meter, SCSI-3 to VHDCI, RoHS-6 Compliant
2     X3832A-Z      UltraSCSI Cable, 2 meter, HD68 to VHDCI, RoHS-6 Compliant
2     X3830A-Z      4-meter HD68 to VHDCI68 differential Ultra SCSI cable, RoHS-6 Compliant
2     X3831A-Z      10-meter HD68 to VHDCI68 differential Ultra SCSI cable, RoHS-6 Compliant


Other part numbers and combinations of the specified part numbers are not supported.

Constraints:

Support is limited to the combinations of hosts and additional spindles indicated above.

An EMS host may have either a single pair of spindles, or a single pair of spindles plus the additional spindles.

The mate EMS host may likewise have either a single pair of spindles, or a single pair of spindles plus the additional spindles.

Prerequisites

The Multiple EMS Spindle installation depends on BTS 10200 capabilities provided by Solaris 0606, which is mandatory for BTS Release 5.0.2 MR1 and later.

Provisioning

The implementation of the Multiple EMS Spindle installation provides a single executable script that, when executed, migrates the Oracle database from its current location on the /opt partition to a new ZFS file system on a new disk pool. The disk pool consists of the four additional spindles, as dictated by system requirements, arranged as two mirrored pairs that are striped together. The new file system (or file systems) retains exactly the same hierarchy as the current directory structure. The current directory entries become mount points for the new file system(s) and persist across reboots and operating system (OS) or application upgrades.
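
For reference, a pool built from four spindles arranged as two mirrored pairs corresponds to a ZFS command of the following general form. This is a sketch only; the script creates the pool for you, and the pool name and device names shown are taken from the examples later in this document and may differ on your system.

<hostname># zpool create Orapool mirror c2t8d0 c2t9d0 mirror c2t10d0 c2t11d0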

Installation and Disk Replacement Procedures

The following sections contain the installation and disk replacement procedures associated with the Cisco BTS 10200 Softswitch Multiple EMS Spindle installation.

Element Management System External Disk Replacement


Note This section applies only to the replacement of a disk drive in a StorageTek 3120 JBOD external drive chassis as described in this installation guide, where the Oracle database has been moved to the external spindles using the referenced db2ext script as directed.


Before executing the following procedure, the db2ext script must have been run with the single parameter move to create the file system used for the Oracle database.

Example:

/opt/utils/db2ext move

For more information on this topic, refer to the "Element Management System External Disk Installation" section.
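
Before starting the replacement procedure, you can optionally confirm that the external pool is present and imported. This check is a suggestion only and is not part of the documented procedure; the pool name is the one used in the examples below.

<hostname># zpool list Orapool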


Step 1 Log in as root.

Step 2 Determine which disk needs to be replaced. A disk needing replacement has a status other than ONLINE, as shown for c2t8d0 in the following example. The drive to replace corresponds to the tNN portion of the device name and is labeled accordingly on the drive chassis; in the example it is the drive labeled d8.

Example:

<hostname># zpool status
  pool: Orapool
 state: DEGRADED
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        Orapool      DEGRADED     0     0     0
          mirror     DEGRADED     0     0     0
            c2t8d0   FAULTY       0     0     0
            c2t9d0   ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0


Note If all platforms on this node are ACTIVE, then you need to switch over to the STANDBY side using the CLI command.


Step 3 Execute the platform stop all command.

<hostname># platform stop all

Step 4 Deactivate the defective drive prior to removal using the following two commands:

<hostname># zpool offline Orapool c2t8d0
<hostname># cfgadm -c unconfigure c2::dsk/c2t8d0
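
Optionally, before physically removing the drive, you can confirm that it has been taken offline and unconfigured. These checks are suggestions only; the controller and target numbers are those from the example and may differ on your system.

<hostname># zpool status Orapool
<hostname># cfgadm -al c2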

Step 5 Replace the defective drive and then activate it with the following two commands:

<hostname># cfgadm -c configure c2::dsk/c2t8d0
<hostname># zpool online Orapool c2t8d0

The newly replaced drive is automatically resilvered into the mirror. The progress of the mirroring operation can be monitored with the zpool status command given previously. When the resilver is complete, the output appears as follows:

<hostname># zpool status Orapool
  pool: Orapool
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Oct 23 12:22:53 2007
config:

        NAME         STATE     READ WRITE CKSUM
        Orapool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0


For additional information, consult the Solaris ZFS Administration Guide, available at http://docs.sun.com/app/docs. For cabling instructions for the chassis itself, refer to the Sun StorEdge 3120 SCSI Array Quick Installation Guide shipped with your storage array.

Element Management System External Disk Installation

You must first shut down the EMS software on the standby host to which you are attaching the array. The following steps will make the storage array visible to the Solaris OS.


Step 1 Execute the platform stop all command.

<hostname># platform stop all

Step 2 Execute the touch command to create the /reconfigure file, which causes the OS to reconfigure devices on the next boot.

<hostname># touch /reconfigure

Step 3 Execute a sync and shut down the host as shown.

<hostname># sync; shutdown -y -g0 -i 5
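
After the host boots with the array cabled and powered on, you can optionally verify that the four external drives are visible to the Solaris OS. These checks are suggestions only and are not part of the documented procedure.

<hostname># echo | format
<hostname># cfgadm -al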


For cabling instructions for the chassis, refer to the Sun StorEdge 3120 SCSI Array Quick Installation Guide shipped with your storage array. The array should be installed as a single-bus, single-initiator array, with the short bridge cable connecting ports 1 and 4 (ports are numbered left to right as viewed from the rear). The long SCSI cable connects the 1280 EMS host to port 2 on the array, the second port from the left as viewed from the rear.

The db2ext script is delivered in the BTSemtools package and resides in the /opt/utils directory. The script is self-documenting; enter /opt/utils/db2ext with no parameters to display its usage. Run the script with the move parameter to create the file system used for the Oracle database and move the database onto it, as shown in the following example and procedure.

Example:

/opt/utils/db2ext move


Step 1 Execute the platform stop all command.

<hostname># platform stop all

Step 2 Use the db2ext script with the move parameter to move the Oracle database.

<hostname># db2ext move

Step 3 Execute the platform start command.

<hostname># platform start

Step 4 Switch over the active platform and repeat Step 1 through Step 3 on the other (now standby) host.
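
After completing the procedure on both hosts, you can optionally confirm that the database file systems now reside on the external pool. These checks are suggestions only; the exact dataset names created by the db2ext script are not listed in this document.

<hostname># zpool status Orapool
<hostname># zfs list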


Element Management System External Disk Deactivation

If for any reason you need to move the Oracle database from the external drive array back to its initial internal drive location, do the following:


Step 1 Execute the platform stop all command.

<hostname># platform stop all

Step 2 Use the db2ext script with the unmove parameter to move the Oracle database back to its initial internal drive location.

<hostname># db2ext unmove

Step 3 Execute the platform start command.

<hostname># platform start

Step 4 Switch over the active platform and repeat Step 1 through Step 3 on the other (now standby) host.
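
After completing the procedure on both hosts, you can optionally confirm that the Oracle file systems are no longer mounted from the external pool and that the database is back on the internal /opt partition. These checks are suggestions only.

<hostname># zfs list
<hostname># df -k /opt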