GSS Installation Management

This chapter provides information and procedures to perform installations and removals (uninstallations) of the GTPP Storage Server (GSS) software application with all of its various components.
This chapter also includes procedures for upgrading the GSS software application and the PostgreSQL database.
Important: It is recommended that you select the deployment and configuration that best match your service requirements. All elements must be set up prior to attempting any of the procedures detailed in this chapter. To perform any of the procedures listed in this chapter, you must be logged into the server as a root user.
 
Installation First Steps
The following procedure is relevant for both stand-alone and cluster nodes.
Before you begin the installation process, there are four steps you should take to ensure a quick and successful installation of the GSS. Following completion of these steps, you will need to unpack the compressed GSS application components.
 
Step 1 - Verifying System Requirements
This section lists the basic checks needed to verify the system. To display the operating system and hardware details, enter:
uname -a
Refer to the System Requirements and Recommendations section in the Overview chapter of this guide to confirm that your system meets the minimum requirements.
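For example, on a Solaris host the output looks similar to the following; the hostname, release, and platform values shown here are illustrative:
SunOS gss-host 5.10 Generic_147440-01 sun4u sparc SUNW,Sun-Fire-V240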
 
Step 2 - Verifying Hardware Status
Ensure that the system hardware has been provisioned properly for your application, including the hard disk partitioning. To display the disk partitions and their usage, enter:
df -kh
Refer to the hard disk partitioning recommendations outlined in the GSS Hardware Sizing and Provisioning Guidelines section in the Overview chapter of this guide.
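Sample output; the device names, sizes, and mount points are illustrative:
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0t0d0s0       20G    12G   7.8G    61%    /
/dev/dsk/c0t0d0s3      100G    15G    84G    16%    /packages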
 
Step 3 - Setting the System Environment Configuration
This step is required to define how the PostgreSQL database engine processes, stores, and retrieves information contained in the various databases stored using the UNIX file subsystem.
Failure to configure these settings may cause data loss and will, at a minimum, cause operational errors.
Use a text editor to add the following values to the bottom of the system file in the /etc directory, then complete Step 4 before beginning the installation of the GSS application components. (A sketch of one way to apply these settings follows the list of values.)
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
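The following is a minimal sketch of one way to apply these settings on a Solaris host; the backup filename is illustrative. Verify the result before rebooting.
# back up the current kernel configuration file (illustrative name)
cp /etc/system /etc/system.pre-gss
# append the GSS settings shown above
cat >> /etc/system <<'EOF'
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
EOF
# confirm the entries were added
grep -E 'msgsys|shmsys|semsys' /etc/system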
 
Step 4 - Enabling the Database Environment
After adding the above values to the system file in the /etc directory, restart the system before installation of the GSS application and components. Enter the UNIX command:
reboot
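On Solaris systems, an equivalent and more graceful alternative is the shutdown command; the zero-second grace period shown here is illustrative:
shutdown -y -g0 -i6    # confirm automatically, no grace period, reboot to run level 6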
Once you have completed the installation preparation, you are ready to unpack the compressed GSS software files. This process is explained in the next section.
 
Unpacking the Compressed GSS
The components that comprise the GSS application software are bundled and distributed in a single compressed file package with a .tar.gz extension. Your sales representative will inform you how to download the appropriate GSS package for your requirements.
Step 1
Download the compressed GSS software file as directed by your sales representative.
Step 2
Create a directory to hold the application packages by entering the following command:
mkdir /packages
Important: Remember that within a procedure, information inside angle brackets <variable> represents a variable that can be defined by either the user or generated by the system. For example: Create the /<packages> directory to hold the application packages.
Step 3
Copy the compressed GSS file into the /<packages> directory.
Step 4
Unzip the compressed file by entering the following command:
gunzip gss_<version>.tar.gz
<version> is the version number of the GSS software distributed in the compressed tar file. For example, gss_8.0.71.tar.gz.
Step 5
Locate the tar file gss_<version>.tar in the /<packages> directory and untar the file by entering the following command:
tar -xvf gss_<version>.tar
During the untar process, a /gss_<version> directory (for example: /gss_8.0.6x) is created in the /<packages> directory:
# ls -al
total 205798
drwxr-xr-x 3 root other 512 Dec 19 07:03 .
drwxr-xr-x 28 root root 1024 Dec 19 07:22 ..
drwxr-xr-x 2 1460 100 1024 Dec 19 07:23 gss_8_x_xx
-rw-r--r-- 1 root other 105292800 Dec 17 15:59 gss_8_x_xx.tar
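Putting Steps 2 through 5 together, a typical unpack sequence looks like the following; the version number 8.0.71 is illustrative:
mkdir /packages
cd /packages
gunzip gss_8.0.71.tar.gz       # produces gss_8.0.71.tar
tar -xvf gss_8.0.71.tar        # creates the gss_8.0.71 directory
ls -al                         # verify the new directory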
Step 6
Change to the /<packages>/gss_<version> directory to confirm the presence of the following files:
drwxrwxr-x 2 1071 100 512 Dec 12 15:01 .
drwxr-xr-x 3 root other 512 Dec 15 09:54 ..
-r-xr-xr-x 1 1071 100 1889 Dec 12 15:01 Global_Gss_Unistall.sh
-r--r--r-- 1 1071 100 1094 Dec 12 15:01 README_INSTALL
-r--r--r-- 1 1071 100 4459 Dec 12 15:01 README_UNINSTALL
-rwxr-xr-x 1 1071 100 118784 Dec 12 15:01 StarentGss.tar
-r-xr-xr-x 1 1071 100 4381 Dec 12 15:01 cluster_db_upgrade
-r-xr-xr-x 1 1071 100 103111 Dec 12 15:01 create_gss_instance
-rwxrwxr-x 1 1071 100 13414400 Dec 12 15:00 db.tar
-r--r--r-- 1 1071 100 1168 Dec 12 15:01 gss.env
-rwxrwxr-x 1 1071 100 96828928 Dec 12 15:01 gss.tar
-r--r--r-- 1 1071 100 6459 Dec 12 15:01 gss_db.sql
-r--r--r-- 1 1071 100 1927 Dec 12 15:01 gssclusterconfig
-r-xr-xr-x 1 1071 100 19751 Dec 12 15:01 inst
-r-xr-xr-x 1 1071 100 3224 Dec 12 15:01 inst_db
-r-xr-xr-x 1 1071 100 75779 Dec 12 15:01 inst_serv
-r-xr-xr-x 1 1071 100 2835 Dec 12 15:01 make_gss_instance
-r-xr-xr-x 1 1071 100 4463 Dec 12 15:01 make_postgres_instance.sh
-r--r--r-- 1 1071 100 1328 Dec 12 15:01 nvpair.dtd
-r-xr-xr-x 1 1071 100 1513 Dec 12 15:01 postgresctl
-r--r--r-- 1 1071 100 1012 Dec 12 15:01 sc_event.dtd
-r--r--r-- 1 1071 100 1067 Dec 12 15:01 sc_reply.dtd
 
Step 7
To install multiple instances of GSS, refer to the Multiple Instances of GSS section.
 
Complete GSS
This section includes procedures for installing, uninstalling, and upgrading the complete GSS application in both stand-alone and cluster deployments.
 
 
Installing the Complete GSS - Stand-alone Node
This section describes the process for installing the GSS server application, and all of the associated GSS components, for a stand-alone deployment.
 
Using the Installation Script
Installation is accomplished using the inst_serv script. It provides a menu-driven interface with question prompts. Most prompts display default values or information derived from the server's current setup, such as IP addresses for configured interfaces.
The following information will help you use the installation script most effectively:
Ctrl-C aborts the installation process at any time during the procedure.
The information from the prompts is used to generate the GSS configuration file (gss.cfg). This file can be changed at any time after the installation.
Important: It is recommended that you fill in path prompts only after you have created the directories to be used.
 
Installation Procedure - Stand-alone Node
The following procedure assumes that you are logged in to the GSS server with root privileges and that you are starting at the root directory level.
Step 1
Change to the /<packages>/gss_<version> directory where you stored the GSS application software in step 5 of the previous section.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of the operating system installed on the system. If it does not match the requirements in the Minimum System Requirements for Stand-alone Deployment section, the script aborts the GSS installation.
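To check the installed operating system release yourself before running the script, the standard Solaris commands are:
cat /etc/release    # displays the full Solaris release banner
uname -r            # displays the kernel release, for example 5.10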
The following appears, with pauses for validation, after entering the inst_serv command:
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Standalone Mode Installation [in standalone environment]...
Step 3
If you did not make the changes to the system file, then abort now (CTRL-C), make the changes to the system file, and then reboot. After rebooting, begin the installation procedure again.
If you made changes to the system file as in the Installation First Steps section, then continue to the next step.
GTPP Storage Server installation directory
Path where Gss will be installed :/opt
GSS Installation dir [/opt] ? </home/export/install_8_0_xx>
Step 4
Enter the name of the directory where the GSS active components are to be installed, for example /<install_dir>. It is recommended that you locate this directory at the root level. The installation script creates the directory if needed.
Shortly after typing /<install_dir> and pressing Enter, the following appears:
Entering n will save configuration values and take you to next configuration, To change the default values, enter option number
*** PostgreSQL installation configuration ***
1) PostgreSQL port : 5432
2) PostgreSQL login : postgres
3) PostgreSQL passwd : postgres
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Important: All values that appear initially for this menu are system defaults and you do not need to make changes if the values are acceptable.
Step 5
Enter the line number to change a parameter value, if needed. Then enter n to save changes (if made) or defaults and move to the next menu.
*** GSS Configuration Parameters ***
1) File Format for data files : starent
2) Hard Limit Interval for File Generation (mins) : 0
3) Support for LRSN rewrite : n
4) Encoding of IP Address in binary format : n
5) Enable redundant data file support : n
6) GSN Location : GSN
p) Go back to previous menu
n) Proceed to next configuration
a) Abort Installation
Important: The GSS and FileGen are set to run in archive mode by default. The archive mode, used in deployments that do not include CGFs, instructs the server to save records to a file.
Step 6
Enter the number of a parameter to change its value, if needed. Then enter n to save changes or defaults and move to the next menu.
*** Network Interface Configurations ***
Currently configured IP interfaces on the machine : 10.8.1.205
1 ) 10.1.1.111
2 ) 123.1.2.33
p ) Go back to previous menu
n ) Proceed to next configuration
a ) Abort Installation
Enter your choice : n
Important: Note that the script has detected both the number of interfaces and their IP addresses.
Step 7
Pressing n completes the menu-driven portion of the installation process and displays the configuration that you have created.
========================================================================
Standalone Mode Installation
========================================================================
GSS installation path : /<install_dir>/gss
*** PostgreSQL Configurations ***
PostgreSQL port : 5432
PostgreSQL login : postgres
PostgreSQL passwd : postgres
**** GSS Configurations ***
File Format for data files : starent
Hard Limit Interval for File Generation : 0
GTPP Dictionary : custom1
Support for LRSN rewrite : n []
Encoding of IP Address in binary format : n
GSN Location : GSN
Enable redundant data file support : n
*** Network Host Configurations ***
IP Address of the machine to be used : 10.1.1.111
========================================================================
You are given the opportunity to modify the configuration that you have created. (The values displayed above were entered for illustration and not as recommendations.)
Do you want to Modify Configuration [n] ? n
Step 8
Press y to return to the menus and change the configuration or press n to continue the installation process.
Installing GSS..... Please wait.....
Extracting perl tar... Done.
Add following entry to crontab (if not already present) to remove processed data files in /<install_dir>/gss/data after storage period of 7 day(s)
0 * * * * /<install_dir>/gss/bin/cleanup.sh >> /<install_dir>/gss/log/cleanup.log 2>&1
To start Process Monitor Tool along with GSS and Filegen (if not started already) : execute "/<install_dir>/gss/serv start"
To get help on "GSS": execute "/<install_dir>/gss/serv help"
For additional info and performance tuning please read README & GSS User Guide in doc directory
Do You Want To Start GSS : [y] ? y
This is the last action that you must take to complete the installation process.
Step 9
Press Enter to accept the yes default and start the GSS, or enter n to complete the installation without starting the GSS.
This will start Process Monitor Tool along with GSS and Filegen using params listed in /<install_dir>/gss/etc/gss.cfg
Please see log/psmon.log file for log messages
Starting Process Monitor Tool...
Done.
Capturing status, please wait for a while...
======================================================================
0 1118 12:04:16 TS 59 0:00 <install_dir>/gss/bin/gssfilegen 1
0 1015 12:04:10 TS 59 0:00 /usr/bin/bash <install_dir>/gss/serv start 29661
0 1102 12:04:15 TS 59 0:00 <install_dir>/gss/lib/perl5.8.5/bin/perl <install_dir>/gss/psmon --daemon --cro 1
0 1113 12:04:16 TS 59 0:00 <install_dir>/gss/bin/gss 1
======================================================================
GTPP Storage Server Version 8.0.xx installation done.
The status display indicates that GSS, FileGen, PSMON, and PostgreSQL have all been started. If nothing displays, turn to the Troubleshooting the GSS section in the GTPP Storage Server Administration chapter. In most cases, if the other components have started, PostgreSQL has also started.
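The installer output in the previous step suggests a crontab entry for removing processed data files. A minimal sketch of adding it as root follows; the /opt/gss path is illustrative, so substitute your actual <install_dir>:
# view the current root crontab
crontab -l
# append the cleanup entry suggested by the installer
( crontab -l 2>/dev/null; echo '0 * * * * /opt/gss/bin/cleanup.sh >> /opt/gss/log/cleanup.log 2>&1' ) | crontab -
# confirm the entry was added
crontab -l | grep cleanup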
Step 10
Optionally, verify the installation by changing to the installation directory and listing its contents:
# cd /<install_dir>
# ls -al
total 16
drwxr-xr-x 4 root other 512 Dec 9 17:29 .
drwxr-xr-x 11 root root 512 Dec 9 17:28 ..
-rwxrwxrwx 1 root other 3782 Dec 9 17:28 StandaloneGSSUninstall.sh
drwxrwxr-x 12 root root 512 Dec 9 17:29 gss
drwxr-xr-x 6 postgres other 512 Dec 9 17:28 postgres
Step 11
Change to the gss directory and list its contents:
# cd gss
# ls -al
total 232
drwxrwxr-x 12 root root 512 Dec 21 22:43 .
drwxr-xr-x 4 root other 512 Dec 21 22:43 ..
-rw-r--r-- 1 posgres other 533 Aug 17 10:58 .configfile
-r--r--r-- 1 posgres root 1168 Aug 17 10:58 .gss.env
-rw------- 1 root other 5 Dec 21 22:43 .gss.pid
-rw------- 1 root other 5 Dec 21 22:43 .gssfilegen.pid
-rw------- 1 root other 7 Dec 21 22:43 .gssfilegen.seq
-rw-r--r-- 1 root other 0 Dec 21 22:42 .inst_serv.err
-rwxr-xr-x 1 root root 4057 Mar 3 2006 README
drwxrwxr-x 2 root root 512 Dec 21 22:42 bin
drwxrwxr-x 2 root root 512 Sep 8 2004 data
drwxrwxr-x 2 root root 512 May 31 2005 doc
drwxrwxr-x 2 root root 512 Dec 21 22:42 etc
-rwxr-xr-x 1 root other 22480 Dec 21 22:42 gss_ctl
drwxrwxr-x 3 root root 512 Dec 21 22:42 lib
drwxrwxr-x 3 root root 512 Dec 21 22:43 log
-rwxr-xr-x 1 root other 54445 Dec 21 22:42 psmon
-rw-r--r-- 1 root other 4 Dec 21 22:43 psmon.pid
-rwxr-xr-x 1 root other 19850 Dec 21 22:42 serv
drwxrwxr-x 2 root root 512 Oct 10 05:43 sql
drwxrwxr-x 2 root root 512 Oct 10 05:43 template
drwxrwxr-x 2 root root 512 Sep 8 2004 tmp
drwxrwxr-x 3 root root 512 Dec 21 22:42 tools
Step 12
Change to the etc directory and list its contents:
# cd etc
# ls -al
total 42
drwxrwxr-x 2 root root 512 Dec 21 22:42 .
drwxrwxr-x 12 root root 512 Dec 21 22:43 ..
-rw-r--r-- 1 root other 10921 Dec 21 22:42 gss.cfg
-rw-r--r-- 1 root other 2459 Dec 21 22:42 gsslogger.xml
-rw-r--r-- 1 root other 3690 Dec 21 22:42 psmon.cfg
-rw-r--r-- 1 root other 261 Dec 21 22:42 uninstall_config_file
Before working with the GSS, it is recommended to create a write-protected copy of the gss.cfg file and store it in a separate directory. To ensure you remember the configuration for your software version, we suggest that you store it in the /<packages>/gss_<version> directory.
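A minimal sketch of creating that write-protected copy; the paths and version number are illustrative:
cp /<install_dir>/gss/etc/gss.cfg /packages/gss_8.0.71/gss.cfg.orig
chmod 444 /packages/gss_8.0.71/gss.cfg.orig    # make the copy read-only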
 
Installing the Complete GSS - Node 1 in Cluster
This section describes the process for installing the GSS server application, and all of the associated GSS components, on the primary GSS node of the cluster.
Prior to installing the GSS application, ensure that the cluster is installed and configured as needed. For information on installing and configuring the Sun cluster, refer to the Sun documentation.
 
Using the Installation Script
Installation is accomplished using the inst_serv script. It provides a menu-driven interface with question prompts. Most prompts display default values or information derived from the server's current setup, such as IP addresses for configured interfaces.
The following information will help you use the installation script most effectively:
Ctrl-C aborts the installation process at any time during the procedure.
The information from the prompts is used to generate the GSS configuration file (gss.cfg). This file can be changed at any time after the installation.
Important: It is recommended that you fill in path prompts only after you have created the directories to be used.
 
Installation Procedure - Node 1
The following procedure assumes that you are logged in to the GSS server with root privileges and that you are starting from the root directory.
Step 1
Change to the /<packages>/gss_<version> directory where you stored the GSS application software.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of the operating system and cluster software installed on the system. If they do not match the requirements in the Minimum System Requirements for Cluster Deployment section, the script aborts the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command.
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Using cluster for Installation
Cluster Mode Installation (To be used in cluster environment) [n] ? y
Important: Note that the script senses whether the server is a Stand-alone node or a Cluster node. If you made changes to the system file as in the Installation First Steps section, then you can continue. If not, abort the installation using CTRL+C, make the changes to the system file, and then reboot. After rebooting, begin the installation procedure again.
Step 3
Enter y (yes) to continue the installation.
GTPP Storage Server installation directory
Specify the common path for GSS data and logs on all cluster nodes [/sharedgss/gss] ?
Step 4
Press Enter to accept the default directory /sharedgss/gss or enter the name of another directory. Next you are prompted for the location to install the GSS.
Path where Gss will be installed :
GSS Installation dir ? /TEST_GSS/cvserver
Important: In cluster mode, it is highly recommended that you do not install the GSS application in the /opt, /opt/gss, or /opt/postgres directory.
Step 5
Press Enter to accept the default or enter the name of the directory where the GSS active components are to be installed. It is recommended that you put this directory at the root level. The installation script creates the directory if needed.
Shortly after responding to the prompt for the installation directory, the following appears:
Do you want Backup installation for current cluster mode installation [y/n]: [n] ?
Important: This enables backup mode for the GSS node in a cluster deployment.
If you do not want the backup for cluster mode installation, proceed to step 7. Otherwise, continue with the next step.
Step 6
Enter y (yes) to enable node switchover. The installation continues with a menu to configure the PostgreSQL parameters for backup.
*** PostgreSQL configuration for backup Installation ***
1) PostgreSQL port for backup installation :
2) PosrgreSQL login for backup installation :
3) PostgreSQL data directory for backup installation :
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Step 7
Entering n saves the configuration values and takes you to the next configuration menu. To change a default value, enter the option number.
*** PostgreSQL installation configuration ***
1) PostgreSQL port : 5432
2) PostgreSQL login : postgres
3) PostgreSQL passwd : postgres
4) Shared PostgreSQL dir :/sharedpostgres
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Step 8
Entering n saves the configuration values and takes you to the next configuration menu. To change a default value, enter the option number.
Step 9
Enter n to save changes or defaults and move to the next menu.
Important: The GSS and FileGen are set to run in archive mode by default. The archive mode, used in deployments that do not include CGFs, instructs the server to save records to a file.
*** GSS Configuration Parameters ***
1) File Format for data files : starent
2) Hard Limit Interval for File Generation (mins) : 0
3) Support for LRSN rewrite : n
4) Encoding of IP Address in binary format : n
5) Enable redundant data file support : n
6) GSN Location : GSN
7) Specify the GTPP Dictionary : custom6
p) Go back to previous menu
n) Proceed to next configuration
a) Abort Installation
Important: The Specify the GTPP Dictionary option appears only if the Support for LRSN rewrite or Encoding of IP Address in binary format parameter is enabled. Otherwise, the GTPP dictionary is set to default.
Step 10
Enter the number or letter of your choice. Make changes as needed. Then enter n to save changes or defaults and move to the next menu.
*** Network Interface Configuration ***
1) Logical Host IP Addresss :
2) Logical Host Name :
3) Additional Logical Host Name[eg.For Mediation Server] : n
p) Go back to previous menu
n) Proceed to next configuration
a) Abort Installation
Enter your choice : n
Step 11
Enter 1 to configure the logical host IP address:
Enter Your Choice : 1
Please specify already available logical host Address for GSS cluster : ?
Step 12
Enter the IP address of the logical host, press Enter, and move to the next prompt.
Enter Your Choice :
Step 13
Enter the logical host name at the following prompt:
Please specify Logical hostname for above logical host address : ?
Step 14
Pressing n completes the menu-driven portion of the installation process and displays the configuration that you have created.
=======================================================================
Cluster Mode Installation
=======================================================================
GSS installation path : /TEST_GSS/cvserver/gss
Common path for GSS data and logs : /sharedgss/gss
*** Backup PostgresSQL Configurations ***
PostgreSQL port for backup installation : 5477
PosrgreSQL login for backup installation : backpost
PostgreSQL data directory for backup installation : /backpost
*** PostgreSQL Configurations ***
PostgreSQL port : 5432
PostgreSQL login : gsspg
PostgreSQL passwd : gsspg
Shared PostgreSQL dir : /sharedpostgres
**** GSS Configurations ***
File Format for data files : custom7
Hard Limit Interval for File Generation : 2
GTPP Dictionary : custom6
Support for LRSN rewrite : n []
Encoding of IP Address in binary format : n
GSN Location : GSN
Enable redundant data file support : n
*** Network Host Configurations ***
Logical Host IP Addresss : 10.1.1.1
Logical Host Name : gssserv
Additional Logical Host Name : n
=======================================================================
You are given the opportunity to modify the configuration that you have created.
Do you want to Modify Configuration [y/n] : [n] ?
Step 15
Press y to return to the menus and change the configuration or press n or Enter to continue the installation process.
Installing GSS..... Please wait.....
Extracting perl tar...
Done.
Add following entry to crontab (if not already present) to remove processed data files in /sharedgss/gss/data after storage period of 7 day(s)
0 * * * * /TEST_GSS/cvserver/gss/bin/cleanup.sh >> /sharedgss/gss/clustems1_log//cleanup.log 2>&1
For additional info and performance tuning please read README & GSS User Guide in doc directory
GSS Cluster Agent Installation configuration
Proceeding with GSS Cluster Agent Installation
Extracting StarentGss.tar...
Processing package instance <StarentGss> from </TEST_GSS/gss_x_x_xx>
Sun Cluster resource type for Gss server(sparc) 3.0.0,REV=
Sun Microsystems, Inc.
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <StarentGss> [y,n,?] y
Step 16
Enter y(yes) to continue the installation.
Installing Sun Cluster resource type for Gss server as <StarentGss>
## Installing part 1 of 1.
/opt/StarentGss/README.Gss
/opt/StarentGss/bin/Gss_mon_check.ksh
/opt/StarentGss/bin/Gss_mon_start.ksh
/opt/StarentGss/bin/Gss_mon_stop.ksh
/opt/StarentGss/bin/Gss_probe.ksh
/opt/StarentGss/bin/Gss_svc_start.ksh
/opt/StarentGss/bin/Gss_svc_stop.ksh
/opt/StarentGss/bin/Gss_update.ksh
/opt/StarentGss/bin/Gss_validate.ksh
/opt/StarentGss/bin/gethostnames
/opt/StarentGss/bin/gettime
/opt/StarentGss/bin/hasp_check
/opt/StarentGss/bin/simple_probe
/opt/StarentGss/etc/Starent.Gss
/opt/StarentGss/man/man1m/Gss_config.1m
/opt/StarentGss/man/man1m/removeGss.1m
/opt/StarentGss/man/man1m/startGss.1m
/opt/StarentGss/man/man1m/stopGss.1m
/opt/StarentGss/util/Gss_config
/opt/StarentGss/util/removeGss
/opt/StarentGss/util/startGss
/opt/StarentGss/util/stopGss
[ verifying class <none> ]
## Executing postinstall script.
Installation of <StarentGss> was successful.
GTPP Storage Server Version x.x.xx installation done.
GSS installation on node1 is complete. please proceed with installation on node2....
 
Installing the Complete GSS - Node 2 in Cluster
This section describes the process for installing the GSS server application, with all of the GSS components, onto the second GSS node of a cluster.
The following procedure assumes that you are logged in to the node 2 (standby) GSS server with root privileges and that you are starting from the root directory.
Step 1
Change to the /<packages>/gss_<version> directory where you stored the GSS application software.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of the operating system and cluster software installed on the system. If they do not match the requirements in the Minimum System Requirements for Cluster Deployment section, the script aborts the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command.
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Using cluster for Installation
Cluster Mode Installation (To be used in cluster environment)[y/n]: [n] ?
Step 3
Enter y (yes) to perform the GSS installation in cluster mode and continue the installation.
GTPP Storage Server installation directory
Specify the common path for GSS data and logs on all cluster nodes [/sharedgss/gss] ?
Step 4
Enter the name of the directory where files that will be common to all nodes in the cluster are to be stored, for example /<shared_gss_dir>. You must enter the same path that you entered when you installed the GSS on node 1.
Shortly after typing /<shared_gss_dir> and pressing Enter, the configuration created for cluster node 1 appears:
========================================================================
Cluster Mode Installation
========================================================================
GSS installation path : /TEST_GSS/cvserver/gss
Common path for GSS data and logs : /sharedgss/gss
*** Backup PostgresSQL Configurations ***
PostgreSQL port for backup installation : 5477
PostgreSQL login for backup installation : backpost
PostgreSQL data directory for backup installation : /backpost
*** PostgreSQL Configurations ***
PostgreSQL port : 5499
PostgreSQL login : gsspg
PostgreSQL passwd : gsspg
Shared PostgreSQL dir : /sharedpostgres
**** GSS Configurations ***
File Format for data files : custom7
Hard Limit Interval for File Generation : 2
GTPP Dictionary : custom6
Support for LRSN rewrite : n []
Encoding of IP Address in binary format : n
GSN Location : GSN
Enable redundant data file support : n
*** Network Host Configurations ***
Logical Host IP Addresss : 10.4.72.110
Logical Host Name : gssserv
Additional Logical Host Name : n
========================================================================
If you want to change configuration values, do not start GSS after installation. Change required configuration parameters from /TEST_GSS/cvserver/gss/etc/gss.cfg and then start GSS..
Proceed With Installation [y/n] : [y] ?
Step 5
Press Enter to continue the installation.
Installing GSS..... Please wait.....
Extracting perl tar... Done.
Add following entry to crontab (if not already present) to remove processed data files in /sharedgss/gss/data after storage period of 7 day(s)
0 * * * * /TEST_GSS/cvserver/gss/bin/cleanup.sh >> /sharedgss/gss/clustems2_log//cleanup.log 2>&1
For additional info and performance tuning please read README & GSS User Guide in doc directory
GSS Cluster Agent Installation configuration
Proceeding with GSS Cluster Agent Installation
Extracting StarentGss.tar...
Processing package instance <StarentGss> from </TEST_GSS/gss_8_1_dict21>
Sun Cluster resource type for Gss server(sparc) 3.0.0,REV=
Sun Microsystems, Inc.
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <StarentGss> [y,n,?]
Step 6
Enter y (yes) to continue the installation.
Installing Sun Cluster resource type for Gss server as <StarentGss>
## Installing part 1 of 1.
/opt/StarentGss/README.Gss
/opt/StarentGss/bin/Gss_mon_check.ksh
/opt/StarentGss/bin/Gss_mon_start.ksh
/opt/StarentGss/bin/Gss_mon_stop.ksh
/opt/StarentGss/bin/Gss_probe.ksh
/opt/StarentGss/bin/Gss_svc_start.ksh
/opt/StarentGss/bin/Gss_svc_stop.ksh
/opt/StarentGss/bin/Gss_update.ksh
/opt/StarentGss/bin/Gss_validate.ksh
/opt/StarentGss/bin/gethostnames
/opt/StarentGss/bin/gettime
/opt/StarentGss/bin/hasp_check
/opt/StarentGss/bin/simple_probe
/opt/StarentGss/etc/Starent.Gss
/opt/StarentGss/man/man1m/Gss_config.1m
/opt/StarentGss/man/man1m/removeGss.1m
/opt/StarentGss/man/man1m/startGss.1m
/opt/StarentGss/man/man1m/stopGss.1m
/opt/StarentGss/util/Gss_config
/opt/StarentGss/util/removeGss
/opt/StarentGss/util/startGss
/opt/StarentGss/util/stopGss
[ verifying class <none> ]
## Executing postinstall script.
Installation of <StarentGss> was successful.
Do you want to start GSS on cluster nodes [y/n] : [y] ?
Step 7
Enter y (yes) to continue the installation.
Starting GSS on cluster node....
Capturing status, please wait for a while...
=============================================================
0 6523 00:17:37 TS 59 00:01 /TEST_GSS/cvserver/gss/lib/perl5.8.5/bin/perl /TEST_GSS/cvserver/gss/psmon --da 1
0 6539 00:17:38 TS 59 00:00 /TEST_GSS/cvserver/gss/bin/gssfilegen 1
0 6534 00:17:38 TS 59 00:00 /TEST_GSS/cvserver/gss/bin/ gss 1
gsspg 6549 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6486 1 0 00:17:32 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6552 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6548 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6545 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6560 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6550 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6562 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6500 6486 0 00:17:32 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6498 6486 0 00:17:32 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6558 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6543 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6554 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6559 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6556 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6551 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6561 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6565 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6501 6486 0 00:17:32 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6540 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6566 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6553 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6564 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6555 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6546 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6563 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6557 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6544 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
gsspg 6547 6486 0 00:17:38 ? 0:00 /TEST_GSS/cvserver/postgres/bin/postgres -D /TEST_GSS/cvserver/postgres/data
=============================================================
If there is any problem with cluster mode GSS, you can start backup mode GSS using "/TEST_GSS/cvserver/gss/serv start_backup".
GTPP Storage Server Version x.x.xx installation done.
The status display indicates that GSS, FileGen, PSMON, and PostgreSQL have all been started. If nothing displays, turn to the Troubleshooting the GSS section in the GTPP Storage Server Administration chapter. In most cases, if the other components have started, PostgreSQL has also started.
Step 8
Verify the running processes by entering the following command:
ps -ef
The resulting display lists all running processes. Multiple lines similar to the following confirm that PostgreSQL is running:
postgres 15080 14972 0 07:23:25 ? 0:00 /<clus_install_dir>/postgres/bin/postgres -D
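To narrow the display to just the PostgreSQL processes, the output can be filtered through grep (illustrative):
ps -ef | grep postgres | grep -v grep    # exclude the grep process itself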
Step 9
Change to the etc directory under the GSS installation directory and list its contents:
# cd etc
# ls -al
total 38
drwxr-xr-x 2 1460 100 512 Apr 20 07:22 .
drwxr-xr-x 12 1460 100 512 Apr 20 07:23 ..
-rw-r--r-- 1 root other 12561 Apr 20 07:22 gss.cfg
-rw-r--r-- 1 root other 3521 Apr 20 07:22 psmon.cfg
Before doing anything with the GSS, it is recommended to create a write-protected copy of the gss.cfg file and store it in a separate directory. To ensure you remember the configuration for your software version, we suggest that you store it in the /<packages>/gss_<version> directory.
 
Uninstalling the Complete GSS - Stand-alone Node
Uninstalling the GSS is easily performed using the node-specific script and configuration file that are generated during installation. During GSS installation, these files are placed in the directory where the GSS application files are stored.
Step 1
Change to the directory that contains the GSS installation:
# cd /<install_dir>
Step 2
Change to the installation directory and list its contents to locate the uninstall script:
# cd install_8071
# ls -al
total 16
drwxr-xr-x 4 root other 512 Dec 21 22:43 .
drwxr-xr-x 11 root root 512 Dec 21 22:42 ..
-rwxrwxrwx 1 root other 3782 Dec 21 22:42 StandaloneGSSUninstall.sh
drwxrwxr-x 12 root root 512 Dec 21 22:43 gss
drwxr-xr-x 6 postgres other 512 Dec 21 22:42 postgres
Step 3
Execute the uninstall script:
# ./StandaloneGSSUninstall.sh
Uninstallation Process will Remove Installation directories
Do You Want To Proceed With Uninstallation [y/n] : [n] ? y
Step 4
Enter y (yes) to continue the uninstall.
The system goes to the installation directory where the system-generated uninstall_config_file resides and uses the information in that file to complete the uninstall process.
Uninstalling </<install-dir>/gss/etc/uninstall_config_file...
This will stop Process Monitor Tool along with GSS and Filegen
Please see log/psmon.log file for log messages
Stopping Process Monitor Tool...
Done.
Stopping GSS...
Done.
Stopping GSS FileGen...
Done.
waiting for server to shut down.... done
server stopped
starting cleanup process
Cleaning the gss installation paths ...
Cleaning the postgres installation paths ...
Cleaning the shared memory and semaphores ...
Deleting the postgres user created during installation ...
Cleaning the postgres lock from /tmp directory ...
Uninstallation process completed successfully.........#
 
Uninstalling the Complete GSS - Cluster Nodes
Uninstalling the GSS from a cluster node is performed using system-specific uninstall files that the system generates during both the installation and upgrade processes: an uninstall script and an uninstall configuration file.
Begin with the primary node 1.
Step 1
Change to the directory that contains the GSS cluster installation:
# cd /<clus_install_dir>
Step 2
Change to the installation directory and list its contents to locate the uninstall script:
# cd install_8071
# ls -al
total 16
drwxr-xr-x 4 root other 512 Dec 21 22:43 .
drwxr-xr-x 11 root root 512 Dec 21 22:42 ..
-rwxrwxrwx 1 root other 3782 Dec 21 22:42 ClusterGSSUninstall.sh
drwxrwxr-x 12 root root 512 Dec 21 22:43 gss
drwxr-xr-x 6 postgres other 512 Dec 21 22:42 postgres
Step 3
Execute the uninstall script:
bash-2.05# ./ClusterGSSUninstall.sh
Uninstallation Process will remove shared directories, installation directories....
Do You Want To Proceed With Uninstallation [y/n] : [n] ?
Step 4
Enter y (yes) to continue the uninstall and start the cleanup operation.
Gss-harg: invalid resource group
starting cleanup process
checking if resource group exists..
Cleaning the gss and postgres common data paths ...
Cleaning the starentGss package ...
The following package is currently installed:
StarentGss Sun Cluster resource type for Gss server
(sparc) 3.0.0,REV=
Do you want to remove this package? [y,n,?,q]
Step 5
Enter y (yes) to continue.
## Removing installed package instance <StarentGss>
This package contains scripts which will be executed with super-user permission during the process of removing this package.
Do you want to continue with the removal of this package [y,n,?,q]
Step 6
Enter y (yes) to continue.
## Verifying package dependencies.
## Processing package information.
## Executing preremove script.
Resource <Gss-hars> has been removed already
Resource type <Starent.Gss> has been un-registered already
Network Resource not removed...
You may run removeGss again with the -h option to remove network resource.
## Removing pathnames in class <none>
/opt/StarentGss/util/stopGss
/opt/StarentGss/util/startGss
/opt/StarentGss/util/removeGss
/opt/StarentGss/util/Gss_config
/opt/StarentGss/util
/opt/StarentGss/man/man1m/stopGss.1m
/opt/StarentGss/man/man1m/startGss.1m
/opt/StarentGss/man/man1m/removeGss.1m
/opt/StarentGss/man/man1m/Gss_config.1m
/opt/StarentGss/man/man1m
/opt/StarentGss/man
/opt/StarentGss/etc/Starent.Gss
/opt/StarentGss/etc
/opt/StarentGss/bin/simple_probe
/opt/StarentGss/bin/hasp_check
/opt/StarentGss/bin/gettime
/opt/StarentGss/bin/gethostnames
/opt/StarentGss/bin/Gss_validate.ksh
/opt/StarentGss/bin/Gss_update.ksh
/opt/StarentGss/bin/Gss_svc_stop.ksh
/opt/StarentGss/bin/Gss_svc_start.ksh
/opt/StarentGss/bin/Gss_probe.ksh
/opt/StarentGss/bin/Gss_mon_stop.ksh
/opt/StarentGss/bin/Gss_mon_start.ksh
/opt/StarentGss/bin/Gss_mon_check.ksh
/opt/StarentGss/bin
/opt/StarentGss/README.Gss
/opt/StarentGss
## Executing postremove script.
## Updating system information.
Removal of <StarentGss> was successful.
Cleaning the gss installation paths
Cleaning the postgres installation paths ...
Cleaning the /opt directories paths ...
Cleaning the shared memory and semaphores
Deleting the postgres user created during installation ...
Cleaning the postgres lock from /tmp directory
Uninstallation process completed successfully.....
root@clustgss2#
Step 7
Log in to node 2 and repeat steps 1 through 6 to uninstall the GSS from the standby node.
 
Upgrading the GSS Stand-alone Node
This process upgrades both the GSS and Postgres database application files on a stand-alone node. The script begins by stopping all active processes and then restarts them after the upgrade is completed.
This upgrade process is valid for upgrading:
 
Preparing to Upgrade
Step 1
Change to the /<install_dir>/gss/etc directory and make a copy of the current gss.cfg file.
Step 2
Rename the file and store it in a separate directory. It is recommended to store the copy in the /<packages>/gss_<version> directory holding the current GSS version.
Step 3
Unzip the new compressed file by entering the following command:
gunzip gss_<version>.tar.gz
<version> is the version number of the GSS software distributed in the compressed tar file. For example, gss_8.0.xx.tar.gz.
Step 4
Locate the tar file gss_<version>.tar in the /<packages> directory and untar the file by entering the following command:
tar -xvf gss_<version>.tar
During the untar process, a /gss_<version> directory (for example: /gss_8.0.xx) is created in the /<packages> directory.
 
Using the Installation Script
Installation is accomplished using the inst_serv script. It provides a menu-driven interface with question prompts. Most prompts display default values or information derived from the server's current setup, such as IP addresses for configured interfaces.
The following information will help you use the installation script most effectively:
Ctrl-C aborts the installation process at any time during the procedure.
The information from the prompts is used to generate the GSS configuration file (gss.cfg). This file can be changed at any time after the installation.
Important: It is recommended that you fill in path prompts only after you have created the directories to be used.
 
Upgrading a GSS Stand-alone Node
The following procedure assumes that you are logged in to the GSS server with root privileges and that you are starting at the root directory level.
Step 1
Change to the /<packages>/gss_<version> directory that was created when the files were uncompressed.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of the operating system installed on the system. If it does not match the requirements in the Minimum System Requirements for Stand-alone Deployment section, the script aborts the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command:
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Standalone Mode Installation [in standalone environment]...
GTPP Storage Server installation directory
Path where Gss will be installed :
GSS Installation dir ? /<install_dir>
Important: If the upgrade is to a GSS 8.0.xx release, /opt will appear as the default value for the GSS Installation dir parameter.
Step 3
Enter the path of the directory where the GSS is currently installed, /<install_dir>.
Step 4
The script detects the existing installation and displays the following prompt:
Do you want to upgrade GSS [n] ? y
Step 5
Enter y (yes) to continue the upgrade.
Starting Standalone upgrade
This will stop Process Monitor Tool along with GSS and Filegen
Please see log/psmon.log file for log messages
Stopping Process Monitor Tool...
Done.
Stopping GSS...
Done.
Stopping GSS FileGen...
Process GSS FileGen could not be stopped..
Killing it...
Done.
waiting for server to shut down.... done
server stopped
server starting
Starting database upgrade...
Database upgrade completed.....
waiting for server to shut down.... done
server stopped
Do You Want To Start GSS [y/n]: [y] ?
Step 6
Enter y or press Enter to start the GSS and all related processes (e.g., the Process Monitor).
This will start Process Monitor Tool along with GSS and Filegen using params listed in /<install_dir>/gss/etc/gss.cfg
Please see log/psmon.log file for log messages
Starting Process Monitor Tool...
Done.
Capturing status, please wait for a while...
=============================================================
0 17453 11:35:15 TS 59 0:00 /</install_dir>/gss/lib/perl5.8.5
/bin/perl -w /</install_dir>/gss/psmon --daemon --cron 1
0 17460 11:35:16 TS 59 0:01 /</install_dir>/gss/bin/gss 1
0 17465 11:35:16 TS 59 0:01 /</install_dir>/gss/bin/gssfilegen 1
0 17108 11:33:08 TS 59 0:00 tee -a /</install_dir>/gss_7_1_67
/installation_log_</install_dir>_gss1 17067
0 2791 Oct_13 TS 59 1:01 /bin/bash /</install_dir>/gss/bin
/test_process.sh 1
0 17421 11:35:12 TS 59 0:00 tee -a /</install_dir>/gss_7_1_67/
installation_log_</install_dir>_sol1ems 17106
postgres 17479 17435 0 11:35:16 ? 0:00 /</install_dir>/postgres/
bin/postgres -D /</install_dir>/postgres/data -i
postgres 17485 17435 0 11:35:16 ? 0:00 /</install_dir>/postgres/
bin/postgres -D /</install_dir>/postgres/data -i
...
...
...
postgres 17491 17435 0 11:35:16 ? 0:00 /</install_dir>/postgres/
bin/postgres -D /</install_dir>/postgres/data -i
=============================================================
GTPP Storage Server Version <gss_x_x_xx> installation done.
====================================================================
The status display indicates that GSS, FileGen, PSMON, and PostgreSQL have all been started. If nothing displays, turn to the Troubleshooting the GSS section in the GTPP Storage Server Administration chapter.
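If the processes do not appear, they can be started manually using the serv script noted earlier in the installer output:
/<install_dir>/gss/serv start    # starts the Process Monitor Tool along with GSS and Filegen
/<install_dir>/gss/serv help     # displays help for the serv script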
 
Upgrading the GSS - Cluster Nodes
This process upgrades both the GSS and Postgres database application files on the nodes in a cluster.
This upgrade process is valid for upgrading:
 
Preparing to Upgrade
Step 1
Change to the /<clus_install_dir>/gss/etc directory and make a copy of the current gss.cfg file.
Step 2
Rename the file and store it in a separate directory. It is recommended to store the copy in the /<packages>/gss_<version> directory holding the current GSS version.
Step 3
Unzip the new compressed file by entering the following command:
gunzip gss_<version>.tar.gz
<version> is the version number of the GSS software distributed in the compressed tar file. For example, gss_8.0.xx.tar.gz.
Step 4
Locate the tar file gss_<version>.tar in the /<packages> directory and untar the file by entering the following command:
tar -xvf gss_<version>.tar
During the untar process, a /gss_<version> directory (for example: /gss_8.0.xx) is created in the /<packages> directory.
 
Upgrading Node1 - Primary Node
By using the upgrade procedure, you ensure that no CDRs are lost, as one of the nodes always remains in active/online mode.
The primary node in a cluster is typically referred to as node1. Begin the process by logging in to node1.
Step 1
Change to the /<packages>/gss_<version> directory that was created when the files were uncompressed, and execute the installation script:
./inst_serv
Important: This script will check the version of the operating system and cluster software installed on the system. If they do not match the requirements in the Minimum System Requirements for Cluster Deployment section, the script aborts the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command:
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Cluster mode GSS installation exists /<server_name>
Step 2
Enter y (yes) at the following prompt to upgrade the existing installation:
Do you wish to upgrade this installation [y/n]: [n] ?
The following prompt appears only when the backup mode has not been configured during the fresh installation of GSS.
Do you want Backup installation for current cluster mode installation [y/n]: [n] ?
*** PostgreSQL configuration for backup Installation ***
1) PostgreSQL port for backup installation :
2) PosrgreSQL login for backup installation :
3) PostgreSQL data directory for backup installation :
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Step 3
Review the upgrade requirements that the script displays:
Please note that GSS upgrade in cluster mode needs following steps:-
1. Database upgrade on either of the nodes.
2. The node should be in standalone mode (after DB upgrade is done).
--------------------------------------------------------------
This procedure will walk you through database upgrade first and then boot the node in standalone mode.
Step 4
Enter y (yes) at the prompt to upgrade the database on this node.
Important: The database can only be upgraded on the node where the GSS is currently online (primary).
Do you want to upgrade database? (If you have already upgraded it on another node, you may answer no to this question and proceed to boot the node in standalone mode.) [y/n] : [y] ? y
The script proceeds to inform you of the next actions that are being taken.
Resource group "Gss-harg" is online on node "clustgss2".
For DB upgrade you need to start/switch resourcegroup on node clustgss1.
Do you want to start/switch resourcegroup "Gss-harg" on node clustgss1. [y/n] : [y] ?
Bringing resource group online on node clustgss1, please wait...
Using "cluster_db_upgrade" script from "/<packages>/gss_<version>" to upgrade database.
clustgss
Starting database upgrade...
Database upgrade completed successfully................
You need to reboot the node in standalone mode to upgrade the GSS
After rebooting node in standalone mode, use "inst_serv" script for GSS upgrade.
Do you want to reboot the node in standalone mode [y/n] : [y] ?
Step 5
Press Enter or type y to reboot into stand-alone mode and continue the upgrade process.
Rebooting node .....
Step 6
Log in to node2 and verify that the resource group has switched over:
login as: root
# clresourcegroup status
=== Cluster Resource Groups ===
Group Name         Node Name   Suspended   Status
----------         ---------   ---------   ------
Gss-harg            <name_node2>  No       Online
Gss-harg            <name_node1>  No       Offline
The display above indicates the upgrade script successfully performed the switchover.
Important: It will take a minute or two for the resource group to switch to node2 and for node1 to reboot in stand-alone mode.
Step 7
On node1, verify that the node is no longer part of the cluster by entering the following command:
scstat
The system should indicate the node is not a cluster node:
scstate: not a cluster member.
Step 8
Return to node1, change to the directory where the GSS application files were stored, /<packages>/gss_<version>, and initiate the installation script.
# ./inst_serv
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Cluster mode GSS installation exists /<clus_install_dir>
Do you wish to upgrade this installation [y/n]: [n] ?
Step 9
Press Enter or type y (yes) to continue the upgrade.
Starting upgrade of cluster
Creating backup of previous installation files as /<gss_install_dir>/gss_backup.tar
Backup done.
GSS Cluster Agent Installation configuration
Proceeding with GSS Cluster Agent Installation
Extracting StarentGss.tar...
Processing package instance <StarentGss> from </<package_dir> /gss_<x_x_xx>
Sun Cluster resource type for Gss server (sparc) 3.0.0,REV=
Sun Microsystems, Inc.
This appears to be an attempt to install the same architecture and version of a package which is already installed. This installation will attempt to overwrite this package.
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
28 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <StarentGss> [y,n,?] y
Step 10
Press Enter or type y to confirm the upgrade of the cluster resources.
Installing Sun Cluster resource type for Gss server as <StarentGss>
## Installing part 1 of 1.
[ verifying class <none> ]
## Executing postinstall script.
Installation of <StarentGss> was successful.
Extracting perl tar... Done.
Starting Backup database upgrade...
Backup Database upgrade completed.....
waiting for server to shut down.... done
server stopped
You need to reboot the node in cluster mode for normal cluster mode operation.
If database is not upgraded on another node, please run "cluster_db_upgrade" script on another node for database upgrade.
Do you want to reboot the node in cluster mode [y/n] : [y] ?
Step 11
Press Enter or type y to reboot the node in cluster mode.
Rebooting node .....
Step 12
When the node comes back online, verify the cluster status:
clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name         Status
---------         ------
<node_name1>       Online
<node_name2>       Online
The system recognizes node1 as the standby cluster node.
Step 13
./GSS -version
Important: ./serv version is used when upgrading to GSS versions higher than x.x.69.
Step 14
Important: Reminder from previous step - /opt is a default installation directory for some versions of GSS.
cd /opt/gss
./GSS switch
Important: ./serv switch is used when upgrading to GSS versions higher than x.x.69.
After entering the switchover command, the system displays the following:
Resource group "Gss-harg" is online on node "<name_node2>".
Bringing resource group online on node "<name_node1>". Please wait ...
Done.
Step 15
clresourcegroup status
=== Cluster Resource Groups ===
Group Name         Node Name       Suspended    Status
----------       -------------     ---------    ------
Gss-harg         <name_node1>          No       Online
                 <name_node2>          No       Offline
The GSS resource group is now active with the new release, and node2 is in standby mode, free to be upgraded.
Step 16
Perform step 1 through step 11 on node2 to upgrade it to the newer version of GSS.
Important: While executing step 3 on the second node, do not choose to upgrade the database, as it has already been upgraded for the first node. After completing the steps on node2, the GSS upgrade on the cluster setup is complete.
 
Multiple Instances of GSS
This section includes procedures for installing, uninstalling, and upgrading multiple instances of GSS on the same cluster setup or stand-alone node.
 
The GSS installer places all of the binaries for GSS, PostgreSQL, and the StarentGss (cluster resource type) package in the /opt/gss_global directory. In a cluster, the installer must be executed on both nodes. After the initial installation, the inst_serv script is used to create instances. Each instance has its own configuration file, log directory, and tools directory. During uninstallation, each instance can be removed separately without affecting the other instances. The global installation can be uninstalled only when no instances are configured or running on the system.
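For illustration only, a layout with two hypothetical instances named GSS1 and GSS2 might look like the following (the instance paths are examples, not defaults):
/opt/gss_global/         shared GSS, PostgreSQL, and StarentGss binaries
/GSS1/gss/etc/gss.cfg    configuration file for instance GSS1
/GSS1/gss/log/           log directory for instance GSS1
/GSS1/gss/tools/         tools directory for instance GSS1
/GSS2/gss/etc/gss.cfg    configuration file for instance GSS2
/GSS2/gss/log/           log directory for instance GSS2
/GSS2/gss/tools/         tools directory for instance GSS2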
 
Installing Multiple GSS Instances - Stand-alone Node
This section describes the process for installing multiple instances of GSS in a stand-alone deployment.
 
Using the Installation Script - Stand-alone Node
Installation is accomplished using the inst_serv script to install the GSS binaries. It provides a menu-driven interface with question prompts. Most prompts display default values or information derived from the server's current setup, such as IP addresses for configured interfaces.
The following information will help you use the installation script most effectively:
Ctrl-C will abort the installation process at any time during the procedure.
The information from the prompts is used to generate the GSS configuration file (gss.cfg). This file can be changed at any time after the installation.
Important: It is recommended that you fill in path prompts only after you have created the directories to be used.
 
Installation Procedure - Stand-alone Node
The following procedure assumes that you are logged in to the GSS server with root privileges and that you are starting at the root directory level.
Step 1
Change to the /<packages>/gss_<version> directory where you stored the GSS application software in step 5 of the previous section.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of the operating system installed on the system. If it is not matching the requirements in the Minimum System Requirements for Stand-alone Deployment section, the script will abort the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command:
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Standalone Mode Installation [in standalone environment]...
Step 3
If you did not make the changes to the system file, then abort now (CTRL-C), make the changes to the system file, and then reboot. After rebooting, begin the installation procedure again.
If you made changes to the system file as described in the Installation First Steps section, then continue to the next step.
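One quick way to confirm that the entries are present, for example, is to search the system file (the grep commands below only read the file and are safe to run at any time):
# grep msginfo /etc/system
# grep shminfo /etc/system
# grep seminfo /etc/system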
GTPP Storage Server installation directory
Path where Gss will be installed :/opt
GSS Installation dir [/opt] ? </home/export/install_8_0_xx>
Step 4
Shortly after typing /<install_dir> and pressing Enter, the following appears:
Entering n will save configuration values and take you to next configuration, To change the default values, enter option number
*** PostgreSQL installation configuration ***
1) PostgreSQL port : 5432
2) PostgreSQL login : postgres
3) PostgreSQL passwd : postgres
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Entering n will save configuration values and take you to next configuration, To change the default values, enter option number
Important: All values that appear initially for this menu are system defaults and you do not need to make changes if the values are acceptable.
Step 5
Enter the line number to change a parameter value, if needed. Then enter n to save changes (if made) or defaults and move to the next menu.
*** GSS Configuration Parameters ***
1) File Format for data files : starent
2) Hard Limit Interval for File Generation (mins) : 0
3) Support for LRSN rewrite : n
4) Encoding of IP Address in binary format : n
5) Enable redundant data file support : n
6) GSN Location : GSN
p) Go back to previous menu
n) Proceed to next configuration
a) Abort Installation
Important: The GSS and FileGen are set to run in archive mode by default. The archive mode, used in deployments that do not include CGFs, instructs the server to save records to a file.
Step 6
*** Network Interface Configurations ***
Currently configured IP interfaces on the machine : 10.8.1.205
1 ) 10.1.1.111
2 ) 123.1.2.33
p ) Go back to previous menu
n ) Proceed to next configuration
a ) Abort Installation
Enter your choice : n
Important: Note that the script has sensed both the number of interfaces and their IP addresses.
Step 7
Pressing n completes the menu-driven portion of the installation process and displays the configuration that you have created.
========================================================================
Standalone Mode Installation
========================================================================
GSS installation path : /<install_dir>/gss
*** PostgreSQL Configurations ***
PostgreSQL port : 5432
PostgreSQL login : postgres
PostgreSQL passwd : postgres
**** GSS Configurations ***
File Format for data files : starent
Hard Limit Interval for File Generation : 0
GTPP Dictionary : custom1
Support for LRSN rewrite : n []
Encoding of IP Address in binary format : n
GSN Location : GSN
Enable redundant data file support : n
*** Network Host Configurations ***
IP Address of the machine to be used : 10.1.1.111
========================================================================
You are given the opportunity to modify the configuration that you have created. (The values displayed above were entered for illustration and not as recommendations.)
Do you want to Modify Configuration [n] ? n
Step 8
Press y to return to the menus and change the configuration or press n to continue the installation process.
Installing GSS..... Please wait.....
Extracting perl tar... Done.
Add following entry to crontab (if not already present) to remove processed data files in /<install_dir>/gss/data after storage period of 7 day(s)
0 * * * * /<install_dir>/gss/bin/cleanup.sh >> /<install_dir>/gss/log/cleanup.log 2>&1
To start Process Monitor Tool along with GSS and Filegen (if not started already) : execute "/<install_dir>/gss/serv start"
To get help on "GSS": execute "/<install_dir>/gss/serv help"
For additional info and performance tuning please read README & GSS User Guide in doc directory
Do You Want To Start GSS : [y] ? y
This is the last action that you must take to complete the installation process.
Step 9
Press Enter to accept the yes default, completing the installation process and starting the GSS, or enter n to complete the installation without starting the GSS.
This will start Process Monitor Tool along with GSS and Filegen using params listed in /<install_dir>/gss/etc/gss.cfg
Please see log/psmon.log file for log messages
Starting Process Monitor Tool...
Done.
Capturing status, please wait for a while...
======================================================================
0 1118 12:04:16 TS 59 0:00 <install_dir>/gss/bin/gssfilegen 1
0 1015 12:04:10 TS 59 0:00 /usr/bin/bash <install_dir>/gss/serv start 29661
0 1102 12:04:15 TS 59 0:00 <install_dir>/gss/lib/perl5.8.5/bin/perl <install_dir>/gss/psmon --daemon --cro 1
0 1113 12:04:16 TS 59 0:00 <install_dir>/gss/bin/gss 1
======================================================================
GTPP Storage Server Version 8.0.xx installation done.
The status display indicates that GSS, FileGen, PSMON, and PostgreSQL have all been started. If nothing displays, turn to the Troubleshooting the GSS section in the GTPP Storage Server Administration chapter. In most cases, if the other components have started, then PostgreSQL has also started.
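If you choose to add the cleanup entry suggested by the installer, one possible way to append it to the root crontab is shown below (the paths are the same placeholders used in the installer message):
# crontab -l > /tmp/root.crontab
# echo '0 * * * * /<install_dir>/gss/bin/cleanup.sh >> /<install_dir>/gss/log/cleanup.log 2>&1' >> /tmp/root.crontab
# crontab /tmp/root.crontab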
Step 10
# cd /<install_dir>
# ls -al
total 16
drwxr-xr-x 4 root other 512 Dec 9 17:29 .
drwxr-xr-x 11 root root 512 Dec 9 17:28 ..
-rwxrwxrwx 1 root other 3782 Dec 9 17:28 StandaloneGSSUninstall.sh
drwxrwxr-x 12 root root 512 Dec 9 17:29 gss
drwxr-xr-x 6 postgres other 512 Dec 9 17:28 postgres
Step 11
# cd gss
# ls -al
total 232
drwxrwxr-x 12 root root 512 Dec 21 22:43 .
drwxr-xr-x 4 root other 512 Dec 21 22:43 ..
-rw-r--r-- 1 postgres other 533 Aug 17 10:58 .configfile
-r--r--r-- 1 postgres root 1168 Aug 17 10:58 .gss.env
-rw------- 1 root other 5 Dec 21 22:43 .gss.pid
-rw------- 1 root other 5 Dec 21 22:43 .gssfilegen.pid
-rw------- 1 root other 7 Dec 21 22:43 .gssfilegen.seq
-rw-r--r-- 1 root other 0 Dec 21 22:42 .inst_serv.err
-rwxr-xr-x 1 root root 4057 Mar 3 2006 README
drwxrwxr-x 2 root root 512 Dec 21 22:42 bin
drwxrwxr-x 2 root root 512 Sep 8 2004 data
drwxrwxr-x 2 root root 512 May 31 2005 doc
drwxrwxr-x 2 root root 512 Dec 21 22:42 etc
-rwxr-xr-x 1 root other 22480 Dec 21 22:42 gss_ctl
drwxrwxr-x 3 root root 512 Dec 21 22:42 lib
drwxrwxr-x 3 root root 512 Dec 21 22:43 log
-rwxr-xr-x 1 root other 54445 Dec 21 22:42 psmon
-rw-r--r-- 1 root other 4 Dec 21 22:43 psmon.pid
-rwxr-xr-x 1 root other 19850 Dec 21 22:42 serv
drwxrwxr-x 2 root root 512 Oct 10 05:43 sql
drwxrwxr-x 2 root root 512 Oct 10 05:43 template
drwxrwxr-x 2 root root 512 Sep 8 2004 tmp
drwxrwxr-x 3 root root 512 Dec 21 22:42 tools
Step 12
# cd etc
# ls -al
total 42
drwxrwxr-x 2 root root 512 Dec 21 22:42 .
drwxrwxr-x 12 root root 512 Dec 21 22:43 ..
-rw-r--r-- 1 root other 10921 Dec 21 22:42 gss.cfg
-rw-r--r-- 1 root other 2459 Dec 21 22:42 gsslogger.xml
-rw-r--r-- 1 root other 3690 Dec 21 22:42 psmon.cfg
-rw-r--r-- 1 root other 261 Dec 21 22:42 uninstall_config_file
Before working with the GSS, it is recommended that you create a write-protected copy of the gss.cfg file and store it in a separate directory. To ensure you remember the configuration for your software version, we suggest that you store it in the /<packages>/gss_<version> directory.
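For example, assuming the directories shown above, the copy can be created and write-protected as follows:
# cp /<install_dir>/gss/etc/gss.cfg /<packages>/gss_<version>/gss.cfg.orig
# chmod 444 /<packages>/gss_<version>/gss.cfg.orig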
 
Installing Multiple GSS Instances - Node 1 in Cluster
This section describes the process for installing multiple GSS instances on the primary GSS node of the cluster.
Prior to installing the GSS application, ensure that the cluster is installed and configured as needed. For information on installing and configuring the Sun cluster, refer to the Sun documentation.
 
Using the Installation Script - Cluster Node 1
Installation is accomplished using the inst_serv script. It provides a menu-driven interface with question prompts. Most prompts display default values or information derived from the server’s current setup.
The following information will help you use the installation script most effectively:
Ctrl-C will abort the installation process at any time during the procedure.
The information from the prompts is used to generate the GSS configuration file (gss.cfg). This file can be changed at any time after the installation using a text editor.
 
Installation Procedure - Node 1
The following procedure assumes that you are logged in to the GSS server with root privileges and that you are starting from the root directory.
Step 1
Change to the /<packages>/gss_<version> directory where you stored the GSS application software.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of operating system and cluster software installed on the system. If it is not matching the requirements in the Minimum System Requirements for Cluster Deployment section, the script will abort the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command.
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Using cluster for Installation
Cluster Mode Installation (To be used in cluster environment) [n] ? y
Important: Note that the script senses whether the server is a Stand-alone node or a Cluster node. If you made changes to the system file as in the Installation First Steps section, then you can continue. If not, abort the installation using CTRL+C, make the changes to the system file, and then reboot. After rebooting, begin the installation procedure again.
Step 3
Enter y (yes) to continue the installation.
GTPP Storage Server installation directory
Specify the common path for GSS data and logs on all cluster nodes [/sharedgss/gss] ?
Step 4
Press Enter to accept the default directory /sharedgss/gss or enter the name of another directory. Next you are prompted for the location to install the GSS.
Path where Gss will be installed :
GSS Installation dir ? /TEST_GSS/cvserver
Important: In cluster mode, it is highly recommended that you do not install the GSS application in the /opt, /opt/gss, or /opt/postgres directories.
Step 5
Press Enter to accept the default or enter the name of the directory where the GSS active components are to be installed. It is recommended that you put this directory at the root level. The installation script creates the directory if needed.
Shortly after responding to the prompt for the installation directory, the following appears:
Do you want Backup installation for current cluster mode installation [y/n]: [n] ?
Important: This enables backup mode for the GSS node in a cluster deployment.
If you do not want the backup for the cluster mode installation, proceed to step 7. Otherwise, continue with the next step.
Step 6
Enter y (yes) to enable node switchover. The installation continues with a menu to configure the PostgreSQL parameters for backup.
*** PostgreSQL configuration for backup Installation ***
1) PostgreSQL port for backup installation :
2) PosrgreSQL login for backup installation :
3) PostgreSQL data directory for backup installation :
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Step 7
Entering n saves the configuration values and takes you to the next configuration. To change a default value, enter the option number.
*** PostgreSQL installation configuration ***
1) PostgreSQL port : 5432
2) PostgreSQL login : postgres
3) PostgreSQL passwd : postgres
4) Shared PostgreSQL dir :/sharedpostgres
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Step 8
Entering n saves the configuration values and takes you to the next configuration. To change a default value, enter the option number.
Step 9
Enter n to save changes or defaults and move to the next menu.
Important: The GSS and FileGen are set to run in archive mode by default. The archive mode, used in deployments that do not include CGFs, instructs the server to save records to a file.
*** GSS Configuration Parameters ***
1) File Format for data files : starent
2) Hard Limit Interval for File Generation (mins) : 0
3) Support for LRSN rewrite : n
4) Encoding of IP Address in binary format : n
5) Enable redundant data file support : n
6) GSN Location : GSN
7) Specify the GTPP Dictionary : custom6
p) Go back to previous menu
n) Proceed to next configuration
a) Abort Installation
Important: The Specify the GTPP Dictionary option appears only if the Support for LRSN rewrite or Encoding of IP Address in binary format parameter is enabled. Otherwise, the GTPP dictionary is set to default.
Step 10
Enter the number or letter of your choice. Make changes as needed. Then enter n to save changes or defaults and move to the next menu.
*** Network Interface Configuration ***
1) Logical Host IP Addresss :
2) Logical Host Name :
3) Additional Logical Host Name[eg.For Mediation Server] : n
p) Go back to previous menu
n) Proceed to next configuration
a) Abort Installation
Enter your choice : n
Step 11
Enter Your Choice : 1
Please specify already available logical host Address for GSS cluster : ?
Step 12
Enter the IP address of the logical host, press Enter, and move to the next prompt.
Enter Your Choice :
Step 13
Please specify Logical hostname for above logical host address : ?
Step 14
Enter the logical host name, press Enter, and then enter n. Pressing n completes the menu-driven portion of the installation process and displays the configuration that you have created.
=======================================================================
Cluster Mode Installation
=======================================================================
GSS installation path : /TEST_GSS/cvserver/gss
Common path for GSS data and logs : /sharedgss/gss
*** Backup PostgresSQL Configurations ***
PostgreSQL port for backup installation : 5477
PosrgreSQL login for backup installation : backpost
PostgreSQL data directory for backup installation : /backpost
*** PostgreSQL Configurations ***
PostgreSQL port : 5432
PostgreSQL login : gsspg
PostgreSQL passwd : gsspg
Shared PostgreSQL dir : /sharedpostgres
**** GSS Configurations ***
File Format for data files : custom7
Hard Limit Interval for File Generation : 2
GTPP Dictionary : custom6
Support for LRSN rewrite : n []
Encoding of IP Address in binary format : n
GSN Location : GSN
Enable redundant data file support : n
*** Network Host Configurations ***
Logical Host IP Addresss : 10.1.1.1
Logical Host Name : gssserv
Additional Logical Host Name : n
=======================================================================
You are given the opportunity to modify the configuration that you have created.
Do you want to Modify Configuration [y/n] : [n] ?
Step 15
Press y to return to the menus and change the configuration or press n or Enter to continue the installation process.
Installing GSS..... Please wait.....
Extracting perl tar...
Done.
Add following entry to crontab (if not already present) to remove processed data files in /sharedgss/gss/data after storage period of 7 day(s)
0 * * * * /TEST_GSS/cvserver/gss/bin/cleanup.sh >> /sharedgss/gss/clustems1_log//cleanup.log 2>&1
For additional info and performance tuning please read README & GSS User Guide in doc directory
GSS Cluster Agent Installation configuration
Proceeding with GSS Cluster Agent Installation
Extracting StarentGss.tar...
Processing package instance <StarentGss> from </TEST_GSS/gss_x_x_xx>
Sun Cluster resource type for Gss server(sparc) 3.0.0,REV=
Sun Microsystems, Inc.
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <StarentGss> [y,n,?] y
Step 16
Enter y (yes) to continue the installation.
Installing Sun Cluster resource type for Gss server as <StarentGss>
## Installing part 1 of 1.
/opt/StarentGss/README.Gss
/opt/StarentGss/bin/Gss_mon_check.ksh
/opt/StarentGss/bin/Gss_mon_start.ksh
/opt/StarentGss/bin/Gss_mon_stop.ksh
/opt/StarentGss/bin/Gss_probe.ksh
/opt/StarentGss/bin/Gss_svc_start.ksh
/opt/StarentGss/bin/Gss_svc_stop.ksh
/opt/StarentGss/bin/Gss_update.ksh
/opt/StarentGss/bin/Gss_validate.ksh
/opt/StarentGss/bin/gethostnames
/opt/StarentGss/bin/gettime
/opt/StarentGss/bin/hasp_check
/opt/StarentGss/bin/simple_probe
/opt/StarentGss/etc/Starent.Gss
/opt/StarentGss/man/man1m/Gss_config.1m
/opt/StarentGss/man/man1m/removeGss.1m
/opt/StarentGss/man/man1m/startGss.1m
/opt/StarentGss/man/man1m/stopGss.1m
/opt/StarentGss/util/Gss_config
/opt/StarentGss/util/removeGss
/opt/StarentGss/util/startGss
/opt/StarentGss/util/stopGss
[ verifying class <none> ]
## Executing postinstall script.
Installation of <StarentGss> was successful.
GTPP Storage Server Version x.x.xx installation done.
GSS installation on node1 is complete. please proceed with installation on node2....
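Before proceeding to node2, you can optionally confirm that the cluster agent package was registered by querying the Solaris package database, for example:
# pkginfo -l StarentGss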
 
Installing Multiple GSS Instances - Node 2 in Cluster
This section describes the process for installing multiple GSS instances on the second GSS node in a cluster setup.
The following procedure assumes that you are logged in to the node 2 (standby) GSS server with root privileges and that you are starting from the root directory.
Step 1
Change to the /gss_<version> directory where the GSS files are stored.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of operating system and cluster software installed on the system. If it is not matching the requirements in the Minimum System Requirements for Cluster Deployment section, the script will abort the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command.
S T A R E N T G T P P S T O R A G E S E R V E R
Copyright(c) 2009 Starent Networks Corp. All rights reserved.
==================== Mon Dec 14 17:03:00 IST 2009 ====================
Checking For Root Privileges ........ Done
Start installing new version of GSS: [y] ?
Extracting GSS package ..... Done
Extracting Perl ........ Done.
GSS Cluster Agent Installation ...... Done
=======================================================================
GSS basic package installed at /opt/gss_global.
For creating multiple GSS instances, please run /opt/gss_global/make_gss_instance.
=======================================================================
Step 3
Run /opt/gss_global/make_gss_instance to create a GSS instance.
bash-3.00# /opt/gss_global/make_gss_instance
Cluster Mode Installation (To be used in cluster environment) [y/n]: [n] ? y
Step 4
Enter y (yes) to continue the installation.
Using cluster for Installation
Specify the common path for GSS data and logs on all cluster nodes [/sharedgss/gss1] ?
GTPP Storage Server installation directory
Step 5
Enter the name of the directory where files that will be common to all nodes in the cluster are to be stored, for example /<shared_gss_dir>. You must enter the same path that you used when you installed the GSS on node 1.
Shortly after typing /<shared_gss_dir> and pressing Enter, the configuration created for cluster node 1 appears.
=======================================================================
Cluster Mode Installation
=======================================================================
GSS installation path : /GSS1/gss
Common path for GSS data and logs : /sharedgss/gss1
GSS Instance Name : GSS1
*** PostgreSQL Configurations ***
PostgreSQL port : 5432
PostgreSQL login : 99999
PostgreSQL passwd : postgres
Shared PostgreSQL dir : /sharedpostgres/GSS1
Shared PostgreSQL UID : 99999
**** GSS Configurations ***
Archive Mode : y
File Format for data files : custom3
Hard Limit Interval for File Generation : 15
GTPP Dictionary : custom3
Support for LRSN rewrite : n []
Encoding of IP Address in binary format : n
GSN Location : GSN
GSS Port : 50000
Enable redundant data file support : n
*** Network Host Configurations ***
Logical Host IP Addresss : 192.168.143.100
Logical Host Name : gssserv
Additional Logical Host Name : n
=======================================================================
If you want to change configuration values, do not start GSS after installation. Change required configuration parameters from /GSS1/gss/etc/gss.cfg and then start GSS..
Proceed With Installation [y/n] : [y] ?
Step 6
Press Enter to continue the installation.
Installing GSS..... Please wait.....
Add following entry to crontab (if not already present) to remove processed data files in /sharedgss/gss1/data after storage period of 7 day(s)
0 * * * * /GSS1/gss/bin/cleanup.sh >> /GSS1/gss/log/cleanup.log 2>&1
Starting second node installation..........................
Do you want to start GSS on cluster nodes [y/n] : [y] ?
Starting GSS on cluster node....
Capturing status, please wait for a while...
=============================================================
0 21170 17:04:12 TS 59 00:00 /GSS1/gss/bin/gssfilegen 1
0 21154 17:04:11 TS 59 00:01 /GSS1/gss/lib/perl5.8.5/bin/perl /GSS1/gss/psmon --daemon --cron /GSS1/gss 1
0 21165 17:04:12 TS 59 00:00 /GSS1/gss/bin/gss 1
99999 21187 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21189 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21115 1 0 17:04:07 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21184 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21173 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21177 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21122 21115 0 17:04:07 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21175 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21188 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21191 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21182 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21179 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21174 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21190 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21183 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21185 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21123 21115 0 17:04:07 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21186 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21176 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21181 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21117 21115 0 17:04:07 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21178 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21180 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21195 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21194 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21197 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21192 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21196 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
99999 21193 21115 0 17:04:13 ? 0:00 /opt/gss_global/global/postgres/bin/postgres -D /GSS1/postgres/data
=============================================================
GTPP Storage Server Version 9.0.97 Instance Creation done.
Step 7
Enter y to create another GSS instance, and repeat the above steps to add and configure the new GSS instance as required.
Do you want to add another GSS instance [y] ?
Step 8
Enter n if you do not want to create more instances.
Do you want to add another GSS instance [y] ? n
____________________________________________________
Instance Name : GSS1
Instance Path : /GSS1/gss
Instance Status : Running
____________________________________________________
Instance Name : GSS2
Instance Path : /GSS2/gss
Instance Status : Running
____________________________________________________
The status display indicates the name, path, and status of each instance.
Step 9
ps -ef
The resulting display lists all running processes. Multiple lines similar to the following confirm that PostgreSQL is running:
postgres 15080 14972 0 07:23:25 ? 0:00 /<clus_install_dir>/postgres/bin/postgres -D
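To narrow the listing to the PostgreSQL processes only, you can filter the output, for example:
# ps -ef | grep postgres | grep -v grep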
Step 10
# cd etc
# ls -al
total 38
drwxr-xr-x 2 1460 100 512 Apr 20 07:22 .
drwxr-xr-x 12 1460 100 512 Apr 20 07:23 ..
-rw-r--r-- 1 root other 12561 Apr 20 07:22 gss.cfg
-rw-r--r-- 1 root other 3521 Apr 20 07:22 psmon.cfg
Before working with the GSS, it is recommended that you create a write-protected copy of the gss.cfg file and store it in a separate directory. To ensure you remember the configuration for your software version, we suggest that you store it in the /<packages>/<gss_version> directory.
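A saved copy also makes it easy to spot later configuration drift; for example (the paths are illustrative):
# diff /<packages>/<gss_version>/gss.cfg /<instance_dir>/gss/etc/gss.cfg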
 
Uninstalling Multiple GSS Instances - Stand-alone Node
This section describes the process for uninstalling multiple instances of GSS and all of the associated GSS components, for a stand-alone deployment.
 
Using the Uninstallation Script
Uninstallation is performed using an uninstall script that must be run for each instance of GSS and then for the global installation, in both stand-alone and cluster modes. The script removes the shared directories and installation directories. Before you begin, it is recommended that you do the following:
 
Copy and rename the old gss.cfg file to a safe directory.
 
Uninstallation Procedure - Stand-alone Node
The StandaloneGSSUninstall.sh script is used for the uninstall process in stand-alone mode. It is created in each GSS instance installation directory. For example, if the GSS instance installation directory is /cvserver, the uninstallation script is created in /cvserver as StandaloneGSSUninstall.sh.
To uninstall the instances of GSS in stand-alone mode:
Step 1
# cd /<instance_dir>
Step 2
Uninstallation of instance 1
bash-3.00# cd GSS1
bash-3.00# ls
StandaloneGSSUninstall.sh gss postgres
bash-3.00# ./StandaloneGSSUninstall.sh
/GSS1/gss
Uninstallation Process will remove shared directories, installation directories....
Do You Want To Proceed With Uninstallation [y/n] : [n] ? y
Step 3
Enter y (yes) to continue the uninstall process.
The system goes to the installation directory where the system-generated uninstall_config_file resides and uses the information in that file to complete the uninstall process.
This will stop Process Monitor Tool along with GSS and Filegen
Please see log/psmon.log file for log messages
Stopping Process Monitor Tool...
Done.
Stopping GSS...
Done.
Stopping GSS FileGen...
Done.
Partial FileGen not running
Final FileGen not running
waiting for server to shut down.... done
server stopped
starting cleanup process
Cleaning the gss installation paths ...
Cleaning the postgres installation paths ...
Cleaning the shared memory and semaphores ...
Deleting the postgres user created during installation ...
Cleaning the postgres lock from /tmp directory ...
Uninstallation process completed successfully.........
The first instance is uninstalled.
Step 4
Uninstallation of instance 2
bash-3.00# cd GSS2/
bash-3.00# ls
StandaloneGSSUninstall.sh gss postgres
bash-3.00# ./StandaloneGSSUninstall.sh
/GSS2/gss
Uninstallation Process will Remove Installation directories
Do You Want To Proceed With Uninstallation [y/n]: [n] ? y
Step 5
Enter y (yes) to continue the uninstall process.
This will stop Process Monitor Tool along with GSS and Filegen
Please see log/psmon.log file for log messages
Stopping Process Monitor Tool...
Done.
Stopping GSS...
Done.
Stopping GSS FileGen...
Done.
Partial FileGen not running
Final FileGen not running
waiting for server to shut down.... done
server stopped
starting cleanup process
Cleaning the gss installation paths ...
Cleaning the postgres installation paths ...
Cleaning the shared memory and semaphores ...
Deleting the postgres user created during installation ...
Cleaning the postgres lock from /tmp directory ...
Uninstallation process completed successfully.........
Step 6
Uninstall the global instance after removing each instance of GSS. Change to the /opt/gss_global/ directory where Global_Gss_Unistall.sh is placed. This script deletes the installation directories from the /opt/gss_global directory and removes the StarentGss package.
bash-3.00# cd /opt/gss_global/
bash-3.00# ls
Global_Gss_Unistall.sh bin make_gss_instance sc_event.dtd
Logs global nvpair.dtd sc_reply.dtd
bash-3.00# ./Global_Gss_Unistall.sh
Uninstallation Process will remove shared directories, installation directories....
Do You Want To Proceed With Uninstallation [y/n]: [n] ? y
Step 7
Enter y (yes) to continue the uninstall process.
Starting cleanup process
Cleaning the starentGss package ...
The following package is currently installed:
StarentGss Sun Cluster resource type for Gss server (sparc) 3.0.0,REV=
Do you want to remove this package? [y,n,?,q] y
Step 8
Enter y (yes) to remove the installed package instance.
## Removing installed package instance <StarentGss>
This package contains scripts which will be executed with super-user permission during the process of removing this package.
Do you want to continue with the removal of this package [y,n,?,q] y
Step 9
Enter y (yes) to continue the uninstall process.
## Verifying package <StarentGss> dependencies in global zone
## Processing package information.
## Executing preremove script.
Resource <Gss-hars> has been removed already
Resource type <Starent.Gss> has been un-registered already
Network Resource not removed...
You may run removeGss again with the -h option to remove network resource.
## Removing pathnames in class <none>
/opt/StarentGss/util/stopGss
/opt/StarentGss/util/startGss
/opt/StarentGss/util/removeGss
/opt/StarentGss/util/Gss_config
/opt/StarentGss/util
/opt/StarentGss/man/man1m/stopGss.1m
/opt/StarentGss/man/man1m/startGss.1m
/opt/StarentGss/man/man1m/removeGss.1m
/opt/StarentGss/man/man1m/Gss_config.1m
/opt/StarentGss/man/man1m
/opt/StarentGss/man
/opt/StarentGss/etc/Starent.Gss
/opt/StarentGss/etc
/opt/StarentGss/bin/simple_probe
/opt/StarentGss/bin/hasp_check
/opt/StarentGss/bin/gettime
/opt/StarentGss/bin/gethostnames
/opt/StarentGss/bin/Gss_validate.ksh
/opt/StarentGss/bin/Gss_update.ksh
/opt/StarentGss/bin/Gss_svc_stop.ksh
/opt/StarentGss/bin/Gss_svc_start.ksh
/opt/StarentGss/bin/Gss_probe.ksh
/opt/StarentGss/bin/Gss_mon_stop.ksh
/opt/StarentGss/bin/Gss_mon_start.ksh
/opt/StarentGss/bin/Gss_mon_check.ksh
/opt/StarentGss/bin
/opt/StarentGss/README.Gss
/opt/StarentGss
## Executing postremove script.
## Updating system information.
Removal of <StarentGss> was successful.
Global GSS uninstalled ...
bash-3.00#
 
Uninstalling Multiple GSS Instances - Cluster Nodes
This section describes the process for uninstalling multiple instances of GSS and all of the associated GSS components, for a cluster deployment.
 
Using the Uninstallation Script
Uninstallation is performed using an uninstall script that must be run for each instance of GSS and then for the global installation, in both stand-alone and cluster modes. The script removes the shared directories and installation directories. Before you begin, it is recommended that you do the following:
 
Copy and rename the old gss.cfg file to a safe directory.
 
Uninstallation Procedure - Cluster Node
The ClusterGSSUninstall.sh script is used for the uninstall process in cluster mode. It is created in each GSS instance installation directory. For example, if the GSS instance installation directory is /cvserver, the uninstallation script is created in /cvserver as ClusterGSSUninstall.sh. During both the installation and upgrade processes, the system generates system-specific uninstall files: an uninstall script and an uninstall configuration file.
To uninstall the instances of GSS in cluster mode:
Step 1
# cd /<instance_dir>
Step 2
Uninstallation of instance 1 from node 1
bash-3.00# cd GSS1
bash-3.00# ls
ClusterGSSUninstall.sh gss postgres
bash-3.00# ./ClusterGSSUninstall.sh
Uninstallation Process will remove shared directories, installation directories....
Do You Want To Proceed With Uninstallation [y/n] : [n] ? y
Step 3
Enter y (yes) to continue the uninstall and start the cleanup operation.
starting cleanup process
checking if resource group exists..
resourcegroup GSS1 exists
Bringing Resource Group offline from both the nodes.....
removing cluster resources..
Bringing the resource group offline..
Disabling Gss-hars_GSS1
Disable gssserv
Disable GSS1_crnp
unmanaging GSS1
removing CRNP
invalid resource
clresource: (C615326) Will not attempt to delete resource "CRNP".
removing Gss-hars_GSS1
removing gssserv
clresource: (C383672) Will not attempt to delete "gssserv" as another resource in the system has dependency on it.
removing GSS1_crnp
Remove the resource group
Cleaning the gss and postgres common data paths ...
Cleaning the gss installation paths
Cleaning the postgres installation paths ...
Cleaning the /opt directories paths ...
Cleaning the shared memory and semaphores
Deleting the postgres user created during installation ...
Cleaning the postgres lock from /tmp directory
Uninstallation process completed successfully.....
Step 4
Uninstallation of instance 2 from node 1
bash-3.00# cd GSS2
bash-3.00# ls
ClusterGSSUninstall.sh gss postgres
bash-3.00# ./ClusterGSSUninstall.sh
Uninstallation Process will remove shared directories, installation directories....
Do You Want To Proceed With Uninstallation [y/n] : [n] ? y
Step 5
Enter y (yes) to continue the uninstall process.
starting cleanup process
checking if resource group exists..
resourcegroup GSS2 exists
Bringing Resource Group offline from both the nodes.....
removing cluster resources..
Bringing the resource group offline..
Disabling Gss-hars_GSS2
Disable gssserver
Disable GSS2_crnp
unmanaging GSS2
removing CRNP
invalid resource
clresource: (C615326) Will not attempt to delete resource "CRNP".
removing Gss-hars_GSS2
removing gssserver
clresource: (C383672) Will not attempt to delete "gssserver" as another resource in the system has dependency on it.
removing GSS2_crnp
Remove the resource group
Cleaning the gss and postgres common data paths ...
Cleaning the gss installation paths
Cleaning the postgres installation paths ...
Cleaning the /opt directories paths ...
Cleaning the shared memory and semaphores
Deleting the postgres user created during installation ...
Cleaning the postgres lock from /tmp directory
Uninstallation process completed successfully.....
Step 6
Uninstallation of Global instance from node 1
bash-3.00# cd /opt/gss_global/
bash-3.00# ls
Global_Gss_Unistall.sh bin make_gss_instance sc_event.dtd
Logs global nvpair.dtd sc_reply.dtd
bash-3.00# ./Global_Gss_Unistall.sh
Uninstallation Process will remove shared directories, installation directories....
Do You Want To Proceed With Uninstallation [y/n]: [n] ? y
Step 7
Enter y (yes) to continue the uninstall and start the cleanup operation.
Starting cleanup process
Cleaning the starentGss package ...
The following package is currently installed:
StarentGss Sun Cluster resource type for Gss server (sparc) 3.0.0,REV=
Do you want to remove this package? [y,n,?,q] y
Step 8
Enter y (yes) to continue.
## Removing installed package instance <StarentGss>
This package contains scripts which will be executed with super-user permission during the process of removing this package.
Do you want to continue with the removal of this package [y,n,?,q] y
Step 9
Enter y (yes) to continue.
## Verifying package <StarentGss> dependencies in global zone
## Processing package information.
## Executing preremove script.
Resource <Gss-hars> has been removed already
Removing the resource type <Starent.Gss> ...
scrgadm -r -t Starent.Gss
done.
Network Resource not removed...
You may run removeGss again with the -h option to remove network resource.
## Removing pathnames in class <none>
/opt/StarentGss/util/stopGss
/opt/StarentGss/util/startGss
/opt/StarentGss/util/removeGss
/opt/StarentGss/util/Gss_config
/opt/StarentGss/util
/opt/StarentGss/man/man1m/stopGss.1m
/opt/StarentGss/man/man1m/startGss.1m
/opt/StarentGss/man/man1m/removeGss.1m
/opt/StarentGss/man/man1m/Gss_config.1m
/opt/StarentGss/man/man1m
/opt/StarentGss/man
/opt/StarentGss/etc/Starent.Gss
/opt/StarentGss/etc
/opt/StarentGss/bin/simple_probe
/opt/StarentGss/bin/hasp_check
/opt/StarentGss/bin/gettime
/opt/StarentGss/bin/gethostnames
/opt/StarentGss/bin/Gss_validate.ksh
/opt/StarentGss/bin/Gss_update.ksh
/opt/StarentGss/bin/Gss_svc_stop.ksh
/opt/StarentGss/bin/Gss_svc_start.ksh
/opt/StarentGss/bin/Gss_probe.ksh
/opt/StarentGss/bin/Gss_mon_stop.ksh
/opt/StarentGss/bin/Gss_mon_start.ksh
/opt/StarentGss/bin/Gss_mon_check.ksh
/opt/StarentGss/bin
/opt/StarentGss/README.Gss
/opt/StarentGss
## Executing postremove script.
## Updating system information.
Removal of <StarentGss> was successful.
Global GSS uninstalled ...
If the following message is displayed when Global_Gss_Unistall.sh is run, one or more instances are still configured and must be uninstalled before the global installation can be removed. To identify which instances remain, see /etc/gss/.instance_configfile.
There are instance[s] configured on this system, please uninstall the instances using StandaloneGSSUninstall.sh script in instance directories before uninstalling the global GSS installation
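For example, listing that file shows which instances are still configured:
# cat /etc/gss/.instance_configfile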
Step 10
Log in to node 2 and repeat step 1 through step 9 to uninstall the different instances of GSS and the global instance from node 2.
 
Upgrading Multiple GSS Instances - Stand-alone Node
This process upgrades both the GSS and Postgres database application files for multiple GSS instances on a stand-alone node. The script begins by stopping all active processes and then restarts them after the upgrade is completed.
This upgrade process is valid for upgrading:
 
Preparing to Upgrade
Step 1
Change to the /<install_dir>/gss/etc directory and make a copy of the current gss.cfg.
Step 2
Rename the file and store it in a separate directory. It is recommended to store the copy in the /<packages>/<gss_version> directory holding the current GSS version.
Step 3
Locate the file gss_<version>.tar.gz in the /<packages> directory and unzip it by entering the following command:
gunzip gss_<version>.tar.gz
<version> is the version number of the GSS software distributed in the compressed tar file. For example, gss_8.0.xx.tar.gz.
Step 4
Locate the tar file gss_<version>.tar in the /<packages> directory and untar the file by entering the following command:
tar -xvf gss_<version>.tar
During the untar process, a /gss_<version> directory (for example: /gss_8.0.xx) is created in the /<packages> directory.
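As an alternative, if gzcat is available on the system, steps 3 and 4 can be combined into a single command:
gzcat gss_<version>.tar.gz | tar -xvf -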
 
Using the Installation Script
Installation is accomplished using the inst_serv script. It provides a menu-driven interface with question prompts. Most prompts display default values or information derived from the server's current setup, such as IP addresses for configured interfaces.
The following information will help you use the installation script most effectively:
Ctrl-C will abort the installation process at any time during the procedure.
The information from the prompts is used to generate the GSS configuration file (gss.cfg). This file can be changed at any time after the installation.
Important: It is recommended that you fill in path prompts only after you have created the directories to be used.
 
Upgrading a GSS Stand-alone Node
The following procedure assumes that you are logged in to the GSS server with root privileges and that you are starting at the root directory level.
Step 1
Change to the /<packages>/gss_<version> directory that was created when the files were uncompressed.
Step 2
Locate the installation script file inst_serv and execute the following command:
./inst_serv
Important: This script will check the version of the operating system installed on the system. If it is not matching the requirements in the Minimum System Requirements for Stand-alone Deployment section, the script will abort the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command:
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Standalone Mode Installation [in standalone environment]...
GTPP Storage Server installation directory
Path where Gss will be installed :
GSS Installation dir ? /<install_dir>
Important: If the upgrade is to GSS 8.0.xx, then /opt appears as the default value for the GSS Installation dir parameter.
Step 3
Enter the installation directory of the existing GSS, /<install_dir>. The following prompt appears:
Step 4
Do you want to upgrade GSS [n] ? y
Step 5
Enter y (yes) to continue the upgrade.
Starting Standalone upgrade
This will stop Process Monitor Tool along with GSS and Filegen
Please see log/psmon.log file for log messages
Stopping Process Monitor Tool...
Done.
Stopping GSS...
Done.
Stopping GSS FileGen...
Process GSS FileGen could not be stopped..
Killing it...
Done.
waiting for server to shut down.... done
server stopped
server starting
Starting database upgrade...
Database upgrade completed.....
waiting for server to shut down.... done
server stopped
Do You Want To Start GSS [y/n]: [y] ?
Step 6
Enter y or press Enter to start the GSS and all related processes (e.g. the Process Monitor).
This will start Process Monitor Tool along with GSS and Filegen using params listed in /<install_dir>/gss/etc/gss.cfg
Please see log/psmon.log file for log messages
Starting Process Monitor Tool...
Done.
Capturing status, please wait for a while...
=============================================================
0 17453 11:35:15 TS 59 0:00 /<install_dir>/gss/lib/perl5.8.5/bin/perl -w /<install_dir>/gss/psmon --daemon --cron 1
0 17460 11:35:16 TS 59 0:01 /<install_dir>/gss/bin/gss 1
0 17465 11:35:16 TS 59 0:01 /<install_dir>/gss/bin/gssfilegen 1
0 17108 11:33:08 TS 59 0:00 tee -a /<install_dir>/gss_7_1_67/installation_log_<install_dir>_gss1 17067
0 2791 Oct_13 TS 59 1:01 /bin/bash /<install_dir>/gss/bin/test_process.sh 1
0 17421 11:35:12 TS 59 0:00 tee -a /<install_dir>/gss_7_1_67/installation_log_<install_dir>_sol1ems 17106
postgres 17479 17435 0 11:35:16 ? 0:00 /<install_dir>/postgres/bin/postgres -D /<install_dir>/postgres/data -i
postgres 17485 17435 0 11:35:16 ? 0:00 /<install_dir>/postgres/bin/postgres -D /<install_dir>/postgres/data -i
...
postgres 17491 17435 0 11:35:16 ? 0:00 /<install_dir>/postgres/bin/postgres -D /<install_dir>/postgres/data -i
=============================================================
GTPP Storage Server Version <gss_x_x_xx> installation done.
====================================================================
The status display indicates that GSS, FileGen, PSMON, and PostgreSQL have all been started. If nothing displays, turn to the Troubleshooting the GSS section in the GTPP Storage Server Administration chapter.
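As a final check, you can confirm the running version; for example, for GSS versions higher than x.x.69 (per the notes earlier in this chapter), enter:
# cd /<install_dir>/gss
# ./serv version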
 
Upgrading Multiple GSS Instances - Cluster Nodes
This process upgrades both the GSS and Postgres database application files for multiple GSS instances on the nodes in a cluster.
This upgrade process is valid for upgrading:
 
Preparing to Upgrade
Step 1
Change to the /<clus_install_dir>/gss/etc directory and make a copy of the current gss.cfg.
Step 2
Rename the file and store it in a separate directory. It is recommended to store the copy in the /<packages>/<gss_version> directory holding the current GSS version.
Step 3
Locate the file gss_<version>.tar.gz in the /<packages> directory and unzip it by entering the following command:
gunzip gss_<version>.tar.gz
<version> is the version number of the GSS software distributed in the compressed tar file. For example, gss_8.0.xx.tar.gz.
Step 4
Locate the tar file gss_<version>.tar in the /<packages> directory and untar the file by entering the following command:
tar -xvf gss_<version>.tar
During the untar process, a /gss_<version> directory (for example: /gss_8.0.xx) is created in the /<packages> directory.
 
Upgrading Node1 - Primary Node
This upgrade procedure ensures that no CDRs are lost, because one of the nodes is always in active/online mode.
The primary node in a cluster is typically referred to as node1. Begin the process by logging in to node1.
Step 1
Change to the /<packages>/gss_<version> directory and execute the installation script:
./inst_serv
Important: This script will check the version of operating system and cluster software installed on the system. If it is not matching the requirements in the Minimum System Requirements for Cluster Deployment section, the script will abort the GSS installation.
The following appears, with pauses for validation, after entering the inst_serv command:
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Cluster mode GSS installation exists /<server_name>
Step 2
Type y (yes) at the following prompt to upgrade the existing installation.
Do you wish to upgrade this installation [y/n]: [n] ?
The following prompt appears only when the backup mode has not been configured during the fresh installation of GSS.
Do you want Backup installation for current cluster mode installation [y/n]: [n] ?
*** PostgreSQL configuration for backup Installation ***
1) PostgreSQL port for backup installation :
2) PosrgreSQL login for backup installation :
3) PostgreSQL data directory for backup installation :
n) Proceed to next configuration
a) Abort Installation
Enter Your Choice : [n] ?
Step 3
Please note that GSS upgrade in cluster mode needs following steps:-
1. Database upgrade on either of the nodes.
2. The node should be in standalone mode (after DB upgrade is done).
--------------------------------------------------------------
This procedure will walk you through database upgrade first and then boot the node in standalone mode.
Step 4
Important: The database can only be upgraded on the node where the GSS is currently online (primary).
Do you want to upgrade database? (If you have already upgraded it on another node, you may answer no to this question and proceed to boot the node in standalone mode.) [y/n] : [y] ? y
The script proceeds to inform you of the next actions that are being taken.
Resource group "Gss-harg" is online on node "clustgss2".
For DB upgrade you need to start/switch resourcegroup on node clustgss1.
Do you want to start/switch resourcegroup "Gss-harg" on node clustgss1. [y/n] : [y] ?
Bringing resource group online on node clustgss1, please wait...
Using "cluster_db_upgrade" script from "/<packages>/gss_<version>" to upgrade database.
clustgss
Starting database upgrade...
Database upgrade completed successfully................
You need to reboot the node in standalone mode to upgrade the GSS
After rebooting node in standalone mode, use "inst_serv" script for GSS upgrade.
Do you want to reboot the node in standalone mode [y/n] : [y] ?
Step 5
Press Enter or type y to reboot into stand-alone mode and continue the upgrade process.
Rebooting node .....
Step 6
login as: root
# clresourcegroup status
=== Cluster Resource Groups ===
Group Name         Node Name   Suspended   Status
----------         ---------   ---------   ------
Gss-harg            <name_node2>  No       Online
Gss-harg            <name_node1>  No       Offline
The display above indicates the upgrade script successfully performed the switchover.
Important: It will take a minute or two for the resource group to switch to node2 and for node1 to reboot in stand-alone mode.
Step 7
scstat
The system should indicate the node is not a cluster node.
scstat: not a cluster member.
Step 8
Return to node1, change to the directory where the GSS application files were stored, /<packages>/gss_<version>, and initiate the installation script.
# ./inst_serv
Checking For Root Privileges ........
Done
Warning :
Before starting installation process, please make sure that intended postgres username does not exist.
During "cluster mode" installation process, postgres user will be created with UID 100001. Before starting cluster mode installation, please make sure that UID 100001 is not in use.
Please check that the following parameters are set in the '/etc/system' file. If they are not, please abort the installation using ^C , make required changes in '/etc/system' file, restart the machine to get these changes reflected and then start installation again.
set msgsys:msginfo_msgmnb=65536
set msgsys:msginfo_msgtql=1024
set shmsys:shminfo_shmmax=33554432
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
Cluster mode GSS installation exists /<clus_install_dir>
Do you wish to upgrade this installation [y/n]: [n] ?
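Before answering the prompt, you can quickly confirm that the kernel parameters listed above are present (a minimal check; the egrep pattern is illustrative):
egrep '^set (msgsys|shmsys|semsys):' /etc/system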
Step 9
Press Enter or type y (yes) to continue the upgrade.
Starting upgrade of cluster
Creating backup of previous installation files as /<gss_install_dir>/gss_backup.tar
Backup done.
GSS Cluster Agent Installation configuration
Proceeding with GSS Cluster Agent Installation
Extracting StarentGss.tar...
Processing package instance <StarentGss> from </<package_dir>/gss_<x_x_xx>>
Sun Cluster resource type for Gss server (sparc) 3.0.0,REV=
Sun Microsystems, Inc.
This appears to be an attempt to install the same architecture and version of a package which is already installed. This installation will attempt to overwrite this package.
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
28 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user permission during the process of installing this package.
Do you want to continue with the installation of <StarentGss> [y,n,?] y
Step 10
Press Enter or type y to confirm the upgrade of the cluster resources.
Installing Sun Cluster resource type for Gss server as <StarentGss>
## Installing part 1 of 1.
[ verifying class <none> ]
## Executing postinstall script.
Installation of <StarentGss> was successful.
Extracting perl tar... Done.
Starting Backup database upgrade...
Backup Database upgrade completed.....
waiting for server to shut down.... done
server stopped
You need to reboot the node in cluster mode for normal cluster mode operation.
If database is not upgraded on another node, please run "cluster_db_upgrade" script on another node for database upgrade.
Do you want to reboot the node in cluster mode [y/n] : [y] ?
Step 11
Rebooting node .....
Step 12
clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name         Status
---------         ------
<node_name1>       Online
<node_name2>       Online
The system recognizes node1 as the standby cluster node.
Step 13
Verify the newly installed GSS version:
./GSS -version
Important: ./serv version is used when upgrading to GSS versions higher than x.x.69.
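For example, depending on the target release (a sketch; /opt/gss is the default installation directory noted in the next step):
cd /opt/gss
./GSS -version     # upgrading to GSS versions up to x.x.69
./serv version     # upgrading to GSS versions higher than x.x.69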
Step 14
Important: Reminder from previous step - /opt is a default installation directory for some versions of GSS.
Change to the GSS installation directory and initiate the switchover:
cd /opt/gss
./GSS switch
Important: ./serv switch is used when upgrading to GSS versions higher than x.x.69.
After entering the switchover command, the system displays the following:
Resource group "Gss-harg" is online on node "<name_node2>".
Bringing resource group online on node "<name_node1>". Please wait ...
Done.
Step 15
clresourcegroup status
=== Cluster Resource Groups ===
Group Name         Node Name       Suspended    Status
----------       -------------     ---------    ------
Gss-harg         <name_node1>          No       Online
Gss-harg         <name_node2>          No       Offline
Now the GSS resource group is active with the new release; node2 is in standby mode and free to be upgraded.
Step 16
Perform step 1 to step 11 on node2 to upgrade it to the newer version of GSS.
Important: While executing step 3 on the second node, do not choose to upgrade the database, as it has already been upgraded for the first node. After completing the steps on node2, the GSS upgrade on the cluster setup is complete.
 
Configuring IPMP on GSS Server (Optional)
With IPMP, two or more network interface cards (bge0, bge1, and so on) are dedicated to each network to which the host connects. Each interface is assigned a static "test" IP address, which is used to assess the operational state of the interface. Each virtual IP address is assigned to an interface, though there may be more interfaces than virtual IP addresses, with some of the interfaces reserved purely for standby purposes. When the failure of an interface is detected, its virtual IP addresses are swapped to an operational interface in the group.
The IPMP load spreading feature increases the machine's bandwidth by spreading the outbound load between all the cards in the same IPMP group.
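Once IPMP is configured, group membership and manual failover can be exercised from the shell (a sketch for Solaris; the interface name bge0 is illustrative):
ifconfig -a | grep groupname      # member interfaces report their IPMP group name
if_mpadm -d bge0                  # detach bge0 from its group, forcing a failover
if_mpadm -r bge0                  # reattach bge0 to its group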
Important: IPMP is a feature supported on Sun® Solaris® provided by Sun Microsystems. The configuration is included in the System Administration Guide. For more information, refer to the Sun documentation.
This section describes the following procedures to configure IP Multipathing on the GSS server:
Configuring Probe-based IP Multipathing
Configuring Link-based IP Multipathing
Before proceeding with the IPMP configuration, review the following terms; a concrete illustration of the three address types appears after this list:
Multipath Interface Group: This is the name given to the group of network devices in a multipath configuration.
Test Addresses: These are IP addresses assigned to each board/interface of the multipath group. They do not fail over, and they should not be used for connections in or out of the host.
Multipath/float Address: This is the IP address allocated to a Multipath Interface Group and shared between all devices in the group (either by load sharing or active-standby).
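As a concrete illustration of the three address types (all names and addresses are examples only), the /etc/hosts entries created in the probe-based procedure below might look like this:
192.168.10.10   gsshost-active   # multipath/float address shared by the group
192.168.10.11   gsshost-bge0     # test address for bge0; never used for traffic
192.168.10.12   gsshost-bge1     # test address for bge1; never used for traffic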
 
Configuring Probe-based IP Multipathing
The configuration procedure given here assumes that:
<NIC_1> and <NIC_2> are the network interface devices (for example, bge0 and bge1).
Network device <NIC_2> is used as the active interface and <NIC_1> as the standby.
The multipath IP address is <multipath_IP_address>.
The test IP address for the <NIC_1> interface is <test_IP_address_NIC_1>.
The test IP address for the <NIC_2> interface is <test_IP_address_NIC_2>.
Step 1
Ensure that the MAC addresses on the host are unique by setting the local-mac-address parameter to true. Run the following command as the root user:
eeprom local-mac-address?=true
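You can verify the current setting before proceeding (a minimal check):
eeprom local-mac-address?
The command should report local-mac-address?=true.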
Step 2
Create an /etc/hostname.<NIC_1> file for the standby network device with the following entry:
<hostname>-<NIC_1> netmask <netmask> broadcast + group <multipath_grp> deprecated -failover standby up
<hostname> is the name of the host and <NIC_1> is the network device to be set as standby.
<multipath_grp> is the Multipath Interface Group name given to the group of network devices in a multipath configuration.
<netmask> is the subnet mask used by the network.
Step 3
Create an /etc/hostname.<NIC_2> file for the active network device with the following entry:
<hostname>-<NIC_2> netmask 255.255.255.0 broadcast + group <multipath_grp> deprecated -failover up addif <hostname>-active netmask 255.255.255.0 broadcast + failover up
<hostname> is the name of the host and <NIC_2> is the network device to be set as active.
<multipath_grp> is the Multipath Interface Group name given to the group of network devices in a multipath configuration.
Step 4
Edit the /etc/hosts file using a text editor such as vi and add the following three entries:
<multipath_IP_address> <hostname>-active
<test_IP_address_NIC_1> <hostname>-<NIC_1>
<test_IP_address_NIC_2> <hostname>-<NIC_2>
<multipath_IP_address> is the IP address allocated to the Multipath Interface Group and shared between all devices in the group (either by load sharing or active-standby).
<test_IP_address_NIC_1> is the test IP address assigned to the <NIC_1> interface of the multipath group; it does not fail over and should not be used for connections in or out of the host.
<test_IP_address_NIC_2> is the test IP address assigned to the <NIC_2> interface of the multipath group; it does not fail over and should not be used for connections in or out of the host.
Step 5
Reboot the system for the IPMP configuration to take effect:
shutdown -i 6 -g 0 -y
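Putting steps 2 through 4 together, a complete probe-based configuration might look like the following (the hostname gsshost, group name mpgrp1, interfaces bge0/bge1, and all IP addresses are illustrative only; here bge1 is active and bge0 is standby):
Contents of /etc/hostname.bge0 (standby):
gsshost-bge0 netmask 255.255.255.0 broadcast + group mpgrp1 deprecated -failover standby up
Contents of /etc/hostname.bge1 (active):
gsshost-bge1 netmask 255.255.255.0 broadcast + group mpgrp1 deprecated -failover up addif gsshost-active netmask 255.255.255.0 broadcast + failover up
Entries added to /etc/hosts:
192.168.10.10 gsshost-active
192.168.10.11 gsshost-bge0
192.168.10.12 gsshost-bge1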
 
Configuring Link-based IP Multipathing
The configuration procedure provided here assumes that:
<NIC_1> and <NIC_2> are the network interface devices (for example, bge0 and bge1).
Network device <NIC_1> is used as the active interface and <NIC_2> as the standby.
The multipath IP address is <multipath_IP_address>.
The test IP address for the <NIC_1> interface is <test_IP_address_NIC_1>.
The test IP address for the <NIC_2> interface is <test_IP_address_NIC_2>.
<my_address> is associated with the multipath IP address <multipath_IP_address> in the /etc/hosts file.
Step 1
Ensure that the MAC addresses on the host are unique by setting the local-mac-address parameter to true by running following command as root user:
eeprom local-mac-address?=true
Step 2
Create an /etc/hostname.<NIC_1> file for the active network device with the following entry:
<my_address> netmask + broadcast + group <multipath_grp> up
<my_address> is associated with the multipath IP address <multipath_IP_address> in the /etc/hosts file.
<multipath_grp> is the Multipath Interface Group name given to the group of network devices in a multipath configuration.
Step 3
Create an /etc/hostname.<NIC_2> file for the standby network device with the following entry:
group <multipath_grp> up
Step 4
Reboot the system for the IPMP configuration to take effect:
shutdown -i 6 -g 0 -y
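For comparison, a complete link-based configuration might look like the following (the hostname gsshost, group name mpgrp1, and interfaces bge0/bge1 are illustrative only; here bge0 is active and bge1 is standby):
Contents of /etc/hostname.bge0 (active):
gsshost netmask + broadcast + group mpgrp1 up
Contents of /etc/hostname.bge1 (standby):
group mpgrp1 up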
 
 
