GTPP Storage Server Overview


The GTPP Storage Server (GSS) provides an external management solution for the bulk storage of Charging Data Records (CDRs) coming from a GPRS Support Node (GSN) in a GPRS/UMTS network.
The GSS can collect eG-CDRs and/or G-CDRs from a Gateway GPRS Support Node (GGSN), or it can collect SGSN-generated CDR types from a Serving GPRS Support Node (SGSN).
This overview provides general information about the GSS.
 
Product Description
The GSS enhances the mobile carrier’s ability to manage CDRs. It runs on standard carrier-grade servers in either a stand-alone or cluster-aware deployment; clustering ensures high availability, and there is no practical limit on the storage period.
The GSS provides redundant/backup CDR storage for the billing/charging data by enabling the GGSN to simultaneously send CDRs to both the GSS and the Charging Gateway Function (CGF).
The GSS FileGen utility generates proprietarily encoded CDR files for transfer via FTP or SFTP to an offline billing system (BS).
The GTPP Storage Server comprises several feature components, which are described in the Features of the GSS section later in this overview.
 
Partnering with a GSN
The GSS is an “external application” product that resides on a server separate from the ASR 5000 GSN. The GSS is accessible only if you have purchased this product separately and have purchased and installed a GSS feature license on your ASR 5000 GSN system.
Prior to attempting to connect the GSS to the GSN, it is recommended that you:
Step 1
Step 2
Step 3
Install and configure the GSS application (see the GSS Installation Management chapter in this guide).
Step 4
Set up the GSS support on the GSN (see the Managing the GSN-GSS Services chapter in this guide).
 
System Requirements and Recommendations
This section identifies the minimum system requirements for the GTPP Storage Server and describes any application-specific software requirements.
Important: The hardware required for these components may vary depending on the number of clients that require access, other components managed, and other variables.
 
Minimum System Requirements for Stand-alone Deployment
 
Important: It is recommended that you have separate interfaces (in IPMP) for the mediation device and the chassis. Also, for a given IPMP group, the two interfaces should be on different cards.
 
Important: If you plan to install software and maintain the servers and applications remotely, it is recommended that you use an X-Windows client.
 
Minimum System Requirements for Cluster Deployment
The hardware and software requirements in this section are for a single node in the cluster. Each additional node requires additional hardware and software.
 
Important: It is recommended that you have separate interfaces (in IPMP) for the mediation device and the chassis. Also, for a given IPMP group, the two interfaces should be on different cards.
 
Important: If you plan to install software and maintain the servers and applications remotely, it is recommended that you use an X-Windows client.
 
Default Ports for GSS
The various components of the GTPP storage server use specific TCP/UDP ports by default. The following table lists the default ports.
Default TCP/UDP Port Utilization
 
GSS Hardware Sizing and Provisioning Guidelines
In addition to the minimum system requirements identified in the Minimum System Requirements for Stand-alone Deployment and Minimum System Requirements for Cluster Deployment sections, this section offers information to help you plan hardware sizing based on your exact deployment scenario.
 
Hard Drive Partition Recommendations
The following partition scheme is required for the GSS application:
/globaldevices should be at least 1 GB. This partition is applicable for cluster mode only.
/opt should be at least 10 GB.
/export/home should be the partition used for GSS and PostgreSQL.
In a cluster-mode installation, PostgreSQL and CDR storage reside on /shareddisk for all cluster nodes, so /export/home may not require 20 GB of free disk space.
A typical CDR is approximately 200 bytes. Based on this, with 4 million CDRs per hour and a 2-day (48-hour) backup period, the following formula approximates the amount of space needed to back up this information:
200 × (#_of_CDRs_per_hour) × 48 × 1.5 = backup space on hard disk, in bytes
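For example, a minimal sketch of this calculation in Python, using the 4-million-CDRs-per-hour rate mentioned above (the 1.5 multiplier is the overhead factor from the formula):

# Approximate backup disk space for CDR files, per the formula above.
CDR_SIZE_BYTES = 200          # typical CDR size
CDRS_PER_HOUR = 4_000_000     # example rate from the sizing guideline
BACKUP_HOURS = 48             # 2 days of backup
OVERHEAD_FACTOR = 1.5         # headroom factor from the formula

backup_bytes = CDR_SIZE_BYTES * CDRS_PER_HOUR * BACKUP_HOURS * OVERHEAD_FACTOR
print(f"Backup space required: {backup_bytes / 1e9:.1f} GB")  # prints ~57.6 GB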
 
IP Multipathing (IPMP) on GSS Server (Optional)
IPMP or IP multipathing is a facility provided by Solaris® to provide physical interface failure detection and transparent network access failover for a system with multiple interfaces on the same IP link. IPMP also provides load spreading of packets for systems with multiple interfaces.
For IPMP configuration, refer to the Configuring IPMP on GSS Server section in the GSS Installation Management chapter.
Important: IPMP is a Sun® Solaris® feature provided by Sun Microsystems. Its configuration is described in the Solaris System Administration Guide; for more information, refer to the Sun documentation.
 
Features of the GSS
This section describes the various features of the GSS application.
 
GSS Server Application
This software application receives the CDRs from the GSN and stores them in database tables. It also provides a mechanism to send ACK responses to the GSN.
 
PostgreSQL Database Engine 8.2.0
The GSS application uses this database engine to process and store the information received from the GSN and the records generated by the GSS application. It is required that the PostgreSQL database engine resides on the same server as the GSS application.
 
GSS FileGen Utility
The GTPP Storage Server has a file generation utility called GSS FileGen. It generates CDR files for billing systems that do not have a direct billing interface with the GSN.
The GSS FileGen saves the CDRs stored in the GSS database to the disk files.
 
File Format Encoding for CDRs
The file format determines the organization and structure of the generated data files. Each file format is different, and file formats are customizable.
Important: If none of the following formats meets your needs, contact your support representative to inquire about obtaining a customized file format.
The GSS FileGen utility supports the following file formats for CDRs:
starent Format: This default file format encodes CDRs according to the following conventions:
Header: No header
Contents: CDR1CDR2CDR3...CDRn
File name (acknowledged CDRs): GSN_<date>+<time>_<total-cdrs>_file<fileseqnum>
File name (unacknowledged CDRs): GSN_<date>+<time>_<total-cdrs>_unacked_file<fileseqnum>
custom1 Format: This file format encodes CDRs according to the starent file format explained above.
Important: The use of either the starent or custom1 file format imposes the following behaviors: files are generated without an extension; acknowledged and unacknowledged files are differentiated by their file names; and the system deletes all files after they reach the maximum storage period (1-7 days) configured during GSS configuration.
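For illustration, the following minimal Python sketch shows how a file name following this convention could be assembled. The exact <date>+<time> layout is not specified here, so the strftime pattern below is an assumption, and build_filename is a hypothetical helper:

# Hypothetical sketch: assembling a starent-style CDR file name.
from datetime import datetime

def build_filename(total_cdrs: int, seqnum: int, acked: bool = True) -> str:
    # GSN_<date>+<time>_<total-cdrs>_file<fileseqnum>, with an "unacked"
    # marker when the file holds unacknowledged CDRs.
    stamp = datetime.now().strftime("%Y%m%d+%H%M%S")  # assumed date+time layout
    marker = "" if acked else "unacked_"
    return f"GSN_{stamp}_{total_cdrs}_{marker}file{seqnum}"

print(build_filename(120000, 42))            # GSN_<date>+<time>_120000_file42
print(build_filename(500, 43, acked=False))  # GSN_<date>+<time>_500_unacked_file43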
 
custom2 Format: This customer-specific file format encodes CDRs according to the following conventions:
Header: 24-byte header incorporating the following information:
 
 
Contents: LEN1CDR1LEN2CDR2LEN3CDR3...LENnCDRn
EoF marker: No EoF marker
File name: GSN_<date>+<time>_<total-cdrs>_file<fileseqnum>.u
 
custom3 Format: This customer-specific file format encodes CDRs according to the following conventions:
Header: No header
Contents: CDR1CDR2CDR3...CDRn
EoF marker: No EoF marker
File name: GSN_<date>+<time>_<total-cdrs>_file<fileseqnum>.u
Important: The use of either the custom2 or custom3 file format imposes the following behaviors: files are generated with the .u file extension (indicating to the billing system that the file is unprocessed), and the GSS system deletes files with the .p extension as part of periodic clean-up.
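As an illustration of the custom2 layout, here is a minimal Python sketch that walks a length-prefixed record stream. The width and byte order of each LEN field are assumptions made for illustration; the actual encoding is customer-specific:

# Hypothetical sketch: iterating length-prefixed CDRs (LEN1 CDR1 LEN2 CDR2 ... LENn CDRn).
import struct

def iter_cdrs(data: bytes, header_size: int = 24):
    # Skip the 24-byte file header, then read one record at a time.
    # A 2-byte big-endian length prefix is assumed here for illustration.
    offset = header_size
    while offset < len(data):
        (length,) = struct.unpack_from(">H", data, offset)
        offset += 2
        yield data[offset:offset + length]
        offset += length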
 
custom4 Format: This format was created to support writing CDRs in blocks. It is similar to the custom3 file format, except that CDRs are written in 2-Kbyte blocks within the file.
Header: No header
Contents: CDR1|CDR2FFFFFF|CDR3FFFFF..|..CDRnFFFF|
where | represents the end of a 2-Kbyte block (the FF characters represent padding to the block boundary)
EoF marker: No EoF marker
File name: <GSN_Location>_<date>+<time>_<total-cdrs>_file<fileseqnum>.u
Important: With the custom4 file format, files are generated with the .u file extension, indicating a file not yet processed by the billing system. Typically, the billing system renames a file with the .p extension after processing the CDR information in it; this also informs the GSS system that the file can be deleted during periodic clean-up.
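A minimal Python sketch of packing CDRs into fixed 2-Kbyte blocks follows. The 0xFF pad byte mirrors the FF filler shown in the contents layout above, and the sketch assumes each CDR fits within a single block:

# Hypothetical sketch: packing CDRs into fixed 2048-byte blocks, custom4-style.
BLOCK_SIZE = 2048
PAD = b"\xff"  # filler byte, per the FF... shown in the contents layout

def pack_blocks(cdrs: list[bytes]) -> bytes:
    blocks, current = [], b""
    for cdr in cdrs:
        if current and len(current) + len(cdr) > BLOCK_SIZE:
            # The next CDR does not fit: pad this block to the boundary with 0xFF.
            blocks.append(current.ljust(BLOCK_SIZE, PAD))
            current = b""
        current += cdr
    if current:
        blocks.append(current.ljust(BLOCK_SIZE, PAD))
    return b"".join(blocks)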
 
custom5 Format: This file format is similar to the custom3 file format, except that the sequence number in the CDR file name is a fixed six digits in length, ranging from 000001 to 999999.
Header: No header
Contents: CDR1CDR2CDR3...CDRn
EoF marker: No EoF marker
File name: <GSN_Location>_<date>+<time>_<total-cdrs>_file<fixed-length-seqnum>.u
Important: This release of GSS does not support custom6 file format.
 
custom7 Format: This customer-specific file format contains CDRs converted from ASN.1 format to ASCII format according to the following conventions. Each line in the file consists of one CDR, which contains 33 parameters occupying 491 bytes.
Header: No header
Contents: CDR1CDR2CDR3...CDRn
EoF marker: No EoF marker
File name: Processed_02_YYYYMMDDhhmmss.cdr
 
custom8 Format: This customer-specific file format encodes CDRs according to the following conventions:
Header: No header
Contents: CDR1CDR2CDR3...CDRn
EoF marker: No EoF marker
File name: <node-id-suffix>_<date>_<time>_<fixed-length-seq-num>.u
Important: The custom2 to custom8 file formats are customer-specific. For more information on the file formats, contact your local sales representative.
For more information on CDR accounting attribute elements, refer to the AAA Interface Administration and Reference Guide.
 
Redundant Data File Support
The FileGen utility includes an additional feature to generate redundant GSS files. When this feature is enabled, the FileGen utility automatically creates a directory called /<GSS_install_dir>/data_redundant (the name cannot be changed). After the original data file is created and stored in the /<GSS_install_dir>/data directory, the FileGen utility creates a hard link in the /<GSS_install_dir>/data_redundant directory to the same tmp file that was used to create the original data file. Effectively, this stores a hard-linked duplicate of each data file in the redundant directory.
The redundant directory is on the same partition and cannot be moved. Because the files are hard-linked, the redundant files are not deleted if/when the original files are deleted.
By default, this feature is disabled. It can be enabled during the installation of the GSS application (see the installation procedure later in this guide), or it can be enabled/disabled at any time by using a text editor to modify the appropriate lines in the GSS configuration file (gss.cfg):
#Key: Enable_Redundant_File
#Flag to indicate whether to enable redundant file creation in path parallel to
#primary data path. For example <gss_dir>/data_redundant
#Value : yes/no
#Default: no
Enable_Redundant_File = yes
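Conceptually, the hard-link step resembles the following Python sketch (the install path and file name are hypothetical; the actual FileGen implementation is proprietary):

# Hypothetical sketch: hard-linking a generated data file into the redundant directory.
import os

gss_dir = "/opt/gss"  # hypothetical /<GSS_install_dir>
original = os.path.join(gss_dir, "data", "example_cdr_file.u")  # hypothetical file name
redundant = os.path.join(gss_dir, "data_redundant", os.path.basename(original))

os.link(original, redundant)  # both names now reference the same inode, so
                              # deleting the original does not delete the copy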
 
PSMON
PSMON is a UNIX process-monitor utility that starts when the GSS starts and then runs as a fully functional background daemon, capable of logging to syslog and to a log file, with customizable e-mail notification facilities.
PSMON monitors the PostgreSQL database, GSS, and FileGen processes. It scans the operating system process table and, using the set of rules defined in the configuration file, respawns any dead processes.
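Conceptually, a process monitor of this kind works like the following simplified Python sketch (an illustration only, not the actual PSMON implementation; the process names and commands are hypothetical):

# Simplified illustration of a scan-and-respawn loop, in the spirit of PSMON.
import subprocess, time

MONITORED = {
    "postgres": ["postgres", "-D", "/export/home/pgdata"],  # hypothetical command
    "gss": ["gss_server"],                                  # hypothetical command
    "filegen": ["gss_filegen"],                             # hypothetical command
}

def is_running(name: str) -> bool:
    # Scan the OS process table for the process name.
    out = subprocess.run(["ps", "-eo", "comm"], capture_output=True, text=True).stdout
    return name in out.split()

while True:
    for name, cmd in MONITORED.items():
        if not is_running(name):
            subprocess.Popen(cmd)  # respawn the dead process
    time.sleep(30)                 # poll interval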
 
Cluster Support in GSS
The cluster-mode feature enables the GSS to provide high availability and critical redundancy support so that CDRs can still be retrieved if any one of the systems fails. A GSS cluster is two or more GSS systems, or nodes, that work together as a single, continuously available system to provide applications, system resources, and data to GSS users. Each GSS node in a cluster is a fully functional, stand-alone system. However, in a clustered environment, the GSS nodes are connected by an interconnect network and work together as a single entity to provide increased availability and performance.
Highly available clusters provide nearly continuous access to data and applications by keeping the cluster running through failures that would normally bring down a single server system.
A cluster offers several advantages over traditional single-server systems, including increased availability and performance.
 
Cluster Components
The following cluster components work with the GSS to provide this functionality:
 
A GSS cluster node is a GSS server that runs both the GSS application software and the Cluster Agent software. The Cluster Agent enables the carrier to network two GSS nodes into a cluster. Every GSS node in the cluster is aware when another GSS node joins or leaves the cluster. Also, every GSS node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other GSS cluster nodes.
Each GSS cluster node is a stand-alone server that runs its own processes. These processes communicate with one another to form what looks like (to a network client) a single system that cooperatively provides applications, system resources, and data to GSS users.
A common storage system is Fibre Channel (FC)-based cluster storage with FC drives for the servers in the cluster environment. It is interconnected with the GSS cluster nodes through carrier-class network connectivity to provide highly redundant storage and backup support for CDRs. It serves as common storage for all connected GSS cluster nodes.
This system provides high storage scalability and redundancy with RAID support.
Important: For information on Switching CDRs from HDD to GSS and Switching CDRs from GSS to HDD procedures, refer to the AAA Interface Administration and Reference Guide.
 
Multiple Instance GSS
This feature enables one server or a single cluster setup to support multiple data streams by running multiple instances of GSS with a single installation and multiple databases. In a cluster setup, there is only one installation per node. During installation, GSS is installed at a fixed location (the /opt/gss_global directory). The initial GSS installation does not create any GSS instance. Once GSS is installed on both nodes, the /opt/gss_global/make_gss_instance script utility creates instances as needed and checks for conflicting ports/usernames across the instances.
All instances on a node share a single set of binaries and scripts. Each instance has its own configuration file, log directory, tools directory, and separate PostgreSQL database. The alarms and events generated by each instance are sent to its corresponding chassis. Individual GSS instances can also be stopped, started, or switched over. Upgrades are smooth and involve as little downtime as possible.
Each GSS instance can be uninstalled separately, without any impact on the other instances. The global installation can be uninstalled only if no instances are configured or running on the system.
The following figure explains the architecture of multiple GSS instances in a cluster setup.
 
Multiple Instances GSS
For more information on the advantages of this feature and on the installation, uninstallation, and upgrade procedures for multiple GSS instances, refer to the Multiple Instances of GSS section in the GSS Installation Management chapter.
 
Monitoring of Disk Partitions
This feature enables disk monitoring of the shared PostgreSQL and GSS installation disk partitions, along with the GSS data files disk partition. It sends an alarm or a notification based on the available disk space for the PostgreSQL database and the GSS base directory. This feature is supported only for single-instance GSS, and for GSS in cluster mode.
This feature can be enabled after installation by configuring the Notif_Disk_Usage_Postgres_Database and Notif_Disk_Usage_Gss_Base parameters in the GSS configuration file; there is no configuration support for it in the installation script or during installation. For information on configuring these parameters, refer to the Modifying a GSS Configuration section in the GTPP Storage Server Administration chapter of this guide.
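For illustration, the corresponding gss.cfg entries might look like the following. The threshold values and their interpretation (assumed here to be disk-usage percentages) are hypothetical; refer to the Modifying a GSS Configuration section for the actual semantics:

#Key: Notif_Disk_Usage_Postgres_Database
#Disk-usage notification threshold for the PostgreSQL database partition
#(value shown is illustrative)
Notif_Disk_Usage_Postgres_Database = 80

#Key: Notif_Disk_Usage_Gss_Base
#Disk-usage notification threshold for the GSS base directory partition
#(value shown is illustrative)
Notif_Disk_Usage_Gss_Base = 80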
Important: This feature is not backward compatible; the GSN build should always match the GSS build. If the GSN and GSS builds do not match, the disk usage alarm and the GSN Storage Server Status CLI will not work as expected on the GSN side, and some malfunction may occur. In that case, the GSN and GSS will be functional only if the disk usage alarm is disabled and the Storage Server Status CLI is not used.
 
Network Deployments and Interfaces
The GSS, in either a stand-alone or a cluster configuration, partners with a GSN (either an SGSN or a GGSN) in a GPRS/UMTS network to support a secure accounting solution. Optionally, other elements are included as needed, such as a billing/mediation system, a RADIUS AAA server, a Fibre Channel common storage server, and/or a Charging Gateway Function (CGF).
 
Deploying the GSS
The following figure shows two typical deployments of the GSS in a GPRS/UMTS network.
 
GSS in GPRS/UMTS Network
The SGSN (SGSN Service) and the GGSN (GGSN Service) incorporate a range of user-defined and default contexts for the accounting functions - as illustrated in the following figure.
GGSN Contexts and Interfaces
The logical accounting context in the SGSN Service on an SGSN and the GGSN Service on a GGSN facilitate:
The source context of the GSN usually includes the
The GGSN destination context (not supported by SGSN) facilitates:
In order to support a GSS, the GSN system is configured with two components:
GTPP Storage Server (GSS) is configured in the same context as the GSN service(s) or any other accounting context. The configuration of the GSN initiates the tasks that communicate with the GSS.
UDP interface on the GSN is bound to the GTPP Storage Server (GSS). The UDP interface is a proprietary interface used by the GSN system to communicate with the GSS.
 
Cluster Mode GSS Deployment in GPRS/UMTS Network
The following figure shows a typical deployment of the cluster-aware GSS nodes in a GPRS/UMTS network with a Common Storage System. The GSS nodes, connecting through switches, could be connected to either a GGSN or an SGSN. As described earlier, the cluster nodes connect to the GGSN source context or the SGSN accounting context via the UDP interface.
The GSS cluster nodes operate as stand-alone nodes, with one in primary (active) mode and the other in standby mode as a redundant backup system.
GSS Cluster Nodes in a GPRS/UMTS Network
 
How the GSS Works
The GSS and the GSS FileGen utility need to be configured to archive incoming records and export them to CDR files. The GSS generates the CDR files with a customer specific format. These generated CDR files can then be pulled (via FTP or SFTP) and used by the carrier’s billing system.
The following describes how the GSS interoperates with a GSN:
1.
2.
3.
The GSS FileGen utility retrieves records from the database and generates CDR files. As explained in the File Format Encoding for CDRs section, these CDR files have vendor-specific extensions and formatting for the billing system to use.
To generate a CDR file, the FileGen utility performs the following tasks:
It starts writing a raw file named tmp in the /<GSS_install_dir>/data directory.
Once the files are generated, the files with the .u extension in the /<GSS_install_dir>/data directory can be pulled by a billing system for processing of the charging details.
Depending on the billing system, the pulled files can be renamed with the .p extension after processing. The processed files with the .p extension can then be removed by the clean-up script, based on the Maximum Storage Period configured for generated/processed data files. (A sketch of this pull-and-rename flow appears after this list.)
4.
5.
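For illustration only, here is a minimal Python sketch of the billing-system side of the .u/.p convention described above (the data path is hypothetical, and process_cdrs is a placeholder for the billing system's own processing):

# Hypothetical sketch: a billing system pulling .u files and marking them .p.
import os

DATA_DIR = "/opt/gss/data"  # hypothetical /<GSS_install_dir>/data path

def process_cdrs(raw: bytes) -> None:
    pass  # placeholder for the billing system's own CDR processing

for name in os.listdir(DATA_DIR):
    if not name.endswith(".u"):
        continue  # only unprocessed CDR files are of interest
    path = os.path.join(DATA_DIR, name)
    with open(path, "rb") as f:
        process_cdrs(f.read())
    # Rename .u -> .p so GSS knows the file can be removed during clean-up.
    os.rename(path, path[:-2] + ".p")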
 
 
