This module explains how to use utility scripts to manage the Collection Manager database and the comma separated value (CSV) repository.
Many database management tasks are applicable only to the bundled Sybase database.
Note For general instructions on using utility scripts, see Using Utility Scripts, page 4-1.
Note By default, the Collection Manager handles database-related transactions using uppercase table names. However, on a Windows-based MySQL server, table names are stored in lowercase, name comparisons are not case-sensitive, and the server automatically converts all table names to lowercase. Therefore, if you are using a Windows-based database server, you must modify the lower_case_table_names server configuration entry.
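The following is a sketch of the relevant entry in the MySQL server configuration file (typically my.ini on Windows); the file location and the value to use depend on your MySQL installation, so treat the value shown as an illustration only:
[mysqld]
# Controls how MySQL stores and compares table names.
# Choose the value appropriate for your deployment.
lower_case_table_names=2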
•Common Database Management Tasks
•Managing the Bundled Database
Database management tasks that are applicable to all the supported databases are:
•Generating a list of the database tables
•Defining and applying the schedule for the periodic deletion of old records
•Deleting a table
•Manually deleting old records from a table
Every record stored in the database is given a timestamp indicating the time that the Cisco Service Control Management Suite (SCMS) Collection Manager (CM) received the Raw Data Record (RDR). This timestamp is used when various maintenance operations are performed on the database tables.
Use the following scripts to configure and maintain the database:
•~scmscm/scripts/dbtables.sh
•~scmscm/scripts/dbperiodic.py
•~scmscm/db_maint/create_periodic_del_procs.sh
•~scmscm/scripts/droptable.sh
•~scmscm/scripts/prunetable.sh
To display a list of all of the tables in the database, use the dbtables.sh script:
~scmscm/scripts/dbtables.sh
Where applicable, the number of lines in the table and the earliest and latest timestamps are displayed.
Actual content of the tables can be displayed using the Cisco Service Control Application (SCA) Reporter. For more information, see the Cisco Service Control Application Reporter User Guide.
Table 5-1 lists the dbtables.sh script options.
The following is a sample output from the dbtables.sh script:
>~scmscm/scripts/dbtables.sh
Executing query ...
name| num_lines| min_time| max_time|
----------------+--------+------------------------+------------------------+
RPT_SUR| 131000| 2006-10-30 16:46:42.24| 2007-02-15 12:00:32.216|
RPT_LUR| 170000| 2007-04-10 15:25:45.31| 2007-04-11 07:06:05.45|
RPT_VLUR| 4694| 2007-04-11 13:12:39.683| 2007-04-11 13:18:07.396|
RPT_PUR| 116000| 2007-04-09 04:45:55.566| 2007-04-11 07:44:09.186|
RPT_TR| 57766| 2007-04-11 13:12:39.683| 2007-04-11 13:18:07.396|
RPT_MALUR| 109000| 2007-04-09 04:46:35.903| 2007-04-09 13:32:18.42|
RPT_MEDIA| 120000| 2007-04-05 17:14:24.443| 2007-04-11 13:16:29.436|
RPT_TOPS_PERIOD0| 194250| 2007-03-18 20:00:00.01| 2007-04-23 06:00:00.16|
RPT_TOPS_PERIOD1| 46940| 2007-03-19 00:00:00.05| 2007-04-23 00:00:00.1|
To manage the periodic deletion of old records, perform the following steps (a consolidated command sketch follows the steps):
1. Install the periodic delete procedures if they were not installed during the CM installation:
Log in as the scmscm user, start the CM, wait 1 to 2 minutes for the database tables to be created, and then run the script:
~scmscm/db_maint/create_periodic_del_procs.sh.
2. Edit the periodic delete configuration file. See Configuring Periodic Delete.
3. To apply the new configuration, use the dbperiodic.py utility script. See Applying the Periodic Delete Configuration File.
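The following is a consolidated sketch of the preceding steps, assuming the default configuration file location. Start the CM with the startup command used in your installation; the editor shown is only an example:
su - scmscm
# start the CM, then wait 1 to 2 minutes for the database tables to be created
~scmscm/db_maint/create_periodic_del_procs.sh
vi ~scmscm/db_maint/dbperiodic.conf
~scmscm/scripts/dbperiodic.py --load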
Periodic deletion of a table does not begin while a previous periodic deletion is still running. This prevents excessive load on the database, which would degrade insertion performance in the adapters.
When two or more tables are scheduled to be reduced at the same time, the tables are processed in the order in which they are listed in the periodic delete configuration file.
For ease of configuration, you can schedule periodic deletion for all tables consecutively on one schedule.
All periodic delete activity is recorded in the system log file. On Solaris, the system log file is /var/adm/messages; on Linux, it is /var/log/messages. The following is an example of the system log entries written when a periodic delete action occurs.
Feb 17 05:00:07 app-redhat79 logger: /opt/scmscm/db_maint/periodic_del.sh for hourly all - running single del for RPT_LUR to 14 days
Feb 17 05:00:09 app-redhat79 logger: Starting periodic delete for RPT_LUR keeping 14 days
Feb 17 05:00:09 app-redhat79 logger: Deleted rows: 0 from the table RPT_LUR
Note When using a MySQL database, periodic delete is supported only on MySQL Version 5 or later.
•Applying the Periodic Delete Configuration File
The periodic delete configuration file (dbperiodic.conf) is, by default, located at ~scmscm/db_maint/. The file has a structure similar to an INI file, where each section describes a particular data reduction operation for a specific set of tables, to be performed according to a specified schedule.
Note The name of each section of the file is not used when the file is parsed; you can use any section names you wish.
Each section begins with the section name in brackets and should contain the parameters listed in Table 5-2. (Not all parameters are required in every section of the configuration file.) Separate each parameter and its value with an equal sign (=). Examples of periodic delete configuration files follow.
Note Values for all parameters except active and daystokeep can be a single value, a list of values separated by commas, a range of values (two values separated by a dash), or an asterisk (*), which signifies all possible values. A range is not possible for the tablenames parameter.
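For example, the hour parameter can be written in any of the following ways (the values shown are illustrations only):
hour = 4 (a single value)
hour = 0,6,12,18 (a list of values)
hour = 0-5 (a range of values)
hour = * (all possible values)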
In the following example, all fields are set to their default values.
# This dbperiodic.conf file emulates the legacy style for periodic
# deletion. All tables are processed every hour on the hour, and
# records are kept for 14 days.
[hourly all]
active = true
tablenames = *
daystokeep = 14
minute = 0
hour = *
In this example, all tables are reduced at 4:30 A.M., leaving 10 days of data in each table. In addition, the real-time tables are reduced every hour, leaving three days of data in each table.
# This dbperiodic.conf file reduces all tables once a day and
# real-time tables once an hour.
[daily all]
active = true
tablenames = *
daystokeep = 10
minute = 30
hour = 4

[hourly real-time]
active = true
tablenames = RPT_SUR,RPT_LUR,RPT_PUR
daystokeep = 3
minute = 0
hour = *
To load and apply a new periodic delete configuration file or to view the current file, use the dbperiodic.py script:
~scmscm/scripts/dbperiodic.py [ --dump ] [ --load | --loadfile=path_to_dbperiodic.conf ]
When the script is used to load a new configuration file, it parses the file, verifies its validity, and updates the scmscm user's crontab to reflect the changes. Table 5-3 lists the dbperiodic.py options.
The following example shows how to print the current periodic delete configuration:
~scmscm/scripts/dbperiodic.py --dump
Note This script prints the periodic delete configuration that is currently loaded. If the latest periodic delete configuration file has not yet been loaded, its contents may differ from the script output.
The following example shows how to load the periodic delete configuration file from ~scmscm/db_maint/dbperiodic.conf :
~scmscm/scripts/dbperiodic.py --load
The following example shows how to load the periodic delete configuration file from a specified location:
~scmscm/scripts/dbperiodic.py --loadfile=path_to_periodic_delete_configuration_file
To delete one table or all current tables from the database, use the droptable.sh script:
~scmscm/scripts/droptable.sh [ -f ] tableParameter
Table 5-4 lists the droptable.sh options.
The following example shows how to drop a table named RPT_SUR from the database with no request for confirmation:
~scmscm/scripts/droptable.sh -f RPT_SUR
The following example shows how to drop all tables from the database:
~scmscm/scripts/droptable.sh ALLTABLES
To remove records from a database table based on the timestamps of the records, use the prunetable.sh script:
~scmscm/scripts/prunetable.sh [ -f ] num_days table_name
Table 5-5 lists the prunetable.sh options.
The following example shows how to delete all records that are more than seven days old from a table named RPT_SUR.
Because the -f flag is not specified, there may be requests for confirmation and errors will be reported.
>~scmscm/scripts/prunetable.sh 7 RPT_SUR
Managing the bundled database includes:
•Backing up and restoring a database
•Updating Sybase with a changed IP address
•Updating the Sybase database statistics
Each record stored in the database is given a timestamp indicating the time that the Cisco Service Control Management Suite (SCMS) Collection Manager (CM) received the Raw Data Record (RDR). This timestamp is used when various maintenance operations are performed on the database tables.
The following scripts are used to maintain the bundled Sybase database only:
•~scmscm/scripts/sybback.sh
•~scmscm/scripts/sybrestore.sh
•~scmscm/db_maint/update_statistics.sh
To create text file backups of all the tables in the database, use the sybback.sh script:
~scmscm/scripts/sybback.sh -d path_to_backup_directory
The script converts all tables to ASCII files and copies the files to a backup directory. Table 5-6 lists the sybback.sh options.
| Option | Description |
|---|---|
| -d path_to_backup_directory | Write backup text files to the specified directory |
| -h | Display these options |
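The following example shows how to back up the database tables to a directory; the directory shown is only an example, and it must have enough free space to hold the exported tables:
~scmscm/scripts/sybback.sh -d /var/tmp/cm_backup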
To restore a database from the backup file that was created by the sybback.sh script, use the sybrestore.sh script:
~scmscm/scripts/sybrestore.sh -d path_to_restore_directory
Table 5-7 lists the sybrestore.sh options.
| Option | Description |
|---|---|
| -d path_to_restore_directory | Restore the database using the text files in the specified directory |
| -h | Display these options |
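The following example shows how to restore the database from the text files created by the sybback.sh script; the directory shown is only an example:
~scmscm/scripts/sybrestore.sh -d /var/tmp/cm_backup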
Note The scripts sybback.sh and sybrestore.sh are not a viable backup mechanism for Sybase. They are designed for backing up and restoring small amounts of data; for example, transferring small tables between machines.
Note If you require a viable backup mechanism, please consult the Sybase Backup Server product documentation.
When you change the IP address of the CM machine, you must update the Sybase server. As the root user, run the following command:
~scmscm/setup/syb_interfaces.sh
To update the Sybase database statistics, run the update_statistics.sh script. We recommend running it once a week, at a time of low database load; for example, at night.
~scmscm/db_maint/update_statistics.sh
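The following is a sketch of a crontab entry for the scmscm user that runs the script once a week at night; the day and time shown are examples only:
# run update_statistics.sh every Sunday at 02:00
0 2 * * 0 ~scmscm/db_maint/update_statistics.sh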
You can use a utility script to manage the repository of CSV files output by the CM. These files are written to the disk by the comma separated value (CSV) adapter for use by a service provider's operational support system (OSS) or by a third-party billing system. To prevent disk overflow, monitor the size of the CSV repository.
Note If the backup parameter is set to true, no CSV files are ever deleted; failure to remove old CSV files may result in disk overflow.
Note The third-party application is responsible for managing the CSV files and deleting them as necessary.
To successfully invoke the csvconf.sh script, the HTTP adapter of the CM must be running. If the adapter is down, an error message is printed.
•CSV Repository File Structure
•Configuring the CSV File Repository
•Configuring Escape of Nonprintable Characters
CSV files are stored in several subdirectories. Each subdirectory is given the number of a Raw Data Record (RDR) tag. (RDR tags define the type of the RDR.) Each RDR is stored in the subdirectory whose name matches its RDR tag number. For more information on RDR tags, see the Cisco Service Control Application for Broadband Reference Guide.
The CSV files are automatically numbered sequentially, with separate numbering in each subdirectory. You can change the location of the parent directory by editing the cm.conf file, located in the cm/config directory.
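The following is an illustrative layout of the CSV repository; the parent directory name, RDR tag numbers, and file names shown are examples only:
csvfiles/ (parent directory; location is configured in cm.conf)
  4042321920/ (one subdirectory per RDR tag number)
    000001.csv
    000002.csv
  4042321922/
    000001.csv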
Use the csvconf.sh script, ~scmscm/scripts/csvconf.sh, to:
•List the number of RDRs currently stored in the repository.
•Configure the maximum number of CSV files and the maximum permissible number of reports (lines) in each file.
•Control whether a backup is made when an old CSV file is about to be overwritten.
•Control whether each line in a CSV file contains an indication of the IP address of the Service Control Engine (SCE) that sent the RDR. (By default, this option is off.)
Table 5-8 lists the csvconf.sh options.
Note Instead of using this script, you can edit the file ~scmscm/cm/config/csvadapter.conf. Changes in this file require a CM restart to take effect.
Note The same configuration is applied to all subdirectories in the CSV repository.
Note Setting these parameters does not change existing CSV files; it affects only files that are created subsequently.
The following example shows how to set the maximum number of CSV files per subdirectory to 1000.
>~scmscm/scripts/csvconf.sh --maxfiles=1000
The following example shows how to set the maximum number of records per CSV file to 10,000.
>~scmscm/scripts/csvconf.sh --maxlines=10000
The following example shows how to delete all files from the CSV repository:
~scmscm/scripts/csvconf.sh --clear
The following example shows how to disable backing up old CSV files in the repository:
~scmscm/scripts/csvconf.sh --backups=false
When a comma is contained within a field in a CSV file, an escape sequence indicates that the comma does not mark the end of the field.
Three escape methods are supported:
•Single quotation marks—Single quotation marks surround any field that contains one or more commas. There is no special treatment of single quotation marks already present in existing RDRs.
•URL—Each comma contained within a field is replaced by %2C. There is no special treatment of such sequences already present in existing RDRs.
•Backslash—Each comma contained within a field is preceded by a backslash (\). There is no special treatment of backslashes already present in existing RDRs.
The first two escape methods are compatible with Microsoft Excel. The Backslash method is not compatible with Excel, but is retained for backward compatibility.
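For example, a field whose value is the illustrative string a,b is written as follows under each escape method:
Single quotation marks: 'a,b'
URL: a%2Cb
Backslash: a\,b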
By default, single quotation marks are used. You can change the escape method by modifying the value of the escapeMethod attribute. This attribute is located in the csvadapter.conf file in the CSVAdapter directory. The value must be one of: backslash, quote, or url.
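For example, to switch to URL escaping, the entry in the csvadapter.conf file would resemble the following; the exact syntax may differ in your installed file:
escapeMethod=url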
The CSV adapter can escape nonprintable characters. Enabling this option incurs a performance penalty for the adapter; by default, the option is disabled.
When the option is enabled, each nonprintable character, such as CR and LF, contained within a field is preceded by a backslash (\).
You enable this option in the csvadapter.conf file in the CSVAdapter directory. Changes in this file require a CM restart to take effect.