This chapter explains how to use utility scripts to manage the Collection Manager (CM) database and the comma separated value (CSV) repository.
Many database management tasks are applicable only to the bundled Sybase database.
Note For general instruction on using utility scripts, see the "Using Utility Scripts" section.
Note By default, the Collection Manager handles database transactions using uppercase table names. However, on a Windows-based MySQL server, table names are stored in lowercase, name comparisons are not case-sensitive, and the server automatically converts all table names to lowercase. Therefore, if you are using a Windows-based database server, you must modify the lower_case_table_names server configuration entry.
•Common Database Management Tasks
•Managing the Bundled Database
•Partitioning the MySQL Database
Database management tasks that are applicable to all the supported databases are:
•Generating a list of the database tables
•Defining and applying the schedule for the periodic deletion of old records
•Deleting a table
•Manually deleting old records from a table
Every record stored in the database is given a timestamp indicating the time that the Cisco Service Control Management Suite (SCMS) CM received the raw data record (RDR). This timestamp is used when various maintenance operations are performed on the database tables.
Use the following scripts to configure and maintain the database:
•~scmscm/scripts/dbtables.sh
•~scmscm/scripts/dbperiodic.sh
•~scmscm/db_maint/create_periodic_del_procs.sh
•~scmscm/scripts/droptable.sh
•~scmscm/scripts/prunetable.sh
•~scmscm/scripts/sybutfsupport.sh
•~scmscm/scripts/upgradeDB.sh
•~scmscm/scripts/partitions.sh
•~scmscm/scripts/createaggevents.sh
To display a list of all of the tables in the database, use the dbtables.sh script:
~scmscm/scripts/dbtables.sh
Where applicable, the number of lines in the table and the earliest and latest timestamps are displayed.
The actual content of the tables can be displayed by using the Cisco Service Control Application (SCA) Reporter. For more information, see the Cisco Service Control Application Reporter User Guide.
Table 5-1 lists the dbtables.sh script options.
The following is a sample output from the dbtables.sh script:
>~scmscm/scripts/dbtables.sh Executing query ...
name| num_lines| min_time| max_time|
----------------+--------+------------------------+------------------------+
RPT_SUR| 131000| 2006-10-30 16:46:42.24| 2007-02-15 12:00:32.216|
RPT_LUR| 170000| 2007-04-10 15:25:45.31| 2007-04-11 07:06:05.45|
RPT_VLUR| 4694| 2007-04-11 13:12:39.683| 2007-04-11 13:18:07.396|
RPT_PUR| 116000| 2007-04-09 04:45:55.566| 2007-04-11 07:44:09.186|
RPT_TR| 57766| 2007-04-11 13:12:39.683| 2007-04-11 13:18:07.396|
RPT_MALUR| 109000| 2007-04-09 04:46:35.903| 2007-04-09 13:32:18.42|
RPT_MEDIA| 120000| 2007-04-05 17:14:24.443| 2007-04-11 13:16:29.436|
RPT_TOPS_PERIOD0| 194250| 2007-03-18 20:00:00.01| 2007-04-23 06:00:00.16|
RPT_TOPS_PERIOD1| 46940| 2007-03-19 00:00:00.05| 2007-04-23 00:00:00.1|
To manage the periodic deletion of old records, perform the following steps:
1. Install the periodic delete procedures if they were not installed during the CM installation:
Log in as the scmscm user, start the CM, wait 1 to 2 minutes for the database tables to be created, and then run the script:
~scmscm/db_maint/create_periodic_del_procs.sh.
2. Edit the periodic delete configuration file. See the "Configuring Periodic Delete" section.
3. To apply the new configuration, use the dbperiodic.sh utility script. See the "Applying the Periodic Delete Configuration File" section.
Periodic deletion of a table does not begin while a previous periodic deletion is still running. This prevents excessive load on the database, which would degrade insertion performance in the adapters.
When two or more tables are scheduled to be reduced at the same time, the tables are processed in the order in which they are listed in the periodic delete configuration file.
For ease of configuration, you can schedule periodic deletion for all tables consecutively on one schedule.
All periodic delete activity is recorded in the system log file. On Solaris, the system log file is /var/adm/messages; on Linux, it is /var/log/messages. The following is an example of the system log entries written when a periodic delete action occurs.
Feb 17 05:00:07 app-redhat79 logger: /opt/scmscm/db_maint/periodic_del.sh for hourly all - running single del for RPT_LUR to 14 days
Feb 17 05:00:09 app-redhat79 logger: Starting periodic delete for RPT_LUR keeping 14 days
Feb 17 05:00:09 app-redhat79 logger: Deleted rows: 0 from the table RPT_LUR
Note Periodic delete when using a MySQL database is supported only on Version 5 or later.
•Applying the Periodic Delete Configuration File
The periodic delete configuration file (dbperiodic.conf) is, by default, located at ~scmscm/db_maint/. The file has a structure similar to an INI file, where each section describes a particular data reduction operation for a specific set of tables, to be performed according to a specified schedule.
Note The name of each section of the file is not used when the file is parsed; you can use any names you wish.
Each section begins with the section name in brackets, and should contain the parameters shown in the following table. (Not all parameters are required in each section of the configuration file.) Separate the parameters and their values by an equal sign (=). Examples of periodic delete configuration files appear in Table 5-2.
Note Values for all parameters except active and daystokeep can be either a single value, a list of values separated by commas, a range of values (two values separated by a dash), or an asterisk (*) which signifies all possible values. A range is not possible for tablenames.
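As an illustration of these value formats (single value, comma-separated list, dash-separated range, or asterisk), the following sketch expands a schedule field into an explicit set of integers. This is not the actual parser used by the scripts; the field bounds passed in are assumptions for the example.

```python
def expand_field(value, low, high):
    """Expand a schedule field value into a sorted list of integers.

    Supports a single value ("5"), a comma-separated list ("0,30"),
    a range ("1-5"), and "*" meaning every value from low to high.
    """
    if value == "*":
        return list(range(low, high + 1))
    result = set()
    for part in value.split(","):
        if "-" in part:
            start, end = part.split("-", 1)
            result.update(range(int(start), int(end) + 1))
        else:
            result.add(int(part))
    return sorted(result)

# Example: a "minute" field accepts 0-59, an "hour" field 0-23.
print(expand_field("0,30", 0, 59))   # [0, 30]
print(expand_field("1-3", 0, 23))    # [1, 2, 3]
```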
In the following example, all fields are set to their default values.
# This dbperiodic.conf file emulates the legacy style for periodic
# deletion. All tables are processed every hour on the hour, and
# records are kept for 14 days.
[hourly all]
active = true
tablenames = *
daystokeep = 14
minute = 0
hour = *
In this example, all tables are reduced at 4:30 A.M., leaving 10 days of data in each table. In addition, the real-time tables are reduced every hour, leaving three days of data in each table.
# This dbperiodic.conf file reduces all tables once a day and
# real-time tables once an hour.
[daily all]
active = true
tablenames = *
daystokeep = 10
minute = 30
hour = 4

[hourly real-time]
active = true
tablenames = RPT_SUR,RPT_LUR,RPT_PUR
daystokeep = 3
minute = 0
hour = *
To load and apply a new periodic delete configuration file or to view the current file, use the dbperiodic.sh script:
~scmscm/scripts/dbperiodic.sh [ --dump ] [ --load | --loadfile=path_to_dbperiodic.conf ]
When the script is used to load a new configuration file, it parses the file, verifies its validity, and updates the scmscm user's crontab to reflect the changes. Table 5-3 lists the dbperiodic.sh options.
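The crontab update can be sketched as follows: each active section of the configuration file becomes one crontab-style line. The periodic_del.sh invocation and its arguments shown here are assumptions for illustration; the real arguments generated by dbperiodic.sh may differ.

```python
import configparser

SAMPLE = """
[hourly all]
active = true
tablenames = *
daystokeep = 14
minute = 0
hour = *
"""

def crontab_lines(conf_text):
    """Turn each active dbperiodic.conf section into a crontab-style line."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    lines = []
    for name in parser.sections():
        sec = parser[name]
        # Inactive sections are skipped entirely.
        if sec.get("active", "false").lower() != "true":
            continue
        lines.append("%s %s * * * ~scmscm/db_maint/periodic_del.sh %s %s" % (
            sec["minute"], sec["hour"], sec["tablenames"], sec["daystokeep"]))
    return lines

print(crontab_lines(SAMPLE))
```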
The following example shows how to print the current periodic delete configuration:
~scmscm/scripts/dbperiodic.sh --dump
Note This script prints the loaded periodic delete configuration. If the current periodic delete configuration file was not yet loaded, the configuration may vary from the script output.
The following example shows how to load the periodic delete configuration file from ~scmscm/db_maint/dbperiodic.conf:
~scmscm/scripts/dbperiodic.sh --load
The following example shows how to load the periodic delete configuration file from a specified location:
~scmscm/scripts/dbperiodic.sh --loadfile=path_to_periodic_delete_configuration_file
To delete one table or all current tables from the database, use the droptable.sh script:
~scmscm/scripts/droptable.sh [ -f ] tableParameter
Table 5-4 lists the droptable.sh options.
The following example shows how to drop a table named RPT_SUR from the database with no request for confirmation:
~scmscm/scripts/droptable.sh -f RPT_SUR
The following example shows how to drop all tables from the database:
~scmscm/scripts/droptable.sh ALLTABLES
To remove records from a database table based on the timestamps of the records, use the prunetable.sh script:
~scmscm/scripts/prunetable.sh [ -f ] num_days table_name
Table 5-5 lists the prunetable.sh options.
The following example shows how to delete all records that are more than seven days old from a table named RPT_SUR.
Because the -f flag is not specified, there may be requests for confirmation and errors are reported.
>~scmscm/scripts/prunetable.sh 7 RPT_SUR
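Conceptually, pruning removes every record whose timestamp is older than the cutoff computed from num_days. The sketch below builds such a DELETE statement; the timestamp column name is an assumption, and the bundled script may use a stored procedure internally rather than a literal statement like this.

```python
from datetime import datetime, timedelta

def prune_statement(table_name, num_days, timestamp_column="TIMESTAMP", now=None):
    """Build a DELETE statement that removes records older than num_days.

    The column name TIMESTAMP is a hypothetical placeholder for
    illustration only.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=num_days)
    return "DELETE FROM %s WHERE %s < '%s'" % (
        table_name, timestamp_column, cutoff.strftime("%Y-%m-%d %H:%M:%S"))

print(prune_statement("RPT_SUR", 7, now=datetime(2007, 4, 11, 13, 0, 0)))
# DELETE FROM RPT_SUR WHERE TIMESTAMP < '2007-04-04 13:00:00'
```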
To add UTF support to the Sybase database, use the following script:
~scmscm/scripts/sybutfsupport.sh
To upgrade a 3.5.x database to one compatible with Release 3.6.0, use the following script:
~scmscm/scripts/upgradeDB.sh
Note The ~scmscm/scripts/upgradeDB.sh script is required only if you retain the 3.5.x database after the CM upgrade.
Managing the bundled database includes:
•Backing up and restoring a database
•Updating Sybase with a changed IP address
•Updating the Sybase database statistics
The following scripts are used to maintain the bundled Sybase database only:
•~scmscm/scripts/sybback.sh
•~scmscm/scripts/sybrestore.sh
•~scmscm/db_maint/update_statistics.sh
To create text file backups of all the tables in the database, use the sybback.sh script:
~scmscm/scripts/sybback.sh -d path_to_backup_directory
The script converts all tables to ASCII files and copies the files to a backup directory. Table 5-6 lists the sybback.sh options.
| Option | Description |
|---|---|
| -d path_to_backup_directory | Write backup text files to the specified directory. |
| -h | Display these options. |
To restore a database from the backup file that was created by the sybback.sh script, use the sybrestore.sh script:
~scmscm/scripts/sybrestore.sh -d path_to_restore_directory
Table 5-7 lists the sybrestore.sh options.
| Option | Description |
|---|---|
| -d path_to_restore_directory | Restore the database using the text files in the specified directory. |
| -h | Display these options. |
Note The scripts sybback.sh and sybrestore.sh are not a viable backup mechanism for Sybase. They are designed for backing up and restoring small amounts of data; for example, transferring small tables between machines.
Note If you require a viable backup mechanism, please consult the Sybase Backup Server product documentation.
When you change the IP address of the machine, you must update the Sybase server. As the root user, run the following command:
~scmscm/setup/syb_interfaces.sh
We recommend that you run the update_statistics.sh script once a week at a time of low database load; for example, at night.
~scmscm/db_maint/update_statistics.sh
Records can be stored in an external database such as MySQL. Starting with Release 3.6.5, the partitioning feature is supported when MySQL is configured as the database.
The partitioning feature is supported only when the MySQL server meets the following conditions:
•MySQL version 5.1.06 or later
•The open_files_limit value in the [mysqld] section of the my.cnf file is set to 8000 for the default partitioning configuration
•The thread_stack value in the [mysqld] section of the my.cnf file is set to at least 256K
Restart the MySQL server for the configuration changes to take effect.
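Taken together with the event scheduler setting described later in this section, the relevant my.cnf fragment looks like the following sketch (values are taken from the requirements above; adjust them to your deployment):

```
[mysqld]
open_files_limit = 8000
thread_stack = 256K
event_scheduler = 1
```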
The partitioning configuration file (partitions.conf) is located in the ~scmscm/cm/config/dbpacks/mysql/4.0.20/partitions/ directory.
The following is a sample of the configuration file:
[partitions_config1]
table_names = RPT_LUR,RPT_TOPS_PERIOD1,RPT_FUR,RPT_GUR
no_of_partition = 30
period = 1
scheduler_time = 23:30
Each section begins with a unique section name, starting with partitions_config, in brackets, and should contain the parameters shown in Table 5-8. Separate the parameters and their values with an equal sign (=).
To create and view partitions, use the partitions.sh script:
~scmscm/scripts/partitions.sh [ --create ] [ --non_partitioned ] [ --partitioned ] [ --help ]
Table 5-9 lists the partitions.sh options.
The following steps describe how to enable the partitioning feature:
1. Log in as the scmscm user.
2. Change directory to ~scmscm/scripts/.
3. Customize the ~scmscm/cm/config/dbpacks/mysql/4.0.20/partitions/partitions.conf file based on your requirements.
4. Run ~scmscm/scripts/loadsqlprocedures.sh -l -par to load the required support files for partitioning.
5. Run ~scmscm/scripts/partitions.sh to create the partitioning.
Note Make sure the periodic deletion process is disabled in CM when the partitioning feature is enabled.
During the creation of the partitions, the partitions.sh script performs the following actions:
•Enables the event scheduler on the MySQL server.
•Skips partitioning for any table listed in the configuration file (table_names) that is not present in the database; the remaining tables are still processed.
Once the MySQL server is restarted after the partitioning, the event scheduler will be disabled. To ensure the event scheduler stays enabled through a server restart, add "event_scheduler = 1" to the [mysqld] section of the my.cnf file.
Note If any missing table is later created in the database, then run the partitions.sh script to create new partitioning. The same event scheduler is applicable for the new partitioning in each configuration section. This will not disturb the existing partitioning.
Note If any new tables are created in the database, then define new configuration sections for these new tables and run the partitions.sh script afterwards. This will not disturb the existing partitioning.
Although the partitioning event scheduler runs every day, it decides whether to create or delete a partition based on the configuration parameters no_of_partition and period. Even if the server is restarted, the partitioning event scheduler will not be disturbed.
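The sliding-window behavior driven by no_of_partition and period can be sketched as follows. The partition naming scheme and the exact decision rules here are assumptions for illustration; the real logic lives in the stored procedures loaded by loadsqlprocedures.sh.

```python
from datetime import date, timedelta

def rotate_partitions(existing, today, no_of_partition, period):
    """Decide which partitions to create and which to drop.

    Maintains a sliding window of no_of_partition partitions, each
    covering `period` days, named pYYYYMMDD by the start date (a
    hypothetical naming scheme used only for this sketch).
    """
    wanted = set()
    for i in range(no_of_partition):
        start = today - timedelta(days=i * period)
        wanted.add("p" + start.strftime("%Y%m%d"))
    create = sorted(wanted - set(existing))   # missing from the window
    drop = sorted(set(existing) - wanted)     # aged out of the window
    return create, drop

create, drop = rotate_partitions(
    ["p20130101", "p20130102"], date(2013, 1, 3), no_of_partition=3, period=1)
print(create, drop)
```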
Database data can be aggregated per varying time periods and stored in the same database tables. Both raw and aggregated data are stored in order to maintain compatibility with reporting tools that pull this data to generate analysis reports. All reports that process raw data for display can also process the aggregated data since the data structure is the same.
An input date parameter specifies the data records to be processed. These records are then stored per time period. For xUR tables, hourly aggregation is performed for the fourteen days following the input date; thereafter, daily aggregation is performed. For HTTP, video, and NUR VSA tables, daily aggregation is performed.
During the aggregation process, table rows are aggregated per time period. These values are placed in a temporary table. The source records are then removed to allow insertion of the aggregated records.
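The core of that process can be sketched in a few lines: rows sharing a key and an hour are collapsed into one row whose volume is the sum, and the aggregated rows replace the source rows. The row layout here is a simplification for illustration, not the actual table schema used by the stored procedures.

```python
from collections import defaultdict

def aggregate_hourly(rows):
    """Collapse raw rows into one row per (key, hour), summing the volume.

    Each row is ((day, hour, minute), key, volume) -- a deliberately
    simplified stand-in for a real timestamped database row.
    """
    bucket = defaultdict(int)
    for (day, hour, _minute), key, volume in rows:
        bucket[(day, hour, key)] += volume
    # Aggregated rows replace the source rows, timestamped on the hour.
    return sorted(((day, hour, 0), key, vol)
                  for (day, hour, key), vol in bucket.items())

raw = [((1, 9, 5), "subscriber-a", 10),
       ((1, 9, 40), "subscriber-a", 5),
       ((1, 10, 0), "subscriber-a", 7)]
print(aggregate_hourly(raw))
```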
The aggregation feature is supported in the following tables:
•RPT_LUR
•RPT_PUR
•RPT_VLUR
•RPT_MALUR
•RPT_SUR
•RPT_SPAM
•RPT_ZUR
•RPT_TOP_HTTP_DOMAINS
•RPT_TOP_VIDEO_DOMAINS
•RPT_TOP_HTTP_HOSTS
•RPT_TOP_VIDEO_HOSTS
•RPT_TOP_APN
•RPT_TOP_DEVICE_TYPE
•RPT_TOP_NETWORK_TYPE
•RPT_TOP_USER_LOCATION
•RPT_TOP_SGSN
Note If both periodic deletion and the aggregation process are enabled, you can increase the number of days for which records are retained.
The aggregation configuration file ~scmscm/cm/config/aggregation.conf defines the values used in the aggregation process.
The following is an example configuration file:
[agg_event1]
procedure_names = P_RPT_MALUR_AGGREGATE, P_RPT_LUR_AGGREGATE
no_of_days = 2
scheduler_time = 23:30
Each section in the aggregation configuration file begins with a unique section name in brackets ([]), starting with agg_event, and should contain the parameters shown in Table 5-10. Separate the parameters and their values with an equal sign (=).
To create a new aggregation event, use the createaggevents.sh script:
~scmscm/scripts/createaggevents.sh [ --create ] [ --help ]
Table 5-11 lists the createaggevents.sh options.
| Option | Description |
|---|---|
| --create | Creates the corresponding aggregation events. |
| --help | Display these options. |
Data aggregation is supported in Sybase, MySQL, and Oracle databases. The following sections describe how to enable the aggregation feature:
•Enabling Aggregation in Sybase
•Enabling Aggregation in MySQL
•Enabling Aggregation in Oracle
Data aggregation is supported in Sybase versions 12 and 15.
The following steps describe how to enable the aggregation feature:
1. Log in as the scmscm user.
2. Run ~scmscm/scripts/loadsqlprocedures.sh -l -agg to load the required support files for aggregation.
3. Create a cron job to run ~scmscm/cm/config/dbpacks/sybase/ase12.5.1/aggregation/cron/run_aggregation.sh to schedule the aggregation process.
Cron job example:
crontab -e
Update the path and expected time to schedule the job:
30 23 * * * /<path>/run_aggregation.sh
4. See ~scmscm/cm/config/dbpacks/sybase/ase12.5.1/aggregation/cron/readme_run_aggregation_procedure.txt for more information.
Data aggregation is supported in MySQL versions 5.1.06 and later.
The following steps describe how to enable the aggregation feature:
1. Log in as the scmscm user.
2. The ~scmscm/cm/config/dbpacks/mysql/4.0.20/aggregation/all_events_create.sql file contains the SQL statements that create the default aggregation events with a defined time period.
3. Run ~scmscm/scripts/createaggevents.sh --create to override the existing aggregation events based on the ~scmscm/cm/config/aggregation.conf file.
Data aggregation is supported in Oracle versions 9, 10, and 11.
The following steps describe how to enable the aggregation feature:
1. Log in as the scmscm user.
2. The ~scmscm/cm/config/dbpacks/oracle/9204e/aggregation/JOB_RUN_AGGREGATION.sql file contains the SQL statements that create the default aggregation events with a defined time period.
3. Run ~scmscm/scripts/createaggevents.sh --create to override the existing aggregation events based on the ~scmscm/cm/config/aggregation.conf file.
Note Make sure the Oracle user has sufficient privileges to alter and execute the aggregation events (for example: GRANT SELECT, INSERT, DELETE ON sys.job$ TO pqb_admin).
You can use a utility script to manage the repository of CSV files output by the CM. These files are written to the disk by the CSV adapter for use by a service provider's operational support system (OSS) or by a third-party billing system. To prevent disk overflow, monitor the size of the CSV repository.
Note If the backup parameter is set to true, no CSV files are ever deleted; in that case, failure to delete the CSV files externally may result in disk overflow.
Note The third-party application is responsible for managing the CSV files and deleting them as necessary.
To successfully invoke this script, the HTTP adapter of the CM must be running. If the adapter is down, an error message is printed.
•CSV Repository File Structure
•Configuring the CSV File Repository
•Configuring Escape of Nonprintable Characters
CSV files are stored in several subdirectories. Each subdirectory is given the number of a Raw Data Record (RDR) tag. (RDR tags define the type of the RDR.) Each RDR is stored in the subdirectory whose name matches its RDR tag number. For more information about RDR tags, see the Cisco Service Control Application for Broadband Reference Guide.
The CSV files are automatically numbered sequentially, with separate numbering in each directory. You can change the location of the parent directory by editing the cm.conf file, located in the cm/config directory.
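The directory layout described above can be sketched as follows. The base directory, the example RDR tag number, and the zero-padded file-name format are all assumptions for illustration; only the tag-numbered subdirectory and per-directory sequential numbering come from the text.

```python
def csv_file_path(base_dir, rdr_tag, seq, width=9):
    """Build the path of a CSV file in the repository.

    Files live under a subdirectory named after the RDR tag number and
    are numbered sequentially within each directory. The zero-padded
    file-name format is a hypothetical choice for this sketch.
    """
    return "%s/%d/%0*d.csv" % (base_dir, rdr_tag, width, seq)

# Hypothetical base directory and RDR tag number:
print(csv_file_path("/opt/scmscm/cm/csv", 4042321920, 7))
```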
Use the ~scmscm/scripts/csvconf.sh script to:
•List the number of RDRs currently stored in the repository.
•Configure the maximum number of CSV files and the maximum permissible number of reports (lines) in each file.
•Control whether a backup is made when an old CSV file is about to be overwritten.
•Control whether each line in a CSV file contains an indication of the IP of the Service Control Engine (SCE) that sent this RDR. (By default, this option is off.)
Table 5-12 lists the csvconf.sh options.
Note Instead of using this script, you can edit the ~scmscm/cm/config/csvadapter.conf file. Changes in this file require a CM restart to take effect.
Note The same configuration is applied to all subdirectories in the CSV repository.
Note Setting these parameters does not change existing CSV files; it affects only files that are created subsequently.
The following example shows how to set the maximum number of CSV files per subdirectory to 1000.
>~scmscm/scripts/csvconf.sh --maxfiles=1000
The following example shows how to set the maximum number of records per CSV file to 10,000.
>~scmscm/scripts/csvconf.sh --maxlines=10000
The following example shows how to delete all files from the CSV repository:
~scmscm/scripts/csvconf.sh --clear
The following example shows how to disable backing up old CSV files in the repository:
~scmscm/scripts/csvconf.sh --backups=false
When a comma is contained within a field in a CSV file, an escape sequence indicates that the comma does not mark the end of the field.
Three escape methods are supported:
•Single quotation marks—Single quotation marks surround any field that contains one or more commas. There is no special treatment of single quotation marks already present in existing RDRs.
•URL—Each comma contained within a field is replaced by %2C. There is no special treatment of such sequences already present in existing RDRs.
•Backslash—Each comma contained within a field is preceded by a backslash (\). There is no special treatment of backslashes already present in existing RDRs.
The first two escape methods are compatible with Microsoft Excel. The backslash method is not compatible with Excel but is retained for backward compatibility.
By default, single quotation marks are used. You can change the escape method by modifying the value of the escapeMethod attribute. This attribute is located in the ~/scmscm/cm/config/csvadapter.conf file. The value must be one of: backslash, quote, or url.
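The three escape methods behave as follows on a field containing a comma. This is a sketch of the behavior described above, not the CSV adapter's actual code.

```python
def escape_field(field, method="quote"):
    """Escape commas inside a CSV field using one of the three methods.

    quote     -> surround the whole field with single quotation marks
    url       -> replace each comma with %2C
    backslash -> precede each comma with a backslash
    """
    if "," not in field:
        return field
    if method == "quote":
        return "'" + field + "'"
    if method == "url":
        return field.replace(",", "%2C")
    if method == "backslash":
        return field.replace(",", "\\,")
    raise ValueError("unknown escape method: " + method)

print(escape_field("a,b", "quote"))      # 'a,b'
print(escape_field("a,b", "url"))        # a%2Cb
print(escape_field("a,b", "backslash"))  # a\,b
```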
The CSV adapter can escape nonprintable characters. Enabling this option incurs a performance penalty for the adapter; by default, the option is disabled.
When the option is enabled, each non-printable character, such as CR and LF, contained within a field is preceded by a backslash (\).
You enable this option in the ~/scmscm/cm/config/csvadapter.conf file. Changes in this file require a CM restart to take effect.