Maintenance Troubleshooting
Revised: August 10, 2011, OL-25016-01
Introduction
This chapter provides the information needed for monitoring and troubleshooting maintenance events and alarms. It is divided into the following sections:
• Maintenance Events and Alarms—Provides a brief overview of each maintenance event and alarm
• Monitoring Maintenance Events—Provides the information needed for monitoring and correcting the maintenance events
• Troubleshooting Maintenance Alarms—Provides the information needed for troubleshooting and correcting the maintenance alarms
Maintenance Events and Alarms
This section provides a brief overview of the maintenance events and alarms for the Cisco BTS 10200 Softswitch; the events and alarms are arranged in numerical order. Table 7-1 lists all of the maintenance events and alarms by severity.
Note: Click the maintenance message number in Table 7-1 to display information about the event.
Maintenance (1)
Table 7-2 lists the details of the Maintenance (1) informational event. For additional information, refer to the "Test Report—Maintenance (1)" section.
Table 7-2 Maintenance (1) Details
Description: Test Report
Severity: Information
Threshold: 10000
Throttle: 0
Maintenance (2)
Table 7-3 lists the details of the Maintenance (2) informational event. For additional information, refer to the "Report Threshold Exceeded—Maintenance (2)" section.
Table 7-3 Maintenance (2) Details
Description: Report Threshold Exceeded
Severity: Information
Threshold: 0
Throttle: 0
Datawords: Report Type—TWO_BYTES, Report Number—TWO_BYTES, Threshold Level—TWO_BYTES
Primary Cause: Issued when the threshold for a given report type and number is exceeded.
Primary Action: No action is required, because this is an informational report. Investigate the root-cause event report and the threshold setting to determine whether there is a service-affecting situation.
Maintenance (3)
Table 7-4 lists the details of the Maintenance (3) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Local Side Has Become Faulty—Maintenance (3)" section.
Table 7-4 Maintenance (3) Details
Description: Keep Alive Module: Local Side Has Become Faulty (KAM: Local Side Has Become Faulty)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30], Reason—STRING [80], Probable Cause—STRING [80]
Primary Cause: Can result from maintenance report 5, 6, 9, 10, 19, or 20.
Primary Action: Review the information from the command line interface (CLI) log report. This is usually a software problem; restart the software using the installation and startup procedure.
Secondary Cause: The system was manually shut down using the platform stop command.
Secondary Action: Reboot the host machine, then reinstall and restart all applications. If the faulty state occurs frequently, the operating system (OS) or a hardware failure may be the cause.
Maintenance (4)
Table 7-5 lists the details of the Maintenance (4) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Mate Side Has Become Faulty—Maintenance (4)" section.
Table 7-5 Maintenance (4) Details
Description: Keep Alive Module: Mate Side Has Become Faulty (KAM: Mate Side Has Become Faulty)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30], Reason—STRING [80], Probable Cause—STRING [80], Mate Ping—STRING [50]
Primary Cause: The local side has detected the mate side going to the faulty state.
Primary Action: Display the event summary on the faulty mate side using the report event-summary command (see the Cisco BTS 10200 Softswitch CLI Database for command details).
Secondary Action: Review the information in the event summary. This is usually a software problem.
Ternary Action: After confirming that the active side is processing traffic, restart the software on the mate side. Log in to the mate platform as root, then enter the platform stop command followed by the platform start command.
Subsequent Action: If the software restart does not resolve the problem, if the platform immediately goes faulty again, or if it does not start, contact the Cisco Technical Assistance Center (TAC); it may be necessary to reinstall the software. If the problem occurs frequently, the OS or a hardware failure may be the cause. Reboot the host machine, then reinstall and restart all applications; note that rebooting brings down the other applications running on this machine. Contact Cisco TAC for assistance.
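The restart sequence in the ternary action above can be sketched as a short script. It assumes the BTS 10200 `platform` command (named in the text) is on the mate host's PATH and that you are logged in as root; on any other machine it only prints a notice.

```shell
# Sketch of the mate-side software restart described above. The `platform`
# command is the BTS 10200 CLI named in the text; on a host without it,
# this script prints a notice instead of stopping anything.
if command -v platform >/dev/null 2>&1; then
    platform stop       # bring the mate platform software down
    platform start      # restart the mate platform software
    RESTART_RESULT="platform software restarted"
else
    RESTART_RESULT="platform command not found; run this on the BTS 10200 mate host as root"
fi
echo "$RESTART_RESULT"
```

Confirm the active side is processing traffic before running this on the mate, as the subsequent-action text advises.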
Maintenance (5)
Table 7-6 lists the details of the Maintenance (5) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Changeover Failure—Maintenance (5)" section.
Table 7-6 Maintenance (5) Details
Description: Keep Alive Module: Changeover Failure (KAM: Changeover Failure)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: Issued when changing from the active processor to the standby and the changeover fails.
Primary Action: Review the information from the CLI log report.
Secondary Cause: This alarm is usually caused by a software problem on the specific platform identified in the alarm report.
Secondary Action: Restart the platform identified in the alarm report.
Ternary Action: If the platform restart is not successful, reinstall the application for this platform, and then restart the platform again.
Subsequent Action: If necessary, reboot the host machine on which this platform is located, then reinstall and restart all applications on this machine. If the faulty state occurs frequently, the OS or a hardware failure may be the cause; contact Cisco TAC for assistance. It may also be helpful to gather information from event and alarm reports issued before and after this alarm report.
Maintenance (6)
Table 7-7 lists the details of the Maintenance (6) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Changeover Timeout—Maintenance (6)" section.
Table 7-7 Maintenance (6) Details
Description: Keep Alive Module: Changeover Timeout (KAM: Changeover Timeout)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: The system failed to change over within the required time period. Soon after this event is issued, one platform goes to the faulty state.
Primary Action: Review the information from the CLI log report.
Secondary Cause: This alarm is usually caused by a software problem on the specific platform identified in the alarm report.
Secondary Action: Restart the platform identified in the alarm report.
Ternary Action: If the platform restart is not successful, reinstall the application for this platform, and then restart the platform again.
Subsequent Action: If necessary, reboot the host machine on which this platform is located, then reinstall and restart all applications on this machine. If the faulty state occurs frequently, the operating system (OS) or a hardware failure may be the cause; contact Cisco TAC for assistance. It may also be helpful to gather information from event and alarm reports issued before and after this alarm report.
Maintenance (7)
Table 7-8 lists the details of the Maintenance (7) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Mate Rejected Changeover—Maintenance (7)" section.
Table 7-8 Maintenance (7) Details
Description: Keep Alive Module: Mate Rejected Changeover (KAM: Mate Rejected Changeover)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: The mate is not yet in a stable state.
Primary Action: Enter the status command to get information on the two systems in the pair (primary and secondary Element Management System (EMS), Call Agent (CA), or Feature Server (FS)).
Secondary Cause: The mate detects that it is faulty during the changeover and therefore rejects the changeover. Note: The attempted changeover could be caused by a forced (operator) switch, or by the secondary instance rejecting the changeover while the primary instance is being brought up.
Secondary Action: If the mate is faulty (not running), perform the corrective action steps listed for the Maintenance (4) event.
Ternary Action: If both systems (local and mate) are still running, determine whether both instances are operating in a stable state (one active and the other standby). If both are stable, wait 10 minutes and try the control command again.
Subsequent Action: If the standby side is not in a stable state, bring it down and restart the software using the platform stop and platform start commands. If the software restart does not resolve the problem, or if the problem occurs frequently, contact Cisco TAC; it may be necessary to reinstall the software, and OS or hardware problems may also need to be resolved.
Maintenance (8)
Table 7-9 lists the details of the Maintenance (8) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Mate Changeover Timeout—Maintenance (8)" section.
Table 7-9 Maintenance (8) Details
Description: Keep Alive Module: Mate Changeover Timeout (KAM: Mate Changeover Timeout)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: The mate is faulty.
Primary Action: Review the information from the CLI log report concerning the faulty mate.
Secondary Action: This alarm is usually caused by a software problem on the specific mate platform identified in the alarm report.
Ternary Action: Restart the mate platform identified in the alarm report.
Subsequent Action: If the mate platform restart is not successful, reinstall the application for the mate platform, and then restart it again. If necessary, reboot the host machine on which the mate platform is located, then reinstall and restart all applications on that machine.
Maintenance (9)
Table 7-10 lists the details of the Maintenance (9) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Local Initialization Failure—Maintenance (9)" section.
Table 7-10 Maintenance (9) Details
Description: Keep Alive Module: Local Initialization Failure (KAM: Local Initialization Failure)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: The local initialization has failed.
Primary Action: When this event report is issued, the system has failed and the reinitialization process has failed.
Secondary Action: Check that the binary files are present for the unit (Call Agent, Feature Server, or Element Manager).
Ternary Action: If the files are not present, reinstall them from the installation or backup media, and then restart the failed device.
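The binary-presence check in the secondary and ternary actions can be sketched as follows. BTS_DIR is a placeholder path (the document does not give the installation directory), so substitute your unit's actual location before relying on the result.

```shell
# Sketch of the binary-file check described above. BTS_DIR is a placeholder,
# not a path taken from the document; substitute the unit's install directory.
BTS_DIR="${BTS_DIR:-/opt/BTSbin}"
if [ -d "$BTS_DIR" ] && [ -n "$(ls -A "$BTS_DIR" 2>/dev/null)" ]; then
    BIN_CHECK="binaries present under $BTS_DIR"
else
    BIN_CHECK="binaries missing: reinstall from the installation or backup media, then restart"
fi
echo "$BIN_CHECK"
```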
Maintenance (10)
Table 7-11 lists the details of the Maintenance (10) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Local Initialization Timeout—Maintenance (10)" section.
Table 7-11 Maintenance (10) Details
Description: Keep Alive Module: Local Initialization Timeout (KAM: Local Initialization Timeout)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: The local initialization has timed out.
Primary Action: Check that the binary files are present for the unit (Call Agent, Feature Server, or Element Manager).
Secondary Cause: When the event report is issued, the system has failed and the reinitialization process has failed.
Secondary Action: If the files are not present, reinstall them from the installation or backup media, and then restart the failed device.
Maintenance (11)
Table 7-12 lists the details of the Maintenance (11) informational event. For additional information, refer to the "Switchover Complete—Maintenance (11)" section.
Table 7-12 Maintenance (11) Details
Description: Switchover Complete
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: Acknowledges that the changeover has successfully completed.
Primary Action: This is an informational event report and no further action is required.
Maintenance (12)
Table 7-13 lists the details of the Maintenance (12) informational event. For additional information, refer to the "Initialization Successful—Maintenance (12)" section.
Table 7-13 Maintenance (12) Details
Description: Keep Alive Module: Initialization Successful (KAM: Initialization Successful)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Primary Cause: The local initialization has been successfully completed.
Primary Action: This is an informational event report and no further action is required.
Maintenance (13)
Table 7-14 lists the details of the Maintenance (13) informational event. For additional information, refer to the "Administrative State Change—Maintenance (13)" section.
Table 7-14 Maintenance (13) Details
Description: Administrative State Change (Admin State Change)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Facility Type—STRING [40], Facility ID—STRING [40], Initial Admin State—STRING [20], Target Admin State—STRING [20], Current Admin State—STRING [20]
Primary Cause: The administrative state of a managed resource has changed.
Primary Action: No action is required, because this informational event report is given after a user has manually changed the administrative state of a managed resource.
Maintenance (14)
Table 7-15 lists the details of the Maintenance (14) informational event. For additional information, refer to the "Call Agent Administrative State Change—Maintenance (14)" section.
Table 7-15 Maintenance (14) Details
Description: Call Agent Administrative State Change
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Call Agent ID—STRING [40], Current Local State—STRING [40], Current Mate State—STRING [20]
Primary Cause: Indicates that the call agent has changed operational state as a result of a manual switchover (control command in CLI).
Primary Action: No action is required.
Maintenance (15)
Table 7-16 lists the details of the Maintenance (15) informational event. For additional information, refer to the "Feature Server Administrative State Change—Maintenance (15)" section.
Table 7-16 Maintenance (15) Details
Description: Feature Server Administrative State Change
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Feature Server ID—STRING [40], Feature Server Type—STRING [40], Current Local State—STRING [20], Current Mate State—STRING [20]
Primary Cause: Indicates that the feature server has changed operational state as a result of a manual switchover (control command in CLI).
Primary Action: No action is required.
Maintenance (16)
Table 7-17 lists the details of the Maintenance (16) informational event. For additional information, refer to the "Process Manager: Starting Process—Maintenance (16)" section.
Table 7-17 Maintenance (16) Details
Description: Process Manager: Starting Process (PMG: Starting Process)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Restart Type—STRING [40], Restart Mode—STRING [32], Process Group—ONE_BYTE
Primary Cause: A process is being started as the system is being brought up.
Primary Action: No action is required.
Maintenance (17)
Table 7-18 lists the details of the Maintenance (17) informational event. For additional information, refer to the "Invalid Event Report Received—Maintenance (17)" section.
Table 7-18 Maintenance (17) Details
Description: Invalid Event Report Received
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Report Type—TWO_BYTES, Report Number—TWO_BYTES, Validation Failure—STRING [30]
Primary Cause: Indicates that a process has sent an event report that cannot be found in the database.
Primary Action: If a short burst of these event reports is issued during system initialization, prior to the database initialization, the reports are informational and can be ignored.
Secondary Action: Otherwise, contact Cisco TAC for more information.
Maintenance (18)
Table 7-19 lists the details of the Maintenance (18) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Process Has Died—Maintenance (18)" section.
Table 7-19 Maintenance (18) Details
Description: Process Manager: Process Has Died (PMG: Process Has Died)
Severity: Minor
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—FOUR_BYTES
Primary Cause: This alarm is caused by a software problem.
Primary Action: If the problem persists, contact Cisco TAC.
Maintenance (19)
Table 7-20 lists the details of the Maintenance (19) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Process Exceeded Restart Rate—Maintenance (19)" section.
Table 7-20 Maintenance (19) Details
Description: Process Manager: Process Exceeded Restart Rate (PMG: Process Exceeded Restart Rate)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Restart Rate—FOUR_BYTES, Process Group—ONE_BYTE
Primary Cause: This alarm is usually caused by a software problem on the specific platform identified in the alarm report. Soon after this event is issued, one platform goes to the faulty state.
Primary Action: Review the information from the CLI log report.
Secondary Action: Restart the platform identified in the alarm report.
Ternary Action: If the platform restart is not successful, reinstall the application for this platform, and then restart the platform again.
Subsequent Action: If necessary, reboot the host machine on which this platform is located, then reinstall and restart all applications on this machine.
Maintenance (20)
Table 7-21 lists the details of the Maintenance (20) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Lost Connection to Mate—Maintenance (20)" section.
Table 7-21 Maintenance (20) Details
Description: Keep Alive Module: Lost Connection to Mate (KAM: Lost KAM Connection to Mate)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Mate Ping—STRING [50]
Primary Cause: A network interface hardware problem.
Primary Action: Check whether the network interface is down. If it is down, restore the network interface and restart the software.
Secondary Cause: The alarm can be caused by a router problem.
Secondary Action: If the alarm is caused by a router problem, repair the router and reinstall it.
Ternary Cause: Soon after this event is issued, one platform may go to the faulty state.
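A first-pass connectivity check for the actions above can be sketched with standard OS tools. MATE_HOST is a placeholder address (not from the document), and the ping flags shown are the common Linux spellings; Solaris options differ.

```shell
# Sketch of a basic mate-connectivity check. MATE_HOST is a placeholder;
# ping is a standard OS tool, not a BTS-specific command.
MATE_HOST="${MATE_HOST:-192.0.2.1}"   # placeholder: substitute the mate's address
if command -v ping >/dev/null 2>&1 && ping -c 1 -W 2 "$MATE_HOST" >/dev/null 2>&1; then
    MATE_PING="mate reachable"
else
    MATE_PING="mate unreachable: check the local interface, cabling, and intervening routers"
fi
echo "$MATE_PING"
```

If the mate is unreachable, follow the primary and secondary actions above to isolate the interface or router fault before restarting software.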
Maintenance (21)
Table 7-22 lists the details of the Maintenance (21) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Network Interface Down—Maintenance (21)" section.
Table 7-22 Maintenance (21) Details
Description: Keep Alive Module: Network Interface Down (KAM: Network Interface Down)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: IP Address—STRING [50]
Primary Cause: The alarm is caused by a network interface hardware problem.
Primary Action: Check for and correct problems with the network interfaces.
Secondary Cause: Soon after this event is issued, one platform may go to the faulty state.
Secondary Action: Check whether the network interface is down. If it is, restore the network interface and restart the software.
Maintenance (22)
Table 7-23 lists the details of the Maintenance (22) informational event. For additional information, refer to the "Mate Is Alive—Maintenance (22)" section.
Table 7-23 Maintenance (22) Details
Description: Keep Alive Module: Mate Is Alive (KAM: Mate Is Alive)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Local State—STRING [30], Mate State—STRING [30]
Maintenance (23)
Table 7-24 lists the details of the Maintenance (23) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Process Failed to Complete Initialization—Maintenance (23)" section.
Table 7-24 Maintenance (23) Details
Description: Process Manager: Process Failed to Complete Initialization (PMG: Process Failed to Complete Initialization)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—ONE_BYTE
Primary Cause: The specified process failed to complete the initialization during the restoration process.
Primary Action: Verify that the specified process's binary image is installed. If it is not, install it and restart the platform.
Maintenance (24)
Table 7-25 lists the details of the Maintenance (24) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Restarting Process—Maintenance (24)" section.
Table 7-25 Maintenance (24) Details
Description: Process Manager: Restarting Process (PMG: Restarting Process)
Severity: Minor
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Restart Type—STRING [40], Restart Mode—STRING [32], Process Group—ONE_BYTE
Primary Cause: The software process has exited abnormally and had to be restarted.
Primary Action: If the problem persists, contact Cisco TAC.
Maintenance (25)
Table 7-26 lists the details of the Maintenance (25) informational event. For additional information, refer to the "Process Manager: Changing State—Maintenance (25)" section.
Table 7-26 Maintenance (25) Details
Description: Process Manager: Changing State (PMG: Changing State)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Platform State—STRING [40]
Maintenance (26)
Table 7-27 lists the details of the Maintenance (26) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Going Faulty—Maintenance (26)" section.
Table 7-27 Maintenance (26) Details
Description: Process Manager: Going Faulty (PMG: Going Faulty)
Severity: Major
Threshold: 100
Throttle: 0
Datawords: Reason—STRING [40]
Primary Cause: The system has been brought down because the system has detected a fault.
Primary Action: If the shutdown is not due to the operator intentionally bringing down the system, the platform has detected a fault and has shut down. This event is typically followed by a Maintenance (3) alarm; use the corrective action procedures provided for the Maintenance (3) alarm.
Maintenance (27)
Table 7-28 lists the details of the Maintenance (27) informational event. For additional information, refer to the "Process Manager: Changing Over to Active—Maintenance (27)" section.
Table 7-28 Maintenance (27) Details
Description: Process Manager: Changing Over to Active (PMG: Changing Over to Active)
Severity: Information
Threshold: 100
Throttle: 0
Maintenance (28)
Table 7-29 lists the details of the Maintenance (28) informational event. For additional information, refer to the "Process Manager: Changing Over to Standby—Maintenance (28)" section.
Table 7-29 Maintenance (28) Details
Description: Process Manager: Changing Over to Standby (PMG: Changing Over to Standby)
Severity: Information
Threshold: 100
Throttle: 0
Maintenance (29)
Table 7-30 lists the details of the Maintenance (29) warning event. To monitor and correct the cause of the event, refer to the "Administrative State Change Failure—Maintenance (29)" section.
Table 7-30 Maintenance (29) Details
Description: Administrative State Change Failure (Admin State Change Failure)
Severity: Warning
Threshold: 100
Throttle: 0
Datawords: Facility Type—STRING [40], Facility Instance—STRING [40], Failure Reason—STRING [40], Initial Admin State—STRING [20], Target Admin State—STRING [20], Current Admin State—STRING [20]
Primary Cause: An attempt to change the administrative state of a device has failed.
Primary Action: Monitor the system to see whether any event reports indicate a database update failure.
Secondary Action: If a cause is found, analyze it. Verify that the controlling element of the targeted device was in the active state and able to service the request to change the administrative state of the device.
Ternary Action: If the controlling platform instance is not active, restore it to service.
Maintenance (30)
Table 7-31 lists the details of the Maintenance (30) informational event. For additional information, refer to the "Element Manager State Change—Maintenance (30)" section.
Table 7-31 Maintenance (30) Details
Description: Element Manager State Change
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Element Manager ID—STRING [40], Current Local State—STRING [40], Current Mate State—STRING [40]
Primary Cause: The specified EMS has been changed to the indicated state, either naturally or through a user request.
Primary Action: No action is necessary. This is part of the normal state transitioning process for the EMS.
Secondary Action: If the transition was to a faulty or out-of-service state, monitor the system for related event reports.
Maintenance (31)
Maintenance (31) is not used.
Maintenance (32)
Table 7-32 lists the details of the Maintenance (32) informational event. For additional information, refer to the "Process Manager: Sending Go Active to Process—Maintenance (32)" section.
Table 7-32 Maintenance (32) Details
Description: Process Manager: Sending Go Active to Process (PMG: Sending Go Active to Process)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—ONE_BYTE
Primary Cause: The process is being notified to switch to the active state because the system is switching over from the standby state to the active state.
Primary Action: No action is necessary.
Maintenance (33)
Table 7-33 lists the details of the Maintenance (33) informational event. For additional information, refer to the "Process Manager: Sending Go Standby to Process—Maintenance (33)" section.
Table 7-33 Maintenance (33) Details
Description: Process Manager: Sending Go Standby to Process (PMG: Sending Go Standby to Process)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—ONE_BYTE
Primary Cause: The process is being notified to exit gracefully because the system is switching over to the standby state or is shutting down. The switchover or shutdown can occur because the operator is switching or shutting down the system, or because the system has detected a fault.
Primary Action: No action is necessary.
Maintenance (34)
Table 7-34 lists the details of the Maintenance (34) informational event. For additional information, refer to the "Process Manager: Sending End Process to Process—Maintenance (34)" section.
Table 7-34 Maintenance (34) Details
Description: Process Manager: Sending End Process to Process (PMG: Sending End Process to Process)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—ONE_BYTE
Primary Cause: The process is being notified to exit gracefully because the system is switching over to the standby state or is shutting down. The switchover or shutdown can occur because the operator is switching or shutting down the system, or because the system has detected a fault.
Primary Action: No action is necessary.
Maintenance (35)
Table 7-35 lists the details of the Maintenance (35) informational event. For additional information, refer to the "Process Manager: All Processes Completed Initialization—Maintenance (35)" section.
Table 7-35 Maintenance (35) Details
Description: Process Manager: All Processes Completed Initialization (PMG: All Processes Completed Initialization)
Severity: Information
Threshold: 100
Throttle: 0
Primary Cause: The system is being brought up, and all processes are ready to start executing.
Primary Action: No action is necessary.
Maintenance (36)
Table 7-36 lists the details of the Maintenance (36) informational event. For additional information, refer to the "Process Manager: Sending All Processes Initialization Complete to Process—Maintenance (36)" section.
Table 7-36 Maintenance (36) Details
Description: Process Manager: Sending All Processes Initialization Complete to Process (PMG: Sending All Processes Init Complete to Process)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—ONE_BYTE
Primary Cause: The system is being brought up, and all processes are being notified to start executing.
Primary Action: No action is necessary.
Maintenance (37)
Table 7-37 lists the details of the Maintenance (37) informational event. For additional information, refer to the "Process Manager: Killing Process—Maintenance (37)" section.
Table 7-37 Maintenance (37) Details
Description: Process Manager: Killing Process (PMG: Killing Process)
Severity: Information
Threshold: 100
Throttle: 0
Datawords: Process Name—STRING [40], Process Group—ONE_BYTE
Primary Cause: A software problem occurred while the system was being brought up or shut down.
Primary Action: No action is necessary.
Secondary Cause: A process did not come up when the system was brought up and had to be killed in order to restart it.
Ternary Cause: A process did not exit when asked to exit.
Maintenance (38)
Table 7-38 lists the details of the Maintenance (38) informational event. For additional information, refer to the "Process Manager: Clearing the Database—Maintenance (38)" section.
Table 7-38 Maintenance (38) Details
Description: Process Manager: Clearing the Database (PMG: Clearing the Database)
Severity: Information
Threshold: 100
Throttle: 0
Primary Cause: The system is preparing to copy data from the mate. (The system has been brought up and the mate side is running.)
Primary Action: No action is necessary.
Maintenance (39)
Table 7-39 lists the details of the Maintenance (39) informational event. For additional information, refer to the "Process Manager: Cleared the Database—Maintenance (39)" section.
Table 7-39 Maintenance (39) Details
Description: Process Manager: Cleared the Database (PMG: Cleared the Database)
Severity: Information
Threshold: 100
Throttle: 0
Primary Cause: The system is prepared to copy data from the mate. (The system has been brought up and the mate side is running.)
Primary Action: No action is necessary.
Maintenance (40)
Table 7-40 lists the details of the Maintenance (40) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Binary Does Not Exist for Process—Maintenance (40)" section.
Table 7-40 Maintenance (40) Details
Description: Process Manager: Binary Does Not Exist for Process (PMG: Binary Does Not Exist for Process)
Severity: Critical
Threshold: 100
Throttle: 0
Datawords: Program Name—STRING [30], Executable Name—STRING [100]
Primary Cause: The platform is not installed correctly.
Primary Action: Reinstall the platform.
Maintenance (41)
Table 7-41 lists the details of the Maintenance (41) warning event. To monitor and correct the cause of the event, refer to the "Administrative State Change Successful With Warning—Maintenance (41)" section.
Table 7-41 Maintenance (41) Details
Description |
Administrative State Change Successful With Warning (Admin State Change Successful with Warning) |
Severity |
Warning |
Threshold |
100 |
Throttle |
0 |
Datawords |
Facility Type—STRING [40] Facility Instance—STRING [40] Initial State—STRING [20] Target State—STRING [20] Current State—STRING [20] Warning Reason—STRING [40] |
Primary Cause |
The device was in a transitional (flux) state. |
Primary Action |
Retry the administrative state change. |
Maintenance (42)
Table 7-42 lists the details of the Maintenance (42) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Number of Heartbeat Messages Received Is Less Than 50% Of Expected—Maintenance (42)" section.
Table 7-42 Maintenance (42) Details
Description |
Keep Alive Module: Number of Heartbeat Messages Received is Less Than 50% of Expected (KAM: # of HB Messages Received is Less Than 50% of Expected) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [50] IP Address—STRING [50] Expected HB Messages—ONE_BYTE HB Messages Received—ONE_BYTE |
Primary Cause |
The alarm is caused by a network problem. |
Primary Action |
Fix the network problem. |
Maintenance (43)
Table 7-43 lists the details of the Maintenance (43) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Process Failed to Come Up In Active Mode—Maintenance (43)" section.
Table 7-43 Maintenance (43) Details
Description |
Process Manager: Process Failed to Come Up in Active Mode (PMG: Process Failed to Come Up in Active Mode) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Process Name—STRING [40] Process Group—ONE_BYTE |
Primary Cause |
The alarm is caused by a software or a configuration problem. |
Primary Action |
Restart the platform. If the problem persists, contact Cisco TAC. |
Maintenance (44)
Table 7-44 lists the details of the Maintenance (44) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Process Manager: Process Failed to Come Up In Standby Mode—Maintenance (44)" section.
Table 7-44 Maintenance (44) Details
Description |
Process Manager: Process Failed to Come Up in Standby Mode (PMG: Process Failed to Come Up in Standby Mode) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Process Name—STRING [40] Process Group—ONE_BYTE |
Primary Cause |
The alarm is caused by a software or a configuration problem. |
Primary Action |
Restart the platform. If the problem persists, contact Cisco TAC. |
Maintenance (45)
Table 7-45 lists the details of the Maintenance (45) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Application Instance State Change Failure—Maintenance (45)" section.
Table 7-45 Maintenance (45) Details
Description |
Application Instance State Change Failure |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Application Instance—STRING [20] Failure Reason—STRING [80] |
Primary Cause |
The switchover of an application instance failed because of a platform fault. |
Primary Action |
Retry the switchover; if the condition continues, contact Cisco TAC. |
Maintenance (46)
Table 7-46 lists the details of the Maintenance (46) informational event. For additional information, refer to the "Network Interface Restored—Maintenance (46)" section.
Table 7-46 Maintenance (46) Details
Description |
Network Interface Restored |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [80] Interface IP Address—STRING [80] |
Primary Cause |
The interface cable is reconnected and the interface is restored using the ifconfig up command. |
Primary Action |
No action is required. |
Maintenance (47)
Table 7-47 lists the details of the Maintenance (47) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Thread Watchdog Counter Expired for a Thread—Maintenance (47)" section.
Table 7-47 Maintenance (47) Details
Description |
Thread Watchdog Counter Expired for a Thread |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Process Name—STRING [5] Thread Type—FOUR_BYTES Thread Instance—FOUR_BYTES |
Primary Cause |
The alarm is caused by a software error. |
Primary Action |
No action is required. (The system will automatically recover or shut down.) |
Maintenance (48)
Table 7-48 lists the details of the Maintenance (48) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "Index Table Usage Exceeded Minor Usage Threshold Level—Maintenance (48)" section.
Table 7-48 Maintenance (48) Details
Description |
Index Table Usage Exceeded Minor Usage Threshold Level (IDX Table Usage Exceeded Minor Usage Threshold Level) |
Severity |
Minor |
Threshold |
100 |
Throttle |
0 |
Datawords |
Table Name—STRING [80] Size—FOUR_BYTES Used—FOUR_BYTES |
Primary Cause |
Call traffic volume is above the design limits. |
Primary Action |
Verify that the traffic is within the rated capacity. |
Secondary Cause |
A software problem requiring manufacturer analysis has occurred. |
Secondary Action |
Contact Cisco TAC. |
Maintenance (49)
Table 7-49 lists the details of the Maintenance (49) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Index Table Usage Exceeded Major Usage Threshold Level—Maintenance (49)" section.
Table 7-49 Maintenance (49) Details
Description |
Index Table Usage Exceeded Major Usage Threshold Level (IDX Table Usage Exceeded Major Usage Threshold Level) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Table Name—STRING [80] Table Size—FOUR_BYTES Used—FOUR_BYTES |
Primary Cause |
Call traffic volume is above the design limits. |
Primary Action |
Verify that the traffic is within rated capacity. |
Secondary Cause |
A software problem requiring manufacturer analysis has occurred. |
Secondary Action |
Contact Cisco TAC. |
Maintenance (50)
Table 7-50 lists the details of the Maintenance (50) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Index Table Usage Exceeded Critical Usage Threshold Level—Maintenance (50)" section.
Table 7-50 Maintenance (50) Details
Description |
Index Table Usage Exceeded Critical Usage Threshold Level (IDX Table Usage Exceeded Critical Usage Threshold Level) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Table Name—STRING [80] Table Size—FOUR_BYTES Used—FOUR_BYTES |
Primary Cause |
Call traffic volume is above the design limits. |
Primary Action |
Verify that the traffic is within rated capacity. |
Secondary Cause |
A software problem requiring manufacturer analysis has occurred. |
Secondary Action |
Contact Cisco TAC. |
Maintenance (51)
Table 7-51 lists the details of the Maintenance (51) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "A Process Exceeds 70% of Central Processing Unit Usage—Maintenance (51)" section.
Table 7-51 Maintenance (51) Details
Description |
A Process Exceeds 70% of Central Processing Unit Usage (A Process Exceeds 70% of CPU Usage) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] PID—STRING [40] Process Name—STRING [40] CPU Usage—STRING [40] |
Primary Cause |
A process has entered a state of erratic behavior. |
Primary Action |
Monitor the process and kill it if necessary. |
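The monitoring step above can be sketched as a small script. This is a hypothetical illustration, not a BTS 10200 tool: it assumes `ps -eo pid,pcpu,comm` column order, and the here-document stands in for a live `ps` capture so the example is self-contained.

```shell
#!/bin/sh
# Hypothetical sketch: flag processes whose CPU usage exceeds the 70%
# threshold that triggers Maintenance (51). Column layout assumes
# `ps -eo pid,pcpu,comm`; the sample data stands in for a live capture.

CPU_THRESHOLD=70

# In practice this would be: ps -eo pid,pcpu,comm | tail +2
sample_ps_output() {
    cat <<'EOF'
  101  3.2 inetd
  202 85.4 runaway_proc
  303 12.9 mysqld
EOF
}

flag_busy_processes() {
    # Print "PID NAME CPU%" for each process above the threshold.
    sample_ps_output | awk -v limit="$CPU_THRESHOLD" \
        '$2 > limit { printf "%s %s %s%%\n", $1, $3, $2 }'
}

flag_busy_processes    # prints: 202 runaway_proc 85.4%
```

A process flagged this way can then be examined before deciding whether to kill it.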
Maintenance (52)
Table 7-52 lists the details of the Maintenance (52) informational event. For additional information, refer to the "Central Processing Unit Usage Is Now Below the 50% Level—Maintenance (52)" section.
Table 7-52 Maintenance (52) Details
Description |
Central Processing Unit Usage Is Now Below the 50% Level (CPU Usage Is Now Below the 50% Level) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] PID—STRING [40] Process Name—STRING [40] CPU Usage—STRING [40] |
Primary Cause |
This is an informational event and no troubleshooting is necessary. |
Primary Action |
No corrective action is necessary. |
Maintenance (53)
Table 7-53 lists the details of the Maintenance (53) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "The Central Processing Unit Usage Is Over 90% Busy—Maintenance (53)" section.
Table 7-53 Maintenance (53) Details
Description |
The Central Processing Unit Usage Is Over 90% Busy (The CPU Usage Is Over 90% Busy) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] CPU Usage—STRING [40] |
Primary Cause |
The possible causes are too numerous to determine. |
Primary Action |
Try to isolate the problem. Contact Cisco TAC for assistance. |
Maintenance (54)
Table 7-54 lists the details of the Maintenance (54) informational event. For additional information, refer to the "The Central Processing Unit Has Returned to Normal Levels of Operation—Maintenance (54)" section.
Table 7-54 Maintenance (54) Details
Description |
The Central Processing Unit Has Returned to Normal Levels of Operation (The CPU Has Returned to Normal Levels of Operation) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] CPU Usage—STRING [40] |
Primary Cause |
Not applicable. |
Primary Action |
Not applicable. |
Maintenance (55)
Table 7-55 lists the details of the Maintenance (55) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "The Five Minute Load Average Is Abnormally High—Maintenance (55)" section.
Table 7-55 Maintenance (55) Details
Description |
The Five Minute Load Average Is Abnormally High |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] Load Average—STRING [40] |
Primary Cause |
Multiple processes are vying for processing time on the system, which is normal in a high traffic situation such as heavy call processing or bulk provisioning. |
Primary Action |
Monitor the system to ensure that all subsystems are performing normally. If they are, only lightening the effective load on the system will clear the situation. If they are not, identify which processes are running at abnormally high rates and provide the information to Cisco TAC. |
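The five-minute load average that this alarm reports is the second figure in `uptime` output. The sketch below shows one way to extract and compare it; the sample line stands in for a live `uptime` call, and the threshold value is an arbitrary example, not a platform setting.

```shell
#!/bin/sh
# Hypothetical sketch: pull the five-minute load average (the figure
# Maintenance (55) reports) from uptime-style output and compare it
# to a sample limit.

LOAD_LIMIT=8   # illustrative threshold only

five_min_load() {
    # uptime output ends with: "load average: 1m, 5m, 15m"
    echo "$1" | awk -F'load average: ' '{ split($2, a, ", "); print a[2] }'
}

sample=' 10:15am up 42 day(s), 3:02, 2 users, load average: 0.41, 9.63, 5.20'
load=$(five_min_load "$sample")
echo "5-minute load: $load"

# awk does the floating-point comparison; sh arithmetic is integer-only.
if echo "$load" | awk -v limit="$LOAD_LIMIT" '{ exit !($1 > limit) }'; then
    echo "abnormally high"
fi
```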
Maintenance (56)
Table 7-56 lists the details of the Maintenance (56) informational event. For additional information, refer to the "The Load Average Has Returned to Normal Levels—Maintenance (56)" section.
Table 7-56 Maintenance (56) Details
Description |
The Load Average Has Returned to Normal Levels |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] Load Average—STRING [40] |
Primary Cause |
Not applicable. |
Primary Action |
Not applicable. |
Maintenance (57)
Table 7-57 lists the details of the Maintenance (57) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Memory and Swap Are Consumed at Critical Levels—Maintenance (57)" section.
|
Note Maintenance (57) is issued by the Cisco BTS 10200 system when memory consumption is greater than 95 percent (>95%) and swap space consumption is greater than 50 percent (>50%).
|
Table 7-57 Maintenance (57) Details
Description |
Memory and Swap Are Consumed at Critical Levels |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] Memory—STRING [40] Swap—STRING [40] |
Primary Cause |
A process or multiple processes have consumed a critical amount of memory on the system and the operating system is utilizing a critical amount of the swap space for process execution. This can be a result of high call rates or bulk provisioning activity. |
Primary Action |
Monitor the system to ensure that all subsystems are performing normally. If they are, only lightening the effective load on the system will clear the situation. If they are not, identify which processes are running at abnormally high rates and provide the information to Cisco TAC. |
Maintenance (58)
Table 7-58 lists the details of the Maintenance (58) informational event. For additional information, refer to the "Memory and Swap Are Consumed at Abnormal Levels—Maintenance (58)" section.
|
Note Maintenance (58) is issued by the Cisco BTS 10200 system when memory consumption is greater than 80 percent (>80%) and swap space consumption is greater than 30 percent (>30%).
|
Table 7-58 Maintenance (58) Details
Description |
Memory and Swap Are Consumed at Abnormal Levels |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [40] Memory—STRING [40] Swap—STRING [40] |
Primary Cause |
A process or multiple processes have consumed an abnormal amount of memory on the system and the operating system is utilizing an abnormal amount of the swap space for process execution. This can be a result of high call rates or bulk provisioning activity. |
Primary Action |
Monitor the system to ensure that all subsystems are performing normally. If they are, only lightening the effective load on the system will clear the situation. If they are not, identify which processes are running at abnormally high rates and provide the information to Cisco TAC. |
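The two notes above define a pair of combined thresholds: memory above 95 percent with swap above 50 percent is critical (Maintenance 57), and memory above 80 percent with swap above 30 percent is abnormal (Maintenance 58). A minimal sketch of that classification logic, taking the percentages as plain arguments rather than reading them from a live tool such as vmstat:

```shell
#!/bin/sh
# Sketch of the threshold logic described for Maintenance (57) and (58).
# Inputs are integer percentages of memory and swap consumed.

classify_mem_swap() {
    mem_pct=$1
    swap_pct=$2
    if [ "$mem_pct" -gt 95 ] && [ "$swap_pct" -gt 50 ]; then
        echo critical          # Maintenance (57)
    elif [ "$mem_pct" -gt 80 ] && [ "$swap_pct" -gt 30 ]; then
        echo abnormal          # Maintenance (58)
    else
        echo normal
    fi
}

classify_mem_swap 97 60    # critical
classify_mem_swap 85 40    # abnormal
classify_mem_swap 50 10    # normal
```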
Maintenance (59)
Maintenance (59) is not used.
Maintenance (60)
Maintenance (60) is not used.
Maintenance (61)
Table 7-59 lists the details of the Maintenance (61) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "No Heartbeat Messages Received Through the Interface—Maintenance (61)" section.
Table 7-59 Maintenance (61) Details
Description |
No Heartbeat Messages Received Through the Interface (No HB Messages Received Through the Interface) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [20] Interface IP Address—STRING [50] |
Primary Cause |
The local network interface is down. |
Primary Action |
Restore the local network interface. |
Secondary Cause |
The mate network interface on the same subnet is faulty. |
Secondary Action |
Restore the mate network interface. |
Tertiary Cause |
Network congestion. |
Maintenance (62)
Table 7-60 lists the details of the Maintenance (62) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Link Monitor: Interface Lost Communication—Maintenance (62)" section.
Table 7-60 Maintenance (62) Details
Description |
Link Monitor: Interface Lost Communication |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [80] Interface IP Address—STRING [80] |
Primary Cause |
The interface cable is pulled out or the interface is shut down using the ifconfig down command. |
Primary Action |
Restore the network interface. |
Secondary Cause |
The interface has no connectivity to any of the machines or routers. |
Maintenance (63)
Table 7-61 lists the details of the Maintenance (63) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Outgoing Heartbeat Period Exceeded Limit—Maintenance (63)" section.
Table 7-61 Maintenance (63) Details
Description |
Outgoing Heartbeat Period Exceeded Limit (Outgoing HB Period Exceeded Limit) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Maximum HB Period (ms)—FOUR_BYTES HB Period (ms)—FOUR_BYTES |
Primary Cause |
This is caused by system performance degradation due to central processing unit (CPU) overload or excessive I/O operations. |
Primary Action |
Identify the applications that are causing the system degradation by using the HMN CLI commands. Verify whether this is a persistent situation. Contact Cisco TAC with the gathered information. |
Maintenance (64)
Table 7-62 lists the details of the Maintenance (64) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Average Outgoing Heartbeat Period Exceeds Major Alarm Limit—Maintenance (64)" section.
Table 7-62 Maintenance (64) Details
Description |
Average Outgoing Heartbeat Period Exceeds Major Alarm Limit (Average Outgoing HB Period Exceeds Maj Alarm Limit) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Maximum Avg HB Period—FOUR_BYTES Average HB Period (ms)—FOUR_BYTES |
Primary Cause |
This is caused by system performance degradation due to CPU overload or excessive I/O operations. |
Primary Action |
Identify the applications that are causing the system degradation by using the HMN CLI commands. Verify whether this is a persistent situation. Contact Cisco TAC with the gathered information. |
Maintenance (65)
Table 7-63 lists the details of the Maintenance (65) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Disk Partition Critically Consumed—Maintenance (65)" section.
Table 7-63 Maintenance (65) Details
Description |
Disk Partition Critically Consumed |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Directory—STRING [32] Device—STRING [32] Percentage Used—STRING [8] |
Primary Cause |
A process or processes are writing extraneous data to the named partition. |
Primary Action |
Perform a disk cleanup and maintenance on the offending system. |
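A first step in such a cleanup is identifying which partitions are over a usage threshold. The sketch below is illustrative only: it assumes `df -k`-style columns, and the here-document stands in for live `df` output so the example runs standalone.

```shell
#!/bin/sh
# Hypothetical sketch: list partitions whose usage exceeds a given
# percentage, as a starting point when Maintenance (65), (66), or (90)
# reports a consumed partition.

# In practice this would be live `df -k` output (filesystem, capacity, mount).
sample_df_output() {
    cat <<'EOF'
/dev/dsk/c0t0d0s0  8%   /
/dev/dsk/c0t0d0s4  96%  /var
/dev/dsk/c0t0d0s5  41%  /opt
EOF
}

partitions_over() {
    limit=$1
    # Strip the % sign, compare numerically, print "mount usage%".
    sample_df_output | awk -v limit="$limit" \
        '{ pct = $2; sub(/%/, "", pct); if (pct + 0 > limit) print $3, $2 }'
}

partitions_over 90    # prints: /var 96%
```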
Maintenance (66)
Table 7-64 lists the details of the Maintenance (66) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Disk Partition Significantly Consumed—Maintenance (66)" section.
Table 7-64 Maintenance (66) Details
Description |
Disk Partition Significantly Consumed |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Directory—STRING [32] Device—STRING [32] Percentage Used—STRING [8] |
Primary Cause |
A process or processes are writing extraneous data to the named partition. |
Primary Action |
Perform a disk cleanup and maintenance on the offending system. |
Maintenance (67)
Table 7-65 lists the details of the Maintenance (67) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "The Free Inter-Process Communication Pool Buffers Below Minor Threshold—Maintenance (67)" section.
Table 7-65 Maintenance (67) Details
Description |
The Free Inter-Process Communication Pool Buffers Below Minor Threshold (The Free IPC Pool Buffers Below Minor Threshold) |
Severity |
Minor |
Threshold |
100 |
Throttle |
0 |
Datawords |
Free IPC Pool Buffer—STRING [10] Threshold—STRING [10] |
Primary Cause |
The IPC pool buffers are not being properly freed by the application or the application is not able to keep up with the incoming IPC messaging traffic. |
Primary Action |
Contact Cisco TAC immediately. |
Maintenance (68)
Table 7-66 lists the details of the Maintenance (68) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "The Free Inter-Process Communication Pool Buffers Below Major Threshold—Maintenance (68)" section.
Table 7-66 Maintenance (68) Details
Description |
The Free Inter-Process Communication Pool Buffers Below Major Threshold (The Free IPC Pool Buffers Below Major Threshold) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Free IPC Pool Buffer—STRING [10] Threshold—STRING [10] |
Primary Cause |
Inter-process communication (IPC) pool buffers are not being properly freed by the application or the application is not able to keep up with the incoming IPC messaging traffic. |
Primary Action |
Contact Cisco TAC immediately. |
Maintenance (69)
Table 7-67 lists the details of the Maintenance (69) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "The Free Inter-Process Communication Pool Buffers Below Critical Threshold—Maintenance (69)" section.
Table 7-67 Maintenance (69) Details
Description |
The Free Inter-Process Communication Pool Buffers Below Critical Threshold (The Free IPC Pool Buffers Below Critical Threshold) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Free IPC Pool Buffer—STRING [10] Threshold—STRING [10] |
Primary Cause |
The IPC pool buffers are not being properly freed by the application or the application is not able to keep up with the incoming IPC messaging traffic. |
Primary Action |
Contact Cisco TAC immediately. |
Maintenance (70)
Table 7-68 lists the details of the Maintenance (70) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "The Free Inter-Process Communication Pool Buffer Count Below Minimum Required—Maintenance (70)" section.
Table 7-68 Maintenance (70) Details
Description |
The Free Inter-Process Communication Pool Buffer Count Below Minimum Required (The Free IPC Pool Buffer Count Below Minimum Required) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Free IPC Buffer Count—TWO_BYTES Minimum Count—TWO_BYTES |
Primary Cause |
The IPC pool buffers are not being properly freed by the application or the application is not able to keep up with the incoming IPC messaging traffic. |
Primary Action |
Contact Cisco TAC immediately. |
Maintenance (71)
Table 7-69 lists the details of the Maintenance (71) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Local Domain Name System Server Response Too Slow—Maintenance (71)" section.
Table 7-69 Maintenance (71) Details
Description |
Local Domain Name System Server Response Too Slow (Local DNS Server Response Too Slow) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
DNS Server IP—STRING [64] |
Primary Cause |
The local domain name system (DNS) server is too busy. |
Primary Action |
Check the local DNS server. |
Maintenance (72)
Table 7-70 lists the details of the Maintenance (72) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "External Domain Name System Server Response Too Slow—Maintenance (72)" section.
Table 7-70 Maintenance (72) Details
Description |
External Domain Name System Server Response Too Slow (External DNS Server Response Too Slow) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
DNS Server IP—STRING [64] |
Primary Cause |
The network traffic level is high or the name server is very busy. |
Primary Action |
Check the DNS server(s). |
Secondary Cause |
A daemon called monitorDNS.sh checks the DNS server approximately every minute. It issues an alarm if it cannot contact the DNS server or if the response is slow, and clears the alarm once it can contact the DNS server again. |
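The raise-once, clear-once behavior described for the monitoring daemon can be sketched as follows. This is not the actual monitorDNS.sh source: `probe_dns` is a stub standing in for a real lookup (for example, nslookup or dig against the configured server), taking a simulated result as its argument so the example is deterministic.

```shell
#!/bin/sh
# Sketch of the raise/clear behavior attributed to monitorDNS.sh:
# raise an alarm when a DNS probe fails, clear it on the next success,
# and stay quiet while the state is unchanged.

ALARM_STATE=clear

probe_dns() {
    # Stub: $1 simulates the probe result (ok | slow | fail).
    [ "$1" = ok ]
}

poll_once() {
    if probe_dns "$1"; then
        [ "$ALARM_STATE" = raised ] && echo "clearing alarm"
        ALARM_STATE=clear
    else
        [ "$ALARM_STATE" = clear ] && echo "raising alarm"
        ALARM_STATE=raised
    fi
}

poll_once fail   # prints "raising alarm"
poll_once fail   # already raised, prints nothing
poll_once ok     # prints "clearing alarm"
```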
Maintenance (73)
Table 7-71 lists the details of the Maintenance (73) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "External Domain Name System Server Not Responsive—Maintenance (73)" section.
Table 7-71 Maintenance (73) Details
Description |
External Domain Name System Server not Responsive (External DNS Server not Responsive) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
DNS Server IP—STRING [64] |
Primary Cause |
The DNS servers or the network may be down. |
Primary Action |
Check the DNS server(s). |
Secondary Cause |
A daemon called monitorDNS.sh checks the DNS server approximately every minute. It issues an alarm if it cannot contact the DNS server or if the response is slow, and clears the alarm once it can contact the DNS server again. |
Maintenance (74)
Table 7-72 lists the details of the Maintenance (74) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Local Domain Name System Service Not Responsive—Maintenance (74)" section.
Table 7-72 Maintenance (74) Details
Description |
Local Domain Name System Service not Responsive (Local DNS Service not Responsive) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
DNS Server IP—STRING [64] Reason—STRING [64] |
Primary Cause |
The local DNS service may be down. |
Primary Action |
Check the local DNS server. |
Maintenance (75)
Table 7-73 lists the details of the Maintenance (75) warning event. To monitor and correct the cause of the event, refer to the "Mismatch of Internet Protocol Address Local Server and Domain Name System—Maintenance (75)" section.
Table 7-73 Maintenance (75) Details
Description |
Mismatch of Internet Protocol Address Local Server and Domain Name System (Mismatch of IP Addr Local Server and DNS) |
Severity |
Warning |
Threshold |
100 |
Throttle |
0 |
Datawords |
Host Name—STRING [64] IP Addr Local Server—STRING [64] IP Addr DNS Server—STRING [64] |
Primary Cause |
The DNS updates are not getting to the Cisco BTS 10200 from the external server or a discrepancy was detected before the local DNS lookup table was updated. |
Primary Action |
Ensure that the external DNS server is operational and is sending updates to the Cisco BTS 10200. |
Maintenance (76)
Maintenance (76) is not used.
Maintenance (77)
Table 7-74 lists the details of the Maintenance (77) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Mate Time Differs Beyond Tolerance—Maintenance (77)" section.
Table 7-74 Maintenance (77) Details
Description |
Mate Time Differs Beyond Tolerance |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Max Time Difference—FOUR_BYTES Actual Time Difference—FOUR_BYTES |
Primary Cause |
Time synchronization is not working. |
Primary Action |
Change the UNIX time on the faulty or standby side. If changing the time on the standby side, stop the platform first. |
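The tolerance check behind this alarm can be sketched as a comparison of two epoch timestamps against a maximum allowed difference. The tolerance value below is an arbitrary example, not the platform's actual setting, and the timestamps are passed as arguments rather than read from the local and mate clocks.

```shell
#!/bin/sh
# Sketch of the check behind Maintenance (77): flag when local and mate
# clocks (as epoch seconds) differ by more than an allowed tolerance.

MAX_DIFF=10   # seconds; illustrative only

time_diff_ok() {
    local_ts=$1
    mate_ts=$2
    diff=$((local_ts - mate_ts))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    [ "$diff" -le "$MAX_DIFF" ]
}

if time_diff_ok 1000000050 1000000020; then
    echo "within tolerance"
else
    echo "beyond tolerance"   # a 30 s skew exceeds the 10 s example limit
fi
```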
Maintenance (78)
Table 7-75 lists the details of the Maintenance (78) informational event. For additional information, refer to the "Bulk Data Management System Admin State Change—Maintenance (78)" section.
Table 7-75 Maintenance (78) Details
Description |
Bulk Data Management System Admin State Change (BDMS Admin State Change) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Application Instance—STRING [40] Local State—STRING [40] Mate State—STRING [40] |
Primary Cause |
The Bulk Data Management Server (BDMS) was switched over manually. |
Primary Action |
None |
Maintenance (79)
Table 7-76 lists the details of the Maintenance (79) informational event. For additional information, refer to the "Resource Reset—Maintenance (79)" section.
Table 7-76 Maintenance (79) Details
Description |
Resource Reset |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Resource Type—STRING [40] Resource Instance—STRING [40] |
Primary Cause |
A reset of one of the following resource types: trunk termination, subscriber termination, or media gateway. |
Primary Action |
None |
Maintenance (80)
Table 7-77 lists the details of the Maintenance (80) informational event. For additional information, refer to the "Resource Reset Warning—Maintenance (80)" section.
Table 7-77 Maintenance (80) Details
Description |
Resource Reset Warning |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Resource Type—STRING [40] Resource Instance—STRING [40] Warning Reason—STRING [120] |
Primary Cause |
A reset warning for one of the following resource types: trunk termination, subscriber termination, or media gateway. |
Primary Action |
None |
Maintenance (81)
Table 7-78 lists the details of the Maintenance (81) informational event. For additional information, refer to the "Resource Reset Failure—Maintenance (81)" section.
Table 7-78 Maintenance (81) Details
Description |
Resource Reset Failure |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Resource Type—STRING [40] Resource Instance—STRING [40] Failure Reason—STRING [120] |
Primary Cause |
The informational event is the result of an internal messaging error. |
Primary Action |
Check Dataword 3 (Failure Reason) to determine if the event was caused by invalid user input, inconsistent provisioning of the device, or if the system is busy and a timeout occurred. |
Maintenance (82)
Table 7-79 lists the details of the Maintenance (82) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Average Outgoing Heartbeat Period Exceeds Critical Limit—Maintenance (82)" section.
Table 7-79 Maintenance (82) Details
Description |
Average Outgoing Heartbeat Period Exceeds Critical Limit (Average Outgoing HB Period Exceeds Critical Limit) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Critical Threshold F—FOUR_BYTES Current Average HB Peri—FOUR_BYTES |
Primary Cause |
The CPU is overloaded. |
Primary Action |
Shut down the platform. |
Maintenance (83)
Table 7-80 lists the details of the Maintenance (83) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "Swap Space Below Minor Threshold—Maintenance (83)" section.
Table 7-80 Maintenance (83) Details
Description |
Swap Space Below Minor Threshold |
Severity |
Minor |
Threshold |
5 |
Throttle |
0 |
Datawords |
Minor Threshold (MB)—FOUR_BYTES Current Value (MB)—FOUR_BYTES |
Primary Cause |
Too many processes are running. |
Primary Action |
Stop the proliferation of executables (process scripts). |
Secondary Cause |
File spaces /tmp or /var/run are over-used. |
Secondary Action |
Clean up the file systems. |
Maintenance (84)
Table 7-81 lists the details of the Maintenance (84) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Swap Space Below Major Threshold—Maintenance (84)" section.
Table 7-81 Maintenance (84) Details
Description |
Swap Space Below Major Threshold |
Severity |
Major |
Threshold |
5 |
Throttle |
0 |
Datawords |
Major Threshold (MB)—FOUR_BYTES Current Value (MB)—FOUR_BYTES |
Primary Cause |
Too many processes are running. |
Primary Action |
Stop the proliferation of executables (process and shell procedures). |
Secondary Cause |
File spaces /tmp or /var/run are over-used. |
Secondary Action |
Clean up the file systems. |
Maintenance (85)
Table 7-82 lists the details of the Maintenance (85) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Swap Space Below Critical Threshold—Maintenance (85)" section.
Table 7-82 Maintenance (85) Details
Description |
Swap Space Below Critical Threshold |
Severity |
Critical |
Threshold |
5 |
Throttle |
0 |
Datawords |
Critical Threshold (MB)—FOUR_BYTES Current Value (MB)—FOUR_BYTES |
Primary Cause |
Too many processes are running. |
Primary Action |
Restart the Cisco BTS 10200 software or reboot the system. |
Secondary Cause |
File spaces /tmp or /var/run are over-used. |
Secondary Action |
Clean up the file systems. |
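Maintenance (83), (84), and (85) together form a three-tier check on free swap space. A minimal sketch of that classification, with placeholder MB thresholds (the real values are platform-configured) and the free-swap figure taken as an argument rather than read from `swap -s`:

```shell
#!/bin/sh
# Sketch of the three-tier swap-space check behind Maintenance (83),
# (84), and (85). Threshold values are illustrative placeholders.

MINOR_MB=500
MAJOR_MB=250
CRITICAL_MB=100

classify_swap() {
    free_mb=$1
    if [ "$free_mb" -lt "$CRITICAL_MB" ]; then
        echo critical    # Maintenance (85)
    elif [ "$free_mb" -lt "$MAJOR_MB" ]; then
        echo major       # Maintenance (84)
    elif [ "$free_mb" -lt "$MINOR_MB" ]; then
        echo minor       # Maintenance (83)
    else
        echo ok
    fi
}

classify_swap 80     # critical
classify_swap 300    # minor
```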
Maintenance (86)
Table 7-83 lists the details of the Maintenance (86) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "System Health Report Collection Error—Maintenance (86)" section.
Table 7-83 Maintenance (86) Details
Description |
System Health Report Collection Error |
Severity |
Minor |
Threshold |
100 |
Throttle |
0 |
Datawords |
ErrString—STRING [64] |
Primary Cause |
An error occurred during collection of system health report data. |
Primary Action |
Contact Cisco TAC for support. |
Maintenance (87)
Table 7-84 lists the details of the Maintenance (87) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Status Update Process Request Failed—Maintenance (87)" section.
Table 7-84 Maintenance (87) Details
Description |
Status Update Process Request Failed |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
ErrString—STRING [64] Component Type—STRING [64] |
Primary Cause |
The status command is not working properly. |
Primary Action |
Use CLI to verify that the status command is working properly. |
Maintenance (88)
Table 7-85 lists the details of the Maintenance (88) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Status Update Process Database List Retrieval Error—Maintenance (88)" section.
Table 7-85 Maintenance (88) Details
Description |
Status Update Process Database List Retrieval Error (Status Update Process DB List Retrieval Error) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
ErrString—STRING [64] |
Primary Cause |
The Oracle database (DB) is not working properly. |
Primary Action |
Contact Cisco TAC for support. |
Maintenance (89)
Table 7-86 lists the details of the Maintenance (89) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Status Update Process Database Update Error—Maintenance (89)" section.
Table 7-86 Maintenance (89) Details
Description |
Status Update Process Database Update Error (Status Update Process DB Update Error) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
ErrString—STRING [64] SQL Command—STRING [64] |
Primary Cause |
The MySQL DB on the EMS is not working properly. |
Primary Action |
Contact Cisco TAC for support. |
Maintenance (90)
Table 7-87 lists the details of the Maintenance (90) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "Disk Partition Moderately Consumed—Maintenance (90)" section.
Table 7-87 Maintenance (90) Details
Description |
Disk Partition Moderately Consumed |
Severity |
Minor |
Threshold |
100 |
Throttle |
0 |
Datawords |
Directory—STRING [32] Device—STRING [32] Percentage Used—STRING [8] |
Primary Cause |
One or more processes are writing extraneous data to the named partition. |
Primary Action |
Perform disk clean-up and maintenance on the offending system. |
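As a first step before cleanup, an operator can confirm which partitions are approaching the moderate-consumption level. The sketch below uses the Python standard library; the 70% threshold is an assumed illustrative value, since the actual percentage that raises Maintenance (90) is configured on the platform.

```python
import shutil

# Hypothetical threshold: the percentage that raises Maintenance (90) is
# platform-configured; 70% here is an assumed illustrative value.
MODERATE_THRESHOLD_PCT = 70

def pct_used(used, total):
    """Percentage of a partition currently in use."""
    return 100 * used / total

def partitions_over_threshold(mounts, threshold=MODERATE_THRESHOLD_PCT):
    """Return (mount, percent-used) pairs at or above the threshold."""
    flagged = []
    for mount in mounts:
        usage = shutil.disk_usage(mount)
        pct = pct_used(usage.used, usage.total)
        if pct >= threshold:
            flagged.append((mount, round(pct, 1)))
    return flagged
```

Run this against the partition named in the alarm's Directory dataword to confirm the reported percentage before performing cleanup.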
Maintenance (91)
Table 7-88 lists the details of the Maintenance (91) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Internet Protocol Manager Configuration File Error—Maintenance (91)" section.
Table 7-88 Maintenance (91) Details
Description |
Internet Protocol Manager Configuration File Error (IPM Config File Error) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Reason—STRING [128] |
Primary Cause |
The Internet Protocol Manager (IPM) has a configuration file error. |
Primary Action |
Check the IPM configuration file (ipm.cfg) for incorrect syntax. |
Maintenance (92)
Table 7-89 lists the details of the Maintenance (92) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Internet Protocol Manager Initialization Error—Maintenance (92)" section.
Table 7-89 Maintenance (92) Details
Description |
Internet Protocol Manager Initialization Error (IPM Initialization Error) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Reason—STRING [128] |
Primary Cause |
The IPM failed to initialize correctly. |
Primary Action |
Check the Reason dataword to determine the cause of the error. |
Maintenance (93)
Table 7-90 lists the details of the Maintenance (93) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Internet Protocol Manager Interface Failure—Maintenance (93)" section.
Table 7-90 Maintenance (93) Details
Description |
Internet Protocol Manager Interface Failure (IPM Interface Failure) |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [32] Reason—STRING [128] |
Primary Cause |
The IPM failed to create a logical interface. |
Primary Action |
Check the Reason dataword to determine the cause of the error. |
Maintenance (94)
Table 7-91 lists the details of the Maintenance (94) informational event. For additional information, refer to the "Internet Protocol Manager Interface State Change—Maintenance (94)" section.
Table 7-91 Maintenance (94) Details
Description |
Internet Protocol Manager Interface State Change (IPM Interface State Change) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [32] State—STRING [16] |
Primary Cause |
The IPM changed state on an interface (up/down). |
Primary Action |
None |
Maintenance (95)
Table 7-92 lists the details of the Maintenance (95) informational event. For additional information, refer to the "Internet Protocol Manager Interface Created—Maintenance (95)" section.
Table 7-92 Maintenance (95) Details
Description |
Internet Protocol Manager Interface Created (IPM Interface Created) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Hostname—STRING [128] Physical IF Name—STRING [32] Logical IF Name—STRING [32] IP Addr—STRING [32] Netmask—STRING [32] Broadcast Addr—STRING [32] |
Primary Cause |
The IPM created a new logical interface. |
Primary Action |
None |
Maintenance (96)
Table 7-93 lists the details of the Maintenance (96) informational event. For additional information, refer to the "Internet Protocol Manager Interface Removed—Maintenance (96)" section.
Table 7-93 Maintenance (96) Details
Description |
Internet Protocol Manager Interface Removed (IPM Interface Removed) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Hostname—STRING [128] Logical IF Name—STRING [32] IP Addr—STRING [32] |
Primary Cause |
The IPM removed a logical interface. |
Primary Action |
None |
Maintenance (97)
Table 7-94 lists the details of the Maintenance (97) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Inter-Process Communication Input Queue Entered Throttle State—Maintenance (97)" section.
Table 7-94 Maintenance (97) Details
Description |
Inter-Process Communication Input Queue Entered Throttle State (IPC Input Queue Entered Throttle State) |
Severity |
Critical |
Threshold |
500 |
Throttle |
0 |
Datawords |
Process Name—STRING [10] Thread Type—TWO_BYTES Thread Instance—TWO_BYTES Hi Watermark—FOUR_BYTES Lo Watermark—FOUR_BYTES |
Primary Cause |
The indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is using up too much of the IPC memory pool resource. |
Primary Action |
Contact Cisco TAC. |
Maintenance (98)
Table 7-95 lists the details of the Maintenance (98) minor alarm. To troubleshoot and correct the cause of the alarm, refer to the "Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark—Maintenance (98)" section.
Table 7-95 Maintenance (98) Details
Description |
Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark (IPC Input Queue Depth at 25% of Its Hi-Watermark) |
Severity |
Minor |
Threshold |
500 |
Throttle |
0 |
Datawords |
Process Name—STRING [10] Thread Type—TWO_BYTES Thread Instance—TWO_BYTES Hi Watermark—FOUR_BYTES Lo Watermark—FOUR_BYTES |
Primary Cause |
The indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is at 25% of the level at which it will enter the throttle state. |
Primary Action |
Contact Cisco TAC. |
Maintenance (99)
Table 7-96 lists the details of the Maintenance (99) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark—Maintenance (99)" section.
Table 7-96 Maintenance (99) Details
Description |
Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark (IPC Input Queue Depth at 50% of Its Hi-Watermark) |
Severity |
Major |
Threshold |
500 |
Throttle |
0 |
Datawords |
Process Name—STRING [10] Thread Type—TWO_BYTES Thread Instance—TWO_BYTES Hi Watermark—FOUR_BYTES Lo Watermark—FOUR_BYTES |
Primary Cause |
The indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is at 50% of the level at which it will enter the throttle state. |
Primary Action |
Contact Cisco TAC. |
Maintenance (100)
Table 7-97 lists the details of the Maintenance (100) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark—Maintenance (100)" section.
Table 7-97 Maintenance (100) Details
Description |
Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark (IPC Input Queue Depth at 75% of Its Hi-Watermark) |
Severity |
Critical |
Threshold |
500 |
Throttle |
0 |
Datawords |
Process Name—STRING [10] Thread Type—TWO_BYTES Thread Instance—TWO_BYTES Hi Watermark—FOUR_BYTES Lo Watermark—FOUR_BYTES |
Primary Cause |
The indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is at 75% of the level at which it will enter the throttle state. |
Primary Action |
Contact Cisco TAC. |
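Maintenance (98), (99), (100), and (97) form an escalation ladder against the same hi-watermark: 25% raises a minor alarm, 50% major, 75% critical, and reaching the hi-watermark itself enters the throttle state. The numeric mapping below follows the alarm text; the function name is illustrative.

```python
def ipc_queue_severity(depth, hi_watermark):
    """Map an IPC input-queue depth to the alarm it would raise.

    Mirrors the escalation described for Maintenance (98)-(100) and (97):
    25% of the hi-watermark is minor, 50% is major, 75% is critical, and
    reaching the hi-watermark enters the throttle state (also critical).
    """
    ratio = depth / hi_watermark
    if ratio >= 1.0:
        return "Maintenance (97): throttle state (critical)"
    if ratio >= 0.75:
        return "Maintenance (100): 75% of hi-watermark (critical)"
    if ratio >= 0.50:
        return "Maintenance (99): 50% of hi-watermark (major)"
    if ratio >= 0.25:
        return "Maintenance (98): 25% of hi-watermark (minor)"
    return None  # below the first reporting level
```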
Maintenance (101)
Table 7-98 lists the details of the Maintenance (101) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Switchover in Progress—Maintenance (101)" section.
Table 7-98 Maintenance (101) Details
Description |
Switchover in Progress |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Local State—STRING [15] Mate State—STRING [15] Reason—STRING [30] |
Primary Cause |
This alarm is issued when a system switchover occurs due to a manual switchover (through a CLI command), a failover, or an automatic switchover. |
Primary Action |
No action needs to be taken; the alarm is cleared when the switchover is complete. Service is briefly suspended during this transition. |
Maintenance (102)
Table 7-99 lists the details of the Maintenance (102) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Thread Watchdog Counter Close to Expiry for a Thread—Maintenance (102)" section.
Table 7-99 Maintenance (102) Details
Description |
Thread Watchdog Counter Close to Expiry for a Thread |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Process Name—STRING [5] Thread Type—FOUR_BYTES Thread Instance—FOUR_BYTES |
Primary Cause |
A software error has occurred. |
Primary Action |
None; the system automatically recovers or shuts down. |
Maintenance (103)
Table 7-100 lists the details of the Maintenance (103) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Central Processing Unit Is Offline—Maintenance (103)" section.
Table 7-100 Maintenance (103) Details
Description |
Central Processing Unit Is Offline (CPU Is Offline) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Hostname—STRING [20] CPU—ONE_BYTE |
Primary Cause |
An operator action has caused the CPU to go offline. |
Primary Action |
Restore the CPU or contact Cisco TAC. |
Maintenance (104)
Table 7-101 lists the details of the Maintenance (104) informational event. For additional information, refer to the "Aggregation Device Address Successfully Resolved—Maintenance (104)" section.
Table 7-101 Maintenance (104) Details
Description |
Aggregation Device Address Successfully Resolved |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
MGW IP Address—STRING [17] MGW ID—STRING [33] AGGR ID—STRING [33] Network Address—STRING [17] Subnet Mask—ONE_BYTE |
Primary Cause |
The event is informational. |
Primary Action |
No action needs to be taken. |
Maintenance (105)
Maintenance (105) is not used.
Maintenance (106)
Maintenance (106) is not used.
Maintenance (107)
Table 7-102 lists the details of the Maintenance (107) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "No Heartbeat Messages Received Through Interface From Router—Maintenance (107)" section.
Table 7-102 Maintenance (107) Details
Description |
No Heartbeat Messages Received Through Interface From Router (No HB Messages Received Through Interface From Router) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Interface Name—STRING [20] Critical Local IP Address—STRING [50] Router IP Address—STRING [50] |
Primary Cause |
The router is down. |
Primary Action |
Restore the router functionality. |
Secondary Cause |
Connection to the router is down. |
Secondary Action |
Restore the connection. |
Ternary Cause |
Network congestion. |
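Heartbeat loss of this kind is a timeout condition: the alarm fires when no heartbeat messages arrive from the router within the expected window. The sketch below shows the general detection pattern; the 15-second timeout and class name are illustrative assumptions, since the BTS 10200's actual heartbeat interval and loss threshold are internal to the platform.

```python
import time

# Assumed value for illustration only; the platform's real heartbeat
# loss threshold is not documented here.
HEARTBEAT_TIMEOUT_SECS = 15.0

class HeartbeatMonitor:
    """Track the last heartbeat seen from a router and flag silence."""

    def __init__(self, timeout=HEARTBEAT_TIMEOUT_SECS, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_seen = self.clock()  # treat startup as a heartbeat

    def heartbeat(self):
        """Record receipt of a heartbeat message."""
        self.last_seen = self.clock()

    def is_silent(self):
        """True when the router has been quiet longer than the timeout."""
        return self.clock() - self.last_seen > self.timeout
```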
Maintenance (108)
Table 7-103 lists the details of the Maintenance (108) warning event. To monitor and correct the cause of the event, refer to the "A Log File Cannot Be Transferred—Maintenance (108)" section.
Table 7-103 Maintenance (108) Details
Description |
A Log File Cannot Be Transferred |
Severity |
Warning |
Threshold |
5 |
Throttle |
0 |
Datawords |
Name of the File With Full Path—STRING [100] External Archive System—STRING [50] |
Primary Cause |
A problem occurred in accessing the external archive system. |
Primary Action |
Check the external archive system. |
Secondary Cause |
The network to the external archive system is down. |
Secondary Action |
Check the network. |
Ternary Cause |
The source log file is not present. |
Ternary Action |
Check that the source log file is present. |
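Maintenance (108) warns on each failed transfer, and Maintenance (109) escalates to a major alarm after five successive failures. The relationship between the two can be sketched as a transfer loop that tracks consecutive failures; `transfer_one` is a hypothetical caller-supplied callable (for example, an SFTP put), not a BTS 10200 API.

```python
CONSECUTIVE_FAILURE_LIMIT = 5  # matches the Maintenance (109) escalation

def transfer_logs(files, transfer_one):
    """Attempt each transfer; return the alarms the attempts would raise.

    transfer_one(path) is an assumed callable that returns True on a
    successful transfer to the external archive system.
    """
    alarms = []
    consecutive_failures = 0
    for path in files:
        if transfer_one(path):
            consecutive_failures = 0  # a success resets the streak
        else:
            consecutive_failures += 1
            alarms.append(("Maintenance (108)", path))
            if consecutive_failures == CONSECUTIVE_FAILURE_LIMIT:
                alarms.append(("Maintenance (109)", path))
    return alarms
```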
Maintenance (109)
Table 7-104 lists the details of the Maintenance (109) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Five Successive Log Files Cannot Be Transferred—Maintenance (109)" section.
Table 7-104 Maintenance (109) Details
Description |
Five Successive Log Files Cannot Be Transferred |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
External Archive Systems—STRING [100] |
Primary Cause |
A problem occurred in accessing the external archive system. |
Primary Action |
Check the external archive system. |
Secondary Cause |
The network to the external archive system is down. |
Secondary Action |
Check the network. |
Maintenance (110)
Table 7-105 lists the details of the Maintenance (110) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Access To Log Archive Facility Configuration File Failed or File Corrupted—Maintenance (110)" section.
Table 7-105 Maintenance (110) Details
Description |
Access to Log Archive Facility Configuration File Failed or File Corrupted (Access to LAF Configuration File Failed or File Corrupted) |
Severity |
Major |
Threshold |
10 |
Throttle |
0 |
Datawords |
Full Path of LAF Configuration F—STRING [50] |
Primary Cause |
The LAF file is corrupted. |
Primary Action |
Check the log archive facility (LAF) configuration file. |
Secondary Cause |
The file is missing. |
Secondary Action |
Check that the LAF configuration file is present. |
Maintenance (111)
Table 7-106 lists the details of the Maintenance (111) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Cannot Log In to External Archive Server—Maintenance (111)" section.
Table 7-106 Maintenance (111) Details
Description |
Cannot Log In to External Archive Server |
Severity |
Critical |
Threshold |
10 |
Throttle |
0 |
Datawords |
External Archive Server—STRING [50] Username—STRING [50] |
Primary Cause |
No authorization access is set up in the external archive server for the user from the Cisco BTS 10200. |
Primary Action |
Set up the authorization. |
Secondary Cause |
The external archive server is down. |
Secondary Action |
Ping the external archive server and try to bring it up. |
Ternary Cause |
The network is down. |
Ternary Action |
Check the network. |
Maintenance (112)
Table 7-107 lists the details of the Maintenance (112) major alarm. To troubleshoot and correct the cause of the alarm, refer to the "Congestion Status—Maintenance (112)" section.
Table 7-107 Maintenance (112) Details
Description |
Congestion Status |
Severity |
Major |
Threshold |
100 |
Throttle |
0 |
Datawords |
System MCL Level—ONE_BYTE |
Primary Cause |
A change has occurred in the system overload level. |
Primary Action |
If the reported level remains continuously high, adjust the system load or configuration. |
Maintenance (113)
Table 7-108 lists the details of the Maintenance (113) informational event. For additional information, refer to the "Central Processing Unit Load of Critical Processes—Maintenance (113)" section.
Table 7-108 Maintenance (113) Details
Description |
Central Processing Unit Load of Critical Processes (CPU Load of Critical Processes) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Factor Level—ONE_BYTE Factor MCL—ONE_BYTE |
Primary Cause |
A change (increase/decrease) has occurred in the call processing load. |
Primary Action |
If the level remains continuously high, change the configuration or redistribute the call load. |
Maintenance (114)
Table 7-109 lists the details of the Maintenance (114) informational event. For additional information, refer to the "Queue Length of Critical Processes—Maintenance (114)" section.
Table 7-109 Maintenance (114) Details
Description |
Queue Length of Critical Processes |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Process Name—STRING [5] Factor Level—ONE_BYTE Factor MCL—ONE_BYTE |
Primary Cause |
A change has occurred in the queue length of the critical processes. |
Primary Action |
If the reported level remains continuously high, adjust the system load or configuration. |
Maintenance (115)
Table 7-110 lists the details of the Maintenance (115) informational event. For additional information, refer to the "Inter-Process Communication Buffer Usage Level—Maintenance (115)" section.
Table 7-110 Maintenance (115) Details
Description |
Inter-Process Communication Buffer Usage Level (IPC Buffer Usage Level) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Factor Level—ONE_BYTE Factor MCL—ONE_BYTE |
Primary Cause |
A change has occurred in the IPC buffer usage. |
Primary Action |
If the reported level remains continuously high, adjust the system load or configuration. |
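Maintenance (113), (114), and (115) each report a Factor Level and Factor MCL for one contributing factor (CPU load, queue length, IPC buffer usage), while Maintenance (112) reports a single System MCL Level. One plausible reading, stated here purely as an assumption rather than documented platform behavior, is that the system level tracks the worst contributing factor:

```python
def system_mcl(factor_mcls):
    """Combine per-factor machine congestion levels (MCLs).

    Assumption for illustration only: the overall level reported by
    Maintenance (112) is taken as the maximum of the contributing
    factors reported by Maintenance (113)-(115).
    """
    return max(factor_mcls, default=0)
```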
Maintenance (116)
Table 7-111 lists the details of the Maintenance (116) informational event. For additional information, refer to the "Call Agent Reports the Congestion Level of Feature Server—Maintenance (116)" section.
Table 7-111 Maintenance (116) Details
Description |
Call Agent Reports the Congestion Level of the Feature Server (CA Reports the Congestion Level of FS) |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Domain Name of FS—STRING [65] Feature Server ID—STRING [20] |
Primary Cause |
The Feature Server is congested. |
Primary Action |
No action is required. |
Maintenance (117)
Table 7-112 lists the details of the Maintenance (117) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Side Automatically Restarting Due to Fault—Maintenance (117)" section.
Table 7-112 Maintenance (117) Details
Description |
Side Automatically Restarting Due to Fault |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Time of next restart attempt—STRING [25] |
Primary Cause |
The platform has shut down due to the OOS-FAULTY state, and is in the process of being automatically restarted. |
Primary Action |
Capture the debugging information, especially from the saved.debug directory. This alarm indicates that an automatic restart is pending and at what time it will be attempted. |
Maintenance (118)
Table 7-113 lists the details of the Maintenance (118) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server—Maintenance (118)" section.
Table 7-113 Maintenance (118) Details
Description |
Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server (DNS Zone Database does not Match Between the Primary DNS and the ISADS) |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Zone Name—STRING [64] Primary DNS Server IP—STRING [64] Serial Number of that Zone in Sl—EIGHT_BYTES Serial Number of that Zone in Ma—EIGHT_BYTES |
Primary Cause |
The zone transfer between the primary DNS and the secondary DNS has failed. |
Primary Action |
Check the system log monitor for the DNS traffic through port 53 (default port for DNS). |
Maintenance (119)
Table 7-114 lists the details of the Maintenance (119) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Periodic Shared Memory Database Back Up Failure—Maintenance (119)" section.
Table 7-114 Maintenance (119) Details
Description |
Periodic Shared Memory Database Back Up Failure |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Reason—STRING [300] Available Disk Space (MB)—FOUR_BYTES Required Disk Space (MB)—FOUR_BYTES |
Primary Cause |
High disk usage. |
Primary Action |
Check disk usage. |
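The datawords for this alarm include both Available Disk Space (MB) and Required Disk Space (MB), so the check reduces to a simple comparison. A minimal precheck sketch, with a hypothetical backup destination directory:

```python
import shutil

def backup_precheck(backup_dir, required_mb):
    """Return (ok, available_mb) for a shared-memory database backup.

    required_mb mirrors the 'Required Disk Space (MB)' dataword;
    backup_dir is a hypothetical backup destination, not a documented
    BTS 10200 path.
    """
    available_mb = shutil.disk_usage(backup_dir).free // (1024 * 1024)
    return available_mb >= required_mb, available_mb
```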
Maintenance (120)
Table 7-115 lists the details of the Maintenance (120) informational event. For additional information, refer to the "Periodic Shared Memory Database Back Up Success—Maintenance (120)" section.
Table 7-115 Maintenance (120) Details
Description |
Periodic Shared Memory Database Back Up Success |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Details—STRING [300] |
Primary Cause |
The shared memory database was backed up successfully. |
Primary Action |
Not applicable. |
Maintenance (121)
Table 7-116 lists the details of the Maintenance (121) informational event. For additional information, refer to the "Invalid SOAP Request—Maintenance (121)" section.
Table 7-116 Maintenance (121) Details
Description |
Invalid SOAP Request |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Request—STRING [256] Session ID—STRING [20] |
Primary Cause |
The provisioning client sent an invalid XML request to the SOAP provisioning adapter. |
Primary Action |
Resend a valid XML request. |
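Before resending, the provisioning client can at least confirm that its request is well-formed XML. The standard-library sketch below checks only well-formedness; the SOAP adapter may still reject requests that violate its schema, and the function name is illustrative.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Check that a provisioning request parses as XML before sending it.

    Well-formedness is only a first-level check; schema validation
    against the adapter's expected request format is a separate step.
    """
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```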
Maintenance (122)
Table 7-117 lists the details of the Maintenance (122) informational event. For additional information, refer to the "Northbound Provisioning Message Is Retransmitted—Maintenance (122)" section.
Table 7-117 Maintenance (122) Details
Description |
Northbound Provisioning Message Is Retransmitted |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Datawords |
Prov Time at Seconds—FOUR_BYTES Prov Time at Milli Seconds—FOUR_BYTES Table Name—STRING [40] Update String—STRING [256] |
Primary Cause |
The EMS hub may be responding slowly. |
Primary Action |
Check to see if there are any hub alarms. Take the appropriate action according to the alarms. |
Maintenance (123)
Table 7-118 lists the details of the Maintenance (123) warning event. To monitor and correct the cause of the event, refer to the "Northbound Provisioning Message Dropped Due to Full Index Table—Maintenance (123)" section.
Table 7-118 Maintenance (123) Details
Description |
Northbound Provisioning Message Dropped Due To Full Index Table (Northbound Provisioning Message Dropped Due To Full IDX Table) |
Severity |
Warning |
Threshold |
100 |
Throttle |
0 |
Datawords |
Prov Time at Seconds—FOUR_BYTES Prov Time at Milli Seconds—FOUR_BYTES Table Name—STRING [40] Update String—STRING [256] |
Primary Cause |
The EMS hub is not responding. |
Primary Action |
Verify if there are any alarms originating from the hub and take the appropriate action. |
Maintenance (124)
Table 7-119 lists the details of the Maintenance (124) informational event. For additional information, refer to the "Periodic Shared Memory Sync Started—Maintenance (124)" section.
Table 7-119 Maintenance (124) Details
Description |
Periodic Shared Memory Sync Started |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Primary Cause |
Serves as an informational alert that a periodic shared-memory synchronization has successfully started on the Cisco BTS 10200 system. |
Primary Action |
The customer should monitor the Cisco BTS 10200 system for the successful completion of the periodic shared-memory synchronization as indicated by the Periodic Shared Memory Sync Completed event. |
Maintenance (125)
Table 7-120 lists the details of the Maintenance (125) informational event. For additional information, refer to the "Periodic Shared Memory Sync Completed—Maintenance (125)" section.
Table 7-120 Maintenance (125) Details
Description |
Periodic Shared Memory Sync Completed |
Severity |
Information |
Threshold |
100 |
Throttle |
0 |
Primary Cause |
Serves as an informational alert that a periodic shared-memory synchronization to disk has been successfully completed. |
Primary Action |
No customer action is required when the periodic shared-memory synchronization is successfully completed by the Cisco BTS 10200 system. |
Maintenance (126)
Table 7-121 lists the details of the Maintenance (126) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Periodic Shared Memory Sync Failure—Maintenance (126)" section.
Table 7-121 Maintenance (126) Details
Description |
Periodic Shared Memory Sync Failure |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Failure Details—STRING [300] |
Primary Cause |
Indicates that the periodic shared-memory synchronization write to disk has failed. |
Primary Action |
Check the Cisco BTS 10200 system for the cause of the failure, correct it, and then verify that the next periodic shared-memory synchronization to disk completes successfully by monitoring the Cisco BTS 10200 system for a Periodic Shared Memory Sync Completed informational event. |
Maintenance (127)
Table 7-122 lists the details of the Maintenance (127) critical alarm. To troubleshoot and correct the cause of the alarm, refer to the "Manual Recovery of OMS HUB Queue Loss—Maintenance (127)" section.
Table 7-122 Maintenance (127) Details
Description |
Loss in OMS Hub Communication |
Severity |
Critical |
Threshold |
100 |
Throttle |
0 |
Datawords |
Queue Name—STRING [8] Platform—STRING [8] Node—STRING [8] |
Primary Cause |
Indicates either a network problem or a socket connection failure causing OMS queue loss. |
Primary Action |
Manually restart the OMS and SMG processes. Refer to the "Manual Recovery of OMS HUB Queue Loss—Maintenance (127)" section. |
Monitoring Maintenance Events
This section provides the information you need for monitoring and correcting maintenance events. Table 7-123 lists all of the maintenance events in numerical order and provides cross-references to each subsection.
Test Report—Maintenance (1)
The Test Report is for testing the maintenance event category. The event is informational and no further action is required.
Report Threshold Exceeded—Maintenance (2)
The Report Threshold Exceeded event functions as an informational alert that a report threshold has been exceeded. The primary cause of the event is that the threshold for a given report type and number has been exceeded. No further action is required since this is an information report. The root cause event report and the threshold setting should be investigated to determine whether there is a service-affecting situation.
Local Side Has Become Faulty—Maintenance (3)
The Local Side Has Become Faulty alarm (major) indicates that the local side has become faulty. To troubleshoot and correct the cause of the Local Side Has Become Faulty alarm, refer to the "Local Side Has Become Faulty—Maintenance (3)" section.
Mate Side Has Become Faulty—Maintenance (4)
The Mate Side Has Become Faulty alarm (major) indicates that the mate side has become faulty. To troubleshoot and correct the cause of the Mate Side has Become Faulty alarm, refer to the "Mate Side Has Become Faulty—Maintenance (4)" section.
Changeover Failure—Maintenance (5)
The Changeover Failure alarm (major) indicates that a changeover failed. To troubleshoot and correct the cause of the Changeover Failure alarm, refer to the "Changeover Failure—Maintenance (5)" section.
Changeover Timeout—Maintenance (6)
The Changeover Timeout alarm (major) indicates that a changeover timed out. To troubleshoot and correct the cause of the Changeover Timeout alarm, refer to the "Changeover Timeout—Maintenance (6)" section.
Mate Rejected Changeover—Maintenance (7)
The Mate Rejected Changeover alarm (major) indicates that the mate rejected the changeover. To troubleshoot and correct the cause of the Mate Rejected Changeover alarm, refer to the "Mate Rejected Changeover—Maintenance (7)" section.
Mate Changeover Timeout—Maintenance (8)
The Mate Changeover Timeout alarm (major) indicates that the mate changeover timed out. To troubleshoot and correct the cause of the Mate Changeover Timeout alarm, refer to the "Mate Changeover Timeout—Maintenance (8)" section.
Local Initialization Failure—Maintenance (9)
The Local Initialization Failure alarm (major) indicates that the local initialization has failed. To troubleshoot and correct the cause of the Local Initialization Failure alarm, refer to the "Local Initialization Failure—Maintenance (9)" section.
Local Initialization Timeout—Maintenance (10)
The Local Initialization Timeout alarm (major) indicates that the local initialization has timed out. To troubleshoot and correct the cause of the Local Initialization Timeout alarm, refer to the "Local Initialization Timeout—Maintenance (10)" section.
Switchover Complete—Maintenance (11)
The Switchover Complete event functions as an informational alert that the switchover has been completed. The Switchover Complete event acknowledges that the changeover successfully completed. The event is informational and no further action is required.
Initialization Successful—Maintenance (12)
The Initialization Successful event functions as an informational alert that the initialization was successful. The Initialization Successful event indicates that a local initialization has been successful. The event is informational and no further action is required.
Administrative State Change—Maintenance (13)
The Administrative State Change event functions as an informational alert that the administrative state of a managed resource has changed. No action is required, since this informational event is given after manually changing the administrative state of a managed resource.
Call Agent Administrative State Change—Maintenance (14)
The Call Agent Administrative State Change event functions as an informational alert that indicates that the call agent has changed operational state as a result of a manual switchover. The event is informational and no further action is required.
Feature Server Administrative State Change—Maintenance (15)
The Feature Server Administrative State Change event functions as an informational alert that indicates that the feature server has changed operational state as a result of a manual switchover. The event is informational and no further action is required.
Process Manager: Process Has Died: Starting Process—Maintenance (16)
The Process Manager: Process Has Died: Starting Process event functions as an informational alert that a process is being started as the system is being brought up. The event is informational and no further action is required.
Invalid Event Report Received—Maintenance (17)
The Invalid Event Report Received event functions as an informational alert that indicates that a process has sent an event report that cannot be found in the database. If during system initialization a short burst of these events is issued prior to the database initialization, these events are informational and can be ignored; otherwise, contact Cisco TAC.
Process Manager: Process Has Died—Maintenance (18)
The Process Manager: Process Has Died alarm (minor) indicates that a process has died. To troubleshoot and correct the cause of the Process Manager: Process Has Died alarm, refer to the "Process Manager: Process Has Died—Maintenance (18)" section.
Process Manager: Process Exceeded Restart Rate—Maintenance (19)
The Process Manager: Process Exceeded Restart Rate alarm (major) indicates that a process has exceeded the restart rate. To troubleshoot and correct the cause of the Process Manager: Process Exceeded Restart Rate alarm, refer to the "Process Manager: Process Exceeded Restart Rate—Maintenance (19)" section.
Lost Connection to Mate—Maintenance (20)
The Lost Connection to Mate alarm (major) indicates that the keepalive module connection to the mate has been lost. To troubleshoot and correct the cause of the Lost Connection to Mate alarm, refer to the "Lost Connection to Mate—Maintenance (20)" section.
Network Interface Down—Maintenance (21)
The Network Interface Down alarm (major) indicates that the network interface has gone down. To troubleshoot and correct the cause of the Network Interface Down alarm, refer to the "Network Interface Down—Maintenance (21)" section.
Mate Is Alive—Maintenance (22)
The Mate Is Alive event functions as an informational alert that the mate is alive. The reporting CA/FS/EMS/BDMS is indicating that its mate has been successfully restored to service. The event is informational and no further action is required.
Process Manager: Process Failed to Complete Initialization—Maintenance (23)
The Process Manager: Process Failed to Complete Initialization alarm (major) indicates that a PMG process failed to complete initialization. To troubleshoot and correct the cause of the Process Manager: Process Failed to Complete Initialization alarm, refer to the "Process Manager: Process Failed to Complete Initialization—Maintenance (23)" section.
Process Manager: Restarting Process—Maintenance (24)
The Process Manager: Restarting Process alarm (minor) indicates that a PMG process is being restarted. To troubleshoot and correct the cause of the Process Manager: Restarting Process alarm, refer to the "Process Manager: Restarting Process—Maintenance (24)" section.
Process Manager: Changing State—Maintenance (25)
The Process Manager: Changing State event functions as an informational alert that a PMG process is changing state. The primary cause of the event is that a side is transitioning from one state to another. This is part of the normal side state change process. Monitor the system for other maintenance category event reports to see if the transition is due to a failure of a component within the specified side.
Process Manager: Going Faulty—Maintenance (26)
The Process Manager: Going Faulty alarm (major) indicates that a PMG process is going faulty. To troubleshoot and correct the cause of the Process Manager: Going Faulty alarm, refer to the "Process Manager: Going Faulty—Maintenance (26)" section.
Process Manager: Changing Over to Active—Maintenance (27)
The Process Manager: Changing Over to Active event functions as an informational alert that a PMG process is being changed to active. The primary cause of the event is that the specified platform instance was in the standby state and was changed to the active state either by program control or via user request. No action is necessary. This is part of the normal process of activating the platform.
Process Manager: Changing Over to Standby—Maintenance (28)
The Process Manager: Changing Over to Standby event functions as an informational alert that a PMG process is being changed to standby. The primary cause of the event is that the specified side was in the active state and was changed to the standby state, or is being restored to service, and its mate is already in the active state either by program control or through a user request. No action is necessary. This is part of the normal process of restoring or duplexing the platform.
Administrative State Change Failure—Maintenance (29)
The Administrative State Change Failure event functions as a warning that a change of the administrative state has failed. The primary cause of the event is that an attempt to change the administrative state of a device has failed. Analyze the cause of the failure if one can be determined. Verify that the controlling element of the targeted device was in the active state, which is required to change the administrative state of the device. If the controlling platform instance is not active, restore it to service.
Element Manager State Change—Maintenance (30)
The Element Manager State Change event functions as an informational alert that the element manager has changed state. The primary cause of the event is that the specified EMS has changed to the indicated state either naturally or through a user request. The event is informational and no action is necessary. This is part of the normal state transitioning process for the EMS. Monitor the system for related event reports if the transition was due to a faulty or out of service state.
Process Manager: Sending Go Active to Process—Maintenance (32)
The Process Manager: Sending Go Active to Process event functions as an informational alert that a process is being notified to switch to active state as the system is switching over from standby to active. The event is informational and no further action is required.
Process Manager: Sending Go Standby to Process—Maintenance (33)
The Process Manager: Sending Go Standby to Process event functions as an informational alert that a process is being notified to exit gracefully as the system is switching over to standby state, or is shutting down. The switchover or shutdown could be due to the operator taking the action to switch or shut down the system or due to the system having detected a fault. The event is informational and no further action is required.
Process Manager: Sending End Process to Process—Maintenance (34)
The Process Manager: Sending End Process to Process event functions as an informational alert that a process is being notified to exit gracefully as the system is switching over to standby state, or is shutting down. The switchover or shutdown could be due to the operator taking the action to switch or shut down the system or due to the system having detected a fault. The event is informational and no further action is required.
Process Manager: All Processes Completed Initialization—Maintenance (35)
The Process Manager: All Processes Completed Initialization event functions as an informational alert that the system is being brought up, and that all processes are ready to start executing. The event is informational and no further action is required.
Process Manager: Sending All Processes Initialization Complete to Process—Maintenance (36)
The Process Manager: Sending All Processes Initialization Complete to Process event functions as an informational alert that the system is being brought up, and all processes are being notified to start executing. The event is informational and no further action is required.
Process Manager: Killing Process—Maintenance (37)
The Process Manager: Killing Process event functions as an informational alert that a process is being killed. The primary cause is a software problem that occurred while the system was being brought up or shut down; a process that did not come up when the system was brought up had to be killed in order to restart it. The event is informational and no further action is required.
Process Manager: Clearing the Database—Maintenance (38)
The Process Manager: Clearing the Database event functions as an informational alert that the system is preparing to copy data from the mate. The system has been brought up and the mate side is running. The event is informational and no further action is required.
Process Manager: Cleared the Database—Maintenance (39)
The Process Manager: Cleared the Database event functions as an informational alert that the system is prepared to copy data from the mate. The system has been brought up and the mate side is running. The event is informational and no further action is required.
Process Manager: Binary Does Not Exist for Process—Maintenance (40)
The Process Manager: Binary Does Not Exist for Process alarm (critical) indicates that the platform was not installed correctly. To troubleshoot and correct the cause of the Process Manager: Binary Does Not Exist for Process alarm, refer to the "Process Manager: Binary Does Not Exist for Process—Maintenance (40)" section.
Administrative State Change Successful With Warning—Maintenance (41)
The Administrative State Change Successful With Warning event functions as a warning that the system was in flux when a successful administrative state change occurred. The primary cause of the event is that the system was in a flux state when an administrative state change command was issued. To correct the primary cause of the event, retry the command.
Number of Heartbeat Messages Received Is Less Than 50% of Expected—Maintenance (42)
The Number of Heartbeat Messages Received Is Less Than 50% of Expected alarm (major) indicates that the number of heartbeat (HB) messages being received is less than 50 percent of the expected number. To troubleshoot and correct the cause of the Number of Heartbeat Messages Received Is Less Than 50% of Expected alarm, refer to the "Number of Heartbeat Messages Received Is Less Than 50% Of Expected—Maintenance (42)" section.
Process Manager: Process Failed to Come Up in Active Mode—Maintenance (43)
The Process Manager: Process Failed to Come Up in Active Mode alarm (critical) indicates that the process has failed to come up in active mode. To troubleshoot and correct the cause of the Process Manager: Process Failed to Come Up in Active Mode alarm, refer to the "Process Manager: Process Failed to Come Up In Active Mode—Maintenance (43)" section.
Process Manager: Process Failed to Come Up in Standby Mode—Maintenance (44)
The Process Manager: Process Failed to Come Up in Standby Mode alarm (critical) indicates that the process has failed to come up in standby mode. To troubleshoot and correct the cause of the Process Manager: Process Failed to Come Up in Standby Mode alarm, refer to the "Process Manager: Process Failed to Come Up In Standby Mode—Maintenance (44)" section.
Application Instance State Change Failure—Maintenance (45)
The Application Instance State Change Failure alarm (major) indicates that an application instance state change failed. To troubleshoot and correct the cause of the Application Instance State Change Failure alarm, refer to the "Application Instance State Change Failure—Maintenance (45)" section.
Network Interface Restored—Maintenance (46)
The Network Interface Restored event functions as an informational alert that the network interface was restored. The primary cause of the event is that the interface cable is reconnected and the interface is brought "up" using the ifconfig command. The event is informational and no further action is required.
Thread Watchdog Counter Expired for a Thread—Maintenance (47)
The Thread Watchdog Counter Expired for a Thread alarm (critical) indicates that a thread watchdog counter has expired for a thread. To troubleshoot and correct the cause of the Thread Watchdog Counter Expired for a Thread alarm, refer to the "Thread Watchdog Counter Expired for a Thread—Maintenance (47)" section.
Index Table Usage Exceeded Minor Usage Threshold Level—Maintenance (48)
The Index Table Usage Exceeded Minor Usage Threshold Level alarm (minor) indicates that the index (IDX) table usage has exceeded the minor threshold crossing usage level. To troubleshoot and correct the cause of the Index Table Usage Exceeded Minor Usage Threshold Level alarm, refer to the "Index Table Usage Exceeded Minor Usage Threshold Level—Maintenance (48)" section.
Index Table Usage Exceeded Major Usage Threshold Level—Maintenance (49)
The Index Table Usage Exceeded Major Usage Threshold Level alarm (major) indicates that the IDX table usage has exceeded the major threshold crossing usage level. To troubleshoot and correct the cause of the Index Table Usage Exceeded Major Usage Threshold Level alarm, refer to the "Index Table Usage Exceeded Major Usage Threshold Level—Maintenance (49)" section.
Index Table Usage Exceeded Critical Usage Threshold Level—Maintenance (50)
The Index Table Usage Exceeded Critical Usage Threshold Level alarm (critical) indicates that the IDX table usage has exceeded the critical threshold crossing usage level. To troubleshoot and correct the cause of the Index Table Usage Exceeded Critical Usage Threshold Level alarm, refer to the "Index Table Usage Exceeded Critical Usage Threshold Level—Maintenance (50)" section.
A Process Exceeds 70% of Central Processing Unit Usage—Maintenance (51)
The A Process Exceeds 70% of Central Processing Unit Usage alarm (major) indicates that a process has exceeded the CPU usage threshold of 70 percent. To troubleshoot and correct the cause of the A Process Exceeds 70% of Central Processing Unit Usage alarm, refer to the "A Process Exceeds 70% of Central Processing Unit Usage—Maintenance (51)" section.
Central Processing Unit Usage Is Now Below the 50% Level—Maintenance (52)
The Central Processing Unit Usage Is Now Below the 50% Level event functions as an informational alert that the CPU usage level has fallen below the threshold level of 50 percent. The event is informational and no further action is required.
The Central Processing Unit Usage Is Over 90% Busy—Maintenance (53)
The Central Processing Unit Usage Is Over 90% Busy alarm (critical) indicates that the CPU usage is over the threshold level of 90 percent. To troubleshoot and correct the cause of The Central Processing Unit Usage Is Over 90% Busy alarm, refer to the "The Central Processing Unit Usage Is Over 90% Busy—Maintenance (53)" section.
The Central Processing Unit Has Returned to Normal Levels of Operation—Maintenance (54)
The Central Processing Unit Has Returned to Normal Levels of Operation event functions as an informational alert that the CPU usage has returned to the normal level of operation. The event is informational and no further action is required.
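Taken together, the CPU thresholds in Maintenance (51) through (54) form a hysteresis band: an alarm is raised above 70 percent, escalated above 90 percent, and cleared only once usage falls below 50 percent, so usage hovering near a single threshold does not flap the alarm. The following sketch illustrates that logic; the function name and state labels are illustrative, not Cisco BTS 10200 code.

```python
def cpu_alarm_state(prev_state, cpu_pct):
    """Classify CPU usage using the thresholds described above.

    Raise a major alarm above 70%, escalate to critical above 90%,
    and clear only once usage drops below 50%. The gap between the
    raise (70%) and clear (50%) thresholds prevents alarm flapping.
    Illustrative sketch only, not BTS 10200 code.
    """
    if cpu_pct > 90:
        return "critical"   # Maintenance (53)
    if cpu_pct > 70:
        return "major"      # Maintenance (51)
    if cpu_pct < 50:
        return "normal"     # Maintenance (52)/(54) clear
    return prev_state       # inside the hysteresis band: hold current state
```

For example, a process at 60 percent keeps whatever state it already had: `cpu_alarm_state("major", 60)` stays `"major"` until usage falls below 50 percent.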
The Five Minute Load Average Is Abnormally High—Maintenance (55)
The Five Minute Load Average Is Abnormally High alarm (major) indicates that the five-minute load average is abnormally high. To troubleshoot and correct the cause of The Five Minute Load Average Is Abnormally High alarm, refer to the "The Five Minute Load Average Is Abnormally High—Maintenance (55)" section.
The Load Average Has Returned to Normal Levels—Maintenance (56)
The Load Average Has Returned to Normal Levels event functions as an informational alert that the load average has returned to normal levels. The event is informational and no further action is required.
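On a Unix host, the 1-, 5-, and 15-minute load averages tracked by Maintenance (55) and (56) can be read with `os.getloadavg()`; dividing the 5-minute figure by the CPU count gives a rough per-processor load. The sketch below is illustrative only: the 4.0-per-CPU threshold is an assumption, since the actual Maintenance (55) threshold is not documented here.

```python
import os

def load5_is_abnormal(load5, ncpus, threshold_per_cpu=4.0):
    """Return True if the 5-minute load average is abnormally high.

    threshold_per_cpu=4.0 is an illustrative assumption; the real
    BTS 10200 threshold is not documented in this section.
    """
    return load5 / max(ncpus, 1) > threshold_per_cpu

# Live check: os.getloadavg() yields the 1-, 5-, and 15-minute averages.
_, load5, _ = os.getloadavg()
print(load5_is_abnormal(load5, os.cpu_count() or 1))
```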
Memory and Swap Are Consumed at Critical Levels—Maintenance (57)
|
Note Maintenance (57) is issued by the Cisco BTS 10200 system when memory consumption is greater than 95 percent (>95%) and swap space consumption is greater than 50 percent (>50%).
|
The Memory and Swap Are Consumed at Critical Levels alarm (critical) indicates that memory and swap file usage have reached critical levels. To troubleshoot and correct the cause of the Memory and Swap Are Consumed at Critical Levels alarm, refer to the "Memory and Swap Are Consumed at Critical Levels—Maintenance (57)" section.
Memory and Swap Are Consumed at Abnormal Levels—Maintenance (58)
|
Note Maintenance (58) is issued by the Cisco BTS 10200 system when memory consumption is greater than 80 percent (>80%) and swap space consumption is greater than 30 percent (>30%).
|
The Memory and Swap Are Consumed at Abnormal Levels event functions as an informational alert that the memory and swap file usage are being consumed at abnormal levels. The primary cause of the event is that one or more processes have consumed an abnormal amount of memory on the system, and the operating system is utilizing an abnormal amount of the swap space for process execution. This can be a result of high call rates or bulk provisioning activity. Monitor the system to ensure all subsystems are performing normally. If they are, only reducing the effective load on the system will clear the situation. If some subsystems are not performing normally, determine which processes are running at abnormally high rates, and contact Cisco TAC.
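The threshold pairs in the two notes above combine memory and swap consumption: Maintenance (57) requires both memory above 95 percent and swap above 50 percent, while Maintenance (58) requires memory above 80 percent and swap above 30 percent. The sketch below encodes those documented thresholds; the function itself is illustrative, not Cisco BTS 10200 code.

```python
def memory_swap_severity(mem_pct, swap_pct):
    """Map memory/swap consumption to the maintenance levels above.

    Maintenance (57), critical: memory > 95% AND swap > 50%.
    Maintenance (58), abnormal: memory > 80% AND swap > 30%.
    Thresholds come from the notes in this section; the function
    is an illustrative sketch, not BTS 10200 code.
    """
    if mem_pct > 95 and swap_pct > 50:
        return "critical"   # Maintenance (57)
    if mem_pct > 80 and swap_pct > 30:
        return "abnormal"   # Maintenance (58)
    return "normal"
```

Note that both conditions must hold for each level: memory at 96 percent with swap at only 40 percent reports as abnormal, not critical.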
No Heartbeat Messages Received Through the Interface—Maintenance (61)
The No Heartbeat Messages Received Through the Interface alarm (critical) indicates that no HB messages are being received through the local network interface. To troubleshoot and correct the cause of the No Heartbeat Messages Received Through the Interface alarm, refer to the "No Heartbeat Messages Received Through the Interface—Maintenance (61)" section.
Link Monitor: Interface Lost Communication—Maintenance (62)
The Link Monitor: Interface Lost Communication alarm (major) indicates that an interface has lost communication. To troubleshoot and correct the cause of the Link Monitor: Interface Lost Communication alarm, refer to the "Link Monitor: Interface Lost Communication—Maintenance (62)" section.
Outgoing Heartbeat Period Exceeded Limit—Maintenance (63)
The Outgoing Heartbeat Period Exceeded Limit alarm (major) indicates that the outgoing HB period has exceeded the limit. To troubleshoot and correct the cause of the Outgoing Heartbeat Period Exceeded Limit alarm, refer to the "Outgoing Heartbeat Period Exceeded Limit—Maintenance (63)" section.
Average Outgoing Heartbeat Period Exceeds Major Alarm Limit—Maintenance (64)
The Average Outgoing Heartbeat Period Exceeds Major Alarm Limit alarm (major) indicates that the average outgoing HB period has exceeded the major threshold crossing alarm limit. To troubleshoot and correct the cause of the Average Outgoing Heartbeat Period Exceeds Major Alarm Limit alarm, refer to the "Average Outgoing Heartbeat Period Exceeds Major Alarm Limit—Maintenance (64)" section.
Disk Partition Critically Consumed—Maintenance (65)
The Disk Partition Critically Consumed alarm (critical) indicates that the disk partition consumption has reached critical limits. To troubleshoot and correct the cause of the Disk Partition Critically Consumed alarm, refer to the "Disk Partition Critically Consumed—Maintenance (65)" section.
Disk Partition Significantly Consumed—Maintenance (66)
The Disk Partition Significantly Consumed alarm (major) indicates that the disk partition consumption has reached the major threshold crossing level. To troubleshoot and correct the cause of the Disk Partition Significantly Consumed alarm, refer to the "Disk Partition Significantly Consumed—Maintenance (66)" section.
The Free Inter-Process Communication Pool Buffers Below Minor Threshold—Maintenance (67)
The Free Inter-Process Communication Pool Buffers Below Minor Threshold alarm (minor) indicates that the number of free IPC pool buffers has fallen below the minor threshold crossing level. To troubleshoot and correct the cause of The Free Inter-Process Communication Pool Buffers Below Minor Threshold alarm, refer to the "The Free Inter-Process Communication Pool Buffers Below Minor Threshold—Maintenance (67)" section.
The Free Inter-Process Communication Pool Buffers Below Major Threshold—Maintenance (68)
The Free Inter-Process Communication Pool Buffers Below Major Threshold alarm (major) indicates that the number of free IPC pool buffers has fallen below the major threshold crossing level. To troubleshoot and correct the cause of The Free Inter-Process Communication Pool Buffers Below Major Threshold alarm, refer to the "The Free Inter-Process Communication Pool Buffers Below Major Threshold—Maintenance (68)" section.
The Free Inter-Process Communication Pool Buffers Below Critical Threshold—Maintenance (69)
The Free Inter-Process Communication Pool Buffers Below Critical Threshold alarm (critical) indicates that the number of free IPC pool buffers has fallen below the critical threshold crossing level. To troubleshoot and correct the cause of The Free Inter-Process Communication Pool Buffers Below Critical Threshold alarm, refer to the "The Free Inter-Process Communication Pool Buffers Below Critical Threshold—Maintenance (69)" section.
The Free Inter-Process Communication Pool Buffer Count Below Minimum Required—Maintenance (70)
The Free Inter-Process Communication Pool Buffer Count Below Minimum Required alarm (critical) indicates that the IPC pool buffers are not being freed properly by the application, or that the application is not able to keep up with the incoming IPC messaging traffic. To troubleshoot and correct the cause of The Free Inter-Process Communication Pool Buffer Count Below Minimum Required alarm, refer to the "The Free Inter-Process Communication Pool Buffer Count Below Minimum Required—Maintenance (70)" section.
Local Domain Name System Server Response Too Slow—Maintenance (71)
The Local Domain Name System Server Response Too Slow alarm (major) indicates that the response time of the local DNS server is too slow. To troubleshoot and correct the cause of the Local Domain Name System Server Response Too Slow alarm, refer to the "Local Domain Name System Server Response Too Slow—Maintenance (71)" section.
External Domain Name System Server Response Too Slow—Maintenance (72)
The External Domain Name System Server Response Too Slow alarm (major) indicates that the response time of the external DNS server is too slow. To troubleshoot and correct the cause of the External Domain Name System Server Response Too Slow alarm, refer to the "External Domain Name System Server Response Too Slow—Maintenance (72)" section.
External Domain Name System Server Not Responsive—Maintenance (73)
The External Domain Name System Server Not Responsive alarm (critical) indicates that the external DNS server is not responding to network queries. To troubleshoot and correct the cause of the External Domain Name System Server Not Responsive alarm, refer to the "External Domain Name System Server Not Responsive—Maintenance (73)" section.
Local Domain Name System Service Not Responsive—Maintenance (74)
The Local Domain Name System Service Not Responsive alarm (critical) indicates that the local DNS server is not responding to network queries. To troubleshoot and correct the cause of the Local Domain Name System Service Not Responsive alarm, refer to the "Local Domain Name System Service Not Responsive—Maintenance (74)" section.
Mismatch of Internet Protocol Address Local Server and Domain Name System—Maintenance (75)
The Mismatch of Internet Protocol Address Local Server and Domain Name System event functions as a warning that a mismatch of the local server IP address and the DNS server address has occurred. The primary cause of the event is that the DNS server updates are not getting to the Cisco BTS 10200 from the external server, or the discrepancy was detected before the local DNS lookup table was updated. Ensure the external DNS server is operational and sending updates to the Cisco BTS 10200.
Mate Time Differs Beyond Tolerance—Maintenance (77)
The Mate Time Differs Beyond Tolerance alarm (major) indicates that the mate time differs beyond the tolerance. To troubleshoot and correct the cause of the Mate Time Differs Beyond Tolerance alarm, refer to the "Mate Time Differs Beyond Tolerance—Maintenance (77)" section.
Bulk Data Management System Admin State Change—Maintenance (78)
The Bulk Data Management System Admin State Change event functions as an informational alert that the BDMS administrative state has changed. The primary cause of the event is that the Bulk Data Management Server was switched over manually. The event is informational and no further action is required.
Resource Reset—Maintenance (79)
The Resource Reset event functions as an informational alert that a resource reset has occurred. The event is informational and no further action is required.
Resource Reset Warning—Maintenance (80)
The Resource Reset Warning event functions as an informational alert that a resource reset is about to occur. The event is informational and no further action is required.
Resource Reset Failure—Maintenance (81)
The Resource Reset Failure event functions as an informational alert that a resource reset has failed. The primary cause of the event is an internal messaging error. Check dataword 3 (failure reason) to determine if this is caused by invalid user input, inconsistent provisioning of the device, or if the system is busy and a timeout occurred.
Average Outgoing Heartbeat Period Exceeds Critical Limit—Maintenance (82)
The Average Outgoing Heartbeat Period Exceeds Critical Limit alarm (critical) indicates that the average outgoing HB period has exceeded the critical limit threshold. To troubleshoot and correct the cause of the Average Outgoing Heartbeat Period Exceeds Critical Limit alarm, refer to the "Average Outgoing Heartbeat Period Exceeds Critical Limit—Maintenance (82)" section.
Swap Space Below Minor Threshold—Maintenance (83)
The Swap Space Below Minor Threshold alarm (minor) indicates that the swap space has fallen below the minor threshold level. To troubleshoot and correct the cause of the Swap Space Below Minor Threshold alarm, refer to the "Swap Space Below Minor Threshold—Maintenance (83)" section.
Swap Space Below Major Threshold—Maintenance (84)
The Swap Space Below Major Threshold alarm (major) indicates that the swap space has fallen below the major threshold level. To troubleshoot and correct the cause of the Swap Space Below Major Threshold alarm, refer to the "Swap Space Below Major Threshold—Maintenance (84)" section.
Swap Space Below Critical Threshold—Maintenance (85)
The Swap Space Below Critical Threshold alarm (critical) indicates that the swap space has fallen below the critical threshold level. To troubleshoot and correct the cause of the Swap Space Below Critical Threshold alarm, refer to the "Swap Space Below Critical Threshold—Maintenance (85)" section.
System Health Report Collection Error—Maintenance (86)
The System Health Report Collection Error alarm (minor) indicates that an error occurred during collection of the System Health Report. To troubleshoot and correct the cause of the System Health Report Collection Error alarm, refer to the "System Health Report Collection Error—Maintenance (86)" section.
Status Update Process Request Failed—Maintenance (87)
The Status Update Process Request Failed alarm (major) indicates that the status update process request failed. To troubleshoot and correct the cause of the Status Update Process Request Failed alarm, refer to the "Status Update Process Request Failed—Maintenance (87)" section.
Status Update Process Database List Retrieval Error—Maintenance (88)
The Status Update Process Database List Retrieval Error alarm (major) indicates that the status update process DB had a retrieval error. To troubleshoot and correct the cause of the Status Update Process Database List Retrieval Error alarm, refer to the "Status Update Process Database List Retrieval Error—Maintenance (88)" section.
Status Update Process Database Update Error—Maintenance (89)
The Status Update Process Database Update Error alarm (major) indicates that the status update process DB had an update error. To troubleshoot and correct the cause of the Status Update Process Database Update Error alarm, refer to the "Status Update Process Database Update Error—Maintenance (89)" section.
Disk Partition Moderately Consumed—Maintenance (90)
The Disk Partition Moderately Consumed alarm (minor) indicates that the disk partition is moderately consumed. To troubleshoot and correct the cause of the Disk Partition Moderately Consumed alarm, refer to the "Disk Partition Moderately Consumed—Maintenance (90)" section.
Internet Protocol Manager Configuration File Error—Maintenance (91)
The Internet Protocol Manager Configuration File Error alarm (critical) indicates that an IPM configuration file has an error. To troubleshoot and correct the cause of the Internet Protocol Manager Configuration File Error alarm, refer to the "Internet Protocol Manager Configuration File Error—Maintenance (91)" section.
Internet Protocol Manager Initialization Error—Maintenance (92)
The Internet Protocol Manager Initialization Error alarm (major) indicates that the IPM process failed to initialize correctly. To troubleshoot and correct the cause of the Internet Protocol Manager Initialization Error alarm, refer to the "Internet Protocol Manager Initialization Error—Maintenance (92)" section.
Internet Protocol Manager Interface Failure—Maintenance (93)
The Internet Protocol Manager Interface Failure alarm (major) indicates that an IPM interface has failed. To troubleshoot and correct the cause of the Internet Protocol Manager Interface Failure alarm, refer to "Internet Protocol Manager Interface Failure—Maintenance (93)" section.
Internet Protocol Manager Interface State Change—Maintenance (94)
The Internet Protocol Manager Interface State Change event functions as an informational alert that the state of the IPM interface has changed. The primary cause of the event is that the IPM changed state on an interface (up or down). The event is informational and no further action is required.
Internet Protocol Manager Interface Created—Maintenance (95)
The Internet Protocol Manager Interface Created event functions as an informational alert that the IPM has created a new logical interface. The event is informational and no further action is required.
Internet Protocol Manager Interface Removed—Maintenance (96)
The Internet Protocol Manager Interface Removed event functions as an informational alert that the IPM has removed a logical interface. The event is informational and no further action is required.
Inter-Process Communication Input Queue Entered Throttle State—Maintenance (97)
The Inter-Process Communication Input Queue Entered Throttle State alarm (critical) indicates that the thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is using up too much of the IPC memory pool resource. To troubleshoot and correct the cause of the Inter-Process Communication Input Queue Entered Throttle State alarm, refer to the "Inter-Process Communication Input Queue Entered Throttle State—Maintenance (97)" section.
Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark—Maintenance (98)
The Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark alarm (minor) indicates that the IPC input queue depth has reached 25 percent of its hi-watermark. To troubleshoot and correct the cause of the Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark alarm, refer to the "Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark—Maintenance (98)" section.
Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark—Maintenance (99)
The Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark alarm (major) indicates that the IPC input queue depth has reached 50 percent of its hi-watermark. To troubleshoot and correct the cause of the Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark alarm, refer to the "Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark—Maintenance (99)" section.
Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark—Maintenance (100)
The Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark alarm (critical) indicates that the IPC input queue depth has reached 75 percent of its hi-watermark. To troubleshoot and correct the cause of the Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark alarm, refer to the "Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark—Maintenance (100)" section.
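Maintenance (98), (99), and (100) form a severity ladder keyed to the ratio of the IPC input queue depth to its hi-watermark, at the 25, 50, and 75 percent levels documented above. A sketch of that mapping follows; the function name is illustrative, not Cisco BTS 10200 code.

```python
def ipc_queue_severity(depth, hi_watermark):
    """Map an IPC input-queue depth to the alarm levels above.

    25% of the hi-watermark raises a minor alarm (Maintenance 98),
    50% a major alarm (99), and 75% a critical alarm (100).
    Illustrative sketch only, not BTS 10200 code.
    """
    ratio = depth / hi_watermark
    if ratio >= 0.75:
        return "critical"   # Maintenance (100)
    if ratio >= 0.50:
        return "major"      # Maintenance (99)
    if ratio >= 0.25:
        return "minor"      # Maintenance (98)
    return "normal"
```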
Switchover in Progress—Maintenance (101)
The Switchover in Progress alarm (critical) indicates that a system switchover is in progress. This alarm is issued when a system switchover is in progress, whether due to a manual switchover (through a CLI command), a failover, or an automatic switchover. No action needs to be taken; the alarm is cleared when the switchover is complete. Service is briefly suspended during this transition.
Thread Watchdog Counter Close to Expiry for a Thread—Maintenance (102)
The Thread Watchdog Counter Close to Expiry for a Thread alarm (critical) indicates that the thread watchdog counter is close to expiry for a thread. The primary cause of the alarm is that a software error has occurred. No further action is required; the Cisco BTS 10200 system automatically recovers or shuts down.
Central Processing Unit Is Offline—Maintenance (103)
The Central Processing Unit Is Offline alarm (critical) indicates that the CPU is offline. To troubleshoot and correct the cause of the Central Processing Unit Is Offline alarm, refer to the "Central Processing Unit Is Offline—Maintenance (103)" section.
Aggregation Device Address Successfully Resolved—Maintenance (104)
The Aggregation Device Address Successfully Resolved event functions as an informational alert that the aggregation device address has been successfully resolved. The event is informational and no further action is required.
No Heartbeat Messages Received Through Interface From Router—Maintenance (107)
The No Heartbeat Messages Received Through Interface From Router alarm (critical) indicates that no HB messages are being received through the interface from the router. To troubleshoot and correct the cause of the No Heartbeat Messages Received Through Interface From Router alarm, refer to the "No Heartbeat Messages Received Through Interface From Router—Maintenance (107)" section.
A Log File Cannot Be Transferred—Maintenance (108)
The A Log File Cannot Be Transferred event serves as a warning that a log file cannot be transferred. The primary cause of the event is an access problem with the external archive system; to correct it, check the external archive system. The secondary cause is a network problem; to correct it, check the network. The tertiary cause is that the source log file is not present; to correct it, check for the presence of the log file.
Five Successive Log Files Cannot Be Transferred—Maintenance (109)
The Five Successive Log Files Cannot Be Transferred alarm (major) indicates that five successive log files cannot be transferred to the archive system. To troubleshoot and correct the cause of the Five Successive Log Files Cannot Be Transferred alarm, refer to the "Five Successive Log Files Cannot Be Transferred—Maintenance (109)" section.
Access to Log Archive Facility Configuration File Failed or File Corrupted—Maintenance (110)
The Access to Log Archive Facility Configuration File Failed or File Corrupted alarm (major) indicates that access to the LAF configuration file failed or the file is corrupted. To troubleshoot and correct the cause of the Access to Log Archive Facility Configuration File Failed or File Corrupted alarm, refer to the "Access To Log Archive Facility Configuration File Failed or File Corrupted—Maintenance (110)" section.
Cannot Log In to External Archive Server—Maintenance (111)
The Cannot Log In to External Archive Server alarm (critical) indicates that the user cannot log in to the external archive server. To troubleshoot and correct the cause of the Cannot Log In to External Archive Server alarm, refer to the "Cannot Log In to External Archive Server—Maintenance (111)" section.
Congestion Status—Maintenance (112)
The Congestion Status alarm (major) indicates that a change has occurred in the system overload level. To troubleshoot and correct the cause of the Congestion Status alarm, refer to the "Congestion Status—Maintenance (112)" section.
Central Processing Unit Load of Critical Processes—Maintenance (113)
The Central Processing Unit Load of Critical Processes event serves as an informational alert that a change (increase/decrease) has occurred in the call processing load. If the level remains continuously high, change the configuration or redistribute the call load.
Queue Length of Critical Processes—Maintenance (114)
The Queue Length of Critical Processes event serves as an informational alert that a change has occurred in the queue length of critical processes. If the reported level remains continuously high, adjust the system load or configuration.
Inter-Process Communication Buffer Usage Level—Maintenance (115)
The Inter-Process Communication Buffer Usage Level event serves as an informational alert that a change has occurred in the IPC buffer usage. If the reported level remains continuously high, adjust the system load or configuration.
Call Agent Reports the Congestion Level of Feature Server—Maintenance (116)
The Call Agent Reports the Congestion Level of Feature Server event serves as an informational alert that the Feature Server is congested. The event is informational and no further action is required.
Side Automatically Restarting Due to Fault—Maintenance (117)
The Side Automatically Restarting Due to Fault alarm (critical) indicates that the platform has shut down to the OOS-FAULTY state, and is in the process of being automatically restarted. To troubleshoot and correct the cause of the Side Automatically Restarting Due to Fault alarm, refer to the "Side Automatically Restarting Due to Fault—Maintenance (117)" section.
Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server—Maintenance (118)
The Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server alarm (critical) indicates that the zone transfer between primary DNS and secondary DNS failed. To troubleshoot and correct the cause of the Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server alarm, refer to the "Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server—Maintenance (118)" section.
Periodic Shared Memory Database Back Up Failure—Maintenance (119)
The Periodic Shared Memory Database Back Up Failure alarm (critical) indicates that the periodic shared memory database back up has failed. To troubleshoot and correct the cause of the Periodic Shared Memory Database Back Up Failure alarm, refer to the "Periodic Shared Memory Database Back Up Failure—Maintenance (119)" section.
Periodic Shared Memory Database Back Up Success—Maintenance (120)
The Periodic Shared Memory Database Back Up Success event serves as an informational alert that the periodic shared memory database back up was successfully completed. The event is informational and no further action is required.
Invalid SOAP Request—Maintenance (121)
The Invalid SOAP Request event serves as an informational alert that an invalid SOAP request was issued. The primary cause of the event is that a provisioning client sent an invalid XML request to the SOAP provisioning adapter. To correct the primary cause of the event, resend a valid XML request.
Northbound Provisioning Message Is Retransmitted—Maintenance (122)
The Northbound Provisioning Message Is Retransmitted event serves as an informational alert that a northbound message has been retransmitted. The primary cause of the event is that an EMS hub may be responding slowly. To correct the primary cause of the event, check to see if there are any hub alarms and take the appropriate action according to those alarms.
Northbound Provisioning Message Dropped Due to Full Index Table—Maintenance (123)
The Northbound Provisioning Message Dropped Due to Full Index Table event serves as a warning that a northbound provisioning message has been dropped due to a full index table. The primary cause of the event is that an EMS hub is not responding. To correct the primary cause of the event, find out if there are any alarms originating from the hub and take the appropriate action.
Periodic Shared Memory Sync Started—Maintenance (124)
The Periodic Shared Memory Sync Started event serves as an informational alert that a periodic shared-memory synchronization has successfully started on the Cisco BTS 10200 system. The customer should monitor the Cisco BTS 10200 system for the successful completion of the periodic shared-memory synchronization as indicated by the Periodic Shared Memory Sync Completed event.
Periodic Shared Memory Sync Completed—Maintenance (125)
The Periodic Shared Memory Sync Completed event serves as an informational alert that a periodic shared-memory synchronization to disk has been successfully completed. No customer action is required when the periodic shared-memory synchronization is successfully completed by the Cisco BTS 10200 system.
Periodic Shared Memory Sync Failure—Maintenance (126)
The Periodic Shared Memory Sync Failure alarm (critical) indicates that the periodic shared-memory synchronization write to disk has failed. To troubleshoot and correct the cause of the Periodic Shared Memory Sync Failure alarm, check the Cisco BTS 10200 system for the cause of the failure, correct it, and then verify that the next periodic shared-memory synchronization to disk is successfully completed by monitoring the Cisco BTS 10200 system for a Periodic Shared Memory Sync Completed informational event.
Manual Recovery of OMS HUB Queue Loss—Maintenance (127)
The Manual Recovery of OMS HUB Queue Loss alarm (critical) indicates that, due to a network or socket connection issue, the OMS queue has been lost, causing communication problems between the Cisco BTS 10200 processes. To troubleshoot and correct the cause of the Manual Recovery of OMS HUB Queue Loss alarm, the operator needs to run the manual clean-up procedure, such as pkill smg3 or pkill hub3, on all four nodes. It is recommended to perform this task during a maintenance window.
This procedure should be run when any of the following critical queues is lost:
•BULK_OAM—Indicates provisioning queue.
•SCADM—Indicates status or control command queue.
•TMProvision—Indicates measurement-related changes (used by the measurement_prov CLI command).
•QUEUE_THREAD_FSAINxxx—Indicates queue thread for sending AIN provisioning data.
•QUEUE_THREAD_FSPTCxxx—Indicates queue thread for sending PTC provisioning data.
•QUEUE_THREAD_CAxxx—Indicates queue thread for sending CA provisioning data.
•HANDSET_ACK—Indicates handset related queue.
•TrafficGA—Indicates measurement data (from CA to EMS).
•SystemManager—Indicates the queue used for system-related commands such as block or unblock.
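The clean-up described above can be sketched as a short shell script. This is a hedged sketch, not an official procedure: the process names smg3 and hub3 come from the alarm text, and it assumes the platform's process manager restarts the killed processes automatically.

```shell
#!/bin/sh
# Hedged sketch of the manual OMS HUB queue clean-up. Run on all four nodes,
# preferably during a maintenance window. Verify the process names against
# your release before running.
for proc in smg3 hub3; do
  if pgrep -x "$proc" > /dev/null 2>&1; then
    echo "restarting $proc"
    pkill "$proc"   # the process manager is assumed to restart it automatically
  else
    echo "$proc is not running on this node"
  fi
done
```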
Troubleshooting Maintenance Alarms
This section provides the information you need for troubleshooting and correcting maintenance alarms. Table 7-124 lists all of the maintenance alarms in numerical order and provides cross-references to each subsection.
Local Side Has Become Faulty—Maintenance (3)
The Local Side Has Become Faulty alarm (major) indicates that the local side has become faulty. The alarm can result from maintenance report 5, 6, 9, 10, 19, or 20; review the information from the CLI log report. The alarm is usually caused by a software problem. To correct the primary cause of the alarm, restart the software using the Installation and Startup procedure. The alarm can also be caused by manually shutting down the system using the platform stop command. To correct the secondary cause of the alarm, reboot the host machine, then reinstall and restart all applications. If the alarm recurs, the operating system or the hardware may have a problem.
Mate Side Has Become Faulty—Maintenance (4)
The Mate Side Has Become Faulty alarm (major) indicates that the mate side has become faulty. The primary cause of the alarm is that the local side has detected the mate side going into a faulty state. To correct the primary cause of the alarm, display the event summary on the faulty mate side using the report event-summary command (see the Cisco BTS 10200 Softswitch CLI Database for command details) and review the information in the event summary. The alarm is usually caused by a software problem. After confirming that the active side is processing traffic, restart the software on the mate side: log in to the mate platform as root user, enter the platform stop command, and then enter the platform start command. If a software restart does not resolve the problem, if the platform immediately goes faulty again, or if it does not start, contact Cisco TAC; it may be necessary to reinstall the software. If the alarm recurs, the operating system or the hardware may have a problem. Reboot the host machine, then reinstall and restart all applications. The reboot will bring down the other applications running on the machine. Contact Cisco TAC for assistance.
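The restart sequence on the mate can be sketched as follows. This is a guarded sketch: the platform command exists only on the BTS 10200 hosts, so the script checks for it before doing anything.

```shell
#!/bin/sh
# Hedged sketch: restart the platform software on the faulty mate side.
# Run as root on the mate host, after confirming the active side is
# processing traffic.
if command -v platform > /dev/null 2>&1; then
  platform stop    # bring the platform applications down
  platform start   # bring them back up
else
  echo "platform command not found; run this on the BTS 10200 mate host"
fi
```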
Changeover Failure—Maintenance (5)
The Changeover Failure alarm (major) indicates that a changeover failed. The alarm is issued when a change from an active processor to a standby processor fails. To correct the cause of the alarm, review the alarm information from the CLI log report. This alarm is usually caused by a software problem on the specific platform identified in the alarm report. Restart the platform identified in the alarm report. If the platform restart is not successful, reinstall the application on the platform, and then restart the platform again. If necessary, reboot the host machine on which the platform is located, then reinstall and restart all applications on that machine. If the faulty state is a recurring event, the operating system or the hardware may be defective. Contact Cisco TAC for assistance. It may also be helpful to gather information from the event and alarm reports that were issued before and after this alarm report.
Changeover Timeout—Maintenance (6)
The Changeover Timeout alarm (major) indicates that a changeover timed out. The cause of the alarm is that the system failed to change over within the allotted time period. Soon after this event is issued, one platform will go to the faulty state. This alarm is usually caused by a software problem on the specific platform identified in the alarm report. To correct the cause of the alarm, review the information from the CLI log report. Restart the platform identified in the alarm report. If the platform restart is not successful, reinstall the application for the platform, and then restart the platform again. If necessary, reboot the host machine on which the platform is located, then reinstall and restart all applications on that machine. If the faulty state is a recurring event, the operating system or hardware may be defective. Contact Cisco TAC for assistance. It may also be helpful to gather information from the event and alarm reports that were issued before and after this alarm report.
Mate Rejected Changeover—Maintenance (7)
The Mate Rejected Changeover alarm (major) indicates that the mate rejected the changeover. The primary cause of the alarm is that the mate is not in a stable state. To correct the primary cause of the alarm, enter the status command to get information on the two systems in the pair (primary and secondary EMS, CA or FS). The secondary cause of the alarm is that the mate detects that it is faulty during changeover and then rejects changeover.
To correct the secondary cause of the alarm, check to see whether the mate is faulty (not running); if it is, perform the corrective action steps listed in the "Mate Side Has Become Faulty—Maintenance (4)" section. Additionally, if both systems (local and mate) are still running, diagnose whether both instances are operating in a stable state (one active and the other standby). If both are in a stable state, wait 10 minutes and retry the control command. If the standby side is not in a stable state, bring down the standby side and restart the software using the platform stop and platform start commands. If the software restart does not resolve the problem, or if the problem occurs frequently, contact Cisco TAC; it may be necessary to reinstall the software. Additional operating system or hardware problems may also need to be resolved.
To continue troubleshooting the cause of the alarm, refer to Figure 7-1 if the forced switchover has been rejected by the secondary. Refer to Figure 7-2 if the primary failed to come up in the active state.
Figure 7-1 Corrective Action for Maintenance Event (7) (Mate Rejected Changeover)
Forced Switchover Rejected by Secondary
Figure 7-2 Corrective Action for Maintenance Event (7) (Mate Rejected Changeover)
Primary Failed To Come Up in Active State
|
Note The attempted changeover could be caused by a forced (operator) switch, or by the secondary instance rejecting the changeover as the primary is being brought up.
|
Mate Changeover Timeout—Maintenance (8)
The Mate Changeover Timeout alarm (major) indicates that the mate changeover timed out. The primary cause of the alarm is that the mate is faulty. This alarm is usually caused by a software problem on the specific mate platform identified in the alarm report. To correct the primary cause of the alarm, review the information from the CLI log report concerning the faulty mate. Restart the mate platform identified in the alarm report. If the mate platform restart is not successful, reinstall the application for the mate platform, and then restart the mate platform again. If necessary, reboot the host machine on which the mate platform is located, then reinstall and restart all applications on that machine.
Local Initialization Failure—Maintenance (9)
The Local Initialization Failure alarm (major) indicates that the local initialization has failed. When this alarm event report is issued, the system has failed and the reinitialization process has failed. To correct the cause of the alarm, check that the binary files are present for the unit (Call Agent, Feature Server, or Element Manager). If the files are not present, reinstall them from the initial or backup media, and then restart the failed device.
Local Initialization Timeout—Maintenance (10)
The Local Initialization Timeout alarm (major) indicates that the local initialization has timed out. When the event report is issued, the system has failed and the reinitialization process has failed. To correct the cause of the alarm, check that the binary files are present for the unit (Call Agent, Feature Server, or Element Manager). If the files are not present, reinstall them from the initial or backup media, and then restart the failed device.
Process Manager: Process Has Died—Maintenance (18)
The Process Manager: Process Has Died alarm (minor) indicates that a process has died. The primary cause of the alarm is that a software problem has occurred. If the problem persists or recurs, contact Cisco TAC.
Process Manager: Process Exceeded Restart Rate—Maintenance (19)
The Process Manager: Process Exceeded Restart Rate alarm (major) indicates that a process has exceeded the restart rate. This alarm is usually caused by a software problem on the specific platform identified in the alarm report. Soon after this event is issued, one platform will go to the faulty state. To correct the primary cause of the alarm, review the information from the CLI log report and restart the platform identified in the alarm report. If the platform restart is not successful, reinstall the application for the platform, and then restart the platform again. If necessary, reboot the host machine on which the platform is located, then reinstall and restart all applications on that machine.
If the faulty state is a commonly occurring event, the operating system or hardware may be the problem. Contact Cisco TAC for assistance. It may also be helpful to gather information from the event and alarm reports that were issued before and after this alarm report.
Lost Connection to Mate—Maintenance (20)
The Lost Connection to Mate alarm (major) indicates that the keepalive module connection to the mate has been lost. The primary cause of the alarm is that a network interface hardware problem has occurred. Soon after this event is issued, one platform may go to the faulty state. To correct the primary cause of this alarm, check whether the network interface is down; if so, restore the network interface and restart the software. The secondary cause of the alarm is a router problem; in that case, repair the router.
Network Interface Down—Maintenance (21)
The Network Interface Down alarm (major) indicates that the network interface has gone down. The primary cause of the alarm is a network interface hardware problem. Soon after this alarm event is issued, one platform may go to the faulty state, and subsequently the system goes faulty. To correct the primary cause of the alarm, check whether the network interface is down; if it is, restore the network interface and restart the software.
Process Manager: Process Failed to Complete Initialization—Maintenance (23)
The Process Manager: Process Failed to Complete Initialization alarm (major) indicates that a PMG process failed to complete initialization. The primary cause of this alarm is that the specified process failed to complete initialization during the restoral process. To correct the primary cause of the alarm, verify that the specified process's binary image is installed. If it is not installed, install it and restart the platform.
Process Manager: Restarting Process—Maintenance (24)
The Process Manager: Restarting Process alarm (minor) indicates that a PMG process is being restarted. The primary cause of the alarm is that a software problem caused the process to exit abnormally, and the process had to be restarted. If the problem recurs, contact Cisco TAC.
Process Manager: Going Faulty—Maintenance (26)
The Process Manager: Going Faulty alarm (major) indicates that a PMG process is going faulty. The primary cause of the alarm is that the system has been brought down or the system has detected a fault. If the alarm is not due to the operator intentionally bringing down the system, then the platform has detected a fault and has shut down. This is typically followed by the Maintenance (3) alarm event. To correct the primary cause of the alarm, use the corrective action procedures provided for the Maintenance (3) alarm event. Refer to the "Local Side Has Become Faulty—Maintenance (3)" section.
Process Manager: Binary Does Not Exist for Process—Maintenance (40)
The Process Manager: Binary Does Not Exist for Process alarm (critical) indicates that the binary image for a process does not exist. The primary cause of the alarm is that the platform was not installed correctly. To correct the primary cause of the alarm, reinstall the platform.
Number of Heartbeat Messages Received Is Less Than 50% Of Expected—Maintenance (42)
The Number of Heartbeat Messages Received Is Less Than 50% Of Expected alarm (major) indicates that the number of HB messages being received is less than 50 percent of the expected number. The primary cause of the alarm is that a network problem has occurred. To correct the primary cause of the alarm, fix the network problem.
Process Manager: Process Failed to Come Up In Active Mode—Maintenance (43)
The Process Manager: Process Failed to Come Up In Active Mode alarm (critical) indicates that the process has failed to come up in active mode. The primary cause of the alarm is a software or configuration problem. To correct the primary cause of the alarm, restart the platform. If the problem persists or recurs, contact Cisco TAC.
Process Manager: Process Failed to Come Up In Standby Mode—Maintenance (44)
The Process Manager: Process Failed to Come Up In Standby Mode alarm (critical) indicates that the process has failed to come up in standby mode. The primary cause of the alarm is a software or configuration problem. To correct the primary cause of the alarm, restart the platform. If the problem persists or recurs, contact Cisco TAC.
Application Instance State Change Failure—Maintenance (45)
The Application Instance State Change Failure alarm (major) indicates that an application instance state change failed. The primary cause of the alarm is that a switchover of an application instance failed because of a platform fault. To correct the primary cause of the alarm, retry the switchover; if the condition continues, contact Cisco TAC.
Thread Watchdog Counter Expired for a Thread—Maintenance (47)
The Thread Watchdog Counter Expired for a Thread alarm (critical) indicates that a thread watchdog counter has expired for a thread. The primary cause of the alarm is a software error. No action is required; the system automatically recovers or shuts down.
Index Table Usage Exceeded Minor Usage Threshold Level—Maintenance (48)
The Index Table Usage Exceeded Minor Usage Threshold Level alarm (minor) indicates that the IDX table usage has exceeded the minor threshold crossing usage level. The primary cause of the alarm is that call traffic has exceeded design limits. To correct the primary cause of the alarm, verify that traffic is within the rated capacity. The secondary cause of the alarm is that a software problem requiring additional analysis has occurred. To correct the secondary cause of the alarm, contact Cisco TAC.
Index Table Usage Exceeded Major Usage Threshold Level—Maintenance (49)
The Index Table Usage Exceeded Major Usage Threshold Level alarm (major) indicates that the IDX table usage has exceeded the major threshold crossing usage level. The primary cause of the alarm is that call traffic has exceeded design limits. To correct the primary cause of the alarm, verify that traffic is within the rated capacity. The secondary cause of the alarm is that a software problem requiring additional analysis has occurred. To correct the secondary cause of the alarm, contact Cisco TAC.
Index Table Usage Exceeded Critical Usage Threshold Level—Maintenance (50)
The Index Table Usage Exceeded Critical Usage Threshold Level alarm (critical) indicates that the IDX table usage has exceeded the critical threshold crossing usage level. The primary cause of the alarm is that call traffic has exceeded design limits. To correct the primary cause of the alarm, verify that traffic is within the rated capacity. The secondary cause of the alarm is that a software problem requiring additional analysis has occurred. To correct the secondary cause of the alarm, contact Cisco TAC.
A Process Exceeds 70% of Central Processing Unit Usage—Maintenance (51)
The A Process Exceeds 70% of Central Processing Unit Usage alarm (major) indicates that a process has exceeded the CPU usage threshold of 70 percent. The primary cause of the alarm is that a process has entered a state of erratic behavior. To correct the primary cause of the alarm, monitor the process and kill it if necessary.
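A generic way to spot the offending process is sketched below. This is a hedged sketch using standard ps(1) options rather than any BTS-specific tool (on a Solaris-based host, prstat gives the same view), and the PID in the comments is a placeholder.

```shell
#!/bin/sh
# Hedged sketch: list the top CPU consumers to find the erratic process.
ps -eo pid,pcpu,comm | sort -k2 -rn | head -5
# Once the offending process is confirmed (placeholder PID 12345):
#   kill 12345       # try a graceful termination first
#   kill -9 12345    # only as a last resort
```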
The Central Processing Unit Usage Is Over 90% Busy—Maintenance (53)
The Central Processing Unit Usage Is Over 90% Busy alarm (critical) indicates that the CPU usage is over the threshold level of 90 percent. The possible causes of the alarm are too numerous to list. Try to isolate the problem and contact Cisco TAC for assistance.
The Five Minute Load Average Is Abnormally High—Maintenance (55)
The Five Minute Load Average Is Abnormally High alarm (major) indicates that the five-minute load average is abnormally high. The primary cause of the alarm is that multiple processes are vying for processing time on the system, which is normal in a high-traffic situation such as heavy call processing or bulk provisioning. To correct the primary cause of the alarm, monitor the system to ensure all subsystems are performing normally. If they are, only lightening the effective load on the system will clear the situation. If they are not, verify which processes are running at abnormally high rates and provide the information to Cisco TAC.
Memory and Swap Are Consumed at Critical Levels—Maintenance (57)
|
Note Maintenance (57) is issued by the Cisco BTS 10200 system when memory consumption is greater than 95 percent (>95%) and swap space consumption is greater than 50 percent (>50%).
|
The Memory and Swap Are Consumed at Critical Levels alarm (critical) indicates that memory and swap file usage have reached critical levels. The primary cause of the alarm is that a process or multiple processes have consumed a critical amount of memory on the system and the operating system is utilizing a critical amount of the swap space for process execution. This can be a result of high call rates or bulk provisioning activity. To correct the primary cause of the alarm, monitor the system to ensure all subsystems are performing normally. If they are, only lightening the effective load on the system will clear the situation. If they are not, verify which process(es) are running at abnormally high rates and provide the information to Cisco TAC.
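The thresholds above can be checked with a short script such as the following. This is a hedged sketch that reads /proc/meminfo, so it assumes a Linux host; on a Solaris-based host, use vmstat and swap -s instead.

```shell
#!/bin/sh
# Hedged sketch: compare memory and swap consumption against the alarm
# thresholds (memory > 95% and swap > 50%).
awk '/^MemTotal:/  { mt=$2 } /^MemAvailable:/ { ma=$2 }
     /^SwapTotal:/ { st=$2 } /^SwapFree:/     { sf=$2 }
     END {
       printf "memory used: %.0f%%\n", (mt-ma)/mt*100
       if (st > 0) printf "swap used: %.0f%%\n", (st-sf)/st*100
     }' /proc/meminfo
# List the top memory consumers:
ps -eo pid,pmem,comm | sort -k2 -rn | head -5
```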
No Heartbeat Messages Received Through the Interface—Maintenance (61)
The No Heartbeat Messages Received Through the Interface alarm (critical) indicates that no HB messages are being received through the local network interface. The primary cause of the alarm is that the local network interface is down. To correct the primary cause of the alarm, restore the local network interface. The secondary cause of the alarm is that the mate network interface on the same subnet is faulty. To correct the secondary cause of the alarm, restore the mate network interface. The tertiary cause of the alarm is network congestion.
Link Monitor: Interface Lost Communication—Maintenance (62)
The Link Monitor: Interface Lost Communication alarm (major) indicates that an interface has lost communication. The primary cause of the alarm is that the interface cable has been pulled out or the interface has been set to "down" using the ifconfig command. To correct the primary cause of the alarm, restore the network interface. The secondary cause of the alarm is that the interface has no connectivity to any of the machines or routers.
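A quick interface check can be sketched as follows. IFACE is a hypothetical placeholder (substitute the interface named in the alarm), and the sketch falls back to the Linux ip tool when ifconfig is not available.

```shell
#!/bin/sh
# Hedged sketch: inspect the interface named in the alarm.
# "lo" is only a safe default for illustration.
IFACE="${IFACE:-lo}"
ifconfig "$IFACE" 2>/dev/null \
  || ip link show "$IFACE" 2>/dev/null \
  || echo "neither ifconfig nor ip is available on this host"
# If the interface was administratively downed, re-enable it with:
#   ifconfig "$IFACE" up
```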
Outgoing Heartbeat Period Exceeded Limit—Maintenance (63)
The Outgoing Heartbeat Period Exceeded Limit alarm (major) indicates that the outgoing HB period has exceeded the preset limit. The primary cause of the alarm is system performance degradation due to CPU overload or excessive I/O operations. To correct the primary cause of the alarm, identify the applications which are causing the system degradation through use of the CLI commands to verify if this is a persistent or on-going situation. Contact Cisco TAC with the gathered information.
Average Outgoing Heartbeat Period Exceeds Major Alarm Limit—Maintenance (64)
The Average Outgoing Heartbeat Period Exceeds Major Alarm Limit alarm (major) indicates that the average outgoing HB period has exceeded the major threshold crossing alarm limit. The primary cause of the alarm is system performance degradation due to CPU overload or excessive I/O operations. To correct the primary cause of the alarm, identify the applications which are causing the system degradation through use of the CLI commands to verify if this is a persistent or ongoing situation. Contact Cisco TAC with the gathered information.
Disk Partition Critically Consumed—Maintenance (65)
The Disk Partition Critically Consumed alarm (critical) indicates that the disk partition consumption has reached critical limits. The primary cause of the alarm is that one or more processes are writing extraneous data to the named partition. To correct the primary cause of the alarm, perform a disk clean-up and maintenance on the offending system.
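The disk clean-up typically starts by locating the largest consumers on the named partition. The sketch below assumes /var as an example partition; substitute the partition named in the alarm.

```shell
#!/bin/sh
# Hedged sketch: find what is filling the partition named in the alarm.
PART="${PART:-/var}"   # /var is an assumed example, not from the alarm text
df -k "$PART"                                        # confirm current usage
du -sk "$PART"/* 2>/dev/null | sort -rn | head -10   # largest consumers first
```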
Disk Partition Significantly Consumed—Maintenance (66)
The Disk Partition Significantly Consumed alarm (major) indicates that the disk partition consumption has reached the major threshold crossing level. The primary cause of the alarm is that one or more processes are writing extraneous data to the named partition. To correct the primary cause of the alarm, perform a disk clean-up and maintenance on the offending system.
The Free Inter-Process Communication Pool Buffers Below Minor Threshold—Maintenance (67)
The Free Inter-Process Communication Pool Buffers Below Minor Threshold alarm (minor) indicates that the number of free IPC pool buffers has fallen below the minor threshold crossing level. The primary cause of the alarm is that IPC pool buffers are not being freed properly by the application or the application is not able to keep up with the incoming IPC messaging traffic. To correct the primary cause of the alarm, contact Cisco TAC immediately.
The Free Inter-Process Communication Pool Buffers Below Major Threshold—Maintenance (68)
The Free Inter-Process Communication Pool Buffers Below Major Threshold alarm (major) indicates that the number of free IPC pool buffers has fallen below the major threshold crossing level. The primary cause of the alarm is that IPC pool buffers are not being freed properly by the application or the application is not able to keep up with the incoming IPC messaging traffic. To correct the primary cause of the alarm, contact Cisco TAC immediately.
The Free Inter-Process Communication Pool Buffers Below Critical Threshold—Maintenance (69)
The Free Inter-Process Communication Pool Buffers Below Critical Threshold alarm (critical) indicates that the number of free IPC pool buffers has fallen below the critical threshold crossing level. The primary cause of the alarm is that IPC pool buffers are not being freed properly by the application or the application is not able to keep up with the incoming IPC messaging traffic. To correct the primary cause of the alarm, contact Cisco TAC immediately.
The Free Inter-Process Communication Pool Buffer Count Below Minimum Required—Maintenance (70)
The Free Inter-Process Communication Pool Buffer Count Below Minimum Required alarm (critical) indicates that the count of free IPC pool buffers has fallen below the minimum required. The primary cause of the alarm is that IPC pool buffers are not being freed properly by the application or the application is not able to keep up with the incoming IPC messaging traffic. To correct the primary cause of the alarm, contact Cisco TAC immediately.
Local Domain Name System Server Response Too Slow—Maintenance (71)
The Local Domain Name System Server Response Too Slow alarm (major) indicates that the response time of the local DNS server is too slow. The primary cause of the alarm is that the local DNS server is too busy. To correct the primary cause of the alarm, check the local DNS server.
External Domain Name System Server Response Too Slow—Maintenance (72)
The External Domain Name System Server Response Too Slow alarm (major) indicates that the response time of the external DNS server is too slow. The primary cause of the alarm is that the network traffic level is high, or the name server is very busy. To correct the primary cause of the alarm, check the DNS server(s). Note that a daemon called monitorDNS.sh checks the DNS server approximately every minute; it issues the alarm if it cannot contact the DNS server or if the response is slow, and clears the alarm once it can contact the DNS server again.
External Domain Name System Server Not Responsive—Maintenance (73)
The External Domain Name System Server Not Responsive alarm (critical) indicates that the external DNS server is not responding to network queries. The primary cause of the alarm is that the DNS servers or the network may be down. To correct the primary cause of the alarm, check the DNS server(s). Note that a daemon called monitorDNS.sh checks the DNS server approximately every minute; it issues the alarm if it cannot contact the DNS server or if the response is slow, and clears the alarm once it can contact the DNS server again.
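The raise/clear behavior attributed to monitorDNS.sh can be sketched as follows. This is an illustrative model of the decision only, not the actual script's internals; the 2000 ms response budget is an assumed value. (A real probe could be issued with, for example, `dig @server example.com +time=2 +tries=1`.)

```shell
# Hypothetical sketch of a DNS monitor's raise/clear decision:
#   reachable   - 1 if the server answered the probe, 0 otherwise
#   response_ms - observed response time in milliseconds
# The 2000 ms budget is an assumption for illustration.
dns_alarm_state() {
    reachable=$1
    response_ms=$2
    if [ "$reachable" -eq 0 ]; then
        echo "raise: server not responsive"
    elif [ "$response_ms" -gt 2000 ]; then
        echo "raise: response too slow"
    else
        echo "clear"
    fi
}
```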
Local Domain Name System Service Not Responsive—Maintenance (74)
The Local Domain Name System Service Not Responsive alarm (critical) indicates that the local DNS server is not responding to network queries. The primary cause of the alarm is that the local DNS service may be down. To correct the primary cause of the alarm, check the local DNS server.
Mate Time Differs Beyond Tolerance—Maintenance (77)
The Mate Time Differs Beyond Tolerance alarm (major) indicates that the mate's time differs from the local time beyond the tolerance. The primary cause of the alarm is that time synchronization is not working. To correct the primary cause of the alarm, change the UNIX time on the faulty or standby side. If the change is made on the standby side, stop the platform first.
Average Outgoing Heartbeat Period Exceeds Critical Limit—Maintenance (82)
The Average Outgoing Heartbeat Period Exceeds Critical Limit alarm (critical) indicates that the average outgoing HB period has exceeded the critical limit threshold. The primary cause of the alarm is that the CPU is overloaded. To correct the primary cause of the alarm, shut down the platform.
Swap Space Below Minor Threshold—Maintenance (83)
The Swap Space Below Minor Threshold alarm (minor) indicates that the swap space has fallen below the minor threshold level. The primary cause of the alarm is that too many processes are running. To correct the primary cause of the alarm, stop the proliferation of executables (processes and scripts). The secondary cause of the alarm is that /tmp or /var/run is being overused. To correct the secondary cause of the alarm, clean up the file systems.
Swap Space Below Major Threshold—Maintenance (84)
The Swap Space Below Major Threshold alarm (major) indicates that the swap space has fallen below the major threshold level. The primary cause of the alarm is that too many processes are running. To correct the primary cause of the alarm, stop the proliferation of executables (processes and shell procedures). The secondary cause of the alarm is that /tmp or /var/run is being overused. To correct the secondary cause of the alarm, clean up the file systems.
Swap Space Below Critical Threshold—Maintenance (85)
The Swap Space Below Critical Threshold alarm (critical) indicates that the swap space has fallen below the critical threshold level. The primary cause of the alarm is that too many processes are running. To correct the primary cause of the alarm, restart the Cisco BTS 10200 software or reboot the system. The secondary cause of the alarm is that /tmp or /var/run is being overused. To correct the secondary cause of the alarm, clean up the file systems.
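The three swap-space alarms describe free swap crossing successively lower levels. A minimal sketch of that classification, assuming illustrative percentage thresholds of 25, 15, and 5 percent (the platform's actual levels are not stated here):

```shell
# Sketch: classify free swap as a percentage of total swap against three
# threshold levels. The 25/15/5 percent levels are assumptions.
swap_alarm_level() {
    free_mb=$1; total_mb=$2
    pct=$((free_mb * 100 / total_mb))
    if [ "$pct" -lt 5 ]; then echo "critical"
    elif [ "$pct" -lt 15 ]; then echo "major"
    elif [ "$pct" -lt 25 ]; then echo "minor"
    else echo "clear"
    fi
}
# On a Solaris host, current swap usage can be read with `swap -s`, and
# overused file systems located with `du -sk /tmp /var/run`.
```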
System Health Report Collection Error—Maintenance (86)
The System Health Report Collection Error alarm (minor) indicates that an error occurred during the collection of data for the System Health Report. To correct the cause of the alarm, contact Cisco TAC.
Status Update Process Request Failed—Maintenance (87)
The Status Update Process Request Failed alarm (major) indicates that the status update process request failed. The primary cause of the alarm is that the status command is not working properly. To correct the primary cause of the alarm, use the CLI to verify that the status command is working properly.
Status Update Process Database List Retrieval Error—Maintenance (88)
The Status Update Process Database List Retrieval Error alarm (major) indicates that the status update process DB had a retrieval error. The primary cause of the alarm is that the Oracle DB is not working properly. To correct the primary cause of the alarm, contact Cisco TAC.
Status Update Process Database Update Error—Maintenance (89)
The Status Update Process Database Update Error alarm (major) indicates that the status update process DB had an update error. The primary cause of the alarm is that the MySQL DB on the EMS is not working properly. To correct the primary cause of the alarm, contact Cisco TAC.
Disk Partition Moderately Consumed—Maintenance (90)
The Disk Partition Moderately Consumed alarm (minor) indicates that the disk partition is moderately consumed. The primary cause of the alarm is that a process or processes are writing extraneous data to the named partition. To correct the primary cause of the alarm, perform a disk clean-up and maintenance on the offending system.
Internet Protocol Manager Configuration File Error—Maintenance (91)
The Internet Protocol Manager Configuration File Error alarm (critical) indicates that the IPM configuration file has an error. The primary cause of the alarm is an IPM configuration file error. To correct the primary cause of the alarm, check the IPM configuration file (ipm.cfg) for incorrect syntax.
Internet Protocol Manager Initialization Error—Maintenance (92)
The Internet Protocol Manager Initialization Error alarm (major) indicates that the IPM process failed to initialize correctly. The primary cause of the alarm is that IPM failed to initialize correctly. To correct the primary cause of the alarm, check the "reason" dataword to identify and correct the cause of the alarm.
Internet Protocol Manager Interface Failure—Maintenance (93)
The Internet Protocol Manager Interface Failure alarm (major) indicates that an IPM interface has failed. The primary cause of the alarm is that IPM failed to create a logical interface. To correct the primary cause of the alarm, check the "reason" dataword to identify and correct the cause of the alarm.
Inter-Process Communication Input Queue Entered Throttle State—Maintenance (97)
The Inter-Process Communication Input Queue Entered Throttle State alarm (critical) indicates that the indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is using up too much of the IPC memory pool resource. To correct the cause of the alarm, contact Cisco TAC.
Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark—Maintenance (98)
The Inter-Process Communication Input Queue Depth at 25% of Its Hi-Watermark alarm (minor) indicates that the IPC input queue depth has reached 25 percent of its hi-watermark. The primary cause of the alarm is that the indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is at 25% of the level at which it will enter the throttle state. To correct the primary cause of the alarm, contact Cisco TAC.
Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark—Maintenance (99)
The Inter-Process Communication Input Queue Depth at 50% of Its Hi-Watermark alarm (major) indicates that the IPC input queue depth has reached 50 percent of its hi-watermark. The primary cause of the alarm is that the indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is at 50% of the level at which it will enter the throttle state. To correct the primary cause of the alarm, contact Cisco TAC.
Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark—Maintenance (100)
The Inter-Process Communication Input Queue Depth at 75% of Its Hi-Watermark alarm (critical) indicates that the IPC input queue depth has reached 75 percent of its hi-watermark. The primary cause of the alarm is that the indicated thread is not able to process its IPC input messages fast enough; hence, the input queue has grown too large and is at 75% of the level at which it will enter the throttle state. To correct the primary cause of the alarm, contact Cisco TAC.
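Maintenance (98) through (100) describe the same queue depth crossing 25, 50, and 75 percent of the hi-watermark at which throttling (Maintenance 97) begins. That escalation can be sketched as follows; the function name is illustrative.

```shell
# Sketch of the hi-watermark escalation for an IPC input queue:
# severity rises as depth crosses 25%, 50%, and 75% of the hi-watermark.
queue_watermark_alarm() {
    depth=$1; hi_watermark=$2
    pct=$((depth * 100 / hi_watermark))
    if [ "$pct" -ge 75 ]; then echo "critical (75%)"
    elif [ "$pct" -ge 50 ]; then echo "major (50%)"
    elif [ "$pct" -ge 25 ]; then echo "minor (25%)"
    else echo "clear"
    fi
}
```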
Switchover in Progress—Maintenance (101)
The Switchover in Progress alarm (critical) indicates that a system switchover is in progress. This alarm is issued when a system switchover is in progress either due to manual switchover (through CLI command), failover switchover, or automatic switchover. No action needs to be taken; the alarm is cleared when switchover is complete. Service is temporarily suspended for a short period of time during this transition.
Thread Watchdog Counter Close to Expiry for a Thread—Maintenance (102)
The Thread Watchdog Counter Close to Expiry for a Thread alarm (critical) indicates that the thread watchdog counter is close to expiry for a thread. The primary cause of the alarm is that a software error has occurred. No further action is required; the Cisco BTS 10200 system will automatically recover or shut down.
Central Processing Unit Is Offline—Maintenance (103)
The Central Processing Unit Is Offline alarm (critical) indicates that the CPU is offline. The primary cause of the alarm is operator action. To correct the primary cause of the alarm, restore the CPU or contact Cisco TAC.
No Heartbeat Messages Received Through Interface From Router—Maintenance (107)
The No Heartbeat Messages Received Through Interface From Router alarm (critical) indicates that no HB messages are being received through the interface from the router. The primary cause of the alarm is that the router is down. To correct the primary cause of the alarm, restore router functionality. The secondary cause of the alarm is that the connection to the router is down. To correct the secondary cause of the alarm, restore the connection to the router. The ternary cause of the alarm is network congestion.
Five Successive Log Files Cannot Be Transferred—Maintenance (109)
The Five Successive Log Files Cannot Be Transferred alarm (major) indicates that five successive log files cannot be transferred to the archive system. The primary cause of the alarm is that there is a problem accessing the external archive system. To correct the primary cause of the alarm, check the external archive system. The secondary cause of the alarm is that the network to the external archive system is down. To correct the secondary cause of the alarm, check the status of the network.
Access To Log Archive Facility Configuration File Failed or File Corrupted—Maintenance (110)
The Access To Log Archive Facility Configuration File Failed or File Corrupted alarm (major) indicates that access to the LAF configuration file failed or the file is corrupted. The primary cause of the alarm is that the LAF file is corrupted. To correct the primary cause of the alarm, check the LAF configuration file. The secondary cause of the alarm is that the LAF file is missing. To correct the secondary cause of the alarm, check for the presence of the LAF configuration file.
Cannot Log In to External Archive Server—Maintenance (111)
The Cannot Log In to External Archive Server alarm (critical) indicates that the user cannot log in to the external archive server. The primary cause of the alarm is that no authorization access is set up in the external archive server for that user from the Cisco BTS 10200. To correct the primary cause of the alarm, set up the authorization. The secondary cause of the alarm is that the external archive server is down. To correct the secondary cause of the alarm, ping the external archive server, and try to bring it up. The ternary cause of the alarm is that the network is down. To correct the ternary cause of the alarm, check the network.
Congestion Status—Maintenance (112)
The Congestion Status alarm (major) indicates that a change in the system overload level has occurred. If the reported level remains continuously high, adjust the system load or configuration.
Side Automatically Restarting Due to Fault—Maintenance (117)
The Side Automatically Restarting Due to Fault alarm (critical) indicates that the platform has shut down to the OOS-FAULTY state, and is in the process of being automatically restarted. Additionally, the alarm indicates that an automatic restart is pending and at what time it will be attempted. To troubleshoot and correct the cause of the Side Automatically Restarting Due to Fault alarm, capture the debugging information, especially from the saved.debug directory.
Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server—Maintenance (118)
The Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server alarm (critical) indicates that the zone transfer between primary DNS and secondary DNS failed. To troubleshoot and correct the cause of the Domain Name Server Zone Database Does Not Match Between the Primary Domain Name Server and the Internal Secondary Authoritative Domain Name Server alarm, check the system log and monitor the DNS traffic through port 53 (default port for DNS).
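Beyond watching port 53 traffic, one way to confirm whether a zone transfer succeeded is to compare the SOA serial numbers reported by the primary and secondary servers. The server and zone names below are placeholders; the comparison helper is an illustrative sketch.

```shell
# The SOA serial on each server can be fetched with dig, for example:
#   dig @primary-dns example.com SOA +short
#   dig @secondary-dns example.com SOA +short
# The serial is the third field of the +short SOA answer.
# Illustrative comparison of the two serials:
zones_in_sync() {
    primary_serial=$1; secondary_serial=$2
    if [ "$primary_serial" = "$secondary_serial" ]; then
        echo "in-sync"
    else
        echo "mismatch"
    fi
}
```

If the serials differ and stay different across the zone's refresh interval, the transfer is failing and the system log should show why.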
Periodic Shared Memory Database Back Up Failure—Maintenance (119)
The Periodic Shared Memory Database Back Up Failure alarm (critical) indicates that a periodic shared memory database back up has failed. The primary cause of the Periodic Shared Memory Database Back Up Failure alarm is high disk usage. To correct the primary cause of the alarm, check disk usage.
Periodic Shared Memory Sync Failure—Maintenance (126)
The Periodic Shared Memory Sync Failure alarm (critical) indicates that the periodic shared-memory synchronization write to disk has failed. To troubleshoot and correct the cause of the Periodic Shared Memory Sync Failure alarm, check the Cisco BTS 10200 system for the cause of the failure, correct it, and then verify that the next periodic shared-memory synchronization to disk is successfully completed by monitoring the Cisco BTS 10200 system for a Periodic Shared Memory Sync Completed informational event.
Manual Recovery of OMS HUB Queue Loss—Maintenance (127)
The Manual Recovery of OMS HUB Queue Loss alarm (critical) indicates that, due to a network or socket connection issue, the OMS queue is lost, causing a communication problem between the Cisco BTS 10200 processes. To troubleshoot and correct the cause of the Manual Recovery of OMS HUB Queue Loss alarm, the operator needs to run a manual clean-up procedure such as pkill smg3 or pkill hub3 on all four nodes. It is recommended to perform this task during a maintenance window.
This procedure should be run when critical queues (mentioned below) are lost:
•BULK_OAM—Indicates provisioning queue.
•SCADM—Indicates status or control command queue.
•TMProvision—Indicates measurement-related changes (used by the measurement_prov CLI command).
•QUEUE_THREAD_FSAINxxx—Indicates queue thread for sending AIN provisioning data.
•QUEUE_THREAD_FSPTCxxx—Indicates queue thread for sending PTC provisioning data.
•QUEUE_THREAD_CAxxx—Indicates queue thread for sending CA provisioning data.
•HANDSET_ACK—Indicates handset related queue.
•TrafficGA—Indicates measurement data (from CA to EMS).
•SystemManager—Indicates the queue used for system-related commands such as block or unblock.
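The clean-up procedure above can be sketched as a loop over the four nodes. The node names, the choice between pkill smg3 and pkill hub3 per node, and the use of ssh for distribution are illustrative assumptions; run the appropriate command on each node per your site's procedure.

```shell
# Hypothetical sketch: emit the clean-up commands for each node.
# Node names are placeholders; whether smg3 or hub3 applies depends on
# the node type (see the procedure above).
cleanup_commands() {
    for node in "$@"; do
        printf 'ssh %s pkill smg3\n' "$node"   # or: pkill hub3
    done
}

cleanup_commands priems secems prica secca
```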