PDG/TTG Overview

 
 
This chapter contains general overview information about the PDG/TTG (Packet Data Gateway/Tunnel Termination Gateway), including:
 
 
Product Description
The Cisco® ASR 5000 Chassis provides 3GPP wireless carriers with a flexible solution that functions as a PDG/TTG (Packet Data Gateway/Tunnel Termination Gateway) in 3GPP UMTS wireless voice and data networks. The PDG/TTG consists of new software for the ASR 5000.
 
The PDG/TTG enables mobile operators to provide Fixed Mobile Convergence (FMC) services to subscribers with dual-mode handsets and dual-mode access cards via WiFi access points. The PDG/TTG makes it possible for operators to provide secure access to the operator’s 3GPP network from a non-secure network, reduce the load on the macro wireless network, enhance in-building wireless coverage, and make use of existing backhaul infrastructure to reduce the cost of carrying wireless calls.
This PDG/TTG software release provides TTG functionality. The TTG is a network element that enables 3GPP PDG functionality for existing GGSN deployments. The TTG and the subset of existing GGSN functions work together to provide PDG functionality to the subscriber UEs in the WLAN.
Important: This PDG/TTG software release provides TTG functionality only. PDG functionality is not supported in this release.
 
Summary of TTG Features and Functions
The TTG features and functions include:
 
 
Product Specifications
The following information is located in this section:
 
 
 
Licenses
The PDG/TTG is a licensed product. For information about PDG/TTG licenses, contact your sales representative.
 
Hardware Requirements
Information in this section describes the hardware required to run the PDG/TTG software.
 
Platforms
The PDG/TTG operates on the ASR 5000.
 
Components
The following application and line cards are required to support the PDG/TTG on an ASR 5000:
 
 
System Management Cards (SMCs): Provide full system control and management of all cards within the ASR 5000. Up to two SMCs can be installed; one active, one redundant.
Packet Services Cards (PSCs/PSC2s): Within the ASR 5000, PSCs/PSC2s provide high-speed, multi-threaded PDP context processing capabilities for 2.5G SGSN, 3G SGSN, and GGSN services. Up to 14 PSCs/PSC2s can be installed, allowing for multiple active and/or redundant cards.
Switch Processor Input/Outputs (SPIOs): Installed in the upper-rear chassis slots directly behind the SMCs, SPIOs provide connectivity for local and remote management and central office (CO) alarms. Up to two SPIOs can be installed; one active, one redundant.
Ethernet 10/100 and/or Ethernet 1000 Line Cards: Installed directly behind PSCs, these cards provide the physical interfaces to elements in the operator’s network. Up to 26 line cards can be installed for a fully loaded system with 13 active PSCs/PSC2s: 13 in the upper-rear slots and 13 in the lower-rear slots for redundancy. Redundant PSCs/PSC2s do not require line cards.
Redundancy Crossbar Cards (RCCs): Installed in the lower-rear chassis slots directly behind the SMCs, RCCs utilize 5 Gbps serial links to ensure connectivity between Ethernet 10/100 or Ethernet 1000 line cards and every PSC/PSC2 in the system for redundancy. Two RCCs can be installed to provide redundancy for all line cards and PSCs/PSC2s.
Important: Additional information pertaining to each of the application and line cards required to support GPRS/UMTS wireless data services is located in the Hardware Platform Overview chapter of the Product Overview Guide.
 
Operating System Requirements
 
The PDG/TTG is available for the ASR 5000 running StarOS Release 9.0 or later.
 
Network Deployment(s) and Interfaces
 
This section describes the PDG/TTG as it functions as a TTG in a GPRS/UMTS data network.
 
The TTG in a GPRS/UMTS Data Network
The TTG is a GPRS/UMTS network element that enables the implementation of PDG functionality in existing GGSN deployments. It achieves this by using a subset of the Gn reference point called the Gn' (Gn prime) reference point.
The Gn' reference point provides the means by which GPRS mobile operators can implement PDG functionality by re-using existing infrastructure, including currently deployed GGSNs, to offer new services to current subscribers.
The following figure shows a PDG implementation that uses existing GGSN functionality. This implementation includes the PDG/TTG functioning as a TTG and a currently-deployed GGSN. In this implementation, only a subset of the GGSN functionality is used.
 
The TTG in a PDG Implementation
In the implementation above, the TTG terminates an IPSec tunnel for each WLAN UE subscriber session established over the Wu reference point. The TTG also establishes a corresponding GTP (GPRS Tunneling Protocol) tunnel over the Gn' reference point to the GGSN. The TTG and the subset of GGSN functions work together to provide PDG functionality to the UEs in the WLAN.
GTP (GPRS Tunneling Protocol) is the primary protocol used in the GPRS core network. It allows subscribers in a UMTS network to move from place to place while continuing to connect to the Internet as if from one location at the GGSN. It does this by carrying the subscriber’s data from the subscriber’s current SGSN to the GGSN that is handling the subscriber’s session.
The TTG functions as an SGSN in the GPRS/UMTS network to provide an SGTP (SGSN GPRS Tunneling Protocol) service. The SGTP service enables the TTG to use GTP over the Gn' interface to carry packet data between itself and the GGSN.
 
TTG Logical Network Interfaces (Reference Points)
The following table provides descriptions of the logical network interfaces supported by the TTG in a GPRS/UMTS data network.
TTG Logical Network Interfaces
 
Features and Functionality
 
This section describes the features and functions supported by the PDG/TTG software.
The following features are supported and described in this section:
 
 
PDG Service
In this software release, the PDG service provides TTG functionality, enabling the implementation of PDG functionality in existing GGSN deployments.
 
During configuration, you create the PDG service in a PDG context, which is a routing domain on the ASR 5000. PDG context and service configuration includes the following main steps:
Configure the IPv4 address for the service: This is the IP address of the TTG to which the UEs in the WLAN attempt to connect. The UEs send IKEv2 messages to this IP address, and the TTG uses the IP address to listen for these messages.
Configure the name of the crypto template for IKEv2/IPSec: A crypto template is used to define an IKEv2/IPSec policy. It includes IKEv2 and IPSec parameters for keepalive, lifetime, NAT-T, and cryptographic and authentication algorithms. There must be one crypto template per PDG service.
The crypto template includes the following:
The name of the EAP profile: The EAP profile defines the EAP methods and associated parameters.
Multiple authentication support: Multiple authentication is specified as a part of crypto template configuration.
IKEv2 and IPSec transform sets: A transform set defines the negotiable algorithms for IKE SAs and Child SAs.
The setup timeout value: This parameter specifies the session setup timeout timer value. The TTG terminates a UE connection attempt if the UE does not establish a successful connection within the specified timeout period.
Max-sessions: This parameter sets the maximum number of subscriber sessions allowed by this PDG service.
SGTP context and service: You create an SGTP context and service to enable GPRS Tunneling Protocol (GTP) on the TTG to use for sending packet data between the TTG and the GGSN.
 
TTG Mode
TTG mode uses IKEv2/IPsec tunnels to deliver packet data services over untrusted WiFi access networks with connectivity to the Internet or managed networks.
In TTG mode, the system terminates an IPSec tunnel for each WLAN UE subscriber session established over the Wu reference point. The TTG also establishes a corresponding GTP (GPRS Tunneling Protocol) tunnel over the Gn' reference point to the GGSN. The TTG and a subset of GGSN functions work together to provide PDG functionality to the WLAN UEs.
 
IP Security (IPSec) Encryption
The PDG/TTG supports IKEv2 and IPSec encryption using IPv4 addressing. IKEv2 and IPSec encryption enables network domain security for all IP packet-switched networks in order to provide confidentiality, integrity, authentication, and anti-replay protection. These capabilities are ensured through the use of cryptographic techniques.
IKEv2 and IP Security (IPSec) encryption support includes:
IKEv2 encryption protocols: AES-CBC with 128 bits, AES-CBC with 256 bits, 3DES-CBC, and DES-CBC
IKEv2 pseudo-random functions: PRF-HMAC-SHA1, PRF-HMAC-MD5
IKEv2 integrity: HMAC-SHA1-96, HMAC-MD5
IPSec ESP (Encapsulating Security Payload) encryption: AES-CBC with 128 bits, AES-CBC with 256 bits, 3DES-CBC, and DES-CBC
IPSec integrity: HMAC-SHA1-96, HMAC-MD5
 
Multiple Digital Certificate Selection Based on APN
Selecting digital certificates based on APN allows you to apply digital certificates per the requirements of each APN and associated packet data network. A digital certificate is an electronic credential that establishes a subscriber’s identity when doing business or other transactions on the Internet. Some digital certificates conform to ITU-T standard X.509 for a Public Key Infrastructure (PKI) and Privilege Management Infrastructure (PMI). X.509 specifies, among other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm.
During session establishment, the PDG/TTG can select a digital certificate from multiple certificates based on the APN (Access Point Name). The selected certificate is associated with the APN that the WLAN UE includes in the IDr payload of the first IKE_AUTH_REQ message.
When configuring APN-based certificate selection, ensure that the certificate names match the associated APNs exactly. The PDG/TTG can then examine each APN received in the IDr payload and select the correct certificate.
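The selection logic above can be sketched as a simple lookup keyed by APN. This is an illustrative sketch only; the store, names, and default-fallback behavior are assumptions, not StarOS APIs.

```python
# Hypothetical sketch of APN-based certificate selection. The APN arrives
# in the IDr payload of the first IKE_AUTH_REQ; certificate names must
# match the associated APNs exactly. All names here are illustrative.

DEFAULT_CERT = "default-cert"  # assumed fallback, not from the document

cert_store = {
    "internet.example.com": "cert-internet",
    "ims.example.com": "cert-ims",
}

def select_certificate(idr_apn: str) -> str:
    """Return the certificate configured for this APN, or the default."""
    return cert_store.get(idr_apn, DEFAULT_CERT)
```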
The PDG/TTG generates an SNMP notification when the certificate is within 30 days of expiration, and then approximately once a day until a new certificate is provided. Operators must generate a new certificate and then configure it using the system’s CLI. The new certificate is then used for all new sessions.
 
Subscriber Traffic Policing for IPSec Access
Traffic policing allows you to manage bandwidth usage on the network and limit bandwidth allowances to subscribers.
Traffic policing enables the configuring and enforcing of bandwidth limitations on individual subscribers of a particular traffic class in 3GPP service. Bandwidth enforcement is configured and enforced independently in the downlink and uplink directions.
When configured in the Subscriber Configuration Mode of the system’s CLI, the PDG/TTG performs traffic policing. However, if the GGSN changes the QoS via an Update PDP Context Request, the PDG/TTG uses the QoS values from the GGSN.
A Token Bucket Algorithm (a modified trTCM) [RFC2698] is used to implement the traffic policing feature. The following criteria are used when determining how to mark a packet:
Committed Data Rate (CDR): The guaranteed rate (in bits per second) at which packets can be transmitted/received for the subscriber during the sampling interval. Note that the committed (or guaranteed) data rate does not apply to the Interactive and Background traffic classes.
Peak Data Rate (PDR): The maximum rate (in bits per second) that subscriber packets can be transmitted/received for the subscriber during the sampling interval.
Using negotiated QoS data rates, the system calculates the burst size, which is the maximum number of bytes that can be transmitted/received for the subscriber during the sampling interval for both committed and peak rate conditions. The committed burst size (CBS) and peak burst size (PBS) for each subscriber depends on the guaranteed bit rate (GBR) and maximum bit rate (MBR) respectively. This represents the maximum number of tokens that can be placed in the subscriber’s “bucket”. The burst size is the bucket size used by the Token Bucket Algorithm.
Tokens are removed from the subscriber’s bucket based on the size of the packets being transmitted/received. Every time a packet arrives, the system determines how many tokens need to be added (returned) to a subscriber’s CBS (and PBS) bucket. This value is derived by computing the product of the time difference between incoming packets and the CDR (or PDR). The computed value is then added to the tokens remaining in the subscriber’s CBS (or PBS) bucket. The total number of tokens cannot be greater than the burst size; if it is, the number is set equal to the burst size.
After passing through the Token Bucket Algorithm, the packet is internally classified with a color: green if it conforms to both the committed and peak rates, yellow if it exceeds the committed rate but not the peak rate, and red if it exceeds the peak rate.
The system can be configured with actions to take for red and yellow packets. Any of the following actions may be specified:
Drop: The offending packet is discarded.
Transmit: The offending packet is passed.
Lower the IP Precedence: The packet's ToS octet is set to “0”, thus downgrading it to Best Effort, prior to passing the packet.
Different actions can be specified for red and yellow, as well as for uplink and downlink directions and different 3GPP traffic classes.
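The marking behavior described above can be sketched as a two-rate token bucket. This is a minimal sketch of the modified trTCM, not the PDG/TTG implementation; rates are in bytes per second for simplicity (the document's CDR/PDR are in bits per second), and all names are illustrative.

```python
import time

class TwoRateTokenBucket:
    """Sketch of the modified trTCM (RFC 2698) policer described above."""

    def __init__(self, cdr, cbs, pdr, pbs, start=None):
        self.cdr, self.cbs = cdr, cbs        # committed data rate / burst size
        self.pdr, self.pbs = pdr, pbs        # peak data rate / burst size
        self.tc, self.tp = cbs, pbs          # both buckets start full
        self.last = time.monotonic() if start is None else start

    def mark(self, size, now=None):
        """Classify a packet of `size` bytes as green, yellow, or red."""
        now = time.monotonic() if now is None else now
        # Tokens returned = time since last packet * rate, capped at burst size.
        elapsed = now - self.last
        self.tc = min(self.cbs, self.tc + elapsed * self.cdr)
        self.tp = min(self.pbs, self.tp + elapsed * self.pdr)
        self.last = now
        if self.tp < size:
            return "red"                     # exceeds the peak data rate
        self.tp -= size
        if self.tc < size:
            return "yellow"                  # exceeds the committed rate only
        self.tc -= size
        return "green"                       # conforms to both rates
```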
 
DSCP Marking for IPSec Access
The DSCP (Differentiated Service Code Point) marking feature provides support for more granular configuration of DSCP marking.
The PDG/TTG functioning as a TTG can perform DSCP marking of packets sent over the Wu interface in the downlink direction to the WLAN UEs and over the Gn' interface in the uplink direction to the GGSN.
In the PDG Service Configuration Mode of the system’s CLI, you use the ip qos-dscp command to control DSCP markings for downlink packets sent over the Wu interface in IPSec tunnels, and use the ip gnp-qos-dscp command to control DSCP markings for uplink packets sent over the Gn' interface in GTP tunnels.
The Diffserv markings are applied to the IP header of every transmitted subscriber data packet. DSCP levels can be assigned to specific traffic patterns in order to ensure that the data packets are delivered according to the precedence with which they are tagged. The four traffic classes have the following order of precedence: background (lowest), interactive, streaming, and conversational (highest).
For the interactive traffic class, the PDG/TTG supports per-gateway service and per-APN configurable DSCP marking for uplink and downlink direction based on Allocation/Retention Priority in addition to the current priorities.
The following matrix can be used to determine the Diffserv markings used based on the configured traffic class and Allocation/Retention Priority:
Default DSCP Value Matrix
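Since the default matrix is product-specific, the class-to-DSCP mapping can be illustrated with conventional DiffServ code points. The values below are assumptions for illustration only, not the PDG/TTG defaults; the ARP handling is likewise a crude sketch of the per-priority behavior described above.

```python
# Illustrative sketch: map 3GPP traffic classes to conventional DiffServ
# code points. These values are assumptions, not the product's defaults.

DSCP_BY_TRAFFIC_CLASS = {
    "conversational": 0x2E,  # EF   - highest precedence
    "streaming":      0x22,  # AF41
    "interactive":    0x1A,  # AF31
    "background":     0x00,  # BE   - lowest precedence
}

def dscp_for(traffic_class: str, arp: int = 1) -> int:
    """Return a DSCP value; interactive traffic may further vary by
    Allocation/Retention Priority (ARP), sketched here crudely."""
    dscp = DSCP_BY_TRAFFIC_CLASS[traffic_class]
    if traffic_class == "interactive" and arp > 1:
        dscp = 0x12  # AF21 for lower-priority interactive traffic (assumed)
    return dscp
```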
 
WLAN Access Control
The PDG/TTG enables WLAN access control by enabling you to limit the number of IKEv2/IPSec tunnels per subscriber session.
In the PDG Service Configuration Mode of the system’s CLI, the max-tunnels-per-ue command can be used to specify the maximum number of IKEv2/IPSec tunnels per subscriber session.
The number of tunnels per UE is limited by the NSAPI (Network Service Access Point Identifier) range, which is 5 to 15. Hence the maximum number of tunnels is configurable in the range of 1 to 11, with a default of 11.
 
RADIUS and Diameter Support
RADIUS and Diameter support on the PDG/TTG provides a mechanism for performing authorization, authentication, and accounting (AAA) for subscribers. The benefits of using AAA are:
 
The Remote Authentication Dial-In User Service (RADIUS) and Diameter protocols can be used to provide AAA functionality for subscribers. The PDG/TTG supports EAP authentication based on both RADIUS and Diameter protocols.
The AAA functionality on the PDG/TTG provides a wide range of configuration options via AAA server groups, which allow a number of RADIUS/Diameter parameters to be configured in support of the PDG service.
Currently, two types of authentication load-balancing methods are supported: first-server and round-robin. The first-server method sends requests to the highest priority active server. A request will be sent to a different server only if the highest priority server is not reachable. With the round-robin method, requests are sent to all active servers in a round-robin fashion.
The PDG/TTG can detect the status of the AAA servers. Status checking is enabled by configuration in the AAA Server Group Configuration Mode of the system CLI. Once an AAA server is detected to be down, it is kept in the down state up to a configurable duration of time called the dead-time period. After the dead-time period expires, the AAA server is eligible to be retried. If a subsequent request is directed to that server and the server properly responds to the request, the system makes the server active again.
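The two load-balancing methods and the dead-time behavior described above can be sketched as follows. This is an illustrative sketch under assumed semantics, not a StarOS API; all names are hypothetical.

```python
import itertools
import time

class AAAServerGroup:
    """Sketch of first-server and round-robin selection with dead-time."""

    def __init__(self, servers, method="first-server", dead_time=60.0):
        self.servers = servers               # ordered highest priority first
        self.method = method
        self.dead_time = dead_time
        self.down_since = {}                 # server -> time marked down
        self._rr = itertools.cycle(servers)

    def _eligible(self, server, now):
        t = self.down_since.get(server)
        # A down server becomes eligible again after the dead-time expires.
        return t is None or (now - t) >= self.dead_time

    def pick(self, now=None):
        now = time.monotonic() if now is None else now
        if self.method == "first-server":
            # Highest-priority eligible server wins.
            for s in self.servers:
                if self._eligible(s, now):
                    return s
            return None
        # round-robin over eligible servers
        for _ in range(len(self.servers)):
            s = next(self._rr)
            if self._eligible(s, now):
                return s
        return None

    def mark_down(self, server, now=None):
        self.down_since[server] = time.monotonic() if now is None else now

    def mark_up(self, server):
        # A proper response makes the server active again.
        self.down_since.pop(server, None)
```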
The PDG/TTG generates accounting messages on successful session establishment. For a TTG session, the system creates an IPsec SA for a subscriber session after it creates the GTP tunnel to the GGSN over the Gn' interface. The TTG sends an accounting START message to the AAA server after successful completion of both GTP tunnel creation on the Gn' interface and IPsec SA creation on the Wu interface.
Important: For more information on AAA configuration, refer to the AAA Interface Administration and Reference.
 
EAP Fast Re-authentication Support
When subscriber authentication is performed frequently, it can lead to a high network load, especially when the number of currently connected subscribers is high. To address this issue, the PDG/TTG can employ fast re-authentication, which is a more efficient method than full authentication.
Fast re-authentication is an EAP (Extensible Authentication Protocol) exchange that is based on keys derived from a preceding full authentication exchange. The fast re-authentication mechanism can be used during both EAP-AKA and EAP-SIM authentication.
When fast re-authentication is enabled, the PDG/TTG receives a fast re-auth ID from the UE in the IDi payload of the IKE_AUTH_REQ message. The PDG/TTG sends the fast re-auth ID to the AAA server in an Authentication Request message to initiate fast re-authentication.
During fast re-authentication, the PDG/TTG handles two separate IKE/IPSec SAs, one for the original session and one for re-authentication. The re-authentication SA remains for a very short period until the fast re-authentication is successful. After the successful fast re-authentication, the PDG/TTG assigns the UE with the same IP address. The SGTP service running on the PDG/TTG identifies the original session and replicates the same session using the same IP address assignment. The PDG/TTG then deletes the original session SA.
The AAA server falls back to full authentication in the following scenarios:
 
Pseudonym NAI Support
The PDG/TTG supports the use of pseudonym NAIs (Network Access Identifiers) to protect the identity of subscribers against tracing from unauthorized access networks.
Pseudonym NAIs are allocated to the WLAN UEs by the EAP server during the most recent successful full authentication. The EAP server maintains the mapping of pseudonym-to-permanent identity for each subscriber. The UEs store this mapping in non-volatile memory to preserve it across reboots, and then use the pseudonym NAI instead of the permanent one in responses to identity requests from the EAP server.
 
Multiple APN Support for IPSec Access
The PDG/TTG supports multiple wireless APNs for the same UE (the same IMSI) for use during subscriber authentication.
To support subscribers while they attempt to access multiple services, the PDG/TTG enables multiple subscriber authorizations via multiple wireless APNs. Each time a UE attempts to access a service, the PDG/TTG receives a new APN from the UE in the IDr payload of its first IKE_AUTH_REQ message, and the PDG/TTG initiates a new authorization as a distinct session.
 
Lawful Intercept
The PDG/TTG supports lawful interception (LI) of subscriber session information to provide telecommunication service providers (TSPs) with a mechanism to assist law enforcement agencies (LEAs) in the monitoring of suspicious individuals (referred to as targets) for potential criminal activity.
Law Enforcement Agencies (LEAs) provide one or more Telecommunication Service Providers (TSPs) with court orders or warrants requesting the monitoring of a particular target. The targets are identified by information such as their Network Access Identifier (NAI), Mobile Station Integrated Services Digital Network (MSISDN) number, or International Mobile Subscriber Identification (IMSI) number.
Once the target has been identified, the PDG/TTG serves as an access function (AF) and performs monitoring for either new PDP contexts (“camp-on”) or PDP contexts that are already in progress. While monitoring, the system intercepts and duplicates Content of Communication (CC) and/or Intercept Related Information (IRI) and forwards it to a Delivery Function (DF) over an extensible, proprietary interface.
Note that when a target establishes multiple, simultaneous PDP contexts, the system intercepts CC and IRI for each of them. The DF, in turn, delivers the intercepted content to one or more Collection Functions (CFs).
For more information about the Lawful Intercept feature, see the Lawful Intercept Configuration Guide.
 
IMS Emergency Call Handling
The PDG/TTG supports IMS emergency call handling per 3GPP TS 33.234. This feature is enabled by configuring a special WLAN access point name (W-APN), which includes a W-APN network identifier for emergency calls (sos, for example), and can be configured with no authentication.
The DNSs in the network are configured to resolve the special W-APN to the IP address of the PDG/TTG. When a WLAN UE initiates an IMS emergency call, the UE sends a W-APN that includes the same W-APN network identifier (sos) as the one that is configured on the PDG/TTG. This W-APN network identifier is prefixed to the W-APN operator identifier per 3GPP TS 23.003. The W-APN operator identifier sent by the UE must match the PLMN ID (MCC and MNC) that is configured on the PDG/TTG (visited network). When the PDG/TTG receives the W-APN from the UE in the IDr, the PDG/TTG marks the call as an emergency call and proceeds with call establishment, even in the event of an authentication or EAP failure from the AAA/EAP server.
If the PDG/TTG detects that an old IKE SA for the special W-APN already exists, it deletes the IKE SA and sends an INFORMATIONAL message with a Delete payload to the WLAN UE to delete the old IKE SA on the UE.
 
IPSec Session Recovery Support
The IPSec session recovery feature is a licensed feature. It provides seamless failover and nearly instantaneous reconstruction of subscriber session information in the event of a hardware or software fault within the same chassis, preventing a fully-connected user session from being dropped. For information about the required software license for this feature, contact your sales representative.
IPSec session recovery is performed by mirroring key software processes (the IPSec manager, session manager, and AAA manager, for example) on the PDG/TTG. These mirrored processes remain in an idle state (in standby mode), where they perform no processing until they may be needed in the case of a software failure (a session manager task aborts, for example). The system spawns new instances of standby mode sessions and AAA managers for each active control processor being used.
Additionally, other key system-level software tasks such as the VPN manager are performed on a physically separate Packet Services Card (PSC/PSC2) to ensure that a double software fault (for example, the session manager and the VPN manager failing at the same time on the same card) cannot occur. The PSC/PSC2 used to host the VPN manager process is in active mode and is reserved by the operating system for this sole use when session recovery is enabled. At a minimum, four PSCs/PSC2s (three active and one standby) are required on the chassis to support the IPSec session recovery feature.
Important: For more information about session recovery support, refer to Session Recovery in the System Enhanced Feature Configuration Guide.
 
Congestion Control
Congestion control allows you to set policies and thresholds and specify how the system reacts when faced with a heavy load condition.
Congestion control monitors the system for conditions that could potentially degrade performance when the system is under heavy load. Typically, these conditions are temporary (for example, high CPU or memory utilization) and are quickly resolved. However, continuous or large numbers of these conditions within a specific time interval may impact the system’s ability to service subscriber sessions. Congestion control helps identify such conditions and invokes policies for addressing the situation.
Congestion control operation is based on configuring the following:
Congestion Condition Thresholds: Thresholds dictate the conditions for which congestion control is enabled and establish limits for defining the state of the system (congested or clear). These thresholds function in a way similar to operation thresholds that are configured for the system as described in the Thresholding Configuration Guide. The primary difference is that when congestion thresholds are reached, a service congestion policy and an SNMP trap, starCongestion, are generated.
A threshold tolerance dictates the percentage under the configured threshold that must be reached in order for the condition to be cleared. An SNMP trap, starCongestionClear, is then triggered.
Port Utilization Thresholds: If you set a port utilization threshold, when the average utilization of all ports in the system reaches the specified threshold, congestion control is enabled.
Port-specific Thresholds: If you set port-specific thresholds, when any individual port-specific threshold is reached, congestion control is enabled system-wide.
Service Congestion Policies: Congestion policies are configurable for each service. These policies dictate how services respond when the system detects that a congestion condition threshold has been crossed.
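The threshold-and-tolerance behavior above can be sketched as a simple hysteresis state machine. The trap names come from the document; everything else here is illustrative, not a StarOS API.

```python
class CongestionThreshold:
    """Sketch of one congestion condition threshold with a clear tolerance."""

    def __init__(self, threshold_pct, tolerance_pct):
        self.threshold = threshold_pct
        # The tolerance dictates how far under the threshold utilization
        # must fall before the condition is considered clear.
        self.clear_level = threshold_pct - tolerance_pct
        self.congested = False

    def sample(self, utilization_pct):
        """Return the SNMP trap to raise for this sample, if any."""
        if not self.congested and utilization_pct >= self.threshold:
            self.congested = True
            return "starCongestion"
        if self.congested and utilization_pct <= self.clear_level:
            self.congested = False
            return "starCongestionClear"
        return None
```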
Important: For more information on congestion control, refer to the System Enhanced Feature Configuration Guide.
 
Bulk Statistics
Bulk statistics allow operators not only to choose which statistics to view, but also to configure the format in which they are presented. This simplifies the post-processing of statistical data, since it can be formatted to be parsed by external, back-end processors.
When used in conjunction with the Web Element Manager, the data can be parsed, archived, and graphed.
The system can be configured to collect bulk statistics (performance data) and send them to a collection server (called a receiver). Bulk statistics are statistics that are collected in a group. The individual statistics are grouped by schema. The following is a partial list of supported schemas:
 
System: Provides system-level statistics
Card: Provides card-level statistics
Port: Provides port-level statistics
PDG: Provides PDG service statistics
APN: Provides Access Point Name statistics
The system supports the configuration of up to four sets (primary/secondary) of receivers. Each set can be configured to collect specific sets of statistics from the various schemas. Statistics can be pulled manually from the system or sent at configured intervals. The bulk statistics are stored on the receiver(s) in files.
The format of the bulk statistic data files can be configured by the user. Users can specify the format of the file name, file headers, and/or footers to include information such as the date, system host name, system uptime, the IP address of the system generating the statistics (available only for headers and footers), and/or the time that the file was generated.
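The kind of file-name templating described above can be sketched as simple variable substitution. The `%host%`-style variable names are assumptions for illustration; the real configuration is done through the system CLI, not Python.

```python
# Illustrative sketch of bulk-statistics file-name templating.
from datetime import datetime, timezone

def bulkstats_filename(template: str, hostname: str, now: datetime) -> str:
    """Substitute assumed template variables into a file-name pattern."""
    return (template
            .replace("%host%", hostname)
            .replace("%date%", now.strftime("%Y%m%d"))
            .replace("%time%", now.strftime("%H%M%S")))

ts = datetime(2010, 5, 1, 12, 30, 0, tzinfo=timezone.utc)
print(bulkstats_filename("%host%_%date%_%time%.csv", "asr5000-1", ts))
# -> asr5000-1_20100501_123000.csv
```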
When the Web Element Manager is used as the receiver, it is capable of further processing the statistics data through XML parsing, archiving, and graphing.
The Bulk Statistics Server component of the Web Element Manager parses collected statistics and stores the information in the PostgreSQL database. If XML file generation and transfer is required, this element generates the XML output and can send it to a northbound NMS or an alternate bulk statistics server for further processing.
Additionally, if archiving of the collected statistics is desired, the Bulk Statistics Server writes the files to an alternative directory on the server. A specific directory can be configured by the administrative user or the default directory can be used. Regardless, the directory can be on a local file system or on an NFS-mounted file system on the Web Element Manager server.
Important: For more information on bulk statistic configuration, refer to the Configuring and Maintaining Bulk Statistics chapter of the System Administration Guide.
 
Threshold Crossing Alerts
Thresholding on the system is used to monitor the system for conditions that could potentially cause errors or outages. Typically, these conditions are temporary (i.e., high CPU utilization or packet collisions on a network) and are quickly resolved. However, continuous or large numbers of these error conditions within a specific time interval may be indicative of larger, more severe issues. The purpose of thresholding is to help identify potentially severe conditions so that immediate action can be taken to minimize and/or avoid system downtime.
The system supports threshold crossing alerts for certain key resources such as CPU, memory, and IP pool addresses. With this capability, the operator can configure thresholds on these resources so that, if resource depletion crosses a configured threshold, an SNMP trap is sent.
The following thresholding models are supported by the system:
 
Alert: A value is monitored and an alert condition occurs when the value reaches or exceeds the configured high threshold within the specified polling interval. The alert is then generated and/or sent at the end of the polling interval.
Alarm: Both high and low thresholds are defined for a value. An alarm condition occurs when the value reaches or exceeds the configured high threshold within the specified polling interval; the alarm is cleared when the value reaches or falls beneath the configured low threshold. The alarm or clear is then generated and/or sent at the end of the polling interval.
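The difference between the two models can be sketched as follows: an alert fires for every polling interval that crosses the high threshold, while an alarm stays outstanding until the value falls to the low threshold. This is an illustrative sketch only, not the system implementation.

```python
def poll_alert(value, high):
    """Alert model: report at the end of any interval that crossed high."""
    return "alert" if value >= high else None

class AlarmMonitor:
    """Alarm model: high raises an outstanding alarm, low clears it."""

    def __init__(self, high, low):
        self.high, self.low = high, low
        self.outstanding = False

    def poll(self, value):
        if not self.outstanding and value >= self.high:
            self.outstanding = True
            return "alarm"
        if self.outstanding and value <= self.low:
            self.outstanding = False
            return "clear"
        return None   # alarm remains outstanding (or nothing to report)
```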
Thresholding reports conditions using one of the following mechanisms:
 
SNMP traps: SNMP traps have been created that indicate the condition (high threshold crossing and/or clear) of each of the monitored values. Generation of specific traps can be enabled or disabled on the chassis, ensuring that only important faults get displayed. SNMP traps are supported in both Alert and Alarm modes.
Logs: The system provides a facility called threshold for which active and event logs can be generated. As with other system facilities, messages pertaining to the condition of a monitored value are generated with a severity level of WARNING. Logs are supported in both the Alert and the Alarm models.
Alarm System: High threshold alarms generated within the specified polling interval are considered outstanding until the condition no longer exists or a condition clear alarm is generated. Outstanding alarms are reported to the system’s alarm subsystem and are viewable through the Alarm Management menu in the Web Element Manager.
The Alarm System is used only in conjunction with the Alarm model.
Important: For more information on threshold crossing alert configuration, refer to the Thresholding Configuration Guide.
 
Features Not Supported in This Release
The following features are not supported in this PDG/TTG software release:
 
 
 
How the PDG/TTG Works
 
This section describes the PDG/TTG functioning as a TTG during connection establishment.
 
TTG Connection Establishment Call Flow
The call flow in the figure below shows the message flow during connection establishment. The table that follows the figure describes each step in the call flow.
 
TTG Connection Establishment Call Flow
 
Supported Standards
 
The PDG/TTG complies with the following standards.
 
 
3GPP References
 
 
 
IETF References
 
 
 
 

Cisco Systems Inc.
Tel: 408-526-4000
Fax: 408-527-0883