HRPD Serving Gateway Overview

 
 
The ASR 5000 provides wireless carriers with a flexible solution that functions as an HRPD Serving Gateway (HSGW) in 3GPP2 evolved High Rate Packet Data (eHRPD) wireless data networks.
This overview provides general information about the HSGW including:
 
 
eHRPD Network Summary
In a High Rate Packet Data (HRPD) network, mobility is handled by the mobile node using client-based Mobile IPv6, also known as Client Mobile IPv6 (CMIPv6). The mobile node, using its IPv6 stack, maintains a binding between its home address and its care-of address and must send mobility management signaling messages to a home agent.
The primary difference in an evolved HRPD (eHRPD) network is the use of network-based mobility, in which the network, rather than the mobile node, performs mobility management on its behalf. This form of mobility is known as Proxy Mobile IPv6 (PMIPv6).
The eHRPD network’s main function is to provide interworking of the mobile node with the Evolved Packet System (EPS). The EPS is a 3GPP Enhanced UMTS Terrestrial Radio Access Network/Evolved Packet Core (E-UTRAN/EPC). The E-UTRAN/EPC is the core data network of the 4G System Architecture Evolution (SAE) network supporting the Long Term Evolution Radio Access Network (LTE RAN).
The following figure shows the physical relationship of the eHRPD network with the E-UTRAN/EPC.
The primary functions of the eHRPD network are:
 
eHRPD Network Components
The eHRPD network is comprised of the following components:
 
Evolved Access Network (eAN)
The eAN is a logical entity in the radio access network used for radio communications with an access terminal (mobile device). The eAN is equivalent to a base station in 1x systems. The eAN supports operations for EPS – eHRPD RAN in addition to legacy access network capabilities.
 
Evolved Packet Control Function (ePCF)
The ePCF is an entity in the radio access network that manages the relay of packets between the eAN and the HSGW. The ePCF supports operations for the EPS – eHRPD RAN in addition to legacy packet control functions.
The ePCF supports the following:
 
HRPD Serving Gateway (HSGW)
The HSGW is the entity that terminates the HRPD access network interface from the eAN/ePCF. The HSGW provides interworking of the AT with the 3GPP EPS architecture and protocols specified in 3GPP TS 23.402 (mobility, policy and charging control (PCC), and roaming). The HSGW supports efficient (seamless) inter-technology mobility between LTE and HRPD with the following requirements:
 
 
E-UTRAN EPC Network Components
The E-UTRAN EPC network is comprised of the following components:
 
eNodeB
The eNodeB (eNB) is the LTE base station and is one of two nodes in the SAE Architecture user plane (the other is the S-GW). The eNB communicates with other eNBs via the X2 interface. The eNB communicates with the EPC via the S1 interface. The user plane interface is the S1-U connection to S-GW. The signaling plane interface is the S1-MME connection to MME.
 
Basic functions supported include:
 
Mobility Management Entity (MME)
The MME is the key control-node for the LTE access-network. The MME provides the following basic functions:
 
 
Serving Gateway (S-GW)
For each UE associated with the EPS, there is a single S-GW at any given time providing the following basic functions:
 
 
PDN Gateway (P-GW)
For each UE associated with the EPS, there is at least one P-GW providing access to the requested PDN. If a UE is accessing multiple PDNs, there may be more than one P-GW for that UE. The P-GW provides the following basic functions:
 
 
Product Description
The HSGW terminates the eHRPD access network interface from the Evolved Access Network/Evolved Packet Core Function (eAN/ePCF) and routes UE-originated or terminated packet data traffic. It provides interworking with the eAN/ePCF and the PDN Gateway (P-GW) within the Evolved Packet Core (EPC) or LTE/SAE core network and performs the following functions:
 
An HSGW also establishes, maintains and terminates link layer sessions to UEs. The HSGW functionality provides interworking of the UE with the 3GPP EPS architecture and protocols. This includes support for mobility, policy control and charging (PCC), access authentication, and roaming. The HSGW also manages inter-HSGW handoffs.
eHRPD Basic Network Topology
 
Basic Features
 
Authentication
The HSGW supports the following authentication features:
 
For more information on authentication features, refer to the Network Access and Charging Management Features section in this overview.
 
IP Address Allocation
The HSGW supports the following IP address allocation features:
 
 
Quality of Service
The HSGW supports the following QoS features:
 
For more information on QoS features, refer to the Quality of Service Management Features section in this overview.
 
AAA, Policy and Charging
The HSGW supports the following AAA, policy and charging features:
 
For more information on policy and charging features, refer to the Network Access and Charging Management Features section in this overview.
 
Product Specifications
The following information is located in this section:
 
 
Licenses
The HSGW is a licensed product. A session use license key must be acquired and installed to use the HSGW service.
The following licenses are available for this product:
 
Hardware Requirements
Information in this section describes the hardware required to enable HSGW services.
 
Platforms
The HSGW service operates on the ASR 5000 platform.
 
Components
The following application and line cards are required to support HSGW functionality on an ASR 5000:
 
System Management Cards (SMCs): Provides full system control and management of all cards within the chassis. Up to two SMCs can be installed; one active, one redundant.
Packet Services Cards (PSCs): The PSCs provide high-speed, multi-threaded PDP context processing capabilities for HSGW services. Up to 14 PSCs can be installed, allowing for multiple active and/or redundant cards.
Switch Processor Input/Outputs (SPIOs): Installed in the upper-rear chassis slots directly behind the SMCs, SPIOs provide connectivity for local and remote management and central office (CO) alarms. Up to two SPIOs can be installed; one active, one redundant.
Line Cards: Installed directly behind PSCs, these cards provide the physical interfaces to elements in the eHRPD data network. Up to 26 line cards can be installed for a fully loaded system with 13 active PSCs, 13 in the upper-rear slots and 13 in the lower-rear slots for redundancy. Redundant PSCs do not require line cards.
Redundancy Crossbar Cards (RCCs): Installed in the lower-rear chassis slots directly behind the SPCs/SMCs, RCCs utilize 5 Gbps serial links to ensure connectivity between Ethernet 10/100 or Ethernet 1000 line cards and every PSC in the system for redundancy. Two RCCs can be installed to provide redundancy for all line cards and PSCs.
Important: Additional information pertaining to each of the application and line cards required to support LTE/SAE services is located in the Hardware Platform Overview chapter of the Product Overview Guide.
 
Operating System Requirements
The HSGW is available for all Cisco Systems ASR 5000 platforms running StarOS Release 9.0 or later.
 
Network Deployment(s)
This section describes the supported interfaces and the deployment scenario of an HSGW in an eHRPD network.
 
HRPD Serving Gateway in an eHRPD Network
The following figure displays a simplified network view of the HSGW in an eHRPD network and how it interconnects with a 3GPP Evolved-UTRAN/Evolved Packet Core network. The interfaces shown in the following graphic are standards-based and are presented for informational purposes only. For information on interfaces supported by the Cisco Systems HSGW, refer to the following section.
 
HSGW in an eHRPD Network Architecture
 
Supported Logical Network Interfaces (Reference Points)
The HSGW supports many of the standards-based logical network interfaces or reference points. The graphic below and following text define the supported interfaces. Basic protocol stacks are also included.
 
HSGW Supported Network Interfaces
In support of both mobile- and network-originated subscriber sessions, the HSGW provides the following network interfaces:
 
A10/A11
This interface exists between the Evolved Access Network/Evolved Packet Control Function (eAN/ePCF) and the HSGW and implements the A10 (bearer) and A11 (signaling) protocols defined in 3GPP2 specifications.
 
 
S2a Interface
This reference point supports the bearer interface by providing signaling and mobility support between a trusted non-3GPP access point (HSGW) and the PDN Gateway. It is based on Proxy Mobile IP but also supports Client Mobile IPv4 FA mode which allows connectivity to trusted non-3GPP IP access points that do not support PMIP.
 
 
 
 
STa Interface
This signaling interface supports Diameter transactions between the HSGW and a 3GPP AAA server, either directly or through a 3GPP2 AAA proxy. This interface is used for UE authentication and authorization.
 
 
 
 
Gxa Interface
This signaling interface supports the transfer of policy control information (QoS) between the HSGW (BBERF) and the PCRF.
 
 
 
 
Features and Functionality - Base Software
This section describes the features and functions supported by default in the base software for the HSGW service; they do not require any additional licenses to implement the functionality.
Important: To configure the basic service and functionality on the system for the HSGW service, refer to the configuration examples provided in the HSGW Administration Guide.
The following features are supported and described in this section:
 
 
Subscriber Session Management Features
This section describes the following features:
 
 
Proxy Mobile IPv6 (S2a)
Provides a mobility management protocol that enables a single LTE-EPC core network to serve as the call anchor point for user sessions as the subscriber roams between native E-UTRAN and non-native e-HRPD access networks.
S2a represents the trusted non-3GPP interface between the LTE-EPC core network and the evolved HRPD network anchored on the HSGW. In the e-HRPD network, network-based mobility provides mobility for IPv6 nodes without host involvement. Proxy Mobile IPv6 extends Mobile IPv6 signaling messages and reuses the HA function (now known as the LMA) on the PDN Gateway. This approach does not require the mobile node to be involved in the exchange of signaling messages between itself and the Home Agent. A proxy mobility agent in the network (the MAG function on the HSGW) performs the signaling with the home agent and manages mobility on behalf of the mobile node attached to the network.
The S2a interface uses IPv6 for both control and data. During the PDN connection establishment procedures, the PDN Gateway allocates the IPv6 Home Network Prefix (HNP) to the HSGW via Proxy Mobile IPv6 signaling. The HSGW returns the HNP in a router advertisement or in response to a router solicitation from the UE. PDN connection release events can be triggered by the UE, the HSGW, or the P-GW.
In Proxy Mobile IPv6 applications, the HSGW (MAG function) and PDN Gateway (LMA function) maintain a single shared tunnel, and separate GRE keys are allocated in the PMIP Binding Update and Acknowledgement messages to distinguish between individual subscriber sessions. If the Proxy Mobile IP signaling contains Protocol Configuration Options (PCOs), it can also be used to transfer P-CSCF or DNS server addresses.
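As an illustration of this bookkeeping, the following Python sketch shows a single shared MAG-to-LMA tunnel with a distinct pair of GRE keys recorded per PDN connection when the Proxy Binding Acknowledgement is processed. The class and field names are illustrative only and are not part of any Cisco interface.

from dataclasses import dataclass, field

@dataclass
class PdnConnection:
    """One subscriber PDN connection carried inside the shared PMIPv6 tunnel."""
    imsi: str
    apn: str
    hnp: str              # IPv6 Home Network Prefix returned by the P-GW (LMA)
    uplink_gre_key: int   # GRE keys exchanged in the PBU/PBA distinguish sessions
    downlink_gre_key: int

@dataclass
class MagTunnel:
    """Single shared tunnel between the HSGW (MAG) and one P-GW (LMA)."""
    lma_address: str
    connections: dict = field(default_factory=dict)   # keyed by (imsi, apn)

    def bind(self, imsi, apn, hnp, ul_key, dl_key):
        # In the real protocol the keys come back in the Proxy Binding Ack;
        # here they are simply supplied by the caller.
        self.connections[(imsi, apn)] = PdnConnection(imsi, apn, hnp, ul_key, dl_key)

    def classify_downlink(self, gre_key):
        """Map an incoming GRE key back to the subscriber session it belongs to."""
        for conn in self.connections.values():
            if conn.downlink_gre_key == gre_key:
                return conn
        return None

tunnel = MagTunnel(lma_address="2001:db8::1")
tunnel.bind("310120265624299", "ims", "2001:db8:100::/64", ul_key=0x1001, dl_key=0x2001)
print(tunnel.classify_downlink(0x2001).apn)   # -> ims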
 
Mobile IP Registration Revocation
Mobile IP registration revocation functionality provides the following benefits:
Registration Revocation is a general mechanism whereby either the P-GW or the HSGW providing Mobile IP functionality to the same mobile node can notify the other mobility agent of the termination of a binding. Mobile IP Registration Revocation can be triggered at the HSGW by any of the following:
Important: Registration Revocation functionality is also supported for Proxy Mobile IP. However, only the P-GW can initiate the revocation for Proxy-MIP calls. For more information on MIP registration revocation support, refer to the Mobile IP Registration Revocation chapter in the System Enhanced Feature Configuration Guide.
 
Session Recovery Support
The Session Recovery feature provides seamless failover and reconstruction of subscriber session information in the event of a hardware or software fault within the system, preventing a fully connected user session from being disconnected.
This feature is also useful during software patch upgrades. If session recovery is enabled during a software patch upgrade, existing sessions on the active PSC are preserved through the upgrade process.
Session recovery is performed by mirroring key software processes (e.g. session manager and AAA manager) within the system. These mirrored processes remain in an idle state (in standby-mode), wherein they perform no processing, until they may be needed in the case of a software failure (e.g. a session manager task aborts). The system spawns new instances of “standby mode” session and AAA managers for each active control processor (CP) being used.
Additionally, other key system-level software tasks, such as the VPN manager, are performed on a physically separate Packet Services Card (PSC) to ensure that a double software fault (e.g. the session manager and VPN manager failing at the same time on the same card) cannot occur. The PSC used to host the VPN manager process is in active mode and is reserved by the operating system for this sole use when session recovery is enabled.
The additional hardware resources required for session recovery include a standby System Management Card (SMC) and a standby PSC.
There are two modes for Session Recovery.
Task recovery mode: Wherein one or more session manager failures occur and are recovered without the need to use resources on a standby PSC. In this mode, recovery is performed by using the mirrored “standby-mode” session manager task(s) running on active PSCs. The “standby-mode” task is renamed, made active, and is then populated using information from other tasks such as AAA manager.
Full PSC recovery mode: Used when a PSC hardware failure occurs, or when a PSC migration failure happens. In this mode, the standby PSC is made active and the “standby-mode” session manager and AAA manager tasks on the newly activated PSC perform session recovery.
Session/Call state information is saved in the peer AAA manager task because each AAA manager and session manager task is paired together. These pairs are started on physically different PSCs to ensure task recovery.
Important: For more information on session recovery support, refer to the Session Recovery chapter in the System Enhanced Feature Configuration Guide.
 
Non-Optimized Inter-HSGW Session Handover
Enables non-optimized roaming between two eHRPD access networks that lack a trust relationship and have no SLAs in place for low-latency hand-offs.
Inter-HSGW hand-overs without context transfers are designed for cases in which the user roams between two eHRPD networks with no established trust relationship between the serving and target operator networks. Additionally, no H1/H2 optimized hand-over interface exists between the two networks, and the Target HSGW requires the UE to perform new PPP LCP and attach procedures. Prior to the hand-off, the UE has a complete data path with the remote host and can send and receive packets via the eHRPD access network and the HSGW and P-GW in the EPC core.
The UE eventually transitions between the Serving and Target access networks in active or dormant mode as identified via A16 or A13 signaling. The Target HSGW receives an A11 Registration Request with VSNCP set to “Hand-Off”. The request includes the IP address of the Serving HSGW, the MSID of the UE and information concerning existing A10 connections. Since the Target HSGW lacks an authentication context for the UE, it sends the LCP config-request to trigger LCP negotiation and new EAP-AKA procedures via the STa reference interface. After EAP success, the UE sends its VSNCP Configure Request with Attach Type equal to “Hand-off”. It also sets the IP address to the previously assigned address in the PDN Address Option. The HSGW initiates PMIPv6 binding update signaling via the S2a interface to the PGW and the PGW responds by sending a PMIPv6 Binding Revocation Indication to the Serving HSGW.
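The hand-off sequence described above can be summarized as an ordered set of signaling steps. The sketch below simply encodes that sequence for readability; the message names mirror the text and do not constitute a protocol implementation.

# Non-optimized inter-HSGW hand-off, as described above (illustrative only).
HANDOFF_STEPS = (
    ("eAN/ePCF -> Target HSGW", "A11 Registration Request (hand-off) with Serving HSGW address, MSID, A10 info"),
    ("Target HSGW -> UE",       "LCP Configure-Request (no existing authentication context)"),
    ("UE <-> 3GPP AAA (STa)",   "EAP-AKA re-authentication relayed through the Target HSGW"),
    ("UE -> Target HSGW",       "VSNCP Configure-Request, Attach Type = hand-off, previously assigned PDN address"),
    ("Target HSGW -> P-GW",     "PMIPv6 Proxy Binding Update over S2a"),
    ("P-GW -> Serving HSGW",    "PMIPv6 Binding Revocation Indication"),
)

def describe_handoff():
    for step, (parties, message) in enumerate(HANDOFF_STEPS, start=1):
        print(f"{step}. {parties}: {message}")

describe_handoff()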
 
Quality of Service Management Features
This section describes the following features:
 
 
DSCP Marking
Provides support for more granular configuration of DSCP marking.
For the Interactive traffic class, the HSGW supports per-HSGW-service and per-APN configurable DSCP marking in the uplink and downlink directions based on the Allocation/Retention Priority, in addition to the current priorities.
The following matrix may be used to determine the Diffserv markings used based on the configured traffic class and Allocation/Retention Priority:
Default DSCP Value Matrix
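As a rough illustration of how a traffic class and Allocation/Retention Priority pair can be mapped to a DSCP marking, the sketch below uses placeholder DSCP values; the actual defaults, and any per-service or per-APN overrides, are determined by the HSGW configuration.

# Illustrative (traffic class, allocation/retention priority) -> DSCP mapping.
# The DSCP values below are arbitrary placeholders, not the product defaults.
DSCP_MARKING = {
    ("conversational", 1): "ef",
    ("streaming",      1): "af41",
    ("interactive",    1): "af31",
    ("interactive",    2): "af21",
    ("interactive",    3): "af11",
    ("background",     1): "be",
}

def dscp_for(traffic_class, arp):
    """Look up the uplink/downlink marking for a traffic class and ARP value."""
    return DSCP_MARKING.get((traffic_class, arp), "be")   # default to best effort

print(dscp_for("interactive", 2))   # -> af21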
 
UE Initiated Dedicated Bearer Resource Establishment
Enables the access terminal to request, in real time as applications are started, the appropriate end-to-end QoS and service treatment needed to satisfy the expected quality of user experience.
Existing HRPD applications use UE/AT initiated bearer setup procedures. As a migration step toward the EUTRAN-based LTE-SAE network model, the e-HRPD architecture has been designed to support two approaches to resource allocation that include network initiated and UE initiated dedicated bearer establishment. In the StarOS 9.0 release, the HSGW will support only UE initiated bearer creation with negotiated QoS and flow mapping procedures.
After the initial establishment of the e-HRPD radio connection, the UE/AT uses the A11' signaling to establish the default PDN connection with the HSGW. As in the existing EV-DO Rev A network, the UE uses RSVP setup procedures to trigger bearer resource allocation for each additional dedicated EPC bearer. The UE includes the PDN-ID, ProfileID, UL/DL TFT, and ReqID in the reservation.
Each Traffic Flow Template (referred to as a Service Data Flow Template in LTE terminology) consists of an aggregate of one or more packet filters. Each dedicated bearer can contain multiple IP data flows that utilize a common QoS scheduling treatment and reservation priority. If different scheduling classes are needed to optimize the quality of user experience for any service data flows, it is best to provision additional dedicated bearers. The UE maps each TFT packet filter to a Reservation Label/FlowID. The UE sends the TFT to the HSGW to bind the DL SDF IP flows to a FlowID that is in turn mapped to an A10 tunnel toward the RAN. The HSGW uses the RSVP signaling as an event trigger to request Policy and Charging Control (PCC) rules from the PCRF. The HSGW maps the provisioned QoS PCC rules and authorized QCI service class to ProfileIDs in the RSVP response to the UE. At the final stage, the UE establishes the auxiliary RLP and A10' connection to the HSGW. Once that is accomplished, traffic can begin flowing across the dedicated bearer.
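The following sketch models, in simplified form, how a dedicated bearer groups TFT packet filters under a common FlowID and A10 mapping. The field names and example values are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PacketFilter:
    """One packet filter inside a Traffic Flow Template (Service Data Flow Template)."""
    direction: str            # "UL" or "DL"
    remote_addr: str
    remote_port: int
    protocol: str

@dataclass
class DedicatedBearer:
    """UE-requested dedicated bearer: a TFT bound to a FlowID and an A10 connection."""
    pdn_id: int
    profile_id: int           # mapped by the HSGW from the authorized QCI/PCC rules
    flow_id: int              # Reservation Label chosen by the UE
    a10_connection: int       # auxiliary A10 the flow is mapped to
    tft: List[PacketFilter] = field(default_factory=list)

# One dedicated bearer carrying two related IP flows with a common QoS treatment.
voip_bearer = DedicatedBearer(pdn_id=1, profile_id=0x1D, flow_id=5, a10_connection=2,
                              tft=[PacketFilter("DL", "2001:db8::10", 5060, "udp"),
                                   PacketFilter("DL", "2001:db8::10", 16384, "udp")])
print(len(voip_bearer.tft), "packet filters bound to FlowID", voip_bearer.flow_id)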
 
Network Access and Charging Management Features
This section describes the following features:
 
 
EAP Authentication (STa)
Enables secure user- and device-level authentication between the UE and a 3GPP AAA server, either directly or via a 3GPP2 AAA proxy, with the HSGW acting as the authenticator.
In an evolved HRPD access network, the HSGW uses the Diameter-based STa interface to authenticate subscriber traffic with the 3GPP AAA server. Following completion of the PPP LCP procedures between the UE and HSGW, the HSGW selects EAP-AKA as the method for authenticating the subscriber session. EAP-AKA uses symmetric cryptography and pre-shared keys to derive the security keys between the UE and the EAP server. EAP-AKA user identity information (e.g. an NAI based on the IMSI) is conveyed over EAP over PPP between the UE and HSGW.
The HSGW represents the EAP authenticator and triggers the identity challenge-response signaling between the UE and back-end 3GPP AAA server. On successful verification of user credentials the 3GPP AAA server obtains the Cipher Key and Integrity Key from the HSS. It uses these keys to derive the Master Session Keys (MSK) that are returned on EAP-Success to the HSGW. The HSGW uses the MSK to derive the Pair-wise Mobility Keys (PMK) that are returned in the Main A10' connection to the e-PCF. The RAN uses these keys to secure traffic transmitted over the wireless access network to the UE.
After the user credentials are verified by the 3GPP AAA server and HSS, the HSGW returns the PDN address in the VSNCP signaling to the UE. In the e-HRPD connection establishment procedures, the PDN address is triggered based on subscription information conveyed over the STa reference interface. Based on the subscription information and the requested PDN type signaled by the UE, the HSGW informs the PDN GW of the type of required address (e.g. an IPv6 HNP and/or IPv4 Home Address option for dual IPv4/IPv6 PDNs).
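The following sketch outlines the pass-through authenticator role described above: the HSGW relays the EAP-AKA exchange to the AAA server and, on success, derives the PMK from the returned MSK. The key derivation shown is a placeholder, not the standards-defined derivation, and the helper callback stands in for the Diameter STa exchange.

import hashlib
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EapResult:
    success: bool
    msk: bytes = b""

def derive_pmk(msk: bytes) -> bytes:
    """Placeholder derivation; the real PMK derivation is defined by the standards."""
    return hashlib.sha256(msk).digest()

def authenticate_session(eap_identity: str,
                         run_eap_aka: Callable[[str], EapResult]) -> Optional[bytes]:
    """HSGW as pass-through EAP authenticator (illustrative sketch).

    run_eap_aka stands in for the Diameter STa exchange with the 3GPP AAA server;
    the HSGW itself only relays the EAP-AKA challenge/response.
    """
    result = run_eap_aka(eap_identity)
    if not result.success:
        return None
    # On EAP-Success the AAA server returns the MSK; the HSGW derives the PMK
    # and forwards it to the ePCF over the main A10' connection.
    return derive_pmk(result.msk)

# Stubbed AAA exchange that always succeeds, for demonstration.
pmk = authenticate_session("0310120265624299@nai.epc.example",
                           lambda identity: EapResult(True, b"\x00" * 64))
print(pmk.hex()[:16])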
 
Rf Diameter Accounting
Provides the framework for offline charging in a packet-switched domain. The gateway support nodes use the Rf interface to convey session-related, bearer-related, or service-specific charging records to the CGF and billing domain to enable charging plans.
The Rf reference interface enables offline accounting functions on the HSGW in accordance with 3GPP Release 8 specifications. In an LTE application the same reference interface is also supported on the S-GW and PDN Gateway platforms. The systems use the Charging Trigger Function (CTF) to transfer offline accounting records via a Diameter interface to an adjunct Charging Data Function (CDF) / Charging Gateway Function (CGF). The HSGW and Serving Gateway collect charging information for each mobile subscriber UE pertaining to the radio network usage while the P-GW collects charging information for each mobile subscriber related to the external data network usage.
The ASR 5000 Charging Trigger Function features dual redundant 140 GB RAID hard drives, and up to 100 GB of capacity on each drive is reserved for writing charging records (e.g. CDRs, UDRs, FDRs) to local file directories with non-volatile persistent memory. The CTF periodically uses the sFTP protocol to push charging files to the CDF/CGF. It is also possible for the CDF/CGF to pull offline accounting records at various intervals or times of the day.
The HSGW, S-GW, and P-GW collect information per user, per IP-CAN bearer, or per service. Bearer charging is used to collect charging information related to data volumes sent to and received from the UE, categorized by QoS traffic class. Users can be identified by MSISDN or IMSI. Flow Data Records (FDRs) are used to correlate application charging data with EPC bearer usage information. The FDRs contain application-level charging information such as service identifiers, rating groups, and IMS charging identifiers that can be used to identify the application. The FDRs also contain the authorized QoS information (QCI) that was assigned to a given flow. This information is used to correlate charging records with EPC bearers.
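The following sketch illustrates how FDRs might be correlated with bearer-level usage records through a shared charging identifier. The record layouts and field names are examples only, not the actual CDR/FDR formats.

# Illustrative correlation of application-level Flow Data Records with bearer-level
# usage records using a shared charging identifier.
flow_data_records = [
    {"charging_id": 7001, "service_id": "sip",  "rating_group": 10, "qci": 1},
    {"charging_id": 7002, "service_id": "http", "rating_group": 30, "qci": 8},
]
bearer_records = [
    {"charging_id": 7001, "imsi": "310120265624299", "bytes_up": 120_000, "bytes_down": 340_000},
    {"charging_id": 7002, "imsi": "310120265624299", "bytes_up": 5_000_000, "bytes_down": 52_000_000},
]

usage_by_charging_id = {b["charging_id"]: b for b in bearer_records}
for fdr in flow_data_records:
    bearer = usage_by_charging_id[fdr["charging_id"]]
    print(fdr["service_id"], "QCI", fdr["qci"], "->",
          bearer["bytes_up"] + bearer["bytes_down"], "bytes")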
 
AAA Server Groups
Value-added feature to enable VPN service provisioning for enterprise or MVNO customers. Enables each corporate customer to maintain its own AAA servers with its own unique configurable parameters and custom dictionaries.
This feature provides support for up to 800 AAA server groups and 800 NAS IP addresses that can be provisioned within a single context or across the entire chassis. A total of 128 servers can be assigned to an individual server group. Up to 1,600 accounting, authentication and/or mediation servers are supported per chassis.
Important: Due to additional memory requirements, this service can only be used with 8GB Packet Service Cards (PSCs).
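The provisioning limits quoted above can be expressed as simple validation checks, as in the following illustrative sketch.

# Provisioning limits quoted above, expressed as simple validation checks.
MAX_SERVER_GROUPS_PER_CHASSIS = 800
MAX_NAS_IP_ADDRESSES = 800
MAX_SERVERS_PER_GROUP = 128
MAX_SERVERS_PER_CHASSIS = 1600

def validate_aaa_plan(server_groups):
    """server_groups: mapping of group name -> number of AAA servers in the group."""
    if len(server_groups) > MAX_SERVER_GROUPS_PER_CHASSIS:
        raise ValueError("too many AAA server groups for one chassis")
    if any(count > MAX_SERVERS_PER_GROUP for count in server_groups.values()):
        raise ValueError("a group exceeds 128 servers")
    if sum(server_groups.values()) > MAX_SERVERS_PER_CHASSIS:
        raise ValueError("more than 1,600 AAA servers on the chassis")
    return True

print(validate_aaa_plan({"enterprise-a": 4, "mvno-b": 8}))   # -> True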
 
Dynamic Policy and Charging: Gxa Reference Interface
Enables network-initiated, policy-based usage controls for functions such as service data flow authorization for EPS bearers, QCI mapping, modified QoS treatments, and per-APN AMBR bandwidth rate enforcement.
As referenced in Figure 1 below, in an e-HRPD application the Gxa reference point is defined to transfer QoS policy information between the PCRF and Bearer Binding Event Reporting Function (BBERF) on the HSGW. In contrast with an S5/S8 GTP network model where the sole policy enforcement point resides on the PGW, the S2a model introduces the additional BBERF function to map EPS bearers to the main and auxiliary A10 connections. Gxa is sometimes referred to as an off-path signaling interface because no in-band procedure is defined to convey PCC rules via the PMIPv6 S2a reference interface. Gxa is a Diameter based policy signaling interface.
Gxa signaling is used for bearer binding and reporting of events. It provides control over the user plane traffic handling and encompasses the following functionalities:
 
Intelligent Traffic Control
Intelligent Traffic Control (ITC) supports customizable policy definitions that enforce and manage service level agreements for a subscriber profile, thus enabling differentiated levels of services for native and roaming subscribers.
In 3GPP2 services, ITC uses a local policy look-up table and permits either static EV-DO Rev 0 or dynamic EV-DO Rev A policy configuration.
Important: ITC includes the class-map, policy-map and policy-group commands. Currently ITC does not include an external policy server interface.
ITC provides per-subscriber/per-flow traffic policing to control bandwidth and session quotas. Flow-based traffic policing enables configuring and enforcing bandwidth limitations on individual subscribers, which can be enforced on a per-flow basis in the downlink and uplink directions.
Flow-based traffic policies are used to support various policy functions such as Quality of Service (QoS), bandwidth, and admission control. They provide the management facility to allocate network resources based on defined traffic-flow, QoS, and security policies.
 
Network Operation Management Functions
This section describes the following features:
 
 
A10/A11
Provides a lighter weight PPP network control protocol designed to reduce connection set-up latency for delay sensitive multimedia services. Also provides a mechanism to allow user devices in an evolved HRPD network to request one or more PDN connections to an external network.
The HRPD Serving Gateway connects the evolved HRPD access network with the Evolved Packet Core (EPC) as a trusted non-3GPP access network. In an e-HRPD network, the A10'/A11' reference interfaces are functionally equivalent to the comparable HRPD interfaces. They are used for connection and bearer establishment procedures. In contrast to the conventional client-based mobility in an HRPD network, mobility management in the e-HRPD application is network-based, using Proxy Mobile IPv6 call anchoring between the MAG function on the HSGW and the LMA on the PDN GW. Connections between the UE and HSGW are based on Simple IPv6. A11' signaling carries the IMSI-based user identity.
The main A10' connection (SO59) carries PPP traffic, including EAP over PPP for network authentication. The UE performs LCP negotiation with the HSGW over the main A10' connection. The interface between the e-PCF and HSGW uses GRE encapsulation for the A10 connections. HDLC framing is used on the main A10 and SO64 auxiliary A10 connections, while SO67 A10 connections use packet-based framing. After successful authentication, the HSGW retrieves the QoS profile from the 3GPP HSS and transfers this information to the e-PCF via A11' signaling.
 
Multiple PDN Support
Enables an APN-based user experience in which separate connections can be allocated for different services, including IMS, Internet, walled-garden services, or off-deck content services.
The MAG function on the HSGW can maintain multiple PDN or APN connections for the same user session. The MAG runs a single node-level Proxy Mobile IPv6 tunnel for all user sessions toward the LMA function of the PDN GW. When a user wants to establish multiple PDN connections, the MAG brings up the multiple PDN connections over the same PMIPv6 session to one or more PDN GW LMAs. The PDN GW in turn allocates a separate IP address (Home Network Prefix) for each PDN connection, and each connection can carry one or multiple EPC default and dedicated bearers. To request the various PDN connections, the MAG includes a common MN-ID and separate Home Network Prefixes and APNs, with a Handover Indication value equal to one, in the PMIPv6 Binding Updates.
Performance: In the current release, each HSGW maintains a limit of up to 3 PDN connections per user session.
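The following sketch illustrates the per-session bookkeeping implied above, including the three-PDN-connection limit. The class is a documentation aid, not an HSGW interface.

MAX_PDN_CONNECTIONS_PER_SESSION = 3   # per-session limit quoted above

class UserSession:
    """Tracks the PDN (APN) connections a single MAG user session has open."""
    def __init__(self, imsi):
        self.imsi = imsi
        self.pdn_connections = {}     # apn -> Home Network Prefix from the P-GW

    def connect_pdn(self, apn, hnp):
        if apn in self.pdn_connections:
            return self.pdn_connections[apn]
        if len(self.pdn_connections) >= MAX_PDN_CONNECTIONS_PER_SESSION:
            raise RuntimeError("PDN connection limit reached for this session")
        # A real MAG would send a PMIPv6 Binding Update carrying the common MN-ID,
        # the APN, and Handover Indication = 1; here we only record the result.
        self.pdn_connections[apn] = hnp
        return hnp

session = UserSession("310120265624299")
session.connect_pdn("internet", "2001:db8:1::/64")
session.connect_pdn("ims", "2001:db8:2::/64")
print(sorted(session.pdn_connections))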
 
PPP VSNCP
VSNCP offers streamlined PPP signaling with fewer messages to reduce connection set-up latency for VoIP services (VORA). VSNCP also includes PDN connection request messages for signaling EPC attachments to external networks.
Vendor Specific Network Control Protocol (VSNCP) provides a PPP vendor protocol in accordance with IETF RFC 3772 that is designed for PDN establishment and is used to encapsulate user datagrams sent over the main A10' connection between the UE and HSGW. The UE uses VSNCP signaling to request access to a PDN from the HSGW. It encodes one or more PDN-IDs to create multiple VSNCP instances within a PPP connection. Additionally, all PDN connection requests include the requested Access Point Name (APN), PDN type (IPv4, IPv6, or IPv4/v6), and the PDN address. The UE can also include Protocol Configuration Options (PCO) in the VSNCP signaling, and the HSGW can encode this attribute with information such as primary/secondary DNS server or P-CSCF addresses in the Configuration Acknowledgement response message.
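The following sketch captures the request fields named above as a simple data structure. It is a documentation aid only; it does not represent the on-the-wire VSNCP encoding.

from dataclasses import dataclass, field
from typing import Optional, Dict

@dataclass
class VsncpPdnRequest:
    """Fields carried in a VSNCP PDN connection request, per the description above."""
    pdn_id: int                      # distinguishes VSNCP instances within one PPP link
    apn: str
    pdn_type: str                    # "IPv4", "IPv6" or "IPv4v6"
    pdn_address: Optional[str] = None
    attach_type: str = "initial"     # "initial" or "hand-off"
    pco: Dict[str, str] = field(default_factory=dict)   # e.g. DNS or P-CSCF addresses

request = VsncpPdnRequest(pdn_id=1, apn="ims", pdn_type="IPv6",
                          pco={"p-cscf": "2001:db8::5"})
print(request.apn, request.pdn_type, request.pco)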
 
Congestion Control
The congestion control feature allows you to set policies and thresholds and specify how the system reacts when faced with a heavy load condition.
Congestion control monitors the system for conditions that could potentially degrade performance when the system is under heavy load. Typically, these conditions are temporary (for example, high CPU or memory utilization) and are quickly resolved. However, continuous or large numbers of these conditions within a specific time interval may impact the system's ability to service subscriber sessions. Congestion control helps identify such conditions and invokes policies for addressing the situation.
Congestion control operation is based on configuring the following:
 
Congestion Condition Thresholds: Thresholds dictate the conditions for which congestion control is enabled and establish limits for defining the state of the system (congested or clear). These thresholds function in a way similar to operation thresholds that are configured for the system as described in the Thresholding Configuration Guide. The primary difference is that when congestion thresholds are reached, a service congestion policy and an SNMP trap, starCongestion, are generated.
A threshold tolerance dictates the percentage under the configured threshold that must be reached in order for the condition to be cleared. An SNMP trap, starCongestionClear, is then triggered.
Port Utilization Thresholds: If you set a port utilization threshold, when the average utilization of all ports in the system reaches the specified threshold, congestion control is enabled.
Port-specific Thresholds: If you set port-specific thresholds, when any individual port-specific threshold is reached, congestion control is enabled system-wide.
Service Congestion Policies: Congestion policies are configurable for each service. These policies dictate how services respond when the system detects that a congestion condition threshold has been crossed.
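The following sketch illustrates the threshold, tolerance, and clear behavior described above. The trap names come from the text; the interpretation of the tolerance as percentage points below the threshold is a simplification.

# Minimal sketch of the threshold/tolerance behavior described above.
class CongestionMonitor:
    def __init__(self, threshold_pct, tolerance_pct):
        self.threshold = threshold_pct               # e.g. 80 (% utilization)
        self.clear_level = threshold_pct - tolerance_pct
        self.congested = False

    def sample(self, utilization_pct):
        traps = []
        if not self.congested and utilization_pct >= self.threshold:
            self.congested = True
            traps.append("starCongestion")           # service congestion policy invoked here
        elif self.congested and utilization_pct <= self.clear_level:
            self.congested = False
            traps.append("starCongestionClear")
        return traps

monitor = CongestionMonitor(threshold_pct=80, tolerance_pct=10)
for load in (60, 85, 78, 69):
    print(load, monitor.sample(load))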
Important: For more information on congestion control, refer to the Congestion Control chapter in this guide.
 
IP Access Control Lists
IP access control lists allow you to set up rules that control the flow of packets into and out of the system based on a variety of IP packet parameters.
IP access lists, or access control lists (ACLs) as they are commonly referred to, are used to control the flow of packets into and out of the system. They are configured on a per-context basis and consist of “rules” (ACL rules) or filters that control the action taken on packets that match the filter criteria. Once configured, an ACL can be applied to any of the following:
 
Important: For more information on IP access control lists, refer to the IP Access Control Lists chapter in the System Enhanced Feature Configuration Guide.
 
System Management Features
This section describes the following features:
 
 
Management System
The system's management capabilities are designed around the Telecommunications Management Network (TMN) model for management - focusing on providing superior quality network element (NE) and element management system (Web Element Manager) functions. The system provides element management applications that can easily be integrated, using standards-based protocols (CORBA and SNMPv1, v2), into higher-level management systems - giving wireless operators the ability to integrate the system into their overall network, service, and business management systems. In addition, all management is performed out-of-band for security and to maintain system performance.
Cisco Systems' O&M module offers comprehensive management capabilities to the operators and enables them to operate the system more efficiently. There are multiple ways to manage the system either locally or remotely using its out-of-band management interfaces.
These include:
The following figure demonstrates these various element management options and how they can be utilized within the wireless carrier network.
Element Management Methods
Important: HSGW management functionality is enabled by default for console-based access. For GUI-based management support, refer to the Web Element Management System section in this chapter. For more information on command line interface based management, refer to the Command Line Interface Reference and the HSGW Administration Guide.
 
Bulk Statistics Support
The system's support for bulk statistics allows operators to choose which statistics to view and to configure the format in which the statistics are presented. This simplifies the post-processing of statistical data since it can be formatted to be parsed by external, back-end processors.
When used in conjunction with the Web Element Manager, the data can be parsed, archived, and graphed.
The system can be configured to collect bulk statistics (performance data) and send them to a collection server (called a receiver). Bulk statistics are statistics that are collected in a group. The individual statistics are grouped by schema. Following is a partial list of supported schemas:
System: Provides system-level statistics
Card: Provides card-level statistics
Port: Provides port-level statistics
Context: Provides context-level statistics
IP Pool: Provides IP pool statistics
MAG: Provides Mobile Access Gateway statistics
ECS: Provides Enhanced Charging Service statistics
RADIUS: Provides AAA RADIUS statistics
The system supports the configuration of up to four sets (primary/secondary) of receivers. Each set can be configured to collect specific sets of statistics from the various schemas. Statistics can be pulled manually from the chassis or sent at configured intervals. The bulk statistics are stored on the receiver(s) in files.
The format of the bulk statistic data files can be configured by the user. Users can specify the format of the file name, file headers, and/or footers to include information such as the date, chassis host name, chassis uptime, the IP address of the system generating the statistics (available only for headers and footers), and/or the time that the file was generated.
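The following sketch shows one way a receiver-bound file name and header might be assembled from the kinds of variables listed above. The format shown is illustrative; the actual format strings are operator-configurable on the system.

from datetime import datetime, timezone

def build_bulkstats_file(host, ip, uptime_seconds, records):
    """Assemble an example bulk-statistics file name, header, and body."""
    now = datetime.now(timezone.utc)
    file_name = f"{host}_bulkstats_{now:%Y%m%d_%H%M%S}.csv"
    header = f"# host={host} ip={ip} uptime={uptime_seconds}s generated={now.isoformat()}"
    body = "\n".join(records)
    return file_name, header + "\n" + body

name, content = build_bulkstats_file(
    "hsgw01", "192.0.2.10", 864000,
    ["schema=system,cpu_pct=42", "schema=card,slot=3,mem_pct=55"])
print(name)
print(content)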
When the Web Element Manager is used as the receiver, it is capable of further processing the statistics data through XML parsing, archiving, and graphing.
The Bulk Statistics Server component of the Web Element Manager parses collected statistics and stores the information in the PostgreSQL database. If XML file generation and transfer is required, this element generates the XML output and can send it to a Northbound NMS or an alternate bulk statistics server for further processing.
Additionally, if archiving of the collected statistics is desired, the Bulk Statistics server writes the files to an alternative directory on the server. A specific directory can be configured by the administrative user or the default directory can be used. Regardless, the directory can be on a local file system or on an NFS-mounted file system on the Web Element Manager server.
Important: For more information on bulk statistic configuration, refer to the Configuring and Maintaining Bulk Statistics chapter in the System Administration Guide.
 
Threshold Crossing Alerts (TCA) Support
Thresholding on the system is used to monitor the system for conditions that could potentially cause errors or an outage. Typically, these conditions are temporary (for example, high CPU utilization or packet collisions on a network) and are quickly resolved. However, continuous or large numbers of these error conditions within a specific time interval may be indicative of larger, more severe issues. The purpose of thresholding is to help identify potentially severe conditions so that immediate action can be taken to minimize and/or avoid system downtime.
The system supports Threshold Crossing Alerts for certain key resources such as CPU, memory, and IP pool addresses. With this capability, the operator can configure thresholds on these resources whereby, should resource depletion cross the configured threshold, an SNMP trap is sent.
The following thresholding models are supported by the system:
 
Alert: A value is monitored and an alert condition occurs when the value reaches or exceeds the configured high threshold within the specified polling interval. The alert is then generated and/or sent at the end of the polling interval.
Alarm: Both high and low thresholds are defined for a value. An alarm condition occurs when the value reaches or exceeds the configured high threshold within the specified polling interval. The alarm is then generated and/or sent at the end of the polling interval.
Thresholding reports conditions using one of the following mechanisms:
SNMP traps: SNMP traps have been created that indicate the condition (high threshold crossing and/or clear) of each of the monitored values.
Generation of specific traps can be enabled or disabled on the chassis, ensuring that only important faults get displayed. SNMP traps are supported in both Alert and Alarm modes.
Logs: The system provides a facility called threshold for which active and event logs can be generated. As with other system facilities, log messages pertaining to the condition of a monitored value are generated with a severity level of WARNING.
Logs are supported in both the Alert and the Alarm models.
Alarm System: High threshold alarms generated within the specified polling interval are considered “outstanding” until the condition no longer exists or a condition clear alarm is generated. “Outstanding” alarms are reported to the system's alarm subsystem and are viewable through the Alarm Management menu in the Web Element Manager.
The Alarm System is used only in conjunction with the Alarm model.
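The following sketch contrasts the Alert and Alarm models described above in simplified form; polling, SNMP, and logging mechanics are omitted.

def evaluate_alert(value, high):
    """Alert model: a one-shot notification when the value reaches the high threshold."""
    return "alert" if value >= high else None

class AlarmModel:
    """Alarm model: the high threshold raises an outstanding alarm, the low threshold clears it."""
    def __init__(self, high, low):
        self.high, self.low = high, low
        self.outstanding = False

    def evaluate(self, value):
        if not self.outstanding and value >= self.high:
            self.outstanding = True
            return "alarm raised"
        if self.outstanding and value <= self.low:
            self.outstanding = False
            return "alarm cleared"
        return None

alarm = AlarmModel(high=90, low=70)
for sample in (50, 95, 80, 65):
    print(sample, evaluate_alert(sample, 90), alarm.evaluate(sample))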
Important: For more information on threshold crossing alert configuration, refer to the Thresholding Configuration Guide.
 
ANSI T1.276 Compliance
ANSI T1.276 specifies security measures for Network Elements (NE). In particular it specifies guidelines for password strength, storage, and maintenance security measures.
ANSI T1.276 specifies several measures for password security. These measures include:
These measures are applicable to the ASR 5000 and the Web Element Manager since both require password authentication. A subset of these guidelines, where applicable to each platform, will be implemented. A known subset of guidelines, such as certificate authentication, is not applicable to either product. Furthermore, the platforms support a variety of authentication methods, such as RADIUS and SSH, which are dependent on external elements. ANSI T1.276 compliance in such cases will be the domain of the external element. ANSI T1.276 guidelines will only be implemented for locally configured operators.
 
Features and Functionality - External Application Support
This section describes the features and functions of external applications supported on the HSGW. These services require additional licenses to implement the functionality.
 
 
Web Element Management System
Provides a graphical user interface (GUI) for performing fault, configuration, accounting, performance, and security (FCAPS) management for the ASR 5000.
The Web Element Manager is a Common Object Request Broker Architecture (CORBA)-based application that provides complete fault, configuration, accounting, performance, and security (FCAPS) management capability for the system.
For maximum flexibility and scalability, the Web Element Manager application implements a client-server architecture. This architecture allows remote clients with Java-enabled web browsers to manage one or more systems via the server component which implements the CORBA interfaces. The server component is fully compatible with the fault-tolerant Sun® Solaris® operating system.
The following figure demonstrates various interfaces between the Web Element Manager and other network components.
Web Element Manager Network Interfaces
License Keys: A license key is required in order to use the Web Element Manager application. Please contact your local Sales or Support representative for more information.
Important: For more information on WEM support, refer to the WEM Installation and Administration Guide.
 
Features and Functionality - Optional Enhanced Feature Software
This section describes the optional enhanced features and functions for the HSGW service.
Each of the following features requires the purchase of an additional license to implement the functionality with the HSGW service.
This section describes the following features:
 
 
IP Header Compression (RoHCv1 for IPv6)
Dynamic header compression contexts enable more efficient memory utilization by allocating and deleting header compression contexts based on the presence/absence of traffic flowing over an SO67 A10 bearer connection.
In order to provision VoIP services over an e-HRPD network, the StarOS 9.0 release adds support for RoHC compression contexts over IPv6 datagrams using the RTP profile over SO67 auxiliary A10' connections. The e-HRPD application uses pre-established SO67 A10' connections for VoIP bearers. A header compression context is allocated for the first time when a new SO67 A10' connection request arrives with negotiated RoHC parameters.
In order to optimize memory allocation and system performance, the HSGW uses the configured inactivity time of traffic over the bearer to dynamically determine when the RoHC compression context should be removed. This feature is also useful for preserving compression contexts on intra-HSGW call hand-offs. The dynamic header compression context parameters are configured in the RoHC profile that is associated with the subscriber session.
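The following sketch illustrates inactivity-based removal of compression contexts. The timeout value and data structures are illustrative; the actual behavior is governed by the RoHC profile configuration.

import time

class RohcContextTable:
    """Sketch of inactivity-driven removal of RoHC contexts on SO67 auxiliary A10' connections."""
    def __init__(self, inactivity_timeout_s):
        self.timeout = inactivity_timeout_s
        self.contexts = {}            # a10_connection_id -> last-activity timestamp

    def on_packet(self, a10_id, now=None):
        """Allocate a context on first use and refresh its activity timestamp."""
        self.contexts[a10_id] = now if now is not None else time.monotonic()

    def expire_idle(self, now=None):
        now = now if now is not None else time.monotonic()
        idle = [a10 for a10, last in self.contexts.items() if now - last > self.timeout]
        for a10 in idle:
            del self.contexts[a10]    # compression context released, memory reclaimed
        return idle

table = RohcContextTable(inactivity_timeout_s=30)
table.on_packet("so67-aux-1", now=0)
print(table.expire_idle(now=60))      # -> ['so67-aux-1']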
Important: For more information on IP header compression support, refer to the IP Header Compression chapter in the System Enhanced Feature Configuration Guide.
 
IP Security (IPSec)
IP Security provides a mechanism for establishing secure tunnels from mobile subscribers to pre-defined endpoints (i.e. enterprise or home networks) in accordance with the following standards:
 
IP Security (IPSec) is a suite of protocols that interact with one another to provide secure private communications across IP networks. These protocols allow the system to establish and maintain secure tunnels with peer security gateways. For IPv4, IKEv1 is used, and for IPv6, IKEv2 is supported. IPSec can be implemented on the system for the following applications:
PDN Access: Subscriber IP traffic is routed over an IPSec tunnel from the system to a secure gateway on the packet data network (PDN) as determined by access control list (ACL) criteria.
Mobile IP: Mobile IP control signals and subscriber data are encapsulated in IPSec tunnels that are established between foreign agents (FAs) and home agents (HAs) over the Pi interfaces.
Important: Once an IPSec tunnel is established between an FA and HA for a particular subscriber, all new Mobile IP sessions using the same FA and HA are passed over the tunnel regardless of whether or not IPSec is supported for the new subscriber sessions. Data for existing Mobile IP sessions is unaffected.
Important: For more information on IPSec support, refer to the IP Security chapter in the System Enhanced Feature Configuration Guide.
 
Traffic Policing and Shaping
Traffic policing and shaping allows you to manage bandwidth usage on the network and limit bandwidth allowances to subscribers. Shaping allows you to buffer excess traffic for delivery at a later time.
 
Traffic Policing
Traffic policing enables the configuring and enforcing of bandwidth limitations on individual subscribers and/or APNs of a particular traffic class in 3GPP/3GPP2 service.
Bandwidth enforcement is configured and enforced independently on the downlink and the uplink directions.
A Token Bucket Algorithm (a modified trTCM) [RFC2698] is used to implement the Traffic-Policing feature. The algorithm used measures the following criteria when determining how to mark a packet:
The system can be configured to take any of the following actions on packets that are determined to be in excess or in violation:
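The following sketch implements a standard color-blind two-rate three-color marker per RFC 2698 for illustration; the product uses a modified variant, and the configured conform/exceed/violate actions are applied separately.

import time

class TwoRateThreeColorMarker:
    """Color-blind two-rate three-color marker (RFC 2698), simplified.

    Packets above the peak rate are marked "violate", above the committed rate
    "exceed", otherwise "conform". The configured action (drop, remark, queue)
    is applied elsewhere.
    """
    def __init__(self, cir_bps, cbs_bytes, pir_bps, pbs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes
        self.pir, self.pbs = pir_bps / 8.0, pbs_bytes
        self.tc, self.tp = float(cbs_bytes), float(pbs_bytes)
        self.last = time.monotonic()

    def mark(self, packet_bytes, now=None):
        now = now if now is not None else time.monotonic()
        elapsed, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + elapsed * self.cir)   # refill committed bucket
        self.tp = min(self.pbs, self.tp + elapsed * self.pir)   # refill peak bucket
        if packet_bytes > self.tp:
            return "violate"
        if packet_bytes > self.tc:
            self.tp -= packet_bytes
            return "exceed"
        self.tp -= packet_bytes
        self.tc -= packet_bytes
        return "conform"

policer = TwoRateThreeColorMarker(cir_bps=64_000, cbs_bytes=8_000,
                                  pir_bps=128_000, pbs_bytes=16_000)
print([policer.mark(1500, now=policer.last) for _ in range(12)])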
 
Traffic Shaping
Traffic shaping is a rate-limiting method similar to traffic policing, but it provides a buffering facility for packets that exceed the configured limit. Once a packet exceeds the data rate, it is queued in the buffer to be delivered at a later time.
Bandwidth enforcement can be performed independently in the downlink and uplink directions. If no more buffer space is available for subscriber data, the system can be configured to either drop the packets or retain them for the next scheduled traffic session.
Important: For more information on traffic policing and shaping, refer to the Traffic Policing and Shaping chapter in the System Enhanced Feature Configuration Guide.
 
Layer 2 Traffic Management (VLANs)
Virtual LANs (VLANs) provide greater flexibility in the configuration and use of contexts and services.
 
Call/Session Procedure Flows
This section provides information on the function of the HSGW in an eHRPD network and presents call procedure flows for different stages of session setup.
The following topics and procedure flows are included:
 
Initial Attach with IPv6/IPv4 Access
This section describes the procedure of initial attach and session establishment for a subscriber (UE).
 
Initial Attach with IPv6/IPv4 Access Call Flow
Initial Attach with IPv6/IPv4 Access Call Flow Description
 
PMIPv6 Lifetime Extension without Handover
This section describes the procedure of a session registration lifetime extension by the P-GW without the occurrence of a handover.
 
PMIPv6 Lifetime Extension (without handover) Call Flow
PMIPv6 Lifetime Extension (without handover) Call Flow Description
 
PDN Connection Release Initiated by UE
This section describes the procedure of a session release by the UE.
 
PDN Connection Release by the UE Call Flow
PDN Connection Release by the UE Call Flow Description
 
PDN Connection Release Initiated by HSGW
This section describes the procedure of a session release by the HSGW.
 
PDN Connection Release by the HSGW Call Flow
PDN Connection Release by the HSGW Call Flow Description
 
PDN Connection Release Initiated by P-GW
This section describes the procedure of a session release by the P-GW.
 
PDN Connection Release by the HSGW Call Flow
PDN Connection Release by the HSGW Call Flow Description
 
Supported Standards
The HSGW complies with the following standards.
 
 
3GPP References
 
 
 
3GPP2 References
 
 
 
IETF References
 
 
 
Object Management Group (OMG) Standards
 
 
