Enhanced Charging Service Overview

This chapter provides an overview of the Enhanced Charging Service (ECS) in-line service, also known as Active Charging Service (ACS).

Introduction

The Enhanced Charging Service (ECS) is an in-line service feature that enables operators to reduce billing-related costs and to offer tiered, detailed, and itemized billing to their subscribers. Using shallow and deep packet inspection (DPI), ECS allows operators to charge subscribers based on actual usage, number of bytes, premium services, location, and so on. ECS also generates charging records for postpaid and prepaid billing systems.

The ECS is an enhanced or extended premium service. The System Administration Guide provides basic system configuration information, and the product administration guides provide information to configure the core network service functionality. It is recommended that you select the configuration example that best meets your service model, and configure the required elements for that model before using the procedures in this document.

Platform Requirements

The ECS in-line service runs on Cisco® ASR 5x00 chassis running StarOS. The chassis can be configured with a variety of components to meet specific network deployment requirements. For additional information, refer to the Installation Guide for the chassis and/or contact your Cisco account representative.

License Requirements

The ECS in-line service is a licensed Cisco feature. Separate session and feature licenses may be required. Contact your Cisco account representative for detailed information on specific licensing requirements.

For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

Basic Features and Functionality

This section describes basic features of the ECS in-line service.

Shallow Packet Inspection

Shallow packet inspection is the examination of the layer 3 (IP header) and layer 4 (for example, UDP or TCP header) information in the user plane packet flow. Shallow packet analyzers typically determine the destination IP address or port number of a terminating proxy.

Deep Packet Inspection

Deep-packet inspection is the examination of layer 7, which contains Uniform Resource Identifier (URI) information. In some cases, layer 3 and 4 analyzers that identify a trigger condition are insufficient for billing purposes, so layer 7 examination is used. In contrast to shallow packet analyzers, deep-packet analyzers typically identify the true destination behind a terminating proxy.

For example, the Web site “www.companyname.com” corresponds to the IP address 1.1.1.1. The stock quote page (www.companyname.com/quotes) and the company page (www.companyname.com/business) are chargeable services, while all other pages on this site are free. Because all parts of this Web site correspond to the destination address 1.1.1.1 and port number 80 (HTTP), chargeable user traffic can be determined only through the actual URL (layer 7).

DPI performs packet inspection beyond layer 4 inspection and is typically deployed for:

  • Detection of URI information at level 7 (for example, HTTP, WTP, RTSP URLs)
  • Identification of true destination in the case of terminating proxies, where shallow packet inspection would only reveal the destination IP address/port number of a terminating proxy such as the OpCo’s WAP gateway
  • De-encapsulation of nested traffic encapsulation, for example MMS-over-WTP/WSP-over-UDP/IP
  • Verification that traffic actually conforms to the protocol the layer 4 port number suggests

Charging Subsystem

ECS has protocol analyzers that examine uplink and downlink traffic. Incoming traffic goes into a protocol analyzer for packet inspection. Routing rule definitions (ruledefs) are applied to determine which packets to inspect. This traffic is then sent to the charging engine, where charging rule definitions are applied to perform actions such as block, redirect, or transmit. These analyzers also generate usage records for the billing system.

Traffic Analyzers

Traffic analyzers in ECS are based on configured ruledefs. Ruledefs used for traffic analysis analyze packet flows and create usage records. The usage records are created per content type and forwarded to a prepaid server or to a billing system.

The Traffic Analyzer function can perform shallow (layer 3 and layer 4) and deep (above layer 4) packet inspection of IP packet flows. It is able to correlate all layer 3 packets (and bytes) with higher layer trigger criteria (for example, a URL detected in an HTTP header). It also performs stateful packet inspection for complex protocols like FTP, RTSP, and SIP that dynamically open ports for the data path, so that user plane payload can be differentiated into “categories”. Traffic analyzers can also detect video streaming over RTSP, as well as image downloads and MMS over HTTP, so that differential treatment can be given to Vcast traffic.

Traffic analyzers work at the application level as well, and perform event-based charging without the interference of the service platforms.

The ECS content analyzers can inspect and maintain state across various protocols at all layers of the OSI stack. The ECS supports inspecting and analyzing the following protocols:

  • Domain Name System (DNS)
  • File Transfer Protocol (FTP)
  • Hyper Text Transfer Protocol (HTTP)
  • Internet Control Message Protocol (ICMP)
  • Internet Control Message Protocol version 6 (ICMPv6)
  • Internet Message Access Protocol (IMAP)
  • Internet Protocol version 4 (IPv4)
  • Internet Protocol version 6 (IPv6)
  • Multimedia Messaging Service (MMS)
  • Post Office Protocol version 3 (POP3)
  • RTP Control Protocol/Real-time Transport Control Protocol (RTCP)
  • Real-time Transport Protocol (RTP)
  • Real Time Streaming Protocol (RTSP)
  • Session Description Protocol (SDP)
  • Secure-HTTP (S-HTTP)
  • Session Initiation Protocol (SIP)
  • Simple Mail Transfer Protocol (SMTP)
  • Transmission Control Protocol (TCP)
  • User Datagram Protocol (UDP)
  • Wireless Session Protocol (WSP)
  • Wireless Transaction Protocol (WTP)

Apart from the above protocols, ECS also supports analysis of downloaded file characteristics (for example, file size, chunks transferred, and so on) from file transfer protocols such as HTTP and FTP.

How ECS Works

This section describes the base components of the ECS solution, and the roles they play.

Content Service Steering

Content Service Steering (CSS) enables directing selective subscriber traffic into the ECS subsystem (in-line services internal to the system) based on the content of the data presented by mobile subscribers.

CSS uses Access Control Lists (ACLs) to redirect selective subscriber traffic flows. ACLs control the flow of packets into and out of the system. ACLs consist of “rules” (ACL rules) or filters that control the action taken on packets matching the filter criteria.

ACLs are configurable on a per-context basis and apply to a subscriber through either a subscriber profile (for PDSN) or an APN profile (for GGSN) in the destination context.

IMPORTANT:

For more information on CSS, refer to the Content Service Steering chapter of the System Administration Guide.

For more information on ACLs, refer to the IP Access Control Lists chapter of the System Administration Guide.

Protocol Analyzer

The Protocol Analyzer is the software stack responsible for analyzing the individual protocol fields and states during packet inspection.

The Protocol Analyzer performs two types of packet inspection:

  • Shallow Packet Inspection—Inspection of the layer 3 (IP header) and layer 4 (for example, UDP or TCP header) information.
  • Deep Packet Inspection—Inspection of layer 7 and 7+ information. DPI functionality includes:
    • Detection of Uniform Resource Identifier (URI) information at level 7 (for example, HTTP, WTP, and RTSP URLs)
    • Identification of true destination in the case of terminating proxies, where shallow packet inspection would only reveal the destination IP address/port number of a terminating proxy
    • De-encapsulation of upper layer protocol headers, such as MMS-over-WTP, WSP-over-UDP, and IP-over-GPRS
    • Verification that traffic actually conforms to the protocol the layer 4 port number suggests

The Protocol Analyzer performs a stateful packet inspection of complex protocols, such as FTP, RTSP, and SIP, which dynamically open ports for the data path, so the payload can be classified according to content.

The Protocol Analyzer is also capable of determining which layer 3 packets belong (either directly or indirectly) to a trigger condition (for example, URL). In cases where the trigger condition cannot be uniquely defined at layers 3 and 4, then the trigger condition must be defined at layer 7 (that is, a specific URL must be matched).

Protocol Analyzer Software Stack

Every packet that enters the ECS subsystem must first go through the Protocol Analyzer software stack, which comprises individual protocol analyzers for each of the supported protocols.


Figure 1. ECS Protocol Analyzer Stack

Note that protocol names are used to represent the individual protocol analyzers.

Each analyzer consists of fields and states that are compared to the protocol-fields and protocol-states in the incoming packets to determine packet content.

Rule Definitions

Rule definitions (ruledefs) are user-defined expressions based on protocol fields and protocol states, which define what actions to take on packets when specified field values match.

Rule expressions may contain a number of operator types (string, =, >, and so on) based on the data type of the operand. For example, “string” type expressions like URLs and host names can be used with comparison operators like “contains”, “!contains”, “=”, “!=”, “starts-with”, “ends-with”, “!starts-with” and “!ends-with”. In 14.0 and later releases, ECS also supports regular expression based rule matching. For more information, refer to the Regular Expression Support for Rule Matching section.

Integer type expressions like “packet size” and “sequence number” can be used with comparison operators like “=”, “!=”, “>=”, “<=”. Each ruledef configuration consists of multiple expressions applicable to any of the fields or states supported by the respective analyzers.
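
For instance, the following is a minimal ruledef sketch combining a string expression and an integer expression; the ip total-length field name used here is an assumption for illustration and should be verified against the command reference for your release:

ruledef large-http-downloads
   http url ends-with .zip
   ip total-length >= 512
   exit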

Ruledefs are of the following types:

  • Routing Ruledefs—Routing ruledefs are used to route packets to content analyzers. Routing ruledefs determine which content analyzer to route the packet to when the protocol fields and/or protocol-states in ruledef expression are true. Up to 256 ruledefs can be configured for routing.
  • Charging Ruledefs—Charging ruledefs are used to specify what action to take based on the analysis done by the content analyzers. Actions can include redirection, charge value, and billing record emission. Up to 2048 charging ruledefs can be configured in the system.
  • Post-processing Ruledefs—Used for post-processing purposes. Enables processing of packets even if the rule matching for them has been disabled.

IMPORTANT:

When a ruledef is created, if the rule-application is not specified for the ruledef, by default the system considers the ruledef as a charging ruledef.

Ruledefs support a priority configuration to specify the order in which the ruledefs are examined and applied to packets. The names of the ruledefs must be unique across the service or globally. A ruledef can be used across multiple rulebases.

IMPORTANT:

Ruledef priorities control the flow of the packets through the analyzers and the order in which the charging actions are applied. The ruledef with the lowest priority number is invoked first. For routing ruledefs, it is important that lower level analyzers (such as the TCP analyzer) be invoked prior to the related analyzers in the next level (such as the HTTP and S-HTTP analyzers), as the next level of analyzers may require access to resources or information from the lower level. Priorities are also important for charging ruledefs, as the action defined in the first matched charging rule applies to the packet and the ECS subsystem disregards the rest of the charging ruledefs.

Each ruledef can be used across multiple rulebases, and up to 2048 ruledefs can be defined in a charging service.

Ruledefs have an expression part, which matches specific packets based upon analyzer field variables. This is a boolean (analyzer_field operator value) expression that tests for analyzer field values.

Example

The following is an example of a ruledef to match packets:

http url contains cnn.com

–or–

http any-match = TRUE

In the following example the ruledef named “rule-for-http” routes packets to the HTTP analyzer:

route priority 50 ruledef rule-for-http analyzer http

Here, rule-for-http has been defined with the expression: tcp either-port = 80

The following example applies actions where:

  • Subscribers whose packets contain the expression “bbc-news” are not charged for the service.
  • All other subscribers are charged according to the duration of use of the service.
ruledef port-80
   tcp either-port = 80
   rule-application routing
   exit
ruledef bbc-news
   http url starts-with http://news.bbc.co.uk
   rule-application charging
   exit
ruledef catch-all
   ip any-match = TRUE
   rule-application charging
   exit
charging-action free-site
   content-id 100
   [ ... ]
   exit
charging-action charge-by-duration
   content-id 101
   [ ... ]
   exit
rulebase standard
   [ ... ]
   route priority 1 ruledef port-80 analyzer http
   action priority 101 ruledef bbc-news charging-action free-site
   action priority 1000 ruledef catch-all charging-action charge-by-duration
   [ ... ]
   exit

The following figure illustrates how ruledefs interact with the Protocol Analyzer Stack and Action Engine to produce charging records.


Figure 2. ECS In-line Service Processing

Packets entering the ECS subsystem must first pass through the Protocol Analyzer Stack, where routing ruledefs are applied to determine which packets to inspect. The output from this inspection is then passed to the charging engine, where charging ruledefs are applied to perform actions on the output.

Routing Ruledefs and Packet Inspection

The following figure and the steps describe the details of routing ruledef application during packet inspection.


Figure 3. Routing Ruledefs and Packet Inspection
  1. The packet is redirected to ECS based on the ACLs in the subscriber’s template/APN and packets enter ECS through the Protocol Analyzer Stack.
  2. Packets entering Protocol Analyzer Stack first go through a shallow inspection by passing through the following analyzers in the listed order:
    1. Bearer Analyzer
    2. IP Analyzer
    3. ICMP, TCP, or UDP Analyzer as appropriate

      IMPORTANT:

      In the current release traffic routes to the ICMP, TCP, and UDP analyzers by default. Therefore, defining routing ruledefs for these analyzers is not required.

  3. The fields and states found in the shallow inspection are compared to the fields and states defined in the routing ruledefs in the subscriber’s rulebase. The ruledefs’ priority determines the order in which the ruledefs are compared against packets.
  4. When the protocol fields and states found during the shallow inspection match those defined in a routing ruledef, the packet is routed to the appropriate layer 7 or 7+ analyzer for deep-packet inspection.
  5. After the packet has been inspected and analyzed by the Protocol Analyzer Stack:
    1. The packet resumes normal flow and continues through the rest of the ECS subsystem.
    2. The output of that analysis flows into the charging engine, where an action can be applied. Applied actions include redirection, charge value, and billing record emission.

Charging Ruledefs and the Charging Engine

This section describes details of how charging ruledefs are applied to the output from the Protocol Analyzer Stack.

The following figure and the steps that follow describe the process of charging ruledefs and charging engines.


Figure 4. Charging Ruledefs and Charging Engine
  1. In the Classification Engine, the output from the deep-packet inspection is compared to the charging ruledefs. The priority configured in each charging ruledef specifies the order in which the ruledefs are compared against the packet inspection output.
  2. When a field or state from the output of the deep-packet inspection matches a field or state defined in a charging ruledef, the ruledef action is applied to the packet. Actions can include redirection, charge value, or billing record emission. It is also possible that a match does not occur and no action will be applied to the packet at all.

Group-of-Ruledefs

Group-of-Ruledefs enable grouping ruledefs into categories. When a group-of-ruledefs is configured in a rulebase and any of the ruledefs within the group matches, the specified charging-action is applied and any further action instances are not processed.

A group-of-ruledefs may contain optimizable ruledefs. A group is optimized only if all the ruledefs in the group-of-ruledefs can be optimized and the group is included in a rulebase that has optimization turned on.

When a new ruledef is added, the system checks whether it is included in any group-of-ruledefs and whether it needs to be optimized.

The group-of-ruledefs configuration enables setting the application for the group (group-of-ruledefs-application parameter). When set to gx-alias, the group-of-ruledefs is expanded only to extract the rule names (with their original priorities and charging actions), ignoring the priority field set within the group. This is an optimization over the PCRF-to-PCEF interface for cases where a large set of predefined rules must be installed or removed at the same time. Though this is possible over the Gx interface (with a limit of 256 rules), encoding each name requires significant PCRF resources and increases the message size.

This aliasing function enables grouping a set of ruledef names under a single alias name that, when passed over Gx as a Charging-Rule-Base-Name AVP, is expanded to the list of names, with each rule handled individually. From the PCEF point of view, this is transparent, as if the PCRF had activated (or deactivated) each of those rules by name.
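
The following is a minimal configuration sketch of a group-of-ruledefs and its use in a rulebase; the group name, member ruledefs, and priorities are illustrative, and the exact command syntax should be verified against the command reference for your StarOS release:

group-of-ruledefs news-sites
   add-ruledef priority 10 ruledef bbc-news
   add-ruledef priority 20 ruledef cnn-news
   group-of-ruledefs-application gx-alias
   exit
rulebase standard
   action priority 200 group-of-ruledefs news-sites charging-action free-site
   exit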

Rulebase

A rulebase allows grouping one or more rule definitions together to define the billing policies for individual subscribers or groups of subscribers.

A rulebase is a collection of ruledefs and their associated billing policy. The rulebase determines the action to be taken when a rule is matched. A maximum of 512 rulebases can be specified in the ECS service.

It is possible to use the same ruledef with different actions. For example, a Web site might be free for postpaid users and charged based on volume for prepaid users. Rulebases can also be used to apply the same ruledefs to several subscribers, which eliminates the need to have unique ruledefs for each subscriber.
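
For example, the following sketch binds the same ruledef to different charging actions in two rulebases; the names and content IDs are illustrative, and other billing parameters are omitted:

ruledef companyname-site
   http url starts-with http://www.companyname.com
   exit
charging-action free-of-charge
   content-id 200
   exit
charging-action charge-by-volume
   content-id 201
   exit
rulebase postpaid
   action priority 10 ruledef companyname-site charging-action free-of-charge
   exit
rulebase prepaid
   action priority 10 ruledef companyname-site charging-action charge-by-volume
   exit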

ECS Deployment and Architecture

The following figure shows a typical example of ECS deployment in a mobile data environment.


Figure 5. Deployment of ECS in a Mobile Data Network

The following figure depicts the ECS architecture managed by the Session Controller (SessCtrl) and Session Manager (SessMgr) subsystems.


Figure 6. ECS Architecture

Enhanced Features and Functionality

This section describes enhanced features supported in ECS.

IMPORTANT:

The features described in this section may be licensed Cisco features. A separate feature license may be required. Contact your Cisco account representative for detailed information on specific licensing requirements. For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

Session Control in ECS

The ECS, in conjunction with the Session Control subsystem on the Cisco ASR 5x00 chassis, provides a high-level network flow and bandwidth control mechanism. The ECS Session Control feature uses the interaction between the SessMgr subsystem and the Static Traffic Policy Infrastructure support of the chassis to maximize network resource usage and enhance the overall user experience.

This feature provides the following functionality:

  • Flow Control Functionality—Provides the ability to define and manage the number of simultaneous IP-based sessions and/or the number of simultaneous instances of a particular application permitted for the subscriber. When a subscriber begins a packet data session, the system is either pre-configured with, or receives from the AAA server, a subscriber profile indicating the maximum number of simultaneous flows that the subscriber or an application is allowed to initiate. If the subscriber exceeds the allowed number of flows for the subscriber or type of application, the system blocks, redirects, discards, or terminates the traffic. The following types of flow quota are available for Flow Control Functionality:
    • Subscriber-Level Session Quota—Configurable on a per-rulebase basis
    • Application-Level Session Quota—Configurable on a per-charging-action basis
  • Bandwidth Control Functionality—Allows the operator to apply rate limits to potentially bandwidth-intensive and service-disruptive applications. Using this feature, the operator can police and prioritize subscribers’ traffic to ensure that no single subscriber’s or group of subscribers’ traffic negatively impacts other subscribers’ traffic. For example, the system can be pre-configured to detect that a subscriber is running a peer-to-peer (P2P) file sharing program and to limit the amount of bandwidth available to that subscriber for the P2P application. The system obtains the bandwidth quota limit from a PDP context parameter or from the individual subscriber profile. If the subscriber’s P2P traffic usage exceeds the pre-configured limit, Session Control discards the traffic for this subscriber session. The Session Control feature in ECS also provides controls to police any traffic to/from a subscriber or application handled by the chassis.

Time and Flow-based Bearer Charging in ECS

ECS supports Time-based Charging (TBC) to charge customers on either actual consumed time or total session time usage during a subscriber session. TBC generates charging records based on the actual time difference between receiving the two packets, or by adding idle time when no packet flow occurs.

ECS also supports Flow-based Charging (FBC) based on flow category and type.

PDP context charging allows the system to collect charging information related to data volumes sent to and received by the MS. This collected information is categorized by the QoS applied to the PDP context. FBC integrates a Tariff Plane Function (TPF) to the charging capabilities that categorize the PDP context data volume for specific service data flows.

Service data flows are defined by charging rules. The charging rules use protocol characteristics such as:

  • IP address
  • TCP port
  • Direction of flow
  • Number of flows across system
  • Number of flows of a particular type

FBC provides multiple service data flow counts, one each per defined service data flow. When FBC is configured in the ECS, PDP context online charging is achieved by FBC online charging using only the wildcard service data flow.

When further service data flows are specified, traffic is categorized, and counted, according to the service data flow specification. A wildcard can be applied to service data flows that do not match any of the specific service data flows.

The following are the chargeable events for FBC:

  • Start of PDP context—Upon encountering this event, a Credit Control Request (CCR) Start, indicating the start of the PDP context, is sent towards the Online Charging Service. The data volume is captured per service data flow for the PDP context.
  • Start of service data flow—An interim CCR is generated for the PDP context, indicating the start of a new service data flow, and a new volume count for this service data flow is started.
  • Termination of service data flow—The service data flow volume counter is closed, and an interim CCR is generated towards the Online Charging Service, indicating the end of the service data flow and the final volume count for this service data flow.
  • End of PDP context—Upon encountering this event, a CCR stop, indicating the end of the PDP context, is sent towards the Online Charging Service together with the final volume counts for the PDP context and all service data flows.
  • Expiration of an operator configured time limit per PDP context—This event triggers the emission of an interim CCR, indicating the elapsed time and the accrued data volume for the PDP context since the last report.
  • Expiration of an operator configured time limit per service data flow—The service data flow volume counter is closed and an interim CCR is sent to the Online Charging Service, indicating the elapsed time and the accrued data volume since the last report for that service data flow. A new service data flow container is opened if the service data flow is still active.
  • Expiration of an operator configured data volume limit per PDP context—This event triggers the emission of an interim CCR, indicating the elapsed time and the accrued data volume for the PDP context since the last report.
  • Expiration of an operator configured data volume limit per service data flow—The service data flow volume counter is closed and an interim CCR is sent to the Online Charging Service, indicating the elapsed time and the accrued data volume since the last report for that service data flow. A new service data flow container is opened if the service data flow is still active.
  • Change of charging condition—When QoS change, tariff time change are encountered, all current volume counts are captured and sent towards the Online Charging Service with an interim CCR. New volume counts for all active service data flows are started.
  • Administrative intervention by the user or service can also force a chargeable event.

The file naming convention for created xDRs (EDR/UDR/FDRs) is described in the Impact on xDR File Naming section.

Fair Usage

The Fair Usage feature enables resource management at two levels:

  • Instance-Level Load Balancing—Enables load balancing of calls based on resource usage for in-line service memory allocations. If an in-line service is configured on the chassis, all resource allocation and release (memory) would involve maintaining instance-level credit usage. Sessions would be more equally distributed based on the in-line memory credits rather than the number of sessions running on individual instances.
  • Subscriber Session Resource Monitoring—Enables monitoring individual subscriber resource usage and restricting unacceptable usage of resources by subscriber sessions. Every subscriber session has a free ride until the operator-configured threshold is reached; after that, any new resource requirement may be allowed based on the entitled services and the currently available memory in the system. Once resource allocation fails for a subscriber session, the in-line service application requesting the resource manages the failure handling. Operators can configure the percentage of memory usage at which the monitor action is initiated. By default, if the feature is enabled, the monitor action is initiated at the 50% threshold. The monitor action includes allowing or failing a particular resource allocation request. Once the threshold is hit, any new resource allocation is evaluated before being allowed: the requested resource is compared against the average available memory per session, with a configurable per-session waiver (default 10%). If the instance credit usage goes below a configurable percentage of the threshold (default 5%), the monitor action is disabled to avoid a possible ping-pong effect of enabling and disabling the monitor action. Monitoring memory usage of individual subscriber sessions and providing preferential treatment is based on the services that the subscriber is entitled to, as configured in the rulebase. After the operator-configured threshold is reached, any new resource requirement may be allowed based on the following:
    • Rulebase configured for the subscriber
    • Current available memory in the system

It is recommended that the parameters be configured only after continuous evaluation and fine tuning of the system.

Content Filtering Support

ECS provides off-line content filtering support and in-line static and dynamic content filtering support to control static and dynamic data flow and content requests.

Content Filtering Server Group Support

ECS supports external Content Filtering servers through Internet Content Adaptation Protocol (ICAP) implementation between ICAP client and Active Content Filter (ACF) server (ICAP server).

ICAP is a protocol designed to support dynamic content filtering and/or content insertion and/or modification of Web pages. Designed for flexibility, ICAP allows bearer plane nodes such as firewalls, routers, or systems running ECS to interface with external content servers such as parental control (content filtering) servers to provide content filtering service support.

In-line Content Filtering Support

Content Filtering is a fully integrated, subscriber-aware in-line service available for 3GPP and 3GPP2 networks to filter HTTP and WAP requests from mobile subscribers based on the URLs in the requests. This enables operators to filter and control the content that an individual subscriber can access, so that subscribers are not inadvertently exposed to universally unacceptable content and/or content inappropriate as per the subscribers’ preferences. Content Filtering uses the Deep Packet Inspection (DPI) capabilities of ECS to discern HTTP and WAP requests.

IMPORTANT:

For more information on Content Filtering support, refer to the Content Filtering Services Administration Guide.

IP Readdressing

The IP Readdressing feature enables redirecting unknown gateway traffic based on the destination IP address of the packets to known/trusted gateways.

IP Readdressing is configured in the flow action defined in a charging action. IP readdressing works for traffic that matches a particular ruledef, and hence the charging action. IP readdressing is applicable to both uplink and downlink traffic. In the Enhanced Charging Subsystem, uplink packets are modified after packet inspection, rule matching, and so on, where the destination IP/port is determined and replaced with the readdress IP/port just before the packets are sent out. Downlink packets (containing the readdressed IP/port) are modified as soon as they are received, before packet inspection, where the source IP/port is replaced with the original server IP/port number.

For one flow from an MS, if one packet is re-addressed, then all the packets in that flow will be re-addressed to the same server. Features like DPI and rule-matching remain unaffected. Each IP address + port combination will be defined as a ruledef.

In case of IP fragmentation, packets with successful IP re-assembly will be re-addressed. However, IP fragmentation failure packets will not be re-addressed.
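
The following is a minimal sketch of an IP readdressing configuration; the addresses, port, and rule are illustrative, and the flow action syntax should be verified against the command reference for your release:

ruledef unknown-gateway
   ip server-ip-address = 10.1.1.1
   rule-application charging
   exit
charging-action readdress-to-trusted
   flow action readdress server 192.168.10.20 port 8080
   exit
rulebase standard
   action priority 30 ruledef unknown-gateway charging-action readdress-to-trusted
   exit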

Next-hop Address Configuration

ECS supports the ability to set the next-hop default gateway IP address as a charging action associated with any ruledef in a rulebase. This functionality provides more flexibility for service-based routing, allowing the next-hop default gateway to be set after initial ACL processing. This removes the need for the AAA server to send the next-hop default gateway IP address for CC-opted-in subscribers.

How it works:

  1. The next-hop address is configured in the charging action.
  2. Uplink packet sent to ECS is sent for analysis.
  3. When the packet matches a rule and the appropriate charging action is applied, the next-hop address from the charging action is copied to the packet before the packet is sent to the Session Manager.
  4. Session Manager receives the packet with the next-hop address, and uses it accordingly.
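
A sketch of the configuration described above follows; the address and names are illustrative, and the next-hop keyword should be verified against the command reference for your release:

charging-action route-via-cc-gateway
   nexthop-forwarding-address 192.168.50.1
   exit
rulebase standard
   action priority 40 ruledef port-80 charging-action route-via-cc-gateway
   exit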

Post Processing

The Post Processing feature enables processing of packets even if the rule matching for them has been disabled. This enables all the IP/TCP packets including TCP handshaking to be accounted and charged for in the same bucket as the application flow. For example, delay-charged packets for IP Readdressing and Next-hop features.

  • Readdressing of delay-charged initial hand-shaking packets.
  • Sending the delay-charged initial packets to the correct next-hop address.
  • DCCA—Taking appropriate action on retransmitted packets in case the quota was exhausted for the previous packet and a redirect request was sent. With DCCA and buffering enabled, CCA rules are matched and the charging-action decides the action (terminate flow or redirect). With DCCA and buffering disabled, post-processing rules are matched and the corresponding action is taken.
  • Content ID based ruledefs—On rule match, if content ID based ruledef and charging action are present, the rule is matched, and the new charging action will decide the action

A ruledef can be configured as a post-processing rule in the ruledef itself using rule-application of the ruledef. A rule can be charging, routing, or a post-processing rule. If the same ruledef is required to be a charging rule in one rulebase and a post-processing rule in another one, then two separate identical ruledefs must be defined.
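
As a sketch, a ruledef can be marked as a post-processing rule and bound to a charging action in the rulebase as follows; the names are assumptions, and the post-processing binding keyword should be confirmed for your release:

ruledef handshake-packets
   tcp any-match = TRUE
   rule-application post-processing
   exit
rulebase standard
   post-processing priority 1 ruledef handshake-packets charging-action charge-by-duration
   exit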

How the Post-processing Feature Works

The following steps describe how the Post-processing feature works:

  1. Charging rule-matching is done on packets and the associated charging-action is obtained.
  2. Using this charging-action the disposition-action is obtained.
  3. If the disposition action is to either buffer or discard the packets, or if it is set by the ACF, or if there are no post-processing rules, the packets are not post processed. The disposition action is applied directly on the packets. Only if none of the above conditions is true, post processing is initiated.
  4. Post-processing rules are matched, and the associated charging-action and then the disposition-action are obtained through control-charge.
  5. If both match-rule and control-charge for post processing succeed, the disposition-action obtained from post-processing is applied. Otherwise, the disposition-action obtained from charging rule-matching is used. If no disposition action is obtained by matching post-processing rules, the one obtained by matching charging-rules will be applied. Irrespective of whether post processing is required or not, even if a single post-processing rule is configured in the rulebase, post processing will be done. The following points should be considered while configuring post-processing rules for next-hop/readdressing.
    • The rules will be L3/L4 based.
    • They should be configured in post-processing rules' charging actions.
    For x-header insertion, there should either be a post-processing rule whose charging-action gives no disposition-action or the packet should not match any of the post-processing rules so that the disposition action obtained from charging-rule matching is applied.

Time-of-Day Activation/Deactivation of Rules

Within a rulebase, ruledefs/groups-of-ruledefs are assigned priorities. When packets start arriving, as per the priority order, every ruledef/group-of-ruledefs in the rulebase is eligible for matching regardless of the packet arrival time. By default, the ruledefs/groups-of-ruledefs are active all the time.

The Time-of-Day Activation/Deactivation of Rules feature uses time definitions (timedefs) to activate/deactivate static ruledefs/groups-of-ruledefs such that they are available for rule matching only when they are active.

IMPORTANT:

The time considered for timedef matching is the system’s local time.

How the Time-of-Day Activation/Deactivation of Rules Feature Works

The following steps describe how the Time-of-Day Activation/Deactivation of Rules feature enables charging according to the time of the day/time:

  1. Timedefs are created/deleted in the ACS Configuration Mode. A maximum of 10 timedefs can be created in an ECS service.
  2. Timedefs are configured in the ACS Timedef Configuration Mode. Within a timedef, timeslots specifying the day/time for activation/deactivation of rules are configured. A maximum of 24 timeslots can be configured in a timedef.
  3. In the ACS Rulebase Configuration Mode, timedefs are associated with ruledefs /groups-of-ruledefs along with the charging action. One timedef can be used with several ruledefs/group-of-ruledefs. If a ruledef/group-of-ruledefs does not have a timedef associated with it, it will always be considered as active.
  4. When a packet is received and a ruledef/group-of-ruledefs is eligible for rule matching, if a timedef is associated with the ruledef/group-of-ruledefs, the packet-arrival time is compared with the timeslots configured in the timedef before rule matching. If the packet arrived in any of the timeslots configured in the associated timedef, rule matching is undertaken; otherwise, the next ruledef/group-of-ruledefs is considered. This release does not support configuring a timeslot for a specific date. If, in a timeslot, only the time is specified, that timeslot is applicable for all days. If, for a timeslot, “start time” > “end time”, that rule spans midnight; that is, the rule is considered to be active from the current day until the next day. If, for a timeslot, “start day” > “end day”, that rule spans the current week till the end day in the next week. In the following cases a rule will be active all the time:
    • A timedef is not configured in an action priority
    • A timedef is configured in an action priority, but the named timedef is not defined
    • A timedef is defined but with no timeslots
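
The following sketch illustrates steps 1 through 3 above; the timedef name, day/time values, and rulebase binding are assumptions, and the exact timeslot syntax should be checked against the ACS Timedef Configuration Mode commands for your release:

timedef office-hours
   start day monday time 9 0 0 end day friday time 18 0 0
   exit
rulebase standard
   action priority 20 timedef office-hours ruledef port-80 charging-action charge-by-duration
   exit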

URL Filtering

The URL Filtering feature simplifies using rule definitions for URL detection.

The following configuration is currently used for hundreds of URLs:

ruledef HTTP://AB-WAP.YZ
    www url starts-with HTTP://CDAB-SUBS.OPERA-MINI.NET/HTTP://AB-WAP.YZ
    www url starts-with HTTP://AB-WAP.YZ
    multi-line-or all-lines
    exit

In the above ruledef:

  • The HTTP request for the URL “http://ab-wap.yz” is first sent to a proxy “http://cdab-subs.opera-mini.net/”.
  • The URL “http://cdab-subs.opera-mini.net/” will be configured as a prefixed URL.

Prefixed URLs are URLs of the proxies. A packet can have a URL of the proxy and the actual URL contiguously. First a packet is searched for the presence of proxy URL. If the proxy URL is found, it is truncated from the parsed information and only the actual URL (that immediately follows it) is used for rule matching and EDR generation.

The group-of-ruledefs can have rules for URLs that need to be actually searched (URLs that immediately follow the proxy URLs). That is, the group-of-prefixed-URLs will have URLs that need to be truncated from the packet information for further ECS processing, whereas, the group-of-ruledefs will have rules that need to be actually searched for in the packet.

URLs that you expect to be prefixed to the actual URL can be grouped together in a group-of-prefixed-URLs. A maximum of 64 such groups can be configured. In each such group, URLs that need to be truncated from the URL contained in the packet are specified. Each group can have a maximum of 10 such prefixed URLs. By default, all group-of-prefixed-URLs are disabled.

In the ECS rulebase, you can enable/disable the group-of-prefixed-URLs to filter for prefixed URLs.
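
A configuration sketch follows; the group name and URL are illustrative, and the rulebase enable command should be verified against the command reference for your release:

group-of-prefixed-urls operamini-proxies
   prefixed-url http://cdab-subs.opera-mini.net/
   exit
rulebase standard
   url-preprocessing bypass-proxy group-of-prefixed-urls operamini-proxies
   exit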

IMPORTANT:

A prefixed URL can be detected and stripped if it is of the type “http://www.xyz.com/http://www.abc.com”. Here, “http://www.xyz.com” will be stripped off. But in “http://www.xyz.com/www.abc.com”, it cannot detect and strip off “http://www.xyz.com” as it looks for occurrence of “http” or “https” within the URL.

TCP Proxy

The TCP Proxy feature enables the ASR 5x00 to function as a TCP proxy. TCP Proxy is intended to improve the ECS subsystem’s functionality for the Content Filtering, ICAP, RADIUS Prepaid, Redirection, Header Enrichment, Stateful Firewall, Application Detection and Control, DCCA, and Partial Application Headers features.

TCP Proxy along with other capabilities enables the ASR 5x00 to transparently split every TCP connection passing through it between sender and receiver hosts into two separate TCP connections, and relay data packets from the sender host to the receiver host via the split connections. This results in smaller bandwidth delay and improves TCP performance.

The TCP Proxy solution comprises two main components:

  • User-level TCP/IP Stacks — The TCP Proxy implementation uses two instances of the User Level TCP/IP stack. The stack is integrated with ECS and acts as the packet receiving and sending entity. These stacks modify the way in which the connection is handled.
  • Proxy Application — The Proxy application binds ECS, stack, and all the applications. It is the only communicating entity between the two stacks and the various applications requiring the stack. The TCP Proxy application manages the complete connection. It detects connection request, connection establishment, and connection tear-down, and propagates the same to the applications. Whenever the buffers are full, the Proxy application also buffers data to be sent later.

On an ASR 5x00 chassis, the TCP Proxy functionality can be enabled or disabled and configured from the CLI, enabling the ASR 5x00 to perform either in proxy or non-proxy mode. TCP Proxy can either be enabled for all connections regardless of the IP address, port, or application protocol involved, or for specific flows based on the configuration, for example, TCP Proxy can be enabled for some specific ports. TCP Proxy must be enabled at rulebase level. When enabled in a rulebase, it is applied on subscribers’ flows using that rulebase.
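
As an illustration only (the exact keywords vary by release and should be confirmed in the command reference), TCP Proxy is enabled within the rulebase, for example:

rulebase standard
   tcp proxy-mode static
   exit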

TCP Proxy can be enabled in static or dynamic modes. In static mode, TCP Proxy is enabled for all server ports/flows for a rulebase. In dynamic mode (Socket Migration), TCP Proxy is enabled dynamically based on specified conditions. When TCP Proxy is started dynamically on a flow, the original client (MS) first starts the TCP connection with the final server. ECS keeps monitoring the connection and, based on a rule match or charging-action, the connection may be proxied automatically. This activity is transparent to the original client and original server. After the proxy is dynamically enabled, ECS acts as a TCP endpoint exactly as it does when the connection is statically proxied.

The functional/charging behavior of ECS for a particular connection before the dynamic proxy is started is exactly the same as when there is no proxy. After the dynamic proxy is started on the connection, the functional/charging behavior of ECS for that connection is exactly the same as the ECS static proxy behavior. While the socket migration is underway, the functional/charging behavior for that connection is exactly the same as when there is no proxy for that flow.

TCP Proxy impacts post-recovery behavior and the charging model. With TCP Proxy, all packets received from either side are charged completely. The packets that are sent out from the ECS are not considered for charging. This approach is similar to the behavior of ECS without proxy.

The following packets will be charged at ECS:

  • Uplink packets received at Gn interface
  • Downlink packets received at Gi interface

The following packets will not be considered for charging:

  • Uplink packets forwarded/sent out by ECS/Stack on the Gi interface.
  • Downlink packets forwarded/sent out by ECS/Stack on the Gn interface.

IMPORTANT:

After TCP Proxy is enabled for a connection, the connection will remain proxied for its lifetime. TCP Proxy cannot be disabled for the flow.

TCP Proxy Behavior and Limitations

The following are behavioral changes applicable to various ECS features and on other applications after enabling TCP Proxy.

  • TCP Proxy Model: Without TCP Proxy, for a particular flow, there is only a single TCP connection between the subscriber and the server. ECS is a passive entity with respect to flows; packets received on the ingress side are sent out transparently on the egress side (except where specific actions like drop are configured through the CLI). With TCP Proxy, a flow is split into two TCP connections — one between the subscriber and the proxy, and another between the chassis and the server.
  • Ingress Data Flow to Proxy: For all uplink packets, ingress flow involves completing the following steps and then enters the Gn side TCP IP Stack of proxy:
    1. IP Analysis (support for IP reassembly)
    2. Shallow/Deep Packet TCP Analysis (support for TCP OOO)
    3. Stateful Firewall Processing
    4. Application Detection and Control Processing
    5. DPI Analysis
    6. Charging Function (including rule-matching, generation of various records, and applying various configured actions)
    For all downlink packets, ingress flow would involve completing the following steps, and then enters the Gi side TCP IP Stack of proxy:
    1. IP Analysis (support for IP reassembly)
    2. Network Address Translation Processing
    3. Shallow/Deep Packet TCP Analysis (support for TCP OOO)
    4. Stateful Firewall Processing
    5. Application Detection and Control Processing
    6. DPI Analysis
    7. Charging Function (including rule-matching, generation of various records, and applying various configured actions)
  • Egress Data Flow from Proxy: All egress data flow is generated at proxy stack. For uplink packets, egress data flow would involve the following and then are sent out of the chassis:
    1. IP Analysis
    2. Shallow/Deep Packet TCP Analysis
    3. Stateful Firewall processing
    4. Network Address Translation processing
    For downlink packets, egress data flow would involve the following and then are sent out of the chassis:
    1. IP Analysis
    2. Shallow/Deep Packet TCP Analysis
    3. Stateful Firewall processing

On enabling TCP Proxy the behavior of some ECS features will get affected. For flows on which TCP Proxy is enabled it is not necessary that all the packets going out of the Gn (or Gi) interface are the same (in terms of number, size, and order) as were on Gi (or Gn).

  • IP Reassembly: If the fragments are successfully reassembled, DPI analysis is done on the reassembled packet. Without TCP Proxy, fragmented packets go out on the other side. With TCP Proxy, normal (non-fragmented) IP packets go out on the other side (so they will not be identical to the incoming fragmented packets). With or without TCP Proxy, if fragment reassembly is not successful, all the fragments are dropped, except where the received fragments were sufficient to identify the 5-tuple TCP flow and the flow had TCP Proxy disabled on it.
  • TCP OOO Processing: Without TCP Proxy, if the system is configured to send TCP out-of-order (OOO) packets out as they arrive, such packets are sent out as-is. With TCP Proxy, OOO packets coming from one side go out in order on the other side. For proxied flows, the TCP OOO expiry timer is not started, so there is no specific handling based on such timeouts. Also, TCP OOO packets are not sent to the other side until the packets are re-ordered.
  • TCP Checksum Validation: Without TCP Proxy, TCP checksum validation is optional (configurable through the "transport-layer-checksum verify-during-packet-inspection tcp" CLI command). With TCP Proxy, TCP checksum validation is always performed, irrespective of whether the CLI command is configured. If the checksum validation fails, the packet is not processed further and does not go for application layer analysis.
  • TCP Reset Packet Validation: Without TCP Proxy, a TCP Reset packet is not validated for the Seq and ACK numbers present in the segment, and the flow is cleared immediately. With TCP Proxy, TCP Reset packet validation is done: the flow is cleared only if a valid TCP Reset segment arrives. This validation is not configurable.
  • TCP Timestamp (PAWS) Validation: Without TCP Proxy, timestamp verification is not performed; even if there is a timestamp error, the packet is processed normally and goes for further analysis and rule matching. With TCP Proxy, if the connection is in the established state, timestamp validation for packets is performed. If the TCP timestamp is less than the previous timestamp, the packet is marked as a TCP error packet and is dropped; it is not analyzed further and not forwarded. This packet should match the TCP error rule (if configured). This validation is not configurable.
  • TCP Error Packets: Without TCP Proxy, ECS is a passive entity: most errors (unless configured otherwise) are ignored while parsing packets at the TCP analyzer, and such packets are allowed to pass through. With TCP Proxy, TCP error packets are dropped by the Gi and Gn side TCP IP stacks. However, since the ECS processing is already done before the packet is given to the stack, these packets are charged but not sent out by the proxy on the other end.
  • Policy Server Interaction (Gx): With TCP Proxy, application of policy function occurs on two separate TCP connections (non-proxy processed packets on Gn/Gi). Only external packets (the ones received from Radio and Internet) will be subject to policy enforcement at the box. This does not have any functional impact.
  • Credit Control Interaction (Gy): With TCP Proxy, application of the Credit Control function occurs on two separate TCP connections (non-proxy processed packets on Gn/Gi). Only external packets (the ones received from Radio and Internet) will be subject to credit control at the box. This does not have any functional impact.
  • DPI Analyzer: With TCP Proxy, application of DPI analyzer occurs on two separate TCP connections (non-proxy processed packets on Gn/Gi). Only external packets (the ones received from Radio and Internet) will be subject to DPI analyzer at the chassis. Any passive analyzer in the path would be buffering packet using the existing ECS infrastructure.
  • ITC/BW Control: With TCP Proxy, only incoming traffic is dropped, based on bandwidth calculation on ingress-side packets. The BW calculation and packet dropping are done before sending the packet to the ingress TCP IP Stack. ToS and DSCP marking is done at flow level: it can be applied only once for the whole flow, and once the ToS is marked for any packet, either due to the "ip tos" CLI command configured in the charging action or due to ITC/BW control, it remains the same for the whole flow.
  • Next Hop and VLAN-ID: Without TCP Proxy, the nexthop feature is supported per packet; that is, the nexthop address can be changed for each packet of the flow depending on the configuration in the charging action. With TCP Proxy, only flow-level next-hop is supported, so once the nexthop address is changed for any packet of the flow, it remains the same for the complete flow. The same is the case for VLAN-ID.
  • TCP state based rules: Without TCP Proxy, there is only one TCP connection for a flow, and the TCP state based rules match the state of the subscriber stack. With TCP Proxy, there are two separate connections when TCP Proxy is enabled. TCP state ("tcp state" and "tcp previous-state") based rules match the MS state on the egress side. Two new rules (tcp proxy-state and tcp proxy-prev-state) have been added to support the existing cases of TCP state based rules. "tcp proxy-state" and "tcp proxy-prev-state" are the state of the embedded proxy server, that is, the proxy ingress side. These rules are not applicable if proxy is not enabled. Using both "tcp state" and "tcp proxy-state" in the same ruledef is allowed; if proxy is enabled, they map to the Gi side and Gn side, respectively. If TCP Proxy is not enabled, the "tcp proxy-state" and "tcp proxy-prev-state" rules will not be matched because the proxy state is not applicable. Since TCP state and previous-state rules are now matched based on the state of the Gi side connection, ECS cannot support all the existing use cases with the existing configuration. New ruledefs based on the new rules (tcp proxy-state and tcp proxy-prev-state) need to be configured to support existing use cases. Note that even with the new rules, all use cases may not be supported; for example, detection of the transition from TIME-WAIT to CLOSED state is no longer possible.
  • TCP MSS: TCP IP Stack always inserts MSS Field in the header. This causes difference in MSS insertion behavior with and without TCP Proxy.
    • TCP CFG MSS limit-if-present: If incoming SYN has MSS option present, in outgoing SYN and SYN-ACK MSS value is limited to configured MSS value (CFG MSS)
    • TCP CFG MSS add-if-not-present: If incoming SYN does not have MSS option present, in outgoing SYN and SYN-ACK MSS configured MSS value is inserted (CFG MSS)
    • TCP CFG MSS limit-if-present add-if-not-present: If incoming SYN has MSS option present, in outgoing SYN and SYN-ACK MSS value is limited to configured MSS value (CFG MSS), OR if incoming SYN does not have MSS option present, in outgoing SYN and SYN-ACK MSS configured MSS value is inserted (CFG MSS).
  • Flow Discard: Flow discard occurring on the ingress/egress path of TCP Proxy relies on TCP-based retransmissions. Any discard by payload domain applications would result in data integrity issues, as the packet might already have been charged and it may not be possible to exclude it. It is therefore recommended that applications in the payload domain (like dynamic CF and CAE readdressing) not be configured to drop packets. For example, dynamic content filtering should not be configured with the drop action. If drop is absolutely necessary, it is better to use the terminate action.
  • DSCP/IP TOS Marking: Without TCP Proxy, DSCP/IP TOS marking is supported per packet; that is, the IP TOS can be changed for each packet of the flow separately based on the configuration. With TCP Proxy, flow-level DSCP/IP TOS marking is supported, so once the IP TOS value is changed for any packet of the flow, it remains the same for the complete flow.
  • Redundancy Support (Session Recovery and ICSR): Without TCP Proxy, after recovery, non-SYN flows are not reset. With TCP Proxy, session recovery checkpointing bypasses proxied flows (currently only NAT flows support recovery of flows). If any flow is proxied for a subscriber, then after recovery (session recovery or ICSR), if any non-SYN packet is received for that subscriber, ECS sends a RESET to the sender. So, all the old flows are RESET after recovery.
  • Charging Function: Application of the charging function occurs on two separate TCP connections (non-proxy processed packets on Gn/Gi). Only external packets (the ones received from Radio and Internet) are subject to policy enforcement at the box. Hence, offline charging records generated at the charging function pertain to different connections.

X-Header Insertion and Encryption

This section describes the X-Header Insertion and Encryption features, collectively known as Header Enrichment, which enable appending headers to HTTP/WSP GET and POST request packets for use by end applications, such as mobile advertisement insertion (MSISDN, IMSI, IP address, user-customizable fields, and so on).

IMPORTANT:

In this release, the X-Header Insertion and Encryption features are supported only on the GGSN, IPSG, and P-GW.

License Requirements

X-Header Insertion and Encryption are both licensed Cisco features. A separate feature license may be required. Contact your Cisco account representative for detailed information on specific licensing requirements. For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

X-Header Insertion

This section provides an overview of the X-Header Insertion feature.

Extension header (x-header) fields are the fields not defined in RFCs or standards but can be added to headers of protocol for specific purposes. The x-header mechanism allows additional entity-header fields to be defined without changing the protocol, but these fields cannot be assumed to be recognizable by the recipient. Unrecognized header fields should be ignored by the recipient and must be forwarded by transparent proxies.

The X-Header Insertion feature enables inserting x-headers in HTTP/WSP GET and POST request packets. Operators wanting to insert x-headers in HTTP/WSP GET and POST request packets can configure rules for it. The charging-action associated with the rules contains the list of x-headers to be inserted in the packets.

For example, if you want to insert the field x-rat-type in the HTTP header with a value of rat-type, the header inserted should be:

x-rat-type: geran

where rat-type is geran for the current packet.

Configuring the X-Header Insertion feature involves the following steps (an illustrative sketch of the resulting flow follows the list):

  1. Creating/configuring a ruledef to identify the HTTP/WSP packets in which the x-headers must be inserted.
  2. Creating/configuring a rulebase and configuring the charging-action, which will insert the x-header fields into the HTTP/WSP packets.
  3. Creating/configuring the x-header format.
  4. Configuring insertion of the x-header fields in the charging action.
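The following is a minimal Python sketch of this flow, assuming a simple mapping from x-header field names to per-subscriber session variables; all names and values below (for example, the x-msisdn value and the host name) are illustrative and are not part of the StarOS configuration.

# Minimal model of the ruledef -> charging-action -> x-header-format flow.
# All names below are illustrative; they do not correspond to CLI keywords.

xheader_format = {"x-rat-type": "rat-type", "x-msisdn": "msisdn"}   # field -> session variable
session = {"rat-type": "geran", "msisdn": "19995551234"}            # per-subscriber values (example)

def is_http_get_or_post(request: str) -> bool:
    """Stand-in for a ruledef that matches HTTP GET/POST requests."""
    return request.startswith(("GET ", "POST "))

def insert_xheaders(request: str) -> str:
    """Append the configured x-header fields to the HTTP request headers."""
    if not is_http_get_or_post(request):
        return request
    head, _, body = request.partition("\r\n\r\n")
    extra = "".join(f"\r\n{name}: {session[var]}" for name, var in xheader_format.items())
    return head + extra + "\r\n\r\n" + body

print(insert_xheaders("GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
# The request now also carries "x-rat-type: geran" and "x-msisdn: 19995551234".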

X-Header Encryption

This section provides an overview of the X-Header Encryption feature.

X-Header Encryption enhances the X-header Insertion feature to increase the number of fields that can be inserted, and also enables encrypting the fields before inserting them.

If x-header insertion has already happened for an IP flow (because of any x-header format), and if the current charging-action has the first-request-only flag set, x-header insertion will not happen for that format. If the first-request-only flag is not set in a charging-action, then for that x-header format, insertion will continue happening in any further suitable packets in that IP flow.

Changes to the x-header format configuration do not trigger re-encryption for existing calls. The changed configuration is, however, applicable to new calls. The changed configuration also applies at the next re-encryption time to those existing calls for which a re-encryption timeout is specified. If encryption is enabled for a parameter while data is flowing, insertion of that parameter stops because its encrypted value is not available.

IMPORTANT:

Recovery of flows is not supported for this feature.

The following steps describe how X-Header Encryption works:

  1. X-header insertion, encryption, and the encryption certificate are configured in the CLI.
  2. When the call gets connected, and after each regeneration time, the encryption certificate is used to encrypt the strings.
  3. When a packet hits a ruledef that has x-header format configured in its charging-action, x-header insertion into that packet is done using the given x-header-format.
  4. If x-header insertion is to be done for fields that are marked for encryption, the previously encrypted value is populated for those fields.
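The following Python sketch models these steps under simplifying assumptions: a base64 stand-in replaces the certificate-based encryption, and all class, field, and parameter names are illustrative rather than actual system interfaces.

import base64, time

def encrypt(value: str) -> str:
    # Stand-in only; the real system encrypts with the configured encryption certificate.
    return base64.b64encode(value.encode()).decode()

class XHeaderEncryptionState:
    """Per-call state: encrypted values are computed at call connect and again
    at each re-encryption (regeneration) time, then reused for insertion."""

    def __init__(self, fields_to_encrypt, session, regen_period_s):
        self.fields_to_encrypt = fields_to_encrypt   # e.g. {"x-imsi": "imsi"}
        self.session = session                       # per-subscriber values
        self.regen_period_s = regen_period_s
        self._encrypted = {}
        self._next_regen = 0.0

    def refresh_if_due(self, now=None):
        now = time.time() if now is None else now
        if now >= self._next_regen:
            self._encrypted = {name: encrypt(self.session[var])
                               for name, var in self.fields_to_encrypt.items()}
            self._next_regen = now + self.regen_period_s

    def header_lines(self):
        # Insertion uses the previously encrypted values; a field without an
        # encrypted value is skipped (insertion of that field stops).
        return [f"{name}: {val}" for name, val in self._encrypted.items()]

state = XHeaderEncryptionState({"x-imsi": "imsi"}, {"imsi": "123456789012345"}, 3600)
state.refresh_if_due()
print(state.header_lines())   # ['x-imsi: MTIzNDU2Nzg5MDEyMzQ1']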

Limitations to the Header Insertion and Encryption Features

This section lists known limitations to insertion and encryption of x-header fields in HTTP and WSP headers.

The following are limitations to insertion and encryption of x-header fields in HTTP headers.

Limitations in StarOS 12.3 and earlier releases:

  • The packet size is assumed to be less than the internal MED MTU size minus the size of the header fields to be inserted. If the total length of the packet would exceed the internal MTU size after the addition of the fields, header insertion does not occur. (See the sketch following this list.)
  • Header insertion occurs for both HTTP GET and POST requests. However, for POST requests, the resulting packet size will likely be larger than for GET requests because of the message body contained in the request, so POST requests are more likely to hit the size limitation described above.
  • Header insertion does not occur for retransmitted packets.
  • Header insertion does not occur for packets with incomplete HTTP headers.
  • Header insertion does not occur for TCP OOO and IP fragmented packets.
  • If a flow has x-header insertion and IP fragments are later received for which reassembly fails, the sequence space of that segment will be mismatched.
  • ECS does not support applying more than one modifying action on an inbound packet before sending it on the outbound interface. For example, if header insertion is applied on a packet, then the same packet is not allowed to be modified for NAT/ALG and MSS insertion.
  • If a packet is buffered by DCCA/ICAP, header insertion will not occur for that packet.
  • The receive window is not considered during header enrichment; that is, if the packet exceeds the receive window after header enrichment, ECS does not truncate the packet.
  • The packet size limit is 2400 bytes; if header insertion causes the packet size to exceed this limit, the behavior is unpredictable.
  • Only those x-header fields in the header portion of the application protocol that begin with “x-” are parsed by the HTTP analyzer. In the URL and data portion of HTTP, any field can be parsed.
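As an illustration of the packet-size limitations above, the following minimal Python check models the decision to skip insertion when the enriched packet would exceed the internal MTU or the 2400-byte limit; the internal MTU value shown is an assumed example, not a system constant.

HARD_PACKET_LIMIT = 2400          # bytes; from the limitation above
INTERNAL_MED_MTU = 1500           # illustrative value only

def insertion_allowed(packet_len: int, inserted_len: int) -> bool:
    """Return True if header insertion can proceed without exceeding
    the internal MTU or the 2400-byte limit (illustrative check only)."""
    new_len = packet_len + inserted_len
    return new_len <= INTERNAL_MED_MTU and new_len <= HARD_PACKET_LIMIT

print(insertion_allowed(1450, 80))   # False: would exceed the internal MTU
print(insertion_allowed(1200, 60))   # True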

The following are limitations to insertion and encryption of x-header fields in WSP headers:

  • x-header fields are not inserted in IP fragmented packets.
  • In case of a concatenated request, x-header fields are inserted only in the first GET or POST request (if the rule matches for it); x-header fields are not inserted in the second or later GET/POST requests in the concatenated request. For example, if a packet contains ACK+GET, the x-header is inserted in the GET. However, if a packet contains GET1+GET2 and the rule matches GET2 but not GET1, the x-header is still inserted in GET2. Similarly, in the case of GET+POST, the x-header is not inserted in the POST.
  • In case of CO, x-header fields are not inserted if the WTP packets are received out of order (even after proper re-ordering).
  • If route to MMS is present, x-headers are not inserted.
  • x-headers are not inserted in a WSP POST packet when the header is segmented. This is because the POST contains a header length field that needs to be modified after the addition of x-headers; in segmented WSP headers, the header length field may be present in one packet while the header completes in another packet.
  • x-headers are not inserted in case of packets buffered at DCCA.

Accounting and Charging Interfaces

ECS supports different accounting and charging interfaces for prepaid and postpaid charging and record generation.

IMPORTANT:

Some features described in this section are licensed Cisco features. A separate feature license may be required. Contact your Cisco account representative for detailed information on specific licensing requirements. For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

Accounting Interfaces for Postpaid Service: ECS supports the following accounting interfaces for postpaid subscribers:

  • Remote Authentication Dial-In User Service (RADIUS) Interface
  • GTPP Accounting Interface

Accounting and Charging Interface for Prepaid Service: ECS supports the following Credit Control Interfaces for prepaid subscribers:

  • RADIUS Prepaid Credit Control interface
  • Diameter Prepaid Credit Control Application (DCCA) Gy Interface
  • Diameter Gx interface

Charging Records in ECS: ECS provides the following charging records for postpaid and prepaid charging:

  • GGSN-Call Detail Records (G-CDRs)
  • Enhanced GGSN-Call Detail Records (eG-CDRs)
  • Event Detail Records (EDRs)
  • Usage Detail Records (UDRs)

GTPP Accounting

ECS enables the collection of counters for different types of data traffic and includes that data in CDRs that are sent to a Charging Gateway Function (CGF).

For more information on GTPP accounting, refer to the AAA and GTPP Interface Administration and Reference.

RADIUS Accounting and Credit Control

The Remote Authentication Dial-In User Service (RADIUS) interface in ECS is used for the following purposes:

  • Subscriber Category Request—ECS obtains the subscriber category from the AAA server (either prepaid or postpaid) when a new data session is detected. The AAA server used for the subscriber category request can be different from the AAA server used for service authorization and accounting.
  • Service Access Authorization—ECS requests access authorization for a specific subscriber and a newly detected data session. The AAA server is the access Policy Decision Point and the ECS the Policy Enforcement Point.
  • On-line Service Accounting (Prepaid)—ECS reports service usage to the AAA server. The AAA server acts as a prepaid control point and the ECS as the client. Accounting can be applied to a full prepaid implementation, or simply used to keep ECS updated on the balance level and to trigger a redirection if the subscriber balance reaches a low level.

Diameter Accounting and Credit Control

The Diameter Credit Control Application (DCCA) is used to implement real-time online or offline charging and credit control for a variety of services, such as network access, messaging services, and download services.

In addition to being a general solution for real-time cost and credit control, DCCA includes these features:

  • Real-time Rate Service Information—DCCA can verify when end subscribers' accounts are exhausted or expired; or deny additional chargeable events.
  • Support for Multiple Services—DCCA supports the usage of multiple services within one subscriber session. Multiple service support includes:
    • The ability to identify and process the service or group of services that are subject to different cost structures.
    • Independent credit control of multiple services in a single credit control sub-session.

Gx Interface Support

The Gx interface is used in IMS deployments in GPRS/UMTS networks. Gx interface support on the system enables wireless operators to intelligently charge for the services accessed, depending on the service type and parameters, using rules. It also provides support for IP Multimedia Subsystem (IMS) authorization in a GGSN service. The goal of the Gx interface is to provide network-based QoS control as well as dynamic charging rules on a per-bearer basis for an individual subscriber. The Gx interface is needed, in particular, to control and charge multimedia applications.

IMPORTANT:

For more information on Gx interface support, see the Gx Interface Support appendix in the administration guide for the product that you are deploying.

Gy Interface Support

The Gy interface provides a standardized Diameter interface for real-time content-based charging of data services. It is based on the 3GPP standards and relies on quota allocation.

It provides an online charging interface that works with the ECS Deep Packet Inspection feature. With Gy, customer traffic can be gated and billed in an “online” or “prepaid” style. Both time- and volume-based charging models are supported. In all these models, differentiated rates can be applied to different services based on shallow or deep-packet inspection.

Gy is a Diameter interface. As such, it is implemented atop, and inherits features from, the Diameter Base Protocol. The system supports the applicable base network and application features, including directly connected, relayed or proxied DCCA servers using TLS or plain text TCP.

In the simplest possible installation, the system will exchange Gy Diameter messages over Diameter TCP links between itself and one “prepay” server. For a more robust installation, multiple servers would be used. These servers may optionally share or mirror a single quota database so as to support Gy session failover from one server to the other. For a more scalable installation, a layer of proxies or other Diameter agents can be introduced to provide features such as multi-path message routing or message and session redirection features.

The Diameter Credit Control Application (DCCA), which resides as part of ECS, manages the credit and quota for a subscriber.

IMPORTANT:

For more information on Gy interface support, see the Gy Interface Support appendix in the administration guide for the product that you are deploying.

Event Detail Records (EDRs)

Event Detail Records (EDRs) are usage records with support to configure content information, format, and generation triggers by the system administrative user.

EDRs are generated according to explicit action statements in rule commands. Several different EDR schema types, each composed of a series of analyzer parameter names, can be specified. EDRs are written in CSV format at the time of each event. EDRs are stored in timestamped files that can be downloaded via SFTP from the configured context.

EDRs are generated on a per-flow basis; as such, they capture all bytes transmitted over that flow, including retransmissions.

EDR format

The EDRs can be generated in comma separated values (CSV) format as defined in the traffic analysis rules.

IMPORTANT:

In EDRs, the maximum field length for normal and escaped strings is 127 characters. If a field’s value is longer than 127 characters, it is truncated to 127 characters in the EDR.
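The following minimal Python sketch illustrates the CSV formatting and the 127-character truncation described above; the field values are examples only.

MAX_FIELD_LEN = 127   # maximum length for normal and escaped strings in an EDR

def format_edr_line(fields):
    """Write one EDR record in CSV format, truncating over-long field values."""
    return ",".join(str(f)[:MAX_FIELD_LEN] for f in fields)

url = "http://www.example.com/" + "a" * 200        # longer than 127 characters
line = format_edr_line([url, 1024, 2048, "http"])
print(len(line.split(",")[0]))                      # 127: the URL was truncated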

Flow-overflow EDR

Flow-overflow EDR or Summary FDR is a feature to count the data bytes from the subscriber that are missed due to various reasons in ECS.

If a condition that affects the callline occurs (a flow end-condition such as hagr or handoff) and flow-overflow EDR generation is enabled, an extra EDR is generated. This EDR reflects the number of bytes/packets that were transferred from/to the subscriber for which ECS did not allocate a data session. This extra EDR is the “flow-overflow” EDR, or Summary FDR.

The extra EDR is generated if all of the following are true:

  • A subscriber-affecting condition occurs (session-end, hand-off, hagr)
  • Flow-overflow EDR generation is enabled
  • EDR generation on session-end, hand-off, or hagr is enabled
  • The number of bytes/packets for the flow-overflow EDR is non-zero

The byte/packet count is printed as part of the “sn-volume-amt” attribute in the EDR. Hence, this attribute must be configured in the EDR format.
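A compact Python sketch of the conditions above; all names are illustrative.

def should_generate_flow_overflow_edr(end_condition, flow_overflow_enabled,
                                      edr_on_event_enabled, unaccounted_bytes,
                                      unaccounted_pkts):
    """All of the listed conditions must hold for the extra
    ("flow-overflow") EDR to be generated."""
    return (end_condition in ("session-end", "hand-off", "hagr")
            and flow_overflow_enabled
            and edr_on_event_enabled
            and (unaccounted_bytes > 0 or unaccounted_pkts > 0))

# The unaccounted byte/packet counts are reported in the sn-volume-amt attribute.
print(should_generate_flow_overflow_edr("hand-off", True, True, 4096, 3))   # True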

EDR Generation in Flow-end and Transaction Complete Scenarios with sn-volume Fields

“sn-volume-amt” counters will be re-initialized only when the fields are populated in EDRs. For example, consider the following two EDR formats:

edr-format edr1
   rule-variable http url priority 10
   attribute sn-volume-amt ip bytes uplink priority 500
   attribute sn-volume-amt ip bytes downlink priority 510
   attribute sn-volume-amt ip pkts uplink priority 520
   attribute sn-volume-amt ip pkts downlink priority 530
   attribute sn-app-protocol priority 1000
   exit
edr-format edr2
   rule-variable http url priority 10
   attribute sn-app-protocol priority 1000
   exit

The “sn-volume-amt” counters will be re-initialized only if these fields are populated in the EDRs. If edr2 is generated, these counters will not be re-initialized; they will be re-initialized only when edr1 is generated. Also, note that only those counters which are populated in the EDR will be re-initialized. For example, consider the following EDR format:

edr-format edr3
   rule-variable http url priority 10
   attribute sn-volume-amt ip bytes uplink priority 500
   attribute sn-volume-amt ip bytes downlink priority 510
   attribute sn-app-protocol priority 1000
   exit

If edr3 is generated, only the uplink-bytes and downlink-bytes counters are re-initialized; the uplink-packets and downlink-packets counters retain their previous values until those fields are populated (for example, when edr1 is generated).
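The following Python sketch models this selective re-initialization, using the field sets of edr1, edr2, and edr3 from the examples above; the counter names are illustrative.

# Running sn-volume-amt counters for a flow.
counters = {"bytes_uplink": 1000, "bytes_downlink": 5000,
            "pkts_uplink": 10, "pkts_downlink": 40}

# Which sn-volume-amt fields each EDR format populates (mirrors edr1/edr2/edr3).
edr_formats = {
    "edr1": ["bytes_uplink", "bytes_downlink", "pkts_uplink", "pkts_downlink"],
    "edr2": [],
    "edr3": ["bytes_uplink", "bytes_downlink"],
}

def generate_edr(fmt):
    """Emit the populated fields and re-initialize only those counters."""
    record = {field: counters[field] for field in edr_formats[fmt]}
    for field in edr_formats[fmt]:
        counters[field] = 0
    return record

generate_edr("edr3")
print(counters)   # byte counters reset; packet counters keep their previous values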

For the voice call duration SIP reporting requirements, the ECS SIP analyzer keeps a timestamp of the first INVITE that it sees. It also keeps a timestamp when it sees a 200 OK for a BYE. When the 200 OK for a BYE is seen, the SIP analyzer triggers creation of an EDR of type ACS_EDR_VOIP_CALL_END_EVENT. This is also triggered at the time of SIP flow termination if no 200 OK for a BYE is seen; in that case, the last packet time is used in place of the 200 OK BYE timestamp. The EDR generation logic calculates the call duration based on the INVITE and end timestamps. It also accesses the child RTP/RTCP flows to calculate the combined uplink/downlink byte/packet counts and sets them in the appropriate fields.
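The following Python sketch illustrates the duration and volume calculation described above; the field names and flow structure are illustrative and do not represent the actual EDR schema.

def voip_call_end_edr(invite_ts, bye_ok_ts, last_packet_ts, rtp_flows):
    """Build the ACS_EDR_VOIP_CALL_END_EVENT fields: call duration from the
    first INVITE to the 200 OK for BYE (or the last packet time if no
    200 OK for BYE was seen), plus combined RTP/RTCP volume counts."""
    end_ts = bye_ok_ts if bye_ok_ts is not None else last_packet_ts
    return {
        "call-duration": end_ts - invite_ts,
        "uplink-bytes": sum(f["ul_bytes"] for f in rtp_flows),
        "downlink-bytes": sum(f["dl_bytes"] for f in rtp_flows),
        "uplink-pkts": sum(f["ul_pkts"] for f in rtp_flows),
        "downlink-pkts": sum(f["dl_pkts"] for f in rtp_flows),
    }

flows = [{"ul_bytes": 24000, "dl_bytes": 26000, "ul_pkts": 150, "dl_pkts": 160}]
print(voip_call_end_edr(invite_ts=100.0, bye_ok_ts=190.0,
                        last_packet_ts=188.5, rtp_flows=flows))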

Usage Detail Records (UDRs)

Usage Detail Records (UDRs) contain accounting information based on usage of service by a specific mobile subscriber. UDRs are generated based on the content-id for the subscriber, which is part of charging action. The fields required as part of usage data records are configurable and stored in the System Configuration Task (SCT).

UDRs are generated on any of the following triggers: time threshold, volume threshold, handoff, or call termination. When any of these events occurs, the UDR subsystem generates UDRs for each content ID and sends them to the CDR module for storage.

UDR format

The UDRs are generated in Comma Separated Values (CSV) format as defined in the traffic analysis rules.

Charging Methods and Interfaces

Prepaid Credit Control

Prepaid billing operates on a per content-type basis. Individual content-types are marked for prepaid treatment. A match on a traffic analysis rule that has a prepaid-type content triggers prepaid charging management.

In prepaid charging, ECS performs the metering function. Credits are deducted in real time from an account balance or quota. A fixed quota is reserved from the account balance and given to the system by a prepaid rating and charging server, which interfaces with an external billing system platform. The system deducts volume from the quota according to the traffic analysis rules. When the subscriber’s quota reaches the threshold level specified by the prepaid rating and charging server, the system sends a new access request message to the server, and the server updates the subscriber’s quota. The charging server is also updated at the end of the call.
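The following Python sketch models this quota reservation and threshold-based replenishment under simplified assumptions (volume-based quota only; all names and numbers are illustrative):

class PrepaidQuota:
    """Illustrative model of quota handling: a fixed quota is reserved from
    the balance; when usage reaches the server-specified threshold, a new
    quota request is made; unused quota is reconciled at call end."""

    def __init__(self, balance, grant, threshold):
        self.balance = balance        # subscriber account balance (e.g. bytes)
        self.grant = grant            # fixed quota reserved per request
        self.threshold = threshold    # re-request when this much quota remains
        self.quota = 0
        self.request_quota()

    def request_quota(self):
        reserved = min(self.grant, self.balance)
        self.balance -= reserved
        self.quota += reserved

    def consume(self, volume):
        self.quota -= volume
        if self.quota <= self.threshold and self.balance > 0:
            self.request_quota()             # ask the charging server for more

    def end_call(self):
        self.balance += max(self.quota, 0)   # return unused quota at call end
        self.quota = 0

q = PrepaidQuota(balance=10_000_000, grant=1_000_000, threshold=100_000)
q.consume(950_000)    # crosses the threshold, so another quota grant is reserved
q.end_call()
print(q.balance)      # 9050000: only the consumed volume was deducted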

ECS supports the following credit control applications for prepaid charging:

  • RADIUS Credit Control Application—RADIUS is used as the interface between ECS and the prepaid charging server. The RADIUS Prepaid feature of ECS is separate from the system-level Prepaid Billing Support, which is covered under a different license key.
  • Diameter Credit Control Application—The Diameter Credit Control Application (DCCA) is used to implement real-time credit control for a variety of services, such as network access, messaging services, and download services.

In addition to being a general solution for real-time cost and credit control, DCCA includes the following features:

  • Real-time Rate Service Information—DCCA can verify when end subscribers' accounts are exhausted or expired; or deny additional chargeable events.
  • Support for Multiple Services—DCCA supports the usage of multiple services within one subscriber session. Multiple service support includes:
    • The ability to identify and process the service or group of services that are subject to different cost structures.
    • Independent credit control of multiple services in a single credit control sub-session.

Postpaid

In a postpaid environment, the subscribers pay after use of the service. The AAA/RADIUS server is responsible for authorizing network nodes to grant access to the user, and the CDR system generates G-CDRs/eG-CDRs/EDRs/UDRs for billing information at pre-defined volume or time intervals.

IMPORTANT:

G-CDRs and eG-CDRs are only available in UMTS networks.

ECS also supports FBC and TBC methods for postpaid billing. For more information on FBC and TBC in ECS, see the Enhanced Services in ECS section.

Prepaid Billing in ECS

In a prepaid environment, the subscribers pay for service prior to use. While the subscriber is using the service, credit is deducted from the subscriber’s account until it is exhausted or the call ends. The prepaid charging server is responsible for authorizing network nodes to grant access to the user, as well as for granting quotas for either time connected or volume used. It is up to the network node to track the quota use, and when the quota runs low, the network node sends a request to the prepaid server for more quota.

If the user has not used up the purchased credit, the server grants quota; if no credit is available to the subscriber, the call is disconnected. ECS and DCCA manage this functionality by providing the ability to set up quotas for different services.

Prepaid quota in ECS is implemented using RADIUS and DCCA as shown in the following figure.

How ECS Prepaid Billing Works

The following figure illustrates a typical prepaid billing environment with system running ECS.


Figure 7. Prepaid Billing Scenario with ECS

Credit Control Application (CCA) in ECS

This section describes the credit control application that is used to implement real-time credit-control for a variety of end user services such as network access, Session Initiation Protocol (SIP) services, messaging services, download services, and so on. It provides a general solution to the real-time cost and credit control.

CCA with RADIUS or Diameter interface uses a mechanism to allow the user to be informed of the charges to be levied for a requested service. In addition, there are services such as gaming and advertising that may debit from a user account.

How Credit Control Application (CCA) Works for Prepaid Billing

The following figure and steps describe how CCA works within a GPRS/UMTS or CDMA-2000 network for prepaid billing.


Figure 8. Prepaid Charging in GPRS/UMTS/CDMA-2000 Networks

Table 1. Prepaid Charging in GPRS/UMTS/CDMA-2000 Networks

  1. The subscriber session starts.
  2. The system sends a request to the CCA for the subscriber’s quota.
  3. The CCA sends a request to the Data Warehouse (DW) for credit quota for the subscriber.
  4. The Credit Database in the DW sends a pre-configured amount of usage limit from the subscriber’s quota to the CCA. To reduce the need for multiple requests during the subscriber’s session, the configured usage limit is set to a major part of the subscriber’s available credit quota.
  5. The CCA sends the amount of quota required to fulfill the subscriber’s initial requirement to the system.
  6. When the initial amount of quota runs out, the system sends another request to the CCA, and the CCA sends another portion of the available credit quota.
  7. The subscriber session ends after either the quota is exhausted or the subscriber terminates the session.
  8. The CCA returns unused quota to the DW for update to the subscriber’s Credit DB.
  9. EDRs and UDRs are periodically transferred via SFTP from system memory to the ESS/external storage (if deployed) or directly to the billing system as they are generated; or, if configured, they are pushed to the ESS/external storage at user-configurable intervals.
  10. The ESS/external storage periodically sends records to the billing system or the charging reporting and analysis system.



IMPORTANT:

For information on ESS, contact your Cisco account representative.

Postpaid Billing in ECS

This section describes postpaid billing, which is used to implement off-line billing processing for a variety of end user services.

The following figure shows a typical deployment of ECS for a postpaid billing system.


Figure 9. Postpaid Billing System Scenario with ECS

How ECS Postpaid Billing Works

This section describes how the ECS postpaid billing works in the GPRS/UMTS and CDMA-2000 Networks.

ECS Postpaid Billing in GPRS/UMTS Networks

The following figure and steps describe how ECS works in a GPRS/UMTS network for postpaid billing.


Figure 10. Postpaid Billing with ECS in GPRS/UMTS Network

Table 2. Postpaid Billing with ECS in GPRS/UMTS Network

  1. The subscriber initiates the session.
  2. After subscriber authentication and authorization, the system starts the session.
  3. Data packet flow and accounting starts.
  4. The system periodically generates xDRs and stores them in system memory.
  5. The system generates G-CDRs/eG-CDRs and sends them to the billing system as they are generated.
  6. The billing system picks up the CDR files periodically.
  7. The subscriber session ends after the subscriber terminates the session.
  8. The system stores the last of the xDRs in system memory, and the final xDRs are transferred via SFTP from system memory to the ESS/external storage (if deployed) or directly to the billing system.
  9. The system sends the last of the G-CDRs/eG-CDRs to the billing system.
  10. The File Generation utility (FileGen) in the external storage periodically runs to generate G-CDR/eG-CDR files for the billing system and sends them to the billing system.
  11. The billing system picks up the xDR files from the ESS/external storage periodically.



Postpaid Billing in CDMA-2000 Networks

The following figure and steps describe how ECS works within a CDMA-2000 network for postpaid billing.


Figure 11. Postpaid Billing with ECS in CDMA-2000 Network

Table 3. Postpaid Billing with ECS in CDMA-2000 Network

  1. The subscriber initiates the session.
  2. After subscriber authentication and authorization, the system starts the session.
  3. Data packet flow and accounting starts.
  4. The system periodically generates xDRs and stores them in system memory.
  5. EDRs/UDRs are periodically transferred via SFTP from system memory to the ESS/external storage (if deployed) or directly to the billing system as they are generated.
  6. The billing system picks up the xDR files from the ESS/external storage periodically.
  7. The subscriber session ends after the subscriber terminates the session.
  8. The system stores the last of the xDRs in system memory, and the final xDRs are transferred via SFTP from system memory to the ESS/external storage (if deployed) or directly to the billing system.
  9. The ESS/external storage finally sends the xDRs to the billing system.



External Storage System

IMPORTANT:

For information on availability/support for ESS, contact your Cisco account representative.

The External Storage System (ESS) is a high-availability, fault-tolerant, redundant solution for short-term storage of files containing detail records (UDRs/EDRs/FDRs (xDRs)). To avoid loss of xDRs on the chassis due to overwriting, deletion, or unforeseen events such as power or network failure or unplanned chassis switchover, xDRs are off-loaded to the ESS for storage and analysis, preserving the charging and network analysis information contained in the xDRs.

The xDR files can be pulled by the ESS from the chassis, or the chassis can push the xDR files to the ESS using SFTP. In Push mode, the ESS URL to which the CDR files are to be transferred is specified. The configuration allows a primary and a secondary server to be configured; configuring the secondary server is optional. Whenever a file transfer to the primary server fails four consecutive times, the files are transferred to the secondary server (this switching behavior is illustrated in the sketch below). The transfer switches back to the original primary server when:

  • Four consecutive transfer failures to the secondary server occur
  • 30 minutes elapse after switching from the primary server
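The following Python sketch models the primary/secondary switching behavior described above; the class and method names are illustrative.

MAX_CONSECUTIVE_FAILURES = 4
RETRY_PRIMARY_AFTER_S = 30 * 60   # 30 minutes

class PushTarget:
    """Illustrative model of primary/secondary server selection for SFTP push."""

    def __init__(self):
        self.current = "primary"
        self.failures = 0
        self.switched_at = None

    def record_result(self, success, now):
        # Switch back to the primary 30 minutes after leaving it.
        if self.current == "secondary" and now - self.switched_at >= RETRY_PRIMARY_AFTER_S:
            self._switch("primary", now)
        if success:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= MAX_CONSECUTIVE_FAILURES:
            # Four consecutive failures: move to the other server.
            self._switch("secondary" if self.current == "primary" else "primary", now)

    def _switch(self, target, now):
        self.current, self.failures, self.switched_at = target, 0, now

t = PushTarget()
for i in range(4):
    t.record_result(success=False, now=float(i))
print(t.current)   # "secondary" after four consecutive failures to the primary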

In the push transfer mode, the following can be configured:

  • Transfer interval—A time interval, in seconds, after which the CDRs are periodically pushed to the configured IP address. All the files that are completed before the PUSH timer expires are pushed.
  • Remove file after transfer—An option to keep or remove the CDR files on the hard disk after they are transferred to the ESS successfully.

The system running ECS stores xDRs on an ESS, and the billing system collects the xDRs from the ESS and correlates them with the AAA accounting messages using 3GPP2-Correlation-IDs (for PDSN) or Charging IDs (for GGSN).

IMPORTANT:

For more information on the ESS, refer to the ESS Installation and Administration Guide.

System Resource Allocation

ECS does not require manual resource allocation. The ECS subsystem automatically allocates the resources when ECS is enabled on the chassis. ECS must be enabled on the chassis before configuring services.

Redundancy Support in ECS

This section describes the redundancy support available in ECS to recover user sessions and charging records in the event of software/hardware failure.

CAUTION:

Persistent data flows are NOT recoverable during session recovery.

IMPORTANT:

Redundancy is not available in the current version of the Cisco® XT2 platform.

Intra-chassis Session Recovery Interoperability

Intra-chassis session recovery is coupled with SessMgr recovery procedures.

Intra-chassis session recovery support is achieved by mirroring the SessMgr and AAAMgr processes. The SessMgrs are paired one-to-one with the AAAMgrs. The SessMgr sends checkpointed session information to the AAAMgr. ECS recovery is accomplished using this checkpointed information.

IMPORTANT:

For session recovery to work, there must be at least four packet processing cards: one standby and three active. For each active CPU with active SessMgrs, there is one standby SessMgr, and the standby CPU runs the same number of standby SessMgrs as there are active SessMgrs on the active CPU.

There are two modes of session recovery: one for task failure and another for failure of a CPU or packet processing card.

Recovery from Task Failure

When a SessMgr failure occurs, recovery is performed using the mirrored “standby-mode” SessMgr task running on the active packet processing card. The “standby-mode” task is renamed, made active, and is then populated using checkpointed session information from the AAAMgr task. A new “standby-mode” SessMgr is created.

Recovery from CPU or Packet Processing Card Failure

When a PSC, PSC2, or PPC hardware failure occurs, or when a planned packet processing card migration fails, the standby packet processing card is made active and the “standby-mode” SessMgr and AAAMgr tasks on the newly activated packet processing card perform session recovery.

Inter-chassis Session Recovery Interoperability

The system supports the simultaneous use of ECS and the Inter-chassis Session Recovery feature. (For more information on the Inter-chassis Session Recovery feature, refer to the System Administration Guide.) When both features are enabled, ECS session information is regularly checkpointed from the active chassis to the standby as part of normal Service Redundancy Protocol processes.

In the event of a manual switchover, there is no loss of accounting information. All xDR data from the active chassis is moved to a customer-configured ESS before switching over to the standby. This data can be retrieved at a later time. Upon completion of the switchover, the ECS sessions are maintained and the “now-active” chassis recreates all of the session state information including the generation of new xDRs.

In the event of an unplanned switchover, all accounting data that has not been written to the external storage is lost. (Note that either the ESS can pull the xDR data from the chassis, or the chassis can push the xDR files to a configured ESS at user-configured intervals. For more information, see External Storage System section.) Upon completion of switchover, the ECS sessions are maintained and the “now-active” chassis recreates all of the session state information including the generation of new xDRs.

Regardless of the type of switchover that occurred, the names of the new xDR files will be different from those stored in the /records directory of packet processing card RAM on the “now-standby” chassis. In addition to the file name, the content of many of the fields within the xDR files created by the “now-active” chassis will also be different. ECS manages this impact with its recovery mechanism. For more information on the differences, on how to correlate the two files, and on other recovery information, see the Impact on xDR File Naming section.

Inter-chassis Session Recovery Architecture

Inter-chassis redundancy in ECS uses Flow Detail Records (FDRs) and UDRs to manage the switchover between the active and standby systems. xDRs are moved between the redundant external storage server and the active and standby systems.

Impact on xDR File Naming

The xDR file name is limited to 256 characters with the following syntax:

basename_ChargSvcName_timestamp_SeqNumResetIndicator_FileSeqNumber

where:

  • basename—A globally configurable text string that is unique per system and that uniquely identifies the global location of the system running ECS.
  • ChargSvcName—A system context-based configurable text string that uniquely identifies a specific context-based charging service.
  • timestamp—Date and time at the instant of file creation, in the form “MMDDYYYYHHmmSS”, where HH is a 24-hour value from 00-23.
  • SeqNumResetIndicator—A one-byte counter used to discern the potential for duplicated FileSeqNumber with a range of 0 through 255, which is incremented by a value of 1 for the following conditions:
    • Failure of an ECS software process on an individual packet processing card
    • Failure of a system such that a second system takes over according to the Inter-chassis Session Recovery feature
    • File Sequence Number (FileSeqNumber) rollover from 999999999 to 0
  • FileSeqNumber—A unique nine-digit file sequence number, ranging from 000000000 to 999999999. It is unique on each system.

With inter-chassis session recovery, only the first two fields in the xDR file names remain consistent between the active and standby chassis as these are parameters that are configured locally on the chassis. Per inter-chassis session recovery implementation requirements, the two chassis systems must be configured identically for all parameters not associated with physical connectivity to the distribution node.

The fields “timestamp”, “SeqNumResetIndicator”, and “FileSeqNumber” are all locally generated by the specific system through the CDR subsystem, regardless of whether the systems are in an Inter-chassis Session Recovery arrangement or not.

  • The “timestamp” value is unique to the system generating the actual xDRs and generated at the time the file is opened on the system.
  • The SeqNumResetIndicator is a counter that tracks the number of resets applied to FileSeqNumber. This counter is maintained by the CDR subsystem and is incremented whenever FileSeqNumber is reset. This is required because the “timestamp” field alone is not sufficient to distinguish between a unique and a duplicate xDR.

As such, the “SeqNumResetIndicator” field is used to distinguish between xDR files which have the same “FileSeqNumber” as a previously generated xDR as a result of:

  • Normal operation, for example a rollover of the “FileSeqNumber” from its maximum value to 0
  • A failure of one of the ECS processes running on a packet processing card
  • A failure of the system (that is, an Inter-chassis Session Recovery switchover)

In any scenario where the “FileSeqNumber” is reset to 0, the value of the “SeqNumResetIndicator” field is incremented by 1.

  • The value of the “FileSeqNumber” is directly linked to the ECS process that is generating the specific xDRs. Any failure of this specific ECS process results in resetting of this field to 0.
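The following Python sketch models the file-naming and sequence-reset behavior described above; the basename and charging service name values are illustrative.

from datetime import datetime

class XdrFileNamer:
    """Illustrative model of xDR file naming: basename and ChargSvcName are
    configured; timestamp, SeqNumResetIndicator, and FileSeqNumber are
    generated locally by the CDR subsystem."""

    MAX_SEQ = 999_999_999

    def __init__(self, basename, charg_svc_name):
        self.basename = basename
        self.charg_svc_name = charg_svc_name
        self.seq_reset_indicator = 0     # one byte: 0..255
        self.file_seq_number = 0

    def reset_sequence(self):
        """Called on ECS process failure, chassis switchover, or sequence rollover."""
        self.file_seq_number = 0
        self.seq_reset_indicator = (self.seq_reset_indicator + 1) % 256

    def next_file_name(self, now=None):
        now = now or datetime.now()
        name = "_".join([
            self.basename,
            self.charg_svc_name,
            now.strftime("%m%d%Y%H%M%S"),      # MMDDYYYYHHmmSS
            str(self.seq_reset_indicator),
            f"{self.file_seq_number:09d}",
        ])
        if self.file_seq_number == self.MAX_SEQ:
            self.reset_sequence()               # rollover from 999999999 to 0
        else:
            self.file_seq_number += 1
        return name

namer = XdrFileNamer("site01", "ecs-charging")
print(namer.next_file_name(datetime(2013, 7, 4, 23, 5, 9)))
# site01_ecs-charging_07042013230509_0_000000000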

Impact on xDR File Content

The following scenarios impact the xDR file content:

  • On failure of an active chassis: On system startup, xDR files are generated in accordance with the standard processes and formats. If the system fails at any time, an inter-chassis session recovery switchover from active to standby results, and the following occurs depending on the state of the call/flow records and the xDR file at the time of failure:
    • Call/flow records that were being generated and collected in system memory prior to being written out to /records directory on packet processing card RAM are not recoverable and therefore are lost.
    • Closed xDRs that have been written out to the /records directory on packet processing card RAM but that have yet to be retrieved by the ESS are recoverable.
    • Closed xDRs that have been retrieved and processed by the ESS have no impact.
  • On the activation of a standby chassis: Upon detection of a failure of the original active chassis, the standby chassis transitions to the active state and begins serving the subscriber sessions that were being served by the now-failed chassis. Any subsequent new subscriber session is processed by this active chassis and generates xDRs per the standard processes and procedures. However, this transition impacts the xDRs of those subscribers whose sessions are in progress at the time of the transition. For in-progress subscribers, a subset of the xDR fields and their contents are carried over to the newly active chassis via the SRP link. The fields carried over after an Inter-chassis Session Recovery switchover are:
    • HA-CORRELATION-ID
    • PDSN-CORRELATION-ID (PDSN only)
    • PDSN-NAS-IP-ADDRESS (PDSN only)
    • PDSN-NAS-ID (PDSN only)
    • USERNAME
    • MSID
    • RADIUS-NAS-IP-ADDRESS
    All remaining fields are populated in accordance with the procedures associated with any new flow, with the following exceptions: the “First Packet Direction” field is set to “Unknown” for all in-progress flows that were interrupted by the switchover, and the “FDR Reason” field is marked as a PDSN Handoff and therefore set to a value of “1”, with corresponding actions taken by the billing system to ensure a proper and correct accounting of subscriber activities.