Plug-in Configuration
Cisco Policy Builder provides core plug-ins for customizing and optimizing your installation.
- Configurations set at the system level are system-wide except as noted in the bullet items below.
- Configurations set at the cluster level apply to that cluster and the instances in it. A value set here overrides the same value set at the system level.
- Configurations set at the instance level apply to the instance only and override the same value set at the cluster or system level.
Select the Create Child action on a Plug-in Configuration node in the Systems tree to define these configurations. You can change any variable from its default, or choose not to use a plug-in, as necessary.
When you create a system from the example, the following configuration stubs appear at the cluster and instance level:
Threading Configuration
A threading configuration utility is provided for advanced users.
Click Threading Configuration in the right pane to add the threading configuration to the system. If you plan to run the system at a higher TPS, you need to configure Threading Configuration. For further information, contact your Cisco technical representative.
The Threading Plug-in controls the total number of threads in CPS vDRA that are executing at any given time.
The following parameters can be configured under Threading Configuration:
| Parameter | Description |
|---|---|
| Thread Pool Name | Name of the thread pool. The following names can be configured in CPS vDRA: |
| Threads | Number of threads to set in the thread pool. |
| Queue Size | Size of the queue before new tasks are rejected. |
| Scale By Cpu Core | Select this check box to scale the maximum number of threads by the number of processor cores. |
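The Scale By Cpu Core option can be illustrated with a small sketch. The multiply-by-core-count behavior and the example values below are assumptions for illustration only, not the documented CPS internals:

```shell
# Hypothetical illustration of Scale By Cpu Core: assume the effective
# maximum thread count is the configured Threads value multiplied by
# the number of processor cores on the VM.
threads=10     # example value from the Threads field
cores=4        # example core count; on a live VM you could use: cores=$(nproc)
echo "effective max threads: $((threads * cores))"
```

For these example values the sketch prints `effective max threads: 40`.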
Async Threading Configuration
Click Async Threading Configuration in the right pane to add the configuration to the system.
Use the default values for the Async Threading Plug-in. The Async configuration controls the number of asynchronous threads.
Note: Currently, CPS vDRA does not have any asynchronous threads. However, you must add "Async Threading Configuration" and keep this table empty.
The following parameters can be configured under Async Threading Configuration.
| Parameter | Description |
|---|---|
| Default Processing Threads | The number of threads that are allocated to process actions based on priority. |
| Default Action Priority | The priority assigned to an action if it is not specified in the Action Configurations table. |
| Default Action Threads | The number of threads assigned to process the action if it is not specified in the Action Configurations table. |
| Default Action Queue Size | The number of actions that can be queued up for an action if it is not specified in the Action Configurations table. |
| Default Action Drop Oldest When Full | When checked, the oldest queued action is dropped from the queue when a new action is added to a full queue. Otherwise, the new action is ignored. This check box applies to all the threads specified. To control this per action, leave this unchecked and use the Action Configurations table. |
| Action Configurations Table |  |
| Action Name | The name of the action. This must match the implementation class name. |
| Action Priority | The priority of the action. Used by the default processing threads to determine which action to execute first. |
| Action Threads | The number of threads dedicated to processing this specific action. |
| Action Queue Size | The number of actions that can be queued up. |
| Action Drop Oldest When Full | For the specified action only: when checked, the oldest queued action is dropped from the queue when a new action is added to a full queue. Otherwise, the new action is ignored. |
Custom Reference Data Configuration
Before you can create a custom reference data table, configure your system to use the Custom Reference Data Table plug-in configuration.
You only have to do this one time for each system, cluster, or instance. Then you can create as many tables as needed.
Click Custom Reference Data Configuration in the right pane to add the configuration to the system.
Here is an example for HA and AIO setups:

- HA example:
  - Primary Database Host/IP Address: sessionmgr01
  - Secondary Database Host/IP Address: sessionmgr02
  - Database Port: 27717
- AIO example:
  - Primary Database Host/IP Address: localhost or 127.0.0.1
  - Secondary Database Host/IP Address: NA (leave blank)
  - Database Port: 27017
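For reference, the HA values above amount to a two-member replica-set seed list. A minimal sketch of the MongoDB-style connection string these values imply (illustrative only; CPS builds its own client configuration internally):

```shell
# Build the seed list implied by the HA example values.
primary=sessionmgr01
secondary=sessionmgr02
port=27717
echo "mongodb://${primary}:${port},${secondary}:${port}"
```

For the HA example this prints `mongodb://sessionmgr01:27717,sessionmgr02:27717`.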
The following parameters can be configured under Custom Reference Data Configuration.
| Parameter | Description |
|---|---|
| Primary Database IP Address | IP address of the primary sessionmgr database. |
| Secondary Database IP Address | Optional. IP address of a secondary, backup, or failover sessionmgr database. |
| Database Port | Port number of the sessionmgr database. It must be the same for both the primary and secondary databases. |
| Db Read Preference | Read preference describes how sessionmgr clients route read operations to members of a replica set. You can select a preference from the drop-down list. For more information, refer to http://docs.mongodb.org/manual/core/read-preference/. |
| Connection Per Host | Number of connections that are allowed per DB host. Default: 100. |
For more information on Custom Reference Data configuration, refer to the CPS Operations Guide for this release.
DRA Configuration
Click DRA Configuration in the right pane of Policy Builder to add the configuration to the system.
The following parameters can be configured under DRA Configuration:
| Parameter | Description |
|---|---|
| Stale Session Timer Minutes | Indicates the time after which the audit RAR is generated (in the subsequent audit RAR process cycle, which runs every minute in CPS vDRA) for sessions that are stale. Default: 180 minutes (recommended value). Minimum: 10 minutes. Maximum: 10080 minutes. |
| Rate Limiter | Indicates the number of audit RARs per second that CPS vDRA sends out. For example, if the audit RAR process finds 100 stale sessions but the Rate Limiter is configured as 10, audit RARs are generated at 10 RAR/sec for the next 10 seconds. Default: 10 (recommended value). Minimum: 1. Maximum: 1000 (maximum number of RAR messages per second from vDRA to PCEF). |
| Stale Session Expiry Count | Specifies the number of retries vDRA performs for a stale session if there is no response to the audit RAR, or if the Result-Code in the RAA (for the audit RAR) is other than 5002 or 2001. Default: 6 (recommended value). Minimum: 0 (session deleted without sending RAR). Maximum: 10. |
| Binding DB Read Preference | Used to select the mode when reading from the binding DB. Use "nearest" mode for better performance of traffic that needs only read operations on the binding DB. Default: Nearest (recommended setting). |
| Stale Binding Expiry Minutes | Duration after which the binding database records expire. The timer is initialized when the session is created. The records are deleted when the time since the last refresh exceeds Stale Binding Refresh Minutes. Default: 10080 minutes (168 hours, or one week; recommended value). Minimum: 10 minutes. Maximum: 43200 minutes (28 days). For more information about binding DB audits and stale records, see Binding DB Audit. |
| Stale Binding Refresh Minutes | Duration after which the expiry time of the binding database records is refreshed. Default: 2880 minutes (48 hours, or 2 days; recommended value). Minimum: 10 minutes. Maximum: 10080 minutes (one week). |
| Binding Creation, Primary Alternative System | Name of the vDRA system to retry Gx CCR-i. When vDRA tries to route a Gx CCR-i request but is unable to reach the database, the configured values of first the primary, then the secondary system are used to route the Gx CCR-i to a different vDRA to try the database. The retry stops if that vDRA also cannot reach the database. |
| Binding Creation, Secondary Alternative System | Name of the secondary vDRA system to retry Gx CCR-i. |
| Binding Routing, Primary Alternative System | Name of the vDRA system to retry Rx AAR. When vDRA tries to route an Rx AAR request but is unable to reach the database, the configured values of first the primary, then the secondary system are used to route the Rx AAR to a different vDRA to try the database. The retry stops if that vDRA also cannot reach the database. |
| Binding Routing, Secondary Alternative System | Name of the secondary vDRA system to retry Rx AAR. |
| Settings | Refer to Settings. |
| Rate Limits | Refer to Rate Limits. |
| DRA Feature | Refer to DRA Feature. |
| DRA Inbound Endpoints | Refer to DRA Inbound Endpoints. |
| DRA Outbound Endpoints | Refer to DRA Outbound Endpoints. |
| Relay Endpoints | Refer to Relay Endpoints. |
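The Rate Limiter behavior described above reduces to simple pacing arithmetic. A sketch, using the example values from the description (100 stale sessions, Rate Limiter of 10):

```shell
# 100 stale sessions paced at 10 audit RARs per second take 10 seconds.
stale_sessions=100
rate_limiter=10
duration=$(( (stale_sessions + rate_limiter - 1) / rate_limiter ))  # ceiling division
echo "send at ${rate_limiter} RAR/sec for ${duration} seconds"
```

For these values the sketch prints `send at 10 RAR/sec for 10 seconds`.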
Settings
Click the Settings check box to open the configuration pane.
The following parameters can be configured under Settings:
| Parameter | Description |
|---|---|
| Stop Timeout Ms | Determines how long the stack waits for all resources to stop. The delay is in milliseconds. Default: 10000 ms (recommended value). Minimum: 1000 ms. Maximum: 60000 ms (one minute). |
| Cea Timeout Ms | Determines how long a CER/CEA exchange takes to time out if there is no response. The delay is in milliseconds. Default: 10000 ms (recommended value). Minimum: 1000 ms. Maximum: 60000 ms (one minute). |
| Iac Timeout Ms | Determines how long the stack waits before initiating a DWR message exchange on a peer connection from which no Diameter messages have been received. The timeout value is in milliseconds. Default: 5000 ms (recommended value). Minimum: 1000 ms. Maximum: 30000 ms (30 seconds). |
| Dwa Timeout Ms | Determines how long the stack waits for a DWA message in response to a DWR message. If no Diameter message (DWA or any other message) is received on the peer connection during the first timeout period, the stack counts a failure, sends another DWR message, and restarts the DWA timer. If no Diameter messages are received during the second timeout period, the stack counts a second failure. After two consecutive failures, the stack considers the peer connection failed and closes it. The delay is in milliseconds. Default: 10000 ms (recommended value). Minimum: 1000 ms. Maximum: 60000 ms (one minute). |
| Dpa Timeout Ms | Determines how long a DPR/DPA exchange takes to time out if there is no response. The delay is in milliseconds. Default: 5000 ms (recommended value). Minimum: 1000 ms. Maximum: 30000 ms (30 seconds). |
| Rec Timeout Ms | Determines how long the reconnection procedure takes to time out. The delay is in milliseconds. Default: 10000 ms (recommended value). Minimum: 1000 ms. Maximum: 60000 ms (one minute). |
| Drain Timeout Ms | Indicates the time that a peer connection remains open for responses to be sent to peers even after a DPR is sent or received by vDRA. If a DPR is sent or received, vDRA does not route requests to the disconnecting peer connection via any routing (Dest-Host, SRK, Binding, Table-Driven). However, responses and in-flight requests are still sent to the corresponding peers for the duration of the Drain Timeout. This allows vDRA to shut down gracefully when any remote peer sends a DPR, minimizing Diameter message loss. Default: 2000 ms. Maximum: must be less than Dpa Timeout Ms. |
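The two-consecutive-failure watchdog rule described for Dwa Timeout Ms can be sketched as a simple counter. This is a simplified model for illustration, not the actual stack implementation:

```shell
# Each DWR that times out without any Diameter message being received
# increments the failure count; any received message resets it.
# Two consecutive failures close the peer connection.
failures=0
for event in timeout timeout; do   # simulate two missed DWAs
  if [ "$event" = timeout ]; then
    failures=$((failures + 1))
  else
    failures=0                     # any Diameter message resets the count
  fi
done
if [ "$failures" -ge 2 ]; then
  echo "peer connection closed"
fi
```

With two simulated timeouts the sketch prints `peer connection closed`.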
The following figure illustrates the timers in peer detection:
Binding DB Audit
The Binding DB Audit automatically deletes stale records from the binding DBs. When a Gx session record is created, binding records for the session binding keys are also created. When each binding record is created, the binding record expiry time is initialized to the sum of the session creation time and the Stale Binding Expiry Minutes (that you can configure in Policy Builder). A binding record is considered stale if it cannot be deleted when its associated session record is deleted (this occurs typically due to communication failures). The binding records are audited via a binding audit background process. If the audit process finds a binding record that is past the expiry time, the binding record is considered stale and deleted from the database. Note that the binding audit process does not perform a session DB lookup nor does it perform any Diameter signaling with the GW before deletion.
To prevent a binding record from becoming stale, the session audit process periodically updates the expiry time for bindings associated with sessions in the session DB. The session maintains a stale binding refresh timer that is initialized to the sum of the session creation time and the Stale Binding Refresh Minutes. When the session audit process finds a session whose refresh time has passed, it writes a new expiry time (the current time plus the Stale Binding Expiry Minutes) to the associated bindings. The write is conditional on the session-id matching the Gx session-id in the binding record. This refresh action prevents the binding audit process from incorrectly deleting active bindings from the binding database. The following figures illustrate the working of binding DB audit and refresh:
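The expiry and refresh arithmetic described above can be sketched as follows. The epoch value is illustrative; the minute values are the documented defaults:

```shell
# Binding expiry and refresh times relative to session creation.
stale_binding_expiry_minutes=10080    # one week (default)
stale_binding_refresh_minutes=2880    # two days (default)
session_created=1700000000            # example epoch seconds

binding_expiry=$((session_created + stale_binding_expiry_minutes * 60))
refresh_due=$((session_created + stale_binding_refresh_minutes * 60))

echo "binding record expires at epoch: $binding_expiry"
echo "session audit refreshes bindings at epoch: $refresh_due"
```

On each refresh, the binding expiry is recomputed from the current time plus Stale Binding Expiry Minutes, which is why active bindings never reach the audit's deletion threshold.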
Rate Limits
The rate limit per process instance on the Policy Director (lb) VM can be managed using this configuration.
By default, the check box is unchecked, that is, no rate limits are applied to Diameter traffic (recommended setting).
If enabled, the following parameters can be configured under Rate Limits:
| Parameter | Description |
|---|---|
| Rate Limit per Instance on Policy Director | Allowable TPS on a single instance of the policy server (QNS) process running on the Policy Director. Minimum: 1. Maximum: 5000. |
| Result-Code in Response | Indicates the Result-Code that is used while rejecting requests due to rate limits being reached. Default: 3004. |
| Error Message in Response | Indicates the error message that is sent along with the Result-Code in the response when requests are rejected due to rate limits being reached. |
| Drop Requests Without Error Response | Select the check box to drop rate-limited messages without sending an error response. If the check box is unchecked, the rate-limited messages are dropped with an error response as configured. To configure per peer whether to drop the request or send an error response, a Discard Behavior column can be added under Peer Rate Limit Profile. The column may have one of two possible values. Default: Unchecked (recommended setting). For more information, refer to Peer Rate Limit. |
Here is the list of the available combinations for rate limiting:
| Rate Limiting Type | With Error Code | With Error Code and Error Message | Without Error Code (Drop) |
|---|---|---|---|
| Instance Level | Yes | Yes | Yes |
| Peer Level Egress | Yes | Yes | Yes |
| Peer Level Egress with Message Level | Yes | Yes | Yes |
| Egress Message Level (No Peer Level RL) | Yes | Yes | Yes |
| Peer Level Ingress | Yes | Yes | Yes |
| Peer Level Ingress with Message Level | Yes | Yes | Yes |
| Ingress Message Level (No Peer Level RL) | Yes | Yes | Yes |
DRA Feature
Click the DRA Feature check box to open the configuration pane.
The following parameters can be configured under DRA Feature:
| Parameter | Description |
|---|---|
| Gx Session Tear Down On5065 | By default, this flag is enabled (recommended setting). When the PCRF responds with an Experimental-Result-Code of 5065 in the AA-Answer on the Rx interface, DRA deletes the internal binding and session it created for the transaction. A RAR with the appropriate Session-Release-Cause AVP is also sent to the PCEF. |
| Update Time Stamp On Success RAA | When this check box is selected, the session timestamp is updated on receipt of a success RAA (Result-Code: 2001) from the PCEF. Default: checked (recommended setting). |
| Update Time Stamp On Success CCRU | When this check box is selected, the session timestamp is updated on receipt of a success CCR-U (Result-Code: 2001) from the PCEF. Default: unchecked (recommended setting). |
| Enable Proxy Bit Validation | Enables P-bit validation. vDRA validates the P bit in the Diameter request; if it is set, the message may be proxied, relayed, or redirected. If this option is disabled, the P bit in the request is not checked and the request is not considered proxiable. Default: Enabled. |
| Enable Mediation | Enables advanced mediation capabilities in both the egress and ingress directions. This feature allows you to configure vDRA to change the value of the Result-Code in a Diameter Answer, use mediation to hide topology, prepend a label to the Destination-Host AVP, and so on. |
| Enable Doic | Enables or disables abatement actions for Diameter requests towards PCRF, HSS, AAA, and OCS servers based on reporting of overload conditions using the architecture described in RFC 7683, Diameter Overload Indication Conveyance (DOIC). DOIC can be enabled or disabled at the peer group level in the Peer Group SRK Mapping table. If the destination peer is congested or overloaded, you can choose to forward, divert, or drop messages. |
| Enable PCRF Session Query | Enables or disables the PCRF session query. If you enable this, Policy DRA supports fallback routing of Rx AARs for VoLTE using the PCRF session query. This ensures that VoLTE calls can complete even when the IPv6 binding is not found in the binding database. For an Rx AAR with an IPv6 binding query, vDRA can route the Rx AAR based on an API query to the PCRF to determine whether it has a session for the IPv6 address. The queries can be made in parallel to a configured set of query points on PCRFs. The Framed-IPv6 AVP from the Rx request must be provided in the request to the PCRF. The PCRF returns an SRK to be used for routing, similar to existing binding lookups. |
| Create IPv6 Bindings based on PCRF Session Query | Enables creation of an IPv6 binding record in the database based on the PCRF session query. When a successful PCRF session query result is received and the IPv6 record is not present in the database, vDRA creates an IPv6 binding record based on the response from the PCRF. If a CCR-I is received for the same IPv6 address, it overwrites the IPv6 binding record. For a CCR-T, vDRA deletes the IPv6 binding record from the database. The Stale Binding Expiry and Refresh Minutes are used to clear these binding records from the database. For more information, see Binding DB Audit. |
| Slf Max Bulk Provisioning TPS | Rate at which subscribers are provisioned in the SLF database. SLF bulk provisioning generates a high number of database write operations in a short duration of time. To spread the operations over a period of time and mitigate the performance impact, configure the TPS. The rate limit adds a delay between transactions and thereby limits the number of transactions executed per second. For more information about SLF bulk provisioning, see the CPS vDRA Operations Guide. |
DRA Inbound Endpoints
The following parameters can be configured under DRA Inbound Endpoints:
Note: To handle loads of 15K TPS or more, create multiple TCP connections with the PCRF and apply the same configuration to all DRA directors.
| Parameter | Description |
|---|---|
| Vm Host Name | Host name of the VM that hosts this CPS vDRA endpoint. |
| Ip Address | Address to which this CPS vDRA endpoint binds. |
| Realm | Realm of the CPS vDRA endpoint. |
| Fqdn | Fully qualified domain name of the CPS vDRA endpoint. |
| Transport Protocol | Allows you to select either 'TCP' or 'SCTP' for the selected DRA endpoint. Default value is TCP. If the DRA/relay endpoint is to be configured for SCTP, select SCTP as the Transport Protocol for those endpoints. |
| Multi-Homed IPs | A comma-separated list of IP addresses that CPS vDRA uses to start the Diameter stack with multi-homing enabled for SCTP transport. A Diameter stack with TCP transport still uses the existing 'Local Bind Ip' field to specify a specific IP address for the TCP stack. CPS vDRA uses the 'Local Bind Ip' to bring up the SCTP stack and uses it along with the 'Multi Homing Hosts' to start the SCTP transport with multi-homing support. The multi-homing configuration can be validated with the netstat command on lb01. |
| Application | Refers to the 3GPP Application ID of the interface. You can select multiple applications on a peer connection, for example, S6a and SLg on a single IPv4/SCTP multi-homed peer connection. |
| Enabled | Check to enable the endpoint. |
| Base Port | Refers to the port on which CPS vDRA listens for incoming connections. |
An example configuration is shown below:
DRA Outbound Endpoints
The following parameters can be configured under DRA Outbound Endpoints:
| Parameter | Description |
|---|---|
| Vm Host Name | Host name of the VM that hosts this CPS vDRA endpoint. |
| Ip Address | Address to which this CPS vDRA endpoint binds. |
| Realm | Realm of the CPS vDRA endpoint. |
| Fqdn | Fully qualified domain name of the CPS vDRA endpoint. |
| Transport Protocol | Allows you to select either 'TCP' or 'SCTP' for the selected CPS vDRA endpoint. Default value is TCP. If the DRA/relay endpoint is to be configured for SCTP, select SCTP as the Transport Protocol for those endpoints. |
| Multi-Homed IPs | A comma-separated list of IP addresses that CPS vDRA uses to start the Diameter stack with multi-homing enabled for SCTP transport. A Diameter stack with TCP transport still uses the existing 'Local Bind Ip' field to specify a specific IP address for the TCP stack. CPS vDRA uses the 'Local Bind Ip' to bring up the SCTP stack and uses it along with the 'Multi Homing Hosts' to start the SCTP transport with multi-homing support. The multi-homing configuration can be validated with the netstat command on lb01. |
| Application | Refers to the 3GPP Application ID of the interface. |
| Enabled | Check to enable the endpoint. |
| Peer Realm | Diameter server realm. |
| Peer Host | Diameter server host. By default, the connection is initiated on the standard Diameter port (3868). If a different port needs to be used, then the peer name must be defined using the host:port format. |
An example configuration is shown below:
Relay Endpoints
The following parameters can be configured under Relay Endpoints:
| Parameter | Description |
|---|---|
| Vm Host Name | Host name of the VM that hosts this Relay endpoint. |
| Instance Id | Instance Identifier is the ID of the current instance. |
| Ip Address | Address to which this DRA endpoint binds. |
| Port | The listening port for this instance. |
| Fqdn | Fully qualified domain name of the DRA endpoint. |
| Enabled | Check to enable the endpoint. |
An example configuration is shown below:
Policy Routing for Real IPs with Relay Endpoints
vDRA relay links consist of a control plane and a data plane.
The control plane uses virtual IPs and the data plane uses real IPs.
If the control and data planes use the same links, and those links are configured with VIPs, then by default the data plane uses the VIP as its source address for outgoing connections whenever the VIP is active on the outgoing interface. To avoid this, policy routing is used to force the data plane to use the real IP address of the outgoing interface instead of the VIP.
Example of vDRA Relay Endpoints
In the following example network, only the DRA director VMs and their relay links are displayed. In a real scenario, many more links may exist on the DRA director VMs.
Policy Routing
Linux policy routing includes rules and routing tables. The rules identify traffic and point to a user-defined routing table. The routing table contains customized routes.
To prevent the Relay Link's data plane from using the VIP as a source address, a rule is created to identify the real IP in the destination address and identify the desired routing table.
Configure Policy Routing
The following configuration procedure is performed on Site 1 dra1-director-1. Repeat the procedure for all other dra-directors and modify the IP addresses accordingly.
Perform the following steps on each dra-director VM to configure policy routing:
- Create a custom routing table.
- Create an IP rule for each remote relay endpoint's real IP address.
- Add a route to the custom routing table that specifies the real IP source address.
Set up Custom Routing Table
Set up the custom routing table as shown in the following example:
echo "200 dra.relay" | sudo tee --append /etc/iproute2/rt_tables
Define IP Rules
The following rules match the packets destined to the real IPs of interface ens224 on dra2-director1 and dra2-director2:
ip -6 rule add to 2006:db8:2001:2425::13 table dra.relay
ip -6 rule add to 2006:db8:2001:2425::14 table dra.relay
Define the Route
The following example route uses the router's interface as the next hop and specifies ens224's real IP address as the source address for outgoing packets:
ip route add 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1 src 2001:db8:2040:202::13 table dra.relay
Validate the Routing
Use the following example commands to validate the route selection for remote relay real IP and VIP addresses.
ip -6 route show table dra.relay
ip -6 route get 2006:db8:2001:2425::13
ip -6 route get 2006:db8:2001:2425::14
ip -6 route get 2006:db8:2001:2425::50
Persistent Configuration
For the policy routing configuration to survive a reboot, add the configuration commands to /etc/network/interfaces under interface ens224, as shown below:
auto ens224
iface ens224 inet static
address 192.169.22.13
netmask 255.255.255.0
iface ens224 inet6 static
address 2001:db8:2040:202::13
netmask 112
up ip route add 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1
up ip -6 rule add to 2006:db8:2001:2425::13 table dra.relay
up ip -6 rule add to 2006:db8:2001:2425::14 table dra.relay
up ip route add 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1 src 2001:db8:2040:202::13 table dra.relay
down ip route del 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1
down ip -6 rule del to 2006:db8:2001:2425::13 table dra.relay
down ip -6 rule del to 2006:db8:2001:2425::14 table dra.relay
down ip route del 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1 src 2001:db8:2040:202::13 table dra.relay
Configure Policy Routing with Deployer/Installer
Configure the VM artifacts and the cloud config to set up policy routing using the deployer.
VM Artifacts
Add Policy Route configuration to the DRA director VM's interfaces.esxi file as shown in the following example:
cps@installer:/data/deployer/envs/dra-vnf/vms/dra-director/dra-director-1$ cat interfaces.esxi
auto lo
iface lo inet loopback
auto ens160
iface ens160 inet static
address 10.81.70.191
netmask 255.255.255.0
gateway 10.81.70.1
auto ens192
iface ens192 inet static
address 192.169.21.13
netmask 255.255.255.0
auto ens224
iface ens224 inet static
address 192.169.22.13
netmask 255.255.255.0
iface ens224 inet6 static
address 2001:db8:2040:202::13
netmask 112
up ip route add 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1
up ip -6 rule add to 2006:db8:2001:2425::13 table dra.relay
up ip -6 rule add to 2006:db8:2001:2425::14 table dra.relay
up ip route add 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1 src 2001:db8:2040:202::13 table dra.relay
down ip route del 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1
down ip -6 rule del to 2006:db8:2001:2425::13 table dra.relay
down ip -6 rule del to 2006:db8:2001:2425::14 table dra.relay
down ip route del 2006:db8:2001:2425::/112 via 2001:db8:2040:202::1 src 2001:db8:2040:202::13 table dra.relay
auto ens256
iface ens256 inet static
address 192.169.23.13
netmask 255.255.255.0
cps@installer:/data/deployer/envs/dra-vnf/vms/dra-director/dra-director-1$
Cloud Config
Create the dra.relay routing table on the dra-directors by adding the following bootcmd entries to user_data.yml and storing the file at /data/deployer/envs/dra-vnf/vms/dra-director/user_data.yml. The sed command prevents adding a duplicate routing table entry every time the VM boots.
bootcmd:
- "sed -i -e '/^200 *dra.relay/d' /etc/iproute2/rt_tables"
- "sh -c \"echo '200 dra.relay' >> /etc/iproute2/rt_tables\""
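The sed line is what keeps the append idempotent across reboots. You can convince yourself with a safe demonstration against a temporary copy of the file (scratch file and contents are hypothetical):

```shell
# Simulate three boots against a scratch copy of rt_tables: the sed
# delete followed by the echo append leaves exactly one entry.
rt=$(mktemp)
printf '255\tlocal\n254\tmain\n' > "$rt"
for boot in 1 2 3; do
  sed -i -e '/^200 *dra.relay/d' "$rt"   # remove any previous entry
  sh -c "echo '200 dra.relay' >> $rt"    # append exactly one entry
done
grep -c 'dra.relay' "$rt"   # prints 1, regardless of how many boots ran
rm -f "$rt"
```

Without the sed delete, the file would accumulate one `200 dra.relay` line per boot.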
Example of user_data.yml:
#cloud-config
debug: True
output: {all: '| tee -a /var/log/cloud-init-output.log'}
users:
- name: cps
sudo: ['ALL=(ALL) NOPASSWD:ALL']
groups: docker
ssh-authorized-keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDzjJjndIvUiBta4VSIbd2gJmlMWcQ8wtejgAbi
XtoFZdtMdo9G0ZDEOtxHNNDPwWujMiYAkZhZWX/zON9raavU8lgD9+YcRopWUtujIC71YjtoxIjWIBBbrtqt
PlUXMUXQsi91RQbUtslENP+tSatS3awoQupyBMMSutyBady/7Wq0UTwFsnYs5Jfs8jIQuMfVQ9uJ4mNn7wJ0
N+Iaf27rE0t3oiY5DRN6j07WhauM6lCnZ1JDlzqmTnTHQkgJ3uKmQa5x73tJ1OW89Whf+R+dfslVn/yUwK/
vf4extHTn32Dtsxkjz7kQeEDgCe/y7owimaEFcCIfEWEaj/50jegN cps@root-public-key
resize_rootfs: true
write_files:
- path: /root/swarm.json
content: |
{
"role": "{{ ROLE }}",
"identifier": "{{ IDENTIFIER }}",
"master": "{{ MASTER_IP }}",
"network": "{{ INTERNAL_NETWORK }}",
{% if WEAVE_PASSWORD is defined %}"weavePw": "{{ WEAVE_PASSWORD }}",
{% endif %}
"zing": "{{ RUN_ZING | default(1) }}",
"cluster_id": "{{ CLUSTER_ID }}",
"system_id": "{{ SYSTEM_ID }}"
}
owner: root:root
permissions: '0644'
- path: /home/cps/.bash_aliases
encoding: text/plain
content: |
# A convenient shortcut to get to the Orchestrator CLI
alias cli="ssh -p 2024 admin@localhost"
alias pem="wget --quiet http://171.70.34.121/microservices/latest/cps.pem ; chmod 400 cps.pem ; echo 'Retrieved \"cps.pem\" key file'"
owner: cps:cps
permissions: '0644'
- path: /etc/pam.d/common-password
content: |
#
# /etc/pam.d/common-password - password-related modules common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of modules that define the services to be
# used to change user passwords. The default is pam_unix.
# Explanation of pam_unix options:
#
# The "sha512" option enables salted SHA512 passwords. Without this option,
# the default is Unix crypt. Prior releases used the option "md5".
#
# The "obscure" option replaces the old `OBSCURE_CHECKS_ENAB' option in
# login.defs.
#
# See the pam_unix manpage for other options.
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules. See
# pam-auth-update(8) for details.
# here are the per-package modules (the "Primary" block)
password requisite pam_pwquality.so retry=3 minlen=8 minclass=2
password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
password sufficient pam_sss.so use_authtok
# here's the fallback if no module succeeds
password requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
password required pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
owner: root:root
permissions: '0644'
runcmd:
- [vmware-toolbox-cmd, timesync, enable ]
bootcmd:
- "sed -i -e '/^200 *dra.relay/d' /etc/iproute2/rt_tables"
- "sh -c \"echo '200 dra.relay' >> /etc/iproute2/rt_tables\""
SLF Configuration
You can specify whether the IMSI and MSISDN values are validated in SLF API.
By default, SLF validation is disabled.
To set up SLF validation, create SLF Configuration from the Plugin Configuration in Policy Builder.
The following table describes the SLF API validations that you can configure:
| Field | Description |
|---|---|
| Validate IMSI is Numeric | If checked, the IMSI received in the SLF API request must be numeric. If unchecked, numeric validation is not performed on the IMSI received in the SLF API request. |
| Validate IMSI Length | If checked, the IMSI length is validated based on the specified IMSI Minimum Length (inclusive) and IMSI Maximum Length (inclusive). If unchecked, length validation is not performed on the IMSI received in the SLF API request. |
| Validate MSISDN is Numeric | If checked, the MSISDN received in the SLF API request must be numeric. If unchecked, numeric validation is not performed on the MSISDN received in the SLF API request. |
| Validate MSISDN Length | If checked, the MSISDN length is validated based on the specified MSISDN Minimum Length (inclusive) and MSISDN Maximum Length (inclusive). If unchecked, length validation is not performed on the MSISDN received in the SLF API request. |
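The checks in this table amount to a numeric test plus a length-range test. A sketch in shell, where the IMSI value and the 10-15 length bounds are illustrative assumptions (the actual bounds come from the Policy Builder fields):

```shell
# Validate an IMSI the way the SLF API options describe: a numeric
# check plus minimum/maximum length checks (example bounds).
imsi=404685505601234
min_len=10
max_len=15
if printf '%s' "$imsi" | grep -Eq '^[0-9]+$' \
   && [ "${#imsi}" -ge "$min_len" ] && [ "${#imsi}" -le "$max_len" ]; then
  echo "IMSI accepted"
else
  echo "IMSI rejected"
fi
```

The example 15-digit numeric IMSI passes both checks; a value containing letters or outside the bounds would print `IMSI rejected`.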