PFCP Protocol Endpoint with UDP Proxy Bypass
The Protocol endpoint bypasses the UDP proxy and sends N4/Sxa messages directly toward the UPF. Incoming N4/Sxa messages from the UPF also bypass the
UDP proxy and land on the Protocol pod (subject to UPF support for the Source IP Address IE in the Heartbeat Request message). The Protocol
pod continues to operate in non-host networking mode.
A Kubernetes service listens on the configured VIP IP address and standard port, ensuring that incoming N4/Sxa UDP packets
are delivered to the Protocol pods. Separate Kubernetes services are created for N4 and Sx, with separate target ports to identify the interface
associated with each incoming packet. Kubernetes client IP session affinity ensures that retransmitted packets
from the UPF reach the same Protocol pod instance, so the retransmission cache is hit successfully.
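As a minimal sketch of the service described above (the name, selector, and VIP are illustrative, not taken from the product; only the standard PFCP port 8805 and the `sessionAffinity: ClientIP` mechanism are standard), a per-interface Service could look like:

```yaml
# Hypothetical Service for the N4 interface (name/selector/VIP illustrative).
# sessionAffinity: ClientIP keeps retransmissions from the same UPF on the
# same Protocol pod, so the retransmission cache is hit successfully.
apiVersion: v1
kind: Service
metadata:
  name: protocol-n4
spec:
  selector:
    app: protocol
  type: LoadBalancer
  loadBalancerIP: X.X.X.X          # the configured VIP IP address
  sessionAffinity: ClientIP
  ports:
    - name: n4
      protocol: UDP
      port: 8805                   # standard PFCP port
      targetPort: 8805             # distinct target port per interface (N4 vs. Sx)
```

A second Service of the same shape, with a different target port, would identify Sxa traffic.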
Current Mode (No Bypass)
In this mode of operation, the N4, Sxa, and GTP-U message exchanges happen through the UDP Proxy. The UDP Proxy is responsible
for connecting to, or receiving connections from, the UPF.
All PFCP node-related and session-related messages are initiated either by the service or by the UPF, and
their responses pass through the UDP Proxy.
Outbound Bypass Proxy Mode
This mode of operation is enabled by default, through the Kubernetes pod environment variable "OUTBOUND_PROXY_BYPASS", for all messages
initiated by the SGW or SMF service and sent by the system toward the UPF over PFCP. The session-related messages that the SMF (Protocol pod)
sends directly to the UPF are:
- PFCP Session Establishment Request
- PFCP Session Modification Request
- PFCP Session Deletion Request
In this mode, GTP-U messages from the UPF, or initiated by the service toward the UPF, continue to be exchanged through the UDP Proxy.
Only the session-related messages (that is, those initiated by the SMF service) flow directly from the Protocol pod toward
the UPF.
The Protocol pod receives the UPF IP address from the service, uses it to set up a connection with the UPF, and subsequently uses
that connection for session-related message exchange. Node-related messages continue to take the UDP Proxy to Protocol or Node
Manager path.
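The routing split described above can be sketched as a simple dispatch rule (a hypothetical helper for illustration, not the actual product logic; message-type names are shortened):

```python
# Session-related PFCP messages listed in the source; names abbreviated.
SESSION_MSGS = {
    "SessionEstablishmentRequest",
    "SessionModificationRequest",
    "SessionDeletionRequest",
}

def outbound_path(msg_type: str, outbound_proxy_bypass: bool = True) -> str:
    """In outbound bypass mode, session-related messages go directly to the
    UPF from the Protocol pod; node messages and GTP-U keep using the proxy."""
    if outbound_proxy_bypass and msg_type in SESSION_MSGS:
        return "direct-to-upf"
    return "udp-proxy"
```

For example, a Session Establishment Request takes the direct path, while an Association Setup Request still goes through the UDP Proxy.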
Complete Bypass Mode (Inbound and Outbound)
In this mode, both inbound and outbound messages are sent and received by the Protocol pod, bypassing the UDP Proxy. The Protocol pod
listens on the N4 and GTP-U or Sxa ports based on the configured VIPs. The Protocol pod leaves the Kubernetes service
network and moves to host-based networking mode, taking the IP address of the node or VM it runs on. This behavior
is triggered by an environment variable (UDP_PROXY_BYPASS) available to both the Protocol and UDP Proxy pods.
By default, this variable is false, and the UDP Proxy and Protocol pods continue to work as they do today, with the UDP Proxy exchanging messages
with the UPF.
UDP_PROXY_BYPASS is set to true only if both the following conditions are met:
- A VIP is configured under endpoint PFCP, interface N4 or interface Sxa.
- No VIP is configured under endpoint protocol, interface N4 or interface Sxa.
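The two conditions can be expressed as a single predicate (a hypothetical helper for illustration, not the actual product logic):

```python
def udp_proxy_bypass(pfcp_n4_sxa_vips: list, protocol_n4_sxa_vips: list) -> bool:
    """UDP_PROXY_BYPASS is true only when a VIP is configured under
    endpoint PFCP (N4/Sxa) AND no VIP is configured under endpoint
    protocol (N4/Sxa)."""
    return bool(pfcp_n4_sxa_vips) and not protocol_n4_sxa_vips
```

For example, a VIP under endpoint PFCP with no VIP under endpoint protocol yields true; any VIP under endpoint protocol forces the value back to false.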
When the value of the UDP_PROXY_BYPASS variable changes, both the UDP-Proxy and Protocol pods are restarted to enable the new mode
of operation, or to fall back to the earlier mode of message exchange through the UDP-Proxy.
Triggering Bypass Mode using CLI
To trigger the bypass mode (protocol-proxy merged operation), the VIP IPs must be configured under endpoint PFCP as shown here:
no instance instance-id 1 endpoint protocol interface n4
no instance instance-id 1 endpoint protocol interface gtpu
instance instance-id 1 endpoint pfcp interface n4 vip-ip X.X.X.X
instance instance-id 1 endpoint pfcp interface gtpu vip-ip X.X.X.X
Important: With the above configuration, the value of the environment variable UDP_PROXY_BYPASS changes. This triggers a restart of both
the UDP-Proxy and Protocol pods.
Every feature present under endpoint → protocol must be correspondingly configured under endpoint → PFCP; this includes
features such as DSCP, SLA, and dispatcher-related configurations. These configurations take effect only
if a VIP-IP is configured under endpoint → PFCP with interface N4 or interface Sxa as shown above, and at the same time no
interface N4 with VIP-IP or interface Sxa with VIP-IP is present under endpoint → protocol.
Rendering CLI Values
Based on the N4 and/or Sxa VIP configuration, the rendering logic calculates which values to publish under endpoint protocol.
The configuration is rendered in the pods with the key "endpointIp"; the configuration path in each individual pod is
/config/AppName/vip-ip/endpointIp.yaml. The affected pods are:
- Protocol
- Node Mgr
- SMF-Service
- SGW-Service
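The exact schema of the rendered file is not given in this description; as a purely illustrative sketch (field names hypothetical, only the key "endpointIp" and the path come from the text above), the file might carry the configured VIPs keyed by interface:

```yaml
# Hypothetical content of /config/AppName/vip-ip/endpointIp.yaml
# (structure illustrative; only the "endpointIp" key is from the source)
endpointIp:
  n4: X.X.X.X
  gtpu: X.X.X.X
```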
Rendering the endpoint → PFCP configurations under endpoint → protocol helps avoid changes to the background configuration-read
logic.
Node Management
In this case, the Protocol pod starts a PFCP endpoint for peers to connect to. It also establishes a connection
with the UPF whenever the app service initiates a PFCP message toward the UPF. The following messages are included:
- PFCP Association Setup Request/Response
- PFCP Association Update Request/Response
- PFCP Session Report Request/Response
- PFCP Node Report Request/Response
- Heartbeat Request/Response
- PFCP PFD Management Request/Response
Session Management
Session management messages are initiated by the service and sent directly to the UPF through the Protocol pod. The Protocol pod initiates
the connection with the UPF to send these messages; this is why the Protocol pod must use host networking and take the IP address
of the node it runs on.
Standardized Port Numbers
When the "Merged" mode is triggered, the Protocol pod transitions to host-based networking and takes the IP address
of the host or node, much like the existing UDP-Proxy pod. It is essential that the UDP-Proxy, GTPC-Ep, and Protocol pods do not
share the same ports. The rule of thumb for port calculation is:
Port_Value = Base_Port_Value + (Gr_Instance_Id_index * 50) + (Logical_Instance_Id mod 50)
Gr_Instance_Id: The GR instance ID supplied in the configuration through the CLI.
Logical_Instance_Id: Identifier for the logical SMF instance.
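The rule of thumb above can be written out directly (a sketch; Gr_Instance_Id_index is assumed to be the zero-based index of the configured GR instance ID):

```python
def derived_port(base_port: int, gr_instance_id_index: int,
                 logical_instance_id: int) -> int:
    """Port_Value = Base_Port_Value + (Gr_Instance_Id_index * 50)
                                    + (Logical_Instance_Id mod 50)"""
    return base_port + gr_instance_id_index * 50 + logical_instance_id % 50
```

For example, with base port 27500, GR instance index 1, and logical instance 0, the result is 27550.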
Prometheus Port:
With complete UDP Proxy bypass, the Prometheus port 8080 is not used. Instead, the Prometheus port is 8003 plus the
instance ID; for example, the starting port for instance 1 is 8004.
Proxy Keep-Alive Port:
The proxy keep-alive port starts at 27500 + (Gr_Instance_Id_index * 50) + (Logical_Instance_Id mod 50):
- GR Instance 1 & Logical Instance Id 0: 27500 + (0 * 50) + (0 % 50) = 27500
- GR Instance 2 & Logical Instance Id 0: 27500 + (1 * 50) + (0 % 50) = 27550
Admin Port for Keep Alive and Liveness Probe:
The Admin port is 7879 + (Gr_Instance_Id_index * 50) + (Logical_Instance_Id mod 50).
Infra Diagnostics Port:
The Infra Diagnostics port is 7779 + (Gr_Instance_Id_index * 50) + (Logical_Instance_Id mod 50).
PProf Port:
The PProf profiling port is 7679 + (Gr_Instance_Id_index * 50) + (Logical_Instance_Id mod 50).
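Putting the base ports above together (a sketch; the port names are illustrative labels, and the Prometheus port follows its own 8003 + instance-ID rule rather than the *50 formula):

```python
# Base ports listed in this section (labels illustrative).
BASE_PORTS = {
    "proxy_keepalive": 27500,
    "admin": 7879,        # keep-alive and liveness probe
    "infra_diag": 7779,
    "pprof": 7679,
}

def instance_ports(gr_instance_id_index: int, logical_instance_id: int,
                   instance_id: int) -> dict:
    """Compute the full derived port set for one SMF instance."""
    ports = {
        name: base + gr_instance_id_index * 50 + logical_instance_id % 50
        for name, base in BASE_PORTS.items()
    }
    ports["prometheus"] = 8003 + instance_id  # e.g. 8004 for instance 1
    return ports
```

For GR instance index 1 and logical instance 0, this yields keep-alive port 27550 and admin port 7929, matching the worked examples above.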