How it Works

This section describes the operation of the Dynamic Routing feature.

Incoming Traffic

BGP uses TCP as its transport protocol, on port 179. Two BGP routers form a TCP connection with each other; these routers are called peer routers. The peer routers exchange messages to open and confirm the connection parameters.
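For illustration, the following minimal Python sketch opens a TCP connection to a peer on port 179 and sends a BGP OPEN message. The peer address, AS number, and router ID are hypothetical placeholders; a production speaker uses a complete BGP implementation rather than raw sockets.

    import socket
    import struct

    # Hypothetical values for illustration; a real speaker takes these
    # from its configuration.
    PEER_ADDR = "209.165.200.225"
    LOCAL_AS = 65000
    ROUTER_ID = "209.165.200.226"
    BGP_PORT = 179

    def build_open_message(local_as, hold_time, router_id):
        """Build a minimal BGP OPEN message (RFC 4271, type 1)."""
        # Body: version 4, 2-byte AS, hold time, BGP identifier,
        # and no optional parameters.
        body = struct.pack(
            "!BHH4sB",
            4,
            local_as,
            hold_time,
            socket.inet_aton(router_id),
            0,
        )
        # Header: 16-byte all-ones marker, 2-byte total length,
        # 1-byte type (1 = OPEN).
        header = b"\xff" * 16 + struct.pack("!HB", 19 + len(body), 1)
        return header + body

    with socket.create_connection((PEER_ADDR, BGP_PORT), timeout=5) as conn:
        conn.sendall(build_open_message(LOCAL_AS, 90, ROUTER_ID))
        reply = conn.recv(4096)  # expect the peer's OPEN, then KEEPALIVEs
        print(f"received {len(reply)} bytes from peer")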

For incoming traffic, the BGP speaker publishes the routing information of the protocol pods in active/standby mode. Use the following figure as an example to understand the dynamic routing functionality; a sketch of the selection logic follows it. There are two protocol pods, pod1 and pod2. Pod1 is active and pod2 is in standby mode. The service IP address 209.165.201.10 is configured on both nodes, 209.165.200.226 and 209.165.200.227. Pod1 runs on host 209.165.200.226 and pod2 on host 209.165.200.227. The host IP addresses expose the pod services. The BGP speaker publishes the route to 209.165.201.10 through both 209.165.200.226 and 209.165.200.227, along with the preference values 110 and 100, which determine the priority of the pods.

Figure: Dynamic Routing for Incoming Traffic in the Active-Standby Topology
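The sketch below models the two advertisements from the example above and shows how a neighboring router picks the active pod's path. The data structures are illustrative only and are not part of the product.

    from dataclasses import dataclass

    @dataclass
    class Advertisement:
        prefix: str       # service IP being announced
        next_hop: str     # host IP that exposes the pod services
        preference: int   # higher value wins

    # pod1 (active) on 209.165.200.226 advertises preference 110;
    # pod2 (standby) on 209.165.200.227 advertises preference 100.
    routes = [
        Advertisement("209.165.201.10/32", "209.165.200.226", 110),
        Advertisement("209.165.201.10/32", "209.165.200.227", 100),
    ]

    # The neighbor prefers the higher preference, so incoming traffic
    # for the service IP lands on the host running the active pod.
    best = max(routes, key=lambda r: r.preference)
    print(f"traffic to {best.prefix} is steered via {best.next_hop}")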

For high availability, each cluster has two BGP speaker pods in an active/standby topology. Kernel route modification is done at the host/network level, on the host where the protocol pod runs.
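As a sketch of what such a host-level route update might look like, assuming a Linux host with the iproute2 "ip route replace" command and a placeholder interface name:

    import subprocess

    SERVICE_IP = "209.165.201.10/32"  # service IP from the example above
    DEVICE = "eth0"                   # hypothetical interface on the active host

    def install_kernel_route(prefix, dev):
        """Install or overwrite a kernel route for the service IP.

        Intended to run on the host where the active protocol pod is
        scheduled, so traffic attracted by the BGP advertisement is
        delivered locally.
        """
        subprocess.run(["ip", "route", "replace", prefix, "dev", dev], check=True)

    install_kernel_route(SERVICE_IP, DEVICE)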

MED Value

The Local Preference attribute is used only for internal BGP (iBGP) neighbors, whereas the MED attribute is advertised only to external BGP (eBGP) neighbors. A lower MED value is the preferred choice in BGP path selection.

Table: MED Value

Bonding Interface Active | VIP Present | MED Value | Local Preference
------------------------ | ----------- | --------- | ----------------
Yes                      | Yes         | 1210      | 2220
Yes                      | No          | 1220      | 2210
No                       | Yes         | 1215      | 2215
No                       | No          | 1225      | 2205
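The table reduces to a simple lookup, sketched below. The function name is illustrative, but the values are taken directly from the table.

    # (bonding interface active, VIP present) -> (MED, local preference)
    MED_TABLE = {
        (True,  True):  (1210, 2220),
        (True,  False): (1220, 2210),
        (False, True):  (1215, 2215),
        (False, False): (1225, 2205),
    }

    def advertised_attributes(bonding_active, vip_present):
        """Return the (MED, local preference) pair to advertise."""
        return MED_TABLE[(bonding_active, vip_present)]

    # The healthiest state advertises the lowest MED (preferred by eBGP
    # neighbors) and the highest local preference (preferred within the AS).
    med, local_pref = advertised_attributes(True, True)
    print(f"MED {med}, local preference {local_pref}")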

Bootstrap of BGP Speaker Pods

The following sequence of steps sets up the BGP speaker pods (a condensed sketch follows the list):

  1. The BGP speaker pods use TCP as the transport protocol, on port 179. These pods use the AS number that is configured in the Ops Center CLI.

  2. Register with the topology manager.

  3. Select the Leader pod. The active speaker pod is the default choice.

  4. Establish connections to all the BGP peers provided by the Ops Center CLI.

  5. Publish all existing routes from ETCD.

  6. Configure import policies for routing by using CLI configuration.

  7. Start the gRPC stream server on both speaker pods.

  8. Similar to the cache pods, two BGP speaker pods run in each namespace.
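The following is a condensed, illustrative sketch of the bootstrap order above; every name in it is hypothetical and stands in for internal components of the speaker pod.

    from dataclasses import dataclass, field

    @dataclass
    class BgpSpeaker:
        asn: int                 # AS number from the Ops Center CLI (step 1)
        port: int = 179
        peers: list = field(default_factory=list)
        routes: list = field(default_factory=list)

        def connect(self, address):
            self.peers.append(address)

        def advertise(self, prefix):
            self.routes.append(prefix)

    def bootstrap(asn, peer_addresses, etcd_routes):
        speaker = BgpSpeaker(asn=asn)
        # step 2: register with the topology manager (elided)
        # step 3: leader election; the active speaker pod is the default leader
        for address in peer_addresses:
            speaker.connect(address)      # step 4: connect to all BGP peers
        for prefix in etcd_routes:
            speaker.advertise(prefix)     # step 5: publish routes from ETCD
        # step 6: apply CLI-configured import policies (elided)
        # step 7: start the gRPC stream server on both speaker pods (elided)
        return speaker

    speaker = bootstrap(65000, ["209.165.200.225"], ["209.165.201.10/32"])
    print(f"AS{speaker.asn}: {len(speaker.peers)} peer(s), {len(speaker.routes)} route(s)")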