Provides instructions for deploying ULB, including required steps and verification actions.
Use this procedure to deploy the ULB in an on-premise environment. It covers the end-to-end deployment, including preparation, image handling, Helm chart configuration, and log management for both the operator and agent components.
Before you begin
Before you begin the deployment of ULB in an on-premise environment, make sure the following prerequisites are met:
- The Kubernetes cluster is installed and configured.
- The Linux kernel version is 5.4.x or later.
- Packet forwarding is enabled in the kernel.
- Reverse Path Filtering is disabled (that is, ‘rp_filter = 0’).
- The iptables user-space utility is installed.
- Cilium is installed using SMI/CNDP.
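The kernel-level prerequisites can be sanity-checked from a shell on each node. This is a sketch using the standard Linux sysctl keys; adjust per-interface `rp_filter` settings as your environment requires.

```shell
# Verify packet forwarding is enabled (expected value: 1)
cat /proc/sys/net/ipv4/ip_forward

# Verify Reverse Path Filtering is disabled (expected value: 0)
cat /proc/sys/net/ipv4/conf/all/rp_filter

# Set the values if needed (requires root; persist them in /etc/sysctl.conf
# or a file under /etc/sysctl.d/ to survive reboots)
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.rp_filter=0

# Confirm the iptables user-space utility is installed
iptables --version

# Confirm the kernel version is 5.4.x or later
uname -r
```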
Procedure
1. Download the ULB image tar file from the designated source (https://software.cisco.com/).
Extract the tar file to unpack the Docker images.
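For example, the extraction and a quick inspection of the unpacked images can look like the following. The filenames here are illustrative placeholders, not the actual artifact names.

```shell
# Extract the downloaded tar file (filename is hypothetical)
tar -xvf ulb-images.tar

# List the unpacked contents to identify the Docker image archives
ls -l
```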
2. Onboard and deploy the ULB image:
- Load the Docker images and tag them as required.
- Log in to Docker Hub and push the images.
- Ensure you have the Helm charts for the ULB operator and agent, and update the ‘values.yaml’ file in the Helm charts to use your Docker images.
- Deploy the ULB (ULB operator and agents) using the ULB Ops-center from the cluster deployer.
- Configure the ULB according to your requirements.
- Ensure packet forwarding is enabled and Reverse Path Filtering is disabled.
- Update the log tags configuration for the ULB operator and agent.
Use the following configuration commands for the operator:
config
logging name lbs_operator.app.app level { debug | error | info | off | trace | warn }
logging name lbs_operator.app.empcrd level { debug | error | info | off | trace | warn }
logging name lbs_operator.app.lbcrd level { debug | error | info | off | trace | warn }
logging name lbs_operator.app.service level { debug | error | info | off | trace | warn }
end
Use the following configuration commands for the agent:
config
logging name lbs-agent.egress-mgr.app level { debug | error | info | off | trace | warn }
logging name lbs-agent.egress-mgr.iptables level { debug | error | info | off | trace | warn }
logging name lbs-agent.egress-mgr.iptables-oper level { debug | error | info | off | trace | warn }
logging name lbs-agent.egress-mgr.lbs-iptables level { debug | error | info | off | trace | warn }
logging name lbs-agent.egress-mgr.egressmgr level { debug | error | info | off | trace | warn }
logging name lbs-agent.egress-mgr.listener level { debug | error | info | off | trace | warn }
end
This log tag configuration gives precise control over the logging output: by setting different log levels for individual components, administrators can tailor logging for monitoring, troubleshooting, performance analysis, and security auditing, and so keep the load balancing service running smoothly.
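The image onboarding in the steps above follows the standard Docker and Helm workflow. A sketch is shown below; the image names, tags, registry path, chart directory, and release name are illustrative assumptions, not the actual ULB artifact names.

```shell
# Load an unpacked image archive into the local Docker daemon
# (archive name is hypothetical)
docker load -i ulb-operator-image.tar

# Tag the image for your registry and push it
# (replace <your-org> and the tag with your actual values)
docker tag ulb-operator:latest docker.io/<your-org>/ulb-operator:latest
docker login docker.io
docker push docker.io/<your-org>/ulb-operator:latest

# After updating values.yaml to reference the pushed images,
# install or upgrade the chart (chart path and release name are placeholders)
helm upgrade --install ulb-operator ./ulb-operator-chart -f values.yaml
```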
3. Define and apply LoadBalancer and Egress Management Policy CRs as needed.
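Applying the CRs follows the standard Kubernetes pattern. The manifest filenames below are placeholders; the fields inside each manifest are defined by the ULB CRDs installed with the operator.

```shell
# Apply the LoadBalancer and Egress Management Policy custom resources
# (manifest filenames are placeholders)
kubectl apply -f loadbalancer-cr.yaml
kubectl apply -f egress-mgmt-policy-cr.yaml

# Confirm the applied resources exist (resource kind names depend on the ULB CRDs)
kubectl get crd
```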