Install Kubernetes or OpenShift Agents for Deep Visibility and Enforcement

Kubernetes or OpenShift Overview

Container orchestration platforms allow you to define and enforce security policies, such as network policies, pod security policies, and role-based access control (RBAC), to further enhance the security of containerized applications. Cisco Secure Workload uses Kubernetes to automate the deployment, scaling, and management of containerized applications, and provides detailed visibility into the state and performance of containerized workloads. OpenShift builds on Kubernetes, adding enterprise-grade features such as enhanced security, developer tools, and management capabilities.

Key Concepts

  • Namespaces: A namespace is a logical way to divide a cluster into multiple virtual subclusters.

  • Pods: A pod is the smallest unit in the Kubernetes object model that you can create or deploy. A pod represents a single instance of a running process in your cluster and can contain one or more containers.

  • Node: A node is a machine in the cluster, either physical or virtual, that runs applications in containers. Each node is managed by the Kubernetes control plane.

  • Services: Services define a logical set of pods and policies for accessing them. Services enable loose coupling between dependent Pods, making it easier to manage microservices architectures.

  • Sidecar Container: A sidecar container in Kubernetes is an extra container that runs alongside the main application container in the same Pod. This setup allows the sidecar container to share the network, storage, and lifecycle with the main container, enabling them to work closely together.

  • Service Mesh: A Service Mesh in Kubernetes manages microservice communication, enhancing security, reliability, and observability with advanced traffic management and monitoring capabilities.
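To illustrate several of these concepts together, here is a minimal, hypothetical manifest that defines a namespace, a pod running a main application container plus a sidecar, and a service that selects the pod; all names and images are illustrative:

```
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: demo
  labels:
    app: web
spec:
  containers:
    - name: app             # main application container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-forwarder   # sidecar sharing the pod's network and lifecycle
      image: busybox:1.33
      command: ["sh", "-c", "while true; do sleep 3600; done"]
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Because both containers run in the same pod, the sidecar shares the pod's IP address and lifecycle, and the service routes traffic to any pod matching the `app: web` label.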

Control Plane Components

You can access the Kubernetes control plane through the UI, or from the CLI using the kubectl command.

  • API Server: The API server is the central management entity that exposes the Kubernetes API, handling all internal and external requests and serving as the front end of the control plane.

  • Scheduler: The scheduler is responsible for assigning pods to nodes based on resource requirements, constraints, and availability.

  • Controller-Manager: Runs various controllers that regulate the state of the cluster, ensuring that the actual state of the cluster matches the desired state.

  • etcd: etcd is a distributed key-value store that Kubernetes uses for all its cluster data storage needs.

Node Components

  • kubelet: The kubelet is an agent on each node that ensures containers in pods are running and reports their status to the control plane.

  • kube-proxy: The kube-proxy is a network proxy on each node that manages network rules and balances traffic, ensuring services are accessible and connections reach the right pods.

  • Container Runtime: The container runtime is the software responsible for running containers.

Kubernetes/OpenShift deployment in Cisco Secure Workload

The deployment comprises four major components:

  1. The Control or Management Plane, which resides on either an on-premises Secure Workload cluster or a Secure Workload tenant hosted on SaaS.

  2. The Secure Workload Orchestrator or Connector, established within the management plane, communicates with the Kubernetes cluster APIs for EKS, AKS, GKE, OpenShift, or unmanaged Kubernetes. This interaction provides enhanced visibility into pod and service metadata, such as pod IDs, annotations, and labels. For more information, see Kubernetes/OpenShift.

  3. The Kubernetes Daemonset is deployed to the Kubernetes or OpenShift cluster intended for security measures. The Daemonset ensures the continuous operation of the Secure Workload agent or pod on each Kubernetes or OpenShift node. For more information, see Install Kubernetes or OpenShift Agents for Deep Visibility and Enforcement.

  4. Activating the Vulnerability Scanner starts a scan from one of the pods on the Kubernetes nodes. The scanner inspects every container image in the Kubernetes or OpenShift cluster and reports the identified CVEs to the Control or Management Plane.

Requirements and Prerequisites

Operating system support information is available at Agent OS support matrix.

Requirements

  • The install script requires Kubernetes or OpenShift administrator credentials to start privileged agent pods on the cluster nodes.

  • Secure Workload entities are created in the tetration namespace.

  • The node or pod security policies must permit privileged mode pods.

  • busybox:1.33 images must either be preinstalled or be downloadable from Docker Hub.

  • For the containerd runtime, if the config_path is not set, modify your config.toml (default location: /etc/containerd/config.toml) as follows:
    
    ```
        [plugins."io.containerd.grpc.v1.cri".registry]
        config_path = "/etc/containerd/certs.d"
     ```

    Restart the containerd daemon.

  • To run the Secure Workload pods on Kubernetes or OpenShift control plane nodes, use the --toleration flag to pass in a toleration for them. The toleration that is usually passed is a toleration for the NoSchedule taint that normally prevents pods from running on control plane nodes.
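    For example, the toleration that this flag injects into the agent pod spec would look roughly like the following sketch; the exact taint key depends on your cluster (for example, older clusters use node-role.kubernetes.io/master):
    
    ```
        tolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
     ```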

  • For Windows worker nodes:

    • Supported Windows worker node container runtime: ContainerD.

    • ContainerD config: Make the following containerd configuration change.
      
      ```
          [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = "/etc/containerd/certs.d"
       ```

      Remove configurations under registry.mirrors. The default configuration file location is C:\Program Files\containerd\config.toml.

      Restart the containerd daemon after the configuration changes.

    • The image mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0 must either be preinstalled or be downloadable on the Windows worker node.

    • When an existing Kubernetes agent is upgraded to a newer version, the Windows DaemonSet agent is included automatically. However, the previous installer script cannot uninstall the Windows DaemonSet agent; download the latest installer script to uninstall it.

    • Supported on:

      • Microsoft Windows Server 2022

      • Windows Server 2019

      • Kubernetes 1.27 and later

Requirements for Policy Enforcement

IPVS-based kube-proxy mode is not supported for OpenShift.

These agents must be configured with the Preserve Rules option enabled. For more information, see Creating an Agent Config Profile.

For enforcement to function properly, any installed CNI plug-in must:

  • Provide flat address space (IP network) between all nodes and pods. Network plug-ins that masquerade the source pod IP for intracluster communication are not supported.

  • Not interfere with the Linux iptables rules or marks that are used by the Secure Workload Enforcement Agent (mark bits 21 and 20 are used to allow and deny traffic for NodePort services).

The following CNI plug-ins are tested for the above requirements:

  • Calico (3.13) with the following Felix configurations: (ChainInsertMode: Append, IptablesRefreshInterval: 0) or (ChainInsertMode: Insert, IptablesFilterAllowAction: Return, IptablesMangleAllowAction: Return, IptablesRefreshInterval: 0). All other options use their default values.

For more information on setting these options, see the Felix configuration reference.
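For reference, the mark bits called out in the requirements above translate to iptables mark mask values as follows; this quick arithmetic check shows the masks that a CNI plug-in must leave untouched:

```shell
# Mark bit positions to iptables mark mask values:
# bit 21 -> 0x200000, bit 20 -> 0x100000
printf 'bit 21 mask: 0x%x\n' $((1 << 21))   # prints: bit 21 mask: 0x200000
printf 'bit 20 mask: 0x%x\n' $((1 << 20))   # prints: bit 20 mask: 0x100000
```

These are the masks to look for when auditing a CNI plug-in's iptables rules (for example, in the output of iptables-save) for conflicts with the enforcement agent.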

Install Kubernetes or OpenShift Agent using the Agent Script Installer Method


Note


The agent script installer method automatically installs agents on nodes that are added to the cluster later.


Procedure


Step 1

Navigate to the Agent Installation Methods:

  • If you are a first-time user, launch the Quick Start wizard and click Install Agents.

  • From the navigation pane, choose Manage > Agents, and select the Installer tab.

Step 2

Click Agent Script Installer.

Step 3

From the Select Platform drop-down menu, choose Kubernetes.

To view the supported Kubernetes or OpenShift platforms, click Show Supported Platforms.

Step 4

Choose the tenant where you want to install the agents.

Note


Selecting a tenant is not required for Secure Workload SaaS clusters.

Step 5

If an HTTP proxy is required to communicate with Secure Workload, choose Yes, and then enter a valid proxy URL.

Step 6

Click Download and save the file to the local disk.

Step 7

Run the installer script on a Linux machine that has access to the Kubernetes API server and a kubectl configuration file with administrative privileges as the default context/cluster/user.

The installer attempts to read the kubeconfig file from its default location (~/.kube/config). However, you can explicitly specify the location of the config file using the --kubeconfig option.
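For example, an invocation might look like the following; the script file name here is hypothetical and will match the file you downloaded in Step 6, and the verification commands assume the tetration namespace noted under Requirements:

```
# Hypothetical file name; requires cluster-admin access.
./tetration_installer_<tenant>_enforcer_kubernetes.sh --kubeconfig /path/to/admin.kubeconfig

# Verify the Secure Workload DaemonSet and agent pods:
kubectl get daemonset -n tetration
kubectl get pods -n tetration -o wide
```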


The installation script provides instructions for verifying the Secure Workload Agent Daemonset and the Pods that were installed.


Note


The HTTP proxy configured on the agent installer page before download controls only how Secure Workload agents connect to the Secure Workload cluster. This setting does not affect how Docker images are fetched by Kubernetes or OpenShift nodes, because the container runtime on those nodes uses its own proxy configuration. If the Docker images cannot be fetched from the Secure Workload cluster, debug the container runtime's image pulling process and add a suitable HTTP proxy if necessary.


Deep Visibility and Enforcement with Istio Service Mesh

Secure Workload provides comprehensive visibility and enforcement for all applications running within Kubernetes or OpenShift clusters that are enabled with Istio Service Mesh.

Following are key components and guidelines for effective segmentation of these applications:

Service Mesh Sidecars

Service Mesh uses sidecar proxies deployed with application containers to intercept and manage network traffic. Sharing the same network namespace as the application, these sidecars mediate all inbound and outbound network communication.

Traffic Enforcement

  • When implementing segmentation policies for Service Mesh enabled applications, it's essential to consider the additional ports used by sidecar proxies. These ports play a vital role in managing and securing the application's network traffic.

  • For the Service Mesh to remain intact and available, ensure your segmentation policies explicitly include rules for the ports used by sidecar proxies.

Supported Port and Protocol for Sidecar Proxy

Include the following ports while enforcing segmentation policies on Service Mesh enabled applications.

| Port  | Protocol | Description                                                          |
|-------|----------|----------------------------------------------------------------------|
| 15000 | TCP      | Envoy admin port (commands/diagnostics)                              |
| 15001 | TCP      | Envoy outbound                                                       |
| 15004 | HTTP     | Debug port                                                           |
| 15006 | TCP      | Envoy inbound                                                        |
| 15008 | HTTP2    | HBONE mTLS tunnel port                                               |
| 15020 | HTTP     | Merged Prometheus telemetry from Istio agent, Envoy, and application |
| 15021 | HTTP     | Health checks                                                        |
| 15053 | DNS      | DNS port, if capture is enabled                                      |
| 15090 | HTTP     | Envoy Prometheus telemetry                                           |


Note


The above ports are the default ports used by Istio for Envoy sidecar proxy communication. If these ports have been updated in the Istio global Service Mesh configuration settings, use the updated ports in the applications.
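As an illustration only (Secure Workload segmentation policies are defined through its own UI and API, not through this resource), the intent of admitting sidecar traffic can be sketched as a Kubernetes NetworkPolicy that allows the default Istio inbound, health-check, and HBONE ports; the name and namespace are hypothetical:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-istio-sidecar-ports   # hypothetical name
  namespace: demo                   # hypothetical namespace
spec:
  podSelector: {}                   # all pods in the namespace
  ingress:
    - ports:
        - port: 15006               # Envoy inbound
          protocol: TCP
        - port: 15021               # health checks
          protocol: TCP
        - port: 15008               # HBONE mTLS tunnel
          protocol: TCP
```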


Supported Port and Protocol for Service Mesh Control Plane

Use the following port and protocol when segmenting the control plane.

| Port  | Protocol | Description                                                      |
|-------|----------|------------------------------------------------------------------|
| 443   | HTTPS    | Webhook service port                                             |
| 8080  | HTTP     | Debug interface (deprecated, container port only)                |
| 15010 | GRPC     | XDS and CA services (plaintext, only for secure networks)        |
| 15012 | GRPC     | XDS and CA services (TLS and mTLS, recommended for production use) |
| 15014 | HTTP     | Control plane monitoring                                         |
| 15017 | HTTPS    | Webhook container port, forwarded from 443                       |