Implement Zero-Trust Microsegmentation for Kubernetes-Based Workloads

Zero-trust microsegmentation helps organizations limit lateral movement, reduce attack surfaces, and enforce least privilege access, creating a secure, adaptive, and scalable infrastructure. Microsegmentation divides the network into smaller segments to contain and control threats.

Core benefits of implementing zero-trust microsegmentation for Kubernetes workloads include:

  • Isolation of Workloads: Kubernetes microsegmentation allows you to isolate workloads or containers from each other. Each workload runs in its segment, preventing lateral movement in case one workload is compromised.

  • Visibility and Monitoring: Microsegmentation provides increased visibility into network traffic within the Kubernetes environment. This visibility enables security teams to monitor communication between containers and detect anomalous behavior or potential security incidents.

  • Zero-Trust Networking: Zero-trust networking assumes that the network is already compromised; therefore, all communication must be verified and authenticated to protect workloads against threats.

For more information, see Set Up Microsegmentation for Kubernetes-Based Workloads.

Scenario

Modern enterprise applications run in hybrid multicloud environments that include bare-metal servers, virtual machines, and containerized workloads orchestrated by Kubernetes. This complexity introduces challenges in securing applications and data without affecting operational agility. Cisco Secure Workload addresses this challenge by bringing security closer to applications through microsegmentation and zero-trust principles. It uses advanced machine learning and behavioral analysis to tailor security policies based on workload behavior. This enables organizations to:

  • Allow only business-required traffic through microsegmentation policies.

  • Detect anomalies and potential threats via behavioral baselining.

  • Identify vulnerabilities in software packages installed on workloads.

  • Quarantine compromised workloads to prevent lateral movement.

  • Enforce zero-trust security and microsegmentation in Kubernetes environments.

Is this use case for you?

The target audience for this scenario includes Security Architects, Network Administrators, and Cloud Architects responsible for managing and configuring network infrastructure. They may be involved in defining and implementing network policies to segment and control communication among components within the Kubernetes cluster.

Prerequisites

  • Ensure that a connector or external orchestrator is correctly configured and integrated with the target Kubernetes cluster. For more information, see Install Kubernetes or OpenShift Agents for Deep Visibility and Enforcement.

  • Ensure that you create the Secure Workload entities in the tetration namespace.

  • Ensure that the node or pod security policies allow privileged mode pods.

  • Ensure that you install a CNI plug-in for enforcement to function properly.

Limitations for Enforcing Zero-Trust Security in Kubernetes

  • Only Kubernetes versions 1.16 through 1.22 are supported.

  • Requires privileged pod permissions, which may conflict with strict security policies in some environments.

  • Behavioral analysis depends on sufficient traffic data. Newly deployed workloads may have limited baseline data initially.

  • Integration complexity may increase in highly customized or multi-tenant Kubernetes clusters.

Guidelines for Enforcing Zero-Trust Security in Kubernetes

  • Deploy agents as DaemonSets to ensure coverage across all nodes and workloads.

  • Ensure that you have Kubernetes API access with admin privileges for orchestrator connectivity.

  • Use behavioral baselining for policy creation rather than relying only on static rules.

  • Regularly update vulnerability data to maintain accurate risk assessments.

  • Integrate with existing firewall and network security infrastructure for comprehensive protection.

End-to-end Implementation of Zero-Trust Microsegmentation for Kubernetes-Based Workloads

Zero-trust microsegmentation for Kubernetes forms a crucial part of modern security strategies. It limits lateral movement, reduces attack surfaces, and enforces least privilege access, helping organizations build a secure, adaptive, and scalable infrastructure. Microsegmentation divides the network into smaller, isolated segments. This approach contains and controls the lateral movement of threats.

For more information, see Set Up Microsegmentation for Kubernetes-Based Workloads.

Procedure


Step 1

Scope Design–Scope design is the creation of a Secure Workload construct called a scope, which categorizes a defined set of workloads or Kubernetes clusters for policy discovery. Each Kubernetes cluster must be associated with a specific scope. Assign all workloads from a business application to one scope. Kubernetes environments support two main scope design approaches.

Note

 

While you can map multiple Kubernetes clusters to a single scope, you cannot divide the same Kubernetes cluster across multiple scopes. For more information, see Scopes and Inventory.

  • Single scope design–If you want to manage policies for an entire Kubernetes cluster or a group of similar Kubernetes clusters together, group one or more clusters into a single scope.

    Figure 1. Single Scope Design
    Diagram showing a hierarchical single scope design for Kubernetes clusters.
  • Split scope design–If multiple applications run on the same Kubernetes cluster and you must manage policies for each application independently, then:

    • Map the Kubernetes cluster inventory to a parent scope.

    • Map the subset of pods and services that belong to a particular business application to a separate child scope.

    Figure 2. Split Scope Design
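The scope rules above can be sketched in code. The following is a minimal illustrative model, not a Secure Workload API: the `ScopeTree` class, scope paths, and query strings are hypothetical names chosen for the example. It captures the two design approaches and the constraint that a cluster cannot be divided across scopes.

```python
# Illustrative model of scope design; class, scope, and query names
# are hypothetical, not part of any Secure Workload API.

class ScopeTree:
    """Tracks which scope each Kubernetes cluster is mapped to."""

    def __init__(self):
        self.cluster_to_scope = {}   # cluster name -> scope path
        self.child_scopes = {}       # child scope path -> (parent, query)

    def map_cluster(self, cluster, scope):
        # Several clusters may share one scope (single scope design),
        # but the same cluster cannot span multiple scopes.
        existing = self.cluster_to_scope.get(cluster)
        if existing is not None and existing != scope:
            raise ValueError(
                f"{cluster} is already mapped to {existing}; "
                "a cluster cannot be divided across scopes")
        self.cluster_to_scope[cluster] = scope

    def add_child_scope(self, parent, child, pod_query):
        # Split scope design: pods and services of one business
        # application are carved out of the parent scope by a query.
        self.child_scopes[f"{parent}:{child}"] = (parent, pod_query)


tree = ScopeTree()
tree.map_cluster("prod-cluster-1", "Org:Prod")   # single scope design
tree.map_cluster("prod-cluster-2", "Org:Prod")   # grouped into same scope
tree.add_child_scope("Org:Prod", "Billing", "namespace == billing")
```

Attempting to remap `prod-cluster-1` to a second scope raises an error, mirroring the restriction called out in the note above.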

Step 2

Policy discovery–Policy discovery in Secure Workload offers the capability to automatically identify segmentation policies. This discovery tool depends on workload inventory and flow data, as described in Understanding Kubernetes Inventory and Network Communications. It offers flexibility to regulate policy outcomes. Two key aspects of discovered policies, Policy Objects and Policy Definition, can be controlled in alignment with your zero-trust segmentation objectives:

  • Policy Objects refer to clusters of workloads that exhibit similar behavior. In the context of zero-trust microsegmentation, the configuration of policy discovery settings is pivotal in achieving the desired level of segmentation. Depending on the objectives, common approaches include determining enforcement boundaries at various levels, such as:

    • Kubernetes cluster: Treat the entire Kubernetes cluster as the enforcement boundary.

      Configuration: Set up the policy discovery engine to skip clustering and perform a scope-level policy discovery.

    • Kubernetes namespace: Ring-fence Kubernetes namespaces by creating clusters based on namespace queries. When you run ADM with clustering enabled, the discovered policies use the namespace-based clusters as policy objects.

    • Service and Pod as one unit: Consider services and pods as a unified unit without the need for manual cluster definition.

      Configuration: Set the cluster granularity to Coarse.

    • Pods and Services as separate units: Treat pods and services as distinct units without requiring manual cluster definition.

      Configuration: Set the cluster granularity to Fine.

      Note

       

      These approaches provide flexibility in defining the level of granularity for policy enforcement based on your specific Zero-Trust segmentation objectives within the Kubernetes environment. Choose the approach that aligns with your desired security and operational considerations.

  • Policy Definition–Involves defining the protocols and ports that are permitted for communication between source and destination clusters. Two key aspects of policy management are:

    • Port Generalization: Port generalization refers to consolidating or abstracting specific port details to a more generalized level. Instead of defining policies for each port, a broader category or range of ports may be specified, simplifying the management and implementation of policies.

    • Policy Compression: Policy compression involves optimizing and condensing policies to streamline their representation and implementation. This can include reducing redundancy, eliminating unnecessary rules, and simplifying the overall policy structure. The goal is to enhance efficiency without compromising the security requirements of the communication between clusters.

    Note

     

    For more information, see Policy Discovery.
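Port generalization and policy compression can be illustrated with a short sketch. The rule tuples below are an assumed, simplified representation of discovered policies, not Secure Workload's internal format: contiguous ports are collapsed into ranges, and rules that differ only in port are merged.

```python
# Sketch of port generalization and policy compression on discovered
# policies; the (src, dst, proto, port) rule shape is illustrative.

def generalize_ports(ports):
    """Collapse individual ports into contiguous (low, high) ranges."""
    ports = sorted(set(ports))
    ranges, start, prev = [], ports[0], ports[0]
    for p in ports[1:]:
        if p != prev + 1:          # gap ends the current range
            ranges.append((start, prev))
            start = p
        prev = p
    ranges.append((start, prev))
    return ranges

def compress_policies(rules):
    """Merge rules that differ only in port into one generalized rule."""
    grouped = {}
    for src, dst, proto, port in rules:
        grouped.setdefault((src, dst, proto), []).append(port)
    return [(src, dst, proto, generalize_ports(ports))
            for (src, dst, proto), ports in grouped.items()]

discovered = [
    ("frontend", "api", "TCP", 8080),
    ("frontend", "api", "TCP", 8081),
    ("frontend", "api", "TCP", 8082),
    ("frontend", "db", "TCP", 5432),
]
compressed = compress_policies(discovered)
# Four discovered rules reduce to two: one with a port range 8080-8082
# and one single-port rule for 5432.
```

The same idea applies at larger scale: fewer, broader rules are easier to review and enforce without weakening the communication requirements between clusters.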

Step 3

Analyze and validate policies–The Real-time policy analysis tool allows you to define policy versions and analyze them against live traffic from the Kubernetes workloads. This process helps to identify unexpected outcomes. Policy analysis does not require enforcing the policies on the Kubernetes cluster. Instead, it evaluates whether the intended behavior aligns with live traffic patterns in the cluster. For more information, see Live Policy Analysis.

Figure 3. Live Policy Analysis
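The analysis step can be sketched as replaying observed flows against a candidate policy set without enforcing anything. This is an assumption-laden simplification of live policy analysis: the flow records and rule tuples below are hypothetical shapes, not the product's data model.

```python
# Minimal sketch of policy analysis: evaluate observed flows against a
# candidate policy set and report flows the policy would not permit.
# Flow and rule shapes are illustrative.

def analyze(flows, policies):
    """Split flows into those permitted and those that would escape."""
    def permits(rule, flow):
        src, dst, proto, (lo, hi) = rule
        return (flow["src"] == src and flow["dst"] == dst
                and flow["proto"] == proto and lo <= flow["port"] <= hi)

    allowed, escaped = [], []
    for flow in flows:
        if any(permits(r, flow) for r in policies):
            allowed.append(flow)
        else:
            escaped.append(flow)   # live traffic the policy misses
    return allowed, escaped

policies = [("frontend", "api", "TCP", (8080, 8082))]
flows = [
    {"src": "frontend", "dst": "api", "proto": "TCP", "port": 8080},
    {"src": "frontend", "dst": "db",  "proto": "TCP", "port": 5432},
]
allowed, escaped = analyze(flows, policies)
```

Here the `frontend`-to-`db` flow lands in `escaped`, flagging traffic the candidate policy does not cover so that it can be reviewed before enforcement, which is exactly the kind of unexpected outcome this step is meant to surface.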

Step 4

Enforce policies on the Kubernetes nodes

This step enforces policies on the Kubernetes nodes. The Secure Workload policy engine translates intended policies into specific rules, and then programs the infrastructure components according to the underlying operating system of the Kubernetes node.

The enforcement process varies depending on whether the Kubernetes node is Linux-based or Windows-based:

Linux nodes

  • Pod to Pod Policy (Inter and Intra Node):

    Scenario: Control communication between pods, whether they are on the same node (intra-node) or different nodes (inter-node).

    Example: Restrict communication between Pod A on Node 1 and Pod B on Node 2.

  • External to Pod via NodePort or LoadBalancer:

    Scenario: Govern traffic from external sources to pods via NodePort or LoadBalancer services.

    Example: Allow external access to a specific service on a pod using NodePort or LoadBalancer.

  • Pod to External:

    Scenario: Manage communication from pods to external entities outside the Kubernetes cluster.

    Example: Allow pods to access specific external services or APIs.

Windows nodes

  • Pod to Pod Policy (Inter and Intra Node):

    Scenario: Regulate communication between pods, whether they are on the same node (intra-node) or different nodes (inter-node).

    Example: Restrict communication between Pod X on Node A and Pod Y on Node B.

  • External to Pod via NodePort or LoadBalancer:

    Scenario: Control traffic from external sources to pods through NodePort or LoadBalancer services.

    Example: Permit external access to a specific service on a pod using NodePort or LoadBalancer.

  • Pod to External:

    Scenario: Manage communication from pods to external entities beyond the Kubernetes cluster.

    Example: Allow pods on Windows nodes to communicate with specific external services or APIs.

For more information, see Enforcement on Containers.
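The translation from intended policy to node-level rules can be sketched as follows. This is a deliberately simplified illustration: real agents program the node's native firewall machinery (for example, netfilter on Linux or the Windows Filtering Platform on Windows), and the iptables-like rule strings and the policy tuple below are assumptions made for the example, not the product's actual output.

```python
# Illustrative translation of an abstract pod-to-pod policy into
# Linux-style firewall rule strings; the syntax is a simplification
# of what an enforcement agent would actually program.

def to_linux_rules(policy):
    """Expand one allow policy into per-IP-pair rules, then default deny."""
    src_ips, dst_ips, proto, port = policy
    rules = []
    for s in src_ips:
        for d in dst_ips:
            rules.append(
                f"-A FORWARD -s {s} -d {d} -p {proto.lower()} "
                f"--dport {port} -j ACCEPT")
    # Zero trust: anything not explicitly allowed is dropped.
    rules.append("-A FORWARD -j DROP")
    return rules

# Hypothetical policy: one source pod IP may reach two destination
# pod IPs on TCP 8080 (an inter-node pod-to-pod allow rule).
policy = (["10.1.0.5"], ["10.2.0.7", "10.2.0.8"], "TCP", 8080)
rules = to_linux_rules(policy)
```

The key design point the sketch preserves is the ordering: specific allow rules first, then an unconditional drop, so that any pod-to-pod, external-to-pod, or pod-to-external flow outside the intended policy is denied by default.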

Step 5

Monitor policy compliance–The live policy analysis tool allows you to continue monitoring policy compliance against live traffic from the cluster and to identify any unexpected outcomes.

After enforcing policies on the Kubernetes nodes, it is essential to continue monitoring policy compliance. The live policy analysis tool remains valuable during this phase, enabling ongoing assessment of policy adherence against live traffic within the cluster. This continuous monitoring helps identify and address unexpected outcomes or deviations from the intended policy framework. By regularly analyzing live traffic and comparing it to the established policies, organizations can maintain a proactive security posture, promptly identifying and mitigating issues that arise in a dynamic and evolving Kubernetes environment.

Figure 4. Policy Compliance
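Continuous monitoring can be reduced to a simple metric: the share of observed flows per time window that comply with the enforced policy. The sketch below assumes hypothetical record shapes and an arbitrary alert threshold; it only illustrates the idea of tracking compliance over time and flagging regressions.

```python
# Sketch of ongoing compliance monitoring: compute per-window
# compliance rates and flag windows that fall below a threshold.
# Record shapes and the threshold value are illustrative.

from collections import defaultdict

def compliance_by_window(flow_records, threshold=0.99):
    """flow_records: iterable of (window_label, compliant_bool) pairs.

    Returns {window: (compliance_rate, meets_threshold)}.
    """
    totals = defaultdict(lambda: [0, 0])   # window -> [compliant, total]
    for window, compliant in flow_records:
        totals[window][1] += 1
        if compliant:
            totals[window][0] += 1
    return {window: (ok / total, ok / total >= threshold)
            for window, (ok, total) in totals.items()}

records = [
    ("10:00", True), ("10:00", True),     # fully compliant window
    ("10:05", True), ("10:05", False),    # one escaped/rejected flow
]
report = compliance_by_window(records)
# A window whose rate drops below the threshold would trigger review
# of the offending flows, as described in Step 5.
```

A drop in a window's compliance rate is the signal to investigate: either the traffic is genuinely unwanted, or the policy needs to evolve with the application.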