What Is Kubernetes?

Kubernetes (often abbreviated "K8s") is an open-source software platform used to manage containers. Through automation, it takes care of much of the manual engineering effort involved in deploying container-based applications at scale.

Kubernetes is aware of both application and infrastructure requirements, which reduces operational complexity and makes applications location-independent.

Why should I use Kubernetes?

While containerized applications do not need Kubernetes to run, the centralized control plane it provides eliminates the need to manage containers and their associated infrastructure services individually, a task that would otherwise be complex and time-consuming.

Considering a modern application can comprise tens or hundreds of containers, the challenge of scaling without an orchestrator becomes evident.

Kubernetes offers great value for DevOps and IT Operations teams, as it enables them to maintain a common and consistent environment throughout an application's lifecycle.

An additional Kubernetes benefit is application portability. Containerized applications can run on any Kubernetes platform, whether in an on-premises data center, at the edge, or in a public cloud. This allows users to choose the best place to run an application based on business needs, such as performance, data gravity, or compliance requirements.

How does Kubernetes work?

Kubernetes is architected as a cluster of nodes, upon which containers are deployed into pods.

Each pod can contain one or more containers. A pod definition specifies how to run those containers, including an image reference and memory, CPU, storage, and networking requirements.
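As a minimal sketch of such a definition, the manifest below describes a single-container pod; the names, image, and resource values are illustrative, not prescriptive:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25      # image reference
    ports:
    - containerPort: 80    # networking requirement
    resources:
      requests:
        memory: "128Mi"    # memory requirement
        cpu: "250m"        # CPU requirement
      limits:
        memory: "256Mi"
        cpu: "500m"
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    emptyDir: {}           # simple ephemeral storage example
```

Applied with kubectl apply -f, a manifest like this asks the cluster to schedule the pod on a suitable worker node.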

Each cluster contains master and worker nodes. The master node, or nodes, schedule all cluster activities, such as determining which pod runs on which worker node, maintaining the applications' desired state, scaling, and rolling out new updates.
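Desired state is typically declared through higher-level objects such as a Deployment, which the master continuously reconciles. A hedged sketch (names, replica count, and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy          # hypothetical name
spec:
  replicas: 3               # desired state: keep three pod copies running
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate     # roll out new updates gradually
    rollingUpdate:
      maxUnavailable: 1     # at most one pod down during an update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If a pod crashes or a node fails, the master schedules replacement pods until the observed state matches the declared three replicas.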

Applications are deployed on worker nodes, and each cluster may contain multiple worker nodes. Worker nodes can be scaled up or down automatically to add or remove capacity within a cluster.

Both master and worker nodes can be virtual machines or bare-metal servers.

Kubernetes and networking

Because Kubernetes is a multi-layered, distributed system that often spans heterogeneous infrastructure across different locations, networking is a crucial capability of the architecture.

Each construct in Kubernetes, such as nodes, pods, containers, and applications, has its own communication requirements.

  • Container-to-container communication refers to containers within the same pod. Containers in the same pod run on the same node and share resources such as storage volumes and IP addressing. They can communicate with each other over localhost, using different ports.
  • Pod-to-pod communication is accomplished using a dedicated pod IP address. Kubernetes assigns each node in a cluster (a group of master and worker nodes) a block of IP addresses, and in turn every pod on that node is allocated an IP address from the block. Traffic between pods is routed using these IP addresses. Pod IP addresses are independent of the node and container IP addresses.
  • Pod-to-service networking. Because pods are ephemeral and come and go, so do their IP addresses. A service provides a stable IP address for a pod or group of pods. Any traffic destined for those pods is routed via the service's virtual IP address.
  • External-to-service communication provides connectivity from the cluster to the outside world. For connecting into the cluster, an ingress is required. Ingress is a function that manages incoming traffic to the cluster, along with other features such as load balancing, SSL offload, and name-based routing. To deliver the ingress function, a controller is needed, which is typically a virtual device.
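The last two layers above can be sketched as a Service fronting a set of pods, plus an Ingress rule for name-based routing from outside the cluster. All names and the hostname here are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web                 # stable virtual IP for pods carrying this label
  ports:
  - port: 80                 # port exposed by the service
    targetPort: 80           # port on the pods
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: app.example.com    # name-based routing: traffic for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc    # ...is forwarded to this service
            port:
              number: 80
```

An ingress controller (for example, one deployed by the cluster operator) watches Ingress objects like this and configures the actual load-balancing accordingly.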

To implement this networking model, Kubernetes uses the Container Network Interface (CNI) specification. CNI plugins can be developed by third parties to deliver networking to Kubernetes, adding vendor-specific capabilities.