
What Is Kubernetes?

Kubernetes is an open-source orchestration tool for containers and the applications that run in them. It lets users deploy, scale, and manage containerized workloads more efficiently than earlier tools did.

Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF), which now hosts Kubernetes.

Why should I use Kubernetes?

One of the biggest challenges for businesses adopting DevOps practices and cloud capabilities is maintaining common and consistent environments throughout an application's lifecycle.

Containers solved the application portability problem by packaging all necessary dependencies into discrete images, maintaining consistency across cloud platforms and microservices architectures. Kubernetes is the de facto standard for how containers are orchestrated and deployed.

IT and line-of-business users can focus their efforts on developing applications, rather than infrastructure, by adopting containers and Kubernetes.

Kubernetes allows users to choose the best place to run an application based on business needs. For some applications, the scale and reach of public cloud will be the determining factor. For others, factors such as data locality, security, or other concerns call for an on-premises deployment.

Current solutions can be complex, forcing teams to glue all the parts together at the expense of time and money. This complexity can also limit choice by forcing users to pick between on-premises and public cloud providers. A container management solution can overcome these challenges.

Kubernetes offers a host of possibilities, including but not limited to:

  • Running and managing containers
  • Automating and scaling deployments
  • Deploying stateless or stateful applications
  • Creating and configuring ingresses
  • Managing application health, service discovery, autoscaling, and load balancing
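To make the running-and-scaling items above concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The names and image reference are hypothetical placeholders, not part of any real application:

```yaml
# A minimal Deployment: runs and manages three replicas of a container,
# letting Kubernetes handle scheduling, health, and rolling updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app               # hypothetical name
spec:
  replicas: 3                 # declare the desired scale
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: example/web-app:1.0   # hypothetical image reference
        ports:
        - containerPort: 8080
```

Applying a manifest like this with `kubectl apply -f` hands the desired state to Kubernetes, which then works to keep three healthy replicas running.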

How does Kubernetes work?

At the center of Kubernetes is the cluster, a group of nodes that schedule and run the container workloads, called pods. Each cluster contains master nodes and worker nodes, and all Kubernetes nodes must be configured with a container runtime. 

As the brains of the cluster, the master nodes schedule all activity: determining which pod will run on which worker node, maintaining applications' desired state, scaling applications, and rolling out new updates.

The worker nodes run your applications, and there may be multiple nodes within your cluster. To add more capacity to your cluster, you can scale out the worker nodes.

The applications or workloads that the worker nodes run are called pods. A pod is the smallest deployable unit in Kubernetes: one or more containers together with a definition of how to run them, their image references, and sometimes a data volume. Each pod has its own IP address. This important detail differentiates the Kubernetes model from traditional container management solutions.
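The pod description above can be sketched as a manifest: a container, its image reference, and an optional data volume. All names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # hypothetical pod name
spec:
  containers:
  - name: app
    image: example/app:1.0    # image reference/location
    volumeMounts:
    - name: data
      mountPath: /data        # where the volume appears in the container
  volumes:
  - name: data                # an optional data volume
    emptyDir: {}              # scratch space that lives as long as the pod
```

Once scheduled, the pod is assigned its own IP address, which you can see with `kubectl get pod demo-pod -o wide`.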

Both master and worker nodes can be virtual machines or bare-metal servers (physical computers) running in your data center.

Types of containers

Containers are a critical element of cloud environments. The sections below cover what containers are and how they relate to Docker and Kubernetes:

Containers

Containers are often compared to virtual machines. Virtualization lets you run multiple virtual machines on a single physical server, isolating applications from one another for better use of the server's resources.

A container has its own filesystem, process space, and share of CPU and memory. But containers achieve isolation at the kernel level without a guest OS, so applications run in self-contained environments. They are faster and more lightweight than virtual machines, letting developers deploy new versions of applications and roll out updates without downtime.


Kubernetes and Docker

Docker is currently the most popular container platform. Although the idea of isolated environments dates back decades, and other container software existed before it, Docker arrived on the market at the right time and was open source from the beginning.

Docker, or the Docker Engine, lets you build and run containers. Docker Hub is a companion service for storing and sharing container images.
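As an illustration of that build-and-run workflow, a Dockerfile describes how an image is assembled. This is a minimal, hypothetical example that packages static files into an nginx image:

```dockerfile
# Illustrative Dockerfile: package a static site into an nginx image
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
```

Such an image would be built with `docker build -t example/site:1.0 .`, run with `docker run -p 8080:80 example/site:1.0`, and shared via a registry such as Docker Hub with `docker push` (the tag name is a placeholder).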

While Docker provided an open standard for packaging and distributing containerized applications, new challenges emerged:

  • How would all of these containers be coordinated and scheduled?
  • How do all the different containers in your application communicate with each other?
  • How can container instances be scaled?

Solutions for orchestrating containers soon emerged; Kubernetes, Mesos, and Docker Swarm are some popular options. Kubernetes, originally developed at Google, is a comprehensive system for automating the deployment, scheduling, and scaling of containerized applications. It supports many containerization tools, such as Docker.


Kubernetes and networking

Kubernetes and Docker differ in how they handle networking. By default, Docker containers on a host share a virtual bridge interface, and containers can talk to one another only when they sit on the same virtual machine behind the same bridge.

Containers on different virtual machines cannot communicate with each other directly; in fact, they may have exactly the same network ranges and IP addresses. There is no native container-to-container networking across hosts, so you would have to configure proxies or port-forwarding rules yourself.

When Google built Kubernetes, it introduced three rules:

  • All containers can communicate with all other containers without Network Address Translation (NAT); no proxies or port forwarding are needed
  • All nodes can communicate with all containers, and vice versa, without NAT
  • Each container has its own real IP address (not a virtual one), and it is the same address that other containers see
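Under this flat network model, any pod can reach another directly by its pod IP. A Service adds stable, DNS-based discovery on top of those pod addresses. A minimal sketch, with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc               # hypothetical service name
spec:
  selector:
    app: web-app              # routes traffic to pods carrying this label
  ports:
  - port: 80                  # stable port exposed by the service
    targetPort: 8080          # container port on the backing pods
```

Other pods in the cluster can then address the whole group by name (`web-svc`) through cluster DNS, instead of tracking individual pod IPs.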