Kubernetes is open source software for orchestrating containers and the applications that run in them. It automates the deployment, scaling, and management of containerized workloads far more efficiently than earlier tools.
Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the open source community. It is hosted by the Cloud Native Computing Foundation (CNCF).
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014.
Why should I use Kubernetes?
One of the biggest challenges for businesses adopting DevOps practices and cloud capabilities is maintaining common and consistent environments throughout an application’s lifecycle.
Containers solved the application portability problem by packaging all the necessary dependencies into discrete images. Kubernetes has become the de facto standard for how containers are orchestrated and deployed.
IT and Line of Business users can focus their efforts on developing applications, rather than infrastructure, by adopting containers and Kubernetes.
Kubernetes allows users to choose the best place to run an application based on business needs. For some applications, the scale and reach of public cloud will be the determining factor. For others, factors such as data locality, security, or other concerns call for an on-premises deployment.
Current solutions can be complex, forcing teams to ‘glue’ components together at the expense of time and money. They can also limit choice by forcing users to pick between on-premises and public cloud deployments. A container management solution can overcome these challenges.
Learn more about simplifying container orchestration with Cisco Hybrid Solution for Kubernetes on AWS.
How does Kubernetes work?
It is important to understand the components that make up Kubernetes. The ‘cluster’ is at the center of Kubernetes. It is the group of nodes that schedule and run the container workloads, called Pods. In each cluster there are ‘master’ nodes and ‘worker’ nodes.
The ‘master’ node (or nodes) are the brains of the cluster: they schedule all activity, such as determining which Pod runs on which worker node, maintaining applications’ desired state, scaling applications, and rolling out new updates.
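The desired state that the master maintains is typically declared in a manifest. As a minimal, illustrative sketch (the name `web` and the image `nginx:1.25` are placeholders, not from the text), a Deployment asks Kubernetes to keep three replicas of a pod running across the worker nodes:

```yaml
# Illustrative Deployment manifest: the master reads this desired state
# and keeps three replicas of the pod running, rescheduling them if a
# worker node fails and rolling out new versions when the image changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

You would apply it with `kubectl apply -f deployment.yaml`; scaling the application is then a matter of changing `replicas` (or running `kubectl scale deployment web --replicas=5`) and letting the master reconcile the cluster to the new desired state.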
The ‘worker’ nodes are the components that run your applications. You may have several or many nodes within your cluster. To add more capacity to your cluster, it’s easy to scale out the worker nodes.
The applications or workloads that the ‘worker’ nodes run are called ‘pods’. A pod is a group of one or more containers, together with a definition of how to run them: an image reference and, sometimes, a data volume. Each pod has its own IP address. This is an important part of the Kubernetes model and differentiates it from traditional container management solutions.
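As an illustrative sketch (the pod name, image, and volume are placeholders, not from the text), a pod definition brings these pieces together:

```yaml
# Illustrative Pod manifest: one or more containers, the image to run,
# and an optional data volume. Kubernetes assigns the pod its own IP.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    emptyDir: {}    # ephemeral volume shared by containers in this pod
```

In practice you rarely create bare pods by hand; higher-level objects such as Deployments create and manage them for you.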
Both ‘master’ and ‘worker’ nodes can be virtual machines (VMs) or bare-metal servers (physical computers), running in your on-premises environment or in the cloud.
Containers are often compared to virtual machines (VMs). Virtualization allows you to run multiple VMs on a single physical server, isolating applications from one another and making better use of the server’s resources.
Like a VM, a container has its own filesystem, CPU share, memory, and process space. However, containers are isolated at the kernel level without the need for a guest operating system (OS), so applications can be encapsulated in self-contained environments. This makes containers faster and more lightweight than VMs, helping developers deploy new versions of applications several times a day and roll out updates quickly without downtime.
Docker is currently the most popular container platform. Although the idea of isolating environments dates quite far back, and there has been other container software in the past, Docker appeared on the market at the right time, and was open source from the beginning.
Docker, or the ‘Docker Engine’ allows you to build and run containers. There’s also the ‘Docker Hub’, which is a service for storing and sharing images.
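As a minimal, hypothetical example (the base image, filenames, and tag are assumptions, not from the text), a Dockerfile tells the Docker Engine how to build such an image:

```dockerfile
# Illustrative Dockerfile: packages an application and its dependencies
# into a self-contained image that the Docker Engine can build and run.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

You would build it with `docker build -t myapp .`, run it with `docker run myapp`, and share it by pushing the image to a registry such as Docker Hub.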
While Docker provided an open standard for packaging and distributing containerized applications, new challenges arose: how do you coordinate and schedule many containers across multiple hosts, how do those containers communicate with one another, and how do you scale them as demand changes?
Solutions for orchestrating containers soon emerged. Kubernetes, Mesos, and Docker Swarm are some of the more popular options. Kubernetes is the container orchestrator that was developed at Google as a comprehensive system for automating deployment, scheduling and scaling of containerized applications. It supports many containerization tools such as Docker.
Kubernetes and networking
Kubernetes and Docker differ in how they approach networking. By default, Docker containers on a host share a virtual bridge interface, so containers can talk to each other only when they run on the same virtual machine (VM) and attach to the same bridge. Containers on different VMs cannot reach each other directly; in fact, each host’s bridge may use the exact same network range, so containers on different VMs can even have identical IP addresses. There is no native container-to-container networking across hosts unless you build proxy or port-forwarding rules yourself.
When Google built Kubernetes, it introduced three rules for networking: (1) all pods can communicate with all other pods, on any node, without network address translation (NAT); (2) all nodes can communicate with all pods, and vice versa, without NAT; and (3) the IP address that a pod sees itself as is the same IP address that others see it as.
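Because Kubernetes gives every pod its own routable IP address, a Service can provide one stable virtual address that load-balances across a set of pods wherever they run. As an illustrative sketch (the name `web` and the label `app: web` are placeholders, not from the text):

```yaml
# Illustrative Service manifest: a stable virtual IP and DNS name that
# load-balances traffic across all pods labeled app: web, on any node.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80        # port clients connect to
    targetPort: 80  # port the pods' containers listen on
```

Other pods in the cluster can then reach the application at the Service’s DNS name (`web`) rather than tracking individual pod IPs, which change as pods are rescheduled.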
Watch this video on securing Kubernetes workloads.