Kubernetes is a container orchestration system: it automates the deployment, scaling, and management of applications that are split across many containers, so that they run reliably and efficiently. It is an open-source project, first released in 2014, and it can manage systems distributed over thousands of computers.
Why is Kubernetes also called k8s?
By the way, the Greek word kubernetes means helmsman, which is exactly what the software does: it steers. In this article, we will mostly use the common abbreviation k8s, which comes from the fact that the word Kubernetes starts with "k", ends with "s", and has eight letters in between.
What are containers and why do we need them?
One of the defining characteristics of Big Data or Machine Learning is that in many cases a single computer is not sufficient to handle the massive computational loads. Therefore, it is necessary to use multiple computers that can share the work. In addition, a cluster can also compensate for failures of individual computers, which in turn ensures that the application is continuously accessible. We refer to such an arrangement of computers as a computing cluster or distributed system for parallel computing.
Now we know what clusters are, but we have not yet understood what containers are all about.
Containers were popularized by the Docker software environment. For now, just this: we can think of a Docker container quite literally as a shipping container. Let's assume that three people are working on a certain task inside this container (this probably violates every applicable occupational health and safety law, but it fits our example very well).
In this container, they find all the resources and machines they need for their task. They receive the raw materials they need through a certain hatch in the container, and they release the finished product through another hatch. Our shipping container can thus operate relatively undisturbed and largely self-sufficiently. The people inside will not notice whether the ship including the container is currently in the port of Hamburg, in Brazil, or somewhere in a calm sea. As long as they are continuously supplied with raw materials, they will carry out their task no matter where they are.
It is the same with Docker containers in a software environment. They are precisely defined, self-contained applications that can run on different machines. As long as they receive the defined inputs, they keep producing the same outputs, no matter where they run.
What do we need Kubernetes for?
What we’ve discussed up to this point: We use computing clusters to run computationally intensive projects, such as machine learning models, reliably and efficiently on multiple computers. In containers, in turn, we program subtasks that can be self-contained and that always run in the same way, regardless of whether they run on computer 1 or computer 2. That actually sounds sufficient, doesn’t it?
Distributed systems offer advantages over single computers, but they also bring additional challenges, for example in sharing data and in the communication between the computers within the cluster. Kubernetes takes over the work of distributing the containers across the cluster and ensures that the program runs smoothly. This allows us to focus on the actual problem, i.e. our specific use case.
How is a Kubernetes cluster built?
Kubernetes is typically installed on a cluster of computers. Each computer in this cluster is called a node. On each node run several so-called pods, and inside the pods run the containers with the smaller applications, which can communicate with each other in a local network.
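This node → pod → container hierarchy can be seen in a minimal pod definition. The sketch below is a hypothetical example; the pod name and the nginx image are placeholders, not anything from this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # the pod: the smallest deployable unit in k8s
spec:
  containers:              # one or more containers run inside the pod
    - name: web
      image: nginx:1.25    # placeholder image; any container image works here
      ports:
        - containerPort: 80
```

Applied to the cluster (e.g. with `kubectl apply -f pod.yaml`), the control plane assigns this pod to one of the nodes, where its container is then started.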
In order for the pods and the containers inside them to run without complications, there are some auxiliary functions and components in the Kubernetes cluster that make sure all systems are running.
- Control Plane: This is the computer (or group of computers) that manages the entire cluster. It does not run any pods for the application itself. Instead, it decides which nodes the individual pods are assigned to.
- Scheduler (kube-scheduler): The scheduler watches the cluster for newly created pods and assigns them to suitable nodes.
- etcd: A consistent key-value store for all information that accumulates in the cluster and needs to be kept, e.g. configuration metadata.
- Cloud Controller Manager (CCM): When part of the system is running on cloud resources, this component comes into play and handles communication and coordination with the cloud.
- Controller Manager (CM): One of the most important components in the Kubernetes cluster, it runs the controllers that monitor the cluster, for example watching for failed nodes and then rescheduling their pods onto healthy ones.
- API Server: This interface enables the communication between the nodes and the control plane; all other components talk to the cluster through it.
The nodes have a much slimmer design than the control plane and contain, in addition to the pods, two essential components:
- Kubelet: The agent of the control plane within a node; it ensures that all pods assigned to the node are running properly.
- Kube proxy (kube-proxy): This component distributes incoming traffic to the pods by maintaining the network rules inside the node.
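To see the scheduler's role concretely, a pod spec can constrain where it may be placed. The following is a hypothetical sketch: the `disktype: ssd` label is an assumption and would first have to be attached to a node (e.g. with `kubectl label nodes <node> disktype=ssd`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-task
spec:
  nodeSelector:            # the scheduler only assigns this pod to matching nodes
    disktype: ssd          # assumed node label, not set by default
  containers:
    - name: worker
      image: busybox:1.36  # placeholder image
      command: ["sleep", "3600"]
```

Without a matching node, the pod simply stays in the Pending state, which illustrates that placement is the scheduler's decision, not the pod's.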
What are the Kubernetes objects?
Kubernetes objects are persistent entities in the Kubernetes system that represent the state of the cluster. They are used to define, create, modify, and delete components of a Kubernetes application, such as pods, services, deployments, and configuration maps.
Each Kubernetes object is defined using a YAML or JSON file that specifies the object’s properties, such as name, labels, and desired state. Kubernetes objects have a unique name within a namespace and can be manipulated through the API.
Objects are divided into several categories, including:
- Pods: a pod is the smallest deployment unit in Kubernetes and consists of one or more containers that share the same network namespace and storage volume.
- Services: A service exposes a group of pods under a stable network endpoint so that other pods and clients within the cluster can reach them.
- Deployments: A deployment manages the lifecycle of a set of replicated pods and enables rolling updates and rollbacks.
- ConfigMaps: A ConfigMap is a way to store configuration data as key-value pairs so that pods can access them as environment variables or files.
- Secrets: A secret is a way to store sensitive data such as passwords and API keys as key-value pairs; they are kept separate from ordinary configuration, and encryption at rest can additionally be enabled for them in the cluster.
- Ingress: An ingress is a way to expose HTTP and HTTPS routes to services within the cluster so that external traffic can be directed to specific pods.
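Two of these objects can be combined in a single YAML file to show how they interact. The sketch below is a hypothetical example (the names `web-deployment`, `web-service`, and the nginx image are placeholders): the deployment keeps three identical pods running, and the service routes traffic to whichever pods carry the matching label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                  # keep three identical pods running
  selector:
    matchLabels:
      app: web
  template:                    # pod template the deployment stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web                   # routes traffic to all pods with this label
  ports:
    - port: 80
      targetPort: 80
```

Both objects are created with one command, e.g. `kubectl apply -f web.yaml`; if a pod fails, the deployment replaces it and the service automatically picks up the replacement.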
These objects provide a powerful way to define and manage the components of a distributed application so that developers can focus on writing code rather than managing infrastructure.
What are the Advantages of using k8s?
The principle of computer clusters was a real breakthrough in computer science, as it allows intensive computing tasks to be handled and, thanks to scaling, offered at a lower cost than with a single powerful machine. However, until the introduction of k8s, managing such clusters was labor-intensive and hard to keep track of. This is one of the reasons why Kubernetes has become a de facto industry standard.
Other benefits include:
- Automatic Scaling: Depending on demand, (cloud) resources can be ramped up or down. This saves costs by avoiding infrastructure that is sized for peak load but sits idle the rest of the time. For example, a ticketing platform can respond to rushes when well-known artists sell concert tickets and, since these releases are the exception, scale its infrastructure back down afterwards.
- Load Balancing: The Controller Manager makes it possible to take the load away from individual computers and distribute it to other machines. This prevents system failures and reduces the load on components.
- Developer-friendliness: For the application developer, Kubernetes offers the advantage that it integrates with many existing tools, so the programmer can concentrate entirely on the actual task.
- Compatibility of on-premise and cloud: Kubernetes is also used to build hybrid infrastructures that run partly in the cloud and partly on local servers. Communication and collaboration between these components can be made easy through Kubernetes.
- Clarity: K8s offers a clear dashboard for the entire cluster, which provides an initial overview and makes problem areas apparent very quickly.
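The automatic scaling mentioned above can be expressed as a Kubernetes object of its own, the HorizontalPodAutoscaler. The following is a hypothetical sketch; the target `web-deployment` is an assumed name, and the 70% CPU threshold is an arbitrary example value:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:              # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment       # assumed deployment name
  minReplicas: 2               # never scale below two pods
  maxReplicas: 10              # cap for the ticket-rush scenario
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

With this object in place, the cluster itself adds pods during a rush and removes them again afterwards, without any manual intervention.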
Which Applications work with Kubernetes?
Today, Kubernetes is used wherever distributed systems are in place. Additionally, it simplifies development work across the different stages from testing to going live. Kubernetes application areas include:
- Websites: The load on websites is not always the same. There are more users on the site during peak hours than during evening hours. With the help of Kubernetes, the loads can be controlled in a targeted manner.
- Microservice Architecture: Many applications today work with a so-called microservice architecture. This means that complex applications, such as a website, are divided into small, manageable microservices. These are then easier to improve or update than if they were “trapped” in a single, large application. In such an arrangement, the microservices can be deployed in individual containers, which in turn are orchestrated by Kubernetes.
- Software as a Service – Products: In this area, a company offers specific software that is used by customers. Because the customers represent a range of application scenarios, the load on the cluster must be distributed and balanced accordingly.
This is what you should take with you
- A network of different computers is called a cluster. A container, in turn, can perform a certain task autonomously, regardless of the system on which it does so.
- Kubernetes handles the management of containers within a computing cluster for us.
- The Kubernetes cluster has various components that ensure that all pods are running and the system continues to function.
Other Articles on the Topic of Kubernetes
- The official Kubernetes documentation contains many further interesting articles.
- The Docker documentation covers Docker and the corresponding Docker containers.