Once upon a time: the road to Kubernetes
The story of Kubernetes began at Google: like no other company, it needed huge infrastructure to make its search engine and the associated advertising available to everyone. The enormous growth it planned for was a challenge that gave rise to various ideas. Hardware virtualization, site reliability engineering and container technologies are three essential pillars of the solution that followed:
Cloud Computing: Implement efficient infrastructures with virtual machines
Virtualized computing power is an important prerequisite for efficient infrastructures: it has improved the utilization of server hardware roughly fivefold. Where previously one physical server was dedicated to a single service, many services now share one machine thanks to virtualized hardware, raising hardware utilization from about 5 to 25 percent. Even with virtualization, however, resources are still not fully exploited: if a single machine cannot handle peak loads, the whole system sits idle at quieter times. As operators of an OpenStack-based public cloud, we know this problem well.
Site Reliability Engineering: Solving Problems with Software
The approach behind Site Reliability Engineering (SRE) is to combine operations and software development. By drawing on the methods of software development, SRE solves classic problems of IT operations. For example, if a server crashes, SRE does not primarily try to fix the problem by changing the server's configuration. Instead, the problem is automated away and solved sustainably at the software level.
The Docker Age: Optimizing development and operation with containers
Docker started in 2013 with a simple idea: applications do not always need a complete operating system with all its services, only a small part of it. With this approach, resource-conserving images can be built that run autonomously and deliver reproducible results. For software development, but also from a business perspective, the new quasi-standard for container images was an encouraging development: developers could build software without having to pay much attention to individual environments, and containers also led to fewer errors in operation.
The project “Borg” becomes Kubernetes
Google relied on containers and their advantages in many ways – even independently of Docker. In order to manage (or orchestrate) containers sensibly, the employees developed the “Borg” project. Star Trek fans know the meaning of the name: it is a collective that is not organized hierarchically, because all Borg are connected to each other. We’ll learn later what that means for Kubernetes.
Borg was an undisputed competitive advantage of Google because it utilized the machines much better than purely virtualized operations. Google strengthened its market position by being able to provide huge infrastructure landscapes at a much lower cost.
The big hit for the general public was the idea of making Borg available as an open-source solution. Kubernetes (Greek for helmsman; abbreviated K8s) was born.
What is K8s?
The open-source platform Kubernetes orchestrates and automates the setup, operation and scaling of container applications. The architecture allows the containers to be orchestrated across multiple machines, whether they are virtualized hardware or bare metal.
Kubernetes continuously monitors the state of the applications and ensures that it matches the given specifications: for example, if the specification says that three web server instances should always be running, Kubernetes keeps exactly three instances alive. If an instance fails or a process crashes, Kubernetes restarts that instance. Even if an entire worker node fails or becomes unreachable, Kubernetes reschedules the affected containers on another node.
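This "desired state versus observed state" behaviour is the core of Kubernetes' reconciliation loops. As a rough illustration only, the idea can be sketched in a few lines of Python (a hypothetical simulation with made-up names, not Kubernetes code):

```python
# Hypothetical sketch of the reconciliation idea: compare the desired
# state with the observed state and correct the difference.

def reconcile(desired_replicas, running_instances):
    """Return the instance list after one reconciliation pass."""
    # Drop crashed instances from the observed state.
    healthy = [i for i in running_instances if i["healthy"]]
    # Too few instances: start replacements (names here are illustrative).
    needed = desired_replicas - len(healthy)
    for k in range(max(needed, 0)):
        healthy.append({"name": f"web-replacement-{k}", "healthy": True})
    # Too many instances: scale back down to the desired count.
    return healthy[:desired_replicas]

# One of three instances has crashed; one pass restores the desired count.
state = [{"name": "web-0", "healthy": True},
         {"name": "web-1", "healthy": False},
         {"name": "web-2", "healthy": True}]
state = reconcile(3, state)
print(len(state))  # → 3
```

In the real system this comparison runs continuously, which is why a crashed instance is replaced without any operator intervention.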
Kubernetes follows a master-worker architecture: the master components control the nodes on which the containers run. The Kubernetes architecture includes:
- Kubernetes Master: The Kubernetes Master is the central control element that distributes and manages the containers on the nodes. For high availability, several masters can be deployed.
- Nodes: A node, also called a worker machine or (historically) a minion, can be a virtual machine (VM) or a physical server. The pods run on the nodes.
- Pods: Pods are the smallest deployable unit. They contain one or more containers that share the allocated resources.
- etcd: etcd is the key-value database that stores the configuration and state of the Kubernetes cluster. Kubernetes communicates with etcd via the API server.
- API server: The API server exposes the cluster state held in etcd, which makes it one of the most important components of Kubernetes. Via REST interfaces it communicates with all internal and external services of the Kubernetes cluster.
- kube-scheduler: The kube-scheduler monitors and manages the utilization of the nodes by deciding, based on the available resources, on which node a pod starts.
- Controller Manager: The Controller Manager runs the cluster's control loops and is therefore an essential component for monitoring. It reads and writes all states via the API server.
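To make the scheduler's role more concrete, here is a deliberately simplified Python sketch of a resource-based placement decision. The node names, the single CPU criterion and the `schedule` function are illustrative assumptions; the real kube-scheduler filters and scores nodes using many more criteria.

```python
# Hypothetical, much-simplified scheduling decision:
# place a pod on the node with the most free CPU that can still fit it.

def schedule(pod_cpu, nodes):
    """Pick a node name for the pod, or None if nothing fits."""
    # Filter step: keep only nodes with enough free capacity.
    candidates = [n for n in nodes if n["free_cpu"] >= pod_cpu]
    if not candidates:
        return None
    # Scoring step: prefer the node with the most free CPU.
    best = max(candidates, key=lambda n: n["free_cpu"])
    best["free_cpu"] -= pod_cpu  # reserve the requested resources
    return best["name"]

nodes = [{"name": "node-a", "free_cpu": 2.0},
         {"name": "node-b", "free_cpu": 4.0}]
print(schedule(1.0, nodes))  # → node-b
```

The two-step shape (filter, then score) mirrors how placement decisions are structured, even though the real implementation considers memory, affinity rules, taints and much more.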
More than Kubernetes: MetaKube by SysEleven
For the IT world, Kubernetes is a revolutionary approach and makes life easier for DevOps. But the implementation of Kubernetes is complex and time-consuming. With MetaKube we offer a managed service for Kubernetes that goes beyond pure provisioning: We provide you with the powerful features of Kubernetes and have already considered other important services for you. Meta features include Backup & Recovery, Load Balancer as a Service, Monitoring and Lifecycle Management.
With MetaKube we are a member of the CNCF, the umbrella organization that coordinates the development of the open-source solution. For this reason, we strictly ensure that MetaKube remains standards-compliant with Kubernetes. Our OpenStack-based cloud, on which we operate Kubernetes, is also 100 percent standards-compliant.
MetaKube’s goal is to provide you with all the advantages of Kubernetes with the peace of mind of a managed service. This allows you to focus on developing your application without worrying about operational issues.
Please do not hesitate to contact us: we make your cluster available to you within a few minutes.