What is Kubernetes?
February 21, 2023
Overview
The shift from monolith to microservice (more aptly named multi-service) architecture has been gradual, but it picked up steam in the 2010s alongside the move to The Cloud. Initially, services were deployed to virtual machines (aka instances), but virtual machines are difficult to scale vertically in a way that maximizes resource usage. The solution was finer-grained management of each service's resources, which led to the creation of containers. Containers provide other advantages over virtual machines that are noted in the aforementioned link.

The advent of containers then created a new problem for applications with many services, because those containers need to be allocated onto virtual machines efficiently. This allocation is difficult when containers scale their resources up and down, both vertically and horizontally, at unpredictable rates! Packing containers with different scaling rules (and traffic patterns) onto existing virtual machines is one problem. Deciding when to provision new virtual machines for additional containers, or when to remove virtual machines after services have scaled down, is another. The latter requires re-allocating containers onto the remaining virtual machines.
The problems just mentioned are problems of orchestrating containers on virtual machines, and solving them is the main reason to use Kubernetes. However, this solution comes at a cost, and we'll explore those costs later in this blog post after understanding Kubernetes in a bit more detail.
Orchestration Model
The smallest deployable unit in Kubernetes is a Pod. A Pod consists of one or more containers that work together. Those containers are typically defined by the user/developer in a Kubernetes object called a Deployment. Although Pods can be defined in a Pod object, Deployments are a higher-level object that provides control over the lifecycle of the Pods. The initial (requested) and maximum (limit) CPU and memory resources for containers can be defined here.
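As a sketch, a minimal Deployment manifest with resource requests and limits might look like the following (the name, image, and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25    # any container image
          resources:
            requests:          # initial (requested) resources
              cpu: 250m
              memory: 128Mi
            limits:            # maximum (limit) resources
              cpu: 500m
              memory: 256Mi
```

The scheduler places Pods onto Nodes based on the requests; the limits cap what each container may consume at runtime.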
Services are another important Kubernetes object that works side-by-side with Deployments. A Service provides a single, stable endpoint for the set of Pods created by a controller such as a Deployment, routing traffic across those Pods as they come and go. Spreading the Pods of a Service across different Nodes is required to achieve high availability, and fine-grained control of how Pods are spread across Nodes is available via Pod Topology Spread Constraints.
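A minimal Service manifest might look like this sketch, which assumes a hypothetical set of Pods labeled `app: web-app`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc      # illustrative name
spec:
  selector:
    app: web-app         # routes to Pods carrying this label
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8080   # port the containers listen on
```

Because the selector matches labels rather than specific Pods, the Service keeps working as Pods are created and destroyed.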
The Kubernetes construct that hosts Pods is the Node. Kubernetes Nodes can be backed by either virtual machines or physical machines. In The Cloud, and in the case of AWS, Kubernetes Nodes are typically (directly or indirectly) backed by EC2 instances.
Volumes provide storage to the containers in a Pod. Volumes can be defined in other Kubernetes objects, including Deployments, and are then mounted into individual containers defined in the Deployment, Pod, StatefulSet, or other definition. Volumes can be backed by different storage types in Cloud platforms; in AWS, Kubernetes Volumes are typically backed by EBS volumes. This can be customized if the user/developer defines their own PersistentVolumeClaim.
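For durable storage, a PersistentVolumeClaim requests storage from the cluster; a sketch might look like the following (the name, size, and commented-out storage class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim           # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single Node
  resources:
    requests:
      storage: 10Gi
  # storageClassName: gp3    # e.g. an EBS-backed class on AWS
```

A Pod then references the claim in its `volumes` section and mounts it into a container via `volumeMounts`.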
StatefulSets are similar to Deployments; however, they're intended for resources that maintain state over a long period of time, such as databases or Kafka brokers. Each Pod in a StatefulSet (likely) contains different data, so unlike the Pods of a Deployment, Pods in a StatefulSet are not interchangeable.
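A StatefulSet sketch might look like the following (names and sizes are illustrative); note the `volumeClaimTemplates`, which give each replica its own PersistentVolumeClaim so its data survives rescheduling:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                   # illustrative name
spec:
  serviceName: db            # headless Service that gives each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The stable per-Pod identity (db-0, db-1, db-2) is what makes the Pods non-interchangeable, in contrast to a Deployment's replicas.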
The last basic, yet important, Kubernetes object to be aware of is the Horizontal Pod Autoscaler. This Kubernetes object is responsible for scaling replicas (Pods) horizontally.
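An autoscaler sketch, assuming a hypothetical Deployment named `web-app` and an illustrative CPU target:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:            # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```

The autoscaler adjusts the Deployment's replica count between the min and max to keep the observed metric near its target.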
These objects are all controlled by the Kubernetes Control Plane. Visit this Kubernetes Basics tutorial to begin interactive learning.
Networking Model
The full scope of this topic is described very well in this epic blog post about the Kubernetes networking model. Some important takeaways are:
- Containers in a pod share the same IP and port space, so they must have different port assignments.
- Pod IP addresses are not durable and should not be referenced directly since they appear and disappear as a result of up/down scaling. Service objects solve this problem.
- Using the optional Kubernetes DNS makes it easier to reference Services.
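With cluster DNS, a Service is reachable at a predictable name of the form `<service>.<namespace>.svc.cluster.local`. As an illustrative fragment (the Service and namespace names are assumptions), a container might reference another service like so:

```yaml
# Fragment of a container spec: reach a Service named "web-app-svc"
# in the "default" namespace via its cluster DNS name.
env:
  - name: BACKEND_URL
    value: http://web-app-svc.default.svc.cluster.local:80
```

Within the same namespace, the short name (here, `web-app-svc`) also resolves.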
Performance overhead of Kubernetes
Kubernetes requires several components in its Control Plane, which add CPU, memory, and persistent-storage overhead. Its networking also adds overhead to every request, and each Pod carries overhead of its own.
Latency
Kubernetes can add a minimum of a millisecond to each request, as shown in this Istio Service Mesh blog post, or much more if it's not configured correctly.
CPU and Memory
This in-depth Kubernetes performance experiment by Datadog found that even a well-tuned Kubernetes configuration completed only about half as many jobs as the same workload on pure virtual machines.
Alternatives
OpenShift
Kubernetes is a relatively complex system that requires intimate knowledge to manage well. Any system that genuinely needs this level of complexity likely also needs additional tools to help manage Kubernetes, as well as the system as a whole. This is why tools like OpenShift were created: OpenShift is Kubernetes with additional, optional tooling.
Container Services like ECS or Fargate
AWS's ECS and its Serverless counterpart, Fargate, are much easier to use than Kubernetes. However, they don't provide nearly the breadth of features or depth of configurability that Kubernetes does, and for most teams that's fine. Teams already on these platforms often ask whether they should switch to Kubernetes, but if the need hasn't appeared, there likely isn't one. ECS is fully capable of managing large-scale multi-service systems.
Being serverless means that Fargate's infrastructure is abstracted away from the user. This means less configurability, but also less management to worry about. Fargate is a Platform-as-a-Service whereas ECS is Infrastructure-as-a-Service.
Who is it for?
Kubernetes is highly customizable, providing very granular tuning. It even allows its own API to be extended through custom Kubernetes objects, defined by the user/developer, called Custom Resource Definitions. This feature alone enables third parties to create plugins for a wide array of use cases.
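As a sketch of how this looks in practice, the following hypothetical Custom Resource Definition teaches the API server about a made-up `Backup` object (the group, names, and schema are all illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com    # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:      # e.g. a cron expression
                  type: string
```

Once applied, users can create `Backup` objects with kubectl just like built-in objects, and a third-party controller can watch and act on them.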
Kubernetes is not the right choice for everyone, but it is a great choice for anyone running a large-scale multi-service system. This doesn't mean Kubernetes can't be used from the start by a smaller team, but inefficiencies in resource usage and performance will likely need to be accepted in order to grow into the product's intent. Beyond a certain scale, the CPU and memory savings justify dedicated teams to manage Kubernetes and its configuration. Cloud products that use Kubernetes under the hood (like AWS's EKS) help take this management burden off of a team. However, this doesn't remove the need for Kubernetes knowledge (and preferably experience, too).