All about Kubernetes

2022-09-29 08:55:00
DevOps Aura International
Summary: This article takes a quick look at Kubernetes and explains what we're talking about when discussing Kubernetes.

Kubernetes has excelled in the container orchestration field. It is a container-based cluster orchestration engine with capabilities such as cluster scaling, rolling upgrades and rollbacks, elastic scaling, automatic recovery, and service discovery, among many other features.


This article takes a quick look at Kubernetes and explains what we're really talking about when we discuss Kubernetes.

Kubernetes Architecture

From a macro perspective, the overall architecture of Kubernetes consists of the Master, the Nodes, and etcd.


The Master, the control node, is responsible for controlling the entire Kubernetes cluster. It includes the API Server, Scheduler, Controller, and other components, all of which interact with etcd to store and retrieve cluster data.

  • API Server: mainly provides a unified entry point for resource operations, shielding clients from direct interaction with etcd; it also handles authentication, registration, discovery, and so on.
  • Scheduler: Responsible for scheduling Pods to the Node according to certain scheduling rules.
  • Controller: The resource control center, ensuring that resources are in the expected working state.

A Node is a worker node that provides the computing power for the whole cluster and is where containers actually run; it hosts the container runtime, the kubelet, and kube-proxy.

  • Kubelet: The main tasks include managing the container lifecycle, monitoring and health checking in conjunction with cAdvisor, and regularly reporting the node's status.
  • Kube-proxy: Provides service discovery and load balancing within the cluster through Services, watching for service/endpoints changes and refreshing the load-balancing rules accordingly.

Starting with the creation of Deployment

A Deployment is a controller resource used to orchestrate pods, which we will describe later. Let's take Deployment as an example and see what each component in the architecture does when a Deployment resource is created.

  1. First, kubectl sends a request to create a Deployment.
  2. The apiserver receives the Deployment creation request and writes the relevant resources to etcd; from then on, all components interact with the apiserver/etcd in a similar way.
  3. The deployment controller list/watches the resource change and issues a request to create a ReplicaSet.
  4. The ReplicaSet controller list/watches the resource change and issues requests to create the pods.
  5. The scheduler detects the unbound pods and, through a series of filtering and scoring steps, selects an appropriate node to bind each pod to.
  6. The kubelet finds a new pod scheduled to its node, creates it, and manages its subsequent lifecycle.
  7. Kube-proxy initializes the Service-related resources, including the network rules for service discovery and load balancing.

At this point, the Kubernetes components have worked together to take us from the initial Deployment creation request to each specific pod running properly.
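For reference, a minimal Deployment manifest that would kick off the flow above might look like the sketch below; the name and image are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app              # illustrative name
spec:
  replicas: 3                 # the ReplicaSet created in step 3 maintains 3 pods
  selector:
    matchLabels:
      app: demo-app
  template:                   # pod template used for the pods created in step 4
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.23     # illustrative image
        ports:
        - containerPort: 80
```

Applying it with kubectl apply -f deployment.yaml is the kubectl request in step 1.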

Pod

Among the many API resources in Kubernetes, the pod is the most fundamental one and the smallest deployable unit.


The first question we need to consider is why we need a pod at all. The pod is a design pattern for containers that are "super close" to each other; think of scenarios such as a servlet container together with the sidecar that deploys its WAR package, or an application paired with a log-collection sidecar. These containers often need to share network, storage, and configuration, hence the pod concept.

Within a pod, the different containers share a single network namespace through an infra container, and mounting the same volume naturally shares storage, e.g., a volume that corresponds to a directory on the host.
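As a hedged illustration of this pattern, the sketch below defines a pod whose application container and log-collection sidecar share the same network namespace and an emptyDir volume; all names and images are made up for the example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar        # illustrative name
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                    # scratch directory shared by both containers
  containers:
  - name: app
    image: tomcat:9                 # e.g., a servlet container serving a WAR package
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: log-collector
    image: busybox:1.36             # sidecar reading the same log directory
    command: ["sh", "-c", "tail -F /logs/catalina.out"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
```

The two containers can also reach each other over localhost, because they live in the same network namespace provided by the infra container.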

Container Orchestration

Container orchestration is Kubernetes' specialty, so it is important to understand what it entails. Kubernetes provides several controller resources related to orchestration, such as Deployment for stateless applications, StatefulSet for stateful applications, DaemonSet for daemon processes, and Job/CronJob for offline (batch) workloads.

Let's take Deployment as an example. Deployment, ReplicaSet, and the pod form a chain of control: in simple terms, the ReplicaSet controls the number of pods, while the Deployment controls the version attributes of the ReplicaSet. This design pattern provides the basis for the two most basic orchestration actions: horizontal scaling, which is quantity control, and update/rollback, which is version-attribute control.

1. Horizontal scaling

Horizontal scaling is easy to understand: we simply change the number of pod replicas controlled by the ReplicaSet, for example from 2 to 3, and we have scaled out; the reverse change scales back in.
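In manifest terms this is just an edit to spec.replicas; the fragment below assumes the illustrative demo-app Deployment sketched earlier.

```yaml
# Fragment of the demo-app Deployment: only spec.replicas changes.
spec:
  replicas: 3   # was 2; raising the value scales out, lowering it scales in
```

Equivalently, kubectl scale deployment demo-app --replicas=3 performs the same change imperatively.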

2. Update/Rollback

The update/rollback case is exactly why the ReplicaSet object needs to exist. For example, if we need to update an application with 3 instances from v1 to v2, the number of pods controlled by the v1 ReplicaSet gradually drops from 3 to 0, while the number of pods controlled by the v2 ReplicaSet grows from 0 to 3; the update is complete when only the v2 ReplicaSet has running pods under the Deployment. A rollback is the same process in reverse.
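In the manifest, such an update is just an edit to the pod template, typically the container image tag; the deployment controller then brings up the v2 ReplicaSet and drains the v1 one. The fragment below is illustrative, including the image name.

```yaml
# Fragment of the Deployment's pod template for a v1 -> v2 update.
spec:
  template:
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:v2   # previously tagged v1
```

A rollback reverses the process; kubectl rollout undo deployment demo-app switches the pods back to the previous ReplicaSet.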

3. Rolling updates

As you can see in the example above, when we update the application the pods are replaced one by one: at least 2 pods remain available and at most 4 pods exist at the same time. The benefit of this "rolling update" is obvious: if the new version has a bug, the remaining 2 old pods are still serving, and it is easy to roll back quickly.


In practice, we can control the rolling-update behavior by configuring the RollingUpdateStrategy. maxSurge indicates how many extra new pods the deployment controller may create beyond the desired replica count, while maxUnavailable indicates how many old pods it may take down at the same time.
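In a Deployment manifest the strategy lives under spec.strategy; the sketch below reproduces the "at least 2, at most 4" behavior described above for a 3-replica deployment.

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra pod, so at most 4 pods exist at once
      maxUnavailable: 1   # at most 1 pod down, so at least 2 pods stay available
```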

Networking in Kubernetes

Now that we understand how containers are orchestrated, how do they communicate with each other? For network communication, Kubernetes needs a "three-way reachability" foundation:

  1. Nodes can communicate with pods.
  2. Pods on the same node can communicate with each other.
  3. Pods on different nodes can communicate with each other.

In short, pods on the same node communicate with each other via the cni0/docker0 bridge, and a node reaches its local pods through the same bridge.


There are several implementations of pod-to-pod communication across nodes, including the common Flannel vxlan/host-gw modes, which use etcd to learn about the pod networks of other nodes and build routing tables on the local node, enabling cross-host communication between pods on different nodes.
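As a concrete, hedged example, Flannel's backend mode is chosen in its net-conf.json, which the upstream manifests ship as a ConfigMap; a minimal sketch might look like this.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg          # name used by the upstream Flannel manifests
  namespace: kube-system
data:
  # Set Backend.Type to "vxlan" for overlay networking, or "host-gw" for routed mode.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```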

Microservices and the Service

Before we get to the rest, we need to understand a very important resource object: service.


Why do we need a Service? A pod corresponds to an instance of a microservice, so a Service corresponds to the microservice itself. The Service solves two problems in the service-invocation process:

  1. The IP of a pod is not fixed, so it is not practical to use a non-fixed IP for network calls.
  2. Service calls need to be load balanced across pods.

The Service selects the appropriate pods using a label selector and builds an endpoints object, i.e., a load-balancing list of pod addresses. In practice, we typically label every pod instance of the same microservice with something like app=xxx and create a Service with the label selector app=xxx for that microservice.
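For example, a hedged sketch of such a Service for the illustrative demo-app pods labeled app: demo-app could look like this; the ports are assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app            # other workloads can call this microservice by this name
spec:
  type: ClusterIP
  selector:
    app: demo-app           # matches the label carried by every pod of the microservice
  ports:
  - port: 80                # port exposed on the ClusterIP
    targetPort: 8080        # port the pods actually listen on
```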

Service Discovery and Network Invocation in Kubernetes

With the above "three-way" network foundation, we can see how network calls in microservice architectures are implemented in Kubernetes.

1. Inter-service calls

The first is east-west traffic, i.e., inter-service calls. These come in two main flavors: ClusterIP mode and DNS mode.


ClusterIP is a type of Service for which kube-proxy implements a VIP (virtual IP) via iptables/ipvs; you only need to access this VIP to get load-balanced access to the pods behind the Service.

The diagram above shows implementations of ClusterIP, including the userspace proxy mode (which is largely unused today) and the ipvs mode (which offers better performance).


The DNS mode is easy to understand. A Service in ClusterIP mode gets an A record of the form service-name.namespace-name.svc.cluster.local that resolves to the ClusterIP address, so callers in the same namespace can generally reach it with just service-name.
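As an illustrative sketch, a client pod in another namespace could reach the demo-app Service above through its fully qualified name; a pod in the same namespace could use the short name instead. The pod, namespace, and image names are made up.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-client                  # illustrative one-shot client pod
  namespace: other-team
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl:8.5.0    # illustrative image
    # Cross-namespace call via the full A record; within the Service's own
    # namespace, plain "demo-app" would also resolve.
    command: ["curl", "-s", "http://demo-app.default.svc.cluster.local:80"]
```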

2. Access from outside the cluster

North-south traffic consists of external requests entering the Kubernetes cluster, and there are three main methods: NodePort, LoadBalancer, and Ingress.


A NodePort is also a type of Service: through iptables it exposes a specific port on every host, and requests arriving at that port are forwarded to the Service behind it.
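A hedged NodePort sketch for the same illustrative backend pods; the nodePort value is an assumption and must fall within the default 30000-32767 range.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app-nodeport
spec:
  type: NodePort
  selector:
    app: demo-app
  ports:
  - port: 80             # ClusterIP port inside the cluster
    targetPort: 8080     # container port
    nodePort: 30080      # reachable from outside as <node-ip>:30080
```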


LoadBalancer is yet another Service type, implemented on top of a load balancer provided by the public cloud.
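The manifest differs only in the Service type; the cloud provider's controller then provisions an external load balancer and fills in its address (sketch, same illustrative selector and ports).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app-lb
spec:
  type: LoadBalancer      # the cloud controller allocates an external IP or hostname
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
```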


Often we want a unified external access layer in front of the cluster, and that is what Ingress provides. Ingress matches different backend Services with different routing rules, so it can be thought of as a "Service of Services." In practice, an Ingress controller is usually exposed in conjunction with NodePort or LoadBalancer.
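A hedged Ingress sketch that routes two hypothetical paths to different backend Services; an ingress controller (for example ingress-nginx) must be installed for the rules to take effect, and the host and second Service are assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com          # illustrative host
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: demo-app          # the Service sketched earlier
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: demo-web          # a second, hypothetical Service
            port:
              number: 80
```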


So far, we have taken a brief tour of Kubernetes: what it is, how it generally works, and how microservices run on it. The next time we hear people discussing Kubernetes, we will know what they are talking about.
