Introducing Kubernetes

Kubernetes’s origins

After keeping Borg and Omega secret for a whole decade, Google introduced Kubernetes in 2014: an open-source system built on the experience gathered through Borg, Omega, and other internal Google systems.

Looking at Kubernetes from the top of a mountain

Kubernetes is a software system that allows you to easily deploy and manage containerized applications on top of it. It relies on the features of Linux containers to run heterogeneous applications without having to know any internal details of those applications and without having to manually deploy them on each host. Because these apps run in containers, they don't affect other apps running on the same server, which is critical when you run applications for completely different organizations on the same hardware. This is of paramount importance for cloud providers, since they strive for the best possible utilization of their hardware while still maintaining complete isolation of hosted applications.

Kubernetes enables you to run your software applications on thousands of computer nodes as if all those nodes were a single, enormous computer. It abstracts away the underlying infrastructure and, by doing so, simplifies the development lifecycle for both the developers and the system administrators.

Deploying applications through Kubernetes is always the same, whether your cluster contains only a couple of nodes or thousands of them. The size of the cluster makes no difference at all. Additional cluster nodes simply increase the amount of resources available to deployed apps.

UNDERSTANDING THE CORE OF WHAT KUBERNETES DOES

Kubernetes exposes the whole datacenter as a single deployment platform:
[Figure: the whole datacenter exposed as a single deployment platform]

  • This system (see the figure) is composed of a master node and any number of worker nodes. When the developer submits a list of apps to the master, Kubernetes deploys them across the cluster of worker nodes. Which node each component lands on doesn't (and shouldn't) matter, either to the developer or to the system administrator.
  • The developer can specify that some apps must run together, and those will indeed be deployed on the same worker node. Others will be spread around the cluster. But regardless of where each app is deployed, it can easily find and communicate with the other apps.
The architecture of a Kubernetes cluster

A Kubernetes cluster is composed of many nodes, which can be split into two types:

  • the Kubernetes Control Plane (in essence, the master node(s)), which controls and manages the whole Kubernetes system,
  • worker nodes, which run the actual applications you deploy.
THE CONTROL PLANE

The control plane is what controls the whole cluster and makes it function. It consists of multiple components, which can run on a single master node or be split across multiple nodes and replicated to ensure high availability. These components are:

  • the API Server, which you use to communicate with and perform operations on the Kubernetes cluster,
  • the Scheduler, which is responsible for scheduling your apps (assigning a worker node to each deployable component of your application),
  • the Controller Manager, which performs cluster-level functions, such as replicating components and keeping track of worker nodes,
  • etcd, a reliable distributed data store that persistently stores the whole cluster configuration.

The components of the control plane hold and control the state of the cluster, but they don't actually run your applications. That's done by the worker nodes.
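On most clusters you can see these components for yourself, since they typically run as pods in the kube-system namespace. A minimal sketch; the exact pod names below are illustrative of a kubeadm-style cluster with a single master node:

```shell
# List the control plane components; names vary by distribution.
$ kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
etcd-master                      1/1     Running   0          5d
kube-apiserver-master            1/1     Running   0          5d
kube-controller-manager-master   1/1     Running   0          5d
kube-scheduler-master            1/1     Running   0          5d
```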

WORKER NODES

The worker nodes are the machines that actually run your containerized applications. The tasks of running, monitoring, and providing services to your applications are handled by the following components (a quick way to inspect the nodes is shown after this list):

  • Docker, rkt, or another container runtime, which actually runs your containers,
  • the Kubelet, which talks to the master and controls the containers on its node,
  • the Kube-Proxy, which proxies and load-balances network traffic between your application components.
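You can check on the worker nodes from the outside by asking the API server about them. A minimal sketch; node names, versions, and the trimmed set of columns are illustrative:

```shell
# A Ready node is one whose Kubelet is reporting in; -o wide also
# shows which container runtime each node runs (some columns omitted).
$ kubectl get nodes -o wide
NAME    STATUS   ROLES    VERSION   CONTAINER-RUNTIME
node1   Ready    <none>   v1.13.0   docker://18.6.1
node2   Ready    <none>   v1.13.0   docker://18.6.1
```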
Running an application on Kubernetes

In order to run an application, you need to package it up into one or more Docker images, push those images to a Docker registry, and then post a description of your app, in the form of an app descriptor, to the API server.
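These three steps map to commands like the following. This is a sketch only; the registry, image name, and descriptor file are hypothetical:

```shell
# 1. Package the app into an image, 2. push it to a registry,
# 3. post the app descriptor to the API server via kubectl.
$ docker build -t myregistry.example.com/kubia:v1 .
$ docker push myregistry.example.com/kubia:v1
$ kubectl apply -f app-descriptor.yaml
```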

DESCRIBING WHAT CONTAINERS TO RUN AND HOW THEY SHOULD BE ORGANIZED

The description includes information such as the container image or images that contain your application components and how those components are related to each other: which ones need to run co-located (together on the same node) and which don't. For each component, you can also specify how many copies you want running. Additionally, the description can specify which of those components provide a service to internal or external clients and should therefore be exposed through a stable IP address and made discoverable to the rest of the components.
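Such a description is usually written as a YAML manifest. Here's a minimal, hypothetical example covering the replica part: three copies of a single-container component, expressed as a Deployment:

```yaml
# A hypothetical app descriptor: a Deployment asking Kubernetes to
# keep three replicas of the kubia container running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: myregistry.example.com/kubia:v1
        ports:
        - containerPort: 8080
```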

HOW THE DESCRIPTION RESULTS IN A RUNNING CONTAINER

When the API server processes your app's description, the scheduler schedules the specified groups of containers onto the available worker nodes, based on the resources required by each group and the resources available on each node at that moment. The Kubelet on those nodes then instructs Docker to pull the necessary container images and run the containers.
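You can then watch where the scheduler placed each container group. The output below is illustrative (pod hashes and node names included):

```shell
# -o wide adds the NODE column, revealing the scheduler's decisions.
$ kubectl get pods -o wide
NAME                     READY   STATUS    NODE
kubia-7d98f5d8b4-9qxkz   1/1     Running   node1
kubia-7d98f5d8b4-kd2tx   1/1     Running   node2
kubia-7d98f5d8b4-vw6mr   1/1     Running   node1
```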

Figure: A basic overview of the Kubernetes architecture and an application running on top of it
In the figure above, the app descriptor lists four containers, grouped into three sets (these sets are called pods). The first two pods each contain only a single container, while the last one contains two, meaning both containers need to run co-located and shouldn't be fully isolated from each other. Next to each pod is a number representing the number of copies, or replicas, of that pod that need to run simultaneously. After you submit the descriptor to Kubernetes, it schedules the specified number of replicas of each pod onto the available worker nodes. The Kubelets on the nodes then instruct Docker to pull the container images from the image registry and run the containers.
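The two-container set in the figure corresponds to a pod manifest with two entries under containers. A hypothetical sketch; both containers always land on the same node and share the pod's network namespace:

```yaml
# A pod with two co-located containers, e.g. an app plus a sidecar
# (both image names are hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: myregistry.example.com/app:v1
  - name: log-collector
    image: myregistry.example.com/log-collector:v1
```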

KEEPING THE CONTAINERS RUNNING

Once the application is running, Kubernetes continuously makes sure that the actual state of the application always matches the description you provided. For example, if you specify that you always want five instances of a web server running, Kubernetes will always keep exactly five instances running. If one of those instances stops working properly, such as when its process crashes or stops responding, Kubernetes restarts it automatically. Similarly, if a whole worker node dies or becomes inaccessible, Kubernetes selects new nodes for all the containers that were running on that node and runs them on the newly selected nodes.
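You can see this reconciliation in action by killing a pod yourself. A sketch, assuming the hypothetical kubia Deployment from earlier with five declared replicas; pod names are illustrative:

```shell
# Delete one of the five replicas...
$ kubectl delete pod kubia-7d98f5d8b4-9qxkz
pod "kubia-7d98f5d8b4-9qxkz" deleted

# ...and Kubernetes immediately starts a replacement to get back
# to the declared count of five.
$ kubectl get pods
NAME                     READY   STATUS              AGE
kubia-7d98f5d8b4-x4fjn   0/1     ContainerCreating   2s
kubia-7d98f5d8b4-kd2tx   1/1     Running             10m
...
```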

SCALING THE NUMBER OF COPIES

While the application is running, you can request an increase or decrease in the number of copies, and Kubernetes will spin up additional ones or stop the superfluous ones, respectively. You don't even need to do that manually: you can tell Kubernetes to keep adjusting the number of instances automatically, based on real-time metrics such as CPU load, memory consumption, or queries per second.
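Both modes are one command away. A sketch against the hypothetical kubia Deployment; the replica bounds and CPU threshold are illustrative:

```shell
# Manually set the replica count...
$ kubectl scale deployment kubia --replicas=10

# ...or create a horizontal pod autoscaler that keeps average CPU
# utilization around 60%, within the given bounds.
$ kubectl autoscale deployment kubia --min=2 --max=10 --cpu-percent=60
```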

HITTING A MOVING TARGET

Kubernetes may move your containers around the cluster: the node they're running on might fail, or a container might be evicted from a node to make room for other containers. If a container provides a service to external clients or to other containers in the cluster, how can they use it properly when it keeps moving around? And when multiple containers spread across the cluster provide the same service, how do clients consume that service? To allow clients to easily find containers that provide a certain service, you tell Kubernetes which containers provide the same service, and Kubernetes exposes all of them at a single static IP address, which any other client container can look up through either environment variables or DNS. The Kube-Proxy makes sure requests to the service are load-balanced across all the containers that make up the service. Because the IP address of the service stays constant, clients can always connect to its containers, even as those containers are moved around the cluster by Kubernetes.
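In manifest form, this grouping is a Service that selects pods by label. A minimal sketch for the hypothetical kubia pods:

```yaml
# A Service giving all pods labeled app=kubia one stable IP and DNS
# name; kube-proxy load balances requests across them.
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  selector:
    app: kubia
  ports:
  - port: 80          # port clients connect to
    targetPort: 8080  # port the containers listen on
```

Clients connect to kubia:80 (or to the service's cluster IP taken from an environment variable) and never need to know which node the backing containers are currently on.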

The benefits of using Kubernetes

There are cases where the developer does care what kind of hardware the application runs on. If the nodes are heterogeneous, you will sometimes want certain apps to run on nodes with certain capabilities and other apps to run elsewhere. For example, one of your apps may perform much better on a system with SSDs instead of HDDs, while other apps perform the same regardless of disk type. In such cases, you obviously want some say in where your app is scheduled. But instead of selecting a specific node where your app should run, it's much simpler to instruct Kubernetes to select only among the nodes that have certain features, as in the sketch below.
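With Kubernetes, that instruction is a node label plus a nodeSelector. A hypothetical sketch, assuming the SSD-backed nodes were labeled beforehand:

```yaml
# Assumes the nodes were labeled first, e.g.:
#   kubectl label node node1 disktype=ssd
# The scheduler will then only consider nodes carrying that label.
apiVersion: v1
kind: Pod
metadata:
  name: io-heavy-app
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: myregistry.example.com/io-heavy-app:v1   # hypothetical image
```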

HEALTH CHECKING AND SELF-HEALING

Kubernetes monitors your app components and the nodes they are running on, and automatically reschedules them to other nodes in the event of a node failure. This frees the ops team from having to migrate app components manually and allows the team to immediately focus on fixing the node itself and returning it to the pool of available hardware resources.
If your infrastructure has enough spare resources to allow normal system operation even without the failed node, the ops team doesn't even need to react to the failure immediately, say, at 3 a.m. They can sleep tight and deal with the failed node during regular work hours.
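The same self-healing idea also works one level down, at the container itself. A hypothetical liveness probe tells the Kubelet to restart a container whose process is still alive but no longer responding:

```yaml
# The Kubelet restarts this container whenever GET /healthz stops
# returning a success status (path and timings are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - name: kubia
    image: myregistry.example.com/kubia:v1
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
```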
