Kubernetes pods

Pods and containers

A pod is a co-located group of containers and represents the basic building block in Kubernetes.
It is actually very common for pods to contain only a single container. The key thing about pods is that when a pod does contain multiple containers, all of them always run on a single worker node – a pod never spans multiple worker nodes, as shown in the following figure.

Figure: All containers of a pod run on the same node; a pod never spans two nodes

MULTIPLE CONTAINERS ARE BETTER THAN ONE CONTAINER WITH MULTIPLE PROCESSES

Imagine an app consisting of multiple processes that communicate either through IPC (Inter-Process Communication) or through locally stored files, which requires them to run on the same machine. Because, in Kubernetes, you always run processes in containers and each container is very much like an isolated machine, you may think it makes sense to run multiple processes in a single container, but you shouldn’t do that.

Containers are designed to run only a single process per container (unless the process itself spawns child processes). If you were to run multiple unrelated processes in a single container, it would be your responsibility to keep all those processes running, manage their logs, and so on. For example, you would have to include a mechanism for automatically restarting individual processes if they crash. Also, all those processes would log to the same standard output, so you’d have a hard time figuring out which process logged what.

Therefore, you need to run each process in its own container. That’s how Docker and Kubernetes are meant to be used.

THE PARTIAL ISOLATION BETWEEN CONTAINERS OF THE SAME POD

UTS (UNIX Time-sharing System) namespace (hostname and domain name)

The UTS namespace determines what hostname and domain name the process running inside that namespace will see.

Containers inside a pod share some resources, although not all (in other words, they aren’t fully isolated from one another). Kubernetes achieves this by configuring Docker to have all containers of a pod share the same set of Linux namespaces instead of each container having its own. Because all containers of a pod run under the same network and UTS namespaces (we’re talking about Linux namespaces here), they all share the same hostname and network interfaces. Similarly, all containers of a pod run under the same IPC namespace and are thus able to communicate through IPC. Ideally they would also share the same PID namespace; that isn’t the default, although newer Kubernetes versions let you opt in through the pod’s shareProcessNamespace field.

HOW CONTAINERS SHARE THE SAME IP AND PORT SPACE

One thing to stress here is that because containers in a pod run in the same network namespace, they share the same IP address and port space. This means processes running in containers of the same pod need to take care not to bind to the same port numbers or they’ll run into port conflicts. But this only concerns containers in the same pod. Containers of different pods can never run into port conflicts, since each pod has a separate port space. All the containers in a pod also share the same loopback network interface, so a container can communicate with other containers in the same pod through localhost.
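
The shared network namespace can be seen in a pod manifest like the following. This is a minimal sketch; the pod name, the images, and the polling command are illustrative, not taken from the text above.

```yaml
# Hypothetical two-container pod; names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx:1.25          # binds port 80 inside the shared network namespace
    ports:
    - containerPort: 80
  - name: helper
    image: busybox:1.36
    # Both containers share one network namespace, so this container can
    # reach the web server at localhost:80. It must NOT bind port 80
    # itself, or it would fail with an "address already in use" error.
    command: ["sh", "-c",
      "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```

If the helper container also tried to bind port 80, the pod would run into exactly the port conflict described above, because the two containers share a single port space.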

THE FLAT INTER-POD NETWORK

All pods in a Kubernetes cluster reside in a single flat, shared network address space, which means every pod can access every other pod at that pod’s IP address. In other words, there are no NAT (Network Address Translation) gateways between them. When two pods send network packets to each other, each sees the actual IP address of the other as the source IP in the packet.

Figure : Each pod gets a routable IP address and all other pods see the pod under that IP address
Consequently, communication between pods is always simple. It doesn’t matter whether two pods are scheduled onto a single worker node or onto different ones; in both cases the containers inside those pods can communicate with each other across the flat, NAT-less network, much like computers on a local area network (LAN), regardless of the actual inter-node network topology. Like a computer on a LAN, each pod gets its own IP address and is accessible from all other pods through this network established specifically for pods. This is usually achieved through an additional software-defined network layered on top of the actual network.

Pods are logical hosts and behave very much like physical hosts or VMs in the non-container world. Processes running in the same pod are like processes running on the same physical or virtual machine, except that each process is encapsulated in a container.

Organizing containers across pods properly

You should think of pods as separate machines, but where each one hosts only a certain app. Unlike the old days, when we used to cram all sorts of apps onto the same host, we don’t do that with pods. Since pods are relatively lightweight, you can have as many as you need without incurring almost any overhead. Instead of stuffing everything into a single pod, you should organize apps into multiple pods, where each one contains only tightly related components or processes.

SPLITTING MULTI-TIER APPS INTO MULTIPLE PODS

Although there’s nothing stopping you from running both the frontend server and the database in a single pod with two containers, it isn’t the most appropriate way. We’ve said that all containers of the same pod always run co-located, but do the web server and the database really need to run on the same machine? The answer is obviously no, so there’s nothing forcing us to put them into a single pod. But is it wrong to do so regardless? In a way, it is.
If both the frontend and backend are in the same pod, then both will always run on the same machine. If you have a two-node Kubernetes cluster and only this single pod, you will only ever be using a single worker node and not taking advantage of the computational resources (CPU and memory) you have at your disposal on the second node. Splitting the pod into two would allow Kubernetes to schedule the frontend to one node and the backend to the other node, thereby improving the utilization of your infrastructure.

SPLITTING INTO MULTIPLE PODS BECAUSE OF SCALING

Another reason why you shouldn’t put them both into a single pod is scaling. A pod is also the basic unit of scaling. Kubernetes can’t horizontally scale individual containers; instead, it scales whole pods. This means that if your pod consisted of a frontend and a backend container, when you scaled up the number of instances of the pod to, let’s say, two, you would end up with two frontend containers and two backend containers.

Usually, frontend components have completely different scaling requirements than backends, so we tend to scale them individually. Not to mention that backends such as databases are usually much harder to scale than (stateless) frontend web servers. So, if you need to scale a container individually, that’s a clear indication it needs to be deployed in a separate pod.

WHEN TO USE MULTIPLE CONTAINERS IN A POD

The main reason to put multiple containers into a single pod is when the application consists of one main process and one or more complementary processes, as shown in the figure below.

Figure: Pods should contain tightly coupled containers (usually a main container and containers that support the main one)
For example, the main container in a pod could be a web server that simply serves files from a certain file directory, while an additional container (a so-called sidecar container) periodically downloads content from an external source and stores it in the web server’s directory.
Other examples of sidecar containers include log rotators and collectors, data processors, communication adapters, and so on.
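
The web-server-plus-sidecar example above could be sketched as a pod manifest like this. The names, images, repository URL, and paths are all hypothetical; the point is the shared emptyDir volume that both containers mount.

```yaml
# Hypothetical main-plus-sidecar pod; images, paths, and the git URL
# are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: content              # scratch space shared by both containers
    emptyDir: {}
  containers:
  - name: web                  # main container: serves the files
    image: nginx:1.25
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content-puller       # sidecar: periodically refreshes the content
    image: alpine/git
    command: ["sh", "-c",
      "while true; do git clone --depth 1 https://example.com/site.git /tmp/site && cp -r /tmp/site/. /content/; rm -rf /tmp/site; sleep 60; done"]
    volumeMounts:
    - name: content
      mountPath: /content
```

The two containers also share the network namespace, but here the coupling happens through the shared volume: the sidecar writes files that the main container serves, which is why they belong in the same pod.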

When deciding whether to put two containers into a single pod or into two separate pods, you always need to ask yourself the following questions:

  • Do they need to be run together or can they run on different hosts?
  • Do they represent a single whole or are they independent components?
  • Must they be scaled together or individually?

Basically, you should always gravitate towards running containers in separate pods, unless there is a specific reason that requires them to be part of the same pod.

Figure: Don’t put multiple processes into the same container, and don’t put containers that don’t need to live together into a single pod
