Overview of Kubernetes


Overview

Setting up a full-fledged multi-node Kubernetes cluster is not a simple task, especially if you're not well versed in Linux and networking administration. A proper Kubernetes install spans multiple physical or virtual machines and requires the networking to be set up properly, so that all the containers running inside the Kubernetes cluster can connect to each other through the same flat networking space.

The simplest and quickest way to a fully functioning Kubernetes cluster is Minikube. Minikube is a tool that sets up a single-node cluster, which is great both for testing Kubernetes and for developing apps locally.
Although a single-node cluster can't demonstrate certain Kubernetes features related to managing apps on multiple nodes, it is enough for exploring the majority of the topics discussed here.
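
If you have Minikube installed, starting such a single-node cluster typically comes down to a single command (any extra flags, such as the VM driver, depend on your environment, so treat them as environment-specific):

# minikube start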

Pod

A pod is a group of one or more tightly related containers that always run together on the same worker node and in the same Linux namespace(s). Each pod is like a separate logical machine with its own IP, hostname, processes, etc., running a single application. The application can be a single process running in a single container, or a main application process plus additional supporting processes, each running in its own container. All the containers in a pod appear to be running on the same logical machine, whereas containers in other pods, even if they are running on the same worker node, appear to be running on a different one.

The figure below illustrates the relationship between containers, pods, and nodes. As you can see, each pod has its own IP and contains one or more containers, each running an application process. Pods are spread out across different worker nodes.

Figure: The relationship between containers, pods and physical worker nodes
We can't list individual containers, since they aren't standalone Kubernetes objects, but we can tell kubectl to list pods:

# kubectl run kubia --image=zhixingheyitian/kubia --port=8080 --generator=run/v1
# kubectl get pods
NAME                                                      READY   STATUS      RESTARTS   AGE
kubia-wzmcz                                               1/1     Running     0          12m

Right after creation, a pod's status may be Pending, with its single container shown as not ready (that's what 0/1 in the READY column would mean). The pod isn't running yet because the worker node it has been assigned to is still downloading the container image. Once the download finishes, the pod's container is created and the pod transitions to the Running state.

To see more information about the pod, you can also use the kubectl describe pod command, as we did earlier for one of the worker nodes:

# kubectl describe pod kubia-wzmcz

If the pod stays stuck in the Pending status, it may be that Kubernetes can't pull the image from the registry. If you're using your own image, make sure it is marked as public on Docker Hub. To verify the image can be pulled successfully, try pulling it manually with the docker pull command on another machine.
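
For example, using the image from the kubectl run command above:

# docker pull zhixingheyitian/kubia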

UNDERSTANDING WHAT HAPPENED BEHIND THE SCENES

To help you visualize what we just did, look at the figure below. It shows both steps we had to perform to get a container image running inside Kubernetes.

  • First, we built the image and pushed it to Docker Hub. This was necessary because building the image on our local machine only makes it available there, and we needed to make it accessible to the Docker daemons running on our worker nodes (see the sketch after this list).
  • When we ran the kubectl command, it created a new ReplicationController object in the cluster by performing a REST HTTP request to the Kubernetes API server. The ReplicationController then created a new pod, which the Scheduler assigned to one of the worker nodes. The Kubelet on that node saw that the pod was scheduled to it and instructed Docker to pull the specified image from the registry, because the image wasn't available locally. After downloading the image, Docker created and ran the container.
  • The other two nodes are displayed just to show context. They didn't play any role in the whole process, because the pod wasn't scheduled to them.
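
As a rough sketch of the build-and-push step from the first bullet, assuming a Dockerfile in the current directory and that zhixingheyitian is your Docker Hub account (the account name is taken from the image used above):

# docker build -t zhixingheyitian/kubia .
# docker push zhixingheyitian/kubia

With the pod running, the next step is to expose it through a service: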
# kubectl expose rc kubia --type=LoadBalancer --name kubia-http
service/kubia-http exposed

Note that we used the abbreviation "rc" instead of "replicationcontroller"; kubectl accepts both forms.

Minikube doesn’t support LoadBalancer services yet, so the service will never get an external IP.
# kubectl get services
NAME                                                          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
kubernetes                                                    ClusterIP      10.96.0.1       <none>        443/TCP             9d
kubia-http                                                    LoadBalancer   10.98.198.100   <pending>     8080:32099/TCP      12m
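
As an aside, depending on your Minikube version, running minikube tunnel in a separate terminal can assign an external IP to LoadBalancer services (treat its availability as an assumption about your setup):

# minikube tunnel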

When using Minikube, you can get the IP and port through which you can access the service by running:

# minikube service kubia-http
-   Opening kubernetes service default/kubia-http in default browser...
START /usr/bin/firefox "http://192.168.99.108:32099"
Error: GDK_BACKEND does not match available displays
xdg-open: no method available for opening 'http://192.168.99.108:32099'
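
If no browser can be opened (as the errors above show for this headless machine), the --url flag prints just the service URL, which you can then curl:

# minikube service kubia-http --url
http://192.168.99.108:32099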

UNDERSTANDING THE POD AND ITS CONTAINER

The main and most important component in our system is the pod. It contains only a single container, but generally a pod can contain as many containers as you want. Inside the container is our Node.js process, which is bound to port 8080 and is waiting for HTTP requests. The pod has its own unique private IP address and hostname.

UNDERSTANDING THE ROLE OF THE REPLICATION CONTROLLER

The next component is the “kubia” replication controller. It makes sure there’s always exactly one instance of our pod running. Generally, replication controllers are used to replicate pods (i.e. create multiple copies of a pod) and keep them running. In our case, we didn’t specify how many pod replicas we want, so the replication controller created just a single one. If our pod were to disappear for any reason, the replication controller would create a new pod to replace the missing one.
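
You can inspect the ReplicationController object that kubectl run created with kubectl get rc kubia -o yaml. Stripped of status and defaulted fields, its core might look roughly like this sketch (the run=kubia label is what kubectl run typically applies; treat the exact values as assumptions):

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 1            # keep exactly this many pod instances running
  selector:
    run: kubia           # pods with this label are managed by this controller
  template:              # template used to create new pods
    metadata:
      labels:
        run: kubia
    spec:
      containers:
      - name: kubia
        image: zhixingheyitian/kubia
        ports:
        - containerPort: 8080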

Horizontally scaling the application

Our pod is managed by a replication controller. Let’s see it with the kubectl get command:

#  kubectl get replicationcontrollers
NAME    DESIRED   CURRENT   READY   AGE
kubia   1         1         1       79m

To scale up the number of replicas:

# kubectl scale rc kubia --replicas=3
replicationcontroller/kubia scaled
# kubectl get pods
NAME                                                      READY   STATUS              RESTARTS   AGE
kubia-p465w                                               0/1     ContainerCreating   0          11s
kubia-tqrvp                                               0/1     ContainerCreating   0          11s
kubia-wzmcz                                               1/1     Running             0          117m
# kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       120m

Requests are hitting different pods randomly. This is what services in Kubernetes do when more than one pod instance backs them: they act as a load balancer standing in front of multiple pods. When there's only one pod, the service simply provides a static address for that single pod. Whether a service is backed by a single pod or a group of pods, those pods come and go as they are moved around the cluster, which means their IP addresses change, but the service is always there at the same address. This makes it easy for clients to connect to the pods, regardless of how many there are and how often they change location.

# curl http://192.168.99.108:32099 --noproxy "*"
You've hit kubia-tqrvp
# curl http://192.168.99.108:32099 --noproxy "*"
You've hit kubia-wzmcz
# curl http://192.168.99.108:32099 --noproxy "*"
You've hit kubia-tqrvp
# curl http://192.168.99.108:32099 --noproxy "*"
You've hit kubia-wzmcz
# curl http://192.168.99.108:32099 --noproxy "*"
You've hit kubia-tqrvp
# curl http://192.168.99.108:32099 --noproxy "*"
You've hit kubia-p465w
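
To confirm which pod IPs the service is distributing requests across, you can list the service's endpoints; the output will contain the pods' IP:port pairs:

# kubectl get endpoints kubia-http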

Figure: Three instances of a pod managed by the same replication controller and exposed through a single service IP and port

You can request additional columns to be displayed by using the -o wide option. When listing pods, this option shows each pod's IP and the node the pod is running on:

# kubectl get pods -o wide
NAME                                                      READY   STATUS      RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
kubia-p465w                                               1/1     Running     0          14m    172.17.0.10   minikube   <none>           <none>
kubia-tqrvp                                               1/1     Running     0          14m    172.17.0.9    minikube   <none>           <none>
kubia-wzmcz                                               1/1     Running     0          132m   172.17.0.7    minikube   <none>           <none>
