Advanced Pods in Practice (Part 1)

1 Labels (label)

1.1 What Is a Label

A label is simply a key/value pair attached to an object, such as a pod. Labels are meant to capture an object's distinguishing characteristics, so that you can tell at a glance what a pod is for, and they can be used to group related objects (by version, service type, and so on). Labels can be defined when an object is created and can also be added or changed at any time afterwards. An object can carry multiple labels, but each key must be unique on that object. Once labels are in place, they make it easy to manage resources in groups: after labeling pods, for example, you can use labels to list or delete exactly the pods you want.
In Kubernetes, most resource types can be labeled.
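
The pod-first.yaml file is applied in the next section, but its contents are not shown in this article. A minimal sketch of what such a manifest could look like, consistent with the app=tomcat label that shows up later (the container name and image here are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-test
  namespace: test
  labels:
    app: tomcat               #label defined at creation time
spec:
  containers:
  - name: tomcat              #assumed container name
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent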

1.2 Labeling Pod Resources

[root@master1 pod]# kubectl apply -f pod-first.yaml 
pod/tomcat-test created
[root@master1 pod]# kubectl get pods -n test                 
NAME          READY   STATUS    RESTARTS   AGE
tomcat-test   1/1     Running   0          6s
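#The pod lives in the test namespace; without -n test, kubectl looks in the default namespace and fails: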
[root@master1 pod]# kubectl label pods tomcat-test release=v1
Error from server (NotFound): pods "tomcat-test" not found
[root@master1 pod]# kubectl label pods tomcat-test release=v1 -n test
pod/tomcat-test labeled
[root@master1 pod]# kubectl get pods tomcat-test -n test --show-labels  
NAME          READY   STATUS    RESTARTS   AGE    LABELS
tomcat-test   1/1     Running   0          104s   app=tomcat,release=v1
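
A label can also be changed or removed after creation. Two illustrative commands (the v2 value here is just an example):

#Overwrite an existing label value (kubectl requires --overwrite for this)
kubectl label pods tomcat-test release=v2 --overwrite -n test
#Remove a label by appending - to its key
kubectl label pods tomcat-test release- -n test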

1.3 Viewing Resource Labels

#View the labels of all pod resources in the default namespace
[root@master1 pod]# kubectl get pods --show-labels           
NAME          READY   STATUS    RESTARTS   AGE   LABELS
tomcat-test   1/1     Running   0          76s   app=tomcat,release=v2

#View all labels of the specified pod in the test namespace
[root@master1 pod]# kubectl get pods tomcat-test -n test --show-labels  
NAME          READY   STATUS    RESTARTS   AGE    LABELS
tomcat-test   1/1     Running   0          104s   app=tomcat,release=v1

#List pods in the default namespace that have a label with key release, without showing the labels
[root@master1 pod]# kubectl get pods -l release         
NAME          READY   STATUS    RESTARTS   AGE
tomcat-test   1/1     Running   0          8m41s

#List pods in the default namespace with an extra column showing each pod's release label value (-L adds a column, it does not filter)
[root@master1 pod]# kubectl get pods -L release
NAME          READY   STATUS    RESTARTS   AGE   RELEASE
tomcat-test   1/1     Running   0          10m   v2

#View the labels of all pods across all namespaces
[root@master1 pod]# kubectl get pods --all-namespaces --show-labels
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE   LABELS
default       tomcat-test                                1/1     Running   0             15m   app=tomcat,release=v2
kube-system   calico-kube-controllers-6744f6b6d5-62rhh   1/1     Running   8 (28m ago)   24d   k8s-app=calico-kube-controllers,pod-template-hash=6744f6b6d5
kube-system   calico-node-2md6b                          1/1     Running   7 (28m ago)   24d   controller-revision-hash=f646f8d97,k8s-app=calico-node,pod-template-generation=1
kube-system   calico-node-f96dc                          1/1     Running   8 (30m ago)   24d   controller-revision-hash=f646f8d97,k8s-app=calico-node,pod-template-generation=1
kube-system   coredns-7f8cbcb969-b9jb8                   1/1     Running   7 (28m ago)   24d   k8s-app=kube-dns,pod-template-hash=7f8cbcb969
kube-system   coredns-7f8cbcb969-ctwnf                   1/1     Running   7 (28m ago)   24d   k8s-app=kube-dns,pod-template-hash=7f8cbcb969
kube-system   etcd-master1                               1/1     Running   9 (30m ago)   24d   component=etcd,tier=control-plane
kube-system   kube-apiserver-master1                     1/1     Running   9 (30m ago)   24d   component=kube-apiserver,tier=control-plane
kube-system   kube-controller-manager-master1            1/1     Running   9 (30m ago)   24d   component=kube-controller-manager,tier=control-plane
kube-system   kube-proxy-4hr4j                           1/1     Running   7 (28m ago)   24d   controller-revision-hash=5cc4b8856c,k8s-app=kube-proxy,pod-template-generation=1
kube-system   kube-proxy-4kzqg                           1/1     Running   8 (30m ago)   24d   controller-revision-hash=5cc4b8856c,k8s-app=kube-proxy,pod-template-generation=1
kube-system   kube-scheduler-master1                     1/1     Running   9 (30m ago)   24d   component=kube-scheduler,tier=control-plane
test          tomcat-test                                1/1     Running   0             22m   app=tomcat,release=v1
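
Label selectors support both equality-based and set-based matching; a few illustrative queries against the pods above:

#Equality-based: pods whose release label equals v1
kubectl get pods -l release=v1 -n test
#Multiple requirements, comma-separated (logical AND)
kubectl get pods -l app=tomcat,release=v1 -n test
#Set-based: pods whose release value is within a given set
kubectl get pods -l 'release in (v1,v2)' --all-namespaces
#Select by label when deleting, instead of by name
kubectl delete pods -l release=v1 -n test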

2 Node Selection with nodeName and nodeSelector

When we create a pod, the scheduler decides where it runs; by default it may be placed on any suitable worker node. If we want the pod to land on a specific node, or on nodes that share certain characteristics, we can set the nodeName or nodeSelector field in the pod spec.

2.1 Using nodeName to Specify Which Node a Pod Runs On

#Upload tomcat.tar.gz to node1 and import it manually:
#Upload busybox.tar.gz to node1 and import it manually:
[root@node1 ~]# ctr -n=k8s.io images import tomcat.tar.gz 
unpacking docker.io/library/tomcat:8.5-jre8-alpine (sha256:463a0b1de051bff2208f81a86bdf4e7004eb68c0edfcc658f2e2f367aab5e342)...done
[root@node1 ~]# ctr -n=k8s.io images import busybox.tar.gz 
unpacking docker.io/library/busybox:latest (sha256:2d86744fc4e303fbf4e71c67b89ee77cc6c60e9315cbd2c27f50e85b2d866450)...done

[root@master1 pod]# vim pod-node.yaml
[root@master1 pod]# cat pod-node.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeName: master1
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8088
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

[root@master1 pod]# kubectl apply -f pod-node.yaml 
pod/demo-pod created
[root@master1 pod]# kubectl get pods
NAME          READY   STATUS              RESTARTS   AGE
demo-pod      0/2     ContainerCreating   0          7s
tomcat-test   1/1     Running             0          17m

[root@master1 pod]# kubectl describe pod demo-pod  #check the pod's status
Name:             demo-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             master1/192.168.109.131
Start Time:       Tue, 09 Apr 2024 15:46:10 +0800
Labels:           app=myapp
                  env=dev
Annotations:      cni.projectcalico.org/podIP: 10.244.137.65/32
                  cni.projectcalico.org/podIPs: 10.244.137.65/32
Status:           Pending
IP:               
IPs:              <none>
Containers:
  tomcat-pod-java:
    Container ID:   
    Image:          tomcat:8.5-jre8-alpine
    Image ID:       
    Port:           8088/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rknvz (ro)
  busybox:
    Container ID:  
    Image:         busybox:latest
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      sleep 3600
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rknvz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-rknvz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  30s   kubelet  Pulling image "tomcat:8.5-jre8-alpine"

#Check which node the pod was scheduled onto
[root@master1 pod]# kubectl get pods             
NAME          READY   STATUS    RESTARTS   AGE
demo-pod      2/2     Running   0          2m22s
tomcat-test   1/1     Running   0          19m
[root@master1 pod]# kubectl get pods -owide
NAME          READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
demo-pod      2/2     Running   0          2m36s   10.244.137.65    master1   <none>           <none>
tomcat-test   1/1     Running   0          19m     10.244.166.172   node1     <none>           <none>
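
Despite the control-plane taint on master1 (see the FailedScheduling event in section 2.2), the pod runs there: with nodeName there is no scheduling step at which NoSchedule taints would be evaluated. To confirm where a pod is bound without printing the full table, a jsonpath query also works:

#Print only the node the pod is bound to
kubectl get pod demo-pod -o jsonpath='{.spec.nodeName}{"\n"}'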

2.2 Using nodeSelector to Specify Which Node a Pod Runs On

nodeSelector schedules a pod onto nodes that carry the specified labels.

#Label the node: give node1 a disk=ceph label
[root@master1 pod]# kubectl label node node1 disk=ceph
node/node1 labeled
[root@master1 pod]# kubectl get nodes --show-labels
NAME      STATUS   ROLES           AGE   VERSION   LABELS
master1   Ready    control-plane   24d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1     Ready    work            24d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ceph,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,node-role.kubernetes.io/work=work
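
Node labels can be queried with the same -l selector syntax used for pods, which is handy for checking which nodes a selector would match:

#List only the nodes carrying disk=ceph
kubectl get nodes -l disk=ceph
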
#In the pod definition, require scheduling onto a node labeled disk=ceph
[root@master1 pod]# vim pod-1.yaml
[root@master1 pod]# cat pod-1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-1
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeSelector: 
    disk: ceph
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8888
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
[root@master1 pod]# kubectl apply -f pod-1.yaml 
pod/demo-pod-1 created
[root@master1 pod]# kubectl get pods -owide     
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
demo-pod      2/2     Running   0          29m   10.244.137.65    master1   <none>           <none>
demo-pod-1    1/1     Running   0          4s    10.244.166.173   node1     <none>           <none>
tomcat-test   1/1     Running   0          47m   10.244.166.172   node1     <none>           <none>

Now delete the node's label and recreate demo-pod-1. The pod is created, but it cannot be scheduled: with the label gone, no node matches its selector, so it stays Pending.

[root@master1 pod]# kubectl delete pod demo-pod-1
pod "demo-pod-1" deleted
[root@master1 pod]# kubectl get pods -owide      
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
demo-pod      2/2     Running   0          33m   10.244.137.65    master1   <none>           <none>
tomcat-test   1/1     Running   0          50m   10.244.166.172   node1     <none>           <none>
[root@master1 pod]# vim pod-1.yaml 
[root@master1 pod]# kubectl label node node1 disk-
node/node1 unlabeled
[root@master1 pod]# kubectl apply -f pod-1.yaml   
pod/demo-pod-1 created
[root@master1 pod]# kubectl get pods -owide       
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
demo-pod      2/2     Running   0          33m   10.244.137.65    master1   <none>           <none>
demo-pod-1    0/1     Pending   0          5s    <none>           <none>    <none>           <none>
tomcat-test   1/1     Running   0          50m   10.244.166.172   node1     <none>           <none>
[root@master1 pod]# kubectl describe pod demo-pod-1
Name:             demo-pod-1
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=myapp
                  env=dev
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  tomcat-pod-java:
    Image:        tomcat:8.5-jre8-alpine
    Port:         8888/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tjk5r (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-tjk5r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              disk=ceph
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  21s   default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

FailedScheduling means the scheduler could not place the pod on any node.
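
The pod does not need to be recreated to recover: the scheduler keeps retrying Pending pods, so restoring the node label is enough for demo-pod-1 to be scheduled on the next attempt:

#Re-add the label; the Pending pod will then be scheduled
kubectl label node node1 disk=ceph
kubectl get pods -owide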
