Kubernetes CKA Certified Operations Engineer Notes – Kubernetes Scheduling

1. Workflow for Creating a Pod

Kubernetes uses a controller architecture built on the list-watch mechanism, which decouples interaction between components.

kubectl run nginx --image=nginx
1. kubectl submits the pod-creation request to the apiserver.
2. The apiserver writes the request data to etcd.
3. The scheduler, watching the apiserver, learns that a new pod needs placing, and picks a suitable node using its scheduling algorithm.
4. The scheduler records the chosen node on the pod, effectively binding it, e.g. to k8s-node1.
5. The apiserver receives the scheduling result and writes it to etcd.
6. The kubelet on k8s-node1 sees the event and fetches the pod's spec from the apiserver.
7. The kubelet calls the Docker API to create the containers the pod needs.
8. The kubelet reports the pod's status back to the apiserver.
9. The apiserver writes the status to etcd.
kubectl get pods
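For reference, the imperative command above corresponds to submitting roughly this minimal Pod manifest to the apiserver (a sketch; kubectl run also fills in further defaults server-side):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx       # kubectl run adds this label automatically
spec:
  containers:
  - name: nginx
    image: nginx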

[root@k8s-master ~]# kubectl run --help
Create and run a particular image in a pod.

Examples:
  # Start a nginx pod.
  kubectl run nginx --image=nginx
  
  # Start a hazelcast pod and let the container expose port 5701.
  kubectl run hazelcast --image=hazelcast/hazelcast --port=5701
  
  # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and
"POD_NAMESPACE=default" in the container.
  kubectl run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster"
--env="POD_NAMESPACE=default"
  
  # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the
container.
  kubectl run hazelcast --image=hazelcast/hazelcast
--labels="app=hazelcast,env=prod"
  
  # Dry run. Print the corresponding API objects without creating them.
  kubectl run nginx --image=nginx --dry-run=client
  
  # Start a nginx pod, but overload the spec with a partial set of values parsed
from JSON.
  kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": {
... } }'
  
  # Start a busybox pod and keep it in the foreground, don't restart it if it
exits.
  kubectl run -i -t busybox --image=busybox --restart=Never
  
  # Start the nginx pod using the default command, but use custom arguments
(arg1 .. argN) for that command.
  kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>
  
  # Start the nginx pod using a different command and custom arguments.
  kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>

Options:
      --allow-missing-template-keys=true: If true, ignore any errors in
templates when a field or map key is missing in the template. Only applies to
golang and jsonpath output formats.
      --attach=false: If true, wait for the Pod to start running, and then
attach to the Pod as if 'kubectl attach ...' were called.  Default false, unless
'-i/--stdin' is set, in which case the default is true. With '--restart=Never'
the exit code of the container process is returned.
      --cascade=true: If true, cascade the deletion of the resources managed by
this resource (e.g. Pods created by a ReplicationController).  Default true.
      --command=false: If true and extra arguments are present, use them as the
'command' field in the container, rather than the 'args' field which is the
default.
      --dry-run='none': Must be "none", "server", or "client". If client
strategy, only print the object that would be sent, without sending it. If
server strategy, submit server-side request without persisting the resource.
      --env=[]: Environment variables to set in the container.
      --expose=false: If true, service is created for the container(s) which are
run
      --field-manager='kubectl-run': Name of the manager used to track field
ownership.
  -f, --filename=[]: to use to replace the resource.
      --force=false: If true, immediately remove resources from API and bypass
graceful deletion. Note that immediate deletion of some resources may result in
inconsistency or data loss and requires confirmation.
      --grace-period=-1: Period of time in seconds given to the resource to
terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can
only be set to 0 when --force is true (force deletion).
      --hostport=-1: The host port mapping for the container port. To
demonstrate a single-machine container.
      --image='': The image for the container to run.
      --image-pull-policy='': The image pull policy for the container. If left
empty, this value will not be specified by the client and defaulted by the
server
  -k, --kustomize='': Process a kustomization directory. This flag can't be used
together with -f or -R.
  -l, --labels='': Comma separated labels to apply to the pod(s). Will override
previous values.
      --leave-stdin-open=false: If the pod is started in interactive mode or
with stdin, leave stdin open after the first attach completes. By default, stdin
will be closed after the first attach completes.
      --limits='': The resource requirement limits for this container.  For
example, 'cpu=200m,memory=512Mi'.  Note that server side components may assign
limits depending on the server configuration, such as limit ranges.
  -o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
      --overrides='': An inline JSON override for the generated object. If this
is non-empty, it is used to override the generated object. Requires that the
object supply a valid apiVersion field.
      --pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher
than zero) to wait until at least one pod is running
      --port='': The port that this container exposes.
      --privileged=false: If true, run the container in privileged mode.
      --quiet=false: If true, suppress prompt messages.
      --record=false: Record current kubectl command in the resource annotation.
If set to false, do not record the command. If set to true, record the command.
If not set, default to updating the existing annotation value only if one
already exists.
  -R, --recursive=false: Process the directory used in -f, --filename
recursively. Useful when you want to manage related manifests organized within
the same directory.
      --requests='': The resource requirement requests for this container.  For
example, 'cpu=100m,memory=256Mi'.  Note that server side components may assign
requests depending on the server configuration, such as limit ranges.
      --restart='Always': The restart policy for this Pod.  Legal values
[Always, OnFailure, Never].
      --rm=false: If true, delete resources created in this command for attached
containers.
      --save-config=false: If true, the configuration of current object will be
saved in its annotation. Otherwise, the annotation will be unchanged. This flag
is useful when you want to perform kubectl apply on this object in the future.
      --serviceaccount='': Service account to set in the pod spec.
  -i, --stdin=false: Keep stdin open on the container(s) in the pod, even if
nothing is attached.
      --template='': Template string or path to template file to use when
-o=go-template, -o=go-template-file. The template format is golang templates
[http://golang.org/pkg/text/template/#pkg-overview].
      --timeout=0s: The length of time to wait before giving up on a delete,
zero means determine a timeout from the size of the object
  -t, --tty=false: Allocated a TTY for each container in the pod.
      --wait=false: If true, wait for resources to be gone before returning.
This waits for finalizers.

Usage:
  kubectl run NAME --image=image [--env="key=value"] [--port=port]
[--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND]
[args...] [options]

Use "kubectl options" for a list of global command-line options (applies to all
commands).
[root@k8s-master ~]# kubectl run nginx --image=nginx
pod/nginx created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               0/1     ContainerCreating   0          8s
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          78s
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
web                1/1     1            1           22d
[root@k8s-master ~]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         k8s-node2/10.0.0.63
Start Time:   Tue, 14 Dec 2021 22:24:34 +0800
Labels:       run=nginx
Annotations:  cni.projectcalico.org/podIP: 10.244.169.191/32
              cni.projectcalico.org/podIPs: 10.244.169.191/32
Status:       Running
IP:           10.244.169.191
IPs:
  IP:  10.244.169.191
Containers:
  nginx:
    Container ID:   docker://185b057460d60afd4965e7f8ae79e715cdca49c4dacd40eabd5a1c0d2c111dcf
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 14 Dec 2021 22:25:49 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  2m13s  default-scheduler   Successfully assigned default/nginx to k8s-node2
  Normal  Pulling    2m11s  kubelet, k8s-node2  Pulling image "nginx"
  Normal  Pulled     58s    kubelet, k8s-node2  Successfully pulled image "nginx" in 1m12.991781419s
  Normal  Created    58s    kubelet, k8s-node2  Created container nginx
  Normal  Started    58s    kubelet, k8s-node2  Started container nginx

2. Pod Attributes That Affect Scheduling

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
...
  containers:
  - image: lizhenliang/java-demo
    name: java-demo
    imagePullPolicy: Always
    livenessProbe:
      initialDelaySeconds: 30
      periodSeconds: 20
      tcpSocket:
        port: 8080
    resources: {}        # basis for resource-based scheduling
  restartPolicy: Always
  schedulerName: default-scheduler     # which scheduler places this Pod
  nodeName: ""
  nodeSelector: {}
  affinity: {}
  tolerations: []

3. The Effect of Resource Limits on Pod Scheduling

Container resource limits (the upper bound on what a container may use):

  • resources.limits.cpu
  • resources.limits.memory

Container resource requests (the minimum a container needs; the scheduler allocates against these when placing the Pod):

  • resources.requests.cpu
  • resources.requests.memory
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

K8s uses the Request values to find a Node with enough unallocated resources to schedule the Pod onto.
CPU unit: either millicores (m) or a decimal number, e.g. 0.5 = 500m, 1 = 1000m.
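Both notations may appear in a manifest; a sketch (hypothetical pod and container names) showing two equivalent ways to request half a CPU core:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-units-demo   # hypothetical name, for illustration only
spec:
  containers:
  - name: a
    image: nginx
    resources:
      requests:
        cpu: "500m"      # 500 millicores
  - name: b
    image: nginx
    resources:
      requests:
        cpu: "0.5"       # the same amount, written as a decimal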

[root@k8s-master ~]# cat dep.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web3
    image: nginx
    resources:
     requests:
      memory: "64Mi"
      cpu: "250m"
     limits:
      memory: "128Mi"
      cpu: "500m"
[root@k8s-master ~]# kubectl apply -f dep.yaml 
pod/web created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          61m
web                                 0/1     ContainerCreating   0          12s
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          61m
web                                 1/1     Running   0          20s
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl describe pod web
Name:         web
Namespace:    default
Priority:     0
Node:         k8s-node2/10.0.0.63
Start Time:   Tue, 14 Dec 2021 23:25:55 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 10.244.169.132/32
              cni.projectcalico.org/podIPs: 10.244.169.132/32
Status:       Running
IP:           10.244.169.132
IPs:
  IP:  10.244.169.132
Containers:
  web3:
    Container ID:   docker://660aea8148685226b80d1f0e1ba7705919704d80c11d3209f9f3ae022cdee65b
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 14 Dec 2021 23:26:12 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:        250m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-8grtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8grtj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  44s   default-scheduler   Successfully assigned default/web to k8s-node2
  Normal  Pulling    43s   kubelet, k8s-node2  Pulling image "nginx"
  Normal  Pulled     27s   kubelet, k8s-node2  Successfully pulled image "nginx" in 15.795143763s
  Normal  Created    27s   kubelet, k8s-node2  Created container web3
  Normal  Started    27s   kubelet, k8s-node2  Started container web3
[root@k8s-master ~]# kubectl describe node l8s-node2
Error from server (NotFound): nodes "l8s-node2" not found
[root@k8s-master ~]# kubectl describe node k8s-node2
Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node2
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 172.16.1.63/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.169.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 21 Nov 2021 23:23:27 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-node2
  AcquireTime:     <unset>
  RenewTime:       Tue, 14 Dec 2021 23:26:58 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 14 Dec 2021 22:24:09 +0800   Tue, 14 Dec 2021 22:24:09 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Tue, 14 Dec 2021 23:26:44 +0800   Sun, 21 Nov 2021 23:23:27 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 14 Dec 2021 23:26:44 +0800   Sun, 21 Nov 2021 23:23:27 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 14 Dec 2021 23:26:44 +0800   Sun, 21 Nov 2021 23:23:27 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 14 Dec 2021 23:26:44 +0800   Wed, 01 Dec 2021 10:48:31 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.0.63
  Hostname:    k8s-node2
Capacity:
  cpu:                2
  ephemeral-storage:  30185064Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1863020Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  27818554937
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1760620Ki
  pods:               110
System Info:
  Machine ID:                 c20304a03ec54a0fa8aab6469d0a16dc
  System UUID:                46874D56-AB2E-7867-1BD5-C67713201686
  Boot ID:                    593535db-8a33-4979-8ff6-11aa4e524832
  Kernel Version:             3.10.0-1160.45.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.11
  Kubelet Version:            v1.19.0
  Kube-Proxy Version:         v1.19.0
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (12 in total)
  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
  default                     my-dep-5f8dfc8c78-j9fqp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  default                     nginx                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62m
  default                     nginx-deployment-7f78c49b8f-ftzb7             500m (25%)    0 (0%)      0 (0%)           0 (0%)         4d17h
  default                     web                                           250m (12%)    500m (25%)  64Mi (3%)        128Mi (7%)     68s
  default                     web2                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15d
  kube-system                 calico-node-9r6zd                             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                 coredns-6d56c8448f-gcgrh                      100m (5%)     0 (0%)      70Mi (4%)        170Mi (9%)     23d
  kube-system                 kube-proxy-5qpgc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23d
  kubernetes-dashboard        dashboard-metrics-scraper-7b59f7d4df-jxb4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  kubernetes-dashboard        kubernetes-dashboard-5dbf55bd9d-zpr7t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  test                        my-dep-5f8dfc8c78-58sdk                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  test                        my-dep-5f8dfc8c78-965w7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (55%)  500m (25%)
  memory             134Mi (7%)   298Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>

4. nodeSelector & nodeAffinity

nodeSelector: schedules a Pod onto Nodes whose labels match. If no node carries a matching label, scheduling fails and the Pod stays Pending.
Purpose:

  • Exact match against node labels
  • Pin a Pod to specific nodes

Label a node:
kubectl label nodes [node] key=value
Example: kubectl label nodes k8s-node1 disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - name: nginx
    image: nginx:1.19
[root@k8s-master ~]# kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
[root@k8s-master ~]# vi pod.yaml 
[root@k8s-master ~]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: pod-ssd
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f pod.yaml 
pod/pod-ssd created
[root@k8s-master ~]# kubectl get pod
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h10m
pod-ssd                             0/1     ContainerCreating   0          10s
web                                 1/1     Running             0          6h9m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          7h10m   10.244.169.191   k8s-node2   <none>           <none>
pod-ssd                             1/1     Running   0          18s     10.244.169.133   k8s-node2   <none>           <none>
web                                 1/1     Running   0          6h9m    10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running   4          15d     10.244.169.187   k8s-node2   <none>           <none>

[root@k8s-master ~]# kubectl label node k8s-node2 disktype-
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
k8s-master   Ready    master   23d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    Ready    <none>   23d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>   23d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]# vi pod.yaml 
[root@k8s-master ~]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: pod-ssd2
spec:
  nodeSelector:
    disktype: "ssd"
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f pod.yaml 
pod/pod-ssd2 created
[root@k8s-master ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h13m
pod-ssd                             1/1     Running   0          3m16s
pod-ssd2                            0/1     Pending   0          6s
web                                 1/1     Running   0          6h12m
web2                                1/1     Running   4          15d

nodeAffinity: node affinity is similar to nodeSelector — it constrains which nodes a Pod may be scheduled to, based on node labels.
Compared with nodeSelector:

  • Richer matching logic, not just exact string equality
  • Rules can be hard or soft, rather than only a hard requirement
    • Hard (required): must be satisfied
    • Soft (preferred): best-effort, not guaranteed

Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values:
            - nvidia-tesla
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: group
            operator: In
            values:
            - ai
  containers:
  - name: web
    image: nginx
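The other operators use the same matchExpressions shape. For instance, Exists takes no values list at all; a minimal sketch (hypothetical pod name, reusing the disktype label from this section):

apiVersion: v1
kind: Pod
metadata:
  name: affinity-exists-demo   # hypothetical name, for illustration only
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: Exists   # matches any node carrying a disktype label, whatever its value
  containers:
  - name: nginx
    image: nginx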
[root@k8s-master ~]# vi nodeaffinity.yaml
[root@k8s-master ~]# cat nodeaffinity.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd            
  containers:
  - name: with-node-affinity
    image: nginx
[root@k8s-master ~]# kubectl apply -f nodeaffinity.yaml 
pod/nginx1 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h25m
nginx1                              0/1     Pending   0          15s
pod-ssd                             1/1     Running   0          14m
pod-ssd2                            0/1     Pending   0          11m
web                                 1/1     Running   0          6h23m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# vi nodeaffinity2.yaml 
[root@k8s-master ~]# cat nodeaffinity2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd          
  containers:
  - name: with-node-affinity
    image: nginx
[root@k8s-master ~]# kubectl apply -f nodeaffinity2.yaml 
pod/nginx2 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h27m
nginx1                              0/1     Pending             0          2m28s
nginx2                              0/1     ContainerCreating   0          3s
pod-ssd                             1/1     Running             0          17m
pod-ssd2                            0/1     Pending             0          13m
web                                 1/1     Running             0          6h26m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl label node k8s-node2 disktype=ssd
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h30m
nginx1                              0/1     ContainerCreating   0          5m11s
nginx2                              1/1     Running             0          2m46s
pod-ssd                             1/1     Running             0          19m
pod-ssd2                            0/1     ContainerCreating   0          16m
web                                 1/1     Running             0          6h28m
web2                                1/1     Running             4          15d

5. Taints & Tolerations

Taints: keep Pods off particular Nodes
Use cases:

  • Dedicated nodes, e.g. nodes fitted with special hardware
  • Taint-based eviction

Add a taint:
kubectl taint node [node] key=value:[effect]
where [effect] is one of:

  • NoSchedule: Pods will not be scheduled here.
  • PreferNoSchedule: the scheduler tries to avoid this node, but may still use it.
  • NoExecute: new Pods are not scheduled here, and Pods already running on the Node are evicted.

Remove a taint:
kubectl taint node [node] key:[effect]-

[root@k8s-master ~]# kubectl taint node k8s-node2 disktype=ssd:NoSchedule
node/k8s-node2 tainted
[root@k8s-master ~]# kubectl describe node k8s-node2|grep Taint
Taints:             disktype=ssd:NoSchedule
[root@k8s-master ~]# kubectl describe node k8s-node1|grep Taint
Taints:             <none>
[root@k8s-master ~]# kubectl run nginx3 --image=nginx 
pod/nginx3 created
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY   STATUS              RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running             0          7h40m   10.244.169.191   k8s-node2   <none>           <none>
nginx1                              1/1     Running             0          15m     10.244.169.129   k8s-node2   <none>           <none>
nginx2                              1/1     Running             0          13m     10.244.169.135   k8s-node2   <none>           <none>
nginx3                              0/1     ContainerCreating   0          18s     <none>           k8s-node1   <none>           <none>
pod-ssd                             1/1     Running             0          30m     10.244.169.133   k8s-node2   <none>           <none>
pod-ssd2                            1/1     Running             0          27m     10.244.169.134   k8s-node2   <none>           <none>
web                                 1/1     Running             0          6h39m   10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running             4          15d     10.244.169.187   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl taint node k8s-node1 gpu=yes:NoSchedule
node/k8s-node1 tainted
[root@k8s-master ~]# kubectl describe node k8s-node1|grep Taint
Taints:             gpu=yes:NoSchedule
[root@k8s-master ~]# kubectl run nginx4 --image=nginx 
pod/nginx4 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h42m
nginx1                              1/1     Running   0          17m
nginx2                              1/1     Running   0          14m
nginx3                              1/1     Running   0          111s
nginx4                              0/1     Pending   0          7s
pod-ssd                             1/1     Running   0          31m
pod-ssd2                            1/1     Running   0          28m
web                                 1/1     Running   0          6h40m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl describe node k8s-master|grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Tolerations: allow a Pod to be scheduled onto a Node that carries a matching Taint

apiVersion: v1
kind: Pod
metadata:
  name: pod-taints
spec:
  containers:
  - name: pod-taints
    image: busybox:latest
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
[root@k8s-master ~]# vi taint.yaml 
[root@k8s-master ~]# cat taint.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: pod-taint
spec:
  tolerations:
  - key: "disktype"
    value: "ssd"
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
[root@k8s-master ~]# kubectl apply -f taint.yaml 
pod/pod-taint created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h46m
nginx1                              1/1     Running             0          22m
nginx2                              1/1     Running             0          19m
nginx3                              1/1     Running             0          6m37s
nginx4                              0/1     Pending             0          4m53s
pod-ssd                             1/1     Running             0          36m
pod-ssd2                            1/1     Running             0          33m
pod-taint                           0/1     ContainerCreating   0          6s
web                                 1/1     Running             0          6h45m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          7h47m   10.244.169.191   k8s-node2   <none>           <none>
nginx1                              1/1     Running   0          22m     10.244.169.129   k8s-node2   <none>           <none>
nginx2                              1/1     Running   0          20m     10.244.169.135   k8s-node2   <none>           <none>
nginx3                              1/1     Running   0          7m      10.244.36.122    k8s-node1   <none>           <none>
nginx4                              0/1     Pending   0          5m16s   <none>           <none>      <none>           <none>
pod-ssd                             1/1     Running   0          37m     10.244.169.133   k8s-node2   <none>           <none>
pod-ssd2                            1/1     Running   0          33m     10.244.169.134   k8s-node2   <none>           <none>
pod-taint                           1/1     Running   0          29s     10.244.169.131   k8s-node2   <none>           <none>
web                                 1/1     Running   0          6h45m   10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running   4          15d     10.244.169.187   k8s-node2   <none>           <none>

6. nodeName

nodeName: pins a Pod to the named Node directly, bypassing the scheduler entirely

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: k8s-node2
  containers:
  - name: nginx
    image: nginx:1.15
[root@k8s-master ~]# kubectl run nginx5 --image=nginx 
pod/nginx5 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h52m
nginx1                              1/1     Running   0          27m
nginx2                              1/1     Running   0          24m
nginx3                              1/1     Running   0          11m
nginx4                              0/1     Pending   0          10m
nginx5                              0/1     Pending   0          9s
pod-ssd                             1/1     Running   0          41m
pod-ssd2                            1/1     Running   0          38m
pod-taint                           1/1     Running   0          5m16s
web                                 1/1     Running   0          6h50m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h52m
nginx1                              1/1     Running   0          27m
nginx2                              1/1     Running   0          24m
nginx3                              1/1     Running   0          11m
nginx4                              0/1     Pending   0          10m
nginx5                              0/1     Pending   0          17s
pod-ssd                             1/1     Running   0          41m
pod-ssd2                            1/1     Running   0          38m
pod-taint                           1/1     Running   0          5m24s
web                                 1/1     Running   0          6h50m
web2                                1/1     Running   4          15d
[root@k8s-master ~]# cp pod.yaml nodename.yaml
[root@k8s-master ~]# kubectl apply -f nodename.yaml 
pod/nginx6 created
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h53m
nginx1                              1/1     Running             0          28m
nginx2                              1/1     Running             0          25m
nginx3                              1/1     Running             0          12m
nginx4                              0/1     Pending             0          11m
nginx5                              0/1     Pending             0          73s
nginx6                              0/1     ContainerCreating   0          8s
pod-ssd                             1/1     Running             0          42m
pod-ssd2                            1/1     Running             0          39m
pod-taint                           1/1     Running             0          6m20s
web                                 1/1     Running             0          6h51m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx                               1/1     Running             0          7h53m
nginx1                              1/1     Running             0          28m
nginx2                              1/1     Running             0          25m
nginx3                              1/1     Running             0          12m
nginx4                              0/1     Pending             0          11m
nginx5                              0/1     Pending             0          79s
nginx6                              0/1     ContainerCreating   0          14s
pod-ssd                             1/1     Running             0          42m
pod-ssd2                            1/1     Running             0          39m
pod-taint                           1/1     Running             0          6m26s
web                                 1/1     Running             0          6h51m
web2                                1/1     Running             4          15d
[root@k8s-master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx                               1/1     Running   0          7h53m
nginx1                              1/1     Running   0          28m
nginx2                              1/1     Running   0          26m
nginx3                              1/1     Running   0          13m
nginx4                              0/1     Pending   0          11m
nginx5                              0/1     Pending   0          91s
nginx6                              1/1     Running   0          26s
pod-ssd                             1/1     Running   0          43m
pod-ssd2                            1/1     Running   0          40m
pod-taint                           1/1     Running   0          6m38s

7. The DaemonSet controller

What a DaemonSet does:

  • Runs one Pod on every Node
  • A newly joined Node automatically gets a Pod as well

Use cases: network plugins, monitoring agents, log agents

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d
calico-node-4pwdc                         1/1     Running   5          23d
calico-node-9r6zd                         1/1     Running   5          23d
calico-node-vqzdj                         1/1     Running   5          23d
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d
etcd-k8s-master                           1/1     Running   6          23d
kube-apiserver-k8s-master                 1/1     Running   12         23d
kube-controller-manager-k8s-master        1/1     Running   14         22d
kube-proxy-5qpgc                          1/1     Running   5          23d
kube-proxy-q2xfq                          1/1     Running   5          23d
kube-proxy-tvzpd                          1/1     Running   5          23d
kube-scheduler-k8s-master                 1/1     Running   16         22d
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d

Example: deploy a log collection agent

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
      - name: log
        image: elastic/filebeat:7.3.2
[root@k8s-master ~]# vi daemonset.yaml
[root@k8s-master ~]# kubectl apply -f daemonset.yaml 
daemonset.apps/filebeat created
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d
calico-node-4pwdc                         1/1     Running   5          23d
calico-node-9r6zd                         1/1     Running   5          23d
calico-node-vqzdj                         1/1     Running   5          23d
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d
etcd-k8s-master                           1/1     Running   6          23d
kube-apiserver-k8s-master                 1/1     Running   12         23d
kube-controller-manager-k8s-master        1/1     Running   14         22d
kube-proxy-5qpgc                          1/1     Running   5          23d
kube-proxy-q2xfq                          1/1     Running   5          23d
kube-proxy-tvzpd                          1/1     Running   5          23d
kube-scheduler-k8s-master                 1/1     Running   16         22d
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d

[root@k8s-master ~]# kubectl taint node k8s-node2 disktype-
node/k8s-node2 untainted
[root@k8s-master ~]# kubectl taint node k8s-node1 gpu-
node/k8s-node1 untainted
[root@k8s-master ~]# kubectl describe node |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d
calico-node-4pwdc                         1/1     Running   5          23d
calico-node-9r6zd                         1/1     Running   5          23d
calico-node-vqzdj                         1/1     Running   5          23d
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d
etcd-k8s-master                           1/1     Running   6          23d
filebeat-5pwh7                            1/1     Running   0          82s
filebeat-pt848                            1/1     Running   0          2m12s
kube-apiserver-k8s-master                 1/1     Running   12         23d
kube-controller-manager-k8s-master        1/1     Running   14         22d
kube-proxy-5qpgc                          1/1     Running   5          23d
kube-proxy-q2xfq                          1/1     Running   5          23d
kube-proxy-tvzpd                          1/1     Running   5          23d
kube-scheduler-k8s-master                 1/1     Running   16         22d
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
nginx                               1/1     Running   0          8h     10.244.169.191   k8s-node2   <none>           <none>
nginx1                              1/1     Running   0          41m    10.244.169.129   k8s-node2   <none>           <none>
nginx2                              1/1     Running   0          39m    10.244.169.135   k8s-node2   <none>           <none>
nginx3                              1/1     Running   0          26m    10.244.36.122    k8s-node1   <none>           <none>
nginx4                              1/1     Running   0          24m    10.244.169.138   k8s-node2   <none>           <none>
nginx5                              1/1     Running   0          14m    10.244.169.137   k8s-node2   <none>           <none>
nginx6                              1/1     Running   0          13m    10.244.169.136   k8s-node2   <none>           <none>
pod-ssd                             1/1     Running   0          56m    10.244.169.133   k8s-node2   <none>           <none>
pod-ssd2                            1/1     Running   0          53m    10.244.169.134   k8s-node2   <none>           <none>
pod-taint                           1/1     Running   0          20m    10.244.169.131   k8s-node2   <none>           <none>
web                                 1/1     Running   0          7h5m   10.244.169.132   k8s-node2   <none>           <none>
web2                                1/1     Running   4          15d    10.244.169.187   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   5          23d     10.244.235.198   k8s-master   <none>           <none>
calico-node-4pwdc                         1/1     Running   5          23d     10.0.0.62        k8s-node1    <none>           <none>
calico-node-9r6zd                         1/1     Running   5          23d     10.0.0.63        k8s-node2    <none>           <none>
calico-node-vqzdj                         1/1     Running   5          23d     10.0.0.61        k8s-master   <none>           <none>
coredns-6d56c8448f-gcgrh                  1/1     Running   5          23d     10.244.169.184   k8s-node2    <none>           <none>
coredns-6d56c8448f-tbsmv                  1/1     Running   5          23d     10.244.36.120    k8s-node1    <none>           <none>
etcd-k8s-master                           1/1     Running   6          23d     10.0.0.61        k8s-master   <none>           <none>
filebeat-5pwh7                            1/1     Running   0          3m10s   10.244.36.123    k8s-node1    <none>           <none>
filebeat-pt848                            1/1     Running   0          4m      10.244.169.130   k8s-node2    <none>           <none>
kube-apiserver-k8s-master                 1/1     Running   12         23d     10.0.0.61        k8s-master   <none>           <none>
kube-controller-manager-k8s-master        1/1     Running   14         22d     10.0.0.61        k8s-master   <none>           <none>
kube-proxy-5qpgc                          1/1     Running   5          23d     10.0.0.63        k8s-node2    <none>           <none>
kube-proxy-q2xfq                          1/1     Running   5          23d     10.0.0.62        k8s-node1    <none>           <none>
kube-proxy-tvzpd                          1/1     Running   5          23d     10.0.0.61        k8s-master   <none>           <none>
kube-scheduler-k8s-master                 1/1     Running   16         22d     10.0.0.61        k8s-master   <none>           <none>
metrics-server-84f9866fdf-kt2nb           1/1     Running   5          16d     10.244.36.118    k8s-node1    <none>           <none>

8. Diagnosing scheduling failures

Check the scheduling result:
kubectl get pod -o wide
Check why scheduling failed: kubectl describe pod <pod-name>

Common causes:
  • Insufficient CPU/memory on the nodes
  • A taint on a node with no matching toleration on the Pod
  • No node matches the Pod's node labels (nodeSelector/affinity)
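The fastest way to tell which of these causes applies is the Events section of `kubectl describe pod`. A minimal sketch, demonstrated against a sample FailedScheduling event (the pod name `nginx4` and the event text are illustrative) since the real command needs a live cluster:

```shell
# On a live cluster you would run (pod name is an example):
#   kubectl describe pod nginx4 | grep -A4 Events
# Here the same grep is demonstrated on sample Events output:
events='Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  10s   default-scheduler  0/3 nodes are available: 3 node(s) did not match node selector.'
echo "$events" | grep FailedScheduling
```

The scheduler's message tells you which predicate failed (here, a node-selector mismatch), which maps directly onto the three causes listed above.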

Homework:
1. Create a pod and schedule it onto a node with the given label
•pod name: web
•image: nginx
•node label: disk=ssd

# vi web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd
# kubectl apply -f web.yaml

2. Ensure one pod runs on every node, regardless of taints
•pod name: nginx
•image: nginx

# vi nginx.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      tolerations:
      - operator: Exists   # tolerate all taints so the pod lands on every node
      containers:
      - name: nginx
        image: nginx
# kubectl apply -f nginx.yaml

3. Count the nodes in Ready status in the cluster and write the result to the given file

kubectl describe node $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep Taint | grep -vc NoSchedule > /opt/node.txt
(`grep -w Ready` avoids also matching NotReady nodes; `grep -vc NoSchedule` additionally excludes Ready nodes that carry a NoSchedule taint, i.e. it counts Ready nodes that can accept pods.)
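If the question only asks for the plain Ready count, a simpler sketch is to filter the STATUS column of `kubectl get nodes` with an exact match. The awk filter is demonstrated below on sample output (node names and versions are illustrative), since the real command needs a cluster:

```shell
# On a live cluster (writes the count to the target file):
#   kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l > /opt/node.txt
# Demonstrated on sample `kubectl get nodes --no-headers` output:
sample='k8s-master   Ready      master   23d   v1.19.0
k8s-node1    Ready      <none>   23d   v1.19.0
k8s-node2    NotReady   <none>   23d   v1.19.0'
echo "$sample" | awk '$2 == "Ready"' | wc -l
```

The exact comparison `$2 == "Ready"` sidesteps the substring pitfall where a pattern match would also count NotReady nodes.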