Cluster scheduling:
How pods get placed onto nodes.
The scheduling process:
The scheduler is the cluster's scheduling component; its main job is to assign pods to nodes.
Goals of automatic scheduling:
1. Fairness: every available node gets a chance to receive pods.
2. Efficient resource use: the cluster's resources are used as fully as possible.
3. Performance: scheduling must be fast and able to handle large batches of pods.
4. Flexibility (customization): users can steer scheduling according to their own needs.
Scheduling constraint mechanism:
Components cooperate through the list-watch mechanism, which keeps their data in sync while keeping the components decoupled from each other.
list-watch:
watch ---- the listen mechanism in k8s: stream subsequent changes to resources
get ------- fetch the current state of resources
Components maintain watch connections against the apiserver.
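The same pattern is visible from the command line. A minimal illustration (any resource type behaves the same way):

[root@master01 k8s-yaml]# kubectl get pods            # list: fetch the current state once
[root@master01 k8s-yaml]# kubectl get pods --watch    # watch: stay connected and stream changes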
The scheduling process:
1. Predicate strategy: first filter nodes on their conditions.
Pod resource fit: does the node have enough free resources to satisfy what the pod requests?
Pod host fit: if a specific node is named, is there a node in the cluster that satisfies the requirement?
Pod host-port fit: do the ports already in use on the node conflict with the ports the pod requests?
Pod disk fit: the volumes mounted by different pods must not conflict.
If no node satisfies the predicate conditions, the pod enters the Pending state.
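The resource-fit check, for example, works off the pod's declared requests. A minimal sketch (the pod name and request sizes are made-up values, deliberately larger than this lab's nodes can offer; such a pod stays Pending, and kubectl describe pod shows a FailedScheduling event):

apiVersion: v1
kind: Pod
metadata:
  name: big-pod              # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:1.22
    resources:
      requests:
        cpu: "16"            # made-up: more CPU than any node here has free
        memory: 64Gi         # made-up: more memory than any node here has free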
2. Priority strategy: from the nodes that pass filtering, select the optimal node.
Least-requested priority: node weight is computed from CPU and memory utilization; the lower the utilization, the higher the weight and the more likely the node is chosen for deployment.
Tends to pick nodes with lower resource utilization.
Balanced resource allocation: node weight is again derived from CPU and memory utilization, but here what matters is the ratio between the CPU usage rate and the memory usage rate.
Node A: CPU 50%, memory 50%, ratio 1:1
Node B: CPU 10%, memory 20%, ratio 1:2
Between these two nodes, A is selected (its usage is balanced).
Image locality priority: if a node already has the required image locally, it is more likely to be chosen.
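A quick way to see the numbers these priority functions start from is to inspect a node directly; the describe output ends with an "Allocated resources" section listing the CPU/memory requests already placed on the node:

[root@master01 k8s-yaml]# kubectl describe node node01
# look at the 'Allocated resources:' section near the end of the output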
User-controlled node placement:
1. Forced node scheduling:
nodeName forcibly picks one node; the scheduler and its algorithms are bypassed entirely, and the pod is placed on that node directly.
[root@master01 k8s-yaml]# vim test3.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      nodeName: node01
[root@master01 k8s-yaml]# kubectl apply -f test3.yaml
deployment.apps/nginx1 configured
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx1-95bf57c7f-4j2zn   1/1     Running   0          70s   10.244.1.12   node01   <none>           <none>
nginx1-95bf57c7f-m5r5q   1/1     Running   0          69s   10.244.1.13   node01   <none>           <none>
nginx1-95bf57c7f-rqjkt   1/1     Running   0          71s   10.244.1.11   node01   <none>           <none>
2. Scheduling by node label: a matching mechanism; the pod can be placed on any node whose labels match.
Question: does label-based node selection still need the scheduler and its algorithms?
Yes. With a label selector, placement still goes through the scheduler and its algorithms.
View node labels:
kubectl get node --show-labels
Labels are key-value pairs.
A node can carry multiple labels, separated by commas.
Add a label:
[root@master01 k8s-yaml]# kubectl label nodes node01 test1=a
Modify a label:
[root@master01 k8s-yaml]# kubectl label nodes node01 test1=b --overwrite
Delete a label:
[root@master01 k8s-yaml]# kubectl label nodes node01 test2-
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      nodeSelector:
        test2: b
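If no node carries the selected label, the pods stay Pending, and the reason shows up in the pod's events (the pod name below is a placeholder):

[root@master01 k8s-yaml]# kubectl describe pod <pod-name>
# expect a FailedScheduling event along the lines of "node(s) didn't match node selector"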
3. Affinity:
Node affinity: nodeAffinity
Pod affinity: podAffinity
Soft strategy and hard strategy:
# Soft strategy: preferredDuringSchedulingIgnoredDuringExecution
Soft strategy: when choosing a node, try to satisfy the stated conditions; if they cannot be met, the pod may still be deployed elsewhere.
# Hard strategy: requiredDuringSchedulingIgnoredDuringExecution
Hard strategy: the stated node conditions must be satisfied, otherwise the pod stays Pending.
Selection is driven by node labels and pod labels.
Operators for key-value matching:
1. In: the label's value is in the given list (matches =)
2. NotIn: the label's value is not in the list (not equal, logical NOT)
3. Gt: greater than (the label value is compared as an integer)
4. Lt: less than (the label value is compared as an integer)
5. Exists: the label key exists (no values may be given)
6. DoesNotExist: the label key does not exist
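A fragment showing how these operators look in a manifest (a sketch: test1 is a lab label from above, while disk-size is a hypothetical numeric label used only for illustration). Note that Gt/Lt parse the label value as an integer, and Exists/DoesNotExist must not list any values:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test1
                operator: Exists        # key must be present; no values field allowed
              - key: disk-size          # hypothetical label, for illustration only
                operator: Gt            # label value compared as an integer
                values:
                - "100"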
Current node labels:
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME       STATUS   ROLES                  AGE   VERSION    LABELS
master01   Ready    control-plane,master   21h   v1.20.15   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01     Ready    <none>                 21h   v1.20.15   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,test2=b,test3=a
node02     Ready    <none>                 21h   v1.20.15   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,test2=b
Hard strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:                   # affinity field
        nodeAffinity:             # node affinity
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: test3
                operator: In
                values:
                - a
# Node-affinity hard strategy: only nodes carrying the label test3=a may be selected.
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx1-5ccd456747-6zm5p   1/1     Running   0          98s   10.244.1.16   node01   <none>           <none>
nginx1-5ccd456747-mtvkn   1/1     Running   0          18s   10.244.1.18   node01   <none>           <none>
nginx1-5ccd456747-vnvl7   1/1     Running   0          19s   10.244.1.17   node01   <none>           <none>
Notes:
1. If the condition cannot be satisfied, the pod is guaranteed to stay Pending.
2. If the condition is satisfied, the scheduler places the pod immediately.
3. The scheduler is still needed: different nodes can carry the same label, so the scheduler must still pick among the matching nodes.
Soft strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: test3
                operator: NotIn
                values:
                - a
# Node-affinity soft strategy: prefer nodes that do NOT carry the label test3=a.
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx1-77f4d4fb7-8f462   1/1     Running   0          7s    10.244.2.21   node02   <none>           <none>
nginx1-77f4d4fb7-dtc8h   1/1     Running   0          7s    10.244.1.19   node01   <none>           <none>
nginx1-77f4d4fb7-dw4gz   1/1     Running   0          7s    10.244.2.22   node02   <none>           <none>
Multiple soft strategies
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:                   # affinity field
        nodeAffinity:             # node affinity
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: test3
                operator: NotIn
                values:
                - a
          - weight: 2
            preference:
              matchExpressions:
              - key: test3
                operator: In
                values:
                - a
# Multiple soft strategies are weighed against each other: the higher the weight, the higher the priority.
If a hard strategy is already declared, there is generally no need to declare a soft strategy as well.
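That said, the two can be combined when useful: the hard rule filters nodes, and the soft rule ranks whatever survives the filter. A sketch using the lab labels above (both node01 and node02 carry test2=b, so both pass the hard rule; only node01 carries test3=a, so it is preferred):

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:    # must: test2=b
            nodeSelectorTerms:
            - matchExpressions:
              - key: test2
                operator: In
                values:
                - b
          preferredDuringSchedulingIgnoredDuringExecution:   # prefer: test3=a among the survivors
          - weight: 1
            preference:
              matchExpressions:
              - key: test3
                operator: In
                values:
                - a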
Pod affinity:
topologyKey defines the node's topology domain; it expresses the relationship between pods and nodes.
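Concretely, all nodes carrying the same value for the topologyKey label form one topology domain, and (anti-)affinity is evaluated per domain rather than per node. For example (the zone label here is hypothetical, added only for illustration):

[root@master01 k8s-yaml]# kubectl label nodes node01 zone=az1
[root@master01 k8s-yaml]# kubectl label nodes node02 zone=az1
# With topologyKey: zone, node01 and node02 now count as one domain: a pod-affinity rule
# matched by a pod on node01 also allows placement on node02.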
Hard strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:        # select target pods by label
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx1
            topologyKey: test1
# Matches pods labeled app=nginx1, and the node must carry the label key test1.
Soft strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:      # select target pods by label
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx1
              topologyKey: test1
4. Anti-affinity:
Pod anti-affinity: podAntiAffinity
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx2
  name: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:        # select target pods by label
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx1
            topologyKey: test2
# The pod may only be placed where no pod labeled app=nginx1 runs within the same test2 topology domain.
# In pod (anti-)affinity, it is really the topology-domain label that is decisive.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:      # select target pods by label
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx1
              topologyKey: test2
Soft strategy: a preference; satisfy the condition where possible. The common case: steer resources toward the nodes that want them whenever you can.
Hard strategy: the condition must be satisfied. For special situations: e.g. a node has failed but a workload must be updated, so resources are forcibly scheduled onto a designated node.
Homework:
1. Add a pod probe:
readiness probe
tcpSocket
2. Mount the container path /usr/share/nginx/html/
to the node path /opt/html
3. Node affinity: prefer scheduling onto node01
4. Pod affinity: prefer nodes that already run a pod labeled app=nginx, with xy102 as the topology key
5. Soft strategy: prefer nodes whose xy102 label value is less than 100
[root@master01 k8s-yaml]# kubectl get nodes --show-labels
NAME       STATUS   ROLES                  AGE   VERSION    LABELS
master01   Ready    control-plane,master   24h   v1.20.15   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node01     Ready    <none>                 24h   v1.20.15   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux,test1=a,xy102=500
node02     Ready    <none>                 24h   v1.20.15   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux,xy102=50
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1
  name: nginx1
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx
        image: nginx:1.22
        readinessProbe:
          tcpSocket:
            port: 80
        volumeMounts:
        - name: data-v
          mountPath: /usr/share/nginx/html/
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 2
            preference:
              matchExpressions:
              - key: test1
                operator: In
                values:
                - a
          - weight: 3
            preference:
              matchExpressions:
              - key: xy102
                operator: Lt
                values:
                - "100"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx
              topologyKey: xy102
      volumes:
      - name: data-v
        hostPath:
          path: /opt/html
          type: DirectoryOrCreate
[root@master01 k8s-yaml]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx1-5d9d5b56db-8bkq4   1/1     Running   0          32s   10.244.2.80   node02   <none>           <none>
nginx1-5d9d5b56db-8jwcn   1/1     Running   0          32s   10.244.2.85   node02   <none>           <none>
nginx1-5d9d5b56db-g7flw   1/1     Running   0          32s   10.244.2.81   node02   <none>           <none>
nginx1-5d9d5b56db-jd8kk   1/1     Running   0          32s   10.244.2.82   node02   <none>           <none>
nginx1-5d9d5b56db-jwthk   1/1     Running   0          32s   10.244.1.64   node01   <none>           <none>
nginx1-5d9d5b56db-mh5w7   1/1     Running   0          32s   10.244.1.66   node01   <none>           <none>
nginx1-5d9d5b56db-p62sc   1/1     Running   0          32s   10.244.2.84   node02   <none>           <none>
nginx1-5d9d5b56db-pj6zz   1/1     Running   0          32s   10.244.1.67   node01   <none>           <none>
nginx1-5d9d5b56db-rlm86   1/1     Running   0          32s   10.244.1.65   node01   <none>           <none>
nginx1-5d9d5b56db-zk7bj   1/1     Running   0          32s   10.244.2.83   node02   <none>           <none>