Kubernetes scheduling
See the official docs: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/
nodeName
Schedule by node name
# Apply the manifest
kubectl apply -f pod.yml
# Check the pod and the node it was scheduled to
kubectl get pod -o wide
# pod.yml:
# Run an nginx pod on node server3
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: server3
nodeSelector
Schedule by node label
See the official docs: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/
# Label node server3 with disktype=ssd
kubectl label nodes server3 disktype=ssd
# List all nodes together with the value of their disktype label
kubectl get nodes -L disktype
kubectl apply -f pod.yml
kubectl get pod -o wide
# pod.yml:
# Run nginx on a node labeled disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
Node affinity
See the official docs: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
Hard affinity (must be satisfied):
requiredDuringSchedulingIgnoredDuringExecution:
Soft affinity (preferred, best effort):
preferredDuringSchedulingIgnoredDuringExecution:
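Besides In, matchExpressions supports the operators NotIn, Exists, DoesNotExist, Gt, and Lt. A minimal config fragment using Exists (the label key gpu is an assumption for illustration):

```yaml
# Config fragment: schedule only onto nodes that carry the (hypothetical)
# "gpu" label, regardless of its value
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gpu
          operator: Exists
```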
Best practice:
# Label the node
kubectl label nodes server4 disktype=sata
kubectl label nodes server4 roles=nginx
# Check the labels
kubectl get nodes -L disktype
kubectl get nodes -L roles
# Apply
kubectl apply -f pod.yml
kubectl get pod -o wide
# pod.yml:
# Hard affinity: disktype must be ssd or sata; if this is not met, the pod is
# not scheduled and the soft affinity is never evaluated
# Soft affinity: prefer nodes labeled roles=nginx
# nginx is deployed on a node that satisfies the rules
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
            - sata
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: roles
            operator: In
            values:
            - nginx
Pod affinity and anti-affinity
Pod affinity (podAffinity)
# pod.yml:
# Affinity (podAffinity): schedule myapp onto the node that runs a pod labeled app=nginx
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
Pod anti-affinity (podAntiAffinity)
# pod.yml:
# Anti-affinity (podAntiAffinity): schedule myapp onto a node that does NOT run a pod labeled app=nginx
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
Taints
See the official docs: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/taint-and-toleration/
NoSchedule: pods are not scheduled onto the tainted node.
PreferNoSchedule: a soft version of NoSchedule.
NoExecute: once the taint takes effect, pods already running on the node that have no matching toleration are evicted.
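As a sketch, a pod can tolerate one specific taint (here the key1=v1:NoExecute taint set below) rather than all of them, and tolerationSeconds can bound how long it survives after the taint appears:

```yaml
# Config fragment: tolerate key1=v1:NoExecute; with tolerationSeconds set,
# the pod is evicted 3600s after the taint is added instead of immediately
tolerations:
- key: "key1"
  operator: "Equal"
  value: "v1"
  effect: "NoExecute"
  tolerationSeconds: 3600
```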
Taints:
# Check the taints on node server2
kubectl describe nodes server2 | grep Taint
NoExecute:
# Taint node server3 with effect NoExecute
kubectl taint node server3 key1=v1:NoExecute
NoSchedule:
# Taint node server4 with effect NoSchedule
kubectl taint node server4 key2=v2:NoSchedule
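To undo a taint, append a trailing - to the same expression (standard kubectl taint syntax):

```shell
# Remove the taints set above
kubectl taint node server3 key1=v1:NoExecute-
kubectl taint node server4 key2=v2:NoSchedule-
```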
Tolerate everything:
If a toleration's key is empty and its operator is Exists, it matches any key, value, and effect, i.e. it tolerates every taint.
tolerations:
- operator: "Exists"
# pod.yml:
# Precondition: server2 -> NoSchedule, server3 -> NoExecute, server4 -> NoSchedule
# A newly applied pod would stay Pending; ( - operator: "Exists" ) tolerates all taints
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - operator: "Exists"
The master (server2) carries a NoSchedule taint, which is why pods are normally not scheduled onto the master.
Node management
See the official docs: https://kubernetes.io/zh/docs/concepts/architecture/nodes/#manual-node-administration
cordon (mark the node unschedulable)
drain (evict the pods from the node)
delete (remove the node)
Note: in production, before shutting a server down, run cordon first, then drain, then delete. (drain may refuse to evict certain pods; those have to be ignored, and the command's own error message tells you which flags to add.)
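The sequence above as commands (the node name server3 is an assumption; the drain flags shown are the ones kubectl itself suggests for DaemonSet pods and emptyDir volumes, and may differ on older kubectl versions):

```shell
kubectl cordon server3                                              # stop scheduling new pods here
kubectl drain server3 --ignore-daemonsets --delete-emptydir-data    # evict the workloads
kubectl delete node server3                                         # remove the node from the cluster
```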
Kubernetes access control
See the official docs: https://kubernetes.io/zh/docs/concepts/security/controlling-access/
Three phases
Authentication
Authorization
Admission Control (loaded on demand)
See the official docs: https://kubernetes.io/zh/docs/reference/access-authn-authz/authentication/
See the official docs: https://kubernetes.io/zh/docs/concepts/cluster-administration/certificates/
Service authentication (ServiceAccount)
# Create a secret named myregistrykey
kubectl create secret docker-registry myregistrykey --docker-server=reg.westos.org --docker-username=admin --docker-password=westos --docker-email=yakexi007@westos.org
# Create a serviceaccount named admin
kubectl create sa admin
# Attach the myregistrykey secret to the admin serviceaccount
kubectl patch serviceaccount admin -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
# Inspect the admin serviceaccount
kubectl describe sa admin
kubectl apply -f pod.yml
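The patch above can equally be expressed declaratively; a sketch of the resulting ServiceAccount object:

```yaml
# Config fragment: ServiceAccount with the pull secret attached
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
imagePullSecrets:
- name: myregistrykey
```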
Note:
pod.yml must reference the serviceaccount created above (admin):
pod.yml:
serviceAccountName: admin
# pod.yml:
# Bind the serviceaccount (admin) to the pod
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: reg.westos.org/westos/game2048
    ports:
    - name: http
      containerPort: 80
  serviceAccountName: admin
Using the default serviceaccount's secret:
# Delete the admin serviceaccount created in the previous step
kubectl delete sa admin
# Attach myregistrykey to default
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
kubectl apply -f pod.yml
Note:
pod.yml:
serviceAccountName: default
# pod.yml:
# Bind the serviceaccount (default) to the pod
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: reg.westos.org/westos/game2048
    ports:
    - name: http
      containerPort: 80
  serviceAccountName: default
User authentication (UserAccount)
RBAC (Role Based Access Control): role-based authorization
See the official docs: https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/
RoleBinding:
- can bind a namespaced Role
- can also bind a ClusterRole
- but either way it only takes effect in the specified namespace
ClusterRoleBinding:
- can only bind a ClusterRole, and takes effect cluster-wide
# Generate a private key and a CSR for user "test", then sign it with the cluster CA
openssl genrsa -out test.key 2048
openssl req -new -key test.key -out test.csr -subj "/CN=test"
openssl x509 -req -in test.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test.crt -days 365
openssl x509 -in test.crt -text -noout
# Register the credentials and a context for user "test", then switch to it
kubectl config set-credentials test --client-certificate=/etc/kubernetes/pki/test.crt --client-key=/etc/kubernetes/pki/test.key --embed-certs=true
kubectl config view
kubectl config set-context test@kubernetes --cluster=kubernetes --user=test
kubectl config use-context test@kubernetes
kubectl get pod
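To check what the new user may do without switching contexts, kubectl auth can-i is handy (run it from the admin context):

```shell
# Does user "test" have permission to list pods in the default namespace?
kubectl auth can-i list pods --as=test -n default
```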
# rbac.yaml:
# Define a namespaced Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: myrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
---
# RoleBinding (namespaced; only takes effect in the given namespace): bind the role to the user
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-read-pods
  namespace: default
subjects:
- kind: User
  name: test
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: myrole
  apiGroup: rbac.authorization.k8s.io
---
# Define a ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myclusterrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "delete", "create", "update"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# RoleBinding (namespaced; only takes effect in the given namespace): bind the ClusterRole to the user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebind-myclusterrole
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: test
---
# ClusterRoleBinding (no namespace; takes effect cluster-wide): bind the ClusterRole to the user
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrolebinding-myclusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: test
Best practice
User-facing roles
Kubernetes also ships four predefined ClusterRoles for direct use:
See the official docs: https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/#user-facing-roles
- cluster-admin (full control)
- admin (below cluster-admin; some permissions restricted)
- edit (read/write access)
- view (read-only access)
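As a sketch, the built-in view role can be granted to the test user from the certificate example above with a single command (the binding name test-view is illustrative):

```shell
# Grant read-only access to the default namespace
kubectl create rolebinding test-view --clusterrole=view --user=test -n default
```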