k8s container probes (keeping containers alive)
apiVersion: v1
kind: Pod
metadata:
name: kubia-manual
labels:
name: kubia-manual
spec:
containers:
- name: kubia-manual
image: luksa/kubia
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 8080
Creating an HTTP-based liveness probe
livenessProbe:
httpGet:
path: / # request path
port: 8080 # request port
initialDelaySeconds: 15
A liveness probe based on a TCP port
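A TCP probe needs no HTTP endpoint; a minimal sketch, assuming the container listens on TCP port 8080:
livenessProbe:
  tcpSocket:
    port: 8080          # probe succeeds if a TCP connection to this port can be opened
  initialDelaySeconds: 15
  periodSeconds: 20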
ReplicationController
## Delete only the controller; the pods it manages keep running
kubectl delete rc kubia --cascade=false
ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: kubia
spec:
replicas: 4
selector:
matchLabels:
app: kubia
template:
metadata:
name: kubia
labels:
app: kubia
spec:
containers:
- name: kubia
image: luksa/kubia
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 8080
https://www.cnblogs.com/dalianpai/p/12072844.html
Detailed explanation of apiVersion
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ssd-monitor
spec:
selector:
matchLabels:
app: ssd-monitor
template:
metadata:
name: ssd-monitor
labels:
app: ssd-monitor
spec:
nodeSelector:
disk: ssd
containers:
- name: ssd-monitor
image: luksa/ssd-monitor
resources:
limits:
memory: "128Mi"
cpu: "500m"
Differences between the ReplicationController and ReplicaSet controllers
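The key practical difference is the label selector: a ReplicationController only supports equality-based selectors, while a ReplicaSet (apps/v1) also supports set-based matchLabels/matchExpressions. A sketch of a selector only a ReplicaSet can express (the label values are illustrative):
selector:
  matchExpressions:
  - key: app
    operator: In        # also supported: NotIn, Exists, DoesNotExist
    values:
    - kubia
    - kubia-v2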
Labeling nodes:
# add a label to a node
kubectl label node [nodename] key=value
# remove a label
kubectl label node [nodename] key-
Executing a command inside a container
kubectl exec [podname] -- curl -s http://10.103.228.252
## -- marks the end of kubectl's own arguments; everything after it is the command executed inside the container
Services
- A Kubernetes Service is a resource that provides a single, stable entry point for a group of pods offering the same functionality. As long as the service exists, its IP address and port do not change. Clients open connections to that IP address and port, and the connections are routed to any one of the pods backing the service. This way clients do not need to know the address of each individual pod, so those pods can be created and removed in the cluster at any time.
Template examples
- Creating from a YAML file
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
sessionAffinity: ClientIP ## all requests from the same client IP are routed to the same pod
selector:
app: kubia
ports:
- port: 80 # the port this service is available on
targetPort: 8080 # the container port the service forwards connections to
- Creating with a command
kubectl expose rc kubia --type=LoadBalancer --name kubia
A service with session affinity set to ClientIP
Specifying multiple ports in a service definition
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
selector:
app: kubia
ports:
- name: http
port: 80
targetPort: 8080
- name: https
port: 443
targetPort: 8443
Specifying port names in the pod definition
apiVersion: v1
kind: Pod
metadata:
name: kubia
labels:
name: kubia
spec:
containers:
- name: kubia
ports:
- name: http
containerPort: 80
- name: https
containerPort: 8080
A service referencing the named ports
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
selector:
app: kubia
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
Service discovery:
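Pods can discover a service either through environment variables injected at pod creation time or through the cluster DNS; for example (the client pod name is illustrative):
# environment variables (only for services that already existed when the pod was created)
kubectl exec kubia-manual -- env | grep KUBIA_SERVICE
# DNS name: <service>.<namespace>.svc.cluster.local
kubectl exec kubia-manual -- curl -s http://kubia.default.svc.cluster.local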
Service endpoints
kubectl describe svc kubia
kubectl get endpoints kubia
[root@k8s-master1 ~]# cat external-service-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: external-service
subsets:
- addresses:
- ip: 11.11.11.11
- ip: 22.22.22.22
ports:
- port: 80
# Create a service without a pod selector, then manually define Endpoints to forward traffic to the listed addresses.
[root@k8s-master1 ~]# cat external-service.yaml
apiVersion: v1
kind: Service
metadata:
name: external-service
spec:
ports:
- port: 80
The Endpoints object must have the same name as the service and contain the list of target IP addresses and ports for the service.
Creating an alias for an external service
Creating a service of type ExternalName
apiVersion: v1
kind: Service
metadata:
name: external-service
spec:
type: ExternalName
externalName: someapi.somecompany.com
ports:
- port: 80
[root@k8s-master1 ~]# kubectl apply -f external-service-extemalname.yaml
service/external-service created
[root@k8s-master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
external-service ExternalName <none> someapi.somecompany.com 80/TCP 11s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10d
[root@k8s-master1 ~]#
Exposing a service to external clients through NodePort
There are several ways to make a service accessible from outside the cluster:
- Set the service type to NodePort
- Set the service type to LoadBalancer, an extension of the NodePort type; the service then becomes reachable through a dedicated load balancer
- Create an Ingress resource, a completely different mechanism that exposes multiple services through a single IP address
apiVersion: v1
kind: Service
metadata:
name: kubia-nodeport
spec:
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
nodePort: 30123
type: NodePort
Exposing the service through a LoadBalancer; this works because a LoadBalancer service is an extension of a NodePort service.
[root@k8s-master1 ~]# vim kubia-svc-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
name: kubia-loadbalancer
spec:
type: LoadBalancer
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
Creating a headless service
apiVersion: v1
kind: Service
metadata:
name: kubia-headless
spec:
clusterIP: None
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
# Verification
[root@k8s-master1 ~]# kubectl run dnsutils --image=tutum/dnsutils --command -- sleep infinity
pod/dnsutils created
[root@k8s-master1 ~]# kubectl exec dnsutils nslookup kubia-headless
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubia-headless.default.svc.cluster.local
Address: 192.168.169.131
Name: kubia-headless.default.svc.cluster.local
Address: 192.168.36.74
Name: kubia-headless.default.svc.cluster.local
Address: 192.168.36.73
Name: kubia-headless.default.svc.cluster.local
Address: 192.168.169.132
A headless service still provides load balancing across pods, but through the DNS round-robin mechanism rather than through the service proxy.
Volumes: mounting disk storage into containers
• emptyDir — a simple empty directory used for storing transient data.
• hostPath — used to mount a directory from the worker node's filesystem into the pod.
• gitRepo — a volume initialized by checking out the contents of a Git repository.
• nfs — an NFS share mounted into the pod.
• gcePersistentDisk (Google Compute Engine Persistent Disk), awsElasticBlockStore (Amazon Web Services Elastic Block Store), azureDisk (Microsoft Azure Disk) — used to mount cloud-provider-specific storage.
• cinder, cephfs, iscsi, flocker, glusterfs, quobyte, rbd, flexVolume, vsphereVolume, photonPersistentDisk, scaleIO — used to mount other types of network storage.
• configMap, secret, downwardAPI — special types of volumes used to expose certain Kubernetes resources and cluster information to the pod.
• persistentVolumeClaim — a way to use pre-provisioned or dynamically provisioned persistent storage (discussed in the last section of this chapter).
These volume types serve different purposes; the most common ones are covered in the sections below.
apiVersion: v1
kind: Pod
metadata:
name: fortune
spec:
containers:
- name: html-generator
image: luksa/fortune
volumeMounts:
- name: html
mountPath: /var/htdocs
resources:
limits:
memory: "128Mi"
cpu: "500m"
- name: web-server
image: nginx
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
readOnly: true
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
protocol: TCP
volumes:
- name: html
emptyDir: {}
----------------------------
volumes:
- name: html
emptyDir:
medium: Memory
hostPath volume example
apiVersion: v1
kind: Pod
metadata:
name: hostpath-demo
spec:
containers:
- name: test-container
image: nginx
volumeMounts:
- mountPath: /data
name: test-volume
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
volumes:
- name: test-volume
hostPath:
path: /root/data
NFS network volume mount example
apiVersion: v1
kind: Pod
metadata:
name: nfspathpod
labels:
name: nfsdemo
role: master
spec:
containers:
- name: c1
image: nginx
volumeMounts:
- name: nfs-storage
mountPath: /nfs/
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
volumes:
- name: nfs-storage
nfs:
server: 192.168.30.6
path: "/data/dsk;"
ConfigMap and Secret
Creating a ConfigMap with kubectl
[root@k8s-master1 ~]# kubectl create configmap fortune-config --from-literal=sleep-interval=25
configmap/fortune-config created
[root@k8s-master1 ~]# kubectl get cm
NAME DATA AGE
fortune-config 1 16s
### Multiple literal entries
[root@k8s-master1 ~]# kubectl create configmap myconfigmap --from-literal=foo=bar --from-literal=bar=baz --from-literal=one=two
configmap/myconfigmap created
[root@k8s-master1 ~]# kubectl get cm
NAME DATA AGE
fortune-config 1 110s
myconfigmap 3 8s
Inspecting the ConfigMap
[root@k8s-master1 ~]# kubectl get cm myconfigmap -o yaml
apiVersion: v1
data:
bar: baz
foo: bar
one: two
kind: ConfigMap
metadata:
creationTimestamp: "2020-09-22T10:54:23Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:bar: {}
f:foo: {}
f:one: {}
manager: kubectl-create
operation: Update
time: "2020-09-22T10:54:23Z"
name: myconfigmap
namespace: default
resourceVersion: "180035"
selfLink: /api/v1/namespaces/default/configmaps/myconfigmap
uid: cf96d0d2-0f8d-4a5e-a2e9-caacb9f39ecc
Creating ConfigMap entries from the contents of a file
[root@k8s-master1 ~]# kubectl create configmap my-config --from-file=configmap.yaml
Creating a ConfigMap from a directory
[root@k8s-master1 ~]# mkdir -p /path/to/dir
[root@k8s-master1 ~]# kubectl create configmap my-config-dir --from-file=/path/to/dir
In this case, kubectl creates an individual entry for each file in the directory, but only for files whose names are valid ConfigMap keys.
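A key different from the file name can also be given explicitly; a sketch (the key and file names are illustrative):
kubectl create configmap my-config --from-file=customkey=config-file.conf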
Exercise: exposing the nginx configuration file through a ConfigMap volume
[root@k8s-master1 ~]# mkdir configmap-files
[root@k8s-master1 ~]# vim configmap-files/my-nginx-config.conf
server {
listen 80;
server_name www.kubia-example.com;
gzip on;
gzip_types text/plain application/xml;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
}
[root@k8s-master1 ~]# kubectl create configmap fortune-config --from-file=configmap-files
[root@k8s-master1 ~]# vim fortune-pod-configmap-volume.yaml
[root@k8s-master1 ~]# cat fortune-pod-configmap-volume.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- name: config
mountPath: /etc/nginx/conf.d
readOnly: true
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
volumes:
- name: config
configMap:
name: fortune-config
[root@k8s-master1 ~]# kubectl exec nginx -- ls /etc/nginx/conf.d/
my-nginx-config.conf
Passing sensitive data to containers with Secrets
Example: creating a Secret
[root@k8s-master1 ~]# openssl genrsa -out https.key 2048
[root@k8s-master1 ~]# openssl req -new -x509 -key https.key -out https.cert -days 3650 -subj /CN=www.kubia-example.com
[root@k8s-master1 ~]# mkdir secret
[root@k8s-master1 ~]# mv https.* secret/
[root@k8s-master1 ~]# cd secret/
[root@k8s-master1 secret]# kubectl create secret generic fortune-https --from-file=https.key --from-file=https.cert --from-file=foo
[root@k8s-master1 secret]# kubectl get secrets fortune-https -o yaml
Mounting the Secret (together with the fortune-config ConfigMap) in a pod, fortune-pod-https.yaml:
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
readOnly: true
- name: config
mountPath: /etc/nginx/conf.d
readOnly: true
- name: certs
mountPath: /etc/nginx/certs/
readOnly: true
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
- containerPort: 443
volumes:
- name: html
emptyDir: {}
- name: config
configMap:
name: fortune-config
- name: certs
secret:
secretName: fortune-https
kubectl apply -f fortune-pod-https.yaml
kubectl port-forward nginx 8443:443 &
curl https://localhost:8443 -k
Accessing pod metadata and other resources from the application
apiVersion: v1
kind: Pod
metadata:
name: downward
spec:
containers:
- name: main
image: busybox
command: ["sleep", "9999999"]
resources:
requests:
cpu: "15m"
memory: "100k"
limits:
memory: "4M"
cpu: "100m"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: CONTAINER_CPU_REQUEST_MILLICORES
valueFrom:
resourceFieldRef:
resource: requests.cpu
divisor: 1m
- name: CONTAINER_MEMORY_LIMIT_KIBIBYTES
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: 1Ki
Passing metadata through the Downward API
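The same metadata can also be exposed as files through a downwardAPI volume instead of environment variables; a minimal sketch (the mount path and item names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume
  labels:
    foo: bar
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "9999999"]
    volumeMounts:
    - name: downward
      mountPath: /etc/downward     # metadata shows up as files in this directory
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: "podName"            # becomes /etc/downward/podName
        fieldRef:
          fieldPath: metadata.name
      - path: "labels"             # the labels file is updated in place when labels change
        fieldRef:
          fieldPath: metadata.labels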
Simplifying interaction with the API server through an ambassador container; put simply, the pod runs a kubectl proxy sidecar next to the main container.
Deployment: updating applications declaratively
apiVersion: v1
kind: ReplicationController
metadata:
name: kubia-v1
spec:
replicas: 3
template:
metadata:
name: kubia
labels:
app: kubia
spec:
containers:
- name: nodejs
image: luksa/kubia:v1
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
type: LoadBalancer
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
kubectl rolling-update kubia-v1 kubia-v2 --image=luksa/kubia:v2
The default imagePullPolicy also depends on the image tag. If the container uses the latest tag (explicitly or by omitting the tag), imagePullPolicy defaults to Always; if any other tag is specified, the policy defaults to IfNotPresent.
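So when a mutable tag is in play it is safer to state the policy explicitly; a sketch:
containers:
- name: nodejs
  image: luksa/kubia:v1
  imagePullPolicy: IfNotPresent   # or Always / Never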
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubia
spec:
selector:
matchLabels:
app: kubia
replicas: 3
template:
metadata:
name: kubia
labels:
app: kubia
spec:
containers:
- name: nodejs
image: luksa/kubia:v1
resources:
limits:
memory: "128Mi"
cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
type: LoadBalancer
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
[root@k8s-master1 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
kubia-78984f9567 3 3 3 3m32s
[root@k8s-master1 ~]# while true; do sleep 1 ; curl 10.101.185.40; done
[root@k8s-master1 ~]# kubectl set image deployment kubia nodejs=luksa/kubia:v2
[root@k8s-master1 ~]# kubectl get pod --show-labels -o wide -w
## Update and rollback
[root@k8s-master1 ~]# kubectl set image deployment kubia nodejs=luksa/kubia:v3
deployment.apps/kubia image updated
[root@k8s-master1 ~]# kubectl rollout status deployment kubia
Waiting for deployment "kubia" rollout to finish: 1 out of 3 new replicas have been updated...
## Roll back to a specific revision
[root@k8s-master1 ~]# kubectl rollout undo deployment kubia --to-revision=1
deployment.apps/kubia rolled back
[root@k8s-master1 ~]# kubectl rollout status deployment kubia
Too many old ReplicaSets clutter the ReplicaSet list, so the number of revisions kept can be limited with the Deployment's revisionHistoryLimit property. Older versions defaulted to 2, so the history normally contained only the current and the previous revision (and only those two ReplicaSets were kept); with apps/v1 the default is 10.
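A sketch of setting it in the Deployment spec (the value is illustrative):
spec:
  revisionHistoryLimit: 3   # keep at most 3 old ReplicaSets around for rollbacks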
Controlling the upgrade strategy in the YAML file
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
Blocking rollouts of bad versions
The minReadySeconds property specifies how long a newly created pod must run successfully before it is treated as available.
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubia
spec:
minReadySeconds: 10
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: kubia
replicas: 3
template:
metadata:
name: kubia
labels:
app: kubia
spec:
containers:
- name: nodejs
image: luksa/kubia:v1
resources:
limits:
memory: "128Mi"
cpu: "500m"
readinessProbe:
periodSeconds: 1
httpGet:
path: /
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
type: LoadBalancer
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
Creating a local PV and PVC
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
Creating a StatefulSet and its governing service
apiVersion: v1
kind: Service
metadata:
name: kubia
spec:
clusterIP: None
selector:
app: kubia
ports:
- name: http
port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: kubia
spec:
selector:
matchLabels:
app: kubia
serviceName: kubia
replicas: 2
template:
metadata:
labels:
app: kubia
spec:
containers:
- name: kubia
image: luksa/kubia-pet
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 8080
volumeMounts:
- name: pv-claim
mountPath: /var/data
volumes:
- name: pv-claim
persistentVolumeClaim:
claimName: pv-claim
Testing whether the data is persisted
[root@k8s-master1 ~]# kubectl proxy
[root@k8s-master1 ~]# curl localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
[root@k8s-master1 ~]# curl -X POST -d "Hey there! This greeting was submitted to kubia-0." localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
[root@k8s-master1 ~]# kubectl delete pod kubia-0
[root@k8s-master1 ~]# kubectl get pods
[root@k8s-master1 ~]# curl localhost:8001/api/v1/namespaces/default/pods/kubia-0/proxy/
[root@k8s-master1 ~]# kubectl run -it srvlookup --image=tutum/dnsutils --rm --restart=Never -- dig SRV kubia.default.svc.cluster.local
A StatefulSet's pods do not discover their peers through the API server; inside the cluster they find one another by querying DNS SRV records (which requires the governing headless service).
Creating a new Service in ClusterIP mode
apiVersion: v1
kind: Service
metadata:
name: kubia-public
spec:
selector:
app: kubia
ports:
- port: 80
targetPort: 8080
[root@k8s-master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d
kubia ClusterIP None <none> 80/TCP 43m
kubia-public ClusterIP 10.111.223.113 <none> 80/TCP 76m
[root@k8s-master1 ~]# curl -X POST -d "The sun is shining" localhost:8001/api/v1/namespaces/default/services/kubia-public/proxy/
Data stored on pod kubia-0
[root@k8s-master1 ~]# curl localhost:8001/api/v1/namespaces/default/services/kubia-public/proxy/
You've hit kubia-0
Data stored in the cluster:
- kubia-0.kubia.default.svc.cluster.local: The sun is shining
- kubia-2.kubia.default.svc.cluster.local: The sun is shining
- kubia-1.kubia.default.svc.cluster.local: The sun is shining
[root@k8s-master1 ~]# kubectl get pod -o custom-columns=POD:metadata.name,NODE:spec.nodeName --sort-by spec.nodeName -n kube-system
Network plugins
- Calico
- Flannel
- Romana
- Weave Net
Permission management
[root@k8s-master1 ~]# kubectl create sa foo
[root@k8s-master1 ~]# cat curl-custom-sa.yaml
apiVersion: v1
kind: Pod
metadata:
name: curl-custom-sa
spec:
serviceAccountName: foo
containers:
- name: main
image: tutum/curl
command: ["sleep","9999999"]
- name: ambassador
image: luksa/kubectl-proxy
[root@k8s-master1 ~]# kubectl apply -f curl-custom-sa.yaml
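With the ambassador sidecar running kubectl proxy, the main container can reach the API server over plain HTTP on localhost; a usage sketch (the proxy's default port 8001 is assumed, and whether the request is authorized depends on the RBAC permissions granted to the foo ServiceAccount):
kubectl exec -it curl-custom-sa -c main -- curl -s localhost:8001/api/v1/pods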
Differences between Role and ClusterRole
Definition: a Role grants permissions within a specific namespace, whereas a ClusterRole is scoped to the entire cluster. A Role can only be used to grant access to resources in a single namespace. The Role example below is defined in the "default" namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""] # "" 指定核心 API 组
resources: ["pods"]
verbs: ["get", "watch", "list"]
A ClusterRole can grant the same permissions as a Role, but because it is cluster-scoped it can also grant access to:
- cluster-scoped resources (such as nodes)
- non-resource endpoints (such as "/healthz")
- namespaced resources (such as pods) across all namespaces, which is what is needed to run kubectl get pods --all-namespaces
The ClusterRole below can be used to grant read access to Secrets either within a particular namespace or across all namespaces (depending on how it is bound):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# "namespace" is omitted here because ClusterRoles are not namespaced.
name: secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
A ClusterRoleBinding must be used together with a ClusterRole to grant permissions on cluster-level resources.
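A minimal sketch of such a binding, granting the secret-reader ClusterRole cluster-wide to the foo ServiceAccount created earlier (the subject is illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: foo
  namespace: default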
Assigning different PodSecurityPolicies to different user groups (e.g. which users are allowed to run privileged containers)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: privileged
spec:
privileged: true
runAsUser:
rule: RunAsAny
fsGroup:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
seLinux:
rule: RunAsAny
volumes:
- '*'
[root@k8s-master1 ~]# kubectl apply -f psp-privileged.yaml
podsecuritypolicy.policy/privileged created
[root@k8s-master1 ~]# kubectl get p
persistentvolumeclaims podsecuritypolicies.policy
persistentvolumes podtemplates
poddisruptionbudgets.policy priorityclasses.scheduling.k8s.io
pods
[root@k8s-master1 ~]# kubectl get podsecuritypolicies
Using RBAC to assign different PodSecurityPolicies to different users
[root@k8s-master1 ~]# kubectl create clusterrole psp-default --verb=use --resource=podsecuritypolicies --resource-name=default
[root@k8s-master1 ~]# kubectl create clusterrole psp-privileged --verb=use --resource=podsecuritypolicies --resource-name=privileged
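The ClusterRoles still have to be bound to subjects; a sketch, assuming every authenticated user should get the default policy and a hypothetical user bob the privileged one:
kubectl create clusterrolebinding psp-all-users --clusterrole=psp-default --group=system:authenticated
kubectl create clusterrolebinding psp-bob --clusterrole=psp-privileged --user=bob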
Horizontal scaling of cluster nodes
手动标记节点为不可调度、 排空节点
节点也可以手动被标记为不可调度并排空。 不涉及细节, 这些工作可用以
下 kubectl 命令完成:
• kubectl cordon 标记节点为不可调度(但对其上的 pod不做
任何事)。
• kubectl drain 标记节点为不可调度, 随后疏散其上所有
pod。
两种情形下, 在你用 kubectl uncordon 解除节点的不可调度
状态之前, 不会有新 pod被调度到该节点。
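A usage sketch (the node name is illustrative; --ignore-daemonsets is usually required because DaemonSet pods cannot be evicted):
kubectl cordon k8s-node1
kubectl drain k8s-node1 --ignore-daemonsets
kubectl uncordon k8s-node1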