一、Probes and Hooks
1.pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: nginx-1
    image: nginx:latest
    imagePullPolicy: IfNotPresent
  - name: busybox-1
    image: busybox:1.35.0
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
2.init containers - probes - hooks
apiVersion: v1
kind: Pod
metadata:
  name: alltest
  namespace: default
  labels:
    app: myapp
spec:
  initContainers:
  - name: init-myservice
    image: busybox:1.35.0
    imagePullPolicy: IfNotPresent
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.35.0
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
  containers:
  - name: busybox
    image: busybox:1.35.0
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/live && sleep 36000"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/live"]
      initialDelaySeconds: 1
      periodSeconds: 3
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /var/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /var/message"]
  - name: nginx
    image: nginx:latest
    readinessProbe:
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1
      periodSeconds: 3
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 3
二、Controllers
1.RC
apiVersion: v1                    # defined in the core v1 API group
kind: ReplicationController       # full resource name; see kubectl explain rc
metadata:
  name: frontend
spec:
  replicas: 3                     # desired number of replicas
  selector:                       # pods labeled app: nginx are managed by this RC;
    app: nginx                    # the selector must be a subset of the template labels below
  template:                       # pod template used to create the 3 replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: php-redis
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        env:                      # inject two environment variables into the container
        - name: GET_HOSTS_FROM    # first variable: GET_HOSTS_FROM
          value: dns
        - name: zhangsan          # second variable: zhangsan
          value: "123"
        ports:
        - containerPort: 80
        livenessProbe:            # liveness probe
          httpGet:                # probe via HTTP GET
            port: 80              # against port 80
            path: /index.html     # requesting /index.html
          initialDelaySeconds: 1  # wait 1s before the first probe
          periodSeconds: 3        # probe every 3s
          timeoutSeconds: 3       # probe timeout: 3s
2.RS (label selectors)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
Label selector notes:
matchExpressions
#expression-based matching
matchLabels
#exact label matching
Expression operators:
In: the label's value is in the given list
NotIn: the label's value is not in the given list
Exists: the label key exists (values are irrelevant; only the presence of the key matters)
DoesNotExist: the label key does not exist
selector:                  # example: In
  matchExpressions:
  - key: app
    operator: In           # the value of app must be in the values list,
    values:                # i.e. either spring-k8s or hahahah
    - spring-k8s
    - hahahah
selector:                  # example: Exists
  matchExpressions:
  - key: app
    operator: Exists
selector:                  # example: combining matchExpressions and matchLabels
  matchExpressions:
  - key: app
    operator: Exists       # the key app must exist
  matchLabels:
    app: nginx
selector:                  # example: matchLabels only
  matchLabels:
    tier: frontend
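The operator semantics above can be sketched in a few lines of Python (an illustrative helper, not the real Kubernetes implementation):

```python
# Evaluate one matchExpressions entry against a pod's labels.
def matches(labels, expr):
    key, op = expr["key"], expr["operator"]
    if op == "In":
        return labels.get(key) in expr["values"]
    if op == "NotIn":        # a pod missing the key also satisfies NotIn
        return labels.get(key) not in expr["values"]
    if op == "Exists":       # only the presence of the key matters
        return key in labels
    if op == "DoesNotExist":
        return key not in labels
    raise ValueError("unknown operator: " + op)

pod_labels = {"app": "spring-k8s", "tier": "frontend"}
print(matches(pod_labels, {"key": "app", "operator": "In",
                           "values": ["spring-k8s", "hahahah"]}))       # True
print(matches(pod_labels, {"key": "env", "operator": "DoesNotExist"}))  # True
```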
3.Deployment
apiVersion: apps/v1             # apps/v1 (extensions/v1beta1 was removed in Kubernetes 1.16)
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  revisionHistoryLimit: 0       # with 0, no old ReplicaSets are kept, so rollbacks are impossible
  replicas: 3
  selector:                     # must match the pod template labels below
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:              # rolling update settings
      maxSurge: 2               # how many pods may exceed the desired count: an absolute number or a percentage
      maxUnavailable: 0         # at most 0 pods may be unavailable during the update
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
kubectl create -f https://kubernetes.io/docs/user-guide/nginx-deployment.yaml --record
# --record stores the command in an annotation, making it easy to see what changed in each revision
# (deprecated in recent Kubernetes versions)
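With 3 replicas, maxSurge: 2 and maxUnavailable: 0, a rollout may run up to 5 pods at once and never drops below 3 ready pods. A back-of-the-envelope sketch of that arithmetic (percentages round surge up and unavailable down, per the Kubernetes rules):

```python
import math

def rolling_update_bounds(replicas, max_surge, max_unavailable):
    """Return (max total pods, min available pods) during a RollingUpdate.
    max_surge / max_unavailable may be absolute ints or percentage strings."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = int(value[:-1]) / 100 * replicas
            return math.ceil(frac) if round_up else math.floor(frac)
        return value
    surge = resolve(max_surge, round_up=True)              # surge rounds up
    unavailable = resolve(max_unavailable, round_up=False) # unavailable rounds down
    return replicas + surge, replicas - unavailable

print(rolling_update_bounds(3, 2, 0))          # (5, 3)
print(rolling_update_bounds(4, "25%", "25%"))  # (5, 3)
```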
4.DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: deamonset-example
  labels:
    app: daemonset               # the DaemonSet itself is labeled app: daemonset
spec:
  selector:                      # a selector holds matchLabels and/or matchExpressions
    matchLabels:                 # match against the pod labels
      name: deamonset-example
  template:                      # pod template
    metadata:
      labels:
        name: deamonset-example  # must match the selector above
    spec:
      containers:
      - name: daemonset-example
        image: nginx:latest
        imagePullPolicy: IfNotPresent
5.StatefulSet-svc-pvc
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:              # PersistentVolumeClaim template
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"        # must match the PV's storageClassName to bind
      resources:
        requests:
          storage: 2Gi
6.CronJob
apiVersion: batch/v1        # batch/v1beta1 was removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.35.0
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          # restart policy: OnFailure (restart on failure), Never (never restart), Always (always restart)
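The `*/1 * * * *` schedule fires every minute. How a single cron field (here the minute field) is matched can be sketched like this (a simplified parser, not the CronJob controller's code):

```python
def minute_matches(field, minute):
    """Match one cron minute field such as '*/1', '5', '0,30', or '10-20'."""
    for part in field.split(","):
        step = 1
        if "/" in part:                     # '*/15' -> range with a step
            part, step = part.split("/")
            step = int(step)
        if part == "*":
            lo, hi = 0, 59
        elif "-" in part:                   # '10-20' -> explicit range
            lo, hi = map(int, part.split("-"))
        else:                               # single value
            lo = hi = int(part)
        if lo <= minute <= hi and (minute - lo) % step == 0:
            return True
    return False

print(minute_matches("*/1", 7))    # True -> fires every minute
print(minute_matches("0,30", 15))  # False
```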
三、Service Types
1.ClusterIP-deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clusterip-deploy
  namespace: default
spec:
  replicas: 3
  selector:                         # label selector
    matchLabels:
      app: clusterip
      release: stabel
  template:                         # pod template
    metadata:
      labels:
        app: clusterip
        release: stabel
        env: test
    spec:
      containers:                   # main containers
      - name: clusterip-pod
        image: nginx:latest
        imagePullPolicy: IfNotPresent   # skip the pull if the image already exists locally
        ports:
        - name: http                    # port name
          containerPort: 80             # informational; documents the container's port
---
apiVersion: v1
kind: Service
metadata:
  name: clusterip-svc
  namespace: default
spec:
  type: ClusterIP
  sessionAffinity: None        # change None to ClientIP to enable session affinity
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 300      # session stickiness timeout: 300s
  selector:                    # select pods carrying both labels below
    app: clusterip
    release: stabel
  ports:
  - name: http                 # port name
    port: 80                   # the port on the cluster VIP
    targetPort: 80             # the port on the backend pods
# IPVS proxying runs in NAT mode
2.headless service
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: headless
  clusterIP: "None"   # None makes this a headless service
  ports:
  - port: 80
    targetPort: 80
3.NodePort-deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeport-deploy
  namespace: default
spec:
  replicas: 3
  selector:                         # label selector
    matchLabels:
      app: nodeport
      release: stabel
  template:                         # pod template
    metadata:
      labels:
        app: nodeport
        release: stabel
        env: test
    spec:
      containers:                   # main containers
      - name: nodeport-deploy-pod
        image: nginx:latest
        imagePullPolicy: IfNotPresent   # skip the pull if the image already exists locally
        ports:
        - name: http
          containerPort: 80             # informational; documents the container's port
---
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc
  namespace: default
spec:
  type: NodePort        # NodePort mode
  selector:
    app: nodeport
    release: stabel
  ports:
  - name: http
    port: 8080          # the cluster (VIP) port
    targetPort: 8081    # the backend pod port
    nodePort: 30008     # port exposed on every node; must fall inside the default 30000-32767 range
kubectl get svc
nodeport-svc NodePort 172.21.9.130 <none> 8080:31553/TCP 38m
# 31553 is the node port exposed on the physical machines.
# Inside the cluster, access 172.21.9.130:8080; from outside, access <node IP>:31553.
4.LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer-svc
spec:
  selector:
    app: loadbalancer    # must match your pod labels
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80             # port exposed by the service
    targetPort: 8080     # port on the pods
  # if the cloud provider supports it, a node port can also be pinned, e.g.:
  # nodePort: 30000
5.Endpoints
apiVersion: v1
kind: Service
metadata:
  name: endpoint-svc
spec:
  ports:
  - protocol: TCP
    port: 6666        # cluster port
    targetPort: 80    # backend port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: endpoint-svc   # must match the Service name
subsets:               # backend addresses (pod IPs or external services)
- addresses:
  - ip: 172.20.1.9
  - ip: 172.20.0.249
  - ip: 172.20.1.10
  ports:
  - port: 80
- addresses:
  - ip: 192.168.1.1
  ports:
  - name: http
    port: 8080
# Traffic to the Service address is proxied, via the Endpoints we maintain ourselves, to node01.
# If the endpoint addresses are not node01's IP but other IPs such as .105 or .106,
# an external service gets wrapped, through manually maintained addresses, as an in-cluster svc:
# accessing the svc then reaches the external service.
6.Ingress-NodePort-Deployment HTTP proxying
apiVersion: apps/v1
kind: Deployment                # deploy a Deployment controller
metadata:
  name: ingress-deploy          # the Deployment is named ingress-deploy
spec:
  replicas: 3                   # three replicas
  selector:                     # label selector
    matchLabels:
      name: ingress
  template:                     # pod template
    metadata:
      labels:
        name: ingress           # pod label
    spec:
      containers:               # main container
      - name: ingress-nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP               # TCP protocol
  selector:
    name: ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress                   # Ingress resource
metadata:
  name: ingress
spec:
  ingressClassName: "nginx"     # handled by the ingressclass named nginx; see kubectl get ingressclass
  rules:
  - host: www.daboluo.com       # requests for the host www.daboluo.com
    http:                       # over HTTP
      paths:
      - path: /                 # path to match
        pathType: Prefix        # pathType is required in networking.k8s.io/v1
        backend:
          service:
            name: ingress-svc   # requests under / are served by the ingress-svc Service above
            port:
              number: 80
# Since Kubernetes 1.19 the Ingress backend uses the new field structure,
# including pathType and a service reference.
# On controller/version warnings, inspect the fields with kubectl explain ingress.
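pathType: Prefix matches element-wise on the path split by "/": "/foo" matches "/foo" and "/foo/bar" but not "/foobar". A rough Python sketch of that rule (an approximation, not the controller's implementation):

```python
def prefix_match(rule_path, request_path):
    """Approximate Ingress pathType: Prefix matching (element-wise on '/')."""
    rule = [seg for seg in rule_path.split("/") if seg]
    req = [seg for seg in request_path.split("/") if seg]
    return req[:len(rule)] == rule     # every rule segment must match in order

print(prefix_match("/", "/anything"))    # True  -> '/' matches every path
print(prefix_match("/foo", "/foo/bar"))  # True
print(prefix_match("/foo", "/foobar"))   # False -> not a whole path segment
```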
7.Ingress-NodePort-Deployment-secret HTTPS proxying
Create the private key and certificate as a secret
#create the private key; enter the passphrase twice
openssl genrsa -des3 -out server.key 2048
#create the certificate signing request; enter the key passphrase
openssl req -new -key server.key -out server.csr
#back up the private key
cp server.key server.key.org
#strip the passphrase from the private key
openssl rsa -in server.key.org -out server.key
#self-sign the CSR with the private key to produce the certificate
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
mv server.crt tls.crt
mv server.key tls.key
# pack the two files into a secret object
# a secret is a special storage medium dedicated to this kind of sensitive material
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
Create the YAML file
vim ingress-svc-deployment-https.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-ssl
spec:
  replicas: 3
  selector:                    # label selector
    matchLabels:
      app: ssl
  template:
    metadata:
      labels:
        app: ssl
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-ssl
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
  selector:
    app: ssl
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ssl.daboluo.com
  namespace: default
  annotations:                 # metadata read by third-party components, here ingress-nginx
    kubernetes.io/ingress.class: "nginx"     # legacy form of spec.ingressClassName below
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # force a redirect from http to https; with these two annotations on the Ingress,
    # ingress-nginx knows that https access is enabled for it
spec:
  ingressClassName: "nginx"    # handled by the ingressclass named nginx; see kubectl get ingressclass
  tls:
  - hosts:
    - ssl.daboluo.com
    secretName: tls-secret     # the TLS certificate secret created above
  rules:                       # client routing rules
  - host: ssl.daboluo.com
    http:
      paths:
      - path: /                # requests under the root path are routed to the ingress-ssl Service
        pathType: Prefix       # pathType is required in networking.k8s.io/v1
        backend:
          service:
            name: ingress-ssl  # the Service name defined above
            port:
              number: 8080     # the Service port defined above
Test access
curl -k -L https://ssl.daboluo.com
# -k / --insecure skips certificate verification
# -L tells curl to follow redirects
8.NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db          # the policy applies to pods labeled role: db
  policyTypes:
  - Ingress             # govern traffic entering the pods
  - Egress              # govern traffic leaving the pods
  ingress:
  - from:
    - ipBlock:          # allow a specific IP CIDR range as the traffic source
        cidr: 172.17.0.0/16
        except:         # minus these excluded ranges
        - 172.17.1.0/24
    - namespaceSelector:   # all pods in namespaces carrying this label
        matchLabels:
          project: myproject
    - podSelector:      # pods in this namespace matching this label
        matchLabels:
          role: frontend
    ports:              # the sources above may reach port 6379 on the role: db pods
    - protocol: TCP
      port: 6379
  egress:               # egress rules
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:              # role: db pods may reach TCP port 5978 inside 10.0.0.0/24
    - protocol: TCP
      port: 5978
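The ipBlock rule above means "inside the CIDR, but outside every except range". That check is easy to reproduce with the standard library (an illustrative sketch, not how the CNI plugin evaluates it):

```python
import ipaddress

def ip_allowed(ip, cidr, excepts):
    """ipBlock semantics: the address is in cidr but in none of the except ranges."""
    addr = ipaddress.ip_address(ip)
    if addr not in ipaddress.ip_network(cidr):
        return False
    return not any(addr in ipaddress.ip_network(e) for e in excepts)

print(ip_allowed("172.17.2.5", "172.17.0.0/16", ["172.17.1.0/24"]))  # True
print(ip_allowed("172.17.1.5", "172.17.0.0/16", ["172.17.1.0/24"]))  # False -> excluded
```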
四、Storage
1.Deployment-Secret
#base64 encode
echo -n "daboluo" | base64
#base64 decode
echo -n "ZGFib2x1bw==" | base64 -d
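The same encoding can be produced programmatically; a quick Python check of the values used here (note base64 is an encoding, not encryption):

```python
import base64

encoded = base64.b64encode(b"daboluo").decode()
print(encoded)                             # ZGFib2x1bw==
print(base64.b64decode(encoded).decode())  # daboluo

# one of the keys used in the mysecret manifest that follows:
print(base64.b64encode(b"admin").decode())  # YWRtaW4=
```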
Create the secret
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm   # values under data must be base64-encoded
  username: YWRtaW4=
Mount the Secret as a volume
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: secret-test
  name: secret-test
spec:
  volumes:                   # volumes
  - name: volumes12
    secret:                  # backed by a secret
      secretName: mysecret
  containers:                # main container
  - image: nginx:latest
    name: db
    volumeMounts:
    - name: volumes12
      mountPath: "/data"
Export the Secret into environment variables
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-pod-secret
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deployment-secret
  template:
    metadata:
      labels:
        app: deployment-secret
    spec:
      containers:
      - name: deployment-pod-secret-test
        image: centos:7.9.2009
        imagePullPolicy: IfNotPresent
        command: [ "/bin/sh","-c","tailf /var/log/yum.log" ]
        ports:
        - containerPort: 80
        env:
        - name: TEST_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: TEST_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
2.configmap
a. Create from a directory
ls docs/user-guide/configmap/kubectl/
game.file
ui.properties
cat docs/user-guide/configmap/kubectl/game.file
version=1.17
name=dave
age=18
cat docs/user-guide/configmap/kubectl/ui.properties
level=2
color=yellow
# with --from-file pointing at a directory, every file in it becomes a key/value pair
# in the ConfigMap: the key is the file name and the value is the file content
kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
b. Create from files
# --from-file may be repeated; passing it twice with the two files above has the same effect as pointing at the whole directory
kubectl create configmap game-config-2 --from-file=./game.file
c. Create from literal values
kubectl create configmap literal-config --from-literal=name=dave --from-literal=password=pass
d. Use a ConfigMap to populate environment variables
vim configmap-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
kubectl apply -f configmap-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-test-pod
spec:
  containers:
  - name: test-container
    image: nginx:latest
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: USERNAME              # define an env var named USERNAME
      valueFrom:                  # sourced from
        configMapKeyRef:          # a ConfigMap key reference:
          name: literal-config    # the literal-config ConfigMap,
          key: name               # its name key
    - name: PASSWORD
      valueFrom:
        configMapKeyRef:
          name: literal-config
          key: password
    envFrom:
    - configMapRef:
        name: env-config
  restartPolicy: Never
e. Use ConfigMap values as command-line arguments
apiVersion: v1
kind: Pod
metadata:
  name: cm-command-dapi-test-pod
spec:
  containers:
  - name: test-container
    image: nginx:latest
    command: [ "/bin/sh", "-c", "echo $(USERNAME) $(PASSWORD)" ]
    env:
    - name: USERNAME
      valueFrom:
        configMapKeyRef:
          name: literal-config
          key: name
    - name: PASSWORD
      valueFrom:
        configMapKeyRef:
          name: literal-config
          key: password
  restartPolicy: Never
f. Consume a ConfigMap through a volume
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-test-pod
spec:
  containers:
  - name: test-container
    image: nginx:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:                    # volumes
  - name: config-volume       # the volume named config-volume
    configMap:                # is backed by
      name: literal-config    # the ConfigMap named literal-config
  restartPolicy: Never
3.Volumes
a.emptyDir
apiVersion: batch/v1
kind: Job
metadata:
  name: jobs-empty
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: job-1
        image: busybox:1.34.1
        command:
        - 'sh'
        - '-c'
        - >
          for i in 1 2 3;
          do
            echo "job-1 `date`";
            sleep 1s;
          done;
          echo job-1 GG > /srv/input/code
        volumeMounts:
        - mountPath: /srv/input/
          name: input
      - name: job-2
        image: busybox:1.34.1
        command:
        - 'sh'
        - '-c'
        - >
          for i in 1 2 3;
          do
            echo "job-2 `date`";
            sleep 1s;
          done;
          cat /srv/input/code &&
          echo job-2 GG > /srv/input/output/file
        volumeMounts:
        - mountPath: /srv/input/
          name: input
        - mountPath: /srv/input/output/
          name: output
      containers:
      - name: job-3
        image: busybox:1.34.1
        command:
        - 'sh'
        - '-c'
        - >
          echo "job-1 and job-2 completed";
          sleep 3s;
          cat /srv/output/file
        volumeMounts:
        - mountPath: /srv/output/
          name: output
      volumes:
      - name: input
        emptyDir: {}
      - name: output
        emptyDir: {}
b.hostPath
apiVersion: v1
kind: Pod
metadata:
  name: volume-test-pd
spec:
  containers:
  - image: centos:7.9.2009
    name: volume-test-container
    command: [ "/bin/sh","-c","tailf /var/log/yum.log" ]
    volumeMounts:
    - mountPath: /volume-test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data        # directory on the host
      type: Directory    # hostPath type
| Value | Behavior |
|---|---|
| (empty string, default) | Backward compatible: no check is performed before mounting the hostPath volume. |
| DirectoryOrCreate | If nothing exists at the given path, an empty directory is created there as needed, with permissions 0755 and the same group and ownership as the kubelet. (Directories only.) |
| Directory | A directory must exist at the given path. |
| FileOrCreate | If nothing exists at the given path, an empty file is created there as needed, with permissions 0644 and the same group and ownership as the kubelet. |
| File | A file must exist at the given path. |
| Socket | A UNIX socket must exist at the given path. |
| CharDevice | A character device must exist at the given path. |
| BlockDevice | A block device must exist at the given path. |
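The check-vs-create distinction in the table can be sketched as follows (a simplified illustration of the kubelet's behavior, not its actual code; device and socket types are omitted):

```python
import os
import tempfile

def check_hostpath(path, hp_type):
    """Emulate a few hostPath type checks: existing vs. create-on-demand."""
    if hp_type == "Directory":            # must already exist as a directory
        return os.path.isdir(path)
    if hp_type == "File":                 # must already exist as a file
        return os.path.isfile(path)
    if hp_type == "DirectoryOrCreate":    # create an empty 0755 directory if absent
        os.makedirs(path, mode=0o755, exist_ok=True)
        return True
    if hp_type == "FileOrCreate":         # create an empty 0644 file if absent
        if not os.path.exists(path):
            open(path, "a").close()
            os.chmod(path, 0o644)
        return os.path.isfile(path)
    return True                           # "" (empty): no check, backward compatible

base = tempfile.mkdtemp()
print(check_hostpath(os.path.join(base, "data"), "Directory"))          # False
print(check_hostpath(os.path.join(base, "data"), "DirectoryOrCreate"))  # True
```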
4.PersistentVolume
NFS-backed PV:
apiVersion: v1
kind: PersistentVolume           # persistent volume resource
metadata:
  name: pv003                    # this PV is named pv003
  labels:
    type: nfs
spec:
  capacity:                      # capacity
    storage: 5Gi                 # 5Gi of storage
  volumeMode: Filesystem         # volume mode: presented as a filesystem
  accessModes:                   # access modes
  - ReadWriteOnce                # read-write by a single node
  persistentVolumeReclaimPolicy: Recycle   # reclaim policy: basic scrub (`rm -rf /thevolume/*`)
  storageClassName: slow         # storage class name
  mountOptions:                  # mount options
  - hard
  - nfsvers=4.1                  # NFS version
  nfs:                           # the PV is backed by NFS
    path: /tmp                   # exported path: /tmp
    server: 172.17.0.2           # NFS server address: 172.17.0.2
PV backed by a local directory:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # manual reclaim
  hostPath:
    path: "/mnt/data"
5.PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: manual
# The PVC's storageClassName must match an existing StorageClass name,
# or be left empty to use the default StorageClass. This example uses manual.
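A PVC binds to a PV when the class matches, the requested access modes are offered, and the PV is large enough. A simplified sketch of that matching (illustrative only; the real binder also considers volume mode, node affinity, etc.):

```python
def pv_matches_pvc(pv, pvc):
    """Simplified PV/PVC binding check: class, access modes, capacity."""
    return (pv["storageClassName"] == pvc["storageClassName"]
            and set(pvc["accessModes"]) <= set(pv["accessModes"])
            and pv["capacityGi"] >= pvc["requestGi"])

pv = {"storageClassName": "manual", "accessModes": ["ReadWriteOnce"], "capacityGi": 5}
pvc = {"storageClassName": "manual", "accessModes": ["ReadWriteOnce"], "requestGi": 1}
print(pv_matches_pvc(pv, pvc))  # True -> the 1Gi claim can bind to the 5Gi manual PV
```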
五、Affinity
1.Node hard affinity (node_required_affinity)
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: node-affinity
    image: nginx:latest
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - k8s-node02
2.Node soft affinity (node_preferred_affinity)
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: nginx:latest
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In          # In operator
            values:
            - k8s-node02          # prefer scheduling onto k8s-node02
3.Pod affinity
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
  labels:
    app: pod-affinity
spec:
  containers:
  - name: pod-affinity
    image: nginx:latest
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-1
        topologyKey: kubernetes.io/hostname
4.Pod anti-affinity
apiVersion: v1
kind: Pod
metadata:
  name: pod-antiaffinity
  labels:
    app: pod-antiaffinity
spec:
  containers:
  - name: pod-antiaffinity
    image: nginx:latest
    imagePullPolicy: IfNotPresent
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - pod-1
          topologyKey: kubernetes.io/hostname
# Anti-affinity: the scheduler prefers nodes that are NOT running a pod labeled app: pod-1,
# steering this pod away from pod-1's node.
六、Taints and Tolerations
1.Taints
# list pods with the nodes they landed on
kubectl get pod -o wide
# view a node's taints: the Taints field of the node description
kubectl describe node k8s-master01
# add a taint
kubectl taint node k8s-master01 node-role.kubernetes.io/master=:NoSchedule
# remove a taint
kubectl taint node k8s-master01 node-role.kubernetes.io/master=:NoSchedule-
2.Tolerations
tolerations_daemonset
vim tolerations_daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tolerations-daemonset
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: tolerations-daemonset-pod
  template:
    metadata:
      labels:
        name: tolerations-daemonset-pod
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"   # the key of the master node's taint
        operator: "Exists"                      # Exists: tolerate the taint whenever the key is present
      containers:
      - name: tolerations-daemonset-pod
        image: nginx:latest
        imagePullPolicy: IfNotPresent
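How a toleration matches a taint can be sketched like this (a simplified illustration of the Exists/Equal operators; effect matching is left out for brevity):

```python
def tolerates(toleration, taint):
    """Simplified taint/toleration matching: key plus Exists or Equal operator."""
    # an empty key with Exists tolerates everything; otherwise keys must match
    if toleration.get("key") and toleration["key"] != taint["key"]:
        return False
    op = toleration.get("operator", "Equal")
    if op == "Exists":                 # presence of the key is enough
        return True
    return toleration.get("value") == taint.get("value")   # Equal: values must match

taint = {"key": "node-role.kubernetes.io/master", "value": "", "effect": "NoSchedule"}
print(tolerates({"key": "node-role.kubernetes.io/master", "operator": "Exists"}, taint))  # True
print(tolerates({"key": "other", "operator": "Exists"}, taint))                           # False
```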
七、Scheduler
1.Scheduling to a fixed node
vim scheduler-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler-deploy
spec:
  selector:
    matchLabels:
      app: scheduler-deploy
  replicas: 7
  template:
    metadata:
      labels:
        app: scheduler-deploy
    spec:
      nodeName: cn-chengdu-scyc-d01.10.88.62.172   # pin the pods to this node by name
      containers:
      - name: scheduler-deploy
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
2.Scheduling by node label
vim scheduler-selector-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler-selector-deploy
spec:
  selector:
    matchLabels:
      app: scheduler-selector-deploy
  replicas: 20
  template:
    metadata:
      labels:
        app: scheduler-selector-deploy
    spec:
      nodeSelector:
        app: selector    # only schedule onto nodes labeled app: selector
      containers:
      - name: scheduler-selector-pod
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
八、Resource Limits
1.Pod
spec:
  containers:
  - image: nginx:latest
    name: auth
    resources:
      limits:
        cpu: "4"
        memory: 2Gi
      requests:
        cpu: 250m
        memory: 250Mi
# limits is the ceiling a container may use;
# requests is the amount reserved for it at scheduling time.
# Think of them as the maximum and the initial allocation.
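The quantities above mix units: 250m CPU means 0.25 cores, 2Gi memory means 2×2^30 bytes. A simplified parser for these suffixes (a sketch covering only common suffixes, not the full Kubernetes quantity grammar):

```python
def parse_quantity(q):
    """Parse a quantity like '250m' (CPU) or '2Gi' (memory) into base units
    (cores or bytes)."""
    suffixes = {"m": 1e-3,                # milli-cores
                "k": 1e3, "M": 1e6, "G": 1e9,          # decimal byte suffixes
                "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}  # binary byte suffixes
    for suf in sorted(suffixes, key=len, reverse=True):  # try 'Gi' before 'G'
        if q.endswith(suf):
            return float(q[:-len(suf)]) * suffixes[suf]
    return float(q)

print(parse_quantity("250m"))  # 0.25 cores
print(parse_quantity("2Gi"))   # 2147483648.0 bytes
print(parse_quantity("4"))     # 4.0
```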
2.Namespace
(1) Compute resource quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: spark-cluster
spec:
  hard:                      # hard limits for the namespace
    requests.cpu: "20"       # total CPU requests capped at 20 cores
    requests.memory: 100Gi
    limits.cpu: "40"
    limits.memory: 200Gi
(2) Object count quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: spark-cluster
spec:
  hard:
    pods: "20"
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
(3) CPU and memory LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: example
spec:
  limits:
  - default:                 # default limit
      memory: 512Mi
      cpu: 2
    defaultRequest:          # default request (initial allocation)
      memory: 256Mi
      cpu: 0.5
    max:                     # maximum allowed limit
      memory: 800Mi
      cpu: 3
    min:                     # minimum allowed request
      memory: 100Mi
      cpu: 0.3
    maxLimitRequestRatio:    # maximum limit/request ratio (overcommit)
      memory: 2
      cpu: 2
    type: Container          # Container / Pod / PersistentVolumeClaim