Chapter 4: Running Your Docker Containers
It is recommended that a container run only one process, but real workloads usually need several. Kubernetes addresses this by grouping your containers into a pod, which can share resources such as files and networking.
WordPress example
needs an nginx HTTP server
and FPM, the PHP interpreter (php-fpm)
Pod notes
every pod gets its own IP address
a pod should contain everything a single microservice needs
pods should be treated as stateless
Creation example
kubectl run nginx-pod --image nginx:latest
vim nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
kubectl apply -f nginx-pod.yaml
Getting information
kubectl get pods [pod-name] -o yaml   # pod name optional
kubectl get pods [pod-name] -o json
Accessing the pod from outside
# 8080 is the local port; 80 is the port inside the container
kubectl port-forward pod/nginx-pod 8080:80
Deleting
kubectl delete -f nginx-pod.yaml
Labels
Limited to 63 characters; typical label keys include environment, stack, tier, app_name, team, and so on.
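The 63-character limit is easy to check programmatically. A minimal sketch of the label-value rule, assuming the commonly documented constraints (starts and ends with an alphanumeric, with `-`, `_`, `.` allowed in between; the empty value is also legal):

```python
import re

# Assumed label-value rule: <= 63 chars, alphanumeric at both ends,
# '-', '_' and '.' permitted in the middle; empty is allowed.
LABEL_VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def is_valid_label_value(value: str) -> bool:
    return len(value) <= 63 and LABEL_VALUE_RE.match(value) is not None

print(is_valid_label_value("prod"))      # True
print(is_valid_label_value("a" * 64))    # False: too long
print(is_valid_label_value("-prod"))     # False: must start alphanumeric
```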
kubectl run nginx-pod --image nginx -l "env=prod"
vim nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    env: prod
    tier: frontend
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
kubectl apply -f nginx-pod.yaml
kubectl get pods -l "env=prod"
Annotations
Annotations are generally not used to identify or select objects; they hold non-identifying metadata.
Job example
vim hello-world-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world-job
spec:
  # Number of retries after a failure
  backoffLimit: 3
  # Total number of successful completions; watch with: kubectl get pods --watch
  completions: 10
  # How many pods may run in parallel
  parallelism: 5
  # The Job is terminated once it has been active this many seconds
  activeDeadlineSeconds: 60
  # How long the finished Job is kept before automatic cleanup
  ttlSecondsAfterFinished: 30
  template:
    metadata:
      name: hello-world-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: hello-world-container
        image: busybox
        command: ["/bin/sh", "-c"]
        args: ["echo 'Hello world'; sleep 3"]
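With completions: 10 and parallelism: 5, the Job runs its pods in waves. A back-of-the-envelope sketch (pure arithmetic, no Kubernetes API involved):

```python
import math

# Values from the Job manifest above.
completions = 10  # total successful runs required
parallelism = 5   # pods allowed to run at the same time

# Number of scheduling waves if every pod succeeds on the first try.
waves = math.ceil(completions / parallelism)
print(waves)  # 2: two batches of 5 pods each
```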
CronJob
A CronJob is just a Job wrapped in a schedule. The original book has serious problems here; refer to the official documentation: CronJob | Kubernetes.
vim hello-world-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-world-cronjob
spec:
  # Cron expression; in "0 */1 * * *" the first field is the minute
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello-world-container
            image: busybox
            command: ["/bin/sh", "-c"]
            args: ["echo 'Hello world'"]
          restartPolicy: OnFailure
kubectl get cronjobs
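The schedule field uses the standard five-field cron format (minute, hour, day-of-month, month, day-of-week); "* * * * *" above means "every minute". A small illustration of the field layout (plain Python, not a Kubernetes API):

```python
# Standard cron field order; the first field is the minute.
CRON_FIELDS = ["minute", "hour", "day-of-month", "month", "day-of-week"]

def describe_schedule(expr: str) -> dict:
    """Split a cron expression into named fields (no range/step validation)."""
    fields = expr.split()
    if len(fields) != len(CRON_FIELDS):
        raise ValueError("a cron schedule needs exactly 5 fields")
    return dict(zip(CRON_FIELDS, fields))

print(describe_schedule("0 */1 * * *"))
# {'minute': '0', 'hour': '*/1', 'day-of-month': '*', 'month': '*', 'day-of-week': '*'}
```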
Multi-container pods
This includes several design patterns, such as the ambassador, sidecar, and adapter containers.
We will create a pod with at least two containers and see how to target a specific container inside it.
vim multi-container.yaml
# The manifest:
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
  - name: tomcat
    image: tomcat:7.0.75-jre8-alpine
    ports:
    - containerPort: 8080
kubectl apply -f multi-container.yaml
What happens on errors
1. First, the YAML is validated and recorded in etcd.
2. Kubernetes then tries to launch the containers.
3. If that fails, it keeps retrying.
Grace period
Graceful shutdown; to delete a pod immediately, pass --grace-period=0.
Entering a container
# Run a command
kubectl exec multi-pod --container nginx-container -- ls -l
# Interactive mode
kubectl exec -it multi-pod --container nginx-container -- /bin/bash
In Docker, ENTRYPOINT is the main executable started when the container launches, and CMD supplies its arguments.
Startup arguments
command overrides the Dockerfile's ENTRYPOINT
args overrides the Dockerfile's CMD (see Define a Command and Arguments for a Container | Kubernetes)
initContainers
Run to completion before the regular containers start.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init-container
spec:
  initContainers:
  - name: my-init-container
    image: busybox:latest
    command: ["sleep", "15"]
  containers:
  - name: nginx-container
    image: nginx:latest
Logs
kubectl logs -f pods/multi-pod --container tomcat --since=124h --tail=30
Volumes and PersistentVolumes
A volume can loosely be understood as sharing the pod's lifecycle, while a PersistentVolume outlives the pod. This is a simplification and not entirely accurate; on AWS, for example, the backing storage has its own independent lifecycle.
Two kinds for now, emptyDir and hostPath; persistentVolumeClaim comes later.
emptyDir
As the name suggests, the directory is empty when the pod is created.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-with-empty-dir
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    volumeMounts:
    - mountPath: /var/i-am-empty-dir-volume
      name: empty-dir-volume
  - name: busybox-container
    image: busybox:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 30; done;"] # Prevents busybox from exiting immediately
    volumeMounts:
    - mountPath: /var/i-am-empty-dir-volume
      name: empty-dir-volume
  volumes:
  - name: empty-dir-volume # name of the volume
    emptyDir: {} # Initialized as an empty directory
# Verify the volume is shared: create a file in the first container
kubectl exec -it two-containers-with-empty-dir --container nginx-container -- /bin/bash
cd /var/i-am-empty-dir-volume
touch a.txt
# Check whether it also shows up in the second container
kubectl exec two-containers-with-empty-dir --container busybox-container -- ls /var/i-am-empty-dir-volume
hostPath
vim host-path.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod-with-host-path
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    volumeMounts:
    - mountPath: /var/config
      name: my-host-path-volume
  - name: busybox-container
    image: busybox:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 30; done;"] # Prevents busybox from exiting immediately
  volumes:
  - name: my-host-path-volume
    hostPath:
      path: /tmp # The path on the worker node
# On the worker node:
echo 1 > /tmp/host-path.txt
kubectl exec multi-container-pod-with-host-path --container nginx-container -- cat /var/config/host-path.txt
Ambassador pattern
A typical scenario: next to the main container you deploy an "ambassador" container that handles communication with the outside world on its behalf, e.g. a SQL proxy.
Sidecar pattern
A container that assists the main container, e.g. for monitoring or log collection.
Adapter pattern
1. The main container. 2. An adapter container, quite similar to a sidecar but with an extra translation layer.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-ambassador
spec:
  containers:
  - name: mysql-proxy-ambassador-container
    image: mysql-proxy:latest
    ports:
    - containerPort: 3306
    env:
    - name: DB_HOST
      value: mysql.xxx.us-east-1.rds.amazonaws.com
  - name: nginx-container
    image: nginx:latest
Chapter 6: ConfigMaps and Secrets
The goal is decoupling: for example, production and test run the same code but point at different MySQL databases.
A ConfigMap holds non-sensitive information; Secrets store things like database passwords.
1. Create the ConfigMap or Secret.
2. Populate your configuration values.
3. Create a pod that references the ConfigMap or Secret.
There are two common consumption patterns:
write the values into environment variables, or
mount a volume, which injects every value under the corresponding directory into the container.
ConfigMaps in practice (CRUD)
kubectl get configmaps
kubectl get cm
kubectl create configmap my-first-configmap
# Inspect the values in a ConfigMap
kubectl describe cm my-first-configmap
kubectl create configmap my-first-configmap --from-literal=color=blue \
--from-file=$HOME/configfile.txt
# Create an env-file, i.e. a file of key=value lines; it cannot be mixed with --from-literal or --from-file
vim env-file.txt
hello=world
release=1.1.1
kubectl create configmap my-first-configmap --from-env-file=./env-file.txt
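An env-file is just key=value lines. A minimal parser sketch in Python (illustrative only, not kubectl's actual implementation):

```python
def parse_env_file(text: str) -> dict:
    """Parse key=value lines, skipping blank lines and # comments."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        result[key] = value
    return result

print(parse_env_file("hello=world\nrelease=1.1.1"))
# {'hello': 'world', 'release': '1.1.1'}
```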
vim configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-fifth-configmap
data:
  color: "blue"
  version: "1"
  environment: "prod"
  configfile.txt: |
    I'm another configuration file
kubectl create -f configmap.yaml
kubectl delete -f configmap.yaml
# Note: you cannot update this way; delete and then re-create
Using a ConfigMap in a pod
# Run this before editing the manifest:
kubectl create cm my-third-configmap --from-literal=color=blue
# Reference individual keys from the ConfigMap
vim pod-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-configmap
spec:
  containers:
  - name: nginx-container-with-configmap
    image: nginx:latest
    env:
    - name: COLOR # Any other name works here
      valueFrom:
        configMapKeyRef:
          name: my-third-configmap # the ConfigMap created above
          key: color
kubectl create -f pod-configmap.yaml
kubectl exec Pods/nginx-pod-with-configmap -- env
# Import the entire ConfigMap
vim pod-refconfig.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-configmap
spec:
  containers:
  - name: nginx-container-with-configmap
    image: nginx:latest
    envFrom:
    - configMapRef:
        name: my-third-configmap
kubectl exec Pods/nginx-pod-with-configmap -- env
Mounting a ConfigMap into a container
kubectl create cm my-sixth-configmap --from-literal=color=green --from-literal=version=1 --from-literal=environment=prod
vim config-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-configmap-volume
spec:
  volumes:
  - name: configuration-volume
    configMap:
      name: my-sixth-configmap # ConfigMap name goes here
  containers:
  - name: nginx-container-with-configmap
    image: nginx:latest
    volumeMounts:
    - name: configuration-volume # must match the volume name
      mountPath: /etc/conf
kubectl create -f config-volume.yaml
# Lists the keys: color version environment
kubectl exec Pods/nginx-pod-with-configmap-volume -- ls /etc/conf
# Prints: green
kubectl exec Pods/nginx-pod-with-configmap-volume -- cat /etc/conf/color
Secrets
kubectl get secret
# Imperative creation
kubectl create secret generic my-first-secret --from-literal='db_password=my-db-password'
# Declarative creation
vim secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-second-secret
type: Opaque
data:
  db_password: bXktZGItcGFzc3dvcmQK
kubectl create -f secret.yaml
kubectl describe secret/my-second-secret
# Create from a file, imperatively
echo -n 'mypassword' > ./password.txt
kubectl create secret generic mypassword --from-file=./password.txt
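In the declarative Secret manifest above, values under data must be base64-encoded. The value bXktZGItcGFzc3dvcmQK decodes to my-db-password plus a trailing newline (a plain echo appends one; use echo -n to avoid it):

```shell
# Encode (echo adds a trailing newline, which base64 encodes as the final K)
echo 'my-db-password' | base64          # prints bXktZGItcGFzc3dvcmQK

# Decode to verify
echo 'bXktZGItcGFzc3dvcmQK' | base64 -d # prints my-db-password
```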
Secrets can be turned into environment variables or mounted as a volume. Beware that environment variables are visible to anyone who can log into the container, which can be dangerous.
# Secret as environment variables
vim secret-to-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-secret-env-variable
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    env:
    - name: PASSWORD_ENV_VAR # Name of the env variable
      valueFrom:
        secretKeyRef:
          name: mypassword # Name of the Secret object
          key: password.txt # Key in the Secret (--from-file uses the file name as the key)
kubectl create -f secret-to-env.yaml
kubectl exec Pods/nginx-pod-with-secret-env-variable -- env
# Secret mounted as a volume
vim secret-to-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-secret-volume
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    volumeMounts:
    - name: mysecretvolume # Name of the volume
      mountPath: /etc/password-mounted-path
  volumes:
  - name: mysecretvolume # Name of the volume
    secret:
      secretName: mypassword # Name of the Secret
kubectl create -f secret-to-volume.yaml
kubectl exec Pods/nginx-pod-with-secret-volume -- cat /etc/password-mounted-path/password.txt
Chapter 7: Exposing Pods with Services
Services exist to guarantee reachability and high availability.
Why pods need to be exposed
Every time a pod is re-created it is assigned a new IP, possibly a different one.
Services are the core of Kubernetes networking and load balancing.
You can think of a service as a proxy with a static DNS name.
my-app-service ---> my-app-service.default.svc.cluster.local
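The DNS name above follows the pattern service.namespace.svc.cluster-domain. A tiny helper illustrating the composition (cluster.local is assumed here, as the default cluster domain):

```python
# Compose the in-cluster DNS name of a service; "cluster.local" is the
# default cluster domain (an assumption for clusters that keep it).
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("my-app-service"))
# my-app-service.default.svc.cluster.local
```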
Creating a service
# On 1.23.8, --expose just creates a default service without letting you choose the type; declarative creation is preferable
kubectl run nginx --image nginx:latest --port=80 --expose=true
kubectl get pods -o wide --show-labels
# Output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx 1/1 Running 0 2m26s 172.17.0.5 minikube <none> <none> run=nginx
kubectl get services
# Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24h
nginx ClusterIP 10.100.21.213 <none> 80/TCP 64s
kubectl describe svc nginx
# Output begins -------
Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>
# The label selector
Selector: run=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.21.213
IPs: 10.100.21.213
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 172.17.0.5:80
Session Affinity: None
Events: <none>
# Output ends -------
wget 'https://k8s.io/examples/admin/dns/dnsutils.yaml'
vim dnsutils.yaml
# Replace the image with a mirror: registry.cn-beijing.aliyuncs.com/simonchen/jessie-dnsutils:1.3
kubectl apply -f dnsutils.yaml
kubectl exec -ti dnsutils -- nslookup nginx.default.svc.cluster.local
# Output
Server: 10.96.0.10
Address: 10.96.0.10 # This address is only resolvable from within the Kubernetes cluster, via local kube-dns or CoreDNS.
Name: nginx.default.svc.cluster.local
Address: 10.98.191.187
# Fetching the nginx page this way does not work
kubectl exec -ti dnsutils -- wget nginx.default.svc.cluster.local
kubectl exec -ti dnsutils -- cat index.html
# This is what actually works
kubectl exec -it dnsutils -- dig nginx.default.svc.cluster.local
The --expose flag gives no control over how the service is created; for example, you cannot create a NodePort service with it (covered below).
NodePort
Exposes the service on a port of every node.
#service
kubectl run whoami1 --image=containous/whoami --port=80 --labels="app=whoami"
kubectl run whoami2 --image=containous/whoami --port=80 --labels="app=whoami"
# Create the NodePort service
vim nodepod.yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-whoami
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
  - nodePort: 30001
    port: 80
    targetPort: 80
kubectl apply -f nodepod.yaml
kubectl get services
kubectl describe svc nodeport-whoami
kubectl delete svc/nodeport-whoami
# Another way to verify, with nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    env: prod
    app: whoami
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
# Get the minikube IP
minikube ip
# Create the service
kubectl apply -f nodepod.yaml
# Fetch the page
wget minikubeIp:30001
# Seeing the nginx page proves the service is reachable from the node
cat index.html
nodePort: the port on the node itself, generally in the 30000-32767 range
port: the service's own port
targetPort: the container port inside the pod
NodePort services generally sit behind a load balancer; two related concepts are Ingress and IngressController.
Any pod carrying the matching labels is automatically added to the NodePort service's endpoints, and it is only removed once it enters the terminating state.
Because the node-port range is restricted, it is awkward to use directly, so there is usually another reverse-proxy layer on top.
Note that services and pods have completely independent lifecycles.
In general, kubectl port-forward is only for testing and lives as long as the kubectl client, while a nodePort lives as long as the service.
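The default node-port allocation range mentioned above (30000-32767) is easy to encode as a check:

```python
# Default NodePort allocation range; nodePort: 30001 in the manifest
# above falls inside it, while an arbitrary port like 8080 does not.
def is_valid_node_port(port: int) -> bool:
    return 30000 <= port <= 32767

print(is_valid_node_port(30001))  # True
print(is_valid_node_port(8080))   # False
```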
ClusterIP
Essentially also a service, but it relies on Kubernetes' internal DNS; it is only reachable from other pods in the cluster.
The --expose=true command shown earlier creates one; describe shows Type: ClusterIP.
# Otherwise the same as above
kubectl run nginx --image nginx:latest --port=80 --expose=true
Declarative creation
vim clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP # Indicates that the service is a ClusterIP
  ports:
  - port: 80 # The port exposed by the service
    protocol: TCP
    targetPort: 80 # The destination port on the pods
  selector:
    run: nginx-clusterip
kubectl apply -f clusterip.yaml
vim cluster-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-headless
spec:
  clusterIP: None
  type: ClusterIP # Indicates that the service is a ClusterIP
  ports:
  - port: 80 # The port exposed by the service
    protocol: TCP
    targetPort: 80 # The destination port on the pods
  selector:
    run: nginx-clusterip
# A headless service returns the matching pods' DNS records and has no cluster IP; the layer above does its own load balancing. Useful for some stateful services, e.g. LDAP (Lightweight Directory Access Protocol).
LoadBalancer
Most people tend to avoid it, since it requires some extra work and carries provider-specific, implicit behavior. Many cloud providers support it: AWS, GCP, Azure, OpenStack.
Taking AWS as an example, the options are:
classic load balancer
application load balancer
network load balancer
ReadinessProbe and LivenessProbe
Checks for whether a pod is ready to serve traffic and whether it is still alive:
readinessProbe and livenessProbe
Readiness
vim nginx-readiness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-readiness-http
spec:
  containers:
  - name: nginx-pod-with-readiness-http
    image: nginx
    readinessProbe:
      # Delay before the first probe
      initialDelaySeconds: 5
      # Interval between probes
      periodSeconds: 5
      # HTTP probe: succeeds on a response status >= 200 and < 400
      httpGet:
        path: /ready
        port: 80
kubectl create -f nginx-readiness.yaml
- Other probe mechanisms include:
- Command—You issue a command that should complete with exit code 0, indicating the Pod is ready.
- HTTP—You issue an HTTP request that should complete with a response code >= 200 and < 400, which indicates the Pod is ready.
- TCP—You issue a TCP connection attempt. If the connection is established, the Pod is ready.
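The HTTP success rule above is a simple range check. A one-line sketch of the condition applied to the response code:

```python
# An HTTP probe succeeds for any status in [200, 400).
def http_probe_ok(status_code: int) -> bool:
    return 200 <= status_code < 400

print(http_probe_ok(200), http_probe_ok(302), http_probe_ok(404))
# True True False
```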
Liveness probes:
Same probe mechanisms as above: command, HTTP, TCP. (A probe may define only one mechanism; the manifest below shows all three side by side purely for reference.)
vim nginx-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-with-liveness-http
spec:
  containers:
  - name: nginx-pod-with-liveness-http
    image: nginx
    livenessProbe:
      initialDelaySeconds: 5
      periodSeconds: 5
      # HTTP probe
      httpGet:
        path: /healthcheck
        port: 80
        httpHeaders:
        - name: My-Custom-Header
          value: My-Custom-Header-Value
      # Command probe
      exec:
        command:
        - cat
        - /hello/world
      # TCP probe
      tcpSocket:
        port: 80
kubectl create -f nginx-liveness.yaml
NetworkPolicy (security)
Kubernetes has a firewall-like component for securing pods against each other: NetworkPolicy.
- Ingress/egress rules can be built from Classless Inter-Domain Routing (CIDR) blocks
- from labels (as services do)
- or from namespaces
minikube start --driver=docker --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --kubernetes-version=1.23.8 --network-plugin=cni --cni=calico
kubectl run nginx-1 --image nginx --labels='app=nginx-1'
kubectl run nginx-2 --image nginx --labels='app=nginx-2'
kubectl get pods -o wide
kubectl exec nginx-1 -- curl nginx2Ip
vim nginx-2-networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-2-networkpolicy
spec:
  podSelector:
    matchLabels:
      app: nginx-2 # Applies to which pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx-1 # Allows calls from which pod
    ports:
    - protocol: TCP
      port: 80
kubectl create -f nginx-2-networkpolicy.yaml
kubectl exec nginx-1 -- curl nginx2Ip
kubectl apply -f nginx-2-networkpolicy.yaml
kubectl exec nginx-2 -- curl nginx2ip:80
# Change the allowed port to 8080
vim nginx-2-networkpolicy.yaml
kubectl apply -f nginx-2-networkpolicy.yaml
# Accessing port 80 again now times out
kubectl exec nginx-2 -- curl nginx2ip:80
# Connection refused (nothing listens on 8080)
kubectl exec nginx-2 -- curl nginx2ip:8080
Namespaces
Mainly an administrator's tool; applications should be deployable into any namespace.
- Partition the cluster to simplify resource management
- Scope resource names
- Enforce hardware (resource) limits
- Access control
Concretely:
- different environments can use different namespaces
- so can different tiers, e.g. the database tier, the application tier, the middleware tier
- small setups can simply use the default namespace
- but by default there is no traffic isolation between namespaces
kubectl get ns
kubectl create ns custom-ns
apiVersion: v1
kind: Namespace
metadata:
  name: custom-ns-2
# Everything inside the namespace is deleted along with it
kubectl delete namespaces custom-ns
# -n selects the ConfigMap's namespace
kubectl create configmap configmap-custom-ns --from-literal=Lorem=Ipsum -n custom-ns
vim podInNamespace.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  namespace: custom-ns # the target namespace
spec:
  containers:
  - name: nginx
    image: nginx:latest
kubectl create -f podInNamespace.yaml
- default: the default namespace
- kube-public: publicly readable resources
- kube-system: Kubernetes' own components
Namespace states: Active and Terminating
(in use, or in the process of being deleted)
Scope
# The same pod name can exist in two namespaces at once
kubectl run nginx-1 --image nginx:latest -n custom-ns
kubectl run nginx-1 --image nginx:latest
# Check which resources are namespaced
kubectl api-resources --namespaced=false
kubectl api-resources --namespaced=true
# Part of the output
NAME                     SHORTNAMES   APIVERSION   NAMESPACED   KIND
bindings                              v1           true         Binding
configmaps               cm           v1           true         ConfigMap
endpoints                ep           v1           true         Endpoints
events                   ev           v1           true         Event
limitranges              limits       v1           true         LimitRange
persistentvolumeclaims   pvc          v1           true         PersistentVolumeClaim
pods                     po           v1           true         Pod
podtemplates                          v1           true         PodTemplate
replicationcontrollers   rc           v1           true         ReplicationController
resourcequotas           quota        v1           true         ResourceQuota
secrets                               v1           true         Secret
serviceaccounts          sa           v1           true         ServiceAccount
services                 svc          v1           true         Service
controllerrevisions                   apps/v1      true         ControllerRevision
daemonsets               ds           apps/v1      true         DaemonSet
deployments              deploy       apps/v1      true         Deployment
kubectl create ns another-ns
# Set the default namespace for the current context
kubectl config set-context $(kubectl config current-context) --namespace=another-ns
# Check which namespace is configured
kubectl config view | grep -i "namespace"
# Switch back
kubectl config set-context $(kubectl config current-context) --namespace=default
Configuring a namespace's ResourceQuota and LimitRange
For a single pod, request is the minimum guaranteed resource and limit the upper bound (which allows overcommitting).
vim namespaces-with-request.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-request
  namespace: custom-ns
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        # Exceeding the memory limit gets the container killed (OOM)
        memory: "1Gi"
        # Exceeding the CPU limit only throttles the container
        cpu: "1000m"
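The quantities above use Kubernetes suffixes: m is millicores (250m = 0.25 CPU) and Mi/Gi are binary mega/gigabytes. A tiny parser sketch covering only the subset used in these manifests, not the full Kubernetes quantity grammar:

```python
def parse_cpu(q: str) -> float:
    """'250m' -> 0.25 cores; '1' -> 1.0 cores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory_mi(q: str) -> int:
    """'512Mi' -> 512 MiB; '1Gi' -> 1024 MiB."""
    if q.endswith("Gi"):
        return int(q[:-2]) * 1024
    if q.endswith("Mi"):
        return int(q[:-2])
    raise ValueError(f"unsupported quantity: {q!r}")

print(parse_cpu("250m"), parse_memory_mi("1Gi"))  # 0.25 1024
```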
Namespace quotas
vim resourceQuota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resourcequota
spec:
  hard:
    requests.cpu: "1000m"
    requests.memory: "1Gi"
    limits.cpu: "2000m"
    limits.memory: "2Gi"
    configmaps: "10"
    services: "5"
kubectl create -f resourceQuota.yaml --namespace=custom-ns
kubectl get quotas -n custom-ns
kubectl delete -f resourceQuota.yaml -n custom-ns
This quota means:
- the sum of all pods' CPU requests cannot exceed 1 CPU core
- the sum of all pods' memory requests cannot exceed 1 GiB
- the sum of all pods' CPU limits cannot exceed 2 CPU cores
- the sum of all pods' memory limits cannot exceed 2 GiB
LimitRange
Applies to every container. Once the quota above is in place, every container must declare requests and limits when it is created; a LimitRange supplies defaults.
vim limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: my-limitrange
spec:
  limits:
  # Default limit for containers that declare none
  - default:
      memory: 256Mi
      cpu: 500m
    # Default request for containers that declare none
    defaultRequest:
      memory: 128Mi
      cpu: 250m
    # Maximum: a declared limit cannot exceed this
    max:
      memory: 1000Mi
      cpu: 1000m
    # Minimum: a declared request cannot go below this
    min:
      memory: 128Mi
      cpu: 250m
    type: Container
kubectl create -f ~/limitrange.yaml --namespace=custom-ns
kubectl delete limit/my-limitrange -n custom-ns
kubectl run nginx-1 --image=nginx
kubectl run nginx-2 --image=nginx
kubectl run nginx-3 --image=nginx
kubectl run nginx-4 --image=nginx
# With the quota above applied (here without a namespace, i.e. in default): the first four pods are created, then creation fails
kubectl run nginx-5 --image=nginx
# Result
Error from server (Forbidden): pods "nginx-5" is forbidden: exceeded quota: my-resourcequota, requested: limits.cpu=500m,limits.memory=512Mi,requests.cpu=250m,requests.memory=256Mi, used: limits.cpu=2,limits.memory=2Gi,requests.cpu=1,requests.memory=1Gi, limited: limits.cpu=2,limits.memory=2Gi,requests.cpu=1,requests.memory=1Gi
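The failure is plain arithmetic: each pod picks up the LimitRange defaults, and the fifth pod would push the totals past the quota. A sketch using the numbers quoted in the error message above:

```python
# From the error message: each pod inherits a 500m CPU limit, and the
# ResourceQuota caps limits.cpu at 2000m.
quota_limits_cpu_m = 2000   # limits.cpu: "2000m" in the ResourceQuota
per_pod_limit_cpu_m = 500   # default CPU limit each pod inherits

pods = quota_limits_cpu_m // per_pod_limit_cpu_m
print(pods)  # 4: a fifth pod would need 2500m > 2000m, so it is forbidden
```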
Persistent Storage in Kubernetes
The two volume types so far, emptyDir and hostPath, can share the pod's lifecycle. Sometimes data must be persisted beyond it; that is what a PersistentVolume (PV) is for, backed by e.g. NFS, a local disk, or an AWS EBS volume.
- PV types
- access modes, such as:
- ReadWriteOnce: read-write by a single node (effectively a lock)
- ReadOnlyMany: read-only by many nodes
- ReadWriteMany: read-write by many nodes
Pods interact with PersistentVolumes, which can be backed by aws-ebs, aws-efs, gce-pd, azure-disk, and so on. The concrete behavior is determined by the underlying storage; the PV itself can be thought of as merely a pointer.
kubectl get persistentvolume
kubectl get persistentvolumes
kubectl get pv
vim pv-hostpath.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
spec:
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
vim pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: persistent-volume-nfs
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/nfs
    server: nfsxxxx
  fsType: ext4
Mounting a PV
PersistentVolume vs. PersistentVolumeClaim: the former is the administrator's view (bringing storage into Kubernetes); the latter, the PVC, is what pods consume.
One core object represents the storage itself; the other is its connection to the pod.
For the pod connection there are two YAML files:
- the pod/application
- the PersistentVolumeClaim, which must appear as a volumeMount in the pod's YAML; they are deployed together
The overall flow:
- an administrator creates a PersistentVolume object
- a developer writes a PersistentVolumeClaim requesting such a PersistentVolume
- the developer mounts the PersistentVolumeClaim in the pod's YAML
- once the pod and the PersistentVolumeClaim exist, Kubernetes binds the claim to a matching PersistentVolume
- the PV can then be read and written from inside the pod
# api-resources shows that PVs are cluster-scoped while PVCs are namespaced
kubectl api-resources |grep persist
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
kubectl get persistentvolumeclaims
kubectl get pvc
# Now actually create a PV
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-hostpath-pv
  labels:
    type: hostpath
    env: prod
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /home/simon/pv
kubectl create -f pv.yaml
# Reclaim policy: what happens to the PV when its PVC is deleted. Delete removes the volume, Retain keeps it, Recycle scrubs it for reuse. Here we patch it to Delete.
kubectl patch pv/my-hostpath-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
# Now the PVC
vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-hostpath-pvc
spec:
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
      env: prod
  accessModes:
  - ReadWriteOnce
kubectl create -f pvc.yaml
# And the pod that mounts it
vim pods-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: mypersistentvolume
    persistentVolumeClaim:
      claimName: my-hostpath-pvc
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /var/www/html
      name: mypersistentvolume
A PV's states are:
Available: ready to be claimed
Bound: bound to a claim and in use
Terminating: being deleted
Dynamic provisioning
Static provisioning means: create the storage, create the PV and PVC, then bind the PVC to pods.
The dynamic flow:
- configure the Kubernetes cluster with (e.g. AWS) credentials
- then PVCs reference a StorageClass and PVs are provisioned on demand
StorageClass
kubectl get storageclass
kubectl get storageclasses
kubectl get sc
# In the output, NAME is the class name and PROVISIONER the underlying storage technology
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) k8s.io/minikube-hostpath Delete Immediate false 6d23h
vim pvc-dynamic.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dynamic-hostpath-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard # VERY IMPORTANT!
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: hostpath
      env: prod
kubectl create -f pvc-dynamic.yaml
vim pod-dynamic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dynamic-storage
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypersistentvolume
  volumes:
  - name: mypersistentvolume
    persistentVolumeClaim:
      claimName: my-dynamic-hostpath-pvc
kubectl create -f pod-dynamic.yaml