k8s Quick Start Tutorial

1 Command Reference

1.1 Viewing Cluster Information

kubectl cluster-info

Sample output:

Kubernetes master is running at https://192.168.3.71:6443
KubeDNS is running at https://192.168.3.71:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the validity period of the cluster certificates:

kubeadm alpha certs check-expiration
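
Note: on kubeadm v1.20+ this subcommand has graduated out of alpha; the equivalent is kubeadm certs check-expiration. Expiring certificates can be renewed in place (a sketch; verify against your kubeadm version):

kubeadm certs renew all
# restart the control-plane static pods afterwards so they pick up the renewed certificates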

1.2 pod

Create a pod from the command line:

kubectl run --image=nginx --port=80 nginx0920

Here --port is the port that the service inside the container listens on.

Export the pod's YAML file:

kubectl get pod nginx0920 -o yaml > nginx0920.yaml

Start a throwaway test pod:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

Note: busybox:1.28.4 resolves the cluster's DNS correctly (nslookup in newer busybox images is known to misbehave against cluster DNS).
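
Inside the test pod you can verify DNS resolution against the API server's default service, which exists in every cluster:

/ # nslookup kubernetes.default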

Delete a pod; add --force to force deletion:

kubectl delete pod ceph-csi-rbd-nodeplugin-8vfsb -n ceph-rbd  --force 

1.3 deployment

Create a deployment from the command line:

kubectl create deployment --image=nginx nginx0920

Create a service; the default type is ClusterIP:

kubectl expose deployment nginx0920 --port=80 --target-port=8000

Create a service of type NodePort:

kubectl expose deployment nginx0920 --port=8080 --target-port=80 --type=NodePort

Generate a deployment YAML file without creating the object:

kubectl create deployment web --image=nginx --dry-run=client -o yaml > we.yaml

1.4 Upgrading an Image

kubectl set image deployment/web container-eh09qq=wordpress:6.8 --record

Note: container-eh09qq is the container name. (The --record flag is deprecated in newer kubectl releases.)
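
To confirm which image a deployment is currently running, a jsonpath query works (a sketch):

kubectl get deployment web -o jsonpath='{.spec.template.spec.containers[0].image}'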

Watch the rollout status in real time:

kubectl rollout status deployment/web

Pause a rollout. While paused, changes to the pod template (including direct YAML edits) do not trigger a new rollout; they accumulate until the rollout is resumed. Scaling the replica count, however, still takes effect immediately:

kubectl rollout pause deployment nginx1001

Resume the rollout:

kubectl rollout resume deployment nginx1001

1.5 Rolling Back

View the rollout history:

kubectl rollout history deployment/web

View the details (including the image) of a specific revision:

kubectl rollout history deployment/web --revision=3

Roll back to the previous revision:

kubectl rollout undo deployment/web

Roll back to a specific revision:

kubectl rollout undo deployment/web --to-revision=1

1.6 Scaling Replicas

Scale the number of pods in a deployment:

kubectl scale deployment web --replicas=5
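
Scaling can also be automated based on CPU usage (a sketch; this requires the metrics-server to be installed):

kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80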

1.7 Starting a Temporary Pod

kubectl run -it --rm --image=busybox busybox-demo

1.8 Taints and Tolerations

View the taints on a node:

kubectl describe nodes hz-kubesphere-worker01

Master nodes carry a default taint that prevents scheduling; to add it manually:

kubectl taint node hz-kubesphere-master02 node-role.kubernetes.io/master="":NoSchedule

Remove the master taint so the node can be scheduled again:

kubectl taint node hz-kubesphere-master02 node-role.kubernetes.io/master-

Add a custom taint to a node (master or worker):

kubectl taint node hz-kubesphere-master02 node-type=master:NoSchedule

Add the following to the controller's YAML so its pods tolerate the taint:

tolerations:
- key: node-type
  operator: Equal
  value: master
  effect: NoSchedule
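
A toleration can also match any value of a key by using the Exists operator (an illustrative variant, not from the original deployment):

tolerations:
- key: node-type
  operator: Exists
  effect: NoSchedule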

For example, in a deployment:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  ..........
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      tolerations:
      - key: node-type
        operator: Equal
        value: master
        effect: NoSchedule
      serviceAccountName: nfs-client-provisioner
      containers:

Each taint is associated with an effect, one of the following three (a toleration sketch follows the list):

• NoSchedule: a pod that does not tolerate the taint cannot be scheduled onto the node.
• PreferNoSchedule: a soft version of NoSchedule; the scheduler tries to avoid the node, but still places the pod there if no other node is available.
• NoExecute: unlike the two effects above, which only apply at scheduling time, NoExecute also affects pods already running on the node. When a NoExecute taint is added to a node, running pods that do not tolerate it are evicted.
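
A sketch of tolerating a NoExecute taint with a grace period before eviction (the key and value here are illustrative):

tolerations:
- key: node-type
  operator: Equal
  value: master
  effect: NoExecute
  tolerationSeconds: 300   # the pod may stay 300s after the taint appears, then it is evicted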

1.9 Anti-Affinity

Use pod anti-affinity to spread a deployment's pods across nodes. The app: nfs-provisioner-01 in matchLabels below must match the app: nfs-provisioner-01 label on the pod template.

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          app: nfs-provisioner-01

The full configuration:

spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nfs-provisioner-01
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nfs-provisioner-01
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: nfs-provisioner-01
      containers:
      - env:
        - name: PROVISIONER_NAME
          value: nfs-provisioner-01
        - name: NFS_SERVER
          value: 192.168.3.32

It can be combined with tolerations:

 tolerations:
   - effect: NoSchedule
     key: node-type
     operator: Equal
     value: master
 affinity:
   podAntiAffinity:
     requiredDuringSchedulingIgnoredDuringExecution:
     - topologyKey: kubernetes.io/hostname
       labelSelector:
         matchLabels:
           app: nfs-provisioner-01 
  

Suppose only two nodes are eligible after taints and tolerations; you will then see:

[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep nfs
nfs-provisioner-01-68cc74b9d8-clmfc       0/1     Pending   0          9m8s
nfs-provisioner-01-68cc74b9d8-wt6z8       1/1     Running   0          9m8s

kubectl describe pod nfs-provisioner-01-68cc74b9d8-clmfc -n kube-system
...........
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  65s (x9 over 9m41s)  default-scheduler  0/7 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node-type: dev}, that the pod didn't tolerate, 2 node(s) had taint {node-type: production}, that the pod didn't tolerate.

1.10 Marking Nodes Unschedulable

Cordon a node (mark it unschedulable):

kubectl cordon k8s21-worker01  

Uncordon the node (restore scheduling):

kubectl uncordon k8s21-worker01
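
To evict the pods already running on a node as well (e.g. before maintenance), drain is commonly used; flags vary slightly across kubectl versions, so treat this as a sketch:

kubectl drain k8s21-worker01 --ignore-daemonsets --delete-emptydir-data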

1.11 Pod Log Path

/var/log/pods
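
kubectl logs ultimately reads these files via the kubelet. On the node they typically follow this layout (the placeholders are illustrative):

ls /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/
# 0.log, 1.log, ... one file per container restart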

2 Using Controllers

2.1 deployment

2.1.1 Standard deployment configuration

# standard deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx1001
  name: nginx1001
  namespace: default
spec:
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx1001
  template:
    metadata:
      labels:
        app: nginx1001
    spec:
      containers:
      - image: httpd
        imagePullPolicy: Always
        name: nginx
      restartPolicy: Always
      nodeSelector:
        locationnodes: worker01

2.1.2 imagePullPolicy

imagePullPolicy can take three values: Always, IfNotPresent, Never.
The imagePullPolicy field appears in many Kubernetes manifests; it controls how the container image is pulled.

1. Always: always pull the image
2. IfNotPresent: use the local image if present, and pull only when it is missing
3. Never: only ever use the local image; never pull, even when it is missing
4. If imagePullPolicy is omitted: the policy is Always when the tag is :latest, otherwise IfNotPresent
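
A minimal snippet making the policy explicit (image and tag are illustrative):

containers:
- name: web
  image: nginx:1.21            # pinned tag, so the implicit default would be IfNotPresent
  imagePullPolicy: IfNotPresent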

2.1.3 label

A Deployment's spec.selector must match the labels defined in the pod template (spec.template.metadata.labels), as shown below.
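
A minimal sketch of the pairing:

spec:
  selector:
    matchLabels:
      app: nginx1001        # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx1001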

2.1.4 nodeSelector

Pin pods to particular nodes. First label the target node:

kubectl label node k8s-worker01 locationnodes=worker01

For example:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: httpd1101
  namespace: ehr
  labels:
    app: httpd1101
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd1101
  template:
    metadata:
      labels:
        app: httpd1101
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: volume-fnf6pi
          persistentVolumeClaim:
            claimName: ehr-demo-pvc1
      containers:
        - name: container-j1g6b1
          image: httpd
          ports:
            - name: tcp-80
              containerPort: 80
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
            - name: volume-fnf6pi
              mountPath: /var/www/html
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      nodeSelector:
        locationnodes: worker01

Built-in node labels can also be used:

nodeSelector:
  kubernetes.io/hostname: gz2-k8s-worker01
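
To see which labels a node already carries (hostname taken from the example above):

kubectl get node gz2-k8s-worker01 --show-labels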

2.1.5 revisionHistoryLimit

The number of old ReplicaSets retained for rollbacks; the default is 10.

2.1.6 hostAliases

For example, if a container needs to resolve the internal address 192.168.3.79 as wjx.devops.com, add hostAliases to the pod spec.

cat test2.yaml 
# standard deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: busybox1211
  name: busybox1211
  namespace: demo
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: busybox1211
  template:
    metadata:
      labels:
        app: busybox1211
    spec:
      hostAliases:              # add hostAliases here
      - ip: "192.168.3.79"
        hostnames:
        - "wjx.devops.com"
      tolerations:
      - key: node-type
        operator: Equal
        value: production
        effect: NoSchedule
      containers:
      - image: busybox
        imagePullPolicy: Always
        name: busybox
        args:
        - /bin/sh
        - -c
        - sleep 10; touch /tmp/healthy; sleep 30000
        readinessProbe:           # readiness probe
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10         # first probe starts after 10s
          periodSeconds: 5 

Inside the container you can see:

[root@k8s-master01 ~]# kubectl exec -it busybox1211-786f455b5c-442kz -n demo -- sh
/ # cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.255.69.216   busybox1211-786f455b5c-442kz

# Entries added by HostAliases.
192.168.3.79    wjx.devops.com
/ # ping wjx.devops.com
PING wjx.devops.com (192.168.3.79): 56 data bytes
64 bytes from 192.168.3.79: seq=0 ttl=63 time=0.525 ms
64 bytes from 192.168.3.79: seq=1 ttl=63 time=0.311 ms
^C

2.2 service

Service introduction: to be updated.

2.2.1 Standard service configuration

Two Service types are used here: ClusterIP and NodePort (other types such as LoadBalancer and ExternalName exist but are not covered).

2.2.1.1 ClusterIP

kind: Service
apiVersion: v1
metadata:
  name: httpd-demo1
  namespace: usopa-demo1
  labels:
    app: httpd-demo1
spec:
  ports:
    - name: http-80
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: httpd-demo1
  type: ClusterIP

Note: the selector above selects the pods of the backing Deployment, StatefulSet, or DaemonSet. The type field defaults to ClusterIP; if it is omitted from the YAML, ClusterIP is used.

2.2.1.2 NodePort

kind: Service
apiVersion: v1
metadata:
  name: kn1024-kibana
  namespace: usopa-demo1
  labels:
    app: kibana
spec:
  ports:
    - name: http
      protocol: TCP
      port: 5601
      targetPort: 5601
      nodePort: 30296
  selector:
    app: kibana
  type: NodePort
  
Note: type here is NodePort. The nodePort field may be set explicitly; if it is omitted, the system assigns a random port from the NodePort range (30000-32767 by default).

2.3 daemonset

A DaemonSet runs one pod on every node; it can also be restricted to specific nodes.

2.3.1 Standard daemonset configuration

# standard daemonset configuration
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx1011
  name: nginx1011
  namespace: data-project1
spec:
  selector:
    matchLabels:
      app: nginx1011
  template:
    metadata:
      labels:
        app: nginx1011
    spec:
      containers:
      - image: httpd
        imagePullPolicy: Always
        name: nginx
      restartPolicy: Always
      nodeSelector:
        node: worker

2.4 statefulset

Official docs: https://kubernetes.io/zh/docs/tutorials/stateful-application/basic-stateful-set/

A Pod's ordinal, hostname, and SRV record name do not change, but the IP address associated with the Pod may (it did in the cluster used in the official tutorial). This is why it is important not to have other applications connect to StatefulSet Pods by IP address.
If you need to discover and connect to the active members of a StatefulSet, query the CNAME of the headless Service. The SRV records associated with the CNAME contain only the Pods that are Running and Ready.
If your application already implements connection logic that tests liveness and readiness, you can use the Pods' SRV records directly (web-0.nginx.default.svc.cluster.local, web-1.nginx.default.svc.cluster.local): they are stable, and your application will discover the addresses as soon as the Pods become Running and Ready.

2.4.1 Standard statefulset configuration

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
          - name: www
            mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: csi-rbd-sc
        accessModes:
        -  ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

StatefulSet pods are reachable through stable DNS names, e.g. nacos-0.nacos-headless.default.svc.cluster.local:8848.
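
Taking the example above (StatefulSet web behind the headless service nginx-svc in namespace default), a quick in-cluster check might look like this (a sketch; run it from a pod that has DNS tools, e.g. busybox:1.28.4):

nslookup web-0.nginx-svc.default.svc.cluster.local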

2.5 job

A Job runs a task to completion. Example configuration, myjob.yaml:

apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: myjob
  name: myjob
  namespace: default
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: hello
        image: busybox
        imagePullPolicy: Always
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never

kubectl get pod shows:

myjob-sst4m                  0/1     Completed   0          42s

The pod exits once the job completes; check its logs:

# kubectl logs myjob-sst4m 
hello k8s job!
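
A Job can also run multiple completions, optionally in parallel; a sketch of the relevant spec fields:

spec:
  completions: 5     # run pods to successful completion 5 times in total
  parallelism: 2     # at most 2 pods run at a time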

2.6 Health Check

2.6.1 Init Containers

Apply myapp-pod.yaml first: myservice and mydb cannot be resolved, so the myapp-pod container never reaches Running. Then apply myservice.yaml: resolution succeeds and myapp-pod runs normally. Before myservice.yaml is applied, you will see:

myapp-pod                       0/1     Init:0/2    0          49s

myapp-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh','-c','echo The app is running! && sleep 3600']
  initContainers:  
  - name: init-myservice
    image: busybox
    command: ['sh','-c','until nslookup myservice; do echo waiting for myservice; sleep 2;done;']
  - name: init-mydb
    image: busybox
    command: ['sh','-c','until nslookup mydb; do echo waiting for mydb; sleep 2; done;']

myservice.yaml:

kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
kind: Service
apiVersion: v1
metadata:
  name: mydb
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9377

Related fields: spec.restartPolicy (Always/OnFailure/Never) and spec.containers[].imagePullPolicy (Always/IfNotPresent/Never).

2.6.2 readinessProbe

Readiness probes check whether a container is ready to receive traffic.
readinessProbe-httpget.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: nginx
    imagePullPolicy: IfNotPresent
    readinessProbe:
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1
      periodSeconds: 3

Apply readinessProbe-httpget.yaml:

kubectl apply -f readinessProbe-httpget.yaml

The pod never becomes ready:

[root@k8s-master01 yaml]# kubectl get pod | grep readiness
readiness-httpget-pod           0/1     Running     0          15s
[root@k8s-master01 yaml]# kubectl describe pod readiness-httpget-pod
Name:         readiness-httpget-pod
Namespace:    default
Priority:     0
Node:         k8s-worker01/192.168.3.74
Start Time:   Thu, 07 Oct 2021 18:18:39 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: 4ea9583f3cc95390ab77dd32e36f05fd46afa3ac650bb34e7c830db4529e06a3
              cni.projectcalico.org/podIP: 172.16.79.89/32
              cni.projectcalico.org/podIPs: 172.16.79.89/32
Status:       Running
IP:           172.16.79.89
IPs:
  IP:  172.16.79.89
Containers:
  readiness-httpget-container:
    Container ID:   docker://507b37c114a077bd78fc2ad07895131e5488c16e9086ffa453140ec02f65a88f
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:765e51caa9e739220d59c7f7a75508e77361b441dccf128483b7f5cce8306652
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 07 Oct 2021 18:18:40 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:80/index1.html delay=1s timeout=1s period=3s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r9vvb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-r9vvb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r9vvb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  29s               default-scheduler  Successfully assigned default/readiness-httpget-pod to k8s-worker01
  Normal   Pulled     29s               kubelet            Container image "nginx" already present on machine
  Normal   Created    29s               kubelet            Created container readiness-httpget-container
  Normal   Started    29s               kubelet            Started container readiness-httpget-container
  Warning  Unhealthy  2s (x9 over 26s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404

Exec into the pod and create index1.html:

#kubectl exec -it readiness-httpget-pod -- /bin/bash
cd /usr/share/nginx/html
echo "i am allen" > index1.html
exit

Query again:

# kubectl get pod | grep readiness
readiness-httpget-pod           1/1     Running     0          7m8s

2.6.3 livenessProbe

Liveness probes check whether a container is still healthy. There are three mechanisms: exec (run a command in the container), httpGet, and tcpSocket.
livenessProbe-exec.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
spec:
  containers:
  - name: liveness-exec-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/live; sleep 30; rm -rf /tmp/live; sleep 600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/live"]
      initialDelaySeconds: 1
      periodSeconds: 3

Note: in this example the probe keeps failing after /tmp/live is deleted, so the pod is restarted over and over.
livenessProbe-httpget.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10

livenessProbe-tcpsocket.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcpsocket-pod
  namespace: data-project1
spec:
  containers:
  - name: liveness-tcpsocket-container
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10

Summary:
Liveness and readiness probes are two health-check mechanisms. If neither is configured explicitly, Kubernetes applies the same default behavior to both, judging success by whether the container's startup process returns a zero exit code. The two probes are configured identically and support the same parameters; the difference lies in what happens on failure. A failed liveness probe restarts the container, while a failed readiness probe marks the container as unavailable so it no longer receives requests forwarded by Services. The probes run independently of each other, so they can be used separately or together: liveness to decide whether a container should be restarted for self-healing, readiness to decide whether it is ready to serve traffic.
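
A hedged sketch of a container carrying both probes at once (image, port, and path are illustrative):

containers:
- name: web
  image: nginx
  livenessProbe:          # failure restarts the container
    tcpSocket:
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:         # failure removes the pod from Service endpoints
    httpGet:
      port: 80
      path: /index.html
    initialDelaySeconds: 5
    periodSeconds: 5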

2.7 ingress-nginx

2.7.1 YAML Deployment (QingCloud)

# download
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml

# rename
mv deploy.yaml ingress-nginx.yaml

# change the image
vi ingress-nginx.yaml
# set the image value to:
wangjinxiong/ingress-nginx-controller:v0.46.0

# deploy
kubectl apply -f ingress-nginx.yaml

# check the result
kubectl get pod,svc -n ingress-nginx

# finally, remember to open the NodePort ports exposed by the svc in the firewall

If you cannot download it, copy the following ingress-nginx.yaml directly:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0 
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

2.7.2 Usage

Official docs: https://kubernetes.github.io/ingress-nginx/

Apply the following YAML to prepare the test workloads.

2.7.2.1 Test Environment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000

2.7.2.2 Access by Domain Name

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"  # requests are forwarded to the service below; it must handle this path itself, otherwise it returns 404
        backend:
          service:
            name: nginx-demo  ## e.g. for a Java backend, use path rewriting to strip the /nginx prefix
            port:
              number: 8000

For demo.atguigu.com, exec into the container, create an nginx directory under /usr/share/nginx/html, and add an index.html file there (its content is "wangjinxiong").

# elinks --dump http://hello.atguigu.com:30486
   Hello World!

# elinks --dump http://demo.atguigu.com:30486/nginx/
   wangjinxiong

2.7.2.3 Path Rewriting (unverified)

apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.atguigu.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"  # requests are forwarded to the service below; it must handle the rewritten path, otherwise it returns 404
        backend:
          service:
            name: nginx-demo  ## e.g. for a Java backend, the rewrite strips the /nginx prefix
            port:
              number: 8000
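
A quick way to exercise the rewrite from outside the cluster (a sketch reusing the NodePort 30486 from the earlier test; adjust to your environment):

curl -H 'Host: demo.atguigu.com' http://<node-ip>:30486/nginx/index.html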

2.7.2.4 Rate Limiting (unverified)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.atguigu.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000

On "413 Request Entity Too Large" when uploading files through Kubernetes: https://cloud.tencent.com/developer/article/1586810

2.7.2.5 General Configuration

(1) Create a TLS secret (plain HTTP does not need one). If the certificate is self-signed, see 2.7.2.6 for how to generate it:

kubectl create secret tls nginx-test --cert=tls.crt --key=tls.key

(2) Create the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: feiutest.cn
    http:
      paths:
      - path:
        backend:
          serviceName: test-ingress
          servicePort: 80
  tls:
  - hosts:
    - feiutest.cn
    secretName: nginx-test

(3) Apply it:

kubectl apply -f nginx-ingress.yaml
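
Note: the extensions/v1beta1 Ingress API used above was removed in Kubernetes v1.22. A sketch of the equivalent manifest for the current networking.k8s.io/v1 API:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: feiutest.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-ingress
            port:
              number: 80
  tls:
  - hosts:
    - feiutest.cn
    secretName: nginx-test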

Reference: https://blog.csdn.net/bbwangj/article/details/82940419

2.7.2.6 Self-signed Certificates

# openssl genrsa -out tls.key 2048

# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Guangdong/L=Guangzhou/O=devops/CN=feiutest.cn

This produces two files:

# ls
tls.crt  tls.key

Check the certificate's validity period:

openssl x509 -in tls.crt -noout -text |grep ' Not '
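
To confirm the certificate and key actually match, comparing their modulus hashes is a common check (a sketch):

openssl x509 -noout -modulus -in tls.crt | openssl md5
openssl rsa -noout -modulus -in tls.key | openssl md5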

3 Secret and ConfigMap

3.1 Secret

3.1.1 Via Environment Variables

Base64-encode the password (note: this is encoding, not encryption):

echo -n 21vianet | base64
MjF2aWFuZXQ=

Decode it:

echo -n MjF2aWFuZXQ= | base64 --decode
21vianet
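
Equivalently, the same Secret can be created straight from the command line (matching the YAML below):

kubectl create secret generic secret1017 --from-literal=password=21vianet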

yaml:

apiVersion: v1
kind: Secret
metadata:
  name: secret1017
data:
  password: MjF2aWFuZXQ=
---
kind: Pod
apiVersion: v1
metadata:
  name: mysql1017
  labels:
    app: mysql1017
spec:
  containers:
    - name: mysql1017
      image: mysql:5.6
      ports:
        - name: tcp-3306
          containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret1017
              key: password
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
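
Once the pod is running, the injected variable can be verified (pod name taken from the example above):

kubectl exec mysql1017 -- printenv MYSQL_ROOT_PASSWORD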

3.1.2 Via Volume

(To be completed.)
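
Until then, a minimal sketch of the volume approach, which mounts each key of the Secret as a file (pod name and mount path are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: secret-vol-demo
spec:
  volumes:
    - name: secret-vol
      secret:
        secretName: secret1017      # the Secret from 3.1.1
  containers:
    - name: demo
      image: busybox
      command: ['sh', '-c', 'cat /etc/secret/password; sleep 3600']
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret    # each key appears as a file here
          readOnly: true
  restartPolicy: Always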

3.2 ConfigMap

Secrets supply Pods with sensitive data such as passwords, tokens, and private keys; for non-sensitive data, such as application configuration, use a ConfigMap instead. ConfigMaps are created and consumed much like Secrets; the main difference is that the data is stored in plain text.
Like Secrets, ConfigMaps are typically consumed in two ways: passed as environment variables, or mounted as configuration files.

3.2.1 Passing Environment Variables

Example:
cat configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: configmap1008
  namespace: data-project1
data:
  MYSQL_DATABASE: "wangjinxiong"
  MYSQL_PASSWORD: "21vianet"
  MYSQL_ROOT_PASSWORD: "123456"
  MYSQL_USER: "wangjinxiong"

cat mysql.yaml

kind: Pod
apiVersion: v1
metadata:
  name: mysql1008
  namespace: data-project1
  labels:
    app: mysql1008
spec:
  volumes:
    - name: host-time
      hostPath:
        path: /etc/localtime
        type: ''
    - name: mysql1008-pvc
      persistentVolumeClaim:
        claimName: mysql1008-pvc
    - name: configmap1008
      configMap:
        name: configmap1008
  containers:
    - name: mysql1008
      image: mysql:5.7
      ports:
        - name: tcp-3306
          containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_DATABASE
        - name: MYSQL_USER
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_USER
        - name: MYSQL_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_PASSWORD
      resources:
        limits:
          cpu: 50m
          memory: 4Gi
        requests:
          cpu: 50m
          memory: 4Gi
      volumeMounts:
        - name: host-time
          readOnly: true
          mountPath: /etc/localtime
        - name: mysql1008-pvc
          mountPath: /var/lib/mysql
      imagePullPolicy: IfNotPresent
  restartPolicy: Always

Without a ConfigMap, the same configuration looks like this:

kind: Pod
apiVersion: v1
metadata:
  name: mysql1008
  namespace: data-project1
  labels:
    app: mysql1008
spec:
  volumes:
    - name: host-time
      hostPath:
        path: /etc/localtime
        type: ''
    - name: mysql1008-pvc
      persistentVolumeClaim:
        claimName: mysql1008-pvc
  containers:
    - name: mysql1008
      image: mysql:5.7
      ports:
        - name: tcp-3306
          containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: '123456'
        - name: MYSQL_DATABASE
          value: wangjinxiong
        - name: MYSQL_USER
          value: wangjinxiong
        - name: MYSQL_PASSWORD
          value: '21vianet'
      resources:
        limits:
          cpu: 50m
          memory: 4Gi
        requests:
          cpu: 50m
          memory: 4Gi
      volumeMounts:
        - name: host-time
          readOnly: true
          mountPath: /etc/localtime
        - name: mysql1008-pvc
          mountPath: /var/lib/mysql
      imagePullPolicy: IfNotPresent
  restartPolicy: Always

3.2.2 Passing Configuration Files

Example:
cat configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap1009
  namespace: data-project1
data:
  wangjinxiong.txt: |
    wangjinxiong=OK

cat mysql.yaml

kind: Pod
apiVersion: v1
metadata:
  name: mysql1009
  namespace: data-project1
  labels:
    app: mysql1009
spec:
  volumes:
    - name: host-time
      hostPath:
        path: /etc/localtime
        type: ''
    - name: mysql1009-pvc
      persistentVolumeClaim:
        claimName: mysql1009-pvc
    - name: configmap1008
      configMap:
        name: configmap1008
    - name: configmap1009
      configMap:
        name: configmap1009
  containers:
    - name: mysql1009
      image: mysql:5.6
      ports:
        - name: tcp-3306
          containerPort: 3306
          protocol: TCP
      env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_DATABASE
        - name: MYSQL_USER
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_USER
        - name: MYSQL_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: configmap1008
              key: MYSQL_PASSWORD
      resources:
        limits:
          cpu: 50m
          memory: 1Gi
        requests:
          cpu: 50m
          memory: 1Gi
      volumeMounts:
        - name: host-time
          readOnly: true
          mountPath: /etc/localtime
        - name: mysql1009-pvc
          mountPath: /var/lib/mysql
        - name: configmap1009
          mountPath: /etc/mysql/conf.d
          readOnly: true
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
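
After the pod starts, the mounted file can be checked (names from the example above):

kubectl exec -n data-project1 mysql1009 -- cat /etc/mysql/conf.d/wangjinxiong.txt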

3.2.3 Config File Example

Create a ConfigMap from a file (the entire configuration lives in one file):

kubectl create configmap mysql-config --from-file=mysqld.cnf

configmap:

# cat my.cnf 
[mysqld]
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
secure-file-priv= NULL
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

# Custom config should go here
!includedir /etc/mysql/conf.d/

default_authentication_plugin= mysql_native_password

# kubectl create configmap mysql-config0117 --from-file=my.cnf 

pvc:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc0117
  namespace: data-project1
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: nfs-provisioner-01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-synology

The StatefulSet consumes the PVC and the ConfigMap:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: mysql0117
  namespace: data-project1
  labels:
    app: mysql0117
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql0117
  template:
    metadata:
      labels:
        app: mysql0117
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: volume0117
          persistentVolumeClaim:
            claimName: mysql-pvc0117
        - name: volumeconfigmap
          configMap:
            name: mysql-config0117
            items:
            - key: my.cnf
              path: my.cnf
      containers:
        - name: container0117
          image: 'mysql:8.0'
          ports:
            - name: tcp-3306
              containerPort: 3306
              protocol: TCP
            - name: tcp-33060
              containerPort: 33060
              protocol: TCP
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: gtland2021
          resources:
            limits:
              cpu: '1'
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
            - name: volume0117
              mountPath: /var/lib/mysql
            - name: volumeconfigmap
              subPath: my.cnf
              mountPath: /etc/mysql/my.cnf
          imagePullPolicy: IfNotPresent
      nodeSelector:
        node: worker
  serviceName: mysql-svc0117

6 Volume

6.5 StorageClass

6.5.1 csi-driver-nfs

Setting up the NFS server itself is covered elsewhere. Create the NFS provisioner (CSI) and StorageClass:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system 
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      nodeSelector: 
        nfs-server: synology
      containers:
        - name: nfs-client-provisioner
          image: wangjinxiong/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01  # provisioner name, referenced by the StorageClass
            - name: NFS_SERVER
              value: 10.186.100.13   # NFS server address
            - name: NFS_PATH
              value: /volume2/EHR   # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.186.100.13   # NFS server address
            path: /volume2/EHR   # NFS export path

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-synology
provisioner: nfs-provisioner-01
# Supported policies: Delete、 Retain , default is Delete
reclaimPolicy: Delete
#allowVolumeExpansion: true

Create a PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc101902
  namespace: data-project1
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: nfs-provisioner-01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-synology

Verify:

[root@gz-kubesphere-master01 ~]# kubectl get pvc -n data-project1 | grep nfs-synology
pvc1019                Bound    pvc-f25f7e0c-7276-493c-996f-1d19315d58a4   1Gi        RWX            nfs-synology   31m
pvc101902              Bound    pvc-a58d9019-84bf-4026-84ff-d268f78df0f4   2Gi        RWX            nfs-synology   25m

Note: for redundancy, the NFS provisioner can be deployed as a DaemonSet instead:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system 
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner
      tolerations:
      - key: node-type
        operator: Equal
        value: master
        effect: NoSchedule
      containers:
        - name: nfs-client-provisioner
          image: wangjinxiong/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /k8s
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01  # provisioner name, referenced by the StorageClass
            - name: NFS_SERVER
              value: 192.168.3.81   # NFS server address
            - name: NFS_PATH
              value: /nas/k8s   # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.3.81   # NFS server address
            path: /nas/k8s # NFS export path

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-server
#  annotations:
#    storageclass.beta.kubernetes.io/is-default-class: 'true'
provisioner: nfs-provisioner-01
# Supported policies: Delete、 Retain , default is Delete
reclaimPolicy: Delete

6.5.2 csi-driver-smb

6.5.2.1 Installing the SMB CSI Driver

Install via Helm:

helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system

Check that the pods are healthy:

kubectl -n kube-system get pod |grep csi-smb
csi-smb-controller-5fc49df54f-prrmw        3/3     Running   9          39h
csi-smb-controller-5fc49df54f-w68r4        3/3     Running   11         39h
csi-smb-node-d5w96                         3/3     Running   9          39h
csi-smb-node-thxcf                         3/3     Running   9          39h
csi-smb-node-tqd9x                         3/3     Running   9          39h

First create a secret holding the username and password. (This example assumes the Samba server already exists, either as a Windows shared folder or a Linux Samba service.)

kubectl create secret generic smbcreds --from-literal username=rancher --from-literal password="rancher"

6.5.2.2 Creating the Storage Class

Create a storage-class.yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: "//192.168.3.1/sda4"
  csi.storage.k8s.io/provisioner-secret-name: "smbcreds"
  csi.storage.k8s.io/provisioner-secret-namespace: "default"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "default" 
#  createSubDir: "false"  # optional: create a sub dir for new volume
reclaimPolicy: Retain  # only retain is supported
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1001
  - gid=1001

Here csi.storage.k8s.io/node-stage-secret-name refers to the secret created in the previous step.
Create the storage class:

kubectl create -f storage-class.yaml
6.5.2.3 Deploy an application

Create a pod that mounts the Samba volume:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: smb-pvc1019
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "smb.csi.k8s.io"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: "smb"
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx1019
  labels:
    app: nginx1019
spec:
  volumes:
    - name: smb-pvc1019
      persistentVolumeClaim:
        claimName: smb-pvc1019
  containers:
    - name: nginx1019
      image: httpd
      ports:
        - name: tcp-80
          containerPort: 80
          protocol: TCP
      volumeMounts:
        - name: smb-pvc1019
          mountPath: /var/www/html
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
  

Create the application. (The name says nginx but the image is httpd; the name is arbitrary and does not affect anything.)

kubectl apply -f smb-pod.yaml

Check the application status:

# kubectl get pod | grep nginx1019
nginx1019                   1/1     Running   1          21h

As shown below, /var/www/html is mounted from the //192.168.3.1/sda4 Samba share:

# kubectl exec -it nginx1019 -- /bin/bash
root@nginx1019:/usr/local/apache2# df -h
Filesystem                                                   Size  Used Avail Use% Mounted on
overlay                                                      195G  3.5G  192G   2% /
tmpfs                                                         64M     0   64M   0% /dev
tmpfs                                                        911M     0  911M   0% /sys/fs/cgroup
/dev/mapper/centos-root                                      195G  3.5G  192G   2% /etc/hosts
shm                                                           64M     0   64M   0% /dev/shm
//192.168.3.1/sda4/pvc-dd932c8a-21bc-4c6f-b339-ea64a0918a6d  7.7G  5.0G  2.7G  66% /var/www/html
tmpfs                                                        911M   12K  911M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                        911M     0  911M   0% /proc/acpi
tmpfs                                                        911M     0  911M   0% /proc/scsi
tmpfs                                                        911M     0  911M   0% /sys/firmware

References: https://github.com/kubernetes-csi/csi-driver-smb
Another personal blog: https://www.cnblogs.com/zerchin/p/14549849.html (its configuration contains some errors)

6.5.3 ceph-csi-rbd (Helm deployment)

(1) Add the repository

helm repo add ceph-csi https://ceph.github.io/csi-charts

(2) Search the repo for Ceph charts

# helm search repo ceph
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
ceph-csi/ceph-csi-cephfs        3.4.0           v3.4.0          Container Storage Interface (CSI) driver, provi...
ceph-csi/ceph-csi-rbd           3.4.0           v3.4.0          Container Storage Interface (CSI) driver, provi...

(3) Pull the RBD chart

helm pull ceph-csi/ceph-csi-rbd

(4) Unpack

tar zxvf ceph-csi-rbd-3.4.0.tgz

(5) Change the image registry in the values file (the file lives inside the unpacked directory)

sed -i "s/k8s.gcr.io\/sig-storage/wangjinxiong/g" ceph-csi-rbd/values.yaml

(6) Set the Ceph cluster variables

cat <<EOF>>ceph-csi-rbd_values.yml 
csiConfig:
  - clusterID: "319d2cce-b087-4de9-bd4a-13edc7644abc"
    monitors:
      - "192.168.3.61:6789"
      - "192.168.3.62:6789"
      - "192.168.3.63:6789"
EOF

(7) Install the CSI driver

cd ceph-csi-rbd # enter the unpacked chart directory
kubectl create namespace ceph-rbd
helm -n ceph-rbd install ceph-csi-rbd -f ceph-csi-rbd_values.yml .

Check the pods (one provisioner replica stays Pending here, most likely because the chart's default three replicas use pod anti-affinity while this cluster has only two schedulable nodes):

# kubectl get pod -n ceph-rbd
NAME                                        READY   STATUS    RESTARTS   AGE
ceph-csi-rbd-nodeplugin-9mhmc               3/3     Running   0          10m
ceph-csi-rbd-nodeplugin-rrncz               3/3     Running   0          10m
ceph-csi-rbd-provisioner-7ccf65559f-44mqp   7/7     Running   0          10m
ceph-csi-rbd-provisioner-7ccf65559f-f5qsw   7/7     Running   0          10m
ceph-csi-rbd-provisioner-7ccf65559f-mnx5k   0/7     Pending   0          10m

(8) Create a Ceph pool

ceph osd pool create kubernetes
rbd pool init kubernetes

Create a Ceph user:

ceph auth get-or-create \
  client.kube mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=kubernetes' \
  -o /etc/ceph/ceph.client.kube.keyring

Note: this example uses the admin user, so creating a dedicated user is optional.
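
If the dedicated client.kube user is created, the secret in step (9) would reference it instead of admin; a sketch, with the key taken from the keyring generated above:

apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-rbd
stringData:
  userID: kube
  userKey: <output of `ceph auth get-key client.kube`>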

(9) Create the Kubernetes secret

cat <<EOF>>secret.yaml 
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-rbd
stringData:
  userID: admin
  userKey: AQABcKZfMUF9FBAAf9gcYUkS0KW/ptcOpHPWyA==
EOF

Create the StorageClass:

cat <<EOF>>storageClass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: 319d2cce-b087-4de9-bd4a-13edc7644abc
   pool: kubernetes
   fsType: ext4
   imageFormat: "2"
   imageFeatures: "layering"
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: ceph-rbd
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: ceph-rbd
reclaimPolicy: Delete
mountOptions:
   - discard
EOF

Apply:

kubectl apply -f secret.yaml
kubectl apply -f storageClass.yaml

Create a PVC to test:

cat <<EOF>>rbd-pvc1114.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pvc1114
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
kubectl apply -f rbd-pvc1114.yaml

Check:

# kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pvc1114   Bound    pvc-e91f7d67-b19e-48f0-822a-f9e01c32a25c   1Gi        RWO            csi-rbd-sc     26s
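
To confirm the volume is actually usable, a minimal pod sketch (hypothetical name rbd-test) that mounts the PVC:

apiVersion: v1
kind: Pod
metadata:
  name: rbd-test
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data    # the RBD volume appears here as ext4
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-pvc1114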

Reference docs: https://github.com/ceph/ceph-csi/tree/devel/charts/ceph-csi-rbd
https://www.qedev.com/cloud/316999.html
ceph-rbd: https://blog.csdn.net/s7799653/article/details/88303605

6.5.4 ceph-csi-cephfs (Helm deployment)

(1) Add the repository

helm repo add ceph-csi https://ceph.github.io/csi-charts

(2) Search the repo for Ceph charts

# helm search repo ceph
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
ceph-csi/ceph-csi-cephfs        3.4.0           v3.4.0          Container Storage Interface (CSI) driver, provi...
ceph-csi/ceph-csi-rbd           3.4.0           v3.4.0          Container Storage Interface (CSI) driver, provi...

(3) Pull the CephFS chart

helm pull ceph-csi/ceph-csi-cephfs

(4) Unpack

tar zxvf ceph-csi-cephfs-3.4.0.tgz

(5) Change the image registry in the values file

cd ceph-csi-cephfs
cp -p values.yaml values.yaml.bak
sed -i "s/k8s.gcr.io\/sig-storage/registry.aliyuncs.com\/google_containers/g" values.yaml

(6) Set the Ceph cluster variables

cat <<EOF>>ceph-csi-cephfs_values.yml 
csiConfig:
  - clusterID: "319d2cce-b087-4de9-bd4a-13edc7644abc"
    monitors:
      - "192.168.3.61:6789"
      - "192.168.3.62:6789"
      - "192.168.3.63:6789"
EOF

(7) Install the CSI driver

cd ceph-csi-cephfs   # if not already inside the chart directory from step (5)
kubectl create namespace ceph-cephfs
helm -n ceph-cephfs install ceph-csi-cephfs -f ceph-csi-cephfs_values.yml .

Check the pods (again one provisioner replica is Pending, for the same reason as with ceph-csi-rbd above):

# kubectl get pod -n ceph-cephfs
NAME                                           READY   STATUS    RESTARTS   AGE
ceph-csi-cephfs-nodeplugin-sbl6v               3/3     Running   0          3h51m
ceph-csi-cephfs-nodeplugin-vvf87               3/3     Running   0          3h51m
ceph-csi-cephfs-provisioner-7bdfdbfdc6-bdk4l   0/6     Pending   0          3h51m
ceph-csi-cephfs-provisioner-7bdfdbfdc6-vlvst   6/6     Running   0          3h51m
ceph-csi-cephfs-provisioner-7bdfdbfdc6-z9rc8   6/6     Running   0          3h51m

(8) CephFS setup on the Ceph side is omitted here.
Check the CephFS status:

# ceph mds stat
cephfs:1 {0=ceph3=up:active} 2 up:standby

Note: this example uses the admin user, so creating a dedicated user is optional.

(9) Create the Kubernetes secret

cat secret.yaml 
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-cephfs
stringData:
  userID: admin
  userKey: AQABcKZfMUF9FBAAf9gcYUkS0KW/ptcOpHPWyA==
  adminID: admin
  adminKey: AQABcKZfMUF9FBAAf9gcYUkS0KW/ptcOpHPWyA==

Create the StorageClass:

# cat storageClass.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: 319d2cce-b087-4de9-bd4a-13edc7644abc
  fsName: cephfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-cephfs
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

Apply:

 # kubectl apply -f secret.yaml
 # kubectl apply -f storageClass.yml

Create a PVC to test:

# cat cephfs-pvc0119.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pvc0119
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
# kubectl apply -f cephfs-pvc0119.yaml

Check:

# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
cephfs-pvc0119   Bound    pvc-970f7625-9783-4bee-ab4b-8bc1d3efb2cd   1Gi        RWO            csi-cephfs-sc   3h22m
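
Unlike RBD, CephFS volumes can be mounted read-write by many nodes at once. A sketch of a ReadWriteMany PVC against the same StorageClass (hypothetical name cephfs-shared):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-shared
spec:
  accessModes:
    - ReadWriteMany    # RWX works with CephFS
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc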

Deployment references: https://github.com/ceph/ceph-csi/tree/devel/charts/ceph-csi-cephfs https://github.com/ceph/ceph-csi/tree/devel/examples/cephfs
Currently problematic:
https://www.cnblogs.com/wsjhk/p/13710577.html
https://github.com/ceph/ceph-csi/blob/release-v3.1

6.5.5 cephfs-provisioner

Install cephfs-provisioner
Kubernetes has no built-in provisioner for CephFS, so a third-party one is required. First, a quick look at this provisioner's architecture:
![image.png](https://img-blog.csdnimg.cn/img_convert/fa981f6515db6d747502fd7de02a1f3d.png)
It has two main parts:

  • cephfs-provisioner.go
    The core of cephfs-provisioner (the CephFS storage class). It watches CRUD events on PVC resources in Kubernetes and then invokes the cephfs_provisioner.py script on the command line to create PVs.
  • cephfs_provisioner.py
    A Python command-line tool that talks to CephFS. All volume operations on the CephFS side (create, delete, update, query) go through this script.
6.5.5.1 Install the plugin
git clone https://github.com/kubernetes-retired/external-storage.git

The cloned repository contains the following YAML files:

clusterrole.yaml  role.yaml  clusterrolebinding.yaml  deployment.yaml   rolebinding.yaml  serviceaccount.yaml

The YAML files default to the cephfs namespace:

# cat ./* | grep namespace
  namespace: cephfs
    namespace: cephfs
  namespace: cephfs
  namespace: cephfs
  namespace: cephfs
  namespace: cephfs
  namespace: cephfs

Create the namespace, then apply the YAML files from that directory:

kubectl create namespace cephfs
kubectl apply -f .

Check that the installation succeeded:

# kubectl get pod -n cephfs 
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-56f846d54b-6cd77   1/1     Running   0          71m
6.5.5.2 Create the StorageClass

Base64-encode the Ceph admin key; the output goes into ceph-admin-secret.yaml:

ceph auth get-key client.admin | base64
QVFBQmNLWmZNVUY5RkJBQWY5Z2NZVWtTMEtXL3B0Y09wSFBXeUE9PQ==

Create ceph-admin-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: cephfs
data:
  key: "QVFBQmNLWmZNVUY5RkJBQWY5Z2NZVWtTMEtXL3B0Y09wSFBXeUE9PQ=="
type: kubernetes.io/rbd

Create StorageClass.yaml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 192.168.3.61:6789,192.168.3.62:6789,192.168.3.63:6789
    adminId: admin
    adminSecretNamespace: "cephfs"
    adminSecretName: ceph-admin-secret
reclaimPolicy: Delete

Apply:

kubectl apply -f ceph-admin-secret.yaml -f StorageClass.yaml

Check the StorageClass:

# kubectl get sc
NAME         PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cephfs       ceph.com/cephfs    Delete          Immediate           false                  72m
6.5.5.3 Use cephfs-provisioner from a PVC

Create a pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pvc1120
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: "cephfs"

Check the PVC:

# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc1120   Bound    pvc-de119af9-2f2b-4a26-b085-86c0aab3494e   2Gi        RWO            cephfs         71m

6.5.6 local-volume-provisioner

Kubernetes supports local volumes since version 1.10. Workloads (not only StatefulSets) can take full advantage of fast local SSDs, getting better performance than remote volumes (such as CephFS or RBD).
Before local volumes existed, StatefulSets could use local SSDs via hostPath, pinned to a specific node with nodeSelector or nodeAffinity. The problem with hostPath is that administrators must manage the directories on every node by hand, which is inconvenient.
Two kinds of applications are a good fit for local volumes:
• Data caches, where the application can access and process nearby data quickly.
• Distributed storage systems, such as the distributed database Cassandra or distributed file systems like Ceph/Gluster.
Local volumes can be used fully manually by creating the PV, PVC, and Pod by hand (a minimal sketch follows below), or semi-automatically via the external-storage provisioner introduced afterwards.
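
A minimal sketch of the manual approach (hypothetical PV name; it assumes a disk is already mounted at /mnt/disks/vol1 on node k8s21-worker01, and the local-scsi StorageClass defined later in this section):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-scsi
  local:
    path: /mnt/disks/vol1            # pre-mounted disk on the node
  nodeAffinity:                      # local PVs must pin to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s21-worker01

A PVC with storageClassName: local-scsi can then bind to it, and a pod using that PVC is scheduled onto k8s21-worker01.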

This guide installs it via Helm; for other methods see the official docs. Helm instructions: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/helm; project home: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner

6.5.6.1 Download the git source

(A tar.gz of the cloned directory has also been uploaded to the NAS.)

git clone --depth=1 https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git

Enter sig-storage-local-static-provisioner and change the image address in helm/provisioner/values.yaml:

googleimages/local-volume-provisioner:v2.4.0
Note: change k8s.gcr.io to googleimages; the wangjinxiong Docker Hub mirror wangjinxiong/local-volume-provisioner:v2.4.0 also works (already uploaded).

In helm/examples/gke.yaml the default class name is local-scsi:

[root@k8s-master01 sig-storage-local-static-provisioner]# cat helm/examples/gke.yaml
common:
  useNodeNameOnly: true
classes:
- name: local-scsi
  hostDir: "/mnt/disks"
  storageClass: true

Helm install (general form):

helm install -f <path-to-your-values-file> <release-name> --namespace <namespace> ./helm/provisioner
6.5.6.2 Helm install steps
cd sig-storage-local-static-provisioner
helm install -f helm/examples/gke.yaml local-volume-provisioner --namespace kube-system ./helm/provisioner

Check the pods and StorageClasses:

[root@k8s-master01 sig-storage-local-static-provisioner]# kubectl get pod -n kube-system | grep local
local-volume-provisioner-5tcx5            1/1     Running   1          10h
local-volume-provisioner-lsd7d            1/1     Running   1          10h
local-volume-provisioner-ltwxr            1/1     Running   1          10h
[root@k8s-master01 sig-storage-local-static-provisioner]# kubectl get ds -n kube-system | grep local
local-volume-provisioner       3         3         3       3            3           kubernetes.io/os=linux     10h
local-volume-provisioner-win   0         0         0       0            0           kubernetes.io/os=windows   10h
[root@k8s-master01 sig-storage-local-static-provisioner]# kubectl get sc
NAME                   PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cephfs                 ceph.com/cephfs                Delete          Immediate              false                  42d
csi-rbd-sc             rbd.csi.ceph.com               Delete          Immediate              false                  48d
local-scsi             kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  10h
nfs-server (default)   nfs-provisioner-01             Delete          Immediate              false                  14d
6.5.6.3 Mount the disks

The provisioner itself does not create local volumes. Instead, the provisioner pod on each node dynamically "discovers" mount points under the discovery directory (the hostDir configured in the values file, /mnt/disks in this example). When a node's provisioner finds a mount point there, it creates a PV whose local.path is that mount point and sets the PV's nodeAffinity to that node.
If dedicated disks are unavailable or inconvenient, mount binds can be used instead; run the following script on every node (directories adjusted to match the /mnt/disks hostDir above):

#!/bin/bash
for i in $(seq 1 5); do
  mkdir -p /mnt/disks-bind/vol${i}
  mkdir -p /mnt/disks/vol${i}
  mount --bind /mnt/disks-bind/vol${i} /mnt/disks/vol${i}
done
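
Bind mounts made this way do not survive a reboot. To persist them, fstab entries of the following form could be added on each node (a sketch for vol1; repeat for vol2..vol5):

/mnt/disks-bind/vol1 /mnt/disks/vol1 none bind 0 0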

Check the directories:

[root@k8s-master01 mnt]# ls -ls
total 0
0 drwxr-xr-x 7 root root 66 11 22:24 disks
0 drwxr-xr-x 7 root root 66 11 22:24 disks-bind
[root@k8s-master01 mnt]# ls disks
vol1  vol2  vol3  vol4  vol5
[root@k8s-master01 mnt]# ls disks-bind/
vol1  vol2  vol3  vol4  vol5
[root@k8s-master01 mnt]# 

A short while after running the script, querying the PVs shows they were created automatically. One PV below is already Bound because these notes were compiled after later steps; normally they would all be Available.

[root@k8s-master01 mnt]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                     STORAGECLASS   REASON   AGE
local-pv-146f6f49                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-204215d7                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-304039a2                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-51a85a66                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-5e9db7ce                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-5f118901                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-6faed31c                          194Gi      RWO            Delete           Bound       default/local-vol-web-0   local-scsi              10h
local-pv-98f553c1                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-b01c5d8f                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-c2117cdf                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-c934dbdd                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-d20a3b6                           194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-d2e2cbdc                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-d3e79a2c                          194Gi      RWO            Delete           Available                             local-scsi              10h
local-pv-e9a54db                           194Gi      RWO            Delete           Available                             local-scsi              9h
pvc-054f3494-c65d-499f-ab33-5a47cc6be764   10Gi       RWX            Delete           Bound       demo/demo-rabbitmq-pvc    nfs-server              13d
pvc-67642ba8-5716-47d0-8c9a-2ac744cc43c6   1Gi        RWX            Delete           Bound       demo/demo1218             cephfs                  14d
pvc-7fc0a4aa-12aa-44d7-9dc9-eedec3ff5677   2Gi        RWO            Delete           Bound       default/cephfs-pvc1120    cephfs                  42d
pvc-840ac596-f5e2-4e15-8246-0d33857b3a77   10Gi       RWX            Delete           Bound       demo/demo-redis-pvc       nfs-server              13d
pvc-8e1f7a47-3207-46cc-8328-5c11b7aabac3   100Gi      RWX            Delete           Bound       demo/demo-mysql-pvc       nfs-server              13d
pvc-d3372ebb-6d64-42da-89a8-65b6ca14f759   1Gi        RWX            Delete           Bound       demo/demo1218-2           nfs-server              14d
pvc-dee11f5f-3024-4b0f-b7a8-b529e4e723be   2Gi        RWO            Delete           Bound       default/cephfs-pvc1121    cephfs                  42d
pvc-e91f7d67-b19e-48f0-822a-f9e01c32a25c   1Gi        RWO            Delete           Bound       default/rbd-pvc1114       csi-rbd-sc              48d

Write a StatefulSet that consumes the local volume:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
          - name: local-vol
            mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: local-vol
      spec:
        storageClassName: local-scsi
        accessModes:
        -  ReadWriteOnce
        resources:
          requests:
            storage: 1Gi 

The PVC is bound:

# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc1120    Bound    pvc-7fc0a4aa-12aa-44d7-9dc9-eedec3ff5677   2Gi        RWO            cephfs         42d
cephfs-pvc1121    Bound    pvc-dee11f5f-3024-4b0f-b7a8-b529e4e723be   2Gi        RWO            cephfs         42d
local-vol-web-0   Bound    local-pv-6faed31c                          194Gi      RWO            local-scsi     10h
rbd-pvc1114       Bound    pvc-e91f7d67-b19e-48f0-822a-f9e01c32a25c   1Gi        RWO            csi-rbd-sc     48d

Other reference blogs: https://www.jianshu.com/p/436945a25e9f
https://blog.csdn.net/weixin_42758299/article/details/119102461
https://blog.csdn.net/cpongo1/article/details/89549139
https://www.freesion.com/article/5811953555/

6.5.7 OpenEBS for dynamic Local PV persistent storage

OpenEBS (https://openebs.io) is open-source, container-based storage that emulates block storage such as AWS EBS or Alibaba Cloud disks. It is built on the CAS (Container Attached Storage) idea: storage itself runs as microservices, orchestrated by Kubernetes. Architecturally, each volume's controller is a separate pod co-located on the same node as the application pod, and the volume's data is managed by multiple pods.

The official docs probably explain this more clearly than I can:

https://docs.openebs.io/#replicated-volumes-aka-highly-available-volumes

(1) Install and start the iSCSI initiator on all nodes

yum install iscsi-initiator-utils -y
systemctl enable --now iscsid
systemctl start iscsid.service
systemctl status iscsid.service

(2) Install OpenEBS

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
kubectl get pods -n openebs

Check the pods:

# kubectl get pod -n openebs
NAME                                            READY   STATUS    RESTARTS   AGE
openebs-localpv-provisioner-7fc6f78968-6grdz    1/1     Running   0          99m
openebs-ndm-cluster-exporter-5c5ddddc89-45vrp   1/1     Running   0          99m
openebs-ndm-l7kcz                               1/1     Running   0          99m
openebs-ndm-node-exporter-hxpd6                 1/1     Running   0          99m
openebs-ndm-node-exporter-wt7nd                 1/1     Running   0          99m
openebs-ndm-operator-56877788bf-dwrd6           1/1     Running   0          99m
openebs-ndm-pzq79                               1/1     Running   0          99m

By default OpenEBS also installs some built-in StorageClass objects:

# kubectl get sc
NAME               PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device     openebs.io/local      Delete          WaitForFirstConsumer   false                  100m
openebs-hostpath   openebs.io/local      Delete          WaitForFirstConsumer   false                  100m

Create a PVC directly against the built-in openebs-hostpath StorageClass:

# cat localpvc0119.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc0119
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
kubectl apply -f localpvc0119.yaml

The PVC stays Pending because this StorageClass uses delayed binding (volumeBindingMode: WaitForFirstConsumer); it only binds once a pod consumes the PVC:

# kubectl get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
cephfs-pvc0119   Bound     pvc-970f7625-9783-4bee-ab4b-8bc1d3efb2cd   1Gi        RWO            csi-cephfs-sc      5h30m
local-pvc0119    Pending                                                                        openebs-hostpath   2m44s

Create a deployment to bind it:

cat local-pvc-pod.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gb
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: local-pvc0119

Once the pods are running, the PVC is bound to an automatically generated PV:

# kubectl get pod 
NAME                        READY   STATUS    RESTARTS   AGE
nginx-gb-84476f8845-kgkzh   1/1     Running   0          18m
nginx-gb-84476f8845-wssr6   1/1     Running   0          18m
# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
local-pvc0119    Bound    pvc-4c6b3336-3365-43a0-9952-9dad32b742f4   3Gi        RWO            openebs-hostpath   20m

Check the PV's path:

# kubectl describe pv pvc-4c6b3336-3365-43a0-9952-9dad32b742f4
Name:              pvc-4c6b3336-3365-43a0-9952-9dad32b742f4
Labels:            openebs.io/cas-type=local-hostpath
Annotations:       pv.kubernetes.io/provisioned-by: openebs.io/local
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      openebs-hostpath
Status:            Bound
Claim:             default/local-pvc0119
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          3Gi
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [k8s21-worker01]
Message:           
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /var/openebs/local/pvc-4c6b3336-3365-43a0-9952-9dad32b742f4
Events:    <none>

Exec into the pod and write some data:

[root@k8s21-master01 ~]# kubectl exec -it nginx-gb-84476f8845-kgkzh -- sh
/ # echo "wangjinxiong" > /usr/share/nginx/html/index.html

On k8s21-worker01, /var/openebs/local/pvc-4c6b3336-3365-43a0-9952-9dad32b742f4 now contains the data:

[root@k8s21-worker01 openebs]# cd local/
[root@k8s21-worker01 local]# ls
pvc-4c6b3336-3365-43a0-9952-9dad32b742f4
[root@k8s21-worker01 local]# cd pvc-4c6b3336-3365-43a0-9952-9dad32b742f4/
[root@k8s21-worker01 pvc-4c6b3336-3365-43a0-9952-9dad32b742f4]# pwd
/var/openebs/local/pvc-4c6b3336-3365-43a0-9952-9dad32b742f4
[root@k8s21-worker01 pvc-4c6b3336-3365-43a0-9952-9dad32b742f4]# ls
index.html
[root@k8s21-worker01 pvc-4c6b3336-3365-43a0-9952-9dad32b742f4]# cat index.html 
wangjinxiong

Reference blog: https://blog.csdn.net/weixin_42562106/article/details/112347574

6.6 Setting the default StorageClass

For example:

# kubectl get sc
NAME         PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cephfs       ceph.com/cephfs    Delete          Immediate           false                  22d
csi-rbd-sc   rbd.csi.ceph.com   Delete          Immediate           false                  28d

Edit cephfs and set it as the default SC:

Add under annotations: storageclass.beta.kubernetes.io/is-default-class: 'true' (on newer clusters the non-beta key storageclass.kubernetes.io/is-default-class is preferred).

#kubectl edit sc cephfs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
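
The same change can also be made non-interactively; kubectl patch sets the annotation in one line (shown with the current, non-beta key):

kubectl patch storageclass cephfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'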

Check:

# kubectl get sc
NAME            PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cephfs (default) ceph.com/cephfs   Delete          Immediate           false                  22d
csi-rbd-sc      rbd.csi.ceph.com   Delete          Immediate           false                  28d

6.7 PV lifecycle

AccessModes:
AccessModes control how a PV may be accessed, describing the access rights an application has to the storage resource. The modes are:
• ReadWriteOnce (RWO): read-write, but mountable by a single node only
• ReadOnlyMany (ROX): read-only, mountable by multiple nodes
• ReadWriteMany (RWX): read-write, mountable by multiple nodes

RECLAIM POLICY:
PVs currently support three policies:
• Retain: keep the data; an administrator must clean it up manually
• Recycle: scrub the data in the PV, roughly equivalent to running rm -rf on the volume's directory
• Delete: delete the backing storage together with the PV

STATUS:
During its lifecycle a PV can be in four different phases:
• Available: not yet bound to any PVC
• Bound: bound to a PVC
• Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster
• Failed: automatic reclamation of the PV failed
A minimal PV spec tying these fields together follows below.
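
A sketch of a static NFS-backed PV showing where the access mode and reclaim policy live (server and path are hypothetical, reusing the NFS server from earlier sections):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany                        # RWX: read-write, many nodes
  persistentVolumeReclaimPolicy: Retain    # keep data for manual cleanup
  nfs:
    server: 192.168.3.81                   # hypothetical NFS server
    path: /nas/k8s/pv-demo                 # hypothetical export path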

7 helm

7.1 Installing Helm

7.1.1 Download

  Download from: [https://github.com/helm/helm/releases](https://github.com/helm/helm/releases)
     Generally pick the latest release; at the time of writing it is 3.6.

![图片.png](https://img-blog.csdnimg.cn/img_convert/33b73022c3b21523fdf31f2514f17d9e.png)

7.1.2 Install

tar -zxvf helm-v3.6.3-linux-amd64.tar.gz # unpack the archive
# put the helm binary on the PATH
mv linux-amd64/helm /usr/local/bin/helm
helm help # verify

7.2 Basic Helm operations

7.2.1 Add a repo

helm repo add [NAME] [URL] [flags]

helm repo add aliyuncs https://apphub.aliyuncs.com

Commonly used repos:

minio           https://helm.min.io/                     
aliyuncs        http://mirror.azure.cn/kubernetes/charts/
elastic         https://helm.elastic.co                  
bitnami         https://charts.bitnami.com/bitnami       
stable          http://mirror.azure.cn/kubernetes/charts 
ceph-csi        https://ceph.github.io/csi-charts        
harbor          https://helm.goharbor.io                 
azure           http://mirror.azure.cn/kubernetes/charts/

7.2.2 Search for available charts

helm search repo [keyword] [flags]

helm search repo nginx-ingress

The matching charts are listed:

![图片.png](https://img-blog.csdnimg.cn/img_convert/39f394b735cc638d5deea742337d5553.png)

7.2.3 Inspect a chart's values

helm show values [CHART] [flags]

helm show values aliyuncs/nginx-ingress

7.2.4 Override values and install

helm install [NAME] [CHART] [flags]

Online install:

helm install nginx-ingress aliyuncs/nginx-ingress --set controller.service.type=NodePort --set controller.service.nodePorts.http=30080 --set controller.service.nodePorts.https=30443 --set controller.kind=DaemonSet -n nginx-ingress

Or unpack a local tar into a directory and install from inside it:

helm install nginx-ingress --set controller.service.type=NodePort --set controller.service.nodePorts.http=30080 --set controller.service.nodePorts.https=30443 --set controller.kind=DaemonSet -n nginx-ingress .

7.2.5 Download a chart

helm pull [chart URL | repo/chartname] […] [flags]

helm pull aliyuncs/nginx-ingress --untar

Without --untar this downloads the tar package; with --untar it downloads and unpacks into a directory.

7.2.6 List installed releases

helm [command]

helm ls -n nginx-ingress

Output:

![图片.png](https://img-blog.csdnimg.cn/img_convert/76e161dd04868987b6e1b8a4cecb76fe.png)

7.2.7 Uninstall

helm uninstall RELEASE_NAME [...] [flags]

helm uninstall nginx-ingress -n nginx-ingress

8 Middleware

8.1 mysql

8.1.1 Single-node deployment

The PVC YAML:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: it2-mysql-pvc1
  namespace: it2
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: nfs-provisioner-01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi
  storageClassName: nfs-synology

The StatefulSet YAML:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: it2-mysql1
  namespace: it2
  labels:
    app: it2-mysql1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: it2-mysql1
  template:
    metadata:
      labels:
        app: it2-mysql1
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: volume-sxqu38
          persistentVolumeClaim:
            claimName: it2-mysql-pvc1
      containers:
        - name: container-iikqeq
          image: 'mysql:8.0'
          ports:
            - name: tcp-3306
              containerPort: 3306
              protocol: TCP
            - name: tcp-33060
              containerPort: 33060
              protocol: TCP
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: gtland2021
          resources:
            limits:
              cpu: '1'
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: host-time
              readOnly: true
              mountPath: /etc/localtime
            - name: volume-sxqu38
              mountPath: /var/lib/mysql
          imagePullPolicy: IfNotPresent
      nodeSelector:
        node: worker
  serviceName: it2-mysql1-dr31


The Service YAML:

kind: Service
apiVersion: v1
metadata:
  name: it2-mysql1-dr31
  namespace: it2
  labels:
    app: it2-mysql1
spec:
  ports:
    - name: tcp-3306
      protocol: TCP
      port: 3306
      targetPort: 3306
      nodePort: 31717
    - name: tcp-33060
      protocol: TCP
      port: 33060
      targetPort: 33060
      nodePort: 30783
  selector:
    app: it2-mysql1
  clusterIP: 10.254.231.45
  type: NodePort

Database operations:

# list users:
use mysql;
select host,user from user;

# create a user and grant privileges:
CREATE USER 'zhuweipeng'@'%' IDENTIFIED BY 'zhuweipeng2021';
GRANT SELECT,INSERT,DELETE,UPDATE ON utopa_assetctr.* TO 'zhuweipeng'@'%';
GRANT SELECT,INSERT,DELETE,UPDATE ON utopa_advert.* TO 'zhuweipeng'@'%';
GRANT SELECT,INSERT,DELETE,UPDATE ON utopa_devicectr.* TO 'zhuweipeng'@'%';
flush privileges;

8.2 ELK

8.2.1 Add the Helm repos

helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts

8.2.2 Install

elasticsearch

helm pull elastic/elasticsearch --version 7.6.2
tar zxvf elasticsearch-7.6.2.tgz
cd elasticsearch
helm install es1024 -n usopa-demo1 .

Note: for ES you usually want to change the StorageClass in the values file:

cat values.yaml
........
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: nfs-synology    # change the SC
  resources:
    requests:
      storage: 30Gi

kibana

helm pull elastic/kibana --version 7.6.2
tar zxvf kibana-7.6.2.tgz
cd kibana
helm install kn1024 -n usopa-demo1 .

fluentd

helm pull fluent/fluentd
tar zxvf fluentd-0.2.12.tgz
cd fluentd
helm install fl1024 -n usopa-demo1 .

8.3 rabbitmq

8.3.1 Single-node deployment

it2-rabbitmq1-pvc1.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: usopa-demo1-rabbitmq1-pvc1
  namespace: usopa-demo1
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: nfs-provisioner-01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-server

it2-rabbitmq1-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: usopa-demo1-rabbitmq1
  name: usopa-demo1-rabbitmq1
  namespace: usopa-demo1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: usopa-demo1-rabbitmq1
  template:
    metadata:
      labels:
        name: usopa-demo1-rabbitmq1
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: data
          persistentVolumeClaim:
            claimName: usopa-demo1-rabbitmq1-pvc1
      nodeSelector:
        node: worker
      containers:
      - env:
        - name: RABBITMQ_DEFAULT_USER
          value: "guest"     # default username
        - name: RABBITMQ_DEFAULT_PASS
          value: "guest"     # default password
        image: rabbitmq:3.6.11-management
        imagePullPolicy: IfNotPresent
        name: rabbitmq
        ports:
        - containerPort: 15672
          name: manager
        - containerPort: 5672
          name: broker
        resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 512Mi
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
        - name: host-time
          readOnly: true
          mountPath: /etc/localtime

it2-rabbitmq1-svc.yaml

apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      name: usopa-demo1-rabbitmq1-svc1
    name: usopa-demo1-rabbitmq1-svc1
    namespace: usopa-demo1
  spec:
    ports:
    - name: http
      port: 15672
      protocol: TCP
      targetPort: 15672
    selector:
      name: usopa-demo1-rabbitmq1 
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      name: usopa-demo1-rabbitmq-svc2
    name: usopa-demo1-rabbitmq1-svc2
    namespace: usopa-demo1
  spec:
    ports:
    - name: http
      port: 5672
      protocol: TCP
      targetPort: 5672
    selector:
      name: usopa-demo1-rabbitmq1
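
Both services are of type ClusterIP, so the management UI is not reachable from outside the cluster. For a quick look it can be forwarded locally with standard kubectl:

kubectl port-forward svc/usopa-demo1-rabbitmq1-svc1 15672:15672 -n usopa-demo1
# then browse to http://127.0.0.1:15672 and log in with guest/guest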

8.3.2 RabbitMQ commands

Add a user:

# rabbitmqctl add_user openstack 21vianet
Creating user "openstack"

Grant permissions:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"

Promote the user to administrator:

# rabbitmqctl set_user_tags openstack administrator 
Setting tags for user "openstack" to [administrator]

8.4 redis

8.4.1 Single-node deployment

it2-redis-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: it2-redis-config
  namespace: it2
data:
  redis.conf: |
        bind 0.0.0.0
        port 6379
        requirepass gtland
        pidfile .pid
        appendonly yes
        cluster-config-file nodes-6379.conf
        pidfile /data/middleware-data/redis/log/redis-6379.pid
        cluster-config-file /data/middleware-data/redis/conf/redis.conf
        dir /data/middleware-data/redis/data/
        logfile "/data/middleware-data/redis/log/redis-6379.log"
        cluster-node-timeout 5000
        protected-mode no

cat it2-redis-pvc1.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: it2-redis-pvc1
  namespace: it2
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: nfs-provisioner-01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-synology

it2-redis-sts.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: it2-redis1
  namespace: it2
spec:
  replicas: 1
  serviceName: it2-redis1-svc
  selector:
    matchLabels:
      name: it2-redis1
  template:
    metadata:
      labels:
        name: it2-redis1
    spec:
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
            type: ''
        - name: data
          persistentVolumeClaim:
            claimName: it2-redis-pvc1
        - name: redis-config
          configMap:
            name: it2-redis-config
      nodeSelector:
        node: worker
      initContainers:
      - name: init-redis
        image: busybox
        command: ['sh', '-c', 'mkdir -p /data/middleware-data/redis/log/;mkdir -p /data/middleware-data/redis/conf/;mkdir -p /data/middleware-data/redis/data/']
        volumeMounts:
        - name: data
          mountPath: /data/middleware-data/redis/
        resources:
            limits:
              cpu: 250m
              memory: 64Mi
            requests:
              cpu: 125m
              memory: 32Mi
      containers:
      - name: redis
        image: redis:5.0.6
        imagePullPolicy: IfNotPresent
        command:
        - sh
        - -c
        - "exec redis-server /data/middleware-data/redis/conf/redis.conf"
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 1Gi
        volumeMounts:
        - name: redis-config
          mountPath: /data/middleware-data/redis/conf/
        - name: data
          mountPath: /data/middleware-data/redis/
        - name: host-time
          readOnly: true
          mountPath: /etc/localtime

it2-redis-svc.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    name: it2-redis1-svc
  name: it2-redis1-svc
  namespace: it2
spec:
  type: NodePort
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    name: it2-redis1

8.4.2 Redis commands

Log in to the server:

redis-cli -h 10.186.103.101 -p 31684 -a "gtland"

The service is exposed as a NodePort: the node port here was randomly assigned as 31684, while the port inside the container remains 6379.

Cluster login:

redis-cli -h 10.186.103.101 -p 31684 -a "gtland" -c # -c enables cluster mode

Work with data:

# redis-cli -h 127.0.0.1 -p 6379
10.186.103.101:6379> ping
PONG
10.186.103.101:6379> set allen good
OK
10.186.103.101:6379> get allen
"good"
10.186.103.101:6379> get allenwang
(nil)
10.186.103.101:6379> keys *
1) "allen"
2) "counter:__rand_int__"
3) "name"
4) "myset:__rand_int__"
5) "mylist"
6) "key:__rand_int__"

8.4.3 Master-slave Helm deployment

(1) Add a new remote repository

helm repo add azure http://mirror.azure.cn/kubernetes/charts/

(2) List and update the repos

 helm repo list
 helm repo update

(3) Search for Redis charts

# helm search repo redis
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION                                       
aliyuncs/prometheus-redis-exporter      3.5.1           1.3.4           DEPRECATED Prometheus exporter for Redis metrics  
aliyuncs/redis                          10.5.7          5.0.7           DEPRECATED Open source, advanced key-value stor...
aliyuncs/redis-ha                       4.4.6           5.0.6           DEPRECATED - Highly available Kubernetes implem...
azure/prometheus-redis-exporter         3.5.1           1.3.4           DEPRECATED Prometheus exporter for Redis metrics  
azure/redis                             10.5.7          5.0.7           DEPRECATED Open source, advanced key-value stor...
azure/redis-ha                          4.4.6           5.0.6           DEPRECATED - Highly available Kubernetes implem...
bitnami/redis                           15.6.4          6.2.6           Open source, advanced key-value store. It is of...
bitnami/redis-cluster                   7.0.13          6.2.6           Open source, advanced key-value store. It is of...
stable/prometheus-redis-exporter        3.5.1           1.3.4           DEPRECATED Prometheus exporter for Redis metrics  
stable/redis                            10.5.7          5.0.7           DEPRECATED Open source, advanced key-value stor...
stable/redis-ha                         4.4.6           5.0.6           DEPRECATED - Highly available Kubernetes implem...
aliyuncs/sensu                          0.2.5           0.28            DEPRECATED Sensu monitoring framework backed by...
azure/sensu                             0.2.5           0.28            DEPRECATED Sensu monitoring framework backed by...
stable/sensu                            0.2.5           0.28            DEPRECATED Sensu monitoring framework backed by...

(4) Choose stable/redis and pull the chart

helm pull stable/redis --untar

Enter the directory and edit values.yaml:

cd redis
vi values.yaml
nodeSelector: {"node": "worker"}     # schedule onto nodes with this label
storageClass: "nfs-synology"         # change the SC

(5) Deploy:

cd redis
helm install redis -n it2-prod .

The install output includes:

Run a Redis pod that you can use as a client:
kubectl run --namespace it2-prod redis-client --rm --tty -i --restart='Never' \
 --env REDIS_PASSWORD=$REDIS_PASSWORD \
--image docker.io/bitnami/redis:5.0.7-debian-10-r32 -- bash
Connect using the Redis CLI:
redis-cli -h redis-master -a $REDIS_PASSWORD
redis-cli -h redis-slave -a $REDIS_PASSWORD
To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace it2-prod svc/redis-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD

Work with Redis:

# kubectl exec -it redis-master-0 -n it2-prod -- /bin/bash
I have no name!@redis-master-0:/$ 
I have no name!@redis-master-0:/$ echo $REDIS_PASSWORD               
RuWKe7jSjZ
I have no name!@redis-master-0:/$ redis-cli -h 127.0.0.1 -p 6379 -a RuWKe7jSjZ
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> set wangjinxiong good
OK
127.0.0.1:6379> get wangjinxiong
"good"
127.0.0.1:6379> 

Reference blog: https://www.cnblogs.com/wangzhangtao/p/12593812.html

8.5 nacos

8.5.1 Deployment

(1) Deploy with YAML files (two nodes)

nacos.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: it2-nacos-headless
  namespace: it2
  labels:
    name: it2-nacos-headless
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port, kept for 1.4.x compatibility
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  selector:
    name: it2-nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: it2
data:
  mysql.service.name: "it2-mysql1-dr31"
  mysql.db.name: "nacos"                          # nacos database name
  mysql.port: "3306"                              # nacos database port
  mysql.user: "nacos"                          # nacos database user
  mysql.password: "nacos"                      # nacos database password
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: it2-nacos
  namespace: it2
spec:
  serviceName: it2-nacos-headless
  replicas: 2
  selector:
    matchLabels:
      name: it2-nacos
  template:
    metadata:
      labels:
        name: it2-nacos
    spec:
      nodeSelector:
        node: worker
      containers:
        - name: k8snacos
          imagePullPolicy: IfNotPresent
          image: nacos/nacos-server:1.4.2
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "2"    # must match spec.replicas and the NACOS_SERVERS list below
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.service.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: NACOS_SERVERS
              value: "it2-nacos-0.it2-nacos-headless.it2.svc.cluster.local:8848 it2-nacos-1.it2-nacos-headless.it2.svc.cluster.local:8848"

nacos-svc-nodeport.yaml

kind: Service
apiVersion: v1
metadata:
  name: it2-nacos-svc
  namespace: it2
  labels:
    name: it2-nacos-svc
spec:
  ports:
    - name: server
      protocol: TCP
      port: 8848
      targetPort: 8848
  selector:
    name: it2-nacos
  type: NodePort

(2) Download https://github.com/alibaba/nacos/releases/tag/1.4.2
Unpack nacos-server-1.4.2.tar.gz and copy nacos-mysql.sql into the MySQL container.
![image.png](https://img-blog.csdnimg.cn/img_convert/e1db6c727477e332340fe946da3c3d11.png)

kubectl cp nacos-mysql.sql it2/it2-mysql1-0:/

(3) Nacos requires MySQL to be set up beforehand; the YAML in step (1) carries the MySQL connection settings:

# enter the database
mysql -uroot -pgtland2021
# create the database
CREATE DATABASE `nacos`;
# create user nacos with password 'nacos'
create user 'nacos'@'%' identified by 'nacos';
# grant user nacos read/write on the nacos database
grant all privileges on nacos.* to 'nacos'@'%' with grant option;
# reload privileges
flush privileges;

(4) Import the SQL

use nacos;
source nacos-mysql.sql;

(5) Open http://10.186.103.101:30145/nacos; username/password: nacos/nacos
![图片.png](https://img-blog.csdnimg.cn/img_convert/6f2beec7678fad8d5705e9460674a5ff.png)

17 Important links

Official Kubernetes add-on YAMLs: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons
