K8s Notes 3: Common Kubernetes Resource Objects in Detail

Tip: kubectl api-resources # lists each Kubernetes resource's short name, its apiVersion, whether it is namespaced, and its KIND.

1. How pods resolve domain names through CoreDNS

# Create test pods in the default and myserver namespaces respectively
[root@k8s-master1 k8s-Resource-N79]# kubectl run net-test2 --image=centos:7.9.2009 sleep 360000
[root@k8s-master1 k8s-Resource-N79]# kubectl exec -it net-test2 bash
[root@net-test2 /]# yum install -y net-tools bind-utils
[root@net-test2 /]# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.100.0.2 # the ClusterIP of the CoreDNS Service
options ndots:5
# the search domains are appended one by one when nslookup resolves a short name
[root@net-test2 /]# nslookup kubernetes
# kubernetes is the Service in the default namespace; resolving the short name kubernetes is completed via the search list in resolv.conf to the FQDN kubernetes.default.svc.cluster.local, which resolves to 10.100.0.1


# Resolution across namespaces
[root@k8s-master1 ~]# kubectl run net-test3 -n myserver --image=centos:7.9.2009 sleep 360000 
[root@k8s-master1 ~]# kubectl exec -it net-test3 -n myserver bash
[root@net-test3 /]# yum install -y net-tools bind-utils
[root@net-test3 /]# cat /etc/resolv.conf 
search myserver.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.100.0.2
options ndots:5
# From the myserver namespace, resolving a Service in the default namespace needs at least one extra domain label (the namespace), otherwise resolution fails.
# The FQDN of the Service is kubernetes.default.svc.cluster.local
# i.e. <service-name>.<namespace>.svc.cluster.local
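
A quick way to confirm this from inside net-test3 (a sketch; it relies only on the Service and CoreDNS address shown above):

```shell
# inside net-test3, which runs in the myserver namespace
nslookup kubernetes                               # fails: expands to kubernetes.myserver.svc.cluster.local
nslookup kubernetes.default                       # works: the search list completes it to kubernetes.default.svc.cluster.local
nslookup kubernetes.default.svc.cluster.local     # works: fully qualified name
```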


2. The RC, RS, and Deployment controllers

First-generation replica controller (ReplicationController, RC): keeps a specified number of pods running; when a pod fails, is deleted, or terminates, it is automatically replaced by a new pod with a new name. In this first generation the selector only supports = and !=, and matching is purely by label, so the same label should not be reused across unrelated pods.

[root@k8s-master1 case3-controller]# cat 1-rc.yml 
apiVersion: v1  
kind: ReplicationController  
metadata:  
  name: ng-rc
spec:  
  replicas: 2 # number of replicas
  selector:  
    app: ng-rc-80 
    #app1: ng-rc-81
  template:   
    metadata:  
      labels:  
        app: ng-rc-80
        #app1: ng-rc-81
    spec:  
      containers:  
      - name: ng-rc-80 
        image: nginx  
        ports:  
        - containerPort: 80 
[root@k8s-master1 case3-controller]# kubectl apply -f 1-rc.yml 


Second-generation replica controller (ReplicaSet, RS): on top of RC, the selector also supports in and notin (via matchExpressions) in addition to = and !=, and is written under matchLabels and/or matchExpressions (when both are given, all requirements must match). Pod names are the RS name (frontend) followed by a random suffix.

[root@k8s-master1 case3-controller]# cat 2-rs.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1 
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels: 
      app: ng-rs-80
    #matchExpressions:  # in or notin
    #  - {key: app, operator: In, values: [ng-rs-80,ng-rs-81]}  # match pods whose label key app has a value in [ng-rs-80,ng-rs-81]
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:  
      containers:  
      - name: ng-rs-80 
        image: nginx  
        ports:  
        - containerPort: 80
[root@k8s-master1 case3-controller]# kubectl apply -f 2-rs.yml 


Deployment controller: on top of the RC/RS functionality it also supports rolling updates, rollbacks, and more, and has been the standard workload controller since Kubernetes 1.9 (apps/v1). Pod names look like nginx-deployment-577df84c7c-fz5wc, i.e. <deployment-name>-<replicaset-hash>-<random suffix>. A Deployment automatically creates a ReplicaSet and maintains the replicas through it, while the Deployment itself handles rolling updates and rollbacks.

With, say, 3 replicas managed through the RS, updating the image keeps the original RS and creates a new RS to start the new pods; once the new pods are up, the 3 old pods are all reclaimed, but the old RS itself is kept rather than deleted so that it can be used for a rollback.

[root@k8s-master1 case3-controller]# cat 3-deployment.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    #app: ng-deploy-80 #rc
    matchLabels: #rs or deployment
      app: ng-deploy-80
    
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.2
        ports:
        - containerPort: 80
 [root@k8s-master1 case3-controller]# kubectl apply -f 3-deployment.yml 


# Change the image
[root@k8s-master1 case3-controller]# cat 3-deployment.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    #app: ng-deploy-80 #rc
    matchLabels: #rs or deployment
      app: ng-deploy-80
    
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
[root@k8s-master1 case3-controller]# kubectl apply -f 3-deployment.yml 
# During the rollout, new pods are created while old ones are terminated in batches


# There are now two ReplicaSets: nginx-deployment-577df84c7c is the previous RS, and nginx-deployment-6c64cd96cb is the newly created one


# Roll back, i.e. undo the last rollout; the Deployment switches back to the previous RS
[root@k8s-master1 case3-controller]# kubectl rollout undo deployment/nginx-deployment -n default
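
Beyond a plain undo, the rollout history can be listed and a specific revision restored; a sketch (the revision numbers are illustrative):

```shell
kubectl rollout history deployment/nginx-deployment -n default                  # list recorded revisions
kubectl rollout history deployment/nginx-deployment -n default --revision=1     # details of a single revision
kubectl rollout undo deployment/nginx-deployment -n default --to-revision=1     # roll back to that revision
kubectl rollout status deployment/nginx-deployment -n default                   # wait until the rollout finishes
```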


3. Access flow of a NodePort Service (with diagram)

Service overview: a pod gets a new IP whenever it is recreated, so pods should not reach each other by IP; instead, a Service selects its endpoints dynamically by label. kube-proxy watches the kube-apiserver, and whenever Service information changes it regenerates the corresponding load-balancing rules, so the Service always reflects the latest state. A Service is itself addressable by a DNS name.

Three kube-proxy proxy modes:

 userspace: before Kubernetes 1.1; essentially unused today.
 iptables: the default from 1.2 up to 1.11; fine when traffic is light and the rule set is small, but rules are matched top-down for every connection, which wastes resources at scale.
 ipvs: from Kubernetes 1.11 on; if IPVS is not enabled, kube-proxy automatically falls back to iptables (a quick check of the active mode is sketched below).
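
To confirm which mode kube-proxy is actually running on a node, one option (assuming kube-proxy's metrics endpoint listens on the default 127.0.0.1:10249) is:

```shell
curl -s 127.0.0.1:10249/proxyMode     # prints "ipvs" or "iptables"
ipvsadm -Ln | head                    # in ipvs mode, lists the virtual servers kube-proxy maintains
```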

Service types:

 ClusterIP: internal, service-name based access, e.g. pod A calling pod B inside the cluster.
 NodePort: lets clients outside the Kubernetes cluster actively reach services running inside the cluster.
 LoadBalancer: service exposure in public-cloud environments.
 ExternalName: maps a service outside the cluster into the cluster, so that in-cluster pods can reach the external service through a fixed service name; it is also sometimes used to reach pods across namespaces.

NodePort Service access flow:


Example:

# Create the backend pods
[root@k8s-master1 case4-service]# cat 1-deploy_node.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.2
        ports:
        - containerPort: 80
      #nodeSelector:
      #  env: group1
[root@k8s-master1 case4-service]# kubectl apply -f 1-deploy_node.yml 
[root@k8s-master1 case4-service]# cat 3-svc_NodePort.yml 
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80 
spec:
  ports:
  - name: http
    port: 81		# Service port
    targetPort: 80		# pod (container) port the traffic is forwarded to
    nodePort: 30012		# port exposed on every node
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
[root@k8s-master1 case4-service]# kubectl apply -f 3-svc_NodePort.yml 
# Now access port 30012 on any node
[root@k8s-master1 case4-service]# curl 172.18.10.125:30012
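
To trace what the request above actually hits, the Service, its endpoints, and (in ipvs mode) the node's forwarding rules can be inspected; a sketch based on the manifests above:

```shell
kubectl get svc ng-deploy-80 -o wide       # ClusterIP plus the 81:30012/TCP mapping
kubectl get endpoints ng-deploy-80         # the pod IP:80 backends selected by app=ng-deploy-80
ipvsadm -Ln | grep -A2 30012               # on a node: the NodePort virtual server and its real servers
```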


# Configure load-balanced access through HAProxy
[root@k8s-ha ~]# vim /etc/haproxy/haproxy.cfg 
listen k8s-api-30012
    bind 172.18.10.199:30012 
    mode tcp # TCP mode
    server 172.18.10.124 172.18.10.124:30012 check inter 3s fall 3 rise 3 
    server 172.18.10.125 172.18.10.125:30012 check inter 3s fall 3 rise 3
[root@k8s-ha ~]# systemctl restart haproxy.service 
# The service can now be reached through the load balancer address
[root@k8s-master1 case4-service]# curl 172.18.10.199:30012 


4. Volumes: an overview of storage volumes

A Volume decouples specified data from the container and stores it in a designated location; different volume types behave differently, and volumes backed by network storage can be shared between containers and persisted. A static volume requires the PV and PVC to be created manually before use, after which the PVC is bound into the pod.
Commonly used volume types:

  1. Secret: an object that holds a small amount of sensitive data such as passwords, tokens, or keys
  2. ConfigMap: configuration files
  3. emptyDir: a local, ephemeral volume
  4. hostPath: a local host-path volume
  5. nfs and similar: network storage volumes (a quick way to browse the available types is sketched below)
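
The fields each volume type accepts can be browsed straight from the API schema, for example:

```shell
kubectl explain pod.spec.volumes        # lists the supported volume sources (emptyDir, hostPath, nfs, ...)
kubectl explain pod.spec.volumes.nfs    # fields of the nfs volume source: server, path, readOnly
```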

5. Mounting NFS volumes in a pod

# Check the NFS export
[root@k8s-master1 ~]# showmount -e   172.18.10.121
Export list for 172.18.10.121:
/data/k8sdata *
# Start a pod that mounts the NFS path
[root@k8s-master1 case7-nfs]# cat 1-deploy_nfs.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite
          name: my-nfs-volume
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 172.18.10.121
          path: /data/k8sdata

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30016
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
# Add a test file under the exported path on the NFS server
[root@k8s-deployer ~]# cd /data/k8sdata/
[root@k8s-deployer k8sdata]# echo 123456 > index.html
# Test access over HTTP
[root@k8s-master1 case7-nfs]# curl http://172.18.10.125:30016/mysite/index.html
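
To apply the manifest and confirm the NFS mount from inside the pod, a sketch (the pod name is whatever the Deployment generates):

```shell
kubectl apply -f 1-deploy_nfs.yml
kubectl get pods -l app=ng-deploy-80 -o wide
# the NFS export should be mounted at the path declared in volumeMounts
kubectl exec deploy/nginx-deployment -- df -h /usr/share/nginx/html/mysite
```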


6. Static PV/PVC backed by NFS

PV/PVC overview:
PersistentVolume (PV): a piece of network storage in the cluster configured by the Kubernetes administrator. It is a cluster-scoped resource, i.e. it does not belong to any namespace, and the data ultimately lives on the backing storage. A pod cannot mount a PV directly; the PV must first be bound to a PVC, which the pod then mounts. PVs support NFS, Ceph, commercial storage, cloud-provider storage, and so on, and you can define whether the PV is block or file storage, its capacity, its access modes, etc. The lifecycle of a PV is independent of pods: deleting a pod that uses the PV does not affect the data in the PV.
PersistentVolumeClaim (PVC): a pod's request for storage. The pod mounts the PVC and stores its data through it, and the PVC in turn must be bound to a PV. A PVC is created in a specific namespace and must live in the same namespace as the pod that uses it. You can set the requested capacity and access modes on the PVC, and deleting a pod that uses the PVC likewise does not affect the data in the PVC.

PV/PVC summary:
A PV is an abstraction over the underlying network storage: it turns network storage into a storage resource that can be split into pieces and handed out to different workloads.
A PVC is a claim on PV resources: the pod writes data through the PVC to the PV, and the PV persists it to the actual storage.


# Create the PV
[root@k8s-master1 case8-pv-static]# cat 1-myapp-persistentvolume.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv
spec:
  capacity:
    storage: 10Gi  # PV capacity
  accessModes:
    - ReadWriteOnce  # the PV can be mounted read-write by a single node (RWO)
  nfs:  # use NFS as the backing storage
    path: /data/k8sdata/myserver/myappdata
    server: 172.18.10.121 # NFS server
# Create the PVC and bind it to the PV
[root@k8s-master1 case8-pv-static]# cat 2-myapp-persistentvolumeclaim.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver # must be in the same namespace as the pod that will use it
spec:
  volumeName: myserver-myapp-static-pv  # the PV this PVC binds to
  accessModes:
    - ReadWriteOnce  # the PVC can be mounted read-write by a single node (RWO)
  resources:
    requests:
      storage: 10Gi # requested size; must not exceed the PV capacity
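
A sketch of applying the two manifests and confirming that the PVC is Bound to the PV (object names as defined above):

```shell
kubectl apply -f 1-myapp-persistentvolume.yaml
kubectl apply -f 2-myapp-persistentvolumeclaim.yaml
kubectl get pv myserver-myapp-static-pv                   # STATUS should be Bound
kubectl get pvc myserver-myapp-static-pvc -n myserver     # the VOLUME column shows the bound PV
```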
 
# Create a web server to test the PVC
[root@k8s-master1 case8-pv-static]# cat 3-myapp-webserver.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:  # a PVC-backed volume
            claimName: myserver-myapp-static-pvc   # the PVC to use

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30009
  selector:
    app: myserver-myapp-frontend
[root@k8s-master1 case8-pv-static]# kubectl apply -f 3-myapp-webserver.yaml 
# Verify that content stored through the PV/PVC can be served
[root@k8s-deployer k8sdata]# systemctl restart nfs-server.service 
[root@k8s-deployer myappdata]# wget https://w.wallhaven.cc/full/l8/wallhaven-l8jxpy.jpg
[root@k8s-deployer myappdata]# mv wallhaven-l8jxpy.jpg tp.jpg
# Verify by accessing http://172.18.10.124:30009/statics/tp.jpg


7. Dynamic PVC with NFS and a StorageClass

# Create the ServiceAccount and RBAC rules for the provisioner
[root@k8s-master1 case9-pv-dynamic-nfs]# cat 1-rbac.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master1 case9-pv-dynamic-nfs]# kubectl apply -f 1-rbac.yaml 
# Create the StorageClass
[root@k8s-master1 case9-pv-dynamic-nfs]# cat 2-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the PROVISIONER_NAME env of the provisioner Deployment in step 3
reclaimPolicy: Retain # PV reclaim policy; the default Delete removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 # some mount parameters misbehave under containerd
  #- noresvport # tell the NFS client to use a new TCP source port when re-establishing the connection
  - noatime # do not update inode access timestamps on reads; improves performance under heavy concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  # archive (keep) the data when the claim is deleted; "false" would remove it
[root@k8s-master1 case9-pv-dynamic-nfs]# kubectl apply -f 2-storageclass.yaml 
# Deploy the NFS provisioner (it creates the PVs dynamically)
[root@k8s-master1 case9-pv-dynamic-nfs]# cat 3-nfs-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.18.10.121
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.18.10.121
            path: /data/volumes
[root@k8s-master1 case9-pv-dynamic-nfs]# kubectl apply -f 3-nfs-provisioner.yaml 
# Create the PVC and bind it to the StorageClass
[root@k8s-master1 case9-pv-dynamic-nfs]# cat 4-create-pvc.yaml 
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the StorageClass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size
[root@k8s-master1 case9-pv-dynamic-nfs]# kubectl apply -f 4-create-pvc.yaml 
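
With dynamic provisioning, applying the PVC should make the provisioner create a matching PV automatically; a sketch to confirm (the generated PV name will differ in each environment):

```shell
kubectl get storageclass managed-nfs-storage
kubectl get pvc myserver-myapp-dynamic-pvc -n myserver        # should become Bound once the provisioner reacts
kubectl get pv | grep myserver-myapp-dynamic-pvc              # the dynamically created PV
kubectl logs -n nfs deploy/nfs-client-provisioner --tail=20   # provisioner logs if the PVC stays Pending
```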
# Create a pod that mounts the PVC and verify access
[root@k8s-master1 case9-pv-dynamic-nfs]# cat 5-myapp-webserver.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc 

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30010
  selector:
    app: myserver-myapp-frontend
[root@k8s-master1 case9-pv-dynamic-nfs]# kubectl apply -f 5-myapp-webserver.yaml 
# Create a file on the NFS server to verify
[root@k8s-deployer myserver-myserver-myapp-dynamic-pvc-pvc-4fac5012-2ac7-424a-b4f9-04e27d0e548f]# cd /data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-4fac5012-2ac7-424a-b4f9-04e27d0e548f/
[root@k8s-deployer myserver-myserver-myapp-dynamic-pvc-pvc-4fac5012-2ac7-424a-b4f9-04e27d0e548f]# echo "123456" > index.html
[root@k8s-master1 case9-pv-dynamic-nfs]# curl http://172.18.10.124:30010/statics/


8. Mounting configuration and injecting environment variables from a ConfigMap

A ConfigMap decouples non-confidential data (such as configuration) from the image: the configuration is stored in a ConfigMap object and then mounted into the pod as a volume (or injected as environment variables), which is how the pod imports its configuration.
Use cases:
 Provide configuration files to the containers in a pod; the files are consumed by mounting them into the container.
 Define global environment variables for a pod.
 Pass command-line arguments to a pod, e.g. the username and password in mysql -u -p.
Notes:
 The ConfigMap must exist before the pod that uses it is created.
 A pod can only use ConfigMaps in its own namespace; a ConfigMap cannot be used across namespaces.
 ConfigMaps are meant for non-sensitive, unencrypted configuration.
 A ConfigMap is usually smaller than 1 MB (a command-line sketch for creating one follows).
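
Besides writing the object in YAML as below, a ConfigMap can also be created directly from literals or files; a sketch (the names here are illustrative only):

```shell
# from literal key/value pairs
kubectl create configmap demo-config --from-literal=host=172.31.7.189 --from-literal=username=user1
# from an existing file; the file name becomes the key
kubectl create configmap demo-nginx-conf --from-file=mysite.conf
kubectl get configmap demo-config -o yaml     # inspect the stored data
```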

# Mount configuration files from a ConfigMap
[root@k8s-master1 case10-configmap]# cat 1-deploy_configmap.yml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  mysite: |
    server {
       listen       80;
       server_name  www.mysite.com;
       index        index.html index.php index.htm;

       location / {
           root /data/nginx/mysite;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }

  myserver: |
    server {
       listen       80;
       server_name  www.myserver.com;
       index        index.html index.php index.htm;

       location / {
           root /data/nginx/myserver;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }  

---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.0
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/mysite
          name: nginx-mysite-statics
        - mountPath: /data/nginx/myserver
          name: nginx-myserver-statics
        - name: nginx-mysite-config
          mountPath:  /etc/nginx/conf.d/mysite/
        - name: nginx-myserver-config
          mountPath:  /etc/nginx/conf.d/myserver/
      volumes:
      - name: nginx-mysite-config
        configMap:
          name: nginx-config
          items:
             - key: mysite
               path: mysite.conf
      - name: nginx-myserver-config
        configMap:
          name: nginx-config
          items:
             - key: myserver
               path: myserver.conf
      - name: nginx-myserver-statics
        nfs:
          server: 172.18.10.121
          path: /data/k8sdata/myserver
      - name: nginx-mysite-statics
        nfs:
          server: 172.18.10.121
          path: /data/k8sdata/mysite

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
# The ConfigMap keys are mounted into the container as configuration files
[root@k8s-master1 case10-configmap]# kubectl exec -it -n default nginx-deployment-8d8d6667f-nf2d2 bash
# Load the mounted configuration; for production this include line would normally be baked into a custom image
root@nginx-deployment-8d8d6667f-nf2d2:/# apt update && apt install -y vim
root@nginx-deployment-8d8d6667f-nf2d2:/# vim /etc/nginx/nginx.conf   # add the include line below inside the http{} block
    include /etc/nginx/conf.d/*/*.conf;
root@nginx-deployment-8d8d6667f-nf2d2:/# nginx -s reload
# Add index pages on the NFS server
[root@k8s-deployer k8sdata]# cd /data/k8sdata/myserver/
[root@k8s-deployer myserver]# echo '<h1>myserver</h1>' > index.html
[root@k8s-deployer myserver]# cd /data/k8sdata/mysite/
[root@k8s-deployer mysite]# echo '<h1>mysite</h1>' > index.html
# Verify access
[root@k8s-master1 case10-configmap]# vim /etc/hosts
172.18.10.124 www.myserver.com www.mysite.com
[root@k8s-master1 case10-configmap]# curl www.myserver.com:30019
[root@k8s-master1 case10-configmap]# curl www.mysite.com:30019


# Inject environment variables from a ConfigMap
[root@k8s-master1 case10-configmap]# cat 2-deploy_configmap_env.yml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  host: "172.31.7.189"
  username: "user1"
  password: "12345678"


---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        env:
        - name: HOST  # define the HOST variable
          valueFrom:
            configMapKeyRef:   # take the value from a ConfigMap key
              name: nginx-config
              key: host    # the value of "host" in nginx-config is assigned to HOST
        - name: USERNAME
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: username
        - name: PASSWORD
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: password
        ######
        - name: "MySQLPass"  # an environment variable defined directly, not taken from the ConfigMap
          value: "123456"
        ports:
        - containerPort: 80
[root@k8s-master1 case10-configmap]# kubectl apply -f 2-deploy_configmap_env.yml 
# Verify
[root@k8s-master1 case10-configmap]# kubectl exec -it -n default       nginx-deployment-5d4f87d989-29q2k bash
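
Inside the container opened above, the injected variables can be checked directly; a sketch of what to expect:

```shell
# run inside the container
env | grep -E 'HOST|USERNAME|PASSWORD|MySQLPass'
# expected: HOST=172.31.7.189, USERNAME=user1, PASSWORD=12345678, MySQLPass=123456
```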


9. Secrets: overview, common types, and Nginx TLS based on a Secret

Secret overview:
A Secret works much like a ConfigMap in that it provides extra configuration to pods, but it is intended for small amounts of sensitive data such as passwords, tokens, or keys. A Secret's name must be a valid DNS subdomain. Each Secret is limited to 1 MiB, mainly to keep very large Secrets from exhausting the memory of the API server and the kubelets; creating very many small Secrets can also exhaust memory, so resource quotas can be used to limit the number of Secrets per namespace.
When creating a Secret from a YAML file you may set the data and/or stringData fields; both are optional. Every value under data must be a base64-encoded string; if you do not want to base64-encode values yourself, you can use stringData instead, which accepts arbitrary plain strings (a sketch follows below).
A pod can consume a Secret in any of three ways:
1) as files in a volume mounted into one or more containers (e.g. .crt and .key files);
2) as container environment variables;
3) by the kubelet when pulling images for the pod (authentication against an image registry).
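
A minimal sketch of the data vs. stringData difference (the secret name and value are examples only):

```shell
# values under data must be base64-encoded by hand
echo -n '12345678' | base64                     # MTIzNDU2Nzg=
# stringData (or kubectl create) accepts plain strings and encodes them for you
kubectl create secret generic demo-secret -n myserver --from-literal=password=12345678
kubectl get secret demo-secret -n myserver -o jsonpath='{.data.password}' | base64 -d
```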

Common built-in Secret types:

 Opaque: arbitrary user-defined data (the default type)
 kubernetes.io/service-account-token: ServiceAccount token
 kubernetes.io/dockercfg / kubernetes.io/dockerconfigjson: serialized Docker registry credentials
 kubernetes.io/basic-auth: credentials for basic authentication
 kubernetes.io/ssh-auth: SSH credentials
 kubernetes.io/tls: data for a TLS client or server (tls.crt / tls.key)
 bootstrap.kubernetes.io/token: bootstrap token data

Nginx TLS authentication based on a Secret

# Generate a self-signed certificate
[root@k8s-master1 case11-secret]# mkdir certs
[root@k8s-master1 case11-secret]# cd certs/
[root@k8s-master1 case11-secret]# openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
[root@k8s-master1 case11-secret]# openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.mysite.com'
[root@k8s-master1 case11-secret]# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
# Create a Secret for the TLS setup
[root@k8s-master1 certs]# kubectl create secret tls myserver-tls-key --cert=./server.crt --key=./server.key -n myserver
# Create an nginx pod to verify
[root@k8s-master1 case11-secret]# cat 4-secret-tls.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: myserver
data:
 default: |
    server {
       listen       80;
       server_name  www.mysite.com;
       listen 443 ssl;
       ssl_certificate /etc/nginx/conf.d/certs/tls.crt;
       ssl_certificate_key /etc/nginx/conf.d/certs/tls.key;

       location / {
           root /usr/share/nginx/html; 
           index index.html;
           if ($scheme = http ){  # without this condition the redirect would loop forever
              rewrite / https://www.mysite.com permanent;
           }  

           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }

---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        image: nginx:1.20.2-alpine 
        ports:
          - containerPort: 80
        volumeMounts:
          - name: nginx-config
            mountPath:  /etc/nginx/conf.d/myserver
          - name: myserver-tls-key
            mountPath:  /etc/nginx/conf.d/certs
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
             - key: default
               path: mysite.conf
      - name: myserver-tls-key
        secret:
          secretName: myserver-tls-key 


---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30029
    protocol: TCP
  selector:
    app: myserver-myapp-frontend 
[root@k8s-master1 case11-secret]# kubectl apply -f 4-secret-tls.yaml 
# Load the configuration file mounted from the ConfigMap
[root@k8s-master1 case11-secret]# kubectl exec -it -n myserver      myserver-myapp-frontend-deployment-7df66455b9-5xn6s sh
/ # vi /etc/nginx/nginx.conf 
    include /etc/nginx/conf.d/*/*.conf;
/ # nginx -s reload
# Verify access
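
A sketch of the verification, assuming www.mysite.com points to a node IP (e.g. via /etc/hosts) and the self-signed certificate created above:

```shell
# plain HTTP on NodePort 30018 should answer with a redirect to https://www.mysite.com
curl -I http://www.mysite.com:30018
# HTTPS on NodePort 30029; -k is needed because the certificate is self-signed
curl -k https://www.mysite.com:30029
```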


10. Authenticating image pulls from a private registry with a Secret

[root@k8s-master1 case11-secret]# nerdctl pull harbor.linuxarchitect.io/myimages/nginx:1.16.1-alpine-perl
# Without logging in first, pulling an image from the private registry fails


# Log in to the registry first
[root@k8s-master1 case11-secret]# nerdctl login harbor.linuxarchitect.io
# Create a Secret from the docker auth file (~/.docker/config.json)
[root@k8s-master1 case11-secret]# kubectl create secret generic aliyun-registry-image-pull-key \
--from-file=.dockerconfigjson=/root/.docker/config.json \
--type=kubernetes.io/dockerconfigjson \
-n myserver
[root@k8s-master1 case11-secret]# cat 5-secret-imagePull.yaml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        #image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nginx:1.16.1-alpine-perl 
        image: harbor.linuxarchitect.io/myimages/nginx:1.16.1-alpine-perl
        imagePullPolicy: Always
        ports:
          - containerPort: 80
      imagePullSecrets:
        - name: aliyun-registry-image-pull-key

---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend 
[root@k8s-master1 case11-secret]# kubectl apply -f 5-secret-imagePull.yaml 
# Verify
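
A sketch to confirm that imagePullSecrets took effect:

```shell
kubectl get pods -n myserver -l app=myserver-myapp-frontend                    # should reach Running instead of ErrImagePull
kubectl describe pod -n myserver -l app=myserver-myapp-frontend | tail -n 10   # Events should show the image being pulled successfully
```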


11. StatefulSet and DaemonSet: characteristics and usage

StatefulSet overview:
https://kubernetes.io/zh/docs/concepts/workloads/controllers/statefulset/

  1. A StatefulSet is designed for stateful, clustered services whose members replicate data between each other (MySQL primary/replica, Redis Cluster, Elasticsearch clusters, etc.).
  2. The pods managed by a StatefulSet have unique and stable names.
  3. A StatefulSet starts, stops, scales, and reclaims its pods in order.
  4. It is used together with a headless Service (requests resolve directly to the pod IPs).
[root@k8s-master1 case12-Statefulset]# cat 1-Statefulset.yaml 
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: StatefulSet 
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  replicas: 3
  serviceName: "myserver-myapp-service"
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        #image: registry.cn-qingdao.aliyuncs.com/zhangshijie/zookeeper:v3.4.14
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        ports:
          - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-service
  namespace: myserver
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
  selector:
    app: myserver-myapp-frontend 
[root@k8s-master1 case12-Statefulset]# kubectl apply -f 1-Statefulset.yaml 
# A StatefulSet starts its pods in order; pod names are the StatefulSet name plus an ordinal. On startup the pods are created one by one beginning with 0, and on deletion they are removed one by one beginning with the highest ordinal.
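
A sketch of the ordered naming and the per-pod DNS records provided by the headless Service (assuming the net-test3 pod from section 1 is still running):

```shell
kubectl get pods -n myserver -l app=myserver-myapp-frontend    # myserver-myapp-0, myserver-myapp-1, myserver-myapp-2
# each pod gets a stable record: <pod>.<headless-service>.<namespace>.svc.cluster.local
kubectl exec -it net-test3 -n myserver -- nslookup myserver-myapp-0.myserver-myapp-service.myserver.svc.cluster.local
```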


DaemonSet overview:
https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/
A DaemonSet runs one copy of the same pod on every node in the cluster; when a new node joins, the pod is also scheduled onto it, and when a node is removed from the cluster its pod is reclaimed by Kubernetes. Deleting the DaemonSet deletes all pods it created.
Typical use cases are per-node agents such as log collection and Prometheus node monitoring.

[root@k8s-master1 case13-DaemonSet]# cat 1-DaemonSet-webserver.yaml 
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      hostNetwork: true
      hostPID: true
      containers:
      - name: myserver-myapp-frontend
        image: nginx:1.20.2-alpine 
        ports:
          - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend
[root@k8s-master1 case13-DaemonSet]# cat 2-DaemonSet-fluentd.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
[root@k8s-master1 case13-DaemonSet]# cat 3-DaemonSet-prometheus.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring 
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
        k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      containers:
      - image: prom/node-exporter:v1.3.1 
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          protocol: TCP
          name: metrics
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host
          name: rootfs
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
      hostNetwork: true
      hostPID: true
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitoring 
spec:
  type: NodePort
  ports:
  - name: http
    port: 9100
    nodePort: 32109
    protocol: TCP
  selector:
    k8s-app: node-exporter
[root@k8s-master1 case13-DaemonSet]# kubectl apply -f 1-DaemonSet-webserver.yaml 
[root@k8s-master1 case13-DaemonSet]# kubectl apply -f 2-DaemonSet-fluentd.yaml 
[root@k8s-master1 case13-DaemonSet]# kubectl apply -f 3-DaemonSet-prometheus.yaml 
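
A sketch to confirm that each DaemonSet schedules exactly one pod per node (the fluentd and node-exporter examples assume the kube-system and monitoring namespaces exist):

```shell
kubectl get daemonset -n myserver myserver-myapp
kubectl get pods -n myserver -o wide -l app=myserver-myapp-frontend   # one pod per node, on the host network
kubectl get daemonset -n kube-system fluentd-elasticsearch
kubectl get daemonset -n monitoring node-exporter
```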

