20230813 Assignment - Kubernetes

1. Summarize the Pod DNS resolution flow based on CoreDNS

CoreDNS does not query etcd directly for domain-name data; it queries the Kubernetes API server, exposed through the Service named kubernetes in the default namespace, and the API server in turn reads the cluster state from etcd.

DNS for Services and Pods | Kubernetes

Kubernetes creates DNS records for Services and Pods. You can access a Service with a consistent DNS name instead of its IP address. The kubelet writes each Pod's DNS configuration to /etc/resolv.conf, so running containers can look up Services by name rather than by IP.

Services defined in the cluster are assigned DNS names. By default, a client Pod's DNS search list includes the Pod's own namespace and the cluster's default domain.

Namespaces of Services

DNS queries may return different results depending on the namespace of the Pod issuing them. A DNS query that does not specify a namespace is limited to the Pod's own namespace. To reach a Service in another namespace, specify the namespace in the DNS query.

For example, suppose a Pod exists in namespace test and a Service named data exists in namespace prod:

A query for data from the Pod returns no result, because the Pod's namespace test is used.

A query for data.prod returns the expected result, because the namespace is given in the query.

DNS queries can be expanded using the Pod's /etc/resolv.conf, which the kubelet configures for each Pod. For example, a query for data may be expanded to data.test.svc.cluster.local. The search option lists the local domains whose values are used to expand queries. For more on DNS queries, see the resolv.conf manual page:

nameserver 10.32.0.10
search <namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
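
As a quick check of how the search list expands, the lookups below can be run from a Pod in the test namespace (a minimal sketch; <pod-name> is a placeholder and assumes the Pod and the data Service from the example above actually exist):

kubectl exec -n test <pod-name> -- nslookup data        # fails: expanded only within the test namespace
kubectl exec -n test <pod-name> -- nslookup data.prod   # resolves via the search list to data.prod.svc.cluster.local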

Pod A and AAAA records are not created from the Pod name, so a Pod only gets an A or AAAA record when hostname is set. A Pod that sets subdomain but not hostname only gets an A or AAAA record created for its headless Service (busybox-subdomain.my-namespace.svc.cluster-domain.example), pointing at the Pod's IP address. Also, unless publishNotReadyAddresses=True is set on the Service, records are only published once the Pod is ready.
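
A minimal sketch of the hostname/subdomain behavior described above (names are illustrative; the headless Service must have the same name as the Pod's subdomain field):

apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain   # headless Service matching the Pod's subdomain field
spec:
  clusterIP: None
  selector:
    app: busybox
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    app: busybox
spec:
  hostname: busybox-1          # hostname set, so the Pod gets its own A/AAAA record:
  subdomain: busybox-subdomain # busybox-1.busybox-subdomain.<namespace>.svc.cluster.local
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]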

2. Summarize the use of the RC, RS, and Deployment controllers

First-generation Pod replica controller: ReplicationController
Its selector only supports equality-based matching (=, !=).
https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicationcontroller/
https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/
Second-generation Pod replica controller: ReplicaSet
Difference from ReplicationController: the selector additionally supports set-based matching (in, notin).
https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicaset
Third-generation Pod replica controller: Deployment
A controller one level above ReplicaSet: on top of the ReplicaSet features it adds many advanced capabilities, most importantly rolling updates and rollbacks (see the command sketch below).

Deployments | Kubernetes
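
A short sketch of the rolling-update and rollback workflow referenced above (the deployment and container names here are illustrative):

kubectl set image deployment/nginx-deployment nginx=nginx:1.21.0  # trigger a rolling update
kubectl rollout status deployment/nginx-deployment                # watch the rollout progress
kubectl rollout history deployment/nginx-deployment               # list recorded revisions
kubectl rollout undo deployment/nginx-deployment                  # roll back to the previous revision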

3. Summarize the NodePort Service access flow (with a diagram)

A NodePort Service can be reached by hosts outside the cluster: the access address is any cluster node's IP, and the port is the nodePort. kube-proxy on each node forwards traffic arriving on nodeIP:nodePort to the Service and on to one of the backend Pods.
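
A text sketch of the flow (kube-proxy, in iptables or IPVS mode, programs the forwarding rules on every node):

external client
    | connects to nodeIP:nodePort (any node in the cluster)
    v
kube-proxy rules on that node (iptables/IPVS DNAT)
    | picks one Service endpoint
    v
backend Pod (podIP:targetPort), possibly running on a different node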

4. Using NFS volume mounts in a Pod

Deploy an NFS server on host 172.31.7.109 and create the /data/k8sdata/pool1 and /data/k8sdata/pool2 directories.
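
A minimal sketch of the server-side setup (the wide-open * client spec mirrors the later example; tighten it in production):

~# mkdir -p /data/k8sdata/pool1 /data/k8sdata/pool2
~# vim /etc/exports
/data/k8sdata/pool1 *(rw,no_root_squash)
/data/k8sdata/pool2 *(rw,no_root_squash)
~# systemctl restart nfs-server && systemctl enable nfs-server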

Write the manifest:
vim 2-deploy_nfs.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-site2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-81
  template:
    metadata:
      labels:
        app: ng-deploy-81
    spec:
      containers:
      - name: ng-deploy-81
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/pool1
          name: my-nfs-volume-pool1
        - mountPath: /usr/share/nginx/html/pool2
          name: my-nfs-volume-pool2
        - mountPath: /etc/localtime
          name: timefile
      volumes:
      - name: my-nfs-volume-pool1
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/pool1
      - name: my-nfs-volume-pool2
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/pool2
      - name: timefile
        hostPath:
          path: /etc/localtime


---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-81
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30017
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-81

kubectl apply -f 2-deploy_nfs.yml
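
A quick verification sketch after the apply (<node-ip> is a placeholder for any node's address):

kubectl get pod -l app=ng-deploy-81 -o wide          # Pod should be Running
echo "pool1 test" > /data/k8sdata/pool1/index.html   # run on the NFS server 172.31.7.109
curl http://<node-ip>:30017/pool1/index.html         # any node IP works for a NodePort Service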

5. Summarize static PV/PVC usage backed by NFS

Prepare the NFS storage:

~# mkdir /data/k8sdata/myserver/myappdata -p

~# vim /etc/exports

/data/k8sdata/myserver/myappdata *(rw,no_root_squash)

~# systemctl restart nfs-server && systemctl enable nfs-server

Create the PV and PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/myserver/myappdata
    server: 172.31.7.109
case8-pv-static# kubectl apply -f 1-myapp-persistentvolume.yaml
persistentvolume/myserver-myapp-static-pv created
case8-pv-static# kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myserver-myapp-static-pv   10Gi       RWO            Retain           Available                                   9m25s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver
spec:
  volumeName: myserver-myapp-static-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
case8-pv-static# kubectl apply -f 2-myapp-persistentvolumeclaim.yaml
persistentvolumeclaim/myserver-myapp-static-pvc created
case8-pv-static# kubectl get pvc -n myserver
NAME                        STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-static-pvc   Bound    myserver-myapp-static-pv   10Gi       RWO                           96s
Deploy the web service:
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-static-pvc 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30009
  selector:
    app: myserver-myapp-frontend

case8-pv-static# kubectl apply -f 3-myapp-webserver.yaml
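
A verification sketch (the test file is illustrative; <node-ip> is a placeholder):

kubectl get pod -n myserver                                           # Pod should be Running
echo "static pv test" > /data/k8sdata/myserver/myappdata/index.html   # run on the NFS server
curl http://<node-ip>:30009/statics/index.html                        # served from the NFS-backed PV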

6. Summarize dynamic PVC usage with a Deployment, backed by NFS and a StorageClass

1. Create a cluster ServiceAccount and grant it the RBAC permissions it needs for operations such as creating PVs:
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f 1-rbac.yaml

2. Create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
reclaimPolicy: Retain # PV reclaim policy; the default Delete removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 # some parameters misbehave with containerd
  #- noresvport # tell the NFS client to use a new TCP source port when re-establishing the connection
  - noatime # do not update the inode access time on reads; improves performance under high concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  # archive the data when the claim is deleted; the default "false" discards it

kubectl apply -f 2-storageclass.yaml

3. Create the NFS provisioner, which supplies the backing NFS storage to the StorageClass so that PVCs can have PVs provisioned from it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner  # run the Pod as this ServiceAccount, which has permission to create PVs
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2 
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.7.109
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.7.109
            path: /data/volumes

kubectl apply -f 3-nfs-provisioner.yaml

4. Create a PVC that references the StorageClass; the StorageClass then provisions and binds a PV automatically:
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: managed-nfs-storage # name of the StorageClass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested capacity

kubectl apply -f 4-create-pvc.yaml

5. Create the Deployment:
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30010
  selector:
    app: myserver-myapp-frontend

kubectl apply -f 5-myapp-webserver.yaml

Verify on the NFS storage server:
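
A sketch of what to expect on the NFS server (the provisioner's default layout creates one subdirectory per claim, named <namespace>-<pvcName>-<pvName>; with archiveOnDelete: "true" a deleted claim's directory is renamed with an archived- prefix instead of being removed):

~# ls /data/volumes/
# expect one directory per PVC, following the <namespace>-<pvcName>-<pvName> pattern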

7. Mount configuration files and inject environment variables into a Pod via ConfigMap

Create a ConfigMap that provides the Pod's configuration files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  mysite: |
    server {
       listen       80;
       server_name  www.mysite.com;
       index        index.html index.php index.htm;

       location / {
           root /data/nginx/mysite;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }

  myserver: |
    server {
       listen       80;
       server_name  www.myserver.com;
       index        index.html index.php index.htm;

       location / {
           root /data/nginx/myserver;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }  
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.0
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/mysite
          name: nginx-mysite-statics
        - mountPath: /data/nginx/myserver
          name: nginx-myserver-statics
        - name: nginx-mysite-config
          mountPath:  /etc/nginx/conf.d/mysite/
        - name: nginx-myserver-config
          mountPath:  /etc/nginx/conf.d/myserver/
      volumes:
      - name: nginx-mysite-config
        configMap:
          name: nginx-config
          items:
             - key: mysite
               path: mysite.conf
      - name: nginx-myserver-config
        configMap:
          name: nginx-config
          items:
             - key: myserver
               path: myserver.conf
      - name: nginx-myserver-statics
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/myserver
      - name: nginx-mysite-statics
        nfs:
          server: 172.31.7.109
          path: /data/k8sdata/mysite
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

kubectl apply -f 1-deploy_configmap.yml

Create a ConfigMap that provides the Pod's environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  host: "172.31.7.189"
  username: "user1"
  password: "12345678"
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        env:
        - name: HOST
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: host
        - name: USERNAME
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: username
        - name: PASSWORD
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: password
        ######
        - name: "MySQLPass"
          value: "123456"
        ports:
        - containerPort: 80

kubectl apply -f 2-deploy_configmap_env.yml
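
A quick check that the variables landed in the container (<pod-name> is a placeholder):

kubectl get pod -l app=ng-deploy-80                                          # find the Pod name
kubectl exec <pod-name> -- env | grep -E 'HOST|USERNAME|PASSWORD|MySQLPass'  # all four should print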

8. Summarize Secret basics and common types; implement Nginx TLS with a Secret

A Secret is similar in function to a ConfigMap in that it supplies extra configuration to Pods, but a Secret is an object meant for small amounts of sensitive data such as passwords, tokens, or keys. A Secret's name must be a valid DNS subdomain. Each Secret is limited to 1MiB, mainly to keep users from creating very large Secrets that would exhaust API server and kubelet memory; creating many small Secrets can also exhaust memory, and resource quotas can be used to limit the number of Secrets per namespace. When creating a Secret from a YAML file you can set the data and/or stringData fields; both are optional. Every value under data must be a base64-encoded string. If you do not want to perform that base64 conversion, set stringData instead, which accepts arbitrary plain (unencoded) strings.
A Pod can use a Secret in any of three ways:
mountPath: as files in a volume mounted into one or more containers (e.g. crt and key files).
As container environment variables.
imagePullSecrets: used by the kubelet when pulling images for the Pod (authentication against the image registry).
Common Secret types:
Opaque: arbitrary user-defined data (two forms: data, base64-encoded, which is encoding rather than encryption; and stringData, plain text; see the sketch after this list)
kubernetes.io/service-account-token: ServiceAccount token
kubernetes.io/dockercfg: serialized ~/.dockercfg file
kubernetes.io/dockerconfigjson: serialized ~/.docker/config.json file
kubernetes.io/basic-auth: credentials for basic authentication
kubernetes.io/ssh-auth: credentials for SSH authentication
kubernetes.io/tls: data for TLS, holding the certificate (crt) and key
bootstrap.kubernetes.io/token: bootstrap token data
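
A minimal Opaque sketch showing both forms side by side (values are illustrative; the data value is the base64 of "admin"):

apiVersion: v1
kind: Secret
metadata:
  name: mysecret-demo
type: Opaque
data:
  username: YWRtaW4=    # base64-encoded "admin"; encoding only, not encryption
stringData:
  password: "123456"    # plain string; the API server base64-encodes it on storage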

Generate a self-signed certificate:

mkdir certs
cd certs/
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.mysite.com'
certs# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
Create a Secret of type kubernetes.io/tls:
kubectl create secret tls myserver-tls-key --cert=./server.crt --key=./server.key -n myserver
Create the Nginx web service and mount the certificate:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: myserver
data:
 default: |
    server {
       listen       80;
       server_name  www.mysite.com;
       listen 443 ssl;
       ssl_certificate /etc/nginx/conf.d/certs/tls.crt;
       ssl_certificate_key /etc/nginx/conf.d/certs/tls.key;

       location / {
           root /usr/share/nginx/html; 
           index index.html;
           if ($scheme = http ){  # without this condition, the rewrite would loop forever
              rewrite / https://www.mysite.com permanent;
           }  

           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        image: nginx:1.20.2-alpine 
        ports:
          - containerPort: 80
        volumeMounts:
          - name: nginx-config
            mountPath:  /etc/nginx/conf.d/myserver
          - name: myserver-tls-key
            mountPath:  /etc/nginx/conf.d/certs
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
             - key: default
               path: mysite.conf
      - name: myserver-tls-key
        secret:
          secretName: myserver-tls-key 
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30029
    protocol: TCP
  selector:
    app: myserver-myapp-frontend 

case11-secret# kubectl apply -f 4-secret-tls.yaml

Forward requests from the load balancer to the NodePort:
~# cat /etc/haproxy/haproxy.cfg
listen myserver-nginx-80
    bind 172.31.7.189:80
    mode tcp
    server 172.31.7.111 172.31.7.111:30018 check inter 3s fall 3 rise 3
listen myserver-nginx-443
    bind 172.31.7.189:443
    mode tcp
    server 172.31.7.111 172.31.7.111:30029 check inter 3s fall 3 rise 3

Configure hosts resolution on the client. Exec into the Pod to verify that the config file and the certificate are in place, then edit the main config file: the default official image does not load this custom configuration, so add the include below and reload.

vi /etc/nginx/nginx.conf
include /etc/nginx/conf.d/*/*.conf;
nginx -s reload
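
After reloading, the setup can be checked from a client whose hosts file points www.mysite.com at the load balancer (curl -k skips verification because the certificate is self-signed):

curl -k https://www.mysite.com    # should return the page over TLS
curl -I http://www.mysite.com     # should return a 301 redirect to https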

9. Use a Secret for image-pull authentication against a private registry

On the master node, log in to the image registry to generate the docker auth file, then create the Secret from it:
nerdctl login --username=rooroot@aliyun.com registry.cn-qingdao.aliyuncs.com
kubectl create secret generic aliyun-registry-image-pull-key \
--from-file=.dockerconfigjson=/root/.docker/config.json \
--type=kubernetes.io/dockerconfigjson \
-n myserver
Create the Pod:
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myserver-myapp-frontend-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        image: harbor.magedu.net/baseimages/nginx:1.16.1-alpine-perl
        imagePullPolicy: Always
        ports:
          - containerPort: 80
      imagePullSecrets:
        - name: aliyun-registry-image-pull-key
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend 

kubectl apply -f 5-secret-imagePull.yaml

Troubleshooting: Kubernetes fails to pull the image from a private registry:

Failed to pull image "192.168.220.102:80/baseimages/alpine": rpc error: code = Unknown desc = failed to pull and unpack image "192.168.220.102:80/baseimages/alpine:latest": failed to resolve reference "192.168.220.102:80/baseimages/alpine:latest": failed to do request: Head "https://192.168.220.102:80/v2/baseimages/alpine/manifests/latest": http: server gave HTTP response to HTTPS client

Kubernetes pulls images through containerd (crictl is the client for containerd). To allow an HTTP (or self-signed HTTPS) registry, open the containerd config file and add the following:

vim /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.magedu.net:443".tls]
  insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.magedu.net".tls]
  insecure_skip_verify = true
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.magedu.net"]
  endpoint = ["http://harbor.magedu.net"]

systemctl restart containerd.service

Test pulling an image:

crictl pull harbor.magedu.net/baseimages/pause:3.9

10. Summarize the characteristics and use of StatefulSet and DaemonSet

StatefulSet solves the deployment of stateful, clustered services whose members need to synchronize data (MySQL primary/replica, Redis Cluster, Elasticsearch clusters, etc.).
Pods managed by a StatefulSet have unique and stable names.
A StatefulSet starts, stops, scales, and reclaims Pods in order.
It uses a headless Service: the StatefulSet's Service has no cluster IP, and a lookup of the Service name resolves directly to the Pod IPs (see the lookup sketch after the Pod listing below).
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: StatefulSet 
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  replicas: 3
  serviceName: "myserver-myapp-service"
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
      - name: myserver-myapp-frontend
        #image: registry.cn-qingdao.aliyuncs.com/zhangshijie/zookeeper:v3.4.14
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-service
  namespace: myserver
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
  selector:
    app: myserver-myapp-frontend 
kubectl apply -f 1-Statefulset.yaml
kubectl get pod -n myserver
NAME               READY   STATUS    RESTARTS   AGE
myserver-myapp-0   1/1     Running   0          4m57s
myserver-myapp-1   1/1     Running   0          4m56s
myserver-myapp-2   1/1     Running   0          4m54s
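
Each Pod also gets a stable DNS name through the headless Service; a lookup from a throwaway Pod confirms it (a sketch, assuming the busybox image is pullable):

kubectl run -n myserver dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup myserver-myapp-0.myserver-myapp-service.myserver.svc.cluster.local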
A DaemonSet runs one copy of the same Pod on every node in the cluster. When a new node joins, the same Pod is scheduled onto it; when a node is removed from the cluster, its Pod is reclaimed by Kubernetes. Deleting the DaemonSet controller deletes all Pods it created.
---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet 
metadata:
  name: myserver-myapp
  namespace: myserver
spec:
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      hostNetwork: true
      hostPID: true
      containers:
      - name: myserver-myapp-frontend
        image: nginx:1.20.2-alpine 
        ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myserver-myapp-frontend
  namespace: myserver
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30018
    protocol: TCP
  type: NodePort
  selector:
    app: myserver-myapp-frontend

kubectl apply -f 1-DaemonSet-webserver.yaml
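
A quick check that exactly one Pod landed on each node (a sketch):

kubectl get pod -n myserver -o wide | grep myserver-myapp   # one Pod per node, on the host network
kubectl get node                                            # compare against the node count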
