Kubernetes Controllers (have you studied today?)!!!

1. Why do we need controllers

K8S is a platform for managing and scheduling container resources. Containers run inside Pods, and the Pod is the smallest unit in K8S. These Pods are the units whose state and lifecycle we need to manage. How do we manage them? This is where controllers come in.

A handy rule of thumb: Application (APP) = network + carrier + storage
Applications generally fall into four categories: stateless, stateful, daemon, and batch.

  • Stateless applications: instances are not involved in transactional interaction, persist no data locally, and multiple instances return exactly the same response to the same request. Examples: nginx or tomcat
  • Stateful applications: services that need data-storage capabilities, or multi-threaded services, queues, and the like. Examples: mysql, kafka, redis, zookeeper, etc.
  • Daemon applications: like daemon processes, they keep running long-term, listening and providing service continuously. Examples: ceph, logstash, fluentd, etc.
  • Batch applications: task-style workloads, usually one-off. Example: running a script that batch-renames folders.

2. Which controllers does K8S have

Given the application types above, stateless, stateful, daemon, and batch, K8S naturally implements controllers dedicated to each type. Overall, K8S has five controllers, covering stateless, stateful, daemon, and batch applications (Job and CronJob both handle batch workloads):

  • Deployment
  • StatefulSet
  • DaemonSet
  • Job
  • CronJob

2.1: The relationship between Pods and controllers

Controllers are the objects that manage and run containers in the cluster; they are associated with their Pods through a label selector.

Pods rely on controllers for application operations such as scaling and upgrades.
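
This association is easy to see from the command line; a quick sketch (it assumes the nginx-deployment created in the next section):

# The SELECTOR column shows the label selector the controller uses
kubectl get deploy nginx-deployment -o wide
# List only the Pods carrying the matching label, i.e. the Pods this controller manages
kubectl get pods -l app=nginx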


2.2: Deployment (stateless applications)

Typical use case: web services

Deployment literally means deployment/scheduling. Through a Deployment we manage ReplicaSets (RS); you can think of it simply as a declaration in a YAML file: in the Deployment manifest you define the Pod count, the update strategy, the image to use, resource limits, and so on. Stateless applications are generally created with a Deployment. For example:

[root@master01 demo]# vim nginx-deploy.yaml

apiVersion: apps/v1
kind: Deployment   # the resource kind is Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3      # three replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80

[root@master01 demo]# kubectl create -f nginx-deploy.yaml 
deployment.apps/nginx-deployment created
[root@master01 demo]# kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-d55b94fd-6hc6h   1/1     Running   0          41h
pod/nginx-deployment-d55b94fd-hclxq   1/1     Running   0          41h
pod/nginx-deployment-d55b94fd-wjjc2   1/1     Running   0          41h

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   4d19h

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3         3         3            3           45s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-d55b94fd   3         3         3       41h

Inspect the controller's details:

[root@master01 demo]# kubectl edit deployment/nginx-deployment

..... output omitted .....
strategy:
    rollingUpdate:         # updates use the rolling-update mechanism
      maxSurge: 25%        # up to 25% extra Pods may be created during an update (at most 125% of desired), old Pods being removed as new ones become ready
      maxUnavailable: 25%  # up to 25% of Pods may be unavailable (never below 75% of desired)
    type: RollingUpdate
... output omitted ....

# describe works too
[root@master01 demo]# kubectl describe deploy nginx-deployment
.... output omitted ....
RollingUpdateStrategy:  25% max unavailable, 25% max surge
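
The same strategy can be tuned in your own manifest; a minimal sketch (the values are examples, and absolute Pod counts work as well as percentages):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # create at most 1 extra Pod during the update
      maxUnavailable: 0      # never drop below the desired replica count (zero downtime)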

View the rollout history:

[root@master01 demo]# kubectl rollout history deploy/nginx-deployment
deployment.extensions/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>           # only one revision here, so no rolling update has happened yet
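
To actually produce a second revision, trigger a rolling update; a sketch (the nginx:1.16.0 tag is just an example):

# Change the image to start a rolling update
kubectl set image deploy/nginx-deployment nginx=nginx:1.16.0
# Watch the update proceed under the maxSurge/maxUnavailable rules above
kubectl rollout status deploy/nginx-deployment
# The history now shows revision 2; roll back if needed
kubectl rollout history deploy/nginx-deployment
kubectl rollout undo deploy/nginx-deployment --to-revision=1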

2.3: StatefulSet (stateful applications)

StatefulSet exists because K8S needed a way to land "stateful" applications; Stateful literally means "having state". For a long time people doubted whether stateful applications could really run on K8S, and StatefulSet effectively solved that problem. Stateful applications generally need consistency: a fixed network identity, persistent storage, ordered deployment and scaling, ordered rolling updates, and so on. In two words: stable and ordered.
So how does StatefulSet keep Pods stable and ordered? What are its internal mechanisms? In summary:

Give each Pod a unique, persistent identifier (e.g. the Pod name)
Give each Pod its own persistent storage
Pods are deployed in order, 0 through N-1
Scaling up requires all preceding Pods to already exist
When a Pod is terminated, the Pods after it are terminated too

An example: we create three Pods zk01, zk02, zk03 (zk01 being the assigned name). To scale up to zk04, Pods 01, 02, and 03 must all exist, otherwise the scale-up fails; and if zk02 is deleted, zk03 is terminated along with it.
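
This ordering is easy to observe when scaling; a sketch (assuming a StatefulSet named zk with 3 replicas):

# Scale up: zk-3 is created only after zk-0, zk-1 and zk-2 are Running and Ready
kubectl scale statefulset zk --replicas=4
# Scale down: Pods are removed in reverse ordinal order (highest first)
kubectl scale statefulset zk --replicas=2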

2.31: Characteristics of stateful vs. stateless services

  • Characteristics of stateless services:
1) The Deployment treats all Pods as identical

2) No ordering requirements

3) No constraint on which node a Pod runs on

4) Can be scaled out and in at will
  • Characteristics of stateful services:
1) Instances differ from one another; each has its own identity and distinct metadata, e.g. etcd, zookeeper

2) Instances are not interchangeable, and such applications rely on external storage.

2.32: The difference between a regular service and a headless service

  • service: an access policy for a set of Pods; it provides a cluster IP for communication inside the cluster, plus load balancing and service discovery.

  • Headless service
    A headless service needs no cluster IP and binds directly to the individual Pod IPs; headless services are commonly used for stateful StatefulSet deployments

2.33: Configuring a service resource

# Write a YAML manifest
[root@master01 demo]# cat nginx.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx

# Create the Service and check the exposed port; cluster IP 10.0.0.57 is assigned as the in-cluster address
[root@master01 demo]# kubectl create -f nginx.yaml 
service/nginx-service created
[root@master01 demo]# kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP        4d19h
nginx-service   NodePort    10.0.0.57    <none>        80:31699/TCP   5s

# Get the Service's endpoint information
[root@master01 demo]# kubectl get endpoints
NAME            ENDPOINTS                                    AGE
kubernetes      192.168.158.10:6443,192.168.158.40:6443      4d19h
nginx-service   172.17.17.2:80,172.17.8.3:80,172.17.8.4:80   2m6s
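
The NodePort shown above (31699) is also reachable on each node's own IP from outside the cluster; a quick check (192.168.158.20 is one of the node IPs seen later in this article):

curl http://192.168.158.20:31699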

On the nodes:

[root@node1 ~]# systemctl restart flanneld.service 
[root@node1 ~]# systemctl restart docker

# Verify in-cluster communication via the cluster IP
[root@node01 ~]# curl 10.0.0.57
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

2.34: Creating a headless service and DNS resources

Because the IP addresses of stateful Pods are dynamic, a headless service must be paired with a DNS service.

[root@master01 demo]# vim nginx-headless.yaml

apiVersion: v1
kind: Service      # the resource kind is Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None   # a headless service does not use a cluster IP
  selector:
    app: nginx

# Create the resource
[root@master01 demo]# kubectl apply -f nginx-headless.yaml 
service/nginx created

# Delete the previous Service resource
[root@master01 demo]# kubectl delete -f nginx.yaml 
service "nginx-service" deleted

View the Service; CLUSTER-IP now shows None:
[root@master01 demo]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   4d20h
nginx        ClusterIP   None         <none>        80/TCP    2m26s
# Without a cluster IP it cannot be reached directly; a DNS service must be configured

2.35: DNS-based service discovery in Kubernetes with CoreDNS

In a Kubernetes cluster the recommended access address for a service is its Service name, so a cluster-wide DNS service is needed to resolve Service names to cluster IPs. This is Kubernetes' DNS-based service discovery.
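
The names this DNS serves follow a fixed pattern (assuming the default cluster.local domain used below):

# <service>.<namespace>.svc.cluster.local
#   -> resolves to the Service's cluster IP (or directly to Pod IPs for a headless service)
# <pod>.<service>.<namespace>.svc.cluster.local
#   -> resolves to one specific StatefulSet Pod, e.g.
#      nginx-statefulset-0.nginx.default.svc.cluster.local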

Configure the DNS service:

[root@master01 demo]# vim coredns.yaml 
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
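# A note on the Corefile above (added for clarity): the kubernetes plugin answers
# queries for cluster.local and the reverse zones from the API server; "pods insecure"
# enables A records for Pod IPs; "proxy . /etc/resolv.conf" forwards everything else
# to the node's upstream resolvers; cache/loop/reload/loadbalance are the usual
# operational plugins.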
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Create the resources and verify DNS resolution:

[root@master01 demo]# kubectl create -f coredns.yaml

[root@master01 demo]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE
coredns-56684f94d6-7k58d                1/1     Running   0          15m     172.17.8.2    192.168.158.20   <none>
kubernetes-dashboard-7dffbccd68-ftt6c   1/1     Running   1          2d18h   172.17.17.3   192.168.158.30   <none>

Create a test Pod and verify the DNS resolution function:

[root@master01 demo]# cat dns.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never   # restart policy: do not restart the container when it exits

# Create the resource
[root@master01 demo]# kubectl apply -f dns.yaml 

# View the Pod
[root@master01 demo]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
dns-test                          1/1     Running   0          16m

# Verify DNS by resolving the kubernetes and nginx Service names (the nodes need flanneld and docker restarted for service forwarding)
[root@master01 demo]# kubectl exec -it dns-test sh
# Resolve the nginx headless Service; this only works if the resource exists
/ # nslookup nginx
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 172.17.17.2 172-17-17-2.nginx.default.svc.cluster.local
Address 2: 172.17.8.3 172-17-8-3.nginx.default.svc.cluster.local
Address 3: 172.17.8.4 172-17-8-4.nginx.default.svc.cluster.local

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Create a StatefulSet resource:

[root@master demo]# vim sts.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None  # headless service
  selector:
    app: nginx
---
apiVersion: apps/v1beta1  
kind: StatefulSet  
metadata:
  name: nginx-statefulset  
  namespace: default
spec:
  serviceName: nginx  
  replicas: 3  
  selector:
    matchLabels:  
       app: nginx
  template:  
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:latest  
        ports:
        - containerPort: 80 
        
# Clean up all previous Pod resources
[root@master demo]# kubectl delete -f .

# Create
[root@master01 demo]# kubectl apply -f sts.yaml

# View the Pods and Services
[root@master01 demo]# kubectl get pod,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/dns-test              1/1     Running   0          5m49s
pod/nginx-statefulset-0   1/1     Running   0          5m29s
pod/nginx-statefulset-1   1/1     Running   0          5m21s
pod/nginx-statefulset-2   1/1     Running   0          5m13s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   4d21h
service/nginx        ClusterIP   None         <none>        80/TCP    5m29s


[root@master01 demo]# kubectl exec -it dns-test sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # nslookup nginx-statefulset-0.nginx 
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-0.nginx
Address 1: 172.17.8.3 nginx-statefulset-0.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-1.nginx 
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-1.nginx
Address 1: 172.17.17.4 nginx-statefulset-1.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-2.nginx 
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      nginx-statefulset-2.nginx
Address 1: 172.17.8.4 nginx-statefulset-2.nginx.default.svc.cluster.local

Compared with a Deployment, a StatefulSet's Pods have an identity! (The ordinal index distinguishes each unique member.)

The three elements of identity:

1. DNS name: nginx-statefulset-0.nginx

2. Hostname: nginx-statefulset-0

3. Storage (PVC)
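
The storage piece is not shown in the sts.yaml above; a minimal sketch of what adding it would look like, under the StatefulSet spec (the www volume name and the nfs-client StorageClass are assumptions, not something created earlier in this article):

  volumeClaimTemplates:            # each replica gets its own PVC: www-nginx-statefulset-<ordinal>
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-client # assumed StorageClass; it must exist in the cluster
      resources:
        requests:
          storage: 1Gi

The container then mounts it with a matching volumeMount (name: www), and the PVC survives Pod rescheduling, which is exactly the stable-storage part of the identity.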

2.4: DaemonSet

Daemon means a background daemon process, so a DaemonSet is clearly K8S's controller for daemon-style workloads. For example, if we need fluentd on every node to collect container logs, a DaemonSet is the natural fit: it ensures that all nodes (or just the nodes you select) run one copy of the fluentd Pod. When a node joins the cluster, a Pod is added for it; when a node is removed, its Pod is reclaimed. Deleting the DaemonSet deletes every Pod it created.

Typical use cases: monitoring, distributed storage, log collection, etc.

So, as you can imagine, a DaemonSet is ideal for applications that sit quietly in the background and track the nodes themselves, and it is very convenient.

[root@master01 demo]# vim ds.yaml

apiVersion: apps/v1
kind: DaemonSet         # the resource kind is DaemonSet
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4  # image
        ports:
        - containerPort: 80
# Create
[root@master01 demo]# kubectl apply -f ds.yaml
daemonset.apps/nginx-deployment created
[root@master01 demo]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE
nginx-deployment-fwbrh   1/1     Running   0          2m15s   172.17.17.5   192.168.158.30   <none>
nginx-deployment-hztz5   1/1     Running   0          2m15s   172.17.8.5    192.168.158.20   <none>
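
To restrict a DaemonSet to a subset of nodes, add a nodeSelector to the Pod template; a sketch (the disktype=ssd label is just an example):

# Label the target node first
kubectl label node 192.168.158.20 disktype=ssd

# Then, in the Pod template spec of ds.yaml:
    spec:
      nodeSelector:
        disktype: ssd        # only nodes carrying this label run the Pod
      containers:
      ...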

2.5: Job

A Job is a task. Even without K8S, we routinely run batch automation scripts or one-off ansible runs. In K8S, batch work is done with a Job: a run-once task that guarantees one or more Pods of the batch job finish successfully.

Typical use cases: offline data processing, video transcoding, and similar workloads

[root@master01 demo]# vim job.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
# The default retry limit is 6; here it is lowered to 4. With restartPolicy: Never a failed
# Pod is replaced by a new one, so the retry count should be bounded.

# Create
[root@master01 demo]# kubectl apply -f job.yaml 
job.batch/pi created
[root@master01 demo]# kubectl get pods -w

pi-ctcp4   1/1   Running   0     61s
pi-ctcp4   0/1   Completed   0     66s


# Check the logs for the detailed output
[root@master shuai]# kubectl logs pi-v6kc5 

(the log output is π computed to 2000 decimal places)
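
A Job can also run more than one Pod; the relevant spec fields, as a sketch (values are examples):

spec:
  completions: 5    # the Job succeeds once 5 Pods have completed successfully
  parallelism: 2    # run at most 2 Pods at the same time
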
2.6: CronJob
In IT environments we often need tasks that start on a schedule. On a traditional Linux box we would simply define a crontab entry; in K8S the CronJob controller serves that purpose. It is essentially the Job above, upgraded with scheduled, time-based execution.

Periodic tasks, just like Linux crontab.

Typical use cases: notifications, backups

Example: print "Hello from the Kubernetes cluster" every minute

[root@master01 demo]# vim cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"    # cron schedule: run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

# Create the resource
[root@master01 demo]# kubectl apply -f cronjob.yaml

# Watch the Pod status
[root@master01 demo]# kubectl get pods -w
NAME                     READY   STATUS      RESTARTS   AGE
hello-1611844500-8zqj9   0/1     Completed   0          3m4s
hello-1611844560-bgqpl   0/1     Completed   0          2m4s
hello-1611844620-nqdlb   0/1     Completed   0          64s
hello-1611844680-9tvv4   0/1     Completed   0          3s

# View the logs
[root@master01 demo]# kubectl logs hello-1611844620-nqdlb
Thu Jan 28 13:41:19 UTC 2021
Hello from the Kubernetes cluster

# After another minute the next run completes
[root@master01 demo]# kubectl get pods
NAME                     READY   STATUS      RESTARTS   AGE
hello-1611844680-9tvv4   0/1     Completed   0          2m16s
hello-1611844740-zmxrt   0/1     Completed   0          76s
hello-1611844800-fj5mq   0/1     Completed   0          16s
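
By default a CronJob keeps the last 3 successful Jobs and 1 failed Job, which is why completed Pods linger above; this and the overlap behavior can be tuned in the spec; a sketch (values are examples):

spec:
  concurrencyPolicy: Forbid        # skip a run if the previous one is still in progress
  successfulJobsHistoryLimit: 3    # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1        # keep the most recent failed Job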
