Kubernetes

K8s Concepts and Terminology

In k8s, hosts are divided into a master and nodes. A client request first goes to the master; the master analyzes the resource state of each node, picks the most suitable node, and that node then starts the container through docker.

Master

  • API server: receives and processes requests
  • scheduler: the scheduler, running on the master; watches the resources on each node and, based on the amount of resources the user requests, selects a node on which the container will be created. It first runs the predicate (filtering) step and then the priority (scoring) step.

  • Controller-Manager: runs on the master; monitors whether each controller is healthy. The controller manager itself is made redundant.

Node

  • kubelet: communicates with the Master; receives the tasks the master schedules to it and executes them, starting containers through the container engine (docker). It also checks the health of pods on its own node.
  • Pod controller: periodically probes whether the containers it manages are healthy
  • service: relates pods via a label selector and dynamically tracks pod IP addresses. Client requests first reach the service and are then scheduled by the service to a pod; every pod is accessed through a service. A service is not a physical component or application, only a set of DNAT rules (IPVS is used for scheduling after 1.11). The service is located through DNS resolution; if the service name is changed, the DNS record is updated automatically.
  • kube-proxy: stays in communication with the master API at all times; when the pods behind a service change, it learns of the change from the API server and automatically adjusts the iptables rules

Pod

  • Used to run containers. A pod can run multiple containers; containers in the same pod share the network namespace, the IPC namespace and the UTS namespace, as well as storage volumes, much like a virtual machine. The pod is the smallest unit k8s manages and can be thought of as an outer shell around containers. Nodes exist mainly to run pods. In general a pod runs only one main container; any other containers exist to assist that main container.

label

  • Every pod should be labeled; labels identify pods and are key/value data

label selector

  • Label selector: the mechanism for filtering the resource objects that match a set of labels, as in the example below
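    For example, listing only the pods that carry certain labels (the label values are the ones used elsewhere in these notes):

    kubectl get pods -l app=myapp                          # equality-based selector
    kubectl get pods -l "app in (myapp),release=canary"    # set-based and equality selectors combined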

AddOns

  • k8s add-on components, e.g. the DNS Pod, flannel, an ingressController, Dashboard

etcd (shared storage)

  • Stores the state data of the entire cluster (it must be made redundant); configure it for https communication to keep it secure (requires a CA and certificates)

The k8s Network Model

Client traffic is forwarded through the node network to the service network, and from the service network it reaches the pod network.

  • Node network
  • Service network (also called the cluster network)
  • Pod network:
    1. Multiple containers inside the same Pod communicate over lo
    2. Pods communicate with each other over an Overlay Network (tunneling)
    3. Pod-to-service communication: the Pod's gateway points to the docker0 bridge

K8s namespaces

  • Isolate pods from communicating with each other directly, also for security reasons. Common network plugins providing pod networking and network policy:
    1. flannel: network configuration (simple, but does not support network policies)
    2. calico: network configuration and network policy (more complex to configure)
    3. canal: a combination of the two above

Installing and Deploying k8s

CentOS version information

[root@master ~]# uname -r
3.10.0-862.el7.x86_64
[root@master ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core)

Deployment method: installation with kubeadm (one master node and two node machines)

  1. master, nodes: install kubelet, kubeadm and docker
  2. master: kubeadm init
  3. nodes: kubeadm join (docs: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.8.md)
  4. Turn off firewalld and iptables

  5. Create the yum repositories for docker-ce and kubernetes:

    [root@master ~]# cd /etc/yum.repos.d/
    [root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@master ~]# cat > kubernetes.repo <<EOF
    [kubernetes]
    name=Kubernetes Repo
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    enabled=1
    EOF
  6. Install docker-ce, kubelet, kubeadm and kubectl

    [root@master ~]# yum -y install docker-ce kubelet kubeadm kubectl
    [root@master ~]# systemctl stop firewalld # stop the firewall
    [root@master ~]# systemctl disable firewalld
    [root@master ~]# systemctl enable docker kubelet
  7. Create the /etc/sysctl.d/k8s.conf file and configure kubelet so it does not fail when swap is enabled

    [root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    [root@master ~]# cat > /etc/sysconfig/kubelet<<EOF
    KUBELET_EXTRA_ARGS=--fail-swap-on=false
    EOF

    The commands above must be executed on both the master and the nodes.

  8. Because of the Great Firewall, google's registry is not reachable from China, but the required images can be found on Aliyun, pulled, and simply re-tagged. The script below downloads the needed images.

    #!/bin/bash
    # images required by kubeadm v1.12.1, pulled from the Aliyun mirror and re-tagged as k8s.gcr.io
    image_aliyun=(kube-apiserver-amd64:v1.12.1 kube-controller-manager-amd64:v1.12.1 kube-scheduler-amd64:v1.12.1 kube-proxy-amd64:v1.12.1 pause-amd64:3.1 etcd-amd64:3.2.24)
    for image in ${image_aliyun[@]}
    do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$image
      docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$image k8s.gcr.io/${image/-amd64/}
    done
  9. Initialize

    kubeadm init --apiserver-advertise-address=192.168.175.4 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap
    
    Save the node join command for later use: #kubeadm join 192.168.175.4:6443 --token wyy67p.9wmda1iw4o8ds0c5 --discovery-token-ca-cert-hash sha256:3de3e4401de1cdf3b4c778ad1ac3920d9f7b15ca34b4c5ebe44d92e60d1290e0
    If you forget it, it can be regenerated with kubeadm token create --print-join-command
  10. After it finishes, perform some initialization work

    [root@master ~]# mkdir -p $HOME/.kube
    [root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  11. Check the information

    kubectl get cs
    kubectl get nodes
  12. Deploy the flannel network plugin

      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      # if the image cannot be pulled, download it from Aliyun and re-tag it as shown above
  13. Join the node machines to the cluster:

    [root@node1 ~]# systemctl enable docker kubelet
    [root@node1 ~]# kubeadm join 192.168.175.4:6443 --token wyy67p.9wmda1iw4o8ds0c5 --discovery-token-ca-cert-hash sha256:3de3e4401de1cdf3b4c778ad1ac3920d9f7b15ca34b4c5ebe44d92e60d1290e0

Ways of managing a Kubernetes cluster:

  • Imperative commands: create, run, expose, delete, edit...
  • Imperative configuration files: create -f /PATH/TO/RESOURCE_CONFIGURATION_FILE, delete -f, replace -f
  • Declarative configuration files: apply -f, patch... (see the short example below)
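A minimal sketch of the practical difference (deploy-demo.yaml refers to the Deployment manifest shown later in these notes):

    kubectl create -f deploy-demo.yaml   # imperative: errors out if the resource already exists
    kubectl apply -f deploy-demo.yaml    # declarative: creates the resource, or patches it to match the file; can be re-run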

Basic Commands

  • kubectl is the client program for the apiserver; kubectl is the single management entry point for the whole kubernetes cluster and can manage all of the cluster's resources by connecting to the apiserver
  • Objects kubectl can manage: pod, service, replicaset, deployment, statefulset, daemonset, job, cronjob, node
  • kubectl describe node node1: show detailed information about node1
  • kubectl cluster-info: view cluster information
  • deployment, job: pod controllers
  • kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1: create a deployment
  • kubectl get deployment: view the deployments that have been created
  • kubectl get pods: view pod information; add -o wide for more detail, --show-labels to show label information
  • kubectl delete pod <pod name>: delete a pod
  • kubectl get svc: view service information
  • kubectl delete svc nginx: delete the service named nginx
  • kubectl expose deployment nginx-deploy --name nginx --port=80 --target-port=80 --protocol=TCP: create a service that exposes port 80 to the outside
  • kubectl edit svc nginx: edit the nginx service
  • kubectl describe deployment nginx-deploy: view detailed information about the pod controller
  • kubectl scale --replicas=5 deployment myapp: scale myapp to five replicas
  • kubectl set image deployment myapp myapp=ikubernetes/myapp:v2: upgrade the image version
  • kubectl rollout status deployment myapp: watch myapp's rolling-update progress
  • kubectl rollout undo deployment myapp: roll back
  • kubectl get pods --show-labels: show label information
  • kubectl label pod PODNAME app=myapp release=canary: label a pod so that a matching controller can manage it
  • kubectl delete pv --all: delete all PVs

Defining Resource Manifests

Kubernetes abstracts everything as resources; once these resources are instantiated they are called objects.

  • workload: Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob… workload-type resources
  • Service discovery and load balancing: Service, Ingress...
  • Configuration and storage: Volume, CSI (for plugging in third-party storage volumes)
    1. ConfigMap, Secret (configuration for containerized applications)
    2. DownwardAPI
  • Cluster-level resources: Namespace, Node, Role, ClusterRole, ClusterRoleBinding, RoleBinding
  • Metadata-type resources:
    1. HPA
    2. PodTemplate (the template used by controllers when creating containers)
    3. LimitRange
[root@master manifests]# kubectl get pods myapp-6946649ccd-2tjs8 -o yaml
apiVersion: v1   # declares which k8s API group/version the object belongs to
kind: Pod # the resource category (service, deployment, etc. are also categories)
metadata: # metadata; a nested field
  creationTimestamp: 2018-10-22T15:08:38Z
  generateName: myapp-6946649ccd-
  labels:
    pod-template-hash: 6946649ccd
    run: myapp
  name: myapp-6946649ccd-2tjs8
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-6946649ccd
    uid: 0e9fe6e8-d608-11e8-b847-000c29e073ed
  resourceVersion: "36407"
  selfLink: /api/v1/namespaces/default/pods/myapp-6946649ccd-2tjs8
  uid: 5abff320-d60c-11e8-b847-000c29e073ed
spec: # the specification: defines what characteristics the resource should have; controllers work to satisfy this desired state; defined by the user
  containers:
  - image: ikubernetes/myapp:v1  
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-962mh
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: node2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-962mh
    secret:
      defaultMode: 420
      secretName: default-token-962mh
status: # shows the resource's current state; if it differs from the desired state the system moves it toward the desired state; read-only
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:38Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:40Z
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-10-22T15:08:38Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://f9a63dc33340082c3a78196f624bc52c193d3f2694c05f91ecb82aa143a9e369
    image: ikubernetes/myapp:v1
    imageID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    lastState: {}
    name: myapp
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-10-22T15:08:39Z
  hostIP: 192.168.175.5
  phase: Running
  podIP: 10.244.2.15
  qosClass: BestEffort
  startTime: 2018-10-22T15:08:38Z
  • Methods of creating resources:

    1. The API server only accepts resource definitions in JSON format; the run command converts the configuration automatically
    2. Configuration manifests can be supplied in yaml format; the apiserver converts them to JSON automatically before they are submitted
  • Most resource manifests consist of five top-level fields:

    1. apiVersion:group/version

      [root@master manifests]# kubectl api-versions
      admissionregistration.k8s.io/v1beta1
      apiextensions.k8s.io/v1beta1
      apiregistration.k8s.io/v1
      apiregistration.k8s.io/v1beta1
      apps/v1
      apps/v1beta1 # alpha = internal testing, beta = public testing, stable = stable release
      apps/v1beta2
      authentication.k8s.io/v1
      authentication.k8s.io/v1beta1
      authorization.k8s.io/v1
      authorization.k8s.io/v1beta1
      autoscaling/v1
      autoscaling/v2beta1
      autoscaling/v2beta2
      batch/v1
      batch/v1beta1
      certificates.k8s.io/v1beta1
      coordination.k8s.io/v1beta1
      events.k8s.io/v1beta1
      extensions/v1beta1
      networking.k8s.io/v1
      policy/v1beta1
      rbac.authorization.k8s.io/v1
      rbac.authorization.k8s.io/v1beta1
      scheduling.k8s.io/v1beta1
      storage.k8s.io/v1
      storage.k8s.io/v1beta1
      v1
    2. kind: the resource category; indicates what kind of resource you intend to create

    3. metadata: metadata

      • name
      • namespace: the namespace
      • labels: labels
      • annotations: resource annotations; unlike labels they cannot be used to select resource objects and only provide metadata for the object
      • uid: a unique identifier (generated automatically by the system)
      • Every resource has a reference PATH: /apis/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME
    4. spec: the desired state; differs from one resource type to another (desired state)

      • status: the current state (current state); this field is maintained by the kubernetes cluster and is read-only.
      • Use kubectl explain pods.spec.containers to see how each field is defined
    5. Pod lifecycle: phases: Pending, Running, Failed, Succeeded, Unknown

    6. Creating a Pod:

      • Important behaviors in a Pod's lifecycle:
        1. Init containers
        2. Container probes: liveness and readiness
        3. restartPolicy: Always, OnFailure, Never; defaults to Always
    7. Example of a hand-written resource manifest

      [root@master manifests]# cat pod-demo.yaml 
      apiVersion: v1
      kind: Pod # note: case-sensitive
      metadata:
        name: pod-demo # pod name
        namespace: default # optional; defaults to default
        labels:
          app: myapp
          tier: frontend # the tier the app belongs to
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
        - name: busybox
          image: busybox:latest
          command:
          - "/bin/sh"
          - "-c"
          - "sleep 3600"
    8. Create the pod resource

      kubectl create -f pod-demo.yaml # create the resource
      kubectl describe pods pod-demo # show detailed pod information
      [root@master manifests]# kubectl describe pod pod-demo
      Name:               pod-demo
      Namespace:          default
      Priority:           0
      PriorityClassName:  <none>
      Node:               node2/192.168.175.5 
      Start Time:         Tue, 23 Oct 2018 02:33:51 +0800
      Labels:             app=myapp
                          tier=frontend
      Annotations:        <none>
      Status:             Running
      IP:                 10.244.2.20
      Containers: # the containers inside the pod
        myapp:
          Container ID:   docker://20dabd0d998f5ebd2a7ad1b875e3517831b100f1df9340eefa9e18d89941a8ac
          Image:          ikubernetes/myapp:v1
          Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
          Port:           <none>
          Host Port:      <none>
          State:          Running
            Started:      Tue, 23 Oct 2018 02:33:52 +0800
          Ready:          True
          Restart Count:  0
          Environment:    <none>
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from default-token-962mh (ro)
        busybox:
          Container ID:  docker://d69f788cdf8772497c0afc19b469c3553167d3d5ccf03ef4876391a7ed532aa9
          Image:         busybox:latest
          Image ID:      docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812
          Port:          <none>
          Host Port:     <none>
          Command:
            /bin/sh
            -c
            sleep 3600
          State:          Running
            Started:      Tue, 23 Oct 2018 02:33:56 +0800
          Ready:          True
          Restart Count:  0
          Environment:    <none>
          Mounts:
            /var/run/secrets/kubernetes.io/serviceaccount from default-token-962mh (ro)
      Conditions:
        Type              Status
        Initialized       True 
        Ready             True 
        ContainersReady   True 
        PodScheduled      True 
      Volumes:
        default-token-962mh:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  default-token-962mh
          Optional:    false
      QoS Class:       BestEffort
      Node-Selectors:  <none>
      Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                       node.kubernetes.io/unreachable:NoExecute for 300s
      Events:
        Type    Reason     Age   From               Message
        ----    ------     ----  ----               -------
        Normal  Pulled     48s   kubelet, node2     Container image "ikubernetes/myapp:v1" already present on machine
        Normal  Created    48s   kubelet, node2     Created container
        Normal  Started    48s   kubelet, node2     Started container
        Normal  Pulling    48s   kubelet, node2     pulling image "busybox:latest"
        Normal  Pulled     44s   kubelet, node2     Successfully pulled image "busybox:latest"
        Normal  Created    44s   kubelet, node2     Created container
        Normal  Started    44s   kubelet, node2     Started container
        Normal  Scheduled  18s   default-scheduler  Successfully assigned default/pod-demo to node2 # successfully scheduled to node2
      kubectl  logs pod-demo myapp # view the logs of the myapp container in the pod
      kubectl logs pod-demo busybox
      kubectl get pods -w # -w watches continuously
      kubectl exec -it pod-demo -c myapp -- /bin/sh
      kubectl delete -f pod-demo.yaml # delete the resources defined in the manifest file

Liveness and Readiness Probes

There are three probe handler types: ExecAction, TCPSocketAction and HTTPGetAction

liveness exec example (liveness probe)

[root@master manifests]# cat liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]
      initialDelaySeconds: 2
      periodSeconds: 3

liveness HTTPGetAction probe example (liveness probe)

[root@master manifests]# cat liveness-httpget.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html  
      initialDelaySeconds: 1
      periodSeconds: 3

readiness probe example:

[root@master manifests]# cat rediness-httpget.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html  
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master manifests]#kubectl describe pod readiness-httpget-pod

lifecycle

postStart: runs a command right after the container is created

[root@master manifests]# cat poststart-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["mkdir", "-p", "/data/web/html"]
    command: ["/bin/sh", "-c","sleep 3600"]

Pod Controllers

  • ReplicaSet: creates the specified number of pod replicas on the user's behalf and ensures the replica count matches what the user asked for; supports automatic scaling. It consists of three components: the desired number of pod replicas, a label selector, and a pod resource template used to create new pods

    It is not recommended to use ReplicaSet directly

  • Deployment: built on top of ReplicaSet; supports scaling, rolling updates and rollbacks, and declarative configuration; generally used for stateless services

  • DaemonSet: ensures every node in the cluster runs exactly one replica of a specific pod; when new nodes join the cluster such pods are added automatically; commonly used for system-level stateless services

  • Job: runs a specific task and exits automatically when the task completes

  • CronJob: runs a specific task periodically

  • StatefulSet: stateful applications with persistent storage

Example: creating a ReplicaSet controller

[root@master manifests]# cat rs-demo.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80

You can use kubectl edit rs myapp (the controller's name) to edit the controller and adjust the pod replica count and other parameters on the fly.

  • Deployment uses ReplicaSet to implement rolling updates, blue-green deployments and other update strategies

  • Deployment is built on top of ReplicaSet and controls pods by controlling ReplicaSets

Example configuration for creating a Deployment controller

[root@master manifests]# cat deploy-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
          
[root@master manifests]# kubectl apply -f deploy-demo.yaml   # apply differs from create in that apply can be run repeatedly; the differences are written to etcd, so you can edit the manifest file directly to update the configuration and then simply run kubectl apply -f deploy-demo.yaml once more

Rolling back a Deployment

[root@master manifests]# kubectl rollout undo deploy myapp-deploy [--revision=1 specifies which revision to roll back to]
[root@master manifests]# kubectl get deployment -o wide
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy   3         3         3            3           20m   myapp        ikubernetes/myapp:v2   app=myapp,release=canary  # view the deployment controller info; it shows that a Deployment is built on top of a ReplicaSet

kubectl get rs -o wide # view ReplicaSet controller info

Patching a controller

[root@master ]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}' # patch the controller
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
[root@master manifests] kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy # update the image, then pause the rollout

Monitoring the update process

[root@master manifests]# kubectl rollout status deploy myapp-deploy
[root@master manifests]# kubectl get pods -l app=myapp -w
kubectl rollout history deployment myapp-deploy #查看历史更新版本

Example of a DaemonSet controller


[root@master manifests]# cat ds-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---    # separates resource definitions
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info

Service

  • Implementation modes:
    1. userspace
    2. iptables
    3. ipvs (requires adding KUBE_PROXY_MODE=ipvs to the kubelet config file /etc/sysconfig/kubelet and loading the kernel modules ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4; see the sketch after this list)
  • Four types:
    1. ClusterIP: reachable only inside the cluster
    2. NodePort: client --> NodeIP:NodePort --> ClusterIP:ServicePort --> PodIP:containerPort
    3. LoadBalancer:
    4. ExternalName
  • No ClusterIP: Headless Service
    • ServiceName --> PodIP
  • Ingress Controller (usually an application with layer-7 proxying and scheduling capabilities)
    1. Nginx
    2. Envoy (can watch its configuration file and reload it as soon as it changes)
    3. Traefik
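A rough sketch of enabling the ipvs proxy mode described above (module names as listed; the config file location follows the kubeadm/sysconfig layout used in these notes and may differ on other installations):

    # load the required kernel modules (nf_conntrack_ipv4 applies to older kernels such as 3.10)
    for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $mod; done
    lsmod | grep ip_vs                                        # verify the modules are loaded
    echo 'KUBE_PROXY_MODE=ipvs' >> /etc/sysconfig/kubelet     # then redeploy/restart kube-proxy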

Ingress

What Ingress is

Ingress: put simply, a set of rule definitions. For example, a given domain name maps to a given service, i.e. when a request for that domain comes in it is forwarded to that service. These rules are combined with an Ingress Controller, which writes them dynamically into the load balancer's configuration, thereby providing service discovery and load balancing as a whole

Ingress Controller

In essence it can be thought of as a watcher: the Ingress Controller talks to the kubernetes API continuously and senses changes to backend services and pods in real time, such as pods or services being added or removed. When it learns of such changes, the Ingress Controller combines them with the Ingress rules to generate configuration, then updates the reverse-proxy load balancer and reloads its configuration, thus achieving service discovery

Creating an Ingress

  • Defines how a frontend is built. A service classifies a group of backend pods; based on that classification the ingress knows how many backend pods there are, obtains their IP addresses and injects them into the frontend scheduler's configuration file in time, and the scheduler reloads its configuration file in real time
  • How to use a layer-7 proxy:
    1. Deploy an Ingress Controller; the ingress defines how the Ingress Controller builds the frontend scheduler and the backend server groups
    2. Configure an ingress as needed; an ingress defines a set of forwarding rules
    3. Backend pod information collected through the service is defined as upstream servers and reflected into the ingress, which injects it dynamically into the ingress controller

Installing Ingress

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml # install the ingress controller
  • Create a backend pod and service:

    [root@master ingress]# kubectl apply -f deploy-demo.yaml
    [root@master ingress]# cat deploy-demo.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
      namespace: default
    spec:
      selector:
        app: myapp
        release: canary
      ports:
      - name: http
        targetPort: 80
        port: 80
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deploy
      namespace: default
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
          release: canary
      template:
        metadata:
          labels:
            app: myapp
            release: canary
        spec:
          containers:
          - name: myapp
            image: ikubernetes/myapp:v2
            ports:
            - name: http
              containerPort: 80
    
  • Create a service to expose the ports

    [root@master baremetal]# kubectl apply -f service-nodeport.yaml
    [root@master baremetal]# cat service-nodeport.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 30080
        - name: https
          port: 443
          targetPort: 443
          protocol: TCP
          nodePort: 30443
      selector:
        app.kubernetes.io/name: ingress-nginx
    
  • Create the Ingress manifest

    [root@master ingress]# kubectl apply -f ingress-myapp.yaml
    [root@master ingress]# cat ingress-myapp.yaml 
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-myapp
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: myapp.template.com
        http:
          paths:
          - path:
            backend:
              serviceName: myapp
              servicePort: 80
  • Check the information

    [root@master ingress]# kubectl get ingress
    NAME                 HOSTS                 ADDRESS   PORTS     AGE
    ingress-myapp        myapp.template.com              80        5h55
    [root@master ingress]# kubectl get svc
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    myapp        ClusterIP   10.98.30.144     <none>        80/TCP              4h7m
    [root@master ingress]# kubectl get pods
    NAME                             READY   STATUS    RESTARTS   AGE
    myapp-deploy-7b64976db9-lfnlv    1/1     Running   0          6h30m
    myapp-deploy-7b64976db9-nrfgs    1/1     Running   0          6h30m
    myapp-deploy-7b64976db9-pbqvh    1/1     Running   0          6h30m
    # access it
    [root@master ingress]# curl myapp.template.com:30080
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Ingress with SSL

[root@master ingress]# cat tomcat-deploy.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
    
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-alpine
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009
[root@master ingress]# kubectl apply -f  tomcat-deploy.yaml 

[root@master ingress]# openssl genrsa -out tls.key 2048
[root@master ingress]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=tomcat.template.com
[root@master ingress]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key
[root@master ingress]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-962mh     kubernetes.io/service-account-token   3      32h
tomcat-ingress-secret   kubernetes.io/tls                     2      66m

[root@master ingress]# cat ingress-tomcat-tls.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
      - tomcat.template.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.template.com
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080
[root@master ingress]# kubectl apply -f ingress-tomcat-tls.yaml

[root@master ingress]# curl -k https://tomcat.template.com:30443 # test access

Storage Volumes

  • For kubernetes, storage volumes belong not to the container but to the pod; the pod's containers share the volumes of the pause infrastructure container

  • emptyDir: an empty, temporary directory whose lifetime matches the Pod's

    Example configuration

    [root@master volumes]# cat pod-vol-demo.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
      namespace: default
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      - name: busybox
        image: busybox:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: html
          mountPath: /data/
        command: ["/bin/sh"]
        args: ["-c", "while true;do echo $(date) >> /data/index.html;sleep 2;done"]
      volumes:
      - name: html
        emptyDir: {}
  • hostPath: a directory on the host; node-level storage

  • Example configuration:

    [root@master volumes]# cat pod-vol-hostpath.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-vol-hostpath
      namespace: default
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        hostPath:
          path: /data/pod/volume1
          type: DirectoryOrCreate
    
  • Network storage:

    1. SAN: iSCSI

    2. NAS: nfs, cifs

      NFS example configuration:

      [root@master volumes]# cat pod-vol-nfs.yaml 
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-vol-nfs
        namespace: default
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
          volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        volumes:
        - name: html
          nfs:
            path: /data/volumes
            server: 192.168.175.4
    3. Distributed storage: glusterfs, rbd, cephfs…

    4. Cloud storage: EBS, Azure Disk

    5. pvc (creation flow: choose a storage system, create PVs, define a PVC, then define a pod that binds the PVC)

      yum -y install nfs-utils # install nfs
      [root@master volumes]# cat /etc/exports  # define the exported volumes
      /data/volumes/v1 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v2 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v3 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v4 192.168.175.0/24(rw,no_root_squash)
      /data/volumes/v5 192.168.175.0/24(rw,no_root_squash)
      [root@master volumes]# cat pv-demo.yaml 
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv001
        labels:
          name: pv001
      spec:
        nfs:
          path: /data/volumes/v1
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 1Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv002
        labels:
          name: pv002
      spec:
        nfs:
          path: /data/volumes/v2
          server: 192.168.175.4
        accessModes: ["ReadWriteOnce"]
        capacity:
          storage: 5Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv003
        labels:
          name: pv003
      spec:
        nfs:
          path: /data/volumes/v3
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 20Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv004
        labels:
          name: pv004
      spec:
        nfs:
          path: /data/volumes/v4
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 10Gi
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv005
        labels:
          name: pv005
      spec:
        nfs:
          path: /data/volumes/v5
          server: 192.168.175.4
        accessModes: ["ReadWriteMany", "ReadWriteOnce"]
        capacity:
          storage: 1Gi
      
      kubectl apply -f pv-demo.yaml
      NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      pv001   1Gi        RWO,RWX        Retain           Available                                   5m31s   # reclaim policy Retain: keep the volume
      pv002   5Gi        RWO            Retain           Available                                   5m31s
      pv003   20Gi       RWO,RWX        Retain           Available                                   5m31s
      pv004   10Gi       RWO,RWX        Retain           Available                                   5m31s
      pv005   1Gi        RWO,RWX        Retain           Available                                   5m31s
    6. PVC example:

      [root@master volumes]# cat pvc-demo.yaml 
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mypvc
        namespace: default
      spec:
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 6Gi
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-vol-pvc
        namespace: default
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
          volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        volumes:
        - name: html
          persistentVolumeClaim:
            claimName: mypvc
    7. [root@master volumes]# kubectl get pv
      NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
      pv001   1Gi        RWO,RWX        Retain           Available                                           33m
      pv002   5Gi        RWO            Retain           Available                                           33m
      pv003   20Gi       RWO,RWX        Retain           Available                                           33m
      pv004   10Gi       RWO,RWX        Retain           Bound       default/mypvc                           33m
      pv005   1Gi        RWO,RWX        Retain           Available                                           33m
      [root@master volumes]# kubectl get pvc
      NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      mypvc   Bound    pv004    10Gi       RWO,RWX                       13m   # it is now shown as bound
      

configMap

Ways of configuring containerized applications (a small sketch follows this list):

  1. Custom command-line parameters: args
  2. Bake the configuration file directly into the image
  3. Environment variables: (1) Cloud Native applications can usually load their configuration directly from environment variables (2) use an entrypoint script to pre-process the variables into settings in a configuration file
  4. Storage volumes: mount a host directory directly onto the directory containing the application's configuration files
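A minimal sketch of methods 1 and 3 above (the image, argument and variable names are purely illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: config-demo
      namespace: default
    spec:
      containers:
      - name: demo
        image: ikubernetes/myapp:v1
        args: ["--log-level=info"]          # method 1: configuration through command-line arguments
        env:                                # method 3: configuration through environment variables
        - name: LOG_LEVEL
          value: info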

Create a configmap from the command line:

kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=myapp.template.com
kubectl get cm

Create a pod that references the variables defined in the configmap

[root@master configmap]# cat pod-configmap.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: NGINX_SERVER_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    - name: NGINX_SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name

You can also edit the key/values in a configmap directly with edit, but the change is not propagated to the pod in real time

kubectl edit cm nginx-config

With the volume-mount approach, configuration changes are synchronized into the pod

[root@master configmap]# cat pod-configmap2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-config
[root@master configmap]# cat pod-configmap3.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-3
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-www

Secret

Creating one from the command line

kubectl create secret generic mysql-root-password --from-literal=password=myP@ss123 # create a secret from the command line (note: the password is only pseudo-encrypted)
[root@master configmap]# kubectl describe secret mysql-root-password # view the information
Name:         mysql-root-password
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  9 bytes
[root@master configmap]# kubectl get secret mysql-root-password -o yaml  # the password is merely base64-encoded
apiVersion: v1
data:
  password: bXlQQHNzMTIz
kind: Secret
metadata:
  creationTimestamp: 2018-10-25T06:31:40Z
  name: mysql-root-password
  namespace: default
  resourceVersion: "193886"
  selfLink: /api/v1/namespaces/default/secrets/mysql-root-password
  uid: a1beaf36-d81f-11e8-95d7-000c29e073ed
type: Opaque
[root@master configmap]# echo bXlQQHNzMTIz | base64 -d   # it can be decoded with base64 -d
myP@ss123

Defining the manifest:

[root@master configmap]# cat pod-secret.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-root-password
          key: password
# Note: once injected into the pod, the variable is still shown in plain text, so this is not secure
[root@master configmap]# kubectl apply -f pod-secret.yaml 
pod/pod-secret-1 created
[root@master configmap]# kubectl exec pod-secret-1 -- /bin/printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=pod-secret-1
MYSQL_ROOT_PASSWORD=myP@ss123

StatefulSet (controller for stateful applications)

StatefulSet is mainly used to manage applications that need the following:

  • Stable, unique network identifiers;
  • Stable, persistent storage;
  • Ordered, graceful deployment and scaling;

  • Ordered, graceful termination and deletion
  • Ordered rolling updates;

Three components: a headless service, the StatefulSet, and a volumeClaimTemplate

[root@master statefulset]# showmount -e
Export list for master:
/data/volumes/v5 192.168.175.0/24
/data/volumes/v4 192.168.175.0/24
/data/volumes/v3 192.168.175.0/24
/data/volumes/v2 192.168.175.0/24
/data/volumes/v1 192.168.175.0/24

[root@master statefulset]# cat stateful-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
[root@master statefulset]# kubectl get pods  # the pods now have ordered names
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          2m21s
myapp-1   1/1     Running   0          2m18s
myapp-2   1/1     Running   0          2m15s

[root@master statefulset]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   3         3         6m14s
[root@master statefulset]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           6m35s 
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       6m32s
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       6m29s
# as long as the PVC is not deleted, pods recreated by the same statefulset keep binding to their corresponding volume, so the data is not lost

pod_name.service_name.namespace.svc.cluster.local   # the stable DNS name pattern for each StatefulSet pod
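For example, the pods created by the StatefulSet above can be resolved by that name from inside the cluster (using a throwaway busybox pod here):

kubectl run -it dns-test --image=busybox:latest --rm --restart=Never -- nslookup myapp-0.myapp.default.svc.cluster.local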

StatefulSets also support dynamic scaling

kubectl scale sts myapp --replicas=5

Kubernetes Authentication and Authorization

  • token authentication
  • ssl (certificate) authentication
  • Authorization: Node, RBAC, Webhook, ABAC
  • Admission control

The k8s API server is grouped; when issuing a request you do not have to state separately which group and version of the API you are addressing, because every request is identified by its URL path

Request path:

  /apis/apps/v1/namespaces/default/deployments/myapp-deploy/

HTTP request verbs:

  get, post, put, delete

API request verbs:

  get, list, create, update, patch, watch, proxy, redirect, delete, deletecollection

Resource:

Subresource

Namespace

API group
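For example, the request path above can be inspected directly through kubectl proxy (the port number is arbitrary):

kubectl proxy --port=8080 &
curl http://localhost:8080/apis/apps/v1/namespaces/default/deployments/myapp-deploy/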

RBAC : Role Based Access Control

Defining a Role

[root@master manifests]# cat role-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@master manifests]# kubectl create rolebinding template-read-pods --role=pod-reader --user=template --dry-run -o yaml > rolebinding-demo.yaml
[root@master manifests]# cat rolebinding-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding # binds a user to a role
metadata:
  name: template-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role  # the role being bound; the permissions are defined there
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template # the user account being bound
# generate a new context
openssl genrsa -out template.key 2048
openssl req -new -key template.key -out template.csr  -subj "/CN=template"
openssl x509 -req -in template.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out template.crt -days 365
openssl x509 -in template.crt -text
kubectl config set-credentials template --client-certificate=./template.crt --client-key=./template.key --embed-certs=true
kubectl config set-context template@kubernetes --cluster=kubernetes --user=template
kubectl config use-context template@kubernetes # switch to the new context
[root@master ~] kubectl create role template --verb=list,get,watch --resource=pods  # create the role

Role-based access control: a user is made to play a role; the role holds the permissions, and so the user acquires those permissions

A role is defined by:

  • operations
  • objects

rolebinding: binds a user to a role

  • user account or service account
  • role

clusterrole, clusterrolebinding

ClusterRoleBinding:

clusterRole: can access information in every namespace. If it is bound with a rolebinding, however, the user can still only access the namespace that the rolebinding is defined in. The benefit of binding a clusterrole with rolebindings is that you do not have to define a role in every namespace: you only define a rolebinding in each namespace and bind it to the clusterrole. Because a rolebinding is used, the user can still only access the namespace the binding lives in, not every namespace in the cluster.

[root@master ~]# kubectl create clusterrolebinding template-read-all-pods --clusterrole=cluster-reader --user=template --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: template-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template
# to create a clusterrolebinding you must first create the clusterrole; the role's permissions then apply to the whole cluster

[root@master ~]# cat rolebinding-clusterrole-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: template-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects: 
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template
# creating a rolebinding that binds a clusterrole downgrades the clusterrole to permissions that apply only within the rolebinding's namespace

A Service account exists to make it easy for processes inside a Pod to call the Kubernetes API or other external services; it differs from a User account:

  • A User account is designed for humans, whereas a service account is for processes running inside Pods;
  • With ServiceAccount enabled (the default), each namespace automatically gets a Service account, and the corresponding secret is mounted into every Pod
    • The default ServiceAccount is named default and is automatically associated with a Secret for accessing the kubernetes API
    • Every Pod, once created, automatically has spec.serviceAccount set to default (unless another ServiceAccount is specified); see the sketch below
    • After each container starts, the corresponding token and ca.crt are mounted at /var/run/secrets/kubernetes.io/serviceaccount/
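    A minimal sketch of creating a dedicated ServiceAccount and attaching it to a pod (the account and pod names are illustrative):

    kubectl create serviceaccount myapp-sa
    [root@master ~]# cat pod-sa-demo.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-sa-demo
      namespace: default
    spec:
      serviceAccountName: myapp-sa    # overrides the default ServiceAccount
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1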

Dashboard Authentication and Tiered Authorization

  • Installation and access:

    Authentication methods:

    1. token:

      (1) Create a ServiceAccount and, according to what it is meant to manage, bind it to a suitable role or clusterrole with a rolebinding or clusterrolebinding;

      (2) Get this ServiceAccount's secret and look at the secret's details; the token is in there

    2. Wrap the ServiceAccount's token into a kubeconfig file

      (1) Create a ServiceAccount and, according to what it is meant to manage, bind it to a suitable role or clusterrole with a rolebinding or clusterrolebinding

      (2) Use DEF_NS_ADMIN_TOKEN=$(kubectl get secret SERVICEACCOUNT_SECRET_NAME -o jsonpath={.data.token} | base64 -d )

      (3) Generate the kubeconfig file

      • kubectl config set-cluster --kubeconfig=/PATH/TO/SOMEFILE
      • kubectl config set-credentials NAME --token=$KUBE_TOKEN
      • kubectl config set-context
      • kubectl config use-context
    kubectl create sa dashboard-admin -n kube-system
    kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    [root@master ~]# kubectl get secrets -n kube-system
    NAME                                             TYPE                                  DATA   AGE
    attachdetach-controller-token-vhr7x              kubernetes.io/service-account-token   3      4d15h
    bootstrap-signer-token-fl7bb                     kubernetes.io/service-account-token   3      4d15h
    certificate-controller-token-p4szd               kubernetes.io/service-account-token   3      4d15h
    clusterrole-aggregation-controller-token-hz2pt   kubernetes.io/service-account-token   3      4d15h
    coredns-token-g9gp6                              kubernetes.io/service-account-token   3      4d15h
    cronjob-controller-token-brhtp                   kubernetes.io/service-account-token   3      4d15h
    daemon-set-controller-token-4mmwg                kubernetes.io/service-account-token   3      4d15h
    dashboard-admin-token-kzwk9                      kubernetes.io/service-account-token   3      9
    [root@master ~]# kubectl describe secrets dashboard-admin-token-kzwk9  -n kube-system  # copy the token below for later use
    Name:         dashboard-admin-token-kzwk9  
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: dbe9eb4a-d94a-11e8-a89c-000c29e073ed
    
    Type:  kubernetes.io/service-account-token 
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4ta3p3azkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGJlOWViNGEtZDk0YS0xMWU4LWE4OWMtMDAwYzI5ZTA3M2VkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DZ94phOCIWAxxs4l55irm1G_PhkRRilhJUMKMDheqCKOepT0NpZ07vp61q4YMmx0X0iT43R7LvhQSZ5p4fGn7ttjxGrDhox5tFvYpy6rCtdxEsYYeqWP_tHMqUMrF71TgbRBdj-LZWyec0YlshjgxhYJ4FV_hKZRAzidhlBg93fnWzDe31cSdg8H4j_5tRJU-JKajjbHXPVxGWPlN6WPPzd5iK2aDXt79k4PSgiC4czyCOTuRYj9INVGo8ZEUEkTUN3dUnXJKMMF-HUXIR67rHDapvcwjgMfVac6TpUO6HBR5ZPce3YKmstleaa2FbaMmNN-qJ0qKZoaOF245vTeqQ
    
    [root@master ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    # if you cannot reach google's registry, first download the dashboard docker image elsewhere and re-tag it
    
    [root@master ~]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system # patch it so it can be reached from outside the node; the account used for authentication must be a ServiceAccount, which the dashboard pod hands to kubernetes for authentication
    [root@master ~]# kubectl get svc -n kube-system       # visit https://192.168.175.4:32767 and enter the token copied earlier
    NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
    kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   4d15h
    kubernetes-dashboard   NodePort    10.109.188.195   <none>        443:32767/TCP   10m
    
    
    --------------------------------------------------------------------------------
    The second method: logging in to the dashboard with a kubeconfig file

[root@master pki]# kubectl get secret
NAME                      TYPE                                  DATA   AGE
admin-token-rkxvq         kubernetes.io/service-account-token   3      35h
default-token-d75z4       kubernetes.io/service-account-token   3      29h
df-ns-admin-token-4rbwg   kubernetes.io/service-account-token   3      34m
[root@master pki]# kubectl get secret df-ns-admin-token-4rbwg -o json
{
​ "apiVersion": "v1",
​ "data": {
​ "ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1UQXlNREV5TWpjME5sb1hEVEk0TVRBeE56RXlNamMwTmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTU9tCkVRL3l3TDdZRHVCTE9SZFZWdHl1NSs4dklIWGJEdWNmZ3N0Vy9Gck82emVLdUNXVzRQdjlPbjJwamxXRkxJdXYKZnhEMU15N3ppVzZjTW0xQkFRUjJpUEwrRE4rK0hYZ0V4ZkhDYTdJbkpHcFYzMU9lU3YzazMwMzljZVFQSUU4SQowQVljY2ZVU0w5SjMvdWpLZElPTTJGZDA2cWNUQmJhRyt0KzBGWGxrZ2NzNVRDa21lOE1xWTNVdjZJUkx6WmgzCmFEejBHVFg0VnpxWStFVXY3UHgzZ2JJeE0wR3ZqTnUvYUJvdWZrZ2RnSDRzL3hYNHVGckJsVytmUDRzRlBYYzIKbXJYd2E2NEY0ZHdLVDc5czY4NTBJMXZ3NS9URDFPRzdpcnNjUHdnMHZwUnlyKzlpTStjKzBWS3BiK1RCTnlzQQpjYkZJbWkzdnBpajliU2ZGVENzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKbjF0ZS95eW1RU3djbHh3OFFRSXhBVSs1L1EKYUN0bmMxOXRFT21jTWorQWJ4THdTS1YwNG1ubHRLRVFXSVBkRWF2RG9LeUQ0NFkzYUg5V2dXNXpra2tiQTJQSApUeDkzVWFtWXNVVkFUenFhOVZzd015dkhDM3RoUlFTRHpnYmxwK2grd1lOdTAyYUpreHJSR3ZCRjg1K282c2FoCktwUms2VHlzQWNVRUh1VHlpSVk5T3d4anBPUzVzVkJKV0NBQ1R5ZXYxRzY4SWkzd2xtY0M4UitaakpLSzh4VncKUmorYjNyeTZiL1A5WUdKYkt4Rm4wOU94eDVCNFhFVWduMjcwYjRSclNXeldOdEVFMkRoZkk1ajNnNGRkUHk3OApuQUNidHpBVUtkSzdXQVdOQXkyQzBFNDZOK3VIa3pObnYwdys1NE1HQy94N2R6TGFBampvTS8yZVRlaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
​ "namespace": "ZGVmYXVsdA==",
​ "token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltUm1MVzV6TFdGa2JXbHVMWFJ2YTJWdUxUUnlZbmRuSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1SbUxXNXpMV0ZrYldsdUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVptSmlOVGxtWVdFdFpEazVZaTB4TVdVNExUazBZemd0TURBd1l6STVaVEEzTTJWa0lpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZaR1l0Ym5NdFlXUnRhVzRpZlEubXpIN2ZMUlV6Y1JzUmJuUy1FbGxROGd5OEFMZWMxdjg3THF1SFpnNnVKTllOWm5wc1RnWm5LWXhMWnNvUUhTc3RKRGFCbDJHZnNUdWRaOHh3MERtNXFYSS1fMmRKSzhHY01TUXhJVnZtRkVVNTdjS2pMV3hpWkFSdTVzNDdkZFhfeTFyU1EyS2lWVEI2X1ZLaVgtT012Zjc5RUNiR0NVR05FOGdGV2NDZzZGeWJ3NGlFaGx6a3J4aUJGOGY0OExIdTdHVUNXbEZTZS1QMzRka2lxajFDQmd0LXlBNFJkZm9UTl9CaExJamtaaEVTLVlMZWR1NVEwR0lrcmFzVUhhWjQ2S0toa2thWjZ1QnQwSm5QNGRRd0dVVklVdHhJd1JudkJONmp2NmpKY3piUXV1Y3dYSXBjVDhQQk10QVVUa21yWGRhcE9JR0ZoWU96c00xNHA3WDRB"
​ },
​ "kind": "Secret",
​ "metadata": {
​ "annotations": {
​ "kubernetes.io/service-account.name": "df-ns-admin",
​ "kubernetes.io/service-account.uid": "fbb59faa-d99b-11e8-94c8-000c29e073ed"
​ },
​ "creationTimestamp": "2018-10-27T03:54:20Z",
​ "name": "df-ns-admin-token-4rbwg",
​ "namespace": "default",
​ "resourceVersion": "303749",
​ "selfLink": "/api/v1/namespaces/default/secrets/df-ns-admin-token-4rbwg",
​ "uid": "fbc27f91-d99b-11e8-94c8-000c29e073ed"
​ },
​ "type": "kubernetes.io/service-account-token"
}

DEF_NS_ADMIN_TOKEN=echo ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltUm1MVzV6TFdGa2JXbHVMWFJ2YTJWdUxUUnlZbmRuSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1SbUxXNXpMV0ZrYldsdUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVptSmlOVGxtWVdFdFpEazVZaTB4TVdVNExUazBZemd0TURBd1l6STVaVEEzTTJWa0lpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZaR1l0Ym5NdFlXUnRhVzRpZlEubXpIN2ZMUlV6Y1JzUmJuUy1FbGxROGd5OEFMZWMxdjg3THF1SFpnNnVKTllOWm5wc1RnWm5LWXhMWnNvUUhTc3RKRGFCbDJHZnNUdWRaOHh3MERtNXFYSS1fMmRKSzhHY01TUXhJVnZtRkVVNTdjS2pMV3hpWkFSdTVzNDdkZFhfeTFyU1EyS2lWVEI2X1ZLaVgtT012Zjc5RUNiR0NVR05FOGdGV2NDZzZGeWJ3NGlFaGx6a3J4aUJGOGY0OExIdTdHVUNXbEZTZS1QMzRka2lxajFDQmd0LXlBNFJkZm9UTl9CaExJamtaaEVTLVlMZWR1NVEwR0lrcmFzVUhhWjQ2S0toa2thWjZ1QnQwSm5QNGRRd0dVVklVdHhJd1JudkJONmp2NmpKY3piUXV1Y3dYSXBjVDhQQk10QVVUa21yWGRhcE9JR0ZoWU96c00xNHA3WDRB | base64 -d #将token解码保存至变量中

[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.175.4:6443" --embed-certs=true --kubeconfig=/root/def-ns-admin.conf
Cluster "kubernetes" set.
kubectl config set-credentials def-ns-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/def-ns-admin.conf
kubectl config set-context def-ns-admin@kubernetes --cluster=kubernetes --user=def-ns-admin --kubeconfig=/root/def-ns-admin.conf
kubectl config use-context def-ns-admin@kubernetes --kubeconfig=/root/def-ns-admin.conf
sz /root/def-ns-admin.conf # transfer the conf file to the local machine; this conf file is what you use to log in

Kubernetes Network Communication and Configuration:

  • Container-to-container communication: containers inside the same Pod communicate over lo
  • Pod-to-Pod communication: POD IP <--> POD IP
  • Pod-to-Service communication: Pod IP <--> ClusterIP
  • Service to clients outside the cluster: Ingress, NodePort

CNI: Container Network Interface
Solutions:

  • Virtual bridge
  • Multiplexing: MacVlan
  • Hardware switching: SR-IOV

flannel configuration parameters (see the excerpt after this list):

  • Network: the CIDR-format network address flannel uses to provide Pod networking
  • SubnetLen: the prefix length used when splitting Network into per-node subnets; defaults to 24
  • SubnetMin: 10.244.10.0/24
  • SubnetMax: 10.244.100.0/24
  • Backend: vxlan, host-gw, udp
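For reference, these parameters live in flannel's net-conf.json; a typical excerpt of the kube-flannel ConfigMap looks roughly like this (values match the 10.244.0.0/16 pod CIDR used earlier):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "SubnetLen": 24,
      "Backend": {
        "Type": "vxlan"
      }
    }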

Using Calico for Access Control

Installing Calico
Installation docs: https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/flannel

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Example access policy configurations:

[root@master networkpolicy]# cat ingress-def.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:     # the policy type is Ingress; if no ingress rules are defined, inbound traffic is denied by default while all outbound traffic is allowed
  - Ingress
[root@master networkpolicy]# cat ingress-def.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:                     # if ingress is set but contains only an empty rule, all inbound traffic is allowed
  - {}
  policyTypes:     
  - Ingress
[root@master networkpolicy]# cat allow-netpol-demo.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from: 
    - ipBlock:
        cidr: 10.244.0.0/16    # allow hosts in 10.244.0.0/16 to reach port 80 of pods labeled app: myapp, except 10.244.1.2
        except:
        - 10.244.1.2/32
    ports:
    - protocol: TCP
      port: 80
[root@master networkpolicy]# cat egress-def.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

Network policy: deny all inbound and outbound traffic for the namespace, then allow all outbound traffic whose destination is a Pod within the same namespace (see the sketch below);
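A rough sketch of such a combined policy (the namespace and policy name are illustrative; note that DNS egress to kube-system would also need an explicit rule):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-allow-egress-within-ns
  namespace: default
spec:
  podSelector: {}
  policyTypes:          # both directions are restricted
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}   # egress is only allowed to pods in this same namespace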

Helm

  • helm mainly defines all the manifest files an application needs when it is deployed
  • chart: a helm package (it does not contain images; the images live in a docker registry)
  • Repository: a charts repository; an https/http server
  • Release: an instance of a particular chart deployed onto the target cluster
  • Chart --> Config --> Release

  • Architecture:
    1. helm: the client; manages local charts, interacts with the Tiller server, sends charts, and performs release operations such as install, query and uninstall

    2. Tiller: the server side; receives the Charts and Config sent by helm and merges them to generate a release

  • Install helm: https://docs.helm.sh/using_helm/#installing-helm
    1. Example RBAC configuration file:
[root@master helm]# cat tiller-rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
[root@master helm]# helm init --service-account tiller
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
helm repo update
  • The official list of available helm charts (https://hub.kubeapps.com/)

Common helm commands:

Release management:

  • helm install
  • helm delete: delete a release
  • helm upgrade
  • helm list
  • helm rollback

Chart management:

  • helm fetch: download a chart from a repository to the local machine
  • helm create
  • helm get
  • helm inspect: view detailed information about a chart
  • helm package: package chart files (a short usage sketch follows)
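A short usage sketch with helm v2 as installed above (stable/redis is just an example chart from the default stable repository):

    helm repo update
    helm search redis                            # find charts in the configured repos
    helm install --name my-redis stable/redis    # deploy a release named my-redis
    helm list                                    # list releases
    helm delete my-redis                         # remove the release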

Reposted from: https://www.cnblogs.com/Template/p/9866966.html
