Kubernetes Study Notes (Draft)

Pod

A Pod can contain multiple containers, plus a pause container that provides the shared network and shared storage.

Containers in the same Pod reach each other over localhost.
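
A minimal sketch (the names and images are illustrative): both containers share the Pod's network namespace, so the sidecar reaches nginx via localhost.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx            # listens on port 80
  - name: sidecar
    image: busybox
    # same network namespace, so nginx is reachable on localhost
    command: ['sh', '-c', 'while true; do wget -qO- http://localhost:80; sleep 5; done']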

ReplicaSet 和 ReplicationController

A ReplicaSet supports both equality-based and set-based label selectors (sketch below).

A ReplicationController supports only equality-based selectors and is deprecated.
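
A sketch of a ReplicaSet selector combining both styles (the labels are illustrative):

selector:
  matchLabels:              # equality-based
    tier: frontend
  matchExpressions:         # set-based
  - key: env
    operator: In
    values: [prod, staging]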

Deployment

  • The structural model for deploying a service
  • Rolling updates (diagram note below)
(Diagram: rolling update from v1 to v2. The Deployment creates a new ReplicaSet for v2; the old ReplicaSet manages three v1 Pods while the new ReplicaSet brings up three v2 Pods.)

StatefulSet

Solves the problems of running stateful services, such as MySQL, in containers.

Stateful service: one with live data that must be persisted.

A StatefulSet guarantees that a recreated Pod keeps the same hostname, so the Pod can re-attach to its data through that hostname. (Diagram: a StatefulSet managing Pods, each backed by its own PVC filesystem.)

Service virtual IP (VIP)

  • The Service and the Nodes communicate directly; this is LAN traffic
  • Once a request is handed to the Service, iptables/ipvs rules distribute the packets (diagram below)
(Diagram: a user reaches a physical Node (1. access the machine; 2. hand the request to the Service). The Service VIP 10.12.22.16:80 is a process/resource object with selector: app=x and endpoints: [10.244.1.1, 10.244.1.2, ..., 10.244.1.4]. Kube-Proxy on each Node watches all Pods and keeps the mapping up to date. Pods 10.244.1.1 through 10.244.1.4 run nginx for the order service; 10.244.1.5 and 10.244.1.6 run nginx for the payment service.)

kubectl explain pod shows all the fields of a Pod

kubectl explain pod.spec.containers

kubectl edit opens an object's live YAML for editing

kubectl logs xxx-pod -c container-name (use -c to pick a specific container)

kubectl label pod xxx-pod key=value --overwrite

kubectl get pod -o wide

Pod

Init container test:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2;done;']
  - name: init-mydb
    image: busybox
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
Readiness probe

The Pod is marked Ready only once the probe succeeds.

apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: nginx
    imagePullPolicy: IfNotPresent
    readinessProbe:
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1
      periodSeconds: 3
Liveness probe

The container is restarted as soon as the probe fails.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
spec:
  containers:
  - name: liveness-exec-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/live; sleep 30; rm -rf /tmp/live; sleep 3600"]
    livenessProbe:
      exec:
        # test -e checks whether the file exists
        command: ["test", "-e", "/tmp/live"]
      # start probing after a 1-second delay
      initialDelaySeconds: 1
      # probe every 3 seconds
      periodSeconds: 3
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
spec:
  containers:
  - name: liveness-httpget-container
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index1.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10

Continuously watch Pod status

kubectl get pod -w
Startup and shutdown hooks (postStart / preStop)
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]
DaemonSet

Ensures that certain (or all) Nodes each run one copy of a Pod, like a daemon process on those Nodes.

Use cases: running a cluster storage daemon; running a log collector on every node; running a monitoring agent on every node.

Job

A task that runs to completion once. If the task exits with a non-zero status it is retried.

CronJob

Creates Jobs at specific points in time.

The schedule uses the same format as crontab.
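
For reference, the five schedule fields:

# *  *  *  *  *
# |  |  |  |  └ day of week (0-6, Sunday = 0)
# |  |  |  └ month (1-12)
# |  |  └ day of month (1-31)
# |  └ hour (0-23)
# └ minute (0-59)
schedule: "*/1 * * * *"   # every minute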

StatefulSet

Gives each Pod a unique, stable identity. Use cases:

  • Stable persistent storage via PVC
  • Stable network identity: a Pod's PodName and hostname stay the same after rescheduling
  • Ordered, sequential deployment and scaling (implemented with init containers); teardown runs in the reverse order of startup
Horizontal Pod Autoscaling

Dynamically adjusts the number of Pods based on resource utilization, as sketched below.
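
A minimal HPA sketch using the autoscaling/v1 API (the target is the nginx-deployment defined later; the values are illustrative):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds 80%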

ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: nginx
        env:
        - name: GET_HOST_FROM
          value: dns
        ports:
        - containerPort: 80

Deployment

Simply change kind in the ReplicaSet above to Deployment.

Deploy a simple Nginx application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
# create
kubectl create -f nginx-deploy.yaml --record
# scale out
kubectl scale deploy nginx-deployment --replicas 10
# if the cluster supports horizontal pod autoscaling, the Deployment can also autoscale
kubectl autoscale deploy nginx-deployment --min=10 --max=15 --cpu-percent=80
# update the image
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
# roll back
kubectl rollout undo deployment/nginx-deployment
# watch rollout status
kubectl rollout status deploy nginx-deployment
# show rollout history
kubectl rollout history deployment/nginx-deployment
# roll back to a specific revision (for repeated rollbacks you must give the revision
# number; plain undo just toggles back and forth, undo-redo-undo-redo)
kubectl rollout undo deployment/nginx-deployment --to-revision=1
# pause the rollout
kubectl rollout pause deployment/nginx-deployment

During an upgrade, a Deployment guarantees that only a small fraction of Pods is unavailable at a time: it used to be at most 1 Pod down, and the default is now 25%.

A Deployment does not delete the previous ReplicaSet when it upgrades; .spec.revisionHistoryLimit controls how many old ReplicaSets to keep (sketch below).
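
Both knobs live on the Deployment spec; a sketch (the values are illustrative):

spec:
  revisionHistoryLimit: 5     # keep 5 old ReplicaSets for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%     # at most 25% of Pods may be down during the rollout
      maxSurge: 25%           # at most 25% extra Pods may be created during the rollout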

DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: nginx

By default the master node is excluded from scheduling, so the DaemonSet will not create a Pod there.

Job

Compute the value of π with a Job:

apiVersion: batch/v1
kind: Job
metadata: 
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
CronJob

restartPolicy only supports Never or OnFailure.

.spec.completions: the number of Pods that must finish successfully for the Job to complete; defaults to 1.

.spec.parallelism: the number of Pods allowed to run in parallel. For example:
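
A sketch of the two fields on a Job spec (the values are illustrative):

spec:
  completions: 5    # the Job is done after 5 Pods finish successfully
  parallelism: 2    # run at most 2 Pods at a time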

Each scheduled run fires only once, at the given point in time.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from Kubernetes cluster
          restartPolicy: OnFailure

Service

The default type (ClusterIP):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stable
  template:
    metadata:
      labels:
        app: myapp
        release: stable
        env: test
    spec:
      containers:
      - name: myapp
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: stable
  ports:
  - name: http
    port: 80
    targetPort: 80
Headless Service

Sometimes you need neither load balancing nor a dedicated Service IP. In that case, create a headless Service by setting the cluster IP (spec.clusterIP) to "None".

apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  selector:
    app: myapp
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80

Even though there is no cluster IP, myapp-headless.default.svc.cluster.local still resolves to the Pod IP addresses.
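
One way to verify from inside the cluster (the throwaway Pod name dns-test is illustrative):

kubectl run -it --rm --restart=Never dns-test --image=busybox -- nslookup myapp-headless.default.svc.cluster.local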

NodePort

Exposes the service to external users; it becomes reachable through every node's IP, as shown after the manifest.

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stable
  ports:
  - name: http
    port: 80
    targetPort: 80
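
Kubernetes allocates the node port from the 30000-32767 range unless nodePort is set explicitly. A quick check (the port 31234 is illustrative):

kubectl get svc myapp              # the PORT(S) column shows e.g. 80:31234/TCP
curl http://<any-node-ip>:31234    # the service answers on every node's IP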

Tracing how the node port is forwarded:

iptables -t nat -nvL
    KUBE-NODEPORTS
ipvsadm -Ln
LoadBalancer

LoadBalancer and NodePort are essentially the same mechanism; the difference is that LoadBalancer goes one step further and calls the cloud provider to create an LB that steers traffic to the nodes, as sketched below.

A cloud load balancer costs money.
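
A sketch of the manifest; with a cloud provider configured, the provider allocates the external IP (the Service name myapp-lb is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80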

ExternalName

Provides redirection: the Service returns a CNAME record whose value is the contents of the externalName field.

kind: Service
apiVersion: v1
metadata:
  name: my-service-1
spec:
  type: ExternalName
  externalName: www.baidu.com

Looking up the host my-service-1.default.svc.cluster.local (service.namespace.svc.cluster.local) returns a CNAME record pointing to www.baidu.com.

Service Ingress

ingress-nginx on GitHub: https://github.com/kubernetes/ingress-nginx

ingress-nginx official site: https://kubernetes.github.io/ingress-nginx

(Diagram: a client resolves the domains service1.com and service2.com and reaches an Nginx reverse proxy exposed via NodePort; for each domain the proxy forwards to that backend service's load balancer, which spreads the traffic across its containers.)

The nginx configuration for these routes is generated automatically by the Ingress controller; a hand-written Ingress resource sketch follows.
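
A sketch of the Ingress resource behind the diagram above (networking.k8s.io/v1 on current clusters, networking.k8s.io/v1beta1 on older ones; the backend Service names svc1 and svc2 are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: service1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc1
            port:
              number: 80
  - host: service2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc2
            port:
              number: 80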

ConfigMap

Many applications read configuration from config files, command-line arguments, or environment variables. ConfigMap provides a mechanism for injecting such configuration into containers; a CM can hold individual properties as well as entire config files or JSON blobs.

1. Create from a directory
$ ls docs/user-guide/configmap/kubectl/
game.properties
ui.properties

$ cat docs/user-guide/configmap/kubectl/game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30

$ cat docs/user-guide/configmap/kubectl/ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.tolook=fairlyNice

$ kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl

With --from-file pointing at a directory, every file in it becomes a key/value pair in the ConfigMap: the key is the file name and the value is the file's contents.

configmap is abbreviated cm.

2. Create from a file
kubectl create cm game-config-2 --from-file=game.properties
3. Create from literal values
kubectl create cm special-config --from-literal=special.how=very --from-literal=special.type=charm
4. Create from a manifest
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  log_level: INFO
Consuming a ConfigMap
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: centos
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    envFrom:
    - configMapRef:
        name: env-config
Using a ConfigMap in a volume

The configuration now appears as files.

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod11
spec:
  volumes:
  - name: config-volume
    configMap:
      name: special-config
  containers:
  - name: test-container
    image: centos
    command: ["/bin/sh", "-c", "cat /etc/config/special.how"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  restartPolicy: Never
Hot-reloading a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
data:
  log_level: INFO
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-nginx
  template:
    metadata:
      labels:
        name: my-nginx
    spec:
      volumes:
      - name: config-volume
        configMap:
          name: log-config
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config

Update the CM:

kubectl edit cm log-config
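
Keys mounted through a volume are refreshed by the kubelet after a short sync delay; environment variables injected from a ConfigMap are not hot-reloaded. One way to confirm (the label selector comes from the Deployment above):

kubectl exec $(kubectl get pod -l name=my-nginx -o name | head -1) -- cat /etc/config/log_level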

Secret

Stores sensitive data such as passwords, tokens, and keys. There are three types:

  • Service Account: used to access the Kubernetes API; created by k8s and mounted automatically at /run/secrets/kubernetes.io/serviceaccount
  • Opaque: base64-encoded Secret for storing passwords, keys, etc.
  • kubernetes.io/dockerconfigjson: stores credentials for a private docker registry
Opaque
$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "password" | base64
cGFzc3dvcmQ=
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: cGFzc3dvcmQ=
  username: YWRtaW4=

Mounting a Secret as files:

apiVersion: v1
kind: Pod
metadata:
  name: secret-test
  labels:
    app: secret-test
spec:
  volumes:
    - name: secrets
      secret:
        secretName: mysecret
  containers:
    - name: secret-test
      image: centos
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/etc/secrets"
          name: secrets
          readOnly: true
      tty: true
      stdin: true
  restartPolicy: Always

Injecting a Secret into environment variables:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-deployment
  labels:
    app: centos-deployment
spec:
  replicas: 2
  template:
    metadata:
      name: centos-deployment
      labels:
        app: centos-deployment
    spec:
      containers:
        - name: pod1
          image: centos
          imagePullPolicy: IfNotPresent
          tty: true
          stdin: true
          env:
            - name: TEST_USER
              valueFrom:
                secretKeyRef:
                  key: username
                  name: mysecret
            - name: TEST_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: mysecret
      restartPolicy: Always
  selector:
    matchLabels:
      app: centos-deployment
echo $TEST_USER  # run inside one of the Pods to verify
Docker Registry Secret
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: foo
      image: docker.ykh.me/test/myapp:v1
      imagePullPolicy: IfNotPresent
  imagePullSecrets:
    - name: myregistrykey

Volume

A volume's lifetime matches the Pod's, which is longer than any single container's.

(Diagram: a Pod in which container 1, container 2, and the pause container all share one Volume.)

Kubernetes supports almost every volume type.

emptyDir

The volume starts out empty and the Pod can write to it. Main uses:

  • Scratch space
  • Checkpoints for recovering from crashes
  • Sharing files between containers in the same Pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
    - name: cache-volume
      emptyDir: {}
  containers:
    - name: test-pod
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
Mounting a host directory (hostPath):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
    - name: test-volume
      hostPath:
        path: /root
  containers:
    - name: test-container
      image: nginx
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume

PVC

A PersistentVolumeClaim (PVC) binds to a PersistentVolume (PV); a minimal claim is sketched below.
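
A standalone claim, as a sketch (the name my-claim is illustrative; storageClassName matches the PV defined further down):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi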

NFS demo

Install the NFS server:

yum install -y nfs-common nfs-utils rpcbind
mkdir /nfsdata
chmod 666 /nfsdata
chown nfsnobody /nfsdata
vi /etc/exports
# add: /nfsdata *(rw,no_root_squash,no_all_squash,sync)
systemctl restart rpcbind
systemctl restart nfs

Install the NFS client:

yum install -y nfs-common nfs-utils rpcbind
showmount -e <server-ip>  # list the server's exports
mount -t nfs <server-ip>:/nfsdata /test  # mount it

Deploy a PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
    - ReadWriteOnce
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # could also be something like slow
  nfs:
    path: /nfsdata
    server: master  # 'master' resolves to the NFS server's IP address

If persistentVolumeReclaimPolicy is set to Retain, the PV moves to the Released state after the Pod terminates; to make it reusable, run kubectl edit pv PV_NAME and delete the claimRef: block.

Create a StatefulSet that uses the NFS storage (a StatefulSet requires its headless Service to be created first):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: www
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: nfs
      resources:
        requests:
          storage: 1Gi

A StatefulSet uses its headless Service to control the Pods' DNS names. The FQDN is $(podname).$(servicename).$(namespace).svc.cluster.local, where cluster.local is the cluster domain.

$ ping web-0.nginx
PING web-0.nginx.default.svc.cluster.local (10.32.0.4) 56(84) bytes of data.

StatefulSet Pods are created in order; if one in the middle fails to start, the later ones are never created.

StatefulSet is abbreviated sts.

Scheduling

Scheduling happens in two phases.

  1. Predicate
    • PodFitsResources: does the node have more free resources than the Pod requests
    • PodFitsHost: if the Pod specifies a NodeName, does the node's name match it
    • PodFitsHostPorts: do the ports already in use on the node conflict with the ports the Pod requests
    • PodSelectorMatches: filter out nodes whose labels do not match the Pod's selector
    • NoDiskConflict: volumes already mounted must not conflict with the volumes the Pod requests, unless both are read-only
  2. Priority
    • LeastRequestedPriority: weight computed from CPU and memory utilization; the lower the utilization, the higher the weight
    • BalancedResourceAllocation: the closer the node's CPU and memory utilization are to each other, the higher the weight; use together with the previous one
    • ImageLocalityPriority: prefer nodes that already have the required images

These are only a subset; the official docs list many more.

NodeAffinity

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#nodeaffinity-v1-core

pod.spec.affinity.nodeAffinity

Hard requirement: requiredDuringSchedulingIgnoredDuringExecution:

apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: affinity
spec:
  containers:
  - name: affinity
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                  - client1

Soft preference: preferredDuringSchedulingIgnoredDuringExecution

apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: affinity
spec:
  containers:
  - name: affinity
    image: nginx
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                  - client1

Compared with the hard policy, this adds a weight field, and the match moves under preference.

PodAffinity

pod.spec.affinity.podAffinity/podAntiAffinity

apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: affinity
spec:
  containers:
  - name: affinity-con
    image: nginx
  affinity:
    # hard requirement: co-locate with a Pod labeled app=pod-1; if none exists, scheduling fails
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - pod-1
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - pod-2
          weight: 1
  • preferredDuringSchedulingIgnoredDuringExecution: soft preference

  • requiredDuringSchedulingIgnoredDuringExecution: hard requirement

Policy            Matches   Operators                                 Topology support   Scheduling target
nodeAffinity      node      In, NotIn, Exists, DoesNotExist, Gt, Lt   no                 a specified node
podAffinity       Pod       In, NotIn, Exists, DoesNotExist           yes                same topology domain as the matched Pod
podAntiAffinity   Pod       In, NotIn, Exists, DoesNotExist           yes                a different topology domain from the matched Pod

With kubernetes.io/hostname as the topology key, every host forms its own topology domain, so Pods on the same host share a domain.

Taint

The kubectl taint command marks a Node with a taint. Once tainted, the Node and Pods are in a mutually exclusive relationship: Pods are repelled unless they tolerate the taint.

key=value:effect  # the value part may be empty

effect supports three values:

  • NoSchedule: do not schedule Pods onto the tainted Node
  • PreferNoSchedule: try to avoid scheduling Pods there
  • NoExecute: do not schedule new Pods, and evict Pods already running there

The master node is tainted out of the box:

$ kubectl describe node master
Taints:             node-role.kubernetes.io/master:NoSchedule
# set a taint
kubectl taint nodes node1 key=value1:NoSchedule

# remove a taint
kubectl taint nodes node1 key:NoSchedule-
Tolerations

A Pod can declare tolerations, which allow it to be scheduled despite matching taints.

tolerations:
- key: key1
  operator: Equal
  value: value1
  effect: NoSchedule
  tolerationSeconds: 3600

Omitting key (leaving only operator: Exists) tolerates every taint key;

omitting effect tolerates every effect of the given key. Sketches of both below.
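
Minimal sketches of the two cases:

# tolerate everything (no key)
tolerations:
- operator: Exists

# tolerate key1 with any effect (no effect)
tolerations:
- key: key1
  operator: Exists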

When there are multiple masters, to avoid wasting their resources you can soften the taint:

kubectl taint nodes Node-Name node-role.kubernetes.io/master=:PreferNoSchedule
Pinning a Pod to a node

Pod.spec.nodeName schedules the Pod directly onto the named Node, bypassing the scheduler.

apiVersion: v1
kind: Pod
metadata:
  name: taints
spec:
  nodeName: client1  # the node's name
  containers:
  - name: affinity-con
    image: nginx

Nodes can also be selected by label:

kubectl label node client1 disk=ssd  # label the node first
apiVersion: v1
kind: Pod
metadata:
  name: taints
spec:
  nodeSelector:  # mandatory: if no node matches, the Pod is not scheduled
    disk: ssd
  containers:
  - name: affinity-con
    image: nginx

Security

Authentication
  • HTTP Token authentication: identifies a legitimate user by a token

  • HTTP Base authentication: username + password

  • HTTPS authentication: client identity verified with certificates signed by the CA root certificate

HTTPS certificate authentication
(Sequence diagram: the server and the client each apply to the CA for a certificate and receive one; they then authenticate each other with those certificates and communicate using a randomly generated session key.)
KubeConfig

Used by the kube components (and kubectl). It contains the cluster parameters (CA certificate, API Server address), client parameters (the generated certificate and private key), cluster context information, and so on. It can be inspected and switched as shown below.
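
Common kubectl config commands:

kubectl config view --minify       # show only the current context's configuration
kubectl config get-contexts       # list the available contexts
kubectl config use-context <name>  # switch to another context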

ServiceAccount

Containers in a Pod need to talk to the API Server. Because Pods are created and destroyed dynamically, generating certificates for them by hand is impractical, so Kubernetes uses ServiceAccounts (SA) to authenticate Pods to the API Server.

The relationship between Secrets and SAs

Kubernetes has a resource object called Secret. It comes in two flavors: the SA's service-account-token, and Opaque for user-defined confidential data. An SA uses three pieces: token, ca.crt, and namespace.

  • token: a JWT signed with the API Server's private key, used to authenticate the client when accessing the API Server
  • ca.crt: the root certificate, used by the client to verify the certificate presented by the API Server
  • namespace: the namespace the Secret belongs to
kubectl get secret --all-namespaces
kubectl describe secret XXX --namespace=kube-system
Authorization

Determines what permissions the caller has.

RBAC authorization mode

Role-based access control. It introduces four new top-level resource objects: Role, ClusterRole, RoleBinding, and ClusterRoleBinding.

(Diagram: a Role grants verbs such as create, get, and update on a Resource; a RoleBinding binds that Role to a User, Group, or ServiceAccount.)

A Role represents a set of permission rules. Permissions only accumulate; there is no way to subtract them. A Role is defined within one namespace; to grant access across namespaces, create a ClusterRole.

Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - verbs:
      - get
      - watch
      - list
    resources:
      - pods
    apiGroups: [""]
ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
  - verbs:
      - get
      - watch
      - list
    resources:
      - secrets
    apiGroups: [""]
RoleBinding

Binds a Role or ClusterRole to a user or group.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User  # can also be Group, to bind a user group
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role  # a ClusterRole can also be referenced here
  name: pod-reader
Resources

Resources in a Kubernetes cluster are referred to by name strings, which appear in API URLs. Some resources also have subresources; for example, logs is a subresource of Pod. A sample API URL:

GET /api/v1/namespaces/{namespace}/pods/{name}/log

To control access to a subresource, join the names with a / separator:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - resources:
      - pods/log
    ...