K8s Core Plugins

The flannel network plugin

  • host-gw model
  • Kubernetes defines the network model but leaves the implementation to network plugins; the main job of a CNI plugin is to let pod resources on different hosts communicate with each other.

Install on nodes 130 and 131


Download and install

Download: https://github.com/flannel-io/flannel/releases

[root@ceshi-130 ~]# wget https://github.com/flannel-io/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@ceshi-130 ~]# mkdir -p  /usr/local/flannel-v0.11.0/
[root@ceshi-130 ~]# tar -xf flannel-v0.11.0-linux-amd64.tar.gz -C /usr/local/flannel-v0.11.0/
[root@ceshi-130 ~]# ln -s /usr/local/flannel-v0.11.0/ /usr/local/flannel

Copy the certificates

[root@ceshi-130 flannel]# mkdir certs
[root@ceshi-130 certs]# scp root@192.168.108.132:/opt/certs/ca.pem .
[root@ceshi-130 certs]# scp root@192.168.108.132:/opt/certs/client.pem .
[root@ceshi-130 certs]# scp root@192.168.108.132:/opt/certs/client-key.pem .

Create the configuration

[root@ceshi-130 flannel]# vi subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.21.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

Configure the startup script

[root@ceshi-130 flannel]# vi flanneld.sh
#!/bin/sh
./flanneld \
--public-ip=192.168.108.130 \
--etcd-endpoints=https://192.168.108.129:2379,https://192.168.108.130:2379,https://192.168.108.131:2379 \
--etcd-keyfile=./certs/client-key.pem \
--etcd-certfile=./certs/client.pem \
--etcd-cafile=./certs/ca.pem \
--iface=eth0 \
--subnet-file=./subnet.env \
--healthz-port=2401
[root@ceshi-130 flannel]# chmod +x flanneld.sh
[root@ceshi-130 flannel]# mkdir -p /data/logs/flanneld
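
Node 131 follows the same layout; a sketch of the per-node differences (an assumption, using the 172.7.22.0/24 subnet that the routing notes below assign to node 131):

[root@ceshi-131 flannel]# cat subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.22.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
[root@ceshi-131 flannel]# grep public-ip flanneld.sh
--public-ip=192.168.108.131 \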

Create the supervisor configuration

  • First add the network configuration in etcd
[root@ceshi-129 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16" , "Backend": {"Type": "host-gw"}}'
{"Network": "172.7.0.0/16" , "Backend": {"Type": "host-gw"}}
[root@ceshi-130 flannel]# vi /etc/supervisord.d/flanneld.ini
[program:flanneld-7-21]
command=/usr/local/flannel/flanneld.sh                             ; the program (relative uses PATH, can take args)
numprocs=1                                                   ; number of processes copies to start (def 1)
directory=/usr/local/flannel                                       ; directory to cwd to before exec (def no cwd)
autostart=true                                               ; start at supervisord start (default: true)
autorestart=true                                             ; retstart at unexpected quit (default: true)
startsecs=30                   ; number of secs prog must stay running (def. 1)
startretries=3                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                    ; setuid to this UNIX account to run the program
redirect_stderr=true                                        ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log       ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                 ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                     ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                  ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                  ; emit events on stdout writes (default false)
[root@ceshi-130 flannel]# supervisorctl update

Verify communication

[root@ceshi-130 flannel]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
nginx-ds-f82d2   1/1     Running   0          18h   172.7.200.2   ceshi-131.host.com   <none>           <none>
nginx-ds-lkznc   1/1     Running   3          42h   172.7.200.3   ceshi-130.host.com   <none>           <none>
[root@ceshi-130 flannel]# ping 172.7.200.2
PING 172.7.200.2 (172.7.200.2) 56(84) bytes of data.
64 bytes from 172.7.200.2: icmp_seq=1 ttl=64 time=0.110 ms
64 bytes from 172.7.200.2: icmp_seq=2 ttl=64 time=0.050 ms
[root@ceshi-130 flannel]# ping 172.7.200.3
PING 172.7.200.3 (172.7.200.3) 56(84) bytes of data.
64 bytes from 172.7.200.3: icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from 172.7.200.3: icmp_seq=2 ttl=64 time=0.047 ms

    1. flannel achieves cross-host communication by using the host's eth0 network as the gateway for each node's docker network.
    2. Put simply, the hosts add routes to each other.
    3. route add -net 172.7.22.0/24 gw 192.168.108.131 dev ens192 (on node 130: route traffic destined for 172.7.22.0/24 via 192.168.108.131)
    4. route add -net 172.7.21.0/24 gw 192.168.108.130 dev ens192 (on node 131: route traffic destined for 172.7.21.0/24 via 192.168.108.130; a quick check follows this list)
    5. The iptables SNAT rules still need tuning afterwards: container-to-container traffic should appear to come from the container IP rather than the host IP. Communication works either way, but without the fix the destination container sees the host, not the source container, as the client.
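
A quick way to confirm the host-gw routes from items 3 and 4 (a sketch; the interface name depends on the node, eth0 or ens192):

[root@ceshi-130 ~]# ip route | grep 172.7.22
172.7.22.0/24 via 192.168.108.131 dev ens192
[root@ceshi-131 ~]# ip route | grep 172.7.21
172.7.21.0/24 via 192.168.108.130 dev ens192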

[root@ceshi-130 kubernetes]# yum install iptables-services -y
[root@ceshi-130 kubernetes]# systemctl start iptables
[root@ceshi-130 kubernetes]# systemctl enable iptables
[root@ceshi-130 kubernetes]# iptables-save | grep -i postrouting
:POSTROUTING ACCEPT [76:4560]
:KUBE-POSTROUTING - [0:0]
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.7.200.0/24 ! -o docker0 -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE

Delete the old rule
[root@ceshi-130 kubernetes]# iptables -t nat -D POSTROUTING -s 172.7.200.0/24 ! -o docker0 -j MASQUERADE
Insert the corrected rule
[root@ceshi-130 kubernetes]# iptables -t nat -I POSTROUTING -s 172.7.200.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
[root@ceshi-130 kubernetes]# iptables-save > /etc/sysconfig/iptables
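
After reloading, the POSTROUTING rule should carry the destination exclusion, so pod-to-pod traffic inside 172.7.0.0/16 keeps the container source IP while traffic leaving the cluster network is still masqueraded (a sketch of the expected rule):

[root@ceshi-130 kubernetes]# iptables-save -t nat | grep 172.7.200.0
-A POSTROUTING -s 172.7.200.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE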

CoreDNS service discovery


Node 132


Service discovery is the process by which applications locate one another. Pods exist in the cluster as resources on their hosts, so a pod's IP keeps changing and is never fixed; service discovery lets services and the cluster find each other automatically.

Traditional DNS model: ceshi-111.host.com -> 192.168.0.108 (a domain name is bound to an IP)
K8s DNS model: nginx-test (Service) -> 192.168.0.108 (the Service is associated with the IP, so however the pods change, the Service stays reachable as long as it exists)

Add DNS resolution


Node 128


[root@ceshi-128 ~]# vi /var/named/od.com.zone
Add: k8s-yaml	A	192.168.108.132
[root@ceshi-128 ~]# systemctl restart named

Configure nginx

[root@ceshi-132 ~]# vi /usr/local/nginx/conf.d/k8s-yaml.od.conf
server {
        listen       80;
        server_name  k8s-yaml.od.com;
        
	location / {
		autoindex on;
		default_type text/plain;
		root /data/k8s-yaml;
        }
}

[root@ceshi-132 k8s-yaml]# cd /data/k8s-yaml/
[root@ceshi-132 k8s-yaml]# mkdir coredns
[root@ceshi-132 k8s-yaml]# curl k8s-yaml.od.com
<html>
<head><title>Index of /</title></head>
<body>
<h1>Index of /</h1><hr><pre><a href="../">../</a>
<a href="coredns/">coredns/</a>                                           04-Aug-2021 08:15                   -
</pre><hr></body>
</html>

[root@ceshi-132 k8s-yaml]# cd coredns/
[root@ceshi-132 coredns]# docker pull coredns/coredns:1.6.5
[root@ceshi-132 coredns]# docker tag 70f311871ae1 harbor.od.com/public/coredns:v1.6.5
[root@ceshi-132 coredns]# docker push harbor.od.com/public/coredns:v1.6.5

Configure the resource manifests

Permissions (RBAC):

[root@ceshi-132 coredns]# vi /data/k8s-yaml/coredns/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

Pod controller:

[root@ceshi-132 coredns]# vi /data/k8s-yaml/coredns/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.5
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      imagePullSecrets:
      - name: harbor
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile

Specify the upstream DNS address:

[root@ceshi-132 coredns]# vi /data/k8s-yaml/coredns/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes cluster.local 192.168.0.0/16
        forward  .  192.168.108.128
        cache 30
        loop
        reload
        loadbalance
       }

Define the service ports:

[root@ceshi-132 coredns]# vi /data/k8s-yaml/coredns/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
  - name: metrics
    port: 9153
    protocol: TCP

Apply the manifests


Nodes 130 and 131


[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/configmap.yaml
configmap/coredns created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/deployment.yaml
deployment.extensions/coredns created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created
Everything is in the kube-system namespace:
[root@ceshi-130 ~]# kubectl get all -n kube-system
NAME                                READY   STATUS             RESTARTS   AGE
pod/coredns-58bfb77d85-487sg        0/1     ImagePullBackOff   0          33s
pod/metrics-server-d6b78d65-srsdv   0/1     CrashLoopBackOff   15         22h


NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/coredns          ClusterIP   192.168.0.2      <none>        53/UDP,53/TCP,9153/TCP   24s
service/metrics-server   ClusterIP   192.168.182.38   <none>        443/TCP                  22h


NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns          0/1     1            0           34s
deployment.apps/metrics-server   0/1     1            0           22h

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-58bfb77d85        1         1         0       33s
replicaset.apps/metrics-server-d6b78d65   1         1         0       22h
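
A quick resolution check against the cluster DNS once the coredns pod is Running (a sketch; dig comes from bind-utils, and 192.168.0.2 / cluster.local are the values configured above):

[root@ceshi-130 ~]# dig -t A coredns.kube-system.svc.cluster.local @192.168.0.2 +short
192.168.0.2
[root@ceshi-130 ~]# dig -t A kubernetes.default.svc.cluster.local @192.168.0.2 +short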

K8s service exposure: Ingress

Ingress only exposes layer-7 applications, i.e. HTTP and HTTPS. Ingress is one of the standard K8s API resource types and a core resource; it is essentially a set of rules that forward user requests to a specified Service resource based on the domain name and URL path, routing request traffic from outside the cluster to the inside and thereby exposing the service.

Download traefik


Install on node 132


[root@ceshi-132 traefik]# mkdir -p /data/k8s-yaml/traefik
[root@ceshi-132 traefik]# docker pull traefik:v1.7.29-alpine
[root@ceshi-132 traefik]# docker tag d4b8d9784631 harbor.od.com/public/traefik:v1.7.29
[root@ceshi-132 traefik]# docker push harbor.od.com/public/traefik:v1.7.29

Configure the traefik resource manifests

[root@ceshi-132 traefik]# vi /data/k8s-yaml/traefik/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
[root@ceshi-132 traefik]# vi /data/k8s-yaml/traefik/daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.od.com/public/traefik:v1.7.29
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 81
        - name: admin
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://192.168.108.133:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
      imagePullSecrets:
      - name: harbor
[root@ceshi-132 traefik]# vi /data/k8s-yaml/traefik/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
[root@ceshi-132 traefik]# vi /data/k8s-yaml/traefik/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/daemonset.yaml
daemonset.extensions/traefik-ingress-controller created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
service/traefik-ingress-service created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
ingress.extensions/traefik-web-ui created

Add layer-7 load balancing with nginx


Nodes 128 and 129


[root@ceshi-128 conf.d]# vi /usr/local/nginx/conf.d/od.com.conf 
upstream traefik {
	server 192.168.108.130:81 max_fails=3 fail_timeout=10s;
	server 192.168.108.131:81 max_fails=3 fail_timeout=10s;
}
server {
	server_name *.od.com;
	location / {
	proxy_pass http://traefik;
	proxy_set_header Host	$http_host;
	proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
	}
}
[root@ceshi-128 conf.d]# ../sbin/nginx -s reload

Add DNS resolution:
traefik A 192.168.108.133 (pointing at the VIP)


NGINX now provides layer-7 load balancing in front of the Ingress controllers on the two backend nodes.
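
A quick check from any node once the traefik record resolves (a sketch; the expected first line of the response is shown):

[root@ceshi-130 ~]# curl -sI http://traefik.od.com | head -1
HTTP/1.1 200 OK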

Dashboard

Download the image


Node 132


[root@ceshi-132 ~]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3
[root@ceshi-132 ~]# docker tag 503bc4b7440b harbor.od.com/public/dashboard:v1.8.3
[root@ceshi-132 ~]# docker push harbor.od.com/public/dashboard:v1.8.3
[root@ceshi-132 ~]# mkdir -p /data/k8s-yaml/dashboard
[root@ceshi-132 ~]# cd /data/k8s-yaml/dashboard

Configure the resource manifests

[root@ceshi-132 dashboard]# vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
        volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
[root@ceshi-132 dashboard]# vi rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
[root@ceshi-132 dashboard]# vi svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
[root@ceshi-132 dashboard]# vi ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

Deploy the service

[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/deployment.yaml
deployment.apps/kubernetes-dashboard created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml 
service/kubernetes-dashboard created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml 
ingress.extensions/kubernetes-dashboard created

Open http://dashboard.od.com/ in a browser.

[root@ceshi-130 ~]# curl -I  http://dashboard.od.com/
HTTP/1.1 200 OK
Server: Tengine/2.3.3
Date: Wed, 11 Aug 2021 08:41:14 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 990
Connection: keep-alive
Accept-Ranges: bytes
Cache-Control: no-store
Last-Modified: Tue, 13 Feb 2018 11:17:03 GMT

Role-based access control (RBAC)

  • Create a user account / service account, bind it to a role, and the role grants the permissions
Accounts in K8s come in two types:
	1. User account: UserAccount
	2. Service account: ServiceAccount
Roles in K8s come in two types:
	1. Regular role: Role, valid only within one specific namespace
	2. Cluster role: ClusterRole, effective across the whole cluster
Permissions:
	 get, write, update, list, watch, and so on
Role bindings come in two types:
	1. RoleBinding
	2. ClusterRoleBinding

A complete definition looks roughly like this:

apiVersion: v1
kind: ServiceAccount                          # declare the service (application) account
metadata:
  name: traefik-ingress-controller            # service account name
  namespace: kube-system                      # namespace
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole                             # declare the cluster role
metadata:
  name: traefik-ingress-controller            # cluster role name
rules:                                        # rules
  - apiGroups:                                # API group restriction
      - ""
    resources:                                # resources
      - services
      - endpoints
      - secrets
    verbs:                                    # permissions (verbs)
      - get
      - list
      - watch
---
kind: ClusterRoleBinding                      # cluster role binding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller            # cluster role binding name
roleRef:                                      # reference to the cluster role
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole                           # the cluster role created above
  name: traefik-ingress-controller            # the name of the cluster role created above
subjects:                                     # who is granted the role
- kind: ServiceAccount                        # service account
  name: traefik-ingress-controller            # service account name
  namespace: kube-system                      # service account namespace

[root@ceshi-130 ~]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole                             # cluster role
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2021-08-06T06:30:34Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin                         # cluster role name
  resourceVersion: "40"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin
  uid: 64160d93-0d77-4d3b-9529-31f22611a60e
rules:                                        # rules
- apiGroups:                                  # API groups
  - '*'                                       # all
  resources:                                  # resources
  - '*'                                       # all
  verbs:                                      # permissions (verbs)
  - '*'                                       # all
- nonResourceURLs:							
  - '*'
  verbs:
  - '*'
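
For contrast with the cluster-wide objects above, a namespace-scoped Role plus RoleBinding looks roughly like this (a hedged sketch; pod-reader, the default namespace and the dev user are illustrative names, not resources from this cluster):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role                                    # valid only inside one namespace
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]             # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                             # grants the Role within that namespace only
metadata:
  name: read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: User
  name: dev
  apiGroup: rbac.authorization.k8s.io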

Create the dashboard HTTPS certificate


Node 132


[root@ceshi-132 certs]# (umask 077; openssl genrsa -out dashboard.od.com.key 2048)
[root@ceshi-132 certs]# openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=OldboyEdu/OU=ops"
[root@ceshi-132 certs]# openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3650
[root@ceshi-132 certs]# ll dashboard.od.com.*
-rw-r--r--. 1 root root 1192 Aug 11 18:55 dashboard.od.com.crt
-rw-r--r--. 1 root root 1005 Aug 11 18:53 dashboard.od.com.csr
-rw-------. 1 root root 1679 Aug 11 18:49 dashboard.od.com.key

Configure nginx with SSL


Nodes 128 and 129


[root@ceshi-128 nginx]# mkdir certs
[root@ceshi-128 certs]# pwd
/usr/local/nginx/certs
[root@ceshi-128 certs]# scp root@ceshi-132.host.com:/opt/certs/dashboard.od.com.crt .
[root@ceshi-128 certs]# scp root@ceshi-132.host.com:/opt/certs/dashboard.od.com.key .
[root@ceshi-128 conf.d]# vi dashboard.od.com.conf
server {
    listen       80;
    server_name  dashboard.od.com;

    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen       443 ssl;
    server_name  dashboard.od.com;

    ssl_certificate "/usr/local/nginx/certs/dashboard.od.com.crt";
    ssl_certificate_key "/usr/local/nginx/certs/dashboard.od.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout  10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://traefik;
              proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

Query the login token

[root@ceshi-130 ~]# kubectl get secret -n kube-system
NAME                                     TYPE                                  DATA   AGE
coredns-token-qc4vc                      kubernetes.io/service-account-token   3      2d9h
default-token-ft9r6                      kubernetes.io/service-account-token   3      5d4h
kubernetes-dashboard-admin-token-wsfgf   kubernetes.io/service-account-token   3      8h
kubernetes-dashboard-certs               Opaque                                0      25h
kubernetes-dashboard-key-holder          Opaque                                2      25h
traefik-ingress-controller-token-8xkh9   kubernetes.io/service-account-token   3      2d1h
[root@ceshi-130 ~]# kubectl describe secret kubernetes-dashboard-admin-token-wsfgf -n kube-system
Name:         kubernetes-dashboard-admin-token-wsfgf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: d63fac79-de2f-4a7e-be9d-9e77c91c0589

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1342 bytes
namespace:  11 bytes
token:      eyJhbGciOi


Download the heapster add-on

[root@ceshi-132 k8s-yaml]# mkdir dashboard/heapster
[root@ceshi-132 heapster]# docker pull quay.io/bitnami/heapster:1.5.4
[root@ceshi-132 heapster]# docker tag c359b95ad38b harbor.od.com/public/heapster:v1.5.4
[root@ceshi-132 heapster]# docker push harbor.od.com/public/heapster:v1.5.4

Permission (RBAC) configuration
[root@ceshi-132 heapster]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

Deployment configuration
[root@ceshi-132 heapster]# cat deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.od.com/public/heapster:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /opt/bitnami/heapster/bin/heapster
        - --source=kubernetes:https://kubernetes.default

Service configuration
[root@ceshi-132 heapster]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

Apply the resources
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/rbac.yaml 
serviceaccount/heapster created
clusterrolebinding.rbac.authorization.k8s.io/heapster created
[root@ceshi-130 ~]#  kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/deployment.yaml 
deployment.extensions/heapster created
[root@ceshi-130 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/svc.yaml
service/heapster created
  • The load figures are for reference only.

Obtaining the identity token


List the service accounts in the kube-system namespace
[root@ceshi-130 ~]# kubectl get sa -n kube-system
NAME                         SECRETS   AGE
coredns                      1         3d23h
default                      1         6d18h
heapster                     1         17h
kubernetes-dashboard-admin   1         46h
traefik-ingress-controller   1         3d15h

[root@ceshi-130 ~]# kubectl get secret -n kube-system
NAME                                     TYPE                                  DATA   AGE
coredns-token-qc4vc                      kubernetes.io/service-account-token   3      4d1h
default-token-ft9r6                      kubernetes.io/service-account-token   3      6d20h
heapster-token-55tdl                     kubernetes.io/service-account-token   3      19h
kubernetes-dashboard-admin-token-wsfgf   kubernetes.io/service-account-token   3      2d
kubernetes-dashboard-certs               Opaque                                0      2d17h
kubernetes-dashboard-key-holder          Opaque                                2      2d17h
traefik-ingress-controller-token-8xkh9   kubernetes.io/service-account-token   3      3d17h

Describe the kubernetes-dashboard-admin service account in detail
[root@ceshi-130 ~]# kubectl describe sa kubernetes-dashboard-admin  -n kube-system
Name:                kubernetes-dashboard-admin
Namespace:           kube-system
Labels:              addonmanager.kubernetes.io/mode=Reconcile
                     k8s-app=kubernetes-dashboard
Annotations:         kubectl.kubernetes.io/last-applied-configuration:
                       {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":...
Image pull secrets:  <none>
Mountable secrets:   kubernetes-dashboard-admin-token-wsfgf
Tokens:              kubernetes-dashboard-admin-token-wsfgf
Events:              <none>

List the user's token information
[root@ceshi-130 ~]# kubectl describe secret kubernetes-dashboard-admin-token-wsfgf -n kube-system
Name:         kubernetes-dashboard-admin-token-wsfgf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard-admin
              kubernetes.io/service-account.uid: d63fac79-de2f-4a7e-be9d-9e77c91c0589

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1342 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ
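
A shorter way to pull just the token out of the same secret (a sketch using the secret name from above):
[root@ceshi-130 ~]# kubectl -n kube-system get secret kubernetes-dashboard-admin-token-wsfgf -o jsonpath='{.data.token}' | base64 -d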

Smooth cluster upgrade

Download version 1.15.12

Download link: kubernetes v1.15.12

Original GitHub URL: https://dl.k8s.io/v1.15.12/kubernetes-server-linux-amd64.tar.gz
Alternative download URL: https://storage.googleapis.com/kubernetes-release/release/v1.15.12/kubernetes-server-linux-amd64.tar.gz
  1. Comment out the node in the nginx load-balancer configuration so no traffic reaches it (see the sketch below)
  2. Delete the node being upgraded with kubectl
  3. Set up the new version
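
Step 1 as a sketch, assuming the od.com.conf upstream from the Ingress section is the relevant one on this setup (comment out the node being upgraded, then reload nginx):

[root@ceshi-128 conf.d]# vi /usr/local/nginx/conf.d/od.com.conf
upstream traefik {
	# server 192.168.108.130:81 max_fails=3 fail_timeout=10s;
	server 192.168.108.131:81 max_fails=3 fail_timeout=10s;
}
[root@ceshi-128 conf.d]# ../sbin/nginx -s reload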
Check the current version; both nodes are on 1.15.10
[root@ceshi-130 ~]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
ceshi-130.host.com   Ready    master   16d   v1.15.10
ceshi-131.host.com   Ready    master   16d   v1.15.10

[root@ceshi-130 ~]# kubectl delete node ceshi-130.host.com
node "ceshi-130.host.com" deleted

After the node is deleted, all pods are rescheduled onto ceshi-131
[root@ceshi-130 local]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP          NODE                 NOMINATED NODE   READINESS GATES
nginx-ceshi-7bccf8fbcb-5j7gw   1/1     Running   0          8m12s   172.7.2.6   ceshi-131.host.com   <none>           <none>
nginx-ceshi-7bccf8fbcb-f8vr2   1/1     Running   0          3h26m   172.7.2.5   ceshi-131.host.com   <none>           <none>
nginx-ceshi-7bccf8fbcb-rv9hv   1/1     Running   0          8m12s   172.7.2.7   ceshi-131.host.com   <none>           <none>
nginx-ds-lx25l                 1/1     Running   1          3h39m   172.7.2.4   ceshi-131.host.com   <none>           <none>

Set up the new version

Extract the new release
[root@ceshi-130 ~]# tar -xf kubernetes-server-linux-amd64-1.15.12.tar.gz
[root@ceshi-130 ~]# mv kubernetes kubernetes-v1.15.12
[root@ceshi-130 ~]# mv kubernetes-v1.15.12/ /usr/local/
[root@ceshi-130 kubernetes-v1.15.12]# cd /usr/local/kubernetes-v1.15.12

Remove the source archive and unneeded image files
[root@ceshi-130 kubernetes-v1.15.12]# rm -fr kubernetes-src.tar.gz
[root@ceshi-130 kubernetes-v1.15.12]# cd server/bin
[root@ceshi-130 bin]# rm -fr ./*_tag
[root@ceshi-130 bin]# rm -fr ./*.tar
[root@ceshi-130 bin]# mkdir certs conf

Create the matching directories and copy the certificates, config files, and startup scripts from the old version into the new one
[root@ceshi-130 bin]# cp /usr/local/kubernetes/server/bin/certs/* ./certs/
[root@ceshi-130 bin]# cp /usr/local/kubernetes/server/bin/conf/* ./conf/
[root@ceshi-130 bin]# cp /usr/local/kubernetes-v1.15.10/server/bin/*.sh .

Remove the old symlink
[root@ceshi-130 local]# rm -fr kubernetes

Symlink the new version into place
[root@ceshi-130 local]# ln -s /usr/local/kubernetes-v1.15.12/ /usr/local/kubernetes
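
A sketch of the restart on the upgraded node, assuming the kube components run under supervisord like flanneld above (the exact program names are whatever sits in /etc/supervisord.d/):

[root@ceshi-130 local]# supervisorctl status
[root@ceshi-130 local]# supervisorctl restart all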

After restarting the services, ceshi-130 has been updated successfully (on restart, kubelet automatically re-registers the node with the cluster)
[root@ceshi-130 bin]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
ceshi-130.host.com   Ready    <none>   27s   v1.15.12
ceshi-131.host.com   Ready    master   17d   v1.15.10

Re-enable the nginx load-balancer configuration
[root@ceshi-128 ~]# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@ceshi-128 ~]# /usr/local/nginx/sbin/nginx -s reload