Kubernetes Cluster Deployment: Worker Node Deployment (Part 4)

Table of Contents

Node Components

kubelet

kube-proxy

CNI

1. Master component deployment

2. Create the node working directories

3. Copy the required binaries from the master's extracted directory to the node directory (after the master is deployed, copy them to the nodes)

4. kubelet deployment

Parameter descriptions

Configuration parameter file

TLS Bootstrapping mechanism

Generate the bootstrap.kubeconfig file

Copy the configuration file

Copy the certificate files

Create a systemd unit to manage kubelet

Start and enable at boot

Verify the service started

Approve the kubelet certificate request and join the cluster

View the nodes

5. kube-proxy deployment

Create the configuration file

Configuration parameter file

Generate the kube-proxy.kubeconfig file

Copy the certificate files to the deployment directory

kubeconfig file

Copy the generated file to the deployment directory

Create a systemd unit to manage kube-proxy

Start and enable at boot

6. Flannel network deployment

Allocate a subnet range for the flannel network and write it into etcd

Download the packages

Copy the certificates

Extract the package

Move the extracted files to the installation directory

Create the configuration file

Copy the configuration file

Write the systemd unit

Start the service

Connect Docker containers to the flannel network

7. Authorize apiserver access to kubelet

8. Add worker nodes

Copy the configuration files from the master

Delete the kubelet certificate and kubeconfig file

Modify the hostname

Start the services

Approve the new Node kubelet certificate requests on the master

View the nodes


Node Components

Node components run on every node, providing the Kubernetes runtime environment and maintaining Pods.

  • kubelet

kubelet is the primary node agent; it watches the Pods that have been assigned to its node. Its main functions are to:

  1. Mount the volumes a Pod requires.

  2. Download the Pod's Secrets.

  3. Run container health checks periodically.

  4. Report the status of Pods back to the master (for static Pods, by creating a "mirror Pod").

  5. Report the node's status to the master.

  • kube-proxy

kube-proxy implements the Kubernetes Service abstraction by maintaining network rules on the host and forwarding connections.

  • CNI

CNI connects container management systems to network plugins. Given the network namespace a container lives in, a plugin inserts a network interface into that namespace (for example, one end of a veth pair), performs any necessary host-side configuration (for example, attaching the other end of the veth pair to a bridge), and finally configures IP addresses and routes for the interfaces inside the namespace.

CNI's job is to take runtime information from the container management system (the path of the network namespace, the container ID, and the network interface name), load the network configuration from the container network configuration file, hand all of this to the matching plugin, let the plugin perform the actual network configuration, and return the result to the container management system.
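For a concrete sense of what such a network configuration looks like, here is a minimal flannel conflist as a sketch. The file name and contents follow the upstream flannel/CNI conventions and are not created anywhere in this post; the flannel deployment in section 6 below integrates with the Docker bridge instead:

cat <<EOF > /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF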

1. Master component deployment

See the previous post in this series: Kubernetes 集群部署之Master部署 (abel_dwh, CSDN blog).

2. Create the node working directories

[root@node1 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
[root@node2 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

3. Copy the required binaries from the master's extracted directory to the node directory (after the master is deployed, copy them to the nodes)

[root@master ~]# cd kubernetes/server/bin
[root@master bin]# cp kubelet kube-proxy /opt/kubernetes/bin
cp: overwrite ‘/opt/kubernetes/bin/kubelet’? y
cp: overwrite ‘/opt/kubernetes/bin/kube-proxy’? y

4. kubelet deployment

  • Create the configuration file

[root@master cfg]# cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
[root@node1 ~]# cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

[root@node2 ~]# cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node2 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
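The three files above differ only in --hostname-override. Since the hosts already trust each other over SSH (the scp commands later in this post rely on that), the file can be pushed from the master and patched per node with a small loop; a sketch:

[root@master ~]# for i in node1 node2; do scp /opt/kubernetes/cfg/kubelet.conf $i:/opt/kubernetes/cfg/; ssh $i "sed -i 's/k8s-master/k8s-$i/' /opt/kubernetes/cfg/kubelet.conf"; done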
  • Parameter descriptions

--hostname-override: display name of the node; must be unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where the kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network (the pause container)
  • Configuration parameter file


[root@master ~]#  cat <<EOF >/opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
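One pitfall worth checking: cgroupDriver above must match the cgroup driver Docker actually uses, otherwise the kubelet fails to manage containers. Verify on each node:

[root@master ~]# docker info | grep -i "cgroup driver"
 Cgroup Driver: cgroupfs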
  • TLS Bootstrapping mechanism

TLS Bootstrapping: once the master apiserver has TLS authentication enabled, the kubelet and kube-proxy on every node must present valid CA-signed certificates to communicate with kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on nodes; today it is used mainly for the kubelet, while for kube-proxy we still issue a single certificate ourselves.
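The low-privilege identity the kubelet bootstraps with comes from the token.csv created during the master deployment and loaded by kube-apiserver via --token-auth-file. A sketch of the file and of generating a fresh token (the path and the "system:node-bootstrapper" group follow the conventional setup from the master post; treat them as assumptions):

# Generate a random 16-byte hex token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

# token.csv format: token,user,uid,"group"
cat /opt/kubernetes/cfg/token.csv
# 267137427e1cb7519d63974aa7598091,kubelet-bootstrap,10001,"system:node-bootstrapper"

# The bootstrap user must also be allowed to create CSRs
# (skip if already done during the master deployment):
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap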

  • Generate the bootstrap.kubeconfig file

[root@master ~]# KUBE_APISERVER="https://192.168.44.128:6443"   # apiserver address
[root@master ~]# TOKEN="267137427e1cb7519d63974aa7598091"       # must match the token in token.csv
[root@master ~]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
[root@master ~]# 
[root@master ~]# kubectl config set-credentials "kubelet-bootstrap" \
>   --token=${TOKEN} \
>   --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
[root@master ~]# 
[root@master ~]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user="kubelet-bootstrap" \
>   --kubeconfig=bootstrap.kubeconfig
Context "default" created.
[root@master ~]# 
[root@master ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
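Optionally, sanity-check the result before copying it; kubectl config view redacts the embedded certificate data:

[root@master ~]# kubectl config view --kubeconfig=bootstrap.kubeconfig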
  • Copy the configuration file

[root@master ~]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
  • Create a systemd unit to manage kubelet

[root@master ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start and enable at boot

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start kubelet
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
  • Verify the service started

[root@master cfg]# ps -ef |grep k8s-master
root      13985      1  1 16:38 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-master --network-plugin=cni --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet-config.yml --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=lizhenliang/pause-amd64:3.0
root      14004   7547  0 16:39 pts/0    00:00:00 grep --color=auto k8s-master
  • Approve the kubelet certificate request and join the cluster

View the pending certificate request:
[root@master logs]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-eJBWwmGS7R28vpjo0wvmESjmVco02VH36F-9JdyKSx8   90s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

Approve the request:
[root@master logs]# kubectl certificate approve node-csr-eJBWwmGS7R28vpjo0wvmESjmVco02VH36F-9JdyKSx8
certificatesigningrequest.certificates.k8s.io/node-csr-eJBWwmGS7R28vpjo0wvmESjmVco02VH36F-9JdyKSx8 approved
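If several requests are pending at once (as in section 8 below), a convenience one-liner can approve them all; a sketch:

[root@master logs]# kubectl get csr -o name | xargs -r kubectl certificate approve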
  • View the nodes (the node reports NotReady until the cluster network from section 6 is in place)

[root@master logs]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   33s   v1.18.18

5. kube-proxy deployment

  • Create the configuration file

[root@master logs]# cat /opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
  • Configuration parameter file

[root@master logs]# cat /opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
  • Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

[root@master logs]# cd /etc/k8s/ssl/
[root@master ssl]# cat kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "HuShi",
      "ST": "HuShi",
      "O": "k8s",
      "OU": "System"
    }
  ]
}


[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@master ssl]# ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem
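A quick look at the issued certificate confirms the identity kube-proxy will present (output format varies slightly across openssl versions):

[root@master ssl]# openssl x509 -in kube-proxy.pem -noout -subject
subject= /C=CN/ST=HuShi/L=HuShi/O=k8s/OU=System/CN=system:kube-proxy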
  • Copy the certificate files to the deployment directory

[root@master ssl]# cp kube-proxy*pem /opt/kubernetes/ssl/
  • Generate the kubeconfig file

[root@master ssl]# KUBE_APISERVER="https://192.168.44.128:6443"
[root@master ssl]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

[root@master ssl]# kubectl config set-credentials kube-proxy \
>   --client-certificate=./kube-proxy.pem \
>   --client-key=./kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

[root@master ssl]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kube-proxy \
>   --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

[root@master ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
  • Copy the generated file to the deployment directory

[root@master ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
  • Create a systemd unit to manage kube-proxy

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable at boot

[root@master ssl]# systemctl daemon-reload
[root@master ssl]# systemctl start kube-proxy
[root@master ssl]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
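As an optional check, kube-proxy reports its active mode on the metrics port configured above (0.0.0.0:10249); with no mode set in kube-proxy-config.yml it defaults to iptables:

[root@master ssl]# curl -s 127.0.0.1:10249/proxyMode
iptables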

6. Flannel network deployment

  • Allocate a subnet range for the flannel network and write it into etcd

/usr/local/bin/etcd-v3.3.2-linux-amd64/etcdctl --ca-file=/etc/etcd/ssl/etcd.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.44.128:2379,https://192.168.44.129:2379,https://192.168.44.130:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
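Verify the key with the same TLS flags (this etcdctl speaks the v2 API, which is what flannel v0.12 reads):

/usr/local/bin/etcd-v3.3.2-linux-amd64/etcdctl --ca-file=/etc/etcd/ssl/etcd.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--endpoints="https://192.168.44.128:2379,https://192.168.44.129:2379,https://192.168.44.130:2379" get /coreos.com/network/config
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}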
  • Download the packages

[root@master ~]# wget https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz

[root@master ~]# scp flannel-v0.12.0-linux-amd64.tar.gz node2:/root/ 
[root@master ~]# scp flannel-v0.12.0-linux-amd64.tar.gz node1:/root/ 


[root@master ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz
[root@master ~]# mkdir /opt/kubernetes/bin/cni
[root@master ~]# tar zxf cni-plugins-linux-amd64-v0.8.5.tgz -C /opt/kubernetes/bin/cni/
[root@master ~]# scp -r /opt/kubernetes/bin/cni/* node1:/opt/kubernetes/bin/cni/
[root@master ~]# scp -r /opt/kubernetes/bin/cni/* node2:/opt/kubernetes/bin/cni/
  • Copy the certificates

[root@master ~]# cd /etc/etcd/ssl
[root@master ssl]# cp etcd*pem /opt/kubernetes/ssl/
[root@master ssl]# scp etcd*pem node2:/opt/kubernetes/ssl/
[root@master ssl]# scp etcd*pem node1:/opt/kubernetes/ssl/
  • Extract the package

[root@master ~]#  tar -xf flannel-v0.12.0-linux-amd64.tar.gz
[root@master ~]# ls flanneld README.md mk-docker-opts.sh 
flanneld  mk-docker-opts.sh  README.md
  • Move the extracted files to the installation directory

[root@master ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
  • Create the configuration file

[root@master ~]# cat /opt/kubernetes/cfg/flanneld.cfg
FLANNEL_ETCD="--etcd-endpoints=https://192.168.44.128:2379,https://192.168.44.129:2379,https://192.168.44.130:2379"
FLANNEL_ETCD_KEY="--etcd-prefix=/coreos.com/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/etcd-ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/etcd.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/etcd-key.pem"

Note that the variable names must match the ones the flanneld.service unit below expands, and the etcd prefix must match the /coreos.com/network key written to etcd above.
  • Copy the configuration file

[root@master ssl]# scp /opt/kubernetes/cfg/flanneld.cfg node1:/opt/kubernetes/cfg/
[root@master ssl]# scp /opt/kubernetes/cfg/flanneld.cfg node2:/opt/kubernetes/cfg/
  • Write the systemd unit

[root@master ~]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld.cfg
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
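flanneld, mk-docker-opts.sh, and this unit file are needed on the worker nodes as well; a sketch, assuming identical paths on every host:

[root@master ~]# for i in node1 node2; do scp /opt/kubernetes/bin/{flanneld,mk-docker-opts.sh} $i:/opt/kubernetes/bin/; scp /usr/lib/systemd/system/flanneld.service $i:/usr/lib/systemd/system/; done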
  • Start the service

[root@master ~]#  systemctl daemon-reload
[root@master ~]#  systemctl enable flanneld 
[root@master ~]#  chmod +x /opt/kubernetes/bin/*
[root@master ~]#  systemctl restart flanneld
[root@master ~]# systemctl status flanneld.service
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-05-11 22:56:45 CST; 5s ago
  Process: 18914 ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 18891 (flanneld)
    Tasks: 11
   Memory: 8.4M
   CGroup: /system.slice/flanneld.service
           └─18891 /opt/kubernetes/bin/flanneld --ip-masq

May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.449268   18891 iptables.go:167] Deleting iptables rule: -d 172.17.0.0/16 -j ACCEPT
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.451879   18891 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.64.0/24 -j RETURN
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.454162   18891 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.456062   18891 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 -j ACCEPT
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.457652   18891 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
May 11 22:56:45 master systemd[1]: Started Flanneld overlay address etcd agent.
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.461945   18891 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.463650   18891 iptables.go:155] Adding iptables rule: -d 172.17.0.0/16 -j ACCEPT
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.466417   18891 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.64.0/24 -j RETURN
May 11 22:56:45 master flanneld[18891]: I0511 22:56:45.469988   18891 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
[root@master ~]# systemctl enable flanneld.service
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:1d:b4:76 brd ff:ff:ff:ff:ff:ff
    inet 192.168.44.128/24 brd 192.168.44.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::a366:f393:3a59:2116/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::973:9d8c:8a1d:7cc3/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d283:44e6:575:333f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:31:4e:4d:fc brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 7a:54:55:2d:8e:4f brd ff:ff:ff:ff:ff:ff
    inet 172.17.64.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::7854:55ff:fe2d:8e4f/64 scope link 
       valid_lft forever preferred_lft forever
  • Connect Docker containers to the flannel network
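The ExecStartPost line in flanneld.service writes Docker bridge options into /run/flannel/docker. For Docker to actually use the flannel subnet, docker.service must load that file and pass $DOCKER_NETWORK_OPTIONS to dockerd. A minimal sketch, assuming the stock unit at /usr/lib/systemd/system/docker.service:

# /usr/lib/systemd/system/docker.service -- relevant [Service] lines only
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

The generated file contains something like DOCKER_NETWORK_OPTIONS=" --bip=172.17.64.1/24 --ip-masq=false --mtu=1450", so after the restart below docker0 should move from 172.17.0.1/16 into this node's flannel subnet.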

  • Restart the docker service

[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl restart docker

7. Authorize apiserver access to kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
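Without this binding, apiserver-to-kubelet calls are rejected with Forbidden errors. Once applied, commands that traverse the kubelet API should work; for example (the pod name is hypothetical):

kubectl logs <pod-name>              # pods/log, served by the kubelet
kubectl exec -it <pod-name> -- sh    # also proxied through the kubelet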

8. Add worker nodes

  • Copy the configuration files from the master

[root@master ~]# for i in node1 node2; do scp -r /opt/kubernetes $i:/opt/; done 
[root@master ~]# for i in node1 node2; do scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service $i:/usr/lib/systemd/system; done
[root@master ~]# for i in node1 node2; do scp -r /opt/kubernetes/ssl/ca.pem $i:/opt/kubernetes/ssl; done
  • Delete the kubelet certificate and kubeconfig file (they were generated for the master during TLS bootstrapping and are unique per node, so they must not be reused)

[root@node1 logs]# rm /opt/kubernetes/cfg/kubelet.kubeconfig 
[root@node1 logs]# rm -f /opt/kubernetes/ssl/kubelet*
  • Modify the hostname

[root@node1 logs]#  vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1

[root@node1 logs]#  vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
  • Start the services

[root@node1 logs]# systemctl daemon-reload
[root@node1 logs]# systemctl start kubelet
[root@node1 logs]# systemctl enable kubelet
[root@node1 logs]# systemctl start kube-proxy
[root@node1 logs]# systemctl enable kube-proxy
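Repeat the same hostname edits and service startup on node2, mirroring the node1 steps above:

[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl start kubelet kube-proxy
[root@node2 ~]# systemctl enable kubelet kube-proxy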
  • Approve the new Node kubelet certificate requests on the master

[root@master ~]# kubectl get csr 
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-B0CKCImgc8ROZIb7xZrs8uLmLRkkQlTbWxHq0GyDExc   5h52m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-Tb_IYC9zqMUU4T-sst0ptwKh718dSqZnNFeUBjiC4W0   2m12s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-VB4gwrUFscxsJuHkWnCziI6Y673mxcqLLS1ZkXlRTUE   2m17s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

[root@master ~]# kubectl certificate approve node-csr-Tb_IYC9zqMUU4T-sst0ptwKh718dSqZnNFeUBjiC4W0
certificatesigningrequest.certificates.k8s.io/node-csr-Tb_IYC9zqMUU4T-sst0ptwKh718dSqZnNFeUBjiC4W0 approved
[root@master ~]# kubectl certificate approve node-csr-VB4gwrUFscxsJuHkWnCziI6Y673mxcqLLS1ZkXlRTUE
certificatesigningrequest.certificates.k8s.io/node-csr-VB4gwrUFscxsJuHkWnCziI6Y673mxcqLLS1ZkXlRTUE approved
  • View the nodes

[root@master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   22h   v1.18.18
k8s-node1    Ready    <none>   17h   v1.18.18
k8s-node2    Ready    <none>   17h   v1.18.18
