Deploying a Kubernetes (k8s) Cluster from Binaries with a Single Master Node -- A Detailed Walkthrough


1. Single-node environment deployment plan

1.1.1 Topology overview (diagram omitted)

Master components:
kube-apiserver: the cluster's single entry point and the coordinator of all other components. Every create/update/delete and watch operation on API objects goes through the apiserver, which then persists the data in etcd.

kube-controller-manager: runs the cluster's routine background tasks. Each resource type has its own controller, and controller-manager is responsible for running all of these controllers.

kube-scheduler: selects a node for each newly created pod according to the scheduling algorithm. It can be deployed anywhere, either on the same machine as the other master components or on a separate one.

1.1.2 Node components:

kubelet: the master's agent on each node. It manages the lifecycle of the containers running on that machine, such as creating containers, mounting volumes into pods, downloading secrets, and reporting container and node status. The kubelet turns each pod into a set of containers.

kube-proxy: implements the pod network proxy on each node, maintaining the network rules and providing layer-4 load balancing.

docker: the Docker engine.

flannel: the flannel overlay network.

1.1.3 The etcd cluster: in this setup etcd is distributed across all three machines.

etcd is an open-source project started by the CoreOS team in June 2013. Written in Go, it aims to provide a highly available distributed key-value store, and it uses the Raft protocol internally for consensus.

The etcd cluster stores its data without a single central node and has the following characteristics:

1. Simple: easy to install and configure, and it exposes an HTTP interface that is easy to work with (see the curl sketch after this list).

2. Secure: supports SSL certificate authentication.

3. Fast: according to the official benchmarks, a single instance handles 2k+ read operations per second.

4. Reliable: uses the Raft algorithm to keep the distributed data available and consistent.
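As a quick illustration of point 1, the HTTP interface can be exercised with plain curl once a member is running. This is only a sketch; it assumes a member listening on the default local client port without TLS (as the master does later in this post via http://127.0.0.1:2379):

# write a key through the etcd v2 keys API
curl -s http://127.0.0.1:2379/v2/keys/message -XPUT -d value="hello etcd"
# read it back
curl -s http://127.0.0.1:2379/v2/keys/message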

1.1.4 Self-signed SSL certificates used when deploying the K8S cluster


2. Implementing the single-master K8S deployment

Other preparation
1. Address plan: master 192.168.100.3, node01 192.168.100.5, node02 192.168.100.6 (figure omitted)
2. Software packages and script files to prepare (figure omitted)

  1. First relax the firewall and SELinux protection
[root@master ~]# setenforce 0
setenforce: SELinux is disabled
[root@master ~]# iptables -F
  2. Official download location for the software
https://github.com/kubernetes/kubernetes/releases?after=v1.13.1

---- ETCD cluster deployment: all three machines run etcd (three members form the basic cluster unit)

Copy the relevant scripts and packages to the servers.
On the master:
Create a directory under /root to hold the scripts and packages.

mkdir k8s
cd k8s

ls
//copied over
etcd-cert.sh  etcd.sh

[root@pc-3 k8s]# mkdir etcd-cert
[root@pc-3 k8s]# mv etcd-cert.sh etcd-cert

Download the certificate-generation tools

[root@pc-3 k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
//download the official cfssl binaries

[root@pc-3 k8s]# bash cfssl.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9.8M  100  9.8M    0     0   785k      0  0:00:12  0:00:12 --:--:-- 1199k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 2224k  100 2224k    0     0  1319k      0  0:00:01  0:00:01 --:--:-- 1319k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6440k  100 6440k    0     0   781k      0  0:00:08  0:00:08 --:--:-- 1107k
Check the downloaded tools
[root@pc-3 k8s]#  ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson  docker-compose
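Before going further, a quick optional sanity check that the binaries actually run:

cfssl version        # prints the cfssl release and runtime; any output means the tool is installed correctly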

Start generating the certificates

//cfssl generates certificates; cfssljson takes cfssl's JSON output and writes the certificate files;
//cfssl-certinfo displays certificate details
//define the CA certificate
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"     
        ]  
      } 
    }         
  }
}
EOF 

Create the CA certificate signing request

cat > ca-csr.json <<EOF 
{   
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate the CA certificate, producing ca-key.pem and ca.pem

[root@pc-3 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/28 18:43:35 [INFO] generating a new CA key and certificate from CSR
2020/09/28 18:43:35 [INFO] generate received request
2020/09/28 18:43:35 [INFO] received CSR
2020/09/28 18:43:35 [INFO] generating key: rsa-2048
2020/09/28 18:43:35 [INFO] encoded CSR
2020/09/28 18:43:35 [INFO] signed certificate with serial number 408575421903137939925207688635112111026138545006

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.100.3",
    "192.168.100.5",
    "192.168.100.6"
 ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

Generate the etcd server certificate: server-key.pem and server.pem

[root@pc-3 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/28 18:47:02 [INFO] generate received request
2020/09/28 18:47:02 [INFO] received CSR
2020/09/28 18:47:02 [INFO] generating key: rsa-2048
2020/09/28 18:47:02 [INFO] encoded CSR
2020/09/28 18:47:02 [INFO] signed certificate with serial number 254619833849047910853989201172793903756492656573
2020/09/28 18:47:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
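If you want to double-check what was just issued (including whether the three etcd IPs actually made it into the SAN list, which the warning above calls into question), cfssl-certinfo can decode the generated files:

cfssl-certinfo -cert server.pem     # shows the subject, the SAN entries, and the validity period
cfssl-certinfo -cert ca.pem         # the CA certificate can be inspected the same way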

Install the etcd software

[root@pc-3 k8s]#  tar zxvf etcd-v3.3.10-linux-amd64.tar.gz


[root@pc-3 k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
//directories for config files, binaries, and certificates
[root@pc-3 k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@pc-3 k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
[root@pc-3 k8s]# ls
ca-config.json  etcd.sh                             kubernetes-server-linux-amd64.tar.gz
ca.csr          etcd-v3.3.10-linux-amd64            server.csr
ca-csr.json     etcd-v3.3.10-linux-amd64.tar.gz     server-csr.json
ca-key.pem      flannel.sh                          server-key.pem
ca.pem          flannel-v0.10.0-linux-amd64.tar.gz  server.pem
cfssl.sh        k8s-cert.sh
etcd-cert       kubeconfig.sh
[root@pc-3 k8s]# cd etcd-cert/
[root@pc-3 etcd-cert]# ls
etcd-cert.sh
[root@pc-3 etcd-cert]# cd ../
//copy the certificates

[root@pc-3 k8s]# cp *.pem /opt/etcd/ssl

Start etcd; it will appear to hang while it waits for the other members to join

bash etcd.sh etcd01 192.168.100.3 etcd02=https://192.168.100.5:2380,etcd03=https://192.168.100.6:2380

Open another terminal and you will see that the etcd process is already running

[root@master ~]# ps -ef | grep etcd
root      22615  19822  0 18:51 pts/3    00:00:00 bash etcd.sh etcd01 192.168.100.3 etcd02=https://192.168.100.5:2380,etcd03=https://192.168.100.6:2380
root      22668  22615  0 18:51 pts/3    00:00:00 systemctl restart etcd
root      22676      1  4 18:51 ?        00:00:01 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.100.3:2380 --listen-client-urls=https://192.168.100.3:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.100.3:2379 --initial-advertise-peer-urls=https://192.168.100.3:2380 --initial-cluster=etcd01=https://192.168.100.3:2380,etcd02=https://192.168.100.5:2380,etcd03=https://192.168.100.6:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root      22747  22704  0 18:51 pts/2    00:00:00 grep --color=auto etcd
[root@master ~]#

3. Deploying etcd on the two node machines

Copy the certificates (the whole /opt/etcd directory) to the other node machines

[root@master ~]# scp -r /opt/etcd/ root@192.168.100.5:/opt/
The authenticity of host '192.168.100.5 (192.168.100.5)' can't be established.
ECDSA key fingerprint is SHA256:snV2dDxE7mpobX/CFy7kCn9xOBLCXsLKzGu4rPxWv3I.
ECDSA key fingerprint is MD5:6d:96:09:b9:c5:b1:79:1d:e2:7d:d3:53:43:11:6b:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.5' (ECDSA) to the list of known hosts.
root@192.168.100.5's password:
etcd                                                           100%  509   586.3KB/s   00:00
etcd                                                           100%   18MB 143.6MB/s   00:00
etcdctl                                                        100%   15MB 131.5MB/s   00:00
ca-key.pem                                                     100% 1679   641.6KB/s   00:00
ca.pem                                                         100% 1265   322.6KB/s   00:00
server-key.pem                                                 100% 1679     1.9MB/s   00:00
server.pem                                                     100% 1354     1.8MB/s   00:00
[root@master ~]# scp -r /opt/etcd/ root@192.168.100.6:/opt
The authenticity of host '192.168.100.6 (192.168.100.6)' can't be established.
ECDSA key fingerprint is SHA256:53xQsSKQNTtONKRZ/T1Hvg9H2Kj9elnV86jKPn6TohY.
ECDSA key fingerprint is MD5:94:59:71:bd:ad:f7:69:dd:2e:77:ef:24:7a:5b:4f:a1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.6' (ECDSA) to the list of known hosts.
root@192.168.100.6's password:
etcd                                                           100%  509   350.4KB/s   00:00
etcd                                                           100%   18MB  87.6MB/s   00:00
etcdctl                                                        100%   15MB 136.7MB/s   00:00
ca-key.pem                                                     100% 1679   975.0KB/s   00:00
ca.pem                                                         100% 1265   390.9KB/s   00:00
server-key.pem                                                 100% 1679     1.4MB/s   00:00
server.pem                                                     100% 1354   407.2KB/s   00:00
[root@master ~]#

Copy the etcd systemd unit file to the other nodes (so they do not need to run the install script themselves)

scp /usr/lib/systemd/system/etcd.service root@192.168.100.5:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/etcd.service root@192.168.100.6:/usr/lib/systemd/system/
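The unit file itself is not printed in this post. As a rough sketch of what etcd.sh generates (reconstructed from the flags visible in the earlier ps output; your script may differ slightly), it wires the EnvironmentFile written to /opt/etcd/cfg/etcd into the ExecStart line:

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target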

Modify the configuration on node01

[root@node1 ~]# vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.5:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.5:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.5:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.5:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.3:2380,etcd02=https://192.168.100.5:2380,etcd03=https://192.168.100.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Modify the configuration on node02

[root@node2 ~]# vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.6:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.6:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.3:2380,etcd02=https://192.168.100.5:2380,etcd03=https://192.168.100.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start etcd on both nodes and enable it at boot

systemctl start etcd
systemctl enable etcd
systemctl status etcd

Check the cluster status; every member should report healthy

[root@master k8s]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379" cluster-health
member 22919d53331f02b is healthy: got healthy result from https://192.168.100.6:2379
member 8ca2395d1d69688f is healthy: got healthy result from https://192.168.100.5:2379
member ec5b7981eb5dd5de is healthy: got healthy result from https://192.168.100.3:2379
cluster is healthy
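member list prints the advertised peer and client URLs of each member, which is handy when one of the member IDs above is unhealthy (same flags, run from the directory holding the .pem files):

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379" member list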

4. Deploying the Docker engine

----------------------------------------- Docker engine deployment ---------------------------------------------------------
Deploy the Docker engine on every node machine; see my earlier post for the detailed steps:
How to deploy Docker.
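For reference, a minimal CentOS 7 installation of the engine looks roughly like this (a sketch; the linked post has the full, tested procedure):

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker && systemctl enable docker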

5. Flannel network configuration (handles communication between containers across nodes)

------------------------------------------ Flannel network configuration ---------------------------------------------------------
Packet encapsulation types between containers (figures omitted)

Write the allocated network range into etcd for flannel to use

[root@pc-3 k8s]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Check the value that was written

[root@pc-3 k8s]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
[root@pc-3 k8s]#
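flanneld on each node reads this key and then registers its own /24 lease under /coreos.com/network/subnets. Once the nodes are up (next steps), the leases can be listed with the same etcdctl flags, which is a quick way to see which subnet each node was given:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379" ls /coreos.com/network/subnets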

Copy flannel-v0.10.0-linux-amd64.tar.gz to the relevant nodes and extract it

[root@node1 opt]# ls
etcd  flannel.sh  flannel-v0.10.0-linux-amd64.tar.gz  rh  soft
//extract on every node
[root@node1 opt]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
[root@node1 opt]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 opt]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node1 opt]# ls
etcd  flannel.sh  flannel-v0.10.0-linux-amd64.tar.gz  kubernetes  README.md  rh  soft
[root@node1 opt]# vim flannel.sh
[root@node1 opt]# bash flannel.sh https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node2 opt]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md

Recursively create the k8s working directories: cfg (configuration), bin (binaries), ssl (certificates)

[root@node2 opt]#  mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 opt]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node2 opt]# ls
etcd        flannel-v0.10.0-linux-amd64.tar.gz  README.md  soft
flannel.sh  kubernetes                          rh

Enable the flannel network

Run the flannel.sh script to write the configuration, register flanneld with systemd, and enable it at boot
[root@node2 opt]# bash flannel.sh https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

Modify the Docker unit file (same change on both nodes)

//hook Docker up to flannel (if containers on different nodes cannot reach each other, re-check this configuration and regenerate the flannel subnet file)
[root@node1 opt]# vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env         //insert this line to load flannel's environment variables
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock     //add $DOCKER_NETWORK_OPTIONS to this line
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
................... (rest of the unit file omitted) ...................

Check the generated subnet configuration on each node

[root@node1 opt]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.14.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.14.1/24 --ip-masq=false --mtu=1450"
//note: --bip sets the subnet the Docker daemon uses at startup
[root@node1 opt]#


[root@node2 bin]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.30.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.30.1/24 --ip-masq=false --mtu=1450"
[root@node2 bin]#
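The same subnets should show up on the interfaces themselves; a quick check on either node (assuming the usual interface names flannel.1 and docker0):

ip -4 addr show flannel.1    # the node's address on the vxlan overlay, MTU 1450
ip -4 addr show docker0      # should sit inside the --bip subnet from subnet.env once Docker is restarted below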

Restart the Docker service and check the interface information

[root@node1 opt]# systemctl daemon-reload
[root@node1 opt]# systemctl restart docker

//restart the Docker service
[root@node2 bin]# systemctl daemon-reload
[root@node2 bin]# systemctl restart docker
docker0 has now picked up the flannel-assigned subnet (screenshot omitted)

Test container-to-container communication to verify the flannel configuration

  //being able to ping across to the other node's docker0 subnet proves that flannel is routing traffic
  Create container 1
[root@node1 opt]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
75f829a71a1c: Pull complete
Digest: sha256:19a79828ca2e505eaee0ff38c2f3fd9901f4826737295157cc5212b7a372cd2b
Status: Downloaded newer image for centos:7
[root@8c4b1fe07af7 /]#
[root@8c4b1fe07af7 /]# yum install net-tools -y

Create container 2

[root@node2 bin]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
75f829a71a1c: Pull complete
Digest: sha256:19a79828ca2e505eaee0ff38c2f3fd9901f4826737295157cc5212b7a372cd2b
Status: Downloaded newer image for centos:7
[root@19a918c7db62 /]#
[root@19a918c7db62 /]# yum install net-tools -y

Check the container's IP from inside it

[root@19a918c7db62 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.30.2  netmask 255.255.255.0  broadcast 172.17.30.255
        ether 02:42:ac:11:1e:02  txqueuelen 0  (Ethernet)
        RX packets 16191  bytes 12476566 (11.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7834  bytes 426599 (416.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Run a ping test; the containers can now reach each other

[root@19a918c7db62 /]# ping 172.17.14.2
PING 172.17.14.2 (172.17.14.2) 56(84) bytes of data.
64 bytes from 172.17.14.2: icmp_seq=1 ttl=62 time=0.614 ms
64 bytes from 172.17.14.2: icmp_seq=2 ttl=62 time=1.81 ms
64 bytes from 172.17.14.2: icmp_seq=3 ttl=62 time=1.01 ms
^C
--- 172.17.14.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.614/1.148/1.813/0.498 ms

//ping in the other direction as well, between the centos:7 containers on the two nodes

[root@8c4b1fe07af7 /]# ping 172.17.30.2
PING 172.17.30.2 (172.17.30.2) 56(84) bytes of data.
64 bytes from 172.17.30.2: icmp_seq=1 ttl=62 time=0.699 ms
64 bytes from 172.17.30.2: icmp_seq=2 ttl=62 time=0.366 ms
64 bytes from 172.17.30.2: icmp_seq=3 ttl=62 time=0.173 ms
64 bytes from 172.17.30.2: icmp_seq=4 ttl=62 time=0.439 ms
^C
--- 172.17.30.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.173/0.419/0.699/0.189 ms

6. Deploying the master components


A node is issued its certificate only after all three validation steps pass; keep that in mind when troubleshooting.

Detailed steps

On the master, generate the certificates for the apiserver

[root@master ~]# cd /root/k8s/
[root@master k8s]# ls
ca-config.json  etcd.sh                             kubernetes-server-linux-amd64.tar.gz
ca.csr          etcd-v3.3.10-linux-amd64            server.csr
ca-csr.json     etcd-v3.3.10-linux-amd64.tar.gz     server-csr.json
ca-key.pem      flannel.sh                          server-key.pem
ca.pem          flannel-v0.10.0-linux-amd64.tar.gz  server.pem
cfssl.sh        k8s-cert.sh
etcd-cert       kubeconfig.sh
[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# unzip master.zip
Archive:  master.zip
inflating: apiserver.sh
inflating: controller-manager.sh
inflating: scheduler.sh
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# mv k8s-cert.sh k8s-cert
[root@master k8s]# cd k8s-cert/

Edit the certificate script so the relevant IP addresses are written into the certificate

[root@master k8s-cert]# ls
k8s-cert.sh

vim k8s-cert.sh
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "192.168.100.3",    //master ,单节点有此一个就可以
      "192.168.100.7",      //多节点备用证书
      "192.168.100.8",       //多节点备用证书
      "192.168.100.11",       //多节点备用证书
      "192.168.100.12",        //多节点备用证书
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Run the certificate script; it produces eight certificate files

[root@master k8s-cert]# bash k8s-cert.sh

Check the eight generated certificate files
[root@master k8s-cert]# ls
admin.csr       ca-config.json  ca.pem               kube-proxy-key.pem  server-key.pem
admin-csr.json  ca.csr          k8s-cert.sh          kube-proxy.pem      server.pem
admin-key.pem   ca-csr.json     kube-proxy.csr       server.csr
admin.pem       ca-key.pem      kube-proxy-csr.json  server-csr.json
[root@master k8s-cert]#

Copy the certificates into the /opt/kubernetes/ssl/ directory

[root@localhost k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/

Extract the kubernetes tarball, copy the required binaries, and generate a random token for token.csv

[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz

[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# ls
apiextensions-apiserver              kube-controller-manager.tar
cloud-controller-manager             kubectl
cloud-controller-manager.docker_tag  kubelet
cloud-controller-manager.tar         kube-proxy
hyperkube                            kube-proxy.docker_tag
kubeadm                              kube-proxy.tar
kube-apiserver                       kube-scheduler
kube-apiserver.docker_tag            kube-scheduler.docker_tag
kube-apiserver.tar                   kube-scheduler.tar
kube-controller-manager              mounter
kube-controller-manager.docker_tag
//copy the required binaries
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s
Generate a random string to use as the bootstrap token
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '  //generates a random hex string
11403f512b6f0dcf9807cec2862cd32a
Create the token.csv file
[root@master k8s]#  vim /opt/kubernetes/cfg/token.csv
//format: token,username,uid,group
11403f512b6f0dcf9807cec2862cd32a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
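The two steps above can also be combined so the random value lands in the file without copy-pasting (same format: token,username,uid,group):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF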

With the binaries, token, and certificates in place, start the apiserver

 bash apiserver.sh 192.168.100.3 https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379

[root@master k8s]#  bash apiserver.sh 192.168.100.3 https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master k8s]#

Check that the process started successfully

[root@master k8s]# ps aux | grep kube
root      24746 23.5 16.4 393132 306528 ?       Ssl  15:28   0:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379 --bind-address=192.168.100.3 --secure-port=6443 --advertise-address=192.168.100.3 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      24772  0.0  0.0 112724   988 pts/0    S+   15:28   0:00 grep --color=auto kube

Inspect the generated configuration file

[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379 \
--bind-address=192.168.100.3 \
--secure-port=6443 \
--advertise-address=192.168.100.3 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

[root@master k8s]#

Check the listening ports (HTTPS on 6443, local HTTP on 8080)

[root@master k8s]#  netstat -ntap | grep 6443
tcp        0      0 192.168.100.3:6443      0.0.0.0:*               LISTEN      24746/kube-apiserve
tcp        0      0 192.168.100.3:6443      192.168.100.3:57238     ESTABLISHED 24746/kube-apiserve
tcp        0      0 192.168.100.3:57238     192.168.100.3:6443      ESTABLISHED 24746/kube-apiserve
[root@master k8s]#  netstat -ntap | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      24746/kube-apiserve
[root@master k8s]#

Start the scheduler service with its script

[root@master k8s]# vim scheduler.sh
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]#  ps aux | grep ku
postfix   24344  0.0  0.1  91732  2216 ?        S    15:08   0:00 pickup -l -t unix -u
root      24746  7.1 16.4 393132 306540 ?       Ssl  15:28   0:17 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379 --bind-address=192.168.100.3 --secure-port=6443 --advertise-address=192.168.100.3 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      24891  3.6  1.0  46128 19192 ?        Ssl  15:32   0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      24915  0.0  0.0 112724   984 pts/0    R+   15:32   0:00 grep --color=auto ku

//start the controller-manager service

[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master k8s]#

Check component status:

Command: kubectl get cs
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
scheduler            Healthy   ok
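A further quick check is to ask kubectl which endpoint it is talking to:

/opt/kubernetes/bin/kubectl cluster-info    # prints the master endpoint kubectl is currently using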

7. Deploying node01

-------------------------------------------------- Node deployment ------------------------------------------

On the master, copy the kubelet and kube-proxy binaries over to the node machines

[root@master bin]# cd -
/root/k8s/kubernetes/server/bin
[root@master bin]# ls
apiextensions-apiserver              kube-controller-manager.tar
cloud-controller-manager             kubectl
cloud-controller-manager.docker_tag  kubelet
cloud-controller-manager.tar         kube-proxy
hyperkube                            kube-proxy.docker_tag
kubeadm                              kube-proxy.tar
kube-apiserver                       kube-scheduler
kube-apiserver.docker_tag            kube-scheduler.docker_tag
kube-apiserver.tar                   kube-scheduler.tar
kube-controller-manager              mounter
kube-controller-manager.docker_tag
[root@master bin]# scp kubelet kube-proxy root@192.168.100.5:/opt/kubernetes/bin/
root@192.168.100.5's password:
kubelet                                                        100%  168MB 123.2MB/s   00:01
kube-proxy                                                     100%   48MB 117.6MB/s   00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.100.6:/opt/kubernetes/bin/
root@192.168.100.6's password:
kubelet                                                        100%  168MB 116.3MB/s   00:01
kube-proxy                                                     100%   48MB 135.2MB/s   00:00
[root@master bin]#

On node01 (copy node.zip to the /root directory and extract it)

[root@node1 ~]# ls
anaconda-ks.cfg  node.zip  original-ks.cfg  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh

---------------------------------------------------
[root@node2 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
[root@node2 ~]#

[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# ls
[root@master kubeconfig]# cd ../
[root@master k8s]# ls
apiserver.sh           etcd.sh                             kubernetes-server-linux-amd64.tar.gz
ca-config.json         etcd-v3.3.10-linux-amd64            master.zip
ca.csr                 etcd-v3.3.10-linux-amd64.tar.gz     scheduler.sh
ca-csr.json            flannel.sh                          server.csr
ca-key.pem             flannel-v0.10.0-linux-amd64.tar.gz  server-csr.json
ca.pem                 k8s-cert                            server-key.pem
cfssl.sh               kubeconfig                          server.pem
controller-manager.sh  kubeconfig.sh
etcd-cert              kubernetes
[root@master k8s]# mv kubeconfig.sh kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# ls
kubeconfig
[root@master kubeconfig]#

Retrieve the token and write it into the script

[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
11403f512b6f0dcf9807cec2862cd32a,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master kubeconfig]# ls
kubeconfig
[root@master kubeconfig]# vim kubeconfig

Set the client authentication parameters (inside kubeconfig.sh)

kubectl config set-credentials kubelet-bootstrap \
  --token=11403f512b6f0dcf9807cec2862cd32a \              // paste the token generated earlier here
  --kubeconfig=bootstrap.kubeconfig
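set-credentials is only one of the steps the script runs for bootstrap.kubeconfig; for context, the surrounding commands typically look like this (a sketch using this deployment's master address and certificate directory, matching the "Cluster set / User set / Context created" output further below):

export KUBE_APISERVER="https://192.168.100.3:6443"

# point the kubeconfig at the cluster and embed the CA certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# tie the kubelet-bootstrap user (whose token was set above) to that cluster in a context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# make the context active
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig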

Add the kubernetes bin directory to PATH via /etc/profile

export PATH=$PATH:/opt/kubernetes/bin/

[root@master kubeconfig]# vim /etc/profile
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kube
kube-apiserver           kubectl
kube-controller-manager  kube-scheduler

Check component status:
[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok

Run the script to generate the kubeconfig files

[root@master kubeconfig]#  bash kubeconfig 192.168.100.3 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]#

[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

//copy the kubeconfig files to node01

[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.100.5:/opt/kubernetes/cfg/

[root@master kubeconfig]#
//copy the kubeconfig files to node02
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.100.6:/opt/kubernetes/cfg/

//create the RBAC binding that authorizes the bootstrap user to request certificate signing from the apiserver (critical step)

[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

On node01


Start the kubelet service

[root@node1 ~]# bash kubelet.sh 192.168.100.5     
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node1 ~]#

Check that kubelet started

[root@node1 ~]#  ps aux | grep kube
root      19291  0.1  1.0 465176 20664 ?        Ssl  14:27   0:13 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.100.3:2379,https://192.168.100.5:2379,https://192.168.100.6:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      36079  2.5  2.4 408512 49208 ?        Ssl  17:32   0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.100.5 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      36148  0.0  0.0 112724   988 pts/2    S+   17:33   0:00 grep --color=auto kube
[root@node1 ~]#

On the master: the certificate signing request from node01 has arrived

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-D-6Qg-440uk6mAMVNkwmyAQbDSXH3r7GB9BjarecFvg   11s   kubelet-bootstrap   Pending
Pending (waiting for the cluster to issue the node its certificate)
[root@master kubeconfig]# kubectl certificate approve node-csr-D-6Qg-440uk6mAMVNkwmyAQbDSXH3r7GB9BjarecFvg
certificatesigningrequest.certificates.k8s.io/node-csr-D-6Qg-440uk6mAMVNkwmyAQbDSXH3r7GB9BjarecFvg approved

The node has been approved to join the cluster

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-D-6Qg-440uk6mAMVNkwmyAQbDSXH3r7GB9BjarecFvg   13m   kubelet-bootstrap   Approved,Issued
[root@master kubeconfig]#

List the cluster nodes; node01 has joined successfully

View node information
[root@master kubeconfig]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.100.5   Ready    <none>   3m15s   v1.12.3
[root@master kubeconfig]#

On node01, start the kube-proxy service

[root@node1 ~]# bash proxy.sh 192.168.100.5
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

Check the status; it is running normally

[root@node1 ~]#  systemctl status kube-proxy.service
 kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since  2020-09-29 18:42:30 CST; 52s ago
 Main PID: 44442 (kube-proxy)
    Tasks: 0
   Memory: 7.7M
   CGroup: /system.slice/kube-proxy.service
            44442 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --...

9 29 18:43:13 node1 kube-proxy[44442]: I0929 18:43:13.231924   44442 conf...e
9 29 18:43:13 node1 kube-proxy[44442]: I0929 18:43:13.968657   44442 conf...e
9 29 18:43:15 node1 kube-proxy[44442]: I0929 18:43:15.240187   44442 conf...e
9 29 18:43:15 node1 kube-proxy[44442]: I0929 18:43:15.975455   44442 conf...e
9 29 18:43:17 node1 kube-proxy[44442]: I0929 18:43:17.246790   44442 conf...e
9 29 18:43:17 node1 kube-proxy[44442]: I0929 18:43:17.983733   44442 conf...e
9 29 18:43:19 node1 kube-proxy[44442]: I0929 18:43:19.254543   44442 conf...e
9 29 18:43:19 node1 kube-proxy[44442]: I0929 18:43:19.990491   44442 conf...e
9 29 18:43:21 node1 kube-proxy[44442]: I0929 18:43:21.260388   44442 conf...e
9 29 18:43:22 node1 kube-proxy[44442]: I0929 18:43:21.999889   44442 conf...e
Hint: Some lines were ellipsized, use -l to show in full.

8. Deploying node02

----------------------------------- node02 deployment ---------------------------------------------

On node01, copy the ready-made /opt/kubernetes directory to node02 and then adjust it there

[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.100.6:/opt/
The authenticity of host '192.168.100.6 (192.168.100.6)' can't be established.
ECDSA key fingerprint is SHA256:53xQsSKQNTtONKRZ/T1Hvg9H2Kj9elnV86jKPn6TohY.
ECDSA key fingerprint is MD5:94:59:71:bd:ad:f7:69:dd:2e:77:ef:24:7a:5b:4f:a1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.6' (ECDSA) to the list of known hosts.
root@192.168.100.6's password:
flanneld                                      100%  235   403.0KB/s   00:00
bootstrap.kubeconfig                          100% 2167     3.4MB/s   00:00
kube-proxy.kubeconfig                         100% 6273    11.1MB/s   00:00
kubelet                                       100%  377   753.1KB/s   00:00
kubelet.config                                100%  267   469.8KB/s   00:00
kubelet.kubeconfig                            100% 2296     4.1MB/s   00:00
kube-proxy                                    100%  189   431.1KB/s   00:00
mk-docker-opts.sh                             100% 2139     3.5MB/s   00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet                                       100%  168MB 141.3MB/s   00:01
kube-proxy                                    100%   48MB 137.0MB/s   00:00
kubelet.crt                                   100% 2185   627.6KB/s   00:00
kubelet.key                                   100% 1679     2.0MB/s   00:00
kubelet-client-2020-09-29-18-37-59.pem        100% 1273   394.5KB/s   00:00
kubelet-client-current.pem                    100% 1273   400.1KB/s   00:00
[root@node1 ~]#

Copy the kubelet and kube-proxy unit files to node02

[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.100.6:/usr/lib/systemd/system/
root@192.168.100.6's password:
kubelet.service                               100%  264   140.7KB/s   00:00
kube-proxy.service                            100%  231   223.1KB/s   00:00
[root@node1 ~]#

On node02, make the changes

First delete the certificates that were copied over; node02 will request its own certificate shortly

[root@node2 ssl]# rm -rf *
[root@node2 ssl]# ls
//modify the three configuration files: kubelet, kubelet.config, and kube-proxy

The main change is replacing the IP addresses with node02's own address

Modify kubelet

[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.6 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Modify kubelet.config

[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.100.6
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Modify kube-proxy

[root@localhost cfg]# vim kube-proxy


KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.6 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Start the services and enable them at boot

[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl enable kubelet.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node2 cfg]#
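Note that the transcript above only enables kube-proxy on node02; it still needs an explicit start before it will program any rules:

systemctl start kube-proxy.service
systemctl status kube-proxy.service    # confirm it is active (running), as was done on node01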

9. On the master, approve the new request; both nodes have joined and the cluster is deployed

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR CONDITION
node-csr-D-6Qg-440uk6mAMVNkwmyAQbDSXH3r7GB9BjarecFvg   29m   kubelet-bootstrap Approved,Issued
node-csr-i_vp5QMHhC8wNg80tBhNBkrp0cxy34chkJIY0Ebd9cQ   87s   kubelet-bootstrap Pending
[root@master kubeconfig]# kubectl certificate approve  node-csr-i_vp5QMHhC8wNg80tBhNBkrp0cxy34chkJIY0Ebd9cQ
certificatesigningrequest.certificates.k8s.io/node-csr-i_vp5QMHhC8wNg80tBhNBkrp0cxy34chkJIY0Ebd9cQ approved

//list the nodes in the cluster

[root@master kubeconfig]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.100.5   Ready    <none>   18m   v1.12.3
192.168.100.6   Ready    <none>   11s   v1.12.3
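As an optional smoke test once both nodes are Ready, a throwaway deployment exercises scheduling, the flannel network, and kube-proxy in one go (a hedged example; in v1.12 kubectl run still creates a Deployment):

kubectl run nginx --image=nginx --replicas=2       # two pods, normally spread across the two nodes
kubectl get pods -o wide                           # the pods should get 172.17.x.x addresses on their nodes
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx                              # note the assigned NodePort (30000-50000 per the apiserver range)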