Kubernetes Binary Deployment (Single Node)
Part 1: Deploying a K8S Cluster from Binaries with a Single Master
1.1: Topology and Host Allocation
Official releases: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1
Node/Hostname | IP | Services | Resources
---|---|---|---
master | 192.168.10.60 | kube-apiserver, kube-scheduler, kube-controller-manager, etcd | 2G RAM + 4 CPUs
node1 | 192.168.10.70 | kubelet, kube-proxy, docker, flannel, etcd | 2G RAM + 4 CPUs
node2 | 192.168.10.80 | kubelet, kube-proxy, docker, flannel, etcd | 2G RAM + 4 CPUs
1.2: Topology Overview
- Master components:
kube-apiserver: the cluster's unified entry point and the coordinator for the other components; all create/update/delete/query and watch operations on resource objects go through the API server, which then persists state to etcd.
kube-controller-manager: handles the cluster's routine background tasks; each resource type has a corresponding controller, and controller-manager is responsible for managing those controllers.
kube-scheduler: selects a node for each newly created pod according to the scheduling algorithm; it can be deployed anywhere, on the same node as the other components or on a different one.
- Node components:
kubelet: the master's agent on each node; it manages the lifecycle of locally running containers, such as creating containers, mounting pod volumes, downloading secrets, and reporting container and node status. The kubelet turns each pod into its set of containers.
kube-proxy: implements the pod network proxy on the node, maintaining network rules and layer-4 load balancing.
docker: the Docker engine
flannel: the flannel network
- Etcd cluster: here etcd is deployed across all three nodes.
etcd is an open-source project started by the CoreOS team in June 2013. Written in Go, it aims to be a highly available distributed key-value store, and it uses the Raft protocol internally as its consensus algorithm.
An etcd cluster is decentralized and has the following traits:
1. Simple: easy to install and configure, and it exposes an HTTP interface that is simple to use (see the curl sketch below)
2. Secure: supports SSL certificate verification
3. Fast: per the official benchmarks, a single instance supports 2k+ reads per second
4. Reliable: the Raft algorithm provides availability and consistency for the distributed data
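Trait 1's HTTP interface can be exercised with nothing more than curl. A minimal sketch, assuming the three-node TLS cluster built later in this guide (the endpoint and certificate paths below are the ones this guide creates):
curl --cacert /opt/etcd/ssl/ca.pem --cert /opt/etcd/ssl/server.pem --key /opt/etcd/ssl/server-key.pem -X PUT https://192.168.10.60:2379/v2/keys/demo -d value="hello" '//write a key over the v2 HTTP API'
curl --cacert /opt/etcd/ssl/ca.pem --cert /opt/etcd/ssl/server.pem --key /opt/etcd/ssl/server-key.pem https://192.168.10.60:2379/v2/keys/demo '//read it back as JSON'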
Self-signed SSL certificates used in this K8S deployment:

Component | Certificates
---|---
etcd | ca.pem, server.pem, server-key.pem
flannel | ca.pem, server.pem, server-key.pem
kube-apiserver | ca.pem, server.pem, server-key.pem
kubelet | ca.pem, ca-key.pem
kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl | ca.pem, admin.pem, admin-key.pem
Part 2: Etcd Deployment
2.1: Master Deployment
# Run each command on the matching host:
hostnamectl set-hostname master '//on 192.168.10.60'
hostnamectl set-hostname node1 '//on 192.168.10.70'
hostnamectl set-hostname node2 '//on 192.168.10.80'
# On all three hosts, flush firewall rules and put SELinux in permissive mode:
iptables -F
setenforce 0
1. On the master, create a k8s directory, upload the etcd scripts, and download the official cfssl certificate tools
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# ls '//the scripts were copied in from the host machine'
etcd-cert.sh etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
2. Download the certificate tools (note: pre-downloaded binaries can also be copied in and used directly)
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh '//run the download script'
[root@master k8s]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
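Optionally confirm the tools are usable before continuing:
[root@master k8s]# cfssl version '//prints version and revision'
[root@master k8s]# ls -l /usr/local/bin/cfssl* '//all three binaries should be executable'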
3. Generate the certificates
#cfssl generates certificates; cfssljson writes certificate files from the JSON cfssl emits; cfssl-certinfo inspects certificate information
#Define the CA configuration (the 87600h expiry below is 10 years)
[root@master k8s]# cd etcd-cert
[root@master etcd-cert]# cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h" '//有效期10年'
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
#Create the CA certificate signing request
[root@master etcd-cert]# cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
#Generate the CA certificate: produces ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
4. Specify the certificate used for communication among the three etcd nodes; be sure to change the IPs here to match your hosts. (The quoted notes in the hosts array are annotations only; strip them before saving, since JSON does not allow comments.)
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.10.60", "master地址"
"192.168.10.70", "node1地址"
"192.168.10.80" "node2地址"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
#Generate the etcd server certificate: produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
#Check the generated certificates
[root@master etcd-cert]# ls
ca-config.json ca-csr.json ca.pem server.csr server-key.pem
ca.csr ca-key.pem etcd-cert.sh server-csr.json server.pem
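Optionally, cfssl-certinfo (installed earlier) can confirm the three node IPs made it into the server certificate:
[root@master etcd-cert]# cfssl-certinfo -cert server.pem
'//the "sans" field in the output should list 192.168.10.60, 192.168.10.70 and 192.168.10.80'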
5. Deploy the etcd service
#Official download: https://github.com/etcd-io/etcd/releases
#Here the packages are uploaded locally instead: etcd-v3.3.10-linux-amd64.tar.gz, kubernetes-server-linux-amd64.tar.gz, flannel-v0.10.0-linux-amd64.tar.gz
[root@master k8s]# ls
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz
etcd.sh kubernetes-server-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64 flannel-v0.10.0-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p '//create the config, binary, and certificate directories'
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/ '//move the binaries into the bin directory just created'
#Copy the certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/ '//copy the certificate files into the ssl directory just created'
[root@master k8s]# bash etcd.sh etcd01 192.168.10.60 etcd02=https://192.168.10.70:2380,etcd03=https://192.168.10.80:2380 '//this blocks waiting for the other nodes to join; check from another terminal'
[root@master ~]# ps -ef | grep etcd
6. Copy the certificates and the service unit to the other node nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.10.70:/opt/
[root@master k8s]# scp -r /opt/etcd/ root@192.168.10.80:/opt
#Copy the service unit
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.10.70:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.10.80:/usr/lib/systemd/system/
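The four scp commands above can also be written as one small loop; a sketch assuming the same two node IPs:
for ip in 192.168.10.70 192.168.10.80; do
scp -r /opt/etcd/ root@$ip:/opt/
scp /usr/lib/systemd/system/etcd.service root@$ip:/usr/lib/systemd/system/
done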
2.2: Node Deployment
- node01 deployment
#Edit the config file
[root@node01 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" "change this to etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.70:2380" "change to node1's address"
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.70:2379" "change to node1's address"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.70:2380" "change to node1's address"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.70:2379" "change to node1's address"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.10.60:2380,etcd02=https://192.168.10.70:2380,etcd03=https://192.168.10.80:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#Start etcd
[root@node01 ssl]# systemctl start etcd
[root@node01 ssl]# systemctl status etcd
[root@node01 ssl]# systemctl enable etcd
- node02 deployment
#Edit the config file
[root@node02 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03" "此处修改为etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.10.80:2380" "修改为nodde3地址"
ETCD_LISTEN_CLIENT_URLS="https://192.168.10.80:2379" "修改为nodde3地址"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.80:2380" "修改为nodde3地址"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.80:2379" "修改为nodde3地址"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.10.60:2380,etcd02=https://192.168.10.70:2380,etcd03=https://192.168.10.80:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#Start etcd
[root@node02 ssl]# systemctl start etcd
[root@node02 ssl]# systemctl status etcd
[root@node02 ssl]# systemctl enable etcd
2.3: Check Cluster Health
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379" cluster-health
member 257ab5cb19142f4b is healthy: got healthy result from https://192.168.10.60:2379
member 777f7eb10e389e47 is healthy: got healthy result from https://192.168.10.70:2379
member eac869b8bd29e072 is healthy: got healthy result from https://192.168.10.80:2379
cluster is healthy
'//check the cluster status; note the relative certificate paths, so run this from the etcd-cert directory'
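Besides cluster-health, the same v2 etcdctl flags support member list; a quick cross-check:
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379" member list
'//each of the three members should be listed with its peer and client URLs'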
Part 3: Docker Engine on the Nodes and Flannel Network Configuration
- Network theory
- Overlay Network: a virtualized network layered on top of a base network, in which hosts are connected by virtual links
- VXLAN: encapsulates the source packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, transmits it over Ethernet, and at the destination the tunnel endpoint decapsulates it and delivers the data to the target address
- Flannel: a kind of overlay network; it likewise encapsulates the source packet inside another packet for routing and forwarding, and it currently supports UDP, VXLAN, AWS VPC, and GCE route backends
- Flannel is a network fabric designed by the CoreOS team for Kubernetes. In short, it gives Docker containers created on different cluster nodes virtual IP addresses that are unique across the whole cluster, and it builds an overlay network between those addresses through which packets are delivered unchanged into the target container
- Etcd's role here: it backs flannel's configuration
- it stores and manages the IP range resources flannel may allocate (see the sketch after this list)
- it monitors the actual address of each Pod in etcd, and builds and maintains the in-memory Pod-to-node routing table
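Once flannel is running on the nodes (configured in the steps below), the subnet leases it writes can be inspected directly; a sketch using the v2 etcdctl from the etcd-cert directory:
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.60:2379" ls /coreos.com/network/subnets
'//prints one key per node, e.g. /coreos.com/network/subnets/172.17.96.0-24'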
- Flannel network configuration
//deploy the docker engine on all node nodes first (see the docker install script)
//the master assigns the etcd network range
1. On the master, write the allocated subnet range into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
2. Check the information that was written (note: etcdctl get takes only the key; the JSON below is its output)
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
3. Copy the flannel package to all node nodes (it only needs to be deployed on the nodes)
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.10.70:/root
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.10.80:/root
'//any host that runs pods needs the flannel network'
//extract the tarball on every node node
#####node01
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
1. Create the k8s working directory
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
2. Write the flanneld setup script (the outer here-doc delimiter is quoted so the inner EOF blocks and the $ variables are written to the file verbatim)
[root@node1 ~]# cat > flannel.sh <<'EOT'
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
EOT
3. Enable the flannel network
[root@node1 ~]# bash flannel.sh https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
4. Configure Docker to use flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env "add this line"
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock "add $DOCKER_NETWORK_OPTIONS"
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
//note: --bip sets the subnet the docker bridge uses at startup; combined, the options become:
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
5. Restart the Docker service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
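After the restart, docker0 should sit inside the subnet flannel leased to this node; a quick check (the leased range varies per run, so compare against your own subnet.env):
[root@node1 ~]# grep BIP /run/flannel/subnet.env
[root@node1 ~]# ip addr show docker0 | grep 'inet '
'//the docker0 address must fall inside the --bip subnet from subnet.env'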
6. Check the flannel network
[root@node1 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.96.1 netmask 255.255.255.0 broadcast 172.17.96.255
inet6 fe80::42:79ff:fe02:dfdb prefixlen 64 scopeid 0x20<link>
ether 02:42:79:02:df:db txqueuelen 0 (Ethernet)
RX packets 7507 bytes 303781 (296.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15473 bytes 12452478 (11.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.70 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::f11d:b7bb:3c68:439d prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:fa:0f:d0 txqueuelen 1000 (Ethernet)
RX packets 454322 bytes 184289318 (175.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 370417 bytes 45997372 (43.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.96.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::b814:d3ff:feaf:3840 prefixlen 64 scopeid 0x20<link>
ether ba:14:d3:af:38:40 txqueuelen 0 (Ethernet)
RX packets 8 bytes 672 (672.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 672 (672.0 B)
TX errors 0 dropped 27 overruns 0 carrier 0 collisions 0
#####node02
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
1. Create the k8s working directory
[root@node2 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
2. Write the same flanneld setup script as on node01 (outer here-doc delimiter quoted, as before)
[root@node2 ~]# cat > flannel.sh <<'EOT'
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
EOT
3. Enable the flannel network
[root@node2 ~]# bash flannel.sh https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
4. Configure Docker to use flannel
[root@node2 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env "add this line"
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock "add $DOCKER_NETWORK_OPTIONS"
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
//note: --bip sets the subnet the docker bridge uses at startup; combined, the options become:
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
5. Restart the Docker service
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
6. Check the flannel network
[root@node2 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.71.1 netmask 255.255.255.0 broadcast 172.17.71.255
inet6 fe80::42:a6ff:fe60:fc52 prefixlen 64 scopeid 0x20<link>
ether 02:42:a6:60:fc:52 txqueuelen 0 (Ethernet)
RX packets 6647 bytes 269381 (263.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 17331 bytes 12977671 (12.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.80 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::f11d:b7bb:3c68:439d prefixlen 64 scopeid 0x20<link>
inet6 fe80::db8e:6d0f:751c:665e prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:d6:06:b8 txqueuelen 1000 (Ethernet)
RX packets 461795 bytes 186092043 (177.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 388485 bytes 48157854 (45.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.71.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::304a:44ff:fe4d:f31f prefixlen 64 scopeid 0x20<link>
ether 32:4a:44:4d:f3:1f txqueuelen 0 (Ethernet)
RX packets 8 bytes 672 (672.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 672 (672.0 B)
TX errors 0 dropped 2837 overruns 0 carrier 0 collisions 0
- Test: ping the peer node's docker0 subnet to prove flannel routes between hosts
[root@node1 ~]# docker run -it centos:7 /bin/bash
[root@5f9a65565b53 /]# yum install net-tools -y
[root@5f9a65565b53 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.84.2 netmask 255.255.255.0 broadcast 172.17.84.255
ether 02:42:ac:11:54:02 txqueuelen 0 (Ethernet)
RX packets 18192 bytes 13930229 (13.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6179 bytes 337037 (329.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node2 ~]# docker run -it centos:7 /bin/bash
[root@abbc159a6378 /]# yum install net-tools -y
[root@abbc159a6378 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.36.2 netmask 255.255.255.0 broadcast 172.17.36.255
ether 02:42:ac:11:54:02 txqueuelen 0 (Ethernet)
RX packets 18192 bytes 13930229 (13.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6179 bytes 337037 (329.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
#Test
[root@abbc159a6378 /]# ping 172.17.84.2
[root@5f9a65565b53 /]# ping 172.17.36.2
"if the containers can ping each other, containers can reach one another across hosts"
Part 4: Deploying the Master Components
The figure referenced here (omitted) shows how a node's kubelet bootstraps. Following that flow, on the master we bind the kubelet-bootstrap user into the cluster, then set up certificate authentication so the node nodes can be detected by and successfully connect to the master.
1. On the master: generate the api-server certificates
[root@master k8s]# unzip master.zip
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p "create the config, binary, and certificate directories"
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# ls "upload k8s-cert.sh here"
k8s-cert.sh
[root@master k8s-cert]# cat k8s-cert.sh "(the quoted notes in this listing are annotations, not part of the file)"
cat > ca-config.json <<EOF "the CA config"
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF "the CA signing request"
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - "create the CA; this produces ca.pem and ca-key.pem"
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1", "Cloud vip地址,这里不用修改"
"127.0.0.1", "本地地址"
"192.168.10.60", "master1地址,这里生成证书,规划一下地址授权证书,方便后续多节点部署"
"192.168.10.50", "master2地址"
"192.168.10.200", "vip"
"192.168.10.90", "loadbalance(master)"
"192.168.10.100", "loadbalance(backup)"
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing", "名称,可以自定义"
"ST": "BeiJing", "名称,可以自定义"
"O": "k8s",
"OU": "System"
}
]
}
EOF
#Generate the server certificate; this command produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF "the admin signing request"
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
#Generate the admin certificate; this produces admin.pem and admin-key.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF "the kube-proxy signing request"
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
#生成代理端的证书,会生成kube-proxy-key.pem kube-proxy.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2. Generate the certificates
[root@master k8s-cert]# bash k8s-cert.sh "generate the certificates"
2020/09/29 11:12:46 [INFO] generating a new CA key and certificate from CSR
2020/09/29 11:12:46 [INFO] generate received request
2020/09/29 11:12:46 [INFO] received CSR
2020/09/29 11:12:46 [INFO] generating key: rsa-2048
2020/09/29 11:12:46 [INFO] encoded CSR
2020/09/29 11:12:46 [INFO] signed certificate with serial number 575323914368864518903971181616117945194109123613
2020/09/29 11:12:46 [INFO] generate received request
2020/09/29 11:12:46 [INFO] received CSR
2020/09/29 11:12:46 [INFO] generating key: rsa-2048
2020/09/29 11:12:46 [INFO] encoded CSR
2020/09/29 11:12:46 [INFO] signed certificate with serial number 19442952878245338229215888248858353145076376242
2020/09/29 11:12:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/09/29 11:12:46 [INFO] generate received request
2020/09/29 11:12:46 [INFO] received CSR
2020/09/29 11:12:46 [INFO] generating key: rsa-2048
2020/09/29 11:12:47 [INFO] encoded CSR
2020/09/29 11:12:47 [INFO] signed certificate with serial number 643329711468066262998605878730691760982210264026
2020/09/29 11:12:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/09/29 11:12:47 [INFO] generate received request
2020/09/29 11:12:47 [INFO] received CSR
2020/09/29 11:12:47 [INFO] generating key: rsa-2048
2020/09/29 11:12:47 [INFO] encoded CSR
2020/09/29 11:12:47 [INFO] signed certificate with serial number 219428069397866764814594718074303936912903308003
2020/09/29 11:12:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
apiserver.sh etcd-v3.3.10-linux-amd64 master.zip
controller-manager.sh etcd-v3.3.10-linux-amd64.tar.gz scheduler.sh
etcd-cert k8s-cert
etcd.sh kubernetes-server-linux-amd64.tar.gz
3. Extract the k8s server tarball
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
4. Copy the key server binaries into the k8s working directory
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
5. Create the token and bind the kubelet-bootstrap role
[root@master k8s]# cd /root/k8s
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' '//generate a random serial number'
0d8e1e148121fc25d8623239ae6cf7e0
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
0d8e1e148121fc25d8623239ae6cf7e0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#'//serial number, user name, uid, group; the master uses this user to manage the node nodes'
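The random-token step and the token.csv write-out can be folded into one small script, a sketch that produces the same file format:
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ') '//same random serial as above'
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF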
6. Start the apiserver (cluster state is stored in the etcd cluster) and check the kube processes
[root@master k8s]# bash apiserver.sh 192.168.10.60 https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master k8s]# ps aux | grep kube "check that the process started successfully"
[root@master ~]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379 \
--bind-address=192.168.10.60 \
--secure-port=6443 \
--advertise-address=192.168.10.60 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.10.60:6443 0.0.0.0:* LISTEN 69865/kube-apiserve
tcp 0 0 192.168.10.60:6443 192.168.10.60:53210 ESTABLISHED 69865/kube-apiserve
tcp 0 0 192.168.10.60:53210 192.168.10.60:6443 ESTABLISHED 69865/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 69865/kube-apiserve
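With the insecure port listening on 127.0.0.1:8080, the apiserver can be exercised directly; a quick functional check (these are standard Kubernetes REST paths):
[root@master k8s]# curl http://127.0.0.1:8080/version '//returns the apiserver build version as JSON'
[root@master k8s]# curl http://127.0.0.1:8080/api/v1/nodes '//the node list; empty until nodes register'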
7. Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix 68074 0.0 0.1 91732 4080 ? S 10:07 0:00 pickup -l -t unix -u
root 69865 14.4 8.0 401580 311244 ? Ssl 11:43 0:09
8. Start controller-manager
[root@master k8s]# chmod +x controller-manager.sh
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
9. Check the status of the master components
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
- Node deployment
1. On the master, copy kubelet and kube-proxy to the node nodes
[root@master bin]# scp kubelet kube-proxy root@192.168.10.70:/opt/kubernetes/bin/
root@192.168.10.70's password:
kubelet 100% 168MB 74.8MB/s 00:02
kube-proxy 100% 48MB 97.6MB/s 00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.10.80:/opt/kubernetes/bin/
root@192.168.10.80's password:
kubelet 100% 168MB 101.4MB/s 00:01
kube-proxy 100% 48MB 102.3MB/s 00:00
2. On node01 (copy node.zip into /root, then extract it)
[root@localhost ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip 公共 视频 文档 音乐
flannel.sh initial-setup-ks.cfg README.md 模板 图片 下载 桌面
//extracting node.zip yields kubelet.sh and proxy.sh
[root@node1 ~]# unzip node.zip
3. On the master, create a kubeconfig directory
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
//copy in kubeconfig.sh and rename it
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
0d8e1e148121fc25d8623239ae6cf7e0,kubelet-bootstrap,10001,"system:kubelet-bootst
[root@master kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=0d8e1e148121fc25d8623239ae6cf7e0 \ '//this token is the one written to /opt/kubernetes/cfg/token.csv earlier'
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/ '//set the PATH (this can be made permanent in /etc/profile)'
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
4. Generate the kubeconfig files and copy them to the node nodes
[root@master kubeconfig]# bash kubeconfig 192.168.10.60 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
User "kubelet-bootstrap" set.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
#Copy the config files to the node nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.10.70:/opt/kubernetes/cfg/
root@192.168.10.70's password:
bootstrap.kubeconfig 100% 2169 1.4MB/s 00:00
kube-proxy.kubeconfig 100% 6275 5.8MB/s 00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.10.80:/opt/kubernetes/cfg/
root@192.168.10.80's password:
bootstrap.kubeconfig 100% 2169 352.8KB/s 00:00
kube-proxy.kubeconfig 100% 6275 3.3MB/s 00:00
5. Create the bootstrap clusterrolebinding granting permission to request certificate signing from the apiserver (critical)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
- Operations on the nodes
6. On node01, generate the kubelet and kubelet.config configuration files
#------------------------------node1
#Create the kubelet config file and service script
[root@node1 ~]# bash kubelet.sh 192.168.10.70
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
#Check that the kubelet service is running
[root@node1 ~]# ps aux | grep kube
root 10206 0.0 0.6 391444 18372 ? Ssl 07:55 0:11 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 32918 3.2 1.5 405340 45420 ? Ssl 11:57 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.10.70 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 32952 0.0 0.0 112724 988 pts/0 S+ 11:57 0:00 grep --color=auto kube
7. On the master, check node01's request and the certificate status
#------------------------------on the master
#Check the request from node01
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 30s kubelet-bootstrap Pending "(waiting for the cluster to issue the node's certificate)"
8. Approve the certificate, then check the status again
[root@master kubeconfig]# kubectl certificate approve node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw
certificatesigningrequest.certificates.k8s.io/node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw approved "the master authorizes the node to join the cluster"
#Check the certificate status again
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 6m19s kubelet-bootstrap Approved,Issued "(now allowed to join the cluster)"
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 6m19s kubelet-bootstrap Approved,Issued
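When several nodes are joining at once, the pending requests can also be approved in bulk; a convenience sketch, not part of the original flow:
[root@master kubeconfig]# kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve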
9. Check the cluster state and start the proxy service
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.10.70 Ready <none> 31s v1.12.3
#'//if a single node is NotReady, check its kubelet; if many nodes are NotReady, check the apiserver, then the VIP address and keepalived'
#---------------------------on node1: start the proxy service
[root@node1 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip
docker-install.sh initial-setup-ks.cfg proxy.sh
flannel.sh kubelet.sh README.md
[root@node1 ~]# bash proxy.sh 192.168.10.70
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since 二 2020-09-29 12:04:50 CST; 9s ago
Main PID: 34171 (kube-proxy)
Tasks: 0
Memory: 8.2M
CGroup: /system.slice/kube-proxy.service
‣ 34171 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 -...
#Deploy node2
#----------------------------on node01
#Copying the finished /opt/kubernetes directory to the other node and adjusting it there is enough
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.10.80:/opt/
The authenticity of host '192.168.10.80 (192.168.10.80)' can't be established.
ECDSA key fingerprint is SHA256:Trgq8H42gLPWzLQwEQsUy4Nr+JjMnVD2KsW87Mw1cQw.
ECDSA key fingerprint is MD5:67:66:97:b6:c4:91:71:fd:e0:2f:42:cb:75:9b:10:29.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.10.80' (ECDSA) to the list of known hosts.
root@192.168.10.80's password:
flanneld 100% 241 208.9KB/s 00:00
bootstrap.kubeconfig 100% 2169 2.4MB/s 00:00
kube-proxy.kubeconfig 100% 6275 7.5MB/s 00:00
kubelet 100% 379 419.0KB/s 00:00
kubelet.config 100% 269 274.9KB/s 00:00
kubelet.kubeconfig 100% 2298 2.2MB/s 00:00
kube-proxy 100% 191 122.4KB/s 00:00
mk-docker-opts.sh 100% 2139 2.3MB/s 00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet 100% 168MB 114.2MB/s 00:01
kube-proxy 100% 48MB 110.5MB/s 00:00
kubelet.crt 100% 2197 435.0KB/s 00:00
kubelet.key 100% 1679 1.5MB/s 00:00
kubelet-client-2020-09-29-12-03-29.pem 100% 1277 504.1KB/s 00:00
kubelet-client-current.pem 100% 1277 1.2MB/s 00:00
#Copy the kubelet and kube-proxy service files to node2
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.10.80:/usr/lib/systemd/system/
root@192.168.10.80's password:
kubelet.service 100% 264 159.9KB/s 00:00
kube-proxy.service 100% 231 302.4KB/s 00:00
[root@node1 ~]# systemctl enable kubelet.service
#------------------------------on node2
1. Change the IP addresses in the three configuration files
#First delete the certificates copied over; node02 will request its own
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# ls
kubelet-client-2020-09-29-12-03-29.pem kubelet.crt
kubelet-client-current.pem kubelet.key
[root@node2 ssl]# rm -rf *
[root@node2 ssl]# ls
[root@node2 ssl]# cd ../cfg/
2. Modify the config files, then start the services and check their status
#Modify kubelet, kubelet.config, and kube-proxy (three config files); a sed shortcut follows the three listings
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.10.80 \ "change to node2's address"
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.10.80 "node2's address"
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.10.80 \ "node2's address"
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
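Since only the node IP differs from node01's copy, the three edits above can equally be done with sed; a sketch, so verify the files afterwards:
[root@node2 cfg]# sed -i 's/192.168.10.70/192.168.10.80/g' kubelet kubelet.config kube-proxy
[root@node2 cfg]# grep -n '192.168.10.80' kubelet kubelet.config kube-proxy '//confirm all three files now carry the node2 IP'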
#Start the services
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
3. On the master, review the request and approve node02's certificate
//on the master, the new request shows Pending
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow 47s kubelet-bootstrap Pending
node-csr-lk45yzxFkiUhV8b36fmhmFsZdqtD8JUWV1Vkiq9w7Nw 12m kubelet-bootstrap Approved,Issued
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.10.70 Ready <none> 6m26s v1.12.3
[root@master kubeconfig]# kubectl certificate approve node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow "authorize the request to join the cluster"
certificatesigningrequest.certificates.k8s.io/node-csr-Q22FXrUtwbkKu5b0LQcMbbyXYMuCMkGKUyH0ME1x2ow approved
"master查看群集中的节点"
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.10.70 Ready <none> 8m52s v1.12.3
192.168.10.80 Ready <none> 43s v1.12.3