1. Cluster Theory Fundamentals
1.1 Flannel Networking Overview
- Overlay Network: a network virtualization technique that layers a virtual network on top of the underlying infrastructure; hosts in the overlay are connected to each other by virtual links
- VXLAN: encapsulates the original packet in UDP, using the underlay network's IP/MAC as the outer header, then transmits it over the physical network; at the destination, a tunnel endpoint decapsulates the packet and delivers the payload to the target address
- Flannel: one implementation of an overlay network; it likewise wraps the source packet inside another packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, GCE routes, and other forwarding backends
- Flannel is a network fabric designed by the CoreOS team for Kubernetes. In short, it gives the Docker containers created on every node in the cluster a cluster-wide unique virtual IP address, and it builds an overlay network between those addresses so that packets are delivered unchanged to the target container
- etcd's role here: acts as the backing store for Flannel
- stores and manages the IP address segments Flannel may allocate
- watches the actual address of each Pod recorded in etcd and maintains a Pod-to-node routing table in memory (a quick way to inspect these keys is sketched below)
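Once the cluster is up, these keys are easy to inspect. A minimal sketch using the etcdctl v2 keys API (the paths are flannel's defaults; the TLS flags assume the certificate layout used later in this post):

# List the subnet leases flannel has handed out, one key per node
/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.179.121:2379" \
  ls /coreos.com/network/subnets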
1.2 Components on Each Node
Hostname | IP Address | Components to Deploy |
---|---|---|
master | 192.168.179.121 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
node01 | 192.168.179.122 | kubelet, kube-proxy, docker, flannel, etcd |
node02 | 192.168.179.123 | kubelet, kube-proxy, docker, flannel, etcd |
Master components
- kube-apiserver: the cluster's unified entry point and the coordinator of all other components; every create/update/delete/watch on a resource object goes through the API server, which then persists the data to etcd.
- kube-controller-manager: handles the cluster's routine background tasks. Each resource type has a corresponding controller, and controller-manager is responsible for running all of these controllers.
- kube-scheduler: selects a node for each newly created Pod according to its scheduling algorithm; it can be deployed anywhere, on the same machine as the other master components or on a separate one.
Node components
- kubelet: the master's agent on each node. It manages the lifecycle of containers running on the local machine: creating containers, mounting Pod volumes, downloading secrets, and reporting container and node status. kubelet turns each Pod into a set of containers.
- kube-proxy: implements the Pod network proxy on each node, maintaining network rules and layer-4 load balancing.
- docker: the Docker engine
- flannel: the flannel network
About the etcd cluster: here etcd is deployed across all three machines.
etcd is an open-source project started by the CoreOS team in June 2013. Written in Go, its goal is a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm.
An etcd cluster stores its data without a central master and has the following characteristics:
1. Simple: easy to install and configure, and it exposes an HTTP API for interaction, so it is easy to use (a curl example follows this list)
2. Secure: supports SSL certificate verification
3. Fast: according to the official benchmarks, a single instance supports 2k+ read operations per second
4. Reliable: uses the Raft algorithm to keep the distributed data available and consistent
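As a tiny illustration of point 1, the v2 HTTP API can be driven with nothing but curl. A sketch against a hypothetical local plaintext endpoint (key and value are made up):

# Write a key via the v2 keys API, then read it back
curl -s http://127.0.0.1:2379/v2/keys/message -XPUT -d value="hello"
curl -s http://127.0.0.1:2379/v2/keys/message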
Self-signed SSL certificates used in the K8S cluster deployment
Component | Certificates Used |
---|---|
etcd | ca.pem, server.pem, server-key.pem |
flannel | ca.pem, server.pem, server-key.pem |
kube-apiserver | ca.pem, server.pem, server-key.pem |
kubelet | ca.pem, server.pem |
kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
kubectl | ca.pem, admin.pem, admin-key.pem |
2. Cluster Deployment
2.1 Environment Preparation
Kubernetes release page: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1
2.2 Deploying the etcd Database
On master
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# ls
etcd-cert.sh etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
Download the certificate generation tools
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh
[root@master k8s]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
Create the certificates
[root@master bin]# cd /root/k8s/etcd-cert
[root@master etcd-cert]# cat > ca-config.json <<EOF
> {
> "signing": {
> "default": {
> "expiry": "87600h"
> },
> "profiles": {
> "www": {
> "expiry": "87600h",
> "usages": [
> "signing",
> "key encipherment",
> "server auth",
> "client auth"
> ]
> }
> }
> }
> }
> EOF
[root@master etcd-cert]# cat > ca-csr.json <<EOF
> {
> "CN": "etcd CA",
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "Beijing",
> "ST": "Beijing"
> }
> ]
> }
> EOF
Generate the CA certificate; this produces ca-key.pem and ca.pem
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/29 08:37:31 [INFO] generating a new CA key and certificate from CSR
2020/09/29 08:37:31 [INFO] generate received request
2020/09/29 08:37:31 [INFO] received CSR
2020/09/29 08:37:31 [INFO] generating key: rsa-2048
2020/09/29 08:37:31 [INFO] encoded CSR
2020/09/29 08:37:31 [INFO] signed certificate with serial number 730669216531377854223169782872376826407987316251
Specify the hosts used to verify communication among the three etcd nodes
[root@master etcd-cert]# cat > server-csr.json <<EOF
> {
> "CN": "etcd",
> "hosts": [
> "192.168.179.121",
> "192.168.179.122",
> "192.168.179.123"
> ],
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "BeiJing",
> "ST": "BeiJing"
> }
> ]
> }
> EOF
Generate the etcd server certificate; this produces server-key.pem and server.pem
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/29 08:38:31 [INFO] generate received request
2020/09/29 08:38:31 [INFO] received CSR
2020/09/29 08:38:31 [INFO] generating key: rsa-2048
2020/09/29 08:38:31 [INFO] encoded CSR
2020/09/29 08:38:31 [INFO] signed certificate with serial number 679663307295813602896208472994631875032032783708
2020/09/29 08:38:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master etcd-cert]# ls
ca-config.json ca.pem server-csr.json
ca.csr etcd-cert.sh server-key.pem
ca-csr.json server.pem ca-key.pem server.csr
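If you want to double-check what was signed, cfssl ships an inspector (not run in the original transcript; the SAN list should show all three etcd node IPs):

# Dump the generated server certificate as JSON, including sans and usages
cfssl-certinfo -cert server.pem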
etcd binary releases:
https://github.com/etcd-io/etcd/releases
[root@master k8s]# ls
etcd-cert etcd.sh etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
Create the config, binary, and certificate directories
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
Copy in the certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
The script blocks here, waiting for the other nodes to join
[root@master k8s]# bash etcd.sh etcd01 192.168.179.121 etcd02=https://192.168.179.122:2380,etcd03=https://192.168.179.123:2380
Open another session and you will see the etcd process is already running
[root@master ~]# ps -ef |grep etcd
root 11590 10701 0 08:44 pts/1 00:00:00 bash etcd.sh etcd01 192.168.179.121 etcd02=https://192.168.179.122:2380,etcd03=https://192.168.179.123:2380
root 11637 11590 0 08:44 pts/1 00:00:00 systemctl restart etcd
root 11643 1 5 08:44 ? 00:00:01 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.179.121:2380 --listen-client-urls=https://192.168.179.121:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.179.121:2379 --initial-advertise-peer-urls=https://192.168.179.121:2380 --initial-cluster=etcd01=https://192.168.179.121:2380,etcd02=https://192.168.179.122:2380,etcd03=https://192.168.179.123:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 11740 11661 0 08:44 pts/2 00:00:00 grep --color=auto etcd
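etcd.sh itself is not reproduced in this post. A minimal sketch of what such a script does, reconstructed from the flags visible in the ps output above and the config file shown below (treat the exact layout as an assumption):

#!/bin/bash
# Usage: bash etcd.sh <name> <ip> <other-peers>, matching the invocation above
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

# Config file in the same format as /opt/etcd/cfg/etcd shown further down
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# systemd unit wiring the config file to the binary and the TLS material
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd --name=\${ETCD_NAME} --data-dir=\${ETCD_DATA_DIR} --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=\${ETCD_INITIAL_CLUSTER} --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=${WORK_DIR}/ssl/server.pem --key-file=${WORK_DIR}/ssl/server-key.pem --peer-cert-file=${WORK_DIR}/ssl/server.pem --peer-key-file=${WORK_DIR}/ssl/server-key.pem --trusted-ca-file=${WORK_DIR}/ssl/ca.pem --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd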
Copy the certificates (the whole /opt/etcd tree) to the other nodes
[root@master k8s]# scp -r /opt/etcd/ root@192.168.179.122:/opt/
[root@master k8s]# scp -r /opt/etcd/ root@192.168.179.123:/opt/
Copy the systemd unit file to the other nodes
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.179.122:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.179.123:/usr/lib/systemd/system/
Modify the config on node01 (node02 is the same; just change the addresses to node02's IP)
[root@node01 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.179.122:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.179.122:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.179.122:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.179.122:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.179.121:2380,etcd02=https://192.168.179.122:2380,etcd03=https://192.168.179.123:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start etcd
[root@node01 ~]# systemctl start etcd
[root@node01 ~]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2020-09-29 08:50:00 CST; 3s ago
Main PID: 12375 (etcd)
Tasks: 13
Memory: 15.9M
CGroup: /system.slice/etcd.service
└─12375 /opt/etcd/bin/etcd --name=etcd02 --data-dir=/v...
Check the cluster health from master
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379" cluster-health
member 6d93f7bde92b30e6 is healthy: got healthy result from https://192.168.179.122:2379
member 6e61df96412834d4 is healthy: got healthy result from https://192.168.179.121:2379
member a058ec1f87f39abb is healthy: got healthy result from https://192.168.179.123:2379
cluster is healthy
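The same TLS flags work for any other v2 etcdctl subcommand, for example listing the members and their peer/client URLs:

# Show each member's ID, name, and URLs
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.179.121:2379" member list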
2.3 Deploying the Docker Engine
Deploy the Docker engine on all node machines.
See my earlier blog post on installing Docker:
https://blog.csdn.net/weixin_47153988/article/details/108657660
2.4 Flannel Network Configuration
On master, write the allocatable subnet into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
Read back what was written
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
On the node machines, extract the flannel package (the steps are identical on both nodes; node01 is shown)
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
Create the k8s working directory
[root@node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node01 ~]# vim flannel.sh
#!/bin/bash
# etcd endpoints come in as $1; fall back to a local plaintext etcd if omitted
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
# systemd unit: flanneld runs before docker, and mk-docker-opts.sh exports
# docker's network options to /run/flannel/subnet.env
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
Enable the flannel network
[root@node01 ~]# bash flannel.sh https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Configure Docker to use flannel
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
......
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env 'added'
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock 'modified'
Check the bip subnet assigned at startup (note the MTU of 1450: VXLAN encapsulation adds 50 bytes of headers on a standard 1500-byte Ethernet MTU)
[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.27.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.27.1/24 --ip-masq=false --mtu=1450"
Restart the Docker service
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
Inspect the flannel network
[root@node01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.27.1 netmask 255.255.255.0 broadcast 172.17.27.255
ether 02:42:3f:1e:2e:e6 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.27.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::d8bd:67ff:fe23:df2e prefixlen 64 scopeid 0x20<link>
ether da:bd:67:23:df:2e txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 65 overruns 0 carrier 0 collisions 0
Test connectivity by pinging between centos:7 containers on the two nodes
[root@node01 ~]# docker run -it centos:7 /bin/bash
[root@694e7b921fb1 /]# yum install net-tools -y
[root@694e7b921fb1 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.27.2 netmask 255.255.255.0 broadcast 172.17.27.255
ether 02:42:ac:11:1b:02 txqueuelen 0 (Ethernet)
RX packets 15751 bytes 12464284 (11.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8097 bytes 440553 (430.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The same check on node02:
[root@node02 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.71.1 netmask 255.255.255.0 broadcast 172.17.71.255
ether 02:42:ec:92:72:4d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.71.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::cd5:ddff:feff:e65a prefixlen 64 scopeid 0x20<link>
ether 0e:d5:dd:ff:e6:5a txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 37 overruns 0 carrier 0 collisions 0
Start a centos:7 container on node02 in the same way:
[root@8414a3e7cdca /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.71.2 netmask 255.255.255.0 broadcast 172.17.71.255
ether 02:42:ac:11:47:02 txqueuelen 0 (Ethernet)
RX packets 15829 bytes 12467244 (11.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8019 bytes 436254 (426.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@8414a3e7cdca /]# ping 172.17.27.2 -c 3
PING 172.17.27.2 (172.17.27.2) 56(84) bytes of data.
64 bytes from 172.17.27.2: icmp_seq=1 ttl=62 time=0.802 ms
64 bytes from 172.17.27.2: icmp_seq=2 ttl=62 time=1.26 ms
64 bytes from 172.17.27.2: icmp_seq=3 ttl=62 time=1.46 ms
--- 172.17.27.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.802/1.176/1.460/0.277 ms
And from the node01 container back the other way:
[root@694e7b921fb1 /]# ping 172.17.71.2 -c 3
PING 172.17.71.2 (172.17.71.2) 56(84) bytes of data.
64 bytes from 172.17.71.2: icmp_seq=1 ttl=62 time=0.618 ms
64 bytes from 172.17.71.2: icmp_seq=2 ttl=62 time=1.35 ms
64 bytes from 172.17.71.2: icmp_seq=3 ttl=62 time=2.01 ms
--- 172.17.71.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.618/1.329/2.014/0.571 ms
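If the cross-node ping ever fails, two quick things to check on either node (standard iproute2/net-tools commands, not part of the original run):

# Confirm flannel created a VXLAN device (the detailed output shows the vxlan id and local IP)
ip -d link show flannel.1
# Confirm the route to the other node's container subnet points at flannel.1
route -n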
2.5 Deploying the Master Components
On master: generate the certificates for the API server
[root@master k8s]# unzip master.zip
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.179.121", 'maser1'
"192.168.179.124", 'maser2'
"192.168.179.100", 'vip'
"192.168.179.125", 'lb(master)'
"192.168.179.126", 'lb(backup)'
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Generate the k8s certificates
[root@master k8s-cert]# bash k8s-cert.sh
[root@master k8s-cert]# ls *pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
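Before copying the certificates into place, it is worth confirming the apiserver certificate actually carries every SAN from server-csr.json (plain openssl, assumed to be installed):

# The Subject Alternative Name block should list 10.0.0.1, all master/LB/VIP IPs, and the kubernetes.* DNS names
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"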
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
Extract the kubernetes tarball
[root@master k8s-cert]# cd ..
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd /root/k8s/kubernetes/server/bin/
Copy the key binaries into place
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
A random token can be generated with head -c 16 /dev/urandom | od -An -t x | tr -d ' '
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
e115a7ea9451a6f51405b787aff8130e
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
e115a7ea9451a6f51405b787aff8130e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
With the binaries, token, and certificates in place (the token.csv format is token,user,uid,"group"), start the apiserver
[root@master k8s]# bash apiserver.sh 192.168.179.121 https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Check that the process started successfully
[root@master k8s]# ps -aux |grep kube
root 12781 42.2 16.9 400796 315056 ? Ssl 09:14 0:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379 --bind-address=192.168.179.121 --secure-port=6443 --advertise-address=192.168.179.121 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 12804 0.0 0.0 112724 988 pts/2 S+ 09:14 0:00 grep --color=auto kube
View the generated configuration file
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379 \
--bind-address=192.168.179.121 \
--secure-port=6443 \
--advertise-address=192.168.179.121 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Check the listening ports: 6443 is the secure HTTPS port, and 8080 is the local insecure port that scheduler and controller-manager connect to
[root@master k8s]# netstat -ntap | grep 6443
tcp 0 0 192.168.179.121:6443 0.0.0.0:* LISTEN 12781/kube-apiserve
tcp 0 0 192.168.179.121:54488 192.168.179.121:6443 ESTABLISHED 12781/kube-apiserve
tcp 0 0 192.168.179.121:6443 192.168.179.121:54488 ESTABLISHED 12781/kube-apiserve
[root@master k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 12781/kube-apiserve
Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
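scheduler.sh is not reproduced in this post. A minimal sketch of what such a script contains, assuming the same layout as the other scripts here (the scheduler only needs to reach the apiserver's local 8080 port):

#!/bin/bash
MASTER_ADDRESS=$1

# The scheduler talks to the apiserver over the local insecure port;
# --leader-elect matters once there is more than one master
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=${MASTER_ADDRESS}:8080 --leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

controller-manager.sh follows the same pattern with the controller-manager's own flags.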
[root@master k8s]# chmod +x controller-manager.sh
Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Check the status of the master components
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
2.6 Deploying node01
On master, copy kubelet and kube-proxy to the node machines
[root@master k8s]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.179.122:/opt/kubernetes/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.179.123:/opt/kubernetes/bin/
On node01 (copy node.zip to /root first, then extract it)
[root@node01 ~]# unzip node.zip
Archive: node.zip
inflating: proxy.sh
inflating: kubelet.sh
Back on master
[root@master bin]# cd /root/k8s/
[root@master k8s]#
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
Rename the copied kubeconfig.sh file
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=075ec2a1103c089935b7a368b83f65e9 \ 'NOTE: replace this with the token from your own token.csv (here that is e115a7ea9451a6f51405b787aff8130e)'
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Set the PATH environment variable (this can also be written into /etc/profile)
[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/
[root@master kubeconfig]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
Generate the kubeconfig files
[root@master kubeconfig]# bash kubeconfig 192.168.179.121 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
User "kubelet-bootstrap" set.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
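You can sanity-check what was written without opening the files; kubectl will read any kubeconfig passed via --kubeconfig:

# Certificates are embedded, so each file is self-contained; secrets are redacted in the view
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig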
Copy the kubeconfig files to the nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.179.122:/opt/kubernetes/cfg/
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.179.123:/opt/kubernetes/cfg/
Create the bootstrap role binding so the bootstrap user may connect to the apiserver and request certificate signing (critical)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
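To confirm what the binding grants:

# kubelet-bootstrap is now allowed to create CertificateSigningRequests
kubectl get clusterrolebinding kubelet-bootstrap -o wide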
On node01
[root@node01 ~]# bash kubelet.sh 192.168.179.122
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Verify that the kubelet service started
[root@node01 ~]# ps aux | grep kube
root 12530 0.1 0.8 465176 15404 ? Ssl 10:01 0:01 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.179.121:2379,https://192.168.179.122:2379,https://192.168.179.123:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 15490 16.8 2.5 415652 48400 ? Ssl 10:24 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.179.122 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 15511 0.0 0.0 112728 988 pts/3 S+ 10:24 0:00 grep --color=auto kube
On master, check for the certificate request from node01
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-VkgiO5LiuDcRwcB34e8Pn6-pbCAF_8skmXWmtRoDYJk 16s kubelet-bootstrap Pending 'waiting for the cluster to issue this node a certificate'
Issue the certificate
[root@master kubeconfig]# kubectl certificate approve node-csr-VkgiO5LiuDcRwcB34e8Pn6-pbCAF_8skmXWmtRoDYJk
certificatesigningrequest.certificates.k8s.io/node-csr-VkgiO5LiuDcRwcB34e8Pn6-pbCAF_8skmXWmtRoDYJk approved
Check the CSR status again
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-VkgiO5LiuDcRwcB34e8Pn6-pbCAF_8skmXWmtRoDYJk 3m3s kubelet-bootstrap Approved,Issued 'the node has been allowed to join the cluster'
View the cluster nodes; node01 has joined successfully
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.179.122 Ready <none> 34s v1.12.3
On node01, start the kube-proxy service
[root@node01 ~]# bash proxy.sh 192.168.179.122
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-09-29 10:28:22 CST; 13s ago
Main PID: 16276 (kube-proxy)
Tasks: 0
Memory: 8.4M
CGroup: /system.slice/kube-proxy.service
‣ 16276 /opt/kubernetes/bin/kube-proxy --logtostderr=t...
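Because kube-proxy was started with --proxy-mode=ipvs, you can inspect the virtual-server table it maintains (this assumes the ipvsadm package and ip_vs kernel modules are present; the original run does not show this step):

# The kubernetes service (10.0.0.1:443) should show up as an IPVS virtual server
ipvsadm -Ln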
2.7 Deploying node02
On node01, copy the ready-made /opt/kubernetes directory to the other node; it only needs small modifications
[root@node01 ~]# scp -r /opt/kubernetes/ root@192.168.179.123:/opt/
Copy the kubelet and kube-proxy service unit files to node02 as well
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.179.123:/usr/lib/systemd/system/
On node02, make the modifications
First delete the copied certificates; node02 will request its own certificate shortly
[root@node02 ~]# cd /opt/kubernetes/ssl/
[root@node02 ssl]# rm -rf *
Modify the three configuration files: kubelet, kubelet.config, and kube-proxy
[root@node02 ssl]# cd ../cfg/
[root@node02 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.179.123 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node02 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.179.123
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
[root@node02 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.179.123 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Start the services
[root@node02 cfg]# systemctl start kubelet.service
[root@node02 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl start kube-proxy.service
[root@node02 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
On master, check the new certificate request
[root@master kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-VkgiO5LiuDcRwcB34e8Pn6-pbCAF_8skmXWmtRoDYJk 8m50s kubelet-bootstrap Approved,Issued
node-csr-qj_Sw5PR24UrKf4dBrDZ2PEF8F5oI_-5_4ClAwmyQbg 61s kubelet-bootstrap Pending
Approve it so the node can join the cluster
[root@master kubeconfig]# kubectl certificate approve node-csr-qj_Sw5PR24UrKf4dBrDZ2PEF8F5oI_-5_4ClAwmyQbg
certificatesigningrequest.certificates.k8s.io/node-csr-qj_Sw5PR24UrKf4dBrDZ2PEF8F5oI_-5_4ClAwmyQbg approved
View the nodes in the cluster
[root@master kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.179.122 Ready <none> 6m38s v1.12.3
192.168.179.123 Ready <none> 14s v1.12.3
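With both nodes Ready, a quick smoke test (not part of the original transcript) is to schedule a workload and confirm it lands on a flannel address:

# In v1.12, kubectl run creates a Deployment; the Pod IPs should come from 172.17.0.0/16
kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide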