声明:本文参考了博主阿龙的文章:https://www.cnblogs.com/along21/p/10044931.html,在此基础上
进一步实验总结得出的结论,填了一部分坑。
目录
01. 系统初始化
01-01.集群环境配置
VIP:192.168.1.100
k8s-master1:192.168.1.101
k8s-master2:192.168.1.102
k8s-master3:192.168.1.103
本文档中的 etcd 集群、master 节点、worker 节点均使用这三台机器。
以下初始化操作需要在每台服务器上执行;如果没有特殊指明,本文档后续的命令均在 k8s-master1 节点上执行,再通过脚本分发到其它节点。
设置永久主机名称,然后重新登录
$ sudo hostnamectl set-hostname k8s-master1
$ sudo hostnamectl set-hostname k8s-master2
$ sudo hostnamectl set-hostname k8s-master3
修改 /etc/hosts 文件,添加主机名和 IP 的对应关系:
$ vim /etc/hosts
192.168.1.101 k8s-master1
192.168.1.102 k8s-master2
192.168.1.103 k8s-master3
在每台机器上安装依赖包:
$ sudo yum install -y epel-release
$ sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
注:ipvs 依赖 ipset;
在每台机器上关闭防火墙:
① 关闭服务,并设为开机不自启
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
② 清空防火墙规则
$ sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
$ sudo iptables -P FORWARD ACCEPT
关闭 swap 分区:
1、如果开启了 swap 分区,kubelet 会启动失败(可以通过将 kubelet 参数 --fail-swap-on 设置为 false 来忽略 swap 开启的检查),故需要在每台机器上关闭 swap 分区:
$ sudo swapoff -a
2、为了防止开机自动挂载 swap 分区,可以注释 /etc/fstab 中相应的条目:
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
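注:可以用 free 命令确认 swap 已经全部关闭(输出中 Swap 一行应全为 0):
$ free -m | grep -i swap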
01-09.关闭 SELinux
1、关闭 SELinux,否则后续 K8S 挂载目录时可能报错 Permission denied :
$ sudo setenforce 0
2、修改配置文件,永久生效;
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
关闭 dnsmasq (可选)
linux 系统开启了 dnsmasq 后(如 GUI 环境),dnsmasq 会将系统 DNS Server 设置为 127.0.0.1,这会导致 docker 容器无法解析域名,需要关闭它:
$ sudo service dnsmasq stop
$ sudo systemctl disable dnsmasq
加载内核模块
$ sudo modprobe br_netfilter
$ sudo modprobe ip_vs
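注:modprobe 只对本次开机生效,重启后模块不会自动加载。如果希望开机自动加载,可以参考下面的做法(基于 systemd 的 modules-load 机制,文件名 k8s.conf 为示例):
$ sudo sh -c 'echo -e "br_netfilter\nip_vs" > /etc/modules-load.d/k8s.conf'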
01-12.设置系统参数
$ vim kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
$ sudo cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf
$ sudo mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct
注:
tcp_tw_recycle 和 Kubernetes 的 NAT 冲突,必须关闭 ,否则会导致服务不通;
关闭不使用的 IPV6 协议栈,防止触发 docker BUG;
设置系统时区:
1、调整系统 TimeZone
$ sudo timedatectl set-timezone Asia/Shanghai
2、将当前的 UTC 时间写入硬件时钟
$ sudo timedatectl set-local-rtc 0
3、重启依赖于系统时间的服务
$ sudo systemctl restart rsyslog
$ sudo systemctl restart crond
更新系统时间:
$ yum -y install ntpdate
$ sudo ntpdate cn.pool.ntp.org
在每台机器上创建目录:
$ sudo mkdir -p /opt/k8s/bin
$ sudo mkdir -p /opt/k8s/cert
$ sudo mkdir -p /opt/etcd/cert
$ sudo mkdir -p /opt/lib/etcd
$ sudo mkdir -p /opt/k8s/script
$ sudo mkdir -p /var/lib/kube-proxy
$ sudo chown -R k8s /opt/*
检查系统内核和模块是否适合运行 docker (仅适用于linux 系统)
$ curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
$ chmod +x check-config.sh
$ ./check-config.sh
01-02.配置无密码登录
添加 k8s 和 docker 账户
在每台机器上添加 k8s 账户,并为k8s 账户设置密码
$ sudo useradd -m k8s
$ sudo sh -c 'echo k8sadmin |passwd k8s --stdin'
01-04.修改 sudoers 权限,去掉 %wheel ALL=(ALL) NOPASSWD: ALL 这行的注释
$ sudo vim /etc/sudoers
%wheel ALL=(ALL) NOPASSWD: ALL
01-05.将k8s用户归到wheel组
$ sudo gpasswd -a k8s wheel
Adding user k8s to group wheel
$ id k8s
uid=1000(k8s) gid=1000(k8s) groups=1000(k8s),10(wheel)
01-06.在每台机器上添加 docker 账户,将 k8s 账户添加到 docker 组中,同时配置 dockerd 参数(注:安装完docker才有):
$ sudo useradd -m docker
$ sudo gpasswd -a k8s docker
$ sudo mkdir -p /opt/docker/
生成秘钥对
[root@k8s-master1 ~]# ssh-keygen #连续回车即可
将自己的公钥发给其他服务器
[root@k8s-master1 ~]# ssh-copy-id root@k8s-master1
[root@k8s-master1 ~]# ssh-copy-id root@k8s-master2
[root@k8s-master1 ~]# ssh-copy-id root@k8s-master3
[root@k8s-master1 ~]# ssh-copy-id k8s@k8s-master1
[root@k8s-master1 ~]# ssh-copy-id k8s@k8s-master2
[root@k8s-master1 ~]# ssh-copy-id k8s@k8s-master3
将可执行文件路径 /opt/k8s/bin 添加到 PATH 变量,在每台机器上添加环境变量:
$ sudo sh -c "echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >> /etc/profile.d/k8s.sh"
$ source /etc/profile.d/k8s.sh
02.创建 CA 证书和秘钥
为确保安全, kubernetes 系统各组件需要使用 x509 证书对通信进行加密和认证。
CA (Certificate Authority) 是自签名的根证书,用来签名后续创建的其它证书。
本文档使用 CloudFlare 的 PKI 工具集 cfssl 创建所有证书。
02-01.安装 cfssl 工具集
[root@k8s-master1 ~]# mkdir -p /opt/k8s/cert && sudo chown -R k8s /opt/k8s && cd /opt/k8s
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
[root@k8s-master1 ~]# mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
[root@k8s-master1 ~]# mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
[root@k8s-master1 ~]# chmod +x /opt/k8s/bin/*
02-02.创建根证书 (CA)
CA 证书是集群所有节点共享的,只需要创建一个 CA 证书,后续创建的所有证书都由它签名。
02-02-01 创建配置文件
CA 配置文件用于配置根证书的使用场景 (profile) 和具体参数 (usage,过期时间、服务端认证、客户端认证、加密等),后续在签名其它证书时需要指定特定场景。
[root@k8s-master1 ~]# cd /opt/k8s/cert
[root@k8s-master1 cert]# vim ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
注:
① signing :表示该证书可用于签名其它证书,生成的 ca.pem 证书中CA=TRUE ;
② server auth :表示 client 可以用该证书对 server 提供的证书进行验证;
③ client auth :表示 server 可以用该证书对 client 提供的证书进行验证;
02-02-02 创建证书签名请求文件
[root@k8s-master1 cert]# vim ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
注:
① CN: Common Name ,kube-apiserver 从证书中提取该字段作为请求的用户名(User Name),浏览器使用该字段验证网站是否合法;
② O: Organization ,kube-apiserver 从证书中提取该字段作为请求用户所属的组(Group);
③ kube-apiserver 将提取的 User、Group 作为 RBAC 授权的用户标识;
02-02-03 生成 CA 证书和私钥
[root@k8s-master1 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master1 cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
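注:可以用 cfssl-certinfo 查看生成的证书内容,核对 CN、O、有效期等字段是否符合预期:
[root@k8s-master1 cert]# /opt/k8s/bin/cfssl-certinfo -cert ca.pem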
02-02-04 分发证书文件
将生成的 CA 证书、秘钥文件、配置文件拷贝到所有节点的/opt/k8s/cert 目录下:
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_k8scert.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /opt/k8s/cert && chown -R k8s /opt/k8s"
scp /opt/k8s/cert/ca*.pem /opt/k8s/cert/ca-config.json k8s@${node_ip}:/opt/k8s/cert
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_k8scert.sh && /opt/k8s/script/scp_k8scert.sh
03.部署 kubectl 命令行工具
kubectl 是 kubernetes 集群的命令行管理工具,本文档介绍安装和配置它的步骤。
kubectl 默认从 ~/.kube/config 文件读取 kube-apiserver 地址、证书、用户名等信息,如果没有配置,执行 kubectl 命令时可能会出错:
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
kubectl 和 kubeconfig 只需要生成一次,生成的 kubeconfig 文件与机器无关,可分发到其它节点直接使用。
03-01.下载kubectl 二进制文件
下载和解压
kubectl二进制文件需要科学上网下载,我已经下载到我的网盘,有需要的小伙伴联系我~
[root@k8s-master1 ~]# wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz
[root@k8s-master1 ~]# tar -xzvf kubernetes-client-linux-amd64.tar.gz
[root@k8s-master1 ~]# cp /root/kubernetes/client/bin/kubectl /bin/
03-02.创建 admin 证书和私钥
kubectl 与 apiserver https 安全端口通信,apiserver 对提供的证书进行认证和授权。
kubectl 作为集群的管理工具,需要被授予最高权限。这里创建具有最高权限的admin 证书。
03-02-01 创建证书签名请求
[root@k8s-master1 ~]# cd /opt/k8s/cert/
[root@k8s-master1 ~]# vim admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "4Paradigm"
}
]
}
注:
① O 为 system:masters ,kube-apiserver 收到该证书后将请求的 Group 设置为system:masters;
② 预定义的 ClusterRoleBinding cluster-admin 将 Group system:masters 与Role cluster-admin 绑定,该 Role 授予所有 API的权限;
③ 该证书只会被 kubectl 当做 client 证书使用,所以 hosts 字段为空;
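注:等集群部署完成后,可以用下面的命令查看内置的 cluster-admin ClusterRoleBinding,核对 system:masters 组与 cluster-admin 角色的绑定关系(此时 apiserver 尚未部署,需等后续步骤完成后再执行):
[root@k8s-master1 ~]# kubectl describe clusterrolebinding cluster-admin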
03-02-02 生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master1 cert]# ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
03-03.创建和分发 kubeconfig 文件
03-03-01 创建kubeconfig文件
kubeconfig 为 kubectl 的配置文件,包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;
① 设置集群参数,(--server=${KUBE_APISERVER} ,指定IP和端口;我使用的是haproxy的VIP和端口;如果没有haproxy代理,就用实际服务的IP和端口;如:https://192.168.1.101:6443)
[root@k8s-master1 ~]# /root/kubernetes/client/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true --server=https://192.168.1.100:6444 \
--kubeconfig=/root/.kube/kubectl.kubeconfig
② 设置客户端认证参数
[root@k8s-master1 ~]# /root/kubernetes/client/bin/kubectl config set-credentials kube-admin \
--client-certificate=/opt/k8s/cert/admin.pem \
--client-key=/opt/k8s/cert/admin-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kubectl.kubeconfig
③ 设置上下文参数
[root@k8s-master1 ~]# /root/kubernetes/client/bin/kubectl config set-context kube-admin@kubernetes \
--cluster=kubernetes \
--user=kube-admin \
--kubeconfig=/root/.kube/kubectl.kubeconfig
④ 设置默认上下文
[root@k8s-master1 ~]# kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig
注:关于 kubernetes 认证的细节,会在后续文章中详细讲解
--certificate-authority :验证 kube-apiserver 证书的根证书;
--client-certificate 、 --client-key :刚生成的 admin 证书和私钥,连接 kube-apiserver 时使用;
--embed-certs=true :将 ca.pem 和 admin.pem 证书内容嵌入到生成的kubectl.kubeconfig 文件中(不加时,写入的是证书文件路径);
[root@k8s-master1 ~]# vim /opt/k8s/script/kubectl_environment.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /root/.kube"
scp /root/.kube/kubectl.kubeconfig root@${node_ip}:/root/.kube/config
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/kubectl_environment.sh && /opt/k8s/script/kubectl_environment.sh
03-03-02 验证kubeconfig文件
[root@k8s-master1 ~]# ls /root/.kube/kubectl.kubeconfig
/root/.kube/kubectl.kubeconfig
[root@k8s-master1 ~]# kubectl config view --kubeconfig=/root/.kube/kubectl.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.100:6444
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kube-admin
name: kube-admin@kubernetes
current-context: kube-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kube-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
03-03-03 分发 kubectl 和 kubeconfig 文件到所有使用 kubectl 命令的节点
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_kubectl.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /bin/kubectl k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
ssh k8s@${node_ip} "mkdir -p /home/k8s/.kube"
scp /root/.kube/config k8s@${node_ip}:/home/k8s/.kube/config
ssh root@${node_ip} "mkdir -p /root/.kube"
scp /root/.kube/config root@${node_ip}:/root/.kube/config
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_kubectl.sh && /opt/k8s/script/scp_kubectl.sh
04.部署 etcd 集群
etcd 是基于 Raft 的分布式 key-value 存储系统,由 CoreOS 开发,常用于服务发现、共享配置以及并发控制(如 leader 选举、分布式锁等)。kubernetes 使用 etcd 存储所有运行数据。
本文档介绍部署一个三节点高可用 etcd 集群的步骤:
① 下载和分发 etcd 二进制文件
② 创建 etcd 集群各节点的 x509 证书,用于加密客户端(如 etcdctl) 与 etcd 集群、etcd 集群之间的数据流;
③ 创建 etcd 的 systemd unit 文件,配置服务参数;
④ 检查集群工作状态;
04-01.下载etcd 二进制文件
到 https://github.com/coreos/etcd/releases 页面下载最新版本的发布包:
[root@k8s-master1 ~]#wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz
[root@k8s-master1 ~]# tar -xvf etcd-v3.3.7-linux-amd64.tar.gz
04-02.创建 etcd 证书和私钥
04-02-01 创建证书签名请求
[root@k8s-master1 ~]# cd /opt/etcd/cert
[root@k8s-master1 cert]# vim etcd-csr.json
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.1.101",
"192.168.1.102",
"192.168.1.103"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
注:hosts 字段指定授权使用该证书的 etcd 节点 IP 或域名列表,这里将 etcd 集群的三个节点 IP 都列在其中;
04-02-02 生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@k8s-master1 cert]# ls etcd*
etcd.csr etcd-csr.json etcd-key.pem etcd.pem
04-02-03 分发生成的证书和私钥到各 etcd 节点
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_etcd.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /root/etcd-v3.3.7-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
ssh root@${node_ip} "mkdir -p /opt/etcd/cert && chown -R k8s /opt/etcd/cert"
scp /opt/etcd/cert/etcd*.pem k8s@${node_ip}:/opt/etcd/cert/
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_etcd.sh && /opt/k8s/script/scp_etcd.sh
04-03.创建etcd 的systemd unit 模板及etcd 配置文件
04-03-01 创建etcd 的systemd unit 模板
[root@k8s-master1 ~]# vim /opt/etcd/etcd.service.template
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
User=k8s
Type=notify
WorkingDirectory=/opt/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
--data-dir=/opt/lib/etcd \
--name ##NODE_NAME## \
--cert-file=/opt/etcd/cert/etcd.pem \
--key-file=/opt/etcd/cert/etcd-key.pem \
--trusted-ca-file=/opt/k8s/cert/ca.pem \
--peer-cert-file=/opt/etcd/cert/etcd.pem \
--peer-key-file=/opt/etcd/cert/etcd-key.pem \
--peer-trusted-ca-file=/opt/k8s/cert/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--listen-peer-urls=https://##NODE_IP##:2380 \
--initial-advertise-peer-urls=https://##NODE_IP##:2380 \
--listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://##NODE_IP##:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd0=https://192.168.1.101:2380,etcd1=https://192.168.1.102:2380,etcd2=https://192.168.1.103:2380 \
--initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
注:
User :指定以 k8s 账户运行;
WorkingDirectory 、 --data-dir :指定工作目录和数据目录为/opt/lib/etcd ,需在启动服务前创建这个目录;
--name :指定节点名称,当 --initial-cluster-state 值为 new 时, --name 的参数值必须位于 --initial-cluster 列表中;
--cert-file 、 --key-file :etcd server 与 client 通信时使用的证书和私钥;
--trusted-ca-file :签名 client 证书的 CA 证书,用于验证 client 证书;
--peer-cert-file 、 --peer-key-file :etcd 与 peer 通信使用的证书和私钥;
--peer-trusted-ca-file :签名 peer 证书的 CA 证书,用于验证 peer 证书;
04-04.为各节点创建和分发 etcd systemd unit 文件
[root@k8s-master1 ~]# vim /opt/k8s/script/etcd_service.sh
NODE_NAMES=("etcd0" "etcd1" "etcd2")
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
#替换模板文件中的变量,为各节点创建 systemd unit 文件
for (( i=0; i < 3; i++ ));do
sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/g" -e "s/##NODE_IP##/${NODE_IPS[i]}/g" /opt/etcd/etcd.service.template > /opt/etcd/etcd-${NODE_IPS[i]}.service
done
#分发生成的 systemd unit 和etcd的配置文件:
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /opt/lib/etcd && chown -R k8s /opt/lib/etcd"
scp /opt/etcd/etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
[root@k8s-master1 script]# chmod +x /opt/k8s/script/etcd_service.sh && /opt/k8s/script/etcd_service.sh
[root@k8s-master1 script]# ll /opt/etcd/*.service
/opt/etcd/etcd-192.168.1.101.service
/opt/etcd/etcd-192.168.1.103.service
/opt/etcd/etcd-192.168.1.102.service
[root@k8s-master1 script]# ls /etc/systemd/system/etcd.service
/etc/systemd/system/etcd.service
04-05.启动 etcd 服务
[root@k8s-master1 script]# vim /opt/k8s/script/etcd.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
#启动 etcd 服务
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"
done
#检查启动结果,确保状态为 active (running)
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status etcd|grep Active"
done
#验证服务状态,输出均为healthy 时表示集群服务正常
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=https://${node_ip}:2379 \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem endpoint health
done
[root@k8s-master1 script]# chmod +x etcd.sh && ./etcd.sh
>>> 192.168.1.101
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
>>> 192.168.1.102
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
>>> 192.168.1.103
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
#确保状态为 active (running),否则查看日志,确认原因:$ journalctl -u etcd
>>> 192.168.1.101
Active: active (running)
>>> 192.168.1.102
Active: active (running)
>>> 192.168.1.103
Active: active (running)
#输出均为healthy 时表示集群服务正常
>>> 192.168.1.101
https://192.168.1.101:2379 is healthy: successfully committed proposal: took = 1.373318ms
>>> 192.168.1.102
https://192.168.1.102:2379 is healthy: successfully committed proposal: took = 2.371807ms
>>> 192.168.1.103
https://192.168.1.103:2379 is healthy: successfully committed proposal: took = 1.764309ms
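注:除了 endpoint health,还可以查看各节点的状态和当前的 leader(IS LEADER 列),以下为一个可选的检查示例:
[root@k8s-master1 script]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
-w table endpoint status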
05.部署 flannel 网络
kubernetes 要求集群内各节点(包括 master 节点)能通过 Pod 网段互联互通。flannel 使用 vxlan 技术为各节点创建一个可以互通的 Pod 网络,使用的端口为 UDP 8472,需要开放该端口(如公有云 AWS 等)。
flannel 第一次启动时,从 etcd 获取 Pod 网段信息,为本节点分配一个未使用的 /24段地址,然后创建 flannel.1 (也可能是其它名称,如 flannel1 等) 接口。
flannel 将分配的 Pod 网段信息写入 /run/flannel/docker 文件,docker 后续使用这个文件中的环境变量设置 docker0 网桥。
05-01.下载flanneld 二进制文件
到 https://github.com/coreos/flannel/releases 页面下载最新版本的发布包:
[root@k8s-master1 ~]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@k8s-master1 ~]# mkdir flannel && tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel
05-02.创建 flannel 证书和私钥
flannel 从 etcd 集群存取网段分配信息,而 etcd 集群启用了双向 x509 证书认证,所以需要为 flanneld 生成证书和私钥。
[root@k8s-master1 ~]# mkdir -p /opt/flannel/cert && cd /opt/flannel/cert
[root@k8s-master1 cert]# vim flanneld-csr.json
{
"CN": "flanneld",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
该证书只会被 flanneld 当做 client 证书使用,所以 hosts 字段为空;
05-02-02 生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@k8s-master1 cert]# ls
flanneld.csr flanneld-csr.json flanneld-key.pem flanneld.pem
05-02-03 将flanneld 二进制文件和生成的证书和私钥分发到所有节点
[root@k8s-master1 cert]# vim /opt/k8s/script/scp_flannel.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /root/flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
ssh root@${node_ip} "mkdir -p /opt/flannel/cert && chown -R k8s /opt/flannel"
scp /opt/flannel/cert/flanneld*.pem k8s@${node_ip}:/opt/flannel/cert
done
[root@k8s-master1 cert]# chmod +x /opt/k8s/script/scp_flannel.sh && /opt/k8s/script/scp_flannel.sh
05-03.向etcd 写入集群Pod 网段信息
注意:本步骤只需执行一次。
[root@k8s-master1 cert]# etcdctl \
--endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
set /atomic.io/network/config '{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}
05-04.创建 flanneld 的 systemd unit 文件
[root@k8s-master1 ~]# vim /opt/flannel/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
-etcd-cafile=/opt/k8s/cert/ca.pem \
-etcd-certfile=/opt/flannel/cert/flanneld.pem \
-etcd-keyfile=/opt/flannel/cert/flanneld-key.pem \
-etcd-endpoints=https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379 \
-etcd-prefix=/atomic.io/network \
-iface=ens33
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
注:
mk-docker-opts.sh 脚本将分配给 flanneld 的 Pod 子网网段信息写入/run/flannel/docker 文件,后续 docker 启动时使用这个文件中的环境变量配置 docker0 网桥;
flanneld 使用系统缺省路由所在的接口与其它节点通信,对于有多个网络接口(如内网和公网)的节点,可以用 -iface 参数指定通信接口,如上面的 ens33 接口;
flanneld 运行时需要 root 权限;
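注:flanneld 启动后(见 05-05),可以查看 /run/flannel/docker 的内容,大致如下(子网等数值以实际分配为准,下面仅作示意):
$ cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.30.72.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.30.72.1/24 --ip-masq=true --mtu=1450"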
05-05.分发flanneld systemd unit 文件到所有节点,启动并检查flanneld 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/flanneld_service.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
#分发 flanneld systemd unit 文件到所有节点
scp /opt/flannel/flanneld.service root@${node_ip}:/etc/systemd/system/
#启动 flanneld 服务
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
#检查启动结果
ssh k8s@${node_ip} "systemctl status flanneld|grep Active"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/flanneld_service.sh && /opt/k8s/script/flanneld_service.sh
注:确保状态为 active (running) ,否则查看日志,确认原因:
$ journalctl -u flanneld
05-06.检查分配给各 flanneld 的 Pod 网段信息
05-06-01 查看集群 Pod 网段(/16)
[root@k8s-master1 ~]# etcdctl \
--endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
get /atomic.io/network/config
输出:
{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}
05-06-02 查看已分配的 Pod 子网段列表(/24)
[root@k8s-master1 ~]# etcdctl \
--endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
ls /atomic.io/network/subnets
输出:
/atomic.io/network/subnets/10.30.72.0-24
/atomic.io/network/subnets/10.30.13.0-24
/atomic.io/network/subnets/10.30.19.0-24
05-06-03 查看某一 Pod 网段对应的节点 IP 和 flannel 接口地址
[root@k8s-master1 ~]# etcdctl \
--endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
get /atomic.io/network/subnets/10.30.72.0-24
输出:
{"PublicIP":"192.168.1.101","BackendType":"vxlan","BackendData":{"VtepMAC":"fe:20:82:76:fc:25"}}
05-06-04 验证各节点能通过 Pod 网段互通
[root@k8s-master1 ~]# vim /opt/k8s/script/ping_flanneld.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
#在各节点上部署 flannel 后,检查是否创建了 flannel 接口(名称可能为 flannel0、flannel.0、flannel.1 等)
ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
#在各节点上 ping 所有 flannel 接口 IP,确保能通
ssh ${node_ip} "ping -c 1 10.30.72.0"
ssh ${node_ip} "ping -c 1 10.30.13.0"
ssh ${node_ip} "ping -c 1 10.30.19.0"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/ping_flanneld.sh && /opt/k8s/script/ping_flanneld.sh
输出:
>>> 192.168.1.101
inet 10.30.72.0/32 scope global flannel.1
PING 10.30.72.0 (10.30.72.0) 56(84) bytes of data.
64 bytes from 10.30.72.0: icmp_seq=1 ttl=64 time=0.034 ms
--- 10.30.72.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
PING 10.30.13.0 (10.30.13.0) 56(84) bytes of data.
64 bytes from 10.30.13.0: icmp_seq=1 ttl=64 time=0.744 ms
--- 10.30.13.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.744/0.744/0.744/0.000 ms
PING 10.30.19.0 (10.30.19.0) 56(84) bytes of data.
64 bytes from 10.30.19.0: icmp_seq=1 ttl=64 time=0.721 ms
--- 10.30.19.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms
>>> 192.168.1.102
inet 10.30.13.0/32 scope global flannel.1
PING 10.30.72.0 (10.30.72.0) 56(84) bytes of data.
64 bytes from 10.30.72.0: icmp_seq=1 ttl=64 time=0.801 ms
--- 10.30.72.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.801/0.801/0.801/0.000 ms
PING 10.30.13.0 (10.30.13.0) 56(84) bytes of data.
64 bytes from 10.30.13.0: icmp_seq=1 ttl=64 time=0.038 ms
--- 10.30.13.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms
PING 10.30.19.0 (10.30.19.0) 56(84) bytes of data.
64 bytes from 10.30.19.0: icmp_seq=1 ttl=64 time=0.846 ms
--- 10.30.19.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.846/0.846/0.846/0.000 ms
>>> 192.168.1.103
inet 10.30.19.0/32 scope global flannel.1
PING 10.30.72.0 (10.30.72.0) 56(84) bytes of data.
64 bytes from 10.30.72.0: icmp_seq=1 ttl=64 time=0.579 ms
--- 10.30.72.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms
PING 10.30.13.0 (10.30.13.0) 56(84) bytes of data.
64 bytes from 10.30.13.0: icmp_seq=1 ttl=64 time=0.647 ms
--- 10.30.13.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms
PING 10.30.19.0 (10.30.19.0) 56(84) bytes of data.
64 bytes from 10.30.19.0: icmp_seq=1 ttl=64 time=0.037 ms
--- 10.30.19.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms
06.部署 master 节点
① kubernetes master 节点运行如下组件:
kube-apiserver
kube-scheduler
kube-controller-manager
② kube-scheduler 和 kube-controller-manager 可以以集群模式运行,通过 leader 选举产生一个工作进程,其它进程处于阻塞模式。
③ 对于 kube-apiserver,可以运行多个实例(本文档是 3 实例),但对其它组件需要提供统一的访问地址,该地址需要高可用。本文档使用 keepalived 和 haproxy 实现 kube-apiserver VIP 高可用和负载均衡。
④ 因为对master做了keepalived高可用,所以3台服务器都有可能会升成master服务器(主master宕机,会有从升级为主);因此所有的master操作,在3个服务器上都要进行。
1、下载最新版本的二进制文件
从 CHANGELOG 页面下载 server tarball 文件。这 2 个包(client 和 server)用迅雷很快就可以下载完成。
[root@k8s-master1 ~]# wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 ~]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 ~]# cd kubernetes/
[root@k8s-master1 kubernetes]# tar -xzvf kubernetes-src.tar.gz
2、将二进制文件拷贝到所有 master 节点
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_master.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /root/kubernetes/server/bin/* k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_master.sh && /opt/k8s/script/scp_master.sh
06-01.部署高可用组件
① 本文档讲解使用 keepalived 和 haproxy 实现 kube-apiserver 高可用的步骤:
keepalived 提供 kube-apiserver 对外服务的 VIP;
haproxy 监听 VIP,后端连接所有 kube-apiserver 实例,提供健康检查和负载均衡功能;
② 运行 keepalived 和 haproxy 的节点称为 LB 节点。由于 keepalived 是一主多备运行模式,故至少两个 LB 节点。
③ 本文档复用 master 节点的三台机器,haproxy 监听的端口(6444) 需要与 kube-apiserver的端口 6443 不同,避免冲突。
④ keepalived 在运行过程中周期检查本机的 haproxy 进程状态,如果检测到 haproxy 进程异常,则触发重新选主的过程,VIP 将飘移到新选出来的主节点,从而实现 VIP 的高可用。
⑤ 所有组件(如 kubeclt、apiserver、controller-manager、scheduler 等)都通过 VIP 和haproxy 监听的 6444 端口访问 kube-apiserver 服务。
06-01-01.配置haproxy服务
[root@k8s-master1 ~]# yum install -y keepalived haproxy
[root@k8s-master1 ~]# vim /etc/haproxy/haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /var/run/haproxy-admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
nbproc 1
defaults
log global
timeout connect 5000
timeout client 10m
timeout server 10m
listen admin_stats
bind 0.0.0.0:10080
mode http
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /status
stats realm welcome login\ Haproxy
stats auth k8s:k8sadmin
stats hide-version
stats admin if TRUE
listen k8s-master1
bind 0.0.0.0:6444
mode tcp
option tcplog
balance source
server 192.168.1.101 192.168.1.101:6443 check inter 2000 fall 2 rise 2 weight 1
server 192.168.1.102 192.168.1.102:6443 check inter 2000 fall 2 rise 2 weight 1
server 192.168.1.103 192.168.1.103:6443 check inter 2000 fall 2 rise 2 weight 1
注:
haproxy 在 10080 端口输出 status 信息;
haproxy 监听所有接口的 6444端口,该端口与环境变量 ${KUBE_APISERVER} 指定的端口必须一致;
server 字段列出所有kube-apiserver监听的 IP 和端口;
06-01-02 在其他服务器安装 haproxy、下发 haproxy 配置文件,并启动、检查 haproxy 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/haproxy.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
#安装haproxy
ssh root@${node_ip} "yum install -y keepalived haproxy"
#下发配置文件
scp /etc/haproxy/haproxy.cfg root@${node_ip}:/etc/haproxy
#启动检查haproxy服务
ssh root@${node_ip} "systemctl restart haproxy"
ssh root@${node_ip} "systemctl enable haproxy.service"
ssh root@${node_ip} "systemctl status haproxy|grep Active"
#检查 haproxy 是否监听 6444 端口
ssh root@${node_ip} "netstat -lnpt|grep haproxy"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/haproxy.sh && /opt/k8s/script/haproxy.sh
确保输出类似于:
>>> 192.168.1.101
haproxy.cfg 100% 920 1.6MB/s 00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
Active: active (running) since 日 2020-08-30 23:43:27 CST; 439ms ago
tcp 0 0 0.0.0.0:6444 0.0.0.0:* LISTEN 10261/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 10261/haproxy
>>> 192.168.1.102
haproxy.cfg 100% 920 632.1KB/s 00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
Active: active (running) since 日 2020-08-30 23:43:46 CST; 349ms ago
tcp 0 0 0.0.0.0:6444 0.0.0.0:* LISTEN 10197/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 10197/haproxy
>>> 192.168.1.103
haproxy.cfg 100% 920 717.7KB/s 00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
Active: active (running) since 日 2020-08-30 23:44:16 CST; 364ms ago
tcp 0 0 0.0.0.0:6444 0.0.0.0:* LISTEN 10201/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 10201/haproxy
06-01-03.配置keepalived 服务
keepalived 是一主(master)多备(backup)运行模式,故有两种类型的配置文件。
master 配置文件只有一份,backup 配置文件视节点数目而定,对于本文档而言,规划如下:
master: 192.168.1.101
backup:192.168.1.102、192.168.1.103
(1)在192.168.1.101 master服务;配置文件:
[root@k8s-master1 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
router_id keepalived_hap
}
vrrp_script check-haproxy {
script "killall -0 haproxy"
interval 5
weight -30
}
vrrp_instance VI-k8s-master1 {
state MASTER
priority 100
dont_track_primary
interface ens33
virtual_router_id 68
advert_int 3
track_script {
check-haproxy
}
virtual_ipaddress {
192.168.1.100
}
}
注:
我的 VIP 所在的接口 interface 为 ens33;根据自己的情况改变
使用 killall -0 haproxy 命令检查所在节点的 haproxy 进程是否正常。如果异常则将权重减少(-30),从而触发重新选主过程;
router_id、virtual_router_id 用于标识属于该 HA 的 keepalived 实例,如果有多套keepalived HA,则必须各不相同;
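注:可以手动验证这个检测命令的效果(killall -0 只检测进程是否存在、不发送实际信号,退出码为 0 表示 haproxy 进程正常,非 0 则触发降权):
[root@k8s-master1 ~]# killall -0 haproxy && echo "haproxy 进程存在" || echo "haproxy 进程不存在"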
(2)在两台backup 服务;配置文件:
[root@k8s-master2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
router_id keepalived_hap
}
vrrp_script check-haproxy {
script "killall -0 haproxy"
interval 5
weight -30
}
vrrp_instance VI-k8s-master1 {
state BACKUP
priority 90 #第2台从为80
dont_track_primary
interface ens33
virtual_router_id 68
advert_int 3
track_script {
check-haproxy
}
virtual_ipaddress {
192.168.1.100
}
}
注:
我的 VIP 所在的接口 interface 为 ens33;根据自己的情况改变
使用 killall -0 haproxy 命令检查所在节点的 haproxy 进程是否正常。如果异常则将权重减少(-30),从而触发重新选主过程;
router_id、virtual_router_id 用于标识属于该 HA 的 keepalived 实例,如果有多套keepalived HA,则必须各不相同;
priority 的值必须小于 master 的值;两个从的值也需要不一样;
(3)开启keepalived 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/keepalived.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
VIP="192.168.1.100"
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl restart keepalived && systemctl enable keepalived"
ssh root@${node_ip} "systemctl status keepalived|grep Active"
ssh ${node_ip} "ping -c 1 ${VIP}"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/keepalived.sh && /opt/k8s/script/keepalived.sh
输出类似于:
>>> 192.168.1.101
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
Active: active (running) since 日 2020-08-30 23:50:30 CST; 256ms ago
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.552 ms
--- 192.168.1.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms
>>> 192.168.1.102
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
Active: active (running) since 日 2020-08-30 23:50:30 CST; 232ms ago
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.013 ms
--- 192.168.1.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms
>>> 192.168.1.103
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
Active: active (running) since 日 2020-08-30 23:50:31 CST; 210ms ago
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.488 ms
--- 192.168.1.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms
(4)在master1服务器上能看到ens33网卡上已经有 192.168.1.100 这个VIP了
[root@k8s-master1 ~]# ip a show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:55:8f:dd brd ff:ff:ff:ff:ff:ff
inet 192.168.1.101/24 brd 192.168.1.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.1.100/32 scope global ens33
valid_lft forever preferred_lft forever
06-01-04 查看 haproxy 状态页面
浏览器访问192.168.1.100:10080/status 地址
① 输入用户名、密码;在配置文件中自己定义的
② 查看 haproxy 状态页面
06-02.部署 kube-apiserver 组件
本文档讲解使用 keepalived 和 haproxy 部署一个 3 节点高可用 master 集群的步骤,对应的 LB VIP 为环境变量 ${MASTER_VIP}。
准备工作:下载最新版本的二进制文件、安装和配置 flanneld
06-02-01 创建 kubernetes 证书和私钥
(1)创建证书签名请求:
[root@k8s-master1 ~]# vim /opt/k8s/cert/kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.1.101",
"192.168.1.102",
"192.168.1.103",
"192.168.1.100",
"10.96.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
注:
hosts 字段指定授权使用该证书的 IP 或域名列表,这里列出了 VIP 、apiserver节点 IP、kubernetes 服务 IP 和域名; 域名最后字符不能是 . (如不能为kubernetes.default.svc.cluster.local. ),否则解析时失败,提示: x509:cannot parse dnsName "kubernetes.default.svc.cluster.local." ;
如果使用非 cluster.local 域名,如 opsnull.com ,则需要修改域名列表中的最后两个域名为: kubernetes.default.svc.opsnull 、 kubernetes.default.svc.opsnull.com
kubernetes 服务 IP 是 apiserver 自动创建的,一般是 --service-cluster-ip-range 参数指定的网段的第一个IP,后续可以通过如下命令获取:
[root@k8s-master1 ~]# kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d
(2)生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kubernetes-csr.json | cfssljson -bare kubernetes
[root@k8s-master1 cert]# ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
06-02-02 创建加密配置文件
① 产生一个用来加密Etcd 的 Key:
[root@k8s-master1 ~]# head -c 32 /dev/urandom | base64
92uqkOs/BEcEDnaMreJmDaZTFwGAvfcOCBfoV8Nem5E=
注意:每台master节点需要用一样的 Key
② 使用这个加密的key,创建加密配置文件
[root@k8s-master1 cert]# vim encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: 92uqkOs/BEcEDnaMreJmDaZTFwGAvfcOCBfoV8Nem5E=
- identity: {}
06-02-03 将生成的证书和私钥文件、加密配置文件拷贝到master 节点的/opt/k8s目录下
[root@k8s-master1 cert]# vim /opt/k8s/script/scp_apiserver.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /opt/k8s/cert/ && sudo chown -R k8s /opt/k8s/cert/"
scp /opt/k8s/cert/kubernetes*.pem k8s@${node_ip}:/opt/k8s/cert/
scp /opt/k8s/cert/encryption-config.yaml root@${node_ip}:/opt/k8s/
done
[root@k8s-master1 cert]# chmod +x /opt/k8s/script/scp_apiserver.sh && /opt/k8s/script/scp_apiserver.sh
06-02-04 创建 kube-apiserver systemd unit 模板文件
[root@k8s-master1 cert]# mkdir -p /opt/apiserver
[root@k8s-master1 cert]# vim /opt/apiserver/kube-apiserver.service.template
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--experimental-encryption-provider-config=/opt/k8s/encryption-config.yaml \
--advertise-address=##NODE_IP## \
--bind-address=##NODE_IP## \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=1-32767 \
--tls-cert-file=/opt/k8s/cert/kubernetes.pem \
--tls-private-key-file=/opt/k8s/cert/kubernetes-key.pem \
--client-ca-file=/opt/k8s/cert/ca.pem \
--kubelet-client-certificate=/opt/k8s/cert/kubernetes.pem \
--kubelet-client-key=/opt/k8s/cert/kubernetes-key.pem \
--service-account-key-file=/opt/k8s/cert/ca-key.pem \
--etcd-cafile=/opt/k8s/cert/ca.pem \
--etcd-certfile=/opt/k8s/cert/kubernetes.pem \
--etcd-keyfile=/opt/k8s/cert/kubernetes-key.pem \
--etcd-servers=https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
注:
--experimental-encryption-provider-config :启用加密特性;
--authorization-mode=Node,RBAC : 开启 Node 和 RBAC 授权模式,拒绝未授权的请求;
--enable-admission-plugins :启用 ServiceAccount 和NodeRestriction ;
--service-account-key-file :签名 ServiceAccount Token 的公钥文件,kube-controller-manager 的 --service-account-private-key-file 指定私钥文件,两者配对使用;
--tls-*-file :指定 apiserver 使用的证书、私钥和 CA 文件。 --client-ca-file 用于验证 client (kube-controller-manager、kube-scheduler、kubelet、kube-proxy 等)请求所带的证书;
--kubelet-client-certificate 、 --kubelet-client-key :如果指定,则使用 https 访问 kubelet APIs;需要为证书对应的用户(上面 kubernetes*.pem 证书的用户为 kubernetes) 用户定义 RBAC 规则,否则访问 kubelet API 时提示未授权;
--bind-address : 不能为 127.0.0.1 ,否则外界不能访问它的安全端口6443;
--insecure-port=0 :关闭监听非安全端口(8080);
--service-cluster-ip-range : 指定 Service Cluster IP 地址段;
--service-node-port-range : 指定 NodePort 的端口范围;
--runtime-config=api/all=true : 启用所有版本的 APIs,如autoscaling/v2alpha1;
--enable-bootstrap-token-auth :启用 kubelet bootstrap 的 token 认证;
--apiserver-count=3 :指定集群运行模式,多台 kube-apiserver 会通过 leader选举产生一个工作节点,其它节点处于阻塞状态;
User=k8s :使用 k8s 账户运行;
06-02-05 为各节点创建和分发 kube-apiserver systemd unit文件;启动检查 kube-apiserver 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/apiserver_service.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
#替换模板文件中的变量,为各节点创建 systemd unit 文件
for (( i=0; i < 3; i++ ));do
sed "s/##NODE_IP##/${NODE_IPS[i]}/" /opt/apiserver/kube-apiserver.service.template > /opt/apiserver/kube-apiserver-${NODE_IPS[i]}.service
done
#启动并检查 kube-apiserver 服务
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
scp /opt/apiserver/kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/apiserver_service.sh && /opt/k8s/script/apiserver_service.sh
确保状态为 active (running) ,否则到 master 节点查看日志,确认原因:
$ journalctl -u kube-apiserver
06-02-06 打印 kube-apiserver 写入 etcd 的数据
[root@k8s-master1 ~]# ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/ --prefix --keys-only
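注:如果想验证 --experimental-encryption-provider-config 的加密效果,可以创建一个测试 secret,再直接从 etcd 读取,看到的 value 应以 k8s:enc:aescbc:v1:key1 开头而不是明文(以下 secret 名称 test-enc 仅为示例,验证后可删除):
[root@k8s-master1 ~]# kubectl create secret generic test-enc --from-literal=foo=bar
[root@k8s-master1 ~]# ETCDCTL_API=3 etcdctl \
--endpoints=https://192.168.1.101:2379 \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/secrets/default/test-enc
[root@k8s-master1 ~]# kubectl delete secret test-enc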
06-02-07 检查集群信息
[root@k8s-master1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.100:6444
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master1 ~]# kubectl get all --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3m
[root@k8s-master1 ~]# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
注意:
① 如果执行 kubectl 命令式时输出如下错误信息,则说明使用的 ~/.kube/config文件不对,请切换到正确的账户后再执行该命令:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
② 执行 kubectl get componentstatuses 命令时,apiserver 默认向 127.0.0.1 发送请求。当controller-manager、scheduler 以集群模式运行时,有可能和 kube-apiserver 不在一台机器上,这时 controller-manager 或 scheduler 的状态为Unhealthy,但实际上它们工作正常。
06-02-08 检查 kube-apiserver 监听的端口
[root@k8s-master1 ~]# ss -nutlp |grep apiserver
tcp LISTEN 0 128 192.168.1.101:6443 *:* users:(("kube-apiserver",pid=20676,fd=5))
6443: 接收 https 请求的安全端口,对所有请求做认证和授权;
由于关闭了非安全端口,故没有监听 8080;
06-02-09 授予 kubernetes 证书访问 kubelet API 的权限
在执行 kubectl exec、run、logs 等命令时,apiserver 会转发到 kubelet。这里定义RBAC 规则,授权 apiserver 调用 kubelet API。
[root@k8s-master1 ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io "kube-apiserver:kubelet-apis" created
06-03.部署高可用kube-controller-manager 集群
本文档介绍部署高可用 kube-controller-manager 集群的步骤。
该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。
为保证通信安全,本文档先生成 x509 证书和私钥,kube-controller-manager 在如下两种情况下使用该证书:
① 与 kube-apiserver 的安全端口通信时;
② 在安全端口(https,10252) 输出 prometheus 格式的 metrics;
准备工作:下载最新版本的二进制文件、安装和配置 flanneld
06-03-01 创建 kube-controller-manager 证书和私钥
创建证书签名请求:
[root@k8s-master1 ~]# cd /opt/k8s/cert/
[root@k8s-master1 cert]# vim kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.1.101",
"192.168.1.102",
"192.168.1.103"
],
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-controller-manager",
"OU": "4Paradigm"
}
]
}
注:
hosts 列表包含所有 kube-controller-manager 节点 IP;
CN 为 system:kube-controller-manager、O 为 system:kube-controller-manager,kubernetes 内置的 ClusterRoleBindings system:kube-controller-manager 赋予kube-controller-manager 工作所需的权限。
06-03-02 生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@k8s-master1 cert]# ll *controller-manager*
kube-controller-manager.csr
kube-controller-manager-key.pem
kube-controller-manager-csr.json
kube-controller-manager.pem
06-03-03 创建kubeconfig 文件
kubeconfig 文件包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;
① 执行命令,生成kube-controller-manager.kubeconfig文件
[root@k8s-master1 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.1.100:6444 \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig
[root@k8s-master1 ~]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/k8s/cert/kube-controller-manager.pem \
--client-key=/opt/k8s/cert/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig
[root@k8s-master1 ~]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig
[root@k8s-master1 ~]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig
② 验证kube-controller-manager.kubeconfig文件
[root@k8s-master1 cert]# ls /root/.kube/kube-controller-manager.kubeconfig
/root/.kube/kube-controller-manager.kubeconfig
[root@k8s-master1 ~]# kubectl config view --kubeconfig=/root/.kube/kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.100:6444
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:kube-controller-manager
name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
06-03-04 分发生成的证书和私钥、kubeconfig 到所有 master 节点
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_controller_manager.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "chown k8s /opt/k8s/cert/*"
scp /opt/k8s/cert/kube-controller-manager*.pem k8s@${node_ip}:/opt/k8s/cert/
scp /root/.kube/kube-controller-manager.kubeconfig k8s@${node_ip}:/opt/k8s/
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_controller_manager.sh && /opt/k8s/script/scp_controller_manager.sh
06-03-05 创建和分发 kube-controller-manager systemd unit 文件
[root@k8s-master1 ~]# mkdir /opt/controller_manager
[root@k8s-master1 ~]# cd /opt/controller_manager
[root@k8s-master1 controller_manager]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/opt/k8s/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/k8s/cert/ca.pem \
--cluster-signing-key-file=/opt/k8s/cert/ca-key.pem \
--experimental-cluster-signing-duration=8760h \
--root-ca-file=/opt/k8s/cert/ca.pem \
--service-account-private-key-file=/opt/k8s/cert/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/opt/k8s/cert/kube-controller-manager.pem \
--tls-private-key-file=/opt/k8s/cert/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
User=k8s
[Install]
WantedBy=multi-user.target
注:
--port=0:关闭监听 http /metrics 的请求,同时 --address 参数无效,--bind-address 参数有效;
--secure-port=10252、--bind-address=127.0.0.1: 在 127.0.0.1 的 10252 端口监听 https /metrics 请求(如需从其它机器采集 metrics,可将 bind-address 改为 0.0.0.0);
--kubeconfig:指定 kubeconfig 文件路径,kube-controller-manager 使用它连接和验证 kube-apiserver;
--cluster-signing-*-file:签名 TLS Bootstrap 创建的证书;
--experimental-cluster-signing-duration:指定 TLS Bootstrap 证书的有效期;
--root-ca-file:放置到容器 ServiceAccount 中的 CA 证书,用来对 kube-apiserver 的证书进行校验;
--service-account-private-key-file:签名 ServiceAccount 中 Token 的私钥文件,必须和 kube-apiserver 的 --service-account-key-file 指定的公钥文件配对使用;
--service-cluster-ip-range :指定 Service Cluster IP 网段,必须和 kube-apiserver 中的同名参数一致;
--leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态;
--feature-gates=RotateKubeletServerCertificate=true:开启 kublet server 证书的自动更新特性;
--controllers=*,bootstrapsigner,tokencleaner:启用的控制器列表,tokencleaner 用于自动清理过期的 Bootstrap token;
--horizontal-pod-autoscaler-*:custom metrics 相关参数,支持 autoscaling/v2alpha1;
--tls-cert-file、--tls-private-key-file:使用 https 输出 metrics 时使用的 Server 证书和秘钥;
--use-service-account-credentials=true:各 controller 使用各自独立的 ServiceAccount 凭证访问 apiserver(详见 06-03-06);
User=k8s:使用 k8s 账户运行;
kube-controller-manager 不对请求 https metrics 的 Client 证书进行校验,故不需要指定 --tls-ca-file 参数,而且该参数已被淘汰。
06-03-06 kube-controller-manager 的权限
ClusteRole: system:kube-controller-manager 的权限很小,只能创建 secret、serviceaccount 等资源对象,各 controller 的权限分散到 ClusterRole system:controller:XXX 中。
需要在 kube-controller-manager 的启动参数中添加 --use-service-account-credentials=true 参数,这样 main controller 会为各 controller 创建对应的 ServiceAccount XXX-controller。
内置的 ClusterRoleBinding system:controller:XXX 将赋予各 XXX-controller ServiceAccount 对应的 ClusterRole system:controller:XXX 权限。
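注:可以用下面的命令查看这些内置角色,核对上述权限划分(仅作检查用):
[root@k8s-master1 ~]# kubectl describe clusterrole system:kube-controller-manager
[root@k8s-master1 ~]# kubectl get clusterrole | grep 'system:controller:'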
06-03-07 分发systemd unit 文件到所有master 节点;启动检查 kube-controller-manager 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/controller_manager.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /opt/controller_manager/kube-controller-manager.service root@${node_ip}:/etc/systemd/system/
ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager"
done
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status kube-controller-manager|grep Active"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/controller_manager.sh && /opt/k8s/script/controller_manager.sh
06-03-08 查看输出的 metric
注意:以下命令在 kube-controller-manager 节点上执行。
[root@k8s-master1 ~]# ss -nutlp |grep kube-controll
tcp LISTEN 0 128 127.0.0.1:10252 *:* users:(("kube-controller",pid=6532,fd=5))
[root@k8s-master1 ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://127.0.0.1:10252/metrics |head
# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_adds counter
ClusterRoleAggregator_adds 9
# HELP ClusterRoleAggregator_depth Current depth of workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_depth gauge
ClusterRoleAggregator_depth 0
# HELP ClusterRoleAggregator_queue_latency How long an item stays in workqueueClusterRoleAggregator before being requested.
# TYPE ClusterRoleAggregator_queue_latency summary
ClusterRoleAggregator_queue_latency{quantile="0.5"} 490
ClusterRoleAggregator_queue_latency{quantile="0.9"} 22042
注:curl 的 --cacert 参数指定的 CA 证书,用来验证 kube-controller-manager 的 https server 证书;
06-03-09 测试 kube-controller-manager 集群的高可用
1、停掉一个或两个节点的 kube-controller-manager 服务,观察其它节点的日志,看是否获取了 leader 权限。
停掉之前:
[root@k8s-master1 controller_manager]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_aed91ea7-eb9d-11ea-8119-000c29558fdd","leaseDurationSeconds":15,"acquireTime":"2020-08-31T15:21:46Z","renewTime":"2020-08-31T15:22:34Z","leaderTransitions":0}'
creationTimestamp: 2020-08-31T15:21:50Z
name: kube-controller-manager
namespace: kube-system
resourceVersion: "248"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: b14130f5-eb9d-11ea-865d-000c29eb1834
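然后在当前 leader 所在节点(这里是 k8s-master1)停掉服务,触发重新选举,示例:
[root@k8s-master1 ~]# systemctl stop kube-controller-manager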
2、查看当前的 leader
停掉之后:
[root@k8s-master1 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master3_4e54a0a2-eb4b-11ea-b7b2-000c29558fdd","leaseDurationSeconds":15,"acquireTime":"2020-08-31T05:32:05Z","renewTime":"2020-08-31T05:43:29Z","leaderTransitions":0}'
creationTimestamp: 2020-08-31T05:32:05Z
name: kube-controller-manager
namespace: kube-system
resourceVersion: "569"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: 4e589ee9-eb4b-11ea-ae9f-000c29558fdd
可见,当前的 leader 为 k8s-master3 节点。(本来是在k8s-master1节点)
06-04.部署高可用 kube-scheduler 集群
本文档介绍部署高可用 kube-scheduler 集群的步骤。
该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。
为保证通信安全,本文档先生成 x509 证书和私钥,kube-scheduler 在如下两种情况下使用该证书:
① 与 kube-apiserver 的安全端口通信;
② 输出 prometheus 格式的 metrics(注:本文档使用的 kube-scheduler 版本只在 10251 端口以 http 方式输出 metrics,证书实际只用于第 ① 种场景,见 06-04-05、06-04-07 的说明);
准备工作:下载最新版本的二进制文件、安装和配置 flanneld
06-04-01 创建 kube-scheduler 证书和私钥
创建证书签名请求:
[root@k8s-master1 ~]# cd /opt/k8s/cert/
[root@k8s-master1 cert]# vim kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"192.168.1.101",
"192.168.1.102",
"192.168.1.103"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-scheduler",
"OU": "4Paradigm"
}
]
}
注:
hosts 列表包含所有 kube-scheduler 节点 IP;
CN 为 system:kube-scheduler
O 为 system:kube-scheduler
kubernetes 内置的 ClusterRoleBindings system:kube-scheduler 将赋予 kube-scheduler 工作所需的权限。
06-04-02 生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@k8s-master1 cert]# ls *scheduler*
kube-scheduler.csr kube-scheduler-csr.json kube-scheduler-key.pem kube-scheduler.pem
06-04-03 创建kubeconfig 文件
kubeconfig 文件包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;
① 执行命令,生成kube-scheduler.kubeconfig文件
[root@k8s-master1 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.1.100:6444 \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig
[root@k8s-master1 ~]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/k8s/cert/kube-scheduler.pem \
--client-key=/opt/k8s/cert/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig
[root@k8s-master1 ~]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig
[root@k8s-master1 ~]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig
② 验证kube-scheduler.kubeconfig文件
[root@k8s-master1 cert]# ls /root/.kube/kube-scheduler.kubeconfig
/root/.kube/kube-scheduler.kubeconfig
[root@k8s-master1 ~]# kubectl config view --kubeconfig=/root/.kube/kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.100:6444
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:kube-scheduler
name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
06-04-04 分发生成的证书和私钥、kubeconfig 到所有 master 节点
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_scheduler.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "chown k8s /opt/k8s/cert/*"
scp /opt/k8s/cert/kube-scheduler*.pem k8s@${node_ip}:/opt/k8s/cert/
scp /root/.kube/kube-scheduler.kubeconfig k8s@${node_ip}:/opt/k8s/
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_scheduler.sh && /opt/k8s/script/scp_scheduler.sh
06-04-05 创建kube-scheduler systemd unit 文件
[root@k8s-master1 ~]# mkdir /opt/scheduler
[root@k8s-master1 ~]# cd /opt/scheduler
[root@k8s-master1 scheduler]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
--address=127.0.0.1 \
--kubeconfig=/opt/k8s/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
User=k8s
[Install]
WantedBy=multi-user.target
注:
--address:在 127.0.0.1:10251 端口接收 http /metrics 请求;kube-scheduler 目前还不支持接收 https 请求;
--kubeconfig:指定 kubeconfig 文件路径,kube-scheduler 使用它连接和验证 kube-apiserver;
--leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态;
User=k8s:使用 k8s 账户运行;
06-04-06 分发systemd unit 文件到所有master 节点;启动检查kube-scheduler 服务
[root@k8s-master1 scheduler]# vim /opt/k8s/script/scheduler.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /opt/scheduler/kube-scheduler.service root@${node_ip}:/etc/systemd/system/
ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler"
done
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status kube-scheduler|grep Active"
done
[root@k8s-master1 scheduler]# chmod +x /opt/k8s/script/scheduler.sh && /opt/k8s/script/scheduler.sh
确保状态为 active (running),否则查看日志,确认原因:
[root@k8s-master1 scheduler]# journalctl -u kube-scheduler
06-04-07 查看输出的 metric
注意:以下命令在 kube-scheduler 节点上执行。
kube-scheduler 监听 10251 端口,接收 http 请求:
[root@k8s-master1 ~]# ss -nutlp |grep kube-scheduler
tcp LISTEN 0 128 127.0.0.1:10251 *:* users:(("kube-scheduler",pid=14968,fd=8))
[root@k8s-master1 ~]# curl -s http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.5626e-05
go_gc_duration_seconds{quantile="0.25"} 5.263e-05
go_gc_duration_seconds{quantile="0.5"} 0.000300539
go_gc_duration_seconds{quantile="0.75"} 0.001316186
go_gc_duration_seconds{quantile="1"} 0.004999673
06-04-08 测试 kube-scheduler 集群的高可用
1、随便找一个或两个 master 节点,停掉 kube-scheduler 服务,看其它节点是否获取了 leader 权限(systemd 日志)。
停掉之前:
[root@k8s-master1 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_542d79b2-eb50-11ea-b9a6-000c29558fdd","leaseDurationSeconds":15,"acquireTime":"2020-08-31T06:08:04Z","renewTime":"2020-08-31T06:13:12Z","leaderTransitions":0}'
creationTimestamp: 2020-08-31T06:08:04Z
name: kube-scheduler
namespace: kube-system
resourceVersion: "1608"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 54ca3b8f-eb50-11ea-ae9f-000c29558fdd
2、停掉 k8s-master1 上的 kube-scheduler 之后,再次查看当前的 leader:
[root@k8s-master1 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master3_5ad819ad-eb50-11ea-bcd2-000c29eb1834","leaseDurationSeconds":15,"acquireTime":"2020-08-31T06:13:44Z","renewTime":"2020-08-31T06:13:52Z","leaderTransitions":1}'
creationTimestamp: 2020-08-31T06:08:04Z
name: kube-scheduler
namespace: kube-system
resourceVersion: "1641"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 54ca3b8f-eb50-11ea-ae9f-000c29558fdd
可见,当前的 leader 为 k8s-master3 节点。(本来是在k8s-master1节点)
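也可以只提取 leader 注解做快速确认(示例命令,注解内容以实际输出为准):
[root@k8s-master1 ~]# kubectl get endpoints kube-scheduler -n kube-system \
-o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'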
07.部署 worker 节点
kubernetes worker 节点运行如下组件:
docker
kubelet
kube-proxy
1、安装和配置 flanneld
参考 05.部署 flannel 网络
2、安装依赖包
$ yum install -y epel-release
$ yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs
07-01.部署 docker 组件
docker 是容器的运行环境,管理它的生命周期。kubelet 通过 Container Runtime Interface (CRI) 与 docker 进行交互。
07-01-01 下载docker 二进制文件:
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz
tar -xvf docker-18.03.1-ce.tgz
07-01-02 创建和分发 systemd unit 文件
[root@k8s-master1 ~]# cd /opt/docker
[root@k8s-master1 docker]# vim docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd --log-level=error --config-file=/opt/docker/daemon.json $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
dockerd 运行时会调用其它 docker 命令,如 docker-proxy,所以需要将 docker 命令所在的目录加到 PATH 环境变量中;
flanneld 启动时将网络配置写入 /run/flannel/docker 文件中,dockerd 启动前读取该文件中的环境变量 DOCKER_NETWORK_OPTIONS ,然后设置 docker0 网桥网段;
如果指定了多个 EnvironmentFile 选项,则必须将 /run/flannel/docker 放在最后(确保 docker0 使用 flanneld 生成的 bip 参数);
docker 需要以 root 用户运行;
docker 从 1.13 版本开始,可能将 iptables FORWARD chain 的默认策略设置为 DROP,从而导致 ping 其它 Node 上的 Pod IP 失败。遇到这种情况时,需要手动设置策略为 ACCEPT:$ sudo iptables -P FORWARD ACCEPT;并且把命令 /sbin/iptables -P FORWARD ACCEPT 写入 /etc/rc.local 文件中,防止节点重启后 iptables FORWARD chain 的默认策略又还原为 DROP,见下面的示例;
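下面是一个把该命令追加到 /etc/rc.local 的示例(仅供参考;CentOS 7 下 /etc/rc.local 指向 /etc/rc.d/rc.local,需要为其添加可执行权限才会在开机时执行):
$ echo '/sbin/iptables -P FORWARD ACCEPT' | sudo tee -a /etc/rc.local
$ sudo chmod +x /etc/rc.d/rc.local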
07-01-03 配置docker 配置文件
使用国内的仓库镜像服务器以加快 pull image 的速度,同时增加下载的并发数 (需要重启 dockerd 生效):
[root@k8s-master1 ~]# vim /opt/docker/docker-daemon.json
{
"registry-mirrors": ["https://z11csm7d.mirror.aliyuncs.com"],
"max-concurrent-downloads": 20
}
07-01-04 分发docker 二进制文件、systemd unit 文件、docker 配置文件到所有 worker 机器
[root@k8s-master1 ~]# vim /opt/k8s/script/scp_docker.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /root/docker/docker* k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
scp /opt/docker/docker.service root@${node_ip}:/etc/systemd/system/
ssh root@${node_ip} "mkdir -p /opt/docker/"
scp /opt/docker/docker-daemon.json root@${node_ip}:/opt/docker/daemon.json
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_docker.sh && /opt/k8s/script/scp_docker.sh
07-01-05 启动并检查 docker 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/docker.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103" "192.168.1.104")
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"
ssh root@${node_ip} "/usr/sbin/iptables -F && /usr/sbin/iptables -X && /usr/sbin/iptables -F -t nat && /usr/sbin/iptables -X -t nat"
ssh root@${node_ip} "/usr/sbin/iptables -P FORWARD ACCEPT"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
ssh root@${node_ip} 'for intf in /sys/devices/virtual/net/docker0/brif/*; do echo 1 > $intf/hairpin_mode; done'
ssh root@${node_ip} "sudo sysctl -p /etc/sysctl.d/kubernetes.conf"
#检查服务运行状态
ssh k8s@${node_ip} "systemctl status docker|grep Active"
#检查 docker0 网桥
ssh k8s@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done
注:
关闭 firewalld(centos7)/ufw(ubuntu16.04),否则可能会重复创建 iptables 规则;
清理旧的 iptables rules 和 chains 规则;
开启 docker0 网桥下虚拟网卡的 hairpin 模式;
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/docker.sh && /opt/k8s/script/docker.sh
① 确保状态为 active (running),否则查看日志,确认原因:
$ journalctl -u docker
② 确认各 worker 节点的 docker0 网桥和 flannel.1 接口的 IP 处于同一个网段中(如下面的 10.30.89.0 和 10.30.89.1):
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether ea:b3:44:ab:36:16 brd ff:ff:ff:ff:ff:ff
inet 10.30.89.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 02:42:8e:6e:ea:ef brd ff:ff:ff:ff:ff:ff
inet 10.30.89.1/24 brd 10.30.89.255 scope global docker0
valid_lft forever preferred_lft forever
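③ 也可以用 docker info 粗略确认 daemon.json 中的镜像加速配置已生效(示例,输出字段随 docker 版本可能略有差异):
$ docker info 2>/dev/null | grep -A 1 'Registry Mirrors'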
07-02.部署 kubelet 组件
kubelet 运行在每个 worker 节点上,接收 kube-apiserver 发送的请求,管理 Pod 容器,执行交互式命令,如 exec、run、logs 等。
kubelet 启动时自动向 kube-apiserver 注册节点信息,内置的 cadvisor 统计和监控节点的资源使用情况。
为确保安全,本文档只开启接收 https 请求的安全端口,对请求进行认证和授权,拒绝未授权的访问(如 apiserver、heapster)。
1、下载和分发 kubelet 二进制文件
参考 06.部署master节点.md
2、安装依赖包
$ yum install -y epel-release
$ yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs
07-02-01 创建 kubelet bootstrap kubeconfig 文件
[root@k8s-master1 ~]# vim /opt/k8s/script/bootstrap_kubeconfig.sh
NODE_NAMES=("k8s-master1" "k8s-master2" "k8s-master3" "k8s-node1")
for node_name in ${NODE_NAMES[@]};do
echo ">>> ${node_name}"
# 创建 token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_name} \
--kubeconfig /root/.kube/config)
# 设置集群参数
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.1.100:6444 \
--kubeconfig=/root/.kube/kubelet-bootstrap-${node_name}.kubeconfig
# 设置客户端认证参数
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/root/.kube/kubelet-bootstrap-${node_name}.kubeconfig
# 设置上下文参数
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/root/.kube/kubelet-bootstrap-${node_name}.kubeconfig
# 设置默认上下文
kubectl config use-context default --kubeconfig=/root/.kube/kubelet-bootstrap-${node_name}.kubeconfig
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/bootstrap_kubeconfig.sh && /opt/k8s/script/bootstrap_kubeconfig.sh
注:
① kubeconfig 文件中写入的是 Token 而非证书,证书后续由 kube-controller-manager 创建;
查看 kubeadm 为各节点创建的 token:
[root@k8s-master1 ~]# kubeadm token list --kubeconfig /root/.kube/config
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
86lavf.ettm2n3zodosbajy 15h 2020-09-02T13:10:13+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node1
geq9qr.5f6ibe07jj7ckoih 15h 2020-09-02T13:10:05+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-master1
n0lh2v.gb2fbb31l6xzv0k8 15h 2020-09-02T13:10:12+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-master2
qzd5vm.7kq4pdu680v50cny 15h 2020-09-02T13:10:13+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-master3
② 创建的 token 有效期为 1 天,超期后将不能再被使用,且会被 kube-controller-manager 的 tokencleaner 清理(如果启用该 controller 的话);
③ kube-apiserver 接收 kubelet 的 bootstrap token 后,将请求的 user 设置为 system:bootstrap:<token id>,group 设置为 system:bootstrappers;
各 token 关联的 Secret:
[root@k8s-master1 ~]# kubectl get secrets -n kube-system
NAME TYPE DATA AGE
attachdetach-controller-token-2nmpv kubernetes.io/service-account-token 3 22h
bootstrap-signer-token-rl46t kubernetes.io/service-account-token 3 22h
bootstrap-token-86lavf bootstrap.kubernetes.io/token 7 8h
bootstrap-token-geq9qr bootstrap.kubernetes.io/token 7 8h
bootstrap-token-n0lh2v bootstrap.kubernetes.io/token 7 8h
bootstrap-token-qzd5vm bootstrap.kubernetes.io/token 7 8h
certificate-controller-token-9xgsd kubernetes.io/service-account-token 3 22h
clusterrole-aggregation-controller-token-ccg2g kubernetes.io/service-account-token 3 22h
cronjob-controller-token-fkq77 kubernetes.io/service-account-token 3 22h
daemon-set-controller-token-22w77 kubernetes.io/service-account-token 3 22h
default-token-8fg6m kubernetes.io/service-account-token 3 22h
deployment-controller-token-kjd2q kubernetes.io/service-account-token 3 22h
disruption-controller-token-8tkxw kubernetes.io/service-account-token 3 22h
endpoint-controller-token-qz6tc kubernetes.io/service-account-token 3 22h
generic-garbage-collector-token-gh82z kubernetes.io/service-account-token 3 22h
horizontal-pod-autoscaler-token-nnnhb kubernetes.io/service-account-token 3 22h
job-controller-token-thqd2 kubernetes.io/service-account-token 3 22h
namespace-controller-token-vnh6d kubernetes.io/service-account-token 3 22h
node-controller-token-vlnp8 kubernetes.io/service-account-token 3 22h
persistent-volume-binder-token-c2gvd kubernetes.io/service-account-token 3 22h
pod-garbage-collector-token-lpxqr kubernetes.io/service-account-token 3 22h
pv-protection-controller-token-2v2l6 kubernetes.io/service-account-token 3 22h
pvc-protection-controller-token-qj5ht kubernetes.io/service-account-token 3 22h
replicaset-controller-token-j7q5m kubernetes.io/service-account-token 3 22h
replication-controller-token-qbh77 kubernetes.io/service-account-token 3 22h
resourcequota-controller-token-bphmw kubernetes.io/service-account-token 3 22h
service-account-controller-token-wvxfl kubernetes.io/service-account-token 3 22h
service-controller-token-9gdws kubernetes.io/service-account-token 3 22h
statefulset-controller-token-22r8b kubernetes.io/service-account-token 3 22h
token-cleaner-token-kq269 kubernetes.io/service-account-token 3 22h
ttl-controller-token-pvxl2 kubernetes.io/service-account-token 3 22h
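如果需要确认某个 bootstrap token 的过期时间等细节,可以直接解码对应的 Secret(示例,Secret 名称以上面列出的为准):
[root@k8s-master1 ~]# kubectl get secret bootstrap-token-86lavf -n kube-system \
-o jsonpath='{.data.expiration}' | base64 -d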
07-02-02 创建kubelet 参数配置文件
从 v1.10 开始,kubelet 部分参数需在配置文件中配置,kubelet --help 会提示:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
[root@k8s-master1 ~]# mkdir /opt/kubelet
[root@k8s-master1 ~]# cd /opt/kubelet
[root@k8s-master1 kubelet]# vim kubelet.config.json.template
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"authentication": {
"x509": {
"clientCAFile": "/opt/k8s/cert/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"address": "##NODE_IP##",
"port": 10250,
"readOnlyPort": 0,
"cgroupDriver": "cgroupfs",
"hairpinMode": "promiscuous-bridge",
"serializeImagePulls": false,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"clusterDomain": "cluster.local",
"clusterDNS": ["10.90.0.2"]
}
07-02-03 分发 bootstrap kubeconfig 、kubelet 配置文件到所有 worker 节点
[root@k8s-master1 kubelet]# vim /opt/k8s/script/scp_kubelet.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103" "192.168.1.104")
NODE_NAMES=("k8s-master1" "k8s-master2" "k8s-master3" "k8s-node1")
for node_name in ${NODE_NAMES[@]};do
echo ">>> ${node_name}"
scp /root/.kube/kubelet-bootstrap-${node_name}.kubeconfig k8s@${node_name}:/opt/k8s/kubelet-bootstrap.kubeconfig
done
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
sed -i "s/##NODE_IP##/${node_ip}/" /opt/kubelet/kubelet.config.json.template > /opt/kubelet/kubelet.config-${node_ip}.json
scp /opt/kubelet/kubelet.config-${node_ip}.json root@${node_ip}:/opt/k8s/kubelet.config.json
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/scp_kubelet.sh && /opt/k8s/script/scp_kubelet.sh
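分发之前可以先用 jq 校验渲染出来的 JSON 文件是否合法(示例,文件名以实际生成的为准):
[root@k8s-master1 ~]# jq . /opt/kubelet/kubelet.config-192.168.1.101.json > /dev/null && echo OK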
07-02-04 创建kubelet systemd unit 文件
[root@k8s-master1 ~]# vim /opt/kubelet/kubelet.service.template
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/opt/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \
--bootstrap-kubeconfig=/opt/k8s/kubelet-bootstrap.kubeconfig \
--cert-dir=/opt/k8s/cert \
--kubeconfig=/opt/k8s/kubelet.kubeconfig \
--config=/opt/k8s/kubelet.config.json \
--hostname-override=##NODE_NAME## \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
--allow-privileged=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/opt/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
07-02-05 Bootstrap Token Auth 和授予权限
1、kubelet 启动时查找 --kubeconfig 指定的文件是否存在,如果不存在则使用 --bootstrap-kubeconfig 向 kube-apiserver 发送证书签名请求 (CSR)。
2、kube-apiserver 收到 CSR 请求后,对其中的 Token 进行认证(事先使用 kubeadm 创建的 token),认证通过后将请求的 user 设置为 system:bootstrap:<token id>,group 设置为 system:bootstrappers,这一过程称为 Bootstrap Token Auth。
3、默认情况下,这个 user 和 group 没有创建 CSR 的权限,kubelet 启动失败,错误日志如下:
$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 k8s-master2 kubelet[26986]: F0506 06:42:36.314378 26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 k8s-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 k8s-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
4、解决办法是:创建一个 clusterrolebinding,将 group system:bootstrappers 和 clusterrole system:node-bootstrapper 绑定:
[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
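可以确认该绑定已经创建(示例):
[root@k8s-master1 ~]# kubectl describe clusterrolebinding kubelet-bootstrap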
07-02-06 启动 kubelet 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/kubelet.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103" "192.168.1.104")
NODE_NAMES=("k8s-master1" "k8s-master2" "k8s-master3")
#分发kubelet systemd unit 文件
for node_name in ${NODE_NAMES[@]};do
echo ">>> ${node_name}"
sed -e "s/##NODE_NAME##/${node_name}/" /opt/kubelet/kubelet.service.template > /opt/kubelet/kubelet-${node_name}.service
scp /opt/kubelet/kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
#开启检查kubelet 服务
for node_ip in ${NODE_IPS[@]};do
ssh root@${node_ip} "mkdir -p /opt/lib/kubelet"
ssh root@${node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
ssh root@${node_ip} "systemctl status kubelet |grep active"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/kubelet.sh && /opt/k8s/script/kubelet.sh
注:
关闭 swap 分区,注意/etc/fstab 要设为开机不启动swap分区,否则 kubelet 会启动失败;
必须先创建工作和日志目录;
kubelet 启动后使用 --bootstrap-kubeconfig 向 kube-apiserver 发送 CSR 请求,当这个 CSR 被 approve 后,kube-controller-manager 为 kubelet 创建 TLS 客户端证书、私钥和 --kubeconfig 指定的 kubeconfig 文件。
kube-controller-manager 需要配置 --cluster-signing-cert-file 和 --cluster-signing-key-file 参数,才会为 TLS Bootstrap 创建证书和私钥。
07-02-07 approve kubelet CSR 请求
可以手动或自动 approve CSR 请求。推荐使用自动的方式,因为从 v1.8 版本开始,可以自动轮转approve csr 后生成的证书。
1、手动 approve CSR 请求
(1)查看 CSR 列表:
[root@k8s-master1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU 4m system:bootstrap:8hpvxm Pending
node-csr-atMwF8GpKbDEcGjzCTXF1NYo9Jc1AzE2yQoxaU8NAkw 7m system:bootstrap:ttbgfq Pending
node-csr-qxa30a9GRg35iNEl3PYZOIICMo_82qPrqNu6PizEZXw 4m system:bootstrap:gktdpg Pending
三个 work 节点的 csr 均处于 pending 状态;
(2)approve CSR:
[root@k8s-master1 ~]# kubectl certificate approve node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU
certificatesigningrequest.certificates.k8s.io "node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU" approved
(3)查看 Approve 结果:
[root@k8s-master1 ~]# kubectl describe csr node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU
Name: node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 29 Nov 2018 17:51:43 +0800
Requesting User: system:bootstrap:8hpvxm
Status: Approved,Issued
Subject:
Common Name: system:node:k8s-master2
Serial Number:
Organization: system:nodes
Events: <none>
2、自动 approve CSR 请求
(1)创建三个 ClusterRoleBinding,分别用于自动 approve client、renew client、renew server 证书:
[root@k8s-master1 ~]# vim /opt/kubelet/csr-crb.yaml
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
注:
auto-approve-csrs-for-group:自动 approve node 的第一次 CSR; 注意第一次 CSR 时,请求的 Group 为 system:bootstrappers;
node-client-cert-renewal:自动 approve node 后续过期的 client 证书,自动生成的证书 Group 为 system:nodes;
node-server-cert-renewal:自动 approve node 后续过期的 server 证书,自动生成的证书 Group 为 system:nodes;
(2)生效配置:
[root@k8s-master1 ~]# kubectl apply -f /opt/kubelet/csr-crb.yaml
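应用之后可以确认相应的 ClusterRole / ClusterRoleBinding 已经创建(示例):
[root@k8s-master1 ~]# kubectl get clusterrole approve-node-server-renewal-csr
[root@k8s-master1 ~]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal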
07-02-08 查看 kubelet 的情况
1、等待一段时间(1-10 分钟),三个节点的 CSR 都被自动 approve:
[root@k8s-master1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-kvbtt 15h system:node:k8s-master2 Approved,Issued
csr-p9b9s 15h system:node:k8s-master3 Approved,Issued
csr-rjpr9 15h system:node:k8s-master1 Approved,Issued
node-csr-8Sr42M0z_LzZeHU-RCbgOynJm3Z2TsSXHuAlohfJiIM 15h system:bootstrap:ttbgfq Approved,Issued
node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU 15h system:bootstrap:8hpvxm Approved,Issued
node-csr-atMwF8GpKbDEcGjzCTXF1NYo9Jc1AzE2yQoxaU8NAkw 15h system:bootstrap:ttbgfq Approved,Issued
node-csr-elVB0jp36nOHuOYlITWDZx8LoO2Ly4aW0VqgYxw_Te0 15h system:bootstrap:gktdpg Approved,Issued
node-csr-muNcDteZINLZnSv8FkhOMaP2ob5uw82PGwIAynNNrco 15h system:bootstrap:ttbgfq Approved,Issued
node-csr-qxa30a9GRg35iNEl3PYZOIICMo_82qPrqNu6PizEZXw 15h system:bootstrap:gktdpg Approved,Issued
2、所有节点均 ready:
此处如果提示:No resources found.请检查kubelet的配置文件/opt/k8s/kubelet.config.json文件中的address是否正确
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 7m v1.10.4
k8s-master2 Ready <none> 7m v1.10.4
k8s-master3 Ready <none> 7m v1.10.4
3、kube-controller-manager 为各 node 生成了 kubeconfig 文件和公私钥:
[root@k8s-master1 ~]# ll /opt/k8s/kubelet.kubeconfig
-rw------- 1 root root 2280 9月 1 13:56 /opt/k8s/kubelet.kubeconfig
[root@k8s-master1 ~]# ll /opt/k8s/cert/ |grep kubelet
-rw-r--r-- 1 root root 1050 9月 1 13:56 kubelet-client.crt
-rw------- 1 root root 227 9月 1 13:53 kubelet-client.key
-rw------- 1 root root 1362 9月 1 14:02 kubelet-server-2020-09-01-14-02-30.pem
lrwxrwxrwx 1 root root 52 9月 1 14:02 kubelet-server-current.pem -> /opt/k8s/cert/kubelet-server-2020-09-01-14-02-30.pem
注:kubelet-server 证书会周期轮转;
07-02-09 kubelet 提供的 API 接口
1、kubelet 启动后监听多个端口,用于接收 kube-apiserver 或其它组件发送的请求:
[root@k8s-master1 ~]# ss -nutlp |grep kubelet
tcp LISTEN 0 128 192.168.1.101:10250 *:* users:(("kubelet",pid=2797,fd=22))
tcp LISTEN 0 128 192.168.1.101:4194 *:* users:(("kubelet",pid=2797,fd=13))
tcp LISTEN 0 128 127.0.0.1:10248 *:* users:(("kubelet",pid=2797,fd=32))
注:
1、端口介绍:
4194: cadvisor http 服务;
10248: healthz http 服务;
10250: https API 服务;注意:未开启只读端口 10255;
2、例如执行 kubectl exec -it nginx-ds-5rmws -- sh 命令时,kube-apiserver 会向 kubelet 发送如下请求:
POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1
3、kubelet 接收 10250 端口的 https 请求:
/pods、/runningpods
/metrics、/metrics/cadvisor、/metrics/probes
/spec
/stats、/stats/container
/logs
/run/、/exec/、/attach/、/portForward/、/containerLogs/ 等管理接口;
4、由于关闭了匿名认证,同时开启了 webhook 授权,所有访问 10250 端口 https API 的请求都需要被认证和授权。
预定义的 ClusterRole system:kubelet-api-admin 授予访问 kubelet 所有 API 的权限:
[root@k8s-master1 ~]# kubectl describe clusterrole system:kubelet-api-admin
Name: system:kubelet-api-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
nodes [] [] [get list watch proxy]
nodes/log [] [] [*]
nodes/metrics [] [] [*]
nodes/proxy [] [] [*]
nodes/spec [] [] [*]
nodes/stats [] [] [*]
07-02-10 kubelet API 认证和授权
1、kubelet 配置了如下认证参数:
authentication.anonymous.enabled:设置为 false,不允许匿名访问 10250 端口;
authentication.x509.clientCAFile:指定签名客户端证书的 CA 证书,开启 HTTPS 证书认证;
authentication.webhook.enabled=true:开启 HTTPS bearer token 认证;
同时配置了如下授权参数:
authorization.mode=Webhook:开启 RBAC 授权;
2、kubelet 收到请求后,使用 clientCAFile 对证书签名进行认证,或者查询 bearer token 是否有效。如果两者都没通过,则拒绝请求,提示 Unauthorized:
[root@k8s-master1 ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://192.168.1.102:10250/metrics
Unauthorized
[root@k8s-master1 ~]# curl -s --cacert /opt/k8s/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.1.102:10250/metrics
Unauthorized
3、通过认证后,kubelet 使用 SubjectAccessReview API 向 kube-apiserver 发送请求,查询证书或 token 对应的 user、group 是否有操作资源的权限(RBAC);
证书认证和授权:
$ 权限不足的证书;
[root@k8s-master1 ~]# curl -s --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/kube-controller-manager.pem --key /opt/k8s/cert/kube-controller-manager-key.pem https://192.168.1.102:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
$ 使用部署 kubectl 命令行工具时创建的、具有最高权限的 admin 证书;
[root@k8s-master1 cert]# curl -s --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem https://192.168.1.102:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
--cacert、--cert、--key 的参数值必须是文件路径;如果使用相对路径(例如 ./admin.pem),不能省略 ./,否则返回 401 Unauthorized;
4、bearer token 认证和授权:
创建一个 ServiceAccount,将它和 ClusterRole system:kubelet-api-admin 绑定,从而具有调用 kubelet API 的权限:
[root@k8s-master1 ~]# kubectl create sa kubelet-api-test
serviceaccount "kubelet-api-test" created
[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io "kubelet-api-test" created
[root@k8s-master1 ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master1 ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master1 ~]# curl -s --cacert /opt/k8s/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.1.102:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
07-02-11 cadvisor 和 metrics
cadvisor 统计所在节点各容器的资源(CPU、内存、磁盘、网卡)使用情况,分别在自己的 http web 页面(4194 端口)和 10250 端口以 prometheus metrics 的形式输出。
浏览器访问 http://192.168.1.101:4194/containers/ 可以查看到 cadvisor 的监控页面:
07-02-12 获取 kubelet 的配置
从 kube-apiserver 获取各 node 的配置:
使用部署 kubectl 命令行工具时创建的、具有最高权限的 admin 证书;
[root@k8s-master1 ~]# curl -sSL --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem https://192.168.1.102:6444/api/v1/nodes/k8s-master2/proxy/configz | jq \
'.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
{
"syncFrequency": "1m0s",
"fileCheckFrequency": "20s",
"httpCheckFrequency": "20s",
"address": "192.168.1.102",
"port": 10250,
"authentication": {
"x509": {
"clientCAFile": "/opt/k8s/cert/ca.pem"
},
"webhook": {
"enabled": true,
"cacheTTL": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "Webhook",
"webhook": {
"cacheAuthorizedTTL": "5m0s",
"cacheUnauthorizedTTL": "30s"
}
},
"registryPullQPS": 5,
"registryBurst": 10,
"eventRecordQPS": 5,
"eventBurst": 10,
"enableDebuggingHandlers": true,
"healthzPort": 10248,
"healthzBindAddress": "127.0.0.1",
"oomScoreAdj": -999,
"clusterDomain": "cluster.local.",
"clusterDNS": [
"10.96.0.2"
],
"streamingConnectionIdleTimeout": "4h0m0s",
"nodeStatusUpdateFrequency": "10s",
"imageMinimumGCAge": "2m0s",
"imageGCHighThresholdPercent": 85,
"imageGCLowThresholdPercent": 80,
"volumeStatsAggPeriod": "1m0s",
"cgroupsPerQOS": true,
"cgroupDriver": "cgroupfs",
"cpuManagerPolicy": "none",
"cpuManagerReconcilePeriod": "10s",
"runtimeRequestTimeout": "2m0s",
"hairpinMode": "promiscuous-bridge",
"maxPods": 110,
"podPidsLimit": -1,
"resolvConf": "/etc/resolv.conf",
"cpuCFSQuota": true,
"maxOpenFiles": 1000000,
"contentType": "application/vnd.kubernetes.protobuf",
"kubeAPIQPS": 5,
"kubeAPIBurst": 10,
"serializeImagePulls": false,
"evictionHard": {
"imagefs.available": "15%",
"memory.available": "100Mi",
"nodefs.available": "10%",
"nodefs.inodesFree": "5%"
},
"evictionPressureTransitionPeriod": "5m0s",
"enableControllerAttachDetach": true,
"makeIPTablesUtilChains": true,
"iptablesMasqueradeBit": 14,
"iptablesDropBit": 15,
"featureGates": {
"RotateKubeletClientCertificate": true,
"RotateKubeletServerCertificate": true
},
"failSwapOn": true,
"containerLogMaxSize": "10Mi",
"containerLogMaxFiles": 5,
"enforceNodeAllocatable": [
"pods"
],
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1"
}
07-03.部署 kube-proxy 组件
kube-proxy 运行在所有 worker 节点上,它监听 apiserver 中 service 和 Endpoint 的变化情况,创建路由规则来进行服务负载均衡。
本文档讲解 kube-proxy 的部署,使用 ipvs 模式。
1、下载和分发 kube-proxy 二进制文件
参考 06.部署master节点.md
2、安装依赖包
各节点需要安装 ipvsadm 和 ipset 命令,加载 ip_vs 内核模块。
参考 07.部署worker节点.md
07-03-01 创建 kube-proxy 证书
创建证书签名请求:
[root@k8s-master1 ~]# cd /opt/k8s/cert/
[root@k8s-master1 cert]# vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
注:
CN:指定该证书的 User 为 system:kube-proxy;
预定义的 ClusterRoleBinding system:node-proxier 将 User system:kube-proxy 与 ClusterRole system:node-proxier 绑定,该 Role 授予了调用 kube-apiserver Proxy 相关 API 的权限;
该证书只会被 kube-proxy 当做 client 证书使用,所以 hosts 字段为空;
07-03-02 生成证书和私钥
[root@k8s-master1 cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master1 cert]# ls *kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
07-03-03 创建kubeconfig 文件
[root@k8s-master1 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.1.100:6444 \
--kubeconfig=/root/.kube/kube-proxy.kubeconfig
[root@k8s-master1 ~]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/k8s/cert/kube-proxy.pem \
--client-key=/opt/k8s/cert/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kube-proxy.kubeconfig
[root@k8s-master1 ~]# kubectl config set-context kube-proxy@kubernetes \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/root/.kube/kube-proxy.kubeconfig
[root@k8s-master1 ~]# kubectl config use-context kube-proxy@kubernetes \
--kubeconfig=/root/.kube/kube-proxy.kubeconfig
注:
--embed-certs=true:将 ca.pem 和 kube-proxy.pem 证书内容嵌入到生成的 kube-proxy.kubeconfig 文件中(不加时,写入的是证书文件路径);
[root@k8s-master1 ~]# kubectl config view --kubeconfig=/root/.kube/kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.100:6444
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kube-proxy
name: kube-proxy@kubernetes
current-context: kube-proxy@kubernetes
kind: Config
preferences: {}
users:
- name: kube-proxy
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
07-03-04 创建 kube-proxy 配置文件
从 v1.10 开始,kube-proxy 部分参数可以在配置文件中配置,可以使用 --write-config-to 选项生成该配置文件。
创建 kube-proxy config 文件模板
[root@k8s-master1 ~]# mkdir /opt/kube-proxy && cd /opt/kube-proxy
[root@k8s-master1 kube-proxy]# vim kube-proxy.config.yaml.template
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
  kubeconfig: /opt/k8s/kube-proxy.kubeconfig
clusterCIDR: 10.96.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
注:
bindAddress: 监听地址;
clientConnection.kubeconfig: 连接 apiserver 的 kubeconfig 文件;
clusterCIDR: kube-proxy 根据 --cluster-cidr 判断集群内部和外部流量,指定 --cluster-cidr 或 --masquerade-all选项后 kube-proxy 才会对访问 Service IP 的请求做 SNAT;
hostnameOverride: 参数值必须与 kubelet 的值一致,否则 kube-proxy 启动后会找不到该 Node,从而不会创建任何 ipvs 规则;
mode: 使用 ipvs 模式;
创建 kube-proxy systemd unit 文件,需先创建 kube-proxy 的工作目录 /var/lib/kube-proxy
[root@k8s-master1 ~]# mkdir /var/lib/kube-proxy
[root@k8s-master1 ~]# vim /opt/kube-proxy/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
--config=/opt/k8s/kube-proxy.config.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
07-03-05 分发 kubeconfig、kube-proxy systemd unit 文件;启动并检查kube-proxy 服务
[root@k8s-master1 ~]# vim /opt/k8s/script/kube_proxy.sh
NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
NODE_NAMES=("k8s-master1" "k8s-master2" "k8s-master3")
for (( i=0; i < 3; i++ ));do
echo ">>> ${NODE_NAMES[i]}"
sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" /opt/kube-proxy/kube-proxy.config.yaml.template > /opt/kube-proxy/kube-proxy-${NODE_NAMES[i]}.config.yaml
scp /opt/kube-proxy/kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/opt/k8s/kube-proxy.config.yaml
done
for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
scp /root/.kube/kube-proxy.kubeconfig k8s@${node_ip}:/opt/k8s/
scp /opt/kube-proxy/kube-proxy.service root@${node_ip}:/etc/systemd/system/
ssh root@${node_ip} "mkdir -p /opt/lib/kube-proxy"
ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /var/log/kubernetes"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
done
[root@k8s-master1 ~]# chmod +x /opt/k8s/script/kube_proxy.sh && /opt/k8s/script/kube_proxy.sh
07-03-06 查看监听端口和 metrics
[root@k8s-master1 ~]# ss -nutlp |grep kube-prox
tcp LISTEN 0 128 192.168.1.101:10256 *:* users:(("kube-proxy",pid=34230,fd=10))
tcp LISTEN 0 128 192.168.1.101:10249 *:* users:(("kube-proxy",pid=34230,fd=11))
注:
10249:http prometheus metrics port;
10256:http healthz port;
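与 kube-scheduler 类似,可以直接抓取 metrics 和 healthz 做确认(示例,IP 以本机实际监听地址为准):
[root@k8s-master1 ~]# curl -s http://192.168.1.101:10249/metrics | head
[root@k8s-master1 ~]# curl -s http://192.168.1.101:10256/healthz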
07-03-07 查看 ipvs 路由规则
[root@k8s-master1 ~]# /usr/sbin/ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr persistent 10800
-> 192.168.1.101:6443 Masq 1 0 0
-> 192.168.1.102:6443 Masq 1 0 0
-> 192.168.1.103:6443 Masq 1 0 0
可见将所有到 kubernetes cluster ip 443 端口的请求都转发到 kube-apiserver 的 6443 端口;
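如果这里看不到任何转发规则,可以先确认 ip_vs 相关内核模块已经加载,并检查 kube-proxy 日志中是否有回退到 iptables 模式的提示(示例):
[root@k8s-master1 ~]# lsmod | grep ip_vs
[root@k8s-master1 ~]# journalctl -u kube-proxy | grep -i ipvs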
08.验证集群功能
本文档使用 daemonset 验证 master 和 worker 节点是否工作正常。
08-01 检查节点状态
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 21h v1.10.4
k8s-master2 Ready <none> 21h v1.10.4
k8s-master3 Ready <none> 21h v1.10.4
都为 Ready 时正常。
08-02 创建测试文件
[root@k8s-master1 ~]# mkdir /opt/k8s/damo
[root@k8s-master1 ~]# cd /opt/k8s/damo
[root@k8s-master1 damo]# vim nginx-ds.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
执行定义文件
[root@k8s-master1 ~]# kubectl create -f /opt/k8s/damo/nginx-ds.yml
service "nginx-ds" created
daemonset.extensions "nginx-ds" created
08-03 检查各 Node 上的 Pod IP 连通性
因为需要拉取镜像、创建 Pod,所以需要等一段时间
[root@k8s-master1 ~]# kubectl get pods -o wide|grep nginx-ds
nginx-ds-7cz4p 1/1 Running 0 4m 10.30.13.2 k8s-master1
nginx-ds-lg585 1/1 Running 0 4m 10.30.19.2 k8s-master3
nginx-ds-zc448 1/1 Running 0 4m 10.30.71.2 k8s-master2
可见,nginx-ds 的 Pod IP 分别是 10.30.13.2、10.30.19.2、10.30.71.2,在所有 Node 上分别 ping 这三个 IP,看是否连通:
[root@k8s-master1 ~]# NODE_IPS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
[root@k8s-master1 ~]# for node_ip in ${NODE_IPS[@]};do
echo ">>> ${node_ip}"
ssh ${node_ip} "ping -c 1 10.30.13.2"
ssh ${node_ip} "ping -c 1 10.30.19.2"
ssh ${node_ip} "ping -c 1 10.30.71.2"
done
08-04 检查服务 IP 和端口可达性
[root@k8s-master1 ~]# kubectl get svc|grep nginx-ds
nginx-ds NodePort 10.96.207.119 <none> 80:9013/TCP 37m
可见:
Service Cluster IP:10.96.207.119
服务端口:80
NodePort 端口:9013
在所有 Node 上 curl Service IP:
[root@k8s-master1 ~]# curl 10.96.207.119
[root@k8s-master2 ~]# curl 10.96.207.119
[root@k8s-master3 ~]# curl 10.96.207.119
预期输出 nginx 欢迎页面内容。
08-05 检查服务的 NodePort 可达性
在所有 Node 上执行:预期输出 nginx 欢迎页面内容。
[root@k8s-master1 ~]# curl 192.168.1.101:9013
[root@k8s-master1 ~]# curl 192.168.1.102:9013
[root@k8s-master1 ~]# curl 192.168.1.103:9013
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>