k8s二进制安装部署

前言:本篇博客是博主踩过无数坑,反复查阅资料,一步步搭建完成后整理的个人心得,分享给大家~~~

本文所需的安装包,都上传在我的网盘中,需要的可以打赏博主一杯咖啡钱,然后私信博主,博主会很快答复呦~

00.组件版本和配置策略
00-01.组件版本
Kubernetes 1.10.4
Docker 18.03.1-ce
Etcd 3.3.7
Flanneld 0.10.0
插件:
Coredns
Dashboard
Heapster (influxdb、grafana)
Metrics-Server
EFK (elasticsearch、fluentd、kibana)
镜像仓库:
docker registry
harbor

00-02.主要配置策略
kube-apiserver:

使用 keepalived 和 haproxy 实现 3 节点高可用;
关闭非安全端口 8080 和匿名访问;
在安全端口 6443 接收 https 请求;
严格的认证和授权策略 (x509、token、RBAC);
开启 bootstrap token 认证,支持 kubelet TLS bootstrapping;
使用 https 访问 kubelet、etcd,加密通信;
kube-controller-manager:

3 节点高可用;
关闭非安全端口,在安全端口 10252 接收 https 请求;
使用 kubeconfig 访问 apiserver 的安全端口;
自动 approve kubelet 证书签名请求 (CSR),证书过期后自动轮转;
各 controller 使用自己的 ServiceAccount 访问 apiserver;
kube-scheduler:

3 节点高可用;
使用 kubeconfig 访问 apiserver 的安全端口;
kubelet:

使用 kubeadm 动态创建 bootstrap token,而不是在 apiserver 中静态配置;
使用 TLS bootstrap 机制自动生成 client 和 server 证书,过期后自动轮转;
在 KubeletConfiguration 类型的 JSON 文件配置主要参数;
关闭只读端口,在安全端口 10250 接收 https 请求,对请求进行认证和授权,拒绝匿名访问和非授权访问;
使用 kubeconfig 访问 apiserver 的安全端口;
kube-proxy:

使用 kubeconfig 访问 apiserver 的安全端口;
在 KubeProxyConfiguration 类型的 JSON 文件配置主要参数;
使用 ipvs 代理模式;
集群插件:

DNS:使用功能、性能更好的 coredns;
Dashboard:支持登录认证;
Metric:heapster、metrics-server,使用 https 访问 kubelet 安全端口;
Log:Elasticsearch、Fluentd、Kibana;
Registry 镜像库:docker-registry、harbor;

01.系统初始化
01-01.集群机器
kube-master:192.168.10.108
kube-node1:192.168.10.109
kube-node2:192.168.10.110
本文档中的 etcd 集群、master 节点、worker 节点均使用这三台机器。

在每个服务器上都要执行以下全部操作,如果没有特殊指明,本文档的所有操作均在kube-master 节点上执行

01-02.主机名
1、设置永久主机名称,然后重新登录

$ sudo hostnamectl set-hostname kube-master

$ sudo hostnamectl set-hostname kube-node1

$ sudo hostnamectl set-hostname kube-node2

2、修改 /etc/hosts 文件,添加主机名和 IP 的对应关系:

$ vim /etc/hosts

192.168.10.108 kube-master

192.168.10.109 kube-node1

192.168.10.110 kube-node2

01-03.添加 k8s 和 docker 账户
1、在每台机器上添加 k8s 账户

$ sudo useradd -m k8s

$ sudo sh -c 'echo along | passwd k8s --stdin' #为k8s 账户设置密码

2、修改visudo权限

$ sudo visudo #去掉%wheel ALL=(ALL) NOPASSWD: ALL这行的注释

$ sudo grep '%wheel.*NOPASSWD: ALL' /etc/sudoers

%wheel ALL=(ALL) NOPASSWD: ALL

3、将k8s用户归到wheel组

$ gpasswd -a k8s wheel

Adding user k8s to group wheel

$ id k8s

uid=1000(k8s) gid=1000(k8s) groups=1000(k8s),10(wheel)

4、在每台机器上添加 docker 账户,将 k8s 账户添加到 docker 组中,同时配置 dockerd 参数(注:安装完docker才有):

$ sudo useradd -m docker

$ sudo gpasswd -a k8s docker

$ sudo mkdir -p /opt/docker/

$ vim /opt/docker/daemon.json #可以后续部署docker时再操作

{

"registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],

"max-concurrent-downloads": 20

}

01-04.无密码 ssh 登录其它节点
1、生成秘钥对

[root@kube-master ~]# ssh-keygen #连续回车即可

2、将自己的公钥发给其他服务器

[root@kube-master ~]# ssh-copy-id root@kube-master

[root@kube-master ~]# ssh-copy-id root@kube-node1

[root@kube-master ~]# ssh-copy-id root@kube-node2

[root@kube-master ~]# ssh-copy-id k8s@kube-master

[root@kube-master ~]# ssh-copy-id k8s@kube-node1

[root@kube-master ~]# ssh-copy-id k8s@kube-node2

01-05.将可执行文件路径 /opt/k8s/bin 添加到 PATH 变量
在每台机器上添加环境变量:

$ sudo sh -c "echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >> /etc/profile.d/k8s.sh"

$ source /etc/profile.d/k8s.sh

01-06.安装依赖包
在每台机器上安装依赖包:

CentOS:

$ sudo yum install -y epel-release

$ sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

Ubuntu:

$ sudo apt-get install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

注:ipvs 依赖 ipset;

01-07.关闭防火墙
在每台机器上关闭防火墙:

① 关闭服务,并设为开机不自启

$ sudo systemctl stop firewalld

$ sudo systemctl disable firewalld

② 清空防火墙规则

$ sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat

$ sudo iptables -P FORWARD ACCEPT

01-08.关闭 swap 分区
1、如果开启了 swap 分区,kubelet 会启动失败(可以通过将参数 --fail-swap-on 设置为false 来忽略 swap on),故需要在每台机器上关闭 swap 分区:

$ sudo swapoff -a

2、为了防止开机自动挂载 swap 分区,可以注释 /etc/fstab 中相应的条目:

$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

01-09.关闭 SELinux
1、关闭 SELinux,否则后续 K8S 挂载目录时可能报错 Permission denied :

$ sudo setenforce 0

2、修改配置文件,永久生效;

$ grep SELINUX /etc/selinux/config

SELINUX=disabled
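注:也可以直接用 sed 修改配置文件,效果与手动编辑相同(参考命令,假设使用默认的 /etc/selinux/config 路径):

$ sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config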

01-10.关闭 dnsmasq (可选)
linux 系统开启了 dnsmasq 后(如 GUI 环境),将系统 DNS Server 设置为 127.0.0.1,这会导致 docker 容器无法解析域名,需要关闭它:

$ sudo service dnsmasq stop

$ sudo systemctl disable dnsmasq

01-11.加载内核模块
$ sudo modprobe br_netfilter

$ sudo modprobe ip_vs
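注:modprobe 只对当前运行的内核临时生效,重启后模块不会自动加载。下面是一个开机自动加载模块的参考做法(基于 systemd-modules-load,文件名 k8s.conf 为自拟示例):

$ sudo tee /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
ip_vs
EOF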

01-12.设置系统参数
$ cat > kubernetes.conf <<EOF

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

$ sudo cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf

$ sudo mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct

注:

tcp_tw_recycle 和 Kubernetes 的 NAT 冲突,必须关闭 ,否则会导致服务不通;
关闭不使用的 IPV6 协议栈,防止触发 docker BUG;
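可以用 sysctl 读取几个关键参数,确认上面的配置已经生效(参考命令,输出应与 kubernetes.conf 中设置的值一致):

$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.tcp_tw_recycle net.ipv6.conf.all.disable_ipv6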

01-13.设置系统时区
1、调整系统 TimeZone

$ sudo timedatectl set-timezone Asia/Shanghai

2、将当前的 UTC 时间写入硬件时钟

$ sudo timedatectl set-local-rtc 0

3、重启依赖于系统时间的服务

$ sudo systemctl restart rsyslog

$ sudo systemctl restart crond

01-14.更新系统时间
$ yum -y install ntpdate

$ sudo ntpdate cn.pool.ntp.org

01-15.创建目录
在每台机器上创建目录:

$ sudo mkdir -p /opt/k8s/bin

$ sudo mkdir -p /opt/k8s/cert

$ sudo mkdir -p /opt/etcd/cert

$ sudo mkdir -p /opt/lib/etcd

$ sudo mkdir -p /opt/k8s/script

$ chown -R k8s /opt/*

01-16.检查系统内核和模块是否适合运行 docker (仅适用于linux 系统)
$ curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh

$ chmod +x check-config.sh

$ bash ./check-config.sh

02.创建 CA 证书和秘钥
为确保安全, kubernetes 系统各组件需要使用 x509 证书对通信进行加密和认证。
CA (Certificate Authority) 是自签名的根证书,用来签名后续创建的其它证书。
本文档使用 CloudFlare 的 PKI 工具集 cfssl 创建所有证书。

02-01.安装 cfssl 工具集
mkdir -p /opt/k8s/cert && sudo chown -R k8s /opt/k8s && cd /opt/k8s

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/*

02-02.创建根证书 (CA)
CA 证书是集群所有节点共享的,只需要创建一个 CA 证书,后续创建的所有证书都由它签名。

02-02-01 创建配置文件

CA 配置文件用于配置根证书的使用场景 (profile) 和具体参数 (usage,过期时间、服务端认证、客户端认证、加密等),后续在签名其它证书时需要指定特定场景。

[root@kube-master ~]# cd /opt/k8s/cert

[root@kube-master cert]# vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
注:

① signing :表示该证书可用于签名其它证书,生成的 ca.pem 证书中CA=TRUE ;

② server auth :表示 client 可以用该证书对 server 提供的证书进行验证;

③ client auth :表示 server 可以用该证书对 client 提供的证书进行验证;

02-02-02 创建证书签名请求文件

[root@kube-master cert]# vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
注:

① CN: Common Name ,kube-apiserver 从证书中提取该字段作为请求的用户名(User Name),浏览器使用该字段验证网站是否合法;

② O: Organization ,kube-apiserver 从证书中提取该字段作为请求用户所属的组(Group);

③ kube-apiserver 将提取的 User、Group 作为 RBAC 授权的用户标识;

02-02-03 生成 CA 证书和私钥

[root@kube-master cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

[root@kube-master cert]# ls

ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
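生成后可以用 cfssl-certinfo 查看证书内容,确认 CN、O 等字段和 CA 属性符合预期(参考命令):

[root@kube-master cert]# /opt/k8s/bin/cfssl-certinfo -cert ca.pem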

02-02-04 分发证书文件

将生成的 CA 证书、秘钥文件、配置文件拷贝到所有节点的/opt/k8s/cert 目录下:

[root@kube-master ~]# vim /opt/k8s/script/scp_k8scert.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/cert && chown -R k8s /opt/k8s"
    scp /opt/k8s/cert/ca*.pem /opt/k8s/cert/ca-config.json k8s@${node_ip}:/opt/k8s/cert
done
[root@kube-master ~]# chmod +x /opt/k8s/script/scp_k8scert.sh && /opt/k8s/script/scp_k8scert.sh

03.部署 kubectl 命令行工具
  kubectl 是 kubernetes 集群的命令行管理工具,本文档介绍安装和配置它的步骤。

kubectl 默认从 ~/.kube/config 文件读取 kube-apiserver 地址、证书、用户名等信息,如果没有配置,执行 kubectl 命令时可能会出错:

$ kubectl get pods

The connection to the server localhost:8080 was refused - did you specify the right host or port?

本文档只需要部署一次,生成的 kubeconfig 文件与机器无关。

03-01.下载kubectl 二进制文件
下载和解压

kubectl二进制文件需要科学上网下载,我已经下载到我的网盘,有需要的小伙伴联系我~

[root@kube-master ~]# wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz

[root@kube-master ~]# tar -xzvf kubernetes-client-linux-amd64.tar.gz

03-02.创建 admin 证书和私钥
kubectl 与 apiserver https 安全端口通信,apiserver 对提供的证书进行认证和授权。
kubectl 作为集群的管理工具,需要被授予最高权限。这里创建具有最高权限的admin 证书。
03-02-01 创建证书签名请求

[root@kube-master ~]# cd /opt/k8s/cert/

cat > admin-csr.json <<EOF

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
注:

① O 为 system:masters ,kube-apiserver 收到该证书后将请求的 Group 设置为system:masters;

② 预定义的 ClusterRoleBinding cluster-admin 将 Group system:masters 与Role cluster-admin 绑定,该 Role 授予所有 API的权限;

③ 该证书只会被 kubectl 当做 client 证书使用,所以 hosts 字段为空;

03-02-02 生成证书和私钥

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes admin-csr.json | cfssljson -bare admin

[root@kube-master cert]# ls admin*

admin.csr admin-csr.json admin-key.pem admin.pem

03-03.创建和分发 kubeconfig 文件
03-03-01 创建kubeconfig文件

kubeconfig 为 kubectl 的配置文件,包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;

① 设置集群参数,(--server=${KUBE_APISERVER} ,指定IP和端口;我使用的是haproxy的VIP和端口;如果没有haproxy代理,就用实际服务的IP和端口;如:https://192.168.10.108:6443)

[root@kube-master ~]# kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/cert/ca.pem \

--embed-certs=true \

--server=https://192.168.10.10:8443 \

--kubeconfig=/root/.kube/kubectl.kubeconfig

② 设置客户端认证参数

[root@kube-master ~]# kubectl config set-credentials kube-admin \

--client-certificate=/opt/k8s/cert/admin.pem \

--client-key=/opt/k8s/cert/admin-key.pem \

--embed-certs=true \

--kubeconfig=/root/.kube/kubectl.kubeconfig

③ 设置上下文参数

[root@kube-master ~]# kubectl config set-context kube-admin@kubernetes \

--cluster=kubernetes \

--user=kube-admin \

--kubeconfig=/root/.kube/kubectl.kubeconfig

④ 设置默认上下文

[root@kube-master ~]# kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig

注:在后续kubernetes认证,文章中会详细讲解

--certificate-authority :验证 kube-apiserver 证书的根证书;
--client-certificate 、 --client-key :刚生成的 admin 证书和私钥,连接 kube-apiserver 时使用;
--embed-certs=true :将 ca.pem 和 admin.pem 证书内容嵌入到生成的kubectl.kubeconfig 文件中(不加时,写入的是证书文件路径);
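注:/opt/k8s/script/kubectl_environment.sh 的内容文中没有单独贴出,它大致就是把上面 ①~④ 四条命令合并成一个脚本,下面是一个参考写法(仅作示意):

#!/bin/bash
KUBE_APISERVER="https://192.168.10.10:8443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/root/.kube/kubectl.kubeconfig
kubectl config set-credentials kube-admin \
--client-certificate=/opt/k8s/cert/admin.pem \
--client-key=/opt/k8s/cert/admin-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kubectl.kubeconfig
kubectl config set-context kube-admin@kubernetes \
--cluster=kubernetes \
--user=kube-admin \
--kubeconfig=/root/.kube/kubectl.kubeconfig
kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig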
[root@kube-master ~]# chmod +x /opt/k8s/script/kubectl_environment.sh && /opt/k8s/script/kubectl_environment.sh

03-03-02 验证kubeconfig文件

[root@kube-master ~]# ls /root/.kube/kubectl.kubeconfig

/root/.kube/kubectl.kubeconfig

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kubectl.kubeconfig

apiVersion: v1
clusters:

- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-admin
  name: kube-admin@kubernetes
current-context: kube-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kube-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

03-03-03 分发 kubectl 和 kubeconfig 文件到所有使用 kubectl 命令的节点

[root@kube-master ~]# vim /opt/k8s/script/scp_kubectl.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/kubernetes/client/bin/kubectl k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    ssh k8s@${node_ip} "mkdir -p ~/.kube"
    scp ~/.kube/config k8s@${node_ip}:~/.kube/config
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp ~/.kube/config root@${node_ip}:~/.kube/config
done
[root@kube-master ~]# chmod +x /opt/k8s/script/scp_kubectl.sh && /opt/k8s/script/scp_kubectl.sh

04.部署 etcd 集群
  etcd 是基于 Raft 的分布式 key-value 存储系统,由 CoreOS 开发,常用于服务发现、共享配置以及并发控制(如 leader 选举、分布式锁等)。kubernetes 使用 etcd 存储所有运行数据。

本文档介绍部署一个三节点高可用 etcd 集群的步骤:

① 下载和分发 etcd 二进制文件

② 创建 etcd 集群各节点的 x509 证书,用于加密客户端(如 etcdctl) 与 etcd 集群、etcd 集群之间的数据流;

③ 创建 etcd 的 systemd unit 文件,配置服务参数;

④ 检查集群工作状态;

04-01.下载etcd 二进制文件
到 https://github.com/coreos/etcd/releases 页面下载最新版本的发布包:

[root@kube-master ~]# wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz

[root@kube-master ~]# tar -xvf etcd-v3.3.7-linux-amd64.tar.gz

04-02.创建 etcd 证书和私钥
04-02-01 创建证书签名请求

[root@kube-master ~]# cd /opt/etcd/cert

[root@kube-master cert]# cat > etcd-csr.json <<EOF

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

注:hosts 字段指定授权使用该证书的 etcd 节点 IP 或域名列表,这里将 etcd 集群的三个节点 IP 都列在其中;

04-02-02 生成证书和私钥

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

[root@kube-master cert]# ls etcd*

etcd.csr etcd-csr.json etcd-key.pem etcd.pem

04-02-03 分发生成的证书和私钥到各 etcd 节点

[root@kube-master ~]# vim /opt/k8s/script/scp_etcd.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/etcd-v3.3.7-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    ssh root@${node_ip} "mkdir -p /opt/etcd/cert && chown -R k8s /opt/etcd/cert"
    scp /opt/etcd/cert/etcd*.pem k8s@${node_ip}:/opt/etcd/cert/
done

04-03.创建etcd 的systemd unit 模板及etcd 配置文件
04-03-01 创建etcd 的systemd unit 模板

[root@kube-master ~]# cat > /opt/etcd/etcd.service.template <<EOF

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
User=k8s
Type=notify
WorkingDirectory=/opt/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/opt/lib/etcd \
  --name=##NODE_NAME## \
  --cert-file=/opt/etcd/cert/etcd.pem \
  --key-file=/opt/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/opt/k8s/cert/ca.pem \
  --peer-cert-file=/opt/etcd/cert/etcd.pem \
  --peer-key-file=/opt/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/opt/k8s/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://##NODE_IP##:2380 \
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://##NODE_IP##:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd0=https://192.168.10.108:2380,etcd1=https://192.168.10.109:2380,etcd2=https://192.168.10.110:2380 \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

注:

User :指定以 k8s 账户运行;
WorkingDirectory 、 --data-dir :指定工作目录和数据目录为/opt/lib/etcd ,需在启动服务前创建这个目录;
--name :指定节点名称,当 --initial-cluster-state 值为 new 时, --name 的参数值必须位于 --initial-cluster 列表中;
--cert-file 、 --key-file :etcd server 与 client 通信时使用的证书和私钥;
--trusted-ca-file :签名 client 证书的 CA 证书,用于验证 client 证书;
--peer-cert-file 、 --peer-key-file :etcd 与 peer 通信使用的证书和私钥;
--peer-trusted-ca-file :签名 peer 证书的 CA 证书,用于验证 peer 证书;

04-04.为各节点创建和分发 etcd systemd unit 文件
[root@kube-master ~]# cd /opt/k8s/script

[root@kube-master script]# vim etcd_service.sh

NODE_NAMES=("etcd0" "etcd1" "etcd2")
NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
#替换模板文件中的变量,为各节点创建 systemd unit 文件
for (( i=0; i < 3; i++ ));do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/g" -e "s/##NODE_IP##/${NODE_IPS[i]}/g" /opt/etcd/etcd.service.template > /opt/etcd/etcd-${NODE_IPS[i]}.service
done
#分发生成的 systemd unit 文件:
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/lib/etcd && chown -R k8s /opt/lib/etcd"
    scp /opt/etcd/etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
[root@kube-master script]# chmod +x /opt/k8s/script/etcd_service.sh && /opt/k8s/script/etcd_service.sh

[root@kube-master script]# ls /opt/etcd/*.service

/opt/etcd/etcd-192.168.10.108.service /opt/etcd/etcd-192.168.10.110.service

/opt/etcd/etcd-192.168.10.109.service

[root@kube-master script]# ls /etc/systemd/system/etcd.service

/etc/systemd/system/etcd.service

04-05.启动 etcd 服务
[root@kube-master script]# vim /opt/k8s/script/etcd.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
#启动 etcd 服务
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"
done
#检查启动结果,确保状态为 active (running)
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status etcd|grep Active"
done
#验证服务状态,输出均为healthy 时表示集群服务正常
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
        --endpoints=https://${node_ip}:2379 \
        --cacert=/opt/k8s/cert/ca.pem \
        --cert=/opt/etcd/cert/etcd.pem \
        --key=/opt/etcd/cert/etcd-key.pem endpoint health
done
[root@kube-master script]# chmod +x etcd.sh && ./etcd.sh

192.168.10.108

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.

192.168.10.109

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.

192.168.10.110

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.

#确保状态为 active (running),否则查看日志,确认原因:$ journalctl -u etcd

192.168.10.108

Active: active (running) since Mon 2018-11-26 17:41:00 CST; 12min ago

192.168.10.109

Active: active (running) since Mon 2018-11-26 17:41:00 CST; 12min ago

192.168.10.110

Active: active (running) since Mon 2018-11-26 17:41:01 CST; 12min ago

#输出均为healthy 时表示集群服务正常

192.168.10.108

https://192.168.10.108:2379 is healthy: successfully committed proposal: took = 1.373318ms

192.168.10.109

https://192.168.10.109:2379 is healthy: successfully committed proposal: took = 2.371807ms

192.168.10.110

https://192.168.10.110:2379 is healthy: successfully committed proposal: took = 1.764309ms
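除了 endpoint health,也可以用 etcdctl 查看成员列表和各节点状态(包含 leader 信息),作为补充检查(参考命令,v3 API):

[root@kube-master ~]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem member list

[root@kube-master ~]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem endpoint status -w table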

05.部署 flannel 网络
kubernetes 要求集群内各节点(包括 master 节点)能通过 Pod 网段互联互通。flannel 使用 vxlan 技术为各节点创建一个可以互通的 Pod 网络,使用的端口为 UDP 8472,需要开放该端口(如公有云 AWS 等)。
flannel 第一次启动时,从 etcd 获取 Pod 网段信息,为本节点分配一个未使用的 /24段地址,然后创建 flannel.1 (也可能是其它名称,如 flannel1 等) 接口。
flannel 将分配的 Pod 网段信息写入 /run/flannel/docker 文件,docker 后续使用这个文件中的环境变量设置 docker0 网桥。

05-01.下载flanneld 二进制文件
到 https://github.com/coreos/flannel/releases 页面下载最新版本的发布包:

[root@kube-master ~]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

[root@kube-master ~]# mkdir flannel && tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel

05-02.创建 flannel 证书和私钥
flannel 从 etcd 集群存取网段分配信息,而 etcd 集群启用了双向 x509 证书认证,所以需要为 flanneld 生成证书和私钥。

05-02-01 创建证书签名请求:

[root@kube-master ~]# mkdir -p /opt/flannel/cert && cd /opt/flannel/cert

cat > flanneld-csr.json <<EOF

{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

该证书只会被 flanneld 当做 client 证书使用,所以 hosts 字段为空;

05-02-02 生成证书和私钥

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

[root@kube-master cert]# ls flanneld*

flanneld.csr flanneld-csr.json flanneld-key.pem flanneld.pem

05-02-03 将 flanneld 二进制文件和生成的证书、私钥分发到所有节点

cat > /opt/k8s/script/scp_flannel.sh << "EOF"
NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    ssh root@${node_ip} "mkdir -p /opt/flannel/cert && chown -R k8s /opt/flannel"
    scp /opt/flannel/cert/flanneld*.pem k8s@${node_ip}:/opt/flannel/cert
done
EOF

chmod +x /opt/k8s/script/scp_flannel.sh && /opt/k8s/script/scp_flannel.sh

05-03.向etcd 写入集群Pod 网段信息
注意:本步骤只需执行一次。

[root@kube-master ~]# etcdctl \

--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \

--ca-file=/opt/k8s/cert/ca.pem \

--cert-file=/opt/flannel/cert/flanneld.pem \

--key-file=/opt/flannel/cert/flanneld-key.pem \

set /atomic.io/network/config '{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

注:

flanneld 当前版本 (v0.10.0) 不支持 etcd v3,故使用 etcd v2 API 写入配置 key 和网段数据;
写入的 Pod 网段 “Network” 必须是 /16 段地址,必须与kube-controller-manager 的 --cluster-cidr 参数值一致;

05-04.创建 flanneld 的 systemd unit 文件
[root@kube-master ~]# cat > /opt/flannel/flanneld.service << EOF

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/opt/k8s/cert/ca.pem \
  -etcd-certfile=/opt/flannel/cert/flanneld.pem \
  -etcd-keyfile=/opt/flannel/cert/flanneld-key.pem \
  -etcd-endpoints=https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379 \
  -etcd-prefix=/atomic.io/network \
  -iface=eth1
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
注:

mk-docker-opts.sh 脚本将分配给 flanneld 的 Pod 子网网段信息写入/run/flannel/docker 文件,后续 docker 启动时使用这个文件中的环境变量配置 docker0 网桥;
flanneld 使用系统缺省路由所在的接口与其它节点通信,对于有多个网络接口(如内网和公网)的节点,可以用 -iface 参数指定通信接口,如上面的 eth1 接口;
flanneld 运行时需要 root 权限;
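flanneld 正常启动后,可以在任一节点查看它写出的环境变量文件,确认子网参数已经生成(参考命令;如果文件不存在,说明 flanneld 或 ExecStartPost 脚本没有成功执行):

$ cat /run/flannel/subnet.env

$ cat /run/flannel/docker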

05-05.分发flanneld systemd unit 文件到所有节点,启动并检查flanneld 服务
[root@kube-master ~]# vim /opt/k8s/script/flanneld_service.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    #分发 flanneld systemd unit 文件到所有节点
    scp /opt/flannel/flanneld.service root@${node_ip}:/etc/systemd/system/
    #启动 flanneld 服务
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl start flanneld"
    #检查启动结果
    ssh root@${node_ip} "systemctl status flanneld|grep Active"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/flanneld_service.sh && /opt/k8s/script/flanneld_service.sh

注:确保状态为 active (running) ,否则查看日志,确认原因:

$ journalctl -u flanneld

05-06.检查分配给各 flanneld 的 Pod 网段信息
05-06-01 查看集群 Pod 网段(/16)

[root@kube-master ~]# etcdctl \

--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \

--ca-file=/opt/k8s/cert/ca.pem \

--cert-file=/opt/flannel/cert/flanneld.pem \

--key-file=/opt/flannel/cert/flanneld-key.pem \

get /atomic.io/network/config

输出:

{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

05-06-02 查看已分配的 Pod 子网段列表(/24)

[root@kube-master ~]# etcdctl \

--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \

--ca-file=/opt/k8s/cert/ca.pem \

--cert-file=/opt/flannel/cert/flanneld.pem \

--key-file=/opt/flannel/cert/flanneld-key.pem \

ls /atomic.io/network/subnets

输出:

/atomic.io/network/subnets/10.30.22.0-24

/atomic.io/network/subnets/10.30.33.0-24

/atomic.io/network/subnets/10.30.44.0-24

05-06-03 查看某一 Pod 网段对应的节点 IP 和 flannel 接口地址

[root@kube-master ~]# etcdctl \

--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \

--ca-file=/opt/k8s/cert/ca.pem \

--cert-file=/opt/flannel/cert/flanneld.pem \

--key-file=/opt/flannel/cert/flanneld-key.pem \

get /atomic.io/network/subnets/10.30.22.0-24

输出:

{"PublicIP":"192.168.10.108","BackendType":"vxlan","BackendData":{"VtepMAC":"fe:20:82:76:fc:25"}}

05-06-04 验证各节点能通过 Pod 网段互通

[root@kube-master ~]# vim /opt/k8s/script/ping_flanneld.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    #在各节点上部署 flannel 后,检查是否创建了 flannel 接口(名称可能为 flannel0、flannel.0、flannel.1 等)
    ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
    #在各节点上 ping 所有 flannel 接口 IP,确保能通
    ssh ${node_ip} "ping -c 1 10.30.22.0"
    ssh ${node_ip} "ping -c 1 10.30.33.0"
    ssh ${node_ip} "ping -c 1 10.30.44.0"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/ping_flanneld.sh && /opt/k8s/script/ping_flanneld.sh

06.部署 master 节点
① kubernetes master 节点运行如下组件:

kube-apiserver
kube-scheduler
kube-controller-manager
② kube-scheduler 和 kube-controller-manager 可以以集群模式运行,通过 leader 选举产生一个工作进程,其它进程处于阻塞模式。

③ 对于 kube-apiserver,可以运行多个实例(本文档是 3 实例),但对其它组件需要提供统一的访问地址,该地址需要高可用。本文档使用 keepalived 和 haproxy 实现 kube-apiserver VIP 高可用和负载均衡。

④ 因为对 master 做了 keepalived 高可用,所以 3 台服务器都有可能成为主 master(主 master 宕机后,备节点会升级为主);因此所有的 master 操作,在 3 台服务器上都要进行。

1、下载最新版本的二进制文件

从CHANGELOG 页面 下载 server tarball 文件。这2个包下载也需要科学上网。

[root@kube-master ~]# wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz

[root@kube-master ~]# tar -xzvf kubernetes-server-linux-amd64.tar.gz

[root@kube-master ~]# cd kubernetes/

[root@kube-master kubernetes]# tar -xzvf kubernetes-src.tar.gz

2、将二进制文件拷贝到所有 master 节点

[root@kube-master ~]# vim /opt/k8s/script/scp_master.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/kubernetes/server/bin/* k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/scp_master.sh && /opt/k8s/script/scp_master.sh

06-01.部署高可用组件
① 本文档讲解使用 keepalived 和 haproxy 实现 kube-apiserver 高可用的步骤:

keepalived 提供 kube-apiserver 对外服务的 VIP;
haproxy 监听 VIP,后端连接所有 kube-apiserver 实例,提供健康检查和负载均衡功能;
② 运行 keepalived 和 haproxy 的节点称为 LB 节点。由于 keepalived 是一主多备运行模式,故至少两个 LB 节点。

③ 本文档复用 master 节点的三台机器,haproxy 监听的端口(8443) 需要与 kube-apiserver的端口 6443 不同,避免冲突。

④ keepalived 在运行过程中周期检查本机的 haproxy 进程状态,如果检测到 haproxy 进程异常,则触发重新选主的过程,VIP 将飘移到新选出来的主节点,从而实现 VIP 的高可用。

⑤ 所有组件(如 kubeclt、apiserver、controller-manager、scheduler 等)都通过 VIP 和haproxy 监听的 8443 端口访问 kube-apiserver 服务。

06-01-01 安装软件包,配置haproxy 配置文件
[root@kube-master ~]# yum install -y keepalived haproxy

[root@kube-master ~]# vim /etc/haproxy/haproxy.cfg

[root@kube-master ~]# cat /etc/haproxy/haproxy.cfg

global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /var/run/haproxy-admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
nbproc 1
defaults
log global
timeout connect 5000
timeout client 10m
timeout server 10m
listen admin_stats
bind 0.0.0.0:10080
mode http
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /status
stats realm welcome login\ Haproxy
stats auth along:along123
stats hide-version
stats admin if TRUE
listen kube-master
bind 0.0.0.0:8443
mode tcp
option tcplog
balance source
server 192.168.10.108 192.168.10.108:6443 check inter 2000 fall 2 rise 2 weight 1
server 192.168.10.109 192.168.10.109:6443 check inter 2000 fall 2 rise 2 weight 1
server 192.168.10.110 192.168.10.110:6443 check inter 2000 fall 2 rise 2 weight 1
注:

haproxy 在 10080 端口输出 status 信息;
haproxy 监听所有接口的 8443 端口,该端口与环境变量 ${KUBE_APISERVER} 指定的端口必须一致;
server 字段列出所有kube-apiserver监听的 IP 和端口;

06-01-02 在其他服务器安装、下发haproxy 配置文件;并启动检查haproxy服务
[root@kube-master ~]# vim /opt/k8s/script/haproxy.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    #安装haproxy
    ssh root@${node_ip} "yum install -y keepalived haproxy"
    #下发配置文件
    scp /etc/haproxy/haproxy.cfg root@${node_ip}:/etc/haproxy
    #启动检查haproxy服务
    ssh root@${node_ip} "systemctl restart haproxy"
    ssh root@${node_ip} "systemctl enable haproxy.service"
    ssh root@${node_ip} "systemctl status haproxy|grep Active"
    #检查 haproxy 是否监听8443 端口
    ssh root@${node_ip} "netstat -lnpt|grep haproxy"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/haproxy.sh && /opt/k8s/script/haproxy.sh

确保输出类似于:

tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 5351/haproxy

tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 5351/haproxy

06-01-03 配置和启动 keepalived 服务
keepalived 是一主(master)多备(backup)运行模式,故有两种类型的配置文件。

master 配置文件只有一份,backup 配置文件视节点数目而定,对于本文档而言,规划如下:

master: 192.168.10.108
backup:192.168.10.109、192.168.10.110

(1)在192.168.10.108 master服务;配置文件:

[root@kube-master ~]# vim /etc/keepalived/keepalived.conf

global_defs {
router_id keepalived_hap
}
vrrp_script check-haproxy {
script "killall -0 haproxy"
interval 5
weight -30
}
vrrp_instance VI-kube-master {
state MASTER
priority 120
dont_track_primary
interface eth1
virtual_router_id 68
advert_int 3
track_script {
check-haproxy
}
virtual_ipaddress {
192.168.10.10
}
}
注:

我的 VIP 所在的接口 interface 为 eth1;根据自己的情况改变
使用 killall -0 haproxy 命令检查所在节点的 haproxy 进程是否正常。如果异常则将权重减少(-30),从而触发重新选主过程;
router_id、virtual_router_id 用于标识属于该 HA 的 keepalived 实例,如果有多套keepalived HA,则必须各不相同;

(2)在两台backup 服务;配置文件:

[root@kube-node1 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
router_id keepalived_hap
}
vrrp_script check-haproxy {
script "killall -0 haproxy"
interval 5
weight -30
}
vrrp_instance VI-kube-master {
state BACKUP
priority 110 #第2台从为100
dont_track_primary
interface eth1
virtual_router_id 68
advert_int 3
track_script {
check-haproxy
}
virtual_ipaddress {
192.168.10.10
}
}
注:

我的 VIP 所在的接口 interface 为 eth1;根据自己的情况改变
使用 killall -0 haproxy 命令检查所在节点的 haproxy 进程是否正常。如果异常则将权重减少(-30),从而触发重新选主过程;
router_id、virtual_router_id 用于标识属于该 HA 的 keepalived 实例,如果有多套keepalived HA,则必须各不相同;
priority 的值必须小于 master 的值;两个从的值也需要不一样;

(3)开启keepalived 服务

[root@kube-master ~]# vim /opt/k8s/script/keepalived.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
VIP="192.168.10.10"
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl restart keepalived && systemctl enable keepalived"
    ssh root@${node_ip} "systemctl status keepalived|grep Active"
    ssh ${node_ip} "ping -c 1 ${VIP}"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/keepalived.sh && /opt/k8s/script/keepalived.sh

(4)在master服务器上能看到eth1网卡上已经有192.168.10.10 VIP了

[root@kube-master ~]# ip a show eth1

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:50:56:22:1b:39 brd ff:ff:ff:ff:ff:ff

inet 192.168.10.108/24 brd 192.168.10.255 scope global eth1

   valid_lft forever preferred_lft forever

inet 192.168.10.10/32 scope global eth1

   valid_lft forever preferred_lft forever

06-01-04 查看 haproxy 状态页面
浏览器访问192.168.10.10:10080/status 地址

① 输入用户名、密码;在配置文件中自己定义的

② 查看 haproxy 状态页面

06-02.部署 kube-apiserver 组件
本文档讲解使用 keepalived 和 haproxy 部署一个 3 节点高可用 master 集群的步骤,对应的 LB VIP 为环境变量 ${MASTER_VIP}。

准备工作:下载最新版本的二进制文件、安装和配置 flanneld

06-02-01 创建 kubernetes 证书和私钥
(1)创建证书签名请求:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kubernetes-csr.json <<EOF

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110",
    "192.168.10.10",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

注:

hosts 字段指定授权使用该证书的 IP 或域名列表,这里列出了 VIP 、apiserver节点 IP、kubernetes 服务 IP 和域名;
域名最后字符不能是 . (如不能为kubernetes.default.svc.cluster.local. ),否则解析时失败,提示: x509:cannot parse dnsName “kubernetes.default.svc.cluster.local.” ;
如果使用非 cluster.local 域名,如 opsnull.com ,则需要修改域名列表中的最后两个域名为: kubernetes.default.svc.opsnull 、 kubernetes.default.svc.opsnull.com
kubernetes 服务 IP 是 apiserver 自动创建的,一般是 --service-cluster-ip-range 参数指定的网段的第一个IP,后续可以通过如下命令获取:
[root@kube-master ~]# kubectl get svc kubernetes

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.96.0.1 443/TCP 4d

(2)生成证书和私钥

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

[root@kube-master cert]# ls kubernetes*

kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem

06-02-02 创建加密配置文件
① 产生一个用来加密Etcd 的 Key:

[root@kube-master ~]# head -c 32 /dev/urandom | base64

uS+YQXYoi1nxvI1pfSc2wRt64h/Iu5/4GxCuSvN+/jI=

注意:每台master节点需要用一样的 Key

② 使用这个加密的key,创建加密配置文件

[root@kube-master cert]# vim encryption-config.yaml

kind: EncryptionConfig
apiVersion: v1
resources:

- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: uS+YQXYoi1nxvI1pfSc2wRt64h/Iu5/4GxCuSvN+/jI=
  - identity: {}

06-02-03 将生成的证书和私钥文件、加密配置文件拷贝到master 节点的/opt/k8s目录下
[root@kube-master cert]# vim /opt/k8s/script/scp_apiserver.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/cert/ && sudo chown -R k8s /opt/k8s/cert/"
    scp /opt/k8s/cert/kubernetes*.pem k8s@${node_ip}:/opt/k8s/cert/
    scp /opt/k8s/cert/encryption-config.yaml root@${node_ip}:/opt/k8s/
done
[root@kube-master cert]# chmod +x /opt/k8s/script/scp_apiserver.sh && /opt/k8s/script/scp_apiserver.sh

06-02-04 创建 kube-apiserver systemd unit 模板文件
cat > /opt/apiserver/kube-apiserver.service.template <<EOF

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/opt/k8s/encryption-config.yaml \
  --advertise-address=##NODE_IP## \
  --bind-address=##NODE_IP## \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --service-node-port-range=1-32767 \
  --tls-cert-file=/opt/k8s/cert/kubernetes.pem \
  --tls-private-key-file=/opt/k8s/cert/kubernetes-key.pem \
  --client-ca-file=/opt/k8s/cert/ca.pem \
  --kubelet-client-certificate=/opt/k8s/cert/kubernetes.pem \
  --kubelet-client-key=/opt/k8s/cert/kubernetes-key.pem \
  --service-account-key-file=/opt/k8s/cert/ca-key.pem \
  --etcd-cafile=/opt/k8s/cert/ca.pem \
  --etcd-certfile=/opt/k8s/cert/kubernetes.pem \
  --etcd-keyfile=/opt/k8s/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/opt/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

注:

--experimental-encryption-provider-config :启用加密特性;
--authorization-mode=Node,RBAC : 开启 Node 和 RBAC 授权模式,拒绝未授权的请求;
--enable-admission-plugins :启用 ServiceAccount 和 NodeRestriction ;
--service-account-key-file :签名 ServiceAccount Token 的公钥文件,kube-controller-manager 的 --service-account-private-key-file 指定私钥文件,两者配对使用;
--tls-*-file :指定 apiserver 使用的证书、私钥和 CA 文件。 --client-ca-file 用于验证 client (kube-controller-manager、kube-scheduler、kubelet、kube-proxy 等)请求所带的证书;
--kubelet-client-certificate 、 --kubelet-client-key :如果指定,则使用 https 访问 kubelet APIs;需要为证书对应的用户(上面 kubernetes.pem 证书的用户为 kubernetes)定义 RBAC 规则,否则访问 kubelet API 时提示未授权;
--bind-address : 不能为 127.0.0.1 ,否则外界不能访问它的安全端口 6443;
--insecure-port=0 :关闭监听非安全端口(8080);
--service-cluster-ip-range : 指定 Service Cluster IP 地址段;
--service-node-port-range : 指定 NodePort 的端口范围;
--runtime-config=api/all=true : 启用所有版本的 APIs,如 autoscaling/v2alpha1;
--enable-bootstrap-token-auth :启用 kubelet bootstrap 的 token 认证;
--apiserver-count=3 :指定集群运行模式,多台 kube-apiserver 会通过 leader 选举产生一个工作节点,其它节点处于阻塞状态;
User=k8s :使用 k8s 账户运行;

06-02-05 为各节点创建和分发 kube-apiserver systemd unit文件;启动检查 kube-apiserver 服务
[root@kube-master ~]# vim /opt/k8s/script/apiserver_service.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
#替换模板文件中的变量,为各节点创建 systemd unit 文件
for (( i=0; i < 3; i++ ));do
    sed "s/##NODE_IP##/${NODE_IPS[i]}/" /opt/apiserver/kube-apiserver.service.template > /opt/apiserver/kube-apiserver-${NODE_IPS[i]}.service
done
#启动并检查 kube-apiserver 服务
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    scp /opt/apiserver/kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver"
    ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/apiserver_service.sh && /opt/k8s/script/apiserver_service.sh

确保状态为 active (running) ,否则到 master 节点查看日志,确认原因:

journalctl -u kube-apiserver
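除了看 systemd 状态,也可以用前面生成的 admin 证书直接请求某个实例的 /healthz 接口做快速确认(参考命令,正常时返回 ok):

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem \
--cert /opt/k8s/cert/admin.pem \
--key /opt/k8s/cert/admin-key.pem \
https://192.168.10.108:6443/healthz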

06-02-06 打印 kube-apiserver 写入 etcd 的数据
[root@kube-master ~]# ETCDCTL_API=3 etcdctl \

--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \

--cacert=/opt/k8s/cert/ca.pem \

--cert=/opt/etcd/cert/etcd.pem \

--key=/opt/etcd/cert/etcd-key.pem \

get /registry/ --prefix --keys-only
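如果想确认 --experimental-encryption-provider-config 确实生效,可以先创建一个测试用 secret,再从 etcd 中直接读取原始数据:看到的应是以 k8s:enc:aescbc:v1: 开头的密文而不是明文(参考命令,secret 名称 test-enc 为自拟示例):

[root@kube-master ~]# kubectl create secret generic test-enc --from-literal=foo=bar -n default

[root@kube-master ~]# ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/secrets/default/test-enc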

06-02-07 检查集群信息
[root@kube-master ~]# kubectl cluster-info

Kubernetes master is running at https://192.168.10.108:6443

To further debug and diagnose cluster problems, use ‘kubectl cluster-info dump’.

[root@kube-master ~]# kubectl get all --all-namespaces

NAMESPACE  NAME       TYPE   CLUSTER-IP  EXTERNAL-IP PORT(S) AGE

default     service/kubernetes ClusterIP 10.96.0.1         443/TCP 16h

[root@kube-master ~]# kubectl get componentstatuses

NAME     STATUS MESSAGE ERROR

scheduler    Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused

controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused

etcd-1     Healthy {"health":"true"}

etcd-2     Healthy {"health":"true"}

etcd-0     Healthy {"health":"true"}

注意:

① 如果执行 kubectl 命令式时输出如下错误信息,则说明使用的 ~/.kube/config文件不对,请切换到正确的账户后再执行该命令:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

② 执行 kubectl get componentstatuses 命令时,apiserver 默认向 127.0.0.1 发送请求。当controller-manager、scheduler 以集群模式运行时,有可能和 kube-apiserver 不在一台机器上,这时 controller-manager 或 scheduler 的状态为Unhealthy,但实际上它们工作正常。

06-02-08 检查 kube-apiserver 监听的端口
[root@kube-master ~]# ss -nutlp |grep apiserver

tcp LISTEN 0 128 192.168.10.108:6443 *:* users:(("kube-apiserver",pid=929,fd=5))

6443: 接收 https 请求的安全端口,对所有请求做认证和授权;
由于关闭了非安全端口,故没有监听 8080;

06-02-09 授予 kubernetes 证书访问 kubelet API 的权限
在执行 kubectl exec、run、logs 等命令时,apiserver 会转发到 kubelet。这里定义RBAC 规则,授权 apiserver 调用 kubelet API。

[root@kube-master ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

clusterrolebinding.rbac.authorization.k8s.io "kube-apiserver:kubelet-apis" created

06-03.部署高可用kube-controller-manager 集群
  本文档介绍部署高可用 kube-controller-manager 集群的步骤。

该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。

为保证通信安全,本文档先生成 x509 证书和私钥,kube-controller-manager 在如下两种情况下使用该证书:

① 与 kube-apiserver 的安全端口通信时;

② 在安全端口(https,10252) 输出 prometheus 格式的 metrics;

准备工作:下载最新版本的二进制文件、安装和配置 flanneld

06-03-01 创建 kube-controller-manager 证书和私钥
创建证书签名请求:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kube-controller-manager-csr.json <<EOF

{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF

注:

hosts 列表包含所有 kube-controller-manager 节点 IP;
CN 为 system:kube-controller-manager、O 为 system:kube-controller-manager,kubernetes 内置的 ClusterRoleBindings system:kube-controller-manager 赋予kube-controller-manager 工作所需的权限。

06-03-02 生成证书和私钥
[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

[root@kube-master cert]# ls kube-controller-manager*

kube-controller-manager.csr    kube-controller-manager-key.pem

kube-controller-manager-csr.json kube-controller-manager.pem

06-03-03 创建kubeconfig 文件
kubeconfig 文件包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;

① 执行命令,生产kube-controller-manager.kubeconfig文件

[root@kube-master ~]# kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/cert/ca.pem \

--embed-certs=true \

--server=https://192.168.10.10:8443 \

--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config set-credentials system:kube-controller-manager \

--client-certificate=/opt/k8s/cert/kube-controller-manager.pem \

--client-key=/opt/k8s/cert/kube-controller-manager-key.pem \

--embed-certs=true \

--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config set-context system:kube-controller-manager@kubernetes \

--cluster=kubernetes \

--user=system:kube-controller-manager \

--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

② 验证kube-controller-manager.kubeconfig文件

[root@kube-master cert]# ls /root/.kube/kube-controller-manager.kubeconfig

/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

apiVersion: v1
clusters:

- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

06-03-04 分发生成的证书和私钥、kubeconfig 到所有 master 节点
[root@kube-master ~]# vim /opt/k8s/script/scp_controller_manager.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "chown k8s /opt/k8s/cert/"
    scp /opt/k8s/cert/kube-controller-manager*.pem k8s@${node_ip}:/opt/k8s/cert/
    scp /root/.kube/kube-controller-manager.kubeconfig k8s@${node_ip}:/opt/k8s/
done
[root@kube-master ~]# chmod +x /opt/k8s/script/scp_controller_manager.sh && /opt/k8s/script/scp_controller_manager.sh

06-03-05 创建和分发 kube-controller-manager systemd unit 文件
[root@kube-master ~]# mkdir /opt/controller_manager

[root@kube-master ~]# cd /opt/controller_manager

[root@kube-master controller_manager]# cat > kube-controller-manager.service <<EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/opt/k8s/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/k8s/cert/ca.pem \
  --cluster-signing-key-file=/opt/k8s/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/opt/k8s/cert/ca.pem \
  --service-account-private-key-file=/opt/k8s/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/opt/k8s/cert/kube-controller-manager.pem \
  --tls-private-key-file=/opt/k8s/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF
注:

--port=0:关闭监听 http /metrics 的请求,同时 --address 参数无效,--bind-address 参数有效;
--secure-port=10252、--bind-address:监听 10252 端口的 https /metrics 请求;
--kubeconfig:指定 kubeconfig 文件路径,kube-controller-manager 使用它连接和验证 kube-apiserver;
--cluster-signing-*-file:签名 TLS Bootstrap 创建的证书;
--experimental-cluster-signing-duration:指定 TLS Bootstrap 证书的有效期;
--root-ca-file:放置到容器 ServiceAccount 中的 CA 证书,用来对 kube-apiserver 的证书进行校验;
--service-account-private-key-file:签名 ServiceAccount 中 Token 的私钥文件,必须和 kube-apiserver 的 --service-account-key-file 指定的公钥文件配对使用;
--service-cluster-ip-range :指定 Service Cluster IP 网段,必须和 kube-apiserver 中的同名参数一致;
--leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态;
--feature-gates=RotateKubeletServerCertificate=true:开启 kubelet server 证书的自动更新特性;
--controllers=*,bootstrapsigner,tokencleaner:启用的控制器列表,tokencleaner 用于自动清理过期的 Bootstrap token;
--horizontal-pod-autoscaler-*:custom metrics 相关参数,支持 autoscaling/v2alpha1;
--tls-cert-file、--tls-private-key-file:使用 https 输出 metrics 时使用的 Server 证书和秘钥;
--use-service-account-credentials=true:各 controller 使用自己的 ServiceAccount 访问 apiserver;
User=k8s:使用 k8s 账户运行;
kube-controller-manager 不对请求 https metrics 的 Client 证书进行校验,故不需要指定 --tls-ca-file 参数,而且该参数已被淘汰。

06-03-06 kube-controller-manager 的权限
ClusterRole system:kube-controller-manager 的权限很小,只能创建 secret、serviceaccount 等资源对象,各 controller 的权限分散到 ClusterRole system:controller:XXX 中。

需要在 kube-controller-manager 的启动参数中添加 --use-service-account-credentials=true 参数,这样 main controller 会为各 controller 创建对应的 ServiceAccount XXX-controller。

内置的 ClusterRoleBinding system:controller:XXX 将赋予各 XXX-controller ServiceAccount 对应的 ClusterRole system:controller:XXX 权限。
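可以用 kubectl 直接查看这些内置的 ClusterRole / ClusterRoleBinding,确认权限划分(参考命令):

[root@kube-master ~]# kubectl describe clusterrole system:kube-controller-manager

[root@kube-master ~]# kubectl get clusterrole | grep system:controller

[root@kube-master ~]# kubectl get clusterrolebinding | grep system:controller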

06-03-07 分发systemd unit 文件到所有master 节点;启动检查 kube-controller-manager 服务
[root@kube-master ~]# vim /opt/k8s/script/controller_manager.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /opt/controller_manager/kube-controller-manager.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager"
done

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status kube-controller-manager|grep Active"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/controller_manager.sh && /opt/k8s/script/controller_manager.sh

06-03-08 查看输出的 metric
注意:以下命令在 kube-controller-manager 节点上执行。

[root@kube-master ~]# ss -nutlp |grep kube-controll

tcp LISTEN 0 128 127.0.0.1:10252 *:* users:(("kube-controller",pid=6532,fd=5))

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://127.0.0.1:10252/metrics |head

# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator

# TYPE ClusterRoleAggregator_adds counter

ClusterRoleAggregator_adds 6

# HELP ClusterRoleAggregator_depth Current depth of workqueue: ClusterRoleAggregator

# TYPE ClusterRoleAggregator_depth gauge

ClusterRoleAggregator_depth 0

# HELP ClusterRoleAggregator_queue_latency How long an item stays in workqueueClusterRoleAggregator before being requested.

# TYPE ClusterRoleAggregator_queue_latency summary

ClusterRoleAggregator_queue_latency{quantile="0.5"} 431

ClusterRoleAggregator_queue_latency{quantile="0.9"} 85089

注:curl --cacert CA 证书用来验证 kube-controller-manager https server 证书;

06-03-09 测试 kube-controller-manager 集群的高可用
1、停掉一个或两个节点的 kube-controller-manager 服务,观察其它节点的日志,看是否获取了 leader 权限。

2、查看当前的 leader

[root@kube-master ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

apiVersion: v1

kind: Endpoints

metadata:

annotations:

control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-master_53bc08b7-f69d-11e8-9e79-0050563ab62b","leaseDurationSeconds":15,"acquireTime":"2018-12-03T01:48:18Z","renewTime":"2018-12-03T01:59:15Z","leaderTransitions":5}'

creationTimestamp: 2018-11-29T03:12:14Z

name: kube-controller-manager

namespace: kube-system

resourceVersion: “56075”

selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager

uid: 91e64a51-f384-11e8-a392-0050563ab62b

可见,当前的 leader 为 kube-master 节点。

06-04.部署高可用 kube-scheduler 集群
  本文档介绍部署高可用 kube-scheduler 集群的步骤。

该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。

为保证通信安全,本文档先生成 x509 证书和私钥,kube-scheduler 在如下两种情况下使用该证书:

① 与 kube-apiserver 的安全端口通信;

② 在端口 10251(http)输出 prometheus 格式的 metrics;

准备工作:下载最新版本的二进制文件、安装和配置 flanneld

06-04-01 创建 kube-scheduler 证书和私钥
创建证书签名请求:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kube-scheduler-csr.json <<EOF

{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "4Paradigm"
    }
  ]
}
EOF

注:

hosts 列表包含所有 kube-scheduler 节点 IP;
CN 为 system:kube-scheduler、O 为 system:kube-scheduler,kubernetes 内置的 ClusterRoleBindings system:kube-scheduler 将赋予 kube-scheduler 工作所需的权限。

06-04-02 生成证书和私钥
[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

[root@kube-master cert]# ls kube-scheduler*

kube-scheduler.csr kube-scheduler-csr.json kube-scheduler-key.pem kube-scheduler.pem

06-04-03 创建kubeconfig 文件
kubeconfig 文件包含访问 apiserver 的所有信息,如 apiserver 地址、CA 证书和自身使用的证书;

① 执行命令,生产kube-scheduler.kubeconfig文件

[root@kube-master ~]# kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/cert/ca.pem \

--embed-certs=true \

--server=https://192.168.10.10:8443 \

--kubeconfig=/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config set-credentials system:kube-scheduler \

--client-certificate=/opt/k8s/cert/kube-scheduler.pem \

--client-key=/opt/k8s/cert/kube-scheduler-key.pem \

--embed-certs=true \

--kubeconfig=/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config set-context system:kube-scheduler@kubernetes \

--cluster=kubernetes \

--user=system:kube-scheduler \

--kubeconfig=/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/root/.kube/kube-scheduler.kubeconfig

② 验证kube-scheduler.kubeconfig文件

[root@kube-master cert]# ls /root/.kube/kube-scheduler.kubeconfig

/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kube-scheduler.kubeconfig

apiVersion: v1
clusters:

- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

06-04-04 分发生成的证书和私钥、kubeconfig 到所有 master 节点
[root@kube-master ~]# vim /opt/k8s/script/scp_scheduler.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "chown k8s /opt/k8s/cert/"
    scp /opt/k8s/cert/kube-scheduler*.pem k8s@${node_ip}:/opt/k8s/cert/
    scp /root/.kube/kube-scheduler.kubeconfig k8s@${node_ip}:/opt/k8s/
done
[root@kube-master ~]# chmod +x /opt/k8s/script/scp_scheduler.sh && /opt/k8s/script/scp_scheduler.sh

06-04-05 创建kube-scheduler systemd unit 文件
[root@kube-master ~]# mkdir /opt/scheduler

[root@kube-master ~]# cd /opt/scheduler

[root@kube-master scheduler]# cat > kube-scheduler.service <<EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/opt/k8s/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF

注:

--address:在 127.0.0.1:10251 端口接收 http /metrics 请求;kube-scheduler 目前还不支持接收 https 请求;
--kubeconfig:指定 kubeconfig 文件路径,kube-scheduler 使用它连接和验证 kube-apiserver;
--leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态;
User=k8s:使用 k8s 账户运行;

06-04-06 分发systemd unit 文件到所有master 节点;启动检查kube-scheduler 服务
[root@kube-master scheduler]# vim /opt/k8s/script/scheduler.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /opt/scheduler/kube-scheduler.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler"
done

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status kube-scheduler|grep Active"
done
[root@kube-master scheduler]# chmod +x /opt/k8s/script/scheduler.sh && /opt/k8s/script/scheduler.sh

确保状态为 active (running),否则查看日志,确认原因:

journalctl -u kube-scheduler

06-04-07 查看输出的 metric
注意:以下命令在 kube-scheduler 节点上执行。

kube-scheduler 监听 10251 端口,接收 http 请求:

[root@kube-master ~]# ss -nutlp |grep kube-scheduler

tcp   LISTEN   0   128   127.0.0.1:10251   *:*   users:(("kube-scheduler",pid=14968,fd=8))

[root@kube-master ~]# curl -s http://127.0.0.1:10251/metrics |head

# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.6554e-05
go_gc_duration_seconds{quantile="0.25"} 0.000133804
go_gc_duration_seconds{quantile="0.5"} 0.000203523
go_gc_duration_seconds{quantile="0.75"} 0.000683624
go_gc_duration_seconds{quantile="1"} 0.001188571

06-04-08 测试 kube-scheduler 集群的高可用
1、Pick one or two master nodes at random and stop the kube-scheduler service on them, then check (in the systemd journal) whether another node has acquired the leader lock; a sketch follows the leader output below.

2、查看当前的 leader

[root@kube-master ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

apiVersion: v1

kind: Endpoints

metadata:

annotations:

control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node1_531fab4b-f69d-11e8-ba0a-00505631d257","leaseDurationSeconds":15,"acquireTime":"2018-12-03T01:48:23Z","renewTime":"2018-12-03T02:02:28Z","leaderTransitions":4}'

creationTimestamp: 2018-11-29T05:50:35Z

name: kube-scheduler

namespace: kube-system

resourceVersion: “56324”

selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler

uid: b1435e86-f39a-11e8-a392-0050563ab62b

As shown above, the current leader is the kube-node1 node (it was originally kube-master).
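A minimal sketch of the failover test from step 1 (assumptions: the current leader is kube-node1 as shown above, and the lease duration is the 15s seen in the annotation):

# stop kube-scheduler on the current leader
ssh root@kube-node1 "systemctl stop kube-scheduler"
# wait longer than the lease duration, then check which node now holds the lock
sleep 20
kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml | grep holderIdentity
# bring the stopped instance back afterwards
ssh root@kube-node1 "systemctl start kube-scheduler"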

07.部署 worker 节点
A kubernetes worker node runs the following components:

docker
kubelet
kube-proxy
1、安装和配置 flanneld

参考 05.部署 flannel 网络

2、安装依赖包

CentOS:

$ yum install -y epel-release

$ yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs

Ubuntu:

$ apt-get install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs

07-01.部署 docker 组件
docker 是容器的运行环境,管理它的生命周期。kubelet 通过 Container Runtime Interface (CRI) 与 docker 进行交互。

07-01-01 下载docker 二进制文件
到 https://download.docker.com/linux/static/stable/x86_64/ 页面下载最新发布包:

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz
tar -xvf docker-18.03.1-ce.tgz

07-01-02 创建和分发 systemd unit 文件
[root@kube-master ~]# mkdir /opt/docker

[root@kube-master ~]# cd /opt/docker

[root@kube-master docker]# cat > docker.service << "EOF"

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Note:

The EOF delimiter is quoted ("EOF") so that bash does not substitute variables such as $DOCKER_NETWORK_OPTIONS inside the here-document;
dockerd invokes other docker binaries at runtime (e.g. docker-proxy), so the directory containing the docker binaries must be on PATH;
flanneld writes its network configuration to /run/flannel/docker; dockerd reads the DOCKER_NETWORK_OPTIONS variable from that file at startup and uses it to configure the docker0 bridge subnet;
if several EnvironmentFile options are specified, /run/flannel/docker must be listed last, so that docker0 uses the bip parameter generated by flanneld;
docker has to run as root;
starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which makes pinging Pod IPs on other nodes fail; in that case set the policy to ACCEPT manually with: sudo iptables -P FORWARD ACCEPT, and also write the command /sbin/iptables -P FORWARD ACCEPT into /etc/rc.local so the policy is not reset to DROP after a node reboot.
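A minimal sketch of persisting the FORWARD policy across reboots (assumption: these are the CentOS 7 hosts used throughout this guide, where /etc/rc.local only runs at boot if it is executable):

echo '/sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local
chmod +x /etc/rc.local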

07-01-03 配置docker 配置文件
使用国内的仓库镜像服务器以加快 pull image 的速度,同时增加下载的并发数 (需要重启 dockerd 生效):

cat > docker-daemon.json <<EOF

{
  "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
  "max-concurrent-downloads": 20
}
EOF

07-01-04 分发docker 二进制文件、systemd unit 文件、docker 配置文件到所有 worker 机器
[root@kube-master ~]# vim /opt/k8s/script/scp_docker.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/docker/docker* k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    scp /opt/docker/docker.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /opt/docker/"
    scp /opt/docker/docker-daemon.json root@${node_ip}:/opt/docker/daemon.json
done

07-01-05 启动并检查 docker 服务
[root@kube-master ~]# vim /opt/k8s/script/docker.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"
    ssh root@${node_ip} "/usr/sbin/iptables -F && /usr/sbin/iptables -X && /usr/sbin/iptables -F -t nat && /usr/sbin/iptables -X -t nat"
    ssh root@${node_ip} "/usr/sbin/iptables -P FORWARD ACCEPT"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
    ssh root@${node_ip} 'for intf in /sys/devices/virtual/net/docker0/brif/*; do echo 1 > $intf/hairpin_mode; done'
    ssh root@${node_ip} "sudo sysctl -p /etc/sysctl.d/kubernetes.conf"
    # check the service status
    ssh k8s@${node_ip} "systemctl status docker|grep Active"
    # check the docker0 bridge and flannel.1 interface
    ssh k8s@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done
注:

关闭 firewalld(centos7)/ufw(ubuntu16.04),否则可能会重复创建 iptables 规则;
清理旧的 iptables rules 和 chains 规则;
开启 docker0 网桥下虚拟网卡的 hairpin 模式;

[root@kube-master ~]# chmod +x /opt/k8s/script/docker.sh && /opt/k8s/script/docker.sh

① 确保状态为 active (running),否则查看日志,确认原因:

$ journalctl -u docker

② Confirm that on each worker node the docker0 bridge and the flannel.1 interface are in the same subnet (here 10.30.89.0 and 10.30.89.1):

4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN

link/ether ea:b3:44:ab:36:16 brd ff:ff:ff:ff:ff:ff

inet 10.30.89.0/32 scope global flannel.1

   valid_lft forever preferred_lft forever

7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP

link/ether 02:42:8e:6e:ea:ef brd ff:ff:ff:ff:ff:ff

inet 10.30.89.1/24 brd 10.30.89.255 scope global docker0

   valid_lft forever preferred_lft forever

07-02.部署 kubelet 组件
kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.

At startup kubelet automatically registers the node with kube-apiserver; the built-in cadvisor collects and reports the node's resource usage.

For security, this document only enables the https port: every request (e.g. from apiserver or heapster) is authenticated and authorized, and anonymous or unauthorized access is rejected.

1、下载和分发 kubelet 二进制文件

参考 06.部署master节点.md

2、安装依赖包

参考 07部署worker节点.md

07-02-01 创建 kubelet bootstrap kubeconfig 文件
[root@kube-master ~]# vim /opt/k8s/script/bootstrap_kubeconfig.sh

NODE_NAMES=("kube-master" "kube-node1" "kube-node2")
for node_name in ${NODE_NAMES[@]};do
    echo ">>> ${node_name}"
    # create a bootstrap token for this node
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
        --description kubelet-bootstrap-token \
        --groups system:bootstrappers:${node_name} \
        --kubeconfig ~/.kube/config)

# 设置集群参数
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.10.10:8443 \
--kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig

# 设置客户端认证参数
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig

# 设置上下文参数
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig

# 设置默认上下文
kubectl config use-context default --kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig

done
[root@kube-master ~]# chmod +x /opt/k8s/script/bootstrap_kubeconfig.sh && /opt/k8s/script/bootstrap_kubeconfig.sh

注:

① The kubeconfig file contains a token rather than a certificate; the client certificate is created later by kube-controller-manager.

查看 kubeadm 为各节点创建的 token:

[root@kube-master ~]# kubeadm token list --kubeconfig ~/.kube/config

TOKEN           TTL EXPIRES          USAGES       DESCRIPTION     EXTRA GROUPS

8hpvxm.w5uctmxzlphfh37l   23h 2018-11-30T16:03:27+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node1

gktdpg.5x931bwfzf4z4hjt   23h 2018-11-30T16:03:27+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node2

ttbgfq.19zeet23eohtdo65   23h 2018-11-30T16:03:26+08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-master

② 创建的 token 有效期为 1 天,超期后将不能再被使用,且会被 kube-controller-manager 的 tokencleaner 清理(如果启用该 controller 的话);

③ When kube-apiserver receives a kubelet bootstrap token, it sets the requesting user to system:bootstrap:<token-id> and the group to system:bootstrappers;

各 token 关联的 Secret:

[root@kube-master ~]# kubectl get secrets -n kube-system

NAME           TYPE               DATA AGE

bootstrap-token-8hpvxm    bootstrap.kubernetes.io/token     7   7m

bootstrap-token-gktdpg    bootstrap.kubernetes.io/token     7   7m

bootstrap-token-ttbgfq     bootstrap.kubernetes.io/token     7   7m

default-token-5lvn4     kubernetes.io/service-account-token 3   4h

07-02-02 创建kubelet 参数配置文件
从 v1.10 开始,kubelet 部分参数需在配置文件中配置,kubelet --help 会提示:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet’s --config flag

[root@kube-master ~]# mkdir /opt/kubelet

[root@kube-master ~]# cd /opt/kubelet

[root@kube-master kubelet]# vim kubelet.config.json.template

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/k8s/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "##NODE_IP##",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local",
  "clusterDNS": ["10.96.0.2"]
}

07-02-03 分发 bootstrap kubeconfig 、kubelet 配置文件到所有 worker 节点
[root@kube-master ~]# vim /opt/k8s/script/scp_kubelet.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
NODE_NAMES=("kube-master" "kube-node1" "kube-node2")
for node_name in ${NODE_NAMES[@]};do
    echo ">>> ${node_name}"
    scp ~/.kube/kubelet-bootstrap-${node_name}.kubeconfig k8s@${node_name}:/opt/k8s/kubelet-bootstrap.kubeconfig
done

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" /opt/kubelet/kubelet.config.json.template > /opt/kubelet/kubelet.config-${node_ip}.json
    scp /opt/kubelet/kubelet.config-${node_ip}.json root@${node_ip}:/opt/k8s/kubelet.config.json
done
[root@kube-master ~]# chmod +x /opt/k8s/script/scp_kubelet.sh && /opt/k8s/script/scp_kubelet.sh

07-02-04 创建kubelet systemd unit 文件
[root@kube-master ~]# vim /opt/kubelet/kubelet.service.template

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \
    --bootstrap-kubeconfig=/opt/k8s/kubelet-bootstrap.kubeconfig \
    --cert-dir=/opt/k8s/cert \
    --kubeconfig=/opt/k8s/kubelet.kubeconfig \
    --config=/opt/k8s/kubelet.config.json \
    --hostname-override=##NODE_NAME## \
    --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
    --allow-privileged=true \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/opt/log/kubernetes \
    --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

07-02-05 Bootstrap Token Auth 和授予权限
1、At startup, kubelet checks whether the file specified by --kubeconfig exists; if it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

2、When kube-apiserver receives the CSR, it authenticates the token inside it (the token created earlier with kubeadm); on success it sets the requesting user to system:bootstrap:<token-id> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.

3、默认情况下,这个 user 和 group 没有创建 CSR 的权限,kubelet 启动失败,错误日志如下:

$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378 26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

4、解决办法是:创建一个 clusterrolebinding,将 group system:bootstrappers 和 clusterrole system:node-bootstrapper 绑定:

[root@kube-master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

07-02-06 启动 kubelet 服务
[root@kube-master ~]# vim /opt/k8s/script/kubelet.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
NODE_NAMES=("kube-master" "kube-node1" "kube-node2")
# distribute the kubelet systemd unit file
for node_name in ${NODE_NAMES[@]};do
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" /opt/kubelet/kubelet.service.template > /opt/kubelet/kubelet-${node_name}.service
    scp /opt/kubelet/kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
# start and check the kubelet service
for node_ip in ${NODE_IPS[@]};do
    ssh root@${node_ip} "mkdir -p /opt/lib/kubelet"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"
    ssh root@${node_ip} "systemctl status kubelet |grep active"
done
Note:

Turn swap off, and also comment out the swap entry in /etc/fstab so it does not come back after a reboot, otherwise kubelet will fail to start (a small sketch follows this list);
The working and log directories must be created before starting the service;
After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and key and the file referenced by --kubeconfig.
kube-controller-manager must be started with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise it will not issue certificates and keys for TLS bootstrap.
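A small sketch of turning swap off permanently (assumption: the swap entry in /etc/fstab is an uncommented line whose type field is "swap"):

swapoff -a
# comment out the swap line so it is not mounted again after a reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep swap /etc/fstab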

07-02-07 approve kubelet CSR 请求
可以手动或自动 approve CSR 请求。推荐使用自动的方式,因为从 v1.8 版本开始,可以自动轮转approve csr 后生成的证书。

1、手动 approve CSR 请求

(1)查看 CSR 列表:

[root@kube-master ~]# kubectl get csr

NAME AGE REQUESTOR CONDITION

node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU 4m system:bootstrap:8hpvxm Pending

node-csr-atMwF8GpKbDEcGjzCTXF1NYo9Jc1AzE2yQoxaU8NAkw 7m system:bootstrap:ttbgfq Pending

node-csr-qxa30a9GRg35iNEl3PYZOIICMo_82qPrqNu6PizEZXw 4m system:bootstrap:gktdpg Pending

The CSRs of all three worker nodes are in the Pending state;

(2)approve CSR:

[root@kube-master ~]# kubectl certificate approve node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU

certificatesigningrequest.certificates.k8s.io “node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU” approved

(3)查看 Approve 结果:

[root@kube-master ~]# kubectl describe csr node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU

Name: node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU

Labels:

Annotations:

CreationTimestamp: Thu, 29 Nov 2018 17:51:43 +0800

Requesting User: system:bootstrap:8hpvxm

Status: Approved,Issued

Subject:

Common Name: system:node:kube-node1

Serial Number:

Organization: system:nodes

Events:
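If approving each CSR individually is tedious, all Pending CSRs can be approved in one pass; a small sketch (assumption: the only pending CSRs on this cluster are the kubelet bootstrap requests):

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve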

2、自动 approve CSR 请求

(1)创建三个 ClusterRoleBinding,分别用于自动 approve client、renew client、renew server 证书:

[root@kube-master ~]# cat > /opt/kubelet/csr-crb.yaml <<EOF

# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

注:

auto-approve-csrs-for-group:自动 approve node 的第一次 CSR; 注意第一次 CSR 时,请求的 Group 为 system:bootstrappers;
node-client-cert-renewal:自动 approve node 后续过期的 client 证书,自动生成的证书 Group 为 system:nodes;
node-server-cert-renewal:自动 approve node 后续过期的 server 证书,自动生成的证书 Group 为 system:nodes;

(2)生效配置:

[root@kube-master ~]# kubectl apply -f /opt/kubelet/csr-crb.yaml

07-02-08 Check the kubelet status
1、等待一段时间(1-10 分钟),三个节点的 CSR 都被自动 approve:

[root@kube-master ~]# kubectl get csr

NAME AGE REQUESTOR CONDITION

csr-kvbtt 15h system:node:kube-node1 Approved,Issued

csr-p9b9s 15h system:node:kube-node2 Approved,Issued

csr-rjpr9 15h system:node:kube-master Approved,Issued

node-csr-8Sr42M0z_LzZeHU-RCbgOynJm3Z2TsSXHuAlohfJiIM 15h system:bootstrap:ttbgfq Approved,Issued

node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU 15h system:bootstrap:8hpvxm Approved,Issued

node-csr-atMwF8GpKbDEcGjzCTXF1NYo9Jc1AzE2yQoxaU8NAkw 15h system:bootstrap:ttbgfq Approved,Issued

node-csr-elVB0jp36nOHuOYlITWDZx8LoO2Ly4aW0VqgYxw_Te0 15h system:bootstrap:gktdpg Approved,Issued

node-csr-muNcDteZINLZnSv8FkhOMaP2ob5uw82PGwIAynNNrco 15h system:bootstrap:ttbgfq Approved,Issued

node-csr-qxa30a9GRg35iNEl3PYZOIICMo_82qPrqNu6PizEZXw 15h system:bootstrap:gktdpg Approved,Issued

2、所有节点均 ready:

[root@kube-master ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-master Ready 25s v1.10.4

kube-node1 Ready 7m v1.10.4

kube-node2 Ready 21s v1.10.4

3、kube-controller-manager 为各 node 生成了 kubeconfig 文件和公私钥:

[root@kube-master ~]# ll /opt/k8s/kubelet.kubeconfig

-rw------- 1 root root 2280 Nov 29 18:05 /opt/k8s/kubelet.kubeconfig

[root@kube-master ~]# ll /opt/k8s/cert/ |grep kubelet

-rw-r--r-- 1 root root 1050 Nov 29 18:05 kubelet-client.crt

-rw------- 1 root root 227 Nov 29 18:01 kubelet-client.key

-rw------- 1 root root 1338 Nov 29 18:05 kubelet-server-2018-11-29-18-05-11.pem

lrwxrwxrwx 1 root root 52 Nov 29 18:05 kubelet-server-current.pem -> /opt/k8s/cert/kubelet-server-2018-11-29-18-05-11.pem

注:kubelet-server 证书会周期轮转;
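To see the validity window of the currently used server certificate (and confirm rotation later), a quick check with openssl (run on a worker node; the path matches the listing above):

openssl x509 -noout -subject -dates -in /opt/k8s/cert/kubelet-server-current.pem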

07-02-09 kubelet 提供的 API 接口
1、After startup, kubelet listens on several ports for requests from kube-apiserver and other components:

[root@kube-master ~]# ss -nutlp |grep kubelet

tcp LISTEN 0 128 192.168.10.108:10250 *:* users:(("kubelet",pid=2797,fd=22))
tcp LISTEN 0 128 192.168.10.108:4194 *:* users:(("kubelet",pid=2797,fd=13))
tcp LISTEN 0 128 127.0.0.1:10248 *:* users:(("kubelet",pid=2797,fd=32))

注:

4194: cadvisor http 服务;
10248: healthz http 服务;
10250: https API 服务;注意:未开启只读端口 10255;

2、For example, when running kubectl exec -it nginx-ds-5rmws -- sh, kube-apiserver sends the following request to kubelet:

POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1

3、kubelet 接收 10250 端口的 https 请求:

/pods、/runningpods
/metrics、/metrics/cadvisor、/metrics/probes
/spec
/stats、/stats/container
/logs
/run/, /exec/, /attach/, /portForward/, /containerLogs/ and other management endpoints;

4、由于关闭了匿名认证,同时开启了 webhook 授权,所有访问 10250 端口 https API 的请求都需要被认证和授权。

预定义的 ClusterRole system:kubelet-api-admin 授予访问 kubelet 所有 API 的权限:

[root@kube-master ~]# kubectl describe clusterrole system:kubelet-api-admin

Name: system:kubelet-api-admin

Labels: kubernetes.io/bootstrapping=rbac-defaults

Annotations: rbac.authorization.kubernetes.io/autoupdate=true

PolicyRule:

Resources Non-Resource URLs Resource Names Verbs


nodes [] [] [get list watch proxy]

nodes/log [] [] [*]

nodes/metrics [] [] [*]

nodes/proxy [] [] [*]

nodes/spec [] [] [*]

nodes/stats [] [] [*]

07-02-10 kubelet API authentication and authorization
1、kubelet is configured with the following authentication parameters:

authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is rejected;
authentication.x509.clientCAFile: the CA certificate used to verify client certificates, enabling HTTPS x509 client-certificate authentication;
authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;

and with the following authorization parameter:

authorization.mode=Webhook: enables webhook (RBAC) authorization;

2、kubelet 收到请求后,使用 clientCAFile 对证书签名进行认证,或者查询 bearer token 是否有效。如果两者都没通过,则拒绝请求,提示 Unauthorized:

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://192.168.10.109:10250/metrics

Unauthorized

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.10.109:10250/metrics

Unauthorized

3、通过认证后,kubelet 使用 SubjectAccessReview API 向 kube-apiserver 发送请求,查询证书或 token 对应的 user、group 是否有操作资源的权限(RBAC);
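The same RBAC decision can be previewed with kubectl auth can-i plus impersonation; a sketch (assumptions: the admin kubeconfig is in use and is allowed to impersonate, and the --subresource flag is available in this kubectl version):

# expected to answer "no": the controller-manager identity may not read the metrics subresource
kubectl auth can-i get nodes --subresource=metrics --as=system:kube-controller-manager
# expected to answer "yes" for the admin identity
kubectl auth can-i get nodes --subresource=metrics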

证书认证和授权:

$ 权限不足的证书;

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/kube-controller-manager.pem --key /opt/k8s/cert/kube-controller-manager-key.pem https://192.168.10.109:10250/metrics

Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

$ 使用部署 kubectl 命令行工具时创建的、具有最高权限的 admin 证书;

[root@kube-master cert]# curl -s --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem https://192.168.10.109:10250/metrics|head

# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0

The values of --cacert, --cert and --key must be file paths (absolute, as above, or relative with an explicit leading ./); otherwise the request returns 401 Unauthorized;

4、Bearer token authentication and authorization:

创建一个 ServiceAccount,将它和 ClusterRole system:kubelet-api-admin 绑定,从而具有调用 kubelet API 的权限:

[root@kube-master ~]# kubectl create sa kubelet-api-test

serviceaccount “kubelet-api-test” created

[root@kube-master ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test

clusterrolebinding.rbac.authorization.k8s.io “kubelet-api-test” created

[root@kube-master ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')

[root@kube-master ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.10.109:10250/metrics|head

# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0

07-02-11 cadvisor 和 metrics
cadvisor collects the resource usage (CPU, memory, disk, network) of every container on its node and exposes it on its own http web page (port 4194) and, in Prometheus metrics form, on port 10250.

浏览器访问 http://192.168.10.108:4194/containers/ 可以查看到 cadvisor 的监控页面:
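If no browser is available, the page can be spot-checked from the command line (a sketch; 192.168.10.108 is the kube-master node of this guide):

curl -s http://192.168.10.108:4194/containers/ | head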

07-02-12 Get the kubelet configuration
从 kube-apiserver 获取各 node 的配置:

使用部署 kubectl 命令行工具时创建的、具有最高权限的 admin 证书;

[root@kube-master ~]# curl -sSL --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem https://192.168.10.10:8443/api/v1/nodes/kube-node1/proxy/configz | jq \
    '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'

{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "192.168.10.109",
  "port": 10250,
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/k8s/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 5,
  "registryBurst": 10,
  "eventRecordQPS": 5,
  "eventBurst": 10,
  "enableDebuggingHandlers": true,
  "healthzPort": 10248,
  "healthzBindAddress": "127.0.0.1",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local.",
  "clusterDNS": [
    "10.96.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "2m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 110,
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 5,
  "kubeAPIBurst": 10,
  "serializeImagePulls": false,
  "evictionHard": {
    "imagefs.available": "15%",
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "failSwapOn": true,
  "containerLogMaxSize": "10Mi",
  "containerLogMaxFiles": 5,
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}

07-03.部署 kube-proxy 组件
kube-proxy runs on every worker node. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.

This section deploys kube-proxy in ipvs mode.

1、下载和分发 kube-proxy 二进制文件

参考 06.部署master节点.md

2、安装依赖包

各节点需要安装 ipvsadm 和 ipset 命令,加载 ip_vs 内核模块。

参考 07.部署worker节点.md
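A minimal sketch of loading and verifying the ipvs kernel modules on a node (assumption: the round-robin and other common scheduler modules are wanted in addition to ip_vs):

for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do /usr/sbin/modprobe ${mod}; done
lsmod | grep ip_vs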

07-03-01 创建 kube-proxy 证书
创建证书签名请求:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kube-proxy-csr.json << EOF

{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

注:

CN:指定该证书的 User 为 system:kube-proxy;
预定义的 RoleBinding system:node-proxier 将User system:kube-proxy 与 Role system:node-proxier 绑定,该 Role 授予了调用 kube-apiserver Proxy 相关 API 的权限;
该证书只会被 kube-proxy 当做 client 证书使用,所以 hosts 字段为空;

07-03-02 生成证书和私钥
[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes kube-proxy-csr.json | cfssljson_linux-amd64 -bare kube-proxy

[root@kube-master cert]# ls kube-proxy*

kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem

07-03-03 创建kubeconfig 文件
[root@kube-master ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/k8s/cert/ca.pem \
    --embed-certs=true \
    --server=https://192.168.10.10:8443 \
    --kubeconfig=/root/.kube/kube-proxy.kubeconfig

[root@kube-master ~]# kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/k8s/cert/kube-proxy.pem \
    --client-key=/opt/k8s/cert/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=/root/.kube/kube-proxy.kubeconfig

[root@kube-master ~]# kubectl config set-context kube-proxy@kubernetes \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=/root/.kube/kube-proxy.kubeconfig

[root@kube-master ~]# kubectl config use-context kube-proxy@kubernetes --kubeconfig=/root/.kube/kube-proxy.kubeconfig

Note:

--embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths are written);

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kube-proxy.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: kube-proxy@kubernetes
current-context: kube-proxy@kubernetes
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

07-03-04 创建 kube-proxy 配置文件
Starting with v1.10, some kube-proxy parameters can be set in a configuration file, which can be generated with the --write-config-to option (see the sketch below).
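A sketch of generating a default configuration with --write-config-to (assumption: the kube-proxy binary has already been copied to /opt/k8s/bin in the master deployment step):

/opt/k8s/bin/kube-proxy --write-config-to=/tmp/kube-proxy.default.yaml
cat /tmp/kube-proxy.default.yaml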

创建 kube-proxy config 文件模板

[root@kube-master ~]# mkdir /opt/kube-proxy

[root@kube-master ~]# cd /opt/kube-proxy

[root@kube-master kube-proxy]# cat >kube-proxy.config.yaml.template <<EOF

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
  kubeconfig: /opt/k8s/kube-proxy.kubeconfig
clusterCIDR: 10.96.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
EOF

注:

bindAddress: 监听地址;
clientConnection.kubeconfig: 连接 apiserver 的 kubeconfig 文件;
clusterCIDR: kube-proxy 根据 --cluster-cidr 判断集群内部和外部流量,指定 --cluster-cidr 或 --masquerade-all选项后 kube-proxy 才会对访问 Service IP 的请求做 SNAT;
hostnameOverride: 参数值必须与 kubelet 的值一致,否则 kube-proxy 启动后会找不到该 Node,从而不会创建任何 ipvs 规则;
mode: 使用 ipvs 模式;
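The distribution script in the next step copies /opt/kube-proxy/kube-proxy.service, but that unit file is not created anywhere above. A minimal sketch modeled on the other unit files in this guide (the paths are assumptions consistent with the config file above; kube-proxy runs as root because it manipulates iptables/ipvs):

[root@kube-master kube-proxy]# cat > kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
    --config=/opt/k8s/kube-proxy.config.yaml \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/opt/log/kubernetes \
    --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF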

07-03-05 分发 kubeconfig、kube-proxy systemd unit 文件;启动并检查kube-proxy 服务
[root@kube-master ~]# vim /opt/k8s/script/kube_proxy.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
NODE_NAMES=("kube-master" "kube-node1" "kube-node2")

for (( i=0; i < 3; i++ ));do
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" /opt/kube-proxy/kube-proxy.config.yaml.template > /opt/kube-proxy/kube-proxy-${NODE_NAMES[i]}.config.yaml
    scp /opt/kube-proxy/kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/opt/k8s/kube-proxy.config.yaml
done

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/.kube/kube-proxy.kubeconfig k8s@${node_ip}:/opt/k8s/
    scp /opt/kube-proxy/kube-proxy.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /opt/lib/kube-proxy"
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
    ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
done
[root@kube-master ~]# chmod +x /opt/k8s/script/kube_proxy.sh && /opt/k8s/script/kube_proxy.sh

07-03-06 查看监听端口和 metrics
[root@kube-master ~]# ss -nutlp |grep kube-prox

tcp LISTEN 0 128 192.168.10.108:10256 *:* users:(("kube-proxy",pid=34230,fd=10))
tcp LISTEN 0 128 192.168.10.108:10249 *:* users:(("kube-proxy",pid=34230,fd=11))

10249:http prometheus metrics port;
10256:http healthz port;

07-03-07 查看 ipvs 路由规则
[root@kube-master ~]# /usr/sbin/ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 10.96.0.1:443 rr persistent 10800

-> 192.168.10.108:6443 Masq 1 0 0

-> 192.168.10.109:6443 Masq 1 0 0

-> 192.168.10.110:6443 Masq 1 0 0

可见将所有到 kubernetes cluster ip 443 端口的请求都转发到 kube-apiserver 的 6443 端口;
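As a cross-check, the 10.96.0.1:443 virtual server above should match the cluster IP of the kubernetes service (run on any node with kubectl configured):

kubectl get svc kubernetes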

08.验证集群功能
本文档使用 daemonset 验证 master 和 worker 节点是否工作正常。

08-01 检查节点状态
[root@kube-master ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-master Ready 21h v1.10.4

kube-node1 Ready 21h v1.10.4

kube-node2 Ready 21h v1.10.4

都为 Ready 时正常。

08-02 创建测试文件
[root@kube-master ~]# mkdir /opt/k8s/damo

[root@kube-master ~]# cat > /opt/k8s/damo/nginx-ds.yml <<EOF

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

执行定义文件

[root@kube-master ~]# kubectl create -f /opt/k8s/damo/nginx-ds.yml

service “nginx-ds” created

daemonset.extensions “nginx-ds” created

08-03 检查各 Node 上的 Pod IP 连通性
Because the nginx image has to be pulled and the Pods created, this may take a while.

[root@kube-master ~]# kubectl get pods -o wide|grep nginx-ds

nginx-ds-7cz4p 1/1 Running 0 4m 10.30.22.2 kube-master

nginx-ds-lg585 1/1 Running 0 4m 10.30.44.2 kube-node2

nginx-ds-zc448 1/1 Running 0 4m 10.30.33.2 kube-node1

可见,nginx-ds 的 Pod IP 分别是 10.30.22.2、10.30.44.2、10.30.33.2,在所有 Node 上分别 ping 这三个 IP,看是否连通:

[root@kube-master ~]# NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")

[root@kube-master ~]# for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 10.30.22.2"
    ssh ${node_ip} "ping -c 1 10.30.44.2"
    ssh ${node_ip} "ping -c 1 10.30.33.2"
done

08-04 检查服务 IP 和端口可达性
[root@kube-master ~]# kubectl get svc |grep nginx-ds

nginx-ds NodePort 10.96.192.157 80:15131/TCP 9m

可见:

Service Cluster IP:10.96.192.157
服务端口:80
NodePort 端口:15131
在所有 Node 上 curl Service IP:

[root@kube-master ~]# curl 10.96.192.157

[root@kube-node1 ~]# curl 10.96.192.157

[root@kube-node2 ~]# curl 10.96.192.157

预期输出 nginx 欢迎页面内容。

08-05 检查服务的 NodePort 可达性
在所有 Node 上执行:预期输出 nginx 欢迎页面内容。

[root@kube-master ~]# curl 192.168.10.108:15131

[root@kube-master ~]# curl 192.168.10.109:15131

[root@kube-master ~]# curl 192.168.10.110:15131

Welcome to nginx!

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.

For online documentation and support please refer to

nginx.org.

Commercial support is available at

nginx.com.

Thank you for using nginx.
