HA-Kubernetes-Binary (v1.18.16)

1. Environment Preparation

1.1 Architecture Inventory

| SERVER  | IP          | COMPONENTS | CONFIGURATION |
|---------|-------------|------------|---------------|
| master1 | 172.27.3.6  | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, kubelet, kube-proxy, haproxy, keepalived, cfssl, ansible | CentOS 7.9, 64C 128G, single NIC (NAT) |
| master2 | 172.27.3.7  | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, kubelet, kube-proxy, haproxy, keepalived | CentOS 7.9, 64C 128G, single NIC (NAT) |
| master3 | 172.27.3.8  | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, kubelet, kube-proxy, haproxy, keepalived | CentOS 7.9, 64C 128G, single NIC (NAT) |
| node1   | 172.27.3.9  | kubelet, kube-proxy, docker | CentOS 7.9, 64C 128G, single NIC (NAT) |
| node2   | 172.27.3.10 | kubelet, kube-proxy, docker | CentOS 7.9, 64C 128G, single NIC (NAT) |

1.2 Detailed Notes on the Inventory

  • Five high-spec physical servers, each using NIC em1; in production the NIC should be bonded across multiple ports for network high availability

  • High availability uses 3 master nodes, with keepalived + haproxy providing failover and load balancing (if the etcd cluster is deployed outside the cluster, 2 masters would be enough)

  • etcd is a 3-node cluster co-located on the 3 masters (keeping etcd local to the masters reduces etcd network latency)

  • In theory a master only needs the three control-plane components, but on versions after 1.18, if kubelet and kube-proxy are not configured, Metrics-Server keeps polling the masters and cluster commands become extremely slow. The production practice is to configure kubelet and kube-proxy on the masters, join the masters to the cluster as nodes, and then prevent Pods from being scheduled there via cordon or taints; kubelet and kube-proxy themselves consume a negligible amount of system resources

  • Worker nodes only need docker, kubelet and kube-proxy; note the difference from a kubeadm installation

  • The kubectl client can be installed on any other client machine; it only needs a properly configured admin.kubeconfig, for example:
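A minimal sketch of what that looks like on an external client, assuming the admin kubeconfig ends up at /etc/kubernetes/admin.kubeconfig on master1 as described in 1.4.1:

# Copy the admin kubeconfig from master1 and let kubectl pick it up from the default path
mkdir -p ~/.kube
scp root@172.27.3.6:/etc/kubernetes/admin.kubeconfig ~/.kube/config
# Any kubectl command should now reach the apiserver without extra flags
kubectl get nodes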

1.3 Operating System Initialization

If you are using virtual machines, configure the common parts on one machine, clone it, and then adjust the host-specific settings on each clone

If you are deploying on physical machines, an automation tool such as ansible is recommended

1.3.1 Disable the firewall and SELinux
# Stop and disable firewalld and SELinux on every node
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0 && sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config &> /dev/null
1.3.2 SSH tweaks
# Both settings speed up SSH connection establishment
sed -ri '/^GSSAPIAuthentication/ s/yes/no/' /etc/ssh/sshd_config
echo "UseDNS no" >> /etc/ssh/sshd_config
systemctl restart sshd
1.3.3 Connect to an NTP server (chronyd)

An NTP server is required. For this lab, master1 acts as the chronyd NTP server and the other nodes synchronize against it

# Client nodes use the configuration below (on master1, point `server` at an upstream
# time source instead and keep the `allow` line so the other nodes can sync from it)
server 172.27.3.6 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 172.27.3.0/24
logdir /var/log/chrony

# Check synchronization status on every node; "yes" means the clock is synchronized
timedatectl | grep -i 'synchronized'
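As a sketch of applying this on the clients: /etc/chrony.conf is the default chrony configuration path on CentOS 7, and chronyc ships with the chrony package:

# On every node except master1: write the client config and restart chronyd
cat > /etc/chrony.conf <<EOF
server 172.27.3.6 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
systemctl enable --now chronyd
systemctl restart chronyd

# The master1 source should eventually be marked with '^*' (selected and synchronized)
chronyc sources -v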
1.3.4 Switch to domestic (CN) mirrors

The Aliyun mirrors are commonly used; pick whichever you prefer

#!/bin/bash
# Remove the original repo files
rm -rf /etc/yum.repos.d/*

# Aliyun base and EPEL repos
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# Aliyun Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum clean all && yum makecache
yum -y update
1.3.5 Disable swap
#!/bin/bash
# Run on every machine
swapoff -a
sed -ri '/swap/ s/(.*)/#\1/' /etc/fstab
1.3.6 Configure /etc/hosts name resolution
#!/bin/bash
# Run on every machine
cat > /etc/hosts <<EOF
127.0.0.1 localhost
172.27.3.6 master1
172.27.3.7 master2
172.27.3.8 master3
172.27.3.9 node1
172.27.3.10 node2
EOF
1.3.7 Raise resource limits
#!/bin/bash
cat >> /etc/security/limits.conf <<-EOF
* soft nofile 65535
* hard nofile 65535
* soft nproc  65535
* hard nproc  65535
* soft memlock unlimited
* hard memlock unlimited
EOF

ulimit -SHn 65535
1.3.8 Configure DNS resolvers
#!/bin/bash
echo "nameserver 114.114.114.114" >> /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
# Lock the file so it cannot be modified casually (+i sets the immutable attribute)
chattr +i /etc/resolv.conf
1.3.9 Push master1's public key to the other hosts (master1 is the control host)
# Run on master1 only
# The ansible authorized_key module can also be used for this
ssh-keygen
ssh-copy-id 172.27.3.7
ssh-copy-id 172.27.3.8
ssh-copy-id 172.27.3.9
ssh-copy-id 172.27.3.10
1.3.10 Install other required packages
# Install on every node
yum -y install wget vim jq psmisc bind-utils yum-utils net-tools device-mapper-persistent-data lvm2 git gcc ipvsadm ipset sysstat conntrack libseccomp 
1.3.11 Upgrade the kernel to 4.19+

There are many ways to do this; the elrepo repository or a standalone *.rpm both work

This lab uses the latest kernel at the time of writing: 5.12

# Import the elrepo repository
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum makecache

yum -y install kernel-ml

# Set the new kernel as the default boot entry (e.g. grub2-set-default 0 && grub2-mkconfig -o /boot/grub2/grub.cfg), then reboot all hosts
1.3.12 Load the ipvs kernel modules
#!/bin/bash
cat > /etc/modules-load.d/ipvs.conf <<-EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl enable --now systemd-modules-load.service

# Load the modules immediately
for i in `cat /etc/modules-load.d/ipvs.conf`;do modprobe -- $i;done
chmod u+x /etc/rc.d/rc.local

# Also load them from rc.local after a reboot (quote the heredoc delimiter so the
# backticks are written literally instead of being expanded now)
cat >> /etc/rc.local <<-'EOF'
for i in `cat /etc/modules-load.d/ipvs.conf`;do modprobe -- $i;done
EOF

1.3.13 Kernel parameter tuning (continuously updated…)
#!/bin/bash
# Updated over time; any later tuning will be folded back into this file
# Based on the Kaiyangyun (开阳云) team's production Kubernetes settings, with no values changed
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
fs.inotify.max_user_watches = 89100
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.ip_conntrack_max = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphans_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

# Reboot the servers and verify the parameters took effect; legacy keys such as net.ipv4.ip_conntrack_max may be rejected on 5.x kernels, which is harmless
1.3.14 Install cfssl
# On the control host (master1), upload the three cfssl binaries
# The three downloaded binaries, not yet executable:
[root@master1 ~]# ll |grep cfssl
-rw-r--r--  1 root root 15108368 May 27 11:00 cfssl_1.5.0_linux_amd64
-rw-r--r--  1 root root 12021008 May 27 11:00 cfssl-certinfo_1.5.0_linux_amd64
-rw-r--r--  1 root root  9663504 May 27 11:00 cfssljson_1.5.0_linux_amd64

# Rename them, make them executable, and move them to /usr/bin
for i in `ll |grep cfssl |awk '{print $NF}'`;do mv $i `echo $i|awk -F_ '{print $1}'`; chmod +x `echo $i|awk -F_ '{print $1}'`;mv `echo $i|awk -F_ '{print $1}'` /usr/bin; done
1.3.15 Deploy Ansible on the first master node
# Install the epel repo
yum -y install epel-release
yum makecache

# Install ansible
yum -y install ansible

# Uncomment host_key_checking = False in the ansible configuration
sed -ri '/host_key_checking/ s/#(.*)/\1/' /etc/ansible/ansible.cfg

# Edit the ansible inventory; once the SSH keys have been pushed everywhere, delete the *:vars password entries below
cat >> /etc/ansible/hosts <<-EOF
[k8s]
172.27.3.6
172.27.3.7
172.27.3.8
172.27.3.9
172.27.3.10
[master]
172.27.3.6
172.27.3.7
172.27.3.8
[others]
172.27.3.7
172.27.3.8
172.27.3.9
172.27.3.10
[k8s:vars]
ansible_ssh_user="root"
ansible_ssh_pass="root"
ansible_ssh_port=22
[master:vars]
ansible_ssh_user="root"
ansible_ssh_pass="root"
ansible_ssh_port=22
[others:vars]
ansible_ssh_user="root"
ansible_ssh_pass="root"
ansible_ssh_port=22
EOF
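With the inventory in place, the initialization steps above can be pushed out with ad-hoc commands; a sketch (the init.sh script name is just an example):

# Connectivity test against every host in the cluster group
ansible k8s -m ping

# Push master1's public key so the password vars in /etc/ansible/hosts can be removed afterwards
ansible k8s -m authorized_key -a "user=root key=\"{{ lookup('file', '/root/.ssh/id_rsa.pub') }}\""

# Copy an initialization script to the other hosts and run it
ansible others -m copy -a "src=/root/init.sh dest=/root/init.sh mode=0755"
ansible others -m shell -a "/root/init.sh"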
1.3.16 Install command completion
# Run on all three masters (run the kubectl lines after kubectl has been installed, see 1.4.1)
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
1.3.17 Miscellaneous (to be expanded…)

1.4 Directory Layout Overview

1.4.1 Basic directory structure
# All of the cluster's *.kubeconfig files live under /etc/kubernetes; etcd is treated as part of k8s and does not get a separate tree
# Upload kubernetes-server-1_21_1.tar.gz to /var/lib/kubernetes on every server
# Only admin.kubeconfig needs to be copied to ~/.kube/config, the default path kubectl uses to reach the cluster
mkdir -p /etc/kubernetes

# Configuration files for all core components and Addons live under /etc/kubernetes/manifests
mkdir /etc/kubernetes/manifests

# All certificates live under /etc/kubernetes/pki on every node
mkdir /etc/kubernetes/pki

# All *-csr.json files live under /etc/kubernetes/pki/csr
mkdir /etc/kubernetes/pki/csr -p

# Kubernetes itself is installed under /var/lib/kubernetes
mkdir -p /var/lib/kubernetes
cd /var/lib/kubernetes
tar -xf kubernetes-server-1_21_1.tar.gz

# All binaries end up directly under /var/lib/kubernetes
[root@master1 kubernetes]# pwd
/var/lib/kubernetes
[root@master1 kubernetes]# ls
apiextensions-apiserver   etcd       kubeadm         kube-controller-manager  kubelet     kube-scheduler
cloud-controller-manager  hyperkube  kube-apiserver  kubectl                  kube-proxy  mounter
1.4.2 Certificate Overview ★★★

CFSSL is CloudFlare's open-source PKI/TLS toolkit, written in Go. It includes a command-line tool and an HTTP API service for signing, verifying and bundling TLS certificates. To request a certificate you only need to write the corresponding *-csr.json file; cfssl then generates the matching certificate and private key

Tip: hand-written JSON easily ends up with mismatched braces; an online JSON validator such as https://www.json.cn/ helps catch mistakes

GitHub: https://github.com/cloudflare/cfssl
Official downloads: https://pkg.cfssl.org/

Reference (a detailed walkthrough): https://www.cnblogs.com/LangXian/p/11282986.html

As the document proceeds, the certificates are generated alongside the component they belong to. The full set is:

  1. ca-config.json: every certificate issued by a CA (the CAs themselves excepted) needs this config; it defines certificate lifetimes and usages

  2. admin-csr.json: requests a certificate from the k8s CA to generate the administrator credentials, which end up in ~/.kube/config so kubectl can talk to the apiserver without extra configuration

  3. ca-csr.json: creates the k8s CA, which issues certificates to the k8s components so they can communicate securely

    apis-csr.json: requests a certificate from the k8s CA for the apiserver; also used to generate apis.kubeconfig

    ctrl-mgr-csr.json: requests a certificate from the k8s CA for controller-manager; also used to generate ctrl-mgr.kubeconfig

    sched-csr.json: requests a certificate from the k8s CA for the scheduler; also used to generate sched.kubeconfig

    kb-py-csr.json: requests a certificate from the k8s CA for kube-proxy; also used to generate kube-proxy.kubeconfig

    kubelet-token.csv: requests credentials from the k8s CA via TLS Bootstrapping; used to generate kubelet.kubeconfig

  4. etcd-ca.json: creates the etcd CA, which issues the certificates for etcd peer-to-peer (2380) and client-to-server (2379) HTTPS

    etcd-c2s-csr.json: requests a certificate from the etcd CA for HTTPS client access to etcd

    etcd-p2p-csr.json: requests a certificate from the etcd CA for HTTPS communication between etcd members (the cluster "heartbeat" certificate)

  5. apiext-ca.json: creates the Apiserver-Extend CA used for APIService aggregation, needed by e.g. Prometheus-Adapter and Metrics-Server

    apiext-csr.json: requests a certificate from the Apiserver-Extend CA for applications that later integrate with the aggregated API layer

1.4.3 Deploying cfssl, the self-signed CA toolkit ★★
  • cfssl is a Go-based self-signing toolkit, so only 3 binaries need to be downloaded:

    cfssl_1.5.0_linux_amd64

    cfssl-certinfo_1.5.0_linux_amd64

    cfssljson_1.5.0_linux_amd64

# Rename the binaries and make them executable
mv cfssl_1.5.0_linux_amd64 cfssl
mv cfssl-certinfo_1.5.0_linux_amd64 cfssl-certinfo
mv cfssljson_1.5.0_linux_amd64 cfssljson
chmod u+x cfssl*
mv cfssl* /usr/bin

1.5 Certificate Generation and Explanation ★★★★★

This section is the foundation for everything that follows; any mistake here will surface later as connection problems between the Kubernetes components

Following section 1.4.2, this chapter produces the full set of certificates and explains what each one is for and how it is configured

1.5.1 Create the certificate config file ca-config.json
  • There are just two top-level keys: signing and profiles

  • profiles can contain multiple entries, each with its own expiry and usages

  • The example below defines a 10-year certificate profile

  • In production the comments must be removed (JSON does not allow comments, so cfssl will reject the file otherwise)

  • See section 1.4.1 for where the file is stored

// >> ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"     // the key setting: 87600h = a 10-year certificate
    },
    "profiles": {
      "bw": {                  // profile name, referenced later with -profile=bw
         "expiry": "87600h",     // each profile can define its own expiry
         "usages": [
            "signing",           // may sign (act as an intermediate for) other certificates
            "key encipherment",  // key encipherment
            "server auth",       // the certificate may authenticate a TLS server to clients
            "client auth"        // the certificate may authenticate a TLS client to servers
         ]
      }
    }
  }
}
1.5.2 Create the three CAs

Strictly speaking two CAs would be enough, but production best practice is that resources integrated through the API aggregation layer (API-Extend) get their own CA instead of sharing the apiserver's, so this lab sets up three self-signed CAs:

  • Kubernetes cluster CA: ca.pem, ca-key.pem
  • Kubernetes API-extension CA: apiext-ca.pem, apiext-ca-key.pem
  • etcd cluster CA: etcd-ca.pem, etcd-ca-key.pem
Generating a CA
  1. Write the certificate signing request file *-csr.json
  2. Run the cfssl command to generate the CA
Create the etcd cluster CA ★★★★★
  • Only the CN and O fields really matter; everything else is boilerplate

  • CN (Common Name) maps to the User field in Kubernetes RoleBindings/ClusterRoleBindings

  • O (Organization) maps to the Group field in Kubernetes RoleBindings/ClusterRoleBindings

Write the *-csr.json file

/* >> etcd-ca-csr.json */
{
  "CN": "etcdca",       // CA name
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",        // country
      "ST": "BeiJing",  // state/province
      "L": "HaiDian",   // city/district
      "O": "bw",        // organization
      "OU": "for etcd"  // purpose
    }
  ]
}

Generate the etcd cluster CA

# Command: cfssl gencert -initca <CSR file> | cfssljson -bare <certificate name>
cd /etc/kubernetes/pki/csr
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca

# The generation log looks like this
[root@master1 csr]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
2021/06/04 09:59:47 [INFO] generating a new CA key and certificate from CSR
2021/06/04 09:59:47 [INFO] generate received request
2021/06/04 09:59:47 [INFO] received CSR
2021/06/04 09:59:47 [INFO] generating key: rsa-2048
2021/06/04 09:59:47 [INFO] encoded CSR
2021/06/04 09:59:47 [INFO] signed certificate with serial number 621498083872574224033593208142290456422519898833

# Files produced; the important ones are etcd-ca-key.pem (private key) and etcd-ca.pem (certificate/public key)
[root@master1 csr]# ls
ca-config.json  etcd-ca.csr  etcd-ca-csr.json  etcd-ca-key.pem  etcd-ca.pem

# Move the CA private key and certificate up one directory and delete etcd-ca.csr
rm -f etcd-ca.csr
mv {etcd-ca-key.pem,etcd-ca.pem} ..
Create the Kubernetes cluster CA ★★★★★

Write the *-csr.json file

// >> ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "HaiDian",
            "O": "bw",
            "OU": "for Kubernetes"
        }
    ]
}

Generate the Kubernetes cluster CA

# Command: cfssl gencert -initca <CSR file> | cfssljson -bare <certificate name>
cd /etc/kubernetes/pki/csr
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# The generation log looks like this
[root@master1 csr]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2021/06/04 10:19:32 [INFO] generating a new CA key and certificate from CSR
2021/06/04 10:19:32 [INFO] generate received request
2021/06/04 10:19:32 [INFO] received CSR
2021/06/04 10:19:32 [INFO] generating key: rsa-2048
2021/06/04 10:19:33 [INFO] encoded CSR
2021/06/04 10:19:33 [INFO] signed certificate with serial number 207239860746286176010691288511817782062067168190

# Files produced; the important ones are ca-key.pem (private key) and ca.pem (certificate/public key)
[root@master1 csr]# ls
ca-config.json ca-csr.json  ca-key.pem  ca.pem  etcd-ca-csr.json

# Move the CA private key and certificate up one directory and delete ca.csr
rm -f ca.csr
mv {ca-key.pem,ca.pem} ..
Create the Kubernetes API-extension CA ★★★★★

Write the *-csr.json file

// >> apiext-ca-csr.json
{
    "CN": "apiext-ca",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "HaiDian",
            "O": "bw",
            "OU": "for apiext"
        }
    ]
}

Generate the Kubernetes API-extension CA

# Command: cfssl gencert -initca <CSR file> | cfssljson -bare <certificate name>
cd /etc/kubernetes/pki/csr
cfssl gencert -initca apiext-ca-csr.json | cfssljson -bare apiext-ca

# The generation log looks like this
[root@master1 csr]# cfssl gencert -initca apiext-ca-csr.json | cfssljson -bare apiext-ca
2021/06/04 10:29:43 [INFO] generating a new CA key and certificate from CSR
2021/06/04 10:29:43 [INFO] generate received request
2021/06/04 10:29:43 [INFO] received CSR
2021/06/04 10:29:43 [INFO] generating key: rsa-2048
2021/06/04 10:29:43 [INFO] encoded CSR
2021/06/04 10:29:43 [INFO] signed certificate with serial number 691093726199939546115714904502608585704041287856

# Files produced; the important ones are apiext-ca-key.pem (private key) and apiext-ca.pem (certificate/public key)
[root@master1 csr]# ls
apiext-ca.csr  apiext-ca-csr.json  apiext-ca-key.pem  apiext-ca.pem  ca-config.json  ca-csr.json  etcd-ca-csr.json

# Move the CA private key and certificate up one directory and delete apiext-ca.csr
rm -f apiext-ca.csr
mv {apiext-ca-key.pem,apiext-ca.pem} ..
Summary of CA generation
# Three CAs have been created; each can sign certificates and private keys for requesters in self-signed HTTPS mode

# Certificates and keys generated so far; naming rule: *-key.pem is a private key, *.pem is a certificate
[root@master1 pki]# ll /etc/kubernetes/pki/ |awk '{print $NF}'
apiext-ca-key.pem     
apiext-ca.pem
ca-key.pem
ca.pem
etcd-ca-key.pem
etcd-ca.pem
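A quick sanity check of the three CAs with openssl (available by default on CentOS); note that cfssl issues the CA certificate itself with its own default lifetime (about 5 years) unless a ca.expiry field is added to the CSR:

# Print subject, issuer and validity period of each CA certificate
for ca in ca etcd-ca apiext-ca; do
  openssl x509 -noout -subject -issuer -dates -in /etc/kubernetes/pki/${ca}.pem
done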
1.5.3 Sign the etcd certificates with the etcd CA ★★★★★
etcd certificate requirements

Two certificate pairs are needed:

etcd port 2379, client to server: write the CSR file etcd-c2s-csr.json

etcd port 2380, peer to peer: write the CSR file etcd-p2p-csr.json

Write the etcd-c2s-csr.json file
// >> etcd-c2s-csr.json
{
    "CN": "system:etcd",
    // The hosts field defines which addresses the certificate is valid for. Plan ahead and
    // include IPs you may scale to later, so the certificate does not have to be reissued;
    // for this lab the three etcd/master addresses are enough
    "hosts": [
        "172.27.3.6",
        "172.27.3.7",
        "172.27.3.8"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:etcd",
            "OU": "c2s"
        }
    ]
}
Write the etcd-p2p-csr.json file
// >> etcd-p2p-csr.json
{
    "CN": "system:etcd",
    "hosts": [
        "172.27.3.6",
        "172.27.3.7",
        "172.27.3.8"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:etcd",
            "OU": "p2p"
        }
    ]
}
Generate the HTTPS certificates from the two files above
# Copy ca-config.json and the two json files from the csr directory into /etc/kubernetes/pki
cp -rf /etc/kubernetes/pki/csr/ca-config.json /etc/kubernetes/pki
cp -rf /etc/kubernetes/pki/csr/{etcd-c2s-csr.json,etcd-p2p-csr.json} /etc/kubernetes/pki

# Generate the certificates, signed by etcd-ca.pem/etcd-ca-key.pem, with the 10-year expiry and usages from the bw profile in ca-config.json
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=bw etcd-c2s-csr.json | cfssljson -bare etcd-c2s

cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=bw etcd-p2p-csr.json | cfssljson -bare etcd-p2p

# The generation log looks like this
[root@master1 pki]# cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=bw etcd-c2s-csr.json | cfssljson -bare etcd-c2s
2021/06/04 11:13:10 [INFO] generate received request
2021/06/04 11:13:10 [INFO] received CSR
2021/06/04 11:13:10 [INFO] generating key: rsa-2048
2021/06/04 11:13:10 [INFO] encoded CSR
2021/06/04 11:13:10 [INFO] signed certificate with serial number 467012129846670725355039718987531010310833070452
[root@master1 pki]# cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=bw etcd-p2p-csr.json | cfssljson -bare etcd-p2p
2021/06/04 11:13:37 [INFO] generate received request
2021/06/04 11:13:37 [INFO] received CSR
2021/06/04 11:13:37 [INFO] generating key: rsa-2048
2021/06/04 11:13:38 [INFO] encoded CSR
2021/06/04 11:13:38 [INFO] signed certificate with serial number 130894677634130499378812347602107303299019924979

# Files produced
[root@master1 pki]# ll {etcd-p2p*,etcd-c2s*} | awk '{print $NF}'
etcd-c2s.csr
etcd-c2s-csr.json
etcd-c2s-key.pem
etcd-c2s.pem
etcd-p2p.csr
etcd-p2p-csr.json
etcd-p2p-key.pem
etcd-p2p.pem

# Remove the *.csr and *.json files
rm -f {*.csr,*.json}
1.5.4 Sign the cluster certificates with the Kubernetes CA ★★★★★
Write the apis-csr.json file
// >> apis-csr.json

{
    "CN": "system:apiserver",
    "hosts": [
      "10.96.0.1",   //Kubernetes: service-ip地址
      "10.96.0.2",   
      "10.96.0.10",  //默认第十个地址为DNS地址
      "127.0.0.1",
      "172.27.3.4",  //服务器地址范围以及未来可能扩展的个数
      "172.27.3.5",
      "172.27.3.6",
      "172.27.3.7",
      "172.27.3.8",
      "172.27.3.9",
      "172.27.3.10",
      "kubernetes",   //后续CoreDNS解析的域名大概有n中  
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Haidian",
            "ST": "Beijing",
            "O": "kubernetes",
            "OU": "apiserver"
        }
    ]
}
Write the ctrl-mgr-csr.json file
// >> ctrl-mgr-csr.json
{
    "CN": "system:controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Haidian",
            "ST": "Beijing",
            "O": "kubernetes",
            "OU": "controller-manager"
        }
    ]
}
Write the sched-csr.json file
// >> sched-csr.json
{
    "CN": "system:scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Haidian",
            "ST": "Beijing",
            "O": "kubernetes",
            "OU": "scheduler"
        }
    ]
}
Write the kb-py-csr.json file
// >> kb-py-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Haidian",
            "ST": "Beijing",
            "O": "kubernetes",
            "OU": "kube-proxy"
        }
    ]
}
Generate the HTTPS certificates from the four files above
# Copy ca-config.json and the four json files from the csr directory into /etc/kubernetes/pki
cp -rf /etc/kubernetes/pki/csr/ca-config.json /etc/kubernetes/pki
cp -rf /etc/kubernetes/pki/csr/{kb-py-csr.json,sched-csr.json,ctrl-mgr-csr.json,apis-csr.json} /etc/kubernetes/pki

# Generate the certificates, signed by ca.pem/ca-key.pem, with the 10-year expiry and usages from the bw profile in ca-config.json
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw apis-csr.json | cfssljson -bare apis

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw ctrl-mgr-csr.json | cfssljson -bare ctrl-mgr

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw sched-csr.json | cfssljson -bare sched

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw kb-py-csr.json | cfssljson -bare kb-py


# The generation log looks like this; note that the three components other than the apiserver do not need a hosts field
[root@master1 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw apis-csr.json | cfssljson -bare apis
2021/06/04 14:35:33 [INFO] generate received request
2021/06/04 14:35:33 [INFO] received CSR
2021/06/04 14:35:33 [INFO] generating key: rsa-2048
2021/06/04 14:35:33 [INFO] encoded CSR
2021/06/04 14:35:33 [INFO] signed certificate with serial number 594946163506665712495943730882605343675369605998
[root@master1 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw ctrl-mgr-csr.json | cfssljson -bare ctrl-mgr
2021/06/04 14:35:42 [INFO] generate received request
2021/06/04 14:35:42 [INFO] received CSR
2021/06/04 14:35:42 [INFO] generating key: rsa-2048
2021/06/04 14:35:42 [INFO] encoded CSR
2021/06/04 14:35:42 [INFO] signed certificate with serial number 524468961505358074541913513840589676049742039062
2021/06/04 14:35:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master1 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw sched-csr.json | cfssljson -bare sched
2021/06/04 14:35:48 [INFO] generate received request
2021/06/04 14:35:48 [INFO] received CSR
2021/06/04 14:35:48 [INFO] generating key: rsa-2048
2021/06/04 14:35:48 [INFO] encoded CSR
2021/06/04 14:35:48 [INFO] signed certificate with serial number 619275827919899328100694178618870170632375540577
2021/06/04 14:35:48 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master1 pki]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=bw kb-py-csr.json | cfssljson -bare kb-py
2021/06/04 14:35:54 [INFO] generate received request
2021/06/04 14:35:54 [INFO] received CSR
2021/06/04 14:35:54 [INFO] generating key: rsa-2048
2021/06/04 14:35:55 [INFO] encoded CSR
2021/06/04 14:35:55 [INFO] signed certificate with serial number 589554608747963635147158857773744728377761421835
2021/06/04 14:35:55 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

# Remove the *.csr and *.json files
rm -f {*.csr,*.json}

# Files produced
[root@master1 pki]# ll /etc/kubernetes/pki | awk '{print $NF}'
apiext-ca-key.pem
apiext-ca.pem
apis-key.pem
apis.pem
ca-key.pem
ca.pem
ctrl-mgr-key.pem
ctrl-mgr.pem
etcd-c2s-key.pem
etcd-c2s.pem
etcd-ca-key.pem
etcd-ca.pem
etcd-p2p-key.pem
etcd-p2p.pem
kb-py-key.pem
kb-py.pem
sched-key.pem
sched.pem
1.5.5 Sign the API-extension certificates with the API-extension CA ★★
Write the apiext-csr.json file
// >> apiext-csr.json
{
    "CN": "system:apiext",
    "hosts": [
      "10.96.0.1",   //Kubernetes: service-ip地址
      "10.96.0.2",   
      "10.96.0.10",  //默认第十个地址为DNS地址
      "127.0.0.1",
      "172.27.3.4",  //服务器地址范围以及未来可能扩展的个数
      "172.27.3.5",
      "172.27.3.6",
      "172.27.3.7",
      "172.27.3.8",
      "172.27.3.9",
      "172.27.3.10",
      "kubernetes",   //后续CoreDNS解析的域名大概有n中  
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Haidian",
            "ST": "Beijing",
            "O": "kubernetes",
            "OU": "apiext"
        }
    ]
}
Generate the HTTPS certificate from the file above
# Copy ca-config.json and the json file from the csr directory into /etc/kubernetes/pki
cp -rf /etc/kubernetes/pki/csr/ca-config.json /etc/kubernetes/pki
cp -rf /etc/kubernetes/pki/csr/apiext-csr.json /etc/kubernetes/pki

# Generate the certificate, signed by apiext-ca.pem/apiext-ca-key.pem, with the 10-year expiry and usages from the bw profile in ca-config.json
cfssl gencert -ca=apiext-ca.pem -ca-key=apiext-ca-key.pem -config=ca-config.json -profile=bw apiext-csr.json | cfssljson -bare apiext
1.5.6 Generate the token for TLS Bootstrapping

Once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must use valid CA-signed certificates to talk to it. With many nodes, issuing those client certificates by hand is a lot of work and makes scaling the cluster harder

To simplify this, Kubernetes provides the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet uses a low-privilege user to request a certificate from the apiserver, and the certificate is signed dynamically by the apiserver

Write the kubelet-token.csv file
# Generate a random 16-byte hex token, or make up an easy-to-remember one yourself
# Hand-written example: abc1def2ghi3jkl4abc1def2ghi3jkl4

# Random token, the recommended approach in production; this lab uses the command below
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

# The value generated this time, recorded for later use: 71eec170d1e90bf709965515509fe644
[root@master1 pki]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
71eec170d1e90bf709965515509fe644

# Create kubelet-token.csv
# Format: token,user name,UID,group
cat > kubelet-token.csv <<-EOF
71eec170d1e90bf709965515509fe644,"system:node-bootstrapper",10001,"system:node-bootstrapper"
EOF

# Files produced
[root@master1 pki]# ll /etc/kubernetes/pki | awk '{print $NF}'
apiext-ca-key.pem
apiext-ca.pem
apis-key.pem
apis.pem
ca-key.pem
ca.pem
ctrl-mgr-key.pem
ctrl-mgr.pem
etcd-c2s-key.pem
etcd-c2s.pem
etcd-ca-key.pem
etcd-ca.pem
etcd-p2p-key.pem
etcd-p2p.pem
kb-py-key.pem
kb-py.pem
kubelet-token.csv  # <--
sched-key.pem
sched.pem
1.5.7 Summary
  • All certificates have been generated; from here on it is only a matter of wiring each certificate into the component it belongs to

  • Certificates are the single most important part of a binary Kubernetes deployment

  • To keep the lab simple, every certificate generated on master1 is copied to all Kubernetes nodes. This is not recommended in production, where each machine should only carry the minimum set of certificates it needs; a distribution sketch follows:
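A sketch of that distribution step, reusing the ansible inventory from 1.3.15:

# Copy the whole pki directory from master1 to every other host
ansible others -m file -a "path=/etc/kubernetes/pki state=directory"
ansible others -m copy -a "src=/etc/kubernetes/pki/ dest=/etc/kubernetes/pki/"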

2. etcd Deployment

2.1 etcd deployment notes
| Name  | IP         | Ports     | Paths |
|-------|------------|-----------|-------|
| etcd1 | 172.27.3.6 | 2379,2380 | /var/lib/kubernetes/etcd, /etc/kubernetes/manifests, /etc/kubernetes/pki |
| etcd2 | 172.27.3.7 | 2379,2380 | /var/lib/kubernetes/etcd, /etc/kubernetes/manifests, /etc/kubernetes/pki |
| etcd3 | 172.27.3.8 | 2379,2380 | /var/lib/kubernetes/etcd, /etc/kubernetes/manifests, /etc/kubernetes/pki |

2.2 Generate the etcd CA and HTTPS certificates with cfssl

See the key sections 1.5.2 and 1.5.3

2.3 Layout of the etcd binary package

d bin              # self-created; holds the etcd and etcdctl binaries, exported on PATH
d conf             # self-created; holds the etcd configuration file, named etcd.conf
d Documentation    # etcd documentation shipped with the release
d tls              # self-created; holds the etcd CA and the other key pairs (server, peer, healthcheck, etc.)

README-etcdctl.md

README.md

READMEv2-etcdctl.md

2.4 Writing the etcd configuration files

Note that the three nodes' configurations are not identical; read them carefully

// >> /etc/kubernetes/manifests/etcd1.yaml -- 172.27.3.6
---
name: 'etcd1'
data-dir: /var/lib/kubernetes/etcd
wal-dir: /var/lib/kubernetes/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.27.3.6:2380'
listen-client-urls: 'https://172.27.3.6:2379,http://127.0.0.1:2379'
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.27.3.6:2380'
advertise-client-urls: 'https://172.27.3.6:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://172.27.3.6:2380,etcd2=https://172.27.3.7:2380,etcd3=https://172.27.3.8:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-c2s.pem'
  key-file: '/etc/kubernetes/pki/etcd-c2s-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-p2p.pem'
  key-file: '/etc/kubernetes/pki/etcd-p2p-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false

---


// >> /etc/kubernetes/manifests/etcd2.yaml  -- 172.27.3.7
name: 'etcd2'
data-dir: '/var/lib/kubernetes/etcd'
wal-dir: '/var/lib/kubernetes/etcd/wal'
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.27.3.7:2380'
listen-client-urls: 'https://172.27.3.7:2379,http://127.0.0.1:2379'
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.27.3.7:2380'
advertise-client-urls: 'https://172.27.3.7:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://172.27.3.6:2380,etcd2=https://172.27.3.7:2380,etcd3=https://172.27.3.8:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-c2s.pem'
  key-file: '/etc/kubernetes/pki/etcd-c2s-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-p2p.pem'
  key-file: '/etc/kubernetes/pki/etcd-p2p-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false

---


// >> /etc/kubernetes/manifests/etcd3.yaml  -- 172.27.3.8
name: 'etcd3'
data-dir: '/var/lib/kubernetes/etcd'
wal-dir: '/var/lib/kubernetes/etcd/wal'
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.27.3.8:2380'
listen-client-urls: 'https://172.27.3.8:2379,http://127.0.0.1:2379'
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.27.3.8:2380'
advertise-client-urls: 'https://172.27.3.8:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://172.27.3.6:2380,etcd2=https://172.27.3.7:2380,etcd3=https://172.27.3.8:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-c2s.pem'
  key-file: '/etc/kubernetes/pki/etcd-c2s-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-p2p.pem'
  key-file: '/etc/kubernetes/pki/etcd-p2p-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false

2.5 Writing the systemd unit file etcd.service

# The same etcd.service works on all three machines; save each node's configuration from
# section 2.4 at the path referenced by --config-file below (/etc/kubernetes/manifests/etcd_config.yaml)
cat > /lib/systemd/system/etcd.service <<EOF
cat > /lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
ExecStart=/var/lib/kubernetes/etcd/etcd --config-file=/etc/kubernetes/manifests/etcd_config.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now etcd

2.6 Starting etcd and troubleshooting

# If the service fails to start, check the logs with either of the following
tail -f /var/log/messages
journalctl -u etcd

# Cluster health check
./etcdctl \
--cacert=/etc/kubernetes/pki/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd-c2s.pem  \
--key=/etc/kubernetes/pki/etcd-c2s-key.pem \
--endpoints="https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379" \
endpoint health

# If the cluster is healthy it returns the following
[root@master1 etcd]# ./etcdctl --cacert=/etc/kubernetes/pki/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd-c2s.pem --key=/etc/kubernetes/pki/etcd-c2s-key.pem --endpoints="https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379" endpoint health
https://172.27.3.8:2379 is healthy: successfully committed proposal: took = 20.260691ms
https://172.27.3.6:2379 is healthy: successfully committed proposal: took = 21.909301ms
https://172.27.3.7:2379 is healthy: successfully committed proposal: took = 22.363971ms


# With table output, the leader is shown explicitly
[root@master1 etcd]# ./etcdctl --cacert=/etc/kubernetes/pki/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd-c2s.pem --key=/etc/kubernetes/pki/etcd-c2s-key.pem --endpoints="https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379" -w table endpoint status
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://172.27.3.6:2379 | 13c6684c66693501 |  3.4.14 |   20 kB |     false |      false |       126 |         10 |                 10 |        |
| https://172.27.3.7:2379 | a6e141915263ca99 |  3.4.14 |   20 kB |     false |      false |       126 |         10 |                 10 |        |
| https://172.27.3.8:2379 | 9fe4a6397b1ff0dc |  3.4.14 |   20 kB |      true |      false |       126 |         10 |                 10 |        |
+-------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+


2.7 An etcdctl convenience wrapper

# Because etcd communication is over HTTPS, every command needs the cacert/cert/key and endpoint flags
# A small wrapper script saves the typing

#!/bin/bash
cat > etcdctl.sh <<EOF
/var/lib/kubernetes/etcd/etcdctl --cacert=/etc/kubernetes/pki/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd-c2s.pem --key=/etc/kubernetes/pki/etcd-c2s-key.pem --endpoints="https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379" \$@
EOF

chmod +x etcdctl.sh
ln -s $(pwd)/etcdctl.sh /usr/bin/etcdctl

# Result
[root@master1 etcd]# etcdctl endpoint health
https://172.27.3.6:2379 is healthy: successfully committed proposal: took = 19.857888ms
https://172.27.3.8:2379 is healthy: successfully committed proposal: took = 19.931263ms
https://172.27.3.7:2379 is healthy: successfully committed proposal: took = 22.409782ms
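The same flags are needed for routine maintenance; as a sketch, a snapshot backup looks like this (snapshot save accepts only a single endpoint, and the backup path is arbitrary):

# Back up the keyspace from one member
/var/lib/kubernetes/etcd/etcdctl \
  --cacert=/etc/kubernetes/pki/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd-c2s.pem \
  --key=/etc/kubernetes/pki/etcd-c2s-key.pem \
  --endpoints="https://172.27.3.6:2379" \
  snapshot save /var/lib/kubernetes/etcd-backup-$(date +%F).db

# Inspect the snapshot
/var/lib/kubernetes/etcd/etcdctl snapshot status /var/lib/kubernetes/etcd-backup-$(date +%F).db -w table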

3. Docker Deployment

Because this is a binary deployment, docker only needs to be installed on the nodeX machines

3.1 Install docker from the Aliyun mirror

# yum-utils provides yum-config-manager for repository management
yum -y install yum-utils

# Add the Aliyun docker repo; the official repo is very slow from China, so a domestic mirror is recommended
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

# Install docker
yum -y install docker-ce-19.03.*

# Configure a domestic registry mirror; kubelet officially recommends systemd as the cgroup driver
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Enable and start docker
systemctl enable --now docker

# Verify the installation and that the registry mirror works by pulling an image
systemctl status docker 
docker pull busybox             

3.2 Install docker via DaoCloud

# See https://get.daocloud.io/
# Remove any old docker packages
yum remove docker docker-common container-selinux docker-selinux docker-engine \
&& rm -fr /var/lib/docker/

# One-line docker install
curl -sSL https://get.daocloud.io/docker | sh

# Configure a domestic registry mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

# Enable and start docker
systemctl enable docker && systemctl start docker

# Verify that the registry mirror works
docker pull busybox    

4. Master Node Deployment

4.1 Deploying the apiserver

4.1.1 Generate the Kubernetes CA and HTTPS certificates with cfssl

See the key sections 1.5.2 and 1.5.4

4.1.2 Create the apiserver configuration file
# >> master1: /etc/kubernetes/manifests/apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/apiserver \
--etcd-servers=https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=172.27.3.6 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--enable-aggregator-routing=true \
--token-auth-file=/etc/kubernetes/pki/kubelet-token.csv \
--service-node-port-range=60000-65000 \
--kubelet-client-certificate=/etc/kubernetes/pki/apis.pem \
--kubelet-client-key=/etc/kubernetes/pki/apis-key.pem \
--tls-cert-file=/etc/kubernetes/pki/apis.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apis-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-c2s.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-c2s-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiext-ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kubernetes/k8s-audit.log"

# >> master2: /etc/kubernetes/manifests/apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/apiserver \
--etcd-servers=https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=172.27.3.7 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--enable-aggregator-routing=true \
--token-auth-file=/etc/kubernetes/pki/kubelet-token.csv \
--service-node-port-range=60000-65000 \
--kubelet-client-certificate=/etc/kubernetes/pki/apis.pem \
--kubelet-client-key=/etc/kubernetes/pki/apis-key.pem \
--tls-cert-file=/etc/kubernetes/pki/apis.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apis-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-c2s.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-c2s-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiext-ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kubernetes/k8s-audit.log"

# >> master3: /etc/kubernetes/manifests/apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/apiserver \
--etcd-servers=https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=172.27.3.8 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--enable-aggregator-routing=true \
--token-auth-file=/etc/kubernetes/pki/kubelet-token.csv \
--service-node-port-range=60000-65000 \
--kubelet-client-certificate=/etc/kubernetes/pki/apis.pem \
--kubelet-client-key=/etc/kubernetes/pki/apis-key.pem \
--tls-cert-file=/etc/kubernetes/pki/apis.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apis-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-c2s.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-c2s-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiext-ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
4.1.3 Writing the systemd unit file apiserver.service
#!/bin/bash
cat > /lib/systemd/system/apiserver.service <<-EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/manifests/apiserver.conf
ExecStart=/var/lib/kubernetes/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now apiserver

# Verify the service is healthy
systemctl status apiserver
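A quick way to confirm the apiserver answers over TLS; /healthz is readable without client credentials thanks to the built-in system:public-info-viewer binding, and the second call also verifies the serving certificate against the cluster CA:

# Both should print "ok"
curl -k https://127.0.0.1:6443/healthz
curl --cacert /etc/kubernetes/pki/ca.pem https://172.27.3.6:6443/healthz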

4.2 Deploying Controller-Manager

4.2.1 Create the Controller-Manager configuration file

Only the apiserver needs to expose a port externally; Controller-Manager only needs to reach the apiserver

# >> /etc/kubernetes/manifests/ctrl-mgr.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/controller-manager \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.96.0.0/12 \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem  \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiext-ca.pem \
--node-cidr-mask-size=24 \
--experimental-cluster-signing-duration=87600h0m0s"
4.2.2 Writing the systemd unit file controller-manager.service
#!/bin/bash
# The same configuration works on all three masters
cat > /lib/systemd/system/controller-manager.service <<-EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/manifests/ctrl-mgr.conf
ExecStart=/var/lib/kubernetes/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now controller-manager

# Verify the service is healthy
systemctl status controller-manager

4.3 Deploying the Scheduler

4.3.1 Create the Scheduler configuration file

Only the apiserver needs to expose a port externally; the Scheduler only needs to reach the apiserver

#!/bin/bash
# The same configuration works on all three masters
cat > /etc/kubernetes/manifests/scheduler.conf <<-EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/scheduler \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF
4.3.2 Writing the systemd unit file scheduler.service
#!/bin/bash
# The same configuration works on all three masters
cat > /lib/systemd/system/scheduler.service <<-EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/manifests/scheduler.conf
ExecStart=/var/lib/kubernetes/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now scheduler

# Verify the service is healthy
systemctl status scheduler

4.4 Checking the result

[root@master1 ~ ]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

4.5 Repeat all of the steps above on the other two master nodes

5. Node Deployment

The worker nodes only need kubelet, kube-proxy and docker

5.1 Preparation

5.1.1 Check the binaries under /var/lib/kubernetes
# kubelet and kube-proxy must be present
[root@node1 kubernetes]# ls
apiextensions-apiserver  kubelet  kube-proxy
5.1.2 Every node must have the pause image
[root@node1 server]# docker images
REPOSITORY                                         TAG       IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/pause   3.4.1      0f8457a4c2ec   4 months ago    683kB
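If the image is not present yet, it can be pre-pulled on node1 and node2; the tag matches the --pod-infra-container-image flag used in kubelet.conf (5.2.1):

# Pull the pause image from the Aliyun mirror of the google_containers registry
docker pull registry.aliyuncs.com/google_containers/pause:3.4.1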

5.2 Deploying the kubelet

5.2.1 Create the kubelet.conf top-level configuration file
# >> master1
#!/bin/bash
# Notes on the flags below (kept outside the quoted value so they do not end up in the options string):
#   --kubeconfig            may point to a file that does not exist yet; it is generated automatically after bootstrapping
#   --bootstrap-kubeconfig  is generated by the script in 5.2.4
#   --config                is the standard KubeletConfiguration file written in 5.2.2
cat > /etc/kubernetes/manifests/kubelet.conf <<-EOF
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/node \
--network-plugin=cni \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/manifests/kubelet-config.yaml \
--cert-dir=/etc/kubernetes/pki \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.4.1"
EOF
5.2.2 Create the kubelet-config.yaml file
// >> /etc/kubernetes/manifests/kubelet-config.yaml 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd 
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
5.2.3 Authorize the bootstrap user to request certificates
# See kubelet-token.csv from 1.5.6: 71eec170d1e90bf709965515509fe644,"system:node-bootstrapper",10001,"system:node-bootstrapper"
# Format: token,user name,UID,group
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=system:node-bootstrapper
5.2.4 Script to generate bootstrap.kubeconfig
#!/bin/bash
# Quote the heredoc delimiter so the variables are expanded when the generated script runs, not here
cat > /usr/local/kubernetes/node/conf/mk-bootstrap-config.sh <<-'EOF'

# The apiserver the kubelet connects to (<IP:PORT>); if a load balancer is added in front later,
# change this to https://<VIP:PORT>, e.g. KUBE_APISERVER="https://192.168.23.200:6443"
KUBE_APISERVER="https://192.168.140.11:6443"

# The token created in kubelet-token.csv is used here and must match the apiserver's --token-auth-file entry
TOKEN="3b5ba46c3aa04f8f306097f59f365233"

# The kubelet first tries kubelet.kubeconfig; if that does not grant access, it falls back to
# bootstrap.kubeconfig to submit a CSR to the apiserver, and only after the CSR is approved
# is the host registered as a node.
/usr/local/kubernetes/node/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=/usr/local/kubernetes/node/tls/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=/usr/local/kubernetes/node/conf/bootstrap.kubeconfig


/usr/local/kubernetes/node/bin/kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=/usr/local/kubernetes/node/conf/bootstrap.kubeconfig


/usr/local/kubernetes/node/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=/usr/local/kubernetes/node/conf/bootstrap.kubeconfig

/usr/local/kubernetes/node/bin/kubectl config use-context default --kubeconfig=/usr/local/kubernetes/node/conf/bootstrap.kubeconfig
EOF
chmod u+x /usr/local/kubernetes/node/conf/mk-bootstrap-config.sh
sh /usr/local/kubernetes/node/conf/mk-bootstrap-config.sh


# The script actually used in this lab is shown below; run these commands directly

KUBE_APISERVER="https://172.27.3.6:6443" 

TOKEN="71eec170d1e90bf709965515509fe644"

/var/lib/kubernetes/kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig


/var/lib/kubernetes/kubectl config set-credentials "system:node-bootstrapper" \
  --token=${TOKEN} \
  --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig


/var/lib/kubernetes/kubectl config set-context default \
--cluster=kubernetes \
  --user="system:node-bootstrapper" \
  --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig

/var/lib/kubernetes/kubectl config use-context default --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig
5.2.5 Example of the generated bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBD...
    server: https://192.168.140.11:6443         
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 3b5ba46c3aa04f8f306097f59f365233
5.2.6 Writing the systemd unit file kubelet.service
#!/bin/bash
cat > /lib/systemd/system/kubelet.service <<-EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/manifests/kubelet.conf
ExecStart=/var/lib/kubernetes/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now kubelet

# Verify the service is healthy
systemctl status kubelet
5.2.7 Review and approve the kubelet certificate requests on a master
# On a master, list the CSRs; node1 and node2 have automatically submitted certificate requests to the apiserver
[root@master1 kubernetes]# kubectl get csr -A
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR                  CONDITION
node-csr-8rtLWyksSoe0qEeItpZrovpcmosp7ySscf1iIHcw4GI   11m   kubernetes.io/kube-apiserver-client-kubelet   system:node-bootstrapper   Pending
node-csr-Y8dx_U7DCCYFiE60Caf_nxN9brjhGF8EeHWfIA8GeFQ   11m   kubernetes.io/kube-apiserver-client-kubelet   system:node-bootstrapper   Pending


# Approve the requests
kubectl certificate approve node-csr-8rtLWyksSoe0qEeItpZrovpcmosp7ySscf1iIHcw4GI
kubectl certificate approve node-csr-Y8dx_U7DCCYFiE60Caf_nxN9brjhGF8EeHWfIA8GeFQ

# Check the nodes (they stay NotReady until the CNI plugin is installed in section 6)
[root@master1 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   <none>   5s    v1.18.16
node2   NotReady   <none>   6s    v1.18.16
5.2.8 Authorize the apiserver to access the kubelet
#!/bin/bash
cat > /etc/kubernetes/manifests/apis2kubelet.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:apis2kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:apis2kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    # must match the CN of the certificate passed to --kubelet-client-certificate (system:apiserver, see apis-csr.json)
    name: system:apiserver
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apis2nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:apis2kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:kube-apiserver
EOF

kubectl apply -f /etc/kubernetes/manifests/apis2kubelet.yaml
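A sketch of verifying the objects and, once the nodes from 5.2.7 are registered, exercising the apiserver-to-kubelet path end to end:

# The ClusterRole and the two ClusterRoleBindings should exist
kubectl get clusterrole system:apis2kubelet
kubectl get clusterrolebinding system:kube-apiserver apis2nodes

# This request is proxied by the apiserver to node1's kubelet and should return "ok"
kubectl get --raw "/api/v1/nodes/node1/proxy/healthz"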

5.3 Deploying kube-proxy

5.3.1 Create the kube-proxy configuration file
#!/bin/bash
# Only the standard kube-proxy configuration file (written in 5.3.2) is needed, passed via --config
cat > /etc/kubernetes/manifests/kube-proxy.conf <<-EOF
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/kube-proxy \
--config=/etc/kubernetes/manifests/kb-py-config.yaml"
EOF
5.3.2 Create the kb-py-config.yaml configuration file
#!/bin/bash
cat > /etc/kubernetes/manifests/kb-py-config.yaml <<-EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
# the Pod network CIDR; must match --cluster-cidr on controller-manager, not the service range
clusterCIDR: 10.244.0.0/16
EOF
5.3.3 Script to generate kube-proxy.kubeconfig
#!/bin/bash

cat > /etc/kubernetes/mk-kube-proxy-config.sh <<-EOF
# For a step-by-step explanation see the bootstrap.kubeconfig script in 5.2.4
KUBE_APISERVER="https://172.27.3.6:6443" 

/var/lib/kubernetes/kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=\${KUBE_APISERVER} \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

/var/lib/kubernetes/kubectl config set-credentials "system:kube-proxy" \
  # 不同点就是这里,kube-proxy需要传入证书和私钥,基于https双向认证,而不是以token方式认证的
  --client-certificate=/etc/kubernetes/pki/kb-py.pem \
  --client-key=/etc/kubernetes/pki/kb-py-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

/var/lib/kubernetes/kubectl config set-context default \
  --cluster=kubernetes \
  --user="system:kube-proxy" \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

/var/lib/kubernetes/kubectl config use-context default --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
EOF

chmod u+x /etc/kubernetes/mk-kube-proxy-config.sh
sh /etc/kubernetes/mk-kube-proxy-config.sh
5.3.4 Example of the generated kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1C...
    server: https://172.27.3.6:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:kube-proxy
  user:
    client-certificate-data: LS0tLS1CRU...
    client-key-data: LS0tLS1CRUdJ...
5.3.5 Writing the systemd unit file kube-proxy.service
#!/bin/bash
cat > /lib/systemd/system/kube-proxy.service <<-EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/manifests/kube-proxy.conf
ExecStart=/var/lib/kubernetes/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

# Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now kube-proxy

# Verify the service is healthy
systemctl status kube-proxy

6. CNI Plugin Deployment

There are many CNI-compliant plugins: CoreOS's Flannel, the pure layer-3 Calico, Canal which combines the two, weave, and others. Overall Calico comes out best in both reputation and performance, and it is the main choice for production here

6.1 Deploying Calico

6.1.1 Preparation
# Calico is installed with etcd as its datastore
# https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-etcd-datastore
# Download the manifest
curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o /etc/kubernetes/manifests/calico.yaml
6.1.2 Edit calico.yaml
# Required changes:
# The three etcd key/certificate values must be filled in as the output of: cat <file> | base64 -w 0
# etcd_endpoints: must list the addresses of every etcd node in the cluster
# Add: name: IP_AUTODETECTION_METHOD  value: interface=em1
# Uncomment: name: CALICO_IPV4POOL_CIDR value: "10.244.0.0/16"
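The three base64 values in the Secret below can be generated from the files produced in 1.5.3; a sketch, assuming the stock manifest still contains the commented "# etcd-key: null" style placeholders:

# Encode the etcd client key, certificate and CA as single-line base64
ETCD_KEY=$(base64 -w 0 < /etc/kubernetes/pki/etcd-c2s-key.pem)
ETCD_CERT=$(base64 -w 0 < /etc/kubernetes/pki/etcd-c2s.pem)
ETCD_CA=$(base64 -w 0 < /etc/kubernetes/pki/etcd-ca.pem)

# Substitute them into the downloaded manifest
sed -i "s|# etcd-key: null|etcd-key: ${ETCD_KEY}|" /etc/kubernetes/manifests/calico.yaml
sed -i "s|# etcd-cert: null|etcd-cert: ${ETCD_CERT}|" /etc/kubernetes/manifests/calico.yaml
sed -i "s|# etcd-ca: null|etcd-ca: ${ETCD_CA}|" /etc/kubernetes/manifests/calico.yaml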
             
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  etcd-key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBNTZ0elU3NmdUY1RqdTkxYmgvSkwycTJybThBcEw0cnJUWDlUT0FMbnNDYmJibmIzCmFqdHFTUWQyalEzZkN2ZHJIREFiays1ekZwMzVHOHl1UXp5WUZEWm9xdEU3RE81OEpyRXhtZ1JoVU40NXZqZXoKT3NTZ1lkZDNNcE1yb05tS3hTQnFNL0ZkaU8xQytycjk1aks1L3EzSTFVUFN1bVdVVnI1UE5ZMTlLQUZ0cWhiUgpTN0MxOEE2aGdPQUhNcnJ5Z0puaXI3bjNMNCtsNy8yV3picC9Dc2tOT2x2TUd0MzN5ZThnSUczdkRod3ZRSUt6ClltTzlDUCtIWTRTZjBBNHkyZ1dVWnBveUFjWXEvMHdveWYwbHZoSDBpU0t3Wmp3cU1TZHQzSFNQd1VXNjB2NTIKYUZzeVBMYUdwcmdOOUo0a0t1bW5MRTFFeUNUZkhMaC80RDI1N1FJREFRQUJBb0lCQUYyTlpjLytuYnRwODlEcwpiY2J6T0dDQlcxaFUvQXkvMjQ5N0NEOEpKVWlFR0g1K09pRkRCd09ncU9ZRElQdkx1QXcwL1IzNjM2elZkRUFlCm9veHlUck55MlVlSm9IL3pXbDFCbHRjc1I2UWhhVTRBTGpkZ0thZjVHNkJuditsL2o4TlUrSzRySE90cmJHM1YKenl3RGRncDdZU3VGN1BYcWlVR1NMbWhnejVhMFlDMk9JUkd2NlV4bWh6QW1nM3F0bmE3OHFZbDRXUU51aG5jUgo5RXZ1ZCtxOVZ5VGsrRXhSK1NxUVkyTks4cDNndkR2ZTB6MGJtTFVpa1J4NGFOdUtlK2xOU3V1RnJwUXFsUmJjCndoQXgzS2JaZGhaZmVRa2NQQm5hbzFJNUo4eGJaSGZmREk2cHY3S2pKUXJZcFZQTGpBc2lCYk0rcmpWdkZTY00KeldzNENGMENnWUVBN0IybFcyVTdINVMySk9jUVVMdWpBb0pCSURXRVBZemN6ZGhSbHd0blYrU1hMNlJXZUVVdQpzVVFLUXhXQ0xlblJvcXc0a1EvR2lYbXFCazkwM2ZTZHBUajBDTHdTSTZBYllRZnhCZ0o4cEhBbm1kbFUySEo4CkxqNU1raCtMdkdXdW5sYk9pSDVLUmVpZ3NqRVVqTUVrNWNYWExxZkNPWGVsZzh1c0RnSk5LWk1DZ1lFQSt5M3oKNzRoQXIxdmhIVVB0SkY2bUJkNU9UcjhhYWNRMSsrcURkd295MkpkdEhXRFJQcy9vUEdqTkxLTHVaK0VzNjAvdwpRbjhMWVdKY1lTTWpEOHRVbjFvNjVXcGFuRmlmQmg2SW81dFlRYS9UZFdCbXd6S1ZpVTAwWnFwRHcwRzJkckhECkJseDFOZjdXV1I0RE9vRmg1UERLWU4xQytZMnRaNmlEZ2RGWnZuOENnWUVBNTlRWDFrdm5xQk5nWDQxTGxLa1cKM1ZDODF2NFVzRVpOU2dMNTRSNytRZXNja2xkOTJ5cTZOS3lFa3VkY1lPNHh1ZEgwM0dFcjR6RkV3bHRqZU1aRAp2c1RUdm52Q0o1NTlJMkVqd20zUXFiZkEraXJNUnBUcDNwR21wdFk0WWl0SUx3azJVZ2dGcnV4QVU1VWpBeXhrCnFRSCtURDNFMHAzcU1pUlk4NHhJN09jQ2dZQkVyMGN5Sy84TU5NSzFIdnI4NUFqZ04rOFA1NEFRaGdBQkdCckUKOVh2NzhFUjlNUmxtNUxGcnUzakhpUEpLWTYvRjFRRXRIZEo5MmNqTEl6R1dReEtyMUorZ1ZsbmF3UDBUVGt3cworUERFWFpFa1dxMGZHWGo4cDZqNW5mdVRyQ2Q2QTVnQjZFeUE3R091ME44dkkyd2lqNW0zclNtQVZqYWh6dG5QCktQRXlmd0tCZ0Q4L2dSaTQ4L3pzd2VmTzRNVDZqNTEwa1VTeE5SOFNHeTBsZTJOUmhrNkIvOW93MUx0WDdCK2sKUzhaZEU2QitaMmZNY2NWNDdUcXByd05qRmE4UkhzTExOMmVma0VoUTY2OUpHd3JFWTh3cktiN2RrS1FwMEhiawpQWlZZaEllRjFYZkxJcXJ2RVJENkhtQWRZRnQxSjBXV2FWSjlRYk9XSVp1ejJTc2dMM3oyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
  etcd-cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQrVENDQXVHZ0F3SUJBZ0lVVWMyTVZRQW9UOW0yK21PZzVwSTFTYmVTWVhRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lqRUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBoaAphVVJwWVc0eEN6QUpCZ05WQkFvVEFtSjNNUkV3RHdZRFZRUUxFd2htYjNJZ1pYUmpaREVQTUEwR0ExVUVBeE1HClpYUmpaR05oTUI0WERUSXhNRFl3TkRBek1EZ3dNRm9YRFRNeE1EWXdNakF6TURnd01Gb3dhekVMTUFrR0ExVUUKQmhNQ1EwNHhFREFPQmdOVkJBZ1RCMEpsYVVwcGJtY3hFREFPQmdOVkJBY1RCMEpsYVVwcGJtY3hGREFTQmdOVgpCQW9UQzNONWMzUmxiVHBsZEdOa01Rd3dDZ1lEVlFRTEV3TmpNbk14RkRBU0JnTlZCQU1UQzNONWMzUmxiVHBsCmRHTmtNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTU2dHpVNzZnVGNUanU5MWIKaC9KTDJxMnJtOEFwTDRyclRYOVRPQUxuc0NiYmJuYjNhanRxU1FkMmpRM2ZDdmRySERBYmsrNXpGcDM1Rzh5dQpRenlZRkRab3F0RTdETzU4SnJFeG1nUmhVTjQ1dmplek9zU2dZZGQzTXBNcm9ObUt4U0JxTS9GZGlPMUMrcnI5CjVqSzUvcTNJMVVQU3VtV1VWcjVQTlkxOUtBRnRxaGJSUzdDMThBNmhnT0FITXJyeWdKbmlyN24zTDQrbDcvMlcKemJwL0Nza05PbHZNR3QzM3llOGdJRzN2RGh3dlFJS3pZbU85Q1ArSFk0U2YwQTR5MmdXVVpwb3lBY1lxLzB3bwp5ZjBsdmhIMGlTS3daandxTVNkdDNIU1B3VVc2MHY1MmFGc3lQTGFHcHJnTjlKNGtLdW1uTEUxRXlDVGZITGgvCjREMjU3UUlEQVFBQm80R2RNSUdhTUE0R0ExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUYKQlFjREFRWUlLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBZEJnTlZIUTRFRmdRVS9odWhjZlZ0dVJrbApNYm9LQVFhZk5TK05ITDR3SHdZRFZSMGpCQmd3Rm9BVTFXb2w5eWxTc1o1a3BWVEdGS0FhVkxIRkdoWXdHd1lEClZSMFJCQlF3RW9jRXJCc0RCb2NFckJzREI0Y0VyQnNEQ0RBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVEvQVQKUmxhRGNabXZKVld4R2ZtUGlJZkVSVnhERzBwTVAvRjJTdmI1T3BwNHZzK1huWjBGSWVjVUdCbVdsbXJpZmY5TwpUT3NmREZjOERINVBvdFNSakFadW9vQXh4MmxtQlFJbjN2VzdUS3Z2TVh2dVBxYWx1ZGVqaHdtS0dJNUdKYmFBCnVla0ZMV0Z6R1pUaDF5VXdYUUcwNHhOd3F5Kzl2aCsvQW91UFIzc1hNNEtNRGlKd1Z3L2lLUnlLOFlHMXJxb3MKYk44cjAwWFEvTC90NWVPaTNyY3ZwbzZaNU5YMHhnOUhUbVorT3FudzZITlhOUHhWRURpTEZMN00zNzRoZG1ueApYTUM1bFZxVnMxSGNIL2Z3Umg2V3dMV3Y1bFZMUXY1UlNIT1F4OExiNGpRYU1QRFRsSlFsVkRzK3pZMHc2NHFpCnNvTldhb014a0h4VHRUZ2ZyUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  etcd-ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURsRENDQW55Z0F3SUJBZ0lVYk56d0hKa0VEVW9KNmpUVXZwcDZyMk4vZ3RFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lqRUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBoaAphVVJwWVc0eEN6QUpCZ05WQkFvVEFtSjNNUkV3RHdZRFZRUUxFd2htYjNJZ1pYUmpaREVQTUEwR0ExVUVBeE1HClpYUmpaR05oTUI0WERUSXhNRFl3TkRBeE5UVXdNRm9YRFRJMk1EWXdNekF4TlRVd01Gb3dZakVMTUFrR0ExVUUKQmhNQ1EwNHhFREFPQmdOVkJBZ1RCMEpsYVVwcGJtY3hFREFPQmdOVkJBY1RCMGhoYVVScFlXNHhDekFKQmdOVgpCQW9UQW1KM01SRXdEd1lEVlFRTEV3aG1iM0lnWlhSalpERVBNQTBHQTFVRUF4TUdaWFJqWkdOaE1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXBQUlcrNnZzbFZLVU5QM1BEeFMwRDg5RnpJbDkKUURoYmt2YnV3bVJocXU1WUYxL3h4MFI4am81Zkx4MithaVM0Z2UrMDBhZFFQRndhZitpMFFUWGpVQVUxclRVNQpndXc2R0RLVE0razVYUUdQa2l5ZytUNHhJWGxwaW1HK2dDd0hUR1FJVXROVk50UzdIa09ReWRxTGI3QzBnaXpMCmlGYjQ5NUpyMUJIem5hL1BwYy92WjZzOXV1Y1BQamM4RU5JWmloMG8yeGo0SHl2THVsdTAzLzU1V1I3STd3Q1oKNnU1KzE0cEN2bDUrS0pPVkNvdUF5S1JEbVVYTXIxcXE1YmFsdGlmVDBkVUZNU2tDckRYMmE4Qm90Mm9uczhsYQpnVEUrLzF4Q0JCU2R6NHllaXR5VUxldVJOZktpZU1kZXBJTDUvNVN2QklpN3lRQnFzeHZrdXNUUlFRSURBUUFCCm8wSXdRREFPQmdOVkhROEJBZjhFQkFNQ0FRWXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVUKMVdvbDl5bFNzWjVrcFZUR0ZLQWFWTEhGR2hZd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHZFU0ZFNyOG9tcwpIVlEvVVBIL0tyUlloeE02bjlaWDJ6elBqa05idGNpVlFKNExMOVRUTDVxN2c5eDc2Y2hVc2NlTjkxekt2L0R5CkJnNmZqT1FocDFJakFtV05jTXYwbHorNTVRZ3dIT1Y0bHIwejJHd3V5VDZna1BwRmxiVDNKS0ZmT2l3Z1gxVmkKTkxPMjdwYThoMUtlSnBpWkxxZnArd0NmYW5qT0YvdmEvSDhTcHNvTHJjb01BL2FUeFdaSzJnejBiQkVMeWVSZQo0QkRLdFJzeExLSytLZ1lDUUVNRWdsQi96bjg2MWI2U2RWVEtkQnY0eXo4bURHS3cwTHY2TFZ1eTZtR2tkdTl1Clc5NVVsMnVtaWJVTThhclBrdis2UjlHd1UwM0VqYkx0ZXBDcFZBRmY3TTYxWmd5b3ZTTDlMODV3U1RCMmxKZXIKV3l5TmF4aHJBblE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://172.27.3.6:2379,https://172.27.3.7:2379,https://172.27.3.8:2379"
  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

  typha_service_name: "none"

  calico_backend: "bird"

  veth_mtu: "0"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
      - get
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      - watch
      - list
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      - patch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: docker.io/calico/cni:v3.19.1
          imagePullPolicy: IfNotPresent
          command: ["/opt/cni/bin/install"]
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: docker.io/calico/pod2daemon-flexvol:v3.19.1
          imagePullPolicy: IfNotPresent
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.19.1
          imagePullPolicy: IfNotPresent
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: interface=em1
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
            - name: policysync
              mountPath: /var/run/nodeagent
            # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
            # parent directory.
            - name: sysfs
              mountPath: /sys/fs/
              # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
              # If the host is known to mount that filesystem already then Bidirectional can be omitted.
              mountPropagation: Bidirectional
            - name: cni-log-dir
              mountPath: /var/log/calico/cni
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: sysfs
          hostPath:
            path: /sys/fs/
            type: DirectoryOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Used to access CNI logs.
        - name: cni-log-dir
          hostPath:
            path: /var/log/calico/cni
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-kube-controllers
          image: docker.io/calico/kube-controllers:v3.19.1
          imagePullPolicy: IfNotPresent
          env:
            # The location of the etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          livenessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -l
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
            periodSeconds: 10
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0440

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
6.1.3 Verify the result
# Because calico-node runs as a DaemonSet, every node added to the cluster automatically gets a calico-node pod that provides CNI
[root@master1 manifests]# kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE  
calico-kube-controllers-58d58c568d-lrpvf   1/1     Running   0          18m   172.27.3.9    node1           
calico-node-bbthq                          1/1     Running   0          18m   172.27.3.9    node1           
calico-node-jkm46                          1/1     Running   0          18m   172.27.3.10   node2     

# Deploy a test Deployment with replicas: 2 to exercise the Calico network
[root@master1 manifests]# kubectl get po -owide
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE    
tt2-7c5dd7466b-xhzcg   1/1     Running   0          18m   10.244.166.129   node1   
tt2-7c5dd7466b-z77kh   1/1     Running   0          18m   10.244.104.2     node2   

# From the pod running on node1:
# 1. ping the IP of the pod running on node2
# 2. ping the IP of the node2 host itself
# If both succeed, cross-node networking is working
[root@master1 manifests]# kubectl exec -it tt2-7c5dd7466b-xhzcg -- /bin/sh
/ # ping 10.244.104.2
PING 10.244.104.2 (10.244.104.2): 56 data bytes
64 bytes from 10.244.104.2: seq=0 ttl=62 time=0.361 ms
/ # ping 172.27.3.10
PING 172.27.3.10 (172.27.3.10): 56 data bytes
64 bytes from 172.27.3.10: seq=0 ttl=63 time=0.278 ms
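
# As an extra sanity check beyond the ping test above (standard kubectl and iproute2 commands, not part of the original walkthrough):
# confirm one Ready calico-node pod per node
kubectl get ds calico-node -n kube-system
# on node1/node2, pod routes are expected to show up as "proto bird" via tunl0 while IPIP is enabled
ip route | grep -E 'bird|tunl0'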

6.2 Deploying Flannel (rarely used in production, suitable for test environments; documentation in progress...)

6.3 Deploying other CNIs (documentation in progress...)

7 Deploying add-ons (documentation in progress)

7.1 Deploying CoreDNS

7.1.1 Download the deployment repository from GitHub
# Clone the project with git
git clone https://github.com/coredns/deployment.git

# The repository provides tooling that generates the CoreDNS manifest
[root@master1 addons]# ls deployment-master/
charts  debian  docker  kubernetes  LICENSE  Makefile  README.md  systemd

# Use the tooling under the kubernetes/ directory; only the first two files are needed: the *.sed file is the template and deploy.sh renders it with your custom values
[root@master1 deployment-master]# cd kubernetes/
[root@master1 kubernetes]# ls
coredns.yaml.sed   deploy.sh  (corefile-tool FAQs.md  migration  README.md  rollback.sh  Scaling_CoreDNS.md  Upgrading_CoreDNS.md CoreDNS-k8s_version.md)  

7.1.2 Generate coredns.yaml with deploy.sh
# Render the template to produce the final manifest
mkdir -p /etc/kubernetes/coredns
mv deploy.sh coreDNS-gene.sh
# Before rendering, only the (renamed) script and the template are present
[root@master1 coredns]# ls
coreDNS-gene.sh  coredns.yaml.sed

# Run the renamed script; -i sets the cluster DNS Service IP
./coreDNS-gene.sh -i 10.96.0.10 > coredns.yaml
[root@master1 coredns]# ls
coreDNS-gene.sh  coredns.yaml  coredns.yaml.sed
7.1.3 The rendered configuration file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified in the upstream template; set to 2 here, matching what kubeadm deploys
  replicas: 2
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        # This registry is reachable without a proxy; the coredns image is only about 40 MB
        image: coredns/coredns:1.8.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  # This value is injected when the template is rendered; the clusterIP must fall inside the Service CIDR (10.96.0.0/12)
  # and must match the IP configured under clusterDNS in the kubelet configuration
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
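
For reference, the matching fragment of the kubelet configuration looks like the sketch below; the field names follow the standard KubeletConfiguration API, and the exact file path depends on how the kubelet was configured earlier in this guide:
# KubeletConfiguration fragment -- clusterDNS must equal the kube-dns Service clusterIP above
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local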
7.1.4 Test and verify CoreDNS
#!/bin/bash
# Launch an nginx Deployment plus a CentOS pod for testing
# Create nginx-dp.yaml
cat > nginx-dp.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 6
  labels:
    appname: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      appname: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        appname: nginx
    spec:
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: intellect
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    appname: nginx
  ports:
  - name: p3000
    protocol: TCP
    port: 86
    targetPort: 80
    nodePort: 63131
EOF

# Create the nginx Deployment and Service
kubectl apply -f nginx-dp.yaml

# Launch a CentOS pod named client to verify CoreDNS from inside the cluster
kubectl run client -it --image=centos:7 -- sleep 3600

# Attach to the client pod
[root@master1 ~]# kubectl exec -it client -- /bin/bash
[root@client /]# yum -y install bind-utils

# Verify that in-cluster Services can be resolved by CoreDNS; the test succeeds
[root@client /]# nslookup nginx-svc
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   nginx-svc.default.svc.cluster.local
Address: 10.96.188.120

[root@client /]# nslookup kubernetes
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1


# Verify that external domains resolve as well; the test succeeds
[root@client /]# nslookup www.baidu.com
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 110.242.68.3
Name:   www.a.shifen.com
Address: 110.242.68.4
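
Note that bare Service names only resolve within the pod's own namespace; a Service in another namespace must be queried by its fully qualified name, for example:
# Cross-namespace lookup by FQDN; this should return the kube-dns clusterIP, 10.96.0.10
nslookup kube-dns.kube-system.svc.cluster.local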

7.2 Deploying Metrics-Server

7.2.1 Download the manifest from GitHub
# Create a dedicated directory for the Metrics-Server manifest
mkdir -p /etc/kubernetes/metrics-server
cd /etc/kubernetes/metrics-server
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
7.2.2 Generate the certificate
// >> /etc/kubernetes/pki/csr/ms-csr.json
{
  "CN": "metrics-server",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "HaiDian",
      "O": "bw",
      "OU": "for ms"
    }
  ]
}

// >> Generate the certificate and key
cfssl gencert -ca=apiext-ca.pem -ca-key=apiext-ca-key.pem -config=ca-config.json -profile=bw ms-csr.json | cfssljson -bare ms
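
The resulting ms.pem and ms-key.pem must end up under /etc/kubernetes/pki on every master, because the apiserver flags added in the next step reference them there. A minimal sketch, assuming the files were generated in the csr directory as in Appendix A:
mv ms.pem ms-key.pem /etc/kubernetes/pki/
for ip in 172.27.3.7 172.27.3.8; do
  scp /etc/kubernetes/pki/ms.pem /etc/kubernetes/pki/ms-key.pem $ip:/etc/kubernetes/pki/
done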
7.2.3 Add the metrics-server certificate to the apiserver configuration
# Add the following two lines to the apiserver config on all three masters, and be sure to place them before --requestheader-allowed-names=aggregator,metrics-server
--proxy-client-cert-file=/etc/kubernetes/pki/ms.pem \
--proxy-client-key-file=/etc/kubernetes/pki/ms-key.pem \
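
The apiserver on each master then has to be restarted for the new flags to take effect. A sketch, assuming the systemd unit is named kube-apiserver as in the earlier service setup:
# Run on every master after editing the apiserver config
systemctl daemon-reload && systemctl restart kube-apiserver
systemctl status kube-apiserver --no-pager | head -n 5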
7.2.4 The final manifest
# >> /etc/kubernetes/manifests/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: True
      containers:
      - command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-use-node-status-port
        image: vavikast/metrics-server
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
7.2.5 Test and verify Metrics-Server
# Check node metrics
[root@master1 manifests]# kubectl top nodes
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master1   99m          4%     1676Mi          44%
master2   107m         5%     1138Mi          29%
master3   116m         5%     1199Mi          31%
node1     77m          3%     768Mi           20%
node2     68m          3%     761Mi           41%

# Check pod metrics
[root@master1 manifests]# kubectl top pod -A
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
kube-system   calico-kube-controllers-58d58c568d-k626h   2m           26Mi
kube-system   calico-node-4gsrw                          24m          101Mi
kube-system   calico-node-hqmv4                          22m          97Mi
kube-system   calico-node-jn9ls                          20m          97Mi
kube-system   calico-node-t9drl                          21m          96Mi
kube-system   calico-node-zv6hg                          22m          103Mi
kube-system   coredns-6ff445f54-hvfkn                    2m           20Mi
kube-system   coredns-6ff445f54-mczlt                    2m           17Mi
kube-system   metrics-server-7dd767658d-4qx95            2m           14Mi
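
# If kubectl top returns errors instead, first confirm that the aggregated metrics API registered correctly
# (the AVAILABLE column should read True) and that the metrics-server pod is Running:
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get po -n kube-system -l k8s-app=metrics-server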

7.3 Deploying the Dashboard

7.3.1 Download the manifest from GitHub
wget -O /etc/kubernetes/manifests/dashboard.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
7.3.2 Modify the manifest
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # Change the Service to type NodePort and pin the node port
  type: NodePort
  ports:
    - port: 443
      nodePort: 60443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
7.3.3 Apply the Dashboard manifest
kubectl apply -f dashboard.yaml
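
# Before generating a token, confirm the Dashboard pods are Running and the Service is exposed on the chosen NodePort (60443):
kubectl get po,svc -n kubernetes-dashboard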
7.3.4 Generate the login token
# > Create the file /etc/kubernetes/manifests/dashboard/admin-user.yaml
cat > /etc/kubernetes/manifests/dashboard/admin-user.yaml <<-EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

kubectl apply -f admin-user.yaml

# Retrieve the token with the command below (taken straight from the official docs; no need to dig into why it works)

[root@master1 dashboard]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
...
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9UcENsenRvRUJ2dml0RWhwb1NubURpSnhKOWQ3LXlULVZITEZnbFp6V0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXpnanJxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzZmEzMDZhOC04NjRlLTRjNDYtOWMzMi0zNDU0NzRhNTY1OWIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.cqeXEH7QxB1CqgWwAbbmSjf6dOUkb9F4jh3HRs8pMcpSge9SXJ-WAQF4WS_YzyHFwkyOuQm5Zzxol30kEbYTnv8P5eWyrWLiBgMIVKrDlXXJigZxwVLf46qkTaQKVl26lpsuBXj9qDlMCxP5tGTNP_RFKVHqECbhVtkvsREtX4TEuHmnO83RtdE9Aac80ukOPk7e6daoBz_5jJ7czT5niwWHDBMhYFPzabYCMpSlN8nxVBgYvSjGa8SEzky24k-bJRU92G3gEDDRzBesquv3H3kPeo7YciZPWdzxfyUKzCNREv7F4cyvUz-zB5DkUKMqVLpHZXSlkX25dOAdi7GFiQ
7.3.5 Open a browser and log in to the Dashboard UI
# Open in a browser
https://172.27.3.6:60443/

# Paste the token obtained above to start using the Dashboard

Appendix A: How every certificate is generated

A.0 Preface

All of the following operations take place in two directories: /etc/kubernetes/pki/csr and /etc/kubernetes/pki

  • All *-csr.json files live under /etc/kubernetes/pki/csr
  • All .pem files live under /etc/kubernetes/pki

A.1 Writing ca-config.json

/* >>ca-config.json
	1. There are normally two top-level keys: signing and profiles
	2. profiles can contain any number of sub-keys; each profile can set its own expiry and usages
	3. Below is a standard example that defines 10-year certificates
	4. In a production config, delete all of these comments (they are not valid JSON)
*/
{
  "signing": {
    "default": {
      "expiry": "87600h"     // the key setting: 87600h means a 10-year certificate
    },
    "profiles": {
      "bdqx": {                  // profile for certificates that keep signing capability (CA-style)
         "expiry": "87600h",     // each profile can define its own expiry
         "usages": [
            "signing",           // the issued certificate may itself sign and issue further certificates
            "key encipherment",  // key encipherment
            "server auth",       // the certificate can identify a TLS server
            "client auth"        // the certificate can identify a TLS client
         ]
      },
      "ky": {                    // profile for ordinary certificates requested from the CA
         "expiry": "87600h",     // each profile can define its own expiry
         "usages": [
            "key encipherment",  // no "signing" usage, so these certificates cannot issue further certificates
            "server auth",       // the certificate can identify a TLS server
            "client auth"        // the certificate can identify a TLS client
         ]
      }
    }
  }
}

A.2 Creating the etcd certificates

A.2.1 Writing etcd-ca-csr.json (creates the etcd CA)
/* >> etcd-ca-csr.json
	1. Only the CN and O fields really matter; the rest is boilerplate
	2. CN (Common Name) maps to the User field when Kubernetes evaluates RBAC bindings
	3. O (Organization) maps to the Group field when Kubernetes evaluates RBAC bindings
*/
{
"CN": "etcdca",     //重点
"key": {
    "algo": "rsa",
    "size": 2048
},
"names": [
       {
        "C": "CN",
        "ST": "BeiJing",
        "L": "HaiDian",
        "O": "kygroup",   //重点
        "OU": "etcdca"
        }
     ]
}
A.2.2 Writing etcd-server-csr.json (the certificate for etcd's client port 2379)
/* >> etcd-server-csr.json
	1. Leave plenty of headroom in hosts: if more etcd nodes may be added later, reserve their IPs now (e.g. plan for 5 nodes)
	2. Reserving unused IPs does not affect the certificate, so include every address you can foresee needing
*/
{
    "CN": "etcd-server",
    "hosts": [            
    "192.168.140.11",              
    "192.168.140.12", 
    "192.168.140.13", 
    "192.168.140.14",  //目前没有,提前占位
    "192.168.140.15"   //目前没有,提前占位
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
        	"L": "BeiJing",
       		"O": "kygroup",           
        	"OU": "etcd-server"
        }
    ]
}
A.2.3 Writing etcd-peer-csr.json (the certificate for etcd's peer port 2380)
{
    "CN": "etcd-peer",
    "hosts": [            
    "192.168.140.11",              
    "192.168.140.12", 
    "192.168.140.13", 
    "192.168.140.14",
    "192.168.140.15" 
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
        	"L": "BeiJing",
       		"O": "kygroup",           
        	"OU": "etcd-peer"
        }
    ]
}
A.2.4 Generate the etcd CA and its certificates
# etcd needs its own CA, which then issues two TLS certificate pairs named etcd-server and etcd-peer

# Create the etcd CA
cd /etc/kubernetes/pki/csr
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca

# This command produces three files:
# etcd-ca.csr
# etcd-ca-key.pem
# etcd-ca.pem
# Only the *.pem files are needed; the .csr can be discarded
# etcd-ca.pem is the CA certificate and etcd-ca-key.pem is the CA private key

# Use the etcd CA to issue the etcd-server and etcd-peer certificate/key pairs
# -profile=ky is used here; bdqx would also work, but ky ensures the issued certificates cannot themselves sign further certificates

cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=ky etcd-server-csr.json | cfssljson -bare etcd-server

cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=ky etcd-peer-csr.json | cfssljson -bare etcd-peer

# Move the generated certificates and keys to the parent directory
mv *.pem ..

# Check the result
[root@master1 pki]# ll
total 24
drwxr-xr-x 2 root root  106 May 27 14:43 csr
-rw------- 1 root root 1675 May 27 14:09 etcd-ca-key.pem
-rw-r--r-- 1 root root 1310 May 27 14:09 etcd-ca.pem
-rw------- 1 root root 1679 May 27 14:41 etcd-peer-key.pem
-rw-r--r-- 1 root root 1460 May 27 14:41 etcd-peer.pem
-rw------- 1 root root 1675 May 27 14:40 etcd-server-key.pem
-rw-r--r-- 1 root root 1464 May 27 14:40 etcd-server.pem
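
# To double-check that the 10-year validity configured in ca-config.json actually made it into the issued certificates,
# the dates and subject can be inspected with openssl:
openssl x509 -noout -dates -subject -in etcd-server.pem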

A.3 Creating the Kubernetes certificates

A.3.1 Writing ca-csr.json (the Kubernetes cluster CA)
{
"CN": "kubernetes",
"key": {
    "algo": "rsa",
    "size": 2048
},
"names": [
       {
        "C": "CN",
        "ST": "BeiJing",
        "L": "HaiDian",
        "O": "kygroup",
        "OU": "kubernetesCA"
        }
     ]
}
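
Generating the cluster CA itself follows the same pattern as the etcd CA in A.2.4; the output names (ca.pem / ca-key.pem) match what the apiserver and controller-manager configs in Appendix B reference:
cd /etc/kubernetes/pki/csr
cfssl gencert -initca ca-csr.json | cfssljson -bare ca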

The remaining CSR files to prepare are:

  • api-csr.json
  • ctrl-mgr-csr.json
  • sched-csr.json
  • kb-py-csr.json

A.3.2 Writing api-csr.json
/* >>api-csr.json
	1. Fill hosts generously: reserve the IPs of any nodes you might add later, so the certificate does not have to be regenerated
*/
{
    "CN": "kubernetes",
    "hosts": [
      "10.96.0.1",
      "10.96.0.2",
      "10.96.0.10",
      "127.0.0.1",
      "192.168.140.11",
      "192.168.140.12",
      "192.168.140.13",
      "192.168.140.14",
      "192.168.140.15",
      "192.168.140.21",   
      "192.168.140.22", 
      "192.168.140.23",
      "kubernetes",       
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Haidian",
            "ST": "Beijing",
            "O": "kygroup",
            "OU": "apiserver"
        }
    ]
}
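
Issuing the apiserver certificate then uses the ky profile, as in A.2.4. The output name apis is an assumption here, chosen so that the resulting apis.pem / apis-key.pem match the file names referenced by the apiserver config in Appendix B.4:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=ky api-csr.json | cfssljson -bare apis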
A.3.3 Writing kb-py-csr.json
{
  "CN": "system:kube-proxy",  //必须?
    
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    { 
      "C": "CN",
      "L": "HaiDian",
      "ST": "BeiJing",
      "O": "kygroup",
      "OU": "kube-proxy"
    }
  ]
}
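
The kube-proxy client certificate is issued the same way; the output name kube-proxy below is an assumption, so adjust it to whatever name the kube-proxy kubeconfig step expects:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=ky kb-py-csr.json | cfssljson -bare kube-proxy
# afterwards, move the new *.pem files up to /etc/kubernetes/pki, as in A.2.4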

Appendix B: All configuration files in detail

B.1 etcd configuration files

// >> /etc/kubernetes/manifest/etcd1.yaml
---
name: 'etcd1'
data-dir: /var/lib/kubernetes/etcd
wal-dir: /var/lib/kubernetes/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.140.11:2380'
listen-client-urls: 'https://192.168.140.11:2379,http://127.0.0.1:2379'
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.140.11:2380'
advertise-client-urls: 'https://192.168.140.11:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://192.168.140.11:2380,etcd2=https://192.168.140.12:2380,etcd3=https://192.168.140.13:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-server.pem'
  key-file: '/etc/kubernetes/pki/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-peer.pem'
  key-file: '/etc/kubernetes/pki/etcd-peer-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false

---


// >> /etc/kubernetes/manifest/etcd2.yaml
name: 'etcd2'
data-dir: '/var/lib/kubernetes/etcd'
wal-dir: '/var/lib/kubernetes/etcd/wal'
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.140.12:2380'
listen-client-urls: 'https://192.168.140.12:2379,http://127.0.0.1:2379'
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.140.12:2380'
advertise-client-urls: 'https://192.168.140.12:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://192.168.140.11:2380,etcd2=https://192.168.140.12:2380,etcd3=https://192.168.140.13:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-server.pem'
  key-file: '/etc/kubernetes/pki/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-peer.pem'
  key-file: '/etc/kubernetes/pki/etcd-peer-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false

---


// >> /etc/kubernetes/manifest/etcd3.yaml
name: 'etcd3'
data-dir: '/var/lib/kubernetes/etcd'
wal-dir: '/var/lib/kubernetes/etcd/wal'
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.140.13:2380'
listen-client-urls: 'https://192.168.140.13:2379,http://127.0.0.1:2379'
max-snapshots: 5
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.140.13:2380'
advertise-client-urls: 'https://192.168.140.13:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://192.168.140.11:2380,etcd2=https://192.168.140.12:2380,etcd3=https://192.168.140.13:2380'
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-server.pem'
  key-file: '/etc/kubernetes/pki/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd-peer.pem'
  key-file: '/etc/kubernetes/pki/etcd-peer-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd-ca.pem'
  auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false
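
With these three files in place and etcd running on all masters, cluster health can be verified from any node that has etcdctl; a sketch assuming the v3 API and the certificate paths used above:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.140.11:2379,https://192.168.140.12:2379,https://192.168.140.13:2379 \
  --cacert=/etc/kubernetes/pki/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd-server.pem \
  --key=/etc/kubernetes/pki/etcd-server-key.pem \
  endpoint health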

B.2 haproxy configuration file

# >> haproxy.cfg, identical on all three master nodes
global
    log         127.0.0.1 local0 err
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     10000
    daemon

defaults
    mode                    http
    log                     global
    option                  httplog
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s

frontend  k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server  master1 192.168.140.11:6443 check
    server  master2 192.168.140.12:6443 check
    server  master3 192.168.140.13:6443 check
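
# A quick check that haproxy is listening on 8443 and that an apiserver is reachable through it
# (the /version endpoint is served to anonymous clients unless anonymous auth has been disabled):
ss -lntp | grep 8443
curl -k https://127.0.0.1:8443/version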

B.3 keepalived configuration file

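No keepalived configuration was captured here; the sketch below is a minimal illustrative example only, assuming the haproxy frontends above and the em1 interface used elsewhere in this guide. The VIP (192.168.140.10), router ID, priorities and password are placeholders that must be replaced with the values used in your environment.

# >> /etc/keepalived/keepalived.conf (illustrative only)
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other two masters
    interface em1
    virtual_router_id 51
    priority 100            # lower values (e.g. 90, 80) on the other masters
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha
    }
    virtual_ipaddress {
        192.168.140.10      # placeholder VIP
    }
    track_script {
        chk_haproxy
    }
}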

B.4 apiserver configuration files

# >> master1: /etc/kubernetes/manifest/apiserver.cfg
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/apiserver \
--etcd-servers=https://192.168.140.11:2379,https://192.168.140.12:2379,https://192.168.140.13:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.140.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/pki/token.csv \
--service-node-port-range=60000-65000 \
--kubelet-client-certificate=/etc/kubernetes/pki/apis.pem \
--kubelet-client-key=/etc/kubernetes/pki/apis-key.pem \
--tls-cert-file=/etc/kubernetes/pki/apis.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apis-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-server.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-server-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiaggr-ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/usr/local/kubernetes/server/logs/k8s-audit.log"


# >> master2: /etc/kubernetes/manifest/apiserver.cfg
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/apiserver \
--etcd-servers=https://192.168.140.11:2379,https://192.168.140.12:2379,https://192.168.140.13:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.140.12 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/pki/token.csv \
--service-node-port-range=60000-65000 \
--kubelet-client-certificate=/etc/kubernetes/pki/apis.pem \
--kubelet-client-key=/etc/kubernetes/pki/apis-key.pem \
--tls-cert-file=/etc/kubernetes/pki/apis.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apis-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-server.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-server-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiaggr-ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/usr/local/kubernetes/server/logs/k8s-audit.log"


# >> master3: /etc/kubernetes/manifest/apiserver.cfg
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/apiserver \
--etcd-servers=https://192.168.140.11:2379,https://192.168.140.12:2379,https://192.168.140.13:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.140.13 \
--allow-privileged=true \
--service-cluster-ip-range=10.96.0.0/12 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/pki/token.csv \
--service-node-port-range=60000-65000 \
--kubelet-client-certificate=/etc/kubernetes/pki/apis.pem \
--kubelet-client-key=/etc/kubernetes/pki/apis-key.pem \
--tls-cert-file=/etc/kubernetes/pki/apis.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apis-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
--etcd-cafile=/etc/kubernetes/pki/etcd-ca.pem \
--etcd-certfile=/etc/kubernetes/pki/etcd-server.pem \
--etcd-keyfile=/etc/kubernetes/pki/etcd-server-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiaggr-ca.pem \
--requestheader-allowed-names=aggregator,metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/usr/local/kubernetes/server/logs/k8s-audit.log"
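
The three apiserver.cfg files above differ only in --advertise-address. A minimal sketch of a systemd unit that consumes such a file is shown below; the unit path and the binary location /usr/local/kubernetes/server/bin/kube-apiserver are assumptions and must match the layout used when the binaries were installed.

# >> sketch: /usr/lib/systemd/system/kube-apiserver.service (binary path is an assumption)
[Unit]
Description=Kubernetes API Server
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/manifest/apiserver.cfg
ExecStart=/usr/local/kubernetes/server/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target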

B.5 controller-manager configuration file

# >> common to all three masters: /etc/kubernetes/manifest/ctrl-mgr.cfg
# (--kubeconfig=/etc/kubernetes/ctrl-mgr.kubeconfig can be used instead of --master=127.0.0.1:8080)
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes/controller-manager \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.96.0.0/12 \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem  \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/pki/apiaggr-ca.pem \
--node-cidr-mask-size=24 \
--experimental-cluster-signing-duration=87600h0m0s"
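
With --leader-elect=true only one of the three controller-manager instances is active at a time; with the default lock settings in v1.18 the lock is recorded on an Endpoints object in kube-system, so the current holder can be checked from any machine with a working kubeconfig:

# the holderIdentity field in the leader annotation names the active master
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity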

B.6 scheduler configuration file

# >> common to all three masters: /etc/kubernetes/manifest/scheduler.cfg
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
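
Once controller-manager and scheduler are running on all three masters, a quick sanity check (the componentstatuses API is still served in v1.18) is:

# scheduler, controller-manager and the three etcd members should all report Healthy
kubectl get componentstatuses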
