Binary Installation of a Highly Available Kubernetes v1.19.10 Cluster

This article is intended to help colleagues who struggle with binary installation complete a deployment in a production or test environment.

1. Environment Preparation

This deployment uses 5 virtual machines; in a test or production environment, etcd should be deployed on three dedicated machines.

Hostname        IP address         Components
k8s-master01    192.168.124.35     kube-apiserver, kube-controller-manager, kube-scheduler, etcd, keepalived + haproxy
k8s-master02    192.168.124.36     kube-apiserver, kube-controller-manager, kube-scheduler, etcd, keepalived + haproxy
k8s-master03    192.168.124.39     kube-apiserver, kube-controller-manager, kube-scheduler, etcd, keepalived + haproxy
k8s-node01      192.168.124.37     kubelet, kube-proxy, docker
k8s-node02      192.168.124.38     kubelet, kube-proxy, docker
k8s-VIP         192.168.124.45     cluster VIP
pod CIDR        10.244.0.0/16      Kubernetes pod network
service CIDR    10.96.0.0/16       Kubernetes service network
registry        114.115.223.243    private registry (image repository)

1.1 Component Versions

Component        Version
CentOS           7.6
Linux kernel     kernel-ml-4.19.9-1.el7
kubernetes       1.19.10
calico           3.15.1
etcd             3.4.12
docker           19.03.9
dashboard        2.0.3
coredns          1.7.0

1.2 System Initialization

Initialize the system on all machines.

# Set the hostnames
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

# Install required tools
yum -y install  jq psmisc telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl  ntpdate bash-completion

# Disable the firewall
systemctl stop firewalld
systemctl disable --now firewalld

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

# Disable NetworkManager
systemctl disable --now NetworkManager

# Enable time synchronization. Aliyun's NTP server is used here; in production you can use an existing time source or the system's built-in chrony (see the sketch after this block)
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate ntp.aliyun.com

# Add a cron job to sync the time periodically
crontab -e
10 * * * * /usr/sbin/ntpdate <NTP server IP or domain>  >> /root/ntpdate.log 2>&1
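
If you prefer chrony over the ntpdate cron job, a minimal sketch is shown below (the chrony package name and the Aliyun NTP server are assumptions, not part of the original steps):

# Install and configure chrony as an alternative to ntpdate
yum -y install chrony
cat > /etc/chrony.conf <<"EOF"
server ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF
systemctl enable --now chronyd
chronyc sources -v     # verify the time source is reachable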

# Configure ulimit
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

# Add Kubernetes kernel parameters
modprobe br_netfilter       # load the bridge netfilter module first; otherwise some systems fail to apply the net.bridge sysctls below

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply the settings
sysctl --system
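
A quick spot check (optional, not in the original) confirms the key parameters took effect:

# The values printed should match /etc/sysctl.d/k8s.conf
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.core.somaxconn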

# Add hosts entries
cat >> /etc/hosts <<EOF
192.168.124.35 k8s-master01
192.168.124.36 k8s-master02
192.168.124.39 k8s-master03
192.168.124.37 k8s-node01
192.168.124.38 k8s-node02
192.168.124.45 k8s-LB-VIP
114.115.223.243 registry
EOF

# Upgrade the Linux kernel. The default CentOS 7.6 kernel (3.10) is too old and may be flagged by kernel vulnerability scans; a newer kernel also fits recent Kubernetes releases better and reduces the chance of component errors
rpm -ivh kernel-ml-4.19.9-1.el7.elrepo.x86_64.rpm
rpm -ivh kernel-ml-devel-4.19.9-1.el7.elrepo.x86_64.rpm
# Boot with the new kernel by default
grub2-set-default 0 && grub2-mkconfig  -o  /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
# Reboot
reboot

# Install ipvsadm and related tools
yum -y install ipvsadm ipset sysstat conntrack libseccomp 

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl enable --now systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
# Expected output:
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

1.3 Configure Passwordless SSH

On master01, configure passwordless SSH login to the other machines.

ssh-keygen -t rsa     # press Enter at every prompt

ssh-copy-id  192.168.124.36

ssh-copy-id  192.168.124.39

ssh-copy-id  192.168.124.37

ssh-copy-id  192.168.124.38

2. Install Docker on the Node Machines

Docker download: wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

tar -zxf docker-19.03.9.tgz
mv docker/* /usr/bin/

2.1 Create the Docker systemd Unit

cat > /usr/lib/systemd/system/docker.service << "EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

2.2 Create the Docker Configuration File

# Data directory
# data-root: /opt/docker/ is used as the Docker data directory; if omitted, it defaults to /var/lib/docker
# Create the directories
mkdir /etc/docker
mkdir /opt/docker/
cat > /etc/docker/daemon.json << EOF
{
  
          "insecure-registries":["registry:5000"],
          "data-root":"/opt/docker/",
          "exec-opts": ["native.cgroupdriver=systemd"],
          "max-concurrent-downloads": 10,
          "max-concurrent-uploads": 5,
          "log-opts": {
          "max-size": "200m",
          "max-file": "3"
          },
          "live-restore": true
}
EOF

Parameter explanation:

insecure-registries: the image registry address; all images used in this deployment are pulled from it

exec-opts: newer kubelet versions recommend the systemd cgroup driver, so Docker's cgroup driver is set to systemd

max-concurrent-downloads: 10 means up to 10 concurrent threads pull images when Pods start

max-concurrent-uploads: 5 means up to 5 concurrent threads push images

log-opts max-size: 200m means a container log file is rotated once it reaches 200 MB, preventing unbounded growth

log-opts max-file: 3 means at most three rotated log files are kept per container

live-restore: true means running containers (and the Pods backed by them) are not affected when the Docker daemon restarts

2.3 Start Docker

systemctl daemon-reload && systemctl start docker.service
systemctl enable docker.service
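
Optionally (not part of the original steps), verify that the daemon picked up the settings from daemon.json:

# Cgroup driver should be systemd and the root dir should be /opt/docker
docker info | grep -iE 'cgroup driver|docker root dir'
docker info --format '{{.LiveRestoreEnabled}}'     # should print true if live-restore took effect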

3. Component Download Links

# etcd download
wget https://github.com/etcd-io/etcd/releases/download/v3.4.12/etcd-v3.4.12-linux-amd64.tar.gz

# Kubernetes download
GitHub binary package download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md
wget https://dl.k8s.io/v1.19.10/kubernetes-server-linux-amd64.tar.gz

# cfssl download
GitHub binary package download page: https://github.com/cloudflare/cfssl/releases
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
# or
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

4. Extract the Packages

Run on master01.

# Extract the Kubernetes package
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Extract the etcd package
 tar -zxf etcd-v3.4.12-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.12-linux-amd64/etcd{,ctl}
 
# Verify
ls /usr/local/bin/
etcd  etcdctl  kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

4.1 Distribute the Binaries to the Other Nodes

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp -r /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp -r /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp -r /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done

5. Install cfssl and Create Certificate Directories

Run on master01.

chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
# Create on all etcd nodes
mkdir -p /etc/etcd/ssl  
# Create this directory on all Kubernetes machines
mkdir -p /etc/kubernetes/pki

5.1 Generate the etcd Certificates

# Enter the pre-downloaded certificate directory and create the certificates
cd k8s-ha-install/pki/
# Generate the CA certificate and key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.124.35,192.168.124.36,192.168.124.39 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

5.2 Distribute the etcd Certificates to the Other Two Nodes

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
     ssh $NODE "mkdir -p /etc/etcd/ssl"
     for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do
       scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
     done
 done

5.3 Generate the apiserver Certificates

cd k8s-ha-install/pki/
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

cfssl gencert -ca=/etc/kubernetes/pki/ca.pem  -ca-key=/etc/kubernetes/pki/ca-key.pem  -config=ca-config.json   -hostname=10.96.0.1,192.168.124.45,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.124.35,192.168.124.36,192.168.124.39 -profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# Generate the apiserver aggregation (front-proxy) certificate; warnings can be ignored
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem  -config=ca-config.json -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

5.4 Generate the kube-controller-manager Certificates

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.124.45:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

5.5 Generate the kube-scheduler Certificates

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.124.45:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

5.6 Generate the admin Certificates

cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem  --embed-certs=true  --server=https://192.168.124.45:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin  --client-certificate=/etc/kubernetes/pki/admin.pem  --client-key=/etc/kubernetes/pki/admin-key.pem  --embed-certs=true     --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes  --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

5.7 Create the ServiceAccount Key

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

5.8 Distribute the Certificates to the Other Master Nodes

for NODE in k8s-master02 k8s-master03; do 
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do 
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done; 
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do 
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done

# 23 files in total means everything is in place
ls /etc/kubernetes/pki/|wc -l
23

6. Deploy etcd

The etcd configuration file is the same on every node except for the IPs. If you use three dedicated etcd machines, change name and initial-cluster to the hostnames of those machines.

cat > /etc/etcd/etcd.config.yml <<EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.124.35:2380'
listen-client-urls: 'https://192.168.124.35:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.124.35:2380'
advertise-client-urls: 'https://192.168.124.35:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.124.35:2380,k8s-master02=https://192.168.124.36:2380,k8s-master03=https://192.168.124.39:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

6.1 Master02 Configuration

cat > /etc/etcd/etcd.config.yml <<EOF
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.124.36:2380'
listen-client-urls: 'https://192.168.124.36:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.124.36:2380'
advertise-client-urls: 'https://192.168.124.36:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.124.35:2380,k8s-master02=https://192.168.124.36:2380,k8s-master03=https://192.168.124.39:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

6.2 Master03 Configuration

cat > /etc/etcd/etcd.config.yml <<EOF
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.124.39:2380'
listen-client-urls: 'https://192.168.124.39:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.124.39:2380'
advertise-client-urls: 'https://192.168.124.39:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.124.35:2380,k8s-master02=https://192.168.124.36:2380,k8s-master03=https://192.168.124.39:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

6.3 Create the etcd systemd Unit

cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

6.4 Start etcd

# Create the etcd certificate directory
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload && systemctl start etcd     # start etcd on all three nodes at roughly the same time, otherwise it will report errors
systemctl enable etcd

# Check the etcd status; run on all three nodes
export ETCDCTL_API=3
etcdctl --endpoints="192.168.124.39:2379,192.168.124.36:2379,192.168.124.35:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table

# Expected output: one member is the leader and the other two are followers; if the leader goes down, a new leader is elected automatically
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.124.39:2379 | 434e922772b8f129 |  3.4.12 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.124.36:2379 | 7731131bb5ae1d44 |  3.4.12 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.124.35:2379 | 9583a15d8e19eeb1 |  3.4.12 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
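
As an extra check (not in the original steps), endpoint health should report all three members as healthy:

etcdctl --endpoints="192.168.124.35:2379,192.168.124.36:2379,192.168.124.39:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health --write-out=table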

7. Install keepalived + haproxy

Install on master01, master02 and master03; the haproxy configuration is identical on all three.

yum -y install keepalived haproxy 

7.1 Configure haproxy

cat >/etc/haproxy/haproxy.cfg<<"EOF"
global
 maxconn 2000
 ulimit-n 16384
 log 127.0.0.1 local0 err
 stats timeout 30s

defaults
 log global
 mode http
 option httplog
 timeout connect 5000
 timeout client 50000
 timeout server 50000
 timeout http-request 15s
 timeout http-keep-alive 15s


frontend monitor-in
 bind *:33305
 mode http
 option httplog
 monitor-uri /monitor

frontend k8s-master
 bind 0.0.0.0:8443
 bind 127.0.0.1:8443
 mode tcp
 option tcplog
 tcp-request inspect-delay 5s
 default_backend k8s-master


backend k8s-master
 mode tcp
 option tcplog
 option tcp-check
 balance roundrobin
 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
 server  master01  192.168.124.35:6443 check
 server  master02  192.168.124.36:6443 check
 server  master03  192.168.124.39:6443 check
EOF

7.2 Master01 keepalived Configuration

Note: change the interface name (ens32 here) to match your own NIC.

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    mcast_src_ip 192.168.124.35
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.124.45
    }
    track_script {
      chk_apiserver 
} }
EOF

7.3 Master02 keepalived Configuration

Note: change the interface name to match your own NIC.

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
 
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    mcast_src_ip 192.168.124.36
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.124.45
    }
    track_script {
      chk_apiserver 
} }
EOF

7.4 Master03 keepalived Configuration

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
 
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    mcast_src_ip 192.168.124.39
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.124.45
    }
    track_script {
      chk_apiserver 
} }
EOF

7.5 Health Check Script

Identical on all three masters.

cat > /etc/keepalived/check_apiserver.sh <<"EOF"
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh

# Start the keepalived and haproxy services
systemctl daemon-reload && systemctl start haproxy.service
systemctl start keepalived

# Test VIP availability
ping  192.168.124.45 

telnet 192.168.124.45 8443
Trying 192.168.124.45...
Connected to 192.168.124.45.
Escape character is '^]'.
Connection closed by foreign host
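
To see which master currently holds the VIP (an optional check; ens32 is the interface name used in the keepalived configuration above):

# The VIP should be bound on exactly one master at a time
ip addr show ens32 | grep 192.168.124.45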

8. Deploy kube-apiserver

8.1 Master01 apiserver Configuration

cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2  \
--logtostderr=true  \
--allow-privileged=true  \
--bind-address=0.0.0.0  \
--secure-port=6443  \
--insecure-port=0  \
--advertise-address=192.168.124.35 \
--service-cluster-ip-range=10.96.0.0/16  \
--service-node-port-range=30000-32767  \
--etcd-servers=https://192.168.124.35:2379,https://192.168.124.36:2379,https://192.168.124.39:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
--etcd-certfile=/etc/etcd/ssl/etcd.pem  \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
--client-ca-file=/etc/kubernetes/pki/ca.pem  \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
--service-account-key-file=/etc/kubernetes/pki/sa.pub  \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
--authorization-mode=Node,RBAC  \
--enable-bootstrap-token-auth=true  \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
--requestheader-allowed-names=aggregator  \
--requestheader-group-headers=X-Remote-Group  \
--requestheader-extra-headers-prefix=X-Remote-Extra-  \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

8.2 Master02 apiserver Configuration

cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2  \
--logtostderr=true  \
--allow-privileged=true  \
--bind-address=0.0.0.0  \
--secure-port=6443  \
--insecure-port=0  \
--advertise-address=192.168.124.36 \
--service-cluster-ip-range=10.96.0.0/16  \
--service-node-port-range=30000-32767  \
--etcd-servers=https://192.168.124.35:2379,https://192.168.124.36:2379,https://192.168.124.39:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
--etcd-certfile=/etc/etcd/ssl/etcd.pem  \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
--client-ca-file=/etc/kubernetes/pki/ca.pem  \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
--service-account-key-file=/etc/kubernetes/pki/sa.pub  \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
--authorization-mode=Node,RBAC  \
--enable-bootstrap-token-auth=true  \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
--requestheader-allowed-names=aggregator  \
--requestheader-group-headers=X-Remote-Group  \
--requestheader-extra-headers-prefix=X-Remote-Extra-  \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

8.3 Master03 apiserver Configuration

cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2  \
--logtostderr=true  \
--allow-privileged=true  \
--bind-address=0.0.0.0  \
--secure-port=6443  \
--insecure-port=0  \
--advertise-address=192.168.124.39 \
--service-cluster-ip-range=10.96.0.0/16  \
--service-node-port-range=30000-32767  \
--etcd-servers=https://192.168.124.35:2379,https://192.168.124.36:2379,https://192.168.124.39:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
--etcd-certfile=/etc/etcd/ssl/etcd.pem  \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
--client-ca-file=/etc/kubernetes/pki/ca.pem  \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
--service-account-key-file=/etc/kubernetes/pki/sa.pub  \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
--authorization-mode=Node,RBAC  \
--enable-bootstrap-token-auth=true  \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
--requestheader-allowed-names=aggregator  \
--requestheader-group-headers=X-Remote-Group  \
--requestheader-extra-headers-prefix=X-Remote-Extra-  \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

8.4 Start kube-apiserver

# Start on all three nodes
systemctl daemon-reload && systemctl start kube-apiserver
systemctl enable kube-apiserver

# Check the status; log lines starting with I (e.g. I0930) are Info messages, so no errors means it is running normally
systemctl status kube-apiserver.service 
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-09-30 15:26:44 CST; 2min 26s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 10836 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─10836 /usr/local/bin/kube-apiserver --v=2 --logtostderr=true --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --insecure...

Sep 30 15:28:32 k8s-master01 kube-apiserver[10836]: I0930 15:28:32.901163   10836 clientconn.go:948] ClientConn switching balancer to "pick_first"
Sep 30 15:28:32 k8s-master01 kube-apiserver[10836]: I0930 15:28:32.901332   10836 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubCo...G <nil>}
Sep 30 15:28:32 k8s-master01 kube-apiserver[10836]: I0930 15:28:32.905381   10836 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubCo...Y <nil>}
Sep 30 15:28:32 k8s-master01 kube-apiserver[10836]: I0930 15:28:32.906146   10836 controlbuf.go:508] transport: loopyWriter.run returning. con...closing"
Sep 30 15:28:36 k8s-master01 kube-apiserver[10836]: I0930 15:28:36.315157   10836 client.go:360] parsed scheme: "passthrough"
Sep 30 15:28:36 k8s-master01 kube-apiserver[10836]: I0930 15:28:36.315246   10836 passthrough.go:48] ccResolverWrapper: sending update to cc: ...> <nil>}
Sep 30 15:28:36 k8s-master01 kube-apiserver[10836]: I0930 15:28:36.315253   10836 clientconn.go:948] ClientConn switching balancer to "pick_first"
Sep 30 15:28:36 k8s-master01 kube-apiserver[10836]: I0930 15:28:36.315539   10836 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubCo...G <nil>}
Sep 30 15:28:36 k8s-master01 kube-apiserver[10836]: I0930 15:28:36.319932   10836 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubCo...Y <nil>}
Sep 30 15:28:36 k8s-master01 kube-apiserver[10836]: I0930 15:28:36.320688   10836 controlbuf.go:508] transport: loopyWriter.run returning. con...closing"
Hint: Some lines were ellipsized, use -l to show in full.
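
As an additional check (not in the original steps), the apiserver can be probed through the admin kubeconfig created earlier; it should answer "ok" once healthy:

# Probe the /healthz endpoint via the VIP using the admin kubeconfig
kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get --raw='/healthz'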

9. Deploy kube-controller-manager

The kube-controller-manager configuration is identical on all master nodes; --cluster-cidr is the pod CIDR.

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24 \
--cluster-signing-duration=876000h0m0s

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

9.1 Start kube-controller-manager

systemctl daemon-reload && systemctl start kube-controller-manager
systemctl enable kube-controller-manager

# Check the status
systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-09-30 15:31:58 CST; 17s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 10773 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─10773 /usr/local/bin/kube-controller-manager --v=2 --logtostderr=true --address=127.0.0.1 --root-ca-file=/etc/kubernetes/pki/ca.pem --clu...

Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: W0930 15:31:59.474804   10773 authorization.go:156] No authorization-kubeconfig pr...t work.
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.474814   10773 controllermanager.go:175] Version: v1.19.10
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475147   10773 tlsconfig.go:178] loaded client CA [0/"request-heade...0 UTC))
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475237   10773 tlsconfig.go:200] loaded serving cert ["Generated self sign...
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475313   10773 named_certificates.go:53] loaded SNI cert [0/"self-signed l...
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475322   10773 secure_serving.go:197] Serving securely on [::]:10257
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475539   10773 deprecated_insecure_serving.go:53] Serving insecurel...1:10252
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475559   10773 leaderelection.go:243] attempting to acquire leader ...ager...
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475883   10773 dynamic_cafile_content.go:167] Starting request-head...-ca.pem
Sep 30 15:31:59 k8s-master02 kube-controller-manager[10773]: I0930 15:31:59.475905   10773 tlsconfig.go:240] Starting DynamicServingCertificateController
Hint: Some lines were ellipsized, use -l to show in full.

10. Deploy kube-scheduler

The configuration is identical on all master nodes.

cat >/usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

10.1 Start kube-scheduler

systemctl daemon-reload && systemctl start kube-scheduler
systemctl enable kube-scheduler

# Check the status
systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-09-30 15:36:29 CST; 5s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 11169 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─11169 /usr/local/bin/kube-scheduler --v=2 --logtostderr=true --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/schedul...

Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.704681   11169 reflector.go:207] Starting reflector *v1.PersistentVolumeCla...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.704789   11169 reflector.go:207] Starting reflector *v1.Service (0s) from k...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.704832   11169 reflector.go:207] Starting reflector *v1.PersistentVolume (0...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.704910   11169 reflector.go:207] Starting reflector *v1.StatefulSet (0s) fr...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.704948   11169 reflector.go:207] Starting reflector *v1beta1.PodDisruptionB...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.705005   11169 reflector.go:207] Starting reflector *v1.Pod (0s) from k8s.i...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.705087   11169 reflector.go:207] Starting reflector *v1.Node (0s) from k8s....y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.705120   11169 reflector.go:207] Starting reflector *v1.CSINode (0s) from k...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.705211   11169 reflector.go:207] Starting reflector *v1.ReplicationControll...y.go:134
Sep 30 15:36:29 k8s-master01 kube-scheduler[11169]: I0930 15:36:29.804921   11169 leaderelection.go:243] attempting to acquire leader lease  k...duler...
Hint: Some lines were ellipsized, use -l to show in full

11. Set Up TLS Bootstrapping

Run on master01.

# Enter the bootstrap directory
cd k8s-ha-install/bootstrap/

kubectl config set-cluster kubernetes  --certificate-authority=/etc/kubernetes/pki/ca.pem  --embed-certs=true     --server=https://192.168.124.45:8443  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes  --cluster=kubernetes  --user=tls-bootstrap-token-user  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# Create on all three masters
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

# Create the bootstrap secret and RBAC objects
kubectl create -f bootstrap.secret.yaml
# Expected output:
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

11.1 Check Component Status

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

11.2 Enable kubectl Command Completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > /etc/bash_completion.d/kubectl

12. Deploy the Node Components

On master01, copy the certificates to the node machines.

for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
     ssh $NODE "mkdir -p /etc/kubernetes/pki /etc/etcd/ssl"
     for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
       scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
     done
     for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
       scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
 done
 done
 
# Copy the binaries to the node machines
scp -r /usr/local/bin/kubelet root@192.168.124.37:/usr/local/bin/
scp -r /usr/local/bin/kube-proxy root@192.168.124.37:/usr/local/bin/
scp -r /usr/local/bin/kubelet root@192.168.124.38:/usr/local/bin/
scp -r /usr/local/bin/kube-proxy root@192.168.124.38:/usr/local/bin/
# Create directories on the node machines
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

12.1 Create the kubelet systemd Unit

cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

12.2 Configure kubelet

Create on all node machines.

cat >  /etc/systemd/system/kubelet.service.d/10-kubelet.conf << EOF
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry:5000/pause-amd64:3.0"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node="
ExecStart=
ExecStart=/usr/local/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_CONFIG_ARGS \$KUBELET_SYSTEM_ARGS \$KUBELET_EXTRA_ARGS
EOF

12.3 Create the kubelet Configuration File

The configuration is identical on all node machines; clusterDNS is an address in the service CIDR.

cat >/etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

12.4 Start kubelet

systemctl daemon-reload && systemctl start kubelet
systemctl enable kubelet

# Check the status; the warnings below are expected because the network plugin has not been deployed yet
systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubelet.conf
   Active: active (running) since Fri 2022-09-30 15:44:48 CST; 7min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 10178 (kubelet)
    Tasks: 13
   Memory: 38.9M
   CGroup: /system.slice/kubelet.service
           └─10178 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kub...

Sep 30 15:51:55 k8s-node02 kubelet[10178]: E0930 15:51:55.547703   10178 kubelet.go:2134] Container runtime network not ready: NetworkReady=fa...tialized
Sep 30 15:51:58 k8s-node02 kubelet[10178]: W0930 15:51:58.898855   10178 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Sep 30 15:52:00 k8s-node02 kubelet[10178]: E0930 15:52:00.552790   10178 kubelet.go:2134] Container runtime network not ready: NetworkReady=fa...tialized
Sep 30 15:52:03 k8s-node02 kubelet[10178]: W0930 15:52:03.899689   10178 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Sep 30 15:52:05 k8s-node02 kubelet[10178]: E0930 15:52:05.562398   10178 kubelet.go:2134] Container runtime network not ready: NetworkReady=fa...tialized
Sep 30 15:52:08 k8s-node02 kubelet[10178]: W0930 15:52:08.900283   10178 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Sep 30 15:52:10 k8s-node02 kubelet[10178]: E0930 15:52:10.566737   10178 kubelet.go:2134] Container runtime network not ready: NetworkReady=fa...tialized
Sep 30 15:52:13 k8s-node02 kubelet[10178]: W0930 15:52:13.900974   10178 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Sep 30 15:52:15 k8s-node02 kubelet[10178]: E0930 15:52:15.571562   10178 kubelet.go:2134] Container runtime network not ready: NetworkReady=fa...tialized
Sep 30 15:52:18 k8s-node02 kubelet[10178]: W0930 15:52:18.901168   10178 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Hint: Some lines were ellipsized, use -l to show in full.

# Check the cluster state
kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-node01   NotReady   <none>   9s      v1.19.10
k8s-node02   NotReady   <none>   5m43s   v1.19.10

The nodes show NotReady because no network plugin has been installed yet.
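
You can also confirm that TLS bootstrapping worked (an optional check, not in the original): each node's certificate signing request should show as automatically approved.

# CONDITION should be Approved,Issued for each joined node
kubectl get csr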

13. Deploy kube-proxy

Run on master01, then copy the kube-proxy kubeconfig to the node machines.

cd k8s-ha-install/
kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy  --clusterrole system:node-proxier  --serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
    --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes  --certificate-authority=/etc/kubernetes/pki/ca.pem  --embed-certs=true  --server=https://192.168.124.45:8443  --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes  --token=${JWT_TOKEN}  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes  --cluster=kubernetes  --user=kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

13.1 Distribute kube-proxy.kubeconfig to the Other Nodes

for NODE in k8s-master02 k8s-master03; do
     scp /etc/kubernetes/kube-proxy.kubeconfig  $NODE:/etc/kubernetes/kube-proxy.kubeconfig
 done

for NODE in k8s-node01 k8s-node02; do
     scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
 done

13.2 Create the kube-proxy systemd Unit

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

13.3 Create the kube-proxy Configuration File

Create on all node machines; clusterCIDR is the pod CIDR.

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16 
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

13.4 Start kube-proxy

systemctl daemon-reload && systemctl start kube-proxy
systemctl enable kube-proxy
# Check the status
systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-09-30 15:49:24 CST; 1min 50s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 11244 (kube-proxy)
    Tasks: 6
   Memory: 16.5M
   CGroup: /system.slice/kube-proxy.service
           └─11244 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --v=2

Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.856598   11244 config.go:315] Starting service config controller
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.856603   11244 shared_informer.go:240] Waiting for caches to sync for service config
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.856615   11244 config.go:224] Starting endpoint slice config controller
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.856617   11244 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.856686   11244 reflector.go:207] Starting reflector *v1beta1.EndpointSlice (15m0s...y.go:134
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.856945   11244 reflector.go:207] Starting reflector *v1.Service (15m0s) from k8s....y.go:134
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.857821   11244 service.go:277] Service default/kubernetes updated: 1 ports
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.957129   11244 shared_informer.go:247] Caches are synced for endpoint slice config
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.957129   11244 shared_informer.go:247] Caches are synced for service config
Sep 30 15:49:24 k8s-node02 kube-proxy[11244]: I0930 15:49:24.957403   11244 service.go:396] Adding new service port "default/kubernetes:https"...:443/TCP
Hint: Some lines were ellipsized, use -l to show in full
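
Because mode is set to "ipvs", you can optionally verify on a node that kube-proxy programmed IPVS rules (not part of the original steps):

# The kubernetes service (10.96.0.1:443) should appear as an rr virtual server
ipvsadm -Ln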

14. Deploy the Calico Network

Run on master01 only.

14.1 Modify the Calico Manifest

# My Docker daemon is configured with my own cloud-hosted registry (the registry host), so all images are pulled directly from it
cd k8s-ha-install/Calico
Change the image names in calico.yaml to match the names in your registry, and on line 4365 change the value to your pod CIDR, 10.244.0.0/16 (see the sketch after this snippet):
  - name: CALICO_IPV4POOL_CIDR
    value: "10.244.0.0/16"   

14.2 Create Calico

# Create
kubectl create -f calico.yaml

# Check the Pods and image pull status
kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-74d67dc7cd-x28z2   1/1     Running   1          74m
kube-system   calico-node-67vmn                          1/1     Running   0          74m
kube-system   calico-node-8qwbv                          1/1     Running   0          74m

# Check whether the nodes are now Ready
kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-node01   Ready    <none>   106m   v1.19.10
k8s-node02   Ready    <none>   111m   v1.19.10

15. Deploy CoreDNS

Run on master01.

15.1 Modify the CoreDNS Manifest

cd k8s-ha-install/CoreDNS/
Change the image name in coredns.yaml to match the name in your registry, and on line 188 change clusterIP to the service network IP:
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10    # must match the clusterDNS value in kubelet-conf.yml

15.2 Create CoreDNS

# Create
kubectl create -f coredns.yaml

# Check the DNS status
kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6b6689f4b4-7rvml   1/1     Running   0          37m

16. Deploy the Dashboard

16.1 Modify the Manifest

cd k8s-ha-install/dashboard/
# Change the Service type to NodePort to expose it externally
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

16.2 Create the Dashboard

kubectl create -f dashboard.yaml
kubectl create -f dashboard-user.yaml

# Check the status
kubectl get pods -n  kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-9mqll   1/1     Running   0          36s
kubernetes-dashboard-548f88599b-tjt7x        1/1     Running   0          36s

# Check the port number
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.98.170.229   <none>        443:30000/TCP   3m48s

# Get the login token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-5j5j8
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2641b0d4-5510-49fc-90ee-b81bd0c1df05

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1411 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImsyZjg1MkVXR183REtMckVENGo1Q1FJczBNTVUwQ2ZNaTNNZmZpeWMtSE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTVqNWo4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyNjQxYjBkNC01NTEwLTQ5ZmMtOTBlZS1iODFiZDBjMWRmMDUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.DvpBCDqjXQiTMPP_LRJaFPZiARWyknWJ73Vox-U5tMLtNUf2OZ1pEA-9cCTzUrHVruxhsdEYgy1-kBwY-kQ3PFZkSs8dZKrhInX7RtJ_iTnrNyyTtdagIBrMWLvh_yaFAzJrN7I6Io53cuMC69MlaKo5MVSfPK7bK1ebAet_wuMfoI73gPg1ibPvr3qajo1yRF_RB_u1Y6Ts09kKV810fci84PVDYgx_gMAZHvMbojpATJEav9WmpHjpw4KsE75m57x7bALmtRVPu3lSh2fr7SSmS1SoxvqX7BUk5Fj5ydCM6a9-NgYJ5AyTKeseoaTK3D2x4iQ4KmwLUV-HRa_t4g

# Open in a browser
https://<any node IP>:30000
Log in with the token obtained above.
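
Before opening a browser you can optionally confirm the NodePort answers (node01's IP from the host table is used here; any node IP works):

# A response (typically 200) means the dashboard Service is reachable
curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.124.37:30000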

17. Verify Cluster Availability

# Deploy a test Pod
cat > busybox.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Create the busybox Pod
kubectl create -f busybox.yaml

# Check that it started successfully
kubectl  get pods
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

17.1 Test Pod Resolution of the kubernetes Service in the Default Namespace

# Resolve the kubernetes service in the default namespace from the Pod
kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

17.2 Test Cross-Namespace Resolution

kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

17.3 Every Node Can Reach the Service VIP on Port 443 and kube-dns on Port 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

 telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

17.4 Pod-to-Pod Connectivity

kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          4m12s   10.244.58.199   k8s-node02   <none>           <none>

kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-74d67dc7cd-gbgg5   1/1     Running   0          114m   10.244.58.193    k8s-node02   <none>           <none>
calico-node-rbv8g                          1/1     Running   0          114m   192.168.124.37   k8s-node01   <none>           <none>
calico-node-xrkbw                          1/1     Running   0          114m   192.168.124.38   k8s-node02   <none>           <none>
coredns-6b6689f4b4-qrtpp                   1/1     Running   0          74m    10.244.85.193    k8s-node01   <none>           <none>

# Exec into busybox and ping a pod on another node
kubectl exec -it busybox -- sh
/ # ping 192.168.124.37
PING 192.168.124.37 (192.168.124.37): 56 data bytes
64 bytes from 192.168.124.37: seq=0 ttl=63 time=0.252 ms
64 bytes from 192.168.124.37: seq=1 ttl=63 time=0.185 ms
64 bytes from 192.168.124.37: seq=2 ttl=63 time=0.239 ms
64 bytes from 192.168.124.37: seq=3 ttl=63 time=0.215 ms
64 bytes from 192.168.124.37: seq=4 ttl=63 time=0.234 ms
64 bytes from 192.168.124.37: seq=5 ttl=63 time=0.234 ms
^C
--- 192.168.124.37 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.185/0.226/0.252 ms

# Everything is reachable so far, which proves the Pod can communicate across namespaces and across hosts

17.5 Load Balancing Test

On any node, curl the VIP address and port. If the version information is returned, the load balancing works; the request path is curl > VIP (haproxy) > apiserver.

[root@k8s-node02 ~]# curl -k https://192.168.124.45:8443/version
{
  "major": "1",
  "minor": "25",
  "gitVersion": "v1.19.10",
  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
  "gitTreeState": "clean",
  "buildDate": "2022-08-23T17:38:15Z",
  "goVersion": "go1.19",
  "compiler": "gc",
  "platform": "linux/amd64"
}

[root@k8s-node02 ~]# curl -k https://192.168.124.45:6443/version
{
  "major": "1",
  "minor": "25",
  "gitVersion": "v1.19.10",
  "gitCommit": "a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2",
  "gitTreeState": "clean",
  "buildDate": "2022-08-23T17:38:15Z",
  "goVersion": "go1.19",
  "compiler": "gc",
  "platform": "linux/amd64"
}

At this point the entire cluster has been deployed successfully!

18. Summary

1. The masters are control-plane nodes; there is no need to install the node components on them or run Pods/Docker there, which also saves you from having to taint them.

2. In a real project deployment you will have your own private image registry; import the images used here into it and change the image names in the YAML files accordingly.

3. In production, deploy etcd on three dedicated machines with clearly defined hostnames for easier maintenance (co-locating etcd with the masters is discouraged because a master outage would take etcd down with it).

4. In production you may consider password-free (HTTP) login for the dashboard to avoid fetching a token on every login.

5. You can also skip the dashboard and use Kuboard as the management UI instead; it is very simple to install and use.

6. Plan your paths during deployment (certificate locations, data directories, and so on) according to each machine's disk space.

7. Walk through this article on test machines or VMs first to get familiar with the installation flow; production deployment will then be much easier.

8. If you are installing a single master rather than a cluster, replace every occurrence of 192.168.124.45:8443 in this article with <single master IP>:6443; a single master is suitable for test environments.

9. Below are the packages used during installation; the link is valid for 7 days.

Link: https://pan.baidu.com/s/1N4ZZTOHEWloOlnM9EUnHjQ
Extraction code: zuyq
