Building a K8S High-Availability Cluster with kubeadm (Part 1)

1. Cluster Planning

① Server preparation

Four virtual machines are prepared: three host the K8S HA control plane plus the etcd cluster, with keepalived + nginx providing HA for the K8S apiserver; the fourth is a worker node.

k8s-01	192.168.0.108	master + etcd + keepalived + nginx
k8s-02	192.168.0.109	master + etcd + keepalived + nginx
k8s-03	192.168.0.111	master + etcd + keepalived + nginx
k8s-04	192.168.0.112	worker node
VIP	    192.168.0.110	keepalived + nginx HA virtual IP

OS: CentOS Linux release 7.9.2009 (Core)
Note: switch CentOS 7 to the Aliyun yum mirror:

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
② The environment.sh script
[root@k8s01 ssl]# cat /data/etcd/ssl/environment.sh
#!/usr/bin/bash

# encryption key used to generate the EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# IP array of the cluster machines
export NODE_IPS=(192.168.0.108 192.168.0.109 192.168.0.111)

# hostname array matching each IP
export NODE_NAMES=(k8s01 k8s02 k8s03)

# etcd cluster client endpoints
export ETCD_ENDPOINTS="https://192.168.0.108:2379,https://192.168.0.109:2379,https://192.168.0.111:2379"

# IPs and ports for etcd inter-member communication
export ETCD_NODES="k8s01=https://192.168.0.108:2380,k8s02=https://192.168.0.109:2380,k8s03=https://192.168.0.111:2380"

# address and port of the kube-apiserver reverse proxy (kube-nginx)
export KUBE_APISERVER="https://192.168.0.110:8443"

# network interface used for inter-node traffic
export IFACE="ens33"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; an SSD partition, or at least a different disk from ETCD_DATA_DIR, is recommended
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# data directory for the k8s components
export K8S_DIR="/data/k8s/k8s"

## use either DOCKER_DIR or CONTAINERD_DIR, not both
# docker data directory
export DOCKER_DIR="/data/k8s/docker"

# containerd data directory
export CONTAINERD_DIR="/data/k8s/containerd"

## the parameters below rarely need changing

# token for TLS bootstrapping; generate with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# prefer currently unused subnets for the service and pod networks

# service CIDR; unroutable before deployment, routable inside the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"

# pod CIDR, ideally a /16; unroutable before deployment, routable inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# NodePort range
export NODE_PORT_RANGE="30000-32767"

# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# cluster DNS domain (no trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# prepend the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
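
Only the NODE_*/ETCD_* variables are actually consumed later in this part; note the CIDRs above (10.254.0.0/16, 172.30.0.0/16) differ from those the kubeadm config uses later (10.96.0.0/12, 10.244.0.0/16). Source the script in any shell that runs the later loops and spot-check a value:

source /data/etcd/ssl/environment.sh
echo ${ETCD_NODES}
echo ${NODE_IPS[@]}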
③ Install docker on all nodes
yum -y install docker

Enable it at boot and start it:

systemctl enable docker
systemctl start docker
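
The CentOS base repo ships Docker 1.13, which is old but workable here. kubeadm's preflight checks care that the daemon is running and warn about its cgroup driver, so a quick sanity check is worthwhile:

systemctl is-active docker
docker info | grep -E 'Server Version|Cgroup Driver'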

2. Base Environment Preparation

① Set hostnames

The three machines co-host both the etcd cluster and the master components described in this document.

[root@k8s01 ~]# hostnamectl set-hostname k8s01

If DNS cannot resolve the hostnames, also add hostname-to-IP mappings to /etc/hosts on every machine:

cat >> /etc/hosts <<EOF
192.168.0.108 k8s01
192.168.0.109 k8s02
192.168.0.111 k8s03
192.168.0.112 k8s04
EOF
Log out and back in as root; the new hostname is now in effect.

② Establish trust between nodes

Run on k8s01:

[root@k8s01 ~]# ssh-keygen -t rsa
[root@k8s01 ~]# ssh-copy-id root@k8s01
[root@k8s01 ~]# ssh-copy-id root@k8s02
[root@k8s01 ~]# ssh-copy-id root@k8s03
[root@k8s01 ~]# ssh-copy-id root@k8s04
③ Install dependency packages
yum install -y epel-release
yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git

kube-proxy in this document runs in ipvs mode; ipvsadm is the management tool for ipvs (the required kernel modules are loaded in the sketch below).
The etcd cluster machines need synchronized clocks; chrony handles system time sync.
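
kube-proxy's ipvs mode silently falls back to iptables when the ipvs kernel modules are missing, so load them now and persist them across reboots (nf_conntrack_ipv4 is the module name on the stock CentOS 7 3.10 kernel):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe ${mod}; done
lsmod | grep -e ip_vs -e nf_conntrack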

④ Turn off the firewall

Stop the firewall, flush its rules, and set the default forward policy to ACCEPT:

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
⑤ Disable swap

Disable the swap partition, otherwise kubelet fails to start (alternatively, set the kubelet flag --fail-swap-on=false to skip the swap check):

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab 
⑥ Disable SELinux

Disable SELinux, otherwise kubelet may hit Permission denied when mounting directories:

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
⑦ Tune kernel parameters
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

tcp_tw_recycle is disabled because it conflicts with NAT and can make services unreachable.
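
The net.bridge.* keys only exist while the br_netfilter module is loaded, so sysctl -p fails on a fresh boot without it; load and persist the module before applying the file:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/kubernetes.conf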

⑧ Enable automatic time sync

If the time zone is wrong, fix it with: timedatectl set-timezone Asia/Shanghai

systemctl enable chronyd
systemctl start chronyd
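
A quick check that the clock really is synchronized:

chronyc tracking
chronyc sources -v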

3. Building a Three-Node etcd Cluster

etcd is a distributed service system developed by CoreOS that uses the raft protocol internally for consensus. As a service-discovery system it has the following traits:

Simple: easy to install and configure, with an HTTP API that is easy to use
Secure: supports SSL certificate authentication
Fast: per the official benchmarks, a single instance supports 2k+ reads per second
Reliable: the raft algorithm keeps the distributed data available and consistent

By default etcd serves its HTTP API on port 2379 and talks to peers on port 2380 (both ports are officially reserved for etcd by IANA).

etcd can also run as a single node, but clusters are recommended in production, usually with 3, 5, or 7 members. etcd keeps a copy of the data on every member and guarantees strong consistency and correctness.

① Main etcd features

1. Basic key-value storage
2. Watch mechanism
3. Key expiry and lease renewal, useful for monitoring and service discovery
4. Atomic CAS and CAD, useful for distributed locks and leader election

② Install cfssl on k8s01

Download the certificate-issuing tools:

curl -s -L -o /usr/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /usr/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /usr/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 
chmod +x /usr/bin/cfssl*
③ Download the etcd binaries

Note: perform this step on all three nodes.

wget https://github.com/etcd-io/etcd/releases/download/v3.4.18/etcd-v3.4.18-linux-amd64.tar.gz
tar zxf etcd-v3.4.18-linux-amd64.tar.gz
cd etcd-v3.4.18-linux-amd64
mv etcd etcdctl /usr/bin/

Copy the etcd and etcdctl binaries to the other two servers:

[root@k8s01 ~]# ls
anaconda-ks.cfg  etcd-v3.4.18-linux-amd64  etcd-v3.4.18-linux-amd64.tar.gz  kubernetes.conf
[root@k8s01 ~]# cd etcd-v3.4.18-linux-amd64
[root@k8s01 etcd-v3.4.18-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@k8s01 etcd-v3.4.18-linux-amd64]# scp -r etcd etcdctl root@k8s02:/usr/bin/
etcd                                                                                                                                                               100%   23MB 109.6MB/s   00:00    
etcdctl                                                                                                                                                            100%   17MB 120.9MB/s   00:00    
[root@k8s01 etcd-v3.4.18-linux-amd64]# scp -r etcd etcdctl root@k8s03:/usr/bin/
etcd                                                                                                                                                               100%   23MB 102.0MB/s   00:00    
etcdctl                            100%   17MB 133.9MB/s   00:00
④ Issue the etcd certificates

CA certificate: a self-signed authority certificate, used to sign the other certificates
server certificate: the certificate for etcd itself
client certificate: for clients such as etcdctl
peer certificate: for node-to-node communication

4.1 Create the directory
mkdir -p /data/etcd/ssl
cd /data/etcd/ssl
4.2 Create the CA config file

The CA config file defines usage profiles and concrete parameters for the root certificate (usages, expiry, server auth, client auth, encryption, etc.):

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

signing: the certificate may sign other certificates (the generated ca.pem carries CA=TRUE);
server auth: a client may use this certificate to verify certificates presented by servers;
client auth: a server may use this certificate to verify certificates presented by clients;
"expiry": "876000h": sets the validity to 100 years;

4.3 Create the certificate signing request file

Create the CSR file ca-csr.json:

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "devops"
    }
  ],
  "ca": {
    "expiry": "876000h"
 }
}
EOF

CN (Common Name): kube-apiserver extracts this field as the requesting User Name; browsers use it to decide whether a site is legitimate;
O (Organization): kube-apiserver extracts this field as the requesting user's Group;
kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;
Note:

Each certificate's CSR must use a distinct combination of CN, C, ST, L, O, OU, otherwise you may hit PEER'S CERTIFICATE HAS AN INVALID SIGNATURE errors;
the CSR files created later therefore use different CNs (with identical C, ST, L, O, OU) to tell the certificates apart.

4.4 Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
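
cfssljson -bare ca writes three files; keep the private key ca-key.pem safe:

ca.csr  ca-key.pem  ca.pem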
4.5 Create the etcd certificate and private key
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.0.108",
    "192.168.0.109",
    "192.168.0.111"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "devops"
    }
  ]
}
EOF

hosts: the list of etcd node IPs authorized to use this certificate; every node of the etcd cluster must be listed.
Generate the etcd certificate and private key:

cfssl gencert -ca=/data/etcd/ssl/ca.pem \
    -ca-key=/data/etcd/ssl/ca-key.pem \
    -config=/data/etcd/ssl/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
 ls etcd*pem

Distribute the generated certificates and keys to each etcd node:

export NODE_IPS=(192.168.0.108 192.168.0.109 192.168.0.111)
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /etc/kubernetes/pki/etcd"
    scp etcd*.pem root@${node_ip}:/etc/kubernetes/pki/etcd
    scp ca*.pem root@${node_ip}:/etc/kubernetes/pki/etcd
  done
⑤ etcd systemd unit template

[root@k8s01 ssl]# source environment.sh

cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/usr/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=##NODE_NAME## \
  --cert-file=/etc/kubernetes/pki/etcd/etcd.pem \
  --key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd/etcd.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://##NODE_IP##:2380 \
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://##NODE_IP##:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=${ETCD_NODES} \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

WorkingDirectory, --data-dir: working and data directory, set to ${ETCD_DATA_DIR}; create it before starting the service;
--wal-dir: the WAL directory; for performance, use an SSD or at least a different disk from --data-dir;
--name: the node name; when --initial-cluster-state is new, the --name value must appear in the --initial-cluster list;
--cert-file, --key-file: certificate and private key etcd presents to clients;
--trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
--peer-cert-file, --peer-key-file: certificate and private key etcd uses for peer communication;
--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;
Substitute the variables in the template to create one systemd unit file per node:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service 
  done
ls *.service

NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and their corresponding IPs.
Distribute the generated systemd unit files:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
  done

Start the etcd service:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
  done

etcd blocks on its first start until a quorum of peers is reachable, which is why each remote start is backgrounded with &. Check the results:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status etcd|grep Active"
  done

Make sure the status is active (running); otherwise check the logs for the cause:

journalctl -u etcd

Verify the etcd service:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    /usr/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.pem \
    --cert=/etc/kubernetes/pki/etcd/etcd.pem \
    --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
  done
>>> 192.168.0.108
https://192.168.0.108:2379 is healthy: successfully committed proposal: took = 20.123255ms
>>> 192.168.0.109
https://192.168.0.109:2379 is healthy: successfully committed proposal: took = 9.104908ms
>>> 192.168.0.111
https://192.168.0.111:2379 is healthy: successfully committed proposal: took = 10.23718ms

Check the current leader:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
/usr/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status 

Output:

+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.0.108:2379 | dabc7582f4d7fb46 |  3.4.18 |   20 kB |      true |      false |         2 |          8 |                  8 |        |
| https://192.168.0.109:2379 | fb3f424f2c5de754 |  3.4.18 |   16 kB |     false |      false |         2 |          8 |                  8 |        |
| https://192.168.0.111:2379 | 92b71ce0c4151960 |  3.4.18 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s01 ssl]# 
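
With the cluster healthy, the features from ① (key-value storage, watches, key expiry via leases) can be exercised directly with etcdctl; a quick sketch, where /demo/hello is a throwaway example key:

# write through the first member...
etcdctl --endpoints=https://192.168.0.108:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem put /demo/hello world
# ...and read it back through another member to confirm replication
etcdctl --endpoints=https://192.168.0.109:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem get /demo/hello
# leases back the expiry feature: keys written with put --lease=<id> vanish when the lease does
etcdctl --endpoints=https://192.168.0.108:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem lease grant 30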

4. Installing keepalived

① Run on all three masters
yum -y install keepalived

Configuration files:
k8s01

! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER # primary
    interface ens33 # NIC name
    virtual_router_id 50
    priority 100 # priority; highest wins
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456789
    }
    virtual_ipaddress {
        192.168.0.110 # the VIP
    }
}

k8s02

! Configuration File for keepalived
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456789
    }
    virtual_ipaddress {
        192.168.0.110 # the VIP
    }
}

k8s03

! Configuration File for keepalived
global_defs {
   router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456789
    }
    virtual_ipaddress {
        192.168.0.110 # the VIP
    }
}
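
As written, the VIP moves only when an entire node goes down; keepalived never checks whether the local apiserver process is alive. Once the apiservers are up (section 6), a health-check hook is worth adding on each master. A sketch, with an assumed script path and weight; wire it in with a track_script block inside vrrp_instance VI_1:

cat > /etc/keepalived/check_apiserver.sh <<'EOF'
#!/bin/bash
# non-zero exit tells keepalived to apply the negative weight, so the VIP moves
curl -sk --connect-timeout 3 -o /dev/null https://127.0.0.1:6443/healthz || exit 1
EOF
chmod +x /etc/keepalived/check_apiserver.sh

# add to /etc/keepalived/keepalived.conf:
# vrrp_script check_apiserver {
#     script "/etc/keepalived/check_apiserver.sh"
#     interval 3
#     weight -40    # large enough to drop below every backup's priority
# }
# ...and inside vrrp_instance VI_1:
#     track_script { check_apiserver }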
② Enable at boot and start the service
systemctl enable keepalived
systemctl start keepalived

Verify that the VIP is up:

[root@k8s01 ssl]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:31:30:b1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.108/24 brd 192.168.0.255 scope global noprefixroute dynamic ens33
       valid_lft 5374sec preferred_lft 5374sec
    inet 192.168.0.110/32 scope global ens33
       valid_lft forever preferred_lft forever
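
One loose end: the plan in section 1 lists nginx next to keepalived, and environment.sh points KUBE_APISERVER at https://192.168.0.110:8443, yet the kubeadm steps below go straight at the VIP on port 6443, so nginx is not strictly required in this installment. If you do want a layer-4 proxy in front of all three apiservers, a minimal sketch (assumes an nginx build with the stream module, e.g. yum install nginx nginx-mod-stream from EPEL; the block sits at the top level of /etc/nginx/nginx.conf):

stream {
    upstream kube_apiserver {
        least_conn;
        server 192.168.0.108:6443 max_fails=3 fail_timeout=30s;
        server 192.168.0.109:6443 max_fails=3 fail_timeout=30s;
        server 192.168.0.111:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8443;
        proxy_pass kube_apiserver;
        proxy_connect_timeout 1s;
    }
}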

5. Installing kubeadm, kubelet, and kubectl

Install kubeadm and kubelet on all nodes. kubectl is optional: install it on every machine, or only on a single master (e.g. k8s01).

1. Add the Aliyun yum repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install pinned versions

yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2

3. On every node where kubelet is installed, enable it at boot

systemctl enable kubelet 

6. Initializing with kubeadm

① Generate the default kubeadm config
kubeadm config print init-defaults > kubeadm-init.yaml
② After editing:
[root@k8s01 src]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.108
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
#  local:
#    dataDir: /var/lib/etcd
  external:
    endpoints:
    - https://192.168.0.108:2379
    - https://192.168.0.109:2379
    - https://192.168.0.111:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem  # CA cert generated while building the etcd cluster
    certFile: /etc/kubernetes/pki/etcd/etcd.pem   # client cert generated while building the etcd cluster
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem  # client key generated while building the etcd cluster
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
controlPlaneEndpoint: 192.168.0.110
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
③ Run the initialization
[root@k8s01 src]# kubeadm init --config=kubeadm-init.yaml

Initialization succeeded:

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 
④ Copy the pki certificates to the other two master nodes

First scp the shared, cluster-wide CA material generated on k8s01 to the other master machines.
Note: only the CA-related certificates are needed; the apiserver certificates are per-node and must not be reused, hence the cleanup below.

[root@k8s01 src]# scp /etc/kubernetes/pki/* root@k8s02:/etc/kubernetes/pki/
[root@k8s01 src]# scp /etc/kubernetes/pki/* root@k8s03:/etc/kubernetes/pki/
# On the other two master servers, delete the apiserver certificates
[root@k8s02 pki]# rm -rf apiserver*
[root@k8s03 pki]# rm -rf apiserver*
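
Hand-copying pki works, but kubeadm can automate it. A hedged alternative available in v1.19: pass --upload-certs at init so the control-plane certificates are stored (encrypted) in a kubeadm-certs Secret, then join the other masters with the --certificate-key that init prints:

kubeadm init --config=kubeadm-init.yaml --upload-certs
# then on k8s02/k8s03:
kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 \
    --control-plane --certificate-key <key printed by kubeadm init>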
⑤ Join masters k8s02 and k8s03 to the cluster
 kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 \
    --control-plane 
   
⑥ Check the cluster's node status

Note: set up the kubectl environment before checking:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The STATUS is NotReady because no network plugin has been installed yet.

[root@k8s02 pki]# kubectl get node
NAME    STATUS     ROLES    AGE     VERSION
k8s01   NotReady   master   16m     v1.19.2
k8s02   NotReady   master   2m14s   v1.19.2
k8s03   NotReady   master   5m1s    v1.19.2
⑦ Install the Calico network plugin
[root@k8s01 src]# curl https://docs.projectcalico.org/manifests/calico.yaml -O

Edit the configuration:

[root@k8s01 src]# cp calico.yaml calico.yaml.orig
[root@k8s01 src]# diff -U 5 calico.yaml.orig calico.yaml
--- calico.yaml.orig    2022-01-09 22:51:33.274483687 +0800
+++ calico.yaml 2022-01-09 23:05:04.278418520 +0800
@@ -4217,12 +4217,12 @@
                   name: calico-config
                   key: veth_mtu
             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
             # chosen from this range. Changing this value after installation will have
             # no effect. This should fall within `--cluster-cidr`.
-            # - name: CALICO_IPV4POOL_CIDR
-            #   value: "192.168.0.0/16"
+            - name: CALICO_IPV4POOL_CIDR
+              value: "10.244.0.0/16"
             # Disable file logging so `kubectl logs` works.
             - name: CALICO_DISABLE_FILE_LOGGING
               value: "true"
             # Set Felix endpoint to host default action to ACCEPT.
             - name: FELIX_DEFAULTENDPOINTTOHOSTACTION

This sets the pod CIDR to 10.244.0.0/16, matching podSubnet in kubeadm-init.yaml.
Calico auto-detects the uplink interface; with multiple NICs you can pin it with an interface-name regular expression (for example via the IP_AUTODETECTION_METHOD env var, interface=ens.*, adjusted to your NIC names).
Apply the Calico manifest:

[root@k8s01 src]# kubectl apply -f calico.yaml
[root@k8s01 src]# cat calico.yaml|grep image
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/pod2daemon-flexvol:v3.21.2
          image: docker.io/calico/node:v3.21.2
          image: docker.io/calico/kube-controllers:v3.21.2

Note: the four Calico images are slow to download; pre-pull them in advance (a sketch follows).
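
A sketch that pre-pulls them on every node from k8s01, scraping the image list out of the manifest (the IP list is the three masters plus the worker):

for node_ip in 192.168.0.108 192.168.0.109 192.168.0.111 192.168.0.112
  do
    echo ">>> ${node_ip}"
    for img in $(grep ' image:' calico.yaml | awk '{print $2}' | sort -u)
      do
        ssh root@${node_ip} "docker pull ${img}"
      done
  done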

[root@k8s01 src]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   68m   v1.19.2
k8s02   Ready    master   54m   v1.19.2
k8s03   Ready    master   57m   v1.19.2
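
To watch the Calico rollout itself:

kubectl get pods -n kube-system -o wide | grep calico
kubectl -n kube-system rollout status daemonset/calico-node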

⑧ Join the worker node to the cluster

[root@k8s04 ~]# kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 

Check the nodes again:

[root@k8s01 src]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   72m   v1.19.2
k8s02   Ready    master   57m   v1.19.2
k8s03   Ready    master   60m   v1.19.2
k8s04   Ready    <none>   49s   v1.19.2

7. Installing Rancher and Importing the Cluster

For this demo, Rancher v2.5.10 is deployed directly on the worker node (192.168.0.112).

① Create the mount directories
mkdir -p /home/rancher/var/lib/cni
mkdir -p /home/rancher/var/lib/kubelet
mkdir -p /home/rancher/var/lib/rancher
mkdir -p /home/rancher/var/log
② Install Rancher with docker

Wait a few minutes; once Rancher finishes starting, open it in a browser.

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /home/rancher/var/lib/cni:/var/lib/cni \
  -v /home/rancher/var/lib/kubelet:/var/lib/kubelet \
  -v /home/rancher/var/lib/rancher:/var/lib/rancher \
  -v /home/rancher/var/log:/var/log \
  --privileged \
  --name myRancher rancher/rancher:v2.5.10

Set the Rancher admin password, log in, and import the k8s cluster.

Reference: https://blog.csdn.net/lswzw/article/details/109027255
Reference: https://www.cnblogs.com/huningfei/p/12759833.html
