Deploying a Kubernetes v1.21.2 Cluster with kubeadm (Highly Available Master Nodes)


I previously wrote "Deploying a Kubernetes v1.21.2 Cluster with kubeadm (Single Master Node)"; this post adds the highly available variant and records how to deploy a K8s v1.21.2 cluster with HA master nodes using kubeadm.

Official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

1. Pre-deployment cluster planning

Host           OS           IP
k8s-master-1   CentOS 7.6   192.168.56.101
k8s-master-2   CentOS 7.6   192.168.56.102
k8s-master-3   CentOS 7.6   192.168.56.103
k8s-node-1     CentOS 7.6   192.168.56.104
k8s-node-2     CentOS 7.6   192.168.56.105

2. Set the hostname and time zone

hostnamectl set-hostname k8s-master-1   ------192.168.56.101
hostnamectl set-hostname k8s-master-2   ------192.168.56.102
hostnamectl set-hostname k8s-master-3   ------192.168.56.103
hostnamectl set-hostname k8s-node-1     ------192.168.56.104
hostnamectl set-hostname k8s-node-2     ------192.168.56.105

#Set the time zone (all nodes)
timedatectl set-timezone Asia/Shanghai
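
A quick sanity check that the hostname and time zone took effect (just a convenience, not part of the original steps):

hostnamectl status | grep "Static hostname"
timedatectl | grep "Time zone"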

3. Basic configuration (all nodes)

#Update the /etc/hosts file
cat >> /etc/hosts << EOF
192.168.56.101 k8s-master-1
192.168.56.102 k8s-master-2
192.168.56.103 k8s-master-3
192.168.56.104 k8s-node-1
192.168.56.105 k8s-node-2
EOF

#Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

#Disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0

#Firewall configuration (most articles online simply disable the firewall;
#here I keep firewalld running and only open what is needed, since production environments usually keep it enabled)
#firewalld rules for K8s
firewall-cmd --permanent --add-masquerade
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="10.96.0.0/16" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="10.244.0.0/16" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.56.101" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.56.102" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.56.103" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.56.104" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.56.105" accept"
firewall-cmd --reload
firewall-cmd --list-all
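
If you prefer opening ports rather than whole source addresses, the commonly documented Kubernetes ports can be added explicitly instead; a sketch (port numbers follow the upstream docs, adjust to your setup; 8443 is the HAProxy frontend used later in this post):

#Control-plane nodes
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=8443/tcp        # HAProxy frontend for the API server
#Worker nodes
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
firewall-cmd --reload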

4. Configure time synchronization

  1. ####Configure chrony (time sync master server: 192.168.56.101)
    With internet access you can install directly with yum. Without internet access, replace the repos under /etc/yum.repos.d so that packages come from a private repository (see my earlier post on setting up a Nexus 3 private repository for maven, yum, apt and nodejs). For example, once the private repository is up, switch the repo files as follows:

cd /etc/yum.repos.d/
mkdir bak
mv *.repo bak

##################
cat > /etc/yum.repos.d/centos.repo << EOF
[centos_os]
name=centos_os
baseurl=http://192.168.56.1:8081/nexus3/repository/centos/7/os/x86_64/
enabled=1
gpgcheck=0

[centos_extras]
name=centos_extras
baseurl=http://192.168.56.1:8081/nexus3/repository/centos/7/extras/x86_64/
enabled=1
gpgcheck=0

[centos_update]
name=centos_update
baseurl=http://192.168.56.1:8081/nexus3/repository/centos/7/updates/x86_64/
enabled=1
gpgcheck=0

[centos_centosplus]
name=centos_centosplus
baseurl=http://192.168.56.1:8081/nexus3/repository/centos/7/centosplus/x86_64/
enabled=1
gpgcheck=0
EOF

##################
cat > /etc/yum.repos.d/epel.repo << EOF
[epel]
name=epel
baseurl=http://192.168.56.1:8081/nexus3/repository/epel/pub/epel/7/x86_64/
enabled=1
gpgcheck=0
EOF

##################
cat > /etc/yum.repos.d/nginx.repo << EOF
[nginx]
name=nginx
baseurl=http://192.168.56.1:8081/nexus3/repository/nginx/packages/centos/7/x86_64/
enabled=1
gpgcheck=0
EOF

##################
cat > /etc/yum.repos.d/remi-safe.repo << EOF
[remi-safe]
name=Safe Remi's RPM repository for Enterprise Linux 7 - x86_64
baseurl=http://192.168.56.1:8081/nexus3/repository/enterprise/7/safe/x86_64/
enabled=1
gpgcheck=0
EOF


##################
cat > /etc/yum.repos.d/docker-ce.repo << EOF
[docker-ce-stable]
name=Docker CE Stable
baseurl=http://192.168.56.1:8081/nexus3/repository/docker-ce/7/x86_64/stable
enabled=1
gpgcheck=0
EOF


##################
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=http://192.168.56.1:8081/nexus3/repository/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF


################## MySQL 5.7
cat > /etc/yum.repos.d/mysql.repo << EOF
[mysql-connectors-community]
name=mysql-connectors-community
baseurl=http://192.168.56.1:8081/nexus3/repository/mysql/yum/mysql-connectors-community/el/7/x86_64/
enabled=1
gpgcheck=0

[mysql-tools-community]
name=mysql-tools-community
baseurl=http://192.168.56.1:8081/nexus3/repository/mysql/yum/mysql-tools-community/el/7/x86_64/
enabled=1
gpgcheck=0

[mysql57-community]
name=mysql57-community
baseurl=http://192.168.56.1:8081/nexus3/repository/mysql/yum/mysql-5.7-community/el/7/x86_64/
enabled=1
gpgcheck=0
EOF

################## Google Chrome
cat > /etc/yum.repos.d/google-chrome.repo << EOF
[google-chrome]
name=google-chrome
baseurl=http://192.168.56.1:8081/nexus3/repository/google-chrome/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=0
EOF
#Install chrony
yum install -y chrony

#Comment out the default NTP servers
sed -i 's/^server/#&/' /etc/chrony.conf

#Point chrony at upstream public NTP servers and allow the other nodes to sync from this host
####With internet access, pull time from public servers:
cat >> /etc/chrony.conf << EOF
server time6.aliyun.com iburst
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF
####Without internet access, use the local clock as the reference instead:
sed -i 's/^#local stratum 10/local stratum 10/' /etc/chrony.conf
cat >> /etc/chrony.conf << EOF
server 192.168.56.101 iburst
allow all
EOF

#Restart chronyd and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd

#Enable network time synchronization
timedatectl set-ntp true
  2. ####Configure chrony (all other nodes)
    The other nodes synchronize their time from the time sync master server 192.168.56.101.
######Configure every other node (adjust the IP address if yours differs):
#安装chrony:
yum install -y chrony

#Comment out the default servers
sed -i 's/^server/#&/' /etc/chrony.conf

#Use the internal master node as the upstream NTP server
echo server 192.168.56.101 iburst >> /etc/chrony.conf

#Restart the service and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd

#Run chronyc sources on every node; a line starting with ^* means the node is synchronized with the server
chronyc sources
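
Optionally, chronyc tracking shows whether the local clock is actually locked to the selected source:

chronyc tracking
chronyc sources -v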

5. Adjust iptables-related kernel parameters (all nodes)

#Some users on RHEL/CentOS 7 have reported traffic being routed incorrectly because iptables is bypassed. Create /etc/sysctl.d/k8s.conf with the following content:

cat <<EOF >  /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the configuration (load br_netfilter first so the bridge keys exist)
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
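
br_netfilter is not loaded automatically after a reboot, so it is worth persisting the module and double-checking the values; a small addition on top of the original steps:

#Load br_netfilter at every boot
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF

#Verify (the three net.* keys should be 1, vm.swappiness should be 0)
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward vm.swappiness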

6. Load the IPVS kernel modules (all nodes)

IPVS is part of the mainline kernel, but enabling IPVS mode for kube-proxy still requires the following kernel modules to be loaded. Run this script on every Kubernetes node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

#Make the script executable, run it, and confirm the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

7. Install the ipset and ipvsadm management tools (all nodes)

yum install ipset ipvsadm -y
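
Loading the modules by itself does not switch kube-proxy into IPVS mode; kubeadm defaults to iptables. If you want IPVS, one common way (a sketch, to be done after the cluster is up in the later sections) is to edit the kube-proxy ConfigMap and restart its pods:

#Set mode: "ipvs" in the ConfigMap
kubectl -n kube-system edit configmap kube-proxy
#Recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pods -l k8s-app=kube-proxy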

8. Install and start Docker (all nodes)

  1. ####Configure the Docker yum repository
    With internet access, add the Docker yum repository directly:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Without internet access, point the repos under /etc/yum.repos.d at your private mirror instead (see the offline steps in the chrony section above).

  2. ####Install a specific version, here 3:20.10.7-3.el7
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-3:20.10.7-3.el7.x86_64
  3. ####Configure a Docker registry mirror and set the cgroup driver to systemd
    With internet access:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF

Without internet access you will usually run your own private registry, so add its address as well; the only difference from the file above is one extra line, for example: "insecure-registries": ["192.168.56.1:5000"]

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries": ["192.168.56.1:5000"],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
  4. ####By default Docker stores images and containers under /var/lib/docker. If the disk holding that directory is small it fills up quickly, so you can relocate the data to a larger disk and symlink it (skip this step if you do not need to move the storage). Here I move it to /home/data/docker with ln -s.
## Symlink /var/lib/docker onto /home
mkdir /home/data/docker -p
ln -s /home/data/docker /var/lib/docker
  5. ####Start Docker
systemctl start docker && systemctl enable docker

##The following output shows Docker is running
[root@k8s-master-1 ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
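
Optionally confirm that the systemd cgroup driver from daemon.json is in effect:

docker info 2>/dev/null | grep -i "cgroup driver"
#Expected output: Cgroup Driver: systemd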

9. Install kubeadm, kubelet and kubectl (all nodes)

  1. ####Configure the kubernetes.repo repository. The official repo is not reachable from inside China, so the Aliyun mirror is used here.
    With internet access, configure it directly:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Without internet access, point the repo at your private mirror instead.

  2. ####Install the specified versions of kubelet, kubeadm and kubectl on all nodes
yum list kubeadm --showduplicates | sort -r
yum install -y kubelet-1.21.2-0 kubeadm-1.21.2-0 kubectl-1.21.2-0
  3. ####Likewise, kubelet stores its data under /var/lib/kubelet by default. If you need a different data directory, symlink it with ln -s (skip this step otherwise); here I use /home/data/kubelet.
#Relocate the kubelet data directory
mkdir /home/data/kubelet -p
ln -s /home/data/kubelet /var/lib/kubelet
  4. ####Enable kubelet at boot. There is no need to run systemctl start kubelet; kubeadm starts it during cluster bootstrap.
systemctl enable kubelet
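
A quick check that the expected versions were installed:

kubeadm version -o short
kubelet --version
kubectl version --client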

10. Install HAProxy and Keepalived (all three master nodes)

1. Install keepalived on the three masters

####
yum install -y keepalived

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 70
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.56.140
    }
}
EOF

####Start
systemctl enable keepalived && systemctl start keepalived

###Check that the virtual IP sits on master1, then stop keepalived on master1 to verify the VIP fails over to another node
ip a
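
A simple failover test, assuming the interface name enp0s3 from the config above (with identical priorities on all three nodes, the VIP simply moves to whichever backup wins the new election):

#On k8s-master-1: confirm the VIP is present, then stop keepalived
ip a show enp0s3 | grep 192.168.56.140
systemctl stop keepalived
#On k8s-master-2 / k8s-master-3: the VIP should appear within a few seconds
ip a show enp0s3 | grep 192.168.56.140
#Restore k8s-master-1 afterwards
systemctl start keepalived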

2. Install HAProxy (all three master nodes)

###The configuration is identical on all three masters

yum install -y haproxy

cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:8443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master1  192.168.56.101:6443 check maxconn 2000
    server master2  192.168.56.102:6443 check maxconn 2000
    server master3  192.168.56.103:6443 check maxconn 2000
EOF

####Start
systemctl enable haproxy && systemctl start haproxy
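
Validate the configuration file and confirm HAProxy is listening on 8443:

haproxy -c -f /etc/haproxy/haproxy.cfg
ss -lntp | grep 8443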

11. Deploy the first master node (k8s-master-1, 192.168.56.101); at least 2 CPU cores are required

  1. ####Run kubeadm init on the master node; the --image-repository flag selects the Aliyun mirror or a private registry as the image source
    When pulling from the internet:
####Initialize k8s-master-1
kubeadm init \
    --control-plane-endpoint "192.168.56.140:8443" \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.21.2 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs

When pulling from a private registry:

####Initialize k8s-master-1
kubeadm init \
    --control-plane-endpoint "192.168.56.140:8443" \
    --image-repository 192.168.56.1:5000/google_containers \
    --kubernetes-version v1.21.2 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs

With internet access, just use the first command, which pulls the images from Aliyun. I use the private-registry variant because our production environments generally have no internet access. The initialization output is shown below ---->>>>

[root@k8s-master-1 ~]# kubeadm init \
>     --control-plane-endpoint "192.168.56.140:8443" \
>     --image-repository 192.168.56.1:5000/google_containers \
>     --kubernetes-version v1.21.2 \
>     --pod-network-cidr=10.244.0.0/16 \
>     --upload-certs
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101 192.168.56.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 14.529641 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
73b0b6328e0193557d46ff6cb0f01e2c44967fe4e2bacde4d9ccee1baed6d011
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ir8tfi.ezn32y97tympnrtn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.56.140:8443 --token ir8tfi.ezn32y97tympnrtn \
        --discovery-token-ca-cert-hash sha256:4f42525ddf9535fb69a5bb4a830d39e190d9c5bb3b0760e279c27460e7d79e19 \
        --control-plane --certificate-key 73b0b6328e0193557d46ff6cb0f01e2c44967fe4e2bacde4d9ccee1baed6d011

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.140:8443 --token ir8tfi.ezn32y97tympnrtn \
        --discovery-token-ca-cert-hash sha256:4f42525ddf9535fb69a5bb4a830d39e190d9c5bb3b0760e279c27460e7d79e19

Configure kubectl:

####Configure kubectl
####For a user on this machine to manage the cluster with kubectl, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
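
At this point kubectl should reach the API server through the VIP; a quick check:

kubectl cluster-info
kubectl get pods -n kube-system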

12. Deploy k8s-master-2 and k8s-master-3

####Run on k8s-master-2 and k8s-master-3 (the command comes from the k8s-master-1 init output)

kubeadm join 192.168.56.140:8443 --token ir8tfi.ezn32y97tympnrtn \
        --discovery-token-ca-cert-hash sha256:4f42525ddf9535fb69a5bb4a830d39e190d9c5bb3b0760e279c27460e7d79e19 \
        --control-plane --certificate-key 73b0b6328e0193557d46ff6cb0f01e2c44967fe4e2bacde4d9ccee1baed6d011
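
Note that the uploaded control-plane certificates expire after two hours (see the init output above). If the join happens later than that, regenerate the certificate key on k8s-master-1 and substitute it into the command:

kubeadm init phase upload-certs --upload-certs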

####Likewise, configure kubectl on k8s-master-2 and k8s-master-3

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

13. Check the cluster

kubectl get nodes

####Result: every node is NotReady because no network plugin is installed yet
[root@k8s-master-1 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master-1   NotReady   control-plane,master   6m52s   v1.21.2
k8s-master-2   NotReady   control-plane,master   91s     v1.21.2
k8s-master-3   NotReady   control-plane,master   46s     v1.21.2

14. Deploy the network plugin (on k8s-master-1, 192.168.56.101)

Either Calico or Flannel can be used as the network plugin; Calico is used here.

  1. ####Download calico.yaml
wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml
  2. ####Edit calico.yaml and add a line imagePullPolicy: IfNotPresent under every image: entry, so that Kubernetes stops pulling from the internet once the images are available locally (see the sketch below).
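
One way to do the edit in bulk is with GNU sed; a sketch that assumes the images in calico.yaml are referenced as docker.io/calico/..., as in the v3.19 manifest (check your copy first):

#Append imagePullPolicy after every calico image line, reusing the line's indentation
sed -i 's|^\( *\)image: docker.io/calico\(.*\)|\1image: docker.io/calico\2\n\1imagePullPolicy: IfNotPresent|' calico.yaml
#Verify the result
grep -n -A1 'image: docker.io/calico' calico.yaml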
  3. ####With more than one network interface, Calico may auto-detect the wrong one and leave the pod network unreachable; pinning the interface (NIC name) fixes it.

In calico.yaml, below

- name: CLUSTER_TYPE
  value: "k8s,bgp" 

add the following two lines:

- name: IP_AUTODETECTION_METHOD
  value: "interface=enp0s3" 
  4. ####Apply the manifest with kubectl apply -f calico.yaml
kubectl apply -f calico.yaml

A warning was printed; following its hint, change policy/v1beta1 to policy/v1 in calico.yaml, then apply it again:

kubectl apply -f calico.yaml

After that the apply goes through cleanly (screenshots omitted).
  5. ####Preparing the images by hand when there is no internet access
With internet access the step above should just work. I have none, so I first pulled the four images below on a Docker host that does have internet access, saved them to files, copied the files onto the internal Docker hosts, and imported them on every node with docker load -i.

I also put the files on Baidu Netdisk as a backup:
Link: https://pan.baidu.com/s/1l75HAnjQhbUVfvG-O6tc2w
Extraction code: cxe1

#Pull
docker pull docker.io/calico/cni:v3.19.4
#Save to a local file
docker save docker.io/calico/cni:v3.19.4 -o cni_v3.19.4.tar.gz
#Import on the target host
docker load -i cni_v3.19.4.tar.gz

docker pull docker.io/calico/pod2daemon-flexvol:v3.19.4
docker save docker.io/calico/pod2daemon-flexvol:v3.19.4 -o flexvol_v3.19.4.tar.gz
docker load -i flexvol_v3.19.4.tar.gz

docker pull docker.io/calico/node:v3.19.4
docker save docker.io/calico/node:v3.19.4 -o node_v3.19.4.tar.gz
docker load -i node_v3.19.4.tar.gz

docker pull docker.io/calico/kube-controllers:v3.19.4
docker save docker.io/calico/kube-controllers:v3.19.4 -o controllers_v3.19.4.tar.gz
docker load -i controllers_v3.19.4.tar.gz
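
Instead of running docker load on every node, the images can also be retagged and pushed into the private registry configured earlier (192.168.56.1:5000, already listed under insecure-registries); the image: references in calico.yaml would then need to point at that registry. A sketch:

docker tag docker.io/calico/cni:v3.19.4 192.168.56.1:5000/calico/cni:v3.19.4
docker push 192.168.56.1:5000/calico/cni:v3.19.4
#Repeat for calico/node, calico/pod2daemon-flexvol and calico/kube-controllers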

After importing the images the cluster deployed successfully:

[root@k8s-master-1 testuser]# kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master-1   Ready    control-plane,master   60m   v1.21.2   192.168.56.101   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://20.10.7
k8s-master-2   Ready    control-plane,master   55m   v1.21.2   192.168.56.102   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://20.10.7
k8s-master-3   Ready    control-plane,master   54m   v1.21.2   192.168.56.103   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://20.10.7

Check the pod status with kubectl get pods --all-namespaces:

[root@k8s-master-1 testuser]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7cc8dd57d9-vtxxs   1/1     Running   0          5m14s
kube-system   calico-node-7znz5                          1/1     Running   0          5m14s
kube-system   calico-node-njtx6                          1/1     Running   0          5m14s
kube-system   calico-node-z7cfg                          1/1     Running   0          5m14s
kube-system   coredns-5c9594667d-2tbr2                   1/1     Running   0          8m48s
kube-system   coredns-5c9594667d-trjbv                   1/1     Running   0          8m48s
kube-system   etcd-k8s-master-1                          1/1     Running   0          8m56s
kube-system   etcd-k8s-master-2                          1/1     Running   0          8m20s
kube-system   etcd-k8s-master-3                          1/1     Running   0          8m7s
kube-system   kube-apiserver-k8s-master-1                1/1     Running   0          8m56s
kube-system   kube-apiserver-k8s-master-2                1/1     Running   0          8m21s
kube-system   kube-apiserver-k8s-master-3                1/1     Running   0          7m52s
kube-system   kube-controller-manager-k8s-master-1       1/1     Running   1          8m56s
kube-system   kube-controller-manager-k8s-master-2       1/1     Running   0          8m21s
kube-system   kube-controller-manager-k8s-master-3       1/1     Running   0          7m59s
kube-system   kube-proxy-8cd27                           1/1     Running   0          7m44s
kube-system   kube-proxy-krtj4                           1/1     Running   0          8m49s
kube-system   kube-proxy-mv272                           1/1     Running   0          8m22s
kube-system   kube-scheduler-k8s-master-1                1/1     Running   1          8m56s
kube-system   kube-scheduler-k8s-master-2                1/1     Running   0          8m21s
kube-system   kube-scheduler-k8s-master-3                1/1     Running   0          8m7s

15. Deploy k8s-node-1 and k8s-node-2

Deploying the worker nodes is simple; just run the join command:

kubeadm join 192.168.56.140:8443 --token ir8tfi.ezn32y97tympnrtn \
        --discovery-token-ca-cert-hash sha256:4f42525ddf9535fb69a5bb4a830d39e190d9c5bb3b0760e279c27460e7d79e19
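
Note that the bootstrap token from kubeadm init is only valid for 24 hours. If a worker is added later than that, generate a fresh join command on any master first (standard kubeadm behaviour, not shown in the original output):

kubeadm token create --print-join-command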

This completes the highly available cluster with three master nodes.
