Kubernetes: Building a K8S Cluster with Kubeadm

Setting up a Kubernetes cluster with multiple master nodes.

I. Node Planning

Hostname                    IP                                            Role
xxx-01 / xxx-02 / xxx-04    192.168.2.41 / 192.168.2.42 / 192.168.2.44    Master / Worker node
VIP                         192.168.2.233                                 keepalived virtual IP

Kubernetes version used: 1.20.8

II. Environment Preparation

1. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

2. Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config 
setenforce 0

3. Disable swap

swapoff -a                            # disable temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab   # disable permanently
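
You can confirm swap is off; the Swap line reported by free should show 0 total:

free -m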

4. Set the hostname

hostnamectl set-hostname <hostname>

5. Add hostname-to-IP mappings

vi /etc/hosts

## Append the following entries
192.168.2.41 xxx-01
192.168.2.42 xxx-02
192.168.2.44 xxx-04

6. Configure the k8s.conf sysctl file

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

7. Apply the settings

sysctl --system
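
These bridge sysctls are only available once the br_netfilter kernel module is loaded. If sysctl --system reports the keys as missing, load the module first (a common extra step, assumed here and not shown in the original steps):

modprobe br_netfilter
## Load the module on every boot as well
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF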

III. Install Docker

1. Remove old Docker versions

yum remove docker \
           docker-common \
           docker-selinux \
           docker-engine

2. Install Docker

# Step 1: Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Add the repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: Update and install Docker-CE
yum -y install docker-ce docker-ce-selinux
# To install a specific version of Docker-CE:
# Step 3.1: List the available Docker-CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
# Step 3.2: Install the chosen version
# yum -y --setopt=obsoletes=0 install docker-ce-[VERSION] docker-ce-selinux-[VERSION]
# Step 4: Enable and start the Docker service
systemctl enable docker && systemctl start docker

A problem encountered while installing Docker

package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed

Solution

# Packages are available at https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/edge/Packages
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm

3. Verify

docker version

4. Configure the Aliyun registry mirror (Docker accelerator)

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://0bb06s1q.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
# Restart docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker.service
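
To confirm the daemon actually picked up the systemd cgroup driver (it should match the driver kubelet uses), check docker info:

docker info | grep -i "cgroup driver"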

IV. Install the High-Availability Components

To keep the cluster highly available, haproxy and keepalived must be installed on every master node.
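
The steps below jump straight to configuration; on CentOS 7 the packages themselves are typically installed up front with yum (an assumed step, not shown in the original):

yum install -y haproxy keepalived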

1. Install haproxy

Note: the haproxy configuration is identical on all master nodes; the configuration below is used on every master node.

## Create the directory
mkdir /etc/haproxy

## Create the configuration file (the standard path is /etc/haproxy/haproxy.cfg)
vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server xxx-01        192.168.2.41:6443  check
  server xxx-02        192.168.2.42:6443  check
  server xxx-04        192.168.2.44:6443  check
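
Before starting the service, the configuration can be syntax-checked (assuming it was saved as /etc/haproxy/haproxy.cfg as above):

haproxy -c -f /etc/haproxy/haproxy.cfg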

2. Install keepalived

Note: the keepalived configuration differs between the master nodes; pay attention to the following parameters

weight: the weight (priority adjustment) applied by the check script

priority: the node's election priority

mcast_src_ip: the master node's own IP address

virtual_ipaddress: the virtual IP (the VIP must be an unused address; using a real host's IP may cause an IP conflict)

## Create the directory
mkdir /etc/keepalived

## Create the configuration file
vim /etc/keepalived/keepalived.conf

## xxx-01 configuration
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface enp2s0
    mcast_src_ip 192.168.2.41
    virtual_router_id 51
    priority 200
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.233
    }
    track_script {
       chk_apiserver
    }
}

## xxx-02 configuration
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface enp2s0
    mcast_src_ip 192.168.2.42
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.233
    }
    track_script {
       chk_apiserver
    }
}

## xxx-04 configuration
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface enp2s0
    mcast_src_ip 192.168.2.44
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.233
    }
    track_script {
       chk_apiserver
    }
}

Configure the keepalived health-check script

vim /etc/keepalived/check_apiserver.sh 

#!/bin/bash
## If haproxy is not running on this node, stop keepalived so the VIP fails over to another master

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi


chmod +x /etc/keepalived/check_apiserver.sh

Start haproxy and keepalived

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

Test the VIP

## Check that the VIP responds to ping
ping 192.168.2.233
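
You can also check which master currently holds the VIP by inspecting the interface named in the keepalived config (enp2s0 here); the address should appear on exactly one node:

ip addr show enp2s0 | grep 192.168.2.233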

V. Install kubeadm, kubelet, and kubectl

1. Add the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install

Pin the version to 1.20.8

yum install -y kubelet-1.20.8 kubeadm-1.20.8 kubectl-1.20.8

Enable kubelet to start on boot

systemctl enable kubelet && systemctl start kubelet

3. Export the default kubeadm configuration for customization

kubeadm config print init-defaults > init.default.yaml 

4. Edit the configuration file

The main fields to change are listed below (see also the note after this list)

advertiseAddress: the master node's IP

imageRepository: replace with a domestic mirror, e.g. registry.aliyuncs.com/google_containers
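
Note: the exported defaults contain no load-balanced endpoint. For the HA layout above (VIP 192.168.2.233 with haproxy listening on 16443), the ClusterConfiguration would normally also need controlPlaneEndpoint; this is an assumed addition and is not part of the original configuration shown below:

## Assumed addition to the ClusterConfiguration section of init.default.yaml
controlPlaneEndpoint: "192.168.2.233:16443"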

The configuration for xxx-01 looks like this:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: xxx-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Adjust the configuration on the other master nodes in the same way.

5. Pull the Kubernetes images from the Aliyun registry

kubeadm config images list --config init.default.yaml

kubeadm config images pull --config init.default.yaml

VI. Install the First Master

1. Run kubeadm init

kubeadm init --config=init.default.yaml

Once the installation succeeds, kubeadm prints a summary that includes the kubectl setup steps and the kubeadm join command.

2. Configure the kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Check the cluster status

kubectl get node

At this point the node shows as NotReady because no pod network add-on has been installed yet.
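
Until a CNI plugin is installed the CoreDNS pods also stay in Pending; their state can be watched with:

kubectl get pods -n kube-system -o wide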

VII. Install the Remaining Masters and the Worker Nodes

1. On the first master node (xxx-01), copy the certificates and config to the other master nodes

cat scp.sh
USER=root
CONTROL_PLANE_IPS="192.168.2.42 192.168.2.44"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
    ssh ${USER}@${host} 'mkdir -p /etc/kubernetes/pki/etcd'
    ssh ${USER}@${host} 'mv /${USER}/ca.crt /etc/kubernetes/pki/'
    ssh ${USER}@${host} 'mv /${USER}/ca.key /etc/kubernetes/pki/'
    ssh ${USER}@${host} 'mv /${USER}/sa.pub /etc/kubernetes/pki/'
    ssh ${USER}@${host} 'mv /${USER}/sa.key /etc/kubernetes/pki/'
    ssh ${USER}@${host} 'mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/'
    ssh ${USER}@${host} 'mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/'
    ssh ${USER}@${host} 'mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt'
    ssh ${USER}@${host} 'mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key'
    ssh ${USER}@${host} 'mv /${USER}/admin.conf /etc/kubernetes/admin.conf'
done

2. Generate a join token on xxx-01

kubeadm token create --print-join-command

3. Join the worker nodes (run the command printed in the previous step; the values below are an example)

kubeadm join 192.168.175.142:6443 --token 59r84s.cva934eqpe8lw18z --discovery-token-ca-cert-hash sha256:31aa8a62c04fc2cc620e5f00fbc16e0aa64317648741504a598aa11a31dc42b9

4. Join the other master nodes

Append --control-plane to the generated join command

kubeadm join 192.168.175.142:6443 --token 59r84s.cva934eqpe8lw18z --discovery-token-ca-cert-hash sha256:31aa8a62c04fc2cc620e5f00fbc16e0aa64317648741504a598aa11a31dc42b9 --control-plane
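
Once the remaining masters and the worker nodes have joined, the membership can be verified from any node that has the admin kubeconfig:

kubectl get nodes -o wide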

VIII. Install Calico

Installation:

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

If that does not work, use the following approach instead

1. Download the Calico manifests

cd /root/
git clone https://github.com/dotbalo/k8s-ha-install.git

2. Run the following only on xxx-01

cd /root/k8s-ha-install && git checkout manual-installation-v1.20.x && cd calico/

Modify the following places in calico-etcd.yaml

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.2.41:2379,https://192.168.2.42:2379,https://192.168.2.44:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml


sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

3. Create the Calico resources

kubectl apply -f calico-etcd.yaml
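
After applying the manifest, watch the Calico pods come up; once they are Running, the nodes should move to Ready:

kubectl get pods -n kube-system -o wide | grep calico
kubectl get nodes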

IX. Install a Web UI

Install Kuboard

1. Install Kuboard v3 into the cluster

## Online installation
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml

2. Wait for Kuboard v3 to become ready

## Watch until the pods are ready
watch kubectl get pods -n kuboard

3. Access Kuboard

URL: http://your-node-ip-address:30080
Username: admin
Password: Kuboard123

X. Common Commands

kubectl get nodes
kubectl get nodes -o wide
kubectl describe node nodename
kubectl delete node nodename

kubectl get pods
kubectl get pods -n namespace
kubectl get pods -n namespace -o wide
kubectl describe pods -n namespace podname
kubectl delete pods -n namespace podname

kubectl apply -f xxx.yaml
kubectl delete -f xxx.yaml

kubeadm reset

XI. Common Issues

1. CoreDNS image pull failure when using Kubernetes 1.21.3

A: The Kubernetes version is too new, and the required image tag may not have been synced to the configured domestic mirror yet, so the pull fails. It can be worked around as follows

## Pull the image
docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0

## Retag it
docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0

## Remove the original image
docker rmi registry.aliyuncs.com/google_containers/coredns:1.8.0

## Then run the initialization again

Note: the image is pulled successfully, but the corresponding pod still does not reach the Ready state; this is still unresolved.

2. How do I reset a node?

A: Use kubeadm reset. It normally resets a node, but in my experience, in a multi-master cluster it can fail to reset a master node. In that case the Kubernetes-related files can be removed by force with the following commands

modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd

Note: after doing this the related binaries no longer exist, so kubeadm, kubectl, and kubelet must be reinstalled.

## Remove the packages
yum remove kubeadm kubectl kubelet

## Reinstall
yum install -y kubelet-1.20.8 kubeadm-1.20.8 kubectl-1.20.8

3. What if a pod does not reach the Ready state?

A: Inspect it with the following command

## namespace and podname are placeholders; use the actual names
kubectl describe pods -n namespace podname

Fix whatever problem the describe output reports; if that still does not solve it, try kubectl delete pods -n namespace podname so the pod is recreated.
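
Besides describe, the container logs and the recent events often reveal the root cause (namespace and podname are placeholders, as above):

kubectl logs -n namespace podname
kubectl get events -n namespace --sort-by=.metadata.creationTimestamp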

XII. References

1. https://blog.csdn.net/u013164931/article/details/105548102/

2. https://blog.csdn.net/xtjatswc/article/details/109234575

3. https://kuboard.cn/
