Quickly Deploying a Highly Available Kubernetes (K8s) Cluster

1. Prerequisites

1.1 Tool Preparation

1.1.1 Installing the Automation Tool: Ansible

A brief introduction to Ansible

Ansible is a relatively recent automation tool. Built on Python, it combines the strengths of many earlier tools (Puppet, CFEngine, Chef, Func, Fabric) and provides batch system configuration, batch software deployment, batch command execution, and more.
Ansible works through modules and has no deployment capability of its own: the modules it runs do the real work, while Ansible itself provides the framework. Its main pieces are:
(1) connection plugins: handle communication with the managed hosts;
(2) host inventory: a configuration file that defines which hosts to manage;
(3) modules: core modules, the command module, and custom modules;
(4) plugins that provide logging, email, and other auxiliary functions;
(5) playbooks: optional; they let you run a whole set of tasks on the nodes in one go.
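
To get a feel for how the inventory and modules fit together, here is a minimal ad-hoc example (it assumes the k8smaster inventory group that is configured later in this guide):

# Probe every host in the k8smaster group with the ping module
ansible k8smaster -i /etc/ansible/hosts -m ping
# Run a one-off command on all of them with the command module
ansible k8smaster -i /etc/ansible/hosts -m command -a 'uptime'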

Installing Ansible

# Install on macOS
brew install ansible

Installation on other platforms is left to the reader; we won't expand on it here.
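
That said, the cluster nodes in this guide run CentOS 7, where a commonly used route (a sketch, assuming the EPEL repository is reachable) is:

# CentOS 7: Ansible is packaged in EPEL
yum install -y epel-release
yum install -y ansible
# Verify the installation
ansible --version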

1.2 Environment Preparation

Prepare four machines (physical, cloud, or virtual), each with at least 2 GB of RAM, 2 or more CPU cores, and at least 30 GB of disk, running CentOS 7.x.

Setting up passwordless SSH

Run the following script on your local machine to push your public key to all of the hosts, which enables passwordless login:

#!/bin/bash
# Push the local public key to every server
# Remember to generate a key pair first: ssh-keygen
# Server password
password=zss@0418
# Last octet of each server IP (note: replace both occurrences of 10.211.55.xx)
for i in {16,17,18,19}
  do
    expect <<-EOF
    set timeout 5
    spawn ssh-copy-id -i root@10.211.55.$i
    expect {
    "password:" { send "$password\n" }
    }
    expect eof
EOF
done

Run the script:

sh local_copy_ssh_to_host.sh
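
To confirm passwordless login works, a quick check against the same hosts (BatchMode makes ssh fail instead of prompting if key auth is broken):

for i in 16 17 18 19; do
  ssh -o BatchMode=yes root@10.211.55.$i hostname
done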

2. System Architecture

2.1 Architecture Requirements

  • Three machines meeting kubeadm's minimum requirements, for the control-plane (master) nodes
  • Machines meeting kubeadm's minimum requirements, for the worker nodes (one in this guide)
  • Full network connectivity between all machines in the cluster (public or private network)
  • sudo privileges on all machines
  • SSH access from one machine to every node in the system
  • kubeadm and kubelet installed on every machine; kubectl is optional.
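
A quick sketch for sanity-checking the hardware requirements on each node:

# Expect at least 2 cores, 2 GB of RAM, and 30 GB of disk
nproc
free -h
df -h /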

2.2 Architecture Diagram

[Architecture diagram: three master nodes behind the keepalived VIP (k8svip, 10.211.55.15) fronted by haproxy, plus one worker node]

3. Deployment Methods (choose one)

3.1 One-Step Scripted Deployment (requires Ansible)

3.1.1 Configuring the Ansible hosts file

The script takes care of configuring the environment and installing Docker and the k8s components on all machines, cutting down on manual work.

With Ansible installed, do the following:

# Create the directory
mkdir -p /etc/ansible
cd /etc/ansible
vi hosts
# Add the following content to hosts
[k8smaster]
# Replace with the IPs of your own environment
10.211.55.16 hostname=master01
10.211.55.17 hostname=master02
10.211.55.18 hostname=master03
10.211.55.19 hostname=worker01

[k8s:children]
k8smaster

[all:vars]
ansible_ssh_user=root
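
With the inventory in place, confirm Ansible can reach every node before going further (/etc/ansible/hosts is the default inventory, so no -i flag is needed):

ansible k8s -m ping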

Once that is done, unpack the provided k8s-script package, enter the k8s-script directory, and adjust a few scripts as follows.
Find the alik8simages.sh file and set the k8s version:

#!/bin/bash
list='kube-apiserver:v1.21.3
kube-controller-manager:v1.21.3
kube-scheduler:v1.21.3
kube-proxy:v1.21.3
pause:3.4.1
etcd:3.4.13-0
coredns:v1.8.0'
for item in ${list}
  do

    docker pull registry.aliyuncs.com/google_containers/$item && docker tag registry.aliyuncs.com/google_containers/$item k8s.gcr.io/$item && docker rmi registry.aliyuncs.com/google_containers/$item

  done

#What the loop above does for a single image, e.g.:
#docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5
#docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5 k8s.gcr.io/kube-apiserver:v1.19.5
#docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.5

Next, edit the registry mirror address in the daemon.json under the docker directory; this file also sets the cgroup driver to systemd, which kubeadm recommends:

{
    "registry-mirrors": ["https://s2q9fn53.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-opts": {
        "max-size": "100m"
    }
}

With that done, we edit the most important script, k8sinstall.yml. The changes involve the k8s version and the node IPs.

---
- hosts: k8smaster
  gather_facts: no
  vars:
    # master node IP; change to match your environment
    - master_ip: 10.211.55.16
    # k8s version; adjust to the version you use
    - k8s_version: 1.21.3
    # docker version; adjust to the version you use
    - docker_version: 20.10.0
  tasks:
    - name: set hostname
      shell: |
        hostnamectl set-hostname {{ hostname }}
        if [ ! -d /root/k8s ] ; then mkdir /root/k8s ; fi
        if [ ! -d /etc/docker ] ; then mkdir /etc/docker ; fi
      ignore_errors: True
    - name: config hosts
      shell:
        cmd: |
          cat >> /etc/hosts << EOF
          # Adjust the node IPs for your environment (this comment line is harmless in /etc/hosts)
          10.211.55.15 k8svip
          10.211.55.16 master01
          10.211.55.17 master02
          10.211.55.18 master03
          10.211.55.19 worker01
          EOF
    - name: close firewalld
      service:
        name: firewalld
        state: stopped
        enabled: no
    - name: temp close selinux
      shell: /sbin/setenforce 0
    - name: premanment close selinux
      lineinfile:
        path: /etc/selinux/config
        regexp: '^SELINUX='
        line: SELINUX=disabled
    - name: close swap
      shell: swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
    - name: install yum_tools
      yum:
        name: yum-utils
    - name: download docker repo
      shell: yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    - name: install docker need tools and docker-ce
      yum: 
        name: "{{ packages }}"
      vars:
        packages:
          - device-mapper-persistent-data
          - lvm2
          - ntpdate
          - docker-ce-{{ docker_version }}
    - name: config docker daemon
      copy:
        src: ./etc/docker/daemon.json
        dest: /etc/docker/daemon.json
    - name: start docker
      service:
        name: docker
        state: started
        enabled: yes
    - name: sync time
      shell: "ntpdate time.windows.com"
    - name: set kubernetes yum repo
      shell:
        cmd: |
          cat <<EOF > /etc/yum.repos.d/kubernetes.repo
          [kubernetes]
          name=Kubernetes
          baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
          enabled=1
          gpgcheck=1
          repo_gpgcheck=1
          gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
          EOF
    - name: install kubernetes
      yum:
        name: 
          - kubelet-{{ k8s_version }}
          - kubeadm-{{ k8s_version }}
          - kubectl-{{ k8s_version }}
    - name: start kubelet
      service:
        name: kubelet
        state: started
        enabled: yes
    - name: copy alik8simages.sh
      copy:
        src: ./k8s
        dest: /root/
    - name: pull alik8simages
      shell: bash ./alik8simages.sh
      args:
        chdir: /root/k8s/
    - name: pull flannel
      shell: docker pull quay.io/coreos/flannel:v0.13.1-rc1

After saving these changes to k8sinstall.yml, run the command below and wait for it to complete across all machines:

ansible-playbook k8sinstall.yml
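
If anything goes wrong mid-run, two flags are handy: --syntax-check validates the playbook without executing it, and --limit re-runs it against a single host:

# Validate the playbook without executing it
ansible-playbook k8sinstall.yml --syntax-check
# Re-run against one node only, e.g. after fixing a failure there
ansible-playbook k8sinstall.yml --limit 10.211.55.19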

3.2 Manual Deployment

Environment initialization:

#Set the hostname per the plan (run the matching line on each of the 4 machines)
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname worker01

#Run on all machines
cat >> /etc/hosts << EOF
10.211.55.15 k8svip
10.211.55.16 master01
10.211.55.17 master02
10.211.55.18 master03
10.211.55.19 worker01
EOF

#Set up passwordless login
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa &> /dev/null
ssh-copy-id root@master01
ssh-copy-id root@master02
ssh-copy-id root@master03
ssh-copy-id root@worker01


#Disable the firewall (run on all 4 machines)
systemctl stop firewalld && systemctl disable firewalld

#Disable selinux (run on all 4 machines)
sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0

#Disable swap (run on all 4 machines)
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab


#Sync the clocks (run on all 4 machines)
yum install ntpdate -y && ntpdate time.windows.com
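
After initialization, a short check on each node confirms the settings took effect:

# SELinux: expect Permissive now (Disabled after a reboot)
getenforce
# Swap: the Swap line should read all zeros
free -h | grep -i swap
# Firewall: expect inactive
systemctl is-active firewalld
# Hostnames should resolve via /etc/hosts
ping -c 1 master01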

Installing Docker

# Step 1: Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: Refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce-20.10.0
# Step 4: Start the Docker service
sudo systemctl start docker && sudo systemctl enable docker

# Note:
# The official repo enables only the latest packages by default; other channels can be enabled by editing the repo file. For example, the test channel is not enabled by default and can be turned on like this (the same applies to other channels):
# vim /etc/yum.repos.d/docker-ce.repo
#   Under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific version of Docker CE:
# Step 1: List the available Docker CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: Install the chosen version (VERSION is, e.g., 17.03.0.ce-1.el7.centos from the listing above):
# sudo yum -y install docker-ce-[VERSION]

# Registry mirror: rather than using "https://s2q9fn53.mirror.aliyuncs.com", log in to Aliyun and get your own accelerator address from the Container Registry console.
# Configure the accelerator via the daemon config file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
# Also set the cgroup driver to systemd, matching the scripted deployment above
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://s2q9fn53.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
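
Both settings can be verified with docker info:

# Should report "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"
# Should list the configured mirror
docker info | grep -iA1 "registry mirrors"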

Installing kubelet, kubeadm, and kubectl

#Add the Aliyun Kubernetes YUM repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3 && systemctl enable kubelet && systemctl start kubelet
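
Verify the versions before proceeding. Note that kubelet will restart in a crash loop until kubeadm init runs; that is expected:

kubeadm version
kubelet --version
kubectl version --client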

Deploying the Kubernetes control plane (run on the first master node only)

#Note: before kubeadm init, pre-stage the container images k8s needs
#List the images kubernetes requires
kubeadm config images list

#A small shell script that pulls all of the required images
#(EOF is quoted so that ${list} and $item are written literally, not expanded now)
cat >> alik8simages.sh << 'EOF'
#!/bin/bash
list='kube-apiserver:v1.21.3
kube-controller-manager:v1.21.3
kube-scheduler:v1.21.3
kube-proxy:v1.21.3
pause:3.4.1
etcd:3.4.13-0
coredns:v1.8.0'
for item in ${list}
  do

    docker pull registry.aliyuncs.com/google_containers/$item && docker tag registry.aliyuncs.com/google_containers/$item k8s.gcr.io/$item && docker rmi registry.aliyuncs.com/google_containers/$item

  done
EOF
#Run the script to download the images
bash alik8simages.sh
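
Afterwards, confirm the retagged images are present locally:

# Everything should now be tagged under k8s.gcr.io
docker images | grep k8s.gcr.io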

4. keepalived + haproxy: Building the Highly Available Cluster

4.1 Installing haproxy and keepalived

Before installing, adjust the haproxy and keepalived configs found in the k8s-script package. First the haproxy config; note the frontend binds to 8443 rather than 6443, because haproxy runs on the same masters where the apiserver already occupies 6443:

# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        # Configure the master servers with your own node IPs
        server master01 10.211.55.16:6443 check
        server master02 10.211.55.17:6443 check
        server master03 10.211.55.18:6443 check
        # [...]
        # hostname ip:port, change as needed

Next, modify keepalived.conf and check_apiserver.sh.

In check_apiserver.sh, set the virtual IP (10.211.55.15 here) and give the script execute permission:
chmod +x /etc/keepalived/check_apiserver.sh
In keepalived.conf, the main changes are the virtual IP, the priority, and the node role.

check_apiserver.sh

#!/bin/bash
APISERVER_VIP=10.211.55.15
APISERVER_DEST_PORT=6443
 
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
 
curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
#Remember to give this file execute permission:
#chmod +x /etc/keepalived/check_apiserver.sh

#Parameter to change:
#APISERVER_VIP  the virtual IP
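
Once the cluster is initialized, you can exercise the health check by hand; a non-zero exit status is what keepalived's weight -2 reacts to:

bash /etc/keepalived/check_apiserver.sh
# 0 = healthy, non-zero = this node's priority gets lowered
echo $?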

keepalived.conf

! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    #Use MASTER on the first node and BACKUP on the others
    state MASTER
    #Change to your NIC name
    interface ens33
    virtual_router_id 51
    #Priority; use a lower value on the BACKUP nodes
    priority 98
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    #The virtual IP
    virtual_ipaddress {
        10.211.55.15/24
    }
    track_script {
        check_apiserver
    }
}

#Parameters to adjust as needed:
#state MASTER/BACKUP
#interface  the primary NIC name
#virtual_router_id
#priority
#virtual_ipaddress  the virtual IP
#https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing
#Run on every master
yum install haproxy keepalived -y
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

#Copy from the local machine to each master host
scp ./etc/haproxy/haproxy.cfg root@10.211.55.16:/etc/haproxy/haproxy.cfg
scp ./etc/keepalived/check_apiserver.sh root@10.211.55.16:/etc/keepalived/check_apiserver.sh
scp ./etc/keepalived/keepalived.conf root@10.211.55.16:/etc/keepalived/keepalived.conf

scp ./etc/haproxy/haproxy.cfg root@10.211.55.17:/etc/haproxy/haproxy.cfg
scp ./etc/keepalived/check_apiserver.sh root@10.211.55.17:/etc/keepalived/check_apiserver.sh
scp ./etc/keepalived/keepalived.conf root@10.211.55.17:/etc/keepalived/keepalived.conf

scp ./etc/haproxy/haproxy.cfg root@10.211.55.18:/etc/haproxy/haproxy.cfg
scp ./etc/keepalived/check_apiserver.sh root@10.211.55.18:/etc/keepalived/check_apiserver.sh
scp ./etc/keepalived/keepalived.conf root@10.211.55.18:/etc/keepalived/keepalived.conf

#Run on every master
systemctl enable keepalived --now
systemctl enable haproxy --now
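
A few checks to confirm the load-balancing layer is healthy (the VIP should appear on exactly one master, the current keepalived MASTER):

# Validate the haproxy config file
haproxy -c -f /etc/haproxy/haproxy.cfg
# haproxy should be listening on 8443
ss -lntp | grep 8443
# On the keepalived MASTER node, the VIP should be bound
ip addr | grep 10.211.55.15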

4.2 Initializing the k8s Cluster

#Initialize the k8s cluster (on the first master)
kubeadm init \
--control-plane-endpoint k8svip:8443 \
--kubernetes-version=v1.21.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--upload-certs
#The message "initialized successfully!" below indicates the init succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8svip:8443 --token 1m392j.fyliyc4psna3c96n \
	--discovery-token-ca-cert-hash sha256:b6fcf177cec3dcbd61ede734a651880d399022bb97fe3b6a67897e3987df3a62 \
	--control-plane --certificate-key b09e240c0fd6f85c39b6c9039a2662907681f447c801a62cdb844ee1e82d3ea9

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8svip:8443 --token 1m392j.fyliyc4psna3c96n \
	--discovery-token-ca-cert-hash sha256:b6fcf177cec3dcbd61ede734a651880d399022bb97fe3b6a67897e3987df3a62

Set up the kubectl config:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
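
kubectl should now reach the cluster through the VIP; the node will stay NotReady until the CNI plugin is deployed in section 4.3:

kubectl get nodes
kubectl get pods -n kube-system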

Command for joining additional master nodes:

 kubeadm join k8svip:8443 --token 1m392j.fyliyc4psna3c96n \
	--discovery-token-ca-cert-hash sha256:b6fcf177cec3dcbd61ede734a651880d399022bb97fe3b6a67897e3987df3a62 \
	--control-plane --certificate-key b09e240c0fd6f85c39b6c9039a2662907681f447c801a62cdb844ee1e82d3ea9

Command for joining worker nodes:

kubeadm join k8svip:8443 --token 1m392j.fyliyc4psna3c96n \
	--discovery-token-ca-cert-hash sha256:b6fcf177cec3dcbd61ede734a651880d399022bb97fe3b6a67897e3987df3a62

4.3 Deploying the CNI Network Plugin

#Use the kube-flannel.yml file provided in the k8s-script package
kubectl apply -f kube-flannel.yml
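
Once the flannel pods are running on every node, all four nodes should flip to Ready:

# Watch the flannel daemonset come up
kubectl get pods -n kube-system -o wide
# All nodes should eventually report Ready
kubectl get nodes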

5. Deploying the Dashboard UI

Kubernetes Dashboard is the web UI provided by Kubernetes. Through the Dashboard you can deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources; you can view application details and create or modify individual Kubernetes resources (Deployments, Jobs, DaemonSets, and so on).

5.1 Installing the Dashboard UI

  1. In the k8s-script directory, open the dashboard folder, find the image.sh file, and run it to pull the images.
  2. Apply the recommended.yml manifest:
kubectl apply -f recommended.yml

Exposing the Dashboard

kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard

#type: ClusterIP
#change it to
#type: NodePort

#Check the svc
kubectl -n kubernetes-dashboard get svc
#Note the 3xxxx port
#https://10.211.55.16:3xxxx

Configuring the certificate

#Delete the default secret
kubectl delete secret kubernetes-dashboard-certs  -n kubernetes-dashboard
#Recreate the secret, pointing it at the certificate directory
kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/pki/ -n kubernetes-dashboard
#Delete the dashboard pods so they restart and pick up the certificate
kubectl delete pod -n kubernetes-dashboard --all

Now try accessing it again:
https://10.211.55.16:3xxxx

Logging in

#Create the service account
kubectl apply -f ./dashboard-adminuser.yaml
#Create a ClusterRoleBinding
kubectl apply -f ./dashboard-ClusterRoleBinding.yaml
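
For reference, these two manifests typically contain a ServiceAccount plus a ClusterRoleBinding to cluster-admin, along the lines of the upstream Dashboard docs (a sketch; the copies shipped in k8s-script may differ slightly):

# dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

# dashboard-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard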

#Get the token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Paste the token into the login page, and enjoy!

5.2 Installing Metrics Server

In the dashboard directory under k8s-script, find components.yaml and modify it:

#Replace the image
bitnami/metrics-server:0.4.1

In the containers -> args section of components.yaml, add the --kubelet-insecure-tls flag:

# vim components.yaml
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls

Finally, deploy it:

kubectl apply -f components.yaml
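
Once the metrics-server pod is running (it can take a minute or two), resource metrics become available:

kubectl top nodes
kubectl top pods -n kube-system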


5.3 Accessing the Dashboard

[Dashboard screenshot]

6. Troubleshooting Notes

6.1 Failure pulling the k8s.gcr.io/coredns:v1.8.0 image

If this image fails to pull during kubeadm init, remove the failed k8s.gcr.io/coredns:v1.8.0 image and re-pull it with the command below. Note the retag target is k8s.gcr.io/coredns/coredns:v1.8.0, the path kubeadm 1.21 actually expects:

docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0 && docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0 && docker rmi registry.aliyuncs.com/google_containers/coredns:v1.8.0

Note: do the same on the other nodes!

6.2 Failure when joining a master node

Run kubeadm reset to roll the node back, then join it again.
