High-Availability Kubernetes Cluster Setup and Deployment

1 Environment Preparation

1.1 Cluster Topology

1.2 System Requirements

  • One or more machines running CentOS 7.x (x86_64)
  • Hardware: at least 2 GB of RAM, 2 CPUs, and 20 GB of disk space
  • All machines in the cluster can reach each other over the network
  • Outbound internet access is available for pulling images
  • Swap is disabled (a quick verification sketch follows this list)
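
A minimal sketch to verify these prerequisites on a node; the column parsing assumes standard CentOS 7 tooling and is only a convenience check:

echo "CPUs: $(nproc)"                              # expect 2 or more
free -m | awk '/^Mem:/{print "RAM(MB): "$2}'       # expect roughly 2048 or more
df -BG / | awk 'NR==2{print "free disk: "$4}'      # expect 20G or more
swapon -s                                          # expect empty output once swap is disabled
ping -c 1 mirrors.aliyun.com >/dev/null && echo "outbound network: ok"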

1.3 Environment Plan

Hostname | OS     | Spec | IP          | Services
---------+--------+------+-------------+---------------------------------
master1  | centos | 2C4G | 172.16.0.6  | kubelet, kubeadm, kubectl, etcd
master2  | centos | 2C4G | 172.16.0.7  | kubelet, kubeadm, kubectl, etcd
master3  | centos | 2C4G | 172.16.0.8  | kubelet, kubeadm, kubectl, etcd
node1    | centos | 2C4G |             | kubelet, kubeadm, kubectl
node2    | centos | 2C4G |             | kubelet, kubeadm, kubectl
vip      |        |      | 172.16.0.10 | virtual IP

1.4 Deployment Notes

Single-master deployment

Steps: 2 ---> 5.2 ---> 5.3 ---> 5.4 ---> 5.5 ---> 6

High-availability master deployment

Steps: 2 ---> 3 ---> 5.1.1 (or 5.2) ---> 5.3 ---> 5.4 ---> 5.5 ---> 6

High-availability master deployment (standalone etcd cluster)

Steps: 2 ---> 3 ---> 4 ---> 5.1.2 ---> 5.3 ---> 5.4 ---> 5.5 ---> 6

2 System Initialization

2.1 Script-Based Initialization

Install Docker, kubeadm, kubectl, and kubelet on every server. You can also complete this system setup with the initialization script below; every Kubernetes node must be initialized. The script is as follows:

#!/bin/bash

k8s_version=1.18.0   # version of Kubernetes to install
DOCKER_version=docker-ce

set -o errexit
set -o nounset
set -o pipefail

function prefight() {
  echo "Step.1 system check"

  check::root
  check::disk '/opt' 30
  check::disk '/var/lib' 20
}

function check::root() {
  if [ "root" != "$(whoami)" ]; then
    echo "only root can execute this script"
    exit 1
  fi
  echo "root: yes"
}

function check::disk() {
  local -r path=$1
  local -r size=$2

  disk_avail=$(df -BG "$path" | tail -1 | awk '{print $4}' | grep -oP '\d+')
  if ((disk_avail < size)); then
    echo "available disk space for $path needs to be greater than $size GiB"
    exit 1
  fi

  echo "available disk space($path):  $disk_avail GiB"
}

function system_init() {
  echo "Step.2 stop firewall and swap"
  stop_firewalld
  stop_swap
}

function stop_firewalld() {
  echo "stop firewalld [doing]"

  setenforce  0 >/dev/null 2>&1 || :
  sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
  systemctl stop firewalld >/dev/null 2>&1 || :
  systemctl disable firewalld >/dev/null 2>&1 || :

  echo "stop firewalld [ok]"

}

function stop_swap() {
  echo "stop swap [doing]"
  
  sed -ri 's/.*swap.*/#&/' /etc/fstab
  swapoff -a

  echo "stop swap [ok]"
}

function iptables_check() {
  echo "Step.3 iptables check [doing]"
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
  sysctl --system >/dev/null 2>&1 || :
  echo "Step.3 iptables check [ok]"
}


function ensure_docker() {
  echo "Step.4 ensure docker is ok"

  if ! [ -x "$(command -v docker)" ]; then
    echo "command docker not find"
    install_docker
  fi
  if ! systemctl is-active --quiet docker; then
    echo "docker status is not running"
    install_docker
  fi
}

function install_docker() {
  echo "install docker [doing]"

  wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo >/dev/null 2>&1 || :
  yum install -y ${DOCKER_version} >/dev/null 2>&1 || :
  echo "install docker"
  systemctl restart docker
  echo "start docker"
  systemctl enable docker  >/dev/null 2>&1 || :

  mkdir -p /etc/docker
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://x1362v6d.mirror.aliyuncs.com"]
}
EOF
 
  systemctl daemon-reload
  systemctl restart docker
  echo "install docker [ok]"
}

function install_kubelet() { 
  echo "Step.5 install kubelet-${k8s_version} [doing]"
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  #yum install -y kubelet kubeadm kubectl
  yum install -y kubelet-${k8s_version} kubeadm-${k8s_version} kubectl-${k8s_version} >/dev/null 2>&1 || :
  systemctl enable kubelet >/dev/null 2>&1 || : 
  echo "Step.5 install kubelet-${k8s_version} [ok]"
}


prefight
system_init
iptables_check
ensure_docker
install_kubelet
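
A usage sketch, assuming the script above is saved as k8s-init.sh (a hypothetical filename) and that root SSH access to each node is available:

# copy the script to a node and run it; repeat for every master and worker node
scp k8s-init.sh root@172.16.0.6:/root/
ssh root@172.16.0.6 "bash /root/k8s-init.sh"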
 

2.2 Manual Initialization

2.2.1 Disable the Firewall

Disable SELinux and firewalld:

setenforce  0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config 
systemctl stop firewalld
systemctl disable firewalld
 

2.2.2 Disable Swap

Permanently disable the swap partition (takes full effect after a reboot):

sed -ri 's/.*swap.*/#&/' /etc/fstab

Temporarily disable swap (lost after a reboot):

swapoff -a

2.2.3 Adjust iptables Kernel Parameters

Set the kernel parameters below; otherwise traffic routed through iptables may not be handled correctly:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings
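
If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A minimal sketch to load it now and on every boot:

modprobe br_netfilter
cat << EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF
sysctl --system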

2.2.4 Install Docker

Install docker-ce:

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker
systemctl status docker
docker version

Configure a registry mirror:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://x1362v6d.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
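
To confirm the mirror is in effect (an optional check):

docker info | grep -A 1 "Registry Mirrors"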

2.2.5 Install kubeadm

On every node except the haproxy node, switch the Kubernetes yum repository to the Aliyun mirror so that installation works well inside mainland China:

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the latest kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet

Install a specific version of kubelet, kubeadm, and kubectl:

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

3 Installing the High-Availability Components

Note: if you are not building a high-availability cluster, there is no need to install haproxy and keepalived.

  • Install HAProxy and Keepalived via yum on all master nodes (master1, master2, and master3):
yum -y install keepalived haproxy

3.1 Configure HAProxy

On all master nodes:

Create the configuration directory:

mkdir -pv /etc/haproxy

Write the HAProxy configuration:

cat << EOF > /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
 
listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin
 
frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
 
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  # adjust the entries below to match your environment: server name and IP address
  server 172.16.0.6 172.16.0.6:6443  check
  server 172.16.0.7 172.16.0.7:6443  check
  server 172.16.0.8 172.16.0.8:6443  check    
EOF
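
Before starting the service you can validate the configuration syntax (an optional check):

haproxy -c -f /etc/haproxy/haproxy.cfg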

3.2 Configure Keepalived

Keepalived configuration on master1:

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    ## string identifying this node, usually the hostname
    router_id 172.16.0.6
    script_user root
    enable_script_security
}
## health-check script
## keepalived runs the script periodically and adjusts the priority of the vrrp_instance based on the result:
## if the script returns 0 and weight is greater than 0, the priority is increased accordingly;
## if the script returns non-zero and weight is less than 0, the priority is decreased accordingly;
## otherwise the priority configured below (priority) is kept unchanged.
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    # check every 2 seconds
    interval 2
    # when the check fails, lower the priority by 5
    weight -5
    fall 3
    rise 2
}
## virtual router definition; VI_1 is the instance name, choose your own
vrrp_instance VI_1 {
    ## the primary node is MASTER, the backup nodes are BACKUP
    state MASTER
    ## network interface the virtual IP is bound to; the interface carrying this host's IP
    interface eth0
    # this host's IP address
    mcast_src_ip 172.16.0.6
    # virtual router id; must be identical on every node of this VRRP instance
    virtual_router_id 100
    ## node priority, 0-254; MASTER must be higher than the BACKUP nodes
    priority 100
    ## nopreempt on the higher-priority node prevents it from grabbing the VIP back after recovering from a failure
    nopreempt
    ## advertisement interval; must be the same on all nodes, default 1s
    advert_int 2
    ## authentication; must be identical on all nodes
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    ## virtual IP pool; must be the same on all nodes
    virtual_ipaddress {
        ## virtual IPs; more than one can be defined
        172.16.0.10
    }
    track_script {
        chk_apiserver
    }
}
EOF
 

Keepalived configuration on master2:

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 172.16.0.7
    script_user root
    enable_script_security    
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 172.16.0.7
    virtual_router_id 100
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.0.10
    }
    track_script {
      chk_apiserver
    }
}
EOF

Keepalived configuration on master3:

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 172.16.0.8
    script_user root
    enable_script_security    
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh" 
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 172.16.0.8
    virtual_router_id 100
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.0.10
    }
    track_script {
       chk_apiserver
    }
}
EOF

3.3 Configure the Health-Check Script

Create the check script check_apiserver.sh (attached); it stops keepalived when kube-apiserver is no longer running on the node, so the VIP fails over to another master. Then make it executable:

vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
 
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
 
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
chmod +x /etc/keepalived/check_apiserver.sh

3.4 Start the High-Availability Components

Start haproxy and keepalived on the master nodes (master1, master2, master3):

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

Test whether the VIP (virtual IP) is reachable:

ping 172.16.0.10 -c 4
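
To see which master currently holds the VIP (a quick check; assumes the interface is eth0, as in the Keepalived configuration above):

ip addr show eth0 | grep 172.16.0.10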

4 Installing High-Availability Storage (etcd)

If you are building a highly available etcd cluster, go straight to <5.1.2 Deploy the master node (external etcd cluster)>;

if you are not building a highly available etcd cluster, use 5.1.1 or 5.2 to deploy the Kubernetes master nodes.

Related links

cfssl downloads: https://github.com/cloudflare/cfssl/releases

cfssl source: https://github.com/cloudflare/cfssl

4.1 Install etcd

Install etcd:

yum -y install etcd

4.2 Install and Configure cfssl

This step only needs to be performed on master1.

4.2.1 Install cfssl

wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo  https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /bin/cfssl*

4.2.2 Create the Certificate Signing Requests

Create etcd-csr.json:

cat << EOF > /root/etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "172.16.0.6",
        "172.16.0.7",
        "172.16.0.8"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GD",
            "L": "guangzhou",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

Create ca-config.json:

cat << EOF > /root/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}    
EOF

Create ca-csr.json:

cat << EOF > /root/ca-csr.json
{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "GD",
            "L": "guangzhou",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

4.2.3 Generate the Certificates

Create the CA certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Generate the etcd certificate and private key:

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=etcd etcd-csr.json | cfssljson -bare etcd

List the generated certificates:

[root@VM-0-6-centos ~]# ls *.pem
ca-key.pem  ca.pem  etcd-key.pem  etcd.pem

Copy the certificates into place:

mkdir -pv /etc/etcd/ssl
cp -r ./{ca-key,ca,etcd-key,etcd}.pem /etc/etcd/ssl/

Copy the certificates to the other master nodes:

scp -r ./ root@172.16.0.7:/etc/etcd/ssl
scp -r ./ root@172.16.0.8:/etc/etcd/ssl
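
If the /etc/etcd/ssl directory does not yet exist on master2 and master3, create it before copying (a minimal sketch, assuming root SSH access):

ssh root@172.16.0.7 "mkdir -p /etc/etcd/ssl"
ssh root@172.16.0.8 "mkdir -p /etc/etcd/ssl"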

4.3 Configure etcd

Modify the etcd.conf configuration file on each etcd node.

Configuration on master1:

cat << EOF > /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/data/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.0.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://172.16.0.6:2379"
ETCD_NAME="VM-0-6-centos"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.0.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://172.16.0.6:2379"
ETCD_INITIAL_CLUSTER="VM-0-6-centos=https://172.16.0.6:2380,VM-0-7-centos=https://172.16.0.7:2380,VM-0-8-centos=https://172.16.0.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
EOF

Configuration on master2:

cat << EOF > /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/data/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.0.7:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://172.16.0.7:2379"
ETCD_NAME="VM-0-7-centos"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.0.7:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://172.16.0.7:2379"
ETCD_INITIAL_CLUSTER="VM-0-6-centos=https://172.16.0.6:2380,VM-0-7-centos=https://172.16.0.7:2380,VM-0-8-centos=https://172.16.0.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
EOF

Configuration on master3:

cat << EOF > /etc/etcd/etcd.conf 
ETCD_DATA_DIR="/data/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.0.8:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://172.16.0.8:2379"
ETCD_NAME="VM-0-8-centos"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.0.8:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://172.16.0.8:2379"
ETCD_INITIAL_CLUSTER="VM-0-6-centos=https://172.16.0.6:2380,VM-0-7-centos=https://172.16.0.7:2380,VM-0-8-centos=https://172.16.0.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
EOF

  • ETCD_NAME: node name
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
  • ETCD_LISTEN_CLIENT_URLS: client listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  • ETCD_INITIAL_CLUSTER: addresses of all cluster members
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

4.4 Start etcd

4.4.1 Create the Data Directory and Set Permissions

mkdir -p /data/etcd/default.etcd
chmod 777 /etc/etcd/ssl/* 
chmod 777 /data/etcd/ -R

4.4.2 Start etcd

systemctl start etcd
systemctl enable etcd

4.4.3 Verify the Cluster

Check the cluster health with:

etcdctl --endpoints "https://172.16.0.6:2379,https://172.16.0.7:2379,https://172.16.0.8:2379" --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health

If every member is reported as healthy, the etcd cluster has been set up successfully.
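
The command above uses the etcdctl v2 API that the CentOS etcd package defaults to. If your etcdctl defaults to the v3 API, a roughly equivalent health check looks like this (a sketch; the flag names assume etcdctl v3):

ETCDCTL_API=3 etcdctl \
  --endpoints="https://172.16.0.6:2379,https://172.16.0.7:2379,https://172.16.0.8:2379" \
  --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health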

5 Deploying Kubernetes

5.1 Deployment via a kubeadm Config File

5.1.1 Deploy the Master Node (local etcd)

Create kubeadm-config.yaml on master1 (attached: kubeadm-config.yaml) with the following content:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.0.6     # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: 172.16.0.6        # this node's name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.16.0.10:16443"    # virtual IP (VIP) and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # image repository
kind: ClusterConfiguration
kubernetesVersion: v1.18.0   # Kubernetes version (must match the installed kubeadm/kubelet)
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
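
Before initializing, you can optionally pre-pull the control-plane images using the same configuration file (a sketch, not a required step):

kubeadm config images pull --config kubeadm-config.yaml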

Initialize Kubernetes:

kubeadm init --config=kubeadm-config.yaml  --upload-certs

If initialization fails, reset and then initialize again:

kubeadm reset -f;ipvsadm --clear;rm -rf ~/.kube
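
The kube-proxy configuration above uses IPVS mode, and the reset command relies on ipvsadm. If these are not present yet, a minimal sketch to install the tools and load the kernel modules (module names assume a stock CentOS 7 kernel):

yum install -y ipvsadm ipset
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
# load the modules on boot as well
cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF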

After a successful initialization, join commands with a token are printed; save them for joining the other nodes later:

You can now join any number of the control-plane node running the following command on each as root:
#command for joining additional master nodes; copy it exactly as printed by kubeadm init, do not change the IP
  kubeadm join 172.16.0.10:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6fd8ca0b4faf647ad068f1580f3b0a0e785335b8d54cf85dfebe2701a56e719d \
    --control-plane --certificate-key e0d4acd9b25b94f5b45bcb486e0c2209c8b09c4de3d3e60cecc8f561a52bf9be

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
#command for joining worker nodes; copy it exactly as printed, do not modify it
kubeadm join 172.16.0.10:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6fd8ca0b4faf647ad068f1580f3b0a0e785335b8d54cf85dfebe2701a56e719d 

As prompted, run the following on master1:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.1.2 Deploy the Master Node (external etcd cluster)

Copy the etcd certificates to /etc/kubernetes/pki/:

mkdir -p /etc/kubernetes/pki/
cp -r  {ca,etcd,etcd-key}.pem /etc/kubernetes/pki/

Create kubeadm-config.yaml on master1 (attached: kubeadm-config.yaml) with the following content:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.0.6     # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: 172.16.0.6        # this node's name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.16.0.10:16443"    # virtual IP (VIP) and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
imageRepository: registry.aliyuncs.com/google_containers    # image repository
kind: ClusterConfiguration
kubernetesVersion: v1.18.0   # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}
etcd:
  external:
    endpoints:
    - https://172.16.0.6:2379
    - https://172.16.0.7:2379
    - https://172.16.0.8:2379
    caFile: /etc/kubernetes/pki/ca.pem
    certFile: /etc/kubernetes/pki/etcd.pem
    keyFile: /etc/kubernetes/pki/etcd-key.pem

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Initialize Kubernetes:

kubeadm init --config=kubeadm-config.yaml  --upload-certs

If initialization fails, reset and then initialize again:

kubeadm reset -f;ipvsadm --clear;rm -rf ~/.kube

After a successful initialization, join commands with a token are printed; save them for joining the other nodes later:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 172.16.0.10:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:14c22ebab20872f4d8ad8a2c01412ee32c06a8c95e15de698a64ee541afe9601 \
    --control-plane --certificate-key ba75ea8428681d4cdb64f8bf3da5cac8b39922cd336e96bd2b5de7d2ba4a82c7


Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.10:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:14c22ebab20872f4d8ad8a2c01412ee32c06a8c95e15de698a64ee541afe9601

As prompted, run the following on master1:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.1.3 Check the Status

Check the cluster health on any master node:

# check component health
kubectl get cs
# check pods
kubectl get pods -n kube-system
# check nodes
kubectl get nodes

5.2 Command-Line Deployment

  • Pull the control-plane images on all three master nodes (master1, master2, master3):
kubeadm config images pull --kubernetes-version=v1.18.0  --image-repository=registry.aliyuncs.com/google_containers

  • Initialize the cluster on master1:
kubeadm init \
  --apiserver-advertise-address=172.16.0.6 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=172.16.0.10:16443 \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs

5.3 Join the Master Nodes

Join master2 and master3 to the cluster:

 kubeadm join 172.16.0.10:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6fd8ca0b4faf647ad068f1580f3b0a0e785335b8d54cf85dfebe2701a56e719d \
    --control-plane --certificate-key e0d4acd9b25b94f5b45bcb486e0c2209c8b09c4de3d3e60cecc8f561a52bf9be

5.4 Join the Worker Nodes

Join the worker nodes to the cluster:

kubeadm join 172.16.0.10:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6fd8ca0b4faf647ad068f1580f3b0a0e785335b8d54cf85dfebe2701a56e719d 

5.5 Install the Cluster Network Plugin

Kubernetes supports several network plugins, such as flannel, calico, and canal; any one of them will do. This guide uses flannel.

On all master nodes, fetch and apply the flannel manifest. The download may fail; if so, download the file first and apply it locally. If your connection is too slow, use the attached kube-flannel.yml. You can also install calico instead (attached: calico.yaml); calico is the recommended choice.

# if this URL is unreachable, try changing the DNS server to 8.8.8.8

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
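
After applying the manifest, you can watch the flannel pods come up and wait for the nodes to become Ready (a quick check; the label assumes the upstream kube-flannel.yml):

kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes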

6 Deploying ingress-nginx

6.1 Related Links

ingress-nginx deployment docs: ingress-nginx/index.md at main · kubernetes/ingress-nginx · GitHub

ingress-nginx source: GitHub - kubernetes/ingress-nginx: NGINX Ingress Controller for Kubernetes

6.2 Deploy ingress-nginx

Download the YAML manifest from the official ingress-nginx repository:

#latest manifest: https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.2/deploy/static/provider/cloud/deploy.yaml
wget  https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml

Modify the manifest as follows (the modified file is attached: deploy.yaml); the leading numbers are line numbers inside deploy.yaml:

#comment out these two lines
276 spec:
277   #type: LoadBalancer
278   #externalTrafficPolicy: Local

#change kind from Deployment to DaemonSet so that every node automatically runs an ingress-nginx instance
295 #kind: Deployment
295 kind: DaemonSet

#change dnsPolicy: ClusterFirst to dnsPolicy: ClusterFirstWithHostNet
#add a new line: hostNetwork: true
321 #dnsPolicy: ClusterFirst
321 hostNetwork: true
322 dnsPolicy: ClusterFirstWithHostNet

#replace the controller image with the Aliyun mirror of the controller image
325 #image: k8s.gcr.io/ingress-nginx/controller:v0.46.0@sha256:52f0058bed0a17ab0fb35628ba97e8d52b5d32299fbc03cc0f6c7b9ff036b61a
325 image: registry.aliyuncs.com/kubeadm-ha/ingress-nginx_controller:v0.47.0

#replace the image with the Aliyun mirror
591 #image: docker.io/jettech/kube-webhook-certgen:v1.5.1
591 image: registry.aliyuncs.com/kubeadm-ha/jettech_kube-webhook-certgen:v1.5.1

#replace the image with the Aliyun mirror
640 #image: docker.io/jettech/kube-webhook-certgen:v1.5.1
640 image: registry.aliyuncs.com/kubeadm-ha/jettech_kube-webhook-certgen:v1.5.1

Deploy the ingress-nginx service:

kubectl apply -f deploy.yaml
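
Because the controller now runs as a hostNetwork DaemonSet, every node should listen on ports 80 and 443. A quick check (replace the address with any node's IP; the 404 comes from the controller's default backend):

kubectl get pods -n ingress-nginx -o wide
curl -I http://172.16.0.6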

7 Troubleshooting

7.1 Checking Certificate Expiry

openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver-kubelet-client.crt | grep Not
            Not Before: Mar 29 16:28:16 2021 GMT
            Not After : Mar 29 16:32:53 2022 GMT    #expiry date
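
kubeadm can also report the expiry of all cluster certificates at once; on kubeadm 1.18 the subcommand still lives under alpha (newer releases use kubeadm certs check-expiration):

kubeadm alpha certs check-expiration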

7.2 Token Expiry

Adding a node with kubeadm join requires two parameters: --token and --discovery-token-ca-cert-hash. A token is normally valid for 24 hours; to add a node after it has expired, generate a new token.

# create a new token; existing tokens can then be listed with kubeadm token list
$ kubeadm token create
s058gw.c5x6eeze28****
# get the sha256 hash of the CA certificate
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

9592464b295699696ce35e5d1dd155580ee29d9bd0884b*****
# run the join on the new node
$  kubeadm join api-serverip:port --token s058gw.c5x6eeze28**** --discovery-token-ca-cert-hash sha256:9592464b295699696ce35e5d1dd155580ee29d9bd0884b*****

7.3 Regenerating the Join Token and Certificate Key

1. First, generate a new token on a master node:

kubeadm token create --print-join-command
kubeadm join 172.16.0.8:8443 --token ortvag.ra0654faci8y8903     --discovery-token-ca-cert-hash sha256:04755ff1aa88e7db283c85589bee31fabb7d32186612778e53a536a297fc9010

2. On a master node, upload the certificates that a new master needs in order to join (on kubeadm 1.18 the flag is --upload-certs; --experimental-upload-certs is the older spelling):

kubeadm init phase upload-certs --experimental-upload-certs
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f8d1c027c01baef6985ddf24266641b7c64f9fd922b15a32fce40b6b4b21e47d

3. Add a new worker node:

kubeadm join 172.16.0.8:8443 --token ortvag.ra0654faci8y8903   \
	--discovery-token-ca-cert-hash sha256:04755ff1aa88e7db283c85589bee31fabb7d32186612778e53a536a297fc9010

4. Add a new master node; append the certificate key generated in step 2 after --experimental-control-plane --certificate-key (on kubeadm 1.18 the flags are --control-plane and --certificate-key):

kubeadm join 172.31.182.156:8443  --token ortvag.ra0654faci8y8903 \
  --discovery-token-ca-cert-hash sha256:04755ff1aa88e7db283c85589bee31fabb7d32186612778e53a536a297fc9010 \
  --experimental-control-plane --certificate-key f8d1c027c01baef6985ddf24266641b7c64f9fd922b15a32fce40b6b4b21e47d

7.4 Remove a Node

Drain the node first, then delete it:
kubectl drain vm-0-16-centos --delete-local-data --force --ignore-daemonsets
node/vm-0-16-centos cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-lbh8x, kube-system/kube-proxy-lglxz
node/vm-0-16-centos drained

[root@VM-0-17-centos ~]# kubectl get node
NAME             STATUS                        ROLES    AGE   VERSION
vm-0-15-centos   Ready                         master   38m   v1.18.0
vm-0-16-centos   NotReady,SchedulingDisabled   <none>   11m   v1.18.0
vm-0-17-centos   Ready                         master   30m   v1.18.0
[root@VM-0-17-centos ~]# kubectl delete node  vm-0-16-centos
node "vm-0-16-centos" deleted
[root@VM-0-17-centos ~]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
vm-0-15-centos   Ready    master   39m   v1.18.0
vm-0-17-centos   Ready    master   30m   v1.18.0

7.5 Virtual IP on Cloud Hosts

On cloud hosts, Keepalived's virtual IP only works after you additionally request a high-availability virtual IP (HAVIP) from the provider. Taking Tencent Cloud as an example:

In the console, go to Virtual Private Cloud -> IP and Interface -> High Availability Virtual IP -> Apply.

For usage details, see the Tencent Cloud documentation "High Availability Virtual IP Overview" in the VPC operation guide.

7.6 Connecting to the Cluster from a Local Machine

Node IP: 192.168.10.243

Service CIDR: 10.96.0.0/12

route -p add 10.96.0.0/12 192.168.10.243 metric 1
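
The route only makes the cluster's service network reachable; to run kubectl from the local machine you also need the cluster's kubeconfig. A minimal sketch, assuming a Linux or macOS workstation with kubectl installed and SSH access to a master node (replace <master1-ip> with the real address):

mkdir -p ~/.kube
scp root@<master1-ip>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes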
