Deploying Kubernetes 1.18.0 with kubeadm, using containerd as the container runtime and haproxy + keepalived for master high availability

Deployment environment

Lab environment: VMware, a clean CentOS 7 install
Make sure yum works and the hosts configuration is consistent across nodes.
Passwordless SSH between the nodes is recommended, mainly to make copying files between the masters easier.
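A minimal sketch for setting up passwordless SSH from master1 to the other nodes, assuming root login is allowed and using the IPs from the hosts file in step 1:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair (skip if one already exists)
for host in 192.168.117.41 192.168.117.39 192.168.117.42 192.168.117.43; do
  ssh-copy-id root@${host}                 # copy the public key to each node
done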

The official documentation for installing Kubernetes with kubeadm already calls out the prerequisites that the steps below take care of:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

I. Preparation

1. Set the hostname on each node. The hosts file below is my hostname-to-IP mapping; distribute it from master1 to the other hosts (a small copy loop is sketched after the file).
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.117.40 k8s-master1
192.168.117.41 k8s-master2
192.168.117.39 k8s-master3
192.168.117.253 k8svip  # keepalived VIP address
192.168.117.42 k8s-node1
192.168.117.43 k8s-node2
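On each node, first set its own hostname, e.g. hostnamectl set-hostname k8s-master1. A hedged sketch for pushing /etc/hosts from master1 to the rest (relies on the passwordless SSH above; adjust the IP list to your environment):
for host in 192.168.117.41 192.168.117.39 192.168.117.42 192.168.117.43; do
  scp /etc/hosts root@${host}:/etc/hosts   # overwrite /etc/hosts on each node
done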

2. On all nodes, disable SELinux, firewalld, and swap
setenforce 0  # disable SELinux temporarily
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # disable permanently
systemctl stop firewalld && systemctl disable firewalld
swapoff -a  # disable swap temporarily
sed -i 's/.*swap.*/#&/' /etc/fstab  # disable permanently (comment out the swap entries)

3. Use IPVS mode for kube-proxy
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
EOF

4. Load kernel modules
modprobe nf_conntrack_ipv4
modprobe overlay
modprobe br_netfilter
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

5. Configure kernel parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

6. Synchronize time
yum install ntpdate -y && timedatectl set-timezone Asia/Shanghai  && ntpdate time.windows.com

7. Adjust ulimit
ulimit -n 65535
cat > /etc/security/limits.conf <<EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
* soft memlock unlimited
* hard memlock unlimited
EOF

II. Install containerd on all nodes (containerd has a shorter call chain than Docker, which is why I chose it)

Docker itself uses containerd as its runtime; Kubernetes supports several container runtimes, such as containerd and CRI-O.

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

## Set up the repository and install containerd
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum list |grep containerd
yum -y install containerd
mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
## Change the cgroup driver to systemd
sed -ri 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml

## Change sandbox_image (the default image reference in config.toml may differ depending on your containerd version; adjust the match pattern to whatever is actually in the file)
sed -ri 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.2#' /etc/containerd/config.toml

## Reload and enable the service so the configuration takes effect
systemctl daemon-reload && systemctl enable containerd --now
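A quick check, assuming the paths above, that containerd is running and the edits took effect:
systemctl is-active containerd                  # should print "active"
grep SystemdCgroup /etc/containerd/config.toml  # should show SystemdCgroup = true
grep sandbox_image /etc/containerd/config.toml  # should show the aliyuncs pause image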

III. Install keepalived and haproxy on all master nodes

yum install -y keepalived haproxy

haproxy configuration file:
# cat /etc/haproxy/haproxy.cfg  
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
   
listen admin_status
    bind *:8888  # haproxy stats page address
    mode http
    stats uri /status


frontend kube-apiserver
  bind *:8443   # port used to access the k8s VIP
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master1 192.168.117.40:6443 check # Replace the IP address with your own.
    server k8s-master2 192.168.117.41:6443 check # Replace the IP address with your own.
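Before starting haproxy you can validate the file; -c only checks the configuration and does not start the service:
haproxy -c -f /etc/haproxy/haproxy.cfg   # should report that the configuration is valid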


keepalived configuration file:
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id k8s-master1  # k8s-master2 on the other master node
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"     #检测脚本
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other master; only one node may be MASTER
    interface ens32         # your NIC name; it may be eth0
    virtual_router_id 60    # must be the same on every node in the cluster
    priority 100            # MASTER must be higher than BACKUP, and each node uses a different value
    authentication {
        auth_type PASS
        auth_pass apiserver-ok   # must be the same on every node in the cluster
    }
    virtual_ipaddress {
        192.168.117.253  # k8s VIP address
    }
    track_script {
        check_apiserver   # keepalived check script, used to fail the VIP over
    }
}

keepalived health check script:
# cat /etc/keepalived/check_apiserver.sh 
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

APISERVER_VIP=192.168.117.253  # VIP
APISERVER_DEST_PORT=8443     # port to check

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
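Make the script executable so keepalived can run it (path as configured above):
chmod +x /etc/keepalived/check_apiserver.sh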

Start the services and enable them at boot:
systemctl enable haproxy --now
systemctl enable keepalived --now

Check that the VIP is bound on the MASTER node:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:36:b7:6c brd ff:ff:ff:ff:ff:ff
    inet 192.168.117.40/24 brd 192.168.117.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.117.253/32 scope global ens32  # VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe36:b76c/64 scope link 
       valid_lft forever preferred_lft forever
     
From a master, ping the VIP to verify it is reachable:
# ping 192.168.117.253
PING 192.168.117.253 (192.168.117.253) 56(84) bytes of data.
64 bytes from 192.168.117.253: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from 192.168.117.253: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 192.168.117.253: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 192.168.117.253: icmp_seq=4 ttl=64 time=0.055 ms
^C
--- 192.168.117.253 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.035/0.044/0.055/0.008 ms
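You can also look at the haproxy stats page configured earlier (port 8888, URI /status); at this point both apiserver backends will show as DOWN, which is expected because Kubernetes is not installed yet:
curl -s http://192.168.117.40:8888/status   # or open it in a browser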

IV. Install Kubernetes

#  cat >> /etc/yum.repos.d/kubernetes.repo <<eof
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
eof
# yum -y install kubeadm-1.18.0 kubelet-1.18.0 kubectl-1.18.0  ## you can pin a specific version; by default the latest is installed
Worker nodes do not need kubectl: kubectl is a client that reads a kubeconfig and talks to the api-server to manage the cluster, which worker nodes normally do not do.
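Enable kubelet so it starts at boot; kubeadm will start and configure it during init/join:
systemctl enable kubelet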

# Configure crictl to use the containerd socket
cat << EOF >> /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10 
debug: false
EOF
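A quick sanity check, assuming the socket path above, that crictl can reach containerd:
crictl version   # should print the containerd runtime name and version
crictl images    # empty until images are pulled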

V. Initialize the first master node

## Generate a default init configuration file
kubeadm config print init-defaults > kubeadm-init.yaml

## Edit the configuration file as follows
cat > kubeadm-init.yaml << EOF

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.117.40
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: k8svip:8443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

EOF


## List the required images
kubeadm config images list --config kubeadm-init.yaml

## Pre-pull the images
kubeadm config images pull --config kubeadm-init.yaml

## After the pull finishes, check that the images are present locally
crictl images

# Start the initialization
kubeadm init --config=kubeadm-init.yaml | tee kubeadm-init.log
Note: in a multi-master setup, the other masters only need to go as far as the image pre-pull step above; do not run kubeadm init on them.
## Resetting a node
Note: if initialization fails, running it again will complain that some files already exist, so run the following command to clean up first:
kubeadm reset -f

When the initialization succeeds, kubeadm-init.log will contain the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8svip:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dfae944f7cdfbe0f9f2102465cef8806c8843214e154564067ab72b0a6e392fa \
    --control-plane   # command for the other masters to join

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8svip:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dfae944f7cdfbe0f9f2102465cef8806c8843214e154564067ab72b0a6e392fa  
     # command for worker nodes to join

Set up the kubeconfig needed to access the API server:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
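At this point you can inspect the cluster from master1; the node will show NotReady until the network plugin is installed in section VII:
kubectl get nodes
kubectl get pods -n kube-system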

VI. Join the other masters to the cluster

## Copy the certificates and related files to the other masters

cat << 'EOF' > /root/cpkey.sh
#!/bin/bash
CONTROL_PLANE_IPS="192.168.117.41"   # IPs of the other masters

for host in ${CONTROL_PLANE_IPS}; do
ssh root@${host} mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@${host}:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd
done
EOF
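Run the script on master1 (it relies on the passwordless SSH between masters set up earlier):
bash /root/cpkey.sh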

## On master2, run the join command printed by kubeadm init on master1, with the `--control-plane` flag to join it as a control-plane node
kubeadm join k8svip:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dfae944f7cdfbe0f9f2102465cef8806c8843214e154564067ab72b0a6e392fa \
    --control-plane
        
        
## Output after the command completes
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

VII. Install the network plugin (Calico)

curl https://docs.projectcalico.org/manifests/calico.yaml -O
# Replace the images with mirror copies
sed -i 's#docker.io/calico/cni:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/cni:v3.22.2#' calico.yaml
sed -i 's#docker.io/calico/pod2daemon-flexvol:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/pod2daemon-flexvol:v3.22.2#' calico.yaml
sed -i 's#docker.io/calico/node:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/node:v3.22.2#' calico.yaml
sed -i 's#docker.io/calico/kube-controllers:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/kube-controllers:v3.22.2#' calico.yaml
kubectl apply -f calico.yaml
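Watch the rollout and confirm the nodes go Ready once Calico is up:
kubectl get pods -n kube-system
kubectl get nodes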

## If the plugin reports an error like: network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/n... (message truncated), clean up the CNI state on the affected node:
rm -rf /etc/cni/net.d/*
rm -rf /var/lib/cni/calico
rm -rf /var/lib/calico
systemctl  restart kubelet
# then delete the affected Calico pods so they are recreated (see the example below)
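For example, to recreate the calico-node pod on one node (a sketch; k8s-app=calico-node is the label used in the standard Calico manifest, and k8s-node1 stands in for the affected node):
kubectl delete pod -n kube-system -l k8s-app=calico-node --field-selector spec.nodeName=k8s-node1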

VIII. Enable kubectl tab completion

# bash environment
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
## reload the shell afterwards

# zsh environment
source <(kubectl completion zsh)

# set an alias
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc

IX. Regenerate the join token after it expires

# kubeadm token create --print-join-command

After it succeeds, run the printed command on the new node to join it to the cluster.

Note that a token generated this way is valid for 24 hours; add the --ttl=0 parameter if you do not want it to expire.

After generating a token, you can inspect it with kubeadm token list.
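For example (a sketch; the printed command will contain your own token and CA hash):
kubeadm token create --ttl=0 --print-join-command   # non-expiring token plus the full worker join command
kubeadm token list                                  # list existing tokens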
