Kubernetes 1.18 Cluster Setup

Lab environment:

Role            IP
k8s-master01    172.16.3.225/21
k8s-master02    172.16.3.226/21
k8s-master03    172.16.3.227/21
k8s-node01      172.16.3.228/21
k8s-node02      172.16.3.229/21
VIP             172.16.3.200/24

Note: I am not adding worker Nodes here because my lab machines are limited, so I cannot demonstrate that part. Joining a Node to the masters is just a single kubeadm join command, which I cover below.

Set the hostnames and add mutual name resolution to the hosts file

[root@bogon ~]# hostnamectl set-hostname k8s-master01    # run on the first master
[root@bogon ~]# hostnamectl set-hostname k8s-master02    # run on the second master
[root@bogon ~]# hostnamectl set-hostname k8s-master03    # run on the third master
[root@bogon ~]# cat >> /etc/hosts << EOF
172.16.3.225 k8s-master01
172.16.3.226 k8s-master02
172.16.3.227 k8s-master03
172.16.3.228 k8s-node01
172.16.3.229 k8s-node02
172.16.3.200 k8s-vip
EOF
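The hosts block above can also be generated from a single node list, which keeps every machine consistent and is safe to re-run. A minimal sketch (HOSTS_FILE is an illustrative override so the same script can be dry-run against a test file):

```shell
#!/bin/bash
# Sketch: append the cluster's IP/name pairs to the hosts file, skipping
# entries that are already present so the script is idempotent.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"   # illustrative override for testing
nodes=(
  "172.16.3.225 k8s-master01"
  "172.16.3.226 k8s-master02"
  "172.16.3.227 k8s-master03"
  "172.16.3.228 k8s-node01"
  "172.16.3.229 k8s-node02"
  "172.16.3.200 k8s-vip"
)
for entry in "${nodes[@]}"; do
  # -x matches the whole line, -F treats the entry as a fixed string
  grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
```

Run the same script on every node instead of pasting the heredoc by hand.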

Install dependency packages

[root@bogon ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Disable SELinux, firewalld, and swap

[root@bogon ~]# setenforce 0 && sed -i  's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@bogon ~]# systemctl  stop firewalld  &&  systemctl  disable firewalld
[root@bogon ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Adjust kernel parameters for Kubernetes

[root@bogon ~]# cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # avoid using swap; it is only touched when the system hits OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer act
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@bogon ~]# cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
[root@bogon ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
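One caveat: the two net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so sysctl -p will report errors on a fresh boot until it is. A small sketch that loads the module now and persists it across reboots (assuming a systemd host with /etc/modules-load.d):

```shell
#!/bin/bash
# Sketch: load br_netfilter immediately and persist it, since the
# net.bridge.bridge-nf-call-* sysctls are absent until it is loaded.
modprobe br_netfilter 2>/dev/null || true   # may already be loaded or built in
mkdir -p /etc/modules-load.d
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
```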

Set the system time zone

Set the time zone to Asia/Shanghai
[root@bogon ~]# timedatectl set-timezone Asia/Shanghai

Write the current UTC time to the hardware clock
[root@bogon ~]# timedatectl set-local-rtc 0

Restart the services that depend on the system time
[root@bogon ~]# systemctl restart rsyslog crond

Stop services the cluster does not need (to save system resources)

[root@bogon ~]# systemctl stop postfix && systemctl disable postfix

Configure rsyslogd and systemd journald

[root@bogon ~]# mkdir /var/log/journal
[root@bogon ~]# mkdir /etc/systemd/journald.conf.d
[root@bogon ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum disk usage 10G
SystemMaxUse=10G

# Maximum size of a single log file 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF
[root@bogon ~]# systemctl restart systemd-journald

Upgrade the system kernel (elrepo kernel-ml)

Note: the 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable

[root@bogon ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@bogon ~]# yum install https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm -y # if this step fails, just retry it a few times
[root@bogon ~]# yum --enablerepo=elrepo-kernel install kernel-ml -y
[root@bogon ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg # list the kernel boot entries
[root@bogon ~]# grub2-set-default 0
[root@bogon ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@bogon ~]# shutdown -r now
[root@k8smaster ~]# cat /sys/class/net/ens160/address # success if the MAC addresses differ across the three machines
[root@k8smaster ~]# cat /sys/class/dmi/id/product_uuid # success if the product UUIDs differ across the three machines

Install Docker

[root@k8s-master01 ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# sudo yum makecache fast
[root@k8s-master01 ~]# sudo yum -y install docker-ce-18.09.0-3.el7
[root@k8s-master01 ~]# mkdir /etc/docker

[root@k8s-master01 ~]# cat > /etc/docker/daemon.json  << EOF
{
   "exec-opts":["native.cgroupdriver=systemd"],
   "log-driver":"json-file",
   "log-opts":{
      "max-size":"100m"
     }
}
EOF

[root@k8s-master01 ~]# systemctl enable docker
[root@k8s-master01 ~]# systemctl restart docker
[root@k8s-master01 ~]# modprobe br_netfilter
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4 # check whether the IPVS and conntrack modules are loaded
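Since kube-proxy is configured for IPVS mode later on, lsmod only shows these modules if something has loaded them. A common approach is to persist them under /etc/sysconfig/modules - a sketch, with MODULES_FILE as an illustrative override and module names assuming a pre-4.19 kernel (on 4.19+ the conntrack module is nf_conntrack instead of nf_conntrack_ipv4):

```shell
#!/bin/bash
# Sketch: persist and load the kernel modules kube-proxy's IPVS mode uses.
MODULES_FILE="${MODULES_FILE:-/etc/sysconfig/modules/ipvs.modules}"
mkdir -p "$(dirname "$MODULES_FILE")"
cat > "$MODULES_FILE" << 'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 "$MODULES_FILE"
bash "$MODULES_FILE" 2>/dev/null || true   # some modules may be absent on newer kernels
```

After running it, the lsmod check above should show the ip_vs entries.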

Start the HAProxy and Keepalived containers on the primary master (master01)

Note: I am using Wise2C's high-availability solution here; you can of course build your own HA setup instead

Wise2C HA: https://www.fons.com.cn/69731.html

[root@k8s-master01 ~]# git clone https://github.com/wise2c-devops/haproxy-k8s.git
[root@k8s-master01 ~]# git clone https://github.com/wise2c-devops/keepalived-k8s.git
[root@k8s-master01 ~]# docker pull wise2c/haproxy-k8s
[root@k8s-master01 ~]# docker pull wise2c/keepalived-k8s
[root@k8s-master01 ~]# vim haproxy-k8s/haproxy.cfg
49 server k8s-master01 172.16.3.225:6443 # add just this one server entry for now and delete the rest


[root@k8s-master01 ~]# cat > haproxy-k8s/start-haproxy.sh << EOF 
#!/bin/bash
MasterIP1=172.16.3.225	
MasterIP2=172.16.3.226
MasterIP3=172.16.3.227
MasterPort=6443
docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \\
               -e MasterIP1=\$MasterIP1 \\
               -e MasterIP2=\$MasterIP2 \\
               -e MasterIP3=\$MasterIP3 \\
               -e MasterPort=\$MasterPort \\
               -v /root/haproxy-k8s/haproxy.cfg:/usr/local/haproxy/haproxy.cfg \\
               wise2c/haproxy-k8s
EOF
[root@k8s-master01 ~]# cat > keepalived-k8s/start-keepalived.sh  << EOF
#!/bin/bash
VIRTUAL_IP=172.16.3.200          # VIP address
INTERFACE=ens160                 # network interface name
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18
docker run -itd --restart=always --name=Keepalived-K8S \\
       --net=host --cap-add=NET_ADMIN \\
       -e VIRTUAL_IP=\$VIRTUAL_IP \\
       -e INTERFACE=\$INTERFACE \\
       -e CHECK_PORT=\$CHECK_PORT \\
       -e RID=\$RID \\
       -e VRID=\$VRID \\
       -e NETMASK_BIT=\$NETMASK_BIT \\
       -e MCAST_GROUP=\$MCAST_GROUP \\
       wise2c/keepalived-k8s
EOF
[root@k8s-master01 ~]# sh haproxy-k8s/start-haproxy.sh 
[root@k8s-master01 ~]# sh keepalived-k8s/start-keepalived.sh 
[root@k8s-master01 ~]# ip a 		# verify that Keepalived has brought up the VIP
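Besides checking the VIP with ip a, it helps to confirm that HAProxy actually accepts connections on port 6444. A minimal sketch using bash's built-in /dev/tcp redirection (the VIP and port are this lab's values):

```shell
#!/bin/bash
# Sketch: return 0 if host:port accepts a TCP connection within 2 seconds,
# using bash's /dev/tcp redirection (no extra tools required).
check_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
# VIP and HAProxy frontend port from the setup above
if check_port 172.16.3.200 6444; then
  echo "HAProxy is reachable on the VIP"
fi
```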


Install kubeadm on the masters (01-03)

[root@k8s-master01 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master01 ~]# yum -y install kubeadm-1.18.3 kubectl-1.18.3 kubelet-1.18.3
[root@k8s-master01 ~]# systemctl enable kubelet.service

Operations on master01:

1) Write a script to pull the components Kubernetes needs (run on masters 01-03)

[root@k8s-master01 ~]# cat > kubernetes-imagesPull.sh << EOF
#!/bin/bash
# "kubeadm config images list --kubernetes-version=v1.18.3" lists the required image versions
images=(
     kube-apiserver:v1.18.3
     kube-controller-manager:v1.18.3
     kube-scheduler:v1.18.3
     kube-proxy:v1.18.3
     pause:3.2
     etcd:3.4.3-0
     coredns:1.6.7	
)
for imageName in \${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/\${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/\${imageName} k8s.gcr.io/\${imageName}
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/\${imageName}
done
EOF
[root@k8s-master01 ~]# sh kubernetes-imagesPull.sh

2) Initialize the Kubernetes cluster

[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm-config.yaml # generate a template, then replace it with the customized config
[root@k8s-master01 ~]# cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.3.225
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.16.3.200:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
[root@k8s-master01 ~]# kubeadm init --config kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

3) On masters 02-03, run the command to join them to master01's control plane

[root@k8s-master02 ~]#   kubeadm join 172.16.3.200:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:f27a793702e8b28b13db72b22f0f276d35aea01082ac36d529f5304c649cc848 \
 --control-plane --certificate-key 669f546013ee7246fc43a16c10b94f995c2784c1207de5e66381907d75beee6c --v=2

When joining the cluster you can add --v=2 to see detailed output
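The token and hash in the join command come from kubeadm-init.log. If that log is lost, "kubeadm token create --print-join-command" prints a fresh join command, and the CA-cert hash can be recomputed from the cluster CA with openssl; a sketch of that recipe:

```shell
#!/bin/bash
# Sketch: recompute the value for --discovery-token-ca-cert-hash from the
# cluster CA certificate (SHA-256 of the DER-encoded public key).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* /sha256:/'
}
if [ -f /etc/kubernetes/pki/ca.crt ]; then
  ca_cert_hash /etc/kubernetes/pki/ca.crt
fi
```

For control-plane joins, a fresh certificate key can be uploaded again with "kubeadm init phase upload-certs --upload-certs".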

4) Check the cluster status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   5m49s   v1.18.3
k8s-master02   NotReady   master   2m47s   v1.18.3
k8s-master03   NotReady   master   2m52s   v1.18.3

5) Install the Flannel network

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master01 ~]# kubectl apply -f kube-flannel.yml

Note: if wget cannot fetch the file, add this entry to your local hosts file:
199.232.68.133 raw.githubusercontent.com

6) Check the cluster status again

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   5m49s   v1.18.3
k8s-master02   Ready    master   2m47s   v1.18.3
k8s-master03   Ready    master   2m52s   v1.18.3

Enable kubectl tab completion

[root@k8s-master01 ~]# yum -y install bash-completion 
[root@k8s-master01 ~]# echo "source /usr/share/bash-completion/bash_completion" >>  ~/.bashrc
[root@k8s-master01 ~]# echo 'source <(kubectl completion bash)' >>~/.bashrc
[root@k8s-master01 ~]# source ~/.bashrc

Modify the HAProxy configuration file and copy it to masters 02-03

[root@k8s-master01 ~]# vim ~/haproxy-k8s/haproxy.cfg
Add:
49   server k8s-master01 172.16.3.225:6443
50   server k8s-master02 172.16.3.226:6443
51   server k8s-master03 172.16.3.227:6443
[root@k8s-master01 ~]# docker rm -f HAProxy-K8S Keepalived-K8S 
[root@k8s-master01 ~]# sh haproxy-k8s/start-haproxy.sh 
[root@k8s-master01 ~]# sh keepalived-k8s/start-keepalived.sh 
[root@k8s-master01 ~]# scp -r /root/keepalived-k8s root@k8s-master02:/root
[root@k8s-master01 ~]# scp -r /root/keepalived-k8s root@k8s-master03:/root
[root@k8s-master01 ~]# scp -r /root/haproxy-k8s root@k8s-master02:/root
[root@k8s-master01 ~]# scp -r /root/haproxy-k8s root@k8s-master03:/root

2) Start Keepalived and HAProxy on masters 02-03

[root@k8s-master02 ~]# sh ~/keepalived-k8s/start-keepalived.sh 
[root@k8s-master02 ~]# sh ~/haproxy-k8s/start-haproxy.sh 

3) Reboot the machine that holds the VIP to verify it fails over automatically

[root@k8s-master01 ~]# ip a | grep '200'		# the VIP currently sits on master01
    inet 172.16.3.200/24 scope global ens160
[root@k8s-master01 ~]# shutdown -r now
[root@k8s-master03 ~]# ip a | grep '200'		# the VIP has moved to master03, so failover works
    inet 172.16.3.200/24 scope global ens160

Query the etcd cluster status

[root@k8s-master01 ~]# kubectl exec -it -n kube-system etcd-k8s-master01 -- etcdctl --endpoints="https://172.16.3.225:2379,https://172.16.3.226:2379,https://172.16.3.227:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key member list
aead762e6a51b25, started, k8s-master02, https://172.16.3.226:2380, https://172.16.3.226:2379, false
7ba8fef77f6c83aa, started, k8s-master03, https://172.16.3.227:2380, https://172.16.3.227:2379, false
9022f4e01ea3bcc9, started, k8s-master01, https://172.16.3.225:2380, https://172.16.3.225:2379, false	
[root@k8s-master01 ~]# kubectl exec -it -n kube-system etcd-k8s-master01 -- etcdctl --endpoints="https://172.16.3.225:2379,https://172.16.3.226:2379,https://172.16.3.227:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint health
https://172.16.3.227:2379 is healthy: successfully committed proposal: took = 40.270873ms
https://172.16.3.226:2379 is healthy: successfully committed proposal: took = 42.590612ms
https://172.16.3.225:2379 is healthy: successfully committed proposal: took = 45.71746ms

This completes the highly available Kubernetes deployment.
