Building a Multi-Node Kubernetes Cluster

This article details how to set up a Kubernetes (k8s) cluster on four virtual machines, covering environment initialization, disabling the firewall and SELinux, time synchronization, disabling the swap partition, configuring the network and IPVS, installing Docker CE, and deploying and configuring the k8s components. The cluster is initialized with kubeadm, Master and Node nodes are joined, and finally the Calico network plugin is deployed.

A while back I was working on a k8s + Jenkins + Harbor environment; here I'd like to share the setup process.

This post focuses only on building the K8s cluster itself.

Configuration: four virtual machines, with IP addresses ending in 71, 72, 73, and 74 (10.10.11.x).

71 and 74 are Master nodes; 72 and 73 are Node (worker) nodes. Now let's start building.

Initializing the Environment

1. Set the hostnames. The four servers are k8smaster1, k8smaster2, k8snode1, and k8snode2.

hostnamectl set-hostname <hostname>   # persistent; the recommended way
vim /etc/hostname                     # or edit the file directly (applies after reboot)
hostname <hostname>                   # temporary; only for the current session
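For this cluster the commands would look like the following (which IP gets which name is my assumption, based on the master/node roles listed above):

hostnamectl set-hostname k8smaster1   # on 10.10.11.71
hostnamectl set-hostname k8snode1     # on 10.10.11.72
hostnamectl set-hostname k8snode2     # on 10.10.11.73
hostnamectl set-hostname k8smaster2   # on 10.10.11.74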

2. Disable the firewall
 

[root@XXX ~]# systemctl stop firewalld 
[root@XXX ~]# systemctl disable firewalld 
# Confirm it is no longer running
[root@XXX ~]# firewall-cmd --state
not running

 

3. Disable SELinux

 

[root@XXX ~]# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@master local]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
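The file edit above only takes effect after a reboot; to drop enforcement immediately for the current session (a standard companion step, not in the original notes):

setenforce 0   # switch SELinux to permissive mode right away
getenforce     # should now print Permissive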

 

4. Set up cluster time synchronization

# Set the time zone
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
vim /etc/sysconfig/clock
ZONE="Asia/Shanghai"
UTC=false
ARC=false

# Install ntp and enable it at boot
yum install -y ntp
systemctl start ntpd
systemctl enable ntpd

# Sync the clock once at boot
vim /etc/rc.d/rc.local
/usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w

# Set up an hourly cron job
crontab -e
0 */1 * * * ntpdate ntp1.aliyun.com > /dev/null 2>&1; /sbin/hwclock -w
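Two details worth adding here (standard CentOS 7 behavior, not in the original notes): rc.local only runs at boot if it is executable, and ntpq can confirm that ntpd is reaching its upstream servers.

chmod +x /etc/rc.d/rc.local   # otherwise the ntpdate line above never runs at boot
ntpq -p                       # lists the NTP peers ntpd is synchronizing with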

5. Disable the swap partition

# Edit /etc/fstab and comment out the swap entry
[root@node2 local]# vim /etc/fstab
# /etc/fstab
# Created by anaconda on Wed Sep 16 18:50:24 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=71a3a2c7-1e60-4bc6-b641-8e82b3d1e79b /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# Save and exit

# Check with free: swap is still active because we have not rebooted yet
[root@node2 local]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         138        3456          11         175        3421
Swap:          2047           0        2047

# Reboot
[root@node2 local]# reboot

# Check again after the reboot
[root@node1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         134        3448          11         187        3419
Swap:             0           0           0
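Alternatively, swap can be turned off immediately without rebooting (a common shortcut, not in the original notes); the fstab edit above still keeps it off across reboots:

swapoff -a   # disables all swap devices at once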

 

6. Enable bridge filtering

# Add bridge filtering and IP forwarding settings
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv4.ip_forward = 1 
vm.swappiness = 0

# Load the br_netfilter module
[root@master ~]# modprobe br_netfilter
[root@master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  1 br_netfilter

# Apply the new sysctl settings
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
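To make br_netfilter load automatically after a reboot as well, one option (assuming systemd's modules-load.d mechanism; this step is my addition) is:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF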

7. Enable IPVS

# Install ipset and ipvsadm
[root@master ~]# yum -y install ipset ipvsadm

# Add the modules that need loading (paste the whole block below into the shell)
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash 
modprobe -- ip_vs 
modprobe -- ip_vs_rr 
modprobe -- ip_vs_wrr 
modprobe -- ip_vs_sh 
modprobe -- nf_conntrack_ipv4 
EOF

# Verify
[root@master ~]# ll /etc/sysconfig/modules/
total 4
-rw-r--r-- 1 root root 130 Nov  4 15:22 ipvs.modules

# Make the script executable
[root@master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules 

# Run it
[root@master ~]# sh /etc/sysconfig/modules/ipvs.modules

# Verify one of the modules
[root@master ~]# lsmod | grep ip_vs_rr
ip_vs_rr               12600  0 
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
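One caveat (my addition): on kernel 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack, so if the modprobe line above fails, load the renamed module instead:

modprobe -- nf_conntrack   # replaces nf_conntrack_ipv4 on kernel >= 4.19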

 

 


Installing Docker CE

Follow the installation steps from the official documentation.

# Remove any old Docker packages
$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
# Install the yum utilities and add the Docker repo
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# Install the latest version of Docker, or list the available versions first
$ sudo yum install docker-ce docker-ce-cli containerd.io
$ yum list docker-ce --showduplicates | sort -r

docker-ce.x86_64  3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64  3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64  18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64  18.06.0.ce-3.el7                    docker-ce-stable
# Then install a specific version from the list
$ sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
$ sudo systemctl start docker
$ sudo systemctl enable docker
# Test the installation
$ sudo docker run hello-world
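Since the kubelet is configured for the systemd cgroup driver later on, it is common practice (my addition, not in the original steps) to point Docker at the same driver; otherwise kubeadm may warn about mismatched cgroup drivers:

$ sudo mkdir -p /etc/docker
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ sudo systemctl restart docker
$ docker info | grep -i cgroup   # should report: Cgroup Driver: systemd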

 


Deploying the k8s Packages and Configuration

All k8s cluster nodes need the packages below. The default yum repo is Google's; the Alibaba Cloud mirror can be used instead.

Required packages (versions as given in the original requirements table; the repo used below actually installs 1.19.x):
- kubeadm: initializes and manages the cluster, version 1.17.2
- kubelet: receives instructions from the api-server and manages the pod lifecycle, version 1.17.2
- kubectl: the cluster command-line management tool, version 1.17.2
- docker-ce: 18.06.3
# Google yum repo
[kubernetes] 
name=Kubernetes 
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

# Alibaba Cloud yum repo
[kubernetes]
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/  
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg  
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

# Create the repo file (on every machine)
[root@master ~]# vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/  
enabled=1 
gpgcheck=1 
repo_gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg  
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

:wq to save and exit


# Check that the yum repo works (on every machine)
[root@master ~]# yum list | grep kubeadm
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
y  
kubeadm.x86_64                              1.19.3-0                   kubernetes

# Install
[root@master ~]# yum -y install kubeadm kubelet kubectl
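Since the cluster is initialized with v1.19.4 below, you may prefer to pin the package versions explicitly rather than taking the latest (a hypothetical variant of the command above; the exact build suffix in the repo may differ):

[root@master ~]# yum -y install kubeadm-1.19.4 kubelet-1.19.4 kubectl-1.19.4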

The main thing to configure is the kubelet; if you skip this, the k8s cluster may fail to start.

# To keep the kubelet's cgroup driver consistent with the one Docker uses, edit the file below.
[root@XXX ~]# vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

# Enable kubelet at boot. Note: do NOT start it manually here; kubeadm starts it during initialization
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Initializing the Cluster

kubeadm init --kubernetes-version=v1.19.4 \
    --apiserver-advertise-address=10.10.11.71 \
    --control-plane-endpoint 10.10.11.71:6443 \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=192.168.0.0/16



Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.10.11.71:6443 --token c5arar.09aoifdbmbwykrc8 \
    --discovery-token-ca-cert-hash sha256:80fbd0fd738febd721d98f4881a19fbe6c5f74e044bed65ee4cac35fd1c1c815 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.11.71:6443 --token c5arar.09aoifdbmbwykrc8 \
    --discovery-token-ca-cert-hash sha256:80fbd0fd738febd721d98f4881a19fbe6c5f74e044bed65ee4cac35fd1c1c815 

Run the commands as instructed by the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Applying the Calico Resource Manifest

cd /usr/local/k8s/reslist
kubectl apply -f calico.yaml
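This assumes calico.yaml was downloaded into /usr/local/k8s/reslist beforehand; if you do not have it yet, it can be fetched from the Calico project first (the exact URL and version are my assumption and may vary):

wget https://docs.projectcalico.org/manifests/calico.yaml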

kubectl get nodes   # nodes should report Ready once the Calico pods are running

 

Adding the Second Master Node

# On the second master, copy the certificates and admin.conf from the first master
scp root@10.10.11.71:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/
scp root@10.10.11.71:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/
scp root@10.10.11.71:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/
scp root@10.10.11.71:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/
scp root@10.10.11.71:/etc/kubernetes/admin.conf /etc/kubernetes/
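The destination directories must already exist on the new master; if they don't, create them first (my addition):

mkdir -p /etc/kubernetes/pki/etcd   # run before the scp commands above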

 kubeadm join 10.10.11.71:6443 --token c5arar.09aoifdbmbwykrc8 \
    --discovery-token-ca-cert-hash sha256:80fbd0fd738febd721d98f4881a19fbe6c5f74e044bed65ee4cac35fd1c1c815 \
    --control-plane 

Adding the Worker Nodes

kubeadm join 10.10.11.71:6443 --token c5arar.09aoifdbmbwykrc8 \
    --discovery-token-ca-cert-hash sha256:80fbd0fd738febd721d98f4881a19fbe6c5f74e044bed65ee4cac35fd1c1c815 
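If the bootstrap token from the init output has expired (tokens are valid for 24 hours by default), generate a fresh join command on the first master:

kubeadm token create --print-join-command   # prints a ready-to-use kubeadm join line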

 
