k8s Cluster Setup
Deployment Requirements
Before you begin, the machines used for the Kubernetes cluster must meet the following conditions:
- At least 3 machines, running CentOS 7 or later
- Hardware: 2 GB RAM or more, 2 CPUs or more, 20 GB of disk or more
- Full network connectivity between all machines in the cluster
- Internet access, for pulling images
- Swap disabled
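The hardware points above can be checked quickly from a shell before going further; a minimal read-only sketch (thresholds taken from the list above, safe to run on any Linux host):

```shell
# Read-only sanity check of the requirements above.
cpus=$(nproc)                                          # CPU count
mem_mb=$(free -m | awk '/^Mem:/{print $2}')            # total RAM in MB
swaps=$(swapon --show --noheadings 2>/dev/null | wc -l) # active swap devices
echo "CPUs: $cpus (need >= 2)"
echo "RAM:  ${mem_mb} MB (need >= 2048)"
echo "Active swap devices: $swaps (need 0)"
```

Run it on every node; any value below the stated threshold should be fixed before installing anything.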
Deployment Environment
Role | IP | OS |
---|---|---|
master | 192.168.171.142 | CentOS 8 |
node1 | 192.168.171.133 | CentOS 8 |
node2 | 192.168.171.150 | CentOS 8 |
Preparation
Disable the firewall:
# systemctl disable --now firewalld
Disable SELinux (permanent, takes effect after a reboot; run setenforce 0 to switch to permissive immediately):
# sed -i 's/enforcing/disabled/' /etc/selinux/config
Disable swap:
# vim /etc/fstab
Comment out the swap entry
Set the hostname:
# hostnamectl set-hostname <hostname>
Remember to reboot after completing these steps so they all take effect
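The fstab edit above can also be done non-interactively. A sketch, demonstrated on a temporary copy with made-up entries; to apply it for real, point the same sed at /etc/fstab (as root) and run swapoff -a for the running system:

```shell
# Comment out the swap entry in an fstab-style file non-interactively.
# The file contents here are an illustrative stand-in for a real /etc/fstab.
fstab=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 /                       xfs  defaults 0 0' \
  '/dev/mapper/cl-swap none               swap defaults 0 0' > "$fstab"
# Prefix any uncommented line containing a whitespace-delimited "swap" field with '#'.
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' "$fstab"
cat "$fstab"
```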
Configure host-name resolution; do this on every node:
# vim /etc/hosts
192.168.171.142 k8s-master
192.168.171.133 k8s-node1
192.168.171.150 k8s-node2
Pass bridged IPv4 traffic to the iptables chains; do this on every node:
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    # apply
// The output of this command shows whether the settings took effect:
[root@k8s-master ~]# sysctl --system
...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
Note: the net.bridge.* keys only exist while the br_netfilter kernel module is loaded. If sysctl reports them as missing, run modprobe br_netfilter first, and persist the module in /etc/modules-load.d/ so it survives reboots.
// Set up time synchronization; install the service on every host:
# dnf -y install chrony
# vim /etc/chrony.conf
pool time1.aliyun.com iburst
# systemctl enable --now chronyd
// On the master, generate a key pair and distribute it to every node, including the master itself, so all connections are passwordless:
# ssh-keygen -t rsa
# ssh-copy-id k8s-master
# ssh-copy-id k8s-node1
# ssh-copy-id k8s-node2
// Verify that the clocks are in sync:
# for i in k8s-master k8s-node1 k8s-node2;do ssh root@$i 'date';done
Mon Nov 14 06:24:01 EST 2022
Mon Nov 14 06:24:01 EST 2022
Mon Nov 14 06:24:02 EST 2022
This completes the preparation work.
Install Docker on All Nodes
// Change to /etc/yum.repos.d/ and download the Docker repo file with wget:
# cd /etc/yum.repos.d/
# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
// Then install with dnf, and start Docker once the installation finishes:
[root@k8s-master ~]# dnf -y install docker-ce
[root@k8s-master ~]# systemctl enable --now docker
// Next, configure a registry mirror and a few other daemon settings:
# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
// Finally, restart Docker:
# systemctl restart docker
Note: all of the steps above must be executed on every node.
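A malformed daemon.json stops the Docker daemon from starting at all, so it is worth syntax-checking the file before restarting. A sketch using a temporary copy and Python's built-in JSON parser (assumed to be installed); point the check at /etc/docker/daemon.json to verify the real file:

```shell
# Write the daemon.json from above to a temp file and validate its syntax.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
if python3 -m json.tool "$tmp" > /dev/null; then
  result="valid JSON"
else
  result="INVALID JSON"
fi
echo "daemon.json: $result"
```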
Configure the Kubernetes Repository
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
Because Kubernetes releases frequently, it is best to pin a version. The command below installs whatever is current (v1.25.4 at the time of writing); to pin explicitly, append the version, e.g. dnf -y install kubeadm-1.25.4 kubelet-1.25.4 kubectl-1.25.4.
// Install the three components on every node:
# dnf -y install kubeadm kubectl kubelet
// Then enable kubelet at boot. Only enable it; do not start it now (no --now flag). Before kubeadm generates its configuration, kubelet cannot start successfully; it is brought up automatically during cluster initialization.
# systemctl enable kubelet
containerd Configuration
To ensure that cluster initialization and joining succeed later, adjust containerd's configuration file /etc/containerd/config.toml. Do this on every node.
// Generate a default configuration file:
# containerd config default > /etc/containerd/config.toml
// Edit the file and replace the sandbox_image line with the following:
# vim /etc/containerd/config.toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
# systemctl restart containerd
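The manual edit above can also be scripted. A sketch, shown on a minimal temporary stand-in for the relevant config.toml section (the default image name varies by containerd version); point the same sed at /etc/containerd/config.toml to apply it for real:

```shell
# Replace whatever sandbox_image the default config specifies with the Aliyun pause image.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri"]' \
  '  sandbox_image = "registry.k8s.io/pause:3.6"' > "$cfg"
sed -ri 's|^([[:space:]]*sandbox_image = ).*|\1"registry.aliyuncs.com/google_containers/pause:3.6"|' "$cfg"
grep sandbox_image "$cfg"
```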
Deploy the Kubernetes Master
Run the following on the master node.
// Initialize Kubernetes on the master. --apiserver-advertise-address is the master's own IP; the two CIDRs below are defaults and can be left as they are:
# kubeadm init \
  --apiserver-advertise-address=192.168.171.142 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.25.4 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
// This takes a while. Once initialization completes, save its output to a file (here named k8s); it is needed again later:
[root@k8s-master ~]# cat k8s
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.171.142:6443 --token de55ek.udga9jaamedgm7ag \
--discovery-token-ca-cert-hash sha256:6a6d899f2a0552f5ca12359d7e3f94ed17b20a8b48ccfd96815878125a29c408
// After initialization, set the KUBECONFIG environment variable exactly as the output above instructs:
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
// Verify that it is set. Note that export only applies to the current shell; add the line to ~/.bash_profile to make it permanent.
[root@k8s-master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
Install the Pod Network Add-on
// The manifest for this add-on comes from https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# vi kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
...
// Apply the manifest:
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Join the Kubernetes Nodes
Run the following on node1 and node2.
To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed:
// The join command is in the file saved earlier:
[root@k8s-master ~]# cat k8s
kubeadm join 192.168.171.142:6443 --token de55ek.udga9jaamedgm7ag \
    --discovery-token-ca-cert-hash sha256:6a6d899f2a0552f5ca12359d7e3f94ed17b20a8b48ccfd96815878125a29c408
If the token has expired (the default lifetime is 24 hours), generate a fresh join command on the master with kubeadm token create --print-join-command.
Test the Kubernetes Cluster
// After the nodes have joined, verify that every node is connected:
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   13m   v1.25.4
k8s-node1    Ready    <none>          23s   v1.25.4
k8s-node2    Ready    <none>          16s   v1.25.4
// Then create a pod in the cluster to confirm that workloads run normally, for example an nginx deployment:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pod,svc