Kubernetes Environment Setup Guide

Server planning:

192.168.18.41 k8s-master
192.168.18.42 k8s-slave1
192.168.18.43 k8s-slave2

Linux version: CentOS 7.9.2009

Master node:

1. Append the following entries to the end of /etc/hosts:
192.168.18.41 k8s-master
192.168.18.42 k8s-slave1
192.168.18.43 k8s-slave2

2. Set the master hostname:
hostnamectl set-hostname k8s-master

3. Install containerd as the container runtime:
yum -y install containerd.io-1.6.20-3.1.el7.x86_64

4. Generate the default containerd config file and modify it:
containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

5. Restart containerd:
systemctl restart containerd
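It may also help to enable containerd on boot and confirm the service came back up cleanly (a small optional check; the exact version string will vary):

```shell
systemctl enable containerd             # start containerd automatically on boot
systemctl status containerd --no-pager  # confirm the service is active (running)
containerd --version                    # print the installed version
```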

6. Stop the firewall and disable SELinux:
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

7. Disable swap:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

8. Enable IP forwarding and bridge netfilter:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

modprobe br_netfilter  # load the br_netfilter module
lsmod | grep br_netfilter  # confirm the module is loaded
sysctl --system   # apply the sysctl settings
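After applying, the values can be spot-checked individually; each command should print the key followed by `= 1`:

```shell
sysctl net.ipv4.ip_forward                 # expect: net.ipv4.ip_forward = 1
sysctl net.bridge.bridge-nf-call-iptables  # expect: net.bridge.bridge-nf-call-iptables = 1
```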

9. Configure time synchronization:
yum install ntpdate -y
ntpdate time.windows.com

10. Enable IPVS:
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4 # make executable, run, and verify the modules are loaded
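Note that loading the IPVS modules alone does not switch kube-proxy into IPVS mode; that is controlled by the kube-proxy configuration. With kubeadm, one approach is to write a config file and pass it at init time (a sketch under assumptions: the file name kubeadm-config.yaml is arbitrary, and kubeadm does not allow mixing --config with most command-line flags, so the settings from step 14 would also need to move into this file):

```shell
# Write a KubeProxyConfiguration that selects IPVS mode
cat > kubeadm-config.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
# later: kubeadm init --config kubeadm-config.yaml
```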

11. Configure the Kubernetes YUM repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

12. Install the Kubernetes components (v1.26.0 here; choose another version if your situation requires it):
yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0

13. Start kubelet and enable it on boot (kubelet will restart repeatedly until kubeadm init runs; this is expected):
systemctl start kubelet
systemctl enable kubelet

14. Initialize the master node:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.26.0 --pod-network-cidr=10.244.0.0/16
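If the init succeeds, kubeadm prints follow-up instructions for making kubectl usable by a regular user; they look like the following (these paths are the kubeadm defaults):

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

The init output also prints the `kubeadm join` command needed in step 15 of the slave-node section; save it.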

Slave nodes:

1. Append the following entries to the end of /etc/hosts (run on both slave1 and slave2):
192.168.18.41 k8s-master
192.168.18.42 k8s-slave1
192.168.18.43 k8s-slave2

2. Set the hostnames (run the matching command on each node):
hostnamectl set-hostname k8s-slave1  # on slave1
hostnamectl set-hostname k8s-slave2  # on slave2

3. Install containerd as the container runtime (run on both slave1 and slave2):
yum -y install containerd.io-1.6.20-3.1.el7.x86_64

4. Generate the default containerd config file and modify it (run on both slave1 and slave2):
containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

5. Configure containerd to pull images from a private registry (on both slave1 and slave2) by editing /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."private-registry-address:port".tls]
          insecure_skip_verify = true  # skip TLS certificate verification
        [plugins."io.containerd.grpc.v1.cri".registry.configs."private-registry-address:port".auth]
          username = "XXXXXX"
          password = "XXXXXX"

6. Restart containerd (run on both slave1 and slave2):
systemctl restart containerd
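Once the CRI tools are available (crictl is installed alongside the kubelet packages in step 13, via cri-tools), a test pull can confirm the registry configuration works. The image name below is hypothetical; substitute your own registry address and image:

```shell
# Pull through the CRI, which uses the auth configured in config.toml
crictl pull registry.example.com:5000/app:latest
```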

7. Stop the firewall and disable SELinux (run on both slave1 and slave2):
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

8. Disable swap (run on both slave1 and slave2):
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

9. Enable IP forwarding and bridge netfilter (run on both slave1 and slave2):
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

modprobe br_netfilter  # load the br_netfilter module
lsmod | grep br_netfilter  # confirm the module is loaded
sysctl --system   # apply the sysctl settings

10. Configure time synchronization (run on both slave1 and slave2):
yum install ntpdate -y
ntpdate time.windows.com

11. Enable IPVS (run on both slave1 and slave2):
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4 # make executable, run, and verify the modules are loaded

12. Configure the Kubernetes YUM repository (run on both slave1 and slave2):
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

13. Install the Kubernetes components (run on both slave1 and slave2):
yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0

14. Start kubelet and enable it on boot (run on both slave1 and slave2; kubelet will restart repeatedly until kubeadm join runs, which is expected):
systemctl start kubelet
systemctl enable kubelet

15. Join slave1 and slave2 to the cluster (run on both slave1 and slave2):
kubeadm join XXXXXX:6443 --token XXXXXX \
        --discovery-token-ca-cert-hash sha256:XXXXXX

kubeadm join XXXXXX:6443 --token XXXXXXX \
        --discovery-token-ca-cert-hash sha256:XXXXXX

Replace the XXXXXX placeholders above with the actual values generated when the master node was initialized.
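If the original join command was lost, or the token has expired (kubeadm tokens are valid for 24 hours by default), a fresh one can be generated on the master:

```shell
# Prints a complete, ready-to-run kubeadm join command with a new token
kubeadm token create --print-join-command
```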

16. Network configuration (final step):

Flannel is used here; Calico is also an option.

Configure the flannel network so the master and slave nodes can communicate (do this after slave1 and slave2 have been set up).

Configuring flannel requires a resource manifest, which is not covered in detail here.
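A common way to deploy flannel is to apply the manifest published in the flannel project's repository (run on the master; verify the URL against the project before use, as it may move between releases):

```shell
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```

Flannel's default pod network is 10.244.0.0/16, which matches the --pod-network-cidr passed to kubeadm init in step 14; if you used a different CIDR, edit the manifest accordingly.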

17. Verify the cluster (run on the master node):
kubectl get nodes

When the master and slave nodes all show a STATUS of Ready, the cluster has been set up successfully.

Note: if a node does not show Ready, check the status of each component and inspect the relevant logs to diagnose the cause.
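For a node stuck in NotReady, these commands usually point at the cause (run the first two on the master; run journalctl on the affected node):

```shell
kubectl get pods -n kube-system -o wide  # are the CNI and kube-proxy pods running on that node?
kubectl describe node k8s-slave1         # node conditions and recent events
journalctl -u kubelet --no-pager -n 100  # last 100 kubelet log lines on the affected node
```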
