1. Host Preparation
1.1 Host and Operating System
All hosts run CentOS 7u9 (CentOS 7.9).
1.2 Host Hardware
No. | Hostname | IP Address | CPU | Memory | Disk |
---|---|---|---|---|---|
1 | k8s-master1 | 192.168.1.200 | 2C | 2G | 100G |
2 | k8s-worker1 | 192.168.1.201 | 2C | 2G | 100G |
3 | k8s-worker2 | 192.168.1.202 | 2C | 2G | 100G |
1.3 Host Configuration
1.3.1 Hostname Configuration
On the master node:
hostnamectl set-hostname k8s-master1
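On the worker nodes, set the names from the hardware table above:
hostnamectl set-hostname k8s-worker1   # on 192.168.1.201
hostnamectl set-hostname k8s-worker2   # on 192.168.1.202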
1.3.2 Host IP Address Configuration
vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="f2c1c981-22ee-4e43-8209-2d6bf20ac6ca"
DEVICE="ens33"
ONBOOT="yes"
IPV6_PRIVACY="no"
IPADDR="192.168.1.200"
PREFIX="24"
GATEWAY="192.168.1.54"
DNS1=119.29.29.29
systemctl restart network
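On the worker hosts the same file is edited with IPADDR set to 192.168.1.201 and 192.168.1.202 respectively; after restarting the network, the address can be checked with:
ip addr show ens33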
1.3.3 Hostname and IP Address Resolution (hosts)
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.200 k8s-master1 master1
192.168.1.201 k8s-worker1 worker1
192.168.1.202 k8s-worker2 worker2
1.3.4 Firewall Configuration
Disable and stop firewalld:
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld
1.3.5 SELinux Configuration
The change below requires a reboot to take effect:
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sestatus
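Until that reboot, SELinux can also be switched to permissive mode for the current session (an extra step, not part of the original config change):
setenforce 0
getenforce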
1.3.6 Time Synchronization
yum -y install ntpdate
crontab -e
0 */1 * * * ntpdate time1.aliyun.com
crontab -l
ntpdate time1.aliyun.com
1.3.7 Upgrade the OS Kernel
Do not jump straight to a very new kernel, and do not stay on a very old one.
Import the ELRepo GPG key (used to verify the packages):
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the ELRepo YUM repository:
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Install the kernel-lt package (lt = long-term maintenance; ml = mainline, the newest kernel):
yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
Set the newly installed kernel as the default boot entry:
grub2-set-default 0
Regenerate the grub2 configuration file:
grub2-mkconfig -o /boot/grub2/grub.cfg
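Before rebooting, you can list the grub menu entries to confirm the new kernel sits at index 0 (a read-only check; the awk below just prints each menuentry title from grub.cfg with its index):
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg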
Reboot so the upgraded kernel takes effect.
After rebooting, verify that the running kernel is the new version:
uname -r
1.3.8 Kernel Forwarding and Bridge Filtering
Create the bridge-filtering and IP-forwarding configuration file:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Load the br_netfilter module:
modprobe br_netfilter
Check that it is loaded:
lsmod |grep br_netfilter
Create a module-loading script so br_netfilter is loaded automatically at boot:
vim /etc/sysconfig/modules/br_netfilter.modules
#!/bin/bash
modprobe br_netfilter
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
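With br_netfilter loaded, apply the sysctl settings immediately (they are also picked up from /etc/sysctl.d at boot):
sysctl --system   # reloads all settings, including /etc/sysctl.d/k8s.conf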
1.3.9 Install ipset and ipvsadm
Install the packages:
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
1.3.10 Disable the SWAP Partition
Temporarily disable swap:
swapoff -a
Permanently disable it by commenting out the swap entry in /etc/fstab:
vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Nov 12 23:08:40 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=90946160-91fa-4b5c-afbb-97e6e82341ea /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0    # commented out
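As an alternative to editing the file by hand, the swap line can be commented out with sed, and the result checked (swap should report 0 after swapoff -a); verify the file before rebooting:
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -m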
2. Container Runtime: containerd
2.1 containerd Preparation
2.1.1 Get the containerd Release Archive
Download cri-containerd:
wget https://github.com/containerd/containerd/releases/download/v1.7.8/cri-containerd-1.7.8-linux-amd64.tar.gz
Extract it to the root of the filesystem:
tar xf cri-containerd-1.7.8-linux-amd64.tar.gz -C /
2.1.2 Generate and Edit the containerd Configuration
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
# In the [plugins."io.containerd.grpc.v1.cri"] section, set:
sandbox_image = "registry.k8s.io/pause:3.9"   # k8s 1.28 defaults to pause 3.9
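The same edit can be scripted; and since kubelet is configured with the systemd cgroup driver later in this guide, many setups also set SystemdCgroup = true for the runc runtime in the same file (an additional step not listed in the original text):
# set the sandbox image in [plugins."io.containerd.grpc.v1.cri"]
sed -i 's#sandbox_image = .*#sandbox_image = "registry.k8s.io/pause:3.9"#' /etc/containerd/config.toml
# optional: match containerd's runc cgroup driver to the kubelet's systemd driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml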
2.1.3 Start containerd and Enable It at Boot
systemctl enable --now containerd   # --now also starts the service immediately
Verify the version:
containerd --version
crictl images
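crictl reads its endpoints from /etc/crictl.yaml; if it warns about the runtime endpoint, point it at the containerd socket (a common extra step, not part of the original text):
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF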
2.2 Replace runc
2.2.1 Build and Install libseccomp
wget https://github.com/opencontainers/runc/releases/download/v1.1.10/libseccomp-2.5.4.tar.gz
tar xf libseccomp-2.5.4.tar.gz
cd libseccomp-2.5.4
yum -y install gperf gcc make   # gcc and make are also needed for the build if not already installed
./configure
make && make install
find / -name "libseccomp.so"
2.2.2 Replace the runc Binary
wget https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64
which runc
rm -rf $(which runc)
mv runc.amd64 /usr/local/sbin/runc
chmod +x /usr/local/sbin/runc
runc
runc --version
3. Deploy the k8s Cluster with kubeadm
3.1 k8s YUM Repository Preparation
Official repository:
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
3.2 Install the k8s Cluster Packages
3.2.1 Install and Enable at Boot
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
3.2.2 Configure kubelet
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
3.3 Initialize the k8s Cluster
Run on the master node:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.200
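Optionally, the control-plane images can be pulled ahead of running init:
kubeadm config images pull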
3.3.1 Configure kubectl (admin kubeconfig)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
3.3.2 Join the Worker Nodes
The token and hash below will differ for your cluster; copy the join command printed by your own kubeadm init output.
Run on each worker node:
kubeadm join 192.168.1.200:6443 --token 5jz1bl.dt6etzj1m49646vd \
--discovery-token-ca-cert-hash sha256:5e15bad72866d0b00b3f287f21d7140831ee97f665dd5d63226f50bdca0894fa
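If the join command was lost or the token has expired, a fresh one can be printed on the master:
kubeadm token create --print-join-command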
Result:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane 110s v1.28.3
k8s-worker1 NotReady <none> 7s v1.28.3
k8s-worker2 NotReady <none> 4s v1.28.3
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5dd5756b68-9g67g 0/1 Pending 0 2m30s
coredns-5dd5756b68-nv68x 0/1 Pending 0 2m30s
etcd-k8s-master1 1/1 Running 0 2m44s
kube-apiserver-k8s-master1 1/1 Running 0 2m44s
kube-controller-manager-k8s-master1 1/1 Running 0 2m44s
kube-proxy-77m25 1/1 Running 0 2m30s
kube-proxy-hlk5h 1/1 Running 0 65s
kube-proxy-zjg9r 1/1 Running 0 62s
kube-scheduler-k8s-master1 1/1 Running 0 2m44s
4. Calico Network Plugin
4.1 Install the Calico Operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
kubectl get ns   # list namespaces
NAME              STATUS   AGE
default           Active   10m
kube-node-lease   Active   10m
kube-public       Active   10m
kube-system       Active   10m
tigera-operator   Active   26s
Wait until the tigera-operator pod is Running; this operator is what actually deploys Calico.
kubectl get pods -n tigera-operator
NAME READY STATUS RESTARTS AGE
tigera-operator-597bf4ddf6-nstgk 1/1 Running 0 45s
4.2 Download and Edit the Calico Custom Resources
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
vim custom-resources.yaml
# Change the cidr field (192.168.0.0/16 by default) so it matches --pod-network-cidr:
cidr: 10.244.0.0/16
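The same edit can be made non-interactively, assuming the file still carries the default value:
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml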
kubectl create -f custom-resources.yaml
kubectl get ns
kubectl get pods -n calico-system
watch kubectl get pods -n calico-system   # watch until all pods are Ready
Check that the CoreDNS pods are now Running and have pod IPs assigned:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5dd5756b68-9g67g 1/1 Running 0 23m
coredns-5dd5756b68-nv68x 1/1 Running 0 23m
etcd-k8s-master1 1/1 Running 0 23m
kube-apiserver-k8s-master1 1/1 Running 0 23m
kube-controller-manager-k8s-master1 1/1 Running 0 23m
kube-proxy-77m25 1/1 Running 0 23m
kube-proxy-hlk5h 1/1 Running 0 21m
kube-proxy-zjg9r 1/1 Running 0 21m
kube-scheduler-k8s-master1 1/1 Running 0 23m
kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5dd5756b68-9g67g 1/1 Running 0 24m 10.244.159.131 k8s-master1 <none> <none>
coredns-5dd5756b68-nv68x 1/1 Running 0 24m 10.244.159.129 k8s-master1 <none> <none>
etcd-k8s-master1 1/1 Running 0 24m 192.168.1.200 k8s-master1 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 0 24m 192.168.1.200 k8s-master1 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 0 24m 192.168.1.200 k8s-master1 <none> <none>
kube-proxy-77m25 1/1 Running 0 24m 192.168.1.200 k8s-master1 <none> <none>
kube-proxy-hlk5h 1/1 Running 0 22m 192.168.1.201 k8s-worker1 <none> <none>
kube-proxy-zjg9r 1/1 Running 0 22m 192.168.1.202 k8s-worker2 <none> <none>
kube-scheduler-k8s-master1 1/1 Running 0 24m 192.168.1.200 k8s-master1 <none> <none>
Check DNS resolution through the cluster DNS service:
kubectl get svc -n kube-system
yum -y install bind-utils   # provides the dig command
dig -t a www.baidu.com @10.96.0.10
;; ANSWER SECTION:
www.baidu.com. 5 IN A 198.18.0.169
Resolution succeeded.
4.3 Verify the Cluster Is Usable
Create an nginx-pod.yml file to run a test pod:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
Create the pod with kubectl apply:
kubectl apply -f nginx-pod.yml
kubectl get pods nginx -o wide
watch kubectl get pods nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 3m3s 10.244.194.67 k8s-worker1 <none> <none>
Once the pod is Ready, test it with curl:
curl 10.244.194.67
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Test external access. This requires a NodePort Service for the nginx pod, which is not created earlier in this document; a sketch of one follows.
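A minimal NodePort Service matching the pod's app: nginx label and the 30080 port shown in the listing below (an assumed example, not taken from the original text):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF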
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
nginx-service NodePort 10.104.74.104 <none> 80:30080/TCP 21d
From the host machine, open <control-plane IP>:30080, i.e. 192.168.1.200:30080, in a browser.
4.4 Check Cluster Health
kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.200:6443
CoreDNS is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To manage the cluster from another node, copy the admin kubeconfig there:
mkdir -p $HOME/.kube && cd $HOME/.kube
scp master1:/root/.kube/config .
With this file in place, kubectl on that node can reach the apiserver, so the cluster can be managed from there as well.
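Verify from that node:
kubectl get nodes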