I. Prepare the environment on all K8s nodes
Prepare four clean CentOS hosts. Docker must be deployed on each of them in advance; harbor250 hosts the Harbor registry, and the client TLS certificates are distributed to the other hosts.
- Unified hostnames:
- master231 10.0.0.231 - 2C 4G
- worker232 10.0.0.232 - 2C 4G
- worker233 10.0.0.233 - 2C 4G
- harbor250 10.0.0.250 - 2C 2G 50G+
1. Prepare the virtual machine operating systems
Reference:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
2. Disable the swap partition
# Disable at runtime
swapoff -a && sysctl -w vm.swappiness=0
# Disable persistently in the config file
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
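To confirm swap is actually off, a quick check (standard util-linux tools, nothing beyond what a stock CentOS install ships):
# Both commands should report no active swap.
free -h | grep -i swap
swapon --show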
3. Ensure each node's MAC address and product_uuid are unique
ifconfig ens33 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
Tip:
Physical hardware generally has unique addresses, but some virtual machines may end up with duplicates.
Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique on every node, the installation may fail. A quick way to compare them across nodes is sketched below.
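A minimal sketch for the comparison, assuming passwordless root SSH between the nodes is already set up:
# Print hostname, MAC and product_uuid of every node side by side.
for ip in 10.0.0.231 10.0.0.232 10.0.0.233; do
    ssh root@$ip "hostname; ip link show ens33 | awk '/link\/ether/{print \$2}'; cat /sys/class/dmi/id/product_uuid"
done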
4. Check connectivity between nodes
In short, verify that all nodes of your K8s cluster can reach each other; the ping command is enough, for example:
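A small loop over all four hosts (IPs taken from the host list above):
for ip in 10.0.0.231 10.0.0.232 10.0.0.233 10.0.0.250; do
    ping -c 1 -W 1 $ip &> /dev/null && echo "$ip OK" || echo "$ip UNREACHABLE"
done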
5. Allow iptables to see bridged traffic
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
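The modules-load.d file only takes effect at boot; load the module now and verify the sysctls applied (the official kubeadm install guide does the same):
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables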
6. Check that the required ports are free
Reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/
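A spot check for the documented control-plane ports (6443 API server, 2379-2380 etcd, 10250 kubelet, 10257 controller-manager, 10259 scheduler); no output means the ports are free:
ss -tlnp | grep -E ':(6443|2379|2380|10250|10257|10259)\b'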
7. Deploy Docker
Omitted (done in advance).
8. Disable the firewall
systemctl disable --now firewalld
9. Disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config
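The sed change only takes effect after a reboot; to switch SELinux to permissive mode immediately (as the official kubeadm docs do):
setenforce 0
getenforce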
10. Configure /etc/hosts name resolution
cat >> /etc/hosts <<'EOF'
10.0.0.231 master231
10.0.0.232 worker232
10.0.0.233 worker233
10.0.0.250 harbor.linuxmc.com
EOF
cat /etc/hosts
11. Create the custom certificate directory on all nodes
mkdir -pv /etc/docker/certs.d/harbor.linuxmc.com
II. Install Harbor
1. Download the Harbor package
[root@harbor250 ~]# wget https://github.com/goharbor/harbor/releases/download/v2.8.1/harbor-offline-installer-v2.8.1.tgz
2. Extract the Harbor package (here the pre-packaged course tarball; if you downloaded the official release above, extract harbor-offline-installer-v2.8.1.tgz instead)
[root@harbor250 ~]# tar xf linuxmc-harbor.tar.gz -C /linuxmc/softwares/
3. Install Harbor
[root@harbor250 ~]# cd /linuxmc/softwares/harbor/
[root@harbor250 harbor]#
[root@harbor250 harbor]# ./install.sh
4. Push the client certificate to all K8s nodes
[root@harbor250 harbor]# scp certs/custom/client/* master231:/etc/docker/certs.d/harbor.linuxmc.com/
[root@harbor250 harbor]#
[root@harbor250 harbor]# scp certs/custom/client/* worker232:/etc/docker/certs.d/harbor.linuxmc.com/
[root@harbor250 harbor]#
[root@harbor250 harbor]# scp certs/custom/client/* worker233:/etc/docker/certs.d/harbor.linuxmc.com/
5. Pick any K8s node and test that Harbor is reachable
[root@master231 ~]# docker login -u admin -p 1 harbor.linuxmc.com
.....
Login Succeeded
[root@master231 ~]#
III. Install kubeadm, kubelet and kubectl on all nodes
1. The following packages must be installed on every machine:
kubeadm:
    the command that bootstraps the cluster.
kubelet:
    the agent that runs on every node in the cluster and starts Pods and containers.
kubectl:
    the command-line tool for talking to the cluster.
kubeadm does not install or manage kubelet or kubectl for you, so you must ensure their versions match the control-plane (master) components installed via kubeadm. Otherwise you risk version skew, which can lead to unexpected errors and problems.
That said, a one-minor-version skew between the control plane and the kubelet is supported, but the kubelet version may never be newer than the API server version. For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 API server, but not the other way around.
2. Install kubeadm, kubelet and kubectl on all nodes
1>. Configure the yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
2>. List the available kubeadm versions (when you install K8s, keep all component versions identical!)
yum -y list kubeadm --showduplicates | sort -r
3>. Install the kubeadm, kubelet and kubectl packages
# yum -y install kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0    # online install
# tar xf linuxmc-k8s-1.23.17.tar.gz && yum -y localinstall k8s-1.23.17/*.rpm    # offline install from the pre-packaged RPMs
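To keep yum from silently upgrading these pinned versions later, the official kubeadm install guide suggests excluding them in the repo file; a minimal sketch (deliberate upgrades then need yum --disableexcludes=kubernetes):
echo "exclude=kubelet kubeadm kubectl" >> /etc/yum.repos.d/kubernetes.repo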
4>. Enable the kubelet service (it is normal for it to fail right now: its config file does not exist yet, so it keeps restarting and will recover once the cluster is initialized. This step can be skipped, but enabling it at boot is recommended.)
systemctl enable --now kubelet
systemctl status kubelet
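Afterwards, verify that all three components report the same version (v1.23.17 here):
kubeadm version -o short
kubelet --version
kubectl version --client --short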
Reference:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/
IV. Initialize the master node
(1) Initialize the master node with kubeadm
[root@master231 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=linuxmc.com
Parameter notes (an equivalent config-file form is sketched after this list):
--kubernetes-version:
    version of the K8s control-plane components.
--image-repository:
    image registry from which the control-plane component images are pulled.
--pod-network-cidr:
    CIDR for the Pod network.
--service-cidr:
    CIDR for Services.
--service-dns-domain:
    DNS domain for Services; defaults to "cluster.local" if not specified.
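The same settings can also be captured in a kubeadm configuration file and passed with --config; a sketch (the file name kubeadm-init.yaml is arbitrary):
cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.17
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.100.0.0/16
  serviceSubnet: 10.200.0.0/16
  dnsDomain: linuxmc.com
EOF
# kubeadm init --config kubeadm-init.yaml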
When kubeadm initializes the cluster, you will see output like the following:
[init]
    the K8s version being initialized.
[preflight]
    pre-flight work for installing the cluster, e.g. pulling images; how long this takes depends on your network speed.
[certs]
    generates the certificates, stored in "/etc/kubernetes/pki" by default.
[kubeconfig]
    generates the cluster's default kubeconfig files, stored in "/etc/kubernetes" by default.
[kubelet-start]
    starts the kubelet;
    environment variables are written to "/var/lib/kubelet/kubeadm-flags.env",
    the config file is written to "/var/lib/kubelet/config.yaml".
[control-plane]
    uses the static manifest directory; the default manifests live in "/etc/kubernetes/manifests".
    This step creates static Pods for "kube-apiserver", "kube-controller-manager" and "kube-scheduler".
[etcd]
    creates the static Pod for etcd; its default manifest also lives in "/etc/kubernetes/manifests".
[wait-control-plane]
    waits for the kubelet to start the static Pods from the manifest directory "/etc/kubernetes/manifests".
[apiclient]
    waits for all control-plane components to become healthy.
[upload-config]
    creates a ConfigMap named "kubeadm-config" in the "kube-system" namespace.
[kubelet]
    creates a ConfigMap named "kubelet-config-1.23" in the "kube-system" namespace, containing the cluster's kubelet configuration.
[upload-certs]
    skipped on this node; see "--upload-certs" for details.
[mark-control-plane]
    marks the control plane by adding labels and taints, i.e. flags the node as a master.
[bootstrap-token]
    creates the bootstrap token, e.g. "kbkgsa.fc97518diw8bdqid".
    This token is used later when joining nodes to the cluster, and it also matters for RBAC.
[kubelet-finalize]
    updates the kubelet's certificate information.
[addons]
    installs add-ons, namely "CoreDNS" and "kube-proxy".
(2) Copy the admin kubeconfig, used to manage the K8s cluster
[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
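Alternatively, as kubeadm's own output notes for the root user, you can point KUBECONFIG at the admin file without copying it:
export KUBECONFIG=/etc/kubernetes/admin.conf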
(3) Check the control-plane components
[root@master231 ~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
[root@master231 ~]#
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
scheduler Healthy ok
[root@master231 ~]#
Bonus tip:
Export all local images in one shot:
docker save -o linuxmc-control-plane.tar.gz `docker images | awk 'NR>1{print $1":"$2}'`
V. Join all worker nodes to the K8s cluster
1. Join the cluster. Note that the TOKEN is unique to every installation! (Copy the join command printed by your own master's init output.)
[root@worker232 ~]# docker load -i linuxmc-worker-node.tar.gz
[root@worker232 ~]# kubeadm join 10.0.0.231:6443 --token pljeu9.oynjqw13j1m7xyvb \
--discovery-token-ca-cert-hash sha256:3e6dbfe55cbda949a1861fc223babf746c2742cf9069703bc163032e91d375ac
[root@worker233 ~]# docker load -i linuxmc-worker-node.tar.gz
[root@worker233 ~]# kubeadm join 10.0.0.231:6443 --token pljeu9.oynjqw13j1m7xyvb \
--discovery-token-ca-cert-hash sha256:3e6dbfe55cbda949a1861fc223babf746c2742cf9069703bc163032e91d375ac
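The bootstrap token expires after 24 hours by default; if it has expired, generate a fresh join command on the master:
[root@master231 ~]# kubeadm token create --print-join-command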
2. Check the cluster nodes from the master
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 22m v1.23.17
worker232 NotReady <none> 6m14s v1.23.17
worker233 NotReady <none> 6m10s v1.23.17
[root@master231 ~]#
[root@master231 ~]#
[root@master231 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 22m v1.23.17
worker232 NotReady <none> 6m15s v1.23.17
worker233 NotReady <none> 6m11s v1.23.17
[root@master231 ~]#
VI. Configure shell auto-completion
[root@master231 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
[root@master231 ~]#
[root@master231 ~]# kubectl
alpha auth cordon diff get patch run version
annotate autoscale cp drain help plugin scale wait
api-resources certificate create edit kustomize port-forward set
api-versions cluster-info debug exec label proxy taint
apply completion delete explain logs replace top
attach config describe expose options rollout uncordon
VII. Install the CNI plugin
1. Download the plugin manifest
[root@master231 ~]# mkdir -p /manifests/cni
[root@master231 ~]#
[root@master231 ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml -O /manifests/cni/kube-flannel.yml
2. Edit the manifest (the Network value must match the --pod-network-cidr used at kubeadm init)
[root@master231 ~]# vim /manifests/cni/kube-flannel.yml
...
Change
"Network": "10.244.0.0/16",
to:
"Network": "10.100.0.0/16",
3. Install the flannel plugin
[root@master231 ~]# kubectl apply -f /manifests/cni/kube-flannel.yml
4. Verify that the network plugin deployed successfully
[root@master231 ~]# kubectl -n kube-flannel get pods
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-btrqw 1/1 Running 0 5m14s
kube-flannel-ds-krq6g 1/1 Running 0 5m14s
kube-flannel-ds-mh2q7 1/1 Running 0 5m14s
[root@master231 ~]#
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 91m v1.23.17
worker232 Ready <none> 75m v1.23.17
worker233 Ready <none> 74m v1.23.17
[root@master231 ~]#
[root@master231 ~]# kubectl -n kube-flannel get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-btrqw 1/1 Running 0 18m 10.0.0.231 master231 <none> <none>
kube-flannel-ds-krq6g 1/1 Running 0 18m 10.0.0.232 worker232 <none> <none>
kube-flannel-ds-mh2q7 1/1 Running 0 18m 10.0.0.233 worker233 <none> <none>
[root@master231 ~]#
5. Push a test image to Harbor
[root@master231 ~]# docker login -u admin -p 1 harbor.linuxmc.com
[root@master231 ~]#
[root@master231 ~]# docker tag alpine harbor.linuxmc.com/linuxmc-linux/alpine
[root@master231 ~]#
[root@master231 ~]# docker push harbor.linuxmc.com/linuxmc-linux/alpine
Using default tag: latest
The push refers to repository [harbor.linuxmc.com/linuxmc-linux/alpine]
8d3ac3489996: Pushed
latest: digest: sha256:e7d88de73db3d3fd9b2d63aa7f447a10fd0220b7cbf39803c803f2af9ba256b3 size: 528
[root@master231 ~]#
6. Launch test Pods
[root@master231 ~]# mkdir /manifests/pod
[root@master231 ~]#
[root@master231 ~]# cat /manifests/pod/01-flannel-test.yaml
# API version for this resource type
apiVersion: v1
# Resource type
kind: Pod
# Metadata for the resource
metadata:
  # Name of the Pod
  name: pod-c1
# Desired state of the resource
spec:
  # Schedule this Pod on the worker232 node
  nodeName: worker232
  # Containers that run inside the Pod
  containers:
    # Container name
  - name: c1
    # Image name
    image: harbor.linuxmc.com/linuxmc-linux/alpine:latest
    # Equivalent to Dockerfile's ENTRYPOINT: the command the container runs
    command: ["tail","-f","/etc/hosts"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-c2
spec:
  nodeName: worker233
  containers:
  - name: c2
    image: harbor.linuxmc.com/linuxmc-linux/alpine:latest
    command: ["sleep","3600"]
[root@master231 ~]#
[root@master231 ~]# kubectl apply -f /manifests/pod/01-flannel-test.yaml
pod/pod-c1 created
pod/pod-c2 created
[root@master231 ~]#
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-c1 1/1 Running 0 8s 10.100.1.2 worker232 <none> <none>
pod-c2 1/1 Running 0 8s 10.100.2.2 worker233 <none> <none>
[root@master231 ~]#
[root@master231 ~]# kubectl exec pod-c1 -- ifconfig
eth0 Link encap:Ethernet HWaddr 5A:A2:BF:B2:97:35
inet addr:10.100.1.2 Bcast:10.100.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1222 (1.1 KiB) TX bytes:420 (420.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
[root@master231 ~]#
[root@master231 ~]# kubectl exec pod-c1 -- ping 10.100.2.2 -c 3
PING 10.100.2.2 (10.100.2.2): 56 data bytes
64 bytes from 10.100.2.2: seq=0 ttl=62 time=0.872 ms
64 bytes from 10.100.2.2: seq=1 ttl=62 time=0.318 ms
64 bytes from 10.100.2.2: seq=2 ttl=62 time=0.340 ms
--- 10.100.2.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.318/0.510/0.872 ms
[root@master231 ~]#
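With cross-node connectivity confirmed, the test Pods can be cleaned up:
[root@master231 ~]# kubectl delete -f /manifests/pod/01-flannel-test.yaml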