Planning ahead:
1. IP address plan
192.168.1.120 master1
192.168.1.130 master2
192.168.1.140 master3
192.168.1.121 node01
192.168.1.131 node02
192.168.1.141 node03
192.168.1.150 vip
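The plan above can be pushed into /etc/hosts on every node so the hostnames resolve consistently (a convenience sketch; the hostnames are the ones listed above):

```shell
# append the planned name/IP mapping on every node
cat >> /etc/hosts <<EOF
192.168.1.120 master1
192.168.1.130 master2
192.168.1.140 master3
192.168.1.121 node01
192.168.1.131 node02
192.168.1.141 node03
192.168.1.150 vip
EOF
```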
2. Base environment preparation (run on all nodes)
# Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Load the required kernel modules
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
overlay
EOF
modprobe br_netfilter && modprobe overlay
# Configure kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
# Configure the Aliyun Docker repo (Docker version 26.1.4)
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the pinned version
yum install -y docker-ce-26.1.4 docker-ce-cli-26.1.4 containerd.io
# Configure registry mirrors
mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://a308ikwv.mirror.aliyuncs.com","https://docker.lmirror.top"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["docker.lmirror.top"]
}
EOF
# "insecure-registries" trusts that registry without certificate verification
# (JSON does not allow inline comments, so the note lives out here)
# Start the Docker service
systemctl daemon-reload
systemctl enable docker && systemctl start docker
# Configure the Aliyun Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
# Install the pinned versions
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2 --disableexcludes=kubernetes
# Set the cgroup driver and enable kubelet (it will crash-loop until kubeadm init runs; that is expected)
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
systemctl enable kubelet && systemctl start kubelet
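The kubeadm config further down points the kubelet at unix:///var/run/cri-dockerd.sock, but cri-dockerd itself is never installed above. A minimal install sketch (the version number is an assumption; pick a current one from https://github.com/Mirantis/cri-dockerd/releases):

```shell
# Install cri-dockerd so kubeadm can drive Docker through a CRI socket
VER=0.3.14  # assumed version; check the releases page
wget https://github.com/Mirantis/cri-dockerd/releases/download/v${VER}/cri-dockerd-${VER}.amd64.tgz
tar xzf cri-dockerd-${VER}.amd64.tgz
install -m 0755 cri-dockerd/cri-dockerd /usr/local/bin/
# systemd units ship in the repo's packaging/systemd directory
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
# the unit file expects the binary in /usr/bin; point it at /usr/local/bin instead
sed -i 's|/usr/bin/cri-dockerd|/usr/local/bin/cri-dockerd|' cri-docker.service
mv cri-docker.service cri-docker.socket /etc/systemd/system/
systemctl daemon-reload
systemctl enable cri-docker --now
```

The socket unit listens on /run/cri-dockerd.sock, which /var/run/cri-dockerd.sock resolves to on systemd distros.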
3. HA components: keepalived + haproxy (master nodes only)
yum install -y keepalived haproxy
# Create the keepalived config (master1 shown; the other masters change state and priority)
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
router_id LVS_MASTER
}
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER # set BACKUP on master2 and master3
interface ens33 # change to the actual NIC name
virtual_router_id 51
priority 100 # master2: 90, master3: 80
advert_int 1
authentication {
auth_type PASS
auth_pass 42
}
virtual_ipaddress {
192.168.1.150/24 dev ens33 # must match the NIC above
}
track_script {
chk_haproxy
}
}
EOF
cat > /etc/haproxy/haproxy.cfg << EOF
global
log 127.0.0.1 local2
maxconn 4000
daemon
defaults
mode tcp
log global
option tcplog
retries 3
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
listen stats
bind *:1080
mode http
stats enable
stats uri /stats
frontend k8s-api
bind 192.168.1.150:6443
default_backend k8s-api
backend k8s-api
option tcp-check
balance roundrobin
server master1 192.168.1.120:6443 check inter 2000 rise 2 fall 3
server master2 192.168.1.130:6443 check inter 2000 rise 2 fall 3
server master3 192.168.1.140:6443 check inter 2000 rise 2 fall 3
EOF
# Start keepalived and haproxy
systemctl enable keepalived haproxy --now
ip addr show ens33 | grep 192.168.1.150 # run on all 3 masters; normally exactly one of them shows the VIP
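To verify failover actually works, stop haproxy on the node currently holding the VIP and watch it move (the chk_haproxy track script drops that node's priority, so a peer takes over):

```shell
# on the master currently holding the VIP:
systemctl stop haproxy
# on another master, the VIP should appear within a few seconds:
ip addr show ens33 | grep 192.168.1.150
# restore the original state afterwards:
systemctl start haproxy
```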
Initialize the first control-plane node:
# Generate the kubeadm config
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
controlPlaneEndpoint: "192.168.1.150:6443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  certSANs:
  - 192.168.1.120
  - 192.168.1.130
  - 192.168.1.140
  - 192.168.1.150
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  kubeletExtraArgs:
    cgroup-driver: "systemd"
EOF
kubeadm init --config=kubeadm-config.yaml --upload-certs --v=5
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join command for the additional control-plane nodes (token, hash, and certificate key come from the kubeadm init output):
kubeadm join 192.168.1.150:6443 --token 4xve7t.gf7pm67pb2y3zcxi \
--discovery-token-ca-cert-hash sha256:ab28b560ce4afa0fd1773e80594cd57dc8dcc9e2c9460dcd9fb86f93c83e3e7d \
--control-plane --certificate-key ba6b813c4dabe16fbd4a21ea745902c61c9c1dfeb30f2a23ccc0a7258f4c0e6d \
--cri-socket unix:///var/run/cri-dockerd.sock
Join command for worker nodes:
kubeadm join 192.168.1.150:6443 --token 4xve7t.gf7pm67pb2y3zcxi \
--discovery-token-ca-cert-hash sha256:ab28b560ce4afa0fd1773e80594cd57dc8dcc9e2c9460dcd9fb86f93c83e3e7d \
--cri-socket unix:///var/run/cri-dockerd.sock
Configure the network with Calico:
wget https://docs.projectcalico.org/manifests/calico.yaml
sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml
sed -i 's/docker.io/docker.lmirror.top/g' calico.yaml
kubectl apply -f calico.yaml
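After applying the manifest, the nodes should move from NotReady to Ready once the Calico pods come up:

```shell
# watch the calico-node daemonset pods start on every node
kubectl -n kube-system get pods -l k8s-app=calico-node
# nodes become Ready once the CNI is running
kubectl get nodes
```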
Install Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version --short # verify the install by printing the version
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update
【Deploy Metrics Server: enables the kubectl top command】
1. Download the desired release from https://github.com/kubernetes-sigs/metrics-server/releases; v0.7.2 is used here:
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml
2. After downloading, change the image field in the container section of the YAML to docker.io/bitnami/metrics-server:0.7.2 (pull the bitnami/metrics-server:0.7.2 image on every node beforehand).
3. Add one line to the container's args: - --kubelet-insecure-tls (skips kubelet TLS verification). One way to do it with sed (the match pattern and 8-space indentation follow the v0.7.2 manifest; check them against your file):
sed -i '/- --secure-port=10250/a\        - --kubelet-insecure-tls' components.yaml
4. Then apply it: kubectl apply -f components.yaml
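A quick verification once the metrics-server pod is running (metrics can take a minute or two to appear):

```shell
# the deployment should report 1/1 ready
kubectl -n kube-system get deploy metrics-server
# node and pod metrics become available once scraping starts
kubectl top nodes
kubectl top pods -A
```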
【Install kubernetes-dashboard】
Check the releases page https://github.com/kubernetes/dashboard/releases?q=1.28.2&expanded=true to find a version compatible with your cluster. Installed here via the Helm chart:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release "kubernetes-dashboard" does not exist. Installing it now.
NAME: kubernetes-dashboard
LAST DEPLOYED: Tue Mar 11 18:30:28 2025
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************
Congratulations! You have just installed Kubernetes Dashboard in your cluster.
To access Dashboard run:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
kubectl -n kubernetes-dashboard get svc
Dashboard will be available at:
https://localhost:8443
The output above means the install succeeded, but the kong proxy service defaults to ClusterIP; change it to NodePort to make it reachable from outside the cluster:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard-kong-proxy -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc # confirm a NodePort was assigned, e.g. 3xxxx:443, then browse to https://<node-IP>:3xxxx (HTTPS is required)
Tips:
Logging in to a freshly created dashboard with a token shows very little; create a ServiceAccount, bind it to a role, and grant it permissions:
1. Create the admin-user ServiceAccount
# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard # adjust to the actual namespace
2. Bind the cluster-admin role
# admin-user-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard # same namespace as the ServiceAccount
3. Create a long-lived token Secret for admin-user
# admin-user-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard # same namespace as the ServiceAccount
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
After writing all three files, kubectl apply each of them.
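The apply step spelled out (filenames taken from the comments in the manifests above):

```shell
kubectl apply -f admin-user.yaml
kubectl apply -f admin-user-rolebinding.yaml
kubectl apply -f admin-user-secret.yaml
```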
kubectl get secret -n kubernetes-dashboard
kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath="{.data.token}" | base64 --decode
This prints the token; log in with it and the dashboard shows the full cluster.