Hostname | IP | Specs | Bandwidth |
---|---|---|---|
gaozhizhuo | 47.120.32.176 | 2 cores / 2 GB | 1 Mbps |
gaoshan01 | 120.26.83.65 | 2 cores / 4 GB | 100 Mbps |
gaoshan02 | 120.26.46.94 | 2 cores / 4 GB | 100 Mbps |
Gclinux (Tencent Cloud) | 101.34.240.151 | 2 cores / 2 GB | 3 Mbps |
All four hosts already have Docker and docker-compose installed, with passwordless SSH login configured, /etc/hosts name resolution set up, and swap permanently disabled.
Install Harbor on the registry server
This step is optional: set it up only if you need a Harbor registry; otherwise skip it.
Deploy the Harbor image registry
Download page:
https://github.com/goharbor/harbor/tags
The release I used:
https://github.com/goharbor/harbor/releases/download/v2.3.3/harbor-offline-installer-v2.3.3.tgz
Extract it to the installation directory:
tar -zxvf harbor-offline-installer-v2.3.3.tgz -C ./install/
Load the image archive included with the offline installer:
docker load < harbor.v2.3.3.tar.gz
Edit the configuration file:
vim harbor.yml
  5 hostname: 10.10.0.103   # registry address; change to your own IP
  6
  8 http:
 10   port: 8000             # externally exposed port
 11
 12 # https related config
 13 #https:                  # we are not using HTTPS here, so comment it all out
 14 #  # https port for harbor, default is 443
 15 #  port: 443
 16 #  # The path of cert and key files for nginx
 17 #  certificate: /your/certificate/path
 18 #  private_key: /your/private/key/path
 47 data_volume: /data       # registry data directory; change to suit your needs
In most deployments the registry runs on one or more dedicated servers (standalone or primary/replica), separate from the cluster nodes.
Run prepare:
./prepare
Run install.sh:
./install.sh
Check that the containers started:
docker-compose ps
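Once the containers are up, a quick way to confirm the registry actually accepts pushes is to log in and push a small test image. A minimal sketch; the IP, port, default `library` project, and the admin password are assumptions that must match your harbor.yml and the password chosen at install time:

```shell
# Log in with the admin account (default password is Harbor12345 unless you changed it)
docker login 10.10.0.103:8000 -u admin

# Tag any local image into a Harbor project and push it
docker tag busybox:latest 10.10.0.103:8000/library/busybox:latest
docker push 10.10.0.103:8000/library/busybox:latest
```

Note that the Docker daemon on the pushing machine must trust this registry first (see the insecure-registries entry configured below).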
Common setup for every K8s node
Add yum repositories
1. Add the Aliyun CentOS 7 repo:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
2. Add the Kubernetes repo:
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache
SELinux must be disabled, along with swap:
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
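To make sure both changes took effect, a couple of read-only sanity checks:

```shell
free -h | grep -i swap   # the Swap line should show 0B total/used
getenforce               # prints Permissive now, Disabled after the next reboot
```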
Load the ip_vs modules:
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
Tune kernel parameters to enable packet forwarding:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
Load the required kernel modules first: modprobe br_netfilter && modprobe overlay
## Apply the kernel parameter file
sysctl -p /etc/sysctl.d/k8s.conf
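To verify the parameters were applied (the bridge keys only exist once br_netfilter is loaded):

```shell
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# expected:
# net.ipv4.ip_forward = 1
# net.bridge.bridge-nf-call-iptables = 1
```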
Configure the Docker daemon on the three K8s nodes
- Back up Docker's default configuration file first, in case anything goes wrong:
cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
- On the three machines other than the Harbor server, modify the Docker configuration file.
Open /etc/docker/daemon.json in a text editor and add the following:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://tzbnj5j2.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "insecure-registries": ["10.10.0.103:8000"]
}
EOF
Note: replace 10.10.0.103:8000 under insecure-registries with your own private registry address; if you don't run a private registry, drop that entry.
- Restart the Docker service
systemctl restart docker
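After the restart, it is worth confirming Docker picked up the new settings, since kubeadm's preflight checks expect the systemd cgroup driver:

```shell
docker info --format '{{.CgroupDriver}}'   # should print: systemd
```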
Initializing the K8s cluster
This does not need to be installed on the Harbor registry server.
The Kubernetes yum repo was already added above, so it is not configured again here.
Install the three core packages on every K8s node: kubeadm, kubelet, kubectl (pinned to 1.20.9 to match the v1.20.9 images and kubeadm init below)
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
systemctl enable kubelet
Download the dependency images on every K8s node:
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in "${images[@]}"; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
chmod +x ./images.sh && ./images.sh
On every machine, add a hosts entry for the master (change the IP below to your own master node's IP):
echo "10.10.0.100 cluster-endpoint" >> /etc/hosts
1. Initialization
Run on the master node:
kubeadm init \
--apiserver-advertise-address=10.10.0.100 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=192.168.0.0/16 \
--token-ttl=0
--apiserver-advertise-address: the IP the apiserver advertises to the other components; normally the master node's internal cluster IP. 0.0.0.0 means all addresses available on the node.
--image-repository: the registry to pull images from; defaults to k8s.gcr.io.
--kubernetes-version: the Kubernetes version to install.
--service-cidr: the network range for Service resources.
--pod-network-cidr: the network range for Pods; it must match the pod network plugin's setting. Flannel defaults to 10.244.0.0/16, Calico to 192.168.0.0/16.
--token-ttl=0: the default token expires after 24 hours; pass --token-ttl=0 to make it never expire.
The following output indicates a successful initialization:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.0.100:6443 --token zqh61h.5245e801teb96jl3 \
--discovery-token-ca-cert-hash sha256:0e581f8dc313ad068ad14c16cdbe3f31aa54112643848985d6ad71369d331396
Run the following to be able to use the kubectl tool:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
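At this point kubectl should be able to talk to the cluster; the master will report NotReady until the network add-on is installed in the next step:

```shell
kubectl get nodes                # master shows NotReady until the CNI plugin is deployed
kubectl get pods -n kube-system  # coredns pods stay Pending for the same reason
```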
If cluster creation failed, clean up as follows before re-initializing:
kubeadm reset                  # clean up the environment and network
rm -rf $HOME/.kube             # the $HOME/.kube directory must be removed before recreating the cluster
rm -rf /etc/cni/net.d
ipvsadm --clear
2. Install the network add-on
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
3. Join the worker nodes
Run on node1 and node2.
To add nodes to the cluster, run the kubeadm join command printed by kubeadm init (the last lines of the output above):
kubeadm join 10.10.0.100:6443 --token zqh61h.5245e801teb96jl3 \
--discovery-token-ca-cert-hash sha256:0e581f8dc313ad068ad14c16cdbe3f31aa54112643848985d6ad71369d331396
The token is valid for 24 hours by default; once it expires it can no longer be used, and a new one must be created:
kubeadm token create --print-join-command
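If you only have the CA certificate and a token, the --discovery-token-ca-cert-hash value can also be computed by hand: it is the SHA-256 digest of the CA's DER-encoded public key. The sketch below demonstrates the computation against a throwaway self-signed CA so it runs anywhere; on a real master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA certificate just to demonstrate the computation
# (on a real master, use /etc/kubernetes/pki/ca.crt instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, convert it to DER, and hash it --
# this is the value kubeadm prints after "sha256:"
HASH=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```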
4. Enable command completion on all nodes
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo "source <(kubeadm completion bash)" >> ~/.bashrc
source ~/.bashrc
Deploying the dashboard
1. Download and deploy with a single command (it simply runs an image):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
2. Expose an access port:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# change this line:
type: ClusterIP  ->  type: NodePort
3. Check the randomly assigned NodePort:
kubectl get svc -A |grep kubernetes-dashboard
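Instead of eyeballing the grep output, the port number can be pulled out directly with a jsonpath query:

```shell
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'
```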
Access: https://<any-cluster-node-IP>:<NodePort>
If the browser blocks the dashboard's self-signed certificate, type thisisunsafe directly on the warning page.
4. Create an access account
Prepare a YAML file for the access account: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Apply it:
kubectl apply -f dash.yaml
Token access:
# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6Ild5clhJcmFTeTdPMzNuWmlpWDh1SE5EeVY1Z0F0alFNZXpBZ2ZjY1FwZDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXg4bTduIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkMjhlZWU5Ny05NDU5LTRhNzQtOTk1OC1kMmE2MTUzZDVjZDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.OhWFabUOSeZtMipdhC4BKqpYyVeGwNcJCczme00YRNZ1GYZzpHHg_Wu8TFamUMkenPURfo-Ct6g6WItkV-Zw4GN2zYicdGrf-Iy1tkiHTtfzsPdC_WVoGT14nUupp2NYddXnBA1oDtJmARKYD0LHwMYQuBwlhO8mtIXeCnRLdV31BSV-7cexuPjSOf9K5UhhbSnkPFfHQVfJx5rbYAVh0FO_PjvWo684vDzjNxE4tffR5DURgifI1-weJ15LapXHk_CmKfHdJ1vjscNhIPhgsHW1KRLBp5k1beS761yj8fVPRgewj49krliNReo4fLmOWo8aB0dMsuZfbmWE7XQCJA