Deploying k8s in an Offline Linux 7 Environment
Prepare the machines
- Provision three machines with private-network connectivity between them
- Do not use localhost as any machine's hostname (hostnames must not contain underscores, dots, or uppercase letters)
Install the prerequisite environment (run on every machine)
Base environment
- Disable the firewall (on a cloud server, open the required ports in the security-group rules instead):
  systemctl stop firewalld
  systemctl disable firewalld
- Set the hostname (one command per machine):
  hostnamectl set-hostname master
  hostnamectl set-hostname node1
  hostnamectl set-hostname node2
- Check the result:
  hostnamectl status
- Add a hostname resolution entry:
  echo "127.0.0.1 $(hostname)" >> /etc/hosts
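The line above only maps each machine's own name. For later steps that address nodes by name (such as copying admin.conf to the workers), each machine also needs entries for its peers. A minimal sketch, with made-up private IPs standing in for your own; a temp file is used in place of /etc/hosts so the snippet runs without root:

```shell
# HOSTS_FILE stands in for /etc/hosts; the IPs below are assumptions,
# substitute each node's real private address.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
10.170.11.8  master
10.170.11.9  node1
10.170.11.10 node2
EOF
cat "$HOSTS_FILE"
```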
- Disable SELinux:
  sed -i 's/enforcing/disabled/' /etc/selinux/config
  setenforce 0
- Disable swap:
  swapoff -a
  sed -ri 's/.*swap.*/#&/' /etc/fstab
Pass bridged IPv4 traffic to iptables
- Edit /etc/sysctl.conf:
  # If a key is already present, modify it
  sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
  sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
  sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
  # If a key is absent, append it instead
  echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
  echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
  echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
  # Note: the net.bridge.* keys only take effect with the br_netfilter
  # kernel module loaded: modprobe br_netfilter
- Apply the settings:
  sysctl -p
- Check the settings:
  sysctl -a | grep call
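Running both the sed and echo halves above blindly can leave duplicate keys in /etc/sysctl.conf. A small idempotent helper collapses the edit-or-append pair into one function; a temp file stands in for /etc/sysctl.conf so the sketch runs anywhere:

```shell
# FILE stands in for /etc/sysctl.conf
FILE=$(mktemp)
# Update the key when it is present, append it when it is absent
set_sysctl() {
  key=$1; val=$2
  if grep -q "^$key" "$FILE"; then
    sed -i "s#^$key.*#$key = $val#" "$FILE"
  else
    echo "$key = $val" >> "$FILE"
  fi
}
set_sysctl net.ipv4.ip_forward 1
set_sysctl net.ipv4.ip_forward 1   # running twice still leaves a single line
cat "$FILE"
```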
Docker environment
- Install docker-ce. Baidu Netdisk link: https://pan.baidu.com/s/1-G2onlwF1uYPJTdByruD4g (extraction code: 71gm)
- Upload the archive to the server and extract it:
  tar -zxvf docker-20.10.16.tgz
  cp docker/* /usr/bin/
- Create docker.service:
  vi /usr/lib/systemd/system/docker.service

  [Unit]
  Description=Docker Application Container Engine
  Documentation=https://docs.docker.com
  After=network-online.target firewalld.service
  Wants=network-online.target

  [Service]
  Type=notify
  # the default is not to use systemd for cgroups because the delegate issues still
  # exists and systemd currently does not support the cgroup feature set required
  # for containers run by docker
  # -H tcp://0.0.0.0:2375 enables unauthenticated remote connections; drop it if
  # remote access to the daemon is not needed
  ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
  ExecReload=/bin/kill -s HUP $MAINPID
  # Having non-zero Limit*s causes performance problems due to accounting overhead
  # in the kernel. We recommend using cgroups to do container-local accounting.
  LimitNOFILE=infinity
  LimitNPROC=infinity
  LimitCORE=infinity
  # Uncomment TasksMax if your systemd version supports it.
  # Only systemd 226 and above support this version.
  #TasksMax=infinity
  TimeoutStartSec=0
  # set delegate yes so that systemd does not reset the cgroups of docker containers
  Delegate=yes
  # kill only the docker process, not all processes in the cgroup
  KillMode=process
  # restart the docker process if it exits prematurely
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s

  [Install]
  WantedBy=multi-user.target
- Start Docker and enable it at boot:
  systemctl start docker
  systemctl enable docker
Install the k8s core components kubectl, kubeadm and kubelet (run on all machines)
- RPM package download. Baidu Netdisk link: https://pan.baidu.com/s/14_jjVTwCPuq542FyzlVNeQ (extraction code: zcnp)
  # After uploading the rpm packages to the server, install them with yum
  yum -y install ./kubeadm-rpm/*
  # Start kubelet and enable it at boot
  systemctl enable kubelet && systemctl start kubelet
Initialize the master node (run on the master node)
- Prepare the images (upload to every node). Baidu Netdisk link: https://pan.baidu.com/s/1QEZCgrKF2RMneGsrELJTLA (extraction code: xi8b)
kube-apiserver:v1.21.0
kube-proxy:v1.21.0
kube-controller-manager:v1.21.0
kube-scheduler:v1.21.0
coredns:v1.8.0
etcd:3.4.13-0
pause:3.4.1
# Network plugin images; calico is used here
calico-cni
calico-node
calico-kube-controllers
calico-pod2daemon-flexvol
- Upload the pre-downloaded image archives to the server and import them with a batch sh script:
  vim loadImages.sh

  #!/bin/bash
  # Directory containing the image archives
  archive_dir="/root/images"
  # Iterate over the archives and load each one into Docker
  for archive in "$archive_dir"/*.tar
  do
    docker load -i "$archive"
  done
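Before running the load script it is worth confirming the archives actually landed in the directory. A quick sanity check, sketched with a temp directory and dummy archives in place of /root/images:

```shell
# Temp directory stands in for /root/images; the two .tar files are dummies
archive_dir=$(mktemp -d)
touch "$archive_dir/kube-apiserver.tar" "$archive_dir/etcd.tar"
# Count the archives that the load loop would pick up
count=$(ls "$archive_dir"/*.tar | wc -l)
echo "found $count archives"
```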
- Start the registry container:
  docker run -d -p 5000:5000 --restart=always --name registry registry:2
  # (in an offline environment the registry:2 image must itself be among the loaded archives)
- Tag the images and push them to the local registry:
  # docker tag k8s.gcr.io/kube-apiserver:v1.21.0 localhost:5000/kube-apiserver:v1.21.0
  # Push an image to the registry
  # docker push localhost:5000/kube-apiserver:v1.21.0

  A script is used here to tag and push in batch; it needs no modification:
  vim imageTagPush.sh

  #!/bin/bash
  set -e

  KUBE_VERSION=v1.21.0
  KUBE_PAUSE_VERSION=3.4.1
  ETCD_VERSION=3.4.13-0
  CORE_DNS_VERSION=v1.8.0

  GCR_URL=k8s.gcr.io
  LOCALHOST_URL=localhost:5000

  images=(
  kube-proxy:${KUBE_VERSION}
  kube-scheduler:${KUBE_VERSION}
  kube-controller-manager:${KUBE_VERSION}
  kube-apiserver:${KUBE_VERSION}
  pause:${KUBE_PAUSE_VERSION}
  etcd:${ETCD_VERSION}
  coredns:${CORE_DNS_VERSION}
  )

  for imageName in ${images[@]} ; do
    docker tag $GCR_URL/$imageName $LOCALHOST_URL/$imageName
    docker rmi $GCR_URL/$imageName
    docker push $LOCALHOST_URL/$imageName
  done
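The name mapping the script performs can be previewed with plain string handling, without touching Docker. A sketch with a shortened image list:

```shell
# Same naming scheme as imageTagPush.sh, echoed instead of executed
KUBE_VERSION=v1.21.0
GCR_URL=k8s.gcr.io
LOCALHOST_URL=localhost:5000
# Shortened list for illustration
images="kube-proxy:${KUBE_VERSION} kube-apiserver:${KUBE_VERSION} pause:3.4.1"
for imageName in $images; do
  # source name -> target name in the local registry
  echo "$GCR_URL/$imageName -> $LOCALHOST_URL/$imageName"
done
```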
Init the master node
- Initialize the master node; this step takes a while:
  ######## kubeadm init a single master ########
  kubeadm init \
    --apiserver-advertise-address=10.170.11.8 \
    --image-repository localhost:5000 \
    --kubernetes-version v1.21.0 \
    --service-cidr=2.2.2.1/16 \
    --pod-network-cidr=3.3.3.1/16

  # apiserver-advertise-address: the master's (private) IP
  # image-repository: the registry the images were just pushed to, localhost:5000
  ## Note on pod-network-cidr and service-cidr:
  # each specifies a reachable network range; the pod subnet, the service
  # load-balancing subnet and the host IP subnet must not overlap
  # e.g. apiserver-advertise-address=10.170.xx with pod-network-cidr=192.170.0.0/16 will not work
  # if unsure how to change --pod-network-cidr / --service-cidr, leave them as shown

  #### Follow the printed instructions ####
  ## First step after init completes: copy the kubeconfig
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ## Export the environment variable
  export KUBECONFIG=/etc/kubernetes/admin.conf
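The non-overlap rule for the three ranges can be illustrated crudely for /16 masks by comparing the first two octets. A real check needs proper CIDR arithmetic; this sketch only uses the addresses from the init command above:

```shell
# Addresses taken from the kubeadm init example
host_ip=10.170.11.8
pod_cidr=3.3.3.1/16
service_cidr=2.2.2.1/16
# The first two octets identify a /16 range
prefix() { echo "$1" | cut -d. -f1-2; }
for cidr in "$pod_cidr" "$service_cidr"; do
  if [ "$(prefix "$host_ip")" = "$(prefix "${cidr%/*}")" ]; then
    echo "overlap: $cidr shares a /16 with the host"
  else
    echo "ok: $cidr does not overlap the host /16"
  fi
done
```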
- Deploy a pod network add-on (Baidu Netdisk link: https://pan.baidu.com/s/13KCSekAsm5ONMv5TCW_GYA, extraction code: e8t9):
  vim calico.yaml
  # Search for CALICO_IPV4POOL_CIDR and change it to the cluster's
  # --pod-network-cidr=3.3.3.1/16 from initialization:
  # - name: CALICO_IPV4POOL_CIDR
  #   value: "3.3.3.1/16"
  # Start calico
  kubectl apply -f calico.yaml
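Instead of editing calico.yaml by hand, the CALICO_IPV4POOL_CIDR pair can be uncommented and set with sed. A sketch against a stub file, assuming the manifest carries the commented default of "192.168.0.0/16":

```shell
# Stub standing in for calico.yaml, with the assumed commented default
f=$(mktemp)
printf '%s\n' \
  '            # - name: CALICO_IPV4POOL_CIDR' \
  '            #   value: "192.168.0.0/16"' > "$f"
# Uncomment the pair and set the cluster's pod CIDR
sed -i \
  -e 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@' \
  -e 's@#   value: "192.168.0.0/16"@  value: "3.3.3.1/16"@' "$f"
cat "$f"
```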
- Check the running state:
  kubectl get pod -A   ## list all Pods deployed in the cluster
  kubectl get nodes    ## view the status of every machine in the cluster
Initialize the worker nodes
- A successful kubeadm init prints a kubeadm join command like the one below. If the token has expired, generate a new one on the master node with: kubeadm token create --print-join-command
  ## Run on the worker nodes
  kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27pyr4exy2wz4 \
      --discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20
- kubectl runs as kubernetes-admin and needs the admin.conf file, which kubeadm init creates in /etc/kubernetes on the master node. Worker nodes have neither that file nor a KUBECONFIG environment variable pointing at it, so copy admin.conf to each worker and set the variable:
  # Copy admin.conf; run on the master node. K8s-Master, K8s-Node1 and K8s-Node2 must resolve via /etc/hosts
  scp /etc/kubernetes/admin.conf root@K8s-Node2:/etc/kubernetes/
  # Set the environment variable; run on the worker nodes.
  # Node1
  echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
  source ~/.bash_profile
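The scp above covers one node; a loop handles every worker in one go. The node names are the hypothetical ones from the comment above, and echo is used so the commands are printed rather than executed:

```shell
# Hypothetical worker names; each must resolve via /etc/hosts on the master
for node in K8s-Node1 K8s-Node2; do
  # echo prints the command instead of running it
  echo scp /etc/kubernetes/admin.conf "root@$node:/etc/kubernetes/"
done
```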
- Verify the cluster:
  # List all nodes
  kubectl get nodes
  # Label nodes
  ## Add a label (substitute the machine's hostname)
  kubectl label node <hostname> node-role.kubernetes.io/worker=''
  ## Remove the label
  kubectl label node <hostname> node-role.kubernetes.io/worker-