Installing k8s on Rocky 8.4
This document describes an offline installation; all packages can be downloaded from the link below:
https://pan.baidu.com/s/1ZWvqBQvJXIMXVBRr69YnKQ
Extraction code: mk0r
Name | IP |
---|---|
master | 192.168.1.100 |
node1 | 192.168.1.101 |
node2 | 192.168.1.102 |

Component | Version |
---|---|
k8s | 1.21.0 |
docker | 3_19.03.13 |
calico | v3.20 |
Prerequisites
**Run the following on every server:**
1: Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
2: Disable selinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
3: Disable swap:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
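The sed above comments out every /etc/fstab line that mentions swap (`&` re-inserts the matched line after the `#`). A sketch of the effect, shown on a scratch copy with illustrative device names rather than the live /etc/fstab:

```shell
# Scratch copy of an fstab with one swap entry (illustrative device names).
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/rl-root /    xfs  defaults 0 0
/dev/mapper/rl-swap none swap defaults 0 0
EOF

# Same command as above, pointed at the scratch file: prefixes swap lines with '#'.
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line is commented out; the root filesystem entry is left untouched.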
4: Pass bridged IPv4 traffic to the iptables chains:
#Edit /etc/sysctl.conf
#If an entry already exists, modify it:
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf
#If not present, append:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
#Apply the settings:
sysctl -p
If you see the following error:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
run these commands:
modprobe br_netfilter
ls /proc/sys/net/bridge/
sysctl -p
#Note: modprobe does not persist across reboots; to load the module at boot as well:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
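The modify-or-append pairs above can be collapsed into one small helper that handles each key idempotently. A sketch (`set_sysctl` is a hypothetical helper, not part of the installation packages, demonstrated here against a scratch file; on a real host pass /etc/sysctl.conf instead):

```shell
# set_sysctl KEY VALUE FILE: rewrite the line if KEY is already present, append otherwise.
set_sysctl() {
  local key=$1 value=$2 file=$3
  if grep -q "^${key}" "$file"; then
    sed -i "s#^${key}.*#${key} = ${value}#g" "$file"
  else
    echo "${key} = ${value}" >> "$file"
  fi
}

# Demonstrate on a scratch file with one pre-existing entry.
: > /tmp/sysctl.conf.demo
echo "net.ipv4.ip_forward = 0" >> /tmp/sysctl.conf.demo   # existing entry gets rewritten
set_sysctl net.ipv4.ip_forward 1 /tmp/sysctl.conf.demo
set_sysctl net.bridge.bridge-nf-call-iptables 1 /tmp/sysctl.conf.demo
cat /tmp/sysctl.conf.demo
```

The existing ip_forward line is rewritten in place (no duplicate is appended), and the missing bridge-nf key is added at the end.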
5: Set the hostname (each host must have a unique name):
#master node:
hostnamectl set-hostname k8s-master
#node1:
hostnamectl set-hostname k8s-node1
#node2:
hostnamectl set-hostname k8s-node2
#Edit the hosts file
cat >> /etc/hosts << EOF
192.168.1.100 master
192.168.1.101 node1
192.168.1.102 node2
192.168.1.100 cluster-endpoint
EOF
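A duplicated or misspelled name in /etc/hosts silently shadows a node, so it is worth checking that every name appears exactly once. A quick sketch, run against a scratch file with the intended entries:

```shell
# Scratch file with the intended hosts entries.
cat > /tmp/hosts.demo <<'EOF'
192.168.1.100 master
192.168.1.101 node1
192.168.1.102 node2
192.168.1.100 cluster-endpoint
EOF

# Print any hostname that appears more than once; no output means the file is consistent.
awk '{print $2}' /tmp/hosts.demo | sort | uniq -d
```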
Download all files in advance; the link is at the top of this document.
Run the following first so that the offline rpm installation does not fail (it moves the online repo configs out of the way):
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# mkdir bak
[root@localhost yum.repos.d]# mv *.repo ./bak
Software installation:
1: Install docker:
Change to the directory containing the docker rpm packages:
yum -y localinstall *.rpm
systemctl start docker
systemctl enable docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://fhqs2izq.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
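A syntax error in /etc/docker/daemon.json keeps docker from starting at all, so it is worth validating the JSON before the restart. A sketch using python3 (assumed to be available), shown on a scratch copy; on a real host, check /etc/docker/daemon.json itself:

```shell
# Scratch copy of the daemon.json written above.
cat > /tmp/daemon.json.demo <<'EOF'
{
  "registry-mirrors": ["https://fhqs2izq.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on malformed JSON, so the OK message only prints if it parses.
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json: OK"
```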
2: Install k8s:
Change to the directory containing the k8s rpm packages:
yum -y localinstall *.rpm
systemctl enable kubelet && systemctl start kubelet
3: Load the images:
Change to the image package directory and run:
sh load_images.sh
Check the loaded images:
[root@k8s-node1 pki]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
calico/node v3.20.6 daeec7e26e1f 15 months ago 156MB
calico/pod2daemon-flexvol v3.20.6 39b166f3f936 15 months ago 18.6MB
calico/cni v3.20.6 13b6f63a50d6 15 months ago 138MB
calico/kube-controllers v3.20.6 4dc6e7685020 15 months ago 60.2MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 2 years ago 126MB
registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 2 years ago 122MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 2 years ago 50.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 2 years ago 120MB
registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 2 years ago 683kB
registry.aliyuncs.com/google_containers/coredns/coredns v1.8.0 296a6d5035e2 3 years ago 42.5MB
registry.aliyuncs.com/google_containers/coredns v1.8.0 296a6d5035e2 3 years ago 42.5MB
registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 3 years ago 253MB
If the coredns image was loaded as registry.aliyuncs.com/google_containers/coredns:v1.8.0 (without the second coredns path segment), run the following:
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
4: Initialize:
Run the following on the master node only:
kubeadm init \
--apiserver-advertise-address=192.168.1.100 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.21.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=192.170.0.0/16
The init output includes a kubeadm join command; copy it now, as the worker nodes will need it to join the master.
--apiserver-advertise-address: the cluster advertise address
--image-repository: the default registry k8s.gcr.io is unreachable from China, so the Aliyun mirror registry is specified here; for an offline install the relevant images must already be loaded
--kubernetes-version: the k8s version; must match the packages installed above
--service-cidr: the cluster-internal virtual network, the unified access entry for Pods
--pod-network-cidr: the Pod network; must match the CNI yaml deployed below. If it is not the default 192.168.0.0/16, edit the calico.yaml file accordingly
On success you will see output like the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token 77dwsg.pbbc9hw1t62pgcuv \
--discovery-token-ca-cert-hash sha256:05462e45971d937be745028c8776900e93830401dd5953328f718293c8ffef7d \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token 77dwsg.pbbc9hw1t62pgcuv \
--discovery-token-ca-cert-hash sha256:05462e45971d937be745028c8776900e93830401dd5953328f718293c8ffef7d
Copy and run the mkdir block above to set up the kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Check the nodes:
[root@master .kube]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master   NotReady   control-plane,master   6m46s   v1.21.0
# Note: the node is NotReady because the network plugin has not been deployed yet; continue with the steps below.
Keep the following: it is the command for joining worker nodes and is valid for 24 hours:
kubeadm join cluster-endpoint:6443 --token 77dwsg.pbbc9hw1t62pgcuv \
--discovery-token-ca-cert-hash sha256:05462e45971d937be745028c8776900e93830401dd5953328f718293c8ffef7d
5: Add worker nodes:
Join each node to the master (run on the node itself):
[root@k8s-node1 pki]# kubeadm join cluster-endpoint:6443 --token 77dwsg.pbbc9hw1t62pgcuv \
--discovery-token-ca-cert-hash sha256:05462e45971d937be745028c8776900e93830401dd5953328f718293c8ffef7d
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Hostname]: hostname "k8s-node1" could not be reached
[WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 192.168.137.1:53: read udp 192.168.1.101:38214->192.168.137.1:53: i/o timeout
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The node has joined successfully. (The Hostname warnings above appear because the name k8s-node1 cannot be resolved via DNS or /etc/hosts; they are harmless here.)
Check from the master node:
[root@k8s-master manifests]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 49s v1.21.0
k8s-node2 Ready <none> 97m v1.21.0
If a newly joined node's kube-proxy pod stays stuck in creating, check whether the node has an /etc/resolv.conf file; if not, create it (on machines without Internet access the file may be empty).
If the join command above has expired, run the following on the master to generate a new one:
kubeadm token create --print-join-command
6: Deploy the calico network:
#Calico is a pure layer-3 data center networking solution and is currently the mainstream network choice for Kubernetes.
#Change the pod network segment in calico.yaml to the one passed to --pod-network-cidr during kubeadm init.
Open the file in vim, search for 192, and modify it as marked below:
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
Modification:
Remove both # characters (and the space after each) and change 192.168.0.0/16 to 192.170.0.0/16:
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR #here
value: "192.170.0.0/16" #here
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
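The same edit can also be scripted with sed instead of vim. A sketch, demonstrated on a minimal excerpt so the exact substitutions are visible; run the same two expressions against the real calico.yaml:

```shell
# Minimal excerpt of the commented-out block in calico.yaml.
cat > /tmp/calico-excerpt.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Uncomment both lines and swap in the pod CIDR used at kubeadm init.
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "192.170.0.0/16"|' \
  /tmp/calico-excerpt.yaml
cat /tmp/calico-excerpt.yaml
```

Removing the `# ` prefix (rather than re-typing the lines) preserves the YAML indentation, so the value line stays aligned under its name line.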
Change to the directory containing calico.yaml and deploy it:
kubectl apply -f calico.yaml
#After this it takes a while for everything to reach Running.
#Once all Calico Pods are Running, the nodes become Ready as well.
#Note: from here on, all yaml files are applied on the master node only.
kubectl get pods -n kube-system
[root@k8s-master ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready <none> 62m v1.21.0 192.168.1.101 <none> Rocky Linux 8.4 (Green Obsidian) 4.18.0-305.3.1.el8_4.x86_64 docker://19.3.13
k8s-node2 Ready <none> 159m v1.21.0 192.168.1.102 <none> Rocky Linux 8.4 (Green Obsidian) 4.18.0-305.3.1.el8_4.x86_64 docker://19.3.13
7: All nodes are Ready; the cluster setup is complete.
8: Deploy the Dashboard
kubectl apply -f recommended.yaml
Set the access port:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort
kubectl get svc -A |grep kubernetes-dashboard
## Find the port
Access: https://<any-cluster-IP>:<port>, e.g. https://192.168.1.102:32003
Create an access account
#Create an access account; prepare a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f dash.yaml
Token access
#Retrieve the access token:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6InZrSmZsRVA0cnByQ2VHN1dMRzZmNzkzLTAyenp3R28za1RSRTMtNExhbEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VmLThzZzc0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5YTY0MLg3ZC05MGM0LTQ3NGItOTRjNS1lZmMzMjRlYzQzOGMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Fo4_L1t8GeBwhtD-O5Q__wcojETuciOzx6KCsLEU4iJXixd-7d1cQJxewlwJOKejfd9UslLxpvEP3dk3EBVVnqe7dafTwmZYDdRndVrDcEsth_wl--GULTWsi1CCSgoLDZ5IqqTfnpp17P38KmgNyPwX1FHR_DhEnZ3umlqG2jNou7GemyBI-H83BNCB1A7XzPXNRsCxHHU1Ms1Bdv2gicFOmXlUCWZBFN6U5k_V_ot28dMmy_bYOFUpAsfOxf9QFnyJbmY55WOQSkfN3s7h0IM6zAinbDvXqo3NcXlEmTY4FeRXdzPiBgTtuDXAu5Uo6Wi6We5nRIy_nqZ6HOesMg
Copy the token generated above into the token field to log in.
Deployment complete.