Joining the worker nodes
Run the following command on the master to generate a join token for the worker nodes (it prints the complete join command):
kubeadm token create --print-join-command
>
kubeadm join 100.64.252.90:6443 --token bvo8sr.bz0mdskq8mv6q0jr --discovery-token-ca-cert-hash sha256:f00eb17f12061780a4d5f8dd0b681a74079e8cecdbd37d78d64432793f2fb41b
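A token generated this way is only valid for 24 hours by default. If the join step happens later, you can check on the master whether the token still exists, or create a non-expiring one (weaker security, so use with care):
kubeadm token list
kubeadm token create --ttl 0 --print-join-command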
Then paste the printed join command onto both worker nodes and run it (node2 shown here):
[root@node2 ~]# kubeadm join 100.64.252.90:6443 --token bvo8sr.bz0mdskq8mv6q0jr --discovery-token-ca-cert-hash sha256:f00eb17f12061780a4d5f8dd0b681a74079e8cecdbd37d78d64432793f2fb41b
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.4. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
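As a side note: if you have a valid token but have lost the CA cert hash, it can be recomputed on the master from the cluster CA certificate, using the standard openssl pipeline from the kubeadm documentation:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'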
Check whether the nodes have joined: they have, but because no network plugin has been installed yet, all nodes show NotReady.
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE    VERSION
master   NotReady   control-plane,master   39m    v1.23.1
node1    NotReady   <none>                 2m8s   v1.23.1
node2    NotReady   <none>                 101s   v1.23.1
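To confirm that NotReady really comes from the missing network plugin and nothing else, inspect the node's Ready condition message; with this Docker-based setup it should say something along the lines of "network plugin is not ready: cni config uninitialized":
kubectl get node node1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'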
Installing Calico
Upload the configuration file to the master node (see the attachment), apply the Calico manifest, and then check the pods in kube-system; after a short wait they should all be Running:
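One check worth doing before the apply: the CALICO_IPV4POOL_CIDR value in calico.yaml (commented out by default, in which case Calico falls back to 192.168.0.0/16) should match the --pod-network-cidr that was passed to kubeadm init, otherwise pod routing can break:
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml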
kubectl apply -f calico.yaml
kubectl get pod -n kube-system
>
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-677cd97c8d-7s9nz   1/1     Running   0          19s
calico-node-h6hzf                          1/1     Running   0          19s
calico-node-mvgpv                          1/1     Running   0          19s
calico-node-vd7q7                          1/1     Running   0          19s
coredns-6d8c4cb4d-6r5tl                    1/1     Running   0          53m
coredns-6d8c4cb4d-gnwtr                    1/1     Running   0          53m
etcd-master                                1/1     Running   0          53m
kube-apiserver-master                      1/1     Running   0          53m
kube-controller-manager-master             1/1     Running   0          53m
kube-proxy-4v78m                           1/1     Running   0          15m
kube-proxy-g8c56                           1/1     Running   0          53m
kube-proxy-ln8gd                           1/1     Running   0          15m
kube-scheduler-master                      1/1     Running   0          53m
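calico-node runs as a DaemonSet, so there should be exactly one calico-node pod per node; -o wide shows which node each pod landed on:
kubectl get pod -n kube-system -o wide | grep calico-node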
All three nodes are now Ready as well:
kubectl get nodes
>
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   56m   v1.23.1
node1    Ready    <none>                 18m   v1.23.1
node2    Ready    <none>                 18m   v1.23.1
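The <none> under ROLES is purely cosmetic: kubeadm only labels control-plane nodes. If you want a worker role displayed there, you can add the label yourself:
kubectl label node node1 node-role.kubernetes.io/worker=
kubectl label node node2 node-role.kubernetes.io/worker=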
Testing the cluster network and DNS
Load the busybox image onto both worker nodes, then start a pod from the master:
[node1 ~]# docker load -i busybox-1-28.tar.gz
[node2 ~]# docker load -i busybox-1-28.tar.gz
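Optionally, verify that the image actually landed in each node's local Docker cache before starting the pod:
[node1 ~]# docker images | grep busybox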
[master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
In a Kubernetes cluster, the master node normally does not run user pods, so it does not matter that the image has not been loaded there; as long as the worker nodes have it, the pod can be scheduled and started. Pre-loading images this way lets each node serve pods from its local image cache, which speeds up pod startup across the cluster.
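This scheduling behavior comes from the NoSchedule taint that kubeadm puts on the control-plane node; you can confirm it is present (on v1.23 the taint is typically node-role.kubernetes.io/master:NoSchedule):
[root@master ~]# kubectl describe node master | grep Taints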
After creating the pod from the master, open a shell inside it and ping www.baidu.com; the packets go through, which shows the network is working:
[root@master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
>
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.6): 56 data bytes
64 bytes from 182.61.200.6: seq=0 ttl=41 time=28.576 ms
64 bytes from 182.61.200.6: seq=1 ttl=41 time=28.528 ms
64 bytes from 182.61.200.6: seq=2 ttl=41 time=28.423 ms
64 bytes from 182.61.200.6: seq=3 ttl=41 time=28.406 ms
64 bytes from 182.61.200.6: seq=4 ttl=41 time=28.951 ms
64 bytes from 182.61.200.6: seq=5 ttl=41 time=28.600 ms
64 bytes from 182.61.200.6: seq=6 ttl=41 time=28.340 ms
64 bytes from 182.61.200.6: seq=7 ttl=41 time=28.863 ms
64 bytes from 182.61.200.6: seq=8 ttl=41 time=28.446 ms
64 bytes from 182.61.200.6: seq=9 ttl=41 time=28.359 ms
64 bytes from 182.61.200.6: seq=10 ttl=41 time=28.454 ms
64 bytes from 182.61.200.6: seq=11 ttl=41 time=28.467 ms
64 bytes from 182.61.200.6: seq=12 ttl=41 time=28.335 ms
64 bytes from 182.61.200.6: seq=13 ttl=41 time=28.630 ms
64 bytes from 182.61.200.6: seq=14 ttl=41 time=28.358 ms
^C
--- www.baidu.com ping statistics ---
15 packets transmitted, 15 packets received, 0% packet loss
round-trip min/avg/max = 28.335/28.515/28.951 ms
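The ping above only proves pod-to-Internet connectivity. To also check pod-to-pod traffic across nodes (what Calico is actually responsible for), look up the IP of a pod running on another node, for example one of the CoreDNS pods, and ping it from inside busybox; <pod-ip> below is a placeholder for whatever address you find:
[root@master ~]# kubectl get pod -n kube-system -o wide | grep coredns
/ # ping -c 3 <pod-ip>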
Now test whether CoreDNS works. The Server shown below is 10.96.0.10, which is the clusterIP of the kube-dns Service fronting CoreDNS, so CoreDNS is configured correctly. Inside the cluster, access to Services (such as the kubernetes Service at 10.96.0.1) relies on CoreDNS for name resolution.
[root@master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
>
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
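Short names like kubernetes.default resolve because the kubelet writes the cluster search domains into every pod's /etc/resolv.conf. With the default cluster domain it should look roughly like this (the exact contents can vary by setup):
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5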