Background
Kubernetes cluster federation is still in beta. To try out its features, I set up a federation myself. This post records the whole process as a reference for anyone who needs it.
Summary
First, my personal takeaways from using federation (these are personal opinions; corrections are welcome):
1. A federation must be hosted on a primary Kubernetes cluster; the federation control plane cannot be run standalone.
2. Federation is essentially analogous to primary/replica synchronization in a database. In v2, the resource types that can be synchronized are ConfigMap, Deployment, Ingress, Job, Namespace, Secret, ServiceAccount, Service, and CRDs.
3. Federation can schedule load at the cluster level (via ReplicaSchedulingPreference, RSP). This is similar to HPA within a single cluster, except that the metric RSP acts on is the replica count.
4. Cross-cluster high availability for a workload is achieved by resolving DNS to the Ingress of each cluster; combined with intelligent DNS, this lets users reach the nearest cluster. The MultiClusterDNS supported in v1 is planned to be removed in v2 (https://github.com/kubernetes-sigs/kubefed/issues/1403).
Resources
Cloud VM: 4 CPU cores, 16 GB RAM, 200 GB disk
Tools and files used
Link: https://pan.baidu.com/s/1fRfKkvf9Ow803KRCv1xvzQ
Extraction code: k83j
Deployment steps
- Install Docker

```shell
curl https://releases.rancher.com/install-docker/19.03.sh | sh
```
- Install kind (Kubernetes in Docker); with limited resources, we run two K8s clusters inside Docker

```shell
# Upload the kind binary from the "binary" folder on the Baidu pan to /tmp
# on the server, then move it into /usr/bin
mv /tmp/kind /usr/bin/
chmod +x /usr/bin/kind
```
- Install Helm v3, used to deploy KubeFed

```shell
# Upload the helm binary from the "binary" folder on the Baidu pan to /tmp
# on the server, then move it into /usr/bin
mv /tmp/helm /usr/bin/
chmod +x /usr/bin/helm
```
- Install kubefedctl, used to manage the federated clusters

```shell
# Upload the kubefedctl binary from the "binary" folder on the Baidu pan to /tmp
# on the server, then move it into /usr/bin
mv /tmp/kubefedctl /usr/bin/
chmod +x /usr/bin/kubefedctl
```
- Create two K8s clusters with kind; the argument 2 is the number of clusters to create
```shell
# Upload the entire "scripts" folder from the Baidu pan to the server
[root@kubefed kubefed]# bash ./scripts/create-clusters.sh 2
Creating 2 clusters
Creating cluster "cluster1" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-cluster1"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster1

Have a nice day!
Cluster "kind-cluster1" set.
Context "kind-cluster1" renamed to "cluster1".
Creating cluster "cluster2" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-cluster2"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster2

Thanks for using kind!
Cluster "kind-cluster2" set.
Context "kind-cluster2" renamed to "cluster2".
Waiting for clusters to be ready
Switched to context "cluster1".
Complete
```
- Verify that the clusters were created successfully; the two single-node clusters I created have the IPs 172.19.0.2 and 172.19.0.3
```shell
[root@kubefed kubefed]# kubectl cluster-info --context=cluster1
Kubernetes control plane is running at https://172.19.0.2:6443
CoreDNS is running at https://172.19.0.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kubefed kubefed]# kubectl cluster-info --context=cluster2
Kubernetes control plane is running at https://172.19.0.3:6443
CoreDNS is running at https://172.19.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
- Install KubeFed via Helm
```shell
[root@kubefed kubefed]# helm repo add kubefed-charts https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
[root@kubefed kubefed]# helm repo list
NAME            URL
kubefed-charts  https://raw.githubusercontent.com/kubernetes-sigs/kubefed/master/charts
[root@kubefed kubefed]# helm --namespace kube-federation-system upgrade -i kubefed kubefed-charts/kubefed --create-namespace
Release "kubefed" does not exist. Installing it now.
NAME: kubefed
LAST DEPLOYED: Fri Jan 28 14:52:49 2022
NAMESPACE: kube-federation-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@kubefed kubefed]# kubectl get po -n kube-federation-system --context=cluster1
NAME                                          READY   STATUS    RESTARTS   AGE
kubefed-admission-webhook-597f4b57bd-wvtw4    1/1     Running   0          96s
kubefed-controller-manager-57658684d8-mjfvt   1/1     Running   0          70s
kubefed-controller-manager-57658684d8-xrvsj   1/1     Running   0          72s
```
- Join the clusters to the federation. A federation has two roles: the HostCluster and the MemberClusters. The HostCluster can be thought of as a MySQL primary, and a MemberCluster as a MySQL replica.
```shell
# Join cluster1
[root@kubefed kubefed]# kubefedctl join cluster1 --cluster-context cluster1 \
> --host-cluster-context cluster1 --v=2
I0128 14:56:34.498690 21329 join.go:161] Args and flags: name cluster1, host: cluster1, host-system-namespace: kube-federation-system, kubeconfig: , cluster-context: cluster1, secret-name: , dry-run: false
I0128 14:56:34.538634 21329 join.go:242] Performing preflight checks.
I0128 14:56:34.540505 21329 join.go:248] Creating kube-federation-system namespace in joining cluster
I0128 14:56:34.542447 21329 join.go:405] Already existing kube-federation-system namespace
I0128 14:56:34.542474 21329 join.go:255] Created kube-federation-system namespace in joining cluster
I0128 14:56:34.542486 21329 join.go:427] Creating service account in joining cluster: cluster1
I0128 14:56:34.548399 21329 join.go:437] Created service account: cluster1-cluster1 in joining cluster: cluster1
I0128 14:56:34.548416 21329 join.go:464] Creating cluster role and binding for service account: cluster1-cluster1 in joining cluster: cluster1
I0128 14:56:34.568240 21329 join.go:473] Created cluster role and binding for service account: cluster1-cluster1 in joining cluster: cluster1
I0128 14:56:34.568261 21329 join.go:833] Creating cluster credentials secret in host cluster
I0128 14:56:34.571263 21329 join.go:861] Using secret named: cluster1-cluster1-token-bmdfb
I0128 14:56:34.573285 21329 join.go:934] Created secret in host cluster named: cluster1-f9pk4
I0128 14:56:34.584650 21329 join.go:299] Created federated cluster resource
# Join cluster2
[root@kubefed kubefed]# kubefedctl join cluster2 --cluster-context cluster2 \
> --host-cluster-context cluster1 --v=2
I0128 14:56:43.741310 21384 join.go:161] Args and flags: name cluster2, host: cluster1, host-system-namespace: kube-federation-system, kubeconfig: , cluster-context: cluster2, secret-name: , dry-run: false
I0128 14:56:43.780620 21384 join.go:242] Performing preflight checks.
I0128 14:56:43.789452 21384 join.go:248] Creating kube-federation-system namespace in joining cluster
I0128 14:56:43.796212 21384 join.go:255] Created kube-federation-system namespace in joining cluster
I0128 14:56:43.796236 21384 join.go:427] Creating service account in joining cluster: cluster2
I0128 14:56:43.801056 21384 join.go:437] Created service account: cluster2-cluster1 in joining cluster: cluster2
I0128 14:56:43.801072 21384 join.go:464] Creating cluster role and binding for service account: cluster2-cluster1 in joining cluster: cluster2
I0128 14:56:43.824267 21384 join.go:473] Created cluster role and binding for service account: cluster2-cluster1 in joining cluster: cluster2
I0128 14:56:43.824292 21384 join.go:833] Creating cluster credentials secret in host cluster
I0128 14:56:43.827105 21384 join.go:861] Using secret named: cluster2-cluster1-token-dq8c6
I0128 14:56:43.830034 21384 join.go:934] Created secret in host cluster named: cluster2-2q2fg
I0128 14:56:43.836561 21384 join.go:299] Created federated cluster resource
# List the joined clusters
[root@kubefed kubefed]# kubectl -n kube-federation-system get kubefedclusters
NAME       AGE   READY
cluster1   63s   True
cluster2   54s   True
```
- Enable synchronization of CRDs
```shell
# Enable federation of CustomResourceDefinitions
kubefedctl enable customresourcedefinitions
# To synchronize a specific CRD, use the following command
kubefedctl federate crd <target kubernetes API type>
# <target kubernetes API type> = mytype.mygroup.mydomain.io
```
- Verify federation sync by synchronizing a Namespace
```shell
# Upload the "example" folder from the Baidu pan to the server
[root@kubefed kubefed]# cd example/sample1/
# The steps below first create a Namespace named test-namespace and then a
# FederatedNamespace. Creating the FederatedNamespace directly would fail with an
# error saying the namespace does not exist. This is also why I said federation
# works like primary/replica synchronization in a database.
[root@kubefed sample1]# kubectl apply -f namespace.yaml -f federatednamespace.yaml
namespace/test-namespace created
federatednamespace.types.kubefed.io/test-namespace created
# Check the namespaces on cluster2
[root@kubefed sample1]# kubectl get ns --context=cluster2
NAME                     STATUS   AGE
default                  Active   23m
kube-federation-system   Active   11m
kube-node-lease          Active   23m
kube-public              Active   23m
kube-system              Active   23m
local-path-storage       Active   23m
test-namespace           Active   2m32s   # created via the FederatedNamespace
```
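The `federatednamespace.yaml` used above comes from the Baidu pan. For readers without access, a sketch of what such a manifest looks like, modeled on the sample in the KubeFed repository (the actual pan file may differ in details):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: test-namespace
  namespace: test-namespace   # the Namespace must already exist in the host cluster
spec:
  placement:
    clusters:                  # member clusters the namespace is propagated to
    - name: cluster1
    - name: cluster2
```

The `placement.clusters` list is what drives propagation: removing a cluster from it deletes the namespace from that member cluster.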
- Verify federation sync of a Deployment
```shell
[root@kubefed sample1]# kubectl apply -f federateddeployment.yaml --context=cluster1
federateddeployment.types.kubefed.io/test-deployment created
[root@kubefed kubefed]# kubectl get po -n test-namespace --context=cluster1
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-6799fc88d8-cdvwb   1/1     Running   0          2m3s
test-deployment-6799fc88d8-dfnlm   1/1     Running   0          2m3s
test-deployment-6799fc88d8-h8wcd   1/1     Running   0          2m3s
[root@kubefed kubefed]# kubectl get po -n test-namespace --context=cluster2
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-8656cd9f7f-7vr2w   1/1     Running   0          2m5s
test-deployment-8656cd9f7f-m5s88   1/1     Running   0          2m5s
test-deployment-8656cd9f7f-mhxqr   1/1     Running   0          2m5s
test-deployment-8656cd9f7f-pqrpj   1/1     Running   0          2m5s
test-deployment-8656cd9f7f-vh52h   1/1     Running   0          2m5s
```
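Note the asymmetry in the output: 3 pods on cluster1 and 5 on cluster2. That matches the KubeFed sample manifest, which sets 3 replicas in the template and uses a per-cluster override to raise the count to 5 on cluster2. A sketch of such a `federateddeployment.yaml` (again modeled on the upstream sample; the pan file may differ):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:                      # an ordinary Deployment spec, stamped out per cluster
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
  placement:
    clusters:
    - name: cluster1
    - name: cluster2
  overrides:                     # per-cluster patches applied on top of the template
  - clusterName: cluster2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
```

Overrides use JSON-pointer paths into the templated object, so any field (image tag, env var, resources) can be varied per cluster the same way.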
- Verify federation sync of a Service
```shell
[root@kubefed kubefed]# kubectl apply -f example/sample1/federatedservice.yaml
[root@kubefed kubefed]# kubectl get svc -n test-namespace --context=cluster1
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
test-service   NodePort   10.96.50.39   <none>        80:30475/TCP   2m42s
[root@kubefed kubefed]# kubectl get svc -n test-namespace --context=cluster2
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
test-service   NodePort   10.96.20.53   <none>        80:31941/TCP   2m59s
# Access the service on cluster1
[root@kubefed kubefed]# curl 172.19.0.2:30475
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Access the service on cluster2
[root@kubefed kubefed]# curl 172.19.0.3:31941
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
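Note that each cluster allocates its own NodePort (30475 vs 31941) and ClusterIP; the federation only propagates the Service template. A sketch of what `federatedservice.yaml` plausibly contains, modeled on the KubeFed sample (the pan file may differ):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: test-service
  namespace: test-namespace
spec:
  template:                  # an ordinary Service spec
    spec:
      type: NodePort
      selector:
        app: nginx           # matches the pods from the FederatedDeployment
      ports:
      - name: http
        port: 80
  placement:
    clusters:
    - name: cluster1
    - name: cluster2
```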
- Verify federation sync of an Ingress
```shell
[root@kubefed kubefed]# kubectl apply -f example/sample1/federatedingress.yaml
[root@kubefed kubefed]# kubectl get ingress -n test-namespace --context=cluster1
NAME           CLASS    HOSTS                 ADDRESS   PORTS   AGE
test-ingress   <none>   ingress.example.com             80      5m17s
[root@kubefed kubefed]# kubectl get ingress -n test-namespace --context=cluster2
NAME           CLASS    HOSTS                 ADDRESS   PORTS   AGE
test-ingress   <none>   ingress.example.com             80      5m22s
# Bind the hostname in /etc/hosts
[root@kubefed kubefed]# cat /etc/hosts | grep -v \#
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
172.18.34.28    kubefed kubefed
114.117.198.86  apiserver.cloud.com
172.19.0.2      ingress.example.com
# Access the service through the Ingress hostname.
# If the Ingress address were public, the application could serve public traffic.
# Adding the Ingress addresses of both clusters to an intelligent DNS service would
# route each user to the nearest cluster, giving cross-cluster high availability at
# the traffic layer.
# Cross-cluster database synchronization, however, still needs much more thought and
# design, especially across clouds, where it gets complicated.
[root@kubefed kubefed]# curl ingress.example.com:30475
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
- Use an RSP to control the number of pods in each cluster. The replica count set by the RSP overrides the replica count of the FederatedDeployment, much like an HPA overrides a Deployment's replicas.
```shell
[root@kubefed sample1]# kubectl apply -f rsp.yaml
replicaschedulingpreference.scheduling.kubefed.io/test-deployment created
# cluster1 is allocated 5 pods
[root@kubefed sample1]# k get po -n test-namespace --context=cluster1
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-6799fc88d8-56jj7   1/1     Running   0          15s
test-deployment-6799fc88d8-br8dm   1/1     Running   0          15s
test-deployment-6799fc88d8-gp4qg   1/1     Running   0          16s
test-deployment-6799fc88d8-lw2zz   1/1     Running   0          15s
test-deployment-6799fc88d8-p47x7   1/1     Running   0          15s
# cluster2 is allocated 8 pods
[root@kubefed sample1]# k get po -n test-namespace --context=cluster2
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-8656cd9f7f-2jtwr   1/1     Running   0          20s
test-deployment-8656cd9f7f-45vn2   1/1     Running   0          20s
test-deployment-8656cd9f7f-f6tzd   1/1     Running   0          20s
test-deployment-8656cd9f7f-k8glb   1/1     Running   0          20s
test-deployment-8656cd9f7f-lkhn8   1/1     Running   0          20s
test-deployment-8656cd9f7f-rzs7h   1/1     Running   0          20s
test-deployment-8656cd9f7f-th7gm   1/1     Running   0          20s
test-deployment-8656cd9f7f-xgl9t   1/1     Running   0          21s
```
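One plausible `rsp.yaml` consistent with the 5/8 split above would set 13 total replicas with weights of 5 and 8; this is an illustrative guess, since the actual pan file may use different totals and weights (weights only determine proportions):

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment        # must match the FederatedDeployment name
  namespace: test-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 13            # total across all clusters; the RSP rewrites the
  clusters:                    # per-cluster replica counts to match
    cluster1:
      weight: 5                # 13 * 5/13 = 5 pods
    cluster2:
      weight: 8                # 13 * 8/13 = 8 pods
```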