Deploying a Production-Grade Kubernetes Cluster with kubeadm

Overview

kubeadm now supports full cluster deployment and went GA in version 1.13, including multi-master and multi-etcd clustered setups. It is also the officially recommended deployment method: it is driven by its own SIG, and kubeadm makes genuinely good use of many Kubernetes features. Over the next few articles we will put it into practice and see what it can do.

Goals

1. Build a highly available Kubernetes cluster with kubeadm and create a new admin user

2. To set up a later upgrade demo, v1.13.1 is used here; the next article upgrades it to v1.14

3. Explain how kubeadm works under the hood

This article focuses on deploying a highly available cluster with kubeadm.

Deploying a highly available k8s v1.13 cluster with kubeadm

There are two topologies:

  • Stacked etcd topology
[Figure: Stacked etcd topology]
  • Each of the 3 masters runs its own co-located etcd member. This is simpler, but etcd redundancy is weaker: losing a master also loses an etcd member
  • Requires at least 4 machines (3 master-plus-etcd, 1 node)
  • External etcd topology
[Figure: External etcd topology]
  • The etcd cluster runs on dedicated machines outside the control plane. Redundancy is better, but it needs at least 7 machines (3 masters, 3 etcd, 1 node)
  • Recommended for production
  • This article uses this topology

Steps

  • Prepare the environment
  • Install the components: docker, kubelet, kubeadm (on all nodes)
  • Use those components to deploy the highly available etcd cluster
  • Deploy the masters
  • Join the nodes
  • Install the network add-on
  • Verify
  • Summary

Environment preparation

  • System environment
# OS version (not mandatory; just this example's environment)
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
# Kernel version (not mandatory; just this example's environment)
$ uname -r
4.17.8-1.el7.elrepo.x86_64
# Enable ftype on the data disk (run on every node)
umount /data
mkfs.xfs -n ftype=1 -f /dev/vdb
# Disable swap and comment its entry out of /etc/fstab
swapoff -a
sed -i "s#^/swapfile#\#/swapfile#g" /etc/fstab
mount -a

Installing docker, kubelet, and kubeadm (all nodes)

Install the container runtime (docker)

  • For k8s 1.13, per the official recommendation, we do not use the newest 18.09 yet; we use 18.06 here, and the version must be pinned explicitly at install time
  • Source: "kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version."
  • Installation script (run on every node):
[Screenshot: Docker installation script]
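The original script was only in the screenshot; below is a minimal sketch of what it does, assuming the standard docker-ce yum repository (the repo URL and the exact 18.06 package release string are assumptions):

# Add the docker-ce repo and pin Docker to 18.06 (run on every node)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.1.ce-3.el7    # assumed 18.06.x release string
systemctl enable --now docker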

Install kubeadm, kubelet, and kubectl

  • The official Google yum repo cannot be downloaded from directly on servers inside China, so fetch the packages through another channel first and then upload them to the servers
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
$ yum -y install --downloadonly --downloaddir=k8s --disableexcludes=kubernetes kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
$ ls k8s/
25cd948f63fea40e81e43fbe2e5b635227cc5bbda6d5e15d42ab52decf09a5ac-kubelet-1.13.1-0.x86_64.rpm
53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm
5af5ecd0bc46fca6c51cc23280f0c0b1522719c282e23a2b1c39b8e720195763-kubeadm-1.13.1-0.x86_64.rpm
7855313ff2b42ebcf499bc195f51d56b8372abee1a19bbf15bb4165941c0229d-kubectl-1.13.1-0.x86_64.rpm
fe33057ffe95bfae65e2f269e1b05e99308853176e24a4d027bc082b471a07c0-kubernetes-cni-0.6.0-0.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
  • Local installation
# Disable selinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Install from the local RPMs
yum localinstall -y k8s/*.rpm
systemctl enable --now kubelet
  • Network fix: CentOS 7 is known to misroute traffic because iptables gets bypassed, so make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Deploying the highly available etcd cluster with the components above

1. On the etcd nodes, configure the etcd service to be started and managed by kubelet

cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet

2. Generate a kubeadm config file for each etcd host, so that each host runs exactly one etcd instance. Run the following commands on etcd1 (HOST0 above); afterwards you will see a directory named after each host under /tmp

# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
export HOST0=10.10.184.226
export HOST1=10.10.213.222
export HOST2=10.10.239.108
# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1" "infra2")
[Screenshot: loop that writes /tmp/${HOST}/kubeadmcfg.yaml for each etcd host]
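The loop in the screenshot follows the official setup-ha-etcd-with-kubeadm guide (referenced at the end); a sketch of it, assuming the kubeadm v1beta1 config schema:

# Write one kubeadmcfg.yaml per etcd host under /tmp/${HOST}/
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  NAME=${NAMES[$i]}
  cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "${HOST}"
    peerCertSANs:
    - "${HOST}"
    extraArgs:
      initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
      initial-cluster-state: new
      name: ${NAME}
      listen-peer-urls: https://${HOST}:2380
      listen-client-urls: https://${HOST}:2379
      advertise-client-urls: https://${HOST}:2379
      initial-advertise-peer-urls: https://${HOST}:2380
EOF
done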

3. Create the CA: run the command on host0 to generate the certificates. It creates two files: /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key (this step needs to reach Google, so a proxy may be required)

[root@10-10-184-226 ~]# kubeadm init phase certs etcd-ca
[certs] Generating "etcd/ca" certificate and key

4. On host0, generate the certificates for each etcd node:

export HOST0=10.10.184.226
export HOST1=10.10.213.222
export HOST2=10.10.239.108
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0
# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
  • Distribute the certificates and kubeadmcfg.yaml to each etcd node; the end result looks like:
[Screenshot: certificates and kubeadmcfg.yaml laid out on each etcd node]
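A sketch of that distribution step, assuming root SSH access between the hosts (the official guide takes essentially the same approach):

# From host0: copy each host's generated files over, then move pki into place there
for HOST in ${HOST1} ${HOST2}; do
  scp -r /tmp/${HOST}/* root@${HOST}:
  ssh root@${HOST} "chown -R root:root pki && mv pki /etc/kubernetes/"
done
# host0's own certificates are already under /etc/kubernetes/pki;
# keep its kubeadmcfg.yaml handy, e.g. as /root/kubeadmcfg.yaml, for the next steps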

5. Generate the static pod manifests; run on each of the 3 etcd nodes (needs access to Google):

$ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

6. Check the etcd cluster health; with this, the etcd cluster is up

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.2.24 etcdctl \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --endpoints https://${HOST0}:2379 cluster-health
member 9969ee7ea515cbd2 is healthy: got healthy result from https://10.10.213.222:2379
member cad4b939d8dfb250 is healthy: got healthy result from https://10.10.239.108:2379
member e6e86b3b5b495dfb is healthy: got healthy result from https://10.10.184.226:2379
cluster is healthy

Deploying the masters with kubeadm

  • Copy the certificates from any one of the etcd nodes to the master1 node
export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
  • On the first master, write the kubeadm-config.yaml configuration file and initialize the cluster
[Screenshot: kubeadm-config.yaml on master1]

Note: k8s.paas.test:6443 here is a load balancer; if you do not have one, a virtual IP can be used instead.
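The actual file was only in the screenshot above; a minimal sketch of what a kubeadm-config.yaml for this external-etcd setup might look like (v1beta1 schema; the pod CIDR matches flannel's default, and the commented imageRepository line is a hypothetical internal mirror):

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
controlPlaneEndpoint: "k8s.paas.test:6443"
apiServer:
  certSANs:
  - "k8s.paas.test"
# imageRepository: "registry.example.internal/k8s.gcr.io"   # hypothetical private registry prefix
etcd:
  external:
    endpoints:
    - https://10.10.184.226:2379
    - https://10.10.213.222:2379
    - https://10.10.239.108:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: "10.244.0.0/16"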

  • Using a private registry (custom images): kubeadm can flexibly customize cluster initialization through the config file. For example, imageRepository sets the image prefix; push the images to an internal registry, edit that parameter in kubeadm-config.yaml (see the commented imageRepository line in the sketch above), and then run init
  • Run on master1: kubeadm init --config kubeadm-config.yaml
[Screenshot: kubeadm init output on master1]

Installing the other 2 masters

  • Copy the admin.conf config file and the pki certificates from master1 to the same paths on the other two masters, e.g.:
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key   (listed in the official docs, but not actually needed here)
/etc/kubernetes/admin.conf

Note: the official docs are missing two files, /etc/kubernetes/pki/apiserver-etcd-client.crt and /etc/kubernetes/pki/apiserver-etcd-client.key; without them the apiserver fails to start with:

Unable to create storage backend: config (&{ /registry [] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true 0xc000133c20  5m0s 1m0s}), err (open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory)
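A sketch of copying the full set from master1, including the two apiserver-etcd-client files that the official doc leaves out (hostnames and root SSH access are assumptions):

for M in k8s-m2 k8s-m3; do
  ssh root@${M} "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub} root@${M}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/{front-proxy-ca.crt,front-proxy-ca.key} root@${M}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/{apiserver-etcd-client.crt,apiserver-etcd-client.key} root@${M}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.crt root@${M}:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf root@${M}:/etc/kubernetes/
done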
  • Run the join on masters 2 and 3:
[Screenshot: kubeadm join output on masters 2 and 3]
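The exact command was in the screenshot; in v1.13 a control-plane join is the regular kubeadm join plus --experimental-control-plane, roughly as below (token and hash reused from the node join in the next section):

kubeadm join k8s.paas.test:6443 --token f1oygc.3zlc31yjcut46prf \
    --discovery-token-ca-cert-hash sha256:078b63e29378fb6dcbedd80dd830b83e37521f294b4e3416cd77e854041d912f \
    --experimental-control-plane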

Joining the nodes

[root@k8s-n1 ~]$ kubeadm join k8s.paas.test:6443 --token f1oygc.3zlc31yjcut46prf --discovery-token-ca-cert-hash sha256:078b63e29378fb6dcbedd80dd830b83e37521f294b4e3416cd77e854041d912f
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "k8s.paas.test:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://k8s.paas.test:6443"
[discovery] Requesting info from "https://k8s.paas.test:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s.paas.test:6443"
[discovery] Successfully established connection with API Server "k8s.paas.test:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-n1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Installing the network add-on

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-dc4t2         1/1     Running   0          14m     172.17.0.3      k8s-m1
kube-system   coredns-86c58d9df4-jxv6v         1/1     Running   0          14m     172.17.0.2      k8s-m1
kube-system   kube-apiserver-k8s-m1            1/1     Running   0          13m     10.10.119.128   k8s-m1
kube-system   kube-apiserver-k8s-m2            1/1     Running   0          5m      10.10.76.80     k8s-m2
kube-system   kube-apiserver-k8s-m3            1/1     Running   0          4m58s   10.10.56.27     k8s-m3
kube-system   kube-controller-manager-k8s-m1   1/1     Running   0          13m     10.10.119.128   k8s-m1
kube-system   kube-controller-manager-k8s-m2   1/1     Running   0          5m      10.10.76.80     k8s-m2
kube-system   kube-controller-manager-k8s-m3   1/1     Running   0          4m58s   10.10.56.27     k8s-m3
kube-system   kube-flannel-ds-amd64-nvmtk      1/1     Running   0          44s     10.10.56.27     k8s-m3
kube-system   kube-flannel-ds-amd64-pct2g      1/1     Running   0          44s     10.10.76.80     k8s-m2
kube-system   kube-flannel-ds-amd64-ptv9z      1/1     Running   0          44s     10.10.119.128   k8s-m1
kube-system   kube-flannel-ds-amd64-zcv49      1/1     Running   0          44s     10.10.175.146   k8s-n1
kube-system   kube-proxy-9cmg2                 1/1     Running   0          2m34s   10.10.175.146   k8s-n1
kube-system   kube-proxy-krlkf                 1/1     Running   0          4m58s   10.10.56.27     k8s-m3
kube-system   kube-proxy-p9v66                 1/1     Running   0          14m     10.10.119.128   k8s-m1
kube-system   kube-proxy-wcgg6                 1/1     Running   0          5m      10.10.76.80     k8s-m2
kube-system   kube-scheduler-k8s-m1            1/1     Running   0          13m     10.10.119.128   k8s-m1
kube-system   kube-scheduler-k8s-m2            1/1     Running   0          5m      10.10.76.80     k8s-m2
kube-system   kube-scheduler-k8s-m3            1/1     Running   0          4m58s   10.10.56.27     k8s-m3

The installation is complete.

Verification

  • First verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are working:
$ kubectl create deployment nginx --image=nginx:alpine
$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-54458cd494-r6hqm   1/1     Running   0          5m24s   10.244.4.2   k8s-n1
  • Verify kube-proxy
$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-m1 ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        122m
nginx        NodePort    10.108.192.221   <none>        80:30992/TCP   4s
[root@k8s-m1 ~]$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-54458cd494-r6hqm   1/1     Running   0          6m53s   10.244.4.2   k8s-n1

$ curl -I k8s-n1:30992
HTTP/1.1 200 OK
  • Verify DNS and the pod network
kubectl run --generator=run-pod/v1 -it curl --image=radial/busyboxplus:curl
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.108.192.221 nginx.default.svc.cluster.local
  • High availability

Shut down master1 and access Nginx from a random pod:

while true;do curl -I nginx && sleep 1 ;done

Summary

On versions

  • Kernel 4.19 is more stable; the 4.17 used here is not recommended (anything newer moves into the 5.x series)
  • The latest stable Docker release is 17.12; 18.06 is used here. Although Kubernetes has officially confirmed compatibility with 18.09, 17.12 is still recommended for production

On networking: choices differ from company to company, and flannel is fairly common in small and mid-sized shops. Pick the network plugin before deploying and set it in the config file ahead of time (the official blog's initial kubeadm config leaves it out, yet the later network setup section requires it to be there).

Handling errors

  • If you want to reset the environment, kubeadm reset is a good tool, but it does not reset everything: some of the data in etcd (ConfigMaps, Secrets, and so on) is not cleared. So if you need to reset the entire environment, remember to reset etcd as well after the reset.
  • To reset etcd, clear /var/lib/etcd on each etcd node and restart the docker service (a sketch follows below)
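A sketch of that etcd reset, assuming the kubelet-managed static-pod etcd set up earlier (this destroys all cluster state):

# On each etcd node: stop kubelet so the static etcd pod is not restarted,
# wipe the data directory, then bring everything back up
systemctl stop kubelet
docker ps -q --filter name=etcd | xargs -r docker stop
rm -rf /var/lib/etcd/*
systemctl restart docker
systemctl start kubelet    # kubelet recreates etcd from /etc/kubernetes/manifests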

Getting around the firewall

  • Images: kubeadm supports a custom image prefix; just set imageRepository in kubeadm-config.yaml
  • yum: packages can be downloaded and imported ahead of time, or http_proxy can be set to reach the repo
  • init: issuing certificates and running init need to reach Google; http_proxy can be set for this as well (see the sketch below)
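A sketch of the proxy approach, making sure cluster-internal traffic bypasses the proxy (the proxy address is a placeholder; the CIDRs match this article's environment):

export http_proxy=http://proxy.example.internal:3128
export https_proxy=http://proxy.example.internal:3128
# keep localhost, the node/service/pod networks and the API endpoint off the proxy
export no_proxy=127.0.0.1,localhost,10.10.0.0/16,10.96.0.0/12,10.244.0.0/16,k8s.paas.test
kubeadm init --config kubeadm-config.yaml
unset http_proxy https_proxy no_proxy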

More

  • Certificates and the upgrade process will be covered in the next article.

References:

  • https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/

