1. Setting up the K8S cluster
Configure the Kubernetes yum repository (since v1.6.0, Kubernetes enables the Container Runtime Interface, CRI, by default; see the official docs: https://kubernetes.io/docs/setup/independent/install-kubeadm/#verify-the-mac-address-and-product-uuid-are-unique-for-every-node)
The official repository is not accessible here, so it is recommended to use the Alibaba Cloud (aliyun) mirror instead. Run the following command to add the kubernetes.repo
repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
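A quick sanity check that the repository was added (just a generic yum command, not required by the setup):
yum repolist enabled | grep -i kubernetes    # the new repo should appear in the list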
Disable swap and the firewall
How to disable them was covered in the previous article.
Disable SELinux
Run: setenforce 0
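For reference, a minimal recap of these steps (a sketch assuming CentOS 7 with firewalld; note that setenforce 0 alone only lasts until the next reboot):
swapoff -a                                                              # turn off swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab                                     # keep swap off after a reboot
systemctl stop firewalld && systemctl disable firewalld                 # stop and disable the firewall
setenforce 0                                                            # switch SELinux to permissive mode now
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # keep it permissive after a reboot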
Install the K8S components
Run the following command to install kubelet, kubeadm, and kubectl:
yum install -y kubelet kubeadm kubectl
As shown in the screenshot below:
Configure kubelet's cgroup driver
Make sure Docker's cgroup driver and kubelet's cgroup driver are the same:
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
If they differ, run:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
As shown in the screenshot:
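For reference, the two values to compare (the example values are only the usual defaults; check your own output):
docker info 2>/dev/null | grep -i 'cgroup driver'                            # e.g. "Cgroup Driver: cgroupfs"
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf     # e.g. "--cgroup-driver=systemd"
If Docker reports cgroupfs while the kubelet unit file says systemd, the sed command above switches kubelet to cgroupfs.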
Start kubelet
Note: according to the official documentation, after installing kubelet, kubeadm, and kubectl you are supposed to start kubelet: systemctl enable kubelet && systemctl start kubelet
In practice, however, kubelet fails to start at this point with the following error:
The logs show that a certificate is missing:
unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
I did not find a fix for this online, but while experimenting I noticed that the later kubeadm init step creates the certificates. In other words, the failure to start at this point does not affect the following steps, so carry on.
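If you want to reproduce and inspect the failure yourself, a small sketch using standard systemd commands:
systemctl enable kubelet && systemctl start kubelet   # enable kubelet and try to start it
systemctl status kubelet                              # it keeps restarting and failing at this stage
journalctl -u kubelet --no-pager | tail -n 20         # the missing /etc/kubernetes/pki/ca.crt error shows up here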
Download the K8S Docker images
This article uses the official kubeadm tool to initialize the K8S cluster. By default the initialization step, kubeadm init, pulls the Docker images the cluster depends on from Google's servers, so it times out and fails for the usual reasons.
However, as long as we import these images in advance, kubeadm init will find them locally and will not try to reach Google. There are ways to obtain the images online, for example by building them through Docker Hub, but they are somewhat tedious.
Here I have packaged all the Docker images needed for initialization; the image version is v1.10.0. Feel free to use them.
- Link:
https://pan.baidu.com/s/11AheivJxFzc4X6Q5_qCw8A
- Password:
2zov
The prepared images are shown in the screenshot below:
K8S moves quickly; as of this writing the latest release is v1.10.2, so the images I provide will gradually fall behind. Readers who need a newer version can build the images themselves by following tutorials online.
The script docker_images_load.sh imports the images:
#!/bin/bash
# Load the pre-downloaded image tarballs into the local Docker image cache
docker load < quay.io#calico#node.tar
docker load < quay.io#calico#cni.tar
docker load < quay.io#calico#kube-controllers.tar
docker load < k8s.gcr.io#kube-proxy-amd64.tar
docker load < k8s.gcr.io#kube-scheduler-amd64.tar
docker load < k8s.gcr.io#kube-controller-manager-amd64.tar
docker load < k8s.gcr.io#kube-apiserver-amd64.tar
docker load < k8s.gcr.io#etcd-amd64.tar
docker load < k8s.gcr.io#k8s-dns-dnsmasq-nanny-amd64.tar
docker load < k8s.gcr.io#k8s-dns-sidecar-amd64.tar
docker load < k8s.gcr.io#k8s-dns-kube-dns-amd64.tar
docker load < k8s.gcr.io#pause-amd64.tar
docker load < quay.io#coreos#etcd.tar
docker load < quay.io#calico#node.tar
docker load < quay.io#calico#cni.tar
docker load < quay.io#calico#kube-policy-controller.tar
docker load < gcr.io#google_containers#etcd.tar
Place the images in the same directory as the script and run it to import the Docker images. Then run docker images; output like the screenshot below means the import succeeded:
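A minimal usage sketch (the directory path is just an example):
cd /root/k8s-images                            # example path: the directory holding the .tar files and the script
chmod +x docker_images_load.sh
./docker_images_load.sh
docker images | grep -E 'k8s.gcr.io|calico'    # verify the images were imported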
2. Cloning the virtual machines
First, name the first virtual machine node1, shut it down, and clone two more VMs (node2, node3).
Then start all three VMs: node1, node2, and node3.
Next, run ip addr on node1, node2, and node3 to check their IP addresses, and make sure the three VMs can ping each other.
Finally, a few simple settings are needed. Taking the master node node1 as an example:
- Edit /etc/hostname and change the hostname to k8s-node1
- Edit /etc/hosts and append a line of the form IP k8s-node1
The IP here is the address of the second NIC (enp0s8); the change takes effect after a reboot. Do the same on the other two nodes, using the hostnames k8s-node2 and k8s-node3 respectively.
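For example, on node1 (192.168.56.101 is the enp0s8 address used for node1 throughout this article; substitute the address ip addr shows on your own machine):
hostnamectl set-hostname k8s-node1               # equivalent to editing /etc/hostname by hand
echo '192.168.56.101 k8s-node1' >> /etc/hosts    # append the node's own IP/hostname mapping
On k8s-node2 and k8s-node3, repeat the same two steps with their own hostnames and enp0s8 addresses.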
3. Creating the cluster
About kubeadm
With the preparation out of the way, we can now actually create the cluster. We use the official kubeadm tool, which makes creating a K8S cluster quick and convenient. For details on kubeadm, see the official documentation: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
As of this writing, kubeadm is still in beta and is not yet officially recommended for production, but a GA release is expected this year. I still suggest using kubeadm where possible: compared with a fully manual deployment it is more efficient and less error-prone.
Creating the cluster
On the master node (k8s-node1), run: kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.10.0 --apiserver-advertise-address=192.168.56.101
What the options mean:
1. --pod-network-cidr=192.168.0.0/16 means the cluster will use the Calico network; Calico's subnet range has to be specified up front
2. --kubernetes-version=v1.10.0 pins the K8S version; it must match the Docker image version imported earlier (v1.10.0), otherwise kubeadm will go back to Google for the latest K8S images
3. --apiserver-advertise-address is the IP of the NIC to bind to; be sure to use the enp0s8 NIC mentioned earlier, otherwise the enp0s3 NIC is used by default
4. If kubeadm init fails or is interrupted, run kubeadm reset before running it again (see the sketch after this list)
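A minimal reset-and-retry sketch, using only the commands already shown in this article:
kubeadm reset    # wipe the state left behind by a failed or aborted init
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.10.0 --apiserver-advertise-address=192.168.56.101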
Execution output:
[root@k8s-node1 ~]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.10.0 --apiserver-advertise-address=192.168.56.101
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-node1] and IPs [192.168.56.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 24.006116 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-node1 as master by adding a label and a taint
[markmaster] Master k8s-node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: kt62dw.q99dfynu1kuf4wgy
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.56.101:6443 --token kt62dw.q99dfynu1kuf4wgy --discovery-token-ca-cert-hash sha256:5404bcccc1ade37e9d80831ce82590e6079c1a3ea52a941f3077b40ba19f2c68
As you can see, the cluster initialized successfully, and we are told to run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
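Once the kubeconfig is in place, a quick optional check that kubectl can talk to the API server (standard kubectl commands):
kubectl cluster-info    # should print where the master and KubeDNS are running
kubectl get nodes       # should list k8s-node1, likely NotReady until the network is deployed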
In addition, it tells us to deploy a pod network and to have the other nodes run the kubeadm join ... command to join the cluster.
Deploying the network
If no pod network is deployed, the cluster is unusable: checking the pod status shows the kube-dns component stuck in a pending state:
You can consult the official documentation and pick whichever network add-on fits your needs; here we use Calico (which we already committed to when initializing the cluster).
According to the official docs, run the following on the master node: kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
One caveat, though:
The Calico Docker images used in this article are version v3.1.0, as shown below.
But as of this writing, the calico.yaml file already references v3.1.1. We therefore need to download calico.yaml, edit it to use v3.1.0, and apply that instead. Otherwise kubectl apply will try to pull the v3.1.1 images and time out, and kube-dns will stay Pending because the network never comes up:
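A minimal sketch of the download-and-edit approach (assuming the only change needed is the v3.1.1 image tag):
curl -O https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
sed -i 's/v3.1.1/v3.1.0/g' calico.yaml    # pin the image tags to the locally imported version
kubectl apply -f calico.yaml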
Once the versions match, a successful apply prints:
4. Cluster settings
Letting the master run workloads
By default, K8S does not schedule pods onto the master, which wastes the master's resources. On the master (i.e. k8s-node1), you can run the following command to let it also serve as a worker node: kubectl taint nodes --all node-role.kubernetes.io/master-
Using this trick, you can also build a single-node K8S cluster without resorting to minikube.
On success it prints:
Joining the other nodes to the cluster
On the other two nodes, k8s-node2 and k8s-node3, run the kubeadm join command generated by the master to join the cluster: kubeadm join 192.168.56.101:6443 --token kt62dw.q99dfynu1kuf4wgy --discovery-token-ca-cert-hash sha256:5404bcccc1ade37e9d80831ce82590e6079c1a3ea52a941f3077b40ba19f2c68
Once a node has joined, it prints:
Verifying the cluster
After all the nodes have joined, wait a moment and run kubectl get nodes on the master node:
If a node shows NotReady, it is not ready yet and is probably still finishing some initialization; just wait until every node reports Ready.
You may wonder why the version shown here is v1.10.2 when we used v1.10.0 earlier. What is displayed is the version of the kubelet binary on each node, i.e. the default version that yum installed, and it is backwards compatible. v1.10.0 is the version of the Docker images K8S depends on, and it must match the version passed to kubeadm init.
It is also worth checking the status of all the pods with kubectl get pods -n kube-system:
If everything is Running, the cluster is healthy. And with that, our K8S cluster is up and running.