Notes on installing Kubernetes master and worker nodes with kubeadm on CentOS 7.6, and the pitfalls encountered


Original post on my personal blog: http://www.lampnick.com/php/760

Goals of this article

Install Docker and configure a Docker proxy
Install kubeadm
Initialize the Kubernetes master node with kubeadm
Install the weave-kube network plugin
Deploy a Kubernetes worker node
Deploy kubernetes-dashboard
Deploy the monitoring component prometheus-operator (https://github.com/coreos/prometheus-operator)

Environment

VirtualBox on a Mac
VM configuration:
CPU: 2 cores
Memory: 1 GB
Disk: 8 GB

Prerequisites

For setting up proxied ("scientific") downloads on Linux, see: https://blog.liuguofeng.com/p/4010

1. Configure Shadowsocks to start on boot

[root@centos7vm ~]#  vim /etc/systemd/system/shadowsocks.service, with the following content:

[Unit]
Description=Shadowsocks
[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/sslocal -c /etc/shadowsocks.json 
[Install]
WantedBy=multi-user.target

Then run the following commands in the shell:
[root@centos7vm ~]#  systemctl enable shadowsocks.service
[root@centos7vm ~]#  systemctl start shadowsocks.service
[root@centos7vm ~]#  systemctl status shadowsocks.service
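Before moving on, it is worth verifying that the proxy chain actually works. A quick sketch; the ports here are assumptions: sslocal's SOCKS port comes from the local_port field of /etc/shadowsocks.json (1080 below), and 8118 is the HTTP frontend this article later points Docker at:
[root@centos7vm ~]# curl -m 10 -sI -x socks5h://127.0.0.1:1080 https://k8s.gcr.io
[root@centos7vm ~]# curl -m 10 -sI -x http://127.0.0.1:8118 https://k8s.gcr.io
Both commands should print an HTTP status line instead of timing out.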

2. Disable SELinux, the firewall, and swap (otherwise you will hit permission problems)

[root@centos7vm ~]# sestatus
SELinux status: enabled
So disable SELinux permanently (setenforce 0 did not take effect on my CentOS 7.6):
[root@centos7vm ~]# vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled.

[root@centos7vm ~]# sestatus
SELinux status: disabled

[root@centos7vm ~]# systemctl stop firewalld
[root@centos7vm ~]# systemctl disable firewalld
Permanently disable swap:
[root@centos7vm ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
Comment out the swap partition entry, as above.
A reboot is required for these changes to take effect:
[root@centos7vm ~]# reboot
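If you prefer scripting these edits instead of doing them by hand, the equivalent one-liners look like this (a sketch; double-check the sed patterns against your actual /etc/selinux/config and /etc/fstab before trusting them, and run the fstab one only once):
[root@centos7vm ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@centos7vm ~]# swapoff -a                          # turn swap off immediately, for the current boot
[root@centos7vm ~]# sed -i '/ swap / s/^/#/' /etc/fstab  # comment out swap entries permanently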

3. Install Docker and configure a Docker proxy

[root@centos7vm ~]#  yum -y install docker
[root@centos7vm ~]#  mkdir -p /etc/systemd/system/docker.service.d
[root@centos7vm ~]#  vim /etc/systemd/system/docker.service.d/http-proxy.conf
with the following content:
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,172.16.0.0/16,127.0.0.1,10.244.0.0/16"
[root@centos7vm ~]# vim /etc/systemd/system/docker.service.d/https-proxy.conf
with the following content:
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8118" "NO_PROXY=localhost,172.16.0.0/16,127.0.0.1,10.244.0.0/16"
Reload systemd and restart Docker:
[root@centos7vm ~]#  systemctl daemon-reload && systemctl restart docker
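To confirm the drop-ins took effect, ask systemd for Docker's effective environment; it should echo back the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY values configured above:
[root@centos7vm ~]# systemctl show --property=Environment docker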

4. Install kubeadm

Configure the yum repository for kubeadm:
[root@centos7vm ~]#  vim /etc/yum.repos.d/kubernetes.repo
with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
Install kubeadm:
[root@centos7vm ~]# yum install -y kubeadm
This step automatically installs kubectl, kubernetes-cni, and kubelet as dependencies.
Check the versions:
[root@centos7vm ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@centos7vm ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@centos7vm ~]# kubelet --version
Kubernetes v1.14.1
Enable kubelet to start on boot, then start it:
[root@centos7vm ~]#  systemctl enable kubelet.service && systemctl start kubelet
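One caveat before initializing: a bare yum install -y kubeadm installs whatever version is newest in the repo at that moment, so workers installed later can end up on a different version than the master. For reproducible installs, pin the versions explicitly (a sketch; 1.14.1 matches the versions shown above, adjust as needed):
[root@centos7vm ~]# yum list --showduplicates kubeadm    # see which versions the repo offers
[root@centos7vm ~]# yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1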

5. Initialize the Kubernetes master node with kubeadm

Pull the images first (with the proxy on), then turn the proxy off before running init; if it still fails with the proxy off, just reboot the server:
[root@centos7vm ~]# kubeadm config images pull
[root@centos7vm ~]# reboot
[root@centos7vm ~]# kubeadm init
I0424 05:44:25.902645 4348 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0424 05:44:25.902789 4348 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos7vm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.222]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.503495 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node centos7vm as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos7vm as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: smz672.4uxlpw056eykpqi3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.222:6443 --token smz672.4uxlpw056eykpqi3 \
 --discovery-token-ca-cert-hash sha256:8c0b46999ce4cb50f9add92c7c6b28b15fdfeec49c9b09e605e227e667bc0e6b 
The output above means initialization succeeded. The kubeadm join command shown at the end is what adds worker nodes to this master node; you will need it shortly when deploying the worker node, so record it somewhere safe.

6. Run the configuration commands from the kubeadm success message

These commands are needed because access to a Kubernetes cluster is authenticated and encrypted by default. They copy the security configuration file generated during deployment into the current user's .kube directory, which is where kubectl looks for its credentials by default.
Without them, you would have to tell kubectl where the file lives via the KUBECONFIG environment variable on every invocation.
[root@centos7vm ~]# mkdir -p $HOME/.kube
[root@centos7vm ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@centos7vm ~]# chown $(id -u):$(id -g) $HOME/.kube/config
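Alternatively, instead of copying the file, you can point kubectl at it directly through the environment variable mentioned above; this lasts only for the current shell session:
[root@centos7vm ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
Either way, kubectl can now reach the cluster: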

[root@centos7vm ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
centos7vm   NotReady   master   28m   v1.14.1
If you skip the configuration above, kubectl get nodes instead fails with:
[root@centos7vm ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

7. Debugging

As you can see, the get output reports the centos7vm node as NotReady. Why is that?
The most important tool when debugging a Kubernetes cluster is kubectl describe, which shows a Node object's details, status, and events (Event). Let's try it:
[root@centos7vm ~]# kubectl describe node centos7vm
.....

Conditions:
 Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
 ----             ------   -----------------                 ------------------                ------                       -------
 MemoryPressure   False    Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
 DiskPressure     False    Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
 PIDPressure      False    Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
 Ready            False    Wed, 24 Apr 2019 06:40:46 -0400   Wed, 24 Apr 2019 05:44:38 -0400   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
......
As you can see, the node is NotReady because we have not deployed any network plugin yet.
We can also use kubectl to check the status of the individual system Pods on this node. kube-system is the namespace Kubernetes reserves for system Pods; note that this is a Kubernetes Namespace (a unit for partitioning workloads), not a Linux namespace:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-swp2s             0/1     Pending   0          65m
coredns-fb8b8dccf-wcftx             0/1     Pending   0          65m
etcd-centos7vm                      1/1     Running   0          64m
kube-apiserver-centos7vm            1/1     Running   0          64m
kube-controller-manager-centos7vm   1/1     Running   0          64m
kube-proxy-xhlxf                    1/1     Running   0          65m
kube-scheduler-centos7vm            1/1     Running   0          64m
As you can see, the network-dependent Pods, the two CoreDNS replicas, are stuck in Pending, i.e. scheduling failed. That is exactly what you would expect: this master node's network is not ready yet.
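Besides kubectl describe, the namespace's event stream is often the quickest way to see why a Pod is stuck; sorting by creation time puts the most recent clue at the bottom:
[root@centos7vm ~]# kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp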

8. Install the weave-kube network plugin

Note: you need the proxy on to pull these images.
[root@centos7vm ~]# kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
Wait a moment, then re-check the Pod status:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-swp2s             1/1     Running   0          83m
coredns-fb8b8dccf-wcftx             1/1     Running   0          83m
etcd-centos7vm                      1/1     Running   0          82m
kube-apiserver-centos7vm            1/1     Running   0          82m
kube-controller-manager-centos7vm   1/1     Running   0          82m
kube-proxy-xhlxf                    1/1     Running   0          83m
kube-scheduler-centos7vm            1/1     Running   0          82m
weave-net-8s4cl                     2/2     Running   0          2m5s
As you can see, all the system Pods have started successfully, and the Weave plugin we just deployed created a Pod named weave-net-8s4cl under kube-system; generally speaking, such Pods are the network plugin's per-node control component.
Kubernetes supports container network plugins through a generic interface called CNI, which is also the de facto standard for container networking today; essentially every open-source container network project (Flannel, Calico, Canal, Romana, and so on) can plug into Kubernetes via CNI, and they are all deployed in much the same way.

At this point, the Kubernetes master node is fully deployed. If a single-node Kubernetes is all you need, it is usable right now. By default, however, the master node cannot run user Pods, so one extra small step is needed, shown below.
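That small step is removing the NoSchedule taint that kubeadm put on the master during init (visible in the [mark-control-plane] lines of the init output above). A sketch; the trailing minus is kubectl's syntax for removing a taint:
[root@centos7vm ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
After this, user Pods can be scheduled onto the master node as well.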

9. Deploy a Kubernetes worker node

A Kubernetes worker node is almost identical to the master node: both run the kubelet component. The only difference is that during kubeadm init, once the kubelet is up, the master node additionally runs the kube-apiserver, kube-scheduler, and kube-controller-manager system Pods.
Deploying a worker node is therefore the simplest part and takes only two steps. First, on every worker node, repeat all the steps from the "Install Docker" and "Install kubeadm" sections above. Second, run the kubeadm join command that was generated when the master node was deployed:
kubeadm join 172.16.0.222:6443 --token smz672.4uxlpw056eykpqi3 \
 --discovery-token-ca-cert-hash sha256:8c0b46999ce4cb50f9add92c7c6b28b15fdfeec49c9b09e605e227e667bc0e6b
If no worker joins within 24 hours of installing the master node, the bootstrap token expires and a new one must be generated:
kubeadm token create    # generate a new token
kubeadm token list      # list existing tokens
Then run kubeadm join with the new token (a one-step shortcut follows).
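On this version of kubeadm there is also a one-step shortcut that creates a fresh token and prints the full, ready-to-paste join command, discovery hash included:
[root@centos7vm ~]# kubeadm token create --print-join-command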

10. Deploy kubernetes-dashboard

[root@centos7vm manifests]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the downloaded kubernetes-dashboard.yaml: change the RoleBinding to a ClusterRoleBinding, and change the kind and name under roleRef to the all-powerful cluster-admin ClusterRole (superuser privileges, i.e. full access to kube-apiserver). As follows:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
[root@centos7vm manifests]# kubectl apply -f kubernetes-dashboard.yaml 
After deployment completes, check the Pod status:
[root@centos7vm ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-swp2s                 1/1     Running   3          16h
coredns-fb8b8dccf-wcftx                 1/1     Running   3          16h
etcd-centos7vm                          1/1     Running   3          16h
kube-apiserver-centos7vm                1/1     Running   3          16h
kube-controller-manager-centos7vm       1/1     Running   4          16h
kube-proxy-hpz9k                        1/1     Running   0          13h
kube-proxy-xhlxf                        1/1     Running   3          16h
kube-scheduler-centos7vm                1/1     Running   3          16h
kubernetes-dashboard-5f7b999d65-c759c   1/1     Running   0          2m34s
weave-net-8s4cl                         2/2     Running   9          15h
weave-net-d67vs                         2/2     Running   1          13h

To create a dashboard user, follow: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
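For reference, what that wiki creates is an admin-user ServiceAccount bound to the cluster-admin ClusterRole, roughly like this (a sketch of the wiki's manifests as they stood at the time; verify against the wiki itself):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
The login token can then be pulled out of the ServiceAccount's secret:
[root@centos7vm ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')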
Once the user is created, you must create a secure channel to your Kubernetes cluster in order to access the Dashboard from your local workstation. Run the following command:
[root@centos7vm manifests]# kubectl proxy --address=0.0.0.0 --disable-filter=true &
Then open this URL in a browser:
http://172.16.0.222:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

11. Deploy the monitoring component prometheus-operator (https://github.com/coreos/prometheus-operator)

[root@centos7vm ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/bundle.yaml
Note: make sure to adapt the namespace in the ClusterRoleBinding if deploying in a namespace other than the default namespace.
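To verify the operator came up, check its Deployment (a sketch; the bundle.yaml of that era created a Deployment named prometheus-operator in the default namespace, so adjust the namespace if yours differs):
[root@centos7vm ~]# kubectl get deployment prometheus-operator
[root@centos7vm ~]# kubectl get pods | grep prometheus-operator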
[Screenshot: the dashboard after deployment (kubernetes-deployment)]

Problems encountered

Problem 1: connection timeouts while pulling images. The fix is to configure the Docker proxy as described in step 3 above. The failure looked like this:
[root@centos7vm ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
 [WARNING HTTPProxy]: Connection to "https://172.16.0.222" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
 [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
 [WARNING Hostname]: hostname "centos7vm" could not be reached
 [WARNING Hostname]: hostname "centos7vm": lookup centos7vm on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-apiserver ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-scheduler ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Trying to pull repository k8s.gcr.io/kube-proxy ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Trying to pull repository k8s.gcr.io/etcd ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
 [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Trying to pull repository k8s.gcr.io/coredns ... 
Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Problem 2: kubeadm init reports that the kubelet has not started. The failure looked like this:

[root@centos7vm ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
 [WARNING HTTPProxy]: Connection to "https://172.16.0.222" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
 [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
 [WARNING Hostname]: hostname "centos7vm" could not be reached
 [WARNING Hostname]: hostname "centos7vm": lookup centos7vm on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos7vm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.0.222]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos7vm localhost] and IPs [172.16.0.222 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
 timed out waiting for the condition

This error is likely caused by:
 - The kubelet is not running
 - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
 - 'systemctl status kubelet'
 - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
 - 'docker ps -a | grep kube | grep -v pause'
 Once you have found the failing container, you can inspect its logs with:
 - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

So check the kubelet's state with systemctl status kubelet:
[root@centos7vm ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
 Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
 Drop-In: /usr/lib/systemd/system/kubelet.service.d
 └─10-kubeadm.conf
 Active: active (running) since Wed 2019-04-24 01:03:57 EDT; 9min ago
 Docs: https://kubernetes.io/docs/
 Main PID: 13947 (kubelet)
 CGroup: /system.slice/kubelet.service
 └─13947 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi...

Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.528123 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.556646 13947 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.636400 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: I0424 01:13:23.636889 13947 kubelet_node_status.go:283] Setting node annotation to enable volum...h/detach
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.637776 13947 controller.go:115] failed to ensure node lease exists, will retry i... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: I0424 01:13:23.639660 13947 kubelet_node_status.go:72] Attempting to register node centos7vm
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.699581 13947 kubelet_node_status.go:94] Unable to register node "centos7vm" with... refused
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.737079 13947 kubelet.go:2244] node "centos7vm" not found
Apr 24 01:13:23 centos7vm kubelet[13947]: W0424 01:13:23.799325 13947 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 01:13:23 centos7vm kubelet[13947]: E0424 01:13:23.800451 13947 kubelet.go:2170] Container runtime network not ready: NetworkReady=...tialized
Hint: Some lines were ellipsized, use -l to show in full.

Check the container states with docker ps -a | grep kube | grep -v pause:
[root@centos7vm ~]# docker ps -a | grep kube | grep -v pause
a7bd1c323bfa 2c4adeb21b4f "etcd --advertise-..." 3 minutes ago Exited (1) 2 minutes ago k8s_etcd_etcd-centos7vm_kube-system_0298d5694df46086cda3a73b7025fd1a_6
4ff843a990ac efb3887b411d "kube-controller-m..." 3 minutes ago Exited (1) 3 minutes ago k8s_kube-controller-manager_kube-controller-manager-centos7vm_kube-system_b9130a6f5c1174f73db1e98992b49b1c_6
0ae5b5dd5df4 cfaa4ad74c37 "kube-apiserver --..." 3 minutes ago Exited (1) 3 minutes ago k8s_kube-apiserver_kube-apiserver-centos7vm_kube-system_c125074e5c436480a1e85165a5af5b9a_6
484743a36cae 8931473d5bdb "kube-scheduler --..." 8 minutes ago Up 8 minutes k8s_kube-scheduler_kube-scheduler-centos7vm_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
Inspect the logs with docker logs a7bd1c323bfa: the container cannot read /etc/kubernetes/pki (permission denied):
[root@centos7vm ~]# docker logs a7bd1c323bfa
2019-04-24 07:05:54.108645 I | etcdmain: etcd Version: 3.3.10
2019-04-24 07:05:54.108695 I | etcdmain: Git SHA: 27fc7e2
2019-04-24 07:05:54.108698 I | etcdmain: Go Version: go1.10.4
2019-04-24 07:05:54.108700 I | etcdmain: Go OS/Arch: linux/amd64
2019-04-24 07:05:54.108703 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-04-24 07:05:54.108747 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 
2019-04-24 07:05:54.109323 C | etcdmain: open /etc/kubernetes/pki/etcd/peer.crt: permission denied

It turns out SELinux is the cause (an article I came across pointed me in that direction).
The fix:
[root@centos7vm ~]# sestatus
SELinux status: enabled
So disable SELinux permanently (setenforce 0 did not take effect on my CentOS 7.6):
vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled.
A reboot is required for the change to take effect. After rebooting:
[root@centos7vm ~]# sestatus
SELinux status: disabled
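If you want positive confirmation that SELinux was the culprit rather than just disabling it and hoping, the kernel records every denial in the audit log (a sketch; ausearch comes with the audit package, installed by default on CentOS):
[root@centos7vm ~]# ausearch -m avc -ts recent | grep -i kubernetes
Before the fix you should see AVC denials against the files under /etc/kubernetes/pki; afterwards, none. Then reset and re-run the init: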

#kubeadm reset
#kubeadm init 
Problem solved.

When reposting, please credit: lampNick » Notes on installing Kubernetes master and worker nodes with kubeadm on CentOS 7.6, and the pitfalls encountered
