Kubernetes Tutorial Series (1): Deploying a 1.14.1 Cluster with kubeadm

This article in the Kubernetes tutorial series covers deploying a Kubernetes cluster with the kubeadm installation tool.

1. Environment Overview

1.1 Installation Overview

Kubernetes is normally installed in one of two ways: a manual binary installation or an automated installation with kubeadm. Recent versions of kubeadm deploy the Kubernetes management components as pods inside the cluster, and the community now recommends the one-command automated kubeadm approach; if you are interested you can also deploy a Kubernetes cluster step by step from the binaries. Whichever way you choose, most of the required images cannot be downloaded directly from mainland China and need a proxy, which you will have to work around yourself. This article therefore installs offline: the images are downloaded in advance and imported onto each node to be installed, as sketched below.
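A minimal sketch of the offline image transfer, assuming the images were pulled on a machine that does have internet access (the archive name k8s-v1.14.1-images.tar and the target IP are only examples):

#On the machine with internet access, save the pulled images into a single archive
docker save -o k8s-v1.14.1-images.tar \
    k8s.gcr.io/kube-apiserver:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1 \
    k8s.gcr.io/kube-scheduler:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1 \
    k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/coredns:1.3.1 \
    quay.io/coreos/flannel:v0.11.0-amd64

#Copy the archive to each node to be installed and load it into the local Docker image store
scp k8s-v1.14.1-images.tar root@192.168.1.102:/root/
ssh 192.168.1.102 "docker load -i /root/k8s-v1.14.1-images.tar"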

1.2 Environment

Software versions

Software       Version
OS             CentOS Linux release 7.5.1804 (Core)
Docker         docker-ce-18.03.1.ce-1.el7
Kubernetes     1.14.1
kubeadm        kubeadm-1.14.1-0.x86_64
etcd           3.3.10
flannel        v0.11.0

Environment details

  • Each machine has 2 vCPUs, 4 GB of memory, and a 50 GB disk.

Hostname    Role      IP address       Software
node-1      master    192.168.1.101    docker, kubelet, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
node-2      worker    192.168.1.102    docker, kubelet, kube-proxy, flannel
node-3      worker    192.168.1.103    docker, kubelet, kube-proxy, flannel
node-4      worker    192.168.1.104    docker, kubelet, kube-proxy, flannel

1.3 Environment Preparation

1. Set the hostname; configure the other nodes in the same way.

[root@node-1 ~]# hostnamectl set-hostname node-1

2. Set up the hosts file; put the same entries on the other nodes.

[root@node-1 ~]# cat /etc/hosts
# ::1		localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1	localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.101  node-1
192.168.1.102  node-2
192.168.1.103  node-3
192.168.1.104  node-4

3. Set up passwordless SSH login.

#Generate the key pair
[root@node-1 ~]# ssh-keygen -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/+Ilg51SBgjEG74NPL/Al4EH3icgOXOB+irnx1/DPW8 root@node-1
The key's randomart image is:
+---[RSA 2048]----+
|   =+.           |
|  * *. .         |
| . O B. .        |
|.   O = ..       |
| . . B =S o      |
|  . + =. B .     |
| . . o .* O .    |
|o . o .. o.BE    |
|.o.. ..  ..oo    |
+----[SHA256]-----+

#Copy the public key to node-2, node-3, and node-4
[root@node-1 ~]# ssh-copy-id  192.168.1.102
[root@node-1 ~]# ssh-copy-id  192.168.1.103
[root@node-1 ~]# ssh-copy-id  192.168.1.104

4. Test passwordless login.

[root@node-1 ~]# ssh node-2
[root@node-2 ~]# exit

5. Disable the firewall and SELinux.

[root@node-1 ~]# systemctl stop firewalld
[root@node-1 ~]# systemctl disable firewalld
[root@node-1 ~]# sed -i '/^SELINUX=/ s/enforcing/disabled/g' /etc/selinux/config 
[root@node-1 ~]# setenforce 0
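A quick check that both changes took effect (a sketch; after a reboot with the config above, getenforce reports Disabled rather than Permissive). Note also that kubeadm's preflight checks expect swap to be disabled; if your nodes have swap enabled, swapoff -a plus removing the swap entry from /etc/fstab is a common extra preparation step, although it is not shown in this environment.

[root@node-1 ~]# getenforce
Permissive
[root@node-1 ~]# systemctl is-active firewalld
inactive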

1.4 Install Docker

1. Configure the yum repositories.

[root@node-1 ~]# yum -y install wget 
[root@node-1 ~]#  wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
[root@node-1 ~]#   wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo

2. Install docker-ce.

[root@node-1 ~]# yum -y install docker-ce-18.03.1.ce-1.el7.centos

3. Set the cgroup driver to systemd.

[root@node-1 ~]# cat /etc/docker/daemon.json
{"exec-opts": ["native.cgroupdriver=systemd"]}

4. Start the Docker service, enable it at boot, and check its status.

[root@node-1 ~]# systemctl restart docker ;systemctl enable docker
[root@node-1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 六 2020-01-11 21:02:34 CST; 6ms ago
     Docs: https://docs.docker.com
 Main PID: 1222 (dockerd)
    Tasks: 17
   Memory: 36.8M
   CGroup: /system.slice/docker.service
           ├─1222 /usr/bin/dockerd
           └─1227 docker-containerd --config /var/run/docker/containerd/conta...

1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34+08:00" lev...rd
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.201360905+...2"
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.212637406+...s"
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.214951694+...."
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.426389851+...s"
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.505127221+...."
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.527344893+...ce
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.527864149+...n"
1月 11 21:02:34 node-1 dockerd[1222]: time="2020-01-11T21:02:34.535023013+...k"
1月 11 21:02:34 node-1 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

5. Verify that the cgroup driver is systemd.

[root@node-1 ~]# docker info 
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-862.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.38GiB
Name: node-1
ID: HZTC:XNVC:BUSR:EV53:BN56:4LRW:E5DU:KM4B:7ENO:CQPW:EV7F:2C2E
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

2. Install the Kubernetes Cluster

2.1 Install kubeadm and Related Components

1. Configure the Kubernetes yum repository. In mainland China you can use Alibaba's Kubernetes mirror, which is faster.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet, and kubectl.
[root@node-1 ~]# yum install kubeadm-1.14.1-0 kubectl-1.14.1-0 kubelet-1.14.1-0 --disableexcludes=kubernetes -y
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
kubernetes/signature                                     |  454 B     00:00     
从 https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 检索密钥
导入 GPG key 0xA7317B0F:
 用户ID     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 指纹       : d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 来自       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
从 https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 检索密钥
kubernetes/signature                                     | 1.4 kB     00:00 !!! 
kubernetes/primary                                         |  61 kB   00:00     
kubernetes                                                              442/442
正在解决依赖关系
--> 正在检查事务
---> 软件包 kubeadm.x86_64.0.1.14.1-0 将被 安装
--> 正在处理依赖关系 kubernetes-cni >= 0.7.5,它被软件包 kubeadm-1.14.1-0.x86_64 需要
--> 正在处理依赖关系 cri-tools >= 1.11.0,它被软件包 kubeadm-1.14.1-0.x86_64 需要
---> 软件包 kubectl.x86_64.0.1.14.1-0 将被 安装
---> 软件包 kubelet.x86_64.0.1.14.1-0 将被 安装
--> 正在处理依赖关系 conntrack,它被软件包 kubelet-1.14.1-0.x86_64 需要
--> 正在检查事务
---> 软件包 conntrack-tools.x86_64.0.1.4.4-5.el7_7.2 将被 安装
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit),它被软件包 conntrack-tools-1.4.4-5.el7_7.2.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-5.el7_7.2.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-5.el7_7.2.x86_64 需要
--> 正在处理依赖关系 libnetfilter_queue.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-5.el7_7.2.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-5.el7_7.2.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0()(64bit),它被软件包 conntrack-tools-1.4.4-5.el7_7.2.x86_64 需要
---> 软件包 cri-tools.x86_64.0.1.13.0-0 将被 安装
---> 软件包 kubernetes-cni.x86_64.0.0.7.5-0 将被 安装
--> 正在检查事务
---> 软件包 libnetfilter_cthelper.x86_64.0.1.0.0-10.el7_7.1 将被 安装
---> 软件包 libnetfilter_cttimeout.x86_64.0.1.0.0-6.el7_7.1 将被 安装
---> 软件包 libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 将被 安装
--> 解决依赖关系完成

依赖关系解决

================================================================================
 Package                   架构      版本                   源             大小
================================================================================
正在安装:
 kubeadm                   x86_64    1.14.1-0               kubernetes    8.7 M
 kubectl                   x86_64    1.14.1-0               kubernetes    9.5 M
 kubelet                   x86_64    1.14.1-0               kubernetes     23 M
为依赖而安装:
 conntrack-tools           x86_64    1.4.4-5.el7_7.2        updates       187 k
 cri-tools                 x86_64    1.13.0-0               kubernetes    5.1 M
 kubernetes-cni            x86_64    0.7.5-0                kubernetes     10 M
 libnetfilter_cthelper     x86_64    1.0.0-10.el7_7.1       updates        18 k
 libnetfilter_cttimeout    x86_64    1.0.0-6.el7_7.1        updates        18 k
 libnetfilter_queue        x86_64    1.0.2-2.el7_2          base           23 k

事务概要
.....
Several important dependency packages are installed along with them: socat, cri-tools, kubernetes-cni, and so on.

3. Set the bridge iptables kernel parameters.
[root@node-1 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@node-1 ~]# sysctl --system
Then verify that the parameters took effect, for example as shown below.
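A quick verification (a sketch; both values should report 1 once the configuration is loaded, and if the keys are missing, load the br_netfilter kernel module first with modprobe br_netfilter):
[root@node-1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1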
4. Start the kubelet service and enable it at boot.
[root@node-1 ~]# systemctl restart kubelet
[root@node-1 ~]# systemctl enable kubelet

2.2 Import the Installation Images

#Download the images Kubernetes needs to the local host. They are pulled here from the gcr.azk8s.cn and quay-mirror.qiniu.com mirrors, since k8s.gcr.io is not reachable directly, and re-tagged afterwards:
docker pull  gcr.azk8s.cn/google_containers/kube-controller-manager:v1.14.1
docker pull  gcr.azk8s.cn/google_containers/kube-apiserver:v1.14.1
docker pull  gcr.azk8s.cn/google_containers/kube-scheduler:v1.14.1
docker pull  gcr.azk8s.cn/google_containers/kube-proxy:v1.14.1
docker pull  gcr.azk8s.cn/google_containers/pause:3.1
docker pull  gcr.azk8s.cn/google_containers/etcd:3.3.10
docker pull  gcr.azk8s.cn/google_containers/coredns:1.3.1
docker pull  quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
#If the flannel pull fails, download the flannel:v0.11.0-amd64 image file from GitHub and load it with docker load; see https://github.com/flannel-io/flannel/releases?page=2
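A minimal sketch of that offline load, assuming the release asset was saved locally as flanneld-v0.11.0-amd64.docker (the file name from the flannel release page; adjust it to whatever you downloaded):

#Copy the archive to each node, load it, and confirm the image is present
docker load -i flanneld-v0.11.0-amd64.docker
docker images | grep flannel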


#Re-tag the images with the names kubeadm expects
docker tag  gcr.azk8s.cn/google_containers/kube-controller-manager:v1.14.1   k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag  gcr.azk8s.cn/google_containers/kube-apiserver:v1.14.1      k8s.gcr.io/kube-apiserver:v1.14.1
docker tag  gcr.azk8s.cn/google_containers/kube-scheduler:v1.14.1   k8s.gcr.io/kube-scheduler:v1.14.1
docker tag  gcr.azk8s.cn/google_containers/kube-proxy:v1.14.1   k8s.gcr.io/kube-proxy:v1.14.1
docker tag  gcr.azk8s.cn/google_containers/pause:3.1        k8s.gcr.io/pause:3.1  
docker tag  gcr.azk8s.cn/google_containers/etcd:3.3.10    k8s.gcr.io/etcd:3.3.10
docker tag  gcr.azk8s.cn/google_containers/coredns:1.3.1      k8s.gcr.io/coredns:1.3.1
docker tag  quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64      quay.io/coreos/flannel:v0.11.0-amd64


[root@node-1 ~]# docker images 
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                                    v1.14.1             20a2d7035165        9 months ago        82.1MB
gcr.azk8s.cn/google_containers/kube-proxy                v1.14.1             20a2d7035165        9 months ago        82.1MB
gcr.azk8s.cn/google_containers/kube-apiserver            v1.14.1             cfaa4ad74c37        9 months ago        210MB
k8s.gcr.io/kube-apiserver                                v1.14.1             cfaa4ad74c37        9 months ago        210MB
gcr.azk8s.cn/google_containers/kube-controller-manager   v1.14.1             efb3887b411d        9 months ago        158MB
k8s.gcr.io/kube-controller-manager                       v1.14.1             efb3887b411d        9 months ago        158MB
k8s.gcr.io/kube-scheduler                                v1.14.1             8931473d5bdb        9 months ago        81.6MB
gcr.azk8s.cn/google_containers/kube-scheduler            v1.14.1             8931473d5bdb        9 months ago        81.6MB
quay-mirror.qiniu.com/coreos/flannel                     v0.11.0-amd64       ff281650a721        11 months ago       52.6MB
quay.io/coreos/flannel                                   v0.11.0-amd64       ff281650a721        11 months ago       52.6MB
gcr.azk8s.cn/google_containers/coredns                   1.3.1               eb516548c180        12 months ago       40.3MB
k8s.gcr.io/coredns                                       1.3.1               eb516548c180        12 months ago       40.3MB
gcr.azk8s.cn/google_containers/etcd                      3.3.10              2c4adeb21b4f        13 months ago       258MB
k8s.gcr.io/etcd                                          3.3.10              2c4adeb21b4f        13 months ago       258MB
gcr.azk8s.cn/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
k8s.gcr.io/pause                                         3.1                 da86e6ba6ca1        2 years ago         742kB

2.3 Initialize the Cluster with kubeadm

1. Initialize the cluster with kubeadm. When running kubeadm init you need to use --pod-network-cidr to specify the pod network segment; the value depends on the network plugin you choose. This article uses flannel, so it is set to 10.244.0.0/16 (if you set a different value, it must match the plugin's yaml file when you install the network plugin later). In addition, if several container runtimes are installed you can point at the right socket file with --cri-socket, and if the host has multiple network interfaces you can specify the master address with --apiserver-advertise-address; by default the address on the host's default-gateway interface is used.
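Before running init you can also list or pre-pull the images kubeadm will use (a sketch; with the images already imported above this step is optional):

[root@node-1 ~]# kubeadm config images list --kubernetes-version v1.14.1
[root@node-1 ~]# kubeadm config images pull --kubernetes-version v1.14.1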

If a cluster initialization attempt fails, the cluster can be reset as follows before retrying; then run kubeadm init:

[root@node-1 ~]# kubeadm reset
[root@node-1 ~]# rm -fr ~/.kube/
[root@node-1 ~]# kubeadm init --apiserver-advertise-address 192.168.1.101 --apiserver-bind-port 6443 --kubernetes-version 1.14.1 --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [192.168.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [192.168.1.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.101]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.504089 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node node-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wnm8nj.ubsujoqqhx0lal7z
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.101:6443 --token wnm8nj.ubsujoqqhx0lal7z \
    --discovery-token-ca-cert-hash sha256:9edb116c5c91c81709182686a4eb8bcad7b0870d4e581344a783d826ceabec37 

The output of kubeadm init --apiserver-advertise-address 192.168.1.101 --apiserver-bind-port 6443 --kubernetes-version 1.14.1 --pod-network-cidr 10.244.0.0/16 shows the important steps kubeadm goes through during installation: pulling images, generating certificates, generating kubeconfig files, configuring RBAC authorization, instructions for setting up the kubectl environment, instructions for installing the network plugin, and the command for joining nodes.
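The join token printed above is valid for 24 hours by default. If you need to add nodes later, a fresh join command can be generated on the master (a sketch):

[root@node-1 ~]# kubeadm token create --print-join-command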

2. Generate the kubectl configuration file.

[root@node-1 ~]# mkdir /root/.kube
[root@node-1 ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
[root@node-1 ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
node-1   NotReady   master   50s   v1.14.1

3. Add the worker nodes. Join the other three nodes to the cluster by copying the join command shown above and running it on each node.

[root@node-2 pki]# kubeadm join 192.168.1.101:6443 --token wnm8nj.ubsujoqqhx0lal7z     --discovery-token-ca-cert-hash sha256:9edb116c5c91c81709182686a4eb8bcad7b0870d4e581344a783d826ceabec37 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Repeat the same join on the remaining nodes. After a node has joined, verify with kubectl get nodes; because the network plugin has not yet been installed, the nodes all show NotReady:
[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node-1   NotReady    master   21m    v1.14.1
node-2   NotReady    <none>   4m6s   v1.14.1

Joining also creates the bootstrap-kubelet.conf and kubelet.conf files under /etc/kubernetes/, together with ca.crt in the pki/ directory.

[root@node-2 ~]# cat  /etc/kubernetes/bootstrap-kubelet.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ERXhNVEUwTWpjMU5Wb1hEVE13TURFd09ERTBNamMxTlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTERaCnF1cFlUZWQyeEZqUFF3Vk0xWGM1bUg1QW9oRnUxNjVwRTVEb1l3ZC9NU2svcFN1VnFySmtwMVhTTHIyUkJ0VEIKNXFKUHhUKzVVWXE0SWNQSnJYN0dzY01uVmgyTkp0bXJVOHp2QnVEdG5kZklITVdXVHBWM3pEUFloSEp1RzBaQgpmRXkzbktqM3hvcVI4cHZZU0x3ZmpWQk5XYkJhNTJyTHdtd2JzTWlVV2V0eDJ1dXlFZndxdVF5bmp5UHFZbmU3CmpjRFVnUXdkSVJZUkxwNllJWWR6dmFWZTYvK2ZOUURTdzFRcXJlSkRYRklpRElBRHNBNktpajlJSmxESGZzak8KWmU4VXpJd3cvTmdZZVI0bS90OHREdXVMYy8wVVBEMVgyQVFnbGJ2NHQyMGdGOVdXK3VzSUNEZGV3RlJ1cmgvZApqOTNjMUFtUEd2UDFrZXRnM3FjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJK0NINGxKVUxENkNXRFpZWVllNzU4ZnM5RjgKSE5BTTYxckxYOWNiVHUzVklMR0pXWDRtODgvY2lrTFdST3RoMnoraTBPaTR2YmNiMnZYWWRCYVIzVEUwbm40OQpnRDUrSytDd0JtUnZET0NyQWlFMW4va294bk9iVGkxZ0tDNkFFS2FYZHlLT2RMZVJyUzkvcXBLbVgzU0VFWUZRCktXbjJjd0wxelIrb2c2a2dTY0lxSC9iWTI3YTNOckNrNXdMQmFHS3RMRjdOYXg3VlBVMTcxR0ZDeFJ1YVV0clgKeXQ5czlLbGlHY0QyVHBEZC9pbGppY2V3bm1FWFNwRVJSc05qdUc3N3c1UVlreG05a1BOMlk5Yk5xYXFnYUE5RwoxblN1NWxTOE9RY3M2RFdFVzk1SUhJekxEV1Qvc25NNDA1S0RROUF5TTVKNEtPcG0rdWYvbC9yamNKVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@kubernetes
current-context: tls-bootstrap-token-user@kubernetes
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: wnm8nj.ubsujoqqhx0lal7z
[root@node-2 ~]# cat  /etc/kubernetes/kubelet.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ERXhNVEUwTWpjMU5Wb1hEVE13TURFd09ERTBNamMxTlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTERaCnF1cFlUZWQyeEZqUFF3Vk0xWGM1bUg1QW9oRnUxNjVwRTVEb1l3ZC9NU2svcFN1VnFySmtwMVhTTHIyUkJ0VEIKNXFKUHhUKzVVWXE0SWNQSnJYN0dzY01uVmgyTkp0bXJVOHp2QnVEdG5kZklITVdXVHBWM3pEUFloSEp1RzBaQgpmRXkzbktqM3hvcVI4cHZZU0x3ZmpWQk5XYkJhNTJyTHdtd2JzTWlVV2V0eDJ1dXlFZndxdVF5bmp5UHFZbmU3CmpjRFVnUXdkSVJZUkxwNllJWWR6dmFWZTYvK2ZOUURTdzFRcXJlSkRYRklpRElBRHNBNktpajlJSmxESGZzak8KWmU4VXpJd3cvTmdZZVI0bS90OHREdXVMYy8wVVBEMVgyQVFnbGJ2NHQyMGdGOVdXK3VzSUNEZGV3RlJ1cmgvZApqOTNjMUFtUEd2UDFrZXRnM3FjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJK0NINGxKVUxENkNXRFpZWVllNzU4ZnM5RjgKSE5BTTYxckxYOWNiVHUzVklMR0pXWDRtODgvY2lrTFdST3RoMnoraTBPaTR2YmNiMnZYWWRCYVIzVEUwbm40OQpnRDUrSytDd0JtUnZET0NyQWlFMW4va294bk9iVGkxZ0tDNkFFS2FYZHlLT2RMZVJyUzkvcXBLbVgzU0VFWUZRCktXbjJjd0wxelIrb2c2a2dTY0lxSC9iWTI3YTNOckNrNXdMQmFHS3RMRjdOYXg3VlBVMTcxR0ZDeFJ1YVV0clgKeXQ5czlLbGlHY0QyVHBEZC9pbGppY2V3bm1FWFNwRVJSc05qdUc3N3c1UVlreG05a1BOMlk5Yk5xYXFnYUE5RwoxblN1NWxTOE9RY3M2RFdFVzk1SUhJekxEV1Qvc25NNDA1S0RROUF5TTVKNEtPcG0rdWYvbC9yamNKVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.101:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
[root@node-2 ~]# cat  /etc/kubernetes/pki/ca.crt 
-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIwMDExMTE0Mjc1NVoXDTMwMDEwODE0Mjc1NVowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALDZ
qupYTed2xFjPQwVM1Xc5mH5AohFu165pE5DoYwd/MSk/pSuVqrJkp1XSLr2RBtTB
5qJPxT+5UYq4IcPJrX7GscMnVh2NJtmrU8zvBuDtndfIHMWWTpV3zDPYhHJuG0ZB
fEy3nKj3xoqR8pvYSLwfjVBNWbBa52rLwmwbsMiUWetx2uuyEfwquQynjyPqYne7
jcDUgQwdIRYRLp6YIYdzvaVe6/+fNQDSw1QqreJDXFIiDIADsA6Kij9IJlDHfsjO
Ze8UzIww/NgYeR4m/t8tDuuLc/0UPD1X2AQglbv4t20gF9WW+usICDdewFRurh/d
j93c1AmPGvP1ketg3qcCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAI+CH4lJULD6CWDZYYYe758fs9F8
HNAM61rLX9cbTu3VILGJWX4m88/cikLWROth2z+i0Oi4vbcb2vXYdBaR3TE0nn49
gD5+K+CwBmRvDOCrAiE1n/koxnObTi1gKC6AEKaXdyKOdLeRrS9/qpKmX3SEEYFQ
KWn2cwL1zR+og6kgScIqH/bY27a3NrCk5wLBaGKtLF7Nax7VPU171GFCxRuaUtrX
yt9s9KliGcD2TpDd/iljicewnmEXSpERRsNjuG77w5QYkxm9kPN2Y9bNqaqgaA9G
1nSu5lS8OQcs6DWEW95IHIzLDWT/snM405KDQ9AyM5J4KOpm+uf/l/rjcJU=
-----END CERTIFICATE-----

4. Install the network plugin. Kubernetes supports many network plugins; the only requirement is that they implement CNI (Container Network Interface). The pod network in Kubernetes must provide node-to-node, pod-to-pod, and node-to-pod connectivity; different CNI plugins differ in the extra features they support. Common open-source CNI plugins include flannel, calico, canal, and weave. flannel is an overlay network model that builds vxlan tunnels between nodes to connect the cluster network; it will be covered in more detail later. The installation steps are as follows:

[root@node-1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

The output above shows that deploying flannel requires RBAC authorization, a ConfigMap, and DaemonSets. The DaemonSets cover the various CPU architectures and several are created by default; normally only the amd64 one is needed. You can either download the yaml from the URL above and edit it so that only the kube-flannel-ds-amd64 DaemonSet is kept (see the sketch after the commands below), or delete the unneeded DaemonSets:

#1. Check the DaemonSets installed by flannel
[root@node-1 ~]# kubectl get daemonsets -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     1         1         1       1            1           beta.kubernetes.io/arch=amd64     21s
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       20s
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     21s
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   20s
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     20s
kube-proxy                1         1         1       1            1           <none>                            6m30s

#2. Delete the DaemonSets that are not needed
[root@node-1 ~]# kubectl delete daemonsets kube-flannel-ds-arm kube-flannel-ds-arm64 kube-flannel-ds-ppc64le kube-flannel-ds-s390x -n kube-system
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted


Now check the nodes again: all of them show Ready, and the installation is complete.

[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node-1   Ready    master   21m    v1.14.1
node-2   Ready    <none>   4m6s   v1.14.1
node-3   Ready    <none>   105s   v1.14.1
node-4   Ready    <none>   16s    v1.14.1
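
To confirm that the flannel vxlan overlay is up on a node, you can check the flannel.1 interface and the subnet lease file (a sketch; the exact addresses depend on your environment):

[root@node-1 ~]# ip -d link show flannel.1
[root@node-1 ~]# cat /run/flannel/subnet.env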

2.4 Configure kubectl Command Completion

When interacting with Kubernetes through kubectl you can use either abbreviated or full resource names; for example, kubectl get nodes and kubectl get no produce the same result. To work more efficiently, enable command completion.

[root@node-1 ~]# kubectl completion bash >/etc/kubernetes/kubectl.sh
[root@node-1 ~]# echo "source /etc/kubernetes/kubectl.sh" >>/root/.bashrc 
[root@node-1 ~]# cat /root/.bashrc 
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi
source /etc/kubernetes/kubectl.sh

Apply the configuration:
[root@node-1 ~]# source /etc/kubernetes/kubectl.sh 

Type kubectl get co on the command line and press TAB to auto-complete:
[root@node-1 ~]# kubectl get co
componentstatuses         configmaps                controllerrevisions.apps  
[root@node-1 ~]# kubectl get componentstatuses 
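If TAB completion does not take effect, note that the script generated by kubectl completion bash relies on helper functions from the bash-completion package, which may need to be installed and sourced first (an extra step not shown in this environment):

[root@node-1 ~]# yum -y install bash-completion
[root@node-1 ~]# source /usr/share/bash-completion/bash_completion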

3. Verify the Installed Services

3.1 Verify Node Status

1. List the nodes; the output shows each node's status, role, age, and version.
[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   54m   v1.14.1
node-2   Ready    <none>   36m   v1.14.1
node-3   Ready    <none>   34m   v1.14.1
node-4   Ready    <none>   32m   v1.14.1

2. Describe a node to see its labels, addresses, capacity, resource allocation, events, and so on.
[root@node-1 ~]# kubectl describe node node-1
Name:               node-1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:85:e7:4c:75:e7"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.1.101
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 11 Jan 2020 22:28:13 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 11 Jan 2020 23:22:48 +0800   Sat, 11 Jan 2020 22:28:07 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 11 Jan 2020 23:22:48 +0800   Sat, 11 Jan 2020 22:28:07 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 11 Jan 2020 23:22:48 +0800   Sat, 11 Jan 2020 22:28:07 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 11 Jan 2020 23:22:48 +0800   Sat, 11 Jan 2020 22:34:44 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.1.101
  Hostname:    node-1
Capacity:
 cpu:                2
 ephemeral-storage:  31445996Ki
 hugepages-2Mi:      0
 memory:             1446924Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  28980629866
 hugepages-2Mi:      0
 memory:             1344524Ki
 pods:               110
System Info:
 Machine ID:                 32599e2a74704b2e95443e24ea15d4f6
 System UUID:                9D3E1AAA-A6F4-4556-978C-549495407C60
 Boot ID:                    06c32198-470f-444a-a025-b8a94b3f5445
 Kernel Version:             3.10.0-862.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1
 Kubelet Version:            v1.14.1
 Kube-Proxy Version:         v1.14.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                              ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-fb8b8dccf-8nz6k           100m (5%)     0 (0%)      70Mi (5%)        170Mi (12%)    54m
  kube-system                coredns-fb8b8dccf-d69s8           100m (5%)     0 (0%)      70Mi (5%)        170Mi (12%)    54m
  kube-system                etcd-node-1                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         53m
  kube-system                kube-apiserver-node-1             250m (12%)    0 (0%)      0 (0%)           0 (0%)         53m
  kube-system                kube-controller-manager-node-1    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53m
  kube-system                kube-flannel-ds-amd64-zfp5x       100m (5%)     100m (5%)   50Mi (3%)        50Mi (3%)      48m
  kube-system                kube-proxy-fftl7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54m
  kube-system                kube-scheduler-node-1             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (42%)   100m (5%)
  memory             190Mi (14%)  390Mi (29%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  NodeHasSufficientMemory  54m (x8 over 54m)  kubelet, node-1     Node node-1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    54m (x8 over 54m)  kubelet, node-1     Node node-1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     54m (x7 over 54m)  kubelet, node-1     Node node-1 status is now: NodeHasSufficientPID
  Normal  Starting                 54m                kube-proxy, node-1  Starting kube-proxy.
  Normal  NodeReady                48m                kubelet, node-1     Node node-1 status is now: NodeReady

3. Check the status of the core Kubernetes components: scheduler, controller-manager, and etcd.

[root@node-1 ~]# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

4. Check the pods. The master components kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and coredns are deployed in the cluster as pods, and kube-proxy on the worker nodes also runs as a pod. In practice these pods are managed by other controllers such as DaemonSets.

List all pods currently running in the kube-system namespace:
[root@node-1 ~]# kubectl get pods -n kube-system 
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-8nz6k          1/1     Running   0          56m
coredns-fb8b8dccf-d69s8          1/1     Running   0          56m
etcd-node-1                      1/1     Running   0          55m
kube-apiserver-node-1            1/1     Running   0          55m
kube-controller-manager-node-1   1/1     Running   0          55m
kube-flannel-ds-amd64-5mzk5      1/1     Running   0          39m
kube-flannel-ds-amd64-8n9dp      1/1     Running   0          35m
kube-flannel-ds-amd64-tgczc      1/1     Running   0          36m
kube-flannel-ds-amd64-zfp5x      1/1     Running   0          50m
kube-proxy-bkdvd                 1/1     Running   0          35m
kube-proxy-fftl7                 1/1     Running   0          56m
kube-proxy-kgb7x                 1/1     Running   0          39m
kube-proxy-lfkdf                 1/1     Running   0          36m
kube-scheduler-node-1            1/1     Running   0          55m

List the DaemonSets and Deployments:
[root@node-1 ~]# kubectl get ds -n kube-system 
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-flannel-ds-amd64   4         4         4       4            4           beta.kubernetes.io/arch=amd64   50m
kube-proxy              4         4         4       4            4           <none>                          57m


[root@node-1 ~]# kubectl get deployments -n kube-system 
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           57m

The output above shows that flannel and kube-proxy are deployed as DaemonSets and coredns as a Deployment, but there is no controller for kube-apiserver and the other master components: they are actually deployed as static pods.
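The static pod manifests live under the kubelet manifest directory on the master, so they can be inspected directly on node-1 (a sketch of the expected listing):

[root@node-1 ~]# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml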

4. Problems Encountered

4.1 Problem: sometimes you need to add a new node to an existing k8s cluster, and the join can fail with the following errors.

Error 1:

[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace configmaps "kubelet-config-1.11" is forbidden: User "system:bootstrap:7df77e" cannot get configmaps in the namespace "kube-system"

Cause:

       The kubeadm and kubelet versions do not match the cluster version.

Fix:

       Remove cri-tools and kubelet, then reinstall the correct versions of kubeadm and kubelet. The versions should follow the master's version and must not be newer than it. (If the kubelet version is newer than kubeadm, the node joins successfully but stays NotReady.)
[root@node-2 ~]# kubeadm join 192.168.1.101:6443 --token wnm8nj.ubsujoqqhx0lal7z \
>     --discovery-token-ca-cert-hash sha256:9edb116c5c91c81709182686a4eb8bcad7b0870d4e581344a783d826ceabec37 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.17" is forbidden: User "system:bootstrap:wnm8nj" cannot get resource "configmaps" in API group "" in the namespace "kube-system"


[root@node-2 ~]# yum -y remove cri-tools  kubelet
[root@node-2 ~]# yum install kubeadm-1.14.1-0 kubectl-1.14.1-0 kubelet-1.14.1-0 --disableexcludes=kubernetes -y
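Before rejoining, it helps to confirm that the versions on the node now match the control plane (a sketch):

[root@node-2 ~]# kubeadm version -o short
v1.14.1
[root@node-2 ~]# kubelet --version
Kubernetes v1.14.1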

Error 2:

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Cause:

       When a node joins, the files bootstrap-kubelet.conf and kubelet.conf are created under /etc/kubernetes/, together with ca.crt in the pki/ directory. If these three files already exist, joining the node again fails with this error.

Fix:

       Delete the related files before running kubeadm join again:
[root@node-2 ~]# kubeadm join 192.168.1.101:6443 --token wnm8nj.ubsujoqqhx0lal7z     --discovery-token-ca-cert-hash sha256:9edb116c5c91c81709182686a4eb8bcad7b0870d4e581344a783d826ceabec37 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

[root@node-2 ~]# cd /etc/kubernetes/
[root@node-2 kubernetes]# rm -rf bootstrap-kubelet.conf   ./pki/ca.crt
[root@node-2 ~]# kubeadm join 192.168.1.101:6443 --token wnm8nj.ubsujoqqhx0lal7z     --discovery-token-ca-cert-hash sha256:9edb116c5c91c81709182686a4eb8bcad7b0870d4e581344a783d826ceabec37
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5. Summary

In this chapter we used kubeadm to deploy a Kubernetes cluster with one master and three worker nodes; the following chapters will use this lab environment to study the various Kubernetes features.

A Kubernetes cluster is made up of masters and nodes, which run a number of Kubernetes services.
Master node
The master is the brain of the Kubernetes cluster and runs the following daemons: kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and the Pod network (for example flannel).

  • API Server (kube-apiserver)
    The API Server exposes the HTTP/HTTPS RESTful API, i.e. the Kubernetes API. It is the front end of the Kubernetes cluster: client tools (CLI or UI) and the other Kubernetes components manage all cluster resources through it.

  • Scheduler (kube-scheduler)
    The Scheduler decides which node each Pod runs on. When scheduling it takes into account the cluster topology, the current load on each node, and the application's requirements for high availability, performance, and data affinity.

  • Controller Manager (kube-controller-manager)
    The Controller Manager manages the cluster's resources and keeps them in the desired state. It is made up of multiple controllers, including the replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on.
    Different controllers manage different resources; for example, the replication controller manages the lifecycle of Deployments, StatefulSets, and DaemonSets, while the namespace controller manages Namespace resources.

  • etcd
    etcd stores the configuration information of the Kubernetes cluster and the state of all its resources. When data changes, etcd quickly notifies the relevant Kubernetes components.

  • Pod network
    Pods must be able to communicate with each other, so a Pod network has to be deployed in the Kubernetes cluster; flannel is one of the available options.

6. References

1. Container runtimes installation guide: https://kubernetes.io/docs/setup/production-environment/container-runtimes/

2. Installing kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

3. Creating a cluster with kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

4. Kubernetes tutorial series (2), offline kubeadm deployment of a 1.14.1 cluster: https://cloud.tencent.com/developer/article/1479625

5. Adding nodes to the cluster after the kubeadm token has expired: https://www.jianshu.com/p/a5e379638577

6. Kubernetes: how to resolve image pull failures from k8s.gcr.io: https://blog.csdn.net/educast/article/details/89675278

7. Kubernetes Chinese community documentation: http://docs.kubernetes.org.cn/459.html

8. Kubernetes study notes: k8s cluster installation and deployment: https://blog.csdn.net/mmh19891113/article/details/84338109
