Deploying a Kubernetes v1.18.6 Cluster


I. Cluster Environment Preparation

These are my notes from building a Kubernetes v1.18.6 cluster on three CentOS virtual machines. kubeadm, kubelet and kubectl are all installed with yum, and flannel is used as the network plugin. Mistakes are inevitable; if you spot one, please leave a comment. If you republish this article, please credit the original source: https://www.cnblogs.com/luoahong/p/13432410.html

Unless otherwise noted, all commands for deploying the cluster are run as root.

1. Hardware information


[root@master ~]# lscpu

......

CPU(s):                4

CPU MHz:               2397.220

Hypervisor vendor:     KVM

 

[root@master ~]# free -h

              total        used        free      shared  buff/cache   available

Mem:            17G        1.0G         13G         17M        2.6G         16G

 

[root@master ~]# lsblk

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sr0     11:0    1 1024M  0 rom 

vda    253:0    0  100G  0 disk

2. Software information


[root@master ~]# cat /etc/redhat-release

CentOS Linux release 7.5.1804 (Core)

 

[root@master ~]# kubectl version

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6"

 

[root@master ~]# docker version

Client: Docker Engine - Community

 Version:           19.03.12

3. Verify basic environment correctness

Purpose | Command
Ensure all cluster nodes can reach each other | ping -c 3 <ip>
Ensure every MAC address is unique | ip link or ifconfig -a
Ensure every hostname in the cluster is unique | check with hostnamectl status, change with hostnamectl set-hostname <hostname>
Ensure the system product UUID is unique | dmidecode -s system-uuid or sudo cat /sys/class/dmi/id/product_uuid
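A quick way to run these checks is sketched below (not from the original post; run it on every node and compare the output across nodes; the IPs are the ones used throughout this article):

# Print the values that must be unique on each node (sketch)
hostnamectl status
ip link show | awk '/link\/ether/ {print $2}' | sort -u
cat /sys/class/dmi/id/product_uuid
# Confirm the three machines can reach each other
for ip in 192.168.118.4 192.168.118.19 192.168.118.20; do ping -c 3 $ip; done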

4. Make sure the required ports are open

Port check on the kube-master node:


[root@master ~]# netstat -lntup

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name

tcp6       0      0 :::10250                :::*                    LISTEN      14154/kubelet               

       

tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      14494/kube-proxy

tcp6       0      0 :::10256                :::*                    LISTEN      14494/kube-proxy

       

tcp        0      0 192.168.118.4:2379      0.0.0.0:*               LISTEN      13805/etcd                 

tcp        0      0 192.168.118.4:2380      0.0.0.0:*               LISTEN      13805/etcd         

             

tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      13817/kube-controll

 

tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      13877/kube-schedule   

          

tcp6       0      0 :::6443                 :::*                    LISTEN      13755/kube-apiserve

Port check on the kube-node* nodes:


[root@node1 ~]# netstat -lntup

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   

 


tcp        0      0 0.0.0.0:30443           0.0.0.0:*               LISTEN      13294/kube-proxy   

tcp        0      0 0.0.0.0:30964           0.0.0.0:*               LISTEN      13294/kube-proxy 

            

tcp6       0      0 :::10250                :::*                    LISTEN      12951/kubelet      

tcp6       0      0 :::10256                :::*                    LISTEN      13294/kube-proxy
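If you prefer an active check instead of reading netstat output, the small probe below (my own sketch, using bash's /dev/tcp and the master IP 192.168.118.4 from this article) tests the key control-plane ports from any node:

# Probe the key kube-master ports; "open" means the port is reachable
for p in 6443 2379 2380 10250 10257 10259; do
  if timeout 1 bash -c "echo > /dev/tcp/192.168.118.4/$p" 2>/dev/null; then
    echo "port $p open"
  else
    echo "port $p closed"
  fi
done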

II. Environment Initialization (run on all nodes)

1. Configure mutual trust between hosts

1) Add hosts entries on every node:


cat >> /etc/hosts <<EOF

192.168.118.4 master

192.168.118.19 node1

192.168.118.20 node2

EOF
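To confirm that the entries resolve and the hosts answer, a one-line check (a sketch added here, using the hostnames defined above):

# Each hostname should resolve via /etc/hosts and reply to ping
for h in master node1 node2; do getent hosts $h && ping -c 1 -W 1 $h >/dev/null && echo "$h OK"; done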

2) Generate an SSH key on the master and distribute the public key to all nodes:


# Generate an SSH key pair; just press Enter through all the prompts

ssh-keygen -t rsa

# Copy the public key to each node's trusted list; you will be prompted for each host's password

ssh-copy-id -i /root/.ssh/id_rsa.pub master

ssh-copy-id -i /root/.ssh/id_rsa.pub node1

ssh-copy-id -i /root/.ssh/id_rsa.pub node2
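Before moving on, it is worth verifying that key-based login works; a small check (not in the original) that should print each hostname without asking for a password:

for h in master node1 node2; do ssh -o BatchMode=yes root@$h hostname; done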

2. Disable swap


swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab
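To double-check that swap is off now and stays off after a reboot (a quick sketch):

swapon -s              # no output means no swap device is active
free -h | grep -i swap # the Swap line should show 0B
grep swap /etc/fstab   # the swap entry should now be commented out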

III. Deploy Docker (run on all nodes)

1. Add the Docker yum repository


# Install the required dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Aliyun docker-ce yum repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Rebuild the yum cache

yum makecache fast

2. Install Docker


# List the available docker-ce versions

yum list docker-ce.x86_64 --showduplicates | sort -r

# Install the version pinned for this article

yum install -y docker-ce-19.03.12-3.el7

3. Make sure the required kernel modules load at boot


lsmod | grep overlay

lsmod | grep br_netfilter

If the commands above return nothing (or report that the files do not exist), run the following:


cat > /etc/modules-load.d/docker.conf <<EOF

overlay

br_netfilter

EOF

modprobe overlay

modprobe br_netfilter

4. Make bridged traffic visible to iptables


cat > /etc/sysctl.d/k8s.conf <<EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system

Verify that the settings took effect; both commands should return 1:


sysctl -n net.bridge.bridge-nf-call-iptables

sysctl -n net.bridge.bridge-nf-call-ip6tables

5. Configure Docker


mkdir /etc/docker

# Set the cgroup driver to systemd (recommended for Kubernetes), cap container log size, and set the storage driver; the data-root at the end can be changed as needed

cat > /etc/docker/daemon.json <<EOF

{

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver""json-file",

  "log-opts": {

    "max-size""100m"

  },

  "storage-driver""overlay2",

  "storage-opts": [

    "overlay2.override_kernel_check=true"

  ],

  "registry-mirrors": ["https://7uuu3esz.mirror.aliyuncs.com"],

  "data-root""/data/docker"

}

EOF

# Enable Docker at boot and start it now

systemctl enable --now docker

6. Verify that Docker works


# Check docker info and confirm it matches the configuration above

docker info

# Run the hello-world test

docker run --rm hello-world

# Remove the test image

docker rmi hello-world
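Besides reading through docker info, you can pull out just the fields this article changed; a sketch using Docker's Go-template output (the field names are my assumption of the info template fields):

# Confirm that daemon.json took effect
docker info --format 'cgroup driver:  {{.CgroupDriver}}'
docker info --format 'storage driver: {{.Driver}}'
docker info --format 'data root:      {{.DockerRootDir}}'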

IV. Deploy the Kubernetes Cluster

Unless otherwise noted, run the following steps on every node.

1. Add the Kubernetes yum repository


cat > /etc/yum.repos.d/kubernetes.repo <<EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

# Rebuild the yum cache; enter y to accept the GPG keys

yum makecache fast

2. Install kubeadm, kubelet and kubectl

kubeadm and kubelet are required on every node; kubectl only needs to be installed on the kube-master (worker nodes normally do not use kubectl, so it can be skipped there).


yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet
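Note that the command above installs whatever version is newest in the repository; if you want to be certain you get v1.18.6 to match this article, you can pin the versions explicitly (an optional sketch; the exact release suffix available in the mirror may differ):

# Optional: pin the exact version instead of installing the latest
yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6 --disableexcludes=kubernetes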

3. Configure command completion


# Install the bash-completion package

yum install bash-completion -y

# Set up kubectl and kubeadm completion; takes effect at the next login

kubectl completion bash > /etc/bash_completion.d/kubectl

kubeadm completion bash > /etc/bash_completion.d/kubeadm
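If you want completion in the current shell without logging out first, you can source the scripts directly (a small sketch):

# Load completion into the current session
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
source <(kubeadm completion bash)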

4. Pre-pull the Kubernetes images


#!/bin/bash

 

KUBE_VERSION=v1.18.6

PAUSE_VERSION=3.2

CORE_DNS_VERSION=1.6.7

ETCD_VERSION=3.4.3-0

 

# pull kubernetes images from hub.docker.com

docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION

docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION

docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION

docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION

# pull aliyuncs mirror docker images

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

 

# retag to k8s.gcr.io prefix

docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION

docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION

docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION

docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

 

# Untag the original names; the underlying images are not deleted.

docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION

docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION

docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION

docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

Make the script executable and run it to pull the images.

For network reasons this is best done before 7 a.m.; at other times the pull can be very slow or the connection may time out.


chmod +x get-k8s-images.sh

./get-k8s-images.sh

Once the pull finishes, run docker images to check the result:


[root@master ~]# docker images

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy                v1.18.6             c3d62d6fe412        2 weeks ago         117MB

k8s.gcr.io/kube-controller-manager   v1.18.6             ffce5e64d915        2 weeks ago         162MB

k8s.gcr.io/kube-apiserver            v1.18.6             56acd67ea15a        2 weeks ago         173MB

k8s.gcr.io/kube-scheduler            v1.18.6             0e0972b2b5d1        2 weeks ago         95.3MB

k8s.gcr.io/pause                     3.2                 80d28bedfe5d        5 months ago        683kB

k8s.gcr.io/coredns                   1.6.7               67da37a9a360        6 months ago        43.8MB

k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        9 months ago        288MB

V. Initialize the Master (run only on the master node)

1. Set the kubelet's default cgroup driver


cat > /var/lib/kubelet/config.yaml <<EOF

apiVersion: kubelet.config.k8s.io/v1beta1

kind: KubeletConfiguration

cgroupDriver: systemd

EOF

systemctl restart kubelet

2. Initialize the master. 10.244.0.0/16 is the pod CIDR that flannel expects by default; what to set here depends on the network plugin you choose.


[root@master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.6

W0803 23:20:21.320111   12805 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

[init] Using Kubernetes version: v1.18.6

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.118.4]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.118.4 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.118.4 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

W0803 23:20:28.237080   12805 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[control-plane] Creating static Pod manifest for "kube-scheduler"

W0803 23:20:28.238090   12805 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 20.503032 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"

[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[bootstrap-token] Using token: aw0koc.6d40t5a2ydm299c9

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

 

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join 192.168.118.4:6443 --token aw0koc.6d40t5a2ydm299c9 \

    --discovery-token-ca-cert-hash sha256:38343d02ddd645b2f74ddf886925c93115604ea72a01b0b03088ca1d2ac14c6f

3. Give your day-to-day user access to kubectl


[root@master ~]# mkdir -p $HOME/.kube

[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master ~]# echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc

4. Configure kubectl credentials for root on the master


[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile

[root@master ~]# . /etc/profile

Without this, kubectl reports the following error:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

At this point the master has been initialized, but the network plugin is not yet installed, so the cluster cannot communicate with the other nodes:


[root@master ~]# kubectl get nodes

NAME     STATUS     ROLES    AGE   VERSION

master   NotReady   master   81s   v1.18.6

5. Install the network plugin (flannel in this example)

For network reasons this is best done before 7 a.m.; at other times the download can be very slow or even time out.

Download and apply the manifest:


[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@master ~]# kubectl apply -f kube-flannel.yml  

Check that everything is running normally:


[root@master ~]# kubectl get pod -n kube-system

NAME                             READY   STATUS    RESTARTS   AGE

coredns-66bff467f8-d9xjc         1/1     Running   0          2m22s

coredns-66bff467f8-lvldb         1/1     Running   0          2m22s

etcd-master                      1/1     Running   0          2m34s

kube-apiserver-master            1/1     Running   0          2m34s

kube-controller-manager-master   1/1     Running   0          2m34s

kube-proxy-lg58q                 1/1     Running   0          2m22s

kube-scheduler-master            1/1     Running   0          2m33s

Check the kube-master node status:


[root@master ~]# kubectl get nodes

NAME     STATUS   ROLES    AGE   VERSION

master   Ready    master   12h   v1.18.6

If STATUS shows NotReady, run kubectl describe node master to see the details; on slower machines it takes longer to reach the Ready state.
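A few commands that are usually enough to find out why a node is stuck in NotReady (a troubleshooting sketch, not part of the original walkthrough):

kubectl describe node master            # look at the Conditions and Events sections
kubectl get pods -n kube-system -o wide # check whether the flannel/coredns pods are pending or crashing
journalctl -u kubelet -f                # follow the kubelet log on the affected node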

VI. Initialize the node* Nodes and Join Them to the Cluster

1. Export the images for the other nodes

This makes it easy to copy them to the other node machines later; a private image registry would of course be even better.


docker save k8s.gcr.io/kube-proxy:v1.18.6 \

            k8s.gcr.io/kube-apiserver:v1.18.6 \

            k8s.gcr.io/kube-controller-manager:v1.18.6 \

            k8s.gcr.io/kube-scheduler:v1.18.6 \

            k8s.gcr.io/pause:3.2 \

            k8s.gcr.io/coredns:1.6.7 \

            k8s.gcr.io/etcd:3.4.3-0 > k8s-imagesV1.18.6.tar

Copy the image archive to the node machines:


[root@master ~]# scp k8s-imagesV1.18.6.tar node1:~

[root@master ~]# scp k8s-imagesV1.18.6.tar node2:~

2. Load the images on the node machines


[root@node1 ~]# docker load < k8s-imagesV1.18.6.tar

225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB

c965b38a6629: Loading layer [==================================================>]  43.58MB/43.58MB

Loaded image: k8s.gcr.io/coredns:1.6.7

fe9a8b4f1dcc: Loading layer [==================================================>]  43.87MB/43.87MB

ce04b89b7def: Loading layer [==================================================>]  224.9MB/224.9MB

1b2bc745b46f: Loading layer [==================================================>]  21.22MB/21.22MB

Loaded image: k8s.gcr.io/etcd:3.4.3-0

82a5cde9d9a9: Loading layer [==================================================>]  53.87MB/53.87MB

a2b38eae1b39: Loading layer [==================================================>]  21.62MB/21.62MB

f378e9487360: Loading layer [==================================================>]  5.168MB/5.168MB

a35a0b8b55f5: Loading layer [==================================================>]  4.608kB/4.608kB

dea351e760ec: Loading layer [==================================================>]  8.192kB/8.192kB

d57a645c2b0c: Loading layer [==================================================>]  8.704kB/8.704kB

602805206b58: Loading layer [==================================================>]  38.39MB/38.39MB

Loaded image: k8s.gcr.io/kube-proxy:v1.18.6

2d99d0f31eb7: Loading layer [==================================================>]  120.7MB/120.7MB

Loaded image: k8s.gcr.io/kube-apiserver:v1.18.6

82d47bbb60b8: Loading layer [==================================================>]  110.1MB/110.1MB

Loaded image: k8s.gcr.io/kube-controller-manager:v1.18.6

80eec301f276: Loading layer [==================================================>]  42.96MB/42.96MB

Loaded image: k8s.gcr.io/kube-scheduler:v1.18.6

ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB

Loaded image: k8s.gcr.io/pause:3.2

3. Get the cluster join command

The join command was printed at the end of the master initialization. What if you did not write it down?

On the kube-master, create a new token; the matching join command is printed along with it:


[root@master ~]# kubeadm token create --print-join-command

W0804 11:51:54.344223   11517 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

kubeadm join 192.168.118.4:6443 --token eynx2u.ph1ohakkqx1utkl8     --discovery-token-ca-cert-hash sha256:38343d02ddd645b2f74ddf886925c93115604ea72a01b0b03088ca1d2ac14c6f

Compare it with the command printed during cluster initialization:


Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

Then you can join any number of worker nodes by running the following on each as root:

# It is worth saving this command for adding new nodes later

kubeadm join 192.168.118.4:6443 --token aw0koc.6d40t5a2ydm299c9 \

    --discovery-token-ca-cert-hash sha256:38343d02ddd645b2f74ddf886925c93115604ea72a01b0b03088ca1d2ac14c6f

    # The following was generated on the fly with kubeadm token create --print-join-command

kubeadm join 192.168.118.4:6443 --token eynx2u.ph1ohakkqx1utkl8     --discovery-token-ca-cert-hash sha256:38343d02ddd645b2f74ddf886925c93115604ea72a01b0b03088ca1d2ac14c6f
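If only the hash is missing, the --discovery-token-ca-cert-hash value can also be recomputed from the CA certificate on the master; this is the standard openssl pipeline from the kubeadm documentation:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'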

4. Run the join command on each node* machine


[root@node1 ~]# kubeadm join 192.168.118.4:6443 --token aw0koc.6d40t5a2ydm299c9 \

>     --discovery-token-ca-cert-hash sha256:38343d02ddd645b2f74ddf886925c93115604ea72a01b0b03088ca1d2ac14c6f

W0803 23:41:00.380582   12778 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

 

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

 

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5. Check the cluster node status


[root@master ~]# kubectl get nodes

NAME     STATUS   ROLES    AGE   VERSION

master   Ready    master   12h   v1.18.6

node1    Ready    <none>   11h   v1.18.6

node2    Ready    <none>   11h   v1.18.6
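The worker nodes show <none> in the ROLES column; if you would like them to display a role, you can add the conventional node-role label (optional, a small sketch):

kubectl label node node1 node-role.kubernetes.io/worker=
kubectl label node node2 node-role.kubernetes.io/worker=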
