CKA Exam Prep Lab | Installing a Kubernetes Cluster

Book source: 《CKA/CKAD应试指南:从Docker到Kubernetes完全攻略》 (CKA/CKAD Exam Guide: The Complete Path from Docker to Kubernetes)

These are the instructor's course content and lab notes, organized as I study and shared with everyone. If anything infringes copyright it will be removed. Thanks for your support!

Index post: CKA备考实验 | 汇总 (热爱编程的通信人's CSDN blog)


This section walks through building a complete Kubernetes cluster: preparing the lab environment, installing the master, joining the workers to the cluster, and installing the Calico network.

Lab Topology and Environment

For this chapter and the labs that follow, we need three machines: one master and two workers. The topology and configuration are shown in Figure 3-5.

The machine specifications are listed in Table 3-3.

Lab Preparation

Before installing Kubernetes, we need to set up the yum repositories, disable SELinux, turn off swap, and so on. All of the preparation steps below are performed on every node.

##########Hands-on verification##########
# Set up the yum repositories
[root@localhost ~]# rm -rf /etc/yum.repos.d/* ; wget -P /etc/yum.repos.d ftp://ftp.rhce.cc/k8s/*
--2023-05-03 20:58:52--  ftp://ftp.rhce.cc/k8s/*
           => ‘/etc/yum.repos.d/.listing’
Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41, 101.37.152.41
Connecting to ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD (1) /k8s ... done.
==> PASV ... done.    ==> LIST ... done.

    [ <=>                                                                                                                                                     ] 544         --.-K/s   in 0s      

2023-05-03 20:58:52 (64.7 MB/s) - ‘/etc/yum.repos.d/.listing’ saved [544]

Removed ‘/etc/yum.repos.d/.listing’.
--2023-05-03 20:58:52--  ftp://ftp.rhce.cc/k8s/CentOS-Base.repo
           => ‘/etc/yum.repos.d/CentOS-Base.repo’
==> CWD not required.
==> PASV ... done.    ==> RETR CentOS-Base.repo ... done.
Length: 2206 (2.2K)

100%[========================================================================================================================================================>] 2,206       --.-K/s   in 0.001s  

2023-05-03 20:58:52 (3.35 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2206]

--2023-05-03 20:58:52--  ftp://ftp.rhce.cc/k8s/docker-ce.repo
           => ‘/etc/yum.repos.d/docker-ce.repo’
==> CWD not required.
==> PASV ... done.    ==> RETR docker-ce.repo ... done.
Length: 234

100%[========================================================================================================================================================>] 234         --.-K/s   in 0.003s  

2023-05-03 20:58:52 (84.3 KB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [234]

--2023-05-03 20:58:52--  ftp://ftp.rhce.cc/k8s/docker-ce.repo.bak
           => ‘/etc/yum.repos.d/docker-ce.repo.bak’
==> CWD not required.
==> PASV ... done.    ==> RETR docker-ce.repo.bak ... done.
Length: 2640 (2.6K)

100%[========================================================================================================================================================>] 2,640       --.-K/s   in 0.001s  

2023-05-03 20:58:53 (3.21 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo.bak’ saved [2640]

--2023-05-03 20:58:53--  ftp://ftp.rhce.cc/k8s/epel.repo
           => ‘/etc/yum.repos.d/epel.repo’
==> CWD not required.
==> PASV ... done.    ==> RETR epel.repo ... done.
Length: 923

100%[========================================================================================================================================================>] 923         --.-K/s   in 0.002s  

2023-05-03 20:58:53 (441 KB/s) - ‘/etc/yum.repos.d/epel.repo’ saved [923]

--2023-05-03 20:58:53--  ftp://ftp.rhce.cc/k8s/k8s.repo
           => ‘/etc/yum.repos.d/k8s.repo’
==> CWD not required.
==> PASV ... done.    ==> RETR k8s.repo ... done.
Length: 139

100%[========================================================================================================================================================>] 139         --.-K/s   in 0s      

2023-05-03 20:58:53 (20.9 MB/s) - ‘/etc/yum.repos.d/k8s.repo’ saved [139]

--2023-05-03 20:58:53--  ftp://ftp.rhce.cc/k8s/k8s.repo.bak
           => ‘/etc/yum.repos.d/k8s.repo.bak’
==> CWD not required.
==> PASV ... done.    ==> RETR k8s.repo.bak ... done.
Length: 276

100%[========================================================================================================================================================>] 276         --.-K/s   in 0s      

2023-05-03 20:58:53 (48.1 MB/s) - ‘/etc/yum.repos.d/k8s.repo.bak’ saved [276]

[root@localhost ~]# 

# Disable SELinux
# Temporarily disable SELinux
[root@vms100 ~]# setenforce 0
[root@vms100 ~]# 
# Permanently disable SELinux
[root@vms100 ~]# vi /etc/sysconfig/selinux
Change the SELINUX line to SELINUX=disabled
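
If you prefer not to open an editor, a non-interactive sketch (assuming the file still contains the stock SELINUX=enforcing line; /etc/sysconfig/selinux is a link to /etc/selinux/config) is:

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config    # should now show SELINUX=disabled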

# Turn off swap
[root@localhost ~]# swapoff -a
# Edit /etc/fstab and comment out the swap line
[root@localhost ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue May  2 16:18:01 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=2e2d160a-b507-436d-8093-0aa4d2be7232 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@localhost ~]# 
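
If you want to comment out the swap entry non-interactively, a sketch is below (it simply prefixes any line containing a swap mount with #; double-check /etc/fstab afterwards):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab
grep swap /etc/fstab
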
# Reboot the machine
[root@localhost ~]# shutdown -r now
# Verify after the reboot
[root@localhost ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.7G        665M        2.5G         10M        549M        2.7G
Swap:            0B          0B          0B

## Set the firewalld default zone to trusted
[root@localhost ~]# firewall-cmd --set-default-zone=trusted
success
[root@localhost ~]#

Step 1: It is recommended that all nodes run CentOS 7.4. Synchronize /etc/hosts across all nodes.
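
A minimal /etc/hosts sketch for this topology (the master IP 192.168.1.110 comes from the kubeadm init output later in this section; the two worker IPs below are placeholders, substitute your own):

192.168.1.110   vms10.rhce.cc   vms10
192.168.1.111   vms11.rhce.cc   vms11    # placeholder IP
192.168.1.112   vms12.rhce.cc   vms12    # placeholder IP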

Step 2: Configure the firewall and disable SELinux on all nodes.

##########Hands-on verification##########
# Already configured earlier; just check here
[root@vmsX ~]# firewall-cmd --get-default-zone
trusted
[root@vmsX ~]# getenforce
Disabled
[root@vmsX ~]#

Step 3: Turn off swap on all nodes and comment out the swap entries in /etc/fstab.

##########Hands-on verification##########
# Already configured earlier

Step 4: Configure the yum repositories on all nodes (install wget in advance before running the commands).

##########Hands-on verification##########
# Already configured earlier

Step 5: Install and start Docker on all nodes, and enable it to start automatically at boot.

##########Hands-on verification##########
[root@vms10 ~]# yum install docker-ce -y
... # Installation output omitted here (already installed earlier)
[root@vms10 ~]#
[root@vms10 ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@vms10 ~]#
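
The kubeadm preflight checks later in this section warn that Docker uses the cgroupfs cgroup driver while systemd is recommended. A commonly used fix is sketched here as an assumption (it is not part of the book's steps, and it overwrites any existing /etc/docker/daemon.json):

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i 'cgroup driver'    # should now report systemd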

Step 6: Set the kernel parameters on all nodes.

##########Hands-on verification##########
[root@vms10 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
[root@vms10 ~]#

Apply them immediately.

##########Hands-on verification##########
[root@vms10 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@vms10 ~]#

Note: If you see the error shown in Figure 3-6, fix it with modprobe br_netfilter.

Generally, if you install docker-ce and start Docker before setting these parameters, this problem does not occur.
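
To make sure the module is also loaded after a reboot, a sketch using systemd's modules-load.d mechanism (standard on CentOS 7):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf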

Step 7: Install the packages on all nodes.

##########Hands-on verification##########
[root@vms10 ~]# yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes 
Loaded plugins: fastestmirror, langpacks
Usage: yum [options] COMMAND
... # yum help output omitted
Command line error: ambiguous option: --disable (--disableexcludes, --disableincludes, --disableplugin, --disablerepo?)
[root@vms10 ~]# yum install -y kubelet-1.21.1-0 kubeadm-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes 
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.1-0 will be installed
--> Processing Dependency: cri-tools >= 1.13.0 for package: kubeadm-1.21.1-0.x86_64
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.21.1-0.x86_64
---> Package kubectl.x86_64 0:1.21.1-0 will be installed
---> Package kubelet.x86_64 0:1.21.1-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.21.1-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.21.1-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.26.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:1.2.0-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================================================================================
 Package                                                  Arch                                     Version                                         Repository                                    Size
======================================================================================================================================================================================================
Installing:
 kubeadm                                                  x86_64                                   1.21.1-0                                        kubernetes                                   9.5 M
 kubectl                                                  x86_64                                   1.21.1-0                                        kubernetes                                   9.8 M
 kubelet                                                  x86_64                                   1.21.1-0                                        kubernetes                                    20 M
Installing for dependencies:
 conntrack-tools                                          x86_64                                   1.4.4-7.el7                                     base                                         187 k
 cri-tools                                                x86_64                                   1.26.0-0                                        kubernetes                                   8.6 M
 kubernetes-cni                                           x86_64                                   1.2.0-0                                         kubernetes                                    17 M
 libnetfilter_cthelper                                    x86_64                                   1.0.0-11.el7                                    base                                          18 k
 libnetfilter_cttimeout                                   x86_64                                   1.0.0-7.el7                                     base                                          18 k
 libnetfilter_queue                                       x86_64                                   1.0.2-2.el7_2                                   base                                          23 k
 socat                                                    x86_64                                   1.7.3.2-2.el7                                   base                                         290 k

Transaction Summary
======================================================================================================================================================================================================
Install  3 Packages (+7 Dependent packages)

Total download size: 65 M
Installed size: 67 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm                                                                                                                                 | 187 kB  00:00:00     
(2/10): 3f5ba2b53701ac9102ea7c7ab2ca6616a8cd5966591a77577585fde1c434ef74-cri-tools-1.26.0-0.x86_64.rpm                                                                         | 8.6 MB  00:00:02     
(3/10): e0511a4d8d070fa4c7bcd2a04217c80774ba11d44e4e0096614288189894f1c5-kubeadm-1.21.1-0.x86_64.rpm                                                                           | 9.5 MB  00:00:02     
(4/10): 3944a45bec4c99d3489993e3642b63972b62ed0a4ccb04cc7655ce0467fddfef-kubectl-1.21.1-0.x86_64.rpm                                                                           | 9.8 MB  00:00:01     
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm                                                                                                                          |  18 kB  00:00:00     
(6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm                                                                                                                            |  23 kB  00:00:00     
(7/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm                                                                                                                          |  18 kB  00:00:00     
(8/10): socat-1.7.3.2-2.el7.x86_64.rpm                                                                                                                                         | 290 kB  00:00:00     
(9/10): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernetes-cni-1.2.0-0.x86_64.rpm                                                                     |  17 MB  00:00:02     
(10/10): c47efa28c5935ed2ffad234e2b402d937dde16ab072f2f6013c71d39ab526f40-kubelet-1.21.1-0.x86_64.rpm                                                                          |  20 MB  00:00:04     
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                 9.4 MB/s |  65 MB  00:00:06     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                                                                         1/10 
  Installing : socat-1.7.3.2-2.el7.x86_64                                                                                                                                                        2/10 
  Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                                                                         3/10 
  Installing : kubectl-1.21.1-0.x86_64                                                                                                                                                           4/10 
  Installing : cri-tools-1.26.0-0.x86_64                                                                                                                                                         5/10 
  Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                                                           6/10 
  Installing : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                                                                7/10 
  Installing : kubernetes-cni-1.2.0-0.x86_64                                                                                                                                                     8/10 
  Installing : kubelet-1.21.1-0.x86_64                                                                                                                                                           9/10 
  Installing : kubeadm-1.21.1-0.x86_64                                                                                                                                                          10/10 
  Verifying  : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                                                                1/10 
  Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                                                           2/10 
  Verifying  : kubeadm-1.21.1-0.x86_64                                                                                                                                                           3/10 
  Verifying  : cri-tools-1.26.0-0.x86_64                                                                                                                                                         4/10 
  Verifying  : kubernetes-cni-1.2.0-0.x86_64                                                                                                                                                     5/10 
  Verifying  : kubectl-1.21.1-0.x86_64                                                                                                                                                           6/10 
  Verifying  : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                                                                         7/10 
  Verifying  : socat-1.7.3.2-2.el7.x86_64                                                                                                                                                        8/10 
  Verifying  : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                                                                         9/10 
  Verifying  : kubelet-1.21.1-0.x86_64                                                                                                                                                          10/10 

Installed:
  kubeadm.x86_64 0:1.21.1-0                                        kubectl.x86_64 0:1.21.1-0                                        kubelet.x86_64 0:1.21.1-0                                       

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7       cri-tools.x86_64 0:1.26.0-0   kubernetes-cni.x86_64 0:1.2.0-0  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2  socat.x86_64 0:1.7.3.2-2.el7 

Complete!
[root@vms10 ~]#

Note: If no version is specified during installation, the latest version is installed.
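
To check which versions are available in the repository before pinning one (a quick sketch; the output depends on what the repo currently carries):

yum list --showduplicates kubeadm --disableexcludes=kubernetes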

Step 8: Start kubelet on all nodes and enable it to start automatically at boot.

##########Hands-on verification##########
[root@vms10 ~]# systemctl restart kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@vms10 ~]#

3.2.3 Installing the Master

The following operations are performed on vms10, with the goal of configuring vms10 as the master.

Step 1: Run the initialization on the master.

Note: Because one image is missing from the Aliyun registry, after starting Docker on all nodes, download the missing image with wget ftp://ftp.rhce.cc/cka-tool/coredns-1.21.tar and then import it on every node with docker load -i coredns-1.21.tar.

##########Hands-on verification##########
# Download and load the coredns image in advance; note that every node must download and load it
[root@vms10 ~]# wget ftp://ftp.rhce.cc/cka-tool/coredns-1.21.tar
--2023-05-03 21:19:24--  ftp://ftp.rhce.cc/cka-tool/coredns-1.21.tar
           => ‘coredns-1.21.tar’
Resolving ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41, 101.37.152.41
Connecting to ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done.    ==> PWD ... done.
==> TYPE I ... done.  ==> CWD (1) /cka-tool ... done.
==> SIZE coredns-1.21.tar ... 42592768
==> PASV ... done.    ==> RETR coredns-1.21.tar ... done.
Length: 42592768 (41M) (unauthoritative)

100%[============================================================================================================================================================>] 42,592,768  10.9MB/s   in 3.8s   

2023-05-03 21:19:28 (10.6 MB/s) - ‘coredns-1.21.tar’ saved [42592768]

[root@vms10 ~]# 
[root@vms10 ~]# 
[root@vms10 ~]# docker load -i coredns-1.21.tar 
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
69ae2fbf419f: Loading layer [==================================================>]  42.24MB/42.24MB
Loaded image: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@vms10 ~]# docker images
REPOSITORY                                                TAG       IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/coredns/coredns   v1.8.0    296a6d5035e2   2 years ago   42.5MB
[root@vms10 ~]# 
# Run only on the master node
[root@vms10 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.1 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.5. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vms10.rhce.cc] and IPs [10.96.0.1 192.168.1.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vms10.rhce.cc] and IPs [192.168.1.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vms10.rhce.cc] and IPs [192.168.1.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.003455 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vms10.rhce.cc as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vms10.rhce.cc as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zryqi2.7r0c6qxkp9wr4awz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.110:6443 --token zryqi2.7r0c6qxkp9wr4awz \
        --discovery-token-ca-cert-hash sha256:e52d78693299a9a9a177b2223cb8688c11650f5d8f9311c5461545355dd86692 
[root@vms10 ~]#

The output above shows the steps to perform after installation; run each of those commands as prompted.

Note 1: The --image-repository option specifies the Aliyun image registry.

Note 2: --pod-network-cidr=10.244.0.0/16 specifies the pod network CIDR.

Note 3: To install a different version, specify it directly with --kubernetes-version, for example:

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16

Note: The matching versions of kubectl, kubeadm, and kubelet must be installed first.
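
As the init output itself suggests, the control-plane images can also be pre-pulled before running kubeadm init, using the same options; a sketch:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.1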

Step 2: Copy the kubeconfig file.

##########Hands-on verification##########
[root@vms10 ~]# mkdir -p $HOME/.kube
[root@vms10 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  # if kubeadm init is run again, this copy must be redone
[root@vms10 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@vms10 ~]#

Among the hints above, the following command is the one used to join workers to the Kubernetes cluster.

##########Hands-on verification##########
# In this lab environment it is:
kubeadm join 192.168.1.110:6443 --token zryqi2.7r0c6qxkp9wr4awz \
        --discovery-token-ca-cert-hash sha256:e52d78693299a9a9a177b2223cb8688c11650f5d8f9311c5461545355dd86692

If you forget to save this command, it can be regenerated as follows.

[root@vms10 ~]# kubeadm token create --print-join-command 
kubeadm join 192.168.26.10:6443 --token w6v53s.16xt8ssokjuswlzx --discovery-token-ca-cert-hash sha256:6b19ba9d3371c0ac474e8e70569dfc8ac93c76fd841ac8df025a43d49d8cd860
[root@vms10 ~]#
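
The tokens that currently exist (and when they expire) can also be listed with:

kubeadm token list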

3.2.4 Joining the Workers to the Cluster

The following steps join vms11 and vms12 to the Kubernetes cluster as workers.

Step 1: Run the following command on vms11 and vms12.

##########Hands-on verification##########
[root@vms11 ~]# kubeadm join 192.168.1.110:6443 --token zryqi2.7r0c6qxkp9wr4awz \
>         --discovery-token-ca-cert-hash sha256:e52d78693299a9a9a177b2223cb8688c11650f5d8f9311c5461545355dd86692 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.5. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@vms11 ~]# 

[root@vms12 ~]# kubeadm join 192.168.1.110:6443 --token zryqi2.7r0c6qxkp9wr4awz \
>         --discovery-token-ca-cert-hash sha256:e52d78693299a9a9a177b2223cb8688c11650f5d8f9311c5461545355dd86692 
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.5. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@vms12 ~]#

Step 2: Switch back to the master; you can see that all the nodes have joined the cluster.

##########Hands-on verification##########
[root@vms10 ~]# kubectl get nodes
NAME            STATUS     ROLES                  AGE     VERSION
vms10.rhce.cc   NotReady   control-plane,master   5m33s   v1.21.1
vms11.rhce.cc   NotReady   <none>                 3m55s   v1.21.1
vms12.rhce.cc   NotReady   <none>                 25s     v1.21.1
[root@vms10 ~]#

All nodes show a status of NotReady here; we need to install the Calico network before Kubernetes can work properly.

3.2.5 Installing the Calico Network

Because the pods in a Kubernetes cluster are distributed across different hosts, a CNI network plugin must be installed so that pods can communicate across hosts; here we choose the Calico network.

Step 1: On the master, download the YAML file that configures the Calico network.

##########Hands-on verification##########
[root@vms10 ~]# wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml --no-check-certificate
--2023-05-04 11:11:25--  https://docs.projectcalico.org/v3.19/manifests/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 35.198.196.16, 34.142.149.67, 2406:da18:880:3802::c8, ...
Connecting to docs.projectcalico.org (docs.projectcalico.org)|35.198.196.16|:443... connected.
WARNING: cannot verify docs.projectcalico.org's certificate, issued by ‘/C=US/O=Let's Encrypt/CN=R3’:
  Issued certificate has expired.
HTTP request sent, awaiting response... 200 OK
Length: 189916 (185K) [text/yaml]
Saving to: ‘calico.yaml’

100%[==============================================================================================================================================================>] 189,916     1004KB/s   in 0.2s   

2023-05-04 11:11:26 (1004 KB/s) - ‘calico.yaml’ saved [189916/189916]

[root@vms10 ~]#

Step 2: Modify the pod CIDR in calico.yaml.

Change the pod CIDR in calico.yaml to the network specified by the --pod-network-cidr option of kubeadm init. Open the file with vim, search for "192", and edit it as marked below.

            # no effect. This should fall within '--cluster-cidr'.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so 'kubectl logs' works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"

Remove the two # characters and the space that follows each of them, and change 192.168.0.0/16 to 10.244.0.0/16.

##########Hands-on verification##########
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"

When making the change, pay close attention to the indentation, i.e. how the lines are aligned here.
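
If you prefer to script the edit instead of using vim, a sketch is below (it assumes the two commented lines look exactly as in the stock v3.19 manifest; always verify the result with the grep afterwards):

sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml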

Step 3: Download the required images in advance.

Check which images this file uses.

##########Hands-on verification##########
[root@vms10 ~]# grep image calico.yaml 
          image: docker.io/calico/cni:v3.19.4
          image: docker.io/calico/cni:v3.19.4
          image: docker.io/calico/pod2daemon-flexvol:v3.19.4
          image: docker.io/calico/node:v3.19.4
          image: docker.io/calico/kube-controllers:v3.19.4
[root@vms10 ~]#

Pull these images on all nodes (including the master).

##########Hands-on verification##########
[root@vms10 ~]# for i in calico/cni:v3.19.1 calico/pod2daemon-flexvol:v3.19.1 calico/node:v3.19.1 calico/kube-controllers:v3.19.1 ; do docker pull $i ; done 
v3.19.1: Pulling from calico/cni
740c37ed87bd: Pull complete 
5019aa621b53: Pull complete 
b0c764012582: Pull complete 
Digest: sha256:f301171be0add870152483fcce71b28cafb8e910f61ff003032e9b1053b062c4
Status: Downloaded newer image for calico/cni:v3.19.1
docker.io/calico/cni:v3.19.1
v3.19.1: Pulling from calico/pod2daemon-flexvol
96a7481915ed: Pull complete 
64383dcd0fa4: Pull complete 
85aa09d51142: Pull complete 
d5e43eabff08: Pull complete 
9743139ad623: Pull complete 
1799735f87b0: Pull complete 
3cd23465dec0: Pull complete 
Digest: sha256:bcac7dc4f1301b062d91a177a52d13716907636975c44131fb8350e7f851c944
Status: Downloaded newer image for calico/pod2daemon-flexvol:v3.19.1
docker.io/calico/pod2daemon-flexvol:v3.19.1
v3.19.1: Pulling from calico/node
d226bad0de34: Pull complete 
954e0bcac799: Pull complete 
Digest: sha256:bc4a631d553b38fdc169ea4cb8027fa894a656e80d68d513359a4b9d46836b55
Status: Downloaded newer image for calico/node:v3.19.1
docker.io/calico/node:v3.19.1
v3.19.1: Pulling from calico/kube-controllers
fa228fdd2fcf: Pull complete 
d0ce2be53829: Pull complete 
59742bbad375: Pull complete 
c6ca6f933770: Pull complete 
Digest: sha256:904458fe1bd56f995ef76e2c4d9a6831c506cc80f79e8fc0182dc059b1db25a4
Status: Downloaded newer image for calico/kube-controllers:v3.19.1
docker.io/calico/kube-controllers:v3.19.1
[root@vms10 ~]#
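
Note that the grep earlier lists v3.19.4 image tags while this transcript pulled v3.19.1 (from a different manifest revision). To make sure the pulled tags always match your calico.yaml, the image list can be derived directly from the file; a sketch:

# Pull exactly the images referenced by calico.yaml
grep ' image:' calico.yaml | awk '{print $2}' | sort -u | while read img ; do docker pull $img ; done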

Step 4: Install the Calico network.

Run the following command on the master.

##########Hands-on verification##########
[root@vms10 ~]# kubectl apply -f calico.yaml --validate=false
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@vms10 ~]#

Step 5: Verify the result.

Run kubectl get nodes on the master again and check the result.

##########Hands-on verification##########
# You may need to wait 1-2 minutes
[root@vms10 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE   VERSION
vms10.rhce.cc   Ready    control-plane,master   21m   v1.21.1
vms11.rhce.cc   Ready    <none>                 19m   v1.21.1
vms12.rhce.cc   Ready    <none>                 16m   v1.21.1
[root@vms10 ~]#

You can see that all nodes are now in the Ready state.
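
To confirm that the Calico and CoreDNS pods themselves came up, the system pods can also be checked (the exact pod names will differ per cluster):

kubectl get pods -n kube-system -o wide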
