Docker Learning (10): Setting Up a Kubernetes Cluster

Setting up a Kubernetes cluster

1. Official deployment methods:

  • Installing with the Minikube tool

Minikube is a tool that makes it easy to run a single-node Kubernetes cluster on a workstation or in a virtual machine (VM), with one-command deployment. Clusters installed this way are mostly used as test environments in enterprises.

  • Installing with yum

Kubernetes can be installed directly from the epel-release yum repository. This method is fast, but it can only install Kubernetes 1.5 and earlier.

  • Installing from binary packages

With binary deployment, you download the release binaries and deploy each component by hand to assemble the Kubernetes cluster. This approach is flexible, lets you tailor the configuration to your own needs, and yields stable performance; however, the procedure is quite tedious, and small mistakes can leave the system unable to run.

  • Installing with the kubeadm tool

kubeadm is a tool that supports multi-node Kubernetes deployment. It provides the kubeadm init and kubeadm join commands, with which users can easily stand up an enterprise-grade, highly available cluster architecture. As of Kubernetes 1.13, kubeadm has reached the General Availability (GA) stage.

2. Rapid cluster deployment with kubeadm

2-1 About kubeadm

kubeadm began as a community project written in his spare time by Lucas Käldström, a Finnish high-school student, when he was 17. With kubeadm you can build a minimal working Kubernetes cluster; the remaining add-ons, such as monitoring, logging, and a UI, must be installed by the administrator as needed.

2-2 Pre-deployment preparation
2-2-1 Hardware and software requirements:

  • OS: an x86/x64 Linux distribution (this chapter: CentOS 7)
  • CPU and memory: Master node at least 2 cores / 4 GB; Node hosts sized by the number of containers they will run (this chapter: Master 2 cores / 4 GB; Nodes sized by workload)
  • Kernel: 3.10 or later (this chapter: 3.10)
  • Software: etcd 3.0 or later, Docker 18.03 or later (this chapter: etcd 3.0, Docker 18.03)

This chapter uses three virtual machines, configured as follows:

IP               Role     OS        Hostname   CPU      Memory
192.168.10.149   Master   CentOS 7  docker01   2 cores  2 GB
192.168.10.148   Node1    CentOS 7  docker02   2 cores  2 GB
192.168.10.147   Node2    CentOS 7  docker03   2 cores  2 GB
2-2-2 Configure static IPs
Edit each host's NIC configuration file (on CentOS 7 this is typically /etc/sysconfig/network-scripts/ifcfg-ens33; the interface name may differ):

docker01
BOOTPROTO=static
IPADDR=192.168.10.149
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=8.8.8.8

docker02
BOOTPROTO=static
IPADDR=192.168.10.148
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=8.8.8.8

docker03
BOOTPROTO=static
IPADDR=192.168.10.147
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=8.8.8.8

Restart the network service:

systemctl restart network
2-2-3 Set the hostnames
# Set the hostname of 192.168.10.149
hostnamectl set-hostname docker01
# Set the hostname of 192.168.10.148
hostnamectl set-hostname docker02
# Set the hostname of 192.168.10.147
hostnamectl set-hostname docker03

2-2-4 Configure the hosts file (all three hosts)
vim /etc/hosts

192.168.10.149 docker01
192.168.10.148 docker02
192.168.10.147 docker03
2-2-5 Configure the yum repositories (all three hosts)
## Back up the existing repo files
[root@docker03 yum.repos.d]# mkdir centos
[root@docker03 yum.repos.d]# mv CentOS-* centos/
[root@docker03 yum.repos.d]# ll
total 0
drwxr-xr-x. 2 root root 220 May  8 12:58 centos

## Download the Aliyun repo file
[root@docker03 yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (6) Could not resolve host: mirrors.aliyun.com; Unknown error

Problem:

Downloading the Aliyun repo file failed.

Cause:

DNS resolution failure.

Fix:

Edit /etc/resolv.conf and add additional DNS servers.

[root@docker03 yum.repos.d]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 114.114.114.114
nameserver 8.8.8.8

Download again:

[root@docker03 yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0   2575      0 --:--:-- --:--:-- --:--:--  2574

Clean the yum cache:

yum clean all

Rebuild the cache:

yum makecache fast

Update the packages:

yum -y update

2-2-6 Disable the firewall (all three hosts)
systemctl stop firewalld && systemctl disable firewalld
2-2-7 Synchronize the clocks (all three hosts)
yum install ntpdate -y
ntpdate time.windows.com
2-2-8 Disable SELinux (all three hosts)
[root@docker03 selinux]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled ## changed to disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 


[root@docker03 selinux]# 

Reboot for the change to take effect (alternatively, setenforce 0 switches SELinux to permissive mode immediately, but the disabled state still requires a reboot):

reboot
2-2-9 Disable swap (all three hosts)
[root@docker01 ~]# swapoff -a
[root@docker01 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        240M        1.2G        9.6M        335M        1.4G
Swap:            0B          0B          0B
[root@docker01 ~]# 
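Note that swapoff -a only disables swap until the next reboot. To make the change permanent, also comment out the swap entry in /etc/fstab. A minimal sketch of the edit, run here against a temporary copy with sample contents so it can be tried safely (on a real node you would set FSTAB=/etc/fstab):

```shell
# Comment out every active swap line in an fstab-style file.
# Dry run against a temp file; set FSTAB=/etc/fstab on a real node.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```

After a reboot with the swap line commented out, free -h should report 0B of swap, as in the output above.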
2-2-10 修改内核参数(三台)
[root@docker01 sysctl.d]# cat /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@docker01 sysctl.d]# 
## Apply the settings
[root@docker01 sysctl.d]# sysctl --system
2-2-11 Passwordless SSH login (all three hosts)
## Generate a key pair
[root@docker01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:IIw2HTG8iMFhEkbfRiS+TUH7Lzhk8/9FQnQ+Rsz+clk root@docker01
The key's randomart image is:
+---[RSA 2048]----+
|=*.o**     .oo   |
|=.o+=oo   . +o   |
| o+++*.    ..+   |
|....*...  . ... E|
|   . = .S  . .. o|
|    o + .   o. + |
|     o o .   .o  |
|      . o   .    |
|         ...     |
+----[SHA256]-----+
## Inspect the keys
[root@docker01 ~]# cd .ssh
[root@docker01 .ssh]# ll
total 8
-rw------- 1 root root 1675 May  8 14:37 id_rsa
-rw-r--r-- 1 root root  395 May  8 14:37 id_rsa.pub
[root@docker01 .ssh]# 
## Copy the public key to docker01
[root@docker01 .ssh]# ssh-copy-id docker01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'docker01 (192.168.10.149)' can't be established.
ECDSA key fingerprint is SHA256:xrIE+CRy3xOEop8D1u2UAXgc7XQz8T/PdUFKzw/2abE.
ECDSA key fingerprint is MD5:10:67:f8:50:e2:81:27:d7:91:d4:32:4e:f8:a7:cf:22.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@docker01's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'docker01'"
and check to make sure that only the key(s) you wanted were added.

## Copy the public key to docker02
[root@docker01 .ssh]# ssh-copy-id docker02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'docker02 (192.168.10.148)' can't be established.
ECDSA key fingerprint is SHA256:xrIE+CRy3xOEop8D1u2UAXgc7XQz8T/PdUFKzw/2abE.
ECDSA key fingerprint is MD5:10:67:f8:50:e2:81:27:d7:91:d4:32:4e:f8:a7:cf:22.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@docker02's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'docker02'"
and check to make sure that only the key(s) you wanted were added.

## Copy the public key to docker03
[root@docker01 .ssh]# ssh-copy-id docker03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'docker03 (192.168.10.147)' can't be established.
ECDSA key fingerprint is SHA256:m9kUeB5JAtBLErmgIkXjyRGhh7aR/2ABSXWhoSdqtck.
ECDSA key fingerprint is MD5:9c:6b:65:9f:62:f0:6c:9d:9f:0e:a5:c6:3e:6b:c7:d0.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@docker03's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'docker03'"
and check to make sure that only the key(s) you wanted were added.

## Verify
[root@docker01 .ssh]# ssh docker02
Last login: Wed May  8 14:19:47 2024 from 192.168.10.1
[root@docker02 ~]# ssh docker01
Last login: Wed May  8 14:20:05 2024 from 192.168.10.1
[root@docker01 ~]# ^C
[root@docker01 ~]# 
2-3 Enable bridge mode

Run this on every node:

[root@docker01 ~]# vi /etc/sysctl.conf 
[root@docker01 ~]# cat /etc/sysctl.conf 
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@docker01 ~]# 
[root@docker01 ~]# sysctl -p
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@docker01 ~]# 
2-4 Enable IPVS

Kubernetes relies on packet forwarding for Service traffic. If IPVS is not enabled, kube-proxy falls back to iptables, which is less efficient, so the official recommendation is to enable the IPVS kernel modules on every node.

[root@docker01 ~]# vi /etc/sysconfig/modules/ipvs.modules
[root@docker01 ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  # Only load modules that exist for the running kernel
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
[root@docker01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@docker01 ~]# bash /etc/sysconfig/modules/ipvs.modules 
[root@docker01 ~]#  lsmod | grep ip_vs
[root@docker01 ~]# 
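The empty lsmod output above means the ip_vs modules were not loaded. A small sketch that reports, for each module in the list, whether it is currently loaded (it reads /proc/modules, the same data lsmod prints):

```shell
# Report the load state of the IPVS-related kernel modules.
# Reads /proc/modules; prints one line per module.
modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
for m in $modules; do
  if grep -qw "^$m" /proc/modules 2>/dev/null; then
    echo "$m: loaded"
  else
    echo "$m: NOT loaded"
  fi
done
```

If a module reports NOT loaded, re-check the module script for typos (for example stray backslashes before the ${...} expansions, which would stop the loop from iterating) and run it again.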
2-5 Deploy the k8s cluster
2-5-1 Deployment methods

There are currently two main ways to deploy a production Kubernetes cluster:

  • kubeadm: a Kubernetes deployment tool that provides kubeadm init and kubeadm join for standing up a cluster quickly.
  • Binary: download the release binaries from GitHub and deploy each component by hand to assemble the cluster.

This chapter builds the cluster with kubeadm.

2-5-2 Install kubeadm, kubelet, and kubectl

If older versions were installed previously, remove them first:

[root@docker01 ~]# yum erase -y kubelet kubectl kubeadm kubernetes-cni
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64.0.1.23.0-0 will be erased
---> Package kubectl.x86_64.0.1.23.0-0 will be erased
---> Package kubelet.x86_64.0.1.23.0-0 will be erased
---> Package kubernetes-cni.x86_64.0.1.2.0-0 will be erased
--> Finished Dependency Resolution
...

Removed:
  kubeadm.x86_64 0:1.23.0-0               kubectl.x86_64 0:1.23.0-0         kubelet.x86_64 0:1.23.0-0        
  kubernetes-cni.x86_64 0:1.2.0-0        

Complete!

Because versions change frequently, pin a specific version here:

[root@docker01 ~]# yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                                                                       | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                           | 3.5 kB  00:00:00     
extras                                                                                                                     | 2.9 kB  00:00:00     
updates                                                                                                                    | 2.9 kB  00:00:00     
No package kubelet-1.23.0 available.
No package kubeadm-1.23.0 available.
No package kubectl-1.23.0 available.
Error: Nothing to do

Problem:

No packages are available.

Cause:

The Kubernetes repo file is missing from /etc/yum.repos.d/.

Fix:

Add the repo file:

[root@docker01 ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
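For reference, the same repo file can be created in one step with a heredoc. The sketch below writes to a temporary path so it can be dry-run; on the real hosts the target would be /etc/yum.repos.d/kubernetes.repo:

```shell
# Write the Kubernetes yum repo definition in one step.
# REPO points at a temp file for a dry run; use
# /etc/yum.repos.d/kubernetes.repo on the actual hosts.
REPO=$(mktemp)
cat > "$REPO" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
cat "$REPO"
```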

## Clean the cache
[root@docker01 ~]# yum clean all
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Cleaning repos: base docker-ce-stable extras kubernetes updates
Cleaning up list of fastest mirrors


## Rebuild the cache
[root@docker01 ~]# yum makecache
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                                                                       | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                           | 3.5 kB  00:00:00     
extras                                                                                                                     | 2.9 kB  00:00:00     
kubernetes/signature                                                                                                       |  454 B  00:00:00     
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x13EDEF05:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 Fingerprint: a362 b822 f6de dc65 2817 ea46 b53d c80d 13ed ef05
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
kubernetes/signature                                                                                                       | 1.4 kB  00:00:03 !!! 
updates                                                                                                                    | 2.9 kB  00:00:00     
(1/17): base/7/x86_64/group_gz                                                                                             | 153 kB  00:00:05     
(2/17): base/7/x86_64/primary_db                                                                                           | 6.1 MB  00:00:02     
(3/17): base/7/x86_64/other_db                                                                                             | 2.6 MB  00:00:01     
(4/17): base/7/x86_64/filelists_db                                                                                         | 7.2 MB  00:00:10     
(5/17): docker-ce-stable/7/x86_64/updateinfo                                                                               |   55 B  00:00:06     
(6/17): docker-ce-stable/7/x86_64/filelists_db                                                                             |  64 kB  00:00:06     
(7/17): docker-ce-stable/7/x86_64/primary_db                                                                               | 148 kB  00:00:00     
(8/17): docker-ce-stable/7/x86_64/other_db                                                                                 | 144 kB  00:00:00     
(9/17): extras/7/x86_64/filelists_db                                                                                       | 305 kB  00:00:05     
(10/17): extras/7/x86_64/primary_db                                                                                        | 253 kB  00:00:05     
(11/17): extras/7/x86_64/other_db                                                                                          | 154 kB  00:00:00     
(12/17): kubernetes/filelists                                                                                              |  45 kB  00:00:06     
(13/17): kubernetes/primary                                                                                                | 137 kB  00:00:06     
(14/17): kubernetes/other                                                                                                  |  88 kB  00:00:00     
(15/17): updates/7/x86_64/filelists_db                                                                                     |  14 MB  00:00:12     
(16/17): updates/7/x86_64/other_db                                                                                         | 1.6 MB  00:00:00     
(17/17): updates/7/x86_64/primary_db                                                                                       |  27 MB  00:00:18     
kubernetes                                                                                                                              1022/1022
kubernetes                                                                                                                              1022/1022
kubernetes                                                                                                                              1022/1022
Metadata cache created

# List the available versions
[root@docker01 ~]# yum list kubectl --showduplicates | sort -r
Loaded plugins: fastestmirror, product-id, search-disabled-repos
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
kubectl.x86_64                       1.9.9-0                          kubernetes
kubectl.x86_64                       1.9.8-0                          kubernetes
kubectl.x86_64                       1.9.7-0                          kubernetes
kubectl.x86_64                       1.9.6-0                          kubernetes
kubectl.x86_64                       1.9.5-0                          kubernetes
kubectl.x86_64                       1.9.4-0                          kubernetes
kubectl.x86_64                       1.9.3-0                          kubernetes
kubectl.x86_64                       1.9.2-0                          kubernetes
kubectl.x86_64                       1.9.11-0                         kubernetes
........

Enable bridge-nf-call-iptables to prevent networking problems:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

Set the bridge parameters (apply them afterwards with sysctl --system):

cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Modify Docker's /etc/docker/daemon.json file (the systemd cgroup driver is the one kubeadm recommends for kubelet):

[root@docker01 /]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://t81qmnz6.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

systemctl daemon-reload
systemctl restart docker
systemctl enable docker

Reinstall kubeadm, kubelet, and kubectl (on all hosts):

[root@docker01 ~]# yum install -y --nogpgcheck kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5
......

Installed:
  kubeadm.x86_64 0:1.23.5-0           kubectl.x86_64 0:1.23.5-0           kubelet.x86_64 0:1.23.5-0          

Dependency Installed:
  kubernetes-cni.x86_64 0:1.2.0-0                                                                              

Complete!


Verify the installation:

kubelet --version
kubectl version
kubeadm version

Start kubelet:

[root@docker01 /]# systemctl daemon-reload
[root@docker01 /]# systemctl start kubelet
[root@docker01 /]# systemctl enable kubelet
[root@docker01 /]# 

Generate the default init configuration and edit it:

[root@docker01 /]# kubeadm config print init-defaults > init-config.yaml

[root@docker01 /]# cat init-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.149 # IP address of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: docker01  # node name of the master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # changed to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@docker01 /]# 

Pull the Kubernetes images:

[root@docker01 /]# kubeadm config images pull --config=init-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Pulling the images may fail with the following error:

[root@docker01 ~]#  kubeadm config images pull --config=init-config.yaml
failed to pull image "registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5": output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

The error shows that the Docker daemon is not running, so make sure Docker is started first (systemctl start docker). To preview exactly which images the configuration needs without pulling them, use the list subcommand:

[root@docker01 ~]# kubeadm config images list --config init-config.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
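Once Docker is running, the listed images can also be pulled one by one. A sketch of the loop shape (PULL is stubbed with echo here so the snippet can be dry-run; on the master you would set PULL="docker pull" and feed it the real kubeadm config images list output):

```shell
# Pull each required image individually. PULL is stubbed for a dry
# run; on the master use PULL="docker pull" and
# images=$(kubeadm config images list --config init-config.yaml)
PULL="echo docker pull"
images="registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
registry.aliyuncs.com/google_containers/pause:3.6"
for img in $images; do
  $PULL "$img"
done
```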
2-5-3 Initialize Kubernetes on the master (docker01)

Initialize:

[root@docker01 /]#  kubeadm init --apiserver-advertise-address=192.168.10.149 --apiserver-bind-port=6443 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --kubernetes-version=1.23.0 --image-repository registry.aliyuncs.com/google_containers
......
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.149:6443 --token 9wk2gh.mzkv3gmp7gca46rm \
        --discovery-token-ca-cert-hash sha256:54a3f68884d8910076b9323df6b0ce4c0efd6e8d7772d316de6429b448bd2395 

2-5-4 Create the kube directory and add the kubectl configuration

Because this is not a production environment I work as root; running the following commands as a regular user is recommended:

[root@docker01 /]#   mkdir -p $HOME/.kube
[root@docker01 /]#  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@docker01 /]# vi /etc/kubernetes/admin.conf 
[root@docker01 /]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@docker01 /]# 

The node hosts may not have an /etc/kubernetes/admin.conf file, so copy it over from the master:

## on the master
[root@docker01 /]# scp /etc/kubernetes/admin.conf root@192.168.10.148:/etc/kubernetes/admin.conf
admin.conf                                                                   100% 5638   945.6KB/s   00:00    
[root@docker01 /]# scp /etc/kubernetes/admin.conf root@192.168.10.147:/etc/kubernetes/admin.conf
admin.conf                                                                   100% 5638   189.4KB/s   00:00    
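Copying admin.conf alone is not enough for kubectl on the nodes to find it; point kubectl at the file, for example via the KUBECONFIG environment variable (a sketch, assuming the copy above succeeded):

```shell
# On a worker node: tell kubectl to use the admin.conf copied from
# the master. Add this line to ~/.bash_profile to make it persistent.
export KUBECONFIG=/etc/kubernetes/admin.conf
```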

2-5-5 Deploy the Pod network add-on

Without a network add-on, Pods cannot communicate with each other, so run the following command to download and apply kube-flannel.yml:

[root@docker01 /]# kubectl apply -f http://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check whether the flannel images were pulled successfully:

[root@docker01 /]# docker images
REPOSITORY                                                        TAG               IMAGE ID       CREATED         SIZE
flannel/flannel-cni-plugin                                        v1.4.1-flannel1   1e3c860c213d   3 weeks ago     10.3MB		## newly pulled
flannel/flannel                                                   v0.25.1           1575deaad3b0   4 weeks ago     79.5MB		## newly pulled
centos_yum2                                                       latest            d75aaf1f2a07   2 months ago    309MB
centos_yum1                                                       latest            c85c8f1cbd15   2 months ago    366MB
...
[root@docker01 /]# 

Check the master's status with kubectl get nodes:

[root@docker01 /]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
docker01   Ready    control-plane,master   6h38m   v1.23.5

If the status shows NotReady instead of Ready, the node is not ready yet; wait a while (or watch with kubectl get nodes -w) and it should become Ready.

2-5-6 Join k8s-node01 and k8s-node02 to the cluster

Join the prepared node hosts (docker02/docker03) to the Kubernetes master. Run the following on the node hosts (docker02/docker03).

The command for adding a new node is the kubeadm join command printed at the end of kubeadm init:

[root@docker03 ~]# kubeadm join 192.168.10.149:6443 --token 9wk2gh.mzkv3gmp7gca46rm         --discovery-token-ca-cert-hash sha256:54a3f68884d8910076b9323df6b0ce4c0efd6e8d7772d316de6429b448bd2395 

Create a non-expiring join token (tokens expire after 24 hours by default; --ttl 0 creates one that never expires):

[root@docker01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.149:6443 --token r4nuux.rk3e97eppqude83t --discovery-token-ca-cert-hash sha256:54a3f68884d8910076b9323df6b0ce4c0efd6e8d7772d316de6429b448bd2395 
[root@docker01 ~]# kubeadm token create --ttl 0
vgwzve.06lhmx9nhkjn8l5z

Verify the cluster:

[root@docker01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
docker01   Ready    control-plane,master   4d      v1.23.5
docker03   Ready    <none>                 3d14h   v1.23.5
docker02   Ready    <none>                 3d14h   v1.23.5
[root@docker01 ~]# 

Check the cluster's component health:

[root@docker01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   

Previous chapter: Introduction to Kubernetes, an enterprise container management platform
