Notes on Deploying a Single-Node Kubernetes Development Environment

The multi-node cluster that the Kubernetes project recommends is not well suited to Helm chart development on a personal computer; a single-node Kubernetes environment on your PC is the better choice.
There are several ways to set one up:
1) Use the official minikube tool.
2) Use the official kubeadm tool to deploy only a master node, then allow pods to be scheduled onto that master by removing its taint: kubectl taint node k8s-master node-role.kubernetes.io/master-
3) Download the offline Kubernetes binary packages and deploy a master node by hand as needed, again scheduling pods onto the master. My collection of Kubernetes 1.18 binaries is stored at: https://pan.baidu.com/disk/home?#/all?vmode=list&path=%2Fkubernetes1.18%E9%95%9C%E5%83%8F
4) Adapt a mature one-click shell-script installer to deploy the master node. I recommend starting from IT-Boge's 1.15 script; my modified copy is stored at: https://pan.baidu.com/disk/home?#/all?vmode=list&path=%2Fkubernetes1.18%E9%95%9C%E5%83%8F
Boge's current project lives at: https://github.com/luckman666/kkitdeploy_server

At the moment a purely online Kubernetes deployment is unreliable from here, because the default component download endpoints are unreachable.
For production, deploy from a self-hosted mirror site instead. For personal development on open-source Helm projects, deploy by hand from the binary packages or with a customized shell script.

What follows is a live record of tonight's online deployment at home. The only failure was downloading the stock kube-flannel.yml for the flannel network plugin. Of course, if you do not mind the time and effort, you can write kube-flannel.yml yourself and use your own *** to fetch whichever component images failed to download completely.

As an aside, a rant about an interview I sat through today:

A tiny private company whose stated business is custom IT services, though I never saw any product description. The entire technical department is six people, four of whom, each more arrogant than the last, ganged up on me in a group interview.

One spent his time trumpeting his own glorious history, repeatedly insisting that without mastery of Java and Python development I would only ever be a junior grunt there. (My confusion: I never claimed to know Java or Python development, and my résumé says nothing of the sort. My whole career has been IaaS and PaaS platform operations and cloud virtualization operations, with occasional product-manager duty assembling open-source stacks that meet functional and security-audit requirements.)

Another kept preaching how mighty the "ES database" is, better even than Oracle, and how knowing only MySQL and MongoDB is not enough. (My confusion: had some new database called "ES" appeared without any of my DBA friends mentioning it? Had this little shop, like some young North American tech firm full of hidden talent, really built its own database to rival SequoiaDB, Huawei's GaussDB, or Alibaba's OceanBase? After enough spittle had flown I finally worked out that his beloved "ES database" was Elasticsearch, which I had come to know while managing open-source software at Henan Mobile. In our projects Elasticsearch is generally used as a full-text search engine, and it can indeed serve temporarily as a small-scale hot-data cache. If you don't even know what Elasticsearch is, or the difference between relational and non-relational databases, you are a genuine blowhard, locally produced: Made in Zhengzhou, Henan.)

A third was in a great hurry to tell me about a project they were facing, a virtualization migration for some government agency. He pressed me to walk through how I would run the project, fiddled with his phone, then laid it face down on the table and pushed it toward me. (I suspect he had switched on the recorder, plainly fishing for a free implementation plan, the same trick as a certain state-owned company's Henan branch in Zhengzhou's CBD. Out of politeness I only sketched the usual migration targets for traditional IT workloads and their trade-offs: VMware ESXi, PVE, Citrix XenServer, the Xen hypervisor, the KVM hypervisor, Microsoft Virtual Server, an OpenStack + Hadoop platform, or a Docker + Kubernetes platform. The déjà vu took me back to 2018, when I had just returned to Henan and interviewed at that same state-owned branch in Zhengzhou's CBD. The so-called interviewer found a spot by an elevator in a corridor and, cigarette in hand, asked me how to build an active-active MySQL cluster; after I described the basic working principle of an active-active HA cluster he stubbed out the cigarette and said I could go, he would think about whether to hire me. Near midnight he phoned and told me to come at 7 a.m. for a second technical interview. I arrived early at the COFCO building in Zhengzhou's CBD, was taken straight to his desk, and spent until the afternoon building and verifying a MySQL dual-master dual-slave + Keepalived cluster, after which he dismissed me with "head home now, grab something to eat on the way" and I never heard from them again.)

The last one closed with a string of questions unrelated to either the job posting or my résumé: "we're all doers here, we don't need fancy degrees for show"; "you're from Anyang, I hear Anyang is full of shell companies, your résumé wasn't dressed up by one of them, was it?"; "I see you drove here, the car's rented, right?"; "you talk a good game, plenty of theory, but have you actually done any of it hands-on?" (What I was thinking: my projects are documented in my blog's lab notes and in the implementation records of every employer I have served, and my labor contracts list employer and job title. Zhengzhou's IT circle is the size of a palm; anyone serious could verify all of it with a few keystrokes and a couple of phone calls. And what is wrong with being from Anyang? Even the ugly regional-slur doggerel about Henan's cheats, thieves, and robbers names Zhumadian, Zhengzhou, and Luoyang, and never mentions Anyang at all. Frankly, this fellow seemed far more practiced in the arts of swindling than anyone he was accusing, just as the saying goes: "Henan's pillar industry is deception, and Zhengzhou's is fraud.")

Wasting half a day with these clowns reminded me of a company in the Henan Information Tower that recruits under a state-owned banner: projects done during an internship "don't count as work experience", much as minors supposedly don't enjoy the rights of persons; nobody asks whether your skills match most of the job requirements, they simply declare "less than ten years of work means a fabricated résumé". I have come to realize that out in Zhengzhou's market economy, the so-called talent-recruiting employers are full of tricks: some fish for implementation plans, some are laymen who beat down the experts, some resort to personal *** to push wages down, some drag out paying salaries, and some, to dodge severance, turn around and accuse you of stealing their trade secrets... In the struggle between labor and capital, capital stops at nothing.

How I regret not staying inside the system when I was young; by now I am worn out to the point of numbness...

Below is my single-node deployment, pitfalls included. I will append the script-based deployment once I get my portable hard drive back from a friend:

Configure the VM's host/guest shared directory and install VMware Tools
[googlebigtable@localhost ~]$ su root
Password:
[root@localhost googlebigtable]# pwd -P
/home/googlebigtable
[root@localhost googlebigtable]# echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/googlebigtable/.local/bin:/home/googlebigtable/bin
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# ls -F
Desktop/ Documents/ Downloads/ Music/ Pictures/ Public/ Templates/ Videos/
[root@localhost googlebigtable]# mkdir -p DVD/temp
[root@localhost googlebigtable]# ls -F
Desktop/ Documents/ Downloads/ DVD/ Music/ Pictures/ Public/ Templates/ Videos/
[root@localhost googlebigtable]# cd DVD/
[root@localhost DVD]# ls -F
temp/
[root@localhost DVD]# cd temp/
[root@localhost temp]# ls -F
[root@localhost temp]# pwd -P
/home/googlebigtable/DVD/temp
[root@localhost temp]# ls /dev/ | grep cd
cdrom
[root@localhost temp]# mount /dev/cdrom /home/googlebigtable/DVD/temp
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost temp]# cd /home/googlebigtable/DVD/temp/
[root@localhost temp]# ls -F
manifest.txt run_upgrader.sh VMwareTools-10.3.10-13959562.tar.gz vmware-tools-upgrader-32 vmware-tools-upgrader-64
[root@localhost temp]# cp VMwareTools-10.3.10-13959562.tar.gz /home/googlebigtable/DVD/
[root@localhost temp]# cd ..
[root@localhost DVD]# ls -F
temp/ VMwareTools-10.3.10-13959562.tar.gz
[root@localhost DVD]# tar -xzvf VMwareTools-10.3.10-13959562.tar.gz
vmware-tools-distrib/
............................................................................................................
vmware-tools-distrib/vmware-install.pl
[root@localhost DVD]# ls -F
temp/ VMwareTools-10.3.10-13959562.tar.gz vmware-tools-distrib/
[root@localhost DVD]# cd vmware-tools-distrib/
[root@localhost vmware-tools-distrib]# ls -F
bin/ caf/ doc/ etc/ FILES INSTALL installer/ lib/ vgauth/ vmware-install.pl*
[root@localhost vmware-tools-distrib]# pwd -P
/home/googlebigtable/DVD/vmware-tools-distrib
[root@localhost vmware-tools-distrib]# /home/googlebigtable/DVD/vmware-tools-distrib/vmware-install.pl
The installer has detected an existing installation of open-vm-tools packages
on this system and will not attempt to remove and replace these user-space
applications. It is recommended to use the open-vm-tools packages provided by
the operating system. If you do not want to use the existing installation of
open-vm-tools packages and use VMware Tools, you must uninstall the
open-vm-tools packages and re-run this installer.
..............................................................................................................
Ejecting device /dev/sr0 ...
Enjoy,

--the VMware team

[root@localhost vmware-tools-distrib]# init 6

Configure a static IP
[googlebigtable@localhost ~]$ su root
Password:
[root@localhost googlebigtable]# pwd -P
/home/googlebigtable
[root@localhost googlebigtable]# echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/googlebigtable/.local/bin:/home/googlebigtable/bin
[root@localhost googlebigtable]# ls -F /etc/sysconfig/network-scripts/
ifcfg-ens33 ifdown-ib ifdown-ppp ifdown-tunnel ifup-ib ifup-plusb ifup-Team network-functions
ifcfg-lo ifdown-ippp ifdown-routes ifup@ ifup-ippp ifup-post ifup-TeamPort network-functions-ipv6
ifdown@ ifdown-ipv6 ifdown-sit ifup-aliases ifup-ipv6 ifup-ppp ifup-tunnel
ifdown-bnep ifdown-isdn@ ifdown-Team ifup-bnep ifup-isdn@ ifup-routes ifup-wireless
ifdown-eth ifdown-post ifdown-TeamPort ifup-eth ifup-plip ifup-sit init.ipv6-global
[root@localhost googlebigtable]# cp /etc/sysconfig/network-scripts/ifcfg-ens33{,.original}
[root@localhost googlebigtable]# ls -F /etc/sysconfig/network-scripts/
ifcfg-ens33 ifdown-eth ifdown-post ifdown-TeamPort ifup-eth ifup-plip ifup-sit init.ipv6-global
ifcfg-ens33.original ifdown-ib ifdown-ppp ifdown-tunnel ifup-ib ifup-plusb ifup-Team network-functions
ifcfg-lo ifdown-ippp ifdown-routes ifup@ ifup-ippp ifup-post ifup-TeamPort network-functions-ipv6
ifdown@ ifdown-ipv6 ifdown-sit ifup-aliases ifup-ipv6 ifup-ppp ifup-tunnel
ifdown-bnep ifdown-isdn@ ifdown-Team ifup-bnep ifup-isdn@ ifup-routes ifup-wireless
[root@localhost googlebigtable]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@localhost googlebigtable]# cat -n /etc/sysconfig/network-scripts/ifcfg-ens33
1 TYPE="Ethernet"
2 PROXY_METHOD="none"
3 BROWSER_ONLY="no"
4 BOOTPROTO="static"
5 IPADDR=192.168.20.199
6 NETMASK=255.255.255.0
7 GATEWAY=192.168.20.1
8 DEFROUTE="yes"
9 IPV4_FAILURE_FATAL="no"
10 IPV6INIT="yes"
11 IPV6_AUTOCONF="yes"
12 IPV6_DEFROUTE="yes"
13 IPV6_FAILURE_FATAL="no"
14 IPV6_ADDR_GEN_MODE="stable-privacy"
15 NAME="ens33"
16 UUID="174bc0f4-a139-4ec1-928a-611747463f29"
17 DEVICE="ens33"
18 ONBOOT="yes"
19 DNS=8.8.8.8
[root@localhost googlebigtable]# service network restart
Restarting network (via systemctl): [ OK ]
[root@localhost googlebigtable]# systemctl restart network
[root@localhost googlebigtable]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=2 ttl=128 time=173 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=128 time=438 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=128 time=123 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=128 time=150 ms
^C
--- 8.8.8.8 ping statistics ---
11 packets transmitted, 4 received, 63% packet loss, time 10007ms
rtt min/avg/max/mdev = 123.603/221.344/438.219/126.442 ms
[root@localhost googlebigtable]#
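One caveat about the ifcfg file above: the network scripts read the numbered keys DNS1/DNS2, and a bare DNS= line is typically ignored. A minimal sketch of the static-IP fragment, written to a scratch path (an assumption of mine; the live file is /etc/sysconfig/network-scripts/ifcfg-ens33):

```shell
# Write a minimal static-IP fragment to a scratch file instead of the live config
# (/tmp path is for illustration only)
cat > /tmp/ifcfg-ens33.demo << 'EOF'
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.20.199
NETMASK=255.255.255.0
GATEWAY=192.168.20.1
DNS1=8.8.8.8
ONBOOT=yes
EOF
grep '^DNS1=' /tmp/ifcfg-ens33.demo
```

After copying the fragment into place, `systemctl restart network` applies it as shown in the log.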

Configure the OS YUM repositories
[root@localhost googlebigtable]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.original0
[root@localhost googlebigtable]# ls -F /etc/yum.repos.d/
CentOS-Base.repo.original0 CentOS-CR.repo CentOS-Debuginfo.repo CentOS-fasttrack.repo CentOS-Media.repo CentOS-Sources.repo CentOS-Vault.repo
[root@localhost googlebigtable]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo
--2020-05-24 12:00:45-- http://mirrors.163.com/.help/CentOS7-Base-163.repo
Resolving mirrors.163.com (mirrors.163.com)... 59.111.0.251
Connecting to mirrors.163.com (mirrors.163.com)|59.111.0.251|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1572 (1.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

100%[===========================================================================================================>] 1,572 --.-K/s in 0s

2020-05-24 12:00:45 (525 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [1572/1572]

[root@localhost googlebigtable]# ls -F /etc/yum.repos.d/
CentOS-Base.repo CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo
CentOS-Base.repo.original0 CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo
[root@localhost googlebigtable]# cat -n /etc/yum.repos.d/CentOS-Base.repo
1 # CentOS-Base.repo
2 #
3 # The mirror system uses the connecting IP address of the client and the
4 # update status of each mirror to pick mirrors that are updated to and
5 # geographically close to the client. You should use this for CentOS updates
6 # unless you are manually picking other mirrors.
7 #
8 # If the mirrorlist= does not work for you, as a fall back you can try the
9 # remarked out baseurl= line instead.
10 #
11 #
12 [base]
13 name=CentOS-$releasever - Base - 163.com
14 #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
15 baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
16 gpgcheck=1
17 gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
18
19 #released updates
20 [updates]
21 name=CentOS-$releasever - Updates - 163.com
22 #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
23 baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/
24 gpgcheck=1
25 gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
26
27 #additional packages that may be useful
28 [extras]
29 name=CentOS-$releasever - Extras - 163.com
30 #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
31 baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/
32 gpgcheck=1
33 gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
34
35 #additional packages that extend functionality of existing packages
36 [centosplus]
37 name=CentOS-$releasever - Plus - 163.com
38 baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/
39 gpgcheck=1
40 enabled=0
41 gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
[root@localhost googlebigtable]# yum clean all
Loaded plugins: fastestmirror, langpacks
Cleaning repos: base extras updates
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
Cleaning up list of fastest mirrors
[root@localhost googlebigtable]# yum makecache
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/10): base/7/x86_64/group_gz | 153 kB 00:00:00
(2/10): base/7/x86_64/primary_db | 6.1 MB 00:00:03
(3/10): extras/7/x86_64/filelists_db | 205 kB 00:00:00
(4/10): extras/7/x86_64/other_db | 122 kB 00:00:00
(5/10): extras/7/x86_64/primary_db | 194 kB 00:00:00
(6/10): updates/7/x86_64/filelists_db | 980 kB 00:00:01
(7/10): updates/7/x86_64/primary_db | 1.3 MB 00:00:01
(8/10): updates/7/x86_64/other_db | 183 kB 00:00:00
(9/10): base/7/x86_64/filelists_db | 7.1 MB 00:00:06
(10/10): base/7/x86_64/other_db | 2.6 MB 00:00:02
Metadata Cache Created
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# yum update -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
.....................................................................................................
Complete!
[root@localhost googlebigtable]#

Inspect the OS environment
[root@localhost googlebigtable]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@localhost googlebigtable]# uname -r
3.10.0-862.el7.x86_64
[root@localhost googlebigtable]# hostnamectl status
Static hostname: localhost.localdomain
Icon name: computer-vm
Chassis: vm
Machine ID: b42ee68190eb41aea794fc999eab1a65
Boot ID: 9f5106fc1c4a4a358105dc8dc0b0b87e
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.el7.x86_64
Architecture: x86-64
[root@localhost googlebigtable]# rpm -q centos-release
centos-release-7-8.2003.0.el7.centos.x86_64
[root@localhost googlebigtable]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7b:dd:22 brd ff:ff:ff:ff:ff:ff
inet 92.168.20.199/24 brd 92.168.20.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.20.199/24 brd 192.168.20.255 scope global dynamic ens33
valid_lft 1604sec preferred_lft 1604sec
inet6 fe80::ea69:80fc:6c2c:368d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:f7:60:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:f7:60:fd brd ff:ff:ff:ff:ff:ff
[root@localhost googlebigtable]# ifconfig -a
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.20.199 netmask 255.255.255.0 broadcast 92.168.20.255
inet6 fe80::ea69:80fc:6c2c:368d prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:7b:dd:22 txqueuelen 1000 (Ethernet)
RX packets 832264 bytes 1203540576 (1.1 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 381029 bytes 23033632 (21.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 13 bytes 1322 (1.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 13 bytes 1322 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:f7:60:fd txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:f7:60:fd txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@localhost googlebigtable]#

Time synchronization service
[root@localhost googlebigtable]# yum -y update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
No packages marked for update
[root@localhost googlebigtable]# yum install -y ntpdate
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package ntpdate-4.2.6p5-29.el7.centos.x86_64 already installed and latest version
Nothing to do
[root@localhost googlebigtable]# ntpdate time.windows.com
24 May 12:26:43 ntpdate[26764]: adjust time server 52.231.114.183 offset -0.018299 sec
[root@localhost googlebigtable]# ntpq -p
ntpq: read: Connection refused
[root@localhost googlebigtable]# ntpstat
synchronised to NTP server (162.159.200.123) at stratum 4
time correct to within 86 ms
polling server every 64 s
[root@localhost googlebigtable]# timedatectl status
Local time: Sun 2020-05-24 12:27:26 EDT
Universal time: Sun 2020-05-24 16:27:26 UTC
RTC time: Sun 2020-05-24 16:27:26
Time zone: America/New_York (EDT, -0400)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2020-03-08 01:59:59 EST
Sun 2020-03-08 03:00:00 EDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2020-11-01 01:59:59 EDT
Sun 2020-11-01 01:00:00 EST
[root@localhost googlebigtable]#
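Note that ntpdate only performs a one-shot sync, and ntpq being refused above suggests ntpd itself is not running (the sync that ntpstat reports most likely comes from chronyd, which CentOS 7 enables by default). If you want to stick with ntpdate, one option is a periodic cron entry; the schedule below is an assumption of mine:

```shell
# Hypothetical crontab entry (add via `crontab -e`): resync every 6 hours
#   0 */6 * * * /usr/sbin/ntpdate time.windows.com
```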

Disable the firewall and SELinux
[root@localhost googlebigtable]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost googlebigtable]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)

May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete FORWAR...hain?).
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete FORWAR...t name.
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete FORWAR...t name.
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete OUTPUT...hain?).
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).
May 24 12:29:08 localhost.localdomain systemd[1]: Stopping firewalld - dynamic firewall daemon...
May 24 12:29:11 localhost.localdomain systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost googlebigtable]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
[root@localhost googlebigtable]# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: disabled
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
[root@localhost googlebigtable]# init 6
[root@localhost googlebigtable]# sestatus
SELinux status: disabled
[root@localhost googlebigtable]#
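The sed one-liner used above can be sanity-checked on a scratch copy before touching /etc/selinux/config; the demo path and file contents below are made up for illustration:

```shell
# Try the SELINUX= rewrite on a scratch copy of the config first
cat > /tmp/selinux.demo << 'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /tmp/selinux.demo
grep '^SELINUX=' /tmp/selinux.demo
```

Also note that `setenforce 0` only switches the running session to permissive; it is the config edit plus the reboot that makes `sestatus` report disabled.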

Disable the swap partition
[root@localhost googlebigtable]# swapoff -a
[root@localhost googlebigtable]# sed -i '/ swap / s/^(.*)$/#\1/g' /etc/fstab
sed: -e expression #1, char 23: invalid reference \1 on `s' command's RHS
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@localhost googlebigtable]# cat -n /etc/fstab
1
2 #
3 # /etc/fstab
4 # Created by anaconda on Sun May 24 10:11:42 2020
5 #
6 # Accessible filesystems, by reference, are maintained under '/dev/disk'
7 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
8 #
9 /dev/mapper/centos-root / xfs defaults 0 0
10 UUID=ca495f7f-06a4-49bb-8b7b-c0a624209f2c /boot xfs defaults 0 0
11 /dev/mapper/centos-home /home xfs defaults 0 0
12 #/dev/mapper/centos-swap swap swap defaults 0 0
[root@localhost googlebigtable]#
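The swap-commenting pattern that finally worked can be rehearsed safely on a scratch fstab; the path and contents below are invented for the demo:

```shell
# Rehearse the swap-commenting sed on a scratch copy of fstab
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
# -r enables extended regex; & inserts the whole matched line after the '#'
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
grep '^#' /tmp/fstab.demo
```

Together with `swapoff -a`, this keeps swap off across reboots, which kubeadm's preflight checks require.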

Configure hosts
[root@localhost googlebigtable]# hostnamectl set-hostname kubernetes-single
[root@localhost googlebigtable]# hostnamectl status
Static hostname: kubernetes-single
Icon name: computer-vm
Chassis: vm
Machine ID: b42ee68190eb41aea794fc999eab1a65
Boot ID: 54bed94757bd43b4a77c599f98519fd2
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1127.8.2.el7.x86_64
Architecture: x86-64
[root@localhost googlebigtable]# hostname -i
fe80::ea69:80fc:6c2c:368d%ens33 92.168.20.199 192.168.20.199 192.168.122.1
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# cat -n /etc/hosts
1 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
2 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@localhost googlebigtable]# cp /etc/hosts{,.original}
[root@localhost googlebigtable]# ls -F /etc/ | grep hosts
ghostscript/
hosts
hosts.allow
hosts.deny
hosts.original
[root@localhost googlebigtable]# cat >> /etc/hosts << EOF

192.168.20.199 kubernetes-master
192.168.20.199 kubernetes-node0
EOF
[root@localhost googlebigtable]# cat -n /etc/hosts
1 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
2 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
3 192.168.20.199 kubernetes-master
4 192.168.20.199 kubernetes-node0
[root@localhost googlebigtable]#
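One caveat with `cat >> /etc/hosts`: re-running the snippet appends duplicate entries. A guarded variant, sketched against a scratch file (the path and the helper name are mine, not from the original):

```shell
# Append a hosts entry only if it is not already present (safe to re-run)
HOSTS=/tmp/hosts.demo                      # scratch file; real target is /etc/hosts
printf '127.0.0.1 localhost\n' > "$HOSTS"
add_host() {
  grep -qF "$1" "$HOSTS" || echo "$1" >> "$HOSTS"
}
add_host '192.168.20.199 kubernetes-master'
add_host '192.168.20.199 kubernetes-master'   # no-op on the second call
add_host '192.168.20.199 kubernetes-node0'
```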

Passwordless SSH between master and nodes [run this on the master only]
[Since master and node are the same machine here, this amounts to configuring passwordless SSH from 192.168.20.199 to itself]
[root@localhost googlebigtable]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:brmVbKFaiYV871rNxTANIaWPi1VaPSinuUIzBYaCwy8 root@kubernetes-single
The key's randomart image is:
+---[RSA 2048]----+
| . . .o ..+. |
| + . .. . o = |
| o . + B + |
| E .. . . @ + . |
| . o S B . o |
| * @ B . |
| . X X o |
| + B |
| . o.. |
+----[SHA256]-----+
[root@localhost googlebigtable]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.20.199
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.20.199 (192.168.20.199)' can't be established.
ECDSA key fingerprint is SHA256:WI8wxf0lYeC+E36wAGj+ydKWkIL2c/4tu5hUXbLkQ1k.
ECDSA key fingerprint is MD5:3d:0b:1a:6a:11:63:c6:db:c6:6b:a6:48:d9:3f:91:a3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.20.199's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@192.168.20.199'"
and check to make sure that only the key(s) you wanted were added.

[root@localhost googlebigtable]# ssh 'root@192.168.20.199'
Last login: Sun May 24 12:33:41 2020
[root@kubernetes-single ~]# exit
logout
Connection to 192.168.20.199 closed.
[root@localhost googlebigtable]#
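The same key setup can be done non-interactively, which is handy inside scripts; the /tmp paths below are placeholders of mine (the real flow writes /root/.ssh/id_rsa):

```shell
# Generate a passphrase-less RSA keypair without prompts (demo paths)
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/demo_id_rsa
# ssh-copy-id would then append the .pub file to the target's authorized_keys
ls -l /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
```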

Pass bridged IPv4 traffic to the iptables chains
[root@localhost googlebigtable]# modprobe br_netfilter
[root@localhost googlebigtable]# sysctl -p
[root@localhost googlebigtable]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
[root@localhost googlebigtable]# cat > /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@localhost googlebigtable]# modprobe br_netfilter
[root@localhost googlebigtable]# sysctl -p
[root@localhost googlebigtable]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
[root@localhost googlebigtable]#
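The k8s.conf fragment can be staged anywhere and inspected before dropping it into /etc/sysctl.d/; the demo path below is an assumption:

```shell
# Stage the bridge-netfilter settings in a scratch file before installing them
cat > /tmp/k8s.conf.demo << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep 'bridge-nf-call' /tmp/k8s.conf.demo
```

These net.bridge.* keys only exist once the br_netfilter module is loaded, which is why `modprobe br_netfilter` has to come before `sysctl --system`.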

Configure the Docker and Kubernetes YUM repositories
[root@localhost googlebigtable]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2020-05-24 14:03:31-- https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 111.6.206.244, 111.6.126.161, 111.6.206.242, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|111.6.206.244|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’

100%[===========================================================================================================>] 2,640 --.-K/s in 0s

2020-05-24 14:03:31 (1.20 GB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]

[root@localhost googlebigtable]# cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@localhost googlebigtable]# ls -F /etc/yum.repos.d/
CentOS-Base.repo CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo CentOS-x86_64-kernel.repo kubernetes.repo
CentOS-Base.repo.original0 CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo docker-ce.repo
[root@localhost googlebigtable]#

Install Docker
[root@localhost googlebigtable]# yum -y update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
docker-ce-stable | 3.5 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
(1/3): docker-ce-stable/x86_64/primary_db | 42 kB 00:00:00
(2/3): docker-ce-stable/x86_64/updateinfo | 55 B 00:00:00
(3/3): kubernetes/primary | 69 kB 00:00:00
kubernetes 505/505
No packages marked for update
[root@localhost googlebigtable]# yum list installed | grep docker
[root@localhost googlebigtable]# curl -sSL https://get.daocloud.io/docker | sh

Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c

+ sh -c 'yum install -y -q yum-utils'
Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
+ sh -c 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'
Loaded plugins: fastestmirror, langpacks
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
+ '[' stable '!=' stable ']'
+ sh -c 'yum makecache'
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/4): kubernetes/other | 44 kB 00:00:00
(2/4): kubernetes/filelists | 23 kB 00:00:00
(3/4): docker-ce-stable/x86_64/filelists_db | 20 kB 00:00:00
(4/4): docker-ce-stable/x86_64/other_db | 114 kB 00:00:00
kubernetes 505/505
kubernetes 505/505
Metadata Cache Created
+ '[' -n '' ']'
+ sh -c 'yum install -y -q docker-ce'
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.2.13-3.2.el7.x86_64.rpm is not installed
Importing GPG key 0x621E9F35:
Userid : "Docker Release (CE rpm) <docker@docker.com>"
Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
From : https://download.docker.com/linux/centos/gpg
setsebool: SELinux is disabled.
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
containers which can be used to obtain root privileges on the
docker host.
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
for more information.
[root@localhost googlebigtable]# yum list installed | grep docker
containerd.io.x86_64 1.2.13-3.2.el7 @docker-ce-stable
docker-ce.x86_64 3:19.03.9-3.el7 @docker-ce-stable
docker-ce-cli.x86_64 1:19.03.9-3.el7 @docker-ce-stable
[root@localhost googlebigtable]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@localhost googlebigtable]# docker --version
Docker version 19.03.9, build 9d988398e7
[root@localhost googlebigtable]# docker info
Client:
Debug Mode: false

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.9
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1127.8.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.62GiB
Name: kubernetes-single
ID: U4GI:7OI3:B2AK:TA4C:EDHL:63L5:RFD6:NIDM:BCPA:ROWN:U5BQ:KKZA
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

[root@localhost googlebigtable]#

Installing Kubernetes
[root@localhost googlebigtable]# yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
.....................................................................................................
Complete!
[root@localhost googlebigtable]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost googlebigtable]# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"]
}
EOF
[root@localhost googlebigtable]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"]
}
[root@localhost googlebigtable]#
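A malformed /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth validating the JSON before restarting Docker. A minimal sketch (validating a copy in /tmp so it can be run anywhere; `python3 -m json.tool` is a standard-library JSON checker):

```shell
# Write the same configuration to a scratch copy, then validate it.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"]
}
EOF

# json.tool exits non-zero on any syntax error (e.g. a missing comma).
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```

Run the same check against the real /etc/docker/daemon.json before `systemctl restart docker`.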

Deploying the Kubernetes Master
[root@localhost googlebigtable]# kubeadm init --apiserver-advertise-address=192.168.20.199 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version stable --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
【Command output:
W0525 01:31:27.750870 26835 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "kubernetes-single" could not be reached
[WARNING Hostname]: hostname "kubernetes-single": lookup kubernetes-single on 192.168.20.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.3 not found: manifest unknown: manifest unknown
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3 not found: manifest unknown: manifest unknown
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.3 not found: manifest unknown: manifest unknown
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-proxy:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-proxy:v1.18.3 not found: manifest unknown: manifest unknown
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher】
【The root cause of this error is that the latest stable Kubernetes images cannot be pulled: the official Kubernetes registry is blocked by the Great Firewall, and the Aliyun mirror we configured has not yet synced the latest stable release.】
[root@localhost googlebigtable]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:49:29Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost googlebigtable]# cat -n /etc/yum.repos.d/kubernetes.repo
1 [kubernetes]
2 name=Kubernetes
3 baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
4 enabled=1
5 gpgcheck=0
6 repo_gpgcheck=0
7 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@localhost googlebigtable]#
【According to the output, the installed kubeadm is v1.18.3, so by default it tries to pull the matching stable release v1.18.3, but the newest release mirrored by Aliyun is v1.18.0. The fix: run kubeadm reset on the master node, then rerun kubeadm init with an explicitly pinned Kubernetes version.】
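To catch this mismatch up front, kubeadm can list and pre-pull exactly the images a pinned release needs from the chosen mirror (these are standard `kubeadm config` subcommands; run on the master, where kubeadm is installed):

```shell
# List the images kubeadm needs for a pinned release from the mirror repository.
kubeadm config images list \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0

# Pre-pull them before kubeadm init, so a missing tag fails early and cheaply.
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0
```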
[root@localhost googlebigtable]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.

[preflight] Running pre-flight checks
W0525 01:30:24.235511 26771 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0525 01:30:24.238567 26771 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# kubeadm init --apiserver-advertise-address=192.168.20.199 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
W0525 01:43:12.879595 27416 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Hostname]: hostname "kubernetes-single" could not be reached
[WARNING Hostname]: hostname "kubernetes-single": lookup kubernetes-single on 192.168.20.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-single kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.20.199]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-single localhost] and IPs [192.168.20.199 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-single localhost] and IPs [192.168.20.199 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0525 01:44:35.823363 27416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0525 01:44:35.824368 27416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 80.002404 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubernetes-single as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-single as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 77a1kv.bx3qsxohrzit2vfa
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.199:6443 --token 77a1kv.bx3qsxohrzit2vfa \
--discovery-token-ca-cert-hash sha256:c99cbda7e0094e70794ca9a4732118842e6086d1d2c16d06b2c0450da7475ba2
[root@localhost googlebigtable]# exit
exit
[googlebigtable@localhost ~]$ mkdir -p $HOME/.kube
[googlebigtable@localhost ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for googlebigtable:
[googlebigtable@localhost ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[googlebigtable@localhost ~]$ su root
Password:
[root@kubernetes-single googlebigtable]# mkdir -p $HOME/.kube
[root@kubernetes-single googlebigtable]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubernetes-single googlebigtable]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubernetes-single googlebigtable]#
【The master node is now initialized, but kubeadm init printed a warning: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/. Kubernetes and Docker must use the same cgroup driver, so either Docker's driver configuration or the kubelet's must be changed. If the warning is left unresolved, it appears again when running kubeadm join on worker nodes.】
【Docker's driver can be configured in either of two places: /usr/lib/systemd/system/docker.service or /etc/docker/daemon.json.
The kubelet's driver is configured in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.】
【Stop the Docker service before modifying its configuration; otherwise Docker may fail to restart.】
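A quick way to inspect both sides of the mismatch before editing anything (`docker info --format` takes a Go template; the kubelet paths are the ones listed above, and on some setups the driver lives in /var/lib/kubelet/config.yaml instead):

```shell
# Docker's side of the cgroup driver setting.
docker info --format '{{.CgroupDriver}}'

# The kubelet's side: the drop-in directory and the kubelet config file.
grep -ri cgroup /etc/systemd/system/kubelet.service.d/ /var/lib/kubelet/config.yaml 2>/dev/null
```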
[root@kubernetes-single googlebigtable]# ps -aux | grep docker
root 3956 1.0 1.0 619092 81060 ? Ssl 03:22 0:03 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 6436 0.0 0.0 107688 6060 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/d3c3d042437eb6669a44ceb5cfe9aa15248dc16148c3797faf5bedfd804db300 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6501 0.0 0.0 107688 6128 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/d8a0564e736da715490a25eb2194b234d264f8def23d2fb323ee0fdd4c04d0d4 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6667 0.0 0.0 109096 6148 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/aea3a4a57bc4f47318707e5f483c34862bc8beeb88f03f77914db15786232eae -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6731 0.0 0.0 107688 6228 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/10e2beb886ef01013382d111efa87f71c2a4e1efd882dd9d749ffff399a08024 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6890 0.0 0.0 109096 6492 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/80172ec50dad5e8dbf247bb4a68eacf0d181dbcd17cb9c5fab106ff9e90ab604 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6947 0.0 0.0 109096 6348 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/5480968b39c37f254c107f27fd3d5eec3669e717b21dd8ccd15aa95b85c59808 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7132 0.0 0.0 109096 6248 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/db1e49cab420764b0ff2c22c81647bdd79db5ed6f1fcf1246675fc617ff264a1 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7195 0.0 0.0 107688 6540 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/715c4ad9bd56cedc47ac9149efa04fb0242af29af7c49d7c026d68ab72fe7cdc -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7374 0.0 0.0 107688 6492 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/986d65617c6bf86d572ea79deca4887d1e51e78c790294e9ac7f1ca40b500434 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7430 0.0 0.0 107688 6364 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/fd16858ea9604aff7e3664851399bbfe0a6ea04c41e2467618b919bbdd1ef2f8 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 8161 0.0 0.0 112816 968 pts/0 S+ 03:28 0:00 grep --color=auto docker
[root@kubernetes-single googlebigtable]# systemctl stop docker
[root@kubernetes-single googlebigtable]# ps -aux | grep docker
root 8408 0.0 0.0 112812 968 pts/0 S+ 03:28 0:00 grep --color=auto docker
[root@kubernetes-single googlebigtable]#
[root@kubernetes-single googlebigtable]# ls -F /etc/docker/
daemon.json key.json
[root@kubernetes-single googlebigtable]# cat -n /etc/docker/daemon.json
1 {
2 "registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"]
3 }
[root@kubernetes-single googlebigtable]# cp /etc/docker/daemon.json{,.original}
[root@kubernetes-single googlebigtable]# ls -F /etc/docker/
daemon.json daemon.json.original key.json
[root@kubernetes-single googlebigtable]# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@kubernetes-single googlebigtable]# cat -n /etc/docker/daemon.json
1 {
2 "registry-mirrors": ["https://dlbpv56y.mirror.aliyuncs.com"],
3 "exec-opts": ["native.cgroupdriver=systemd"]
4 }
【Note: the two keys must be separated by a comma; without it the file is invalid JSON and dockerd will refuse to start.】
[root@kubernetes-single googlebigtable]#

Alternatively:
[root@kubernetes-single googlebigtable]# docker info | grep Cgroup
Cgroup Driver: cgroupfs
[root@kubernetes-single googlebigtable]# cat -n /usr/lib/systemd/system/docker.service
1 [Unit]
2 Description=Docker Application Container Engine
3 Documentation=https://docs.docker.com
4 BindsTo=containerd.service
5 After=network-online.target firewalld.service containerd.service
6 Wants=network-online.target
7 Requires=docker.socket
8
9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
15 ExecReload=/bin/kill -s HUP $MAINPID
16 TimeoutSec=0
17 RestartSec=2
18 Restart=always
19
20 # Note that StartLimit options were moved from "Service" to "Unit" in systemd 229.
21 # Both the old, and new location are accepted by systemd 229 and up, so using the old location
22 # to make them work for either version of systemd.
23 StartLimitBurst=3
24
25 # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
26 # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
27 # this option work for either version of systemd.
28 StartLimitInterval=60s
29
30 # Having non-zero Limit*s causes performance problems due to accounting overhead
31 # in the kernel. We recommend using cgroups to do container-local accounting.
32 LimitNOFILE=infinity
33 LimitNPROC=infinity
34 LimitCORE=infinity
35
36 # Comment TasksMax if your systemd version does not support it.
37 # Only systemd 226 and above support this option.
38 TasksMax=infinity
39
40 # set delegate yes so that systemd does not reset the cgroups of docker containers
41 Delegate=yes
42
43 # kill only the docker process, not all processes in the cgroup
44 KillMode=process
45
46 [Install]
47 WantedBy=multi-user.target
[root@kubernetes-single googlebigtable]#
[root@kubernetes-single googlebigtable]# cp /usr/lib/systemd/system/docker.service{,.original}
【Append "--exec-opt native.cgroupdriver=systemd" after "ExecStart=/usr/bin/dockerd" and save:】
[root@kubernetes-single googlebigtable]# cat -n /usr/lib/systemd/system/docker.service
1 [Unit]
2 Description=Docker Application Container Engine
3 Documentation=https://docs.docker.com
4 BindsTo=containerd.service
5 After=network-online.target firewalld.service containerd.service
6 Wants=network-online.target
7 Requires=docker.socket
8
9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd -H fd:// --containerd=/run/containerd/containerd.sock
15 ExecReload=/bin/kill -s HUP $MAINPID
16 TimeoutSec=0
17 RestartSec=2
18 Restart=always
19
20 # Note that StartLimit options were moved from "Service" to "Unit" in systemd 229.
21 # Both the old, and new location are accepted by systemd 229 and up, so using the old location
22 # to make them work for either version of systemd.
23 StartLimitBurst=3
24
25 # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
26 # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
27 # this option work for either version of systemd.
28 StartLimitInterval=60s
29
30 # Having non-zero Limit*s causes performance problems due to accounting overhead
31 # in the kernel. We recommend using cgroups to do container-local accounting.
32 LimitNOFILE=infinity
33 LimitNPROC=infinity
34 LimitCORE=infinity
35
36 # Comment TasksMax if your systemd version does not support it.
37 # Only systemd 226 and above support this option.
38 TasksMax=infinity
39
40 # set delegate yes so that systemd does not reset the cgroups of docker containers
41 Delegate=yes
42
43 # kill only the docker process, not all processes in the cgroup
44 KillMode=process
45
46 [Install]
47 WantedBy=multi-user.target
[root@kubernetes-single googlebigtable]#
[root@kubernetes-single googlebigtable]# systemctl disable docker
Removed symlink /etc/systemd/system/multi-user.target.wants/docker.service.
[root@kubernetes-single googlebigtable]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@kubernetes-single googlebigtable]# systemctl daemon-reload
[root@kubernetes-single googlebigtable]# systemctl restart docker
[root@kubernetes-single googlebigtable]# docker info | grep Cgroup
Cgroup Driver: systemd
[root@kubernetes-single googlebigtable]#
[root@kubernetes-single googlebigtable]# docker info
Client:
Debug Mode: false

Server:
Containers: 18
Running: 8
Paused: 0
Stopped: 10
Images: 7
Server Version: 19.03.9
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1127.8.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.62GiB
Name: kubernetes-single
ID: U4GI:7OI3:B2AK:TA4C:EDHL:63L5:RFD6:NIDM:BCPA:ROWN:U5BQ:KKZA
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://dlbpv56y.mirror.aliyuncs.com/
Live Restore Enabled: false

[root@kubernetes-single googlebigtable]# ps -aux | grep docker
root 9022 3.7 1.0 873104 80920 ? Ssl 03:45 0:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 9201 0.0 0.0 108968 6656 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/89c8a3397bf326c5fca957a17073d9ffe30253956e08d3c962caefb086a828b8 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9210 0.0 0.0 107560 6672 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/5aab15177951ed3ef9f85fe158fde0fbcfb2acede2ba48a4c4678d5fe0b7d2ca -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9219 0.0 0.0 107560 6924 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/254fda158dba85349af7c4f38d84dde1ee500d18509f22ca6e3578d4b4aece4f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9228 0.0 0.0 108968 6660 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/6086315ede5d1d128b6ecd5a66d3f6d630ab390f9bb643fa008fb5468dc76771 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9409 0.5 0.0 107560 6672 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/505859a22bc24ae459dd83ead5d056537572fe6eec8f1d88d03236551a36d4a4 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9424 0.0 0.0 108968 6656 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/011089c5e1a622d23d175a8419abb022603d99a2f283dee9106c9ca5a4bcfc50 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9459 0.0 0.0 108968 6656 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/fe96bf8ac43c42b0e7f1f4a8e0f459d91f140d91edbf29b3608e0cd7d6031538 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9467 0.0 0.1 107560 8712 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/445b75f79429ec62ae7e13cc0f346e8c62d42cdc39b4e7528b2a7fd5b0de31b2 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9690 0.0 0.0 107560 6924 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/89bff4fac0d1e3e84141e87555c93f98bd2f7e9a422944fdcfeb30128853848c -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9748 0.0 0.0 107560 6672 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/825e5af830edef5f41582b0f4a1be301db8483b123f0d807837bee9638e01e2a -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9884 0.0 0.0 112816 968 pts/0 S+ 03:45 0:00 grep --color=auto docker
[root@kubernetes-single googlebigtable]# docker info | grep Cgroup
Cgroup Driver: systemd
[root@kubernetes-single googlebigtable]#

【For several reasons we plan to use Calico as the cluster's network plugin. Calico's default pod network is 192.168.0.0/16; Flannel's is 10.244.0.0/16. The VM's IP, 192.168.20.199, falls inside Calico's default range, so either Calico's default CIDR or the VM's network must be changed. Here we change Calico's CIDR to 10.244.0.0/16 (the --pod-network-cidr passed to kubeadm init above), after which Calico can be deployed directly as the network plugin.】
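Changing Calico's pool amounts to one edit on the downloaded manifest; the `CALICO_IPV4POOL_CIDR` environment variable name comes from the stock calico.yaml. A runnable sketch against a local copy of just the relevant fragment (the real manifest is much larger):

```shell
# Fragment of calico.yaml containing the default pool CIDR.
cat > /tmp/calico-fragment.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF

# Point the pool at the --pod-network-cidr given to kubeadm init.
sed -i 's#192\.168\.0\.0/16#10.244.0.0/16#' /tmp/calico-fragment.yaml
grep -A1 CALICO_IPV4POOL_CIDR /tmp/calico-fragment.yaml
```

Apply the same sed to the full calico.yaml before `kubectl apply -f calico.yaml`.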
[root@kubernetes-single googlebigtable]# wget https://docs.projectcalico.org/v3.7/manifests/calico.yaml
--2020-05-25 04:26:10-- https://docs.projectcalico.org/v3.7/manifests/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 157.230.37.202, 2400:6180:0:d1::57a:6001
Connecting to docs.projectcalico.org (docs.projectcalico.org)|157.230.37.202|:443... connected.
The connection to the server :443 was refused - did you specify the right host or port?

Summary: this route is not viable as-is (the manifest download was blocked). Instead, consider deploying only the master node, running the node components on it, and allowing pods to schedule on the master with kubectl taint nodes --all node-role.kubernetes.io/master- (note the trailing dash, which removes the taint) to enable single-machine mode.
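The single-machine switch and its verification can be sketched as follows (run on the master once kubectl is configured; the node name kubernetes-single is this host's):

```shell
# Remove the NoSchedule taint from all nodes; the trailing "-" means "remove".
kubectl taint nodes --all node-role.kubernetes.io/master-

# Verify: the Taints field should now read <none> for the master.
kubectl describe node kubernetes-single | grep -i taint
```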

孟伯, 2020-05-26

Contact: WeChat 1807479153, QQ 1807479153
