云计算day38

⼀、编排分类
单机容器编排: docker-compose
容器集群编排: docker swarm、mesos+marathon、kubernetes
应⽤编排: ansible(模块,剧本,⻆⾊)
⼆、系统管理进化史
1. 传统部署时代
早期,各个组织是在物理服务器上运⾏应⽤程序。 由于⽆法限制在
物理服务器中运⾏的应⽤程序资源使⽤,因此会导致资源分配问
题。
例如,如果在同⼀台物理服务器上运⾏多个应⽤程序, 则可能会出
现⼀个应⽤程序占⽤⼤部分资源的情况,⽽导致其他应⽤程序的性
能下降。
⽽⼀种解决⽅案是将每个应⽤程序都运⾏在不同的物理服务器上,
但是当某个应⽤程序资源利⽤率不⾼时,剩余资源⽆法被分配给其
他应⽤程序,⽽且维护许多物理服务器的成本很⾼。
2. 虚拟化部署时代
而后,虚拟化技术被引入了。虚拟化技术允许在单个物理服务器的
CPU 上运行多台虚拟机(Virtual Machine,VM)。虚拟化能使应用程序在不
同 VM 之间被彼此隔离,且能提供⼀定程度的安全性, 因为⼀个应
⽤程序的信息不能被另⼀应⽤程序随意访问。
虚拟化技术能够更好地利⽤物理服务器的资源,并且因为可轻松地
添加或更新应⽤程序,⽽因此可以具有更⾼的可扩缩性,以及降低
硬件成本等等的好处。通过虚拟化,你可以将⼀组物理资源呈现为
可丢弃的虚拟机集群。
每个 VM 是⼀台完整的计算机,在虚拟化硬件之上运⾏所有组件,
包括其⾃⼰的操作系统,所以由此看来,虚拟机技术占⽤资源量
⼤,在⼀台主机上最多运⾏⼗⼏台,效率不⾼。
3. 容器部署时代
容器,衍⽣于虚拟技术,但具有更宽松的隔离特性,使容器可以在
共享操作系统的同时,还能保持其轻量级的特点。⽽且每个容器都
具有⾃⼰的⽂件系统、CPU、内存、进程空间等,且具有最良好的
可移植性和平台兼容性。
敏捷应⽤程序的创建和部署:与使⽤ VM 镜像相⽐,提⾼了容
器镜像创建的简便性和效率
持续开发、集成和部署:通过快速简单的回滚(由于镜像不可变
性),提供可靠且频繁的容器镜像构建和部署。
关注开发与运维的分离:在构建、发布时创建应用程序容器镜
像,⽽不是在部署时,从⽽将应⽤程序与基础架构分离。
可观察性:不仅可以显示 OS 级别的信息和指标,还可以显示应用
程序的运行状况和其他指标信息。
跨开发、测试和⽣产的环境⼀致性:在笔记本计算机上也可以和
在云中运⾏⼀样的应⽤程序。
跨云和操作系统发⾏版本的可移植性:可在 Ubuntu、RHEL、
CoreOS、本地、GoogleKubernetes Engine 和其他任何地⽅运
⾏。
以应⽤程序为中⼼的管理:提⾼抽象级别,从在虚拟硬件上运⾏
OS 到使⽤逻辑资源在OS 上运⾏应⽤程序。
松散耦合、分布式、弹性、解放的微服务:应⽤程序被分解成较
⼩的独⽴部分, 并且可以动态部署和管理-⽽不是在⼀台⼤型单
机上整体运⾏。
资源隔离:可预测的应⽤程序性能。
资源利⽤:⾼效率和⾼密度。
三、Kubernetes 简介
Kubernetes 缩写:K8S,K 和 s 之间有八个字符,因此得名。
Kubernetes 由 Google 的 Borg 系统作为原型,后经 Go 语言沿用
Borg 的思路重写,并捐献给 CNCF 基金会开源。
Kubernetes 是一个可移植的、可扩展的开源平台,用于管理容器化
的工作负载和服务,可促进声明式配置和自动化。
官⽹:https://kubernetes.io
Github:https://github.com/kubernetes/kubernetes
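下面用一个最小示例说明“声明式配置”的用法(示例命令,假设本地已有一个名为 deploy.yaml 的资源清单,且 kubectl 已能连接集群):
kubectl apply -f deploy.yaml    # 声明期望状态,由集群控制器负责收敛到该状态
kubectl diff -f deploy.yaml     # 对比清单与集群当前状态的差异,不做实际修改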
四、Kubernetes 功能
Kubernetes 的⽬标是让部署容器化的应⽤简单并且⾼效,提供了应
⽤部署,规划,更新,维护的⼀种机制。
Kubernetes 在 Docker 等容器技术的基础上,为容器化的应⽤提供
部署运⾏、资源调度、服务发现和动态伸缩等⼀系列完整功能,提
⾼了⼤规模容器集群管理的便捷性。
主要功能:
1. 使⽤ Docker 等容器技术对应⽤程序包装(package)、实例化
(instantiate) 、运⾏(run)。
2. 以集群的⽅式运⾏、管理跨机器的容器,解决 Docker 跨机器容
器之间的通讯问题。
3. K8S 的⾃我修复机制使得容器集群总是运⾏在⽤户期望的状
态。
五、Kubernetes 特性
1. ⾃动装箱:Kubernetes可以根据所需资源和其他限制条件智能
地定位容器,⽽不会影响可⽤性。
2. 弹性伸缩:使⽤命令、UI 或者基于 CPU 使⽤情况⾃动快速扩容
和缩容应⽤程序实例,保证应⽤业务⾼峰并发时的⾼可⽤性;业
务低峰时回收资源,以最⼩成本运⾏服务。
3. ⾃我修复:在节点故障时重新启动失败的容器,替换和重新部
署,保证预期的副本数量;杀死健康检查失败的容器,并且在未
准备好之前不会处理客户端请求,确保线上服务不中断。
4. 服务发现和负载均衡:K8S为多个容器提供⼀个统⼀访问⼊⼝
(内部IP地址和⼀个DNS名称),并且负载均衡关联的所有容
器,使得⽤户⽆需考虑容器 IP 问题。
5. 自动发布(默认滚动发布模式)和回滚:K8S采用滚动策略更新
应用,一次更新一个Pod,而不是同时删除所有的Pod,如果更
新过程中出现问题,将回滚更改,确保升级不影响业务。
6. 集中化配置管理和密钥管理:管理机密数据和应⽤程序配置,⽽
不需要把敏感数据暴露在镜像⾥,提⾼敏感数据安全性,并可以
将⼀些常⽤的配置存储在K8S中,⽅便应⽤程序使⽤。
7. 存储编排:⽀持外挂存储并对外挂存储资源进⾏编排,挂载外部
存储系统,⽆论是来⾃本地存储,公有云(如:AWS),还是
⽹络存储(如:NFS、Glusterfs、Ceph)都作为集群资源的⼀
部分使用,极大提高存储使用灵活性。
8. 任务批量处理运行:提供一次性任务、定时任务,满足批量数据
处理和分析的场景。
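以弹性伸缩和自我修复为例,下面给出一组演示命令(仅为示例,假设集群可用、nginx 镜像可以拉取):
kubectl create deployment web --image=nginx --replicas=2             # 创建带 2 个副本的 Deployment
kubectl scale deployment web --replicas=5                            # 手动扩容到 5 个副本
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80   # 基于 CPU 使用率自动伸缩(需已安装 metrics-server)
kubectl delete pod -l app=web --wait=false                           # 删除 Pod 后,控制器会自动补齐副本数(自我修复)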
六、K8S 解决裸跑 Docker 的痛点
1. 单机使⽤,⽆法有效集群。
2. 随着容器数量的上升,管理成本攀升。
3. 没有有效的容灾、⾃愈机制。
4. 没有预设编排模板,⽆法实现快速、⼤规模容器调度。
5. 没有统⼀的配置管理中⼼⼯具。
6. 没有容器⽣命周期的管理⼯具。
7. 没有图形化运维管理⼯具。
七、Kubernetes 架构
K8S 属于主从设备模型(Master-Slave 架构),由 Master 节点负责
集群的调度、管理和运维(分配活的),Slave 节点是运算⼯作负载
节点(⼲活的),被称为 Worker Node 节点。
Master 需要占据⼀个独⽴服务器运⾏,因为其是整个集群的“⼤
脑”,⼀旦宕机或不可⽤,那么所有控制命令都将失效,可对主节点
进行高可用配置。
当 Worker Node 节点宕机时,其上的工作负载会被 Master 自动转
移到其他节点上。
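集群搭建完成后,可以用下面的示例命令查看 Master 与 Worker 节点的角色和状态(命令为通用示例,节点名以实际环境为准):
kubectl get nodes -o wide            # 查看各节点的角色(ROLES)、状态、IP 等信息
kubectl describe node k8s-node01     # 查看某个 Worker 节点的详细信息和其上运行的 Pod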
1. Master 节点组件
API Server
*整个集群的控制中枢,提供集群中各个模块之间的数据交换*,并将
集群状态和信息存储到分布式键-值(key-value)存储系统 Etcd 集群
中。
同时它也是集群管理、资源配额、提供完备的集群安全机制的⼊
⼝,为集群各类资源对象提供增删改查以及 watch 的 REST API 接
口。
Controller-manager
集群状态管理器,以保证 Pod 或其他资源达到期望值*。*当集群中
某个 Pod 的副本数或其他资源因故障和错误导致⽆法正常运⾏,没
有达到设定的值时,Controller Manager 会尝试⾃动修复并使其达到
期望状态。
Scheduler
*集群 Pod 的调度中⼼,主要是通过调度算法将 Pod 分配到最佳的
Node 节点*,它通过APIServer 监听所有 Pod 的状态,⼀旦发现新
的未被调度到任何 Node 节点的Pod(PodSpec.NodeName为空),就
会根据⼀系列策略选择最佳节点进⾏调度。
Etcd
*⽤于可靠地存储集群的配置数据,是⼀种持久性、轻量型、分布式
的键-值 (key-value) 数据存储组件*,作为Kubernetes集群的持久化
存储系统。
注:生产环境建议将 Etcd 数据存放到单独的 SSD 硬盘上,并且做好冗余。
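在此基础上,可以定期对 Etcd 做快照备份。下面是一个示例命令(假设使用 kubeadm 默认的证书路径和本地 2379 端口,备份路径仅为示意):
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key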
2. Worker Node 节点
Kubelet
*负责与 Master 通信协作,管理该节点上的 Pod,对容器进⾏健康
检查及监控,同时负责上报节点和节点上面 Pod 的状态。*
Kube-proxy
*运⾏在每个 node 节点上,实现 pod 的⽹络代理,维护⽹络规则和
四层负载均衡规则*,负责写⼊规则到 iptables 或 ipvs 实现服务映射
访问。
Runtime
*负责容器的管理* (新版本 K8S 使⽤的是 Containerd)。
CoreDNS
⽤于 Kubernetes 集群内部 Service 的解析,可以让 Pod 把
Service 名称解析成 Service 的 IP 然后通过 Service 的 IP 地址进⾏
连接到对应的应⽤上。
Calico
符合 CNI 标准的⼀个⽹络插件,它*负责给每个 Pod 分配⼀个不会
重复的 IP,并且把每个节点当做一个“路由器”*,这样一个节点的
Pod 就可以通过 IP 地址访问到其他节点的 Pod。
Docker
*运⾏容器,负责本机的容器创建和管理⼯作。*
八、Pod 概念
Pod 是 Kubernetes 中的基本构建块,它代表⼀个或⼀组相互关联的
容器。Pod 是Kubernetes 的最⼩部署单元,可以包含⼀个或多个容
器,这些容器共享存储、⽹络和运⾏配置。
容器之间可以使⽤ localhost:port 相互访问,可以使⽤ volume 等实
现数据共享。根据 Docker 的构造,Pod 可被建模为一组具有共享命
名空间、卷、IP 地址和 Port 端口的容器。
Pod 的主要特点包括:
1. 共享存储:Pod 中的所有容器都可以访问同⼀个存储卷
(Persistent Volume),实现数据共享。
2. 共享⽹络:Pod 中的所有容器都共享同⼀个⽹络命名空间,可
以相互通信。
3. 共享运⾏配置:Pod 中的所有容器都共享相同的运⾏配置,例
如容器的启动参数、环境变量等。
Pause 容器:
Pod 的父容器,它主要负责僵尸进程的回收管理,同时通过 Pause
容器可以使同⼀个 Pod ⾥⾯的不同容器进⾏共享存储、⽹络、
PID、IPC等。
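下面是一个最小的多容器 Pod 清单示例(仅作演示,名称和镜像均为假设),两个容器共享同一个网络命名空间和同一个 emptyDir 卷:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "echo hello > /pod-data/index.html; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
sidecar 容器写入的文件可以被 web 容器读到,两个容器之间也可以直接通过 localhost 访问对方监听的端口。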
九、Kubernetes 工作流程
1. 运维⼈员使⽤ kubectl 命令⼯具向 API Server 发送请求,API
Server 接收请求后写⼊到 Etcd 中。
2. API Server 让 Controller-manager 按照预设模板去创建 Pod。
3. Controller-manager 通过 API Server 读取 Etcd 中⽤户的预设信
息,再通过 API Server 找到 Scheduler,为新创建的 Pod 选择
最合适的 Node ⼯作负载节点。
4. Scheduler 通过 API Server 在 Etcd 找到存储的 Node 节点元信
息、剩余资源等,⽤预选和优选策略选择最优的 Node 节点。
5. Scheduler 确定 Node 节点后,通过 API Server 交给这个 Node
节点上的 Kubelet 进⾏ Pod 资源的创建。
6. Kubelet 调⽤容器引擎交互创建 Pod,同时将 Pod 监控信息通
过 API Server 存储到 Etcd 中。
7. 当⽤户访问时,通过 Kube-proxy 负载、转发,访问相应的
Pod。
8. 注:决定创建 Pod 清单的是 Controller-manager 控制器,
Kubelet 和容器引擎只是干活的。
十、K8S 创建 Pod 流程
1. 详细流程
⾸先 Kubectl 创建⼀个 Pod,在提交时转化为 json。
再经过 auth 认证(鉴权),然后传递给 API Server 进⾏处理。
API Server 将请求信息存储到 Etcd 中。
Scheduler 和 Controller-manager 会监听 API Server 的请求。
在 Scheduler 和 Controller-manager 监听到请求后,Scheduler
会提交给API Server⼀个list清单 —— 包含的是获取node节点信
息。
当 API Server 从 Etcd 获取后端 Node 节点信息后,会同时被
Scheduler 监听到,然后 Scheduler 进⾏优选打分制,最后将评
估结果传递给 API Server。
⽽后,API Server 会提交清单给对应节点的 Kubelet(代理)。
Kubelet 代理通过 K8S 与容器的接⼝ (例如 containerd) 进⾏交
互,假设是 docker 容器,那么此时 kubelet 就会通过
dockershim 以及 runc 接⼝与 docker 的守护进程docker-server
进⾏交互,来创建对应的容器,再⽣成对应的 Pod。
Kubelet 同时会借助 Metric Server 收集本节点的所有状态信
息,然后提交给 API Server。
最后 API Server 将该节点的容器和 Pod 信息存储到 Etcd 中。
2. 简化流程
⽤户通过 kubectl 或其他 API 客户端提交 Pod Spec 给 API
Server。
API Server 尝试将 Pod 对象的相关信息存⼊ etcd 中,待写⼊操
作执行完成,API Server 即会返回确认信息至客户端。
Controller 通过 API Server 的 Watch 接口发现新的 Pod,将
Pod 加⼊到任务队列,并启动 Pod Control 机制创建与之对应的
Pod。
所有 Controller 正常后,将结果存入 etcd。
Scheduler 通过 API Server 的 Watch 接⼝监测发现新的 Pod,
经过给主机打分之后,让 Pod 调度到符合要求的 Node 节点,
并将结果存⼊到 etcd 中。
Kubelet 每隔⼀段时间向 API Server 通过 Node name 获取⾃身
Node 上要运⾏的 Pod 并通过与⾃身缓存⽐较,来创建新
Pod。
Containerd 启动容器。
最后 API Server 将本节点的容器和Pod信息存储到etcd。
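可以用下面的示例命令直观地观察上述流程(假设集群已就绪,输出内容因环境而异):
kubectl run flow-test --image=nginx                          # 向 API Server 提交一个 Pod
kubectl get pod flow-test -o wide                            # 查看 Scheduler 把它调度到了哪个 Node
kubectl describe pod flow-test                               # Events 中可看到调度、拉取镜像、启动容器等步骤
kubectl get events --sort-by=.metadata.creationTimestamp     # 按时间顺序查看集群事件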
系统       主机名        IP 地址          角色         配置
Rocky8.7   k8s-master   192.168.15.11   master 节点   4 核 4G 内存,50G 硬盘
Rocky8.7   k8s-node01   192.168.15.22   work 节点     4 核 4G 内存,50G 硬盘
Rocky8.7   k8s-node02   192.168.15.33   work 节点     4 核 4G 内存,50G 硬盘

配置信息       备注
Docker 版本    24.10
Pod 网段       172.16.0.0/16
Service 网段   10.96.0.0/16
⼀、安装环境
1. 安装说明
本次以 kubeadm 方式安装 k8s 1.28.0 版本,但在生产环境中,
建议使⽤⼩版本⼤于 5 的 Kubernetes 版本,⽐如 1.19.5 以后。
2. 系统环境
3. ⽹络及版本环境注:宿主机⽹段、Pod ⽹段、Service ⽹段不能重复,服务器 IP 地
址不能设置为 DHCP,需配置为静态 IP。
⼆、前期准备
1. 配置主机映射
2. 配置 yum 源
[root@k8s-master ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain
localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain
localhost6 localhost6.localdomain6
192.168.15.11 k8s-master
192.168.15.11 k8s-master-lb # 若有⾼可⽤主机,这⾥为
另⼀个master的IP
192.168.15.22 k8s-node01
192.168.15.33 k8s-node02
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# mkdir bak
[root@k8s-master yum.repos.d]# mv Rocky* bak
[root@k8s-master yum.repos.d]# mv local.repo bak
[root@k8s-master yum.repos.d]# ls
aliyunbak bak
[root@k8s-master yum.repos.d]# vim docker-ce.repo
# docker软件源
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[root@k8s-master yum.repos.d]# vim Rocky-BaseOS.repo # 阿里云软件源
[baseos]
name=Rocky Linux $releasever - BaseOS
#mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever
baseurl=https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
[root@k8s-master yum.repos.d]# vim Rocky-AppStream.repo # 阿里云软件源
[appstream]
name=Rocky Linux $releasever - AppStream
#mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=AppStream-$releasever
baseurl=https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
[root@k8s-master yum.repos.d]# vim kubernetes.repo
# K8S软件源
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/
repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-master yum.repos.d]# ls
aliyunbak docker-ce.repo Rocky-AppStream.repo
bak kubernetes.repo Rocky-BaseOS.repo
[root@k8s-master yum.repos.d]# yum clean all #
清除yum缓存
35 ⽂件已删除
[root@k8s-master yum.repos.d]# yum makecache #
建⽴yum元数据缓存
Rocky Linux 8 - AppStream
5.2 MB/s | 9.6 MB 00:01
Rocky Linux 8 - BaseOS
2.6 MB/s | 3.9 MB 00:01
Docker CE Stable - x86_64
54 kB/s | 52 kB 00:00
Kubernetes
193 kB/s | 182 kB 00:00
元数据缓存已建立。
3. 安装必备工具
[root@k8s-master yum.repos.d]# cd
[root@k8s-master ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
......省略部分内容......
已安装:
git-2.39.3-1.el8_8.x86_64
git-core-2.39.3-1.el8_8.x86_64
git-core-doc-2.39.3-1.el8_8.noarch
perl-Error-1:0.17025-2.el8.noarch
perl-Git-2.39.3-1.el8_8.noarch
perl-TermReadKey-2.37-7.el8.x86_64
telnet-1:0.17-76.el8.x86_64
yum-utils-4.0.21-23.el8.noarch
完毕!
4. 关闭安全及 swap 分区
[root@k8s-master ~]# systemctl disable --now
firewalld
[root@k8s-master ~]# systemctl disable --now
dnsmasq
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# sed -i
's#SELINUX=enforcing#SELINUX=disabled#g'
/etc/sysconfig/selinux
[root@k8s-master ~]# sed -i
's#SELINUX=enforcing#SELINUX=disabled#g'
/etc/selinux/config
[root@k8s-master ~]# swapoff -a && sysctl -w
vm.swappiness=0
vm.swappiness = 0
[root@k8s-master ~]# sed -ri '/^[^#]*swap/s@^@#@'
/etc/fstab
5. 同步时间
[root@k8s-master ~]# rpm -ivh https://mirrors.wlnmp.com/rockylinux/wlnmp-release-rocky-8.noarch.rpm
获取https://mirrors.wlnmp.com/rockylinux/wlnmp-release-rocky-8.noarch.rpm
Verifying...
################################# [100%]
准备中...
################################# [100%]
正在升级/安装...
1:wlnmp-release-rocky-1-1
################################# [100%]
[root@k8s-master ~]# yum -y install wntp
[root@k8s-master ~]# ntpdate time2.aliyun.com
19 Dec 21:02:09 ntpdate[33790]: adjust time server
203.107.6.88 offset -0.001396 sec
[root@k8s-master ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
6. 配置 limit
[root@k8s-master ~]# ulimit -SHn 65535 # 单个进程
可以打开的⽂件数量将被限制为 65535
[root@k8s-master ~]# vim /etc/security/limits.conf
# 末尾添加如下内容
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
7. 配置免密登录
[root@k8s-master ~]# ssh-keygen -t rsa
# 遵循默认配置,⼀路回⻋即可
[root@k8s-master ~]# for i in k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
# 按照提示输入yes和密码
8. 安装 k8s 高可用性 Git 仓库
[root@k8s-master ~]# cd /root/ ; git clone
https://gitee.com/dukuan/k8s-ha-install.git
# 在 /root/ ⽬录下克隆⼀个名为 k8s-ha-install.git 的
Git 仓库
正克隆到 'k8s-ha-install'...
remote: Enumerating objects: 882, done.
remote: Counting objects: 100% (208/208), done.
remote: Compressing objects: 100% (130/130), done.
remote: Total 882 (delta 92), reused 145 (delta
52), pack-reused 674
接收对象中: 100% (882/882), 19.71 MiB | 2.82 MiB/s,
完成.
处理 delta 中: 100% (356/356), 完成.
9. 升级系统并重启
更新系统但不包括内核
[root@k8s-master ~]# yum update -y --exclude=kernel* --nobest && reboot
三、配置内核模块
1. 配置 ipvs 模块
[root@k8s-master ~]# yum install ipvsadm ipset
sysstat conntrack libseccomp -y
[root@k8s-master ~]# modprobe -- ip_vs #
使⽤ modprobe 命令加载内核模块,核⼼ IPVS 模块。
[root@k8s-master ~]# modprobe -- ip_vs_rr #
IPVS 负载均衡算法 rr
[root@k8s-master ~]# modprobe -- ip_vs_wrr #
IPVS 负载均衡算法 wrr
[root@k8s-master ~]# modprobe -- ip_vs_sh #
⽤于源端负载均衡的模块
[root@k8s-master ~]# modprobe -- nf_conntrack #
⽤于⽹络流量过滤和跟踪的模块
[root@k8s-master ~]# vim /etc/modules-load.d/ipvs.conf
# 在系统启动时加载下列 IPVS 和相关功能所需的模块
ip_vs # IPVS 核心负载均衡模块
ip_vs_lc # 最少连接(Least Connection)调度算法模块
ip_vs_wlc # 加权最少连接(Weighted Least Connection)调度算法模块
ip_vs_rr # 轮询(Round Robin)调度算法模块
ip_vs_wrr # 加权轮询(Weighted Round Robin)调度算法模块
ip_vs_lblc # 基于局部性的最少连接(LBLC)调度算法模块
ip_vs_lblcr # 带复制的基于局部性最少连接(LBLCR)调度算法模块
ip_vs_dh # 目标地址散列(Destination Hashing)调度算法模块
ip_vs_sh # 源地址散列(Source Hashing)调度算法模块
ip_vs_fo # 加权故障转移(Weighted Failover)调度算法模块
ip_vs_nq # 永不排队(Never Queue)调度算法模块
ip_vs_sed # 最短期望延迟(Shortest Expected Delay)调度算法模块
ip_vs_ftp # 用于实现FTP服务的负载均衡模块
ip_vs_sh
nf_conntrack # 用于跟踪网络连接状态的模块
ip_tables # 用于管理防火墙规则的模块
ip_set # 用于创建和管理IP集合的模块
xt_set # 用于处理IP数据包集合的模块,提供了与iptables等网络工具的接口
ipt_set # 用于处理iptables规则集合的模块
ipt_rpfilter # 用于实现路由反向路径过滤的模块
ipt_REJECT # iptables模块之一,用于将不符合规则的数据包拒绝,并返回特定的错误码
ipip # 用于实现IP隧道功能的模块,使得数据可以在两个网络之间进行传输
[root@k8s-master ~]# systemctl enable --now
systemd-modules-load.service # 设置开机自动加载上述内核模块的 systemd 服务
The unit files have no installation config
(WantedBy, RequiredBy, Also, Alias
settings in the [Install] section, and
DefaultInstance for template units).This means they are not meant to be enabled using
systemctl.
Possible reasons for having this kind of units
are:
1) A unit may be statically enabled by being
symlinked from another unit's
.wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for
some other unit which has
a requirement dependency on it.
3) A unit may be started when needed via
activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
4) In case of template units, the unit is meant to
be enabled with some
instance name specified.
[root@k8s-master ~]# lsmod | grep -e ip_vs -e
nf_conntrack # 查看已写⼊加载的模块
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 172032 6
ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 4
xt_conntrack,nf_nat,ipt_MASQUERADE,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack2. 配置 k8s 内核
libcrc32c 16384 5
nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@k8s-master ~]# vim /etc/sysctl.d/k8s.conf
# 写⼊k8s所需内核模块
net.bridge.bridge-nf-call-iptables = 1 # 控制⽹络
桥接与iptables之间的⽹络转发⾏为
net.bridge.bridge-nf-call-ip6tables = 1 # ⽤于控制
⽹络桥接(bridge)的IP6tables过滤规则。当该参数设置为1
时,表示启⽤对⽹络桥接的IP6tables过滤规则
fs.may_detach_mounts = 1 # ⽤于控制⽂件系统是否允许
分离挂载,1表示允许
net.ipv4.conf.all.route_localnet = 1 # 允许本地⽹
络上的路由。设置为1表示允许,设置为0表示禁⽌。
vm.overcommit_memory=1 # 控制内存分配策略。设置为1表
示允许内存过量分配,设置为0表示不允许。
vm.panic_on_oom=0 # 决定当系统遇到内存不⾜(OOM)时是
否产⽣panic。设置为0表示不产⽣panic,设置为1表示产⽣
panic。
fs.inotify.max_user_watches=89100 # inotify可以监
视的⽂件和⽬录的最⼤数量。
fs.file-max=52706963 # 系统级别的⽂件描述符的最⼤数
量。
fs.nr_open=52706963 # 单个进程可以打开的文件描述符
的最⼤数量。
net.netfilter.nf_conntrack_max=2310720 # ⽹络连接
跟踪表的最⼤⼤⼩。
net.ipv4.tcp_keepalive_time = 600 # TCP保活机制发
送探测包的间隔时间(秒)。
net.ipv4.tcp_keepalive_probes = 3 # TCP保活机制发
送探测包的最⼤次数。
net.ipv4.tcp_keepalive_intvl =15 # TCP保活机制在
发送下⼀个探测包之前等待响应的时间(秒)。
net.ipv4.tcp_max_tw_buckets = 36000 # TCP
TIME_WAIT状态的bucket数量。
net.ipv4.tcp_tw_reuse = 1 # 允许重⽤TIME_WAIT套接
字。设置为1表示允许,设置为0表示不允许。
net.ipv4.tcp_max_orphans = 327680 # 系统中最⼤的孤
套接字数量。
net.ipv4.tcp_orphan_retries = 3 # 系统尝试重新分
配孤套接字的次数。
net.ipv4.tcp_syncookies = 1 # ⽤于防⽌SYN洪⽔攻击。
设置为1表示启⽤SYN cookies,设置为0表示禁⽤。
net.ipv4.tcp_max_syn_backlog = 16384 # SYN连接请
求队列的最⼤⻓度。
net.ipv4.ip_conntrack_max = 65536 # IP连接跟踪表的
最⼤⼤⼩。
net.ipv4.tcp_max_syn_backlog = 16384 # 系统中最⼤
的监听队列的长度。
net.ipv4.tcp_timestamps = 0 # 用于关闭TCP时间戳选
项。
net.core.somaxconn = 16384 # ⽤于设置系统中最⼤的监
听队列的⻓度
[root@k8s-master ~]# reboot
# 保存后,所有节点重启,保证重启后内核依然加载
[root@k8s-master ~]# lsmod | grep --color=auto -e
ip_vs -e nf_conntrack
ip_vs_ftp 16384 0
nf_nat 45056 3
ipt_MASQUERADE,nft_chain_nat,ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 172032 25
ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,i
p_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_
vs_sed,ip_vs_ftp
nf_conntrack 172032 4
xt_conntrack,nf_nat,ipt_MASQUERADE,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 5
nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
四、基本组件安装
1. 安装 Containerd
(1)安装 Docker
[root@k8s-master ~]# yum remove -y podman runc containerd # 卸载之前的containerd
[root@k8s-master ~]# yum install docker-ce docker-ce-cli containerd.io -y # 安装Docker和containerd
(2)配置 Containerd 所需模块
[root@k8s-master ~]# cat <<EOF | sudo tee
/etc/modules-load.d/containerd.conf
> overlay # ⽤于⽀持Overlay⽹络⽂件系统的模块,它可以
在现有的⽂件系统之上创建叠加层,以实现虚拟化、隔离和管理等功
能。
> br_netfilter # ⽤于containerd的⽹络过滤模块,它可
以对进出容器的⽹络流量进⾏过滤和管理。
> EOF
overlay
br_netfilter
[root@k8s-master ~]# modprobe -- overlay
[root@k8s-master ~]# modprobe -- br_netfilter
(3)配置 Containerd 所需内核
[root@k8s-master ~]# cat <<EOF | sudo tee
/etc/sysctl.d/99-kubernetes-cri.conf # tee:读取
的数据写⼊到⼀个或多个⽂件中,同时还将其复制到标准输出
> net.bridge.bridge-nf-call-iptables = 1 # ⽤于
控制⽹络桥接是否调⽤iptables进⾏包过滤和转发。
> net.ipv4.ip_forward = 1 # 路由
转发,1为开启
> net.bridge.bridge-nf-call-ip6tables = 1 # 控制
是否在桥接接⼝上调⽤IPv6的iptables进⾏数据包过滤和转发。
> EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8s-master ~]# sysctl --system
(4)Containerd 配置文件
[root@k8s-master ~]# mkdir -p /etc/containerd
[root@k8s-master ~]# containerd config default |
tee /etc/containerd/config.toml # 读取containerd
的配置并保存到/etc/containerd/config.toml
[root@k8s-master ~]# vim
/etc/containerd/config.toml
# 找到containerd.runtimes.runc.options模块,添加
SystemdCgroup = true,如果已经存在则直接修改
[plugins."io.containerd.grpc.v1.cri".containerd.ru
ntimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false # 没有就添加,有
的话就修改
# 找到sandbox_image修改为如下参数
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"
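# 上述两处修改也可以用 sed 直接完成(示例命令,匹配模式为通用写法,具体以自己的 config.toml 内容为准)
[root@k8s-master ~]# sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
[root@k8s-master ~]# sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
[root@k8s-master ~]# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml # 确认修改结果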
[root@k8s-master ~]# systemctl daemon-reload #
加载systemctl控制脚本
[root@k8s-master ~]# systemctl enable --now
containerd # 启动containerd并设置开机启动
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
(5)配置 crictl 客户端连接的运行位置
[root@k8s-master ~]# cat > /etc/crictl.yaml <<EOF
# 配置容器运⾏环境的crictl.yml⽂件
> runtime-endpoint:
unix:///run/containerd/containerd.sock # 指定了容
器运⾏时的地址为:unix://...
> image-endpoint:
unix:///run/containerd/containerd.sock # 指定
了镜像运⾏时的地址为:unix://...
> timeout: 10 # 设置了超时时间为10秒
> debug: false # 关闭调试模式
> EOF
2. 安装 Kubernetes 组件
安装 Kubeadm、Kubelet 和 Kubectl
[root@k8s-master ~]# yum list kubeadm.x86_64 --showduplicates | sort -r
# 查询最新的Kubernetes版本号
[root@k8s-master ~]# yum install kubeadm-1.28*
kubelet-1.28* kubectl-1.28* -y
# 安装1.28最新版本kubeadm、kubelet和kubectl
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable --now
kubelet # 允许开机⾃启kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master ~]# kubeadm version # 查看当前安
装的kubeadm版本号
kubeadm version: &version.Info{Major:"1",
Minor:"28", GitVersion:"v1.28.2",
GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2
f", GitTreeState:"clean", BuildDate:"2023-09-
13T09:34:32Z", GoVersion:"go1.20.8",
Compiler:"gc", Platform:"linux/amd64"}
3. Kubernetes 集群初始化
(1)Kubeadm 配置文件
[root@k8s-master ~]# vim kubeadm-config.yaml #
修改kubeadm配置⽂件
apiVersion: kubeadm.k8s.io/v1beta3 # 指定
Kubernetes配置⽂件的版本,使⽤的是kubeadm API的v1beta3
版本
bootstrapTokens: # 定义bootstrap tokens的信息。这
些tokens⽤于在Kubernetes集群初始化过程中进⾏身份验证
- groups: # 定义了与此token关联的组
- system:bootstrappers:kubeadm:default-node
token
token: 7t2weq.bjbawausm0jaxury # bootstrap
token的值
ttl: 24h0m0s # token的⽣存时间,这⾥设置为24⼩时
usages: # 定义token的⽤途
- signing # 数字签名
- authentication # 身份验证
kind: InitConfiguration # 指定配置对象的类型,
InitConfiguration:表示这是⼀个初始化配置
localAPIEndpoint: # 定义本地API端点的地址和端⼝
advertiseAddress: 192.168.15.11
bindPort: 6443
nodeRegistration: # 定义节点注册时的配置
criSocket:
unix:///var/run/containerd/containerd.sock # 容器
运⾏时(CRI)的套接字路径 name: k8s-master # 节点的名称
taints: # 标记
- effect: NoSchedule # 免调度节点
key: node-role.kubernetes.io/control-plane
# 该节点为控制节点
---
apiServer: # 定义了API服务器的配置
certSANs: # 为API服务器指定了附加的证书主体名称
(SAN),指定IP即可
- 192.168.15.11
timeoutForControlPlane: 4m0s # 控制平⾯的超时时
间,这⾥设置为4分钟
apiVersion: kubeadm.k8s.io/v1beta3 # 指定API
Server版本
certificatesDir: /etc/kubernetes/pki # 指定了证书的
存储⽬录
clusterName: kubernetes # 定义了集群的名称
为"kubernetes"
controlPlaneEndpoint: 192.168.15.11:6443 # 定义
了控制节点的地址和端⼝
controllerManager: {} # 控制器管理器的配置,为空表示
使⽤默认配置
etcd: # 定义了etcd的配置
local: # 本地etcd实例
dataDir: /var/lib/etcd # 数据目录
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
# 指定了Kubernetes使用的镜像仓库的地址,阿里云的镜像仓库。
kind: ClusterConfiguration # 指定了配置对象的类型,
ClusterConfiguration:表示这是⼀个集群配置
kubernetesVersion: v1.28.2 # 指定了kubernetes的版本
networking: # 定义了kubernetes集群⽹络设置
dnsDomain: cluster.local # 定义了集群的DNS域为:
cluster.local
podSubnet: 172.16.0.0/16 # 定义了Pod的⼦⽹
serviceSubnet: 10.96.0.0/16 # 定义了服务的⼦⽹
scheduler: {} # 使⽤默认的调度器⾏为
[root@k8s-master ~]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
# 将旧的kubeadm配置文件转换为新的格式
(2)下载组件镜像
[root@k8s-master ~]# kubeadm config images pull --config /root/new.yaml
# 通过新的配置文件new.yaml从指定的阿里云仓库拉取kubernetes组件镜像
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1
(3)集群初始化
[root@k8s-master ~]# kubeadm init --config
/root/new.yaml --upload-certs
You can now join any number of the control-plane
node running the following command on each as
root:
kubeadm join 192.168.15.11:6443 --token
7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash
sha256:73dc6f8d973fc70818e309386c1bfc5d330c19d52b4
94c6f88f634a6b1250a2f \
--control-plane --certificate-key
80fcc505867ccbc6550c18ed11f40e64ecf486d626403823f5
48dda65c19953d
# 等待初始化后保存这些命令
[root@k8s-master ~]# vim token.txt
kubeadm join 192.168.15.11:6443 --token
7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash
sha256:73dc6f8d973fc70818e309386c1bfc5d330c19d52b4
94c6f88f634a6b1250a2f \ # 当需要加⼊新node节点时,
只复制到这即可
--control-plane --certificate-key
80fcc505867ccbc6550c18ed11f40e64ecf486d626403823f5
48dda65c19953d # 当需要高可用master集群时,将整个
token复制下来
(4)加载环境变量
[root@k8s-master ~]# cat <<EOF >> /root/.bashrc
> export KUBECONFIG=/etc/kubernetes/admin.conf
> EOF
[root@k8s-master ~]# source /root/.bashrc
(5)查看组件容器状态
之前采用初始化安装方式,所有的系统组件均以容器的方式运行,
并且在 kube-system 命名空间内,此时可以查看 Pod(容器组)状态
[root@k8s-master ~]# kubectl get po -n kube-system
NAME READY
STATUS RESTARTS AGE
coredns-6554b8b87f-77brw 0/1
Pending 0 6m1s
coredns-6554b8b87f-8hwr4 0/1
Pending 0 6m1s
etcd-k8s-master 1/1
Running 0 6m16s
kube-apiserver-k8s-master 1/1
Running 0 6m16s
kube-controller-manager-k8s-master 1/1
Running 0 6m16s
kube-proxy-j778p 1/1
Running 0 6m2s
kube-scheduler-k8s-master 1/1
Running 0 6m16s
# kubectl:k8s控制命令
# get:获取参数
# po:pod缩写
# -n:指定命名空间
# kube-system:命名空间
(6)初始化失败排查
Ⅰ. 初始化重置
如果初始化失败,重置后再次初始化,命令如下(没有失败不要执行!!!)
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
Ⅱ. 多次初始化失败
首先排查系统日志
CentOS日志路径:/var/log/messages
Ubuntu⽇志路径:/var/log/syslog
通过⽇志找到错误原因
最后再检查之前所有的配置⽂件是否有编写错误,有的配置⽂件
在修改后需要重新载⼊,可以根据刚才的步骤进⾏修改及载⼊,
最终确认⽆误后输⼊重置命令,再进⾏初始化。
经常出错的原因:
Containerd 的配置⽂件修改的不对,⾃⾏参考《安装
containerd》⼩节核对。
new.yaml 配置问题,比如非高可用集群忘记把 controlPlaneEndpoint
的端口修改为 6443。
new.yaml 配置问题,三个⽹段有交叉,出现 IP 地址冲突。
VIP 不通导致⽆法初始化成功,此时 messages ⽇志会有
VIP 超时的报错。
tail -f /var/log/messages | grep -v "not found"
Ⅲ. 连接 API 服务器超时
当获取集群状态出现以下信息时:
[root@master ~]# kubectl get po
E1221 14:39:38.091729 2782 memcache.go:265]
couldn't get current server API group list: Get
"https://192.168.15.11:6443/api?timeout=32s": dial
tcp 192.168.15.11:6443: connect: connection
refused
E1221 14:39:38.092239 2782 memcache.go:265]
couldn't get current server API group list: Get
"https://192.168.15.11:6443/api?timeout=32s": dial
tcp 192.168.15.11:6443: connect: connection
refused
E1221 14:39:38.094041 2782 memcache.go:265]
couldn't get current server API group list: Get
"https://192.168.15.11:6443/api?timeout=32s": dial
tcp 192.168.15.11:6443: connect: connection
refused
E1221 14:39:38.095440 2782 memcache.go:265]
couldn't get current server API group list: Get
"https://192.168.15.11:6443/api?timeout=32s": dial
tcp 192.168.15.11:6443: connect: connection
refused
E1221 14:39:38.097007 2782 memcache.go:265]
couldn't get current server API group list: Get
"https://192.168.15.11:6443/api?timeout=32s": dial
tcp 192.168.15.11:6443: connect: connection
refused
The connection to the server 192.168.15.11:6443
was refused - did you specify the right host or
port?
此时可以修改系统环境变量
临时修改:
export KUBECONFIG=/etc/kubernetes/admin.conf
⻓期修改:
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
如果修改环境变量后也不⾏时,需要重新进⾏初始化(依次执⾏下
⾯两条命令)
kubeadm reset -f ; ipvsadm --clear ; rm -rf
~/.kube
kubeadm init --config /root/new.yaml --upload-certs
4. Master 高可用
其他 master 加⼊集群时,输⼊如下命令
如:需要⾼可⽤时,⼜克隆了 master02、03...等,那么这些节
点都执⾏下⾯的命令
注意:每个主机的 token 值是不⼀样的,下⾯是我
192.168.15.11 (master)主机的 token 值,这是集群初始化⽣成
的代码,需要在当时记录下来。
5. Token 过期处理
注意:**以下步骤是上述初始化命令产⽣的 Token 过期了才需要执
⾏以下步骤,如果没有过期不需要执⾏,直接 join 即可。**
Token 过期后⽣成新的 token
kubeadm join 192.168.15.11:6443 --token
7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash \
sha256:73dc6f8d973fc70818e309386c1bfc5d330c19d52b4
94c6f88f634a6b1250a2f \
--control-plane --certificate-key \
80fcc505867ccbc6550c18ed11f40e64ecf486d626403823f5
48dda65c19953d
Master 需要生成 --certificate-key:
kubeadm token create --print-join-command
kubeadm init phase upload-certs --upload-certs
6. Node 节点配置
Node 节点上主要部署公司的一些业务应用,生产环境中不建议
Master 节点部署系统组件之外的其他 Pod,测试环境可以允许
Master 节点部署 Pod 以节省系统资源。
(1)node 加入集群
[root@k8s-node01 ~]# kubeadm join
192.168.15.11:6443 --token 7t2weq.bjbawausm0jaxury
\ # node01通过复制master初始化⽣成的token来加⼊集群
> --discovery-token-ca-cert-hash \
>
sha256:73dc6f8d973fc70818e309386c1bfc5d330c19d52b4
94c6f88f634a6b1250a2f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the
cluster...
[preflight] FYI: You can look at this config file
with 'kubectl -n kube-system get cm kubeadm-config
-o yaml'(2)查看集群状态
master 上查看集群状态(NotReady 不影响)
[kubelet-start] Writing kubelet configuration to
file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file
with flags to file "/var/lib/kubelet/kubeadm
flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform
the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to
apiserver and a response was received.
* The Kubelet was informed of the new secure
connection details.
Run 'kubectl get nodes' on the control-plane to
see this node join the cluster.
# 正确加入集群后的输出信息
(2)查看集群状态
master 上查看集群状态(NotReady 不影响)
[root@k8s-master ~]# kubectl get node # 获取所有节
点信息
NAME STATUS ROLES AGE
VERSION
k8s-master NotReady control-plane 35m
v1.28.2
k8s-node01 NotReady <none> 6m39s
v1.28.2
k8s-node02 NotReady <none> 7m27s
v1.28.2
到此建议打快照
7. Calico 组件安装
(1)切换 git 分支
[root@k8s-master ~]# cd /root/k8s-ha-install &&
git checkout manual-installation-v1.28.x && cd
calico/
分⽀ 'manual-installation-v1.28.x' 设置为跟踪
'origin/manual-installation-v1.28.x'。
切换到一个新分支 'manual-installation-v1.28.x'
(2)修改 Pod 网段
[root@k8s-master calico]# POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'` # 获取已定义的Pod网段
[root@k8s-master calico]# sed -i
"s#POD_CIDR#${POD_SUBNET}#g" calico.yaml # 修改
calico.yml⽂件中的pod⽹段
[root@k8s-master calico]# kubectl apply -f
calico.yaml # 创建calico的pod
(3)查看容器和节点状态
[root@k8s-master calico]# kubectl get po -n kube-system
NAME READY
STATUS RESTARTS AGE
calico-kube-controllers-6d48795585-wj8g5 1/1
Running 0 130m
calico-node-bk4p5 1/1
Running 0 130m
calico-node-kmsh7 1/1
Running 0 130m
calico-node-qthgh 1/1
Running 0 130mcoredns-6554b8b87f-jdc2b 1/1
Running 0 133m
coredns-6554b8b87f-thftb 1/1
Running 0 133m
etcd-master 1/1
Running 0 133m
kube-apiserver-master 1/1
Running 0 133m
kube-controller-manager-master 1/1
Running 0 133m
kube-proxy-46j4z 1/1
Running 0 131m
kube-proxy-8g887 1/1
Running 0 133m
kube-proxy-vwp27 1/1
Running 0 131m
kube-scheduler-master 1/1
Running 0 133m
[root@k8s-master calico]# kubectl get node # 此
时节点全部准备完成
NAME STATUS ROLES AGE
VERSION
k8s-master Ready control-plane 40m
v1.28.2
k8s-node01 Ready <none> 12m
v1.28.2
k8s-node02 Ready <none> 12m
v1.28.2
8. Metrics 部署
在新版的 Kubernetes 中系统资源的采集均使用 Metrics-server,可
以通过 Metrics 采集节点和 Pod 的内存、磁盘、CPU 和网络的使用率。
(1)复制证书到所有 node 节点
将 master 节点的 front-proxy-ca.crt 复制到所有 Node 节点,每有
一个节点执行一次,仅需修改命令内的 node 节点主机名即可。
[root@k8s-master calico]# scp
/etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt #
向node01节点发送代理证书
front-proxy-ca.crt
100% 1123 937.0KB/s 00:00
[root@k8s-master calico]# scp
/etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt #
向node02节点发送代理证书
front-proxy-ca.crt
100% 1123 957.4KB/s 00:00
# 若有其他node节点,按照格式执⾏下⾯命令,这⾥不⽤执⾏,因
为node只有两台主机
[root@k8s-master calico]# scp
/etc/kubernetes/pki/front-proxy-ca.crt k8s-node03:/etc/kubernetes/pki/front-proxy-ca.crt
(2)安装 metrics server
[root@k8s-master calico]# cd /root/k8s-ha-install/kubeadm-metrics-server
[root@k8s-master kubeadm-metrics-server]# kubectl
create -f comp.yaml # 添加metric server的pod资源
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggre
gated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metri
cs-server created
rolebinding.rbac.authorization.k8s.io/metrics
server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metri
cs-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/syste
m:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.
k8s.io created
(3)查看 metrics server 状态
[root@master kubeadm-metrics-server]# kubectl get
po -n kube-system -l k8s-app=metrics-server # 在
kube-system命名空间下查看metrics server的pod运行状态
NAME READY STATUS
RESTARTS AGE
metrics-server-8df99c47f-mkbfd 1/1 Running
0 34s
[root@master kubeadm-metrics-server]# kubectl top
node # 查看node节点的系统资源使⽤情况
NAME CPU(cores) CPU% MEMORY(bytes)
MEMORY%
k8s-node01 51m 1% 831Mi
23%
k8s-node02 55m 1% 931Mi
25%
master 107m 2% 1412Mi
39%
[root@master kubeadm-metrics-server]# kubectl top
po -A
NAMESPACE NAME
CPU(cores) MEMORY(bytes)
kube-system calico-kube-controllers-6d48795585-
wj8g5 2m 25Mi
kube-system calico-node-bk4p5
20m 155Mi
kube-system calico-node-kmsh7
25m 152Mi
kube-system calico-node-qthgh
24m 145Mi
kube-system coredns-6554b8b87f-jdc2b
1m 22Mi
kube-system coredns-6554b8b87f-thftb
1m 20Mi
kube-system etcd-master
14m 66Mi
kube-system kube-apiserver-master
29m 301Mi
kube-system kube-controller-manager-master
10m 56Mi
kube-system kube-proxy-46j4z
1m 22Mi
kube-system kube-proxy-8g887
1m 24Mi
kube-system kube-proxy-vwp27
1m 22Mi
kube-system kube-scheduler-master
2m 26Mi
kube-system metrics-server-8df99c47f-mkbfd
3m 29Mi
9. Dashboard 部署
Dashboard 用于展示集群中的各类资源,同时也可以通过
Dashboard 实时查看 Pod 的日志和在容器中执行一些命令等。
(1)安装组件
[root@master kubeadm-metrics-server]# cd
/root/k8s-ha-install/dashboard/
[root@master dashboard]# kubectl create -f . #
建⽴dashboard的pod资源
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin
-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes
dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes
dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes
dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kuber
netes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
(2)登录 dashboard
如果是⾕歌浏览器,需要在启动⽂件中加⼊下⾯的启动参数,⽤
于解决⽆法访问 Dashboard 的问题
--test-type --ignore-certificate-errors
(3)更改 svc 模式
[root@master dashboard]# kubectl edit svc
kubernetes-dashboard -n kubernetes-dashboard
# edit:进⼊kubernetes的⽂本编辑器
# svc:指定某个服务项,这⾥指定的是kubernetes-dashboard
# -n:指定命名空间,kubernetes-dashboard
# 命令执⾏后相当于进⼊vim⽂本编辑器,不要⽤⿏标滚轮,会输出
乱码的!可以使⽤“/”搜索,输⼊“/type”找到⽬标,如果已经为
NodePort忽略此步骤
......省略部分内容......
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
type: NodePort
(4)查看访问端口号
找到端⼝号后,通过 master 的 IP+端⼝即可访问 dashboard(端⼝
为终端查询到的端⼝,要⽤ https 协议访问)
[root@master dashboard]# kubectl get svc
kubernetes-dashboard -n kubernetes-dashboard # 获
取kubernetes-dashboard状态信息,包含端⼝,服务IP等
NAME TYPE CLUSTER-IP
EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.96.137.94
<none> 443:30582/TCP 8m50s
(5)创建登录 token
在“输⼊ token *”内输⼊终端⽣成的 token
[root@master dashboard]# kubectl create token
admin-user -n kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6Inlvc2g1cWhWcjduaXI1ZU
FpQWNwRFJYYW1saXVFM3lrdlJnaHlUSmY0RTAifQ.eyJhdWQiO
lsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN
0ZXIubG9jYWwiXSwiZXhwIjoxNzAzMDU2Nzg4LCJpYXQiOjE3M
DMwNTMxODgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZ
hdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pb
yI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2V
hY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiN
zE0YWU1N2UtNjRiNC00NTU0LTk5OTctYjE2NmEwZTQyNzhjIn1
9LCJuYmYiOjE3MDMwNTMxODgsInN1YiI6InN5c3RlbTpzZXJ2a
WNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.l6z
gXfNhppndKAqFJrR_vCi7w0_esGp7yQGNYdrQGlE5latyFKRXN
Jub8dvDe-ZyquW1H-KTvZntAluwOXv79W
KY8Z8d31FePN9LHzCXPDordzyg8rE7qvgAPNeU8Fg
VnYtr_ujpB
muBinjnzT7LjysJiBi6fsndiD5RUYcYr6bsLg91bcLgAdW3bn_
9W5587z_q-910wpxl9AwUL9xVzy
vsVDDdXe1VthkoGYxyaznRf5omkmpwabQ3JQ0L8U_8Oop6HaZs
g5cEBCqBHrgyjBsYRALjzRlFlC9CB4hrYY4P_zRSdoI0lyiG4Z
eh0ber6awoeeKSMbJMTqwMlw
10. Kube-proxy
(1)改为 ipvs 模式
[root@master ~]# kubectl edit cm kube-proxy -n
kube-system
# 使⽤“/”找到“mode”,按照如下修改
mode: ipvs
(2)更新 Kube-Proxy 的 Pod
[root@master ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":
{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
-n kube-system
daemonset.apps/kube-proxy patched
[root@master ~]# curl 127.0.0.1:10249/proxyMode
ipvs
五、集群可用性验证
1. 验证节点
[root@master ~]# kubectl get node # 全部为Ready,
是正常
NAME STATUS ROLES AGE
VERSION
k8s-node01 Ready <none> 156m
v1.28.2
k8s-node02 Ready <none> 155m
v1.28.2
master Ready control-plane 157m
v1.28.2
2. 验证 Pod
[root@master ~]# kubectl get po -A # 全部为
running,表示正常
NAMESPACE NAME
READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-
6d48795585-wj8g5 1/1 Running 0
156m
kube-system calico-node-bk4p5
1/1 Running 0 156m
kube-system calico-node-kmsh7
1/1 Running 0 156m
kube-system calico-node-qthgh
1/1 Running 0 156m
kube-system coredns-6554b8b87f-jdc2b
1/1 Running 0
159m
kube-system coredns-6554b8b87f-thftb
1/1 Running 0
159m
kube-system etcd-master
1/1 Running 0 159m
kube-system kube-apiserver-master
1/1 Running 0 159mkube-system kube-controller-manager
master 1/1 Running 0
159m
kube-system kube-proxy-9sxt9
1/1 Running 0
5m6s
kube-system kube-proxy-g79z5
1/1 Running 0
5m7s
kube-system kube-proxy-scwgn
1/1 Running 0
5m9s
kube-system kube-scheduler-master
1/1 Running 0 159m
kube-system metrics-server-8df99c47f
mkbfd 1/1 Running 0
154m
kubernetes-dashboard dashboard-metrics-scraper-
7b554c884f-92jwb 1/1 Running 0
24m
kubernetes-dashboard kubernetes-dashboard-
54b699784c-f7trp 1/1 Running 0
24m
3. 验证集群网段是否冲突
三⽅⽹段均不冲突(service、Pod、宿主机)
[root@master ~]# kubectl get svc # 查看服务的⽹段
NAME TYPE CLUSTER-IP EXTERNAL-IP
PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none>
443/TCP 160m
[root@master ~]# kubectl get po -A -owide # 查看
所有命名空间下的所有⽹段,再与服务的⽹段进⾏⽐较
NAMESPACE NAME
READY STATUS RESTARTS AGE
IP NODE NOMINATED NODE
READINESS GATES
kube-system calico-kube-controllers-
6d48795585-wj8g5 1/1 Running 0
158m 172.16.58.194 k8s-node02 <none>
<none>
kube-system calico-node-bk4p5
1/1 Running 0 158m
192.168.15.22 k8s-node01 <none>
<none>
kube-system calico-node-kmsh7
1/1 Running 0 158m
192.168.15.33 k8s-node02 <none>
<none>kube-system calico-node-qthgh
1/1 Running 0 158m
192.168.15.11 master <none>
<none>
kube-system coredns-6554b8b87f-jdc2b
1/1 Running 0
160m 172.16.58.195 k8s-node02 <none>
<none>
kube-system coredns-6554b8b87f-thftb
1/1 Running 0
160m 172.16.58.193 k8s-node02 <none>
<none>
kube-system etcd-master
1/1 Running 0 160m
192.168.15.11 master <none>
<none>
kube-system kube-apiserver-master
1/1 Running 0 160m
192.168.15.11 master <none>
<none>
kube-system kube-controller-manager
master 1/1 Running 0
160m 192.168.15.11 master <none>
<none>kube-system kube-proxy-9sxt9
1/1 Running 0
6m29s 192.168.15.11 master <none>
<none>
kube-system kube-proxy-g79z5
1/1 Running 0
6m30s 192.168.15.33 k8s-node02 <none>
<none>
kube-system kube-proxy-scwgn
1/1 Running 0
6m32s 192.168.15.22 k8s-node01 <none>
<none>
kube-system kube-scheduler-master
1/1 Running 0 160m
192.168.15.11 master <none>
<none>
kube-system metrics-server-8df99c47f
mkbfd 1/1 Running 0
155m 172.16.85.193 k8s-node01 <none>
<none>
kubernetes-dashboard dashboard-metrics-scraper-
7b554c884f-92jwb 1/1 Running 0
25m 172.16.85.195 k8s-node01 <none>
<none>
4. 验证是否可正常创建资源
kubernetes-dashboard kubernetes-dashboard-
54b699784c-f7trp 1/1 Running 0
25m 172.16.85.194 k8s-node01 <none>
<none>
[root@master ~]# kubectl create deploy cluster-test --image=registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools -- sleep 3600
deployment.apps/cluster-test created # 已创建,表
示正常
[root@master ~]# kubectl get po
NAME READY STATUS
RESTARTS AGE
cluster-test-66bb44bd88-sq8fx 1/1 Running
0 41s
[root@master ~]# kubectl get po -owide
NAME READY STATUS
RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
cluster-test-66bb44bd88-sq8fx 1/1 Running
0 48s 172.16.58.196 k8s-node02
<none> <none>
5. Pod 必须能够解析 Service
同 namespace 和跨 namespace
(1)nslookup kubernetes
(2)nslookup kube-dns.kube-system
[root@master ~]# kubectl exec -it cluster-test-
66bb44bd88-sq8fx -- bash # 进⼊pod下的某个容器
(06:36 cluster-test-66bb44bd88-sq8fx:/) nslookup
kubernetes
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
# 可以解析到server的IP地址,说明同namespace可以解析
(06:36 cluster-test-66bb44bd88-sq8fx:/) nslookup
kube-dns.kube-system
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kube-dns.kube-system.svc.cluster.local
Address: 10.96.0.10
# 可以解析到server的第⼗个ip,说明可以解析到kube-dns,说
明跨namespace也可解析
6. 确认是否可访问 Kubernetes 的 443 和 kube-dns 的
53
每个节点都必须能访问 Kubernetes 的 kubernetes svc 443 和
kube-dns 的 service 53
[root@master ~]# curl https://10.96.0.1:443
curl: (60) SSL certificate problem: unable to get
local issuer certificate
More details here:
https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server
and therefore could not
establish a secure connection to it. To learn more
about this situation and
how to fix it, please visit the web page mentioned
above.
[root@master ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server
7. 确认各 Pod 之间是否可正常通信
同 namespace 和跨 namespace
[root@master ~]# kubectl get po -nkube-system -
owide
NAME READY
STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATEScalico-kube-controllers-6d48795585-wj8g5 1/1
Running 0 170m 172.16.58.194 k8s
node02 <none> <none>
calico-node-bk4p5 1/1
Running 0 170m 192.168.15.22 k8s
node01 <none> <none>
calico-node-kmsh7 1/1
Running 0 170m 192.168.15.33 k8s
node02 <none> <none>
calico-node-qthgh 1/1
Running 0 170m 192.168.15.11 master
<none> <none>
coredns-6554b8b87f-jdc2b 1/1
Running 0 173m 172.16.58.195 k8s
node02 <none> <none>
coredns-6554b8b87f-thftb 1/1
Running 0 173m 172.16.58.193 k8s
node02 <none> <none>
etcd-master 1/1
Running 0 173m 192.168.15.11 master
<none> <none>
kube-apiserver-master 1/1
Running 0 173m 192.168.15.11 master
<none> <none>
kube-controller-manager-master 1/1
Running 0 173m 192.168.15.11 master
<none> <none>kube-proxy-9sxt9 1/1
Running 0 19m 192.168.15.11 master
<none> <none>
kube-proxy-g79z5 1/1
Running 0 19m 192.168.15.33 k8s
node02 <none> <none>
kube-proxy-scwgn 1/1
Running 0 19m 192.168.15.22 k8s
node01 <none> <none>
kube-scheduler-master 1/1
Running 0 173m 192.168.15.11 master
<none> <none>
metrics-server-8df99c47f-mkbfd 1/1
Running 0 168m 172.16.85.193 k8s
node01 <none> <none>
[root@master ~]# kubectl get po -owide
NAME READY STATUS
RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
cluster-test-66bb44bd88-sq8fx 1/1 Running
0 12m 172.16.58.196 k8s-node02
<none> <none>
[root@master ~]# kubectl exec -it cluster-test-
66bb44bd88-sq8fx -- bash
(06:46 cluster-test-66bb44bd88-sq8fx:/) ping
172.16.58.195 -c 3
PING 172.16.58.195 (172.16.58.195) 56(84) bytes of
data.
64 bytes from 172.16.58.195: icmp_seq=1 ttl=63
time=0.455 ms
64 bytes from 172.16.58.195: icmp_seq=2 ttl=63
time=0.082 ms
64 bytes from 172.16.58.195: icmp_seq=3 ttl=63
time=0.082 ms
--- 172.16.58.195 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss,
time 2083ms
rtt min/avg/max/mdev = 0.082/0.206/0.455/0.176 ms
同机器和跨机器
[root@master ~]# kubectl get po -owide
NAME READY STATUS
RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
cluster-test-66bb44bd88-sq8fx 1/1 Running
0 13m 172.16.58.196 k8s-node02
<none> <none>
[root@master ~]# ping 172.16.58.196 -c 3
PING 172.16.58.196 (172.16.58.196) 56(84) bytes of
data.
64 bytes from 172.16.58.196: icmp_seq=1 ttl=63
time=0.676 ms
64 bytes from 172.16.58.196: icmp_seq=2 ttl=63
time=0.303 ms
64 bytes from 172.16.58.196: icmp_seq=3 ttl=63
time=0.284 ms
--- 172.16.58.196 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss,
time 2043ms
rtt min/avg/max/mdev = 0.284/0.421/0.676/0.180 ms
六、注意事项
注意:kubeadm 安装的集群,证书有效期默认是一年。master 节点
的 kube-apiserver、kube-scheduler、kube-controller-manager、
etcd 都是以容器运⾏的。可以通过 kubectl get po -n kube-system
查看。
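可以用下面的示例命令查看各证书的剩余有效期,必要时手动续期:
kubeadm certs check-expiration    # 查看各组件证书的到期时间
kubeadm certs renew all           # 需要时一次性续期全部证书,续期后需重启相关静态 Pod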
启动和⼆进制不同的是,
kubelet 的配置⽂件在 /etc/sysconfig/kubelet
和/var/lib/kubelet/config.yaml,修改后需要重启 kubelet 进程。
其他组件的配置⽂件在 /etc/kubernetes/manifests ⽬录下,⽐如
kube-apiserver.yaml,该 yaml ⽂件更改后,kubelet 会⾃动刷新配
置,也就是会重启 pod。不能再次创建该⽂件
kube-proxy 的配置在 kube-system 命名空间下的 configmap 中,
可以通过:
kubectl edit cm kube-proxy -n kube-system
进⾏更改,更改完成后,可以通过 patch 重启 kube-proxy:
kubectl patch daemonset kube-proxy -p "{\"spec\":
{\"template\":{\"metadata\":{\"annotations\":
{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
Kubeadm 安装后,master 节点默认不允许部署 Pod,可以通过以
下方式删除 Taint,即可部署 Pod:
kubectl taint node -l node-role.kubernetes.io/control-plane node-role.kubernetes.io/control-plane:NoSchedule-
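删除污点后,可以用下面的示例命令确认 master 已可参与调度:
kubectl describe node | grep -A 2 Taints    # 查看各节点的 Taints,master 上应不再有 NoSchedule
kubectl get node -o wide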
主机名              IP地址                            说明
k8s-master01 ~ 03   192.168.15.101~103               master节点 * 3
k8s-node01 ~ 02     192.168.15.22 | 192.168.15.33    worker节点 * 2
k8s-master-lb       192.168.15.88                    keepalived虚拟IP
⼀、安装说明
本⽂章将演示Rocky 8 ⼆进制⽅式安装⾼可⽤k8s 1.28.0版本。
⽣产环境中,建议使⽤⼩版本⼤于5的Kubernetes版本,⽐如1.19.5
以后的才可⽤于⽣产环境。
⼆、集群安装
2.1 基本环境配置配置信息
备注
系统版本
Rocky linux 8.7
Docker版本
24.10
Pod⽹段
172.16.0.0/16
Service⽹段
10.96.0.0/16
请统⼀替换这些⽹段,Pod⽹段和service和宿主机⽹段不要重
复!!!
注意:宿主机⽹段、K8s Service⽹段、Pod⽹段不能重复
主机信息,服务器IP地址不能设置成dhcp,要配置成静态IP。
系统环境:
各节点配置:
[root@k8s-master01 ~]# cat /etc/redhat-release
Rocky Linux release 8.7 (Green Obsidian)
[root@k8s-master01 ~]#192.168.100.61 k8s-master01 # 2C2G 20G
192.168.100.62 k8s-master02 # 2C2G 20G
192.168.100.63 k8s-master03 # 2C2G 20G
192.168.100.69 k8s-master-lb # VIP 虚拟IP不占⽤机器
资源 # 如果不是⾼可⽤集群,该IP为Master01的IP
192.168.100.64 k8s-node01 # 2C2G 20G
192.168.100.65 k8s-node02 # 2C2G 20G
配置**所有节点hosts⽂件:**
[root@k8s-master01 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain
localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain
localhost6 localhost6.localdomain6
192.168.100.61 k8s-master01
192.168.100.62 k8s-master02
192.168.100.63 k8s-master03
192.168.100.69 k8s-master-lb # 如果不是⾼可⽤集群,
该IP为Master01>的IP
192.168.100.64 k8s-node01
192.168.100.65 k8s-node02Rocky8**所有节点配置yum源:**
[root@k8s-master01 ~]# cd /etc/yum.repos.d/
[root@k8s-master01 yum.repos.d]# ls
bak
[root@k8s-master01 yum.repos.d]# cat
>>/etc/yum.repos.d/docker-ce.repo<<'EOF'
> [docker-ce-stable]
> name=Docker CE Stable - $basearch
> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/$basearch/stable
> enabled=1
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-stable-debuginfo]
> name=Docker CE Stable - Debuginfo $basearch
> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/debug-$basearch/stable
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-stable-source]
> name=Docker CE Stable - Sources> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/source/stable
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-test]
> name=Docker CE Test - $basearch
> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/$basearch/test
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-test-debuginfo]
> name=Docker CE Test - Debuginfo $basearch
> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/debug-$basearch/test
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-test-source]
> name=Docker CE Test - Sources> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/source/test
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-nightly]
> name=Docker CE Nightly - $basearch
> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/$basearch/nightly
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-nightly-debuginfo]
> name=Docker CE Nightly - Debuginfo $basearch
> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/debug-$basearch/nightly
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
>
> [docker-ce-nightly-source]
> name=Docker CE Nightly - Sources> baseurl=https://mirrors.aliyun.com/docker
ce/linux/centos/$releasever/source/nightly
> enabled=0
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/docker
ce/linux/centos/gpg
> EOF
[root@k8s-master01 yum.repos.d]#
[root@k8s-master01 yum.repos.d]# cat
>>/etc/yum.repos.d/Rocky-BaseOS.repo<<'EOF'
> # Rocky-BaseOS.repo
> #
> # The mirrorlist system uses the connecting IP
address of the client and the
> # update status of each mirror to pick current
mirrors that are geographically
> # close to the client. You should use this for
Rocky updates unless you are
> # manually picking other mirrors.
> #
> # If the mirrorlist does not work for you, you
can try the commented out
> # baseurl line instead.
>
> [baseos]
> name=Rocky Linux $releasever - BaseOS>
#mirrorlist=https://mirrors.rockylinux.org/mirrorli
st?arch=$basearch&repo=BaseOS-$releasever
>
baseurl=https://mirrors.aliyun.com/rockylinux/$rele
asever/BaseOS/$basearch/os/
> gpgcheck=1
> enabled=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
rockyofficial
> EOF
[root@k8s-master01 yum.repos.d]#
[root@k8s-master01 yum.repos.d]# cat >>Rocky
AppStream.repo<<'EOF'
> # Rocky-AppStream.repo
> #
> # The mirrorlist system uses the connecting IP
address of the client and the
> # update status of each mirror to pick current
mirrors that are geographically
> # close to the client. You should use this for
Rocky updates unless you are
> # manually picking other mirrors.
> #
> # If the mirrorlist does not work for you, you
can try the commented out
> # baseurl line instead.>
> [appstream]
> name=Rocky Linux $releasever - AppStream
>
#mirrorlist=https://mirrors.rockylinux.org/mirrorli
st?arch=$basearch&repo=AppStream-$releasever
>
baseurl=https://mirrors.aliyun.com/rockylinux/$rele
asever/AppStream/$basearch/os/
> gpgcheck=1
> enabled=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
rockyofficial
> EOF
[root@k8s-master01 yum.repos.d]# ls
bak docker-ce.repo Rocky-AppStream.repo Rocky
BaseOS.repo
[root@k8s-master01 yum.repos.d]# yum clean all
38 个⽂件已删除
[root@k8s-master01 yum.repos.d]# yum makecache
Rocky Linux 8 - AppStream 3.6 MB/s | 11
MB 00:02
Rocky Linux 8 - BaseOS 1.6 MB/s | 6.0
MB 00:03
Docker CE Stable - x86_64 75 kB/s | 49
kB 00:00
元数据缓存已建⽴。[root@k8s-master01 yum.repos.d]#
所有节点**必备⼯具安装**
[root@k8s-master01 yum.repos.d]# cd
[root@k8s-master01 ~]# yum install wget jq psmisc
vim net-tools telnet yum-utils device-mapper
persistent-data lvm2 git -y
所有节点**关闭firewalld 、dnsmasq、selinux**
[root@k8s-master01 ~]# systemctl disable --now
firewalld
[root@k8s-master01 ~]# systemctl disable --now
dnsmasq
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i
's#SELINUX=enforcing#SELINUX=disabled#g'
/etc/sysconfig/selinux
[root@k8s-master01 ~]# sed -i
's#SELINUX=enforcing#SELINUX=disabled#g'
/etc/selinux/config
[root@k8s-master01 ~]#
所有节点**关闭swap分区,fstab注释swap**
[root@k8s-master01 ~]# swapoff -a && sysctl -w
vm.swappiness=0
vm.swappiness = 0
[root@k8s-master01 ~]# sed -ri '/^[^#]*swap/s@^@#@'
/etc/fstab
[root@k8s-master01 ~]#
所有节点**同步时间**
[root@k8s-master01 ~]# rpm -ivh https://mirrors.wlnmp.com/rockylinux/wlnmp-release-rocky-8.noarch.rpm
获取https://mirrors.wlnmp.com/rockylinux/wlnmp-release-rocky-8.noarch.rpm
Verifying...
################################# [100%]
准备中...
################################# [100%]
正在升级/安装...
1:wlnmp-release-rocky-1-1
################################# [100%]
[root@k8s-master01 ~]# yum -y install wntp
[root@k8s-master01 ~]# ntpdate time2.aliyun.com
31 Aug 22:43:34 ntpdate[3549]: adjust time server
203.107.6.88 offset +0.000713 sec
[root@k8s-master01 ~]# crontab -e
no crontab for root - using an empty one
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
所有节点**配置limit**
[root@k8s-node02 ~]# ulimit -SHn 65535
[root@k8s-node02 ~]# vim /etc/security/limits.conf
# 末尾添加如下内容
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
Master01**节点免密钥登录其他节点,安装过程中⽣成配置⽂件和
证书均在Master01上操作,集群管理也在Master01上操作。密钥配
置如下:**
[root@k8s-master01 ~]# ssh-keygen -t rsa
Master01**配置免密码登录其他节点**
[root@k8s-master01 ~]# for i in k8s-master01 k8s
master02 k8s-master03 k8s-node01 k8s-node02;do ssh
copy-id -i .ssh/id_rsa.pub $i;done
Master01**下载安装文件**
[root@k8s-master01 ~]# cd /root/ ; git clone
https://gitee.com/dukuan/k8s-ha-install.git
正克隆到 'k8s-ha-install'...
remote: Enumerating objects: 879, done.
remote: Counting objects: 100% (205/205), done.
remote: Compressing objects: 100% (127/127), done.
remote: Total 879 (delta 90), reused 145 (delta
52), pack-reused 674
接收对象中: 100% (879/879), 19.70 MiB | 2.37 MiB/s,
完成.
处理 delta 中: 100% (354/354), 完成.
[root@k8s-master01 ~]# ls
点名.txt 视频 下载 anaconda-ks.cfg k8s-ha
install
公共 图⽚ ⾳乐 dianming.sh yum.sh
模板 ⽂档 桌⾯ initial-setup-ks.cfg
[root@k8s-master01 ~]#
所有节点**升级系统并重启,此处升级没有升级内核**
2.2 安装ipvsadm
所有节点**安装 ipvsadm 及检测系统的⼯具**
所有节点配置ipvs模块
[root@k8s-master01 ~]# yum update -y --
exclude=kernel* --nobest
[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# yum install ipvsadm ipset
sysstat conntrack libseccomp -y
[root@k8s-master01 ~]# modprobe -- ip_vs
[root@k8s-master01 ~]# modprobe -- ip_vs_rr
[root@k8s-master01 ~]# modprobe -- ip_vs_wrr
[root@k8s-master01 ~]# modprobe -- ip_vs_sh
[root@k8s-master01 ~]# modprobe -- nf_conntrack
[root@k8s-master01 ~]# vim /etc/modules
load.d/ipvs.conf
# 加⼊以下内容
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# 检查是否加载:
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e
nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0ip_vs 172032 6
ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 172032 4
xt_conntrack,nf_nat,ipt_MASQUERADE,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 5
nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
开启⼀些k8s集群中必须的内核参数,**所有节点配置k8s内核:**
[root@k8s-master01 ~]# cat <<EOF >
/etc/sysctl.d/k8s.conf
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> fs.may_detach_mounts = 1
> vm.overcommit_memory=1
> net.ipv4.conf.all.route_localnet = 1
>
> vm.panic_on_oom=0
> fs.inotify.max_user_watches=89100
> fs.file-max=52706963
> fs.nr_open=52706963
> net.netfilter.nf_conntrack_max=2310720>
> net.ipv4.tcp_keepalive_time = 600
> net.ipv4.tcp_keepalive_probes = 3
> net.ipv4.tcp_keepalive_intvl =15
> net.ipv4.tcp_max_tw_buckets = 36000
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_max_orphans = 327680
> net.ipv4.tcp_orphan_retries = 3
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.ip_conntrack_max = 65536
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.tcp_timestamps = 0
> net.core.somaxconn = 16384
> EOF
[root@k8s-master01 ~]# sysctl --system
所有节点**配置完内核后,重启服务器,保证重启后内核依旧加载**
[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# lsmod | grep --color=auto -e
ip_vs -e nf_conntrack
ip_vs_ftp 16384 0
nf_nat 45056 3
ipt_MASQUERADE,nft_chain_nat,ip_vs_ftpip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 172032 24
ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip
_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs
_sed,ip_vs_ftp
nf_conntrack 172032 4
xt_conntrack,nf_nat,ipt_MASQUERADE,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 5
nf_conntrack,nf_nat,nf_tables,xfs,ip_vs
[root@k8s-master01 ~]#
第三章 基本组件安装
本节主要安装的是集群中⽤到的各种组件,⽐如Docker-ce、
Kubernetes各组件等
此处建议打个快照
3.1 Containerd作为Runtime
所有节点**安装docker-ce-24.0(如果已经有安装,也需要执⾏安装
升级到最新版)**
配置Containerd所需的模块(**所有节点):**
[root@k8s-master01 ~]# yum remove -y podman runc
containerd
[root@k8s-master01 ~]# yum install docker-ce
docker-ce-cli containerd.io -y
[root@k8s-master01 ~]# cat <<EOF | sudo tee
/etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF
overlay
br_netfilter
[root@k8s-master01 ~]#
所有节点**加载模块:**
[root@k8s-master01 ~]# modprobe -- overlay
[root@k8s-master01 ~]# modprobe -- br_netfilter
[root@k8s-master01 ~]#
所有节点**配置Containerd所需的内核:**
[root@k8s-master01 ~]# cat <<EOF | sudo tee
/etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8s-master01 ~]# sysctl --system
所有节点**配置Containerd的配置⽂件:**
(使⽤ containerd 默认配置⽣成⼀个 config.toml 配置⽂件,并将其
内容输出到 /etc/containerd/config.toml ⽂件中)
[root@k8s-master01 ~]# mkdir -p /etc/containerd
[root@k8s-master01 ~]# containerd config default |
tee /etc/containerd/config.toml
所有节点将Containerd的Cgroup改为Systemd:
[root@k8s-master01 ~]# vim
/etc/containerd/config.toml
Find the containerd.runtimes.runc.options section and add SystemdCgroup = true (if the key already exists, simply change its value; adding it twice will cause an error).
On all nodes, change sandbox_image (the pause image) to an address matching your own version, for example registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 (as in the sketch above).
Start Containerd on **all nodes and enable it at boot:**
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now
containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@k8s-master01 ~]#
Configure the runtime endpoint used by the crictl client on **all nodes:**
[root@k8s-master01 ~]# cat > /etc/crictl.yaml <<EOF
> runtime-endpoint:
unix:///run/containerd/containerd.sock
> image-endpoint:
unix:///run/containerd/containerd.sock
> timeout: 10
> debug: false
> EOF
[root@k8s-master01 ~]#
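As an optional sanity check (not part of the original notes), crictl should now be able to reach containerd through the endpoint configured above:
[root@k8s-master01 ~]# crictl info | head -n 5
[root@k8s-master01 ~]# crictl ps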
3.2 Installing K8s and etcd
Download the kubernetes server package on **Master01**
[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.28.0/kubernetes-server-linux-amd64.tar.gz
Download the etcd package on **master01**
[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz
Extract the kubernetes binaries on **master01**
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
Extract the etcd binaries on **master01**
[root@k8s-master01 ~]# tar -zxvf etcd-v3.5.9-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.9-linux-amd64/etcd{,ctl}
etcd-v3.5.9-linux-amd64/etcdctl
etcd-v3.5.9-linux-amd64/etcd
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.28.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.9
API version: 3.5
[root@k8s-master01 ~]#
Send the binaries to the other nodes from **master01**
[root@k8s-master01 ~]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# WorkNodes='k8s-node01 k8s-node02'
[root@k8s-master01 ~]# for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
k8s-master02
kubelet                       100%  106MB 281.6MB/s   00:00
kubectl 100% 48MB
302.1MB/s 00:00
kube-apiserver 100% 116MB
146.9MB/s 00:00
kube-controller-manager 100% 112MB
129.0MB/s 00:00
kube-scheduler 100% 53MB
144.0MB/s 00:00
kube-proxy 100% 52MB
145.7MB/s 00:00
etcd 100% 21MB
105.1MB/s 00:00
etcdctl 100% 16MB
113.7MB/s 00:00
k8s-master03
kubelet 100% 106MB
269.0MB/s 00:00
kubectl 100% 48MB
212.2MB/s 00:00
kube-apiserver 100% 116MB
150.4MB/s 00:00
kube-controller-manager 100% 112MB
109.9MB/s 00:01
kube-scheduler 100% 53MB
154.3MB/s 00:00 kube-proxy 100% 52MB
137.2MB/s 00:00
etcd 100% 21MB
126.1MB/s 00:00
etcdctl 100% 16MB
105.4MB/s 00:00
[root@k8s-master01 ~]# for NODE in $WorkNodes; do
scp /usr/local/bin/kube{let,-proxy}
$NODE:/usr/local/bin/ ; done
kubelet 100% 106MB
239.4MB/s 00:00
kube-proxy 100% 52MB
135.6MB/s 00:00
kubelet 100% 106MB
306.6MB/s 00:00
kube-proxy 100% 52MB
295.4MB/s 00:00
[root@k8s-master01 ~]#
Switch to the 1.28.x branch on the **Master01 node**
(for other versions, switch to the matching branch; ".x" is enough, there is no need to pick a specific patch version)
[root@k8s-master01 ~]# cd /root/k8s-ha-install && git checkout manual-installation-v1.28.x
Branch 'manual-installation-v1.28.x' set up to track 'origin/manual-installation-v1.28.x'.
Switched to a new branch 'manual-installation-v1.28.x'
[root@k8s-master01 k8s-ha-install]#
Chapter 4  Generating Certificates
This is the most critical part of a binary installation; a single mistake ruins everything, so make sure every step is correct.
Taking a snapshot here is strongly recommended!
Download the certificate-generation tools on **Master01**
[root@k8s-master01 k8s-ha-install]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
[root@k8s-master01 k8s-ha-install]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
[root@k8s-master01 k8s-ha-install]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
[root@k8s-master01 k8s-ha-install]#
4.1 Etcd Certificates
Create the etcd certificate directory on **all Master nodes**
[root@k8s-master01 ~]# mkdir /etc/etcd/ssl -p
[root@k8s-master01 ~]#
Create the kubernetes-related directories on **all nodes**
[root@k8s-master01 ~]# mkdir -p /etc/kubernetes/pki
[root@k8s-master01 ~]#
Generate the etcd certificates on the **Master01 node**
The CSR files used below are certificate signing request files; they contain the domain names, organization and unit information for the certificates.
[root@k8s-master01 ~]# cd /root/k8s-ha-install/pki
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca   ## generate the etcd CA certificate and CA key
2023/09/01 01:03:37 [INFO] generating a new CA key
and certificate from CSR
2023/09/01 01:03:37 [INFO] generate received
request
2023/09/01 01:03:37 [INFO] received CSR
2023/09/01 01:03:37 [INFO] generating key: rsa-2048
2023/09/01 01:03:37 [INFO] encoded CSR
2023/09/01 01:03:37 [INFO] signed certificate with
serial number
344130823394962800880998772770250831901274089808
[root@k8s-master01 pki]# cfssl gencert \
> -ca=/etc/etcd/ssl/etcd-ca.pem \
> -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
> -config=ca-config.json \
> -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.100.61,192.168.100.62,192.168.100.63 \
> -profile=kubernetes \
> etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
# Note: adjust the -hostname list to your own master node IP addresses.
2023/09/01 01:04:39 [INFO] generate received
request
2023/09/01 01:04:39 [INFO] received CSR
2023/09/01 01:04:39 [INFO] generating key: rsa-2048
2023/09/01 01:04:39 [INFO] encoded CSR
2023/09/01 01:04:39 [INFO] signed certificate with
serial number
467674155326432618988047679680337371584557335423
[root@k8s-master01 pki]#
Copy the certificates to the other nodes from **master01**
[root@k8s-master01 pki]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 pki]# WorkNodes='k8s-node01 k8s-node02'
[root@k8s-master01 pki]# for NODE in $MasterNodes; do
> ssh $NODE "mkdir -p /etc/etcd/ssl"
> for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
> scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
> done
> done
etcd-ca-key.pem 100% 1675
1.0MB/s 00:00
etcd-ca.pem 100% 1367
1.7MB/s 00:00
etcd-key.pem 100% 1675
1.1MB/s 00:00
etcd.pem 100% 1509
1.9MB/s 00:00
etcd-ca-key.pem 100% 1675
1.2MB/s 00:00
etcd-ca.pem 100% 1367
592.0KB/s 00:00
etcd-key.pem 100% 1675
1.5MB/s 00:00
etcd.pem 100% 1509
1.3MB/s 00:00
[root@k8s-master01 pki]#
4.2 K8s Component Certificates
Generate the kubernetes certificates on **Master01**
[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2023/09/01 01:06:52 [INFO] generating a new CA key
and certificate from CSR
2023/09/01 01:06:52 [INFO] generate received
request
2023/09/01 01:06:52 [INFO] received CSR
2023/09/01 01:06:52 [INFO] generating key: rsa-2048
2023/09/01 01:06:53 [INFO] encoded CSR
2023/09/01 01:06:53 [INFO] signed certificate with
serial number
700660882784581217234688273261359631764300667154
[root@k8s-master01 pki]#
# 10.96.0.0/16 is the k8s service network; if you need to change the service network, change 10.96.0.1 accordingly
# if this is not a highly available cluster, 192.168.15.88 is Master01's IP
Generate the apiserver certificate on Master01
[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem \
> -ca-key=/etc/kubernetes/pki/ca-key.pem \
> -config=ca-config.json \
> -hostname=10.96.0.1,192.168.15.88,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.15.101,192.168.15.102,192.168.15.103 \
> -profile=kubernetes \
> apiserver-csr.json | cfssljson -bare
/etc/kubernetes/pki/apiserver
2023/09/01 01:09:03 [INFO] generate received
request
2023/09/01 01:09:03 [INFO] received CSR
2023/09/01 01:09:03 [INFO] generating key: rsa-2048
2023/09/01 01:09:03 [INFO] encoded CSR
2023/09/01 01:09:03 [INFO] signed certificate with
serial number
700805877078105797489323146582505553062943677004
[root@k8s-master01 pki]#
Generate the apiserver aggregation (front-proxy) certificates on **Master01** (requestheader-client-xxx, requestheader-allowed-xxx: aggregator)
[root@k8s-master01 pki]# cfssl gencert -initca
front-proxy-ca-csr.json | cfssljson -bare
/etc/kubernetes/pki/front-proxy-ca
2023/09/01 01:09:49 [INFO] generating a new CA key
and certificate from CSR
2023/09/01 01:09:49 [INFO] generate received
request
2023/09/01 01:09:49 [INFO] received CSR
2023/09/01 01:09:49 [INFO] generating key: rsa-2048
2023/09/01 01:09:50 [INFO] encoded CSR
2023/09/01 01:09:50 [INFO] signed certificate with
serial number
80744563078517085527684184401947310203460343599
[root@k8s-master01 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
> -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> front-proxy-client-csr.json | cfssljson -bare
/etc/kubernetes/pki/front-proxy-client
# output (the warning can be ignored)
2023/09/01 01:09:56 [INFO] generate received
request
2023/09/01 01:09:56 [INFO] received CSR2023/09/01 01:09:56 [INFO] generating key: rsa-2048
2023/09/01 01:09:56 [INFO] encoded CSR
2023/09/01 01:09:56 [INFO] signed certificate with
serial number
269856273999475183076691732446396898422130380732
2023/09/01 01:09:56 [WARNING] This certificate
lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline
Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the
CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information
Requirements").
[root@k8s-master01 pki]#
Generate the controller-manager certificate on **Master01**
[root@k8s-master01 pki]# cfssl gencert \
> -ca=/etc/kubernetes/pki/ca.pem \
> -ca-key=/etc/kubernetes/pki/ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> manager-csr.json | cfssljson -bare
/etc/kubernetes/pki/controller-manager2023/09/01 01:11:24 [INFO] generate received
request
2023/09/01 01:11:24 [INFO] received CSR
2023/09/01 01:11:24 [INFO] generating key: rsa-2048
2023/09/01 01:11:24 [INFO] encoded CSR
2023/09/01 01:11:24 [INFO] signed certificate with
serial number
59801199172235379177295009346141259163675889793
2023/09/01 01:11:24 [WARNING] This certificate
lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline
Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the
CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information
Requirements").
[root@k8s-master01 pki]#
Note: if this is not a highly available cluster, change 192.168.15.88:8443 to master01's address and 8443 to the apiserver port (6443 by default).
set-cluster: **configure a cluster entry on Master01**
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/pki/ca.pem \
> --embed-certs=true \
> --server=https://192.168.15.88:8443 \
> --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 pki]#
set-context: **configure a context entry on Master01**
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
> --cluster=kubernetes \
> --user=system:kube-controller-manager \
> --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Context "system:kube-controller-manager@kubernetes"
created.
[root@k8s-master01 pki]#
set-credentials: **configure a user entry on Master01**
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
> --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
> --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
> --embed-certs=true \
> --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
User "system:kube-controller-manager" set.
[root@k8s-master01 pki]#
use-context: **set a context as the default on Master01**
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
> --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Switched to context "system:kube-controller-manager@kubernetes".
Generate the scheduler certificate on Master01
[root@k8s-master01 pki]# cfssl gencert \
> -ca=/etc/kubernetes/pki/ca.pem \
> -ca-key=/etc/kubernetes/pki/ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> scheduler-csr.json | cfssljson -bare
/etc/kubernetes/pki/scheduler
2023/09/01 01:14:36 [INFO] generate received
request
2023/09/01 01:14:36 [INFO] received CSR
2023/09/01 01:14:36 [INFO] generating key: rsa-2048
2023/09/01 01:14:36 [INFO] encoded CSR
2023/09/01 01:14:36 [INFO] signed certificate with
serial number
88018517488547413954267519985609730397109488269
2023/09/01 01:14:36 [WARNING] This certificate
lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline
Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the
CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information
Requirements").
[root@k8s-master01 pki]#
Note: if this is not a highly available cluster, change 192.168.15.88:8443 to master01's address and 8443 to the apiserver port (6443 by default).
Configure the scheduler kubeconfig on Master01
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/pki/ca.pem \
> --embed-certs=true \
> --server=https://192.168.15.88:8443 \
> --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
> --client-certificate=/etc/kubernetes/pki/scheduler.pem \
> --client-key=/etc/kubernetes/pki/scheduler-key.pem \
> --embed-certs=true \
> --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
User "system:kube-scheduler" set.
[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
> --cluster=kubernetes \
> --user=system:kube-scheduler \
> --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Context "system:kube-scheduler@kubernetes" created.
[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
> --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Switched to context "system:kube-scheduler@kubernetes".
Generate the admin certificate on Master01
[root@k8s-master01 pki]# cfssl gencert \
> -ca=/etc/kubernetes/pki/ca.pem \
> -ca-key=/etc/kubernetes/pki/ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
2023/09/01 01:16:13 [INFO] generate received
request
2023/09/01 01:16:13 [INFO] received CSR
2023/09/01 01:16:13 [INFO] generating key: rsa-2048
2023/09/01 01:16:13 [INFO] encoded CSR
2023/09/01 01:16:13 [INFO] signed certificate with
serial number
660421027543221851817737469949130636763120428998
2023/09/01 01:16:13 [WARNING] This certificate
lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline
Requirements for the Issuance and Managementof Publicly-Trusted Certificates, v.1.1.6, from the
CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information
Requirements").
[root@k8s-master01 pki]#
Note: if this is not a highly available cluster, change 192.168.15.88:8443 to master01's address and 8443 to the apiserver port (6443 by default).
Configure the admin kubeconfig on Master01
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/pki/ca.pem \
> --embed-certs=true \
> --server=https://192.168.15.88:8443 \
> --kubeconfig=/etc/kubernetes/admin.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin \
> --client-certificate=/etc/kubernetes/pki/admin.pem \
> --client-key=/etc/kubernetes/pki/admin-key.pem \
> --embed-certs=true \
> --kubeconfig=/etc/kubernetes/admin.kubeconfig
User "kubernetes-admin" set.
[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
Context "kubernetes-admin@kubernetes" created.
[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
Switched to context "kubernetes-admin@kubernetes".
[root@k8s-master01 pki]#
Create the ServiceAccount Key => secret on **Master01**
[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
Generating RSA private key, 2048 bit long modulus
(2 primes)
............+++++
...................................................
..........................+++++
e is 65537 (0x010001)
[root@k8s-master01 pki]# openssl rsa -in
/etc/kubernetes/pki/sa.key -pubout -out
/etc/kubernetes/pki/sa.pub
writing RSA key
[root@k8s-master01 pki]#
Send the certificates to the other nodes from **Master01**
[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do
> for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
> scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
> done;
> for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
> scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
> done;
> done
Check the certificate files
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr                    front-proxy-ca.csr
admin-key.pem                front-proxy-ca-key.pem
admin.pem                    front-proxy-ca.pem
apiserver.csr                front-proxy-client.csr
apiserver-key.pem            front-proxy-client-key.pem
apiserver.pem                front-proxy-client.pem
ca.csr                       sa.key
ca-key.pem                   sa.pub
ca.pem                       scheduler.csr
controller-manager.csr       scheduler-key.pem
controller-manager-key.pem   scheduler.pem
controller-manager.pem
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
23
[root@k8s-master01 pki]#
Taking a snapshot here is recommended!
Chapter 5  High-Availability Configuration
High-availability configuration (note: if this is not a highly available cluster, haproxy and keepalived do not need to be installed).
If you are installing in a public cloud, this chapter can also be skipped: use the cloud provider's load balancer, e.g. Alibaba Cloud SLB or Tencent Cloud ELB, in place of haproxy and keepalived, since most public clouds do not support keepalived.
Install keepalived and haproxy on **all Master nodes**
[root@k8s-master01 ~]# yum install keepalived haproxy -y
5.1 HAProxy Configuration
Configure HAProxy on **all Master nodes; the configuration is identical on each** (remember to adjust the IPs)
[root@k8s-master01 ~]# vim /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.15.101:6443 check
  server k8s-master02 192.168.15.102:6443 check
  server k8s-master03 192.168.15.103:6443 check
5.2 Keepalived Configuration
Configure keepalived on all Master nodes. The configuration differs per node, so pay attention to the differences.
[Note each node's IP, the VIP and the network interface (the interface parameter); only master01's configuration is listed here.]
[root@k8s-master01 ~]# vim
/etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens32
mcast_src_ip 192.168.15.101
virtual_router_id 51
priority 101
nopreempt
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.15.88
}
track_script {
chk_apiserver
}
}
5.3 Health Check Configuration
Configure the health-check script on all master nodes:
[root@k8s-master01 ~]# vim
/etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
[root@k8s-master01 ~]# chmod +x
/etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]#
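Optionally, once haproxy has been started (next step), the script can be exercised by hand; the exit code should then be 0. This manual check is an extra step, not part of the original procedure:
[root@k8s-master01 ~]# bash /etc/keepalived/check_apiserver.sh; echo $?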
Start haproxy and keepalived on **all master nodes**
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now
haproxy
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.
[root@k8s-master01 ~]# systemctl enable --now keepalived
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@k8s-master01 ~]#
VIP test
[root@k8s-master01 ~]# ping 192.168.15.88 -c 3
PING 192.168.100.69 (192.168.100.69) 56(84) bytes
of data.
64 bytes from 192.168.15.88: icmp_seq=1 ttl=64
time=0.052 ms
64 bytes from 192.168.15.88: icmp_seq=2 ttl=64
time=0.046 ms
64 bytes from 192.168.15.88: icmp_seq=3 ttl=64
time=0.114 ms
--- 192.168.100.69 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss,
time 2044ms
rtt min/avg/max/mdev = 0.046/0.070/0.114/0.032 ms
[root@k8s-master01 ~]#
Important: if keepalived and haproxy are installed, you must verify that keepalived is working properly!
[root@k8s-master01 ~]# telnet 192.168.15.88 8443
Trying 192.168.15.88...
Connected to 192.168.15.88.
Escape character is '^]'.
Connection closed by foreign host.
[root@k8s-master01 ~]#
If the VIP cannot be pinged, or telnet does not show the '^]' prompt, the VIP is not usable; do not continue. Troubleshoot keepalived first, e.g. the firewall and SELinux, the state of haproxy and keepalived, and the listening ports.
Check the firewall state on **all nodes; it must be disabled and inactive: systemctl status firewalld**
Check the SELinux state on **all nodes; it must be disabled: getenforce**
Check the haproxy and keepalived state on the **master nodes: systemctl status keepalived haproxy**
Check the listening ports on the **master nodes: netstat -lntp**
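A compact sketch of these checks, run as described above (expected values: inactive/disabled firewalld, Disabled SELinux, active haproxy and keepalived, and listeners on 8443/6443):
[root@k8s-master01 ~]# systemctl is-active firewalld; getenforce
[root@k8s-master01 ~]# systemctl status keepalived haproxy | grep Active
[root@k8s-master01 ~]# netstat -lntp | grep -E '8443|6443'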
Chapter 6  Kubernetes Component Configuration
6.1 etcd Configuration
The etcd configuration is largely the same on each node; remember to adjust the hostname and IP addresses in each Master node's etcd config.
6.1.1 master01
Only master01's configuration is listed here; for master02 and master03, change the hostname and IPs accordingly.
[root@k8s-master01 ~]# vim
/etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.15.101:2380'
listen-client-urls:
'https://192.168.15.101:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls:
'https://192.168.15.101:2380'
advertise-client-urls:
'https://192.168.15.101:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.15.101:2380,k8s-master02=https://192.168.15.102:2380,k8s-master03=https://192.168.15.103:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
6.1.2 Creating the etcd Service
Create the etcd service on **all Master nodes** and start it
[root@k8s-master01 ~]# vim
/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on **all Master nodes**
[root@k8s-master01 ~]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master01 ~]# ln -s /etc/etcd/ssl/*
/etc/kubernetes/pki/etcd/
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
[root@k8s-master01 ~]#
Check the etcd status
[root@k8s-master01 ~]# export ETCDCTL_API=3
[root@k8s-master01 ~]# etcdctl --
endpoints="192.168.15.103:2379,192.168.15.102:2379,
192.168.15.101:2379" \
> --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
> --cert=/etc/kubernetes/pki/etcd/etcd.pem \
> --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
> endpoint status --write-out=table+---------------------+------------------+---------
+---------+-----------+------------+-----------+---
---------+--------------------+--------+
| ENDPOINT | ID | VERSION
| DB SIZE | IS LEADER | IS LEARNER | RAFT TERM |
RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------
+---------+-----------+------------+-----------+---
---------+--------------------+--------+
| 192.168.100.63:2379 | a1dc876ddb79d945 | 3.5.9
| 20 kB | false | false | 2 |
8 | 8 | |
| 192.168.100.62:2379 | 94fc7c634260ffca | 3.5.9
| 20 kB | false | false | 2 |
8 | 8 | |
| 192.168.100.61:2379 | 578060df8daece21 | 3.5.9
| 20 kB | true | false | 2 |
8 | 8 | |
+---------------------+------------------+---------
+---------+-----------+------------+-----------+---
---------+--------------------+--------+
[root@k8s-master01 ~]#
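A related optional health check, using the same certificates (not part of the original steps):
[root@k8s-master01 ~]# etcdctl --endpoints="192.168.15.101:2379,192.168.15.102:2379,192.168.15.103:2379" \
> --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
> --cert=/etc/kubernetes/pki/etcd/etcd.pem \
> --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
> endpoint health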
6.2 API Server
Create the kube-apiserver service on **all Master nodes**
# Note: if this is not a highly available cluster, change 192.168.15.88 to master01's address
Configure the API Server service unit on **master01**
Note that the k8s service network used here is 10.96.0.0/16; it must not overlap with the host network or the Pod network. Adjust as needed.
Only master01's configuration is listed here; adapt the IPs for the other master nodes.
[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.15.101 \
      --service-cluster-ip-range=10.96.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.15.101:2379,https://192.168.15.102:2379,https://192.168.15.103:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master01 ~]#
Check the kube-apiserver status
[root@k8s-master01 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube
apiserver.service;>
Active: active (running) since Fri 2023-09-01
13:54:46 CST; 38s>
Docs: https://github.com/kubernetes/kubernetes
Main PID: 3164 (kube-apiserver) Tasks: 7 (limit: 11057)
Memory: 296.3M
CGroup: /system.slice/kube-apiserver.service
#$3164 /usr/local/bin/kube-apiserver --
v=2 --allow-priv>
9⽉ 01 13:54:53 k8s-master01 kube-apiserver[3164]:
I0901 13:54:53.>
9⽉ 01 13:54:53 k8s-master01 kube-apiserver[3164]:
I0901 13:54:53.>
9⽉ 01 13:54:53 k8s-master01 kube-apiserver[3164]:
I0901 13:54:53.>
9⽉ 01 13:54:53 k8s-master01 kube-apiserver[3164]:
I0901 13:54:53.>
9⽉ 01 13:54:53 k8s-master01 kube-apiserver[3164]:
[-]poststarthoo>
9⽉ 01 13:54:54 k8s-master01 kube-apiserver[3164]:
E0901 13:54:54.>
9⽉ 01 13:55:04 k8s-master01 kube-apiserver[3164]:
W0901 13:55:04.>
9⽉ 01 13:55:04 k8s-master01 kube-apiserver[3164]:
I0901 13:55:04.>
9⽉ 01 13:55:04 k8s-master01 kube-apiserver[3164]:
I0901 13:55:04.>
9⽉ 01 13:55:20 k8s-master01 kube-apiserver[3164]:
I0901 13:55:20.>
[root@k8s-master01 ~]#
6.3 Controller Manager
Configure the kube-controller-manager service on **all Master nodes (the configuration is identical on every master)**
Note that the k8s Pod network used here is 172.16.0.0/16; it must not overlap with the host network or the k8s Service network. Adjust as needed.
[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/16 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

Start kube-controller-manager on **all Master nodes**
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master01 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes
Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube
controller-manager>
Active: active (running) since Fri 2023-09-01
14:01:21 CST; 52s>
Docs: https://github.com/kubernetes/kubernetes
Main PID: 3649 (kube-controller)
Tasks: 5 (limit: 11057)
Memory: 132.2M
CGroup: /system.slice/kube-controller
manager.service
#$3649 /usr/local/bin/kube-controller
manager --v=2 --r>
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
9⽉ 01 14:01:22 k8s-master01 kube-controller
manager[3649]: I0901 >
lines 1-20/20 (END)
6.4 Scheduler
Configure the kube-scheduler service on **all Master nodes (the configuration is identical on every master)**
[root@k8s-master01 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kuberne
tes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --leader-elect=true \
      --authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master01 ~]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube
scheduler.service;>
Active: active (running) since Fri 2023-09-01
14:04:15 CST; 5s >
Docs: https://github.com/kubernetes/kubernetes
Main PID: 3857 (kube-scheduler)
Tasks: 7 (limit: 11057)
Memory: 71.2M
CGroup: /system.slice/kube-scheduler.service
#$3857 /usr/local/bin/kube-scheduler --
v=2 --leader-ele>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
9⽉ 01 14:04:17 k8s-master01 kube-scheduler[3857]:
I0901 14:04:17.>
[root@k8s-master01 ~]#
Chapter 7  TLS Bootstrapping Configuration
TLS Bootstrapping is a mechanism that automatically issues and configures the TLS certificates a node needs for secure communication with the Kubernetes cluster.
Note: if this is not a highly available cluster, change 192.168.15.88:8443 to master01's address and 8443 to the apiserver port (6443 by default).
Create the bootstrap kubeconfig on **Master01**
[root@k8s-master01 ~]# cd /root/k8s-ha-install/bootstrap
[root@k8s-master01 bootstrap]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/pki/ca.pem \
> --embed-certs=true \
> --server=https://192.168.15.88:8443 \
> --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 bootstrap]# kubectl config set-credentials tls-bootstrap-token-user \
> --token=c8ad9c.2e4d610cf3e7426e \
> --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
User "tls-bootstrap-token-user" set.
[root@k8s-master01 bootstrap]# kubectl config set-context tls-bootstrap-token-user@kubernetes \
> --cluster=kubernetes \
> --user=tls-bootstrap-token-user \
> --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Context "tls-bootstrap-token-user@kubernetes" created.
[root@k8s-master01 bootstrap]# kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Switched to context "tls-bootstrap-token-user@kubernetes".
[root@k8s-master01 bootstrap]# mkdir -p /root/.kube
; cp /etc/kubernetes/admin.kubeconfig
/root/.kube/config
[root@k8s-master01 bootstrap]#
Only continue if the cluster status can be queried normally; otherwise stop and troubleshoot the k8s components.
[root@k8s-master01 bootstrap]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
[root@k8s-master01 bootstrap]# kubectl create -f
bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubele
t-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube
apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system
:kube-apiserver created
[root@k8s-master01 bootstrap]#
Chapter 8  Node Configuration
8.1 Copying the Certificates
Copy the certificates from the **Master01 node to the Node nodes**
[root@k8s-master01 ~]# cd /etc/kubernetes/
[root@k8s-master01 kubernetes]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
> ssh $NODE mkdir -p /etc/kubernetes/pki
> for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
> scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
> done
> done
ca.pem 100%
1411 611.5KB/s 00:00
ca-key.pem 100%
1675 560.3KB/s 00:00
front-proxy-ca.pem 100%
1143 738.2KB/s 00:00
bootstrap-kubelet.kubeconfig 100%
2301 598.7KB/s 00:00
ca.pem 100%
1411 529.5KB/s 00:00
ca-key.pem 100%
1675 417.6KB/s 00:00
front-proxy-ca.pem 100%
1143 307.8KB/s 00:00
bootstrap-kubelet.kubeconfig 100%
2301 603.5KB/s 00:00
ca.pem 100%
1411 404.2KB/s 00:00
ca-key.pem 100%
1675 515.8KB/s   00:00
front-proxy-ca.pem 100%
1143 380.2KB/s 00:00
bootstrap-kubelet.kubeconfig 100%
2301 550.5KB/s 00:00
ca.pem 100%
1411 645.2KB/s 00:00
ca-key.pem 100%
1675 1.4MB/s 00:00
front-proxy-ca.pem 100%
1143 1.2MB/s 00:00
bootstrap-kubelet.kubeconfig 100%
2301 760.2KB/s 00:00
[root@k8s-master01 kubernetes]#
8.2 Kubelet Configuration
Create the required directories on **all nodes**
[root@k8s-master01 ~]# mkdir -p /var/lib/kubelet
/var/log/kubernetes
/etc/systemd/system/kubelet.service.d
/etc/kubernetes/manifests/
[root@k8s-master01 ~]#
Configure the kubelet service on **all nodes**
[root@k8s-master01 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kuberne
tes
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Configure the kubelet service drop-in file on **all nodes**
(it can also be written directly into kubelet.service)
[root@k8s-master01 ~]# vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
Create the kubelet configuration file on **all nodes**
Note: if you changed the k8s service network, you must change clusterDNS in kubelet-conf.yml to the tenth address of the k8s Service network, e.g. 10.96.0.10.
[root@k8s-master01 ~]# vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
[root@k8s-master01 ~]#
Start kubelet on **all nodes**
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now
kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master01 ~]# systemctl status kubelet
At this point it is normal for the system log /var/log/messages to show only messages like the following; they will disappear once Calico is installed:
Unable to update cni config: no networks found in /etc/cni/net.d
If there are many other errors, or large amounts of unexplained output, the kubelet configuration is wrong and needs to be checked.
Check the cluster state (Ready or NotReady are both acceptable at this stage):
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady <none> 2m31s v1.28.0
k8s-master02 NotReady <none> 2m31s v1.28.0
k8s-master03 NotReady <none> 2m31s v1.28.0
k8s-node01 NotReady <none> 2m31s v1.28.0
k8s-node02 NotReady <none> 2m31s v1.28.0
[root@k8s-master01 ~]#
8.3 kube-proxy Configuration
Note: if this is not a highly available cluster, change 192.168.15.88:8443 to master01's address and 8443 to the apiserver port (6443 by default).
Run the following on **Master01**
[root@k8s-master01 ~]# cd /root/k8s-ha-install/pki
[root@k8s-master01 pki]# cfssl gencert \
> -ca=/etc/kubernetes/pki/ca.pem \
> -ca-key=/etc/kubernetes/pki/ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> kube-proxy-csr.json | cfssljson -bare
/etc/kubernetes/pki/kube-proxy2023/09/01 20:58:25 [INFO] generate received
request
2023/09/01 20:58:25 [INFO] received CSR
2023/09/01 20:58:25 [INFO] generating key: rsa-2048
2023/09/01 20:58:25 [INFO] encoded CSR
2023/09/01 20:58:25 [INFO] signed certificate with
serial number
409810819523538273888266911652028641582986884038
2023/09/01 20:58:25 [WARNING] This certificate
lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline
Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the
CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information
Requirements").
[root@k8s-master01 pki]#
Note: if this is not a highly available cluster, change 192.168.100.69:8443 to master01's address and 8443 to the apiserver port (6443 by default).
Run the following on **Master01**
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
> --certificate-authority=/etc/kubernetes/pki/ca.pem \
> --embed-certs=true \
> --server=https://192.168.15.88:8443 \
> --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-proxy \
> --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
> --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
> --embed-certs=true \
> --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
User "system:kube-proxy" set.
[root@k8s-master01 pki]# kubectl config set-context system:kube-proxy@kubernetes \
> --cluster=kubernetes \
> --user=system:kube-proxy \
> --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Context "system:kube-proxy@kubernetes" created.
[root@k8s-master01 pki]# kubectl config use-context system:kube-proxy@kubernetes \
> --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Switched to context "system:kube-proxy@kubernetes".
[root@k8s-master01 pki]#
Send the kubeconfig to the other nodes from **Master01**
[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do
> scp /etc/kubernetes/kube-proxy.kubeconfig
$NODE:/etc/kubernetes/kube-proxy.kubeconfig
> done
kube-proxy.kubeconfig 100%
6482 1.7MB/s 00:00
kube-proxy.kubeconfig 100%
6482 218.3KB/s 00:00
[root@k8s-master01 pki]# for NODE in k8s-node01
k8s-node02; do
> scp /etc/kubernetes/kube-proxy.kubeconfig
$NODE:/etc/kubernetes/kube-proxy.kubeconfig
> done
kube-proxy.kubeconfig 100%
6482 1.3MB/s 00:00
kube-proxy.kubeconfig 100%
6482 2.2MB/s 00:00
[root@k8s-master01 pki]#
Add the kube-proxy configuration and service files on **all nodes:**
[root@k8s-master01 pki]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kuberne
tes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
If you changed the cluster's Pod network, **update clusterCIDR in kube-proxy.yaml to the new Pod network:**
[root@k8s-master01 pki]# vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 172.16.0.0/16
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
[root@k8s-master01 pki]#
Start kube-proxy on **all nodes**
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now
kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-master01 pki]#
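Since kube-proxy is configured in IPVS mode above, the generated virtual servers can optionally be inspected once it is running (requires the ipvsadm package; this check is not part of the original steps):
[root@k8s-master01 pki]# yum install -y ipvsadm
[root@k8s-master01 pki]# ipvsadm -Ln | head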
At this point it is again normal for /var/log/messages to show only messages like the following; they will disappear once Calico is installed:
Unable to update cni config: no networks found in /etc/cni/net.d
Chapter 9  Installing Calico
9.1 Installing the Officially Recommended Version
Run on **master01**
On **master01, change Calico's network: replace the POD_CIDR placeholder with your own Pod network**
Make sure this network really is your Pod network.
[root@k8s-master01 pki]# cd /root/k8s-ha-install/calico/
[root@k8s-master01 calico]# ls
calico.yaml
[root@k8s-master01 calico]# sed -i
"s#POD_CIDR#172.16.0.0/16#g" calico.yaml
[root@k8s-master01 calico]# grep "IPV4POOL_CIDR"
calico.yaml -A 1
- name: CALICO_IPV4POOL_CIDR
value: "172.16.0.0/16"
[root@k8s-master01 calico]# kubectl apply -f
calico.yaml
Check the container status on **master01**
[root@k8s-master01 calico]# kubectl get po -n kube-system
NAME READY
STATUS RESTARTS AGE
calico-kube-controllers-6d48795585-7bpnl 1/1
Running 0 95s
calico-node-k27fv 1/1
Running 0 95s
calico-node-nzgms 1/1
Running 0 95s
calico-node-pfw59 1/1
Running 0 95s
calico-node-qrg2q 1/1
Running 0 95s
calico-node-wts56 1/1
Running 0 95s
[root@k8s-master01 calico]#
If a container is in an abnormal state, use kubectl describe or kubectl logs to inspect it, for example:
kubectl logs -f POD_NAME -n kube-system
kubectl logs -f POD_NAME -c upgrade-ipam -n kube-system
Chapter 10  Installing CoreDNS
Install the officially recommended version on **master01**
If you changed the k8s service network, change the CoreDNS service IP to the tenth IP of the k8s service network (**run on master01**)
[root@k8s-master01 ~]# cd /root/k8s-ha-install/
[root@k8s-master01 k8s-ha-install]#
[root@k8s-master01 k8s-ha-install]#
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
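# The trailing 0 above is intentional: it appends "0" to the kubernetes service ClusterIP (e.g. 10.96.0.1), producing the tenth address 10.96.0.10 that CoreDNS will use.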
[root@k8s-master01 k8s-ha-install]# sed -i
"s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g"
CoreDNS/coredns.yaml
[root@k8s-master01 k8s-ha-install]#
Install CoreDNS on **master01**
[root@k8s-master01 k8s-ha-install]# kubectl create
-f CoreDNS/coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredn
s created
clusterrolebinding.rbac.authorization.k8s.io/system
:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@k8s-master01 k8s-ha-install]#
Chapter 11  Installing Metrics Server
In recent Kubernetes versions, system resource metrics are collected by metrics-server, which gathers memory, disk, CPU and network usage for nodes and Pods.
Install metrics server on **master01**
[root@k8s-master01 k8s-ha-install]# cd /root/k8s-ha-install/metrics-server
[root@k8s-master01 metrics-server]# kubectl create
-f .
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggreg
ated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metric
s-server created
rolebinding.rbac.authorization.k8s.io/metrics
server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metric
s-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system
:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k
8s.io created
[root@k8s-master01 metrics-server]#
Wait for metrics server to start, then check its status
[root@k8s-master01 metrics-server]# kubectl top
node
NAME CPU(cores) CPU% MEMORY(bytes)
MEMORY%
k8s-master01 436m 21% 936Mi
55%
k8s-master02 434m 21% 992Mi
58%
k8s-master03 160m 8% 929Mi
55%
k8s-node01 113m 5% 687Mi
40%
k8s-node02 108m 5% 770Mi
45%
[root@k8s-master01 metrics-server]#
If you see the following error, wait about 10 minutes and check again:
Error from server (ServiceUnavailable): the server is currently
unable to handle the request (get nodes.metrics.k8s.io)
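Pod-level metrics can be checked the same way once metrics-server is healthy (optional):
[root@k8s-master01 metrics-server]# kubectl top pod -n kube-system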
Chapter 12  Installing the Dashboard
12.1 Dashboard Deployment
Kubernetes Dashboard is a web UI for visually managing and monitoring a Kubernetes cluster. It provides a user-friendly interface for viewing and managing the applications, services, nodes and other key resources in the cluster.
The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and to execute commands inside containers.
12.1.1 Installing a Specific Dashboard Version
Run on Master01
[root@k8s-master01 metrics-server]# cd /root/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin
user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard
created
clusterrole.rbac.authorization.k8s.io/kubernetes
dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes
dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubern
etes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master01 dashboard]#
12.1.2 Logging in to the Dashboard
Change the dashboard service type to NodePort on **Master01**
[root@k8s-master01 dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change ClusterIP to NodePort (skip this step if it is already NodePort).
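The same change can be made non-interactively with a patch; this one-liner is an alternative sketch, not the method used above:
[root@k8s-master01 dashboard]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'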
Check the port number on **Master01:**
[root@k8s-master01 dashboard]# kubectl get svc
kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP
EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.96.120.157
<none> 443:30237/TCP 4m55s
[root@k8s-master01 dashboard]#
Using your own port number, the dashboard can be reached via the IP of any host running kube-proxy plus that port.
Access the Dashboard at https://192.168.15.88:30237 (replace 30237 with your own port) and choose "Token" as the login method.
Create a login token on **Master01:**
[root@k8s-master01 dashboard]# kubectl create token admin-user -n kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6Im9LMEFwQS11Rzc1YU40Zkp
5QnVPY0p6akxiRktfZ0hNbWtZd1ZrMlh2UkUifQ.eyJhdWQiOls
iaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZX
IubG9jYWwiXSwiZXhwIjoxNjkzNTc4Nzg2LCJpYXQiOjE2OTM1N
zUxODYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0
LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJ
uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW
50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMzdiNWY2M
jEtOTEzNC00NGYwLWIyY2UtZWViZWFlZmZlYjlhIn19LCJuYmYi
OjE2OTM1NzUxODYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3V
udDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.TFzNWCVfudxyNl
mkbWrkc4d21bx4VqYmArvr3eUvNXl1DrKFUyDLUWMLyv7R
x1rHPb3avlJoA3Zn40LwiOHltV9blQypuHr52-
bbzWbv5uy7Pa_vPw9LMELwZ4Vf7E807LLkhu0tTPCaL_iLbSFrL
tZRkiJUGkVmfMZLeHAh4BzKi1DOxZhBCIgsbK
GTfQWSwn5bApwvD7fqMZTL4ou9oIq2CGELpp9eVEdwtqq80OKgi
LfBA9KKYtmAlhbXAib_G1uNV4tN3XfdwkQwYx2ZDEazQ06y5tGm
XlfcBIq4hH7rCN7Kl7Pvo3C0OEqucuJGHJ820uJyQ8yzqvPXzcr
A
[root@k8s-master01 dashboard]#
Enter the token value into the Token field and click Sign in to access the Dashboard.
Chapter 13  Cluster Availability Verification
13.1 All nodes are healthy
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 47m v1.28.0
k8s-master02 Ready <none> 47m v1.28.0
k8s-master03 Ready <none> 47m v1.28.0
k8s-node01 Ready <none> 47m v1.28.0
k8s-node02 Ready <none> 47m v1.28.0
[root@k8s-master01 ~]#
13.2 All Pods are healthy
[root@k8s-master01 ~]# kubectl get po -A
NAMESPACE NAME
READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-
6d48795585-7bpnl 1/1 Running 1 (23m ago)
28m
kube-system calico-node-k27fv
1/1 Running 1 (23m ago)
28m
kube-system calico-node-nzgms
1/1 Running 1 (22m ago)
28m
kube-system calico-node-pfw59
1/1 Running 1 (22m ago)
28m
kube-system calico-node-qrg2q
1/1 Running 1 (23m ago)
28m
kube-system calico-node-wts56
1/1 Running 1 (22m ago)
28m
kube-system coredns-788958459b-gzrj7
1/1 Running 0 18m
kube-system metrics-server-8f77d49f6-
fhcbj 1/1 Running 0
17m
kubernetes-dashboard dashboard-metrics-scraper-
7b554c884f-6wlcz 1/1 Running 0
13m
kubernetes-dashboard kubernetes-dashboard-
54b699784c-8s77s 1/1 Running 0
13m
[root@k8s-master01 ~]#
13.3 The cluster networks do not conflict
[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP
PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none>
443/TCP 7h45m
[root@k8s-master01 ~]# kubectl get po -A -owide
NAMESPACE              NAME
READY STATUS RESTARTS AGE
IP NODE NOMINATED NODE
READINESS GATES
kube-system calico-kube-controllers-
6d48795585-7bpnl 1/1 Running 1 (23m ago)
29m 172.16.58.194 k8s-node02 <none>
<none>
kube-system calico-node-k27fv
1/1 Running 1 (23m ago)
29m 192.168.15.105 k8s-node02 <none>
<none>
kube-system calico-node-nzgms
1/1 Running 1 (23m ago)
29m 192.168.15.101 k8s-master01 <none>
<none>
kube-system calico-node-pfw59
1/1 Running 1 (23m ago)
29m 192.168.15.103 k8s-master03 <none>
<none>
kube-system calico-node-qrg2q
1/1 Running 1 (23m ago)
29m 192.168.15.104 k8s-node01 <none>
<none>kube-system calico-node-wts56
1/1 Running 1 (23m ago)
29m 192.168.15.102 k8s-master02 <none>
<none>
kube-system coredns-788958459b-gzrj7
1/1 Running 0 19m
172.16.32.129 k8s-master01 <none>
<none>
kube-system metrics-server-8f77d49f6-
fhcbj 1/1 Running 0
17m 172.16.85.193 k8s-node01 <none>
<none>
kubernetes-dashboard dashboard-metrics-scraper-
7b554c884f-6wlcz 1/1 Running 0
14m 172.16.195.1 k8s-master03 <none>
<none>
kubernetes-dashboard kubernetes-dashboard-
54b699784c-8s77s 1/1 Running 0
14m 172.16.122.129 k8s-master02 <none>
<none>
[root@k8s-master01 ~]#
13.4 Resources can be created normally
[root@k8s-master01 ~]# kubectl create deploy cluster-test --image=registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools -- sleep 3600
deployment.apps/cluster-test created
[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS
RESTARTS AGE
cluster-test-66bb44bd88-gkq8m 1/1 Running 0
100s
[root@k8s-master01 ~]# kubectl get po -owide
NAME READY STATUS
RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
cluster-test-66bb44bd88-gkq8m 1/1 Running 0
103s 172.16.85.194 k8s-node01 <none>
<none>
[root@k8s-master01 ~]#13.5 Pod必须能够解析Service(同
namespace和跨namespace)
1. nslookup kubernetes
可以解析到server的IP地址说明同namespace可以解析
1. nslookup kube-dns.kube-system
[root@k8s-master01 ~]# kubectl exec -it cluster
test-66bb44bd88-gkq8m -- bash
(13:58 cluster-test-66bb44bd88-gkq8m:/) nslookup
kubernetes
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1   # the service IP is resolved, so same-namespace resolution works
(13:58 cluster-test-66bb44bd88-gkq8m:/)(13:58 cluster-test-66bb44bd88-gkq8m:/) nslookup
kube-dns.kube-system
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kube-dns.kube-system.svc.cluster.local
Address: 10.96.0.10   # the tenth IP of the service network is resolved, i.e. kube-dns is reachable
(14:00 cluster-test-66bb44bd88-gkq8m:/)
The tenth IP of the service network is resolved, meaning kube-dns can be resolved, so cross-namespace resolution also works.
13.6 Every node must be able to reach the kubernetes service on 443 and the kube-dns service on 53
[root@k8s-master01 ~]# curl https://10.96.0.1:443
curl: (60) SSL certificate problem: unable to get
local issuer certificate
More details here:
https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server
and therefore could not
establish a secure connection to it. To learn more
about this situation and
how to fix it, please visit the web page mentioned
above.
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server
[root@k8s-master01 ~]#
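If dig is available (bind-utils package), port 53 can also be exercised with an actual DNS query from any node; this is an optional extra check, not part of the original steps:
[root@k8s-master01 ~]# dig @10.96.0.10 kubernetes.default.svc.cluster.local +short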
13.7 Pods must be able to communicate with each other (same namespace and across namespaces)
[root@k8s-master01 ~]# kubectl get po -nkube-system
-owide
NAME READY
STATUS RESTARTS AGE IP NODE
NOMINATED NODE   READINESS GATES
calico-kube-controllers-6d48795585-7bpnl   1/1
Running 1 (61m ago) 66m 172.16.58.194 k8s
node02 <none> <none>
calico-node-k27fv 1/1
Running 1 (61m ago) 66m 192.168.15.105 k8s
node02 <none> <none>
calico-node-nzgms 1/1
Running 1 (61m ago) 66m 192.168.15.101 k8s
master01 <none> <none>
calico-node-pfw59 1/1
Running 1 (61m ago) 66m 192.168.15.103 k8s
master03 <none> <none>
calico-node-qrg2q 1/1
Running 1 (61m ago) 66m 192.168.15.104 k8s
node01 <none> <none>
calico-node-wts56 1/1
Running 1 (61m ago) 66m 192.168.15.102 k8s
master02 <none> <none>
coredns-788958459b-gzrj7 1/1
Running 0 56m 172.16.32.129 k8s
master01 <none> <none>
metrics-server-8f77d49f6-fhcbj 1/1
Running 0 55m 172.16.85.193 k8s
node01 <none> <none>
[root@k8s-master01 ~]# kubectl get po -owideNAME READY STATUS
RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
cluster-test-66bb44bd88-gkq8m 1/1 Running 0
27m 172.16.85.194 k8s-node01 <none>
<none>
[root@k8s-master01 ~]# kubectl exec -it cluster
test-66bb44bd88-gkq8m -- bash
(14:18 cluster-test-66bb44bd88-gkq8m:/) ping
172.16.58.194 -c 3
PING 172.16.58.194 (172.16.58.194) 56(84) bytes of
data.
64 bytes from 172.16.58.194: icmp_seq=1 ttl=62
time=0.643 ms
64 bytes from 172.16.58.194: icmp_seq=2 ttl=62
time=1.59 ms
64 bytes from 172.16.58.194: icmp_seq=3 ttl=62
time=1.44 ms
--- 172.16.58.194 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss,
time 2024ms
rtt min/avg/max/mdev = 0.643/1.228/1.599/0.420 ms
(14:19 cluster-test-66bb44bd88-gkq8m:/)
From a pod on node01, pinging a pod on node02 succeeds.
13.8 Pods must be able to communicate with each other (same machine and across machines)
On all nodes:
[root@k8s-master01 ~]# kubectl get po -owide
NAME READY STATUS
RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
cluster-test-66bb44bd88-gkq8m 1/1 Running 0
19m 172.16.85.194 k8s-node01 <none>
<none>
[root@k8s-master01 ~]# ping 172.16.85.194 -c 3   # run this ping from every node
PING 172.16.85.194 (172.16.85.194) 56(84) bytes of
data.
64 bytes from 172.16.85.194: icmp_seq=1 ttl=63
time=0.574 ms
64 bytes from 172.16.85.194: icmp_seq=2 ttl=63
time=0.262 ms
64 bytes from 172.16.85.194: icmp_seq=3 ttl=63
time=0.479 ms
--- 172.16.85.194 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss,
time 2041ms
rtt min/avg/max/mdev = 0.262/0.438/0.574/0.131 ms
[root@k8s-master01 ~]#