Offline deployment of KubeSphere v3.4.0 + Rook-Ceph with KubeKey



Server planning

IP address plan

| Hostname | IP address | Notes |
| --- | --- | --- |
| k8s-harbor1 (harbor.k8s.hebei) | 10.122.249.151/24 | Primary private image registry; also runs HAProxy for the Kubernetes API |
| k8s-harbor2 (harbor2.k8s.hebei) | 10.122.249.152/24 | Secondary (replica) private image registry; also runs HAProxy for the Kubernetes API |
| k8s-ha-vip (vip.k8s.hebei) | 10.122.249.153/24 | Floating IP for Kubernetes high availability; not bound to a physical server |
| k8s-master1 | 10.122.249.154/24 | K8s control-plane node 1 |
| k8s-master2 | 10.122.249.155/24 | K8s control-plane node 2 |
| k8s-master3 | 10.122.249.156/24 | K8s control-plane node 3 |
| k8s-node01 | 10.122.249.157/24 | K8s worker node, also a Ceph storage node |
| k8s-node02 | 10.122.249.158/24 | K8s worker node, also a Ceph storage node |
| k8s-node03 | 10.122.249.159/24 | K8s worker node, also a Ceph storage node |
| k8s-node04 | 10.122.249.160/24 | K8s worker node, also a backup node |
| k8s-node05 | 10.122.249.161/24 | K8s worker node, also a backup node |

Software versions

| Software | Version | Notes |
| --- | --- | --- |
| Operating system | CentOS-7.9-x86_64-2009 | kernel upgraded to 6.0.10-1 |
| Kubernetes | v1.26.5 | Kubernetes cluster |
| KubeSphere | v3.4.0 | KubeSphere management platform |
| kubekey | v3.0.13 | cluster deployment tool |
| Harbor | v2.6.0 | private image registry |
| Containerd | v1.6.4 | container runtime |
| Rook | v1.9.13 | Ceph operator for Kubernetes |
| Ceph | v16.2.10 | distributed storage |

Port requirements

| Service | Protocol | Start port | End port | Notes |
| --- | --- | --- | --- | --- |
| ssh | TCP | 22 | | ssh port |
| etcd | TCP | 2379 | 2380 | etcd ports |
| apiserver | TCP | 6443 | | Kubernetes API server port |
| calico | TCP | 9099 | 9100 | ports used by the Calico CNI plugin |
| bgp | TCP | 179 | | Calico uses the BGP protocol |
| master | TCP | 10250 | 10258 | ports used on master nodes |
| node | TCP | 30000 | 32767 | NodePort range on worker nodes |
| dns | TCP/UDP | 53 | | DNS service port |
| harbor | TCP | 80 | | private image registry service port |
| rpcbind | TCP | 111 | | NFS service port |
| ipip | IPENCAP/IPIP | | | Calico requires the IPIP protocol |
| metrics-server | TCP | 8443 | | K8s cluster performance monitoring |
| ceph | TCP | 32564 | | Ceph Dashboard |

Note: when running the cluster on a classic network with the Calico network plugin, the IPENCAP and IPIP protocols must be allowed for the source addresses.
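
A quick way to verify these requirements between nodes is a small TCP probe loop. The following is only a sketch (the target host and the port list are examples and should be adjusted to your environment; a port will answer only once the corresponding service is actually running) that uses bash's built-in /dev/tcp, so it needs no extra tools:

# probe TCP reachability of key cluster ports from the current node (sketch)
host="10.122.249.154"                          # example target: k8s-master1
ports="22 179 2379 2380 6443 9099 10250"

for p in $ports; do
  if timeout 3 bash -c "echo > /dev/tcp/${host}/${p}" 2>/dev/null; then
    echo "TCP ${host}:${p} reachable"
  else
    echo "TCP ${host}:${p} NOT reachable"
  fi
done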

Configure the server environment

Harbor node configuration

Install CentOS 7.9
  • Use the ISO image: CentOS-7-x86_64-DVD-2009.iso
  • Perform a minimal OS installation, then install the required rpm packages.
  • A swap partition is not needed (swap is disabled later).
  • Because both docker and containerd keep their runtime data under /var, give /var plenty of space (at least 100 GB); an alternative data-root relocation sketch follows this list.
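
If /var cannot be enlarged, an alternative (not used in this deployment, shown only as a sketch) is to relocate Docker's data directory to a bigger filesystem with the data-root key in /etc/docker/daemon.json once Docker is installed; merge this with the daemon.json settings shown later in this document:

# optional: move Docker's data directory off /var (sketch; apply after Docker is installed)
mkdir -p /app/docker-data                      # example path only

cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/app/docker-data"
}
EOF

systemctl restart docker
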
Upgrade the kernel
# upgrade the kernel
yum install kernel-ml-devel-6.0.10-1.el7.elrepo.x86_64.rpm
yum install kernel-ml-6.0.10-1.el7.elrepo.x86_64.rpm

# make the boot loader use the new kernel
grub2-set-default '6.0.10-1.el7.elrepo.x86_64'

# reboot the operating system
reboot
Install the required packages

Package list:
yum install -y yum-utils device-mapper-persistent-data lvm2 sysstat nfs-utils ntp jq bind-utils telnet curl rsync sshpass wget vim socat conntrack ebtables ipset bash-completion
Two extra packages for the HA setup:
yum install -y haproxy keepalived
Set up an internal yum repository or build an offline rpm bundle; there are plenty of tutorials online on building offline rpm packages (a minimal sketch follows below).
Alternatively, download the packages here: rpm offline package download
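
A minimal sketch of building such a bundle on an Internet-connected CentOS 7 machine (assuming yum-utils is installed there; repotrack downloads each package together with its full dependency chain, and the directory layout mirrors the k8s-request-rpm.tgz archive used below):

# build the offline rpm bundle on a machine with Internet access (sketch)
mkdir -p ./install_rpm
repotrack -p ./install_rpm \
  yum-utils device-mapper-persistent-data lvm2 sysstat nfs-utils ntp jq \
  bind-utils telnet curl rsync sshpass wget vim socat conntrack ebtables \
  ipset bash-completion haproxy keepalived

tar zcvf k8s-request-rpm.tgz ./install_rpm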

Upload the prepared offline rpm bundle 'k8s-request-rpm.tgz' and extract it.

tar zxvf k8s-request-rpm.tgz

drwxr-xr-x monitor/monitor   0 2023-12-01 16:49 ./install_rpm/
-rw-r--r-- monitor/monitor 67624 2014-07-04 08:43 ./install_rpm/autogen-libopts-5.18-5.el7.x86_64.rpm
-rw-r--r-- monitor/monitor 88692 2020-06-24 01:36 ./install_rpm/ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm
-rw-r--r-- monitor/monitor 1527972 2019-08-23 05:24 ./install_rpm/GeoIP-1.5.0-14.el7.x86_64.rpm
...
...
...
-rw-r--r-- monitor/monitor  417900 2022-12-20 00:02 ./install_rpm/rsync-3.1.2-12.el7_9.x86_64.rpm
-rw-r--r-- monitor/monitor   46304 2014-07-04 12:25 ./install_rpm/perl-Time-HiRes-1.9725-3.el7.x86_64.rpm
-rw-r--r-- monitor/monitor   19244 2014-07-04 12:15 ./install_rpm/perl-constant-1.27-2.el7.noarch.rpm

Switch to the root user and install the packages:

# switch to the root user
su -
Password: xxxx

# enter the extracted directory and install the packages
cd install_rpm/
yum install *.rpm

Loaded plugins: fastestmirror
Examining autogen-libopts-5.18-5.el7.x86_64.rpm: autogen-libopts-5.18-5.el7.x86_64
Examining bind-libs-9.11.4-26.P2.el7_9.15.x86_64.rpm: 32:bind-libs-9.11.4-26.P2.el7_9.15.x86_64
...
...
...
 wget                            x86_64   1.14-18.el7_6.1                    /wget-1.14-18.el7_6.1.x86_64                             2.0 M
 yum-utils                       noarch   1.1.31-54.el7_8                    /yum-utils-1.1.31-54.el7_8.noarch                        337 k

Transaction Summary
============================================================================================================================================
Reinstall  73 Packages

Total size: 92 M
Installed size: 92 M
Is this ok [y/d/N]: y
Disable swap

Edit /etc/fstab and comment out the swap line:

sed -i 's/.*swap.*/#&/' /etc/fstab
swapoff -a
Disable the firewall and SELinux
# disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
setenforce 0

# disable the firewall
systemctl disable firewalld && systemctl stop firewalld
Configure the time zone and time synchronization
# set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai

# synchronize the time every 30 minutes via cron
crontab -e
*/30 * * * * /usr/sbin/ntpdate 10.100.48.1 10.100.48.42 10.122.1.6 &> /dev/null
Configure the filesystems

# check the current disk layout
lsblk

NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb                   8:16   0  10.7T  0 disk
sda                   8:0    0 837.9G  0 disk
├─sda2                8:2    0 837.4G  0 part
│ ├─vg_root-swap_lv 253:1    0    32G  0 lvm
│ ├─vg_root-tmp_lv  253:6    0    10G  0 lvm  /tmp
│ ├─vg_root-nmon_lv 253:4    0    10G  0 lvm  /nmon
│ ├─vg_root-home_lv 253:2    0    10G  0 lvm  /home
│ ├─vg_root-root_lv 253:0    0    50G  0 lvm  /
│ ├─vg_root-app_lv  253:5    0    50G  0 lvm  /app
│ └─vg_root-var_lv  253:3    0    20G  0 lvm  /var
└─sda1                8:1    0   500M  0 part /boot

# partition /dev/sdb
[root@k8s-harbor1 ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel GPT
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? y
(parted) mkpart
Partition name?  []? sdb1
File system type?  [ext2]?
Start? 1
End? 80000

(parted) mkpart
Partition name?  []? sdb2
File system type?  [ext2]?
Start? 8000GB
End? 11700GB
Error: The location 11700GB is outside of the device /dev/sdb.
(parted) mkpart
Partition name?  []? sdb2
File system type?  [ext2]?
Start? 8000GB
End? 11680GB

(parted) p
Model: AVAGO SAS3108 (scsi)
Disk /dev/sdb: 11.7TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  8000GB  8000GB               sdb1
 2      8000GB  11.7TB  3680GB               sdb2

(parted) q
Information: You may need to update /etc/fstab.

# check the disk layout again
lsblk

NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb                   8:16   0  10.7T  0 disk
├─sdb2                8:18   0   3.4T  0 part
└─sdb1                8:17   0   7.3T  0 part
sda                   8:0    0 837.9G  0 disk
├─sda2                8:2    0 837.4G  0 part
│ ├─vg_root-swap_lv 253:1    0    32G  0 lvm
│ ├─vg_root-tmp_lv  253:6    0    10G  0 lvm  /tmp
│ ├─vg_root-nmon_lv 253:4    0    10G  0 lvm  /nmon
│ ├─vg_root-home_lv 253:2    0    10G  0 lvm  /home
│ ├─vg_root-root_lv 253:0    0    50G  0 lvm  /
│ ├─vg_root-app_lv  253:5    0    50G  0 lvm  /app
│ └─vg_root-var_lv  253:3    0    20G  0 lvm  /var
└─sda1                8:1    0   500M  0 part /boot

# format the partitions
mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=32, agsize=61035200 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1953124864, imaxpct=5
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

mkfs.xfs /dev/sdb2
meta-data=/dev/sdb2              isize=512    agcount=32, agsize=28076224 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=898437376, imaxpct=5
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=438720, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# create the mount points
mkdir /harbor-data
mkdir /k8s-data

# edit /etc/fstab
vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Nov 30 16:25:28 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_root-root_lv /                       xfs     defaults        0 0
/dev/mapper/vg_root-app_lv /app                    xfs     defaults        0 0
UUID=0da96d5d-ccb6-4f1a-a8ec-8b552fdf005f /boot                   xfs     defaults        0 0
/dev/mapper/vg_root-home_lv /home                   xfs     defaults        0 0
/dev/mapper/vg_root-nmon_lv /nmon                   xfs     defaults        0 0
/dev/mapper/vg_root-tmp_lv /tmp                    xfs     defaults        0 0
/dev/mapper/vg_root-var_lv /var                    xfs     defaults        0 0
#/dev/mapper/vg_root-swap_lv swap                    swap    defaults        0 0

/dev/sdb1 /harbor-data xfs defaults 0 0
/dev/sdb2 /k8s-data xfs defaults 0 0

# mount the filesystems
mount -a

df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                      63G     0   63G    0% /dev
tmpfs                         63G     0   63G    0% /dev/shm
tmpfs                         63G  9.4M   63G    1% /run
tmpfs                         63G     0   63G    0% /sys/fs/cgroup
/dev/mapper/vg_root-root_lv   50G  2.1G   48G    5% /
/dev/sda1                    494M  180M  314M   37% /boot
/dev/mapper/vg_root-nmon_lv   10G   33M   10G    1% /nmon
/dev/mapper/vg_root-tmp_lv    10G   33M   10G    1% /tmp
/dev/mapper/vg_root-app_lv    50G  758M   50G    2% /app
/dev/mapper/vg_root-var_lv    20G  118M   20G    1% /var
/dev/mapper/vg_root-home_lv   10G  2.7G  7.4G   27% /home
tmpfs                         13G     0   13G    0% /run/user/1000
/dev/sdb1                    7.3T   34M  7.3T    1% /harbor-data
/dev/sdb2                    3.4T   34M  3.4T    1% /k8s-data
Edit the /etc/hosts file
vim /etc/hosts

10.122.249.151 k8s-harbor1 harbor.k8s.hebei
10.122.249.152 k8s-harbor2 harbor2.k8s.hebei
10.122.249.153 k8s-ha-vip vip.k8s.hebei

10.122.249.154 k8s-master1
10.122.249.155 k8s-master2
10.122.249.156 k8s-master3

10.122.249.157 k8s-node01
10.122.249.158 k8s-node02
10.122.249.159 k8s-node03
10.122.249.160 k8s-node04
10.122.249.161 k8s-node05

Configure DNS
# replace with your own internal DNS server address
echo "DNS1=114.114.114.114" >> /etc/sysconfig/network-scripts/ifcfg-ens5f1
systemctl daemon-reload && systemctl restart network

Kubernetes node configuration

Configure the operating system of the Kubernetes nodes the same way as the Harbor nodes above.

Install and deploy the Harbor servers

Install the Docker environment

Docker 24.0.6 is used; upload the docker installation rpm packages.

# Harbor directory layout: Harbor is installed under /app/harbor by default; Harbor's data and log directories are under /harbor-data

# install docker
yum install *.rpm

# trust the insecure private registries and limit the container log size
vim /etc/docker/daemon.json

{
        "insecure-registries":["http://10.122.249.151","http://10.122.249.152"],
        "log-driver":"json-file",
        "log-opts":{"max-size":"50m","max-file":"3"}
}

# start the docker service
systemctl enable --now docker

# check the docker version
docker version
Client: Docker Engine - Community
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:11:35 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep  4 12:34:28 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.25
  GitCommit:        d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

# check the docker service status
docker info
Client: Docker Engine - Community
 Version:    24.0.7
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.21.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 24.0.6
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f
 runc version: v1.1.10-0-g18a0cb0
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 6.0.10-1.el7.elrepo.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 32
 Total Memory: 125.8GiB
 Name: k8s-harbor1
 ID: 51a38654-d8af-4362-9531-d23a397b6a79
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  10.122.249.151
  127.0.0.0/8
 Live Restore Enabled: false

# copy docker-compose to /usr/bin/
cp docker-compose-linux-x86_64-v2.10.2 /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose

Install Harbor

# extract the offline installer package
cd /app/harbor
tar zxvf harbor-offline-installer-v2.6.0.tgz

# edit the harbor configuration file
cp harbor.yml.tmpl harbor.yml
vim harbor.yml

hostname: harbor.k8s.hebei

http:
  port: 80

harbor_admin_password: Harbor12345

database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900

data_volume: /harbor-data

trivy:
  ignore_unfixed: false
  skip_update: false
  offline_scan: false
  insecure: false

jobservice:
  max_job_workers: 10

notification:
  webhook_job_max_retry: 10

chart:
  absolute_url: disabled

log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /harbor-data/log/harbor

_version: 2.6.0

proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy

upload_purging:
  enabled: true
  age: 168h
  interval: 24h
  dryrun: false

cache:
  enabled: false
  expire_hours: 24

# Edit docker-compose.yml in two places, at lines 122 and 188, adding the line "- /etc/hosts:/etc/hosts:z"; otherwise primary/replica replication cannot resolve hostnames and IP addresses and the registry connection cannot be established

109   core:
110     image: goharbor/harbor-core:v2.6.0
111     container_name: harbor-core
112     env_file:
113       - ./common/config/core/env
114     restart: always
115     cap_drop:
116       - ALL
117     cap_add:
118       - SETGID
119       - SETUID
120     volumes:
121       - /harbor-data/ca_download/:/etc/core/ca/:z
122       - /etc/hosts:/etc/hosts:z                    # add the /etc/hosts mount
123       - /harbor-data/:/data/:z
124       - ./common/config/core/certificates/:/etc/core/certificates/:z
125       - type: bind
126         source: ./common/config/core/app.conf
127         target: /etc/core/app.conf
128       - type: bind
129         source: /harbor-data/secret/core/private_key.pem
130         target: /etc/core/private_key.pem
131       - type: bind
132         source: /harbor-data/secret/keys/secretkey
133         target: /etc/core/key
134       - type: bind
135         source: ./common/config/shared/trust-certificates
136         target: /harbor_cust_cert
137     networks:
138       harbor:
139     depends_on:
140       - log
141       - registry
142       - redis
143       - postgresql
144     logging:
145       driver: "syslog"
146       options:
147         syslog-address: "tcp://localhost:1514"
148         tag: "core"

173   jobservice:
174     image: goharbor/harbor-jobservice:v2.6.0
175     container_name: harbor-jobservice
176     env_file:
177       - ./common/config/jobservice/env
178     restart: always
179     cap_drop:
180       - ALL
181     cap_add:
182       - CHOWN
183       - SETGID
184       - SETUID
185     volumes:
186       - /harbor-data/job_logs:/var/log/jobs:z
187       - /harbor-data/scandata_exports:/var/scandata_exports:z
188       - /etc/hosts:/etc/hosts:z                     # add the /etc/hosts mount
189       - type: bind
190         source: ./common/config/jobservice/config.yml
191         target: /etc/jobservice/config.yml
192       - type: bind
193         source: ./common/config/shared/trust-certificates
194         target: /harbor_cust_cert
195     networks:
196       - harbor
197     depends_on:
198       - core
199     logging:
200       driver: "syslog"
201       options:
202         syslog-address: "tcp://localhost:1514"
203         tag: "jobservice"


# run the installer and start harbor
./install.sh

[Step 0]: checking if docker is installed ...
Note: docker version: 24.0.7
[Step 1]: checking docker-compose is installed ...
Note: Docker Compose version v2.21.0

[Step 2]: loading Harbor images ...
915f79eed965: Loading layer [==================================================>]  37.77MB/37.77MB
53e17aa1994a: Loading layer [==================================================>]  8.898MB/8.898MB
82205c155ee7: Loading layer [==================================================>]  3.584kB/3.584kB
7ffa6a408e36: Loading layer [==================================================>]   2.56kB/2.56kB
1a2ed94f447f: Loading layer [==================================================>]  97.91MB/97.91MB
e031eb4548cd: Loading layer [==================================================>]   98.7MB/98.7MB
Loaded image: goharbor/harbor-jobservice:v2.6.0
1ddd239fd081: Loading layer [==================================================>]  5.755MB/5.755MB
51cfe17ad552: Loading layer [==================================================>]  4.096kB/4.096kB
d66b11611927: Loading layer [==================================================>]   17.1MB/17.1MB
95ec06f9ede8: Loading layer [==================================================>]  3.072kB/3.072kB
4915db4c8a75: Loading layer [==================================================>]  29.13MB/29.13MB
de0dd696d1e4: Loading layer [==================================================>]  47.03MB/47.03MB
Loaded image: goharbor/harbor-registryctl:v2.6.0
135ff4cdf210: Loading layer [==================================================>]  119.9MB/119.9MB
971eb518f877: Loading layer [==================================================>]  3.072kB/3.072kB
dca613dfbd94: Loading layer [==================================================>]   59.9kB/59.9kB
86701cd4bbd5: Loading layer [==================================================>]  61.95kB/61.95kB
Loaded image: goharbor/redis-photon:v2.6.0
db777e2b34a6: Loading layer [==================================================>]    119MB/119MB
Loaded image: goharbor/nginx-photon:v2.6.0
e8b623356728: Loading layer [==================================================>]  6.283MB/6.283MB
de97fd65d649: Loading layer [==================================================>]  4.096kB/4.096kB
80d89e68db87: Loading layer [==================================================>]  3.072kB/3.072kB
d30aaa68403a: Loading layer [==================================================>]  91.21MB/91.21MB
09c2eb3f70bf: Loading layer [==================================================>]  12.86MB/12.86MB
d033d51a66ed: Loading layer [==================================================>]  104.9MB/104.9MB
Loaded image: goharbor/trivy-adapter-photon:v2.6.0
d68ea3579314: Loading layer [==================================================>]  43.85MB/43.85MB
ba0eac6b665d: Loading layer [==================================================>]  65.88MB/65.88MB
6e6fdfe712e6: Loading layer [==================================================>]  18.03MB/18.03MB
936f2805133b: Loading layer [==================================================>]  65.54kB/65.54kB
d1cc2359b34f: Loading layer [==================================================>]   2.56kB/2.56kB
3db4c06ddde2: Loading layer [==================================================>]  1.536kB/1.536kB
ffa89d14f0f8: Loading layer [==================================================>]  12.29kB/12.29kB
5b6fc339f848: Loading layer [==================================================>]  2.612MB/2.612MB
bf25a672c522: Loading layer [==================================================>]  379.9kB/379.9kB
Loaded image: goharbor/prepare:v2.6.0
f03402c298a4: Loading layer [==================================================>]  127.1MB/127.1MB
ec437899a2d4: Loading layer [==================================================>]  3.584kB/3.584kB
fc987efeff2f: Loading layer [==================================================>]  3.072kB/3.072kB
ccac4ecb9fab: Loading layer [==================================================>]   2.56kB/2.56kB
76126881776b: Loading layer [==================================================>]  3.072kB/3.072kB
e5710297bc49: Loading layer [==================================================>]  3.584kB/3.584kB
86ada12d4961: Loading layer [==================================================>]  20.99kB/20.99kB
Loaded image: goharbor/harbor-log:v2.6.0
4a1effd4840f: Loading layer [==================================================>]  8.898MB/8.898MB
3972ad56d11c: Loading layer [==================================================>]  24.63MB/24.63MB
e13ac1c66f56: Loading layer [==================================================>]  4.608kB/4.608kB
b38b2979e8ca: Loading layer [==================================================>]  25.42MB/25.42MB
Loaded image: goharbor/harbor-exporter:v2.6.0
1616a4b7b75d: Loading layer [==================================================>]    119MB/119MB
554400dd99a8: Loading layer [==================================================>]  7.535MB/7.535MB
6716f4e67dff: Loading layer [==================================================>]  1.185MB/1.185MB
Loaded image: goharbor/harbor-portal:v2.6.0
2a50c39af894: Loading layer [==================================================>]  1.096MB/1.096MB
f51c34c3ca3f: Loading layer [==================================================>]  5.888MB/5.888MB
b5cc7d5afb32: Loading layer [==================================================>]    169MB/169MB
e38b1eb3cd75: Loading layer [==================================================>]  16.72MB/16.72MB
8a0ad5839a99: Loading layer [==================================================>]  4.096kB/4.096kB
b31920119f7b: Loading layer [==================================================>]  6.144kB/6.144kB
95588b7a97e8: Loading layer [==================================================>]  3.072kB/3.072kB
37736a8af56d: Loading layer [==================================================>]  2.048kB/2.048kB
f5d998d20d26: Loading layer [==================================================>]   2.56kB/2.56kB
3665e4285a3e: Loading layer [==================================================>]   2.56kB/2.56kB
324a12cf3159: Loading layer [==================================================>]   2.56kB/2.56kB
54caba94e156: Loading layer [==================================================>]  8.704kB/8.704kB
Loaded image: goharbor/harbor-db:v2.6.0
470e6e891906: Loading layer [==================================================>]   5.75MB/5.75MB
4088c7055d35: Loading layer [==================================================>]  8.718MB/8.718MB
2d362b585526: Loading layer [==================================================>]  14.47MB/14.47MB
b4769e45480f: Loading layer [==================================================>]  29.29MB/29.29MB
83e39cfb4f90: Loading layer [==================================================>]  22.02kB/22.02kB
640999c19ee7: Loading layer [==================================================>]  14.47MB/14.47MB
Loaded image: goharbor/notary-signer-photon:v2.6.0
85eb81bb9355: Loading layer [==================================================>]  5.755MB/5.755MB
0e885f83b805: Loading layer [==================================================>]  90.88MB/90.88MB
1a9fd2a13905: Loading layer [==================================================>]  3.072kB/3.072kB
084b31f7f0cd: Loading layer [==================================================>]  4.096kB/4.096kB
1b3ae6218261: Loading layer [==================================================>]  91.67MB/91.67MB
Loaded image: goharbor/chartmuseum-photon:v2.6.0
f2f491399890: Loading layer [==================================================>]  8.898MB/8.898MB
9e8aca626b7b: Loading layer [==================================================>]  3.584kB/3.584kB
7209c3a47c64: Loading layer [==================================================>]   2.56kB/2.56kB
b00a81023fca: Loading layer [==================================================>]  80.74MB/80.74MB
9a69cb50757d: Loading layer [==================================================>]  5.632kB/5.632kB
43ecb1743b7e: Loading layer [==================================================>]  105.5kB/105.5kB
209ea1fa2634: Loading layer [==================================================>]  44.03kB/44.03kB
52f401aa0ea0: Loading layer [==================================================>]  81.68MB/81.68MB
d09eb77e4ec9: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: goharbor/harbor-core:v2.6.0
3c6f934a9b56: Loading layer [==================================================>]  5.755MB/5.755MB
b3bb2335fb3a: Loading layer [==================================================>]  4.096kB/4.096kB
371f4a2117d4: Loading layer [==================================================>]  3.072kB/3.072kB
1b3ba34ba7db: Loading layer [==================================================>]   17.1MB/17.1MB
621e061b4f88: Loading layer [==================================================>]   17.9MB/17.9MB
Loaded image: goharbor/registry-photon:v2.6.0
b8b9704ed345: Loading layer [==================================================>]   5.75MB/5.75MB
53cb970e3348: Loading layer [==================================================>]  8.718MB/8.718MB
9cb4357dfa83: Loading layer [==================================================>]  15.88MB/15.88MB
ecdb7cd58026: Loading layer [==================================================>]  29.29MB/29.29MB
d8e381266109: Loading layer [==================================================>]  22.02kB/22.02kB
d1a45ff5c697: Loading layer [==================================================>]  15.88MB/15.88MB
Loaded image: goharbor/notary-server-photon:v2.6.0

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /app/harbor
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

Note: stopping existing Harbor instance ...

[Step 5]: starting Harbor ...
[+] Running 10/10
 ✔ Network harbor_harbor        Created                                     0.1s
 ✔ Container harbor-log         Started                                     0.0s
 ✔ Container redis              Started                                     0.0s
 ✔ Container registryctl        Started                                     0.0s
 ✔ Container harbor-db          Started                                     0.0s
 ✔ Container harbor-portal      Started                                     0.0s
 ✔ Container registry           Started                                     0.0s
 ✔ Container harbor-core        Started                                     0.0s
 ✔ Container nginx              Started                                     0.0s
 ✔ Container harbor-jobservice  Started                                     0.0s
 ✔ ----Harbor has been installed and started successfully.----

Verify Harbor

# verify the harbor service
docker ps -a | grep nginx
4d4b46367e60   goharbor/nginx-photon:v2.6.0         "nginx -g 'daemon of…"   4 minutes ago   Up 4 minutes (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp   nginx

netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:1514          0.0.0.0:*               LISTEN      14406/docker-proxy
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1885/master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      14960/docker-proxy
tcp        0      0 0.0.0.0:10022           0.0.0.0:*               LISTEN      1626/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1885/master
tcp6       0      0 :::80                   :::*                    LISTEN      14966/docker-proxy
tcp6       0      0 :::10022

# test harbor with curl
curl -k -u "admin:Harbor12345" -X GET -H "Content-Type: application/json" "http://10.122.249.151/api/v2.0/projects/"

[{"chart_count":0,"creation_time":"2023-12-04T06:51:00.406Z","current_user_role_id":1,"current_user_role_ids":[1],"cve_allowlist":{"creation_time":"0001-01-01T00:00:00.000Z","id":1,"items":[],"project_id":1,"update_time":"0001-01-01T00:00:00.000Z"},"metadata":{"public":"true"},"name":"library","owner_id":1,"owner_name":"admin","project_id":1,"repo_count":0,"update_time":"2023-12-04T06:51:00.406Z"}]

# list the repositories currently in the registry
 curl -s -u "admin:Harbor12345" "http://10.122.249.151/v2/_catalog" | jq
 
 {
  "repositories": [
    "k8s/alertmanager",
    "k8s/alpine",
    "k8s/busybox",
    "k8s/cloudcore",
    "k8s/cni",
    "k8s/configmap-reload",
    "k8s/coredns",
    ...
    ...
    "k8s/redis",
    "k8s/scope",
    "k8s/snapshot-controller",
    "k8s/thanos",
    "k8s/tower",
    "k8s/wget",
    "k8s/wordpress"
  ]
}

Deploy the secondary Harbor node the same way as the primary.

Log in to the Harbor web UI

Access Harbor in a browser:

http://10.122.249.151/

Default Harbor administrator account/password: admin/Harbor12345

Harbor web UI:

Starting and stopping Harbor

# stop harbor
cd /app/harbor            # enter the harbor installation directory
docker-compose stop       # stop harbor

# start harbor
/usr/bin/docker-compose -f /app/harbor/docker-compose.yml up -d

Configure Harbor replication

Log in to the web UI of the primary Harbor registry.

Open the left-hand menu: "Administration" -> "Registries"

Click: + New Endpoint

Provider:            harbor
Name:                replica
Description:         replica of the image registry
Endpoint URL:        http://10.122.249.152/
Access ID:           admin
Access Secret:       Harbor12345
Verify Remote Cert:  □ (unchecked)

Open the left-hand menu: "Administration" -> "Replications"

Click: + New Replication Rule

Name:                    push-to-harbor2
Description:             push images to the replica
Replication mode:        Push-based
Source resource filter:  Name:   Tag:   Label:   Resource:   (all left empty)
Destination registry:    replica-http://10.122.249.152/
Destination:             Namespace:      Flattening:
Trigger mode:            Event Based, □ Delete remote resources when locally deleted (unchecked)
Bandwidth:               -1

Run the replication

After the rule has been created, select the new rule "push-to-harbor2" and click the "REPLICATE" button above it to start copying the data.
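
The same endpoint and rule can also be created from the command line through Harbor's REST API. The following is a sketch only: it assumes the Harbor v2.x endpoints /api/v2.0/registries and /api/v2.0/replication/policies and the registry id returned for the new endpoint, so check the payload fields against your Harbor version before using it.

# create the replication endpoint on the primary Harbor (sketch)
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" \
  "http://10.122.249.151/api/v2.0/registries" -d '{
    "name": "replica",
    "type": "harbor",
    "url": "http://10.122.249.152",
    "insecure": true,
    "credential": {"type": "basic", "access_key": "admin", "access_secret": "Harbor12345"}
  }'

# look up the id that Harbor assigned to the new endpoint
curl -s -u "admin:Harbor12345" "http://10.122.249.151/api/v2.0/registries" | jq '.[] | {id, name}'

# create the event-based push rule, pointing dest_registry.id at that endpoint (assumed to be 1 here)
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" \
  "http://10.122.249.151/api/v2.0/replication/policies" -d '{
    "name": "push-to-harbor2",
    "dest_registry": {"id": 1},
    "trigger": {"type": "event_based"},
    "enabled": true,
    "override": true,
    "deletion": false
  }'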

Install and deploy HA for cluster management

The Kubernetes api-server must be highly available. HAProxy and Keepalived are set up on the two Harbor nodes (k8s-harbor1 and k8s-harbor2) to provide HA, as follows:

Configure HAProxy

Run the following on both load-balancer machines to configure HAProxy (the configuration is identical on both):

vi /etc/haproxy/haproxy.cfg

global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    # user        haproxy
    # group       haproxy
    daemon

   stats socket /var/lib/haproxy/stats

defaults
  log global
  option  httplog
  option  dontlognull
        timeout connect 5000
        timeout client 50000
        timeout server 50000

frontend kube-apiserver
  bind *:6443
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master1 10.122.249.154:6443 check # Replace the IP address with your own.
    server k8s-master2 10.122.249.155:6443 check # Replace the IP address with your own.
    server k8s-master3 10.122.249.156:6443 check # Replace the IP address with your own.
    
# start the service:
systemctl enable --now haproxy   

Configure Keepalived

Keepalived must be installed on both machines; the configuration differs only slightly between them (swap unicast_src_ip and unicast_peer on the second machine).

Run the following to configure Keepalived.

vi /etc/keepalived/keepalived.conf

global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface ens5f1                   # network interface (NIC) name on this machine
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 10.122.249.151      # The IP address of this machine
  unicast_peer {
    10.122.249.152                   # The IP address of the peer machine
  }

  virtual_ipaddress {
    10.122.249.153/24                # The virtual IP (VIP) address
  }

  track_script {
    chk_haproxy
  }
}

# start the service:
systemctl enable --now keepalived

Verify the HA failover

# check the VIP on k8s-harbor1
ip a s ; netstat -tnlp

# simulate a node failure
systemctl stop haproxy
ip a s ; netstat -tnlp

# check the VIP on k8s-harbor2 (it should have moved over)
ip a s ; netstat -tnlp

# restore the haproxy service on k8s-harbor1
systemctl start haproxy

Install and deploy the Kubernetes cluster

Installation preparation

Upload the offline installation packages to the k8s-harbor2 node. The file list:

| Filename | Description |
| --- | --- |
| kk | the KubeKey deployment tool |
| config-k8s.yaml | cluster installation configuration file |
| kubesphere-1.26.5-3.4.0-mini.tgz | KubeKey artifact bundle for the Kubernetes/KubeSphere installation |
| ks-images-1.tgz | KubeSphere offline image bundle 1 |
| ks-images-2.tgz | KubeSphere offline image bundle 2 |

kk download: https://github.com/kubesphere/kubekey/releases/
It is recommended to download the latest stable release.

How the kubesphere-1.26.5-3.4.0-mini.tgz offline bundle is built:
Install and deploy KubeSphere once on a machine that has Internet access; when the installation finishes, tar up the entire kubekey directory created under kk's working directory. For the offline installation, extract this archive into the directory where kk runs: kk will then take the packages from the kubekey directory instead of downloading them from the Internet (a minimal packaging sketch follows below).
Note: the configuration used for the online run must match the offline deployment, e.g. containerManager (docker vs. containerd) and the Kubernetes version; here Kubernetes v1.26.5 with containerd as the container runtime and KubeSphere v3.4.0.
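
A minimal sketch of that packaging step on the Internet-connected machine (the working directory path is hypothetical; use wherever kk was executed):

# on the online machine, after ./kk create cluster -f config-k8s.yaml has completed
cd /path/to/kk-workdir                       # hypothetical path: the directory kk ran from
tar zcvf kubesphere-1.26.5-3.4.0-mini.tgz ./kubekey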

How the ks-images-1.tgz and ks-images-2.tgz offline bundles are built:
On an Internet-connected machine, pull all the images from the image list with docker pull and export them with docker save.
The official image list is in the appendix "KubeSphere 3.4 image list" at the bottom of the linked page; pull only the images for the modules you actually plan to deploy. A single docker save archive would be too large, so the images are saved into several files (a minimal sketch follows below).
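
A minimal sketch of that pull-and-save step, assuming a file images.txt that holds the subset of the official image list you need (one image reference per line; the split point of 60 lines is arbitrary):

# pull every image listed in images.txt and export them as two compressed archives (sketch)
while read -r img; do
  docker pull "$img"
done < images.txt

head -n 60  images.txt | xargs docker save | gzip > ks-images-1.tgz
tail -n +61 images.txt | xargs docker save | gzip > ks-images-2.tgz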

Load the image files ks-images-1.tgz and ks-images-2.tgz with docker (a single archive would have been too large, hence two), retag the images, and push them to the private registry on k8s-harbor1.

# first create the k8s project in harbor
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" "http://10.122.249.151/api/v2.0/projects/" -d "{ \"project_name\": \"k8s\", \"public\": true}" -k

# load the image archives
docker load -i ks-images-1.tgz
b67d19e65ef6: Loading layer [==================================================>]   72.5MB/72.5MB
cdb4050e9049: Loading layer [==================================================>]  133.1kB/133.1kB
d3d371382e9b: Loading layer [==================================================>]  66.04MB/66.04MB
936b2981ee09: Loading layer [==================================================>]  338.4kB/338.4kB
455f7c2b21e3: Loading layer [==================================================>]  3.072kB/3.072kB
e6d5f4bdc261: Loading layer [==================================================>]  128.8MB/128.8MB
e0851f1656eb: Loading layer [==================================================>]  177.2kB/177.2kB
91677669cb6d: Loading layer [==================================================>]  6.144kB/6.144kB
a58cbef09750: Loading layer [==================================================>]   7.68kB/7.68kB
Loaded image: registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
7a5b9c0b4b14: Loading layer [==================================================>]  3.031MB/3.031MB
7057effd5424: Loading layer [==================================================>]  44.79MB/44.79MB
Loaded image: registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
d62604d5d244: Loading layer [==================================================>]  4.846MB/4.846MB
Loaded image: registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
9733ccc39513: Loading layer [==================================================>]  5.895MB/5.895MB
...

docker load -i ks-images-2.tgz
1be74353c3d0: Loading layer [==================================================>]  1.437MB/1.437MB
Loaded image: registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
cc2447e1835a: Loading layer [==================================================>]  7.626MB/7.626MB
81fdcc81a9d0: Loading layer [==================================================>]   10.5MB/10.5MB
854101110f63: Loading layer [==================================================>]  3.584kB/3.584kB
38067ed663bf: Loading layer [==================================================>]  4.608kB/4.608kB
f126bda54112: Loading layer [==================================================>]   2.56kB/2.56kB
901e6dddcc99: Loading layer [==================================================>]   5.12kB/5.12kB
01e36c0e0b84: Loading layer [==================================================>]  7.168kB/7.168kB
4b701b99fec7: Loading layer [==================================================>]  31.37MB/31.37MB
a77994360a90: Loading layer [==================================================>]  3.072kB/3.072kB
2bfba7bc3a39: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
efc90214a575: Loading layer [==================================================>]  217.6kB/217.6kB
a898cdc63d5d: Loading layer [==================================================>]  23.24MB/23.24MB
...

# log in to the private harbor registry with docker login before pushing images
docker login -u admin -p Harbor12345 10.122.249.151

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# retag the images in the form <harbor registry IP>/<project name (k8s)>/<image name>:<tag>, then push them to harbor; for example:
10.122.249.151/k8s/perl:latest
10.122.249.151/k8s/hello:plain-text
10.122.249.151/k8s/ks-installer:v3.4.0
10.122.249.151/k8s/ks-controller-manager:v3.4.0
10.122.249.151/k8s/ks-apiserver:v3.4.0
10.122.249.151/k8s/ks-console:v3.4.0
10.122.249.151/k8s/notification-manager:v2.3.0
10.122.249.151/k8s/notification-manager-operator:v2.3.0
10.122.249.151/k8s/alpine:3.14
10.122.249.151/k8s/thanos:v0.31.0
10.122.249.151/k8s/opensearch-dashboards:2.6.0
10.122.249.151/k8s/opensearch:2.6.0
...

# push all the images to the harbor registry
docker push 10.122.249.151/k8s/perl:latest

The push refers to repository [10.122.249.151/k8s/perl]
036cca245378: Pushed
2c74cde41d11: Pushed
8a3cea755c82: Pushed
12b956927ba2: Pushing [====================>                  ]  385.8MB/587.2MB
266def75d28e: Pushed
29e49b59edda: Pushed
1777ac7d307b: Pushed
...
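
Retagging and pushing can be scripted instead of done image by image; a minimal sketch that rewrites every loaded registry.cn-beijing.aliyuncs.com/kubesphereio/* image to the 10.122.249.151/k8s/* naming scheme and pushes it:

# retag all loaded kubesphereio images into the harbor k8s project and push them (sketch)
src_prefix="registry.cn-beijing.aliyuncs.com/kubesphereio"
dst_prefix="10.122.249.151/k8s"

docker images --format '{{.Repository}}:{{.Tag}}' | grep "^${src_prefix}/" | while read -r img; do
  new="${dst_prefix}/${img##*/}"             # keep only <name>:<tag> after the last '/'
  docker tag  "$img" "$new"
  docker push "$new"
done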

# extract the kubesphere-1.26.5-3.4.0-mini.tgz installation bundle
tar zxvf kubesphere-1.26.5-3.4.0-mini.tgz
./kubekey/
./kubekey/logs/
./kubekey/logs/kubekey.log.20231130
./kubekey/logs/kubekey.log
./kubekey/kube/
./kubekey/kube/v1.26.5/
./kubekey/kube/v1.26.5/amd64/
./kubekey/kube/v1.26.5/amd64/kubeadm
./kubekey/kube/v1.26.5/amd64/kubelet
./kubekey/kube/v1.26.5/amd64/kubectl
./kubekey/helm/
./kubekey/helm/v3.9.0/
./kubekey/helm/v3.9.0/amd64/
./kubekey/helm/v3.9.0/amd64/helm
./kubekey/cni/
./kubekey/cni/v1.2.0/
...

Deployment

Note: configure haproxy and keepalived before deploying, and make sure HA and the VIP are working correctly.

Edit the kk installation configuration file config-k8s.yaml (note: keep this file; it is needed again when maintaining, upgrading, or reconfiguring the cluster):

vim config-k8s.yaml

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: k8s
spec:
  hosts:
  - {name: k8s-master1, address: 10.122.249.154, internalAddress: 10.122.249.154, user: root, password: "Abcd_1234"}
  - {name: k8s-master2, address: 10.122.249.155, internalAddress: 10.122.249.155, user: root, password: "Abcd_1234"}
  - {name: k8s-master3, address: 10.122.249.156, internalAddress: 10.122.249.156, user: root, password: "Abcd_1234"}
  - {name: k8s-node01, address: 10.122.249.157, internalAddress: 10.122.249.157, user: root, password: "Abcd_1234"}
  - {name: k8s-node02, address: 10.122.249.158, internalAddress: 10.122.249.158, user: root, password: "Abcd_1234"}
  - {name: k8s-node03, address: 10.122.249.159, internalAddress: 10.122.249.159, user: root, password: "Abcd_1234"}
  - {name: k8s-node04, address: 10.122.249.160, internalAddress: 10.122.249.160, user: root, password: "Abcd_1234"}
  - {name: k8s-node05, address: 10.122.249.161, internalAddress: 10.122.249.161, user: root, password: "Abcd_1234"}
  roleGroups:
    etcd:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    control-plane: 
    - k8s-master1
    - k8s-master2
    - k8s-master3
    worker:
    - k8s-node01
    - k8s-node02
    - k8s-node03
    - k8s-node04
    - k8s-node05
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: vip.k8s.hebei
    address: 10.122.249.153
    port: 6443
  kubernetes:
    version: v1.26.5
    clusterName: k8s.hebei
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.240.0.0/16
    kubeServiceCIDR: 172.16.0.0/16
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "10.122.249.151":
        username: admin
        password: Harbor12345
    privateRegistry: "10.122.249.151"
    namespaceOverride: "k8s"
    registryMirrors: [10.122.249.151]
    insecureRegistries: [http://10.122.249.151]
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: "http://10.122.249.151"
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600

Start the installation:

cd /data/install_k8s

./kk create cluster -f config-k8s.yaml

 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

13:08:44 CST [GreetingsModule] Greetings
13:08:44 CST message: [k8s-node01]
Greetings, KubeKey!
13:08:44 CST message: [k8s-node03]
Greetings, KubeKey!
13:08:45 CST message: [k8s-node02]
Greetings, KubeKey!
13:08:45 CST message: [k8s-master3]
Greetings, KubeKey!
13:08:45 CST message: [k8s-node04]
Greetings, KubeKey!
13:08:45 CST message: [k8s-master1]
Greetings, KubeKey!
13:08:46 CST message: [k8s-node05]
Greetings, KubeKey!
13:08:46 CST success: [k8s-node01]
13:08:46 CST success: [k8s-node03]
13:08:46 CST success: [k8s-node02]
13:08:46 CST success: [k8s-master3]
13:08:46 CST success: [k8s-node04]
13:08:46 CST success: [k8s-master1]
13:08:46 CST success: [k8s-node05]
13:08:46 CST [NodePreCheckModule] A pre-check on nodes
13:08:47 CST success: [k8s-node03]
13:08:47 CST success: [k8s-node05]
13:08:47 CST success: [k8s-master3]
13:08:47 CST success: [k8s-master1]
13:08:47 CST success: [k8s-node01]
13:08:47 CST success: [k8s-node02]
13:08:47 CST success: [k8s-node04]
13:08:47 CST [ConfirmModule] Display confirmation form
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k8s-node01  | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
| k8s-node02  | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
| k8s-node03  | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
| k8s-node04  | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
| k8s-node05  | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
| k8s-master1 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
| k8s-master3 | y    | y    | y       | y        | y     | y     | y       | y         | y      |        | v1.6.4     | y          |             |                  | CST 13:08:47 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
13:08:48 CST success: [LocalHost]
13:08:48 CST [NodeBinariesModule] Download installation binaries
13:08:48 CST message: [localhost]
downloading amd64 kubeadm v1.26.5 ...
13:08:49 CST message: [localhost]
kubeadm is existed
13:08:49 CST message: [localhost]
downloading amd64 kubelet v1.26.5 ...
13:08:49 CST message: [localhost]
kubelet is existed
13:08:49 CST message: [localhost]


This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
13:10:48 CST skipped: [k8s-master1]
13:10:48 CST success: [k8s-master3]
13:10:48 CST [JoinNodesModule] Join worker node
13:10:50 CST stdout: [k8s-node03]
W1211 13:10:48.634389   14935 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 13:10:48.859781   14935 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [172.16.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:10:50 CST stdout: [k8s-node05]
W1211 13:10:48.640142   14979 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 13:10:48.867650   14979 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [172.16.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:10:50 CST stdout: [k8s-node04]
W1211 13:10:48.637747   14950 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 13:10:48.851069   14950 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [172.16.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:11:02 CST stdout: [k8s-node01]
W1211 13:10:48.630434   15013 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 13:10:48.848478   15013 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [172.16.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:11:03 CST stdout: [k8s-node02]
W1211 13:10:48.643144   14937 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1211 13:10:48.871181   14937 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [172.16.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
13:11:03 CST success: [k8s-node03]
13:11:03 CST success: [k8s-node05]
13:11:03 CST success: [k8s-node04]
13:11:03 CST success: [k8s-node01]
13:11:03 CST success: [k8s-node02]
13:11:03 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
13:11:03 CST skipped: [k8s-master1]
13:11:03 CST success: [k8s-master3]
13:11:03 CST [JoinNodesModule] Remove master taint
13:11:03 CST skipped: [k8s-master3]
13:11:03 CST skipped: [k8s-master1]
13:11:03 CST [JoinNodesModule] Add worker label to all nodes
13:11:03 CST stdout: [k8s-master1]
node/k8s-node01 labeled
13:11:03 CST stdout: [k8s-master1]
node/k8s-node02 labeled
13:11:04 CST stdout: [k8s-master1]
node/k8s-node03 labeled
13:11:04 CST stdout: [k8s-master1]
node/k8s-node04 labeled
13:11:04 CST stdout: [k8s-master1]
node/k8s-node05 labeled
13:11:04 CST success: [k8s-master1]
13:11:04 CST skipped: [k8s-master3]
13:11:04 CST [DeployNetworkPluginModule] Generate calico
13:11:04 CST skipped: [k8s-master3]
13:11:04 CST success: [k8s-master1]
13:11:04 CST [DeployNetworkPluginModule] Deploy calico
13:11:05 CST stdout: [k8s-master1]
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
13:11:05 CST skipped: [k8s-master3]
13:11:05 CST success: [k8s-master1]
13:11:05 CST [ConfigureKubernetesModule] Configure kubernetes
13:11:05 CST success: [k8s-master1]
13:11:05 CST skipped: [k8s-master3]
13:11:05 CST [ChownModule] Chown user $HOME/.kube dir
13:11:05 CST success: [k8s-node05]
13:11:05 CST success: [k8s-node03]
13:11:05 CST success: [k8s-node01]
13:11:05 CST success: [k8s-node02]
13:11:05 CST success: [k8s-node04]
13:11:05 CST success: [k8s-master3]
13:11:05 CST success: [k8s-master1]
13:11:05 CST [AutoRenewCertsModule] Generate k8s certs renew script
13:11:06 CST success: [k8s-master3]
13:11:06 CST success: [k8s-master1]
13:11:06 CST [AutoRenewCertsModule] Generate k8s certs renew service
13:11:06 CST success: [k8s-master3]
13:11:06 CST success: [k8s-master1]
13:11:06 CST [AutoRenewCertsModule] Generate k8s certs renew timer
13:11:07 CST success: [k8s-master1]
13:11:07 CST success: [k8s-master3]
13:11:07 CST [AutoRenewCertsModule] Enable k8s certs renew service
13:11:07 CST success: [k8s-master3]
13:11:07 CST success: [k8s-master1]
13:11:07 CST [SaveKubeConfigModule] Save kube config as a configmap
13:11:07 CST success: [LocalHost]
13:11:07 CST [AddonsModule] Install addons
13:11:07 CST success: [LocalHost]
13:11:07 CST [DeployStorageClassModule] Generate OpenEBS manifest
13:11:08 CST skipped: [k8s-master3]
13:11:08 CST success: [k8s-master1]
13:11:08 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
13:11:09 CST skipped: [k8s-master3]
13:11:09 CST success: [k8s-master1]
13:11:09 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
13:11:10 CST skipped: [k8s-master3]
13:11:10 CST success: [k8s-master1]
13:11:10 CST [DeployKubeSphereModule] Apply ks-installer
13:11:10 CST stdout: [k8s-master1]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
13:11:10 CST skipped: [k8s-master3]
13:11:10 CST success: [k8s-master1]
13:11:10 CST [DeployKubeSphereModule] Add config to ks-installer manifests
13:11:10 CST skipped: [k8s-master3]
13:11:10 CST success: [k8s-master1]
13:11:10 CST [DeployKubeSphereModule] Create the kubesphere namespace
13:11:11 CST skipped: [k8s-master3]
13:11:11 CST success: [k8s-master1]
13:11:11 CST [DeployKubeSphereModule] Setup ks-installer config
13:11:11 CST stdout: [k8s-master1]
secret/kube-etcd-client-certs created
13:11:11 CST skipped: [k8s-master3]
13:11:11 CST success: [k8s-master1]
13:11:11 CST [DeployKubeSphereModule] Apply ks-installer
...
...
13:11:13 CST stdout: [k8s-master1]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
13:11:13 CST skipped: [k8s-master3]
13:11:13 CST success: [k8s-master1]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.122.249.154:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-12-11 13:18:34
#####################################################
13:18:36 CST skipped: [k8s-master3]
13:18:36 CST success: [k8s-master1]
13:18:36 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Note: possibly because of an incorrect setting in the deployment configuration file, or a bug in kk, the installation may fail with errors complaining that images cannot be pulled. If this happens, modify the containerd configuration file on every node and restart the containerd service; once the configuration has been updated, re-run ./kk create cluster -f config-k8s.yaml to continue the installation and the images will be pulled normally. The changes are shown below; see the inline comments:

# Edit the containerd configuration file /etc/containerd/config.toml on every node:
vim /etc/containerd/config.toml

version = 2
root = "/var/lib/containerd"
state = "/run/containerd"

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[ttrpc]
  address = ""
  uid = 0
  gid = 0

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[plugins]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "10.122.249.151/k8s/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        # Add the following mirror entry so that images for 10.122.249.151 are pulled from the private registry
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.122.249.151"]
          endpoint = ["http://10.122.249.151"]
        [plugins."io.containerd.grpc.v1.cri".registry.configs]
          [plugins."io.containerd.grpc.v1.cri".registry.configs."10.122.249.151".auth]
            username = "admin"
            password = "Harbor12345"
            [plugins."io.containerd.grpc.v1.cri".registry.configs."10.122.249.151".tls]
              ca_file = ""
              cert_file = ""
              key_file = ""
              insecure_skip_verify = false

# After editing, restart the containerd service
systemctl daemon-reload && systemctl restart containerd

# Then re-run the kk installation command
./kk create cluster -f config-k8s.yaml
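
Before re-running kk, it can be worth confirming on a node that containerd now resolves images through the private registry; a quick check (the pause image path comes from the sandbox_image setting above):

# Verify that containerd can pull from the private registry
crictl pull 10.122.249.151/k8s/pause:3.8

# Inspect the registry configuration containerd actually loaded
crictl info | grep -A 5 registry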

Check the cluster status after installation

# Check node status
kubectl get nodes

NAME          STATUS   ROLES           AGE   VERSION
k8s-master1   Ready    control-plane   11m   v1.26.5
k8s-master3   Ready    control-plane   11m   v1.26.5
k8s-node01    Ready    worker          11m   v1.26.5
k8s-node02    Ready    worker          11m   v1.26.5
k8s-node03    Ready    worker          11m   v1.26.5
k8s-node04    Ready    worker          11m   v1.26.5
k8s-node05    Ready    worker          11m   v1.26.5

# Check cluster resource usage
kubectl top nodes

NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1   292m         0%     2217Mi          1%
k8s-master3   245m         0%     1737Mi          1%
k8s-node01    164m         0%     1666Mi          1%
k8s-node02    290m         0%     1428Mi          1%
k8s-node03    203m         0%     981Mi           0%
k8s-node04    166m         0%     1078Mi          0%
k8s-node05    137m         0%     977Mi           0%


Troubleshooting

1. CoreDNS pods in an abnormal state

# Edit the CoreDNS ConfigMap
kubectl edit cm coredns -n kube-system

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        forward . /etc/resolv.conf {                      # delete this line
           max_concurrent 1000                            # delete this line
        }                                                 # delete this line
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2023-12-11T05:10:37Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "17582"
  uid: 8c6b1a65-a382-4805-9840-c20f0298e700
  
# After saving and exiting, delete the CoreDNS pod that is in CrashLoopBackOff; Kubernetes will automatically create a new pod and the status returns to normal
kubectl delete pod coredns-554d84bd95-f7tm2 -n kube-system
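
If several CoreDNS pods are in CrashLoopBackOff, they can also be deleted in one command by label instead of by name; this sketch assumes the standard k8s-app=kube-dns label that kubeadm applies to CoreDNS:

# Delete all CoreDNS pods at once; the Deployment recreates them
kubectl -n kube-system delete pod -l k8s-app=kube-dns

# Watch the new pods come back up
kubectl -n kube-system get pod -l k8s-app=kube-dns -w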

Log in to the management console

# Default console login information
Console: http://10.122.249.154:30880
Account: admin
Password: P@88w0rd

# Note: command to reset the admin password
kubectl patch users admin -p '{"spec":{"password":"P@88w0rd"}}' --type='merge' && kubectl annotate users admin iam.kubesphere.io/password-encrypted-

# Check user status
kubectl get users
NAME    EMAIL                 STATUS
admin   admin@kubesphere.io   Active

# View the ks-controller-manager logs
kubectl -n kubesphere-system logs -l app=ks-controller-manager
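
Besides the controller-manager logs, a quick way to confirm the console backend is healthy is to check that the core KubeSphere pods are all Running:

# ks-apiserver, ks-console and ks-controller-manager should all be Running
kubectl -n kubesphere-system get pod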

Login page

(screenshot)

Management console

(screenshot)

Configure nodelocaldns

To support custom DNS resolution, the nodelocaldns configuration needs to be modified:

# First, get the ClusterIP of CoreDNS
kubectl get svc -A | grep coredns
kube-system      coredns     ClusterIP   172.16.0.3    <none>     53/UDP,53/TCP,9153/TCP  7d1h

# Edit the nodelocaldns ConfigMap (the section starting around line 45) and substitute in the CoreDNS ClusterIP
kubectl edit cm nodelocaldns -n kube-system
    
    # Original content
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . /etc/resolv.conf
        prometheus :9253
    # Modified content
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . 172.16.0.3 {        # modified line
            force_tcp                 # modified line
        }                             # modified line
        prometheus :9253    
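
The edited ConfigMap only takes effect once the node-local DNS pods reload it; restarting the DaemonSet forces that. The DaemonSet is assumed to be named nodelocaldns like the ConfigMap, so confirm the name first:

# Confirm the DaemonSet name, then restart it so the new Corefile is picked up
kubectl -n kube-system get ds | grep nodelocaldns
kubectl -n kube-system rollout restart ds nodelocaldns
kubectl -n kube-system rollout status ds nodelocaldns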

Configure custom domain resolution in CoreDNS

Add a DNS entry for the private registry domain: 10.122.249.151 harbor.k8s.hebei

kubectl edit configmap coredns -n kube-system

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        hosts {                                           # added block
           10.122.249.151 harbor.k8s.hebei                # custom host entry added
           fallthrough                                    # added block
        }                                                 # added block
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2023-12-11T05:10:37Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "1516808"
  uid: 8c6b1a65-a382-4805-9840-c20f0298e700
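
To verify the new hosts entry, resolve the registry domain from inside the cluster. The busybox image path below is only an example for an offline environment; substitute any image in the private registry that contains nslookup:

# Resolve harbor.k8s.hebei from a temporary pod
kubectl run dns-test --rm -it --restart=Never \
  --image=10.122.249.151/library/busybox:latest -- nslookup harbor.k8s.hebei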

Renewing cluster certificates

The cluster root certificate ca.crt is valid for 10 years by default, while the other certificates are valid for one year, so they need to be renewed periodically before they expire. Renew the cluster certificates with kk as follows:

# Root certificate validity
[root@k8s-master1 install_k8s]# openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text | grep -A 2 Validity
        Validity
            Not Before: Dec 11 05:10:18 2023 GMT
            Not After : Dec  8 05:10:18 2033 GMT          # valid for 10 years
            
# Check the cluster certificate expiration dates
[root@k8s-master1 install_k8s]# ./kk certs check-expiration


 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

07:42:45 CST [GreetingsModule] Greetings
07:42:45 CST message: [k8s-master1]
Greetings, KubeKey!
07:42:45 CST success: [k8s-master1]
07:42:45 CST [CheckCertsModule] Check cluster certs
07:42:46 CST success: [k8s-master1]
07:42:46 CST [PrintClusterCertsModule] Display cluster certs form
CERTIFICATE                    EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   NODE
apiserver.crt                  Dec 10, 2024 05:10 UTC   358d            ca                      k8s-master1
apiserver-kubelet-client.crt   Dec 10, 2024 05:10 UTC   358d            ca                      k8s-master1
front-proxy-client.crt         Dec 10, 2024 05:10 UTC   358d            front-proxy-ca          k8s-master1
admin.conf                     Dec 10, 2024 05:10 UTC   358d                                    k8s-master1
controller-manager.conf        Dec 10, 2024 05:10 UTC   358d                                    k8s-master1
scheduler.conf                 Dec 10, 2024 05:10 UTC   358d                                    k8s-master1

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   NODE
ca.crt                  Dec 08, 2033 05:10 UTC   9y              k8s-master1    # valid for 10 years
front-proxy-ca.crt      Dec 08, 2033 05:10 UTC   9y              k8s-master1    # valid for 10 years
07:42:46 CST success: [LocalHost]
07:42:46 CST Pipeline[CheckCertsPipeline] execute successfully

# Renew the certificates
[root@k8s-master1 install_k8s]# ./kk certs renew -f config-k8s.yaml


 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

07:49:46 CST [GreetingsModule] Greetings
07:49:46 CST message: [k8s-node05]
Greetings, KubeKey!
07:49:46 CST message: [k8s-node01]
Greetings, KubeKey!
07:49:47 CST message: [k8s-master1]
Greetings, KubeKey!
07:49:47 CST message: [k8s-node03]
Greetings, KubeKey!
07:49:47 CST message: [k8s-master2]
Greetings, KubeKey!
07:49:47 CST message: [k8s-master3]
Greetings, KubeKey!
07:49:48 CST message: [k8s-node04]
Greetings, KubeKey!
07:49:48 CST message: [k8s-node02]
Greetings, KubeKey!
07:49:48 CST success: [k8s-node05]
07:49:48 CST success: [k8s-node01]
07:49:48 CST success: [k8s-master1]
07:49:48 CST success: [k8s-node03]
07:49:48 CST success: [k8s-master2]
07:49:48 CST success: [k8s-master3]
07:49:48 CST success: [k8s-node04]
07:49:48 CST success: [k8s-node02]
07:49:48 CST [RenewCertsModule] Renew control-plane certs
07:49:48 CST stdout: [k8s-master1]
v1.26.5
07:49:51 CST stdout: [k8s-master2]
v1.26.5
07:49:54 CST stdout: [k8s-master3]
v1.26.5
07:49:57 CST success: [k8s-master1]
07:49:57 CST success: [k8s-master2]
07:49:57 CST success: [k8s-master3]
07:49:57 CST [RenewCertsModule] Copy admin.conf to ~/.kube/config
07:49:57 CST success: [k8s-master3]
07:49:57 CST success: [k8s-master2]
07:49:57 CST success: [k8s-master1]
07:49:57 CST [CheckCertsModule] Check cluster certs
07:49:58 CST success: [k8s-master2]
07:49:58 CST success: [k8s-master3]
07:49:58 CST success: [k8s-master1]
07:49:58 CST [PrintClusterCertsModule] Display cluster certs form
CERTIFICATE                    EXPIRES           RESIDUAL TIME   CERTIFICATE AUTHORITY   NODE
apiserver.crt                  Dec 16, 2024 23:49 UTC   364d  ca                      k8s-master1
apiserver-kubelet-client.crt   Dec 16, 2024 23:49 UTC   364d  ca                      k8s-master1
front-proxy-client.crt         Dec 16, 2024 23:49 UTC   364d  front-proxy-ca          k8s-master1
admin.conf                     Dec 16, 2024 23:49 UTC   364d                          k8s-master1
controller-manager.conf        Dec 16, 2024 23:49 UTC   364d                          k8s-master1
scheduler.conf                 Dec 16, 2024 23:49 UTC   364d                          k8s-master1
apiserver.crt                  Dec 16, 2024 23:49 UTC   364d  ca                      k8s-master2
apiserver-kubelet-client.crt   Dec 16, 2024 23:49 UTC   364d  ca                      k8s-master2
front-proxy-client.crt         Dec 16, 2024 23:49 UTC   364d  front-proxy-ca          k8s-master2
admin.conf                     Dec 16, 2024 23:49 UTC   364d                          k8s-master2
controller-manager.conf        Dec 16, 2024 23:49 UTC   364d                          k8s-master2
scheduler.conf                 Dec 16, 2024 23:49 UTC   364d                          k8s-master2
apiserver.crt                  Dec 16, 2024 23:49 UTC   364d  ca                      k8s-master3
apiserver-kubelet-client.crt   Dec 16, 2024 23:49 UTC   364d  ca                      k8s-master3
front-proxy-client.crt         Dec 16, 2024 23:49 UTC   364d  front-proxy-ca          k8s-master3
admin.conf                     Dec 16, 2024 23:49 UTC   364d                          k8s-master3
controller-manager.conf        Dec 16, 2024 23:49 UTC   364d                          k8s-master3
scheduler.conf                 Dec 16, 2024 23:49 UTC   364d                          k8s-master3

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   NODE
ca.crt                  Dec 08, 2033 05:10 UTC   9y              k8s-master1  # valid for 10 years
front-proxy-ca.crt      Dec 08, 2033 05:10 UTC   9y              k8s-master1  # valid for 10 years
ca.crt                  Dec 08, 2033 05:10 UTC   9y              k8s-master2  # valid for 10 years
front-proxy-ca.crt      Dec 08, 2033 05:10 UTC   9y              k8s-master2  # valid for 10 years
ca.crt                  Dec 08, 2033 05:10 UTC   9y              k8s-master3  # valid for 10 years
front-proxy-ca.crt      Dec 08, 2033 05:10 UTC   9y              k8s-master3  # valid for 10 years
07:49:58 CST success: [LocalHost]
07:49:58 CST Pipeline[RenewCertsPipeline] execute successfully

etcd certificate validity

[root@k8s-master1 install_k8s]# openssl x509 -in /etc/ssl/etcd/ssl/ca.pem -noout -text | grep -A 2 Validity
        Validity
            Not Before: Nov 30 03:28:05 2023 GMT
            Not After : Nov 27 03:28:05 2033 GMT           # valid for 10 years
         
[root@k8s-master1 install_k8s]# openssl x509 -in /etc/ssl/etcd/ssl/admin-k8s-master1.pem -noout -text | grep -A 2 Validity
        Validity
            Not Before: Nov 30 03:28:05 2023 GMT
            Not After : Dec  8 05:09:25 2033 GMT           # valid for 10 years
         
[root@k8s-master1 install_k8s]# openssl x509 -in /etc/ssl/etcd/ssl/member-k8s-master1.pem -noout -text | grep -A 2 Validity
        Validity
            Not Before: Nov 30 03:28:05 2023 GMT
            Not After : Dec  8 05:09:25 2033 GMT           # valid for 10 years
         
[root@k8s-master1 install_k8s]# openssl x509 -in /etc/ssl/etcd/ssl/node-k8s-master1.pem -noout -text | grep -A 2 Validity
        Validity
            Not Before: Nov 30 03:28:05 2023 GMT
            Not After : Dec  8 05:09:25 2033 GMT          # valid for 10 years
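
All of these expiration dates can also be reviewed in one pass with a small openssl loop on a master node; a minimal sketch (it skips the private-key files under the etcd cert directory):

# Print the expiry date of every certificate under /etc/kubernetes/pki and /etc/ssl/etcd/ssl
for cert in /etc/kubernetes/pki/*.crt /etc/ssl/etcd/ssl/*.pem; do
    case "${cert}" in *-key.pem) continue ;; esac    # skip private keys
    echo "== ${cert}"
    openssl x509 -in "${cert}" -noout -enddate
done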

Installing and deploying the Rook-Ceph cluster

Configure time synchronization and the timezone

# set timezone
timedatectl set-timezone Asia/Shanghai

# set ntp
echo "*/30 * * * * /usr/sbin/ntpdate 10.100.48.1 10.100.48.42 10.122.1.6 &> /dev/null" >> /var/spool/cron/root

Disk preparation

Ceph manages raw disks; the beginning of each disk must not contain any leftover data from previous use.

# List the existing disk devices
lsblk

NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdd                   8:48   0   1.8T  0 disk
sdb                   8:16   0   1.8T  0 disk
sdc                   8:32   0  15.6T  0 disk
sda                   8:0    0 837.9G  0 disk
├─sda2                8:2    0 837.4G  0 part
│ ├─vg_root-home_lv 253:1    0    10G  0 lvm  /home
│ ├─vg_root-nmon_lv 253:4    0    10G  0 lvm  /nmon
│ ├─vg_root-var_lv  253:2    0   500G  0 lvm  /var
│ ├─vg_root-root_lv 253:0    0    50G  0 lvm  /
│ └─vg_root-tmp_lv  253:3    0    10G  0 lvm  /tmp
└─sda1                8:1    0   500M  0 part /boot

# Note: if a server disk was previously partitioned, managed by LVM, or formatted with a filesystem, run the following dd command to erase the old traces; otherwise Ceph cannot recognize the disk and will not create an OSD on it
dd if=/dev/zero of=/dev/sdc bs=1M count=20480 status=progress
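
If dd alone is not enough (for example when old LVM metadata or leftover Ceph logical volumes are still present), a more thorough cleanup along the lines of Rook's cleanup documentation looks like the sketch below; adjust the device name and make absolutely sure you are wiping the right disk:

# Thorough cleanup of a previously used disk (DANGEROUS: destroys all data on the device)
DISK=/dev/sdc
sgdisk --zap-all "${DISK}"                    # wipe GPT/MBR partition tables (gdisk package)
dd if=/dev/zero of="${DISK}" bs=1M count=100 oflag=direct,dsync status=progress
wipefs -a "${DISK}"                           # remove remaining filesystem signatures
# Remove leftover ceph device-mapper entries, if any
ls /dev/mapper/ceph-* 2>/dev/null | xargs -r -I{} dmsetup remove {}
rm -rf /dev/ceph-*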

Upload the images to the private registry

# Load the image archive into the local Docker environment
docker load -i rook-1.9-ceph-16.2.10-images.tgz

# Retag the images
docker tag rook/ceph:v1.9.13 10.122.249.151/rook/ceph:v1.9.13
docker tag quay.io/ceph/ceph:v16.2.10 10.122.249.151/rook/ceph/ceph:v16.2.10
docker tag quay.io/csiaddons/k8s-sidecar:v0.4.0 10.122.249.151/rook/csiaddons/k8s-sidecar:v0.4.0
docker tag quay.io/cephcsi/cephcsi:v3.6.2 10.122.249.151/rook/cephcsi/cephcsi:v3.6.2
docker tag registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1 10.122.249.151/rook/sig-storage/csi-snapshotter:v6.0.1
docker tag registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1 10.122.249.151/rook/sig-storage/csi-node-driver-registrar:v2.5.1
docker tag registry.k8s.io/sig-storage/nfsplugin:v4.0.0 10.122.249.151/rook/sig-storage/nfsplugin:v4.0.0
docker tag quay.io/csiaddons/volumereplication-operator:v0.3.0 10.122.249.151/rook/csiaddons/volumereplication-operator:v0.3.0
docker tag registry.k8s.io/sig-storage/csi-resizer:v1.4.0 10.122.249.151/rook/sig-storage/csi-resizer:v1.4.0
docker tag registry.k8s.io/sig-storage/csi-provisioner:v3.1.0 10.122.249.151/rook/sig-storage/csi-provisioner:v3.1.0
docker tag registry.k8s.io/sig-storage/csi-attacher:v3.4.0 10.122.249.151/rook/sig-storage/csi-attacher:v3.4.0

# Create the rook project in the private registry
curl -u "admin:Harbor12345" -X POST -H "Content-Type: application/json" "http://10.122.249.151/api/v2.0/projects/" -d "{ \"project_name\": \"rook\", \"public\": true}" -k

# Log in to the private registry
docker login -u admin -p Harbor12345 10.122.249.151

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# Push the images to the private registry
docker push 10.122.249.151/rook/ceph:v1.9.13
docker push 10.122.249.151/rook/ceph/ceph:v16.2.10
docker push 10.122.249.151/rook/csiaddons/k8s-sidecar:v0.4.0
docker push 10.122.249.151/rook/cephcsi/cephcsi:v3.6.2
docker push 10.122.249.151/rook/sig-storage/csi-snapshotter:v6.0.1
docker push 10.122.249.151/rook/sig-storage/csi-node-driver-registrar:v2.5.1
docker push 10.122.249.151/rook/sig-storage/nfsplugin:v4.0.0
docker push 10.122.249.151/rook/csiaddons/volumereplication-operator:v0.3.0
docker push 10.122.249.151/rook/sig-storage/csi-resizer:v1.4.0
docker push 10.122.249.151/rook/sig-storage/csi-provisioner:v3.1.0
docker push 10.122.249.151/rook/sig-storage/csi-attacher:v3.4.0
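
The tag-and-push steps above can also be scripted; a minimal sketch that loops over the original image names and pushes them under the 10.122.249.151/rook/ prefix (extend the list so it matches all of the images actually loaded):

# Retag and push the rook/ceph related images in one loop
REGISTRY=10.122.249.151/rook
for img in \
    quay.io/ceph/ceph:v16.2.10 \
    quay.io/cephcsi/cephcsi:v3.6.2 \
    registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1; do
    target="${REGISTRY}/${img#*/}"   # strip the source registry host, keep the repository path
    docker tag "${img}" "${target}"
    docker push "${target}"
done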

Edit the rook-ceph deployment files

# Edit the operator.yaml file; download it from the matching Rook release on GitHub
# The example files are in the https://github.com/rook/rook/tree/master/deploy/examples directory

 92   # The default version of CSI supported by Rook will be started. To change the version
 93   # of the CSI driver to something other than what is officially supported, change
 94   # these images to the desired release of the CSI driver.
 95   # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.6.2"
 96   # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1"
 97   # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.4.0"
 98   # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v3.1.0"
 99   # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v6.0.1"
100   # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v3.4.0"
101   # ROOK_CSI_NFS_IMAGE: "registry.k8s.io/sig-storage/nfsplugin:v4.0.0"
102   ROOK_CSI_CEPH_IMAGE: "10.122.249.151/rook/cephcsi/cephcsi:v3.6.2"
103   ROOK_CSI_REGISTRAR_IMAGE: "10.122.249.151/rook/sig-storage/csi-node-driver-registrar:v2.5.1"
104   ROOK_CSI_RESIZER_IMAGE: "10.122.249.151/rook/sig-storage/csi-resizer:v1.4.0"
105   ROOK_CSI_PROVISIONER_IMAGE: "10.122.249.151/rook/sig-storage/csi-provisioner:v3.1.0"
106   ROOK_CSI_SNAPSHOTTER_IMAGE: "10.122.249.151/rook/sig-storage/csi-snapshotter:v6.0.1"
107   ROOK_CSI_ATTACHER_IMAGE: "10.122.249.151/rook/sig-storage/csi-attacher:v3.4.0"
108   ROOK_CSI_NFS_IMAGE: "10.122.249.151/rook/sig-storage/nfsplugin:v4.0.0"
...
...
447   # CSI_VOLUME_REPLICATION_IMAGE: "10.122.249.151/rook/csiaddons/volumereplication-operator:v0.3.0"
448   # Enable the csi addons sidecar.
449   CSI_ENABLE_CSIADDONS: "false"
450   # ROOK_CSIADDONS_IMAGE: "10.122.249.151/rook/csiaddons/k8s-sidecar:v0.4.0"
...
...
480       containers:
481         - name: rook-ceph-operator
482           # image: rook/ceph:v1.9.13
483           image: 10.122.249.151/rook/ceph:v1.9.13

# Edit the cluster.yaml file (note: keep this cluster.yaml file; it is needed again when maintaining or expanding the rook-ceph cluster)
 24     # image: quay.io/ceph/ceph:v16.2.10
 25     image: 10.122.249.151/rook/ceph/ceph:v16.2.10
...
...
226   storage: # cluster level storage configuration and selection
227     # useAllNodes: true
228     useAllNodes: false
229     # useAllDevices: true
230     useAllDevices: false
...
...
252     # when onlyApplyOSDPlacement is false, will merge both placement.All() and placement.osd
253     nodes:
254       - name: "k8s-node01"
255         devices:
256           - name: "sdc"
257       - name: "k8s-node02"
258         devices:
259           - name: "sdc"
260       - name: "k8s-node03"
261         devices:
262           - name: "sdb"

Deploy the rook-ceph environment

# Create the runtime resources
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Create the rook-ceph cluster
kubectl create -f cluster.yaml

# Watch the rook-ceph pod status
watch  kubectl get pod -n rook-ceph

Every 2.0s: kubectl get pod -n rook-ceph                          Tue Dec 12 15:04:35 2023

NAME                                                   READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-d99qj                                 2/2     Running     0          9m31s
csi-cephfsplugin-ncjr2                                 2/2     Running     0          9m31s
csi-cephfsplugin-provisioner-59b9d8f8c4-bmt4d          5/5     Running     0          9m31s
csi-cephfsplugin-provisioner-59b9d8f8c4-tk4ln          5/5     Running     0          9m31s
csi-cephfsplugin-w6596                                 2/2     Running     0          9m31s
csi-cephfsplugin-wdgc4                                 2/2     Running     0          9m31s
csi-cephfsplugin-zsbzs                                 2/2     Running     0          9m31s
csi-rbdplugin-jlbk7                                    2/2     Running     0          9m31s
csi-rbdplugin-k4mfq                                    2/2     Running     0          9m31s
csi-rbdplugin-provisioner-59bc4f5fd7-hq4wx             5/5     Running     0          9m31s
csi-rbdplugin-provisioner-59bc4f5fd7-k66kx             5/5     Running     0          9m31s
csi-rbdplugin-sgmwn                                    2/2     Running     0          9m31s
csi-rbdplugin-tmzhz                                    2/2     Running     0          9m31s
csi-rbdplugin-xfx84                                    2/2     Running     0          9m31s
rook-ceph-crashcollector-k8s-node01-9c4bdcf48-5dmgm    1/1     Running     0          7m45s
rook-ceph-crashcollector-k8s-node02-745cf87d97-nfb5w   1/1     Running     0          7m9s
rook-ceph-crashcollector-k8s-node03-75c68bb54-wpr7k    1/1     Running     0          7m56s
rook-ceph-crashcollector-k8s-node05-7f86695b78-7m8hs   1/1     Running     0          8m2s
rook-ceph-mgr-a-dc76ff88d-zxgx9                        2/2     Running     0          8m2s
rook-ceph-mgr-b-57d6bc44b6-9h2ws                       2/2     Running     0          8m1s
rook-ceph-mon-a-7f6df64956-shdc9                       1/1     Running     0          9m25s
rook-ceph-mon-b-5c9f8d4756-vdgrb                       1/1     Running     0          8m26s
rook-ceph-mon-c-c86fb777b-kmg9d                        1/1     Running     0          8m15s
rook-ceph-operator-fb8764b98-5cthn                     1/1     Running     0          10m
rook-ceph-osd-0-7d467bdb45-td5qf                       1/1     Running     0          7m9s
rook-ceph-osd-1-bcb79ff46-28pv9                        1/1     Running     0          7m8s
rook-ceph-osd-2-854858dc4f-lf7vv                       1/1     Running     0          7m7s
rook-ceph-osd-prepare-k8s-node01-9hkvw                 0/1     Completed   0          6m44s
rook-ceph-osd-prepare-k8s-node02-nhlpw                 0/1     Completed   0          6m41s
rook-ceph-osd-prepare-k8s-node03-4thbk                 0/1     Completed   0          6m38s

# Watch the Ceph cluster creation status
kubectl get cephcluster -n rook-ceph rook-ceph -w

NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE         MESSAGE                 HEALTH   EXTERNAL
rook-ceph   /var/lib/rook     3          4s    Progressing   Configuring Ceph Mons
rook-ceph   /var/lib/rook     3          92s   Progressing   Configuring Ceph Mgr(s)
rook-ceph   /var/lib/rook     3          115s   Progressing   Configuring Ceph OSDs
rook-ceph   /var/lib/rook     3          2m25s   Progressing   Processing OSD 0 on node "k8s-node02"
rook-ceph   /var/lib/rook     3          2m26s   Progressing   Processing OSD 1 on node "k8s-node01"
rook-ceph   /var/lib/rook     3          2m27s   Progressing   Processing OSD 2 on node "k8s-node03"
rook-ceph   /var/lib/rook     3          2m27s   Ready         Cluster created successfully
rook-ceph   /var/lib/rook     3          2m27s   Progressing   Detecting Ceph version
rook-ceph   /var/lib/rook     3          2m28s   Ready         Cluster created successfully            HEALTH_OK
rook-ceph   /var/lib/rook     3          2m30s   Progressing   Configuring the Ceph cluster            HEALTH_OK
rook-ceph   /var/lib/rook     3          2m30s   Progressing   Configuring Ceph Mons                   HEALTH_OK
rook-ceph   /var/lib/rook     3          2m45s   Progressing   Configuring Ceph Mgr(s)                 HEALTH_OK
rook-ceph   /var/lib/rook     3          2m46s   Progressing   Configuring Ceph OSDs                   HEALTH_OK
rook-ceph   /var/lib/rook     3          2m58s   Progressing   Processing OSD 0 on node "k8s-node02"   HEALTH_OK
rook-ceph   /var/lib/rook     3          2m59s   Progressing   Processing OSD 1 on node "k8s-node01"   HEALTH_OK
rook-ceph   /var/lib/rook     3          3m1s    Progressing   Processing OSD 2 on node "k8s-node03"   HEALTH_OK
rook-ceph   /var/lib/rook     3          3m3s    Ready         Cluster created successfully            HEALTH_OK
rook-ceph   /var/lib/rook     3          3m28s   Ready         Cluster created successfully            HEALTH_OK
rook-ceph   /var/lib/rook     3          3m29s   Ready         Cluster created successfully            HEALTH_OK
rook-ceph   /var/lib/rook     3          4m30s   Ready         Cluster created successfully            HEALTH_OK
rook-ceph   /var/lib/rook     3          5m31s   Ready         Cluster created successfully            HEALTH_OK
rook-ceph   /var/lib/rook     3          6m32s   Ready         Cluster created successfully            HEALTH_OK

Troubleshooting

# If OSDs cannot be created because a disk is not recognized, the disk has probably been used before (partitions, LVM, filesystem); delete the partitions with a partitioning tool and wipe the beginning of the disk with dd
parted /dev/sdc
dd if=/dev/zero of=/dev/sdc bs=1M count=20480 status=progress

# After tearing down a deployment, delete the entire /var/lib/rook/ directory on every node; otherwise the configuration from the previous deployment is remembered
rm -rf /var/lib/rook/

Deploy the toolbox and verify the rook-ceph cluster status

# First deploy Rook's built-in Ceph management toolbox container
kubectl create -f toolbox.yaml

# Get the toolbox pod name
kubectl get pod -n rook-ceph | grep tools
rook-ceph-tools-558df6699f-d5kqs                       1/1     Running   0          26s

# Enter the toolbox container and check the Ceph cluster status
kubectl exec -it rook-ceph-tools-558df6699f-d5kqs -n rook-ceph -- bash

[rook@rook-ceph-tools-558df6699f-d5kqs /]$ ceph -s
  cluster:
    id:     9efa23f4-5897-4ad6-bf73-c8cf2114dd28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 10m)
    mgr: a(active, since 9m), standbys: b
    osd: 3 osds: 3 up (since 9m), 3 in (since 10m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   15 MiB used, 47 TiB / 47 TiB avail
    pgs:     1 active+clean

[rook@rook-ceph-tools-558df6699f-d5kqs /]$
[rook@rook-ceph-tools-558df6699f-d5kqs /]$ ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL    USED  RAW USED  %RAW USED
hdd    47 TiB  47 TiB  15 MiB    15 MiB          0
TOTAL  47 TiB  47 TiB  15 MiB    15 MiB          0

--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1    1     0 B        0   0 B      0     15 TiB
[rook@rook-ceph-tools-558df6699f-d5kqs /]$
[rook@rook-ceph-tools-558df6699f-d5kqs /]$ rados df
POOL_NAME              USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS   RD  WR_OPS   WR  USED COMPR  UNDER COMPR
device_health_metrics   0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B

total_objects    0
total_used       15 MiB
total_avail      47 TiB
total_space      47 TiB

Configure the dashboard

# Deploy the dashboard
kubectl create -f dashboard-external-https.yaml

# Get the dashboard login password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
Ge}RJWx=[xxxxxxxx}wY*
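
The dashboard is exposed through the service created by dashboard-external-https.yaml; the service name below is the one used in Rook's example manifest, so confirm it with kubectl get svc if yours differs:

# Find the NodePort of the externally exposed dashboard service
kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https
kubectl -n rook-ceph get svc | grep dashboard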

Log in to the dashboard

User: admin
Password: Ge}RJWx=[xxxxxxxx}wY*
http://10.122.249.155:8443/

(screenshot)

Dashboard UI

(screenshot)

Using Rook-Ceph from Kubernetes

Create an RBD block storage StorageClass

# Edit the rbd-storageclass.yaml deployment file
vim rbd-storageclass.yaml

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
    requireSafeReplicaSize: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
allowVolumeExpansion: true
reclaimPolicy: Delete

# Create the RBD StorageClass
kubectl create -f rbd-storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

# Check the created StorageClasses
kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)   openebs.io/local             Delete          WaitForFirstConsumer   false                  2d18h
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate              true                   31s

# Check the CephBlockPool
kubectl get cephblockpools -n rook-ceph
NAME          PHASE
replicapool   Ready

Create a CephFS shared filesystem StorageClass

# Edit the mds-filesystem.yaml deployment file
vim mds-filesystem.yaml

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode:
        none
  dataPools:
    - name: replicated
      failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode:
          none
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              topologyKey: topology.kubernetes.io/zone
    priorityClassName: system-cluster-critical
    livenessProbe:
      disabled: false
    startupProbe:
      disabled: false

# Edit the cephfs-storageclass.yaml deployment file
vim cephfs-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-replicated
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:

# Creating CephFS requires deploying the MDS service first; it handles the filesystem metadata
kubectl create -f mds-filesystem.yaml

[root@k8s-master1 install_rook]# kubectl get pod -A | grep mds
rook-ceph      rook-ceph-mds-myfs-a-654947c59b-sgnhf            0/1     Running     0          18s
rook-ceph      rook-ceph-mds-myfs-b-97d984d48-j57qp             0/1     Running     0          17s

# Create the StorageClass
kubectl apply -f cephfs-storageclass.yaml

[root@k8s-master1 install_rook]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)   openebs.io/local                Delete          WaitForFirstConsumer   false                  2d19h
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate              true                   19m
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate              true                   5s

# Enter the Ceph toolbox and check the Ceph status
[root@k8s-master1 install_rook]# kubectl exec -it rook-ceph-tools-558df6699f-d5kqs -n rook-ceph -- bash

[rook@rook-ceph-tools-558df6699f-d5kqs /]$ ceph -s
  cluster:
    id:     9efa23f4-5897-4ad6-bf73-c8cf2114dd28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 41h)
    mgr: a(active, since 41h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 41h), 3 in (since 41h)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 26 objects, 2.3 KiB
    usage:   18 MiB used, 47 TiB / 47 TiB avail
    pgs:     97 active+clean

  io:
    client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
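
The rook-cephfs class can be smoke-tested the same way the RBD class is tested below; a minimal sketch that creates a shared PVC (the name and size are arbitrary):

# Create a test PVC on the CephFS StorageClass and check that it binds
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc cephfs-test-pvc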

Deploy a test deployment

# Edit the application YAML
vim nginx-deploy-rbd.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy-rbd
  labels:
    app: nginx-deploy-rbd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy-rbd
  template:
    metadata:
      labels:
        app: nginx-deploy-rbd
    spec:
      containers:
        - image: 10.122.249.151/library/nginx:1.14-alpine
          name: nginx
          volumeMounts:
          - name: data
            mountPath: /usr/share/nginx/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginx-rbd-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-rbd-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi              # <----------- request a 200Mi block-storage PV

# Deploy the app
[root@k8s-master1 test]# kubectl create -f nginx-deploy-rbd.yaml
deployment.apps/nginx-deploy-rbd created

# Check the app status
[root@k8s-master1 test]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
nginx-deploy-rbd-66bb689dc9-mkw5l   1/1     Running   0          7s    10.240.96.55   k8s-node02   <none>           <none>

kubectl exec -it nginx-deploy-rbd-66bb689dc9-mkw5l -- sh
/ # echo "Hello Nginx rbd!" > /usr/share/nginx/html/index.html
/ # exit
# Test nginx
[root@k8s-master1 test]# curl 10.240.96.55
Hello Nginx rbd!

# Check the PVC and PV created on Ceph
[root@k8s-master1 test]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
nginx-rbd-pvc   Bound    pvc-b4bf3ade-89f2-43fb-a4a5-9ee7b57a9b0c   200Mi      RWO            rook-ceph-block   53m

[root@k8s-master1 test]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                              STORAGECLASS      REASON   AGE
pvc-b1fedbcd-84f8-4588-8307-978108dfce8e   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   local                      2d22h
pvc-b4bf3ade-89f2-43fb-a4a5-9ee7b57a9b0c   200Mi      RWO            Delete           Bound    default/nginx-rbd-pvc                                             rook-ceph-block            56m
pvc-cc9ac708-4f37-4f27-aba4-e3402644ad75   20Gi       RWO            Delete           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   local                      2d22h
pvc-f9aea978-e7ed-4b11-9509-56d9cde8ab8d   20Gi       RWO            Delete           Bound    kubesphere-system/minio                                           local                      24h

# Enter the Ceph toolbox and check how the storage was created
kubectl exec -it rook-ceph-tools-558df6699f-d5kqs -n rook-ceph -- sh

sh-4.4$ ceph osd lspools
1 device_health_metrics
2 replicapool
3 myfs-metadata
4 myfs-replicated

sh-4.4$ rbd list replicapool
csi-vol-7cff583b-9a2a-11ee-b8dd-728314e12c31

sh-4.4$ rbd info -p replicapool csi-vol-7cff583b-9a2a-11ee-b8dd-728314e12c31
rbd image 'csi-vol-7cff583b-9a2a-11ee-b8dd-728314e12c31':
        size 200 MiB in 50 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 246a219f96cce
        block_name_prefix: rbd_data.246a219f96cce
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Thu Dec 14 02:42:57 2023
        access_timestamp: Thu Dec 14 02:42:57 2023
        modify_timestamp: Thu Dec 14 02:42:57 2023
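
To confirm the data really lives on the RBD volume rather than inside the container, recreate the pod and read the file again; the pod name and IP change after the restart, so look them up first. A sketch:

# Delete the nginx pod; the ReplicaSet recreates it and re-attaches the RBD volume (this can take a moment)
kubectl delete pod -l app=nginx-deploy-rbd
kubectl get pod -l app=nginx-deploy-rbd -o wide

# curl the new pod IP; it should still return "Hello Nginx rbd!"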

Add storage nodes or disks

Edit the cluster.yaml file

vim cluster.yaml

252     # when onlyApplyOSDPlacement is false, will merge both placement.All() and placement.osd
253     nodes:
254       - name: "k8s-node01"
255         devices:
256           - name: "sdc"
257           - name: "sda"   # ssd, newly added disk
258           - name: "sdd"   # ssd, newly added disk
259       - name: "k8s-node02"
260         devices:
261           - name: "sdc"
262           - name: "sdb"   # ssd, newly added disk
263           - name: "sdd"   # ssd, newly added disk
264       - name: "k8s-node03"
265         devices:
266           - name: "sdb"
267           - name: "sdc"   # ssd, newly added disk
268           - name: "sdd"   # ssd, newly added disk

Apply the new configuration and monitor OSD creation

[root@k8s-master1 install_rook]# kubectl apply -f cluster.yaml

[root@k8s-master1 install_rook]# watch  kubectl get pod -n rook-ceph

Every 2.0s: kubectl get pod -n rook-ceph                                                                                                  Mon Dec 18 07:34:19 2023

NAME                                                   READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-d99qj                                 2/2     Running     0          5d16h
csi-cephfsplugin-ncjr2                                 2/2     Running     0          5d16h
csi-cephfsplugin-provisioner-59b9d8f8c4-bmt4d          5/5     Running     0          5d16h
csi-cephfsplugin-provisioner-59b9d8f8c4-tk4ln          5/5     Running     0          5d16h
csi-cephfsplugin-w6596                                 2/2     Running     0          5d16h
csi-cephfsplugin-wdgc4                                 2/2     Running     0          5d16h
csi-cephfsplugin-zsbzs                                 2/2     Running     0          5d16h
csi-rbdplugin-jlbk7                                    2/2     Running     0          5d16h
csi-rbdplugin-k4mfq                                    2/2     Running     0          5d16h
csi-rbdplugin-provisioner-59bc4f5fd7-hq4wx             5/5     Running     0          5d16h
csi-rbdplugin-provisioner-59bc4f5fd7-k66kx             5/5     Running     0          5d16h
csi-rbdplugin-sgmwn                                    2/2     Running     0          5d16h
csi-rbdplugin-tmzhz                                    2/2     Running     0          5d16h
csi-rbdplugin-xfx84                                    2/2     Running     0          5d16h
rook-ceph-crashcollector-k8s-node01-9c4bdcf48-5dmgm    1/1     Running     0          5d16h
rook-ceph-crashcollector-k8s-node02-745cf87d97-nfb5w   1/1     Running     0          5d16h
rook-ceph-crashcollector-k8s-node03-75c68bb54-wpr7k    1/1     Running     0          5d16h
rook-ceph-crashcollector-k8s-node04-fddfc8f4f-fv8sj    1/1     Running     0          3d23h
rook-ceph-crashcollector-k8s-node05-7f86695b78-7m8hs   1/1     Running     0          5d16h
rook-ceph-mds-myfs-a-654947c59b-sgnhf                  1/1     Running     0          3d23h
rook-ceph-mds-myfs-b-97d984d48-j57qp                   1/1     Running     0          3d23h
rook-ceph-mgr-a-dc76ff88d-zxgx9                        2/2     Running     0          5d16h
rook-ceph-mgr-b-57d6bc44b6-9h2ws                       2/2     Running     0          5d16h
rook-ceph-mon-a-7f6df64956-shdc9                       1/1     Running     0          5d16h
rook-ceph-mon-b-5c9f8d4756-vdgrb                       1/1     Running     0          5d16h
rook-ceph-mon-c-c86fb777b-kmg9d                        1/1     Running     0          5d16h
rook-ceph-operator-fb8764b98-5cthn                     1/1     Running     0          5d16h
rook-ceph-osd-0-7d467bdb45-td5qf                       1/1     Running     0          5d16h
rook-ceph-osd-1-bcb79ff46-28pv9                        1/1     Running     0          5d16h
rook-ceph-osd-2-854858dc4f-lf7vv                       1/1     Running     0          5d16h
rook-ceph-osd-3-8487567b86-fvvtx                       1/1     Running     0          14m
rook-ceph-osd-4-79bfc868d5-hxckx                       1/1     Running     0          14m
rook-ceph-osd-5-5df789d756-jw7hv                       1/1     Running     0          14m
rook-ceph-osd-6-85cf876ff-m69lf                        1/1     Running     0          3m40s
rook-ceph-osd-7-5c59fcb744-mz7wb                       1/1     Running     0          3m38s
rook-ceph-osd-8-6459cddc97-ptjks                       1/1     Running     0          3m34s
rook-ceph-osd-prepare-k8s-node01-42nwq                 0/1     Completed   0          3m52s
rook-ceph-osd-prepare-k8s-node02-nnqlt                 0/1     Completed   0          3m49s
rook-ceph-osd-prepare-k8s-node03-tpmkt                 0/1     Completed   0          3m46s
rook-ceph-tools-558df6699f-d5kqs                       1/1     Running     0          5d16h

Check Ceph with the toolbox

[rook@rook-ceph-tools-558df6699f-d5kqs /]$ ceph -s
  cluster:
    id:     9efa23f4-5897-4ad6-bf73-c8cf2114dd28
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 5d)
    mgr: a(active, since 5d), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 9 osds: 9 up (since 15s), 9 in (since 32s)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 42 objects, 4.6 MiB
    usage:   106 MiB used, 57 TiB / 57 TiB avail
    pgs:     97 active+clean

  io:
    client:   871 B/s rd, 1 op/s rd, 0 op/s wr
    recovery: 9 B/s, 0 objects/s
    
[rook@rook-ceph-tools-558df6699f-d5kqs /]$ ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME            STATUS  REWEIGHT  PRI-AFF
-1         57.11728  root default
-5         19.03909      host k8s-node01
 1    hdd  15.54590          osd.1            up   1.00000  1.00000
 3    ssd   1.74660          osd.3            up   1.00000  1.00000
 6    ssd   1.74660          osd.6            up   1.00000  1.00000
-3         19.03909      host k8s-node02
 0    hdd  15.54590          osd.0            up   1.00000  1.00000
 4    ssd   1.74660          osd.4            up   1.00000  1.00000
 7    ssd   1.74660          osd.7            up   1.00000  1.00000
-7         19.03909      host k8s-node03
 2    hdd  15.54590          osd.2            up   1.00000  1.00000
 5    ssd   1.74660          osd.5            up   1.00000  1.00000
 8    ssd   1.74660          osd.8            up   1.00000  1.00000
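
With both hdd and ssd OSDs in the cluster, it is worth checking the per-class capacity and usage from the toolbox as well; these are standard Ceph commands:

# Show usage per OSD and the CRUSH device classes in use
ceph osd df tree
ceph df detail
ceph osd crush class ls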

References:
KubeSphere official documentation: https://kubesphere.io/zh/docs/v3.4/
KubeKey (kk) official README: https://github.com/kubesphere/kubekey/blob/master/README_zh-CN.md
Rook official documentation: https://rook.github.io/
There are also many scattered articles and Bilibili videos online about installing, deploying, and using rook-ceph; search for them as additional references.
