Notes on Installing Ceph with Cephadm

I. Basic Environment

1. Linux OS

ubuntu-22.04.4-live-server-amd64.iso

2. VMware Workstation

3. VM configuration

  • Create three VMs, named ceph01/ceph02/ceph03.
  • Each VM has two network adapters:

        NIC 1: attached to VMnet3, which is configured as a host-only network on the 192.168.1.0/24 subnet; the three VMs are statically assigned 192.168.1.136, 192.168.1.137, and 192.168.1.138.

        NIC 2: NAT mode, attached to VMnet8; the three VMs obtain addresses via DHCP and reach the Internet through this adapter.

  • Each VM has 3 disks.
  • Each VM is configured with 4 vCPUs and 8 GB of RAM (as confirmed later by the ceph orch host ls --detail output):

         If your computer is short on memory, 4 GB per VM is also workable.

4. Installing Ubuntu on the VMs

Following the configuration above, create the three VMs and install Ubuntu; set the hostnames to lxhceph01, lxhceph02, and lxhceph03.

Refer to the relevant documentation if needed.

5. *Notes*

In Ubuntu, the root account is disabled by default: no password is set for it. This is a security measure, since root has full control of the system and misuse could have serious consequences.

For operations that require root privileges, it is usually better to use an account with administrator rights and obtain root temporarily through sudo. This is the safer way to administer and maintain the system, because privileges are elevated only when needed rather than staying logged in as root.

When installing Ubuntu (or a similar Linux distribution), the first user account you create is normally given sudo rights, i.e. the ability to perform administrative tasks. With sudo, that user can temporarily gain root privileges for privileged operations without always logging in as root.

During the Ceph installation, however, the hosts SSH into each other as root by default, so in a lab environment it is convenient to enable the root account.

lxhub@lxhceph01:/etc/apt$ sudo passwd root
New password: 
Retype new password: 
passwd: password updated successfully

You also need to allow remote SSH logins to this host as root:

lxhub@lxhceph01:~$ sudo vi /etc/ssh/sshd_config
#PermitRootLogin prohibit-password
PermitRootLogin yes  <--- newly added setting
lxhub@lxhceph01:~$
lxhub@lxhceph01:~$ sudo /etc/init.d/ssh stop  <--- restart the service after the change
[sudo] password for lxhub: 
Stopping ssh (via systemctl): ssh.service.
lxhub@lxhceph01:~$ 
lxhub@lxhceph01:~$ sudo /etc/init.d/ssh start
Starting ssh (via systemctl): ssh.service.
lxhub@lxhceph01:~$ sudo service ssh restart
lxhub@lxhceph01:~$ 

Do the same on lxhceph02 and lxhceph03.

After that, login tools such as SecureCRT can connect directly as root.

II. Basic Configuration

After Ubuntu has been installed on the three VMs, apply the following settings.

1. IP addresses

In Ubuntu 22.04, changing the IP address normally means editing the Netplan configuration file (vi /etc/netplan/00-installer-config.yaml) and running netplan apply to apply the change (a short sketch of applying and verifying it follows the three configuration files below).

root@lxhceph01:/etc/ceph# cat /etc/netplan/00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: false
      addresses:
        - 192.168.1.136/24
    ens34:
      dhcp4: true
  version: 2
root@lxhceph01:/etc/ceph# 


root@lxhceph02:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: false
      addresses: 
        - 192.168.1.137/24
    ens34:
      dhcp4: true
  version: 2
root@lxhceph02:~# 

root@lxhceph03:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: false
      addresses:
        - 192.168.1.138/24
    ens34:
      dhcp4: true
  version: 2
root@lxhceph03:~# 
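
Once the file has been edited on a host, apply and verify the change. A minimal sketch, assuming the interface names ens33/ens34 used above:

netplan apply                # or "netplan try", which rolls back automatically on a mistake
ip -4 addr show ens33        # confirm the static 192.168.1.x address
ip route                     # the default route should point out of ens34 (the NAT interface)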

2. hosts

Edit /etc/hosts on all three VMs; lxhceph01 is shown here as an example.

root@lxhceph01:/etc/ceph# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 lxhceph01

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.136 lxhceph01  <- new entry
192.168.1.137 lxhceph02  <- new entry
192.168.1.138 lxhceph03  <- new entry
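
To confirm that the new entries resolve on every node, a minimal sketch (run on each host):

for h in lxhceph01 lxhceph02 lxhceph03; do
    getent hosts "$h"                                  # should print the 192.168.1.x address
    ping -c 1 "$h" > /dev/null && echo "$h reachable"
done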

3. APT repositories

The repositories that ship with the system were used unchanged.

4. SSH mutual trust

One thing to note: the key trust must also be set up for the host itself, not only for the other two hosts.

Configuration on lxhceph01:

root@lxhceph01:~# ssh-keygen -t rsa
root@lxhceph01:~# ssh-copy-id 192.168.1.137
root@lxhceph01:~# ssh-copy-id 192.168.1.138
root@lxhceph01:~# ssh-copy-id 192.168.1.136

Verification:
root@lxhceph01:~# ssh lxhceph01 date
Sun Apr 28 10:45:23 PM CST 2024
root@lxhceph01:~# ssh lxhceph02 date
Sun Apr 28 10:45:30 PM CST 2024
root@lxhceph01:~# ssh lxhceph03 date
Sun Apr 28 10:45:37 PM CST 2024
root@lxhceph01:~# 

Do the same on lxhceph02 and lxhceph03; a short sketch follows.
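
A minimal sketch of the equivalent steps on lxhceph02 and lxhceph03 (ssh-keygen and ssh-copy-id prompt interactively, as above):

ssh-keygen -t rsa
for ip in 192.168.1.136 192.168.1.137 192.168.1.138; do
    ssh-copy-id "root@$ip"
done
# verify password-less access
for h in lxhceph01 lxhceph02 lxhceph03; do ssh "$h" date; done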

5. Time synchronization

Chrony is used for time synchronization. Chrony is time-synchronization software designed to keep the system clock accurate and stable. It synchronizes the clocks of machines across a network so that they agree on the time, which the systems need in order to run and cooperate correctly.

Compared with other time-synchronization software (such as the classic NTP reference implementation, ntpd), Chrony generally synchronizes more accurately and copes better with network delay and jitter: it adjusts the clock's phase and frequency gradually, so it can keep the clock accurate even on unstable links.

Check which time-synchronization mechanism is in use:

root@lxhceph01:~# systemctl status systemd-timesyncd
○ systemd-timesyncd.service
     Loaded: masked (Reason: Unit systemd-timesyncd.service is masked.)
     Active: inactive (dead)
root@lxhceph01:~# systemctl status ntp
Unit ntp.service could not be found.
root@lxhceph01:~# systemctl status chronyd
● chrony.service - chrony, an NTP client/server
     Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-04-29 06:05:29 CST; 56min ago
       Docs: man:chronyd(8)
             man:chronyc(1)
             man:chrony.conf(5)
    Process: 997 ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh $DAEMON_OPTS (code=exited, status=0/SUCCESS)
   Main PID: 1024 (chronyd)
      Tasks: 2 (limit: 9346)
     Memory: 2.0M
        CPU: 126ms
     CGroup: /system.slice/chrony.service
             ├─1024 /usr/sbin/chronyd -F 1
             └─1046 /usr/sbin/chronyd -F 1

Apr 29 06:05:29 lxhceph01 systemd[1]: Starting chrony, an NTP client/server...
Apr 29 06:05:29 lxhceph01 chronyd[1024]: chronyd version 4.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 29 06:05:29 lxhceph01 chronyd[1024]: Frequency -15.731 +/- 0.192 ppm read from /var/lib/chrony/chrony.drift
Apr 29 06:05:29 lxhceph01 chronyd[1024]: Timezone right/Asia/Shanghai failed leap second check, ignoring
Apr 29 06:05:29 lxhceph01 chronyd[1024]: Loaded seccomp filter (level 1)
Apr 29 06:05:29 lxhceph01 systemd[1]: Started chrony, an NTP client/server.
Apr 29 06:05:35 lxhceph01 chronyd[1024]: Selected source 95.111.202.5 (2.ubuntu.pool.ntp.org)
Apr 29 06:05:35 lxhceph01 chronyd[1024]: Selected source 84.16.67.12 (0.ubuntu.pool.ntp.org)
Apr 29 06:05:37 lxhceph01 chronyd[1024]: Source 95.111.202.5 replaced with 108.59.2.24 (2.ubuntu.pool.ntp.org)
root@lxhceph01:~# 

The only change made was setting the timezone to Asia/Shanghai, both for the system and in the chrony configuration:

root@lxhceph01:~# timedatectl list-timezones | grep Asia/Shanghai
Asia/Shanghai

root@lxhceph01:~# timedatectl set-timezone Asia/Shanghai

root@lxhceph01:~# cat /etc/chrony/chrony.conf  <--- edit this file
...
# Get TAI-UTC offset and leap seconds from the system tz database.
# This directive must be commented out when using time sources serving
# leap-smeared time.
leapsectz right/Asia/Shanghai <--- timezone set here
...

root@lxhceph01:~# systemctl restart chrony

root@lxhceph01:~# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- alphyn.canonical.com          2  10   375   801  -4618us[-3217us] +/-  138ms
^- prod-ntp-5.ntp4.ps5.cano>     2  10   377   27m    -26ms[  -24ms] +/-  134ms
^- prod-ntp-3.ntp4.ps5.cano>     2  10   337   714  -1453us[  -48us] +/-  125ms
^- prod-ntp-4.ntp1.ps5.cano>     2  10   365   30m  -7847us[-5809us] +/-  121ms
^- tick.ntp.infomaniak.ch        1  10   245    65    +22ms[  +22ms] +/-  102ms
^- a.chl.la                      2  10   337   699    -15ms[  -13ms] +/-  147ms
^- user-185-209-85-222.mow0>     2  10   377   913    +42ms[  +44ms] +/-  106ms
^* 139.199.215.251               2  10   377   316    +12ms[  +13ms] +/-   39ms
root@lxhceph01:~#

Do the same on lxhceph02 and lxhceph03.
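
The synchronization state on any node can be double-checked with chrony's own tools (a minimal sketch):

timedatectl             # "System clock synchronized: yes" and the configured timezone
chronyc tracking        # current offset and the selected reference source
chronyc sources -v      # per-source statistics with a column legend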

III. Installing the Ceph Cluster

Cephadm's workflow when creating a new Ceph cluster:

  1. Create a new Ceph cluster by bootstrapping a single host: Cephadm first bootstraps a new cluster on one host. Bootstrapping initializes the basic configuration and components, such as the Monitor, to establish the initial cluster structure.
  2. Expand the cluster to other hosts: once the initial host has been bootstrapped, Cephadm grows the cluster by adding more hosts, which join the existing cluster to form a larger one.
  3. Deploy the required services: once the cluster has been expanded, Cephadm deploys the required services on these hosts. They include the various Ceph components, such as Monitors, OSDs (object storage daemons), Mgr (manager), and MDS (metadata server), keeping the cluster running and highly available.

In short, creating a new cluster with Cephadm means bootstrapping a single host, growing the cluster, and deploying the required services; Cephadm's automated workflow makes creating and managing a Ceph cluster simpler and more efficient.

See the official documentation:

https://docs.ceph.com/en/latest/cephadm/install/

1. REQUIREMENTS

Every VM that participates in the Ceph cluster needs:

  • Python 3

  • Systemd

  • Podman or Docker for running containers

  • Time synchronization (such as Chrony or the legacy ntpd)

  • LVM2 for provisioning storage devices

After installing Ubuntu 22.04, only Docker still needs to be installed (apt install docker.io); everything else is already present.

This can be verified (a combined check of the other nodes over SSH is sketched after the individual checks below):

1. Python 3 check:

root@lxhceph01:~# python3 --version
Python 3.10.12
root@lxhceph01:~#

2. systemd check:

root@lxhceph01:~# systemctl --version
systemd 249 (249.11-0ubuntu3.12)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
root@lxhceph01:~# ps -p 1
    PID TTY          TIME CMD
      1 ?        00:00:02 systemd
root@lxhceph01:~# 

3. Docker check:

root@lxhceph01:~# docker --version
Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1
root@lxhceph01:~# 

4. chrony check:

root@lxhceph01:~# systemctl status chrony
● chrony.service - chrony, an NTP client/server
     Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2024-04-29 06:05:29 CST; 34min ago
       Docs: man:chronyd(8)
             man:chronyc(1)
             man:chrony.conf(5)
    Process: 997 ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh $DAEMON_OPTS (code=exited, status=0/SUCCESS)
   Main PID: 1024 (chronyd)
      Tasks: 2 (limit: 9346)
     Memory: 2.0M
        CPU: 112ms
     CGroup: /system.slice/chrony.service
             ├─1024 /usr/sbin/chronyd -F 1
             └─1046 /usr/sbin/chronyd -F 1

Apr 29 06:05:29 lxhceph01 systemd[1]: Starting chrony, an NTP client/server...
Apr 29 06:05:29 lxhceph01 chronyd[1024]: chronyd version 4.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 29 06:05:29 lxhceph01 chronyd[1024]: Frequency -15.731 +/- 0.192 ppm read from /var/lib/chrony/chrony.drift
Apr 29 06:05:29 lxhceph01 chronyd[1024]: Timezone right/Asia/Shanghai failed leap second check, ignoring
Apr 29 06:05:29 lxhceph01 chronyd[1024]: Loaded seccomp filter (level 1)
Apr 29 06:05:29 lxhceph01 systemd[1]: Started chrony, an NTP client/server.
Apr 29 06:05:35 lxhceph01 chronyd[1024]: Selected source 95.111.202.5 (2.ubuntu.pool.ntp.org)
Apr 29 06:05:35 lxhceph01 chronyd[1024]: Selected source 84.16.67.12 (0.ubuntu.pool.ntp.org)
Apr 29 06:05:37 lxhceph01 chronyd[1024]: Source 95.111.202.5 replaced with 108.59.2.24 (2.ubuntu.pool.ntp.org)
root@lxhceph01:~# 
root@lxhceph01:~# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^+ alphyn.canonical.com          2   7   377    12  -9545us[-9545us] +/-  134ms
^+ prod-ntp-3.ntp1.ps5.cano>     2   8   377   139    -16ms[  -17ms] +/-  112ms
^+ prod-ntp-4.ntp1.ps5.cano>     2   8   277   140    -16ms[  -16ms] +/-  115ms
^- prod-ntp-5.ntp4.ps5.cano>     2   7   377    11    -23ms[  -23ms] +/-  117ms
^* tock.ntp.infomaniak.ch        1   7   377   140  +5568us[+5334us] +/-   95ms
^+ ntp.ams1.nl.leaseweb.net      2   8   327   205    +15ms[  +15ms] +/-  203ms
^- time.cloudflare.com           3   6   367   138    +13ms[  +13ms] +/-   91ms
^- ntp.wdc2.us.leaseweb.net      2   7   377    78  +6623us[+6623us] +/-  225ms
root@lxhceph01:~# 


5. LVM2 check:

root@lxhceph01:~# dpkg -l | grep lvm2
ii  liblvm2cmd2.03:amd64                   2.03.11-2.1ubuntu4                      amd64        LVM2 command library
ii  lvm2                                   2.03.11-2.1ubuntu4                      amd64        Linux Logical Volume Manager
root@lxhceph01:~# 
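
The same prerequisites on lxhceph02 and lxhceph03 can be checked from lxhceph01 over the SSH trust set up earlier. A minimal sketch:

for h in lxhceph02 lxhceph03; do
    echo "== $h =="
    ssh "$h" 'python3 --version; systemctl --version | head -n 1; docker --version; chronyc tracking | head -n 2; dpkg -l lvm2 | tail -n 1'
done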

2. INSTALL CEPHADM

Install Cephadm on lxhceph01.

Cephadm aims to simplify deploying and managing a Ceph storage cluster, making it easier to build and operate a complex storage infrastructure. It uses a container-based deployment model, relying on container technology (such as Docker or Podman) to package and manage the individual Ceph components, such as Monitors, OSDs (object storage daemons), and Managers.

Cephadm is installed only on lxhceph01; lxhceph02 and lxhceph03 do not need it.

On Ubuntu, it can be installed directly with:

root@lxhceph01:~# apt install -y cephadm
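
A quick sanity check after installation (a minimal sketch):

dpkg -l | grep cephadm     # confirm the package and its version
cephadm ls                 # lists cephadm-managed daemons on this host (empty before bootstrap)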

3. BOOTSTRAP A NEW CLUSTER

Bootstrap a new cluster.

This is the first step in creating a new Ceph cluster: run the cephadm bootstrap command on the cluster's first host (lxhceph01). Running this command creates the first Monitor daemon on that host. The Monitor is one of the key components of a Ceph cluster, responsible for monitoring, managing, and maintaining cluster state.

root@lxhceph01:~# cephadm bootstrap --mon-ip 192.168.1.136 --initial-dashboard-user lxh --initial-dashboard-password lxh

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
Verifying IP 192.168.1.136 port 3300 ...
Verifying IP 192.168.1.136 port 6789 ...
Mon IP `192.168.1.136` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.136` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host lxhceph01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

             URL: https://lxhceph01:8443/
            User: lxh
        Password: lxh

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/4ef40eae-0429-11ef-a0fc-ddb80bd5a148/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid 4ef40eae-0429-11ef-a0fc-ddb80bd5a148 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell 

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.
 

Running cephadm bootstrap does the following:

  • Creates a Monitor and a Manager daemon for the new cluster on the local host. These two daemons are core components of the cluster, responsible for monitoring, managing, and maintaining its state.

  • Generates a new SSH key for the Ceph cluster and adds it to the root user's /root/.ssh/authorized_keys file. That file holds the public keys allowed to authenticate over SSH, so connections can be made without a password. This establishes secure SSH connections between cluster nodes for management and communication.

root@lxhceph01:~/.ssh# cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9PLsdWTSArSMeDBh+6JUVcupluk9KsS/GgMnOGj7pBa7cLVI52F/pHcD0ZN9Kh2j9frD5YMJN0dmYZW6PDEPfaueY5KkOL+jv/3tpkk/wibPtG6dZu45P/ofE/nbMeDksi4gYRXbX9vDzpUkf7opxnXW55GN6qoEMFzGVquIdBeiLfA0qN7gE62Mja5f+4q82IIssPViBUk1XZs7hAXUPBt8Mws5qav0BRcxn+eCsl9U4tsqWe2XDsZg5pVlKlBcM5oGAxKNXkOhEgEpp0Ii5QQyb72kb3kkLVKuRcs49b35HLKppaQ8HP8GmPgUV4BUqthjicrfgwcqIzFwNDsqSbjLaES/FjcxpagQkKFfT9ON1fCX4Y9DZMGbkO33kjul0Ic3gHhRgcn5XSPSWjsJeRELpFfaoKIyshs+t9Dd2UrXV2Mtt3ajV0PN2/Xv3z0iaYAp2sbfvEcNdh+8TiH96VJuYiiDF2mGzWVJWlSLBkR3KsnFQ/x1c9LxPAwO+3gE= ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkkajjB1RmUkahgVSuFAN9LqG4Q4ywkQi8qT0iDLzeCSuEIZPxzuVi6taXPkPS+b2BvJrtlnxzI7ROiPLQIU5lE1KLe0JvSYb0J349YfSVsg5JbNff2BA21uzY3klTAK88b6+qMSQvYKXebrF/19vlN2Eq+b8SZZPI9iSNZHQqN/1MsmWN9LKB7PD96pHO8Dk5bkE96DMRwCyDcvWvu9cZO/nLaaAvv/mnHNa4LN6Mg5gIafT0f/DLUteQXNE5UUdLtGHyvw79B5Xikzmu2C8DX6xRXvwsJpDwmD7MzJvDqlluRrADrYkTViE2pjw/JXvRUf0soTiK5CVsL/c0p4Wzdg++F9FPaP/AFyRQE/2wOvWjnActvUDpZI9kf+2bQUhIi8eqWk19YvWfnfKEQ2ooXw/zZ21f26q6SlZ4MfV0dAoUyWMMFMo/amy0e/O/tdY3jY6E4hY984RtGVevBjlxd6komWAhlmwyI9qTTHD16wqaPE2l5GgSfjyK2qU46yM= root@lxhceph02
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhWvvfeWX6eo0IgTDC4jBle5PNBOiWfnCPUyxjEmEDOoJ+m5ZQIHH0GGmZRwq9wspkDQHzOGuIGov7oiYjZe5XVGdgIY5v4ufslwEMYrMwxB/KHf6UcYtKXM0KjzGim78tj7tAl9Gh/Q5GGeZhQ5+0GrTJNHNqlN9dniS2j1eANmmguw9LMFFLCa6RV2tABITgpUGsbfbr0HICA5XnlWBevyaEkeTS8N3VBHr+8ZGCLpVQDrxnJrRe3OwnGWgT5tWEOOpWdyz9yYATvp1u2FtNdnQeOw1JMpW+haMi8UqW/g7VEs0YimrzSSSUEIwo7xZ87irSVuOU1PIIc3mWwY8LLQkuIPvjFR1pXXmABaszjcnHwz3m/7FGsvlYzW63mYifCCqqJqe+2cuy0nNREqKYaXCQtdFLaoiLi+VSdpf2AtoXkdcuhcy+eSc76EJcFcIpRQPsuwhF5OgT3WlKvcXwwI/g5oh1omTEpdEeyG5ls2HIy8TIeu0XiX9m0EYm1TM= root@lxhceph03
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1LVMugaQ50ZbJxXohae4HxNF8YSOZC5b48kglyK+zQmG2OsXH4+kOpfXjDhG/9ma8atRdzALX3zgzH00RcLpnHD9e9i0MnzCHQW3dgjhOxExz6RWUuAr9fAEhPZGJyKHFy5mSCbwmAJXfLnq75V7XmbCb2AdA9yBL3ws/4MA5W1zB0obxIdSZ0gT11HTS3wfhPOBmlt9UCLxOB/wvFAcObspT9Zj54Vnq1J1++celpHEQnfz1tdWkJhkDVLUS+ohkWSGP2dUnsUmETTdDCHgzVchYx7W4Qb6FVYBGwnisjY1NTca2dJv/lozIBZfWsg3LXNfQZHT6nVDyi4jWUVAOiRVuX4M9A3OiC4bKxoZA+qj4Oieh8cCi+r0A39y/JP+JYzJkjLrJKjfC8HyCUc1C5Hv8sD87itp7JhNNIcUqqgNjPvEPEnuA/ZlI5gSMAPjEnrVadcqMfBn1Bwe/PHYaBnRfqojWEE0aI4xvjMjPva+k1wDPFqXswR1MP/ovRxE= root@lxhceph01
  • Writes a copy of the public key to /etc/ceph/ceph.pub. This public key is used for authentication and key exchange within the cluster.

root@lxhceph01:~/.ssh# cat /etc/ceph/ceph.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9PLsdWTSArSMeDBh+6JUVcupluk9KsS/GgMnOGj7pBa7cLVI52F/pHcD0ZN9Kh2j9frD5YMJN0dmYZW6PDEPfaueY5KkOL+jv/3tpkk/wibPtG6dZu45P/ofE/nbMeDksi4gYRXbX9vDzpUkf7opxnXW55GN6qoEMFzGVquIdBeiLfA0qN7gE62Mja5f+4q82IIssPViBUk1XZs7hAXUPBt8Mws5qav0BRcxn+eCsl9U4tsqWe2XDsZg5pVlKlBcM5oGAxKNXkOhEgEpp0Ii5QQyb72kb3kkLVKuRcs49b35HLKppaQ8HP8GmPgUV4BUqthjicrfgwcqIzFwNDsqSbjLaES/FjcxpagQkKFfT9ON1fCX4Y9DZMGbkO33kjul0Ic3gHhRgcn5XSPSWjsJeRELpFfaoKIyshs+t9Dd2UrXV2Mtt3ajV0PN2/Xv3z0iaYAp2sbfvEcNdh+8TiH96VJuYiiDF2mGzWVJWlSLBkR3KsnFQ/x1c9LxPAwO+3gE= ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148
  • Writes a minimal configuration file to /etc/ceph/ceph.conf. This file is needed to communicate with the Ceph daemons, including the Monitors, Managers, and OSDs.

root@lxhceph01:~/.ssh# cat /etc/ceph/ceph.conf
# minimal ceph.conf for 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
[global]
        fsid = 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
        mon_host = [v2:192.168.1.136:3300/0,v1:192.168.1.136:6789/0] [v2:192.168.1.137:3300/0,v1:192.168.1.137:6789/0] [v2:192.168.1.138:3300/0,v1:192.168.1.138:6789/0]
root@lxhceph01:~/.ssh# 
  • Writes a copy of the client.admin (privileged) key to /etc/ceph/ceph.client.admin.keyring. This key is used for administrative operations on the cluster, such as adding nodes and changing configuration.

root@lxhceph01:~/.ssh# cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
        key = AQCqQCxmG8iQDBAAVTlR+lkdypzdrP3lARnLwg==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
  • Adds the _admin label to the bootstrap host. By default, any host with this label receives copies of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring, which gives the bootstrap host the configuration and credentials it needs to manage the cluster.

After cephadm bootstrap completes, check the Docker containers running on lxhceph01 and the Docker images available there:

root@lxhceph01:~# docker ps -a
CONTAINER ID   IMAGE                                     COMMAND                  CREATED             STATUS             PORTS     NAMES
13a92234a4d8   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-mgr-lxhceph01-caeuir
d55e0867f82b   quay.io/prometheus/prometheus:v2.43.0     "/bin/prometheus --c…"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-prometheus-lxhceph01
ca3d80ec510b   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-mon-lxhceph01
7a873c05b215   quay.io/ceph/ceph-grafana:9.4.7           "/bin/sh -c 'grafana…"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-grafana-lxhceph01
44a73eec4a9f   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-crash-lxhceph01
0be173386d94   quay.io/prometheus/alertmanager:v0.25.0   "/bin/alertmanager -…"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-alertmanager-lxhceph01
c24259671acc   quay.io/prometheus/node-exporter:v1.5.0   "/bin/node_exporter …"   About an hour ago   Up About an hour             ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148-node-exporter-lxhceph01
root@lxhceph01:~# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
quay.io/ceph/ceph                  v17       fb0a9c13ef5c   12 days ago     1.26GB
quay.io/ceph/ceph-grafana          9.4.7     954c08fa6188   4 months ago    633MB
quay.io/prometheus/prometheus      v2.43.0   a07b618ecd1d   13 months ago   234MB
quay.io/prometheus/alertmanager    v0.25.0   c8568f914cd2   16 months ago   65.1MB
quay.io/prometheus/node-exporter   v1.5.0    0da6a335fe13   17 months ago   22.5MB
root@lxhceph01:~# 

Information about the different services deployed in the Ceph cluster (a sketch of listing them from the CLI follows this list):

  1. mon service (Monitor): the Monitor is one of the key components of a Ceph cluster. It manages cluster state and configuration, keeps that information consistent, and propagates it to the other services in the cluster.
  2. mgr service (Manager): the Manager monitors and manages resources and tasks in the cluster. It provides a real-time view of cluster state and supports management operations such as configuration and performance tuning.
  3. crash service: collects and processes crash reports from the cluster, which helps diagnose and resolve failures and other problems.
  4. prometheus service: Prometheus is an open-source monitoring and alerting system used to collect and store the cluster's performance metrics and monitoring data.
  5. grafana service: Grafana is an open-source data-visualization tool, usually paired with Prometheus, used to build dashboards showing the cluster's performance and state.
  6. node-exporter service: Node Exporter is a Prometheus exporter that collects and exposes host-level metrics such as CPU usage, memory usage, and disk space.
  7. alertmanager service: Alertmanager handles the alerts generated by Prometheus; it classifies, groups, and routes them according to predefined rules and sends notifications to the relevant administrators or teams.
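
Before the ceph CLI is enabled in the next section, these services and their daemons can already be listed through the cephadm shell container (a minimal sketch):

cephadm shell -- ceph orch ls     # one line per service (mon, mgr, crash, prometheus, ...)
cephadm shell -- ceph orch ps     # one line per running daemon, with host and status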

4. ENABLE CEPH CLI

Enable the ceph command line.

Install ceph-common on every host where you want to run ceph commands. The ceph-common package provides the full toolset for managing and using a Ceph storage cluster, including storage pools, block devices, and file systems.

root@lxhceph01:~# apt install ceph-common

Check:

root@lxhceph01:~# dpkg -l | grep ceph-common
ii  ceph-common                            17.2.7-0ubuntu0.22.04.1                 amd64        common utilities to mount and interact with a ceph storage cluster
ii  python3-ceph-common                    17.2.7-0ubuntu0.22.04.1                 all          Python 3 utility libraries for Ceph

Once installed, the ceph command can be run:

root@lxhceph01:~# ceph -v
ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)

5. ADDING HOSTS

Steps for adding a host:

        1. Add the cluster's public SSH key to the root user's authorized_keys file on the new host. In SSH (Secure Shell), public and private keys are used to establish secure communication channels. Here, the cluster's public SSH key is installed into the new host's root authorized_keys file so that the cluster, which holds the matching private key, can authenticate and open secure connections to the new host. This is a common pattern for automated, secure communication between servers.

root@lxhceph01:~# ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxhceph02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@lxhceph02'"
and check to make sure that only the key(s) you wanted were added.

Verification:

First check ceph.pub:
root@lxhceph01:/etc/ceph# cat ceph.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9PLsdWTSArSMeDBh+6JUVcupluk9KsS/GgMnOGj7pBa7cLVI52F/pHcD0ZN9Kh2j9frD5YMJN0dmYZW6PDEPfaueY5KkOL+jv/3tpkk/wibPtG6dZu45P/ofE/nbMeDksi4gYRXbX9vDzpUkf7opxnXW55GN6qoEMFzGVquIdBeiLfA0qN7gE62Mja5f+4q82IIssPViBUk1XZs7hAXUPBt8Mws5qav0BRcxn+eCsl9U4tsqWe2XDsZg5pVlKlBcM5oGAxKNXkOhEgEpp0Ii5QQyb72kb3kkLVKuRcs49b35HLKppaQ8HP8GmPgUV4BUqthjicrfgwcqIzFwNDsqSbjLaES/FjcxpagQkKFfT9ON1fCX4Y9DZMGbkO33kjul0Ic3gHhRgcn5XSPSWjsJeRELpFfaoKIyshs+t9Dd2UrXV2Mtt3ajV0PN2/Xv3z0iaYAp2sbfvEcNdh+8TiH96VJuYiiDF2mGzWVJWlSLBkR3KsnFQ/x1c9LxPAwO+3gE= ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148
root@lxhceph01:/etc/ceph# 

Then log in to lxhceph02 and make sure the key you added is present:
root@lxhceph02:~# cat ~/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1LVMugaQ50ZbJxXohae4HxNF8YSOZC5b48kglyK+zQmG2OsXH4+kOpfXjDhG/9ma8atRdzALX3zgzH00RcLpnHD9e9i0MnzCHQW3dgjhOxExz6RWUuAr9fAEhPZGJyKHFy5mSCbwmAJXfLnq75V7XmbCb2AdA9yBL3ws/4MA5W1zB0obxIdSZ0gT11HTS3wfhPOBmlt9UCLxOB/wvFAcObspT9Zj54Vnq1J1++celpHEQnfz1tdWkJhkDVLUS+ohkWSGP2dUnsUmETTdDCHgzVchYx7W4Qb6FVYBGwnisjY1NTca2dJv/lozIBZfWsg3LXNfQZHT6nVDyi4jWUVAOiRVuX4M9A3OiC4bKxoZA+qj4Oieh8cCi+r0A39y/JP+JYzJkjLrJKjfC8HyCUc1C5Hv8sD87itp7JhNNIcUqqgNjPvEPEnuA/ZlI5gSMAPjEnrVadcqMfBn1Bwe/PHYaBnRfqojWEE0aI4xvjMjPva+k1wDPFqXswR1MP/ovRxE= root@lxhceph01
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhWvvfeWX6eo0IgTDC4jBle5PNBOiWfnCPUyxjEmEDOoJ+m5ZQIHH0GGmZRwq9wspkDQHzOGuIGov7oiYjZe5XVGdgIY5v4ufslwEMYrMwxB/KHf6UcYtKXM0KjzGim78tj7tAl9Gh/Q5GGeZhQ5+0GrTJNHNqlN9dniS2j1eANmmguw9LMFFLCa6RV2tABITgpUGsbfbr0HICA5XnlWBevyaEkeTS8N3VBHr+8ZGCLpVQDrxnJrRe3OwnGWgT5tWEOOpWdyz9yYATvp1u2FtNdnQeOw1JMpW+haMi8UqW/g7VEs0YimrzSSSUEIwo7xZ87irSVuOU1PIIc3mWwY8LLQkuIPvjFR1pXXmABaszjcnHwz3m/7FGsvlYzW63mYifCCqqJqe+2cuy0nNREqKYaXCQtdFLaoiLi+VSdpf2AtoXkdcuhcy+eSc76EJcFcIpRQPsuwhF5OgT3WlKvcXwwI/g5oh1omTEpdEeyG5ls2HIy8TIeu0XiX9m0EYm1TM= root@lxhceph03
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9PLsdWTSArSMeDBh+6JUVcupluk9KsS/GgMnOGj7pBa7cLVI52F/pHcD0ZN9Kh2j9frD5YMJN0dmYZW6PDEPfaueY5KkOL+jv/3tpkk/wibPtG6dZu45P/ofE/nbMeDksi4gYRXbX9vDzpUkf7opxnXW55GN6qoEMFzGVquIdBeiLfA0qN7gE62Mja5f+4q82IIssPViBUk1XZs7hAXUPBt8Mws5qav0BRcxn+eCsl9U4tsqWe2XDsZg5pVlKlBcM5oGAxKNXkOhEgEpp0Ii5QQyb72kb3kkLVKuRcs49b35HLKppaQ8HP8GmPgUV4BUqthjicrfgwcqIzFwNDsqSbjLaES/FjcxpagQkKFfT9ON1fCX4Y9DZMGbkO33kjul0Ic3gHhRgcn5XSPSWjsJeRELpFfaoKIyshs+t9Dd2UrXV2Mtt3ajV0PN2/Xv3z0iaYAp2sbfvEcNdh+8TiH96VJuYiiDF2mGzWVJWlSLBkR3KsnFQ/x1c9LxPAwO+3gE= ceph-4ef40eae-0429-11ef-a0fc-ddb80bd5a148
root@lxhceph02:~# 

        2. Tell the Ceph storage cluster that the new node is now part of it. Ceph is an open-source distributed storage system that automatically manages data placement and replication to provide high availability and durability. When a new node is added, Ceph has to be told about it so that it can start placing data on the new node and perform whatever else is needed for the new node to work with the existing ones as a single cluster.

root@lxhceph01:/etc/ceph# ceph orch host add lxhceph02 192.168.1.137
Added host 'lxhceph02' with addr '192.168.1.137'
root@lxhceph01:/etc/ceph# 

Follow the same steps to add lxhceph03 to the Ceph cluster:

root@lxhceph01:~# ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxhceph03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@lxhceph03'"
and check to make sure that only the key(s) you wanted were added.

root@lxhceph01:~#
root@lxhceph01:~#  ceph orch host add lxhceph03 192.168.1.138
Added host 'lxhceph03' with addr '192.168.1.138'
root@lxhceph01:~# 

Check the status

Via the web UI:

Log in to https://192.168.1.136:8443/ and you can see the cluster begin deploying the relevant services on the new nodes.

From the command line:

root@lxhceph01:~# ceph status
  cluster:
    id:     4ef40eae-0429-11ef-a0fc-ddb80bd5a148
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum lxhceph01,lxhceph02,lxhceph03 (age 2m)
    mgr: lxhceph01.caeuir(active, since 67m), standbys: lxhceph02.emkevs
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@lxhceph01:~# 

Install ceph-common on lxhceph02 and lxhceph03 as well, so the ceph CLI can be used on them directly; a minimal sketch follows.
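
A minimal sketch, run from lxhceph01 over the existing SSH trust (apt may warn about the missing terminal when driven over ssh):

for h in lxhceph02 lxhceph03; do
    ssh "$h" 'apt install -y ceph-common && ceph -v'
done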

Adding the _admin label to hosts

By default, every host with the _admin label keeps copies of the ceph.conf file and the client.admin keyring, normally under /etc/ceph. These files contain the cluster configuration and the administrator key used to manage the cluster.

root@lxhceph01:/etc/ceph# cat ceph.conf
# minimal ceph.conf for 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
[global]
        fsid = 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
        mon_host = [v2:192.168.1.136:3300/0,v1:192.168.1.136:6789/0] [v2:192.168.1.137:3300/0,v1:192.168.1.137:6789/0] [v2:192.168.1.138:3300/0,v1:192.168.1.138:6789/0]
root@lxhceph01:/etc/ceph# 
root@lxhceph01:/etc/ceph# cat ceph.client.admin.keyring 
[client.admin]
        key = AQCqQCxmG8iQDBAAVTlR+lkdypzdrP3lARnLwg==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
root@lxhceph01:/etc/ceph# 

Initially, the _admin label is applied only to the bootstrap host, i.e. the first host in the cluster, which was used to start the cluster's configuration and initialization.

root@lxhceph01:/etc/ceph# ceph orch host ls
HOST       ADDR           LABELS  STATUS  
lxhceph01  192.168.1.136  _admin          
lxhceph02  192.168.1.137                  
lxhceph03  192.168.1.138                  
3 hosts in cluster

It is recommended to label one or more additional hosts _admin so that the Ceph CLI is easily available on several hosts, not just the bootstrap host; those hosts can then run cephadm shell or other Ceph CLI commands to manage the cluster.

Here the _admin label is added to lxhceph03. Once the label is applied, ceph.client.admin.keyring and ceph.conf appear on lxhceph03 automatically and ceph CLI commands can be run there.

root@lxhceph01:/etc/ceph# ceph orch host label add lxhceph03 _admin
Added label _admin to host lxhceph03

root@lxhceph03:/etc# cd ceph
root@lxhceph03:/etc/ceph# ls
ceph.client.admin.keyring  ceph.conf  rbdmap
root@lxhceph03:/etc/ceph# cat ceph.conf
# minimal ceph.conf for 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
[global]
        fsid = 4ef40eae-0429-11ef-a0fc-ddb80bd5a148
        mon_host = [v2:192.168.1.136:3300/0,v1:192.168.1.136:6789/0] [v2:192.168.1.137:3300/0,v1:192.168.1.137:6789/0] [v2:192.168.1.138:3300/0,v1:192.168.1.138:6789/0]
root@lxhceph03:/etc/ceph# cat ceph.client.admin.keyring 
[client.admin]
        key = AQCqQCxmG8iQDBAAVTlR+lkdypzdrP3lARnLwg==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

root@lxhceph03:/etc/ceph# ceph status
  cluster:
    id:     4ef40eae-0429-11ef-a0fc-ddb80bd5a148
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum lxhceph01,lxhceph02,lxhceph03 (age 2h)
    mgr: lxhceph01.caeuir(active, since 4h), standbys: lxhceph02.emkevs
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
root@lxhceph03:/etc/ceph# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager   ?:9093,9094      1/1  60s ago    7h   count:1    
crash                           3/3  61s ago    7h   *          
grafana        ?:3000           1/1  60s ago    7h   count:1    
mgr                             2/2  60s ago    7h   count:2    
mon                             3/5  61s ago    7h   count:5    
node-exporter  ?:9100           3/3  61s ago    7h   *          
prometheus     ?:9095           1/1  60s ago    7h   count:1    
root@lxhceph03:/etc/ceph# 

LIST HOSTS

root@lxhceph03:/etc/ceph# ceph orch host ls
HOST       ADDR           LABELS  STATUS  
lxhceph01  192.168.1.136  _admin          
lxhceph02  192.168.1.137                  
lxhceph03  192.168.1.138  _admin          
3 hosts in cluster
root@lxhceph03:/etc/ceph# ceph orch host ls --detail
HOST       ADDR           LABELS  STATUS  VENDOR/MODEL                            CPU    RAM    HDD        SSD  NIC  
lxhceph01  192.168.1.136  _admin          VMware, Inc. (VMware Virtual Platform)  4C/4T  8 GiB  11/54.0GB  -    2    
lxhceph02  192.168.1.137                  VMware, Inc. (VMware Virtual Platform)  4C/4T  8 GiB  11/54.1GB  -    2    
lxhceph03  192.168.1.138  _admin          VMware, Inc. (VMware Virtual Platform)  4C/4T  8 GiB  11/54.0GB  -    2    
3 hosts in cluster
root@lxhceph03:/etc/ceph#

6. ADDING ADDITIONAL MONS

A typical Ceph cluster runs three or five Monitor daemons spread across different hosts.

In Ceph, the Monitors track the cluster's state, configuration, and membership and keep that information consistent. Monitor daemons normally run as multiple replicas for high availability and fault tolerance. As a rule of thumb, a typical cluster deploys three or five Monitors on separate physical or virtual hosts, which avoids single points of failure and improves stability.

By distributing the Monitors across several hosts, Ceph keeps the cluster running even if some hosts fail, preserving data reliability and availability.

No changes are made here; for reference, a sketch of how the Monitor count or placement could be pinned is shown below.
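
Not done in this lab; a minimal sketch of the orchestrator commands normally used to control Monitor deployment:

ceph orch apply mon 3                                             # let the orchestrator place exactly 3 mons
ceph orch apply mon --placement="lxhceph01,lxhceph02,lxhceph03"   # or pin them to specific hosts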

How to check:

root@lxhceph01:~# ceph
ceph> mon stat
e3: 3 mons at {lxhceph01=[v2:192.168.1.136:3300/0,v1:192.168.1.136:6789/0],lxhceph02=[v2:192.168.1.137:3300/0,v1:192.168.1.137:6789/0],lxhceph03=[v2:192.168.1.138:3300/0,v1:192.168.1.138:6789/0]} removed_ranks: {}, election epoch 36, leader 0 lxhceph01, quorum 0,1,2 lxhceph01,lxhceph02,lxhceph03

ceph> 

7. ADDING STORAGE

List the storage devices available on all cluster hosts:

root@lxhceph01:/etc/ceph# ceph orch device ls
HOST       PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REFRESHED  REJECT REASONS  
lxhceph01  /dev/sdb  hdd              10.0G  Yes        7m ago                     
lxhceph01  /dev/sdc  hdd              10.0G  Yes        7m ago                     
lxhceph02  /dev/sdb  hdd              10.0G  Yes        7m ago                     
lxhceph02  /dev/sdc  hdd              10.0G  Yes        7m ago                     
lxhceph03  /dev/sdb  hdd              10.0G  Yes        7m ago                     
lxhceph03  /dev/sdc  hdd              10.0G  Yes        7m ago                     
root@lxhceph01:/etc/ceph#

CREATING NEW OSDS

OSDs are created on lxhceph01:/dev/sdb and /dev/sdc, lxhceph02:/dev/sdb, and lxhceph03:/dev/sdb; the remaining disks are left unused for now.

root@lxhceph01:/etc/ceph# ceph orch daemon add osd lxhceph01:/dev/sdb,/dev/sdc
Created osd(s) 0,1 on host 'lxhceph01'
root@lxhceph01:/etc/ceph# 
root@lxhceph01:/etc/ceph# ceph orch daemon add osd lxhceph02:/dev/sdb
Created osd(s) 2 on host 'lxhceph02'
root@lxhceph01:/etc/ceph# ceph orch daemon add osd lxhceph03:/dev/sdb 
Created osd(s) 3 on host 'lxhceph03'
root@lxhceph01:/etc/ceph# 
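
With the OSDs created, the layout and remaining devices can be checked (a minimal sketch):

ceph osd tree         # OSDs grouped by host, with up/in status
ceph osd df           # per-OSD capacity and usage
ceph orch device ls   # the disks just used should no longer be listed as available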

Check the status

From the command line:

root@lxhceph01:~# ceph status
  cluster:
    id:     4ef40eae-0429-11ef-a0fc-ddb80bd5a148
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum lxhceph01,lxhceph02,lxhceph03 (age 84m)
    mgr: lxhceph01.caeuir(active, since 84m), standbys: lxhceph02.emkevs
    osd: 4 osds: 4 up (since 84m), 4 in (since 28h)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   1.2 GiB used, 39 GiB / 40 GiB avail
    pgs:     1 active+clean
 
root@lxhceph01:~# 

Via the web UI:

8. USING CEPH

The new Ceph cluster is now installed. Depending on your needs (file system, block storage, or object storage), continue with deploying and configuring the features you require; that is not covered here.
