Ceph Cluster Deployment

[Question 7] 1.2.7 Ceph Deployment [1 point]

Because it is fairly long, this task gets its own post; it belongs to Round 1: Module 2, OpenStack Private Cloud Service Operations.
1. Using the provided ceph.tar.gz package, install the Ceph service and complete the initialization.

Using the provided ceph-14.2.22.tar.gz package, create three CentOS 7.9 cloud hosts on the OpenStack platform, then install the Ceph service on these three nodes and complete the initialization.
The first node is the mon/osd node; the second and third nodes are osd nodes. After deploying Ceph, create three pools: vms, images, and volumes.
When finished, submit the username, password, and IP address of the first node in the answer box.


Preface

This section covers: Ceph deployment

ceph-14.2.22.tar.gz


Preparation

1. Plan the nodes

IP address        Hostname      Role
192.168.25.201    ceph-node1    Monitor/OSD
192.168.25.202    ceph-node2    OSD
192.168.25.203    ceph-node3    OSD

Configure the network interfaces according to this plan (a sketch follows below).
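A minimal sketch of applying the static address on the first node with nmcli; the connection/device name ens33 and the gateway 192.168.25.1 are assumptions, so adjust them to your environment and repeat with .202/.203 on the other two nodes.

[root@localhost ~]# # assumed device name: ens33, assumed gateway: 192.168.25.1
[root@localhost ~]# nmcli connection modify ens33 ipv4.method manual \
    ipv4.addresses 192.168.25.201/24 ipv4.gateway 192.168.25.1
[root@localhost ~]# nmcli connection up ens33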

2. Basic preparation

Create three cloud hosts/VMs (recommended flavor: 2 vCPU / 4 GB RAM / 40 GB disk + 20 GB ephemeral disk) and change their hostnames.
I cloned mine with 4 vCPU / 6 GB RAM / 100 GB disk + 20 GB ephemeral disk (size this according to your own machine).

1. Change the hostnames

Run on all three VMs:

[root@localhost ~]# hostnamectl set-hostname ceph-node1
[root@localhost ~]# bash
bash
[root@ceph-node1 ~]# 

[root@localhost ~]# hostnamectl set-hostname ceph-node2
[root@localhost ~]# bash
bash
[root@ceph-node2 ~]# 

[root@localhost ~]# hostnamectl set-hostname ceph-node3
[root@localhost ~]# bash
bash
[root@ceph-node3 ~]# 

2. Configure hostname-to-IP mappings on all three hosts (run on all three):

[root@ceph-node1 ~]# vi /etc/hosts
[root@ceph-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.25.201 ceph-node1
192.168.25.202 ceph-node2
192.168.25.203 ceph-node3
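Optional sanity check: confirm every node can reach the others by hostname before continuing. A quick loop from any node:

[root@ceph-node1 ~]# for h in ceph-node1 ceph-node2 ceph-node3; do ping -c 1 -W 1 $h >/dev/null && echo "$h reachable" || echo "$h UNREACHABLE"; done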

3. On the three Ceph nodes, switch the Yum repositories to a local repository (using the ceph-14.2.22.tar.gz package).

[root@ceph-node1 ~]# ls
anaconda-ks.cfg  ceph-14.2.22.tar.gz

[root@ceph-node2 ~]# ls
anaconda-ks.cfg  ceph-14.2.22.tar.gz

[root@ceph-node3 ~]# ls
anaconda-ks.cfg  ceph-14.2.22.tar.gz

Extract the package, then configure the repo file: first move all existing repo files out of /etc/yum.repos.d, then create a local.repo file
(run on all three nodes)

ceph-node1
[root@ceph-node1 ~]# tar -zxvf ceph-14.2.22.tar.gz -C /opt
[root@ceph-node1 ~]# mv /etc/yum.repos.d/* /media/
[root@ceph-node1 ~]# vi /etc/yum.repos.d/local.repo
ceph-node2
[root@ceph-node2 ~]# tar -zxvf ceph-14.2.22.tar.gz -C /opt
[root@ceph-node2 ~]# mv /etc/yum.repos.d/* /media/
[root@ceph-node2 ~]# vi /etc/yum.repos.d/local.repo
ceph-node3
[root@ceph-node3 ~]# tar -zxvf ceph-14.2.22.tar.gz -C /opt
[root@ceph-node3 ~]# mv /etc/yum.repos.d/* /media/
[root@ceph-node3 ~]# vi /etc/yum.repos.d/local.repo
[root@ceph-node3 ~]# cat /etc/yum.repos.d/local.repo
[ceph]
name=ceph
baseurl=file:///opt/ceph
gpgcheck=0
enabled=1

Verify:
[root@ceph-node3 ~]# yum clean all && yum makecache
Loaded plugins: fastestmirror
Cleaning repos: ceph
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
Determining fastest mirrors
ceph                                                                                                                                                                 | 2.9 kB  00:00:00     
(1/3): ceph/filelists_db                                                                                                                                             | 183 kB  00:00:00     
(2/3): ceph/primary_db                                                                                                                                               | 265 kB  00:00:00     
(3/3): ceph/other_db                                                                                                                                                 | 132 kB  00:00:00     
Metadata Cache Created

!!! Important: the Python environment is missing some modules and needs to be fixed before deployment.

The original post links to a separate guide for this; complete that setup on all three nodes, then come back here and continue the deployment.
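That guide is not reproduced here. A common cause on CentOS 7 is that ceph-deploy cannot import pkg_resources because python-setuptools is missing; a hedged sketch, assuming that is the module your environment lacks and that the package is available in the local repo or another configured source (run on all three nodes):

[root@ceph-node1 ~]# yum install -y python-setuptools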

3. Deploy the cluster

[Optional step] Configuring passwordless SSH is recommended; it saves a lot of password prompts later.

[root@ceph-node1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:8EOUG8rhDzm5/1yMvL3iwFb254Zd36pxKTOWqdOVxBU root@ceph-node1
The key's randomart image is:
+---[RSA 2048]----+
|        ..     E.|
|      ..o       .|
|     o.=.o   . . |
|      O+.     o  |
|       =S o  . . |
|      ...= + oo..|
|       .+ o.%++.o|
|       ..oo*oOo o|
|         o*o++o. |
+----[SHA256]-----+
[root@ceph-node1 ~]# ssh-copy-id ceph-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-node2 (192.168.25.202)' can't be established.
ECDSA key fingerprint is SHA256:nNfbIvQ4JnzKAG0MHAegS0723/jJht3xVf0Nt+/rjBc.
ECDSA key fingerprint is MD5:7c:24:7a:dd:1e:55:e9:fa:c8:65:bb:86:b5:b6:70:73.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph-node2'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph-node1 ~]# ssh-copy-id ceph-node3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-node3 (192.168.25.203)' can't be established.
ECDSA key fingerprint is SHA256:nNfbIvQ4JnzKAG0MHAegS0723/jJht3xVf0Nt+/rjBc.
ECDSA key fingerprint is MD5:7c:24:7a:dd:1e:55:e9:fa:c8:65:bb:86:b5:b6:70:73.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-node3's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph-node3'"
and check to make sure that only the key(s) you wanted were added.

[root@ceph-node1 ~]# 
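Optional verification that passwordless SSH works: the following should print each remote hostname without prompting for a password.

[root@ceph-node1 ~]# for h in ceph-node2 ceph-node3; do ssh -o BatchMode=yes $h hostname; done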

1. Create the Ceph cluster

(1) Install ceph-deploy on ceph-node1

[root@ceph-node1 ~]# yum install -y ceph-deploy 

(2) On ceph-node1, use ceph-deploy to create a Ceph cluster.

[root@ceph-node1 ~]# mkdir /etc/ceph
[root@ceph-node1 ~]# cd /etc/ceph
[root@ceph-node1 ceph]# ceph-deploy new ceph-node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f2a8a52cb90>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2a8a54d998>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: /usr/sbin/ip link show
[ceph-node1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-node1][DEBUG ] IP addresses found: [u'192.168.25.201', u'192.168.200.201']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 192.168.25.201
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.25.201']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@ceph-node1 ceph]# 

Note: from now on, ceph-deploy must be run from inside /etc/ceph.

(3) The ceph-deploy new subcommand bootstraps a new cluster with the default name "ceph" and generates the cluster configuration file and monitor keyring. Listing the current working directory shows the ceph.conf and ceph.mon.keyring files.

[root@ceph-node1 ceph]# ll
total 12
-rw-r--r--. 1 root root  202 Feb  5 00:26 ceph.conf
-rw-r--r--. 1 root root 3005 Feb  5 00:26 ceph-deploy-ceph.log
-rw-------. 1 root root   73 Feb  5 00:26 ceph.mon.keyring
[root@ceph-node1 ceph]# 

(4) From ceph-node1, use ceph-deploy to install the Ceph packages on all nodes

--no-adjust-repos tells ceph-deploy not to modify the repository configuration during installation

[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3 --no-adjust-repos
# This command installs the Ceph binary packages on all nodes
# You will be prompted to type yes and enter the passwords for ceph-node2 and ceph-node3 (not needed if passwordless SSH is configured)
				......
[ceph-node3][DEBUG ] Complete!
[ceph-node3][INFO  ] Running command: ceph --version
[ceph-node3][DEBUG ] ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

# Check on each of the three nodes
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

[root@ceph-node2 ~]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

[root@ceph-node3 ~]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

(5) On ceph-node1, create the initial Ceph Monitor

[root@ceph-node1 ceph]# ceph-deploy mon create-initial
# Check the cluster status
[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     e7e0cf92-6bbd-4615-a812-52aeae3383c8
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 2m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
[root@ceph-node1 ceph]# 

The cluster status is HEALTH_WARN, i.e. it is currently in a warning state.

2. Create the OSDs

(1) List all available disks on ceph-node1

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-node1
	......
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: fdisk -l
[ceph-node1][INFO  ] Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
[ceph-node1][INFO  ] Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
[ceph-node1][INFO  ] Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
[ceph-node1][INFO  ] Disk /dev/mapper/centos-swap: 6308 MB, 6308233216 bytes, 12320768 sectors
[ceph-node1][INFO  ] Disk /dev/mapper/centos-home: 46.3 GB, 46296727552 bytes, 90423296 sectors

(2) Prepare the OSD data disks. This must be done on all 3 nodes; ceph-node1 is used as the example here. First partition the spare disk.

If anything is mounted on the disk, remember to umount /dev/sdb first; if you are not sure, it does no harm to run the umount anyway.

# All three nodes should look like this
[root@ceph-node1 ceph]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0  5.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 43.1G  0 lvm  /home
sdb               8:16   0   20G  0 disk 
sr0              11:0    1  4.4G  0 rom
# Create 3 partitions, 5 GB each
[root@ceph-node1 ceph]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xa3f1078b.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): 
Using default response p
Partition number (2-4, default 2): 
First sector (10487808-41943039, default 10487808): 
Using default value 10487808
Last sector, +sectors or +size{K,M,G} (10487808-41943039, default 41943039): +5G
Partition 2 of type Linux and of size 5 GiB is set

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): 
Using default response p
Partition number (3,4, default 3): 
First sector (20973568-41943039, default 20973568): 
Using default value 20973568
Last sector, +sectors or +size{K,M,G} (20973568-41943039, default 41943039): +5G
Partition 3 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@ceph-node1 ceph]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0  5.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0 43.1G  0 lvm  /home
sdb               8:16   0   20G  0 disk 
├─sdb1            8:17   0    5G  0 part 
├─sdb2            8:18   0    5G  0 part 
└─sdb3            8:19   0    5G  0 part 
sr0              11:0    1  4.4G  0 rom  
[root@ceph-node1 ceph]# 
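If you would rather skip the interactive fdisk session, the same three 5 GB partitions can be created non-interactively. A minimal sketch with parted, assuming /dev/sdb is the empty 20 GB disk shown above (run on each node):

[root@ceph-node1 ceph]# parted -s /dev/sdb mklabel msdos
[root@ceph-node1 ceph]# parted -s /dev/sdb mkpart primary 1MiB 5GiB
[root@ceph-node1 ceph]# parted -s /dev/sdb mkpart primary 5GiB 10GiB
[root@ceph-node1 ceph]# parted -s /dev/sdb mkpart primary 10GiB 15GiB
[root@ceph-node1 ceph]# partprobe /dev/sdb
[root@ceph-node1 ceph]# lsblk /dev/sdb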

(3) Partitioning is done; now add these partitions as OSDs.

Note: run all of these commands on ceph-node1!!

ceph-node1:

[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb1 ceph-node1
		......
[ceph-node1][INFO  ] checking OSD status...
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use

ceph-node2 (you will be prompted for its password):

[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb1 ceph-node2
		......
[ceph-node2][INFO  ] checking OSD status...
[ceph-node2][DEBUG ] find the location of an executable
[ceph-node2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node2 is now ready for osd use.

ceph-node3 (you will be prompted for its password):

[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb1 ceph-node3
		......
[ceph-node3][INFO  ] checking OSD status...
[ceph-node3][DEBUG ] find the location of an executable
[ceph-node3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node3 is now ready for osd use.

If you set up passwordless SSH earlier, no password prompts are needed.

(4) After adding the OSDs, check the cluster status

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     e7e0cf92-6bbd-4615-a812-52aeae3383c8
    health: HEALTH_WARN
            no active mgr
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 21m)
    mgr: no daemons active
    osd: 3 osds: 3 up (since 104s), 3 in (since 104s)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 
[root@ceph-node1 ceph]# 

The status is still a warning because no mgr daemon has been deployed yet.

(5) Deploy the mgr daemons

[root@ceph-node1 ceph]# ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
	......
[ceph-node3][INFO  ] Running command: systemctl enable ceph-mgr@ceph-node3
[ceph-node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-node3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-node3][INFO  ] Running command: systemctl start ceph-mgr@ceph-node3
[ceph-node3][INFO  ] Running command: systemctl enable ceph.target

Checking again, the cluster is still in a warning state:

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     e7e0cf92-6bbd-4615-a812-52aeae3383c8
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 25m)
    mgr: ceph-node1(active, since 107s), standbys: ceph-node2, ceph-node3
    osd: 3 osds: 3 up (since 6m), 3 in (since 6m)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 12 GiB / 15 GiB avail
    pgs:     

(6) Disable the insecure global_id reclaim mode

[root@ceph-node1 ceph]# ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     e7e0cf92-6bbd-4615-a812-52aeae3383c8
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node1 (age 33m)
    mgr: ceph-node1(active, since 9m), standbys: ceph-node2, ceph-node3
    osd: 3 osds: 3 up (since 14m), 3 in (since 14m)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 12 GiB / 15 GiB avail
    pgs:     

The cluster status is now OK.

(7) Push the admin credentials to the other nodes so any of them can manage the cluster (for redundancy)

[root@ceph-node1 ceph]# ceph-deploy admin ceph-node{1,2,3}
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa3bf2d22d8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node1', 'ceph-node2', 'ceph-node3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fa3bfb66230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node2
root@ceph-node2's password: 
root@ceph-node2's password: 
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
root@ceph-node3's password: 
root@ceph-node3's password: 
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

# Add read permission (+r) to /etc/ceph/ceph.client.admin.keyring
# This file is the Ceph cluster admin keyring; it contains the credentials needed to access the cluster
# Making it readable ensures other users/services on the node can read it and perform admin operations against the cluster
[root@ceph-node1 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring 

Deployment complete.
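Before moving on, a couple of optional checks (standard Ceph commands) to confirm the cluster is healthy and all three OSDs are placed as expected:

[root@ceph-node1 ceph]# ceph health detail
[root@ceph-node1 ceph]# ceph osd tree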

4. Create the pools

A pool stores objects, block devices, or file systems. Creating separate pools for different kinds of data (such as VM disks, images, or volumes) gives better organization and allows per-pool performance tuning.

(1) Check the OSD count
There are 3 OSDs in the cluster (one per host):

[root@ceph-node1 ceph]# ceph osd stat
3 osds: 3 up (since 29m), 3 in (since 29m); epoch: e13

(2) Calculate the PG count (background only)
Note: for a test setup there is no need to calculate this; 32 or 64 is fine, so you can skip straight to step (3). In production the PG count should be calculated.

As a worked example, suppose there are 3 hosts with 3 OSDs each, 9 OSDs in total, and a replication factor of 3. In theory the PG (Placement Group) count can be any value, but you should pick one that uses the cluster efficiently without creating excessive management overhead. The usual rule of thumb is about 100 PGs per OSD, so:

    Ideal PG count = (OSD count × PGs per OSD) / replication factor
                   = (9 × 100) / 3
                   = 300

which you would then round to a nearby power of two (for example 256) before using it as pg_num.
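If you want to evaluate the rule of thumb on the command line, a throwaway shell one-liner (plain arithmetic, not a Ceph command) gives the same result:

[root@ceph-node1 ceph]# osds=9; per_osd=100; replicas=3; echo $(( osds * per_osd / replicas ))
300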

(3) Create the vms, images, and volumes pools

# Create
[root@ceph-node1 ~]# ceph osd pool create vms 64
pool 'vms' created
[root@ceph-node1 ~]# ceph osd pool create images 64
pool 'images' created
[root@ceph-node1 ~]# ceph osd pool create volumes 64
pool 'volumes' created

# Verify
[root@ceph-node1 ~]# ceph osd lspools
1 vms
2 images
3 volumes
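Optional follow-up: on Nautilus, ceph -s may warn that these pools have no application associated with them. Since all three will hold RBD images for OpenStack, you can tag them with the standard command below (only needed if you see that warning):

[root@ceph-node1 ~]# ceph osd pool application enable vms rbd
[root@ceph-node1 ~]# ceph osd pool application enable images rbd
[root@ceph-node1 ~]# ceph osd pool application enable volumes rbd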

Back to the index

Back to Round 1: Module 2, OpenStack Private Cloud Service Operations

You are welcome to get in touch to discuss and exchange resources.
