Creating and deleting CephFS pools

1. Creating CephFS pools

[root@ceph01 ceph-cluster]#ceph osd pool create mypool 32 # data pool (pg_num=32)
pool 'mypool' created
[root@ceph01 ceph-cluster]#ceph osd pool create mypool_mata 32 # metadata pool (pg_num=32)
pool 'mypool_mata' created
[root@ceph01 ceph-cluster]#ceph osd pool ls
mypool
mypool_mata
[root@ceph01 ceph-cluster]#rados lspools
mypool
mypool_mata
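
Before binding the pools it can be worth checking their IDs and PG settings; a quick verification sketch (output varies per cluster):

# show pool IDs, pg_num/pgp_num, and replication size for every pool
ceph osd pool ls detail
# query a single attribute of one pool
ceph osd pool get mypool pg_num
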
[root@ceph01 ceph-cluster]#ceph fs new fs-test mypool_mata mypool # create the CephFS file system fs-test and bind the metadata and data pools
new fs with metadata pool 2 and data pool 1
[root@ceph01 ceph-cluster]#ceph fs ls
name: fs-test, metadata pool: mypool_mata, data pools: [mypool ]
[root@ceph01 ceph-cluster]# ceph fs status fs-test
fs-test - 0 clients
=======
+------+--------+--------+---------------+-------+-------+
| Rank | State  |  MDS   |    Activity   |  dns  |  inos |
+------+--------+--------+---------------+-------+-------+
|  0   | active | ceph01 | Reqs:    0 /s |   10  |   13  |
+------+--------+--------+---------------+-------+-------+
+-------------+----------+-------+-------+
|     Pool    |   type   |  used | avail |
+-------------+----------+-------+-------+
| mypool_mata | metadata | 1536k |  283G |
|    mypool   |   data   |    0  |  283G |
+-------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

[root@ceph01 ceph-cluster]#ceph mds stat
fs-test:1 {0=ceph01=up:active}  # the file system is up and active

[root@ceph01 ceph-cluster]#modprobe rbd   # load the rbd kernel module
[root@ceph01 ceph-cluster]#lsmod | grep rbd
rbd                    94208  0 
libceph               360448  1 rbd
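
Note that rbd is only required for RBD block devices; a kernel-mounted CephFS uses the ceph module instead, which mount -t ceph loads on demand. To load and check it explicitly:

# kernel client module for CephFS (also auto-loaded by mount -t ceph)
modprobe ceph
lsmod | grep ceph
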

[root@ceph01 ceph-cluster]#cat ceph.client.admin.keyring
[client.admin]
	key = AQBJawRjSaZUEBAAZvfbk2N9Our6O6yPzJEZxg==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
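
Mounting as client.admin works but grants full cluster rights; a key scoped to this file system is safer. A minimal sketch, assuming a hypothetical client name fsuser:

# create a key allowing rw on fs-test (client.fsuser is illustrative)
ceph fs authorize fs-test client.fsuser / rw -o /etc/ceph/ceph.client.fsuser.keyring
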

# Mount from the client
mkdir /cephfs_test
mount -t ceph 10.30.130.21:6789:/ /cephfs_test -o name=admin,secret=AQBJawRjSaZUEBAAZvfbk2N9Our6O6yPzJEZxg==
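
Passing the key directly on the command line leaves it in shell history; the secretfile mount option is the usual alternative (the /etc/ceph/admin.secret path is an assumption):

# the secret file holds only the base64 key, not the full keyring
echo 'AQBJawRjSaZUEBAAZvfbk2N9Our6O6yPzJEZxg==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 10.30.130.21:6789:/ /cephfs_test -o name=admin,secretfile=/etc/ceph/admin.secret
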

2. Deleting CephFS pools

# Error messages from incorrect deletion attempts:
# Deleting the pool
[root@ceph01 ceph-cluster]#ceph osd pool rm mypool 
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool mypool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it. # pass the pool name twice, followed by --yes-i-really-really-mean-it
[root@ceph01 ceph-cluster]#ceph osd pool rm mypool mypool --yes-i-really-really-mean-it 
Error EBUSY: pool 'mypool' is in use by CephFS # the pool is still in use by CephFS
[root@ceph01 ceph-cluster]#ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool # mon_allow_pool_delete must be set to true first
# Deleting the CephFS file system
[root@ceph01 ceph-cluster]#ceph fs rm fs-test 
Error EINVAL: all MDS daemons must be inactive/failed before removing filesystem. See `ceph fs fail`. # all MDS daemons must be inactive/failed first
[root@ceph01 ceph-cluster]#ceph fs rm fs-test 
Error EPERM: this is a DESTRUCTIVE operation and will make data in your filesystem permanently inaccessible.  Add --yes-i-really-mean-it if you are sure you wish to continue. # append --yes-i-really-mean-it


# Correct deletion procedure:
[root@ceph01 ceph-cluster]#systemctl stop ceph-mds@ceph01 # stop the MDS service
[root@ceph01 ceph-cluster]#ceph fs rm fs-test --yes-i-really-mean-it # remove the CephFS file system; verify with ceph fs ls
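
Alternatively, instead of stopping the MDS daemon by hand, the EINVAL message above points at ceph fs fail, which marks the file system down and its MDS ranks failed in one step:

ceph fs fail fs-test
ceph fs rm fs-test --yes-i-really-mean-it
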
[root@ceph01 ceph-cluster]#vi /etc/ceph/ceph.conf  # note: this must be configured on ALL mon nodes, followed by a service restart
[mon]
mon_allow_pool_delete = true
[root@ceph01 ceph-cluster]#systemctl restart ceph-mon.target
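
On Nautilus (14.2.22, as shown above) the same switch can also be flipped at runtime through the centralized config database, which avoids editing ceph.conf and restarting the mons:

ceph config set mon mon_allow_pool_delete true
# ...delete the pools...
ceph config set mon mon_allow_pool_delete false
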

[root@ceph01 ceph-cluster]#ceph osd pool delete mypool mypool --yes-i-really-really-mean-it # delete the pool; rm and delete are interchangeable here
pool 'mypool' removed
[root@ceph01 ceph-cluster]#ceph osd pool rm mypool_mata mypool_mata --yes-i-really-really-mean-it 
pool 'mypool_mata' removed
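
A quick sanity check that both the file system and its pools are gone:

ceph fs ls        # expected: No filesystems enabled
ceph osd pool ls  # neither pool should be listed
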
# Restore the configuration after deletion:
[root@ceph01 ceph-cluster]#vi /etc/ceph/ceph.conf  # note: again on ALL mon nodes, followed by a service restart
[mon]
#mon_allow_pool_delete = true
[root@ceph01 ceph-cluster]#systemctl start ceph-mds@ceph01 # start the MDS service again