(1) Prerequisites:
a) The cluster environment has been set up successfully.
b) The cluster status is active+clean (see the health check after the node table below).
c) Node configuration: admnode also doubles as the client-node.
   Hostname   Role                       Disk
   ================================================================
a) admnode    deploy-node, client-node   -
b) node1      mon1, osd.2, mds           /dev/sdb (capacity: 10G)
c) node2      osd.0, mon2                /dev/sdb (capacity: 10G)
d) node3      osd.1, mon3                /dev/sdb (capacity: 10G)
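To confirm prerequisite b) before proceeding, check the cluster state from any node with access to the admin keyring. On a healthy cluster, ceph health reports HEALTH_OK, and every placement group in the ceph -s output should show active+clean:
# ceph health
HEALTH_OK
# ceph -s
...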
(2) Procedure (following http://docs.ceph.com/docs/master/start/quick-rbd/)
a) On the client-node, create a Block Device Image. The default rbd pool is used here (list pools with ceph osd lspools):
# ceph osd lspools
0 rbd,
# rbd create --size 1024 blockDevImg
# rbd ls rbd
blockDevImg
# rbd info blockDevImg
rbd image 'blockDevImg':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.1041.74b0dc51
format: 1
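Note that the image above was created as format 1, the legacy on-disk layout. If snapshot layering/cloning will be needed later, format 2 must be requested at creation time with the --image-format flag; a sketch (the image name blockDevImg2 is only an example):
# rbd create --size 1024 --image-format 2 blockDevImg2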
b) On the client-node, map the image to a block device.
# sudo rbd map blockDevImg --name client.admin
/dev/rbd0
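To double-check which images are currently mapped on this client, and to which kernel devices, rbd showmapped can be used (the output below is illustrative):
# rbd showmapped
id pool image       snap device
0  rbd  blockDevImg -    /dev/rbd0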
c) On the client-node, create a filesystem on that block device.
# sudo mkfs.ext4 -m0 /dev/rbd/rbd/blockDevImg
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
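The path /dev/rbd/rbd/blockDevImg used above follows the /dev/rbd/<pool>/<image> pattern: a udev rule creates it as a symlink to the kernel device returned by rbd map, so it refers to the same device as /dev/rbd0. This can be confirmed with (output abbreviated):
# ls -l /dev/rbd/rbd/blockDevImg
lrwxrwxrwx. 1 root root ... /dev/rbd/rbd/blockDevImg -> ../../rbd0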
d) Mount the filesystem.
# sudo mkdir /mnt/ceph-block-device
# sudo mount /dev/rbd/rbd/blockDevImg /mnt/ceph-block-device
# cd /mnt/ceph-block-device
# mount
...
/dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
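When the block device is no longer needed, tear everything down in reverse order. A minimal sketch using the names from the steps above; note that rbd rm permanently deletes the image and its data:
# sudo umount /mnt/ceph-block-device
# sudo rbd unmap /dev/rbd0
# rbd rm blockDevImg
Removing image: 100% complete...done.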