Using the Ceph Client: the Ceph Block Storage Client

Ceph Block Storage

The Ceph block device, formerly known as the RADOS Block Device (RBD), provides clients with reliable, distributed, high-performance block storage disks. RBD uses the librbd library and stores data blocks in sequential stripes across multiple OSDs in the Ceph cluster. RBD is backed by Ceph's RADOS layer, so every block device is spread across multiple Ceph nodes, yielding high performance and excellent reliability. RBD has native Linux kernel support, and the RBD driver has been well integrated with the Linux kernel for years. Beyond reliability and performance, RBD offers enterprise features such as full and incremental snapshots, thin provisioning, copy-on-write clones, and dynamic resizing. RBD also supports in-memory caching, which greatly improves performance.
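As a quick taste of those enterprise features, all of them are exposed through the rbd CLI. A brief sketch, assuming a pool named rbd and an image named rbd1 like the ones created later in this article (the snapshot and clone names are illustrative):

rbd snap create rbd/rbd1@snap1            # full snapshot of the image
rbd snap protect rbd/rbd1@snap1           # a snapshot must be protected before cloning
rbd clone rbd/rbd1@snap1 rbd/rbd1-clone   # copy-on-write clone of the snapshot
rbd resize --size 20480 rbd/rbd1          # grow the thin-provisioned image to 20GiB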

Installing the Ceph Block Storage Client

Create a Ceph block client user and authentication key

[ceph-admin@ceph-node1 my-cluster]$ ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' | tee ./ceph.client.rbd.keyring

[client.rbd]
        key = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==

[ceph-admin@ceph-node1 my-cluster]$ ceph auth get client.rbd
exported keyring for client.rbd
[client.rbd]
        key = AQChG2Vcu552KRAAMf4/SdfSVa4sFDZPfsY8bg==
        caps mon = "allow r"
        caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=rbd"
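If the capabilities need adjusting later (for example, to grant access to an additional pool), ceph auth caps rewrites them in place. A sketch; the pool rbd2 is illustrative:

[ceph-admin@ceph-node1 my-cluster]$ ceph auth caps client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=rbd2'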

Copy the keyring file and the configuration file to the client

[ceph-admin@ceph-node1 my-cluster]$ scp ceph.client.rbd.keyring /etc/ceph/ceph.conf root@192.168.0.123:/etc/ceph

Verify that the client meets the block device requirements

[root@localhost ~]# uname -r

3.10.0-862.el7.x86_64

[root@localhost ~]# modprobe rbd

[root@localhost ~]# echo $?

0

Install the Ceph client

[root@localhost ~]# wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo

[root@localhost ~]# yum install -y ceph

Test connecting to the cluster with the key

[root@localhost ~]# ceph -s --name client.rbd

  cluster:
    id:     cde2c9f7-009e-4bb4-a206-95afa4c43495
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   9.06GiB used, 171GiB / 180GiB avail
    pgs:
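Here ceph finds the key automatically because the keyring was copied to /etc/ceph under the default name ceph.client.rbd.keyring. If the keyring lived elsewhere, the path could be passed explicitly, for example:

[root@localhost ~]# ceph -s --name client.rbd --keyring /etc/ceph/ceph.client.rbd.keyring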

Creating and Mapping a Block Device on the Client

Create the rbd pool

[ceph-admin@ceph-node1 my-cluster]$ ceph osd lspools

[ceph-admin@ceph-node1 my-cluster]$ ceph osd pool create rbd 128
pool 'rbd' created

[ceph-admin@ceph-node1 my-cluster]$ ceph osd lspools
1 rbd,
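On Luminous and newer releases, a new pool should also be tagged for RBD use before images are created in it, otherwise the cluster may raise a health warning about no application being enabled on the pool; a sketch:

[ceph-admin@ceph-node1 my-cluster]$ rbd pool init rbd

(Equivalently, ceph osd pool application enable rbd rbd sets the same tag.)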

Create a block device from the client

[root@localhost ceph]# rbd create rbd1 --size 10240 --name client.rbd

List and inspect the image

[root@localhost ceph]# rbd ls -p rbd --name client.rbd

rbd1

[root@localhost ceph]# rbd list --name client.rbd

rbd1

[root@localhost ceph]# rbd --image rbd1 info --name client.rbd
rbd image 'rbd1':
        size 10GiB in 2560 objects
        order 22 (4MiB objects)
        block_name_prefix: rbd_data.faa76b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        create_timestamp: Thu Feb 14 17:53:54 2019

Map the image on the client

[root@localhost ceph]# rbd map --image rbd1 --name client.rbd
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

The mapping fails with a feature set mismatch. The image features involved are:

layering: layering support

exclusive-lock: exclusive locking support

object-map: object map support (requires exclusive-lock)

deep-flatten: snapshot flatten support

fast-diff: fast diff calculation support (requires object-map)

When using the krbd (kernel RBD) client on client-node1, we cannot map the block device image on the CentOS 3.10 kernel, because that kernel does not support object-map, deep-flatten, or fast-diff (support for them was introduced in kernel 4.9). To resolve this, we disable the unsupported features. There are several ways to do so:

1) Disable them dynamically

[root@localhost ceph]# rbd feature disable rbd1 exclusive-lock object-map fast-diff deep-flatten --name client.rbd

2) Enable only the layering feature when creating the RBD image

[root@localhost ceph]# rbd create rbd2 --size 10240 --image-feature layering --name client.rbd

3) Disable them in the Ceph configuration file

rbd default features = 1
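The value is a bitmask of feature flags: layering = 1, striping = 2, exclusive-lock = 4, object-map = 8, fast-diff = 16, deep-flatten = 32, so rbd default features = 1 enables only layering, which the 3.10 kernel supports. As a sketch, to have new images default to layering plus exclusive-lock instead, the client's ceph.conf could carry:

[client]
rbd default features = 5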

Map the image to the client again

[root@localhost ceph]# rbd map --image rbd1 --name client.rbd
/dev/rbd0

[root@localhost ceph]# rbd showmapped --name client.rbd
id pool image snap device
0  rbd  rbd1  -    /dev/rbd0

Create a filesystem and mount it

[root@localhost ceph]# fdisk -l /dev/rbd0

Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

[root@localhost ceph]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@localhost ceph]# mkdir /mnt/ceph-disk1
[root@localhost ceph]# mount /dev/rbd0 /mnt/ceph-disk1
[root@localhost ceph]# df -h /mnt/ceph-disk1
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        10G   33M   10G   1% /mnt/ceph-disk1

Test writing data

[root@localhost ceph]# ll /mnt/ceph-disk1/
total 0

[root@localhost ceph]# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.127818 s, 820 MB/s

[root@localhost ceph]# ll /mnt/ceph-disk1/
total 102400
-rw-r--r-- 1 root root 104857600 Feb 15 10:47 file1

[root@localhost ceph]# df -h /mnt/ceph-disk1/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        10G  133M  9.9G   2% /mnt/ceph-disk1

Mount Automatically at Boot

Download the mount script

[root@localhost ceph]# wget -O /usr/local/bin/rbd-mount https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount

[root@localhost ceph]# chmod +x /usr/local/bin/rbd-mount
[root@localhost ceph]# cat /usr/local/bin/rbd-mount
#!/bin/bash

# Pool name where block device image is stored
export poolname=rbd

# Disk image name
export rbdimage=rbd1

# Mounted Directory
export mountpoint=/mnt/ceph-disk1

# Image mount/unmount and pool are passed from the systemd service as arguments
# Are we mounting or unmounting
if [ "$1" == "m" ]; then
    modprobe rbd
    rbd feature disable $rbdimage object-map fast-diff deep-flatten
    rbd map $rbdimage --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
    mkdir -p $mountpoint
    mount /dev/rbd/$poolname/$rbdimage $mountpoint
fi

if [ "$1" == "u" ]; then
    umount $mountpoint
    rbd unmap /dev/rbd/$poolname/$rbdimage
fi
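Before wiring the script into systemd, it can be exercised by hand; it takes m to mount and u to unmount:

[root@localhost ceph]# /usr/local/bin/rbd-mount m
[root@localhost ceph]# /usr/local/bin/rbd-mount u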

Create a systemd service

[root@localhost ceph]# wget -O /etc/systemd/system/rbd-mount.service https://raw.githubusercontent.com/aishangwei/ceph-demo/master/client/rbd-mount.service

[root@localhost ceph]# systemctl daemon-reload

[root@localhost ceph]# systemctl enable rbd-mount.service
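The downloaded unit file's contents are not shown above; a minimal sketch of what such a oneshot unit could look like (an assumption for illustration, not necessarily the exact file served by that URL):

[Unit]
Description=RADOS block device mapping for rbd1
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u

[Install]
WantedBy=multi-user.target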

Reboot and verify the automatic mount

[root@localhost ceph]# reboot -f

[root@localhost ceph]# df -h /mnt/ceph-disk1/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd1        10G  133M  9.9G   2% /mnt/ceph-disk1

[root@localhost ceph]# ll -h /mnt/ceph-disk1/
total 100M
-rw-r--r-- 1 root root 100M Feb 15 10:47 file1
