Ceph Cookbook Study and Practice Notes (Part 2): Working with Ceph Block Storage

Chapter 2: Working with Ceph Block Storage

Once the Ceph cluster has been installed and configured, the next task is storage provisioning: the process of allocating storage space or capacity to physical or virtual machines, whether in the form of blocks, files, or objects.

Configuring a Ceph client:

Any mainstream Linux host can act as a Ceph client. Clients communicate with the Ceph cluster over the network to store or retrieve user data.

Steps:

As before, we use Vagrant and VirtualBox to set up the Ceph client (an Ubuntu 14.04 virtual machine acting as the Ceph client).

1. From the directory where the ceph-cookbook git repository (https://github.com/ksingh7/ceph-cookbook.git) was cloned, use Vagrant to start the Ceph client virtual machine:

vagrant up client-node1        # start the virtual machine
vagrant status client-node1    # check its running status

2. Log in to the client-node1 node:

vagrant ssh client-node1

Virtual machines provisioned by Vagrant use vagrant as both the username and the password and have sudo privileges; the default password for the root user is also vagrant.

3. Check the OS and kernel version:

lsb_release -a
uname -r

4. Check kernel support for RBD:

sudo modprobe rbd
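
If modprobe exits without an error, the rbd module is available; you can confirm it is actually loaded with lsmod:

lsmod | grep rbd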

5. Allow the monitor node ceph-node1 passwordless SSH access to client-node1 (copy root's SSH key from ceph-node1 to the vagrant user on client-node1):

vagrant ssh ceph-node1
sudo su -
ssh-copy-id vagrant@client-node1

6. On ceph-node1, use the ceph-deploy tool to install the Ceph binaries on client-node1:

cd /etc/ceph
ceph-deploy --username vagrant install client-node1
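
Once the installation finishes, you can optionally confirm that the Ceph binaries are present on the client by querying the version over the SSH trust set up earlier:

ssh vagrant@client-node1 ceph --version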

7. Push the Ceph configuration file (ceph.conf) to client-node1:

ceph-deploy --username vagrant config push client-node1

8. The client needs a Ceph key to access the Ceph cluster. Ceph creates a default user, client.admin, which has full access to the cluster. Sharing the client.admin key with every client node is not recommended; a better approach is to create a new Ceph user with its own key that can access only a specific pool.

In this example, we create a Ceph user named client.rbd with access to the rbd pool. By default, Ceph block devices are created in the rbd pool:

ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
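
To double-check the capabilities granted to the new user, its entry in the auth database can be printed (run this on ceph-node1, where client.admin is available); the output also includes the generated key:

ceph auth get client.rbd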

9. Add the key for the client.rbd user to client-node1:

ceph auth get-or-create client.rbd | ssh vagrant@client-node1 sudo tee /etc/ceph/ceph.client.rbd.keyring

10. At this point, client-node1 is ready to act as a Ceph client. Check the status of the Ceph cluster from client-node1 by supplying the username and key:

vagrant ssh client-node1
sudo su -
cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring

# Since we are not using the default client.admin user, we must supply the username to connect to the Ceph cluster
ceph -s --name client.rbd

 

Creating a Ceph Block Device

1. Create a RADOS block device named rbd1 with a size of 10240 MB:

rbd create rbd1 --size 10240 --name client.rbd

2. List RBD images in several ways:

# The default pool for block device images is rbd; a pool can also be specified with the -p option of the rbd command
rbd ls --name client.rbd
rbd ls -p rbd --name client.rbd
rbd list --name client.rbd

3. Check the details of the rbd image:

rbd --image rbd1 info --name client.rbd

 

Mapping a Ceph Block Device

Now that we have created a block device on the Ceph cluster, we need to map it to the client machine in order to use it.

1. Map the block device to client-node1:

rbd map --image rbd1 --name client.rbd

2. Check the mapped block device:

rbd showmapped --name client.rbd
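
For reference, a mapping can later be removed with rbd unmap; do not run it at this point, since the device is used in the next steps:

rbd unmap /dev/rbd1 --name client.rbd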

3. To use this block device, we need to create a filesystem on it and mount it:

fdisk -l /dev/rbd1
mkfs.xfs /dev/rbd1
mkdir /mnt/ceph-disk1
mount /dev/rbd1 /mnt/ceph-disk1
df -h /mnt/ceph-disk1

4. Test the block device by writing data to it:

dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M

5. To have the block device mapped again after a reboot, add the rbdmap init script to the system startup, add the Ceph user and keyring details to /etc/ceph/rbdmap, and finally update the /etc/fstab file:

wget https://raw.githubusercontent.com/ksingh7/ceph-cookbook/master/rbdmap -O /etc/init.d/rbdmap
chmod +x /etc/init.d/rbdmap
update-rc.d rbdmap defaults

# Make sure the correct keyring is used in the /etc/ceph/rbdmap file; it is usually unique per environment
echo "rbd/rbd1 id=rbd, keyring=AQBvooRdAPlJIRAAE4EhWj4jiGUm76mFB0CoXA==" >> /etc/ceph/rbdmap
echo "/dev/rbd1 /mnt/ceph-disk1 xfs defaults, _netdev0 0" >> /etc/fstab
/etc/init.d/rbdmap start
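
Optionally, you can sanity-check the fstab entry right away by unmounting the filesystem and remounting everything listed in /etc/fstab:

umount /mnt/ceph-disk1
mount -a
df -h /mnt/ceph-disk1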

 

Resizing a Ceph RBD

Ceph supports thin provisioning, which means no physical storage space is consumed until data is actually written to the block device. Ceph block devices are very flexible: the size of an RBD can be increased or decreased on the Ceph storage side.
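
To observe thin provisioning, compare the provisioned size of an image with the space it actually uses. The rbd du subcommand assumes a reasonably recent Ceph release; on older releases, ceph df at least shows per-pool usage:

rbd du rbd1 --name client.rbd    # provisioned vs. actually used space of the image
ceph df --name client.rbd        # cluster-wide and per-pool usage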

1. Increase the size of the previously created RBD image from 10 GB to 20 GB:

rbd resize --image rbd1 --size 20480 --name client.rbd
rbd info --image rbd1 --name client.rbd

2. We then need to grow the filesystem to take advantage of the added space:

dmesg | grep -i capacity | grep rbd1
xfs_growfs -d /mnt/ceph-disk1
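
A df on the mount point should now report the new 20 GB size:

df -h /mnt/ceph-disk1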

Working with RBD Snapshots

A snapshot is a read-only, point-in-time copy of an RBD image. By creating and restoring snapshots, you can preserve the state of a Ceph RBD image and recover the original data from a snapshot.

1. Create a file on the block device we created earlier:

echo "Hello Ceph this is snapshot test" > /mnt/ceph-disk1/snapshot_test_file

2. Create a snapshot of the Ceph block device.

Syntax: rbd snap create <pool-name>/<image-name>@<snap-name>

rbd snap create rbd/rbd1@snapshot1 --name client.rbd

3. List the snapshots of an image.

Syntax: rbd snap ls <pool-name>/<image-name>

rbd snap ls rbd/rbd1 --name client.rbd

4. Delete the file to test the snapshot restore:

rm -rf /mnt/ceph-disk1/*

5. Now roll back the Ceph RBD snapshot to get the deleted file back. (A rollback overwrites the current version of the RBD image and its data with the snapshot version, so use it with care.)

## Syntax: rbd snap rollback <pool-name>/<image-name>@<snap-name>
rbd snap rollback rbd/rbd1@snapshot1 --name client.rbd

6. Once the rollback has completed, remount the Ceph RBD filesystem to refresh its state. You will find that the deleted file has been restored:

umount /mnt/ceph-disk1
mount /dev/rbd1 /mnt/ceph-disk1
ls -l /mnt/ceph-disk1/

7. Delete a single snapshot or all snapshots.

Syntax: rbd snap rm <pool-name>/<image-name>@<snap-name>
rbd snap rm rbd/rbd1@snapshot1 --name client.rbd

Syntax: rbd snap purge <pool-name>/<image-name>
rbd snap purge rbd/rbd1 --name client.rbd

Working with RBD Clones

Ceph supports a very useful feature: creating copy-on-write (COW) clones from RBD snapshots, known in Ceph as snapshot layering. Layering allows clients to create multiple clones of a Ceph RBD image, which is extremely useful for cloud and virtualization platforms such as OpenStack, CloudStack, and Qemu/KVM. These platforms typically protect a Ceph RBD image in the form of a snapshot, and that snapshot is then cloned many times to spawn new instances. Snapshots are read-only, while COW clones are fully writable.

Ceph RBD images come in two types: format-1 and format-2. Snapshots are supported on both types, and format-1 is the default, but the layering feature (COW cloning) is only supported for format-2 RBD images.

1. Create a format-2 RBD image and check its details:

rbd create rbd2 --size 10240 --image-format 2 --name client.rbd
rbd info --image rbd2 --name client.rbd

2. Create a snapshot of this RBD image:

rbd snap create rbd/rbd2@snapshot_for_cloning --name client.rbd

3. To create a COW clone, first protect the snapshot. This is a very important step, because if the snapshot were deleted, all the COW clones attached to it would be destroyed:

rbd snap protect rbd/rbd2@snapshot_for_cloning --name client.rbd

4. Create a cloned RBD image from the snapshot.

Syntax: rbd clone <pool-name>/<parent-image>@<snap-name> <pool-name>/<child-image-name>

rbd clone rbd/rbd2@snapshot_for_cloning rbd/clone_rbd2 --name client.rbd

5. Creating a clone is a very fast operation. Once it has completed, check the information of the new image:

rbd info rbd/clone_rbd2 --name client.rbd
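
You can also list the clones attached to the parent snapshot with the rbd children subcommand; it should report rbd/clone_rbd2 as a child of the protected snapshot:

rbd children rbd/rbd2@snapshot_for_cloning --name client.rbd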

We now have a cloned RBD image that depends on its parent image's snapshot. To make the cloned image independent of its parent, we need to flatten it, which merges the data from the parent snapshot into the child image. The time this takes depends on how much data the parent snapshot currently holds. Once flattening has completed, there is no longer any dependency between the cloned RBD image and its parent.

6. Start the flatten operation with the following commands:

rbd flatten rbd/clone_rbd2 --name client.rbd
rbd info --image clone_rbd2 --name client.rbd

7. If the parent snapshot is no longer needed, it can be removed; unprotect it first, then delete it:

rbd snap unprotect rbd/rbd2@snapshot_for_cloning --name client.rbd
rbd snap rm rbd/rbd2@snapshot_for_cloning --name client.rbd
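
As an optional final check, confirm that no snapshots remain on the parent image:

rbd snap ls rbd/rbd2 --name client.rbd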
