2. Integrating Ceph with OpenStack (building on the previous chapter)

This section describes how to configure Ceph as the backend storage for OpenStack, used to hold the VMs' ephemeral disks.
First, set up passwordless SSH login from the Ceph admin node to the OpenStack nodes:
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@192.168.10.120
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@192.168.10.121
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@192.168.10.122
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@192.168.10.123
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@192.168.10.124
ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 root@192.168.10.126
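If the admin node does not have an SSH key pair yet, generate one first (a minimal example, assuming the default /root/.ssh/id_rsa path and an empty passphrase):

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa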

Integrating Ceph with OpenStack nova-compute, glance, and cinder

Install the Ceph client
The first step in integrating Ceph with OpenStack is to install the Ceph client on the OpenStack nodes (the Ceph command-line tools plus the libraries needed to connect to the Ceph cluster).

 ceph-deploy install --cli --no-adjust-repos controller1
 ceph-deploy config push controller1
 ceph-deploy install --cli --no-adjust-repos controller2
 ceph-deploy config push controller2
 ceph-deploy install --cli --no-adjust-repos controller3
 ceph-deploy config push controller3
 ceph-deploy install --cli --no-adjust-repos compute1
 ceph-deploy config push compute1
 ceph-deploy install --cli --no-adjust-repos compute2
 ceph-deploy config push compute2
 ceph-deploy install --cli --no-adjust-repos cinder1
 ceph-deploy config push cinder1
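To confirm that the client tools and the pushed ceph.conf are in place on a node, you can run, for example:

ssh controller1 ceph --version
ssh controller1 ls /etc/ceph/ceph.conf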

Create the pools
Create the pools that will back Glance images, Cinder volumes, and Nova ephemeral disks:
[root@ceph-node1 ceph]# ceph osd pool create images 128
pool 'images' created
[root@ceph-node1 ceph]# ceph osd pool create volumes 128
pool 'volumes' created
[root@ceph-node1 ceph]# ceph osd pool create vms 128
pool 'vms' created
[root@ceph-node1 ceph]# ceph osd pool create compute 128
pool 'compute' created
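You can list the pools with ceph osd lspools. On Ceph Luminous and later (not required on older releases), the pools should also be tagged with the rbd application, for example:

ceph osd pool application enable images rbd
ceph osd pool application enable volumes rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable compute rbd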

Create Ceph users and keys for the cinder and glance clients. ceph auth get-or-create generates a user and a key and stores them on the Ceph monitors; the commands below create the users and grant them the appropriate capabilities:
[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images"
[client.cinder]
        key = AQCwcCpa55O+CxAA+l1l0I6XCdMpPz7+OSU2TQ==

[root@ceph-node1 ceph]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQATdClazMH5HRAApL0VvN1vhEKbg94MMWNXGw==
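The users and the capabilities they were granted can be checked at any time on a monitor node:

ceph auth get client.cinder
ceph auth get client.glance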


Distribute the keys to the client nodes and fix the keyring files' ownership and permissions
Clients need a Ceph key to access the cluster. Ceph creates a default user, client.admin, which has enough privileges to access the whole cluster; this user should not be shared with other clients. The better practice is to create separate Ceph users with their own keys that only have access to the specific pools they need. Distribute the glance key to the controller nodes:
[root@ceph-node1 ceph]# ceph auth get-or-create client.glance | ssh controller1 tee /etc/ceph/ceph.client.glance.keyring
[root@ceph-node1 ceph]# ceph auth get-or-create client.glance | ssh controller2 tee /etc/ceph/ceph.client.glance.keyring
[root@ceph-node1 ceph]# ceph auth get-or-create client.glance | ssh controller3 tee /etc/ceph/ceph.client.glance.keyring

[root@ceph-node1 ceph]# ssh controller1 chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@ceph-node1 ceph]# ssh controller2 chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@ceph-node1 ceph]# ssh controller3 chown glance:glance /etc/ceph/ceph.client.glance.keyring
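Optionally, tighten the keyring file permissions as well so that only root and the glance group can read them:

ssh controller1 chmod 0640 /etc/ceph/ceph.client.glance.keyring
ssh controller2 chmod 0640 /etc/ceph/ceph.client.glance.keyring
ssh controller3 chmod 0640 /etc/ceph/ceph.client.glance.keyring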

Add the following to the glance-api.conf configuration file on each controller node:
[root@controller1 ~]# cat /etc/glance/glance-api.conf
[DEFAULT]
default_store = rbd
show_image_direct_url = True

[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
#filesystem_store_datadir = /var/lib/glance/images/
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
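After editing the configuration, restart the Glance API service on each controller node and upload a test image to confirm it is written to the images pool (the image name and file below are just examples; raw format is recommended for RBD-backed images):

# on each controller node
systemctl restart openstack-glance-api
# from a node with OpenStack CLI credentials
openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-rbd-test
# on a Ceph node
rbd -p images ls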


Add the following to the /etc/cinder/cinder.conf configuration file on each cinder node:
[lvm]
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_group = cinder-volumes
#iscsi_protocol = iscsi
#iscsi_helper = lioadm
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
# rbd_secret_uuid must be the UUID of the libvirt secret that holds the client.cinder key (not the Ceph key itself)
rbd_secret_uuid = <libvirt secret uuid>
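Note that this backend section only takes effect if it is referenced from the [DEFAULT] section; a minimal sketch, assuming the section keeps its original name lvm:

[DEFAULT]
enabled_backends = lvm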

[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder | ssh cinder1 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
        key = AQCEVi5akU8qBBAAqYE5w2IG62TIcQGOIt6ESA==
[root@ceph-node1 ceph]# ssh cinder1 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder | ssh compute1 tee /etc/ceph/ceph.client.cinder.keyring
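With the configuration and keyrings in place, restart cinder-volume on the cinder node and create a test volume to confirm it is written to the volumes pool (the volume name below is just an example):

# on cinder1
systemctl restart openstack-cinder-volume
# from a node with OpenStack CLI credentials
openstack volume create --size 1 ceph-test-vol
# on a Ceph node
rbd -p volumes ls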

 



Create a user and key for the compute (nova) client:
[root@ceph ceph]# ceph auth get-or-create client.compute mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=compute, allow rx pool=images"
[client.compute]
key = AQBLHcJYm1XxBBAA75foQeQ72bT3GsGVDzBZcg==

Distribute the compute key to the compute nodes and fix the keyring files' group and permissions. As above, use this dedicated user and key rather than sharing client.admin:
[root@ceph-node1 ceph]# ceph auth get-or-create client.compute | ssh compute1 tee /etc/ceph/ceph.client.compute.keyring
[client.compute]
        key = AQCOkC9asSnGGhAA0yRom0YwIM4HgqBPvbUSZw==
[root@ceph-node1 ceph]# ceph auth get-or-create client.compute | ssh compute2 tee /etc/ceph/ceph.client.compute.keyring
[client.compute]
        key = AQCOkC9asSnGGhAA0yRom0YwIM4HgqBPvbUSZw==

[root@compute1 yum.repos.d]# chgrp nova /etc/ceph/ceph.client.compute.keyring
[root@compute1 yum.repos.d]# chmod 0640 /etc/ceph/ceph.client.compute.keyring


Create a temporary key file for configuring libvirt (use ceph auth get-key here so the file contains only the bare key; otherwise virsh cannot import it):
[root@ceph-node1 ceph]# ceph auth get-key client.compute | ssh compute1 tee /etc/ceph/client.compute
AQCOkC9asSnGGhAA0yRom0YwIM4HgqBPvbUSZw==[root@ceph-node1 ceph]#
[root@ceph-node1 ceph]# ceph auth get-key client.compute | ssh compute2 tee /etc/ceph/client.compute
AQCOkC9asSnGGhAA0yRom0YwIM4HgqBPvbUSZw==[root@ceph-node1 ceph]#
Reference the key in ceph.conf on the compute nodes
The processes on the compute nodes need to be able to find the compute keyring file:
[root@compute1 ceph]# cat  /etc/ceph/ceph.conf
[client.compute]
keyring = /etc/ceph/ceph.client.compute.keyring
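Optionally, client-side RBD cache settings can be added to the same [client.compute] section (an example; tune or omit as needed):

rbd cache = true
rbd cache writethrough until flush = true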

Integrate Ceph with libvirt
The libvirt process needs permission to access the Ceph cluster, so the compute client key has to be stored in libvirt. Add the key to libvirt on each compute node.

Generate a UUID
[root@openstack]# uuidgen    
  c1261b3e-eb93-49bc-aa13-557df63a6347    

Create a libvirt secret XML file (ceph.xml) using that UUID:
<secret ephemeral="no" private="no">
  <uuid>c1261b3e-eb93-49bc-aa13-557df63a6347</uuid>
  <usage type="ceph">
    <name>client.compute secret</name>
  </usage>
</secret>
 
[root@openstack]# virsh secret-define --file ceph.xml
Secret c1261b3e-eb93-49bc-aa13-557df63a6347 created


Add the compute key to libvirt. Note that the UUID passed to virsh secret-set-value, and later set in nova.conf, must match the UUID defined in the secret XML; the environment below uses e45abfec-412b-428c-90a8-186ee3540640:
[root@compute2 ceph]# ls
ceph.client.compute.keyring  ceph.conf  ceph.xml  client.compute  rbdmap  tmpCo5d8W
[root@compute2 ceph]# virsh secret-set-value --secret e45abfec-412b-428c-90a8-186ee3540640  --base64 $(cat client.compute)
Secret value set    
[root@compute1 ceph]# virsh secret-list  
 UUID                                  Usage
--------------------------------------------------------------------------------
 e45abfec-412b-428c-90a8-186ee3540640  ceph client.compute secret    
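Once the secret value has been set on every compute node, the plain-text key file created earlier is no longer needed and can be removed:

ssh compute1 rm /etc/ceph/client.compute
ssh compute2 rm /etc/ceph/client.compute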

Configure nova
Edit the [libvirt] section of /etc/nova/nova.conf on each compute node and add the Ceph authentication information. libvirt will use this user to connect to and authenticate with the Ceph cluster.
[libvirt]
virt_type = qemu
cpu_mode = host-model
#block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC
#live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# the compute pool must already exist in Ceph
images_rbd_pool = compute
images_type = rbd
# UUID of the libvirt secret defined above
rbd_secret_uuid = e45abfec-412b-428c-90a8-186ee3540640
# the Ceph user created above
rbd_user = compute
inject_password = false
inject_key = false
inject_partition = -2
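When ephemeral disks live on RBD, the Ceph/OpenStack integration guides also commonly recommend enabling writeback caching; an optional addition to the same [libvirt] section:

disk_cachemodes = "network=writeback"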

Restart the nova-compute service
[root@openstack]# systemctl restart openstack-nova-compute

Test
Create a new VM, then check whether the VM's ephemeral disk has been created in Ceph:
[root@ceph ceph]# rbd -p compute ls
24e6ca7f-05c8-411b-b23d-6e5ee1c809f9_disk

[root@ceph ceph]# rbd -p compute info 24e6ca7f-05c8-411b-b23d-6e5ee1c809f9_disk
rbd image '24e6ca7f-05c8-411b-b23d-6e5ee1c809f9_disk':








