Configuring Ceph as the OpenStack Backend

This post walks through configuring Ceph as the backend storage for OpenStack: creating pools, setting up SSH keys, creating the ceph directory, configuring the Ceph client, installing the Ceph packages, setting up authentication, and configuring Glance, Cinder, and Nova. Along the way cinder-volume could not connect to the Ceph cluster; the cause turned out to be that the correct keyring file was missing. The fix is to make sure every client has its corresponding keyring file, configured correctly.

An attempt to configure Ceph as the backend for nova, cinder-volume/cinder-backup, and glance.

Reference: the Ceph documentation: http://docs.ceph.com/docs/master/rbd/rbd-openstack/

Start with a Kilo environment installed via devstack.

1. Create the pools
[root@controller-1 ~]# ceph osd pool create volumes-maqi-kilo 128
pool 'volumes-maqi-kilo' created
[root@controller-1 ~]# ceph osd pool create backups-maqi-kilo 128
pool 'backups-maqi-kilo' created
[root@controller-1 ~]# ceph osd pool create images-maqi-kilo 128
pool 'images-maqi-kilo' created
[root@controller-1 ~]# ceph osd pool create vms-maqi-kilo 128
pool 'vms-maqi-kilo' created

[root@controller-1 ~]# ceph osd lspools
0 rbd,1 volumes-maqi-kilo,2 backups-maqi-kilo,3 images-maqi-kilo,4 vms-maqi-kilo,

128 is the PG (Placement Group) number; for environments with fewer than 5 OSDs, 128 is the commonly recommended value.
Update 2015/11/16: this PG number turned out to be a poor choice

admin@maqi-kilo:~|⇒  ceph -s
    cluster d3752df9-221d-43c7-8cf5-f39061a630da
     health HEALTH_WARN
            too many PGs per OSD (576 > max 300)
     monmap e1: 1 mons at {controller-1=10.134.1.3:6789/0}
            election epoch 2, quorum 0 controller-1
     osdmap e18: 2 osds: 2 up, 2 in
      pgmap v48: 576 pgs, 5 pools, 394 bytes data, 4 objects
            20567 MB used, 36839 GB / 36860 GB avail
                 576 active+clean

The 4 new pools have 128 PGs each, and the default rbd pool has 64, for a total of 128*4+64=576 PGs, spread over the two OSDs.
About the warning (576 > max 300): with osd_pool_default_size = 2 and only 2 OSDs, every PG is replicated to both OSDs, so each OSD ends up holding all 576 PGs, which is above the monitor's warning threshold of 300 PGs per OSD (mon_pg_warn_max_per_osd).
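As a rough sizing sketch (the usual rule of thumb is roughly total PGs ≈ OSDs × 100 / replica size, split across the pools and rounded to a power of two): with 2 OSDs, size 2 and 5 pools that works out to about 100 PGs in total, so something like 32 per pool. Since pg_num cannot be decreased on an existing pool, the smaller value would have to be chosen when (re)creating a pool, for example:

[root@controller-1 ~]# ceph osd pool create volumes-maqi-kilo 32
[root@controller-1 ~]# ceph osd pool get volumes-maqi-kilo pg_num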

2. Copy the Ceph node's SSH public key to the OpenStack node
[root@controller-1 ~]# ssh-copy-id admin@10.133.16.195
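(A quick optional check that key-based login actually works before moving on; 10.133.16.195 is the same OpenStack node as above:)

[root@controller-1 ~]# ssh admin@10.133.16.195 hostname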
3. Create the ceph directory on the OpenStack node
admin@maqi-kilo:~|⇒  sudo mkdir /etc/ceph
4. Configure the OpenStack node as a Ceph client

Every node that runs cinder-volume, cinder-backup, nova-compute, or the glance services is a client of the Ceph cluster and needs the ceph.conf configuration file.

[root@controller-1 ~]# ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
[global]
fsid = 1c9f72d3-3ebc-465b-97a4-2784f2db1db3
mon_initial_members = controller-1
mon_host = 10.254.4.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.254.4.3/24
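The same tee trick applies to every other client node (anything running cinder-volume, cinder-backup, nova-compute or glance, as noted above); the address below is just a placeholder for such a node, not one from this environment:

[root@controller-1 ~]# ssh admin@<other-client-node> sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf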
5. Install the Ceph packages on the OpenStack node
admin@maqi-kilo:~|⇒  sudo apt-get install ceph-common
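(A quick sanity check that the client tools are in place; the exact version string will vary with the package installed:)

admin@maqi-kilo:~|⇒  ceph --version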
6. Set up Ceph client authentication

Create Ceph (cephx) users for cinder-volume, cinder-backup, and glance:

[root@controller-1 ~]# ceph auth get-or-create client.cinder-maqi-kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-maqi-kilo , allow rwx pool=vms-maqi-kilo, allow rx pool=images-maqi-kilo'
[client.cinder-maqi-kilo]
    key = AQDJYkhWwv4uKRAAI/JPWK2H4qV+DqMSkkliOQ==

[root@controller-1 ~]# ceph auth get-or-create client.glance-maqi-kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images-maqi-kilo'
[client.glance-maqi-kilo]
    key = AQAPY0hW+1YQOBAA3aRlTVGkfzTA4ZfaBEmM8Q==

[root@controller-1 ~]# ceph auth get-or-create client.cinder-backup-maqi-kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups-maqi-kilo'
[client.cinder-backup-maqi-kilo]
    key = AQA7Y0hWxegCChAAhTHc7abrE9bGON97bSsLgw==
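
To double-check the capabilities that were just granted, they can be read back (the output should match the mon/osd caps passed to get-or-create above):

[root@controller-1 ~]# ceph auth get client.cinder-maqi-kilo
[root@controller-1 ~]# ceph auth get client.glance-maqi-kilo
[root@controller-1 ~]# ceph auth get client.cinder-backup-maqi-kilo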

Copy the keys over to the OpenStack node and fix their ownership. [Note: the keyring file name used in this step is wrong; see Problem 1 for details.]

[root@controller-1 ~]# ceph auth get-or-create client.cinder-maqi-kilo | ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder-maqi-kilo]
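
The rest of this step follows the rbd-openstack guide linked above; the commands below are a sketch, not the exact ones from this environment. They use corrected file names: by default librados looks for /etc/ceph/$cluster.$name.keyring (here ceph.client.cinder-maqi-kilo.keyring), so the keyring file name has to match the client name created above, which appears to be what Problem 1 is about. The glance/cinder owners assume a package-based install; on devstack the services usually run as the stack user, so chown to that user instead.

[root@controller-1 ~]# ceph auth get-or-create client.cinder-maqi-kilo | ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.client.cinder-maqi-kilo.keyring
[root@controller-1 ~]# ceph auth get-or-create client.glance-maqi-kilo | ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.client.glance-maqi-kilo.keyring
[root@controller-1 ~]# ceph auth get-or-create client.cinder-backup-maqi-kilo | ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.client.cinder-backup-maqi-kilo.keyring
admin@maqi-kilo:~|⇒  sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-maqi-kilo.keyring
admin@maqi-kilo:~|⇒  sudo chown glance:glance /etc/ceph/ceph.client.glance-maqi-kilo.keyring
admin@maqi-kilo:~|⇒  sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup-maqi-kilo.keyring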