Configuring Ceph (Infernalis) as the Storage Backend for OpenStack (Liberty)

(http://docs.ceph.com/docs/master/rbd/rbd-openstack/)

1. Create the storage pools
On the Ceph cluster (run on the ceph-deploy node), create the pools that OpenStack will use:
ceph osd pool create volumes 256
ceph osd pool create images 256
ceph osd pool create backups 256
ceph osd pool create vms 256

List the pools:
ceph osd lspools
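
The pg_num value of 256 above is only an example and should be sized to the number of OSDs in the cluster. A pool's placement-group count can be double-checked afterwards, e.g.:
ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num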

2. Configure the Ceph clients on the OpenStack node
OpenStack node start
    vim /etc/yum.repos.d/ceph.repo
    Copy in the repository definitions below (note: to install the hammer release instead, replace "rpm_infernalis" in the baseurl below with its hammer counterpart, e.g. "rpm_hammer"):
        [epel]
        name=Ceph epel packages
        baseurl=ftp://193.168.140.67/pub/ceph/epel/
        enabled=1
        priority=2
        gpgcheck=0

        [ceph]
        name=Ceph packages
        baseurl=ftp://193.168.140.67/pub/ceph/rpm_infernalis/
        enabled=1
        priority=2
        gpgcheck=0

        [update]
        name=update
        baseurl=ftp://193.168.140.67/pub/updates/
        enabled=1
        priority=2
        gpgcheck=0

        [base]
        name=base
        baseurl=ftp://193.168.140.67/pub/base/
        enabled=1
        priority=2
        gpgcheck=0

    List the enabled yum repositories:
    yum repolist all


    sudo yum -y install python-rbd ceph
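
    A quick sanity check that the client packages were installed and match the intended release:
    ceph --version
    rpm -q ceph python-rbd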
OpenStack node end

ceph-deploy node start
    Set up passwordless SSH from the ceph-deploy node to the OpenStack node:
    ssh-copy-id root@{your-openstack-server}

    Copy the cluster's ceph.conf to the client; run this on the Ceph node:
    scp /etc/ceph/ceph.conf root@{your-openstack-server}:/etc/ceph/
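
    Optionally verify that the file arrived on the OpenStack node:
    ssh root@{your-openstack-server} ls -l /etc/ceph/ceph.conf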

    Run the following on the ceph-deploy node to generate cephx credentials for cinder, glance, and cinder-backup:
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
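
    The granted capabilities can be reviewed at any time with, for example:
    ceph auth get client.cinder
    ceph auth get client.glance
    ceph auth get client.cinder-backup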


    Then run the following to distribute the keyrings to the OpenStack node and set their ownership:
    ceph auth get-or-create client.glance | ssh root@{your-openstack-server} tee /etc/ceph/ceph.client.glance.keyring
    ssh root@{your-openstack-server} chown glance:glance /etc/ceph/ceph.client.glance.keyring
    ceph auth get-or-create client.cinder | ssh root@{your-openstack-server} tee /etc/ceph/ceph.client.cinder.keyring
    ssh root@{your-openstack-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
    ceph auth get-or-create client.cinder-backup | ssh root@{your-openstack-server} tee /etc/ceph/ceph.client.cinder-backup.keyring
    ssh root@{your-openstack-server} chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

    ceph auth get-key client.cinder | ssh root@{your-openstack-server} tee client.cinder.key
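
    The OpenStack node should now hold one keyring per client plus the raw cinder key; this can be verified with:
    ssh root@{your-openstack-server} ls -l /etc/ceph/ceph.client.glance.keyring /etc/ceph/ceph.client.cinder.keyring /etc/ceph/ceph.client.cinder-backup.keyring client.cinder.key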
ceph-deploy node end

OpenStack node start
    uuidgen

    Take the UUID printed by uuidgen above and use it to replace "3dea6dec-9e7c-4f6a-8aff-b6da186915e8" everywhere in the commands below:
    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>3dea6dec-9e7c-4f6a-8aff-b6da186915e8</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
   
    sudo virsh secret-define --file secret.xml
   
    sudo virsh secret-set-value --secret 3dea6dec-9e7c-4f6a-8aff-b6da186915e8 --base64 $(cat ~/client.cinder.key) && rm client.cinder.key secret.xml
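
    To confirm that libvirt stored the secret (the returned value should equal the cinder key):
    sudo virsh secret-list
    sudo virsh secret-get-value --secret 3dea6dec-9e7c-4f6a-8aff-b6da186915e8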

OpenStack node end

3. Configure OpenStack to use Ceph
# Glance configuration
vim /etc/glance/glance-api.conf
[DEFAULT]
...
show_image_direct_url = True
...
[glance_store]
#default_store = file      <-- comment this line out if it is present
#filesystem_store_datadir = /var/lib/glance/images/      <-- comment this line out if it is present
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8


sudo systemctl restart openstack-glance-api
sudo systemctl status openstack-glance-api
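
Ceph can only clone copy-on-write from raw images, so qcow2 cloud images are best converted to raw before being uploaded to Glance. A minimal sketch, assuming a locally downloaded cirros qcow2 file (file and image names here are examples only):
qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64.raw
glance image-create --name cirros-0.3.4-x86_64_raw --disk-format raw --container-format bare --file cirros-0.3.4-x86_64.raw
# the uploaded image should then be visible as an RBD image in the images pool
rbd ls images --id glance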

# Cinder configuration
vim /etc/cinder/cinder.conf
[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 3dea6dec-9e7c-4f6a-8aff-b6da186915e8   <-- note: the UUID generated by uuidgen above

# Cinder Backup configuration
vim /etc/cinder/cinder.conf
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

# Nova configuration
vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20


mkdir -p /var/run/ceph/guests/ /var/log/qemu/
groupadd libvirtd
chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/
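
Because /var/run is cleared at boot, the admin-socket directory can be recreated automatically on a systemd host with a tmpfiles.d entry, for example (the file name here is arbitrary):
cat > /etc/tmpfiles.d/ceph-guests.conf <<EOF
d /var/run/ceph/guests 0770 qemu libvirtd -
EOF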

vim /etc/nova/nova.conf
[libvirt]
...
...
hw_disk_discard = unmap
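
The upstream guide referenced at the top also backs Nova ephemeral disks with Ceph; if that is desired here, the [libvirt] section would typically carry these additional settings as well (uuid = the value generated by uuidgen above):
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 3dea6dec-9e7c-4f6a-8aff-b6da186915e8
disk_cachemodes = "network=writeback"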

# Restart the OpenStack services
sudo systemctl restart openstack-glance-api
sudo systemctl restart openstack-nova-compute
sudo systemctl restart openstack-cinder-volume
sudo systemctl restart openstack-cinder-backup

sudo systemctl status openstack-glance-api
sudo systemctl status openstack-nova-compute
sudo systemctl status openstack-cinder-volume
sudo systemctl status openstack-cinder-backup

# Testing
Check the overall OpenStack service status:
openstack-status
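
The individual services can also be checked directly; once the ceph backend is active, cinder-volume should show up with a host of the form <hostname>@ceph:
cinder service-list
nova service-list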


nova image-list  ==> list the images known to Nova
+--------------------------------------+-------------------------+--------+--------+
| ID                                   | Name                    | Status | Server |
+--------------------------------------+-------------------------+--------+--------+
| 4c65f914-8d6a-4cde-8e66-e546c66b6152 | cirros-0.3.3-x86_64     | ACTIVE |        |
| 8ac87fde-2392-4fda-9fcf-71601100f0d7 | cirros-0.3.4-x86_64     | ACTIVE |        |
| e79414f2-2cd6-4459-a0e0-ea2c0a28071f | cirros-0.3.4-x86_64_raw | ACTIVE |        |
+--------------------------------------+-------------------------+--------+--------+

Create a volume from an image:
cinder create --image-id {id of nova image-list} --display-name {name of volume} {size of volume}
For example:
cinder create --image-id e79414f2-2cd6-4459-a0e0-ea2c0a28071f --display-name testVolume2 1

Check the Cinder volume list:
cinder list
+--------------------------------------+-----------+------------------+-------------+------+-------------+----------+-------------+-------------+
|                  ID                  |   Status  | Migration Status |     Name    | Size | Volume Type | Bootable | Multiattach | Attached to |
+--------------------------------------+-----------+------------------+-------------+------+-------------+----------+-------------+-------------+
| 1557c4c0-1af3-4b64-924d-666bca3e044c |   error   |        -         | testVolume1 |  1   |      -      |  false   |    False    |             |
| 950bd316-ec82-4723-bb33-047196008f48 |   error   |        -         | testVolume1 |  1   |      -      |  false   |    False    |             |
| e087c278-0846-4c20-9a22-7a83f304ad65 | available |        -         | testVolume2 |  1   |      -      |   true   |    False    |             |
+--------------------------------------+-----------+------------------+-------------+------+-------------+----------+-------------+-------------+
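
To confirm that the successful volume really lives in Ceph, list the volumes pool using the cinder credentials; the RBD driver names each image volume-<volume id>:
rbd ls volumes --id cinder
# expected output along the lines of:
# volume-e087c278-0846-4c20-9a22-7a83f304ad65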

Reposted from: https://my.oschina.net/u/658505/blog/646709
