Trying out Ceph as the backend for nova, cinder-volume/cinder-backup, and glance.
Reference: the Ceph documentation: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
Start with a Kilo environment installed via devstack.
1. Create pools
[root@controller-1 ~]# ceph osd pool create volumes-maqi-kilo 128
pool 'volumes-maqi-kilo' created
[root@controller-1 ~]# ceph osd pool create backups-maqi-kilo 128
pool 'backups-maqi-kilo' created
[root@controller-1 ~]# ceph osd pool create images-maqi-kilo 128
pool 'images-maqi-kilo' created
[root@controller-1 ~]# ceph osd pool create vms-maqi-kilo 128
pool 'vms-maqi-kilo' created
[root@controller-1 ~]# ceph osd lspools
0 rbd,1 volumes-maqi-kilo,2 backups-maqi-kilo,3 images-maqi-kilo,4 vms-maqi-kilo,
128 is the pg (Placement Group) number; the docs recommend 128 per pool for clusters with fewer than 5 OSDs.
Update 2015/11/16: this pg number setting turned out to be inaccurate:
admin@maqi-kilo:~|⇒ ceph -s
cluster d3752df9-221d-43c7-8cf5-f39061a630da
health HEALTH_WARN
too many PGs per OSD (576 > max 300)
monmap e1: 1 mons at {controller-1=10.134.1.3:6789/0}
election epoch 2, quorum 0 controller-1
osdmap e18: 2 osds: 2 up, 2 in
pgmap v48: 576 pgs, 5 pools, 394 bytes data, 4 objects
20567 MB used, 36839 GB / 36860 GB avail
576 active+clean
The 4 new pools have 128 PGs each, and the default rbd pool has 64, for a total of 128*4+64=576 PGs, spread over only two OSDs.
The warning reports a limit of 300 PGs per OSD (the mon_pg_warn_max_per_osd default). With replica size 2 and only 2 OSDs, every OSD holds a copy of all 576 PGs, which is why the warning fires.
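The arithmetic behind the warning can be sketched with shell arithmetic. The values are taken from the `ceph -s` output above; the 100-PGs-per-OSD figure is the common upstream sizing rule of thumb, not something reported by this cluster:

```shell
# Per-OSD PG count: every replica of a PG counts against the OSD that
# stores it, so: total_pgs * replica_size / num_osds.
total_pgs=576       # from `ceph -s`: 576 pgs
replica_size=2      # osd_pool_default_size = 2
num_osds=2          # from `ceph -s`: 2 osds: 2 up, 2 in
pgs_per_osd=$(( total_pgs * replica_size / num_osds ))
echo "PGs per OSD: ${pgs_per_osd}"      # 576, well above the 300 limit

# Rule-of-thumb target: ~100 PGs per OSD, i.e. total PGs across all
# pools ~= num_osds * 100 / replica_size, rounded up to a power of two.
target=$(( num_osds * 100 / replica_size ))
pg_total=1
while [ "$pg_total" -lt "$target" ]; do pg_total=$(( pg_total * 2 )); done
echo "Suggested total pg_num across all pools: ${pg_total}"   # 128
```

So with only two OSDs, something much smaller per pool (e.g. 32 PGs each, giving 32*4+64=192 total) would have stayed under the limit.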
2. Copy the Ceph node's SSH public key to the OpenStack node
[root@controller-1 ~]# ssh-copy-id admin@10.133.16.195
3. Create a ceph directory on the OpenStack node
admin@maqi-kilo:~|⇒ sudo mkdir /etc/ceph
4. Configure the OpenStack Ceph client
The nodes running cinder-volume, cinder-backup, nova-compute, and the glance services are all clients of the Ceph cluster, so each of them needs a ceph.conf configuration file:
[root@controller-1 ~]# ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
[global]
fsid = 1c9f72d3-3ebc-465b-97a4-2784f2db1db3
mon_initial_members = controller-1
mon_host = 10.254.4.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.254.4.3/24
5. Install Ceph packages on the OpenStack node
admin@maqi-kilo:~|⇒ sudo apt-get install ceph-common
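After the install, a quick sanity check that the client tools landed on the path (the exact version string depends on the distro packages, so no output is shown here):

```shell
# Confirm the client binaries shipped by ceph-common are available.
ceph --version
which rbd rados
```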
6. Set up Ceph client authentication
Create users for cinder-volume, cinder-backup, and glance:
[root@controller-1 ~]# ceph auth get-or-create client.cinder-maqi-kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-maqi-kilo , allow rwx pool=vms-maqi-kilo, allow rx pool=images-maqi-kilo'
[client.cinder-maqi-kilo]
key = AQDJYkhWwv4uKRAAI/JPWK2H4qV+DqMSkkliOQ==
[root@controller-1 ~]# ceph auth get-or-create client.glance-maqi-kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images-maqi-kilo'
[client.glance-maqi-kilo]
key = AQAPY0hW+1YQOBAA3aRlTVGkfzTA4ZfaBEmM8Q==
[root@controller-1 ~]# ceph auth get-or-create client.cinder-backup-maqi-kilo mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups-maqi-kilo'
[client.cinder-backup-maqi-kilo]
key = AQA7Y0hWxegCChAAhTHc7abrE9bGON97bSsLgw==
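To double-check what was actually granted, each entity can be read back with `ceph auth get` on the admin node (a quick verification step; output omitted, and it requires the admin keyring):

```shell
# Read back the capabilities recorded for each OpenStack client
# created above.
ceph auth get client.cinder-maqi-kilo
ceph auth get client.glance-maqi-kilo
ceph auth get client.cinder-backup-maqi-kilo
```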
Copy the keys to the OpenStack node and fix their ownership. [Note: the keyring filename used in this step is wrong; see Problem 1 for details]
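By default the Ceph client looks for a keyring named /etc/ceph/ceph.client.&lt;id&gt;.keyring matching the client id. Assuming Problem 1 is the mismatch between the id (cinder-maqi-kilo) and the filename (ceph.client.cinder.keyring), a corrected copy would look like the following sketch, modeled on the upstream rbd-openstack guide; the chown target follows that guide and may differ on a devstack all-in-one:

```shell
# Keyring filename must match the client id: ceph.client.<id>.keyring.
ceph auth get-or-create client.cinder-maqi-kilo | \
    ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.client.cinder-maqi-kilo.keyring
# Make the keyring readable by the service user (per the upstream guide).
ssh admin@10.133.16.195 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-maqi-kilo.keyring
```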
[root@controller-1 ~]# ceph auth get-or-create client.cinder-maqi-kilo | ssh admin@10.133.16.195 sudo tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder-maqi-kilo]