OpenStack Cinder - Add more volume nodes & Configure multiple backends

http://giuliofidente.com/2013/04/openstack-cinder-add-more-volume-nodes.html

OpenStack Cinder - Add more volume nodes

This is the first of a short series of articles intended to cover the steps required to configure Cinder (the OpenStack block storage service) in a mid/large deployment scenario. The idea is to discuss at least three topics: how to scale the service by adding more volume nodes; how to ensure high availability for the API and scheduler sub-services; and how to leverage the multi-backend feature that landed in Grizzly.

I'm starting with this post on scaling. Cinder is composed of three main parts: the API server, the scheduler and the volume service. The volume service acts as an abstraction layer between the API and the actual resource providers.

By adding more volume nodes to the environment you increase the total amount of block storage offered to tenants. Each volume node can provide volumes either by allocating them locally or on a remote container such as an NFS or GlusterFS share.

Some assumptions before getting into the practice:

  • you're familiar with the general OpenStack architecture
  • you have at least one Cinder node configured and working as expected

The first thing to do on the candidate node is to install the required packages. I'm running the examples on CentOS and using the RDO repository, which makes this step as simple as:

# yum install openstack-cinder

If you plan to host new volumes on locally available storage, don't forget to create a volume group called cinder-volumes (the name can be changed via the volume_group config parameter); a short sketch of creating it follows just after the tgtd setup below. Also don't forget to configure tgtd to include the config files created dynamically by Cinder. Add a line like the following:

include /etc/cinder/volumes/*

in your /etc/tgt/targets.conf file. Now enable and start the tgtd service:

# chkconfig tgtd on
# service tgtd start
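
If you do plan to serve volumes from local storage, preparing the cinder-volumes volume group is plain LVM work. A minimal sketch, assuming a spare disk at /dev/sdb (the device name is of course an assumption, adjust to your hardware):

# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb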

Of the three init services installed by openstack-cinder, you only need to run openstack-cinder-volume, which is configured in /etc/cinder/cinder.conf. Point it at the existing Cinder database (the one in use by the pre-existing node) and at the existing AMQP broker (again, the one in use by the pre-existing node) by setting the following:

sql_connection=mysql://cinder:${CINDER_DB_PASSWORD}@${CINDER_DB_HOST}/cinder
qpid_hostname=${QPIDD_BROKER}

Set the credentials if needed, and change the rpc_backend setting if you're not using Qpid as your message broker. One more setting is not strictly required to change but is worth checking if you're serving volumes from local storage:

iscsi_ip_address=${TGTD_IP_ADDRESS}

That should match the public IP address of the volume node just installed. The iSCSI targets created locally via tgtadm/tgtd have to be reachable by the Nova nodes; the IP address of each target is stored in the database with every volume created, and the iscsi_ip_address parameter sets the IP address that will be handed to the initiators.
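
A quick way to sanity-check that address is to run an iSCSI discovery against it from one of the Nova nodes (iscsiadm comes from the iscsi-initiator-utils package on CentOS; 3260 is the default port tgtd listens on):

# iscsiadm -m discovery -t sendtargets -p ${TGTD_IP_ADDRESS}:3260

If the node already hosts volumes, the discovery should return one target per volume; what matters here is simply that the portal answers from the Nova side.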

At this point you should be ready to start the volume service:

# service openstack-cinder-volume start

Verify that it started by checking the logs (/var/log/cinder/volume.log) or by issuing the following on any Cinder node:

# cinder-manage host list

You should see all of your volume nodes listed (a sample follows below). From now on you can create new volumes as usual and they will be allocated on any of the volume nodes; keep in mind that the scheduler defaults to the node with the most space available.
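
For instance, with the pre-existing node and the newly added one both registered, the listing should look roughly like this (hostnames and zone below are made up):

host                            zone
cinder-orig.example.com         nova
cinder-new.example.com          nova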



OpenStack Cinder - Configure multiple backends

Following the first post of the series, which discussed how to scale OpenStack Cinder to multiple nodes, with this one I want to cover the configuration and usage of the multi-backend feature that landed in Cinder with the Grizzly release.

This feature allows you to configure a single volume node to use more than one backend driver. The few configuration bits needed are also covered in the OpenStack block storage documentation, which makes this post somewhat redundant, but I wanted to keep up with the series and the topic is well worth keeping here too.

As usual, some assumptions before we start:

  • you're familiar with the general OpenStack architecture
  • you already have a Cinder volume node configured and working as expected

Assuming we want our node configured with an LVM-based backend and an additional NFS-based backend, this is what we would need to add to cinder.conf:

enabled_backends=lvm1,nfs1
[lvm1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[nfs1]
nfs_shares_config=${PATH_TO_YOUR_SHARES_FILE}
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=NFS

The enabled_backends value defines the names (separated by commas) of the config groups. These do not have to match either the driver name or the backend name.
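
As for the file pointed to by nfs_shares_config, it is expected to list one NFS export per line; a minimal example (server address and export path are made up) would be:

192.168.122.100:/srv/cinder/nfs

Each listed share gets mounted by the volume service and new volumes are created as files inside it.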

Once the configuration is complete, to use a particular backend when allocating new volumes you'll have to pass a volume_type parameter to the creation command. Such a type has to be created beforehand and have a backend name assigned to it:

# cinder type-create lvm
# cinder type-key lvm set volume_backend_name=LVM_iSCSI
# cinder type-create nfs
# cinder type-key nfs set volume_backend_name=NFS

Finally, to create your volumes:

# cinder create --volume_type lvm --display_name inlvm 1
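
And likewise for a volume hosted on the NFS backend (the display name is arbitrary):

# cinder create --volume_type nfs --display_name innfs 1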

For people using the REST interface: to set any type-key property, including volume_backend_name, you pass that information along with the request as extra specs. You can list them to make sure the configuration is working as expected:

# cinder extra-specs-list
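
The output should show each type together with its backend name, along these lines (IDs trimmed and exact formatting may differ from client version to client version):

+-----+------+----------------------------------------+
|  ID | Name |               extra_specs              |
+-----+------+----------------------------------------+
| ... | lvm  | {u'volume_backend_name': u'LVM_iSCSI'} |
| ... | nfs  | {u'volume_backend_name': u'NFS'}       |
+-----+------+----------------------------------------+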

Note that you can have backends of the same type (driver) using different names (say, two LVM-based backends allocating volumes in different volume groups), and you can also have backends of the same type using the same name! The scheduler is in charge of picking the correct backend at creation time, so a few notes on the filter scheduler (enabled by default in Grizzly):

  • first it filters the available backends (AvailabilityZoneFilter, CapacityFilter and CapabilitiesFilter are enabled by default; the backend name is matched against the reported capabilities)
  • then it weighs the previously filtered backends (CapacityWeigher is the only weigher enabled by default)

The CapacityWeigher gives the highest score to the backend with the most available space, so new volumes are allocated on the backend that has the most free space among those matching the backend name given in the request.
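
To illustrate the same-name case, a sketch of two LVM backends sharing one backend name but allocating from different volume groups (the volume_group values here are made up) could look like the following; a request for the LVM_iSCSI type would then land on whichever of the two groups has more free space:

enabled_backends=lvm1,lvm2
[lvm1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
[lvm2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI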


