Test multipath feature with OpenStack (by quqi99)

**Author: Zhang Hua, published on: 2017-02-10
Copyright: this article may be reproduced freely, but any reproduction must credit the original source and author with a hyperlink and include this copyright notice
( http://blog.csdn.net/quqi99 )**

Quickly deploying OpenStack with juju

The basic.yaml file in the attachment configures cinder so that all three services, cinder-api, cinder-scheduler and cinder-volume, are installed on a single cinder/0 node (see the sanity check after the deployment commands below).

#bzr branch lp:openstack-charm-testing
juju destroy-environment --force zhhuabj
juju switch zhhuabj && juju bootstrap 
juju-deployer -c ./basic.yaml -d xenial-mitaka
source  ~/openstack-charm-testing/novarc
cd ~/openstack-charm-testing
./configure
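
Once the deployment settles, a quick sanity check (an extra step, not part of the original flow) confirms that all three cinder services are indeed co-located on cinder/0:

juju ssh cinder/0 "ps -ef | grep -E 'cinder-(api|scheduler|volume)' | grep -v grep"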

Hardware configuration of the storage node

1, The volume_group option in /etc/cinder/cinder.conf defaults to cinder-volumes, so the volume group must be created first

dd if=/dev/zero of=/images/cinder-volumes.img bs=1M count=4096 oflag=direct
sgdisk -g --clear /images/cinder-volumes.img
sudo vgcreate cinder-volumes $(sudo losetup --show -f /images/cinder-volumes.img)
#sudo pvcreate /dev/vdd
#sudo vgcreate cinder-volumes /dev/vdd
#sudo lvcreate -L2G -nceph0 cinder-volumes

$ sudo vgs
  VG             #PV #LV #SN Attr   VSize VFree
  cinder-volumes   1   1   0 wz--n- 4.00g 3.00g

2, Multipath requires the storage node to have two NICs, so attach a second NIC to the storage node

source ~/novarc
nova interface-attach <cinder_node_uuid> --net-id=<zhhuabj_admin_net_uuid>

source ~/openstack-charm-testing/novarc
juju ssh cinder/0 sudo ifconfig ens7 up
juju ssh cinder/0 sudo dhclient ens7
$ juju ssh cinder/0 sudo ip addr show ens3 |grep global
    inet 10.5.9.15/16 brd 10.5.255.255 scope global ens3
$ juju ssh cinder/0 sudo ip addr show ens7 |grep global
    inet 10.5.9.27/16 brd 10.5.255.255 scope global ens7
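
The dhclient lease above does not survive a reboot; a minimal sketch for making ens7 persistent, assuming xenial's ifupdown networking and its /etc/network/interfaces.d include path:

juju ssh cinder/0 "printf 'auto ens7\niface ens7 inet dhcp\n' | sudo tee /etc/network/interfaces.d/ens7.cfg"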

Creating a VM

nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova boot --key-name mykey --image trusty --flavor m1.small --nic net-id=$(neutron net-list |grep ' private ' |awk '{print $2}') i1
FLOATING_IP=$(nova floating-ip-create |grep 'ext_net' |awk '{print $4}')
nova add-floating-ip i1 $FLOATING_IP
ssh ubuntu@10.5.150.1 -v

Multipath configuration on the OpenStack side

1, /etc/nova/nova.conf

[libvirt]
iscsi_use_multipath = True

NOTE: In Icehouse, neither tgtadm nor lioadm supports multiple portals, so Icehouse does not support multipath. For example, when using NEC storage that does support multiple portals, if the cinder side uses tgtadm it can only use one portal, so detaching and re-attaching the backend volume leaves the iSCSI sessions in a confused state and causes problems. Moreover, even if Icehouse's tgtadm did support multiple portals, multipath would still be unusable because of another bug in which 'multipath -r' confuses the iSCSI sessions (https://bugs.launchpad.net/os-brick/+bug/1623700).
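
Beyond the nova.conf flag, the libvirt iSCSI multipath code path also needs multipathd running on the compute host; on xenial that comes from the multipath-tools package. A minimal sketch, assuming the charm-default unit name nova-compute/0:

juju ssh nova-compute/0 "sudo apt-get install -y multipath-tools && sudo service multipath-tools status"
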
2, /etc/cinder/cinder.conf

[DEFAULT]
iscsi_ip_address=10.5.9.15
iscsi_secondary_ip_addresses = 10.5.9.27

3, After the configuration is done, restart the cinder-volume service

sudo service cinder-volume restart

Creating a volume

cinder create --display_name test_volume 1

Cinder also supports configuring multiple backends as shown below, though this example does not use it.

[lvmdriver-1]
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvmbackend

[lvmdriver-2]
volume_group = stack-volumes-lvmdriver-2
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = lvmbackend

cinder type-create type-test
cinder type-key type-test set volume_backend_name=lvmbackend
cinder service-list
cinder create --display_name test_volume --volume_type type-test 1
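
To confirm which backend the scheduler actually placed the volume on, check the admin-only host attribute:

cinder show test_volume | grep 'os-vol-host-attr:host'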

Attaching the volume

#nova volume-detach i1 3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
nova volume-attach i1 3aa1d2c9-2eba-4b44-b95d-79d8b11f7246

After the volume is attached to the VM, the configured iscsi_ip_address=10.5.9.15 and iscsi_secondary_ip_addresses=10.5.9.27 are written into the provider_location column of the volumes table.

mysql -ucinder -p6HxgTxcnrh7hpBTzChxfSN8FnLVXdxHh -h10.5.9.19
mysql> select provider_location from volumes where id='3aa1d2c9-2eba-4b44-b95d-79d8b11f7246';
+---------------------------------------------------------------------------------------------------------+
| provider_location                                                                                       |
+---------------------------------------------------------------------------------------------------------+
| 10.5.9.15:3260;10.5.9.27:3260,1 iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 1 |
+---------------------------------------------------------------------------------------------------------+

The connection_info column in nova's block_device_mapping table is updated as well:

mysql -unova -pWhhRjy7C2mqBhm46FZPxpyNY7mWbKLtT -h10.5.9.19
select connection_info from block_device_mapping where instance_uuid='f6319709-5236-441c-b420-d63f3f4b0382'; 

The complete code flow behind this is analyzed below:

1, Create a volume (vol_id) on the cinder side; the cinder option iscsi_ip_address is written into the DB table volumes.

2, Using vol_id to boot a VM.

nova boot --image <image_id> --flavor 2 --key-name mykey --block-device-mapping vda=<vol_id>:<type>:<size>:<delete-on-terminate> <instance_name>

The block_device_mapping data structure looks like:
{
    'block_device_mapping': [{
        'connection_info': {
            u'driver_volume_type': u'iscsi',
            'serial': u'b66e294e-b997-48c1-9208-817be475e95b',
            u'data': {
                u'target_discovered': False,
                u'target_iqn': u'iqn.2010-10.org.openstack:volume-b66e294e-b997-48c1-9208-817be475e95b',
                u'target_portal': u'192.168.82.231:3260',
                u'volume_id': u'b66e294e-b997-48c1-9208-817be475e95b',
                u'target_lun': 1,
                u'auth_password': u'jcYpzNiA4ZQ4dyiC26fB',
                u'auth_username': u'CQZto4sC4HKkx57U4WfX',
                u'auth_method': u'CHAP'
            }
        },
        'mount_device': u'vda',
        'delete_on_termination': False
    }],
    'root_device_name': None,
    'ephemerals': [],
    'swap': None
}

3, nova gets the above connection_info from cinder by invoking initialize_connection(), and it does so only when attaching a volume to a VM.

def attach(self, context, instance, volume_api, virt_driver,
           do_check_attach=True, do_driver_attach=False):
    ...
    connection_info = volume_api.initialize_connection(context, self.volume_id, connector)

NOTE: the connector info comes from os-brick's get_connector_properties; nova passes it to cinder: get_volume_connector -> connector.get_connector_properties (os-brick)
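
The initiator IQN in that connector comes from the local open-iscsi configuration on the compute host, which is why the same IQN shows up in the tgtadm I_T nexus output later (the unit name nova-compute/0 is an assumption):

juju ssh nova-compute/0 sudo cat /etc/iscsi/initiatorname.iscsi
# e.g. InitiatorName=iqn.1993-08.org.debian:01:bc814ff1c89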

4, initialize_connection() in cinder queries the volume from the DB table volumes by volume_id:

def initialize_connection(self, context, volume_id, connector):
    ...
    volume = self.db.volume_get(context, volume_id)
    model_update = None
    try:
        LOG.debug(_("Volume %s: creating export"), volume_id)
        model_update = self.driver.create_export(context.elevated(), volume)
        if model_update:
            volume = self.db.volume_update(context, volume_id, model_update)

def create_export(self, context, volume, volume_path):
    iscsi_name = "%s%s" % (CONF.iscsi_target_prefix, volume['name'])
    iscsi_target, lun = self._get_target_and_lun(context, volume)
    ...
    tid = self.create_iscsi_target(iscsi_name, iscsi_target, 0, volume_path, chap_auth)
    data = {}
    data['location'] = self._iscsi_location(CONF.iscsi_ip_address, tid, iscsi_name, lun)
    return data

5, _get_iscsi_properties implements the transformation between provider_location and target_portal:

def _get_iscsi_properties(self, volume):
    properties = {}
    location = volume['provider_location']
    if location:
        # provider_location is the same format as iSCSI discovery output
        properties['target_discovered'] = False
    else:
        location = self._do_iscsi_discovery(volume)
        properties['target_discovered'] = True
    results = location.split(" ")
    properties['target_portal'] = results[0].split(",")[0]
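
As a worked example of that transformation, here is how the provider_location value stored earlier (the string returned by the mysql query above) splits into the two portals, in plain shell purely for illustration:

LOC='10.5.9.15:3260;10.5.9.27:3260,1 iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 1'
# field 1 = "portal[;portal...],tid", field 2 = target IQN, field 3 = LUN
echo "$LOC" | awk '{print $1}' | cut -d, -f1 | tr ';' '\n'
# 10.5.9.15:3260
# 10.5.9.27:3260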

Some useful output

1, On the cinder side

$ sudo tgtadm --mode target --op show
Target 1: iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 5
            Initiator: iqn.1993-08.org.debian:01:bc814ff1c89 alias: juju-zhhuabj-machine-9
            Connection: 0
                IP Address: 10.5.9.24
        I_T nexus: 6
            Initiator: iqn.1993-08.org.debian:01:bc814ff1c89 alias: juju-zhhuabj-machine-9
            Connection: 0
                IP Address: 10.5.9.24
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1074 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/cinder-volumes/volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
            Backing store flags: 
    Account information:
        e3X9c6oh3tSLB6pyDjpn
    ACL information:
        ALL

What cinder does here is equivalent to what the following manual commands do:

$ sudo apt-get install tgt
$ sudo tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
#Attach a logical unit ( LUN )
$ sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/cinder-volumes/volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246

# cat /etc/tgt/targets.conf 
include /etc/tgt/conf.d/*.conf

# cat /etc/tgt/conf.d/cinder_tgt.conf
include /var/lib/cinder/volumes/*

# cat /var/lib/cinder/volumes/volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 
<target iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246>
    backing-store /dev/cinder-volumes/volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
    driver iscsi
    incominguser e3X9c6oh3tSLB6pyDjpn 6w49MzSf7iPk9g3H

    write-cache on
</target>

$ sudo iscsiadm -m discovery -t sendtargets -p 10.5.1.30
$ sudo iscsiadm -m discovery -t sendtargets -p 10.5.1.31
$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 -p 10.5.1.30 --login
$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 -p 10.5.1.31 --login
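
And to tear those sessions back down, the logout counterparts of the login commands above:

$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 -p 10.5.1.30 --logout
$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246 -p 10.5.1.31 --logout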

2, On the nova side:

$ sudo multipath -ll
360000000000000000e00000000010001 dm-0 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 10:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 9:0:0:1  sda 8:0  active ready running

$ sudo iscsiadm -m node
10.5.9.15:3260,-1 iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
10.5.9.27:3260,-1 iqn.2010-10.org.openstack:volume-3aa1d2c9-2eba-4b44-b95d-79d8b11f7246
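
'iscsiadm -m node' only lists discovered node records; to confirm that both paths actually carry live sessions, also check:

$ sudo iscsiadm -m session
# expect two tcp sessions for this target, one through 10.5.9.15:3260 and one through 10.5.9.27:3260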

Appendix: how to use types, QoS, and multiple backends in cinder

nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova flavor-create myflavor auto 2048 20 1 
openstack server create --wait --image xenial --flavor myflavor --key-name mykey --nic net-id=4f80f7ab-bf2f-4b97-b5cf-dbf31b976485 --min 1 --max 1 i1

cinder type-create type1
cinder qos-create qos1 read_iops_sec=500 write_iops_sec=500
cinder qos-key <qos1-id> set consumer=front-end
#cinder type-key type2 set volume_backend_name=LVM_iSCSI_2  #For multiple storage backends
cinder qos-associate <qos1-id> <type1-id>

cinder create --name volume1 --volume-type type1 1
nova volume-attach i1 <volume1-id> /dev/vdc
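
Because qos1 uses consumer=front-end, the limits are enforced by libvirt on the compute host rather than by the storage backend; a hedged way to verify (the libvirt domain name instance-00000001 and unit name nova-compute/0 are assumptions):

juju ssh nova-compute/0 "sudo virsh dumpxml instance-00000001 | grep -A3 '<iotune>'"
# expect <read_iops_sec>500</read_iops_sec> and <write_iops_sec>500</write_iops_sec>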

cinder type-create type2
cinder qos-create qos2 read_iops_sec=200 write_iops_sec=200
cinder qos-key <qos2-id> set consumer=front-end
cinder qos-associate <qos2-id> <type2-id>

nova volume-detach i1 <volume1-id>
cinder retype volume1 type2
# The above throws the error 'Retype requires migration but is not allowed.' because LVMVolumeDriver only began to support the retype operation in Mitaka - https://bugs.launchpad.net/cinder/+bug/1515840, so switch to the following command according to https://docs.openstack.org/cinder/pike/contributor/migration.html
cinder retype --migration-policy on-demand volume1 type2
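
The on-demand retype kicks off a volume migration; its progress can be watched through the migration status fields of the volume (exact field names vary by release, so the grep below is deliberately loose):

cinder show volume1 | grep -iE 'migstat|migration_status|volume_type'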

Appendix - changing the QoS of an existing volume without retype

In Icehouse, the RBD driver does not implement the retype feature.

# We boot from an RBD image volume, so this is related to this bug - https://review.openstack.org/#/c/143939/
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova boot --key-name mykey --image trusty --flavor m1.small --nic net-id=$(neutron net-list |grep ' private ' |awk '{print $2}')  --block-device-mapping vda=$(cinder --os-volume-api-version 2 list |grep bootvol |awk '{print $2}'):::0 i1
#cinder --os-volume-api-version 2 create --display-name testvol --volume-type type1 1 
#nova volume-attach xenial-034615 $(cinder --os-volume-api-version 2 list |grep 'vol_ceph' |awk '{print $2}')
neutron floatingip-create ext_net
neutron floatingip-associate $(neutron floatingip-list |grep 10.5.150.1 |awk '{print $2}') $(neutron port-list |grep '192.168.21.4' |awk '{print $2}')

# Because Icehouse lacks retype support, we can only define a new qos, qos-disassociate the old qos from the type, and then associate the new qos with the type.
cinder --os-volume-api-version 2 qos-create qos2 consumer="front-end" read_iops_sec=1999 write_iops_sec=999
cinder --os-volume-api-version 2 qos-disassociate $(cinder --os-volume-api-version 2 qos-list |grep qos1 |awk '{print $2}') $(cinder --os-volume-api-version 2 type-list |grep type1 |awk '{print $2}')
cinder --os-volume-api-version 2 qos-associate $(cinder --os-volume-api-version 2 qos-list |grep qos2 |awk '{print $2}') $(cinder --os-volume-api-version 2 type-list |grep type1 |awk '{print $2}')

# The above targets a bootable volume, and in practice it will not succeed: when a volume is attached, its connection details are cached in nova's block_device_mapping table, and that table is only refreshed on creation or on a detach/attach cycle (select connection_info from block_device_mapping where deleted_at is NULL \G). A bootable volume cannot simply be detached and re-attached to refresh block_device_mapping, so without retype the QoS of an existing bootable volume still cannot be updated this way.
# For a non-bootable volume, however, it does work, as follows:
cinder --os-volume-api-version 2 create --display-name testvol --volume-type type1 1
nova volume-attach i1 $(cinder --os-volume-api-version 2 list |grep 'testvol' |awk '{print $2}')
nova volume-detach i1 $(cinder --os-volume-api-version 2 list |grep 'testvol' |awk '{print $2}')