Test multipath feature with OpenStack lioadm (by quqi99)

Copyright notice: this article may be freely reproduced, provided the reproduction clearly links back to the original source and author and retains this notice (http://blog.csdn.net/quqi99)

Problem

I wrote an earlier article about testing the OpenStack multipath feature with tgtadm. tgt is a userspace iSCSI target; LIO is a kernel-space iSCSI target that has been merged into the Linux kernel.
The OpenStack Icehouse release supports only a single target (cinder does not yet have the iscsi_secondary_ip_addresses option for configuring a second one), so tgtadm on Icehouse cannot do multipath. With lioadm, however, a small configuration change makes it work.

Set up the OpenStack environment

A simple OpenStack deployment with cinder is enough; ceph is not required. See my earlier article for a partial installation guide.

Install the LIO target and load target_core_mod on the cinder node

First load the target_core_mod module:

juju ssh cinder/0
sudo apt install linux-image-extra-$(uname -r)  #Avoid the error 'Module target_core_mod not found'
sudo apt build-dep linux-image-$(uname -r)
sudo modprobe target_core_mod
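If in doubt whether the module actually loaded, /proc/modules can be checked. The small helper below is my own sketch (the `module_loaded` name is not from any tool mentioned here); it simply parses that file's text:

```python
def module_loaded(name, proc_modules_text):
    """Return True if the named kernel module appears in /proc/modules text.

    Each /proc/modules line starts with the module name, e.g.:
    'target_core_mod 340809 14 iscsi_target_mod, Live 0x...'
    """
    return any(line.split()[0] == name
               for line in proc_modules_text.splitlines() if line.strip())

# On a real cinder node, read the live file:
# with open('/proc/modules') as f:
#     print(module_loaded('target_core_mod', f.read()))

sample = "target_core_mod 340809 14 iscsi_target_mod, Live 0xffffffffc0000000\n"
print(module_loaded('target_core_mod', sample))  # True
```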

Then install the LIO target on the cinder node:

#Install LIO-target - https://www.thomas-krenn.com/de/wiki/Linux-IO_Target_(LIO)_unter_Ubuntu_14.04
sudo apt install open-iscsi targetcli python-urwid lio-utils python-pyparsing python-prettytable python-rtslib python-configshell
sudo pip install 'rtslib-fb>=2.1.39'

Note that there is a pitfall here. The Icehouse version of /usr/bin/cinder-rtstool requires rtslib-fb>=2.1.39, so we must install it with "sudo pip install 'rtslib-fb>=2.1.39'". However, rtslib-fb is an old, deprecated module, while the targetcli tool uses the newer rtslib module.
If we remove rtslib-fb (sudo pip uninstall -y rtslib-fb) and rely on rtslib instead, targetcli works again, but /usr/bin/cinder-rtstool then fails with 'ImportError: No module named rtslib_fb' when running the following command:

sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool create /dev/cinder-volumes/volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea zAXMzsNKJ4kBvDYCZBec VyhWebHq3GBKE22zYjpX

Fortunately, /usr/bin/cinder-rtstool does not use targetcli, so we can simply keep the rtslib-fb>=2.1.39 module.
Here is targetcli working normally:

root@juju-c0c753-trusty-icehouse-0:~# targetcli
targetcli GIT_VERSION (rtslib GIT_VERSION)
Copyright (c) 2011-2013 by Datera, Inc.
All rights reserved.
/> ls
o- / ............................................................................................................... [...]
  o- backstores .................................................................................................... [...]
  | o- fileio ......................................................................................... [0 Storage Object]
  | o- iblock ......................................................................................... [0 Storage Object]
  | o- pscsi .......................................................................................... [0 Storage Object]
  | o- rd_dr .......................................................................................... [0 Storage Object]
  | o- rd_mcp ......................................................................................... [0 Storage Object]
  o- ib_srpt ................................................................................................. [0 Targets]
  o- iscsi ................................................................................................... [0 Targets]
  o- loopback ................................................................................................ [0 Targets]
  o- qla2xxx ................................................................................................. [0 Targets]
  o- tcm_fc .................................................................................................. [0 Targets]
/>

Further changes on the cinder node

By default, Icehouse's lioadm also supports only one iSCSI target. To support more than one, make the following changes so that two ports (3260 and 3261) on a single cinder node (10.5.0.22) serve as multipath endpoints.

sudo sed -i 's/import rtslib/import rtslib_fb as rtslib/g' /usr/bin/cinder-rtstool
sudo sed -i 's/if target == None:/if not target:/g' /usr/bin/cinder-rtstool
# For this step, replace 10.5.0.22 below with the actual IP of your cinder/0 node.
sudo sed -i "s/rtslib.NetworkPortal(tpg_new, '0.0.0.0', 3260, mode='any')/rtslib.NetworkPortal(tpg_new, '10.5.0.22', 3260, mode='any')\n\t rtslib.NetworkPortal(tpg_new, '10.5.0.22', 3261, mode='any')/g" /usr/bin/cinder-rtstool
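The sed above makes cinder-rtstool call rtslib.NetworkPortal once per port on the same TPG, so the initiator sees two portals for one target. A minimal sketch of that intent (the `portal_specs` helper is hypothetical, for illustration only, not cinder code):

```python
def portal_specs(ip, ports, mode='any'):
    """Mirror the patched code: one NetworkPortal per port, all on the same TPG.
    Each spec corresponds to rtslib.NetworkPortal(tpg_new, ip, port, mode='any')."""
    return [{'ip': ip, 'port': p, 'mode': mode} for p in ports]

# Two portals on one cinder node -> two iSCSI paths for multipath
print(portal_specs('10.5.0.22', [3260, 3261]))
```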

Switch to lioadm through a charm change (it can of course also be done by hand):

http_proxy=http://squid.internal:3128 git clone https://github.com/openstack/charm-cinder.git
cd charm-cinder
sed -i 's/tgtadm/lioadm/g' templates/icehouse/cinder.conf
juju upgrade-charm cinder --path $PWD

Stop the tgt service:

juju ssh cinder/0 sudo service tgt stop

With lioadm, unlike with tgtadm, there is no need to add the following multipath setting to nova.conf on the compute nodes:

[libvirt]
iscsi_use_multipath = True

Use a Windows image

We test with a Windows image:

source ~/novarc && glance image-download --file windows2012R2_virtio.raw --progress 31dd4e9f-ccd3-4c57-b10e-6b5e99366240
source novarc && glance image-create --name windows2012R2 --file /bak/windows2012R2_virtio.raw --visibility public --progress --container-format bare --disk-format raw
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova flavor-create myflavor auto 3200 45 1 
openstack server create --wait --image windows2012R2 --flavor myflavor --key-name mykey --nic net-id=dd269a94-5b76-4e24-8046-4d377fa3be5f --min 1 --max 1 i1
nova floating-ip-create
nova floating-ip-associate i1 10.5.150.2
./tools/sec_groups.sh

The Windows image is large (31 GB), so creation fails because the default glance disk is too small. Remove the glance unit and redeploy it with 'root-disk=90G':

juju remove-unit glance/0
juju remove-application glance
juju deploy cs:~openstack-charmers-next/glance --constraints "mem=1G root-disk=90G" --series trusty
juju add-relation nova-cloud-controller glance
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance "cinder:image-service"
juju add-relation glance rabbitmq-server

For how to reach a Windows VM through a graphical RDP session (rather than the command line) across multiple layers of private networks, see my earlier article.

Create a disk for the VM

Create a disk for the VM:

cinder create --display_name test_volume 1
nova volume-attach i1 1a3e3146-5df7-49ed-8041-de1de257a300

root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m session
tcp: [1] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
tcp: [2] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m node
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e

After logging into the Windows VM, the 'Get-Disk' command in PowerShell shows a new disk.
Logging into the compute node, we can see that libvirt has generated this configuration for the Windows VM:

 <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/mapper/360014050fd353e4dd274f20b1abd70e4'/>
      <target dev='vdb' bus='virtio'/>
      <serial>1035ee80-339e-4e4e-b4c9-6c925cb259ea</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

The multipath information looks like this:

sudo apt install multipath-tools
# multipath -ll
360014050fd353e4dd274f20b1abd70e4 dm-0 LIO-ORG ,IBLOCK          
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdc   8:32   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdd   8:48   active ready  running
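The two `sd*` lines above are the two iSCSI paths behind one dm device. As a quick sanity check, the path lines can be pulled out of `multipath -ll` output programmatically; this parser is my own sketch and only targets the exact output format shown above:

```python
import re

def parse_mpath_paths(multipath_ll_text):
    """Extract (device, state) pairs from `multipath -ll` lines like
    '| `- 4:0:0:0 sdc   8:32   active ready  running'."""
    paths = []
    for line in multipath_ll_text.splitlines():
        # H:C:T:L address, then device name, then major:minor, then state
        m = re.search(r'\d+:\d+:\d+:\d+\s+(\w+)\s+\d+:\d+\s+(\w+)', line)
        if m:
            paths.append((m.group(1), m.group(2)))
    return paths

sample = """\
360014050fd353e4dd274f20b1abd70e4 dm-0 LIO-ORG ,IBLOCK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdc   8:32   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdd   8:48   active ready  running
"""
print(parse_mpath_paths(sample))  # [('sdc', 'active'), ('sdd', 'active')]
```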

root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m session
tcp: [1] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
tcp: [2] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
tcp: [3] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
tcp: [4] 10.5.0.22:3261,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m node
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
10.5.0.22:3261,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
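The session list confirms the new behavior: the volume created after the change (volume-1035ee80…) logs in through both ports 3260 and 3261, while the earlier volumes still have a single session. Grouping sessions by IQN makes this easy to check; the function below is my own sketch against the exact `iscsiadm -m session` format shown above:

```python
from collections import defaultdict

def sessions_per_target(iscsiadm_session_text):
    """Collect portals per IQN from `iscsiadm -m session` lines like
    'tcp: [3] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-...'."""
    portals = defaultdict(list)
    for line in iscsiadm_session_text.splitlines():
        parts = line.split()
        if len(parts) >= 4 and parts[0] == 'tcp:':
            portal, iqn = parts[2], parts[3]
            portals[iqn].append(portal.split(',')[0])  # drop the ',tpgt' suffix
    return dict(portals)

sample = """\
tcp: [3] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
tcp: [4] 10.5.0.22:3261,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
"""
# The multipath volume shows two portals for the same IQN
print(sessions_per_target(sample))
```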

root@juju-c0c753-trusty-icehouse-7:~# ls /dev/mapper/360014050fd353e4dd274f20b1abd70e4
/dev/mapper/360014050fd353e4dd274f20b1abd70e4

Debugging

echo 'show config' | sudo multipathd -k
echo 'show map 360014051a18c4162bbe48939111e6439 topology' | sudo multipathd -k
echo 'switch map 360014051a18c4162bbe48939111e6439 group 2' | sudo multipathd -k
iscsiadm -m session -P 3

sudo dd if=/dev/mapper/360014051a18c4162bbe48939111e6439 of=test  bs=1M count=10240
iostat -m 1 20|grep -E "sda|sdb|Device"

Detach the disk

When detaching the disk, a series of error messages appeared in syslog, but the detach itself worked. After recreating a volume with cinder create and repeating the test many times, the problem never recurred; the cause of the first occurrence is unknown.
Note (2018-03-09):
On detach, if the disk is online inside the VM (e.g. data is being written to its cache):

  1. Nova assumes the detach will succeed and does not check, wait, or retry.
  2. libvirt asks qemu to detach the disk and then immediately closes the backing disk (e.g. the iSCSI mount). This can leave a state where the disk still exists inside the VM, but in error, while the iSCSI mount is gone.
  3. Consequently, qemu does not remove the related configuration from its database either.
  4. When OpenStack then asks libvirt to list all connected disks, the disk is still there, so OpenStack never calls the session-logout code. Since the iSCSI mount is already closed, the next attach of that disk fails. (Even after successfully logging out the session, re-attaching still fails, because the disk remains inside the VM.)
  The workaround is therefore to restart the VM before re-attaching.
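The failure mode in steps 2-4 boils down to iSCSI sessions that no longer back any healthy disk. Purely as an illustration of the bookkeeping involved (the function and its inputs are mine, not nova's actual code), one could flag sessions whose IQN matches no attached volume id:

```python
def stale_sessions(libvirt_disk_serials, session_iqns):
    """An iSCSI session is stale when no attached disk's volume id appears in its IQN.
    In the broken state described above, the disk is still listed by libvirt, so nova
    never reaches the logout path -- which is why restarting the VM is the workaround."""
    return [iqn for iqn in session_iqns
            if not any(serial in iqn for serial in libvirt_disk_serials)]

disks = ['1a3e3146-5df7-49ed-8041-de1de257a300']
sessions = ['iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300',
            'iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea']
print(stale_sessions(disks, sessions))
# ['iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea']
```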
nova volume-detach i1 1035ee80-339e-4e4e-b4c9-6c925cb259ea
2017-11-29 07:30:13.641 25277 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'186c37006bd94287ae768e1f80676584', 'tenant': u'becbf8797c954e2492d62a42a43a4324', 'user_identity': u'186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -'}
2017-11-29 07:30:13.867 25277 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'186c37006bd94287ae768e1f80676584', 'tenant': u'becbf8797c954e2492d62a42a43a4324', 'user_identity': u'186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -'}
2017-11-29 07:30:13.876 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Got semaphore "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:191
2017-11-29 07:30:13.877 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Attempting to grab file lock "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:202
2017-11-29 07:30:13.878 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Got file lock "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" at /var/lock/cinder/cinder-1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:232
2017-11-29 07:30:14.245 25277 DEBUG cinder.volume.manager [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] volume 1035ee80-339e-4e4e-b4c9-6c925cb259ea: removing export detach_volume /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:687
2017-11-29 07:30:14.263 25277 INFO cinder.brick.iscsi.iscsi [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Removing iscsi_target: 1035ee80-339e-4e4e-b4c9-6c925cb259ea
2017-11-29 07:30:14.264 25277 DEBUG cinder.openstack.common.processutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:147
2017-11-29 07:30:14.738 25277 DEBUG cinder.openstack.common.processutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Result was 0 execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:171
2017-11-29 07:30:14.754 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Released file lock "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" at /var/lock/cinder/cinder-1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:239
2017-11-29 07:31:02.297 25277 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:178
2017-11-29 07:31:02.300 25277 DEBUG cinder.manager [-] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/dist-packages/cinder/manager.py:128
2017-11-29 07:31:02.321 25277 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:178
2017-11-29 07:31:02.323 25277 INFO cinder.volume.manager [-] Updating volume status
2017-11-29 07:31:02.323 25277 DEBUG cinder.volume.drivers.lvm [-] Updating volume stats _update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py:346
2017-11-29 07:31:02.325 25277 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix cinder-volumes execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:147
2017-11-29 07:31:02.469 25277 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:171




Nov 29 07:29:57 juju-c0c753-trusty-icehouse-7 kernel: [86241.589558]  connection1:0: detected conn error (1020)

Nov 29 07:29:57 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:29:57 juju-c0c753-trusty-icehouse-7 kernel: [86242.360928]  connection2:0: detected conn error (1020)
Nov 29 07:29:58 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:00 juju-c0c753-trusty-icehouse-7 kernel: [86244.630641]  connection1:0: detected conn error (1020)
Nov 29 07:30:00 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:00 juju-c0c753-trusty-icehouse-7 kernel: [86245.399222]  connection2:0: detected conn error (1020)
Nov 29 07:30:01 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:03 juju-c0c753-trusty-icehouse-7 kernel: [86247.672850]  connection1:0: detected conn error (1020)
Nov 29 07:30:03 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:03 juju-c0c753-trusty-icehouse-7 kernel: [86248.442433]  connection2:0: detected conn error (1020)
Nov 29 07:30:04 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:06 juju-c0c753-trusty-icehouse-7 kernel: [86250.702435]  connection1:0: detected conn error (1020)
Nov 29 07:30:06 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:06 juju-c0c753-trusty-icehouse-7 kernel: [86251.461198]  connection2:0: detected conn error (1020)
Nov 29 07:30:07 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:09 juju-c0c753-trusty-icehouse-7 kernel: [86253.725045]  connection1:0: detected conn error (1020)
Nov 29 07:30:09 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 kernel: [86254.494474]  connection2:0: detected conn error (1020)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 kernel: [86254.659090] type=1400 audit(1511940610.186:17): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-6e468b3b-6cb4-4d5d-a9d6-6ba32b4bd8cb" pid=16120 comm="apparmor_parser"
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: add map (uevent)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: devmap already registered
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: remove map (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: 360014050fd353e4dd274f20b1abd70e4: devmap removed
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: 360014050fd353e4dd274f20b1abd70e4: stop event checker thread (140170797860608)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: remove map (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: sdc: remove path (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 kernel: [86255.657776] sd 4:0:0:0: [sdc] Synchronizing SCSI cache
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: sdd: remove path (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 kernel: [86256.365042] sd 5:0:0:0: [sdd] Synchronizing SCSI cache
Nov 29 07:30:12 juju-c0c753-trusty-icehouse-7 iscsid: Connection3:0 to [target: iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea, portal: 10.5.0.22,3260] through [iface: default] is shutdown.
Nov 29 07:30:12 juju-c0c753-trusty-icehouse-7 iscsid: Connection4:0 to [target: iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea, portal: 10.5.0.22,3261] through [iface: default] is shutdown.
Nov 29 07:30:12 juju-c0c753-trusty-icehouse-7 kernel: [86257.129920]  connection1:0: detected conn error (1020)
Nov 29 07:30:13 juju-c0c753-trusty-icehouse-7 kernel: [86257.893417]  connection2:0: detected conn error (1020)
Nov 29 07:30:13 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:14 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:16 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)
Nov 29 07:30:16 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)
Nov 29 07:30:19 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)
Nov 29 07:30:20 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)