Mounting iSCSI Disks on a Linux Server
Install the iSCSI initiator
yum -y install iscsi-initiator-utils
Discover the iSCSI storage
On the test Linux server, run the command below to discover the iSCSI targets on the Openfiler server. The output shows that the iSCSI client correctly recognizes the shared storage. We configured three volumes on the Openfiler server:
iscsi_lvm_disk1
iscsi_lvm_disk2
iscsi_lvm_disk3
Six entries are listed here instead of three; this is caused by multipathing (each target is reachable through two portals).
# iscsiadm -m discovery -t sendtargets -p 192.168.10.146:3260
192.168.10.146:3260,1 iqn.2006-01.com.openfiler:iscsi_lvm_disk3
172.7.24.146:3260,1 iqn.2006-01.com.openfiler:iscsi_lvm_disk3
192.168.10.146:3260,1 iqn.2006-01.com.openfiler:iscsi_lvm_disk2
172.7.24.146:3260,1 iqn.2006-01.com.openfiler:iscsi_lvm_disk2
192.168.10.146:3260,1 iqn.2006-01.com.openfiler:iscsi_lvm_disk1
172.7.24.146:3260,1 iqn.2006-01.com.openfiler:iscsi_lvm_disk1
Attach the iSCSI disks
Based on the discovery results above, run the command below to log in to the first shared disk, then repeat the same steps to attach the second and third shared disks as well. After a successful login, verify with lsblk or fdisk -l | grep Disk.
# iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lvm_disk1 -p 172.7.24.146 -l
or, using the long form of the option (-l is shorthand for --login):
# iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lvm_disk1 -p 172.7.24.146 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:iscsi_lvm_disk1, portal: 172.7.24.146,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:iscsi_lvm_disk1, portal: 172.7.24.146,3260] successful.
# iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lvm_disk1 -p 192.168.10.146 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:iscsi_lvm_disk1, portal: 192.168.10.146,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:iscsi_lvm_disk1, portal: 192.168.10.146,3260] successful.
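Rather than repeating the login command by hand for each disk and portal, the six logins can be generated with a small shell loop. This is a sketch based on the three targets and two portals of this setup; it only prints the iscsiadm commands, so drop the leading echo to actually execute them:

```shell
#!/bin/sh
# Print the iscsiadm login command for every target/portal pair in this setup.
# Remove the leading "echo" to actually run the logins.
for disk in iscsi_lvm_disk1 iscsi_lvm_disk2 iscsi_lvm_disk3; do
  for portal in 172.7.24.146 192.168.10.146; do
    echo iscsiadm -m node -T "iqn.2006-01.com.openfiler:${disk}" -p "${portal}" --login
  done
done
```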
Detach the iSCSI disks
# iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lvm_disk1 -p 172.7.24.146 --logout
Path aggregation with multipath
Install multipath
yum -y install device-mapper device-mapper-multipath
Initialize multipath
/etc/multipath.conf is not created by default. Run the following command to initialize multipath and create /etc/multipath.conf:
mpathconf --enable
The newly created multipath.conf contains the following default configuration:
defaults {
user_friendly_names yes
find_multipaths yes
}
blacklist {
}
Enable start on boot and start the multipathd service
systemctl start multipathd
systemctl enable multipathd
Once multipath has been initialized and the multipathd service is running, multipathd aggregates paths according to the initial /etc/multipath.conf. The default policy is failover (active/standby); several other policies exist, such as multibus (load balancing), which are not explored further here. Run the command below to inspect the result: there are three disks, each with two paths, one ACTIVE and one ENABLED, which shows that multipath is operating in active/standby mode.
# multipath -ll
mpathc (14f504e46494c455234546b524d412d446473642d4a45796c) dm-4 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 37:0:0:0 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 38:0:0:0 sdg 8:96 active ready running
mpathb (14f504e46494c455251546e5266382d5052624d2d6a386e6e) dm-3 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 34:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 35:0:0:0 sdd 8:48 active ready running
mpatha (14f504e46494c4552446e525879682d4667324c2d6d633178) dm-2 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 33:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 36:0:0:0 sde 8:64 active ready running
Edit /etc/multipath.conf and add path_grouping_policy multibus inside the defaults {} section:
defaults {
user_friendly_names yes
find_multipaths yes
path_grouping_policy multibus
}
Then restart the service (systemctl restart multipathd) and check again: both paths of every disk are now ACTIVE, which shows that multipath is operating in load-balancing mode; I/O is scheduled across the two paths in round-robin fashion.
# multipath -ll
mpathc (14f504e46494c455234546b524d412d446473642d4a45796c) dm-4 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 37:0:0:0 sdf 8:80 active ready running
`- 38:0:0:0 sdg 8:96 active ready running
mpathb (14f504e46494c455251546e5266382d5052624d2d6a386e6e) dm-3 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 34:0:0:0 sdc 8:32 active ready running
`- 35:0:0:0 sdd 8:48 active ready running
mpatha (14f504e46494c4552446e525879682d4667324c2d6d633178) dm-2 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 33:0:0:0 sdb 8:16 active ready running
`- 36:0:0:0 sde 8:64 active ready running
Path aggregation is now complete. multipath offers some further configuration, covered briefly below.
Configuring multipath (optional)
It is recommended to set user_friendly_names to no. With no, the system uses the WWID as the alias of the multipath device; with user_friendly_names yes, names of the form mpathX are used instead. The WWID is more reliably unique. We also bind each WWID to a custom alias, which makes the devices easier to manage. Update /etc/multipath.conf as follows:
defaults {
user_friendly_names no
find_multipaths yes
path_grouping_policy multibus
}
# First put all devices on the blacklist, so that nothing is aggregated by default
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^(hd|xvd|vd)[a-z]*"
wwid "*"
}
# Then list the devices that should be aggregated in blacklist_exceptions
blacklist_exceptions {
wwid "14f504e46494c4552446e525879682d4667324c2d6d633178"
wwid "14f504e46494c455251546e5266382d5052624d2d6a386e6e"
wwid "14f504e46494c455234546b524d412d446473642d4a45796c"
}
multipaths {
multipath {
wwid "14f504e46494c4552446e525879682d4667324c2d6d633178"
alias openfiler-disk1
}
multipath {
wwid "14f504e46494c455251546e5266382d5052624d2d6a386e6e"
alias openfiler-disk2
}
multipath {
wwid "14f504e46494c455234546b524d412d446473642d4a45796c"
alias openfiler-disk3
}
}
The result:
# multipath -ll
openfiler-disk3 (14f504e46494c455234546b524d412d446473642d4a45796c) dm-4 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 49:0:0:0 sdf 8:80 active ready running
`- 50:0:0:0 sdg 8:96 active ready running
openfiler-disk2 (14f504e46494c45524359594d6d562d7035786f2d73744143) dm-3 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 47:0:0:0 sdd 8:48 active ready running
`- 48:0:0:0 sde 8:64 active ready running
openfiler-disk1 (14f504e46494c4552764b6f4275352d376445392d43363066) dm-2 OPNFILER,VIRTUAL-DISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 45:0:0:0 sdb 8:16 active ready running
`- 46:0:0:0 sdc 8:32 active ready running
Mount the multipath devices
Run fdisk to partition openfiler-disk1, openfiler-disk2, and openfiler-disk3 in turn, format each new partition with XFS (the default file system on CentOS), and add the mounts to /etc/fstab.
Problem: when writing the partition table, fdisk may warn: WARNING: Re-reading the partition table failed with error 22: Invalid argument.
Fix: run partprobe.
fdisk /dev/mapper/openfiler-disk3
mkfs.xfs /dev/mapper/openfiler-disk3p1
# vi /etc/fstab
/dev/mapper/openfiler-disk3p1 /data xfs defaults 0 0
# mount -a
# df -h
/dev/mapper/openfiler-disk3p1 20G 33M 20G 1% /data
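fdisk is interactive; the same partition/format/mount flow can also be scripted. The sketch below is an illustration using parted rather than the fdisk session above, against the openfiler-disk3 alias assumed from the earlier configuration; with RUN=echo it only prints the commands, so clear RUN only after reviewing them:

```shell
#!/bin/sh
set -e
DEV=/dev/mapper/openfiler-disk3   # multipath alias configured above (assumed)
RUN=echo                          # dry run: print commands; set RUN="" to execute

$RUN parted -s "$DEV" mklabel gpt mkpart primary xfs 0% 100%
$RUN partprobe "$DEV"             # re-read the partition table (avoids error 22)
$RUN mkfs.xfs "${DEV}p1"
$RUN mkdir -p /data
$RUN mount "${DEV}p1" /data
```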
Detach the disk array
Unmount the multipath device
# umount /dev/mapper/openfiler-disk3p1
Flush the aggregated multipath devices
# multipath -F
# systemctl stop multipathd
Detach the iSCSI devices
# iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lvm_disk3 -p 172.7.24.146 --logout
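The detach steps above can be collected into one script. This is a dry-run sketch for this article's disk3 example on both portals; it echoes the commands so nothing is changed until RUN is cleared:

```shell
#!/bin/sh
# Tear-down order matters: unmount first, then flush multipath, then log out.
RUN=echo   # dry run: print commands; set RUN="" to execute
$RUN umount /data
$RUN multipath -F
$RUN systemctl stop multipathd
for portal in 172.7.24.146 192.168.10.146; do
  $RUN iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lvm_disk3 -p "$portal" --logout
done
```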
Mounting a volume with CHAP authentication
# A direct login attempt fails
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 172.7.24.146 --login
……
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
# Step 1: discover the volume
iscsiadm --mode discovery --type sendtargets --portal 192.168.10.146
or
iscsiadm -m discovery -t sendtargets -p 192.168.10.146:3260
# Step 2: for nodes that require authentication, run the following before logging in
First enable authentication:
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 172.7.24.146 -o update --name node.session.auth.authmethod --value=CHAP
Then set the username:
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 172.7.24.146 -o update --name node.session.auth.username --value=liuyuanlin
Then set the password:
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 172.7.24.146 -o update --name node.session.auth.password --value=lyl-password
With multipathing, repeat the steps above for the 192.168.10.146 portal as well. Note: besides the command-line approach, CHAP defaults can also be set in /etc/iscsi/iscsid.conf, where they apply to newly discovered nodes; the iscsiadm -o update commands above modify the stored per-node records.
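For reference, the same CHAP settings expressed in /etc/iscsi/iscsid.conf look like this; the values shown are the example credentials used in this article:

```
node.session.auth.authmethod = CHAP
node.session.auth.username = liuyuanlin
node.session.auth.password = lyl-password
```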
# Step 3: test logging in to the device
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 172.7.24.146 --login
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 192.168.10.146 --login
# Step 4: disconnect (log out) from the target
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 172.7.24.146 --logout
iscsiadm -m node -T iqn.2006-01.com.openfiler:iscsi_lun1_disk -p 192.168.10.146 --logout
# List the iSCSI disks attached to this host
lsscsi -i
[52:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdh -
[52:0:0:1] disk OPNFILER VIRTUAL-DISK 0 /dev/sdi -
[52:0:0:2] disk OPNFILER VIRTUAL-DISK 0 /dev/sdj -
[55:0:0:0] disk OPNFILER VIRTUAL-DISK 0 /dev/sdk 14f504e46494c4552664a316554442d677754492d4f416464
[55:0:0:1] disk OPNFILER VIRTUAL-DISK 0 /dev/sdl 14f504e46494c4552356359576b672d667952672d6c5a384b
[55:0:0:2] disk OPNFILER VIRTUAL-DISK 0 /dev/sdm 14f504e46494c455252666f574e652d30634e722d57555755