Multipath Experiment: Building an iSCSI Simulation Environment

I need to configure multipath at work but have never done it myself, so I built a lab environment to walk through the process first; that way the real operation will feel much more solid. In the spirit of learning, this multipath experiment is done in a virtual environment.
This article describes using iSCSI to simulate a single LUN carved out of storage and connected to the host over two links, where it appears as two disks. This provides the prerequisite environment for the multipath configuration that follows.

•1. Simulate the host environment
•2. Prepare the software environment
•3. Add a disk on the simulated storage
•4. Configure the iSCSI server (target)
•5. Configure the iSCSI client (initiator)

1. Simulate the host environment

First, create two virtual Linux servers, one for the iSCSI server (target) and one for the client (initiator). Each server has three NICs: one on the external network and two on internal networks. The two internal NICs simulate the two links. The layout is as follows:

Server:
External: 10.0.0.205
Internal 1: 172.16.1.5
Internal 2: 172.16.2.5

Client:
External: 10.0.0.206
Internal 1: 172.16.1.6
Internal 2: 172.16.2.6

The internal networks are simulated with the hypervisor's LAN segment feature; once everything is set up, the two hosts can ping each other over both internal networks.

2. Prepare the software environment

scsi-target-utils: lets a Linux system act as an iSCSI target;
iscsi-initiator-utils: attaches the disks exported by a target to the local Linux host.

Note that scsi-target-utils has to come from the EPEL repository; I used Alibaba Cloud's EPEL mirror. After that it is a plain yum install of each package name, as sketched below.
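A sketch of the installation, assuming the EPEL repository is already configured and the packages go to the hosts matching their roles:

# on the server (target) VM
yum install -y scsi-target-utils
# on the client (initiator) VM
yum install -y iscsi-initiator-utils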

3. Add a disk on the simulated storage

On the server VM I added a 5 GB disk to simulate adding a new physical disk on the storage side. The new disk shows up as /dev/sdb. I put it under LVM and carved out a 2 GB LV from it for later use. The LVM details are outside the scope of this article (a minimal sketch follows); refer to LVM documentation for the specifics.
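A minimal sketch of that carve-out, assuming the VG/LV names that appear later in the tgtd output (vg_storage / lv_lun1):

pvcreate /dev/sdb                      # initialize the new disk as an LVM physical volume
vgcreate vg_storage /dev/sdb           # create a volume group on it
lvcreate -n lv_lun1 -L 2G vg_storage   # carve out the 2G LV that will back the LUN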

4. Configure the iSCSI server (target)

The main configuration file on the iSCSI server is /etc/tgt/targets.conf.

iSCSI has its own naming convention for the targets it exports. Target names shared over iSCSI all begin with iqn, which stands for "iSCSI Qualified Name". What follows the iqn prefix generally looks like this:
iqn.yyyy-mm.<reversed domain name>:<target identifier>
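For example, the target name used later in this article follows that pattern:

iqn.2017-07.com.cnblogs.jyzhao:alfreddisk
# 2017-07            = year and month
# com.cnblogs.jyzhao = reversed domain name
# alfreddisk         = identifier for this particular target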

Edit the file as follows:
vi /etc/tgt/targets.conf

The configuration block itself did not render in the original post and was only captured as a screenshot; a reconstruction is sketched below.
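A reconstruction of what that screenshot most likely contained, based on the target name and backing-store path that appear in the output further down (the exact options are an assumption; omitting initiator-address matches the "ACL information: ALL" shown later):

<target iqn.2017-07.com.cnblogs.jyzhao:alfreddisk>
    backing-store /dev/vg_storage/lv_lun1
</target>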

The following string is the target (IQN) name; it is what the client will discover later: iqn.2017-07.com.cnblogs.jyzhao:alfreddisk

Once the configuration is done, start the service and enable it at boot:
[root@centos7 ~]# systemctl start tgtd
[root@centos7 ~]# ps -ef |grep tgtd
root 1597 1 0 23:38 ? 00:00:00 /usr/sbin/tgtd -f
root 1657 1490 0 23:39 pts/0 00:00:00 grep --color=auto tgtd

[root@centos7 ~]# systemctl status tgtd
● tgtd.service - tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2020-07-24 23:38:45 CST; 1min 7s ago
Process: 1629 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
Process: 1601 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 1600 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 1598 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Main PID: 1597 (tgtd)
CGroup: /system.slice/tgtd.service
└─1597 /usr/sbin/tgtd -f

Jul 24 23:38:40 centos7 systemd[1]: Starting tgtd iSCSI target daemon…
Jul 24 23:38:40 centos7 tgtd[1597]: tgtd: iser_ib_init(3436) Failed to initialize RDMA; load kernel modules?
Jul 24 23:38:40 centos7 tgtd[1597]: tgtd: work_timer_start(146) use timer_fd based scheduler
Jul 24 23:38:40 centos7 tgtd[1597]: tgtd: bs_init_signalfd(267) could not open backing-store module directory /usr/lib64/tgt/backing-store
Jul 24 23:38:40 centos7 tgtd[1597]: tgtd: bs_init(386) use signalfd notification
Jul 24 23:38:45 centos7 tgtd[1597]: tgtd: device_mgmt(246) sz:29 params:path=/dev/vg_storage/lv_lun1
Jul 24 23:38:45 centos7 tgtd[1597]: tgtd: bs_thread_open(408) 16
Jul 24 23:38:45 centos7 systemd[1]: Started tgtd iSCSI target daemon.

[root@centos7 ~]# systemctl enable tgtd

Then check the related information, such as the listening port and the LUN details:
[root@centos7 ~]# netstat -tlunp |grep tgt
tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 1597/tgtd
tcp6 0 0 :::3260 :::* LISTEN 1597/tgtd

tgt-admin --show
Target 1: iqn.2017-07.com.cnblogs.jyzhao:alfreddisk
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 2147 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/vg_storage/lv_lun1
Backing store flags:
Account information:
ACL information:
ALL

LUN 1 is the LUN we just configured, so the target side is working.

5. Configure the iSCSI client (initiator)

Start the iSCSI initiator service:
systemctl start iscsid
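The original output only shows the start; to also have the initiator start at boot (assumed here, mirroring what was done for tgtd):

systemctl enable iscsid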

[root@centos7 network-scripts]# systemctl status iscsid
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2020-07-24 23:48:39 CST; 5s ago
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)
Main PID: 2381 (iscsid)
Status: "Ready to process requests"
CGroup: /system.slice/iscsid.service
└─2381 /sbin/iscsid -f

Jul 24 23:48:39 centos7 systemd[1]: Starting Open-iSCSI…
Jul 24 23:48:39 centos7 systemd[1]: Started Open-iSCSI.

Also confirm that the client can ping both of the server's internal addresses before running discovery.
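For example, from the client (addresses from section 1):

ping -c 3 172.16.1.5
ping -c 3 172.16.2.5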

Then run discovery against each path:
[root@centos7 network-scripts]# iscsiadm -m discovery -t sendtargets -p 172.16.1.5
172.16.1.5:3260,1 iqn.2017-07.com.cnblogs.jyzhao:alfreddisk
[root@centos7 network-scripts]#
[root@centos7 network-scripts]# iscsiadm -m discovery -t sendtargets -p 172.16.2.5
172.16.2.5:3260,1 iqn.2017-07.com.cnblogs.jyzhao:alfreddisk

Check the files under /var/lib/iscsi/nodes/:
[root@centos7 network-scripts]# ll -R /var/lib/iscsi/nodes/
/var/lib/iscsi/nodes/:
total 0
drw------- 4 root root 56 Jul 24 23:51 iqn.2017-07.com.cnblogs.jyzhao:alfreddisk

/var/lib/iscsi/nodes/iqn.2017-07.com.cnblogs.jyzhao:alfreddisk:
total 0
drw------- 2 root root 21 Jul 24 23:50 172.16.1.5,3260,1
drw------- 2 root root 21 Jul 24 23:51 172.16.2.5,3260,1

/var/lib/iscsi/nodes/iqn.2017-07.com.cnblogs.jyzhao:alfreddisk/172.16.1.5,3260,1:
total 4
-rw------- 1 root root 2072 Jul 24 23:50 default

/var/lib/iscsi/nodes/iqn.2017-07.com.cnblogs.jyzhao:alfreddisk/172.16.2.5,3260,1:
total 4
-rw------- 1 root root 2072 Jul 24 23:51 default

[root@centos7 network-scripts]# iscsiadm -m node
172.16.1.5:3260,1 iqn.2017-07.com.cnblogs.jyzhao:alfreddisk
172.16.2.5:3260,1 iqn.2017-07.com.cnblogs.jyzhao:alfreddisk

Log in to the target:
[root@centos7 network-scripts]# iscsiadm -m node -T iqn.2017-07.com.cnblogs.jyzhao:alfreddisk --login
Logging in to [iface: default, target: iqn.2017-07.com.cnblogs.jyzhao:alfreddisk, portal: 172.16.1.5,3260] (multiple)
Logging in to [iface: default, target: iqn.2017-07.com.cnblogs.jyzhao:alfreddisk, portal: 172.16.2.5,3260] (multiple)
Login to [iface: default, target: iqn.2017-07.com.cnblogs.jyzhao:alfreddisk, portal: 172.16.1.5,3260] successful.
Login to [iface: default, target: iqn.2017-07.com.cnblogs.jyzhao:alfreddisk, portal: 172.16.2.5,3260] successful.
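Optionally, confirm that two sessions are now established, one per portal (a standard iscsiadm query; the expected result is described rather than captured from this environment):

iscsiadm -m session
# should list two sessions for iqn.2017-07.com.cnblogs.jyzhao:alfreddisk,
# one via 172.16.1.5:3260 and one via 172.16.2.5:3260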

Commonly used iscsiadm commands:
1: # iscsiadm -m discovery -t sendtargets -p 192.168.1.1:3260
This discovers a target: 192.168.1.1:3260,1 iqn.1997-05.com.test:raid. A discovered target is also called a node.

2: Log in to a node, using the target discovered above as an example:

iscsiadm -m node -T iqn.1997-05.com.test:raid -p 192.168.1.1:3260 -l
where iqn.1997-05.com.test:raid is the target name.

3: How do you log out of a connection to a target?
To log out of a specific target, use:
iscsiadm -m node -T iqn.2007-04.acme.com:h3c:200realm.rhel5 -p 200.200.10.200:3260 -u

4: How do you delete a target's record from the operating system?
iscsiadm -m node -o delete -T iqn.2005-03.com.max -p 192.168.0.4:3260
where iqn.2005-03.com.max is the target name and 192.168.0.4 is the target's IP address.

5: How do you list the targets recorded in the Open-iSCSI database?
Use iscsiadm -m node.

Finally, check fdisk -l (earlier, unrelated output omitted).
These two disks are the single LUN mapped from the server over the two paths; both are recognized successfully:
Disk /dev/sdd: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
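As an optional sanity check before configuring multipath, both devices should report the same SCSI WWID, confirming they are two paths to the same LUN (the scsi_id path and options below assume CentOS 7; output not captured here):

/usr/lib/udev/scsi_id --whitelisted --device=/dev/sdd
/usr/lib/udev/scsi_id --whitelisted --device=/dev/sde
# both should print the same ID, matching the WWID later shown by multipath -ll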

Multipath configuration

On an ordinary PC, a disk hangs off a single bus: a one-to-one relationship. In a Fibre Channel SAN, or an IP SAN built on iSCSI, the host and the storage connect through FC switches or through multiple NICs and IP networks, which creates a many-to-many relationship: there are several paths from the host to the storage, and I/O can travel over any of them. That raises some questions. If several paths are used at once, how should the I/O traffic be distributed across them? If one path fails, what happens? And from the operating system's point of view each path appears as a separate physical disk, even though they all lead to the same physical disk, which is confusing for the user. Multipath software exists to solve exactly these problems.
Working together with the storage device, multipath mainly provides:
failover and failback
I/O load balancing
disk virtualization

1. Install the package:

yum install device-mapper-multipath

2. Enable the service at boot and start it:

systemctl enable multipathd
systemctl start multipathd

3. Generate the multipath configuration file:

/sbin/mpathconf --enable
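On CentOS 7, mpathconf --enable typically writes a minimal /etc/multipath.conf along these lines (contents assumed; check the file generated on your own system):

defaults {
    user_friendly_names yes
    find_multipaths yes
}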

Confirm the service status:
[root@centos7 network-scripts]# systemctl status multipathd
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2020-07-25 00:05:53 CST; 4s ago
Process: 2647 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
Process: 2645 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
Process: 2643 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
Main PID: 2555 (multipathd)
CGroup: /system.slice/multipathd.service
‣ 2555 multipathd

Jul 25 00:05:53 centos7 systemd[1]: Starting Device-Mapper Multipath Device Controller…
Jul 25 00:05:53 centos7 systemd[1]: Started Device-Mapper Multipath Device Controller.

4. Common multipath commands

Common commands:
-- generate the multipath configuration file
/sbin/mpathconf --enable
-- show the multipath topology
multipath -ll
-- rescan and rebuild the multipath maps
multipath -v2
-- flush all unused multipath maps
multipath -F

A record of actually running the commands above.
Check the multipath layout:
[root@centos7 network-scripts]# multipath -ll
mpatha (360000000000000000e00000000010001) dm-2 IET ,VIRTUAL-DISK
size=2.0G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 3:0:0:1 sdd 8:48 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 4:0:0:1 sde 8:64 active ready running

With the paths confirmed working, running lsblk now shows the multipath device mpatha:
[root@centos7 network-scripts]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 1G 0 disk
sdc 8:32 0 1G 0 disk
sdd 8:48 0 2G 0 disk
└─mpatha 253:2 0 2G 0 mpath
sde 8:64 0 2G 0 disk
└─mpatha 253:2 0 2G 0 mpath
sr0 11:0 1 4.2G 0 rom

Now mpatha can be treated like any other disk. The following is what I did, for reference only.
cd /dev/mapper/
[root@centos7 mapper]# ll
total 0
lrwxrwxrwx 1 root root 7 Jul 24 23:18 centos-root -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 24 23:18 centos-swap -> ../dm-1
crw------- 1 root root 10, 236 Jul 24 23:18 control
lrwxrwxrwx 1 root root 7 Jul 25 00:03 mpatha -> ../dm-2

[root@centos7 mapper]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c8d08

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 41943039 19921920 8e Linux LVM

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/mpatha: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Create LVM volumes on top of it:
[root@centos7 mapper]# pvcreate /dev/mapper/mpatha
Physical volume "/dev/mapper/mpatha" successfully created.
[root@centos7 mapper]# vgcreate vg01 /dev/mapper/mpatha
Volume group "vg01" successfully created
[root@centos7 mapper]# lvcreate -n lv01 -l 100%VG vg01
Logical volume "lv01" created.

[root@centos7 mapper]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- <17.00g
swap centos -wi-ao---- 2.00g
lv01 vg01 -wi-a----- <2.00g

[root@centos7 mapper]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg01/lv01
  LV Name                lv01
  VG Name                vg01
  LV UUID                WEUv92-jV8c-hxX7-xnTw-MdvQ-19m3-YDqoip
  LV Write Access        read/write
  LV Creation host, time centos7, 2020-07-25 00:39:15 +0800
  LV Status              available
  # open                 0
  LV Size                <2.00 GiB
  Current LE             511
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

  --- Logical volume ---
  LV Path                /dev/centos/swap
  LV Name                swap
  VG Name                centos
  LV UUID                k4EpRI-Tojt-EK7k-k5rA-lwT0-jFYj-H4NIEc
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2020-06-22 19:51:10 +0800
  LV Status              available
  # open                 2
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos
  LV UUID                V5Na7e-PmMv-37YH-teg8-t5Sv-PPk7-DCA2bX
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2020-06-22 19:51:10 +0800
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

Format the filesystem:
[root@centos7 mapper]# mkfs.ext4 /dev/vg01/lv01
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
130816 inodes, 523264 blocks
26163 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Mount it (the mount point /date must already exist):
mount /dev/vg01/lv01 /date/
[root@centos7 mapper]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 1.6G 16G 10% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 8.7M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sda1 1014M 125M 890M 13% /boot
tmpfs 394M 0 394M 0% /run/user/0
/dev/mapper/vg01-lv01 2.0G 6.0M 1.9G 1% /date

To mount it automatically at boot, add an entry to /etc/fstab, as sketched below.
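A sketch of the /etc/fstab entry, assuming the /date mount point used above; _netdev makes the mount wait for the network and iSCSI services, which matters for iSCSI-backed storage:

/dev/mapper/vg01-lv01   /date   ext4   defaults,_netdev   0 0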

Done.
