DS8K Storage Allocation to Hosts: Implementation Plan

Environment: IBM DS8K + IBM SAN384B (2499-192) switches + Red Hat Enterprise Linux 6.1 + device-mapper-multipath

The work falls into three major steps: create the zones, provision the storage, and finally bind the multipath devices on the host.

I. Zoning
To support future TSM backups, zone the mobile POS hosts with both the DS8700 and the TS3500. For production safety, configure the two switches separately, one at a time.

1. Connect to the SAN384 switch and back up the current config.
Set the local workstation IP to 10.77.77.88/255.255.255.0.

telnet 10.77.77.77
User name: admin   Password: xxxx

Backup file name for switch 1: config-san.txt
Backup file name for switch 2: config-san2.txt

IBM_2499_192:FID128:admin> configupload
Protocol (scp, ftp, local) [ftp]: ftp
Server Name or IP Address [host]: 10.77.77.88
User Name [user]: ftp
Path/Filename [<home dir>/config.txt]: /upload/config-san.txt
Section (all|chassis|FID# [all]): all
Password:

configUpload complete: All selected config parameters are uploaded

2. Zone the mobile POS hosts
(1) Switch 1
zonecreate "MPOS_SW1_GZDS8K", "1,2; 1,3; 1,4; 1,5; 1,6; 1,7; 1,8; 1,9; 2,0; 2,1; 2,2; 2,3"

-- Notes: each zone member is a Domain,Index pair; the leading 1 or 2 is the SAN switch domain ID.
1,2; 1,3; ... 1,9 -- Port Index of the storage connections on the SAN switch (the Port Index reported by switchshow, not the Port number)
2,0; 2,1; 2,2; 2,3 -- Port Index of the new hosts' connections on the SAN switch
1,0 and 1,1 -- connections to the tape library
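The Domain,Index member format above can be sanity-checked with a small shell sketch. This is illustration only, not a switch command; the member list is copied from the zone above:

```shell
# Sanity-check a Brocade-style Domain,Index zone member list (illustration only).
# Each member is "domain,portindex"; members are separated by ';'.
members="1,2; 1,3; 1,4; 1,5; 1,6; 1,7; 1,8; 1,9; 2,0; 2,1; 2,2; 2,3"

count=0
bad=0
oldifs=$IFS
IFS=';'
for m in $members; do
  m=$(echo "$m" | tr -d ' ')                 # trim spaces around the member
  case "$m" in
    [0-9]*,[0-9]*) count=$((count + 1)) ;;   # looks like domain,index
    *)             bad=$((bad + 1)) ;;
  esac
done
IFS=$oldifs
echo "members=$count bad=$bad"
```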


zonecreate "TS35a_MPOS1_R1","1,0;2,1"
zonecreate "TS35a_MPOS2_R1","1,0;2,3"

zonecreate "TS35b_MPOS1_R1","1,1;2,1"
zonecreate "TS35b_MPOS2_R1","1,1;2,3"

cfgadd "TYZF_SW1","MPOS_SW1_GZDS8K"
cfgadd "TYZF_SW1","TS35a_MPOS1_R1"
cfgadd "TYZF_SW1","TS35a_MPOS2_R1"
cfgadd "TYZF_SW1","TS35b_MPOS1_R1"
cfgadd "TYZF_SW1","TS35b_MPOS2_R1"

After configuring, save and enable:
cfgsave
cfgenable "TYZF_SW1"

Check that the switch is healthy:
switchshow
Verify that the switch state is Online and that each port is connected.
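The switchshow check can be scripted against a captured output. The capture below is a simplified, illustrative sample, not output from the real switch:

```shell
# Check a captured switchshow output for the switch state and online port count.
# The capture below is a simplified, illustrative sample.
cat > /tmp/switchshow.txt <<'EOF'
switchName:     IBM_2499_192
switchState:    Online
Index Port Address Media Speed State     Proto
  0    0   010000  id    N8    Online    FC  F-Port
  1    1   010100  id    N8    Online    FC  F-Port
  2    2   010200  id    N8    No_Light  FC
EOF

state=$(awk -F': *' '/^switchState/ {print $2}' /tmp/switchshow.txt)
online_ports=$(grep -c ' Online ' /tmp/switchshow.txt)
echo "switchState=$state online_ports=$online_ports"
```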

(2) Switch 2
zonecreate "MPOS_SW2_GZDS8K", "1,2; 1,3; 1,4; 1,5; 1,6; 1,7; 1,8; 1,9; 2,0; 2,1; 2,2; 2,3"

zonecreate "TS35c_MPOS1_L2","1,0;2,1"
zonecreate "TS35c_MPOS2_L2","1,0;2,3"

zonecreate "TS35d_MPOS1_L2","1,1;2,1"
zonecreate "TS35d_MPOS2_L2","1,1;2,3"

cfgadd "TYZF_SW2","MPOS_SW2_GZDS8K"
cfgadd "TYZF_SW2","TS35c_MPOS1_L2"
cfgadd "TYZF_SW2","TS35c_MPOS2_L2"
cfgadd "TYZF_SW2","TS35d_MPOS1_L2"
cfgadd "TYZF_SW2","TS35d_MPOS2_L2"

After configuring, save and enable:
cfgsave
cfgenable "TYZF_SW2"

Check that the switch is healthy:
switchshow
Verify that the switch state is Online and that each port is connected.

 

3. Rollback plan

Restore the configuration file:
admin> switchdisable
admin> configdownload

Enter the user name and the backup file name (config-san.txt for switch 1, config-san2.txt for switch 2) at the prompts.

admin> switchenable

admin> switchshow
Verify that the switch state is Online and that each port is connected. Roll back the switches one at a time.

II. On the DS8700, create a volume group keyed to the WWPNs of the mobile POS hosts' HBAs, and map the newly created LUNs to that group

(1) Log in to the DS8700

The IPs of the two controllers are 172.16.0.3 and 172.17.0.4.

The workstation reaches the DS8700 array via DHCP; confirm the management IP used for login below is reachable first (ping 172.17.0.4).

Open a console with the client software and run dscli, which brings up the dscli> prompt:
C:\Program Files\IBM\dscli>dscli
Enter the primary management console IP address: 172.17.0.4

User name: admin   Password: xxx

(2) Create the volume group

mkvolgrp -type scsimap256 mpos
-- Reverse operation: rmvolgrp v9

Query the newly created volume group:
dscli> lsvolgrp
The new volume group's ID should be v9.

(3) Create the LUNs

Five 200 GB LUNs and four 5 GB LUNs are needed (which extent pools to use is decided below).
The DS8700 has eight extent pools, P0-P7, with the following free space:
Name   ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
========================================================================================
ext_P0 P0 fb            0  below              1199         62      1199        0      14
ext_P1 P1 fb            1  below              1199         62      1199        0      14
ext_P2 P2 fb            0  below              1199         62      1199        0      14
ext_P3 P3 fb            1  below              1149         64      1149        0      24
ext_P4 P4 fb            0  below              1732         53      1732        0      14
ext_P5 P5 fb            1  below              1732         53      1732        0      14
ext_P6 P6 fb            0  below              1732         53      1732        0      14
ext_P7 P7 fb            1  below              1732         53      1732        0      14

To spread I/O, create one 200 GB LUN in each of P3-P7 and one 5 GB LUN in each of P4-P7, following the existing naming convention and volume ID sequence:
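Before running mkfbvol, the free space in the lsextpool table above can be checked against this layout. A sketch; the free-space figures are copied from the table above:

```shell
# Capacity check before mkfbvol: the plan needs 5 x 200 GB + 4 x 5 GB LUNs.
# Free space per pool (GiB) is copied from the lsextpool output above.
cat > /tmp/pools.txt <<'EOF'
P3 1149
P4 1732
P5 1732
P6 1732
P7 1732
EOF

total_needed=$((5 * 200 + 4 * 5))
all_fit=yes
while read pool free; do
  case "$pool" in
    P3) need=200 ;;     # P3 gets one 200 GB LUN
    *)  need=205 ;;     # P4-P7 each get a 200 GB and a 5 GB LUN
  esac
  [ "$free" -ge "$need" ] || all_fit=no
  echo "$pool free=${free} need=${need}"
done < /tmp/pools.txt
echo "total_needed=${total_needed} all_fit=${all_fit}"
```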

mkfbvol -extpool P3 -cap 200 -name vol_#h 1321
mkfbvol -extpool P4 -cap 200 -name vol_#h 1417
mkfbvol -extpool P5 -cap 200 -name vol_#h 1514
mkfbvol -extpool P6 -cap 200 -name vol_#h 1614
mkfbvol -extpool P7 -cap 200 -name vol_#h 1714

mkfbvol -extpool P4 -cap 5 -name vol_#h 1418
mkfbvol -extpool P5 -cap 5 -name vol_#h 1515
mkfbvol -extpool P6 -cap 5 -name vol_#h 1615
mkfbvol -extpool P7 -cap 5 -name vol_#h 1715

-- Reverse operation: rmfbvol -safe 1321
           rmfbvol -safe 1417
           rmfbvol -safe 1514
           rmfbvol -safe 1614
           rmfbvol -safe 1714
          
           rmfbvol -safe 1418
           rmfbvol -safe 1515
           rmfbvol -safe 1615
           rmfbvol -safe 1715

Created in pools P3-P7: the five 200 GB LUNs 1321, 1417, 1514, 1614, 1714.
Created in pools P4-P7: the four 5 GB LUNs 1418, 1515, 1615, 1715.

 

(4) Map the LUNs to the host's volume group

chvolgrp -action add -volume 1321,1417,1514,1614,1714,1418,1515,1615,1715 v9
-- Reverse operation: chvolgrp -action remove -volume 1321,1417,1514,1614,1714,1418,1515,1615,1715 v9

Create the host-to-array mappings:
mkhostconnect -wwname 21000024ff50ce3c  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc0
mkhostconnect -wwname 21000024ff50ce3d  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc1
mkhostconnect -wwname 21000024ff50c9dc  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc2 
mkhostconnect -wwname 21000024ff50c9dd  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos1_fc3

mkhostconnect -wwname 21000024ff50cada  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc0
mkhostconnect -wwname 21000024ff50cadb  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc1
mkhostconnect -wwname 21000024ff50cb78  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc2 
mkhostconnect -wwname 21000024ff50cb79  -profile "Intel - Linux RHEL" -volgrp v9 -ioport all mpos2_fc3 

-- Reverse operation: run lshostconnect -volgrp v9 to obtain each host_connect_id, then:
            rmhostconnect <host_connect_id>
           

Check the LUNs mapped to v9:
showvolgrp v9
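A captured showvolgrp output can be checked for all nine volume IDs in one pass. The sample output below is simplified and illustrative, not real dscli output:

```shell
# Verify that all nine new volume IDs appear in a captured showvolgrp output.
# The sample output below is simplified for illustration.
cat > /tmp/showvolgrp.txt <<'EOF'
Name mpos
ID   V9
Vols 1321 1417 1418 1514 1515 1614 1615 1714 1715
EOF

missing=0
for vol in 1321 1417 1514 1614 1714 1418 1515 1615 1715; do
  grep -qw "$vol" /tmp/showvolgrp.txt || { echo "missing $vol"; missing=$((missing + 1)); }
done
echo "missing=$missing"
```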

III. Binding the multipath devices
1. Install multipath and generate /etc/multipath.conf

# fdisk -l | grep sd   -- list the disks; Linux does not see newly added storage until the host is rebooted (or the SCSI bus is rescanned)
Use scsi_id -g -u <device> to look up the WWID of each newly created LUN.

# yum install device-mapper-multipath.x86_64

Generate the multipath.conf file:
# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf

2. Configure /etc/multipath.conf

Fill in /etc/multipath.conf with the WWIDs found above, using the /etc/multipath.conf of another production host as a template:
blacklist_exceptions {
                devnode "^(sd)[b-z]"
                devnode "^(dm-)[0-9]"
}

defaults {
        user_friendly_names yes
        path_grouping_policy    group_by_prio
        features                "1 queue_if_no_path"
        path_checker            tur
}

multipaths {

        multipath {
                wwid            <wwid>
                alias           mpathdsk1
        }
        multipath {
                wwid            <wwid>
                alias           mpathdsk2
        }
        multipath {
                wwid            <wwid>
                alias           mpathdsk3
        }
        multipath {
                wwid            <wwid>
                alias           mpathdsk4
        }
        multipath {
                wwid            <wwid>
                alias           mpathdsk5
        }
        multipath {
                wwid            <wwid>
                alias           crsdsk1
        }
        multipath {
                wwid            <wwid>
                alias           crsdsk2
        }
        multipath {
                wwid            <wwid>
                alias           crsdsk3
        }
        multipath {
                wwid            <wwid>
                alias           crsdsk4
        }
}
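The multipath { } stanzas above can also be generated from a WWID-to-alias list instead of edited by hand. The WWIDs below are placeholders (note the x's); substitute the real scsi_id output:

```shell
# Generate multipath { } stanzas from a wwid-to-alias list.
# The WWIDs below are placeholders; substitute the real scsi_id output.
cat > /tmp/wwids.txt <<'EOF'
360050763xxxxxxxxxxxxxxxxxxxx1321 mpathdsk1
360050763xxxxxxxxxxxxxxxxxxxx1417 mpathdsk2
EOF

stanzas=$(while read wwid alias; do
  printf '        multipath {\n'
  printf '                wwid            %s\n' "$wwid"
  printf '                alias           %s\n' "$alias"
  printf '        }\n'
done < /tmp/wwids.txt)
echo "$stanzas"
```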

After /etc/multipath.conf is configured:

# /etc/init.d/multipathd start
# multipath -ll      -- list the multipath devices; then reboot the server and confirm it comes back cleanly

# ll /dev/mapper/    -- list the multipath devices under this path
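After the reboot, the expected aliases under /dev/mapper can be checked in one pass. The "sample" variable here stands in for the real ls /dev/mapper output:

```shell
# Check that every expected multipath alias is present (sketch).
# "sample" stands in for the real output of: ls /dev/mapper
sample="control mpathdsk1 mpathdsk2 mpathdsk3 mpathdsk4 mpathdsk5 crsdsk1 crsdsk2 crsdsk3 crsdsk4"

missing=0
for alias in mpathdsk1 mpathdsk2 mpathdsk3 mpathdsk4 mpathdsk5 \
             crsdsk1 crsdsk2 crsdsk3 crsdsk4; do
  case " $sample " in
    *" $alias "*) : ;;                      # alias present
    *) echo "missing $alias"; missing=$((missing + 1)) ;;
  esac
done
echo "missing=$missing"
```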


Binding the disks as raw devices and installing the database are handled by the system integrator.
