Red Hat iSCSI Shared Storage + GFS Installation and Configuration

Thanks to Wei Ge for his documentation and help. His blog: http://ylw6006.blog.51cto.com. A real expert.

Environment:

OS: Red Hat Enterprise Linux 5.4
Server: 10.0.0.52
Node 1: 10.0.0.53
Node 2: 10.0.0.54

Red Hat iSCSI shared storage installation:

Server side:

First install the scsi-target-utils package and set the tgtd service to start at boot, then carve out an LVM logical volume to serve as the shared disk; here the LV is named Lvmydata (in volume group vg01):

[root@localhost ~]# yum install -y scsi-target-utils.x86_64
[root@localhost ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@localhost ~]# chkconfig tgtd on
[root@localhost ~]# lvs
  LV       VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  Lvmydata vg01 -wi-ao 310.41G                                      
  Lvroot   vg01 -wi-ao  50.00G                                      
  Lvusr    vg01 -wi-ao  50.00G

Every time the tgtd service is restarted, the target and logical unit previously bound with tgtadm are lost, so the following script simplifies redoing the setup.

IQN naming convention: iqn.<date>.<reversed domain name>:<optional name>, for example: iqn.2011-12-15.com.hsf.data:shareddisk
Here all IPs are allowed to log in. When tearing down a target binding, remove the logicalunit first and then the target.

[root@localhost ~]# vi /etc/init.d/tgtdrules   
#!/bin/sh  
# chkconfig: - 59 85  
# Source function library.  
 
. /etc/rc.d/init.d/functions  
 
start() {  
        echo -e "Starting Tgtdrules Server:\n"  
 
        # Create the target (tid 1)
        tgtadm  --lld iscsi --op new --mode target --tid 1 -T iqn.2011-12-15.com.hsf.data:shareddisk

        # Attach the shared LV as LUN 1
        tgtadm  --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg01/Lvmydata

        # ACL: per-IP binding examples are commented out; ALL accepts any initiator IP
        #tgtadm  --lld iscsi --op bind --mode target --tid 1 -I 10.0.0.53
        #tgtadm  --lld iscsi --op bind --mode target --tid 1 -I 10.0.0.54
        tgtadm  --lld iscsi --op bind --mode target --tid 1 -I ALL
}  
 
stop() {
        echo -e "Stopping Tgtdrules Server:\n"

        # Unbind the initiator ACL first, while the target still exists
        #tgtadm  --lld iscsi --op unbind --mode target --tid 1 -I 10.0.0.53
        #tgtadm  --lld iscsi --op unbind --mode target --tid 1 -I 10.0.0.54
        tgtadm  --lld iscsi --op unbind --mode target --tid 1 -I ALL

        # Remove the LUN before the target
        tgtadm  --lld iscsi --op delete --mode logicalunit --tid 1 --lun 1

        # Remove the target
        tgtadm  --lld iscsi --op delete --mode target --tid 1
}
 
status() {  
        tgtadm --lld iscsi -o show -m target
}  
case "$1" in  
  start)
  start
  ;;  
 
  stop)  
  stop  
  ;;  
 
  status)  
  status  
  ;;  
 
  *)  
  echo $"Usage: tgtdrules {start|stop|status}"
  ;;  
 
esac  
exit 0
[root@localhost ~]# chmod  +x /etc/init.d/tgtdrules
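
Since the script carries a chkconfig header (# chkconfig: - 59 85), it can presumably also be registered as a service so that the target and LUN are re-created automatically at boot once tgtd is up. A minimal sketch, assuming the script above was saved as /etc/init.d/tgtdrules:

[root@localhost ~]# chkconfig --add tgtdrules
[root@localhost ~]# chkconfig tgtdrules on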

Server-side test:

[root@localhost ~]# service tgtd status 
tgtd (pid 29246 29245) is running...
[root@localhost ~]# netstat -ntpl| grep :3260
tcp        0      0 0.0.0.0:3260                0.0.0.0:*                   LISTEN      29245/tgtd          
tcp        0      0 :::3260                     :::*                        LISTEN      29245/tgtd
[root@localhost ~]# service tgtdrules start 
[root@localhost ~]# service tgtdrules status 
Target 1: iqn.2011-12-15.com.hsf.data:shareddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 333296 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/vg01/Lvmydata
    Account information:
    ACL information:
        ALL

Node installation:

Install the iscsi-initiator-utils package and set the iscsi service to start at boot.

[root@localhost ~]# yum install -y iscsi-initiator-utils.x86_64
[root@localhost ~]# service iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@localhost ~]# chkconfig iscsi on
[root@localhost ~]# service iscsi status
iscsid (pid  29527) is running...

The script below automates attaching the iSCSI disk. Before logging in or out, it runs a discovery against the server to probe whether the shared target is available:

[root@localhost ~]# vi /etc/init.d/iscsiadmrules    
#!/bin/bash  
# chkconfig: - 20 85  
# Source function library.  
 
. /etc/rc.d/init.d/functions  
 
start() {  
echo -e "Starting Iscsiadmrules Server:\n"  
iscsiadm --mode discovery --type sendtargets --portal 10.0.0.52  
iscsiadm --mode node --targetname iqn.2011-12-15.com.hsf.data:shareddisk --portal 10.0.0.52:3260 --login  
}  
 
stop() {  
echo -e "Stopping Iscsiadmrules Server:\n"  
iscsiadm --mode discovery --type sendtargets --portal 10.0.0.52 
iscsiadm --mode node --targetname iqn.2011-12-15.com.hsf.data:shareddisk --portal 10.0.0.52:3260 --logout  
}  
 
case "$1" in  
 start)  
 start  
 ;;  
 
 stop)  
 stop  
 ;;  
esac  
exit 0  
[root@localhost ~]# chmod  +x /etc/rc.d/init.d/iscsiadmrules

Log in (attach the shared disk):

[root@localhost ~]# service iscsiadmrules start 
Starting Iscsiadmrules Server:
10.0.0.52:3260,1 iqn.2011-12-15.com.hsf.data:shareddisk
Logging in to [iface: default, target: iqn.2011-12-15.com.hsf.data:shareddisk, portal: 10.0.0.52,3260]
Login to [iface: default, target: iqn.2011-12-15.com.hsf.data:shareddisk, portal: 10.0.0.52,3260]: successful
[root@localhost ~]# service iscsi status
iscsid (pid  29527) is running...
[root@localhost ~]# fdisk -l
Disk /dev/sda: 449.4 GB, 449495171072 bytes
255 heads, 63 sectors/track, 54648 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          25      200781   83  Linux
/dev/sda2              26        1069     8385930   82  Linux swap / Solaris
/dev/sda3            1070       54648   430373317+  8e  Linux LVM
Disk /dev/sdb: 333.2 GB, 333296173056 bytes
255 heads, 63 sectors/track, 40520 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table

The shared storage has been attached as /dev/sdb.
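
As an extra sanity check on the initiator side, the active session can be listed; it should show a tcp session to 10.0.0.52:3260 for iqn.2011-12-15.com.hsf.data:shareddisk:

[root@localhost ~]# iscsiadm -m session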

Take a look on the server side:

[root@localhost ~]# service tgtdrules status 
Target 1: iqn.2011-12-15.com.hsf.data:shareddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.1994-05.com.redhat:424056be2e26
            Connection: 0
                IP Address: 10.0.0.53
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 333296 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/vg01/Lvmydata
    Account information:
    ACL information:
        ALL

You can see that 10.0.0.53 has attached the target.

Run the same installation and login steps on the second node. A minimal sketch of the node-2 side, assuming the iscsiadmrules script above has been copied to 10.0.0.54:
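
[root@localhost ~]# yum install -y iscsi-initiator-utils.x86_64
[root@localhost ~]# chkconfig iscsi on
[root@localhost ~]# service iscsi start
[root@localhost ~]# service iscsiadmrules start

Then check the server side again: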

[root@localhost ~]# service tgtdrules status 
Target 1: iqn.2011-12-15.com.hsf.data:shareddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.1994-05.com.redhat:424056be2e26
            Connection: 0
                IP Address: 10.0.0.53
        I_T nexus: 2
            Initiator: iqn.1994-05.com.redhat:ba96a2c6310
            Connection: 0
                IP Address: 10.0.0.54
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 333296 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/vg01/Lvmydata
    Account information:
    ACL information:
        ALL

GFS Installation and Configuration

First set up yum:

[root@localhost ~]# cat /etc/yum.repos.d/base.repo 
[base]
name=RHEL 5.4 Server
baseurl=ftp://10.0.0.23/pub/Server/
gpgcheck=0
[VT]
name=RHEL 5.4 VT
baseurl=ftp://10.0.0.23/pub/VT/
gpgcheck=0
[Cluster]
name=RHEL 5.4 Cluster
baseurl=ftp://10.0.0.23/pub/Cluster
gpgcheck=0
[ClusterStorage]
name=RHEL 5.4 ClusterStorage
baseurl=ftp://10.0.0.23/pub/ClusterStorage
gpgcheck=0
[root@localhost ~]#  yum -y groupinstall "Cluster Storage"  "Clustering"

Create the cluster configuration file and start the related daemons; apply the same configuration on both nodes.

[root@localhost ~]# vi /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster config_version="2" name="file_gfs">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="10.0.0.53" nodeid="1" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="10.0.0.54" nodeid="2" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
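
The same cluster.conf must be present on both nodes; one simple way to copy it over, assuming root SSH access between the nodes:

[root@localhost ~]# scp /etc/cluster/cluster.conf 10.0.0.54:/etc/cluster/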
[root@localhost ~]# lvmconf --enable-cluster
[root@localhost ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@localhost ~]# chkconfig rgmanager on
[root@localhost ~]# service ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Starting ricci:                                            [  OK  ]
[root@localhost ~]#  chkconfig ricci on
[root@localhost ~]# service cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@localhost ~]# chkconfig cman on
[root@localhost ~]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VGs:   3 logical volume(s) in volume group "vg01" now active
                                                           [  OK  ]
[root@localhost ~]# chkconfig clvmd on
[root@localhost ~]#  clustat 
Cluster Status for file_gfs @ Thu Dec 15 13:53:40 2011
Member Status: Quorate
 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 10.0.0.53                                                           1 Online, Local
 10.0.0.54                                                           2 Offline
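
10.0.0.54 shows Offline here, presumably because the cluster services have not yet been started on the second node. Once they have, membership can be double-checked on either node, for example with:

[root@localhost ~]# cman_tool nodes

Both members should then report Online in clustat.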

Carve out an LVM volume on the shared storage; this only needs to be done on one node. Since the earlier fdisk output showed that /dev/sdb has no partition table yet, create the /dev/sdb1 partition first, as sketched below.
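
A minimal sketch of partitioning the shared disk, assuming the whole of /dev/sdb becomes a single partition of type 8e (Linux LVM); the interactive fdisk keystrokes are noted as comments:

[root@localhost ~]# fdisk /dev/sdb      # n -> p -> 1 -> accept defaults, then t -> 8e, then w
[root@localhost ~]# partprobe /dev/sdb  # make the kernel re-read the partition table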

[root@localhost ~]# pvcreate /dev/sdb1 
  Physical volume "/dev/sdb1" successfully created
[root@localhost ~]# vgcreate file_gfs /dev/sdb1 
  Clustered volume group "file_gfs" successfully created
[root@localhost ~]# vgdisplay file_gfs
  --- Volume group ---
  VG Name               file_gfs
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               310.40 GB
  PE Size               4.00 MB
  Total PE              79462
  Alloc PE / Size       0 / 0   
  Free  PE / Size       79462 / 310.40 GB
  VG UUID               XebmJz-qHFl-Pe5U-gvEL-y1Ml-HbRV-j3GvSd
[root@localhost mnt]# lvcreate -n gfs -l 79462 file_gfs
  Logical volume "gfs" created

If the following error is reported:

[root@localhost ~]# lvcreate -n gfs1 -l 79462 file_gfs
  Error locking on node 10.0.0.54: Volume group for uuid not found: XebmJzqHFlPe5UgvELy1MlHbRVj3GvSdig4REYsoFSRsUE7byErDdOYcJwu3DTct
  Aborting. Failed to activate new LV to wipe the start of it.


Try restarting clvmd on both nodes first:

[root@localhost ~]# service clvmd restart
Deactivating VG file_gfs:   0 logical volume(s) in volume group "file_gfs" now active
                                                           [  OK  ]
Stopping clvm:                                             [  OK  ]
Starting clvmd:                                            [  OK  ]
Activating VGs:   0 logical volume(s) in volume group "file_gfs" now active
  3 logical volume(s) in volume group "vg01" now active
                                                           [  OK  ]

If that still fails, run vgs on both nodes to confirm that each one can see the volume group:

[root@localhost mnt]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  file_gfs   1   1   0 wz--nc 310.40G    0 
  vg01       1   3   0 wz--n- 410.41G    0 
[root@localhost ~]# service clvmd restart
Deactivating VG file_gfs:   0 logical volume(s) in volume group "file_gfs" now active
                                                           [  OK  ]
Stopping clvm:                                             [  OK  ]
Starting clvmd:                                            [  OK  ]
Activating VGs:   1 logical volume(s) in volume group "file_gfs" now active
  3 logical volume(s) in volume group "vg01" now active
                                                           [  OK  ]

Format the LVM volume:

[root@localhost ~]# gfs_mkfs -h
Usage:
gfs_mkfs [options] <device>
Options:
  -b <bytes>       Filesystem block size
  -D               Enable debugging code
  -h               Print this help, then exit
  -J <MB>          Size of journals
  -j <num>         Number of journals
  -O               Don't ask for confirmation
  -p <name>        Name of the locking protocol
  -q               Don't print anything
  -r <MB>          Resource Group Size
  -s <blocks>      Journal segment size
  -t <name>        Name of the lock table
  -V               Print program version information, then exit
[root@localhost ~]# gfs_mkfs -p lock_dlm  -t file_gfs:gfs -j 2 /dev/file_gfs/gfs   
This will destroy any data on /dev/file_gfs/gfs.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/file_gfs/gfs
Blocksize:                 4096
Filesystem Size:           81297320
Journals:                  2
Resource Groups:           1242
Locking Protocol:          lock_dlm
Lock Table:                file_gfs:gfs
Syncing...
All Done

Mount the volume on both nodes (create the /data mount point with mkdir -p /data first if it does not exist) and test writing data:

[root@localhost ~]# mount -t gfs /dev/file_gfs/gfs /data/
[root@localhost data]# dd if=/dev/zero of=aa bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 1.25589 seconds, 835 MB/s
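
To have the GFS volume mounted automatically at boot, one approach is an fstab entry plus the gfs init service (shipped with gfs-utils, it mounts gfs entries from /etc/fstab at startup). A sketch, assuming the /data mount point, to be run on both nodes:

[root@localhost ~]# echo "/dev/file_gfs/gfs /data gfs defaults 0 0" >> /etc/fstab
[root@localhost ~]# chkconfig gfs on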
