Introduction to HA clusters: http://baike.baidu.com/view/996184.htm

 

Preparation

I: target  192.168.2.100

1: Change the hostname

[root@zzu ~]# vim /etc/sysconfig/network        
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=target.a.com

2: Edit the hosts file

[root@target ~]# vim /etc/hosts
127.0.0.1    localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.2.100   target.a.com  target
192.168.2.10    node1.a.com   node1
192.168.2.20    node2.a.com   node2

3: Synchronize the time across the nodes

[root@target ~]# hwclock -s
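Note that hwclock -s only copies each machine's hardware clock into its system clock; it does not actually synchronize the nodes with one another. If the clocks drift apart, a common alternative is to point every node at the same time server (a minimal sketch; time.example.com is a placeholder for whatever NTP server is reachable in your network):

[root@target ~]# ntpdate time.example.com     # run the same command on node1 and node2
[root@target ~]# hwclock -w                   # optionally write the corrected time back to the hardware clock

Consistent clocks also make it much easier to correlate the cluster logs later.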

II: node1   192.168.2.10

1: Change the hostname

[root@zzu ~]# vim /etc/sysconfig/network        
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node1.a.com

2: Edit the hosts file

[root@node1 ~]# vim /etc/hosts
127.0.0.1    localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.2.100   target.a.com  target
192.168.2.10    node1.a.com   node1
192.168.2.20    node2.a.com   node2

3: Synchronize the time across the nodes

[root@node1 ~]# hwclock -s

III: node2  192.168.2.20

1: Change the hostname

[root@zzu ~]# vim /etc/sysconfig/network        
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=node2.a.com

2: Edit the hosts file

[root@node2 ~]# vim /etc/hosts
127.0.0.1    localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.2.100   target.a.com  target
192.168.2.10    node1.a.com   node1
192.168.2.20    node2.a.com   node2

3: Synchronize the time across the nodes

[root@node2 ~]# hwclock -s

Implementation

I: target

1: Add a new disk to target to use as the shared storage.

[root@target ~]# fdisk -l

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         123      987966   83  Linux

2:[root@target ~]# yum install scsi-target-utils*

3:[root@target ~]# service tgtd restart
Stopping SCSI target daemon:                               [  OK  ]
Starting SCSI target daemon:                                [  OK  ]

4:[root@target ~]# chkconfig tgtd on    # start tgtd at boot

5:

[root@target ~]# tgtadm --lld iscsi  --op new --mode target --tid 1 --targetname iqn.2012-05.com.a.target
[root@target ~]# tgtadm --lld iscsi --op new --mode logicalunit --tid 1  --lun 1 -b /dev/sdb1
[root@target ~]# tgtadm --lld iscsi --op bind  --mode target --tid 1  --initiator-address=192.168.2.0/24    # allow access only from initiators on this network (IP-based ACL)

[root@target ~]# tgtadm --lld iscsi --op show  --mode target    # show the target
Target 1: iqn.2012-05.com.a.target
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 1012 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/sdb1
    Account information:
    ACL information:
        192.168.2.0/24

[root@target ~]# vim /etc/tgt/targets.conf
<target iqn.2012-05.com.a.target>
         backing-store /dev/sdb1

         initiator-address 192.168.2.0/24
</target>
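Note that tgtadm configures the target only in the running daemon; the targets.conf entry above is what recreates it the next time tgtd starts (assuming this build of scsi-target-utils reads the file at startup, which is what the edit above relies on). A quick way to check, while no initiators are logged in yet:

[root@target ~]# service tgtd restart                           # re-reads /etc/tgt/targets.conf
[root@target ~]# tgtadm --lld iscsi --op show --mode target     # Target 1 should come back with LUN 1 and the ACL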

II: node1

1:[root@node1 ~]# yum install iscsi-initiator-utils*

2:

[root@node1 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2012-05.com.a.node1

3:[root@node1 ~]# service iscsi start                        [  OK  ]
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]

[root@node1 ~]# chkconfig iscsi on

4:[root@node1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.2.100    # discover the storage target
192.168.2.100:3260,1 iqn.2012-05.com.a.target

[root@node1 ~]# iscsiadm --mode node  --targetname iqn.2012-05.com.a.target  --portal 192.168.2.100:3260 --login    # log in to the target

Logging in to [iface: default, target: iqn.2012-05.com.a.target, portal: 192.168.2.100,3260]
Login to [iface: default, target: iqn.2012-05.com.a.target, portal: 192.168.2.100,3260]: successful    # login succeeded
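For reference, should a node ever need to detach from the storage cleanly (for example before retiring it from the cluster), the session can be closed and its record removed with the matching iscsiadm operations (a sketch using the same target and portal as above):

[root@node1 ~]# iscsiadm --mode node --targetname iqn.2012-05.com.a.target --portal 192.168.2.100:3260 --logout
[root@node1 ~]# iscsiadm --mode node --targetname iqn.2012-05.com.a.target --portal 192.168.2.100:3260 -o delete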

5:[root@node1 ~]# fdisk -l

Command (m for help): p

Disk /dev/sdb: 1011 MB, 1011677184 bytes
32 heads, 61 sectors/track, 1012 cylinders
Units = cylinders of 1952 * 512 = 999424 bytes

   Device Boot      Start         End      Blocks   Id  System

[root@node1 ~]# fdisk /dev/sdb

Command (m for help): p

Disk /dev/sdb: 1011 MB, 1011677184 bytes
32 heads, 61 sectors/track, 1012 cylinders
Units = cylinders of 1952 * 512 = 999424 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1002      977921+  83  Linux

[root@node1 ~]# partprobe /dev/sdb
[root@node1 ~]# cat /proc/partitions

[root@node1 ~]# mkfs -t ext3 /dev/sdb1    # format the shared storage

[root@node1 ~]# mkdir /mnt/sdb1

[root@node1 ~]# mount /dev/sdb1 /mnt/sdb1

[root@node1 ~]# cd /mnt/sdb1/

[root@node1 sdb1]# echo "hello the world " > index.html

[root@node1 sdb1]# cd
[root@node1 ~]# umount /dev/sdb1

III: node2

1:[root@node2 ~]#  yum install iscsi-initiator-utils*

[root@node2 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2012-05.com.a.node2

[root@node2 ~]# service iscsi start
iscsid is stopped
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]

[root@node2 ~]# chkconfig iscsi on

[root@node2 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.2.100    # discover the storage target
192.168.2.100:3260,1 iqn.2012-05.com.a.target
[root@node2 ~]# iscsiadm --mode node  --targetname iqn.2012-05.com.a.target  --portal 192.168.2.100:3260 --login    # log in to the storage target
Logging in to [iface: default, target: iqn.2012-05.com.a.target, portal: 192.168.2.100,3260]
Login to [iface: default, target: iqn.2012-05.com.a.target, portal: 192.168.2.100,3260]: successful    # login succeeded
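Because both nodes see the same LUN, node2 does not partition or format anything; once node1 has unmounted the filesystem it created, node2 can mount it and read the test page. A quick check (a sketch, assuming the LUN also shows up as /dev/sdb on node2; with a plain ext3 filesystem the two nodes must never mount it at the same time):

[root@node2 ~]# partprobe /dev/sdb
[root@node2 ~]# mkdir -p /mnt/sdb1
[root@node2 ~]# mount /dev/sdb1 /mnt/sdb1
[root@node2 ~]# cat /mnt/sdb1/index.html      # should print the "hello the world" page written on node1
[root@node2 ~]# umount /mnt/sdb1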

Checking on target, we can see that both node1 and node2 are now logged in to the shared block device:

[root@target ~]# tgtadm --lld iscsi --op show  --mode target
Target 1: iqn.2012-05.com.a.target
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 2
            Initiator: iqn.2012-05.com.a.node1
            Connection: 0
                IP Address: 192.168.2.10
        I_T nexus: 3
            Initiator: iqn.2012-05.com.a.node2
            Connection: 0
                IP Address: 192.168.2.20
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 1012 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/sdb1
    Account information:
    ACL information:
        192.168.2.0/24

IV: Install the graphical management tool to manage the cluster servers

Note: here we install luci on the target machine.

1:[root@target ~]# yum install -y luci

[root@target ~]# luci_admin init    # initialize luci and set the admin password

[root@target ~]# chkconfig luci on    # start luci at boot

[root@target ~]# service luci restart
Shutting down luci:                                        [  OK  ]
Starting luci: Generating https SSL certificates...  done
                                                           [  OK  ]

Point your web browser to https://target.a.com:8084 to access luci
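If the browser cannot reach that page, make sure TCP port 8084 is open on target (a sketch, assuming iptables is the active firewall; skip this if the firewall is disabled):

[root@target ~]# iptables -I INPUT -p tcp --dport 8084 -j ACCEPT
[root@target ~]# service iptables save        # make the rule survive a reboot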

2:[root@node1 ~]# yum install ricci httpd

[root@node1 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Starting ricci:                                            [  OK  ]

[root@node1 ~]# chkconfig ricci on

3:[root@node2 ~]# yum install ricci httpd

[root@node2 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Starting ricci:                                             [  OK  ]

[root@node2 ~]# chkconfig ricci on
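ricci is the agent that luci talks to when it pushes the cluster configuration to the nodes; it listens on TCP port 11111 by default. A quick sanity check on each node (a sketch):

[root@node1 ~]# netstat -tupln | grep ricci    # should show ricci listening on port 11111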

4: Manage the cluster through the web browser

(screenshots omitted)

5: Add a VM fence device

(screenshots omitted)

6: Add the fence device to each node

(screenshots omitted)

7: Add resources to the cluster

(screenshots omitted)

8: Create a failover domain

(screenshots omitted)

V: Check the status of the www cluster service on the nodes

On node1.a.com:

[root@node1 ~]# cat /etc/cluster/cluster.conf
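Besides reading cluster.conf directly, the cman tools report membership and quorum from the running cluster (a sketch; the exact output depends on the configuration created in luci):

[root@node1 ~]# cman_tool status    # cluster name, config version, quorum
[root@node1 ~]# cman_tool nodes     # member list with node IDs and join status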

[root@node1 ~]# clustat
Cluster Status for cluster1 @ Tue May  8 01:14:20 2012
Member Status: Quorate

Member Name                                                    ID   Status
------ ----                                                    ---- ------
node2.a.com                                                        1 Online, rgmanager
node1.a.com                                                        2 Online, Local, rgmanager

Service Name                                          Owner (Last)                                          State        
------- ----                                          ----- ------                                          -----        
service:www                                           node1.a.com        # the www service is currently running on node1

[root@node1 ~]# netstat -tupln|grep httpd
tcp        0      0 192.168.2.200:80            0.0.0.0:*                   LISTEN      21917/httpd 

[root@node1 ~]# mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/hdc on /media/RHEL_5.4 i386 DVD type iso9660 (ro,noexec,nosuid,nodev,uid=0)
/dev/hdc on /mnt/cdrom type iso9660 (ro)
none on /sys/kernel/config type configfs (rw)
/dev/sdb1 on /var/www/html type ext3 (rw)        # the shared iSCSI device has been mounted by the cluster
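To confirm that the service really answers on the floating IP (192.168.2.200, the address httpd is bound to above), run a quick check from any host on the 192.168.2.0/24 network (a sketch, assuming curl is installed; elinks or a browser works just as well):

[root@target ~]# curl http://192.168.2.200/    # should return the "hello the world" test page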

On node2.a.com:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Tue May  8 01:15:30 2012
Member Status: Quorate

Member Name                                                    ID   Status
------ ----                                                    ---- ------
node2.a.com                                                        1 Online, Local, rgmanager
node1.a.com                                                        2 Online, rgmanager

Service Name                                          Owner (Last)                                          State        
------- ----                                          ----- ------                                          -----        
service:www                                          node1.a.com        # the www service is running on node1


 

Now simulate node1 failing and see whether the www service fails over to node2.


Before failover:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Tue May  8 01:50:19 2012
Member Status: Quorate

Member Name                                                    ID   Status
------ ----                                                    ---- ------
node2.a.com                                                        1 Online, Local, rgmanager
node1.a.com                                                        2 Online, rgmanager

Service Name                                          Owner (Last)                                          State        
------- ----                                          ----- ------                                          -----        
service:www                                           node1.a.com                                           started  

After failover:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Tue May  8 01:58:54 2012
Member Status: Quorate

Member Name                                                    ID   Status
------ ----                                                    ---- ------
node2.a.com                                                        1 Online, Local, rgmanager
node1.a.com                                                        2 Online, rgmanager

Service Name                                          Owner (Last)                                          State        
------- ----                                          ----- ------                                          -----        
service:www                                           (node1.a.com)                                         failed 


Note: the www service does not fail over to node2 automatically, mainly because our VM fence device is not working.

We have to start the www service on node2 manually, either in luci or from the command line as sketched below.
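From a shell on either node, rgmanager's clusvcadm can do the same thing as the luci buttons (a sketch using the service and node names from above):

[root@node2 ~]# clusvcadm -d www                     # disable the failed service first
[root@node2 ~]# clusvcadm -e www -m node2.a.com      # enable (start) it on node2
[root@node2 ~]# clusvcadm -r www -m node1.a.com      # later, relocate it back once node1 is healthy again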


Check the cluster status on node2 again:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Tue May  8 02:16:30 2012
Member Status: Quorate

Member Name                                                    ID   Status
------ ----                                                    ---- ------
node2.a.com                                                        1 Online, Local, rgmanager
node1.a.com                                                        2 Online, rgmanager

Service Name                                          Owner (Last)                                          State        
------- ----                                          ----- ------                                          -----        
service:www                                           node2.a.com                                           started 

A cluster built this way is still not very smart; there are other ways to manage a cluster that we can use as well.
