Preliminary preparation:

Set the IP address: vim /etc/sysconfig/network-scripts/ifcfg-eth0

Set the hostname: vim /etc/sysconfig/network

Edit the local name-resolution file: vim /etc/hosts

Edit the SELinux config file: vim /etc/selinux/config   (check the current mode with getenforce; switch it temporarily with setenforce 0|1)

Disable iptables: service iptables stop;  chkconfig iptables off (do not start it at boot)
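
A quick way to confirm these preparations took effect (a minimal sketch using standard CentOS 6 commands):

getenforce                  ## should report Permissive or Disabled
service iptables status     ## should report that the firewall is not running
hostname                    ## should print the hostname set above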


SCSI (the iSCSI target server): 192.168.1.1   hostname: localhost, with three extra disks of 1 GB each

web1: 192.168.1.10   hostname: node1

web2: 192.168.1.20   hostname: node2



SCSI server configuration:

cd /mnt/cdrom/Packages/

yum install scsi-target-utils-1.0.24-10.el6.x86_64.rpm  -y

vim /etc/tgt/targets.conf 

<target iqn.2017-12:sdb>

     backing-store /dev/sdb

     initiator-address 192.168.1.0/24

</target>


service tgtd start

netstat -antup |grep 3260 ## check that tgtd is listening
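
Optionally, the exported target can be inspected directly; tgt-admin ships with scsi-target-utils, though the exact output layout may vary by version:

tgt-admin --show   ## should list iqn.2017-12:sdb, its backing store /dev/sdb and the allowed initiator network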


vim /etc/hosts ## edit the hosts file

192.168.1.1     iscsiserver

192.168.1.10    node1

192.168.1.20    node2


ssh-keygen  ## set up SSH key trust between the hosts

cd .ssh/

ll

ssh-copy-id -i id_rsa.pub  192.168.1.10

ssh-copy-id -i id_rsa.pub  192.168.1.20
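
A quick sanity check that key-based login works before relying on it below (simple sketch):

ssh 192.168.1.10 'hostname'   ## should print node1 without asking for a password
ssh 192.168.1.20 'hostname'   ## should print node2 without asking for a password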

cd 

scp /etc/hosts  192.168.1.10:/etc ## push the hosts file to the node, then verify it there

scp /etc/hosts  192.168.1.20:/etc ## push the hosts file to the node, then verify it there


yum install ntp -y ## install the NTP server (normally installed by default)

vim /etc/ntp.conf

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap ## put this around line 10 of ntp.conf; comment out lines 8, 9, 14 and 15

server 127.127.1.0    ## put these two lines around line 26; comment out lines 22-25

fudge 127.127.1.0  stratum 1
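
For reference, the edited part of /etc/ntp.conf should end up looking roughly like this (line positions are approximate):

## allow clients on the cluster network to query this server
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
## use the local clock as the time source
server 127.127.1.0
fudge  127.127.1.0 stratum 1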


service ntpd start

chkconfig ntpd on


ssh 192.168.1.10 'ntpdate 192.168.1.1' ## sync each node's clock against the NTP server

ssh 192.168.1.20 'ntpdate 192.168.1.1'
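
A simple check that the clocks now agree (a small offset right after the first sync is normal):

date; ssh 192.168.1.10 date; ssh 192.168.1.20 date   ## all three timestamps should match to within a second or two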


ssh 192.168.1.10  'service NetworkManager stop'

ssh 192.168.1.10  'chkconfig NetworkManager off'

ssh 192.168.1.20  'service NetworkManager stop'

ssh 192.168.1.20  'chkconfig NetworkManager off'


cd /mnt/cdrom/Packages/

yum install luci-0.26.0-48.el6.centos.x86_64.rpm -y

service luci start

chkconfig luci on


ssh 192.168.1.10 'yum install rgmanager -y' ## rgmanager pulls in ricci and cman

ssh 192.168.1.20 'yum install rgmanager -y'


ssh 192.168.1.10 'service ricci start'

ssh 192.168.1.20 'service ricci start'

ssh 192.168.1.10 'chkconfig ricci on'

ssh 192.168.1.20 'chkconfig ricci on'


ssh 192.168.1.10 'service cman start'

##If starting cman fails with the error:

##Starting cman... xmlconfig cannot find /etc/cluster/cluster.conf [FAILED]

##it is because the node has not joined a cluster yet, so /etc/cluster/cluster.conf has not been generated


ssh 192.168.1.20 'service cman start'

##The same cluster.conf error as above may appear here, for the same reason


ssh 192.168.1.10 'chkconfig cman on'

ssh 192.168.1.20 'chkconfig cman on'


ssh 192.168.1.10 'service rgmanager start'

ssh 192.168.1.20 'service rgmanager start'

ssh 192.168.1.10 'chkconfig rgmanager on'

ssh 192.168.1.20 'chkconfig rgmanager on'


ssh 192.168.1.10 'passwd ricci' ## set the ricci password; type it when prompted

abc

ssh 192.168.1.20 'passwd ricci' ## set the ricci password; type it when prompted

abc


netstat -antup|grep 8084 ## port 8084 is luci's web interface

https://192.168.1.1:8084

root abc (log in as the administrator)

Manage Clusters

Create

Cluster Name: webcluster, and tick "Use the Same Password for All Nodes"

node1   abc (the ricci password)   node1


Select these three options: "Use Locally Installed Packages",

"Reboot Nodes Before Joining Cluster",

"Enable Shared Storage Support"
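
After luci creates the cluster, each node should end up with an /etc/cluster/cluster.conf roughly like the sketch below (illustrative only; the exact attributes and config_version depend on the luci version, and two_node="1" / expected_votes="1" is the special quorum setting used for two-node clusters):

<?xml version="1.0"?>
<cluster config_version="1" name="webcluster">
        <clusternodes>
                <clusternode name="node1" nodeid="1"/>
                <clusternode name="node2" nodeid="2"/>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
</cluster>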




After these preparations are done, run the following commands on both web1 and web2:

clustat

ccs_tool addfence mf fence_manual

##after this command /etc/cluster/cluster.conf changes, which resolves the cman startup error seen earlier

service cman start
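
To confirm the fence device was written and both nodes see the cluster (a simple verification):

grep fence /etc/cluster/cluster.conf   ## the fence_manual device "mf" should now appear
clustat                                ## both node1 and node2 should be listed as Online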



Then go back to the SCSI server, refresh https://192.168.1.1:8084 and click Fence Devices; the mf entry should now appear

ssh 192.168.1.10 'yum install lvm2-cluster gfs2-utils -y' ## may already be present, installed along with rgmanager

ssh 192.168.1.20 'yum install lvm2-cluster gfs2-utils -y'

This completes the iSCSI shared-storage server; the next steps are on web1 and web2

------------------------------------------------------------------------------

web1:

cd /mnt/cdrom/Packages/

yum install iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm  -y

cd 

iscsiadm -m discovery -t sendtargets -p 192.168.1.1

iscsiadm -m node -T iqn.2017-12:sdb -p 192.168.1.1 -l

fdisk -l
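
To double-check that the login succeeded (the device name may differ from /dev/sdb depending on existing local disks):

iscsiadm -m session   ## should show an active session to 192.168.1.1:3260 for iqn.2017-12:sdb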


web2:

cd /mnt/cdrom/Packages/

yum install iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm  -y

cd 

iscsiadm -m discovery -t sendtargets -p 192.168.1.1

iscsiadm -m node -T iqn.2017-12:sdb -p 192.168.1.1 -l

fdisk -l

Before operating on the disk, every node must have attached (logged in to) it; otherwise the operations fail (presumably to keep the nodes in sync).

-----------------------------------------------------------------------------

web1:

pvcreate /dev/sdb  

vgcreate vg0 /dev/sdb

lvcreate -L 800M -n lv0 vg0

mkfs.gfs2 -t webcluster:gfs2 -j 2 -p lock_dlm /dev/vg0/lv0   ## -t ClusterName:FSName, -j journal count (one per node), -p locking protocol; the formatted filesystem gets a UUID

blkid /dev/vg0/lv0  ## look up the UUID

vim /etc/fstab

UUID=5cdf35bb-7cbc-f82c-c06e-cae64e2ad72c       /usr/local/apache2/htdocs/      gfs2    defaults        0 0
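
Because this filesystem sits on an iSCSI disk that is only reachable after the network is up, an fstab entry with the _netdev option is a common alternative (an optional variant, not part of the original steps):

UUID=5cdf35bb-7cbc-f82c-c06e-cae64e2ad72c       /usr/local/apache2/htdocs/      gfs2    defaults,_netdev        0 0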

mount -a

df -hT ## check that it is mounted

reboot

cd /usr/local/apache2/htdocs/

echo 'www.121.com'>index.html ## then check the same location on web2

ll


web2:

blkid /dev/vg0/lv0

vim /etc/fstab

UUID="5cdf35bb-7cbc-f82c-c06e-cae64e2ad72c"     /usr/local/apache2/htdocs/      gfs2    defaults        0 0

mount -a

df -hT ## check that it is mounted

reboot

cd /usr/local/apache2/htdocs/

ll

echo 'www.0101.com'>index.html ## then check the same location on web1


At this point:

(1) GFS2 keeps the data on the two nodes synchronized with each other;

(2) the lock_dlm locking protocol chosen at format time resolves read/write conflicts: at any moment the data is either being written or being read.
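
A quick cross-node check of point (1), run from web1 (a minimal sketch; test.txt is just a throwaway file):

echo 'sync-test' > /usr/local/apache2/htdocs/test.txt
ssh 192.168.1.20 'cat /usr/local/apache2/htdocs/test.txt'   ## should print sync-test immediately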

-----------------------------------------------------------------------------------

Expansion: each expansion must add at least 233 MB


1. Extending the volume group

SCSI:

vim /etc/tgt/targets.conf

<target iqn.2017-12:sdc>

     backing-store /dev/sdc

     initiator-address 192.168.1.0/24

</target>

service tgtd reload   ## be sure NOT to run 'service tgtd restart'; then go check on web1 and web2
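
To confirm the reload picked up the new LUN without disturbing the existing sdb target (sketch):

tgt-admin --show              ## both iqn.2017-12:sdb and iqn.2017-12:sdc should now be listed
netstat -antup | grep 3260    ## tgtd should still be running with its existing connections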


web1:

iscsiadm -m discovery -t sendtargets -p 192.168.1.1

iscsiadm -m node -T iqn.2017-12:sdc -p 192.168.1.1 -l

fdisk -l

pvcreate /dev/sdc  

vgextend vg0 /dev/sdc

lvextend -L 1500M /dev/vg0/lv0

gfs2_grow /dev/vg0/lv0

df -hT ## verify that the expansion succeeded


web2:

iscsiadm -m discovery -t sendtargets -p 192.168.1.1

iscsiadm -m node -T iqn.2017-12:sdc -p 192.168.1.1 -l

fdisk -l

df -hT ## verify that the expansion succeeded


Easy places to go wrong in this exercise:

1. Formatting: mkfs.gfs2 -t webcluster:gfs2 -j 2 -p lock_dlm /dev/vg0/lv0 (different from the usual mkfs commands)

2. On the storage side, after adding a disk: service tgtd reload  ## never run 'service tgtd restart'

3. After extending the LV, run: gfs2_grow /dev/vg0/lv0


This exercise is very easy to get wrong; be careful with every step!


2. Adding another server (node):

On the new server, repeat all the steps that were performed on node1 or node2

Finally, on any existing node, run: gfs2_jadd -j 1 /dev/vg0/lv0 ## add one new journal for the new node
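
Before and after adding the journal, the current journal count can be checked on the mounted filesystem; this assumes gfs2_tool from gfs2-utils supports the journals subcommand on this version, and that /usr/local/apache2/htdocs is the mount point used above:

gfs2_tool journals /usr/local/apache2/htdocs   ## should report one more journal after gfs2_jadd completes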