CLVM + GFS2 Cluster Filesystem Deployment in Practice


Author: Wu Yeliang
Blog: https://wuyeliang.blog.csdn.net/

GFS2
Global File System 2 (GFS2) is one of the most widely used cluster filesystems. Developed by Red Hat, it allows all cluster nodes to access the same filesystem in parallel. Its metadata is stored in a partition or logical volume on the shared (or replicated) storage device itself.

CLVM
Clustered LVM (CLVM) is the cluster extension of LVM. It lets a cluster of machines manage shared storage through LVM. The clvmd daemon is the core of CLVM: it runs on every cluster node and propagates LVM metadata updates, so every node's view of the LVM configuration stays consistent. A logical volume created with CLVM on shared storage is visible to every machine that can access that storage. While a user configures a logical volume, CLVM locks the physical storage being configured, and it relies on a cluster-wide lock service to keep the underlying metadata consistent. Enabling CLVM therefore requires changing lvm.conf to use cluster-wide locking.
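Concretely, "cluster-wide locking" is the `locking_type` setting in /etc/lvm/lvm.conf. The `lvmconf --enable-cluster` helper used later makes this change for you; the sketch below only simulates it on a throwaway file (the two-line sample is illustrative, not a full lvm.conf):

```shell
# Illustrative only: simulate what `lvmconf --enable-cluster` changes.
# locking_type = 1 is local file-based locking; 3 is clustered locking
# through the DLM, which is what CLVM requires.
conf=$(mktemp)
printf 'locking_type = 1\nfallback_to_local_locking = 1\n' > "$conf"
sed -i -e 's/^locking_type = 1/locking_type = 3/' \
       -e 's/^fallback_to_local_locking = 1/fallback_to_local_locking = 0/' "$conf"
grep '^locking_type' "$conf"   # -> locking_type = 3
rm -f "$conf"
```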

Architecture
(architecture diagram not reproduced here)
I. Environment configuration on each node
1. Disable SELinux
With SELinux enforcing, GFS2 read/write performance drops by roughly half.

# setenforce 0
# sed -i.bak "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2. Disable the firewall

# systemctl stop firewalld
# systemctl disable firewalld

3. Disable NetworkManager

# systemctl disable NetworkManager
# systemctl stop NetworkManager

4. Install NTP

# yum -y install ntp

Edit /etc/ntp.conf and point it at your NTP server:

server NTP-server

Start the service and enable it at boot

# systemctl start ntpd 
# systemctl enable ntpd 

Check the NTP synchronization status

# ntpq -p

II. Install the cluster software
1. Install pacemaker and pcs

[node1&node2]# yum -y install pacemaker pcs

2. Start pcsd and enable it at boot

[node1&node2]# systemctl start pcsd 
[node1&node2]# systemctl enable pcsd

3. Set the cluster administrator (hacluster) password

[node1&node2]# passwd hacluster 
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

4. Authenticate the nodes

[node1/node2]# pcs cluster auth node1 node2 
Username: hacluster 
Password:
node1: Authorized
node2: Authorized

5. Create the cluster

[node1/node2]# pcs cluster setup --name ha_cluster node1 node2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
node1: Succeeded
node2: Succeeded

6. Start the cluster

[node1/node2]# pcs cluster start --all 
node2: Starting Cluster...
node1: Starting Cluster...

7. Enable the cluster at boot

[node1/node2]# pcs cluster enable --all 
node1: Cluster Enabled
node2: Cluster Enabled

8. Check the cluster status

[node1/node2]# pcs status cluster 
Cluster Status:
 Last updated: Wed Jun 23 19:36:55 2017
 Last change: Wed Jun 23 19:36:47 2017
 Stack: corosync
 Current DC: node1 (1) - partition with quorum
 Version: 1.1.12-a14efad
 2 Nodes configured
 0 Resources configured

Check the corosync membership

[node1/node2]# pcs status corosync 
Membership information
----------------------
    Nodeid      Votes Name
         1          1 node1 (local)
         2          1 node2

III. Create the iSCSI shared storage (on the storage server)

1. Install scsi-target-utils from the EPEL repository

# yum --enablerepo=epel -y install scsi-target-utils

2. Append the following to /etc/tgt/targets.conf.
Copy the backing data to /data; the backing file is named volume here.

<target iqn.2015-12.condata:volume00>
    # provided device as an iSCSI target
    backing-store /data/volume    # path to the backing volume
    # iSCSI initiator IP addresses allowed to connect
    initiator-address 172.16.8.0/24    # network allowed to connect
    # authentication info (choose your own username and password)
    incominguser admin b021191eb4fb613a    # CHAP username and password; mind the password length (some initiators require 12-16 characters)
</target>

3. Configure SELinux (label the directory holding the backing store)

# chcon -R -t tgtd_var_lib_t /data
# semanage fcontext -a -t tgtd_var_lib_t /data

4. Open the firewall for the iSCSI target service

# firewall-cmd --add-service=iscsi-target --permanent 
# firewall-cmd --reload

5. Restart tgtd and enable it at boot

# systemctl restart tgtd 
# systemctl enable tgtd

6. Check the target status

# tgtadm --mode target --op show

IV. Connect to the shared storage
1. Configure the iSCSI initiator

[node1&node2]# yum -y install iscsi-initiator-utils
[node1&node2]# vi /etc/iscsi/initiatorname.iscsi
# Set this node's initiator name (IQN); it must be allowed by the target's ACL, otherwise the node cannot attach the LUN
InitiatorName=iqn.2014-07.world.srv:www

2. Edit /etc/iscsi/iscsid.conf

[node1&node2]# vi /etc/iscsi/iscsid.conf
# Uncomment (around line 57) to enable CHAP authentication
node.session.auth.authmethod = CHAP
# Set the username and password; they must match the incominguser entry on the target
node.session.auth.username = admin
node.session.auth.password = b021191eb4fb613a

3. Discover the target

[node1&node2]# iscsiadm -m discovery -t sendtargets -p 172.16.8.90
[  635.510656] iscsi: registered transport (tcp)
172.16.8.90:3260,1 iqn.2014-07.world.srv:storage.target00

4. Confirm the discovered node record

[node1&node2]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0.873-21
node.name = iqn.2014-07.world.srv:storage.target00
node.tpgt = 1
node.startup = automatic
node.leading_login = No
...
...
...
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD

5. Log in to the target

[node1&node2]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2014-07.world.srv:storage.target00, portal: 172.16.8.90,3260] (multiple)
[  708.383308] scsi2 : iSCSI Initiator over TCP/IP
[  709.393277] scsi 2:0:0:0: Direct-Access     LIO-ORG  disk01           4.0  PQ: 0 ANSI: 5
[  709.395709] scsi 2:0:0:0: alua: supports implicit and explicit TPGS
[  709.398155] scsi 2:0:0:0: alua: port group 00 rel port 01
[  709.399762] scsi 2:0:0:0: alua: port group 00 state A non-preferred supports TolUsNA
[  709.401763] scsi 2:0:0:0: alua: Attached
[  709.402910] scsi 2:0:0:0: Attached scsi generic sg0 type 0
Login to [iface: default, target: iqn.2014-07.world.srv:storage.target00, portal: 172.16.8.90,3260] successful.

6. Confirm the session is established

[node1&node2]# iscsiadm -m session -o show
tcp: [1] 172.16.8.90:3260,1 iqn.2014-07.world.srv:storage.target00 (non-flash)

7. Confirm the new disk appears

[node1&node2]# cat /proc/partitions
major minor  #blocks  name

 252        0   52428800 sda
 252        1     512000 sda1
 252        2   51915776 sda2
 253        0    4079616 dm-0
 253        1   47833088 dm-1
   8        0   20971520 sdb

V. Configure CLVM and GFS2
1. Install the fence agents, clustered LVM and GFS2 packages

[node1&node2 ~]# yum -y install fence-agents-all lvm2-cluster gfs2-utils

2. Enable clustered locking in LVM

[node1&node2 ~]# lvmconf --enable-cluster 

3. Reboot both nodes so the clustered locking takes effect

[node1&node2 ~]# reboot 

4. Configure the STONITH (fencing) device

[node1/node2 ~]# ll /dev/disk/by-id | grep sda 
lrwxrwxrwx 1 root root  9 Jul 10 11:44 scsi-0x6001405d8645f107126496380c1e145f -> ../../sda
lrwxrwxrwx 1 root root  9 Jul 10 11:44 wwn-0x6001405d8645f107126496380c1e145f -> ../../sda
[node1/node2 ~]# pcs stonith create scsi-shooter fence_scsi pcmk_host_list="node1 node2" pcmk_monitor_action="metadata" pcmk_reboot_action="off" \
   devices="/dev/disk/by-id/wwn-0x6001405d8645f107126496380c1e145f" meta provides="unfencing"

Set the no-quorum policy to freeze: if quorum is lost, resources are frozen in place rather than stopped

[node1/node2 ~]# pcs property set no-quorum-policy=freeze

Check the STONITH resource

[node1/node2 ~]# pcs stonith show scsi-shooter 
 Resource: scsi-shooter (class=stonith type=fence_scsi)
  Attributes: devices=/dev/disk/by-id/wwn-0x6001405189b893893594dffb3a2cb3e9
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (scsi-shooter-monitor-interval-60s)
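Step 5 below runs pvcreate on /dev/sdb1, but /proc/partitions earlier showed only the bare /dev/sdb. If the iSCSI disk has not been partitioned yet, a single partition spanning the disk can be created first (a sketch; run on one node only, device name /dev/sdb assumed):

```shell
[node1 ~]# parted --script /dev/sdb mklabel msdos
[node1 ~]# parted --script /dev/sdb mkpart primary 1MiB 100%
[node1 ~]# partprobe /dev/sdb    # make both nodes re-read the partition table
```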

5. Create the clustered LVM volume and format it as GFS2
Create the physical volume

[node1/node2 ~]# pvcreate /dev/sdb1 
Physical volume "/dev/sdb1" successfully created

Create the clustered volume group (-cy marks it as clustered)

[node1/node2 ~]# vgcreate -cy vg_cluster /dev/sdb1 
Clustered volume group "vg_cluster" successfully created

Create the logical volume

[node1/node2 ~]# lvcreate -l100%FREE -n lv_cluster vg_cluster 
Logical volume "lv_cluster" created.

Format it as a GFS2 filesystem

[node1/node2 ~]# mkfs.gfs2 -p lock_dlm -t ha_cluster:gfs2 -j 2 /dev/vg_cluster/lv_cluster 
/dev/vg_cluster/lv_cluster is a symbolic link to /dev/dm-3
This will destroy any data on /dev/dm-3
Are you sure you want to proceed? [y/n] y
Device:                    /dev/vg_cluster/lv_cluster
Block size:                4096
Device size:               0.99 GB (260096 blocks)
Filesystem size:           0.99 GB (260092 blocks)
Journals:                  2
Resource groups:           5
Locking protocol:          "lock_dlm"
Lock table:                "ha_cluster:gfs2"
UUID:                      cdda1b15-8c57-67a1-481f-4ad3bbeb1b2f

Parameter notes:
-p lock_dlm: use the DLM cluster lock protocol
-t ha_cluster:gfs2: lock table in the form clustername:lockspace; clustername must match the Pacemaker cluster name (ha_cluster)
-j 2: number of journals; two journals allow at most two nodes to mount. Add journals later with 'gfs2_jadd -j 1'
/dev/vg_cluster/lv_cluster: the logical volume to format
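The resource listing in step 6 shows dlm-clone and clvmd-clone already running, but their creation is not shown above. If those resources do not exist yet, they can be set up roughly as follows (a sketch using the standard controld and clvm agents; dlm must start before clvmd):

```shell
[node1/node2 ~]# pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[node1/node2 ~]# pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
[node1/node2 ~]# pcs constraint order start dlm-clone then clvmd-clone
[node1/node2 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
```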

6. Add the shared filesystem to the cluster as a resource

[node1/node2 ~]# pcs resource create fs_gfs2 Filesystem \
device="/dev/vg_cluster/lv_cluster" directory="/mnt" fstype="gfs2" \
options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
[node1/node2 ~]# pcs resource show 
 Clone Set: dlm-clone [dlm]
     Started: [ node01 ]
     Stopped: [ node02 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node01 ]
     Stopped: [ node02 ]
 Clone Set: fs_gfs2-clone [fs_gfs2]
     Started: [ node01 ]

7. Set the start order: clvmd before GFS2

[node1/node2 ~]# pcs constraint order start clvmd-clone then fs_gfs2-clone 
Adding clvmd-clone fs_gfs2-clone (kind: Mandatory) (Options: first-action=start then-action=start)

8. Colocate the GFS2 resource with clvmd

[node1/node2 ~]# pcs constraint colocation add fs_gfs2-clone with clvmd-clone 

9. Review the constraints

[node1/node2 ~]# pcs constraint show 
Location Constraints:
Ordering Constraints:
  start dlm-clone then start clvmd-clone (kind:Mandatory)
  start clvmd-clone then start fs_gfs2-clone (kind:Mandatory)
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY)
  fs_gfs2-clone with clvmd-clone (score:INFINITY)
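With all constraints in place, it is worth verifying that both nodes really share the same filesystem, for example by writing a file on one node and reading it from the other (a sketch; /mnt is the mountpoint configured above):

```shell
[node1 ~]# mount | grep /mnt                       # confirm the GFS2 mount is active
[node1 ~]# echo "hello from node1" > /mnt/test.txt
[node2 ~]# cat /mnt/test.txt                       # the file is immediately visible on node2
hello from node1
```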

References:

https://access.redhat.com/documentation/zh-CN/Red_Hat_Enterprise_Linux/6/html/Global_File_System_2/ch-overview-GFS2.html
