How GlusterFS Prevents Split-Brain

Split-brain, simply put, is what happens when the link between two nodes is cut: process A writes to server1, process B writes to server2, each side keeps writing on its own, and each records itself as correct and the other as wrong. Once split-brain has occurred, only an administrator can judge and repair the damage by hand, so GlusterFS uses a quorum mechanism to try to prevent split-brain from happening in the first place.

In what scenario does split-brain occur? Say we have a replica-2 volume, with one copy on Node1 and one on Node2. As we know, after a replica is written, all the other replicas must be notified that the write completed, and each replica records that acknowledgement (see 《GlusterFS数据恢复机制AFR》 for details). If the network between Node1 and Node2 goes down at that moment, Node1 cannot notify Node2, so Node2 concludes: "my write completed, Node1 never reported in, so Node1 must be bad." Symmetrically, Node2 cannot report its completed write to Node1, and Node1 draws the same conclusion about Node2. Since each node's connection to its own clients is still fine, both return success, and reads and writes appear to proceed normally. But the next time the file is accessed, Node1 and Node2 each check their changelog, each finds itself clean and the other accused, and each tries to repair the other with its own contents: that is split-brain. Access to a split-brained file generally fails, typically with an input/output error.
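
As a rough sketch of how such a file shows up in practice (the volume name rep-vol and the brick path here are made up), you can ask the self-heal daemon to list files in split-brain, or inspect the AFR changelog xattrs directly on a brick:

# gluster volume heal rep-vol info split-brain
# getfattr -d -m . -e hex /bricks/brick1/path/to/file

If the trusted.afr.* counters on each brick accuse the other of pending operations, the file is in split-brain, and reading it through the mount point fails with input/output error.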

Now to the quorum mechanism. Simply put, quorum sets a minimum number of replicas a write must succeed on: for example, with 3 replicas a write must succeed on at least 2, otherwise the client gets a read-only error; with 2 replicas, at least 1; and so on.
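
The write-side check described above is the client-side (AFR) quorum, configured per volume with cluster.quorum-type and cluster.quorum-count; a minimal sketch, assuming a replica-3 volume named rep-vol:

# gluster volume set rep-vol cluster.quorum-type auto
# gluster volume set rep-vol cluster.quorum-type fixed
# gluster volume set rep-vol cluster.quorum-count 2

With auto, writes are allowed only while a majority of the bricks in each replica set are reachable; with fixed, the required number is whatever cluster.quorum-count says. When client quorum is lost, writes fail with a read-only error instead of risking a split-brain write.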

The server-side quorum mechanism runs inside glusterd, the management daemon on each server. The quorum ratio is configurable; if it is not met, the node's bricks are killed and no management commands can run on it, including adding or removing peers. It is configured as follows:

# gluster volume set <volname> cluster.server-quorum-type none/server

# gluster volume set all cluster.server-quorum-ratio <percentage%>
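
For example, to enforce server-side quorum on a volume (the name test-vol is made up), require 51% of the peers to be alive cluster-wide, and then check which bricks are still online:

# gluster volume set test-vol cluster.server-quorum-type server
# gluster volume set all cluster.server-quorum-ratio 51%
# gluster volume status test-vol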

The following test cases can be used to confirm that quorum is working (this list comes from the upstream English test plan):

1) If the quorum options are not enabled, there should be no change in glusterd's behavior. (The feature itself is reportedly available from GlusterFS 3.4 on.)

2) Check that volume set works for the following options:

cluster.server-quorum-type: none/server (the "auto" type you may see elsewhere belongs to the client-side cluster.quorum-type option, not to server quorum)

cluster.server-quorum-ratio: a floating-point percentage greater than 50%. If no ratio is set, quorum means active_peer_count > 50% of all peers in the cluster; when a percentage P is set, it means active_peer_count >= P% of all befriended peers (see the worked example after this list).

3) Check that cluster.server-quorum-type defaults to none for a volume.

4) Check that volume set/reset with "all" works for cluster.server-quorum-ratio; it is the only option that may be set for all volumes at once.

5) With quorum disabled, keep triggering network disconnections between peers and observe that the bricks neither go down nor come back up.

6) With quorum enabled, keep triggering network disconnections between peers and observe that the bricks go down and come back up as quorum is lost and regained.

7) With quorum disabled, keep bringing down just the glusterd processes and check that the bricks are not affected.

8) With quorum enabled, keep bringing down just the glusterd processes; the bricks should go down once quorum is no longer met.

NOTE: glusterd not running and the network between two machines being down are treated the same way.

9) Check that on a machine where quorum is not met, volume updates are not allowed.

10) Check that peer probe/detach are not allowed on a machine where quorum is not met.

11) Check that when a machine is rebooted with quorum enabled on a volume, its bricks do not come up until quorum is met.

12) With quorum disabled, bricks should come up as soon as glusterd comes up.

13) Check glusterd volume/peer operations while the quorum status of the peers is still being initialized.

14) Kill glusterd on one machine (call it M1), then keep killing glusterd on other machines until M1 would lose quorum. Bring glusterd on M1 back up; the bricks on M1 should not be running once glusterd is back.

15) With quorum disabled, kill glusterd and bring it back up; the bricks on that machine should not see any restarts.

16) Check that peer detach with the force option removes the peer even when quorum is not met.

17) Check that the store file /var/lib/glusterd/options is updated by the volume set/reset all commands (see the sketch after this list).

18) Peer probe/detach should reflect the all-volume options; that is, a newly probed peer should end up with cluster-wide settings such as cluster.server-quorum-ratio.

19) Check storing and restoring of the all-volume options.

20) volume status should keep working even when quorum is not met.

21) volume set/reset of the quorum options should keep working even when quorum is not met, so that the system can be brought out of a quorum-locked state in desperate circumstances.

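A minimal sketch of what some of these checks look like from the shell (host and volume names are made up; the commands themselves are stock gluster CLI):

# gluster peer detach server2          # rejected while quorum is not met (item 10)
# gluster peer detach server2 force    # succeeds even without quorum (item 16)
# gluster volume status test-vol       # keeps working without quorum (item 20)
# cat /var/lib/glusterd/options        # reflects "volume set all" updates (item 17)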
