GlusterFS_Buglist

 


This is an original document. When reposting, please credit the source:

http://blog.sina.com.cn/s/blog_8c243ea30101kkiy.html

This document covers bugs found while using GlusterFS 3.4.1; the details below are the main content of the reports filed with Red Hat's Bugzilla.



Contents

GlusterFS_Buglist

1 Document notes

2 Buglist

2.1 AFR: cannot get volume status when one node down

2.2 AFR: change one file in one brick, prompt "[READ ERRORS]" when opening it in the client

2.3 AFR: lose files in one node, "ls" failed in the client, but open normally

2.4 AFR: "volume heal newvolume full" recovers file -- deleted file not copied from carbon node

 

1 Document notes

This document records the bugs I ran into while working with GlusterFS 3.4.1. I am posting them here to ask for help: has anyone else hit the same problems?

I would also like to confirm whether these come from my own misuse, or whether the system really has these bugs.

The following four bugs have been submitted to Red Hat's Bugzilla; feel free to take a look there as well.

Links to the four bug reports:

https://bugzilla.redhat.com/show_bug.cgi?id=1029482

https://bugzilla.redhat.com/show_bug.cgi?id=1029492

https://bugzilla.redhat.com/show_bug.cgi?id=1029496

https://bugzilla.redhat.com/show_bug.cgi?id=1029506

 

2 Buglist

2.1 AFR: cannot get volume status when one node down

Component: replicate

Version: 3.4.1 & 3.3.2

Bug Number: 1029482

 

Description of problem:

Volume status works when all nodes are up, but after breaking one or more nodes (cutting off the network or shutting the node down), the volume status can no longer be retrieved (command: gluster volume status).

 

Version-Release number of selected component (if applicable):

3.4.1 & 3.3.2

 

How reproducible:

always

 

Steps to Reproduce:

1.create an AFR volume and start it

2.gluster volume status (normal)

3.break one node (cut off the net: bring ethX down with "ifconfig ethX down")

4.gluster volume status (abnormal)
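The steps above can be sketched as a command transcript. These commands require a live GlusterFS cluster; the host names, brick path, and interface name are placeholders of mine, not from the report:

```shell
# Assumed layout: two nodes (node1, node2) replicating brick /bricks/b1
gluster volume create afr_vol replica 2 node1:/bricks/b1 node2:/bricks/b1
gluster volume start afr_vol

gluster volume status afr_vol   # works while every node is up

# On node2: simulate the failure by taking the interface down
ifconfig eth0 down

# Back on node1: per the report this now returns nothing, or
# "Another transaction is in progress. Please try again after sometime"
gluster volume status afr_vol
```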

 

Actual results:

cannot get anything, or "Another transaction is in progress. Please try again after sometime"

 

Expected results:

get something about volume status

 

Additional info:

Bug 807428 describes something similar, but this problem appears in both 3.4.1 and 3.3.2.

 

 

2.2 AFR: change one file in one brick, prompt "[READ ERRORS]" when opening it in the client

Component: replicate

Version: 3.4.1 & 3.3.2

Bug Number: 1029492

 

Description of problem:

AFR volume: change a file directly in one brick, then execute "gluster volume heal afr_vol full" (actually, it does not work). Opening this file in the client prompts "[READ ERRORS]".

 

Version-Release number of selected component (if applicable):

3.4.1 & 3.3.2

 

How reproducible:

always; I have tested it many times

 

Steps to Reproduce:

1.create an AFR volume: gluster volume create afr_vol replica 3 192.168.8.{80,81,82}:/mnt/sdb1

2.start it: gluster volume start afr_vol

3.change one file in 192.168.8.80:/mnt/sdb1 (just add or delete a row)

4.execute heal: gluster volume heal afr_vol full (it does not work, as you can see from 8.80:/mnt/sdb1)

5.open this file (using 'vim')
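The same steps as a transcript, run against a live cluster. The IPs and brick path follow the report; the file name and client mount point are placeholders of mine:

```shell
# Create and start the replica-3 volume from the report
gluster volume create afr_vol replica 3 192.168.8.{80,81,82}:/mnt/sdb1
gluster volume start afr_vol

# On 192.168.8.80: modify a file directly in the brick, bypassing GlusterFS
echo "extra row" >> /mnt/sdb1/testfile

# Trigger a full self-heal (per the report, the brick is not repaired)
gluster volume heal afr_vol full

# On a client with the volume mounted: vim shows "[READ ERRORS]"
vim /mnt/glusterfs/testfile
```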

 

Actual results:

prompt "[READ ERRORS]"

 

Expected results:

open file normally

 

Additional info:

"volume heal Volume_name full" does not work; I have tested it many times.

 

 

2.3 AFR: lose files in one node, "ls" failed in the client, but open normally

Component: replicate

Version: 3.4.1 & 3.3.2

Bug Number: 1029496

 

Description of problem:

AFR volume: create an AFR volume, then delete a file directly in one brick (the default node, i.e. the node the system gets metadata from). "ls" in the client fails to show the file, but the file still opens normally.

 

Version-Release number of selected component (if applicable):

3.4.1 & 3.3.2

 

How reproducible:

only when the file is deleted on a particular node (the default node)

 

Steps to Reproduce:

1.create an AFR volume

2.delete a file in the default node (the node the system gets metadata from)

3."ls" in the client
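A sketch of the client-side check, assuming a cluster like the one in the previous report; the brick path, mount point, and file name are placeholders of mine:

```shell
# On the default node: remove a file directly from the brick,
# bypassing GlusterFS
rm /mnt/sdb1/testfile

# On the client (volume mounted at /mnt/glusterfs, a placeholder path):
ls /mnt/glusterfs               # the deleted file is no longer listed ...

cat /mnt/glusterfs/testfile     # ... but it still opens by name,
                                # which also triggers self-heal
```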

 

Actual results:

the file is not listed, but it can still be used normally; accessing it also triggers self-heal

 

Expected results:

show this file

 

Additional info:

 

2.4 AFR: "volume heal newvolume full" recovers file -- deleted file not copied from carbon node

Component: replicate

Version: 3.4.1 & 3.3.2

Bug Number: 1029506

 

Description of problem:

AFR volume: create an AFR volume, then change a file in one brick (just add a row, so it is now a wrong file). I delete this wrong file from that brick and then execute "gluster volume heal afr_vol full". Guess what? Yes, it recovers a file in this brick, but ... the recovered file is the wrong one (the one changed by me), not a copy from the carbon node. Why?

 

Version-Release number of selected component (if applicable):

3.4.1 & 3.3.2

 

How reproducible:

always; I have tested it many times

 

Steps to Reproduce:

1.create an AFR volume: afr_vol

2.change a file in one brick (just add a row)

3.gluster volume heal afr_vol full

4.a file is recovered (the one changed by me, not a copy from the carbon node)
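The check can be sketched as follows, on a live cluster; the file name and paths are placeholders of mine ("carbon node" is the report's term for the node holding the good copy):

```shell
# On one brick: corrupt a replica, then delete the corrupted copy
echo "extra row" >> /mnt/sdb1/testfile   # now differs from the other replicas
rm /mnt/sdb1/testfile

# Ask GlusterFS to re-replicate the file
gluster volume heal afr_vol full

# Per the report, the file that reappears in this brick is the corrupted
# version; comparing checksums across bricks makes the mismatch visible
md5sum /mnt/sdb1/testfile
```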

 

Actual results:

the recovered file is the one changed by me

 

Expected results:

a correct file is recovered (a copy from the other node)

 

Additional info:

So why does this happen?

 

 

 
