Original post: http://www.linuxquestions.org/questions/linux-general-1/md-kicking-non-fresh-sda6-from-array-416853/
md: kicking non-fresh sda6 from array!
Hello,
I have some raid1 failures on my computer. How can I fix this? # dmesg | grep md ata1: SATA max UDMA/133 cmd 0xBC00 ctl 0xB882 bmdma 0xB400 irq 193 ata2: SATA max UDMA/133 cmd 0xB800 ctl 0xB482 bmdma 0xB408 irq 193 md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: raid1 personality registered as nr 3 md: md2 stopped. md: bind<sdb9> md: bind<sda9> raid1: raid set md2 active with 2 out of 2 mirrors md: md1 stopped. md: bind<sda6> md: bind<sdb6> md: kicking non-fresh sda6 from array! md: unbind<sda6> md: export_rdev(sda6) raid1: raid set md1 active with 1 out of 2 mirrors md: md0 stopped. md: bind<sda5> md: bind<sdb5> md: kicking non-fresh sda5 from array! md: unbind<sda5> md: export_rdev(sda5) raid1: raid set md0 active with 1 out of 2 mirrors EXT3 FS on md2, internal journal EXT3 FS on md0, internal journal EXT3 FS on md1, internal journal # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb5[1] 4883648 blocks [2/1] [_U] md1 : active raid1 sdb6[1] 51761280 blocks [2/1] [_U] md2 : active raid1 sda9[0] sdb9[1] 102799808 blocks [2/2] [UU] unused devices: <none> # e2fsck /dev/sda5 e2fsck 1.37 (21-Mar-2005) /usr: clean, 18653/610432 files, 96758/1220912 blocks (check in 3 mounts) # e2fsck /dev/sda6 e2fsck 1.37 (21-Mar-2005) /var: clean, 7938/6471680 files, 350458/12940320 blocks (check in 3 mounts) | ||
#2
Senior Member
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86, x86_64, PPC
Posts: 3,506
This can happen after an unclean shutdown (like a power fail). Usually removing and re-adding the problem devices will correct the situation:
```
/sbin/mdadm /dev/md0 --fail /dev/sda5 --remove /dev/sda5
/sbin/mdadm /dev/md0 --add /dev/sda5
/sbin/mdadm /dev/md1 --fail /dev/sda6 --remove /dev/sda6
/sbin/mdadm /dev/md1 --add /dev/sda6
```
#3
Newbie
Registered: Dec 2005
Posts: 4
Yes, that is exactly what happened. There was a problem with a UPS.
Problem solved and everyone happy. Thanks!
#4
LQ Newbie
Registered: Jan 2007
Posts: 6
Quote:
CD
#6
LQ Newbie
Registered: Jul 2006
Posts: 8
Same here. This thread saved my day!
Now my raid is syncing since sda6 and sda5 failed.

```
Personalities : [raid1]
md0 : active raid1 sda6[2] sdb6[1]
      238275968 blocks [2/1] [_U]
      [==>..................]  recovery = 10.2% (24469056/238275968) finish=64.3min speed=55398K/sec
md2 : active raid1 sda5[0] sdb5[1]
      5855552 blocks [2/2] [UU]
```

Last edited by jostmart : 07-29-2007 at 07:36 AM.
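For anyone curious where the `finish=` figure comes from: it is simply the remaining blocks divided by the current speed. A quick sanity check using the numbers from the recovery line above (plain arithmetic, not an mdadm feature; block counts are in 1K units and speed is in K/sec):

```shell
# Figures taken from the /proc/mdstat recovery line above.
total=238275968       # total blocks to sync
done_blocks=24469056  # blocks synced so far (10.2%)
speed=55398           # current sync speed, K/sec

remaining=$(( total - done_blocks ))
minutes=$(( remaining / speed / 60 ))
echo "${minutes} min remaining"   # prints: 64 min remaining
```

This matches the kernel's own `finish=64.3min` estimate.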