Replacing failed RAID disks on RHEL 5
I. Replacing a failed disk in a RAID 5 array:
[root@server4 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 11 21:52:53 2009
Raid Level : raid5
Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:12:39 2009
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 9b96b4e8:53d431df:9e9c063d:285a9865
Events : 0.34
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 8 51 2 active sync /dev/sdd3
[root@server4 ~]# mdadm --fail /dev/md0 /dev/sdd3
mdadm: set /dev/sdd3 faulty in /dev/md0
[root@server4 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 11 21:52:53 2009
Raid Level : raid5
Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:13:12 2009
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 9b96b4e8:53d431df:9e9c063d:285a9865
Events : 0.36
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 0 0 2 removed
3 8 51 - faulty spare /dev/sdd3
[root@server4 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md4 : active raid5 sde5[6](S) sdb1[0] sdd2[5] sdd1[4] sdc2[3] sdc1[2] sdb2[1]
10040000 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid1 sdc5[1] sdb5[0]
2008000 blocks [2/2] [UU]
md2 : active raid1 sdc6[1] sdb6[0]
2449792 blocks [2/2] [UU]
md3 : active raid1 sdd6[1] sdd5[0]
2008000 blocks [2/2] [UU]
md0 : active raid5 sdd3[3](F) sdc3[1] sdb3[0]
4016000 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>
[root@server4 ~]# mdadm --remove /dev/md0 /dev/sdd3
mdadm: hot removed /dev/sdd3
[root@server4 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 11 21:52:53 2009
Raid Level : raid5
Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:14:22 2009
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 9b96b4e8:53d431df:9e9c063d:285a9865
Events : 0.38
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 0 0 2 removed
[root@server4 ~]# mdadm -a /dev/md0 /dev/sde3
mdadm: re-added /dev/sde3
[root@server4 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md4 : active raid5 sde5[6](S) sdb1[0] sdd2[5] sdd1[4] sdc2[3] sdc1[2] sdb2[1]
10040000 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid1 sdc5[1] sdb5[0]
2008000 blocks [2/2] [UU]
md2 : active raid1 sdc6[1] sdb6[0]
2449792 blocks [2/2] [UU]
md3 : active raid1 sdd6[1] sdd5[0]
2008000 blocks [2/2] [UU]
md0 : active raid5 sde3[3] sdc3[1] sdb3[0]
4016000 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[=>...................] recovery = 8.8% (178320/2008000) finish=1.0min speed=29720K/sec
unused devices: <none>
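The replacement shown above is always the same three mdadm calls: mark the member faulty, hot-remove it, then add the replacement, at which point the rebuild starts on its own. A minimal helper sketch (device names are whatever your system uses; the DRY_RUN switch is only there so the sequence can be previewed without root):

```shell
# replace_md_disk: the fail -> remove -> add sequence shown above, as one helper.
# Set DRY_RUN=1 to echo the commands instead of executing them.
replace_md_disk() {
    array=$1 bad=$2 new=$3
    for cmd in "mdadm --fail $array $bad" \
               "mdadm --remove $array $bad" \
               "mdadm --add $array $new"; do
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "$cmd"       # preview only
        else
            $cmd || return 1  # stop if any step fails
        fi
    done
}

# Example (dry run; prints the three mdadm commands without touching the array):
DRY_RUN=1 replace_md_disk /dev/md0 /dev/sdd3 /dev/sde3
```

The same sequence works unchanged for the RAID 1 arrays below; only the array and partition names differ.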
II. Replacing a failed disk in a RAID 1 array:
[root@server4 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Wed Mar 11 21:54:15 2009
Raid Level : raid1
Array Size : 2449792 (2.34 GiB 2.51 GB)
Used Dev Size : 2449792 (2.34 GiB 2.51 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 12 18:58:41 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 293a50bd:c8630ee3:4e5c7694:6db62838
Events : 0.14
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
1 8 38 1 active sync /dev/sdc6
[root@server4 ~]# mdadm --fail /dev/md2 /dev/sdc6
mdadm: set /dev/sdc6 faulty in /dev/md2
[root@server4 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Wed Mar 11 21:54:15 2009
Raid Level : raid1
Array Size : 2449792 (2.34 GiB 2.51 GB)
Used Dev Size : 2449792 (2.34 GiB 2.51 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:16:27 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
UUID : 293a50bd:c8630ee3:4e5c7694:6db62838
Events : 0.16
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
2 8 38 - faulty spare /dev/sdc6
[root@server4 ~]# mdadm --remove /dev/md2 /dev/sdc6
mdadm: hot removed /dev/sdc6
[root@server4 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Wed Mar 11 21:54:15 2009
Raid Level : raid1
Array Size : 2449792 (2.34 GiB 2.51 GB)
Used Dev Size : 2449792 (2.34 GiB 2.51 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:17:02 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 293a50bd:c8630ee3:4e5c7694:6db62838
Events : 0.18
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
1 0 0 1 removed
[root@server4 ~]# mdadm -a /dev/md2 /dev/sde6
mdadm: added /dev/sde6
[root@server4 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Wed Mar 11 21:54:15 2009
Raid Level : raid1
Array Size : 2449792 (2.34 GiB 2.51 GB)
Used Dev Size : 2449792 (2.34 GiB 2.51 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:19:09 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 293a50bd:c8630ee3:4e5c7694:6db62838
Events : 0.20
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
1 8 70 1 active sync /dev/sde6
III. Adding hot spares:
1. Adding a hot spare to a RAID 5 array:
[root@server4 ~]# mdadm -a /dev/md0 /dev/sde3
mdadm: added /dev/sde3
[root@server4 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 11 21:52:53 2009
Raid Level : raid5
Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:23:16 2009
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 9b96b4e8:53d431df:9e9c063d:285a9865
Events : 0.46
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 8 51 2 active sync /dev/sdd3
3 8 67 - spare /dev/sde3
[root@server4 ~]# mdadm -a /dev/md4 /dev/sde5
mdadm: re-added /dev/sde5
[root@server4 ~]# mdadm -D /dev/md4
/dev/md4:
Version : 00.90.03
Creation Time : Wed Mar 11 22:51:27 2009
Raid Level : raid5
Array Size : 10040000 (9.57 GiB 10.28 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 6
Total Devices : 7
Preferred Minor : 4
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:27:16 2009
State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 333d111c:6f4268a8:cd33842a:5daf20b9
Events : 0.6
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
2 8 33 2 active sync /dev/sdc1
3 8 34 3 active sync /dev/sdc2
4 8 49 4 active sync /dev/sdd1
5 8 50 5 active sync /dev/sdd2
6 8 69 - spare /dev/sde5
2. Adding a hot spare to a RAID 1 array:
[root@server4 ~]# mdadm -a /dev/md2 /dev/sde6
mdadm: added /dev/sde6
[root@server4 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Wed Mar 11 21:54:15 2009
Raid Level : raid1
Array Size : 2449792 (2.34 GiB 2.51 GB)
Used Dev Size : 2449792 (2.34 GiB 2.51 GB)
Raid Devices : 2
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:20:49 2009
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
UUID : 293a50bd:c8630ee3:4e5c7694:6db62838
Events : 0.26
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
1 8 38 1 active sync /dev/sdc6
2 8 70 - spare /dev/sde6
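Hot spares are more useful when the arrays are also recorded in /etc/mdadm.conf: arrays given the same spare-group can share their spares when mdadm runs in monitor mode. A sketch of the relevant fragment, using the UUIDs reported in the output above (the group name "local" is arbitrary):

```
# /etc/mdadm.conf (fragment) -- UUIDs taken from the mdadm -D output above
DEVICE partitions
ARRAY /dev/md0 UUID=9b96b4e8:53d431df:9e9c063d:285a9865 spare-group=local
ARRAY /dev/md2 UUID=293a50bd:c8630ee3:4e5c7694:6db62838 spare-group=local
MAILADDR root
```

With this in place, running mdadm in monitor mode (mdadm --monitor --scan --daemonise) mails failure events to MAILADDR and can move an idle spare from one array in the group to another that has lost a disk.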
The following shows that when a member disk fails, the hot spare automatically takes over for the failed disk:
[root@server4 ~]# mdadm --fail /dev/md2 /dev/sdc6
mdadm: set /dev/sdc6 faulty in /dev/md2
[root@server4 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Wed Mar 11 21:54:15 2009
Raid Level : raid1
Array Size : 2449792 (2.34 GiB 2.51 GB)
Used Dev Size : 2449792 (2.34 GiB 2.51 GB)
Raid Devices : 2
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:29:25 2009
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1
Rebuild Status : 32% complete
UUID : 293a50bd:c8630ee3:4e5c7694:6db62838
Events : 0.28
Number Major Minor RaidDevice State
0 8 22 0 active sync /dev/sdb6
2 8 70 1 spare rebuilding /dev/sde6
3 8 38 - faulty spare /dev/sdc6
[root@server4 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 11 21:52:53 2009
Raid Level : raid5
Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:23:16 2009
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 9b96b4e8:53d431df:9e9c063d:285a9865
Events : 0.46
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 8 51 2 active sync /dev/sdd3
3 8 67 - spare /dev/sde3
[root@server4 ~]# mdadm --fail /dev/md0 /dev/sdd3
mdadm: set /dev/sdd3 faulty in /dev/md0
[root@server4 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md4 : active raid5 sde5[6](S) sdb1[0] sdd2[5] sdd1[4] sdc2[3] sdc1[2] sdb2[1]
10040000 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid1 sdc5[1] sdb5[0]
2008000 blocks [2/2] [UU]
md2 : active raid1 sde6[1] sdc6[2](F) sdb6[0]
2449792 blocks [2/2] [UU]
md3 : active raid1 sdd6[1] sdd5[0]
2008000 blocks [2/2] [UU]
md0 : active raid5 sde3[3] sdd3[4](F) sdc3[1] sdb3[0]
4016000 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[=>...................] recovery = 7.3% (149456/2008000) finish=1.2min speed=24909K/sec
unused devices: <none>
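In the /proc/mdstat output above, "[3/2] [UU_]" means only 2 of the 3 members are active; each underscore in the bracketed bitmap marks a missing or failed member. That makes mdstat easy to check from a script. A minimal sketch, here fed sample lines from the transcript rather than the live file:

```shell
# degraded_arrays: read /proc/mdstat-format text on stdin and print the name
# of every md device whose status bitmap (e.g. [UU_]) shows a missing member.
degraded_arrays() {
    awk '/^md/ { dev = $1 }
         /blocks/ && /\[[U_]+\]/ && /_/ { print dev }'
}

# Example, using status lines from the transcript above:
printf '%s\n' \
  'md2 : active raid1 sde6[1] sdc6[2](F) sdb6[0]' \
  '      2449792 blocks [2/2] [UU]' \
  'md0 : active raid5 sde3[3] sdd3[4](F) sdc3[1] sdb3[0]' \
  '      4016000 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]' \
  | degraded_arrays            # prints: md0

# In real use: degraded_arrays < /proc/mdstat
```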
[root@server4 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Mar 11 21:52:53 2009
Raid Level : raid5
Array Size : 4016000 (3.83 GiB 4.11 GB)
Used Dev Size : 2008000 (1961.27 MiB 2056.19 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Mar 12 19:32:20 2009
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 35% complete
UUID : 9b96b4e8:53d431df:9e9c063d:285a9865
Events : 0.48
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
3 8 67 2 spare rebuilding /dev/sde3
4 8 51 - faulty spare /dev/sdd3
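Once the rebuild above completes, the old disk is still attached to the array as a "faulty spare" and should be hot-removed with the same --remove command used earlier before the hardware is pulled. A small sketch that polls for the rebuild to finish first (the polling interval is arbitrary, and the MDSTAT override exists only so the sketch can be exercised against a sample file instead of the real /proc/mdstat):

```shell
# wait_and_remove: wait until /proc/mdstat no longer reports a recovery in
# progress, then hot-remove the old faulty member.
# DRY_RUN=1 echoes the final command instead of executing it.
wait_and_remove() {
    array=$1 bad=$2
    while grep -q 'recovery' "${MDSTAT:-/proc/mdstat}"; do
        sleep 10   # polling interval is arbitrary
    done
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "mdadm --remove $array $bad"
    else
        mdadm --remove "$array" "$bad"
    fi
}

# Example: wait_and_remove /dev/md0 /dev/sdd3
```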