Linux Software RAID --- mdadm

 

. Creating RAID Arrays

 

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb{1,2}

 

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb{1,2,3}

 

Add a hot spare:

mdadm /dev/md0 --add /dev/sdb3

 

Create RAID10 by nesting (two RAID1 pairs striped together):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb{1,2}

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb{3,4} -a yes

# mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md{0,1} -a yes
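Besides nesting RAID1 pairs under a striping layer, md also has a native RAID10 level that builds the equivalent array in one step. A minimal sketch that only prints the command rather than running it (--create needs root and real block devices; the /dev/sdb* names here are placeholders):

```shell
# Print (not run) the single-step equivalent of the nested RAID1+0 build.
# Device names are placeholders; run the emitted command as root.
mk_raid10_cmd() {
    local md=$1; shift
    printf 'mdadm --create %s --level=10 --raid-devices=%d -a yes %s\n' \
        "$md" "$#" "$*"
}
mk_raid10_cmd /dev/md0 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4
```

Native RAID10 avoids managing three separate md devices and lets the kernel rebuild members directly.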

 

 

. /proc/mdstat

[root@station20 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sdb10[3] sdb9[1] sdb8[0]

      196224 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

      [===============>.....]  recovery = 79.1% (78592/98112) finish=0.0min speed=4136K/sec

md0 : active raid1 sdb7[1] sdb6[0]

      98112 blocks [2/2] [UU]

      bitmap: 0/12 pages [0KB], 4KB chunk

unused devices: <none>
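The [3/2] [UU_] status above is the quickest way to spot trouble: an underscore marks a missing or failed member. A small sketch that scans mdstat-style text for degraded arrays (the sample string mirrors the listing above):

```shell
# Flag degraded arrays: a '_' inside the [UU_] status string means a
# missing or failed member. The sample input mirrors the listing above.
mdstat='md1 : active raid5 sdb10[3] sdb9[1] sdb8[0]
      196224 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
md0 : active raid1 sdb7[1] sdb6[0]
      98112 blocks [2/2] [UU]'
printf '%s\n' "$mdstat" | awk '
/^md/        { dev = $1 }                  # remember current array name
/\[[U_]+\]$/ { if ($NF ~ /_/) print dev " degraded " $NF }'
# → md1 degraded [UU_]
```

On a live system you would feed it `cat /proc/mdstat` instead of the sample variable.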

 

 

. Bitmaps: --bitmap=internal

How it works:

In mdadm, a write-intent bitmap records which parts of a RAID array have changed since the last synchronization, i.e. which blocks still need to be resynced; the array periodically flushes this information to the bitmap. Normally a RAID array goes through a complete resync after an unclean restart; with a bitmap, only the data that was actually modified is resynced. Likewise, if a disk is removed from the array, the bitmap is not cleared, so when that disk is re-added only the changed data is synced. A bitmap therefore greatly reduces resync time. It is usually stored inside the array's on-disk metadata (internal), or alternatively in an external file.

Note: a bitmap is only meaningful for redundant levels such as RAID1; it does nothing for RAID0.

Example: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb{1,2} -a yes -b internal

 

Enable a bitmap on an existing RAID1:

mdadm --grow /dev/md0 --bitmap=internal
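--grow can also take the bitmap away again with --bitmap=none. A sketch that only prints the two commands for review (running them needs root and a real array; /dev/md0 is a placeholder):

```shell
# Print the commands for toggling an internal write-intent bitmap on an
# existing array; /dev/md0 is a placeholder, run the output as root.
bitmap_cmd() { printf 'mdadm --grow %s --bitmap=%s\n' "$1" "$2"; }
bitmap_cmd /dev/md0 internal   # add a bitmap
bitmap_cmd /dev/md0 none       # remove it again
```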

 

 

. Shared Hot Spares and Mail Notification

 

[root@server109 ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda{5,6} -a yes -b internal

[root@server109 ~]# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda{7,8,9} -a yes

[root@server109 ~]# mdadm /dev/md0 --add /dev/sda10

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda9[2] sda8[1] sda7[0]

      196736 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid1 sda10[2](S) sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

unused devices: <none>

 

[root@server109 ~]# mdadm --examine --scan > /etc/mdadm.conf

[root@server109 ~]# cat /etc/mdadm.conf

ARRAY /dev/md1 level=raid5 num-devices=3 UUID=891d6352:a0a4efff:4f162d90:c3500453

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b070e059:fe2cf975:aac92394:e103a46d

   spares=1

 

A configuration that enables spare sharing and mail notification looks like this (edit mdadm.conf directly):

[root@server109 ~]# cat /etc/mdadm.conf

## Share Hot Spares

ARRAY /dev/md1 level=raid5 num-devices=3 UUID=891d6352:a0a4efff:4f162d90:c3500453 spare-group=1

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b070e059:fe2cf975:aac92394:e103a46d spare-group=1

   spares=1

## Mail Notification

MAILFROM root@localhost            ## sender address; defaults to root if omitted

MAILADDR raider@localhost          ## recipient address
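The hand edit above (appending spare-group=1 to each ARRAY line) can be scripted; a sketch using sed on sample text mirroring the scanned config:

```shell
# Append spare-group=1 to every ARRAY line of a scanned config; the
# sample text mirrors the mdadm --examine --scan output shown above.
scan='ARRAY /dev/md1 level=raid5 num-devices=3 UUID=891d6352:a0a4efff:4f162d90:c3500453
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b070e059:fe2cf975:aac92394:e103a46d'
printf '%s\n' "$scan" | sed 's/^ARRAY .*/& spare-group=1/'
```

Both ARRAY lines come out with " spare-group=1" appended; in practice you would run the sed over /etc/mdadm.conf itself.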

 

[root@server109 ~]# /etc/init.d/mdmonitor start

[root@server109 ~]# useradd raider

[root@server109 ~]# echo redhat | passwd --stdin raider

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda9[2] sda8[1] sda7[0]

      196736 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid1 sda10[2](S) sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

unused devices: <none>

[root@server109 ~]# mdadm /dev/md1 -f /dev/sda7 -r /dev/sda7

mdadm: set /dev/sda7 faulty in /dev/md1

mdadm: hot removed /dev/sda7

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda10[3] sda9[2] sda8[1]

      196736 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

      [====>................]  recovery = 24.7% (25472/98368) finish=0.1min speed=6368K/sec

md0 : active raid1 sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

 

[root@server109 ~]# mail -u raider

Mail version 8.1 6/6/93.  Type ? for help.

"/var/mail/raider": 1 message 1 new

>N  1 root@server109.examp  Tue Jan 4 04:28  35/1262  "Fail event on /dev/md1:server109.example.com"

& 1

Message 1:

From root@server109.example.com  Tue Jan  4 04:28:48 2011

Date: Tue, 4 Jan 2011 04:28:47 +0100

From: root@server109.example.com

To: raider@server109.example.com

Subject: Fail event on /dev/md1:server109.example.com

 

.........................

A Fail event had been detected on md device /dev/md1.

It could be related to component device /dev/sda7.

 

...............................

 

 

. Growing a RAID Array: --grow

What if the array runs out of space one day? How do we add capacity to the RAID?

 

The steps for adding a new disk are:

1. Add the new disk to the active 3-device RAID5 (it starts as a spare):

mdadm /dev/md0 --add /dev/hda8

2. Reshape the RAID5:

mdadm --grow /dev/md0 --raid-devices=4

3. Monitor the reshaping process and the estimated time to finish:

watch -n 1 'cat /proc/mdstat'

4. Expand the filesystem to fill the new space:

resize2fs /dev/md0
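The steps above can be collected into a helper that prints the sequence for review before running it as root (the device names are the ones from the example):

```shell
# Print the three-command grow sequence for review before running it as
# root; /dev/md0, /dev/hda8 and the new member count come from the steps.
grow_raid5() {
    printf 'mdadm %s --add %s\n' "$1" "$2"
    printf 'mdadm --grow %s --raid-devices=%s\n' "$1" "$3"
    printf 'resize2fs %s\n' "$1"
}
grow_raid5 /dev/md0 /dev/hda8 4
```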

 

[root@server109 ~]# mdadm /dev/md1 --add /dev/sda11

[root@server109 ~]# mdadm /dev/md1 --grow --raid-devices=5

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda11[4] sda7[3] sda10[0] sda9[2] sda8[1]

      295104 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

      [>....................]  reshape = 2.0% (2812/98368) finish=1.0min speed=1406K/sec

md0 : active raid1 sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

unused devices: <none>

 

[root@server109 ~]# resize2fs /dev/md1

 

 

. RAID Recovery

The OS lives on its own disk and the data on a RAID5 array. After reinstalling the OS, how do we recover the RAID?

 

[root@server109 ~]# mdadm --examine /dev/sda8

/dev/sda8:

          Magic : a92b4efc

        Version : 0.90.00

           UUID :891d6352:a0a4efff:4f162d90:c3500453

  Creation Time : Tue Jan  4 04:18:45 2011

     Raid Level : raid5

  Used Dev Size : 98368 (96.08 MiB 100.73 MB)

     Array Size : 393472 (384.31 MiB 402.92 MB)

   Raid Devices : 5

  Total Devices : 5

Preferred Minor : 1

 

    Update Time : Tue Jan  4 05:17:52 2011

          State : clean

 Active Devices : 5

Working Devices : 5

 Failed Devices : 0

  Spare Devices : 0

       Checksum : 7f9b882a - correct

         Events : 206

 

         Layout : left-symmetric

     Chunk Size : 64K

 

      Number  Major   Minor   RaidDevice State

this     1      8        8        1     active sync   /dev/sda8

 

   0    0       8       10       0      active sync   /dev/sda10

   1    1       8        8       1      active sync   /dev/sda8

   2    2       8        9       2      active sync   /dev/sda9

   3    3       8        7       3      active sync   /dev/sda7

   4    4       8       11       4      active sync   /dev/sda11

 

[root@server109 ~]# mdadm -A /dev/md1 /dev/sda{7,8,9,10,11}

mdadm: /dev/md1 has been started with 5 drives.

[root@server109 ~]# mount /dev/md1 /mnt/
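As a sanity check on the metadata above: a RAID5 array's usable size is (number of members - 1) × member size, which matches the Array Size reported by --examine:

```shell
# Sanity-check the metadata: RAID5 usable size = (members - 1) * member size.
# Figures come from the --examine output (5 members of 98368 1K-blocks each).
devs=5
dev_size=98368
echo $(( (devs - 1) * dev_size ))   # → 393472, the reported Array Size
```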

 

. Renaming a RAID Array

Rename /dev/md1 to /dev/md3:

 

[root@server109 ~]# umount /mnt/

[root@server109 ~]# mdadm --stop /dev/md1

[root@server109 ~]# mdadm --assemble /dev/md3 --super-minor=1 --update=super-minor /dev/sda{7,8,9,10,11}

Note: the "1" in --super-minor=1 must match /dev/md1; if you were renaming /dev/md0, it would be --super-minor=0.
