RAID Explained

RAID: Redundant Array of Independent Disks

1. RAID Concepts and Principles

1.1 The Concept of RAID

A disk array (Redundant Array of Independent Disks, RAID) means, literally, "an array built from independent disks, with redundancy".

A disk array combines many independent disks into one large-capacity disk group, and uses the combined effect of the individual disks to raise the performance of the whole disk system. With this technique, data is cut into many segments that are stored on the different drives.

A disk array can also apply the idea of a parity check (Parity Check): when any single drive in the array fails, the data can still be read. During data reconstruction, the data is recomputed and written onto the replacement drive.
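The parity idea can be sketched with XOR, which is what single-parity RAID actually uses: the parity block is the XOR of the data blocks, so any one lost block can be recomputed from the survivors. A minimal sketch (the "disk" contents are made-up example bytes):

```python
# Parity via XOR: parity = d0 ^ d1 ^ ... ^ dn-1.
# If any single block is lost, it can be rebuilt by XOR-ing the parity
# with all the surviving blocks.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

disk0 = b"\x11\x22\x33"
disk1 = b"\x44\x55\x66"
disk2 = b"\x77\x88\x99"
parity = xor_blocks([disk0, disk1, disk2])   # stored on a fourth disk

# Simulate losing disk1: rebuild it from parity plus the surviving disks.
rebuilt = xor_blocks([parity, disk0, disk2])
assert rebuilt == disk1
print("disk1 rebuilt:", rebuilt.hex())
```

This is exactly why losing one disk of a parity-protected array loses no data, while losing a second one (with only single parity) does.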

1.2 Main Functions of RAID

(1) Striping the data across disks allows block-wise access, reduces mechanical seek time, and increases data access speed.

(2) Reading from several disks of the array at the same time reduces seek time further and increases data access speed.

(3) Mirroring, or storing parity information, provides redundant protection for the data.

1.3 Classification by Implementation

(1) Software RAID: implemented by algorithms in the operating system itself. It consumes the host's own resources and is not suitable for large, compute-intensive workloads.

(2) Hardware RAID: built on a dedicated physical RAID controller. It does not consume the operating system's resources and performs better.
1.3.1 Hardware RAID cards

A RAID card implements the RAID function in dedicated hardware. To do RAID in hardware you need a physical carrier, and a SCSI card or the motherboard's south bridge is that carrier. RAID cards come as PCI-E plug-in cards or integrated directly on the motherboard, but their function is the same: the card carries its own compute resources (CPU, memory, I/O bus, and so on) dedicated to the RAID work.

Advantages of hardware RAID:

1. Because the RAID work is done entirely by the RAID card and no longer depends on the underlying operating system, performance is better.

2. With a PCI-E RAID card you can swap in a more capable card for better, more stable I/O performance, which is more flexible.

Disadvantages of hardware RAID:

1. The RAID card is a separate device; upgrading means buying new hardware.

2. Software RAID writes its metadata on the disks themselves, which makes migration easy; hardware RAID keeps that information on the card, so if the card fails the system may no longer recognize the disks.

Software RAID vs. hardware RAID: which should you choose?

Choosing between software RAID and hardware RAID comes down to what you need to do and what it costs.

If your budget is tight and you are using RAID 0 or RAID 1, there is little difference between software and hardware RAID. If you need the best performance from the compute-intensive RAID 5 and RAID 6, choose hardware RAID, because software RAID does cost performance there. Some software RAID implementations also lack nested levels such as RAID 10; in those cases hardware RAID is required.


1.4 RAID Levels

The common levels today are RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 01, RAID 50, and so on.

1.4.1 RAID 0 (stripe volume): the highest storage performance of all RAID levels

Principle: consecutive data is spread across multiple disks, so a single data request can be serviced by several disks in parallel, each disk handling its own part of the request. This parallel access makes full use of the bus bandwidth and significantly improves overall disk throughput.

Usable space = total disk capacity = 100%

Disks required ≥ 2

Read/write performance: excellent ≈ n × single-disk I/O speed (n = number of disks)

Chunk size = the size written per stripe unit, a power of 2, typically 2–512 KB

Advantages:

1. Makes full use of the I/O bus bandwidth; read and write speed scales with the number of disks.
2. Makes full use of disk space: 100% utilization.

Drawbacks:

1. No data redundancy.
2. No data checksumming, so data correctness is not guaranteed.
3. A single disk failure destroys the whole array.

Use cases:

1. Data where integrity is not critical, e.g. log storage, personal entertainment.
2. High read/write throughput with low safety requirements, e.g. image workstations.
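The striping principle above can be sketched in a few lines of Python (a conceptual toy, not how the md driver actually lays data out): data is cut into fixed-size chunks and dealt round-robin onto the member disks.

```python
def stripe(data: bytes, n_disks: int, chunk: int):
    """Deal fixed-size chunks round-robin across n_disks (RAID 0 layout)."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks] += data[i:i + chunk]
    return [bytes(d) for d in disks]

def unstripe(disks, chunk: int) -> bytes:
    """Read the chunks back in round-robin order."""
    out = bytearray()
    offsets = [0] * len(disks)
    i = 0
    while any(offsets[j] < len(disks[j]) for j in range(len(disks))):
        j = i % len(disks)
        out += disks[j][offsets[j]:offsets[j] + chunk]
        offsets[j] += chunk
        i += 1
    return bytes(out)

data = b"ABCDEFGHIJKL"                  # 12 bytes of example data
disks = stripe(data, n_disks=2, chunk=2)
print(disks)                            # [b'ABEFIJ', b'CDGHKL']
assert unstripe(disks, chunk=2) == data
```

Every chunk exists in exactly one place, which is why RAID 0 gives 100% utilization and n-fold throughput but cannot survive a single disk failure.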


1.4.2 RAID 1 (mirror volume)

RAID 1 has the highest unit cost and the lowest disk utilization in the disk-array family, but it provides very high data safety and availability.

Principle: a RAID 1 array built from two drives has the capacity of only one of them, because the second drive holds a "mirror" of the data. Redundancy comes from mirroring: the paired independent disks hold identical, mutually backing copies. When the original disk is busy, data can be read directly from the mirror copy, so RAID 1 can improve read performance. When one disk fails, the system switches reads and writes to the mirror automatically, with no need to reconstruct the failed data. A single disk of a mirrored pair may fail; if both disks of the pair fail, the data is lost.

Usable space = total capacity / 2 = 50%

Disks required ≥ 2 (an even number, 2n)

Read performance: excellent ≈ n × single-disk I/O = 200% with two disks

Write performance: normal ≈ single-disk I/O = 100%

Advantages:

1. Provides data redundancy; the data is stored twice.
2. Provides good read performance.

Drawbacks:

1. No data checksumming.
2. Low disk utilization and high cost.

Use cases:

1. Important data, e.g. data-storage applications.
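Mirroring is simple enough to sketch directly (an illustration only, with toy in-memory "disks"): every write goes to both disks, reads may come from either one, and losing one copy loses nothing.

```python
class Raid1:
    """Toy RAID 1: two mirrored 'disks' held as byte arrays."""
    def __init__(self, size: int):
        self.disks = [bytearray(size), bytearray(size)]

    def write(self, offset: int, data: bytes):
        for d in self.disks:                 # every write hits both mirrors
            d[offset:offset + len(data)] = data

    def read(self, offset: int, length: int, failed=()):
        for i, d in enumerate(self.disks):   # read from any surviving mirror
            if i not in failed:
                return bytes(d[offset:offset + length])
        raise IOError("both mirrors failed: data lost")

r = Raid1(16)
r.write(0, b"important")
assert r.read(0, 9) == b"important"
assert r.read(0, 9, failed={0}) == b"important"   # survives one disk failure
```

The double write is also why utilization is 50% and why writes run at single-disk speed while reads can be served from either copy.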


1.4.3 RAID 5: parity (XOR)

A compromise between RAID 0 and RAID 1.

Principle: data is striped in blocks, and the parity information is rotated across all the member disks: data and its corresponding parity are stored on the disks that make up the RAID 5 set, with the parity always on a different disk from the data it protects. Any N−1 of the disks together hold a complete copy of the data.

Usable space = n − 1 disks

Disks required ≥ 3

Read performance ≈ excellent, close to n × single-disk I/O; writes are somewhat slower because every write must also update parity.

Advantages:

1. Good read/write performance.
2. Has a parity mechanism.
3. High disk-space utilization.

Drawbacks:

Only one disk failure is tolerated, so the more disks in the set, the higher the risk of a second failure during a rebuild.

Use cases:

Scenarios with high safety requirements, e.g. finance, databases, storage.
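The RAID 5 layout can be sketched in Python (heavily simplified: one byte per chunk, a simple rotating parity position; the real md layouts such as left-symmetric differ in detail): each stripe stores N−1 data chunks plus one XOR parity chunk, and the parity position rotates so that no single disk becomes a parity bottleneck.

```python
from functools import reduce

def raid5_write(data: bytes, n_disks: int):
    """Lay data out one byte per chunk, with a rotating XOR parity chunk."""
    disks = [bytearray() for _ in range(n_disks)]
    step = n_disks - 1                       # data chunks per stripe
    for s in range(0, len(data), step):
        chunks = list(data[s:s + step].ljust(step, b"\0"))
        p_pos = (n_disks - 1) - (s // step) % n_disks   # rotate parity slot
        parity = reduce(lambda a, b: a ^ b, chunks)
        chunks.insert(p_pos, parity)
        for disk, byte in zip(disks, chunks):
            disk.append(byte)
    return disks

def rebuild(disks, lost: int):
    """Rebuild one lost disk: XOR the same row of every surviving disk."""
    survivors = [d for i, d in enumerate(disks) if i != lost]
    return bytearray(reduce(lambda a, b: a ^ b, row) for row in zip(*survivors))

disks = raid5_write(b"RAIDFIVEDATA", n_disks=4)
lost = disks[2]
disks[2] = rebuild(disks, lost=2)   # recover after "losing" disk 2
assert disks[2] == lost
```

Because every row (stripe) XORs to the lost chunk, any single disk can be rebuilt; a second failure before the rebuild completes is unrecoverable, which is the drawback noted above.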


1.4.4 RAID 6

Compared with RAID 5, RAID 6 adds a second, independent block of parity information: dual parity.

Principle: the two independent parity systems use different algorithms, so data reliability is very high; even two disks failing at the same time does not make the data unavailable. In exchange, RAID 6 gives more disk space to parity and has worse write performance than RAID 5.

Usable space = n − 2 disks

Disks required ≥ 4

Advantages:

1. Good random read performance.
2. Has a (double) parity mechanism.

Drawbacks:

1. Poor write speed.
2. High cost.

Use cases:

Enterprises with high data-safety requirements.
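The usable-capacity rules quoted level by level above can be collected into one small helper (an illustration of those formulas only; n is the disk count and all disks are assumed the same size):

```python
def usable_capacity(level: str, n: int, disk_gb: float) -> float:
    """Usable space for common RAID levels, given n equal disks of disk_gb each."""
    rules = {
        "raid0":  lambda: n * disk_gb,         # 100%: pure striping
        "raid1":  lambda: disk_gb,             # one mirror's worth
        "raid5":  lambda: (n - 1) * disk_gb,   # one disk's worth of parity
        "raid6":  lambda: (n - 2) * disk_gb,   # two disks' worth of parity
        "raid10": lambda: n * disk_gb / 2,     # half lost to mirroring
    }
    return rules[level]()

# Three 2 GiB partitions -> ~4 GiB usable, as the md5 array in section 2.3 shows.
assert usable_capacity("raid5", 3, 2.0) == 4.0
# Four 2 GiB partitions -> ~4 GiB usable, as the md6 array in section 2.4 shows.
assert usable_capacity("raid6", 4, 2.0) == 4.0
```

The assertions match the Array Size values reported by `mdadm -D` in the configuration walkthroughs later in this article.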


1.4.5 RAID 01

RAID 01: a combination of RAID 0 and RAID 1.

Principle: first build RAID 0 stripes, then mirror those stripes as RAID 1; the result has characteristics of both levels.

Usable space = n/2 = 50%

Disks required ≥ 4 (an even number)

Read/write performance ≈ RAID 0

Advantages:

1. High I/O performance.
2. Data redundancy.
3. No single point of failure.

Drawbacks:

1. Somewhat high cost.
2. Worse fault tolerance than RAID 10.

Use cases:

Especially suited to fields that both move large amounts of data and have strict data-safety requirements, such as banking, finance, retail, warehousing, and archival management.


1.4.6 RAID 10

A combination of RAID 0 and RAID 1.

Principle: first build RAID 1 mirror pairs, then stripe the pairs together as RAID 0; the result has characteristics of both levels and high safety.

Usable space = n/2 = 50%

Disks required ≥ 4 (an even number)

Advantages:

1. RAID 10 tolerates disk failures better than RAID 01.
2. High I/O performance.
3. Data redundancy.
4. No single point of failure.
5. High safety.

Drawbacks:

Somewhat high cost.

Use cases: especially suited to fields that both move large amounts of data and have strict data-safety requirements, such as banking, finance, retail, warehousing, and archival management.
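Why RAID 10 tolerates failures better than RAID 01 can be checked by brute force with a toy four-disk model (an illustration, assuming mirror pairs {0,1} and {2,3} for RAID 10, and striped halves {0,1} vs {2,3} for RAID 01): RAID 10 dies only when both disks of the same mirror pair die, while RAID 01 dies as soon as each striped half has lost a disk.

```python
from itertools import combinations

DISKS = [0, 1, 2, 3]
PAIRS = [{0, 1}, {2, 3}]          # RAID 10: two mirror pairs, striped together
SIDE_A, SIDE_B = {0, 1}, {2, 3}   # RAID 01: two striped halves, mirrored

def raid10_alive(failed):
    # alive while every mirror pair still has at least one surviving member
    return all(pair - failed for pair in PAIRS)

def raid01_alive(failed):
    # alive while at least one whole striped half is still intact
    return not (SIDE_A & failed) or not (SIDE_B & failed)

two_disk_failures = [set(c) for c in combinations(DISKS, 2)]
print("RAID 10 survives", sum(raid10_alive(f) for f in two_disk_failures), "of 6 double failures")
print("RAID 01 survives", sum(raid01_alive(f) for f in two_disk_failures), "of 6 double failures")
```

Of the six possible two-disk failures, the RAID 10 model survives four while the RAID 01 model survives only two, which is the fault-tolerance difference described above.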


1.4.7 RAID 50

RAID 50 is a stripe of parity sets: several RAID 5 arrays striped together.

Principle: first build RAID 5 sets, then combine the RAID 5 sets into a RAID 0 stripe; the result has characteristics of both levels.

Disks required ≥ 6


2. Configuring RAID

2.1 A RAID 0 configuration example
2.1.1 Prepare the disks

Prepare two disks, or create two partitions on one disk; the partition type to create is fd (the Linux RAID partition type).

[root@localhost ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.22.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +2G
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@localhost ~]# 


Note: when setting the partition type, you can enter "L" to list all the partition type codes, for example:

[root@localhost ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.22.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs        
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT            
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor      
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary  
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS    
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
1e  Hidden W95 FAT1 80  Old Minix      

Command (m for help): 

2) After creating the partitions this way, check them: the System type shows Linux raid autodetect.

[root@localhost ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00099d4e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    83886079    40893440   8e  Linux LVM

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x11fc6f8e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     4196351     2097152   fd  Linux raid autodetect
/dev/sdc2         4196352     8390655     2097152   fd  Linux raid autodetect
2.1.2 Create the RAID
2.1.2.1 Install the mdadm package
yum -y install mdadm
2.1.2.2 mdadm syntax
mdadm [options] device

Option reference:

-C  create an array (create)
-A  assemble (activate) an existing array (assemble)
-D  print detailed array information (display)
-s  scan /proc/mdstat for arrays and default configuration information
-f  mark a device as faulty
-a  automatically create the device file for the target RAID device; also used to add a device to an array
-v  verbose output
-r  remove a device from the array
-S  stop the array and release all its resources (stop)
-l  set the RAID level
-x  specify the number of spare devices
-c  set the array chunk size in KB (default 512 KB)
-G  grow or reshape the array (grow)
-n  number of active devices in the array
2.1.2.3 Create the RAID
[root@localhost ~]# mdadm -C /dev/md0 -ayes -l0 -n2 /dev/sdc1 /dev/sdc2
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.


2.1.3 View the RAID 0 details
[root@localhost ~]# mdadm -D /dev/md0     ## view the RAID details
/dev/md0:
           Version : 1.2
     Creation Time : Fri Sep  3 10:57:40 2021
        Raid Level : raid0
        Array Size : 4188160 (3.99 GiB 4.29 GB)    ## array size
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Sep  3 10:57:40 2021
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 1c2a95dd:157de9e1:d1114fab:15205fa6
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1   ## disks (partitions) in the RAID
       1       8       34        1      active sync   /dev/sdc2
2.1.4 Create the md0 configuration file
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=1c2a95dd:157de9e1:d1114fab:15205fa6
2.1.5 Create a filesystem
[root@localhost ~]# mkfs.ext4 /dev/md0 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 
2.1.6 Create a mount point for the RAID 0, mount, and use it
[root@localhost ~]# mkdir /raid0
[root@localhost ~]# mount /dev/md0 /raid0
[root@localhost ~]# echo aaa >/raid0/test
[root@localhost ~]# cat /raid0/test 
aaa
2.1.7 Stop the RAID 0
[root@localhost ~]# umount /raid0
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
2.1.8 Activate the RAID 0
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 2 drives.

Note: RAID 0 does not support hot-spare disks.

2.2 Creating RAID 1
2.2.1 Create the disk partitions

Create two partitions of type fd: /dev/sdc6 and /dev/sdc7.

[root@localhost ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.22.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n            ## new partition
All primary partitions are in use
Adding logical partition 6
First sector (16783360-41943039, default 16783360): 
Using default value 16783360
Last sector, +sectors or +size{K,M,G} (16783360-41943039, default 41943039): +2G   ## set the partition size
Partition 6 of type Linux and of size 2 GiB is set

Command (m for help): t      ## change the partition type
Partition number (1-6, default 6): 
Hex code (type L to list all codes): fd      ## set it to type fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
...... the remaining settings are the same ......
Command (m for help): n            
All primary partitions are in use
Adding logical partition 7
First sector (20979712-41943039, default 20979712): 
Using default value 20979712
Last sector, +sectors or +size{K,M,G} (20979712-41943039, default 41943039): +2G
Partition 7 of type Linux and of size 2 GiB is set

Command (m for help): t
Partition number (1-7, default 7): 
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@localhost ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00099d4e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    83886079    40893440   8e  Linux LVM

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xdc85e46c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     4196351     2097152   fd  Linux raid autodetect
/dev/sdc2         4196352     8390655     2097152   fd  Linux raid autodetect
/dev/sdc3         8390656    12584959     2097152   83  Linux
/dev/sdc4        12584960    41943039    14679040    5  Extended
/dev/sdc5        12587008    16781311     2097152   83  Linux
/dev/sdc6        16783360    20977663     2097152   fd  Linux raid autodetect
/dev/sdc7        20979712    25174015     2097152   fd  Linux raid autodetect

2.2.2 Create the RAID 1
[root@localhost ~]#  mdadm -C /dev/md1 -ayes -l1 -n2 /dev/sdc6 /dev/sdc7   ## create the RAID 1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
2.2.3 View the RAID details
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri Sep  3 12:05:10 2021
        Raid Level : raid1
        Array Size : 2094080 (2045.00 MiB 2144.34 MB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Sep  3 12:05:20 2021
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : a87a0450:ec435d63:3cf30dda:4f81875a
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       38        0      active sync   /dev/sdc6
       1       8       39        1      active sync   /dev/sdc7
2.2.4 Create the RAID 1 configuration file
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf 
ARRAY /dev/md/1 metadata=1.2 name=localhost.localdomain:1 UUID=a87a0450:ec435d63:3cf30dda:4f81875a
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=1c2a95dd:157de9e1:d1114fab:15205fa6
2.2.5 To use the RAID 1, first create a filesystem
[root@localhost ~]# mkfs.ext4 /dev/md1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 523520 blocks
26176 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done 
2.2.6 Mount and use
[root@localhost ~]# mkdir /raid1
[root@localhost ~]# mount /dev/md1 /raid1
[root@localhost ~]# echo "test" >/raid1/test.txt
[root@localhost ~]# cat /raid1/test.txt 
test
2.2.7 Failure simulation: mark the sdc6 disk as failed
[root@localhost ~]# mdadm /dev/md1 -f /dev/sdc6
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri Sep  3 12:05:10 2021
        Raid Level : raid1
        Array Size : 2094080 (2045.00 MiB 2144.34 MB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Sep  3 17:31:54 2021
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : a87a0450:ec435d63:3cf30dda:4f81875a
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       39        1      active sync   /dev/sdc7

       0       8       38        -      faulty   /dev/sdc6

Now that sdc6 is faulty, check whether the data can still be read and written normally:

[root@localhost raid1]# cat test.txt 
test
2.2.8 Remove the failed disk
[root@localhost ~]# mdadm /dev/md1 -r /dev/sdc6
2.2.9 Add a new disk
[root@localhost ~]# mdadm /dev/md1 -a /dev/sdc6
2.3 Creating RAID 5
2.3.1 Create RAID 5 from sdb2, sdb3, and sdb5, with sdb6 as a hot spare
[root@localhost ~]# mdadm -C /dev/md5 -l5 -n 3 /dev/sdb2 /dev/sdb3 /dev/sdb5 -x 1 /dev/sdb6   ## -x sets the number of hot spares
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
2.3.2 View the RAID 5 details
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Sep  4 19:31:56 2021
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 19:32:08 2021
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 09ea3233:f3d12a1e:53257621:7da47e40
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       4       8       21        2      active sync   /dev/sdb5

       3       8       22        -      spare   /dev/sdb6
2.3.3 To use the RAID 5, first create a filesystem
[root@localhost ~]# mkfs.ext4 /dev/md5
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 
2.3.4 Save the RAID configuration
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
2.3.5 Mount and write test data
[root@localhost ~]# mkdir /raid5
[root@localhost ~]# mount /dev/md5 /raid5
[root@localhost ~]# echo "raid5" >/raid5/raid5.txt
[root@localhost ~]# cat /raid5/raid5.txt 
raid5
2.3.6 Failure simulation: mark the sdb5 disk as failed
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdb5
mdadm: set /dev/sdb5 faulty in /dev/md5


Note: from the moment /dev/sdb5 fails, the RAID 5 begins rebuilding onto sdb6 to recover the array. In the details below, sdb5 therefore shows as faulty and sdb6 has taken its place; once the rebuild finishes, the State returns to clean, meaning data consistency has been restored.

[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Sep  4 19:31:56 2021
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 19:38:42 2021
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 09ea3233:f3d12a1e:53257621:7da47e40
            Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       3       8       22        2      active sync   /dev/sdb6

       4       8       21        -      faulty   /dev/sdb5
2.3.7 Remove the failed disk and insert a new one

RAID 5 needs at least 3 disks to form an array. With only one hot spare, as soon as an active disk fails you should immediately install a new disk to serve as the next hot spare.

1) Remove the failed sdb5 disk

[root@localhost ~]# mdadm /dev/md5 -r /dev/sdb5
mdadm: hot removed /dev/sdb5 from /dev/md5

2) Add a new disk as a hot spare

[root@localhost ~]# mdadm /dev/md5 -a /dev/sdb7
mdadm: added /dev/sdb7
2.3.8 Check the RAID 5 status
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Sat Sep  4 19:31:56 2021
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 19:54:28 2021
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 09ea3233:f3d12a1e:53257621:7da47e40
            Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       3       8       22        2      active sync   /dev/sdb6

       4       8       23        -      spare   /dev/sdb7
2.4 Creating RAID 6
2.4.1 Create the array

RAID 6 needs at least 4 disks. Compared with RAID 5 it spends one more disk's worth of space on parity, so its space utilization is lower than RAID 5, but its data safety is higher.

[root@localhost ~]# mdadm -C /dev/md6 -l6 -n 4 /dev/sdb{2..3} /dev/sdb{5..6} -x 1 /dev/sdb7
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
2.4.2 View the RAID 6 details
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 20:08:01 2021
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 20:08:20 2021
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 7670478e:601ee078:a9000bbf:3e5be2e0
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       2       8       21        2      active sync   /dev/sdb5
       3       8       22        3      active sync   /dev/sdb6

       4       8       23        -      spare   /dev/sdb7
2.4.3 Save the RAID configuration
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
2.4.4 Create a filesystem, mount, and use it
[root@localhost ~]# mkfs.ext4 /dev/md6
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 
2.4.6 Create the mount point
[root@localhost ~]# mkdir /raid6        
[root@localhost ~]# mount /dev/md6 /raid6
[root@localhost ~]# echo "raid6" >/raid6/raid6.txt
[root@localhost ~]# cat /raid6/raid6.txt 
raid6
2.4.7 Changing the array shape

Add the hot spare sdb7 into the array so that the RAID 6 has 5 member disks (note: after changing the array shape, save the array configuration again).

[root@localhost ~]# mdadm -G /dev/md6 -n 5     ## reshape the array
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 20:09:55 2021
        Raid Level : raid6
        Array Size : 6282240 (5.99 GiB 6.43 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 20:10:34 2021
             State : clean 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : c35f37d1:82b8ff76:0871d7e0:48b5af21
            Events : 42

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       2       8       21        2      active sync   /dev/sdb5
       3       8       22        3      active sync   /dev/sdb6
       4       8       23        4      active sync   /dev/sdb7
2.4.8 Add a hot spare
[root@localhost ~]# mdadm /dev/md6 -a /dev/sdb8
mdadm: added /dev/sdb8
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 22:35:55 2021
        Raid Level : raid6
        Array Size : 6282240 (5.99 GiB 6.43 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 5
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 22:47:47 2021
             State : clean 
    Active Devices : 5
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : c35f37d1:82b8ff76:0871d7e0:48b5af21
            Events : 43

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       19        1      active sync   /dev/sdb3
       2       8       21        2      active sync   /dev/sdb5
       3       8       22        3      active sync   /dev/sdb6
       4       8       23        4      active sync   /dev/sdb7

       5       8       24        -      spare   /dev/sdb8
2.4.9 Failure simulation: mark sdb2 as failed
[root@localhost ~]# mdadm /dev/md6 -f /dev/sdb2 
mdadm: set /dev/sdb2 faulty in /dev/md6


[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 20:08:28 2021
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 20:08:40 2021
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 15% complete

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : b5334cfa:ceb22527:fcf8e756:73f26d2f
            Events : 21

    Number   Major   Minor   RaidDevice State
       4       8       23        0      spare rebuilding   /dev/sdb7
       1       8       19        1      active sync   /dev/sdb3
       2       8       21        2      active sync   /dev/sdb5
       3       8       22        3      active sync   /dev/sdb6

       0       8       18        -      faulty   /dev/sdb2

After the failure, the hot spare was activated and the data was rebuilt onto it; the RAID 6 returned to normal.

What happens if we now mark sdb3 as failed as well?

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 20:08:01 2021
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 20:14:48 2021
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 2
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 7670478e:601ee078:a9000bbf:3e5be2e0
            Events : 38

    Number   Major   Minor   RaidDevice State
       4       8       23        0      active sync   /dev/sdb7
       -       0        0        1      removed
       2       8       21        2      active sync   /dev/sdb5
       3       8       22        3      active sync   /dev/sdb6

       0       8       18        -      faulty   /dev/sdb2
       1       8       19        -      faulty   /dev/sdb3

Both sdb2 and sdb3 are now in the failed state. Let's check whether the data can still be accessed:

[root@localhost ~]# cat /raid6/raid6.txt 
raid6

The data is still accessible. RAID 6 carries one more block of parity than RAID 5, so the array keeps serving data even with two member disks failed. What happens if yet another disk fails?

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 20:08:01 2021
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 20:19:09 2021
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 3
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : 7670478e:601ee078:a9000bbf:3e5be2e0
            Events : 40

    Number   Major   Minor   RaidDevice State
       4       8       23        0      active sync   /dev/sdb7
       -       0        0        1      removed
       -       0        0        2      removed
       3       8       22        3      active sync   /dev/sdb6

       0       8       18        -      faulty   /dev/sdb2
       1       8       19        -      faulty   /dev/sdb3
       2       8       21        -      faulty   /dev/sdb5
[root@localhost ~]# cat /raid6/raid6.txt    ## check the data
raid6
[root@localhost ~]# mdadm -Ss /dev/md6    ## stop the array
mdadm: stopped /dev/md6

[root@localhost ~]# mdadm -A /dev/md6       ## assemble the array
mdadm: /dev/md6 has been started with 2 drives (out of 4).

We stopped and reassembled the RAID 6; the system already warns that it started with only 2 of the 4 drives, yet the array still comes up. Now mark sdb6 as failed as well and look at the result:

[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sat Sep  4 22:16:28 2021
        Raid Level : raid6
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Sep  4 22:26:22 2021
             State : clean, FAILED 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : b5334cfa:ceb22527:fcf8e756:73f26d2f
            Events : 44

    Number   Major   Minor   RaidDevice State
       4       8       23        0      active sync   /dev/sdb7
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed

       3       8       22        -      faulty   /dev/sdb6     ##failed disk

The final test shows that raid6 can no longer be started. Conclusion: raid6 tolerates the loss of up to 2 disks; once more than 2 have failed, the array cannot start. After a restart, mdadm -D even reports the Raid Level as raid0, but that is simply how mdadm displays an inactive array it could not assemble; the array itself still refuses to start.
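The sizes reported by mdadm -D above are consistent with RAID6 reserving two disks' worth of space for parity:

```bash
# RAID6 usable capacity = (N - 2) * per-member size.
# Values taken from the mdadm -D output above (KiB).
n=4
dev_kib=2094080              # Used Dev Size
usable=$(( (n - 2) * dev_kib ))
echo "$usable"               # matches the reported Array Size
```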

[root@localhost ~]# mdadm -Ss /dev/md6 
mdadm: stopped /dev/md6
[root@localhost ~]# mdadm -A /dev/md6
mdadm: /dev/md6 assembled from 1 drive - not enough to start the array.
[root@localhost ~]# mdadm -D /dev/md6
mdadm: cannot open /dev/md6: No such file or directory
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 5
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 5

              Name : localhost.localdomain:6  (local to host localhost.localdomain)
              UUID : b5334cfa:ceb22527:fcf8e756:73f26d2f
            Events : 17

    Number   Major   Minor   RaidDevice

       -       8       18        -        /dev/sdb2
       -       8       19        -        /dev/sdb3
       -       8       21        -        /dev/sdb5
       -       8       22        -        /dev/sdb6
       -       8       23        -        /dev/sdb7
2.4.10 Deleting the RAID
[root@localhost ~]# umount /raid6       ##unmount raid6
[root@localhost ~]# mdadm -Ss /dev/md6     ##stop the array
mdadm: stopped /dev/md6 
[root@localhost ~]# mdadm --zero-superblock /dev/sdb2     ##wipe the md superblock; repeat for every member disk (partition)
[root@localhost ~]# mdadm --zero-superblock /dev/sdb3
[root@localhost ~]# mdadm --zero-superblock /dev/sdb5
[root@localhost ~]# mdadm --zero-superblock /dev/sdb6
[root@localhost ~]# mdadm --zero-superblock /dev/sdb7
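The repeated --zero-superblock calls can be collapsed into a loop. Shown here as a dry run that only prints each command; drop the echo to actually wipe the members (destructive, so verify the device list first):

```bash
# Dry run: print one wipe command per member partition.
for dev in /dev/sdb2 /dev/sdb3 /dev/sdb5 /dev/sdb6 /dev/sdb7; do
    echo mdadm --zero-superblock "$dev"
done
```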
2.5 Creating RAID10
2.5.1 Create the array

RAID10 is the combination of RAID1 and RAID0, so it offers both RAID1's data safety and RAID0's high read/write performance.
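Because every block is mirrored once, only half the raw space is usable. A quick check against the per-member size used throughout these examples:

```bash
# RAID10 (near=2) usable capacity = N/2 * per-member size (KiB).
n=4
dev_kib=2094080              # per-member size, as in the arrays above
usable=$(( n / 2 * dev_kib ))
echo "$usable"               # i.e. ~3.99 GiB
```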

[root@localhost ~]# mdadm -C /dev/md10 -ayes -l10 -n4 /dev/sdb{2,3} /dev/sdb{5,6}   ##create raid10
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
2.5.2 Write the array configuration to the config file
[root@localhost ~]# mdadm -Ds >/etc/mdadm.conf
2.5.3 View the RAID10 details
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Sep  5 13:13:26 2021
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)       ##actual array capacity
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Sep  5 13:13:47 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : 795851a7:e0e26bc2:af71f8cf:fda4cb21
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync set-A   /dev/sdb2
       1       8       19        1      active sync set-B   /dev/sdb3
       2       8       21        2      active sync set-A   /dev/sdb5
       3       8       22        3      active sync set-B   /dev/sdb6

You can see that /dev/sdb2 and /dev/sdb3 form one mirror pair and /dev/sdb5 and /dev/sdb6 form the other; each pair holds a complete copy of the data.
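With the default near=2 layout, consecutive chunks alternate between the two mirror pairs, and each chunk is written to both members of its pair. A sketch of where the first few chunks land, using the device names from the array above:

```bash
# Map chunk number -> mirror pair for a 4-device near=2 RAID10.
map=""
for c in 0 1 2 3; do
    if [ $(( c % 2 )) -eq 0 ]; then
        pair="sdb2+sdb3"     # first mirror pair (set-A/set-B copies)
    else
        pair="sdb5+sdb6"     # second mirror pair
    fi
    map="$map chunk$c:$pair"
done
echo "$map"
```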

2.5.4 Add a hot spare to the array
[root@localhost ~]# mdadm /dev/md10 -a /dev/sdb7     ##add a hot spare
mdadm: added /dev/sdb7
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Sep  5 13:13:26 2021
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sun Sep  5 13:17:52 2021
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : 795851a7:e0e26bc2:af71f8cf:fda4cb21
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync set-A   /dev/sdb2
       1       8       19        1      active sync set-B   /dev/sdb3
       2       8       21        2      active sync set-A   /dev/sdb5
       3       8       22        3      active sync set-B   /dev/sdb6

       4       8       23        -      spare   /dev/sdb7        ##hot spare
[root@localhost ~]# 
2.5.5 Create a filesystem on the array
[root@localhost ~]# mkfs.ext4 /dev/md10
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 
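The Stride and Stripe width values mkfs.ext4 printed come straight from the array geometry: stride = chunk size / filesystem block size, and stripe width = stride * number of data-bearing devices (2 for this 4-disk near=2 RAID10, since each stripe has two mirrored copies):

```bash
chunk_kib=512                # Chunk Size from mdadm -D
block_kib=4                  # ext4 block size (4096 bytes)
data_disks=2                 # 4 devices / 2 copies per chunk
stride=$(( chunk_kib / block_kib ))
stripe_width=$(( stride * data_disks ))
echo "stride=$stride stripe_width=$stripe_width"
```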

2.5.6 Mount and use
[root@localhost ~]# mkdir /raid10         ##create the mount point
[root@localhost ~]# mount /dev/md10 /raid10       ##mount
[root@localhost ~]# echo raid10 >/raid10/raid10.txt    ##write test data
[root@localhost ~]# cat /raid10/raid10.txt        ##view the data
raid10
2.5.7 Mount automatically at boot
[root@localhost ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Fri Aug 27 16:44:59 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=c3f8cd35-9d79-4384-b687-b4ac9185d075 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/md10	/raid10     ext4     defaults  0 0 
[root@localhost ~]# 
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 899M     0  899M   0% /dev
tmpfs                    910M     0  910M   0% /dev/shm
tmpfs                    910M  9.6M  901M   2% /run
tmpfs                    910M     0  910M   0% /sys/fs/cgroup
/dev/mapper/centos-root   37G  1.9G   36G   6% /
/dev/sda1               1014M  195M  820M  20% /boot
tmpfs                    182M     0  182M   0% /run/user/0
/dev/md10                3.9G   16M  3.7G   1% /raid10          ##mounted and in use
[root@localhost ~]# 
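One caution about the fstab entry above: md device numbering is not guaranteed to stay the same across reboots, so mounting by filesystem UUID is more robust than using /dev/md10 directly. A sketch (the UUID shown is a placeholder; read the real one with blkid /dev/md10):

```
# /etc/fstab -- hypothetical UUID, substitute the value blkid reports
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /raid10  ext4  defaults  0 0
```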
2.5.8 Failure simulation: mark sdb2 as failed
[root@localhost ~]# mdadm /dev/md10 -f /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md10
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
           Version : 1.2
     Creation Time : Sun Sep  5 13:13:26 2021
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sun Sep  5 13:23:09 2021
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 65% complete

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : 795851a7:e0e26bc2:af71f8cf:fda4cb21
            Events : 30

    Number   Major   Minor   RaidDevice State
       4       8       23        0      spare rebuilding   /dev/sdb7
       1       8       19        1      active sync set-B   /dev/sdb3
       2       8       21        2      active sync set-A   /dev/sdb5
       3       8       22        3      active sync set-B   /dev/sdb6

       0       8       18        -      faulty   /dev/sdb2

Note: after the failure, the hot spare immediately took over for the failed disk and the data rebuild started right away.

2.5.9 Stop and reactivate the RAID
[root@localhost ~]# mdadm -Ss /dev/md10       ##stop the array
mdadm: stopped /dev/md10
[root@localhost ~]# mdadm -As /dev/md10       ##assemble (activate) the array
mdadm: /dev/md10 has been started with 2 drives (out of 4).
[root@localhost ~]# mdadm -D /dev/md10    
/dev/md10:
           Version : 1.2
     Creation Time : Sun Sep  5 13:13:26 2021
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sun Sep  5 13:27:12 2021
             State : clean, degraded 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:10  (local to host localhost.localdomain)
              UUID : 795851a7:e0e26bc2:af71f8cf:fda4cb21
            Events : 43

    Number   Major   Minor   RaidDevice State
       4       8       23        0      active sync set-A   /dev/sdb7
       1       8       19        1      active sync set-B   /dev/sdb3
       2       8       21        2      active sync set-A   /dev/sdb5
       3       8       22        3      active sync set-B   /dev/sdb6

       0       8       18        -      faulty   /dev/sdb2
[root@localhost ~]# mount /dev/md10 /raid10       ##remount
[root@localhost ~]# cat /raid10/raid10.txt        ##view the data
raid10

After restarting the raid10 array, the data can still be read normally.
