Building a software RAID 5 on CentOS 8 with mdadm
I. Environment preparation
1. Shut down the VM and add four new disks of the same size.
2. If the disks do not show up after booting, rescan the SCSI bus, e.g. echo "- - -" > /sys/class/scsi_host/host4/scan (the host number may differ).
3. Install the mdadm software-RAID tool: yum -y install mdadm
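Step 2 above hard-codes host4, but the SCSI host number varies between machines. A minimal sketch that rescans every host (and simply counts zero on systems without writable scan files):

```shell
# Rescan every SCSI host so disks added while the VM is running show up
# without a reboot. Counts how many hosts were actually rescanned.
scanned=0
for scan in /sys/class/scsi_host/host*/scan; do
    if [ -w "$scan" ]; then
        echo "- - -" > "$scan"
        scanned=$((scanned + 1))
    fi
done
echo "rescanned $scanned SCSI hosts"
# Afterwards, verify the new disks are visible with lsblk.
```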
II. Basic mdadm options and RAID 5 creation
-a  check a device name / add a disk to the array
-n  specify the number of active devices
-l  specify the RAID level
-C  create an array
-v  show verbose output
-f  mark a device as faulty (simulate a failure)
-r  remove a device
-Q  query summary information
-D  print detailed array information
-S  stop a RAID array
For a more detailed reference, see https://blog.csdn.net/weixin_30607029/article/details/96386957
Create a RAID 5 with one hot spare:
mdadm -C -v /dev/md5 -l 5 -n 3 -x 1 -c32 /dev/nvme0n{2,3,4,5}
This creates the RAID 5, adds one hot spare, and sets the chunk size to 32 KB.
-x or --spare-devices=  number of spare devices in the array
-c or --chunk=  chunk size of the array, in KB
[root@node3 ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Wed Apr 28 06:18:54 2021
Raid Level : raid5
Array Size : 8378368 (7.99 GiB 8.58 GB)
Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 29 04:53:15 2021
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : node3:5 (local to host node3)
UUID : 0727bb42:5e14d3d2:71af1c14:3e0ab25c
Events : 60
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
5 259 4 1 active sync /dev/nvme0n3
4 259 5 2 active sync /dev/nvme0n4
3 259 6 - spare /dev/nvme0n5
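The sizes in the output above follow directly from the RAID 5 layout: usable capacity is (active members - 1) x per-member size, and the hot spare contributes nothing until it is promoted. A quick check of the reported numbers (sizes in 1 KiB blocks, taken from the mdadm -D output):

```shell
# RAID 5 usable capacity = (active members - 1) * per-member size.
members=3
used_dev_size=4189184                       # "Used Dev Size" per member, in KiB
array_size=$(( (members - 1) * used_dev_size ))
echo "$array_size"                          # 8378368, matching "Array Size"
```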
After the array is created successfully, remember to save the configuration file:
[root@node3 ~]# mdadm -Ds > /etc/mdadm.conf
[root@node3 ~]# cat /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 spares=1 name=node3:5 UUID=0727bb42:5e14d3d2:71af1c14:3e0ab25c
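Besides mdadm -D, the kernel exposes array state in /proc/mdstat, which shows each md array's members and a [UUU]-style status string (an underscore in place of a U marks a failed or missing member). A small sketch, guarded so it also runs on machines without the md driver loaded:

```shell
# Quick array health check via /proc/mdstat (no output parsing of mdadm -D).
if [ -r /proc/mdstat ]; then
    mdstat=$(cat /proc/mdstat)
else
    mdstat="md driver not loaded"
fi
echo "$mdstat"
```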
Simulate a device failure so the hot spare comes online automatically; this is also the procedure for replacing a disk.
[root@node3 ~]# mdadm /dev/md5 -f /dev/nvme0n3
mdadm: set /dev/nvme0n3 faulty in /dev/md5
[root@node3 ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Wed Apr 28 06:18:54 2021
Raid Level : raid5
Array Size : 8378368 (7.99 GiB 8.58 GB)
Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 29 05:35:02 2021
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : node3:5 (local to host node3)
UUID : 0727bb42:5e14d3d2:71af1c14:3e0ab25c
Events : 79
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
3 259 6 1 active sync /dev/nvme0n5
4 259 5 2 active sync /dev/nvme0n4
5 259 4 - faulty /dev/nvme0n3
As the output shows, /dev/nvme0n3 has been replaced by /dev/nvme0n5.
[root@node3 ~]# mdadm /dev/md5 -r /dev/nvme0n3
mdadm: hot removed /dev/nvme0n3 from /dev/md5
This kicks the failed disk out of the array; the tail of mdadm -D confirms only three active members remain:
Consistency Policy : resync
Name : node3:5 (local to host node3)
UUID : 0727bb42:5e14d3d2:71af1c14:3e0ab25c
Events : 80
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
3 259 6 1 active sync /dev/nvme0n5
4 259 5 2 active sync /dev/nvme0n4
Add the new /dev/nvme0n3 back. In production, you would physically swap the disk first.
[root@node3 ~]# mdadm /dev/md5 -a /dev/nvme0n3
mdadm: added /dev/nvme0n3
Consistency Policy : resync
Name : node3:5 (local to host node3)
UUID : 0727bb42:5e14d3d2:71af1c14:3e0ab25c
Events : 81
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
3 259 6 1 active sync /dev/nvme0n5
4 259 5 2 active sync /dev/nvme0n4
5 259 4 - spare /dev/nvme0n3
Replacement complete.
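The whole fail / remove / swap / re-add cycle above can be sketched as one parameterized script. DRY_RUN=1 only echoes the commands, so the sketch is safe to run anywhere; clear it (and run as root on a real array) to execute them:

```shell
# Disk-replacement cycle for an mdadm array, dry-run by default.
ARRAY=/dev/md5
BAD=/dev/nvme0n3
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

run mdadm "$ARRAY" -f "$BAD"    # mark the member faulty; the spare starts rebuilding
run mdadm "$ARRAY" -r "$BAD"    # hot-remove the failed member
# ...physically replace the disk here...
run mdadm "$ARRAY" -a "$BAD"    # add the new disk back; it becomes the spare
```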
Mount the array:
mkfs.xfs /dev/md5        # create an XFS filesystem on the array
mkdir /mnt/md5           # create a mount point
mount /dev/md5 /mnt/md5  # mount it
[root@node3 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs devtmpfs 886M 0 886M 0% /dev
tmpfs tmpfs 904M 0 904M 0% /dev/shm
tmpfs tmpfs 904M 9.4M 894M 2% /run
tmpfs tmpfs 904M 0 904M 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 17G 8.6G 8.5G 51% /
/dev/nvme0n1p1 ext4 976M 143M 766M 16% /boot
tmpfs tmpfs 181M 1.2M 180M 1% /run/user/42
tmpfs tmpfs 181M 4.0K 181M 1% /run/user/0
/dev/md5 xfs 8.0G 90M 7.9G 2% /mnt/md5
For a permanent mount, add an entry to /etc/fstab.
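A sketch of such an fstab entry. Using the device name mirrors the mount command above; a more robust variant uses the filesystem UUID reported by `blkid /dev/md5` (not the array UUID from mdadm), since md device numbers can change across boots:

```
# /etc/fstab entry for the array (sketch)
/dev/md5   /mnt/md5   xfs   defaults   0 0
```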
III. Stopping and starting the array
1. Save the configuration before stopping:
[root@node3 ~]# mdadm -Dsv > /etc/mdadm.conf
[root@node3 ~]# cat /etc/mdadm.conf
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=1.2 spares=1 name=node3:5 UUID=0727bb42:5e14d3d2:71af1c14:3e0ab25c
devices=/dev/nvme0n2,/dev/nvme0n3,/dev/nvme0n4,/dev/nvme0n5
2. Check the array state; in particular, confirm that data synchronization has finished (Consistency Policy : resync):
[root@node3 ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Wed Apr 28 06:18:54 2021
Raid Level : raid5
Array Size : 8378368 (7.99 GiB 8.58 GB)
Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 29 05:47:13 2021
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : node3:5 (local to host node3)
UUID : 0727bb42:5e14d3d2:71af1c14:3e0ab25c
Events : 81
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
3 259 6 1 active sync /dev/nvme0n5
4 259 5 2 active sync /dev/nvme0n4
5 259 4 - spare /dev/nvme0n3
Stop and restart the array:
[root@node3 ~]# umount /mnt/md5    ## unmount before stopping, or kill the processes holding the mount
[root@node3 ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
[root@node3 ~]# mdadm -As /dev/md5
mdadm: /dev/md5 has been started with 3 drives and 1 spare.
[root@node3 ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Wed Apr 28 06:18:54 2021
Raid Level : raid5
Array Size : 8378368 (7.99 GiB 8.58 GB)
Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 29 06:00:06 2021
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : node3:5 (local to host node3)
UUID : 0727bb42:5e14d3d2:71af1c14:3e0ab25c
Events : 81
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
3 259 6 1 active sync /dev/nvme0n5
4 259 5 2 active sync /dev/nvme0n4
5 259 4 - spare /dev/nvme0n3
The array started successfully.