Creating and Managing a RAID Disk Array on a Server
Parameters of the mdadm command for managing RAID arrays
Parameter    Function
-a    add a device to an array (in manage mode; with -C it controls automatic creation of the device file)
-n    specify the number of devices
-l    specify the RAID level
-C    create an array
-v    show verbose output of the process
-f    mark a device as faulty (simulate a failure)
-r    remove a device
-Q    query summary information
-D    display detailed information
-S    stop an array
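The lab below exercises most of these flags. -Q, which does not appear later, prints a one-line summary of an array or member device; a quick illustration, assuming an array /dev/md0 already exists:
[root@localhost ~]# mdadm -Q /dev/md0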
[ Case Implementation ]
1. Creating a RAID array
1.1 Creating RAID 0
Use two spare 20 GB disks (/dev/sdb and /dev/sdc in the lsblk output below) to build a single 40 GB RAID 0 volume.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 19.5G 0 part
├─centos-root 253:0 0 17.5G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
sdc 8:32 0 20G 0 disk
sdd 8:48 0 20G 0 disk
sde 8:64 0 20G 0 disk
sr0 11:0 1 4G 0 rom
Configure a local YUM repository by uploading the provided mdadm_yum folder to the /opt directory.
[root@localhost ~]# ls /opt/
mdadm_yum
[root@localhost ~]# mv /etc/yum.repos.d/* /media/
[root@localhost ~]# vi /etc/yum.repos.d/yum.repo
[mdadm]
name=mdadm
baseurl=file:///opt/mdadm_yum/
gpgcheck=0
enabled=1
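As an optional sanity check, clear the YUM cache and confirm that the new local repository is visible:
[root@localhost ~]# yum clean all
[root@localhost ~]# yum repolist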
Install the mdadm tool
[root@localhost ~]# yum install -y mdadm
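If the installation succeeded, the package can be confirmed with an optional query:
[root@localhost ~]# rpm -q mdadm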
Create a RAID 0 device: /dev/sdb and /dev/sdc are used for this experiment.
Build /dev/sdb and /dev/sdc into an array named md0 with RAID level 0.
[root@localhost ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc
mdadm: chunk size defaults to 512K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Check the RAID arrays currently active on the system
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
41908224 blocks super 1.2 512k chunks
unused devices: <none>
View detailed RAID information
[root@localhost ~]# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=64765953:865164fd:a92a955f:dd35becc
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Feb 13 21:27:27 2021
Raid Level : raid0
Array Size : 41908224 (39.97 GiB 42.91 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Feb 13 21:27:27 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : unknown
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 64765953:865164fd:a92a955f:dd35becc
Events : 0
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
Generate the configuration file mdadm.conf
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
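/etc/mdadm.conf lets mdadm reassemble the array by UUID, for example at boot or after the array has been stopped. A minimal sketch of a manual stop and reassembly from that file (optional, safe here because md0 is not yet mounted):
[root@localhost ~]# mdadm -S /dev/md0 // stop the array
[root@localhost ~]# mdadm -A -s // assemble (scan) every array listed in /etc/mdadm.conf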
Create a filesystem on the new RAID device and mount it
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0 isize=256 agcount=16, agsize=654720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=10475520, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=5120, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none
[root@localhost ~]# mkdir /raid0/
[root@localhost ~]# mount /dev/md0 /raid0/
[root@localhost ~]# df -Th /raid0/
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 40G 33M 40G 1% /raid0
Configure automatic mounting at boot
[root@localhost ~]# blkid /dev/md0
/dev/md0: UUID="2b211c0e-25fa-4560-8ae3-dbeb8df2218b" TYPE="xfs"
[root@localhost ~]# echo "UUID=2b211c0e-25fa-4560-8ae3-dbeb8df2218b /raid0 xfs defaults 0 0" >> /etc/fstab
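To verify the fstab entry without rebooting, the filesystem can be unmounted and then remounted through mount -a (a quick check, assuming nothing is currently using /raid0):
[root@localhost ~]# umount /raid0/
[root@localhost ~]# mount -a
[root@localhost ~]# df -Th /raid0/ // should show /dev/md0 mounted again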
Deleting the RAID array
[root@localhost ~]# umount /raid0/ // unmount the filesystem
[root@localhost ~]# mdadm -S /dev/md0 // stop the array
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /etc/mdadm.conf // delete the configuration file
[root@localhost ~]# rm -rf /raid0/ // remove the mount point
[root@localhost ~]# mdadm --zero-superblock /dev/sdb // wipe the md superblock from the disk
[root@localhost ~]# mdadm --zero-superblock /dev/sdc // wipe the md superblock from the disk
[root@localhost ~]# vi /etc/fstab
UUID=2b211c0e-25fa-4560-8ae3-dbeb8df2218b /raid0 xfs defaults 0 0 // delete this line
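After the teardown, an optional check confirms that no md devices remain; md0 should no longer appear in either output:
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# lsblk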
2. Operations and maintenance
2.1 RAID 5 operations
Use four 20 GB disks: three of them (/dev/sdb, /dev/sdc, /dev/sdd) build the RAID 5 array, and the fourth (/dev/sde) is added as a hot spare.
[root@localhost ~]# mdadm -Cv /dev/md5 -l5 -n3 /dev/sdb /dev/sdc /dev/sdd --spare-devices=1 /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
-l5: RAID level 5
-n3: three member disks
--spare-devices: add one hot spare disk
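For reference, the short flags map onto these long options; the command below is the equivalent long form of the create command above, shown for readability only and not meant to be run a second time:
[root@localhost ~]# mdadm --create --verbose /dev/md5 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd --spare-devices=1 /dev/sde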
View detailed information about the RAID 5 array
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sat Feb 13 21:35:31 2021
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sat Feb 13 21:37:17 2021
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : unknown
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 21a8bc50:00c489e9:ea720012:c4e8a8b4
Events : 18
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
3 8 64 - spare /dev/sde
2.2 Simulating a disk failure
[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5
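While the hot spare takes over, the rebuild progress can be watched through /proc/mdstat; the recovery percentage is shown there until the resync finishes (an optional check):
[root@localhost ~]# watch -n 1 cat /proc/mdstat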
View detailed RAID information
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Number Major Minor RaidDevice State
3 8 64 0 active sync /dev/sde
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
0 8 16 - faulty /dev/sdb
Notice that the former hot spare /dev/sde has been pulled into the array to rebuild the RAID 5 data, while the original /dev/sdb is now marked as the faulty disk.
Hot-remove the faulty disk
[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md5
View detailed RAID information
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Number Major Minor RaidDevice State
3 8 64 0 active sync /dev/sde
1 8 32 1 active sync /dev/sdc
4 8 48 2 active sync /dev/sdd
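Once the failed disk has been physically replaced or repaired, it can be put back into the array as a new hot spare with the -a option from the parameter table. A sketch, assuming the replacement shows up as /dev/sdb again:
[root@localhost ~]# mdadm --zero-superblock /dev/sdb // clear any stale md metadata on the replacement
[root@localhost ~]# mdadm /dev/md5 -a /dev/sdb // add it back to md5 as a spare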
Format the RAID array and mount it
[root@localhost ~]# mkfs.xfs /dev/md5
mkfs.xfs: /dev/md5 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
mkfs reports that a filesystem already exists on the device; add -f to force the overwrite.
[root@localhost ~]# mkfs.xfs /dev/md5 -f
meta-data=/dev/md5 isize=256 agcount=16, agsize=654720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=10475520, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=5120, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md5 /mnt/
Verify the mount
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 18G 858M 17G 5% /
devtmpfs 479M 0 479M 0% /dev
tmpfs 489M 0 489M 0% /dev/shm
tmpfs 489M 6.7M 483M 2% /run
tmpfs 489M 0 489M 0% /sys/fs/cgroup
/dev/sda1 497M 125M 373M 25% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md5 40G 33M 40G 1% /mnt
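To make the RAID 5 array and its mount survive a reboot, the same persistence steps used for RAID 0 apply: save the array definition and add an fstab entry. A sketch; the UUID must be taken from the actual blkid output, and /mnt can be replaced with a dedicated mount point:
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
[root@localhost ~]# blkid /dev/md5
[root@localhost ~]# vi /etc/fstab // append: UUID=<uuid from blkid> /mnt xfs defaults 0 0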