RAID and Logical Volumes
A note before we begin:
Disks are replaceable; data is not.
Think twice before running any disk-related operation.
Whether or not you are fully confident, have a rollback plan ready.
I. RAID Management
(1) Introduction to RAID
RAID (Redundant Array of Independent Disks). Early systems mostly stored and accessed data on a single disk, which meant poor I/O performance, limited capacity, and a single point of failure. To solve this, multiple disks were combined into one logical unit, improving I/O throughput and capacity while adding a degree of redundancy; this is how RAID came about. The commonly used RAID levels are described below.
RAID0
- How it works: data is striped across N disks, yielding close to N times the read/write throughput of a single disk. Capacity is the sum of all member disks.
- Pros: both read/write performance and storage capacity improve.
- Cons: low data safety and no redundancy; the failure of any single disk in a RAID0 set destroys all data.
- Disk count: at least two disks.
RAID1
- How it works: every write is mirrored to a second disk; if one disk fails, the other can be used directly.
- Pros: high data safety, simple technology.
- Cons: expensive to implement; usable capacity is only half of the raw capacity.
- Disk count: an even number of disks (typically two).
RAID3
- How it works: one disk in the set is dedicated to storing XOR parity values (the parity disk). When a member disk fails, its data can be rebuilt from the XOR parity.
- Pros: improves read/write performance and provides redundancy at a lower cost than RAID1.
- Cons: under write-heavy workloads the parity disk is rewritten constantly, so it is more likely to fail.
- Disk count: at least three disks.
RAID5
- How it works: similar to RAID3, except that parity is distributed across all member disks; there is no dedicated parity disk.
- Pros: balances performance, data safety, and cost; the best all-round solution.
- Cons: writes are somewhat slower than on a single disk.
- Disk count: at least three disks.
RAID6
- How it works: like RAID5, but with a second, independent parity value, so up to two disks can fail at the same time and the data can still be rebuilt.
- Pros: fast reads and higher fault tolerance.
- Cons: slower writes and higher cost; often considered as an alternative to RAID10.
- Disk count: at least four disks.
RAID10
- How it works: combines the strengths of RAID1 and RAID0 by striping (RAID0) across mirrored pairs (RAID1).
- Pros: RAID0-level read/write performance with RAID1-level fault tolerance.
- Cons: very expensive; usable capacity is half of the raw capacity.
- Disk count: at least four disks, and the total must be even.
Further notes
When members fail but the failures stay within the redundancy limit, the array enters the degraded state.
In the degraded state both performance and reliability drop sharply.
Adding a new member disk to a degraded array puts it into the rebuild state.
A rebuild generates heavy disk I/O and hurts storage system performance.
The rebuild competes with production workloads for resources, which further lengthens the rebuild time.
Reliability during a rebuild is the same as in the degraded state.
A hot spare (HotSpare) is an online standby mechanism: a spare disk can be configured to step in automatically when an array degrades, triggering a rebuild. There are two kinds (see the sketch after this list):
Global hot spare: steps in when any array in the storage system degrades.
Dedicated hot spare: steps in only when one of its designated arrays degrades.
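A minimal sketch of attaching a spare to a Linux software RAID array with mdadm (assuming an existing array /dev/md5 and an unused disk /dev/sde, both hypothetical here):
# a disk added beyond the active member count becomes a spare of this array
mdadm /dev/md5 -a /dev/sde
# this behaves as a dedicated spare; sharing spares across arrays ("global"
# behavior) is configured with spare-group entries in /etc/mdadm.conf and
# requires mdadm --monitor to be running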
Hardware RAID vs. Software RAID
- Hardware RAID: implemented by a dedicated RAID card or controller.
- Pros: independent of the operating system, offloads RAID processing from the host, and makes disk replacement easier.
- Cons: higher cost; a failed controller must be replaced with a compatible model, and the configuration is hard to change.
- Software RAID: no dedicated RAID card; the operating system and CPU implement the RAID logic.
- Pros: low cost and simple to operate.
- Cons: lower RAID performance, arrays cannot be shared between operating systems, and both performance and reliability depend on the CPU.
(2) RAID Configuration
Software RAID arrays are managed with the mdadm command.
RAID0 Configuration
[root@rhce ~]# mdadm -C /dev/md0 -l 0 -n 2 /dev/sda /dev/sdb
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
mdadm: creates and manages RAID arrays
-C: create mode
/dev/md0: device name of the new array
-l: RAID level
0: RAID0
-n: number of active member disks
/dev/sda: member disk device
[root@rhce ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Aug 25 14:57:12 2023
Raid Level : raid0
Array Size : 41908224 (39.97 GiB 42.91 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Aug 25 14:57:12 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : -unknown-
Chunk Size : 512K
Consistency Policy : none
Name : rhce:0 (local to host rhce)
UUID : c1134565:c3145720:0077e0cb:a0166319
Events : 0
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
[root@rhce ~]# mkfs.ext4 /dev/md0
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: f477109d-43fd-4be1-8b80-3ce75c00adf3
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
[root@rhce ~]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md0 9:0 0 40G 0 raid0
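The array is assembled from the on-disk superblocks, but recording it in /etc/mdadm.conf gives it a stable name across reboots. A minimal sketch (assuming RHEL paths):
# append the array definition to mdadm's config file
mdadm --detail --scan >> /etc/mdadm.conf
# then mount it persistently via /etc/fstab, e.g. by the UUID reported here
blkid /dev/md0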
RAID1 Configuration
The -x 1 option below designates one hot-spare disk (/dev/sdc) in addition to the two active members.
[root@rhce ~]# mdadm -C /dev/md1 -l 1 -n 2 /dev/sda /dev/sdb -x 1 /dev/sdc
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@rhce ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Fri Aug 25 15:04:50 2023
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Aug 25 15:06:35 2023
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Name : rhce:1 (local to host rhce)
UUID : 10b89b07:a12c11c8:c8db35dd:79d24c1d
Events : 17
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 - spare /dev/sdc
[root@rhce ~]# mkfs.ext4 /dev/md1
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 5238528 4k blocks and 1310720 inodes
Filesystem UUID: 8550c865-1078-4b6d-8cf2-4d2d3204f19a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@rhce ~]# lsblk /dev/md1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md1 9:1 0 20G 0 raid1
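/proc/mdstat offers a quick live view of all md arrays and any running resync; a sketch:
cat /proc/mdstat
# refresh once a second while a resync or rebuild is in progress
watch -n 1 cat /proc/mdstat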
RAID5 Configuration
[root@rhce ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sd{a..c} -x 1 /dev/sdd
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@rhce ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Aug 25 15:17:33 2023
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Aug 25 15:19:18 2023
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : rhce:5 (local to host rhce)
UUID : c8da269e:85a80803:721a7849:70388d16
Events : 18
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
4 8 32 2 active sync /dev/sdc
3 8 48 - spare /dev/sdd
[root@rhce ~]# mkfs.ext4 /dev/md5
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 00088c6c-60dc-4eb6-a99e-701477ce1eae
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
[root@rhce ~]# lsblk /dev/md5
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md5 9:5 0 40G 0 raid5
RAID6 Configuration
[root@rhce ~]# mdadm -C /dev/md6 -l 6 -n 4 /dev/sd{a..d} -x 1 /dev/sde
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
[root@rhce ~]# mdadm /dev/md6
/dev/md6: 39.97GiB raid6 4 devices, 1 spare. Use mdadm --detail for more detail.
[root@rhce ~]# mdadm -D /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Fri Aug 25 15:24:55 2023
Raid Level : raid6
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Fri Aug 25 15:25:02 2023
State : clean, resyncing
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Resync Status : 10% complete
Name : rhce:6 (local to host rhce)
UUID : 931508e9:3bd8d9e6:444ae93b:fff78515
Events : 1
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 - spare /dev/sde
[root@rhce ~]# mkfs.ext4 /dev/md6
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 5e740994-118f-4384-97a9-94ced2117cb3
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
[root@rhce ~]# lsblk /dev/md6
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md6 9:6 0 40G 0 raid6
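The "Resync Status" line in the output above shows the initial sync still in progress. The md driver throttles resync bandwidth; the limits can be inspected and raised through sysctl (a sketch; values are in KiB/s per device):
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# temporarily raise the floor so an otherwise idle system resyncs faster
sysctl -w dev.raid.speed_limit_min=50000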
RAID10 Configuration
[root@rhce ~]# mdadm -C /dev/md0 -l 10 -n 4 /dev/sd{a..d}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@rhce ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Aug 25 15:39:48 2023
Raid Level : raid10
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Aug 25 15:39:48 2023
State : clean, resyncing
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Resync Status : 3% complete
Name : rhce:0 (local to host rhce)
UUID : 3b4213f8:21c6b204:bd24ec4a:250d2378
Events : 0
Number Major Minor RaidDevice State
0 8 0 0 active sync set-A /dev/sda
1 8 16 1 active sync set-B /dev/sdb
2 8 32 2 active sync set-A /dev/sdc
3 8 48 3 active sync set-B /dev/sdd
[root@rhce ~]# mkfs.ext4 /dev/md0
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 6e7895e3-9557-4097-925e-8985ecf95ccc
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
[root@rhce ~]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md0 9:0 0 40G 0 raid10
Deleting a RAID Array
Stop the array:
[root@rhce ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
Wipe the superblock metadata on each member disk:
[root@rhce ~]# mdadm --zero-superblock /dev/sd{a..d}
If the superblocks are not wiped, mdadm warns that the disks already belong to an array the next time you create one:
[root@rhce ~]# mdadm -C /dev/md0 -l 10 -n 4 /dev/sd{a..d}
mdadm: /dev/sda appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
mdadm: /dev/sdb appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
mdadm: /dev/sdc appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
mdadm: /dev/sdd appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
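As a broader alternative, wipefs can clear all known signatures (RAID superblock, filesystem, partition table) from a device; a sketch, to be used with care:
# erase every signature wipefs recognizes on the disk
wipefs -a /dev/sda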
(3) Simulating a RAID Disk Failure
Using RAID5 as the example: when one disk fails, the hot spare takes over automatically and the data is rebuilt.
[root@rhce ~]# mkdir /md5
[root@rhce ~]# mount /dev/md5 /md5/
[root@rhce ~]# df -H /dev/md5
Filesystem Size Used Avail Use% Mounted on
/dev/md5 42G 25k 40G 1% /md5
[root@rhce ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Aug 25 15:58:37 2023
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Aug 25 16:00:23 2023
State : active
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : rhce:5 (local to host rhce)
UUID : f0d45dfb:cb5696d9:1d0077e2:d785cba2
Events : 23
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
4 8 32 2 active sync /dev/sdc
3 8 48 - spare /dev/sdd
Write some data to the RAID5 array:
[root@rhce ~]# mount /dev/sr0 /cdrow/
mount: /cdrow: WARNING: source write-protected, mounted read-only.
[root@rhce ~]# cp -rf /cdrow/ /md5/ // copy the entire contents of the mounted media into the RAID5 filesystem
[root@rhce ~]# ls /md5/
cdrow lost+found
[root@rhce ~]# du -h /md5/
16K /md5/lost+found
108M /md5/cdrow/images/pxeboot
857M /md5/cdrow/images
1.2G /md5/cdrow/BaseOS/Packages
2.4M /md5/cdrow/BaseOS/repodata
1.2G /md5/cdrow/BaseOS
7.0G /md5/cdrow/AppStream/Packages
7.7M /md5/cdrow/AppStream/repodata
7.0G /md5/cdrow/AppStream
109M /md5/cdrow/isolinux
2.3M /md5/cdrow/EFI/BOOT/fonts
6.5M /md5/cdrow/EFI/BOOT
6.5M /md5/cdrow/EFI
9.1G /md5/cdrow
9.1G /md5/
Simulate a failure of /dev/sdb:
[root@rhce ~]# mdadm /dev/md5 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5
[root@rhce ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Aug 25 15:58:37 2023
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Aug 25 16:04:57 2023
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 10% complete
Name : rhce:5 (local to host rhce)
UUID : f0d45dfb:cb5696d9:1d0077e2:d785cba2
Events : 27
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
3 8 48 1 spare rebuilding /dev/sdd
4 8 32 2 active sync /dev/sdc
1 8 16 - faulty /dev/sdb
[root@rhce ~]# du -h /md5/
16K /md5/lost+found
108M /md5/cdrow/images/pxeboot
857M /md5/cdrow/images
1.2G /md5/cdrow/BaseOS/Packages
2.4M /md5/cdrow/BaseOS/repodata
1.2G /md5/cdrow/BaseOS
7.0G /md5/cdrow/AppStream/Packages
7.7M /md5/cdrow/AppStream/repodata
7.0G /md5/cdrow/AppStream
109M /md5/cdrow/isolinux
2.3M /md5/cdrow/EFI/BOOT/fonts
6.5M /md5/cdrow/EFI/BOOT
6.5M /md5/cdrow/EFI
9.1G /md5/cdrow
9.1G /md5/
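The du output confirms the data stays fully readable while the array rebuilds onto the spare. Once the rebuild finishes, the faulty disk can be detached and a replacement added back as the new spare; a sketch:
# detach the failed member from the array
mdadm /dev/md5 -r /dev/sdb
# after swapping in a replacement disk, add it as the new spare
mdadm /dev/md5 -a /dev/sdb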
II. Logical Volume Management
(1) Introduction to Logical Volumes
LVM (Logical Volume Manager) is a mechanism for managing disk storage on Linux, and most modern systems offer something similar. LVM makes it straightforward to grow and shrink volumes; traditional partitions can sometimes be resized too, but with a real risk of data loss.
LVM terminology
- Physical device: the storage device that actually holds the data, typically a block device; it can be a disk partition, a whole disk, a RAID array, or a SAN disk.
- Physical volume (PV): a physical device must be initialized as a physical volume before logical volumes can be created on it. LVM divides each PV into physical extents (PEs).
- Volume group (VG): a storage pool made up of one or more physical volumes. A PV can belong to only one VG. A VG may contain unused space and any number of logical volumes.
- Logical volume (LV): the "storage device" presented to applications, users, and the system.
(2) Logical Volume Configuration
The examples below use whole disks; you can also partition a disk first and build logical volumes on the partitions.
Creating logical volumes
The basic steps:
- Initialize the physical devices as physical volumes
- Create a volume group
- Create logical volumes
- Add a filesystem
Creating physical volumes
Use the pvcreate command to mark physical devices as physical volumes.
[root@rhce ~]# pvcreate /dev/sda /dev/sdb
Physical volume "/dev/sda" successfully created.
Physical volume "/dev/sdb" successfully created.
Use pvdisplay or pvs to view physical volume information.
[root@rhce ~]# pvdisplay
--- Physical volume ---
PV Name /dev/nvme0n1p2
VG Name rhel
PV Size <19.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4863
Free PE 0
Allocated PE 4863
PV UUID HPRENt-Y8n0-cr0N-1u8h-3oGG-RXgY-9oBMZf
"/dev/sda" is a new physical volume of "20.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sda
VG Name
PV Size 20.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID zJ0SHT-rdQO-NzL4-ySD3-G95F-LKKu-B0lEAg
"/dev/sdb" is a new physical volume of "20.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 20.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID BrBVT9-semZ-0i6u-wzgH-jdVd-chGj-k4XBZI
[root@rhce ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p2 rhel lvm2 a-- <19.00g 0
/dev/sda lvm2 --- 20.00g 20.00g
/dev/sdb lvm2 --- 20.00g 20.00g
Creating a volume group
Use the vgcreate command to combine one or more physical volumes into a volume group.
[root@rhce ~]# vgcreate vg0 /dev/sda /dev/sdb
Volume group "vg0" successfully created
This creates a volume group named vg0 whose size is the sum of the /dev/sda and /dev/sdb PVs (measured in PEs).
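The PE size defaults to 4 MiB, as the vgdisplay output below shows; it can be overridden at creation time with -s. A sketch with a hypothetical VG name and disk:
# create a VG with 8 MiB extents instead of the default 4 MiB
vgcreate -s 8M vg_big /dev/sdd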
Use vgdisplay or vgs to view volume group information.
[root@rhce ~]# vgdisplay
--- Volume group ---
VG Name rhel
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.00 GiB
PE Size 4.00 MiB
Total PE 4863
Alloc PE / Size 4863 / <19.00 GiB
Free PE / Size 0 / 0
VG UUID V6MZGK-aQ0s-oWqG-qppZ-oP7N-b7vo-mJlWZC
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 39.99 GiB
PE Size 4.00 MiB
Total PE 10238
Alloc PE / Size 0 / 0
Free PE / Size 10238 / 39.99 GiB
VG UUID Ge2w5w-Wx5w-4ALl-qe1Z-oqyr-wGvS-TRYIPz
[root@rhce ~]# vgs
VG #PV #LV #SN Attr VSize VFree
rhel 1 2 0 wz--n- <19.00g 0
vg0 2 0 0 wz--n- 39.99g 39.99g
Creating logical volumes
Use the lvcreate command to create logical volumes.
[root@rhce ~]# lvcreate -n lv01 -L 10G vg0
Logical volume "lv01" created.
This creates a logical volume named lv01, 10 GiB in size, from volume group vg0.
[root@rhce ~]# lvcreate -n lv02 -l 250 vg0
Logical volume "lv02" created.
This creates a logical volume of 250 PEs from vg0 (with 4 MiB PEs, 250 × 4 MiB = 1000 MiB).
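lvcreate also accepts percentage-based sizes with -l, which avoids PE arithmetic. A sketch with hypothetical LV names:
# half of the total VG capacity
lvcreate -n lv03 -l 50%VG vg0
# all remaining free space in the VG
lvcreate -n lv04 -l 100%FREE vg0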
Use lvdisplay or lvs to view logical volume information.
[root@rhce ~]# lvdisplay
--- Logical volume ---
LV Path /dev/rhel/swap
LV Name swap
VG Name rhel
LV UUID bwugab-kZjm-Eb2M-Z4Fv-eBkH-4ACa-YCxkrX
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2023-07-24 14:13:15 +0800
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/rhel/root
LV Name root
VG Name rhel
LV UUID v0OM7E-yWaP-eSF2-vCQZ-cg6P-Q3jg-G9bzS4
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2023-07-24 14:13:15 +0800
LV Status available
# open 1
LV Size <17.00 GiB
Current LE 4351
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/vg0/lv01
LV Name lv01
VG Name vg0
LV UUID bRX08g-50Y2-Tcnx-HAnU-bBf0-uWnf-eVGLqu
LV Write Access read/write
LV Creation host, time rhce, 2023-08-25 17:00:23 +0800
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/vg0/lv02
LV Name lv02
VG Name vg0
LV UUID hAqSV4-KbwI-xu6a-IwwB-4nZS-jjDb-CsdkEt
LV Write Access read/write
LV Creation host, time rhce, 2023-08-25 17:01:02 +0800
LV Status available
# open 0
LV Size 1000.00 MiB
Current LE 250
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
[root@rhce ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel -wi-ao---- <17.00g
swap rhel -wi-ao---- 2.00g
lv01 vg0 -wi-a----- 10.00g
lv02 vg0 -wi-a----- 1000.00m
Adding a filesystem
[root@rhce ~]# mkfs.xfs /dev/vg0/lv01
meta-data=/dev/vg0/lv01 isize=512 agcount=4, agsize=655360 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@rhce ~]# mkfs.ext4 /dev/vg0/lv02
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 256000 4k blocks and 64000 inodes
Filesystem UUID: b6263632-28cb-440a-ad99-1da7d43d7bd8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
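To actually use the new volumes, they must be mounted. A sketch with hypothetical mount points; for persistence, add matching /etc/fstab entries:
mkdir -p /data/lv01 /data/lv02
mount /dev/vg0/lv01 /data/lv01
mount /dev/vg0/lv02 /data/lv02
# example fstab lines:
# /dev/vg0/lv01  /data/lv01  xfs   defaults  0 0
# /dev/vg0/lv02  /data/lv02  ext4  defaults  0 0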
Growing a logical volume
The key steps:
- Grow the logical volume
- Grow the filesystem
If the volume group still has free space, it can be allocated to the logical volume directly; if not, initialize another physical volume and extend the volume group first to provide the space.
Extending the volume group
Use the vgextend command to add a new physical volume to the volume group.
[root@rhce ~]# vgextend vg0 /dev/sdc
Physical volume "/dev/sdc" successfully created.
Volume group "vg0" successfully extended
[root@rhce ~]# vgdisplay vg0
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size <59.99 GiB
PE Size 4.00 MiB
Total PE 15357
Alloc PE / Size 2810 / <10.98 GiB
Free PE / Size 12547 / 49.01 GiB
VG UUID Ge2w5w-Wx5w-4ALl-qe1Z-oqyr-wGvS-TRYIPz
Extending the logical volume
Use the lvextend command to grow the logical volume.
[root@rhce ~]# lvextend -L +5G /dev/vg0/lv01
Size of logical volume vg0/lv01 changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840 extents).
Logical volume vg0/lv01 successfully resized.
[root@rhce ~]# lvs /dev/vg0/lv01
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv01 vg0 -wi-a----- 15.00g
[root@rhce ~]# lvextend -l 500 /dev/vg0/lv02
Size of logical volume vg0/lv02 changed from 1000.00 MiB (250 extents) to 1.95 GiB (500 extents).
Logical volume vg0/lv02 successfully resized.
[root@rhce ~]# lvs /dev/vg0/lv02
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv02 vg0 -wi-a----- 1.95g
Sizing examples (a one-step variant that also resizes the filesystem is sketched after this list):
lvextend -L +5G: grow the LV by 5 GiB
lvextend -L 5G: set the LV to exactly 5 GiB
lvextend -l +500: grow the LV by 500 PEs
lvextend -l 500: set the LV to exactly 500 PEs
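lvextend's -r (--resizefs) flag grows the filesystem in the same step by calling fsadm internally; a minimal sketch:
# grow the LV by 5 GiB and resize the filesystem on it in one command
lvextend -r -L +5G /dev/vg0/lv01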
Growing the filesystem
Each filesystem type has its own command.
Growing an EXT4 filesystem
Use resize2fs to grow the filesystem.
[root@rhce ~]# resize2fs /dev/vg0/lv02
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/vg0/lv02 to 512000 (4k) blocks.
The filesystem on /dev/vg0/lv02 is now 512000 (4k) blocks long.
Growing an XFS filesystem
Use the xfs_growfs command; an XFS filesystem must be mounted before it can be grown.
[root@rhce ~]# mkdir /date
[root@rhce ~]# mount /dev/vg0/lv01 /date/
[root@rhce ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 728M 9.8M 718M 2% /run
/dev/mapper/rhel-root 17G 4.7G 13G 28% /
/dev/nvme0n1p1 1014M 292M 723M 29% /boot
tmpfs 364M 96K 364M 1% /run/user/0
/dev/sr0 9.0G 9.0G 0 100% /run/media/root/RHEL-9-2-0-BaseOS-x86_64
/dev/mapper/vg0-lv01 10G 104M 9.9G 2% /date
[root@rhce ~]# xfs_growfs /date/
meta-data=/dev/mapper/vg0-lv01 isize=512 agcount=4, agsize=655360 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2621440 to 3932160
[root@rhce ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 728M 9.8M 718M 2% /run
/dev/mapper/rhel-root 17G 4.7G 13G 28% /
/dev/nvme0n1p1 1014M 292M 723M 29% /boot
tmpfs 364M 96K 364M 1% /run/user/0
/dev/sr0 9.0G 9.0G 0 100% /run/media/root/RHEL-9-2-0-BaseOS-x86_64
/dev/mapper/vg0-lv01 15G 140M 15G 1% /date
Shrinking a logical volume
Shrinking is the reverse of growing: first shrink the filesystem, then shrink the logical volume; if required, also shrink the volume group and remove the physical devices from LVM.
XFS does not support shrinking, so the example below uses ext4.
Shrinking the filesystem
Unmount the filesystem:
[root@rhce ~]# umount /date1
Check the filesystem for errors:
[root@rhce ~]# e2fsck -f /dev/vg0/lv02
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg0/lv02: 11/128000 files (0.0% non-contiguous), 12890/512000 blocks
Resize the filesystem down to the target size:
[root@rhce ~]# resize2fs /dev/vg0/lv02 500M
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/vg0/lv02 to 128000 (4k) blocks.
The filesystem on /dev/vg0/lv02 is now 128000 (4k) blocks long.
Shrinking the logical volume
Use the lvresize command to set the new size.
[root@rhce ~]# lvresize -L 500M /dev/vg0/lv02
File system ext4 found on vg0/lv02.
File system size (500.00 MiB) is equal to the requested size (500.00 MiB).
File system reduce is not needed, skipping.
Size of logical volume vg0/lv02 changed from 1.95 GiB (500 extents) to 500.00 MiB (125 extents).
Logical volume vg0/lv02 successfully resized.
Verify that the shrink succeeded:
[root@rhce ~]# mount /dev/vg0/lv02 /date1
[root@rhce ~]# df -h /dev/vg0/lv02
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-lv02 475M 24K 440M 1% /date1
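For completeness, the shrink walkthrough above can be collapsed into one command: lvreduce also accepts -r (--resizefs), letting fsadm run the check-and-shrink sequence and refuse anything unsafe. A sketch, assuming the same lv02:
# shrink the ext4 filesystem and the LV together
lvreduce -r -L 500M /dev/vg0/lv02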
Shrinking the volume group
Use the vgreduce command to remove a physical volume from the volume group (if the PV still holds allocated extents, migrate them off first; see the pvmove sketch after the output below).
[root@rhce ~]# vgreduce vg0 /dev/sdc
Removed "/dev/sdc" from volume group "vg0"
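vgreduce only succeeds when the physical volume holds no allocated extents; if it does, migrate them to the other PVs in the VG first. A sketch:
# move all allocated extents off /dev/sdc onto the remaining PVs
pvmove /dev/sdc
vgreduce vg0 /dev/sdc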
Deleting logical volumes
Use the lvremove command to delete logical volumes:
[root@rhce ~]# lvremove /dev/vg0/lv01
Do you really want to remove active logical volume vg0/lv01? [y/n]: y
Logical volume "lv01" successfully removed.
[root@rhce ~]# lvremove /dev/vg0/lv02
Do you really want to remove active logical volume vg0/lv02? [y/n]: y
Logical volume "lv02" successfully removed.
Deleting the volume group
Use the vgremove command to delete the volume group:
[root@rhce ~]# vgremove vg0
Volume group "vg0" successfully removed
Deleting physical volumes
Use the pvremove command to remove the PV labels:
[root@rhce ~]# pvremove /dev/sd{a..c}
Labels on physical volume "/dev/sda" successfully wiped.
Labels on physical volume "/dev/sdb" successfully wiped.
Labels on physical volume "/dev/sdc" successfully wiped.
(3) LVM Command Summary
| Command | Purpose |
|---|---|
| pvs | Display physical volume information |
| pvdisplay | Display detailed physical volume attributes |
| pvmove | Move PEs from one physical volume to another |
| pvcreate | Initialize a physical volume for use by LVM |
| pvremove | Remove the LVM label from a physical volume |
| vgs | Display volume group information |
| vgdisplay | Display detailed volume group attributes |
| vgcreate | Create a volume group |
| vgremove | Delete a volume group |
| vgreduce | Remove a physical volume from a volume group |
| vgextend | Add a physical volume to a volume group |
| lvs | Display logical volume information |
| lvdisplay | Display detailed logical volume attributes |
| lvcreate | Create a logical volume |
| lvremove | Delete a logical volume |
| lvreduce | Shrink a logical volume |
| lvextend | Grow a logical volume |
| lvresize | Resize a logical volume (grow or shrink) |
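Putting it all together, a minimal end-to-end sketch from blank disk to mounted filesystem (device and names are hypothetical):
# whole disk -> PV -> VG -> LV -> xfs -> mount
pvcreate /dev/sdx
vgcreate vg_demo /dev/sdx
lvcreate -n lv_demo -l 100%FREE vg_demo
mkfs.xfs /dev/vg_demo/lv_demo
mount /dev/vg_demo/lv_demo /mnt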