LinuxNote Chapter 7: RAID and LVM Disk Array Technology

7.1 RAID Disk Arrays

RAID levels and their characteristics

  • RAID 0: joins multiple physical disks (at least two) into one large volume, by hardware or software, and writes data across the members in turn; this noticeably raises throughput, but provides no redundancy or error recovery.
  • RAID 1: binds two or more disks together and writes each piece of data to all of them at once (in effect a mirror, or live backup, of the data).
  • RAID 5: stores each disk's parity information on every member other than itself, so the loss of any single disk is not fatal; a "compromise" level that balances read/write speed, data safety, and storage cost.
  • RAID 10: a combination of RAID 1 and RAID 0 (mirrored pairs that are then striped), as the capacity sketch below illustrates.
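As a back-of-the-envelope sketch (not from the original text), here is the usable capacity each level yields from 5 GiB member disks, consistent with the sizes the mdadm output reports later in this chapter:

## Rough usable capacity by RAID level, assuming 5 GiB member disks
##   RAID 0 : 4 x 5 GiB = 20 GiB        striped, no redundancy
##   RAID 1 : 5 GiB (of 2 x 5 GiB)      every member holds a full copy
##   RAID 5 : (4 - 1) x 5 GiB = 15 GiB  one disk's worth of parity
##   RAID 10: 4 x 5 GiB / 2 = 10 GiB    mirrored pairs, then striped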

7.1.1 Deploying a Disk Array

Create the RAID device -- format it -- mount it and record it in /etc/fstab

mdadm

  • Purpose: manages software RAID arrays on a Linux system (multiple devices admin);
  • Format: mdadm [options] <device>

Option	Purpose
-a	add a device (used below to add a replacement disk)
-n	specify the number of devices
-l	specify the RAID level
-C	create an array
-v	show verbose output
-f	mark a member as faulty so it can be removed
-r	remove a device
-Q	show summary information
-D	show detailed information
-S	stop a RAID array
  • Example:
    from creation to mounting
## Create RAID device /dev/md0 verbosely from 4 devices at RAID level 10, namely /dev/sdc through /dev/sdf
[root@localhost ~]# mdadm -Cv /dev/md0 -n 4 -l 10 /dev/sd[c-f]
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[root@localhost ~]# mdadm -Q /dev/md0	# summary of the new array (-D shows the full details)
/dev/md0: 9.99GiB raid10 4 devices, 0 spares. Use mdadm --detail for more detail.
[root@localhost ~]# ls -l /dev/md*		# the new device node is visible too
brw-rw----. 1 root disk 9, 0 Jan 28 00:25 /dev/md0

## Format the RAID device
[root@localhost ~]# mkfs.ext4 /dev/md0 
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 2618880 4k blocks and 655360 inodes
Filesystem UUID: 69089c81-6d8e-4b24-8b71-e46c95c36be3
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

## Mount the RAID device
[root@localhost ~]# mkdir /RAID10
[root@localhost ~]# mount /dev/md0 /RAID10		# mount the device, then verify
[root@localhost ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/md0               9.8G   37M  9.3G   1% /RAID10
...

## Record the mount in /etc/fstab
[root@localhost ~]# echo "/dev/md0 /RAID10 ext4 defaults 0 0" >> /etc/fstab

7.1.2 Damaging and Repairing a Disk Array

Simulate the failure of one disk in the RAID 10 array -- unmount the device -- add a replacement disk -- remount

Note: before operating on a disk device, unmount it first to keep the data safe.

## 1. Simulate the failure of /dev/sde in /dev/md0 (-f marks it faulty so it can be removed)
[root@localhost ~]# mdadm /dev/md0 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 28 00:25:28 2021
        Raid Level : raid10
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Jan 28 01:00:09 2021
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : f5261ec7:f44b11a3:5ac4b78f:443e93f1
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync set-A   /dev/sdc
       1       8       48        1      active sync set-B   /dev/sdd
       -       0        0        2      removed							# the slot for /dev/sde now shows as removed
       3       8       80        3      active sync set-B   /dev/sdf

       2       8       64        -      faulty   /dev/sde	# marked faulty, awaiting removal/replacement

## 2. Unmount the RAID device
[root@localhost ~]# umount /dev/md0

## 3. Add a replacement disk
[root@localhost ~]# mdadm /dev/md0 -a /dev/sde		# -a adds the replacement disk
mdadm: added /dev/sde
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 28 00:25:28 2021
        Raid Level : raid10
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Jan 28 01:08:13 2021
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 34% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : f5261ec7:f44b11a3:5ac4b78f:443e93f1
            Events : 30

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync set-A   /dev/sdc
       1       8       48        1      active sync set-B   /dev/sdd
       4       8       64        2      spare rebuilding   /dev/sde		# the replacement disk is already rebuilding the data
       3       8       80        3      active sync set-B   /dev/sdf

## 4. Remount the RAID device
[root@localhost ~]# mount -a
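Rebuild progress can also be followed through /proc/mdstat instead of rerunning mdadm -D; a quick check:

[root@localhost ~]# cat /proc/mdstat		# the kernel's own view; a rebuilding array shows a "recovery = ...%" line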

7.1.3 Adding a Hot-Spare Disk to the Array

Although a damaged disk can be replaced by hand, a hot-spare disk steps in automatically the moment a member fails, keeping the array in service;
pass the -x parameter when creating the array to add spare disks, so let us redeploy the array that way.

## Build a RAID 5 array from 3 disks plus 1 hot spare, using /dev/sdc through /dev/sdf
[root@localhost ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sd[c-f]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 28 05:44:26 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Jan 28 05:44:53 2021
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : a3ed2e0c:10fdc8a0:0ef76476:4e4b6198
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       4       8       64        2      active sync   /dev/sde

       3       8       80        -      spare   /dev/sdf	# this member is the hot spare
       

Mount the array and verify the hot spare

## Format and mount the RAID 5 array
[root@localhost ~]# mkdir /RAID		
[root@localhost ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 2618880 4k blocks and 655360 inodes
Filesystem UUID: c5a51e3b-40a7-4939-9b32-baa193c15659
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M  9.6M  974M   1% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  4.0G   14G  24% /
/dev/sda1             1014M  152M  863M  15% /boot
tmpfs                  197M   16K  197M   1% /run/user/42
tmpfs                  197M  3.4M  194M   2% /run/user/0
/dev/sr0               6.7G  6.7G     0 100% /run/media/root/RHEL-8-0-0-BaseOS-x86_64
/dev/md0               9.8G   37M  9.3G   1% /RAID			# mounted successfully

## Simulate a failure in the RAID 5 array and check that the spare takes over automatically
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 28 05:44:26 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Jan 28 05:56:56 2021
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 47% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : a3ed2e0c:10fdc8a0:0ef76476:4e4b6198
            Events : 27

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       3       8       80        1      spare rebuilding   /dev/sdf		# when /dev/sdd fails, the spare kicks in and rebuilds the data automatically
       4       8       64        2      active sync   /dev/sde

       1       8       48        -      faulty   /dev/sdd
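Once the rebuild finishes, the faulty member can be dropped and, after the physical disk has been swapped, added back as the new hot spare; a sketch using the -r and -a options from the mdadm table above:

[root@localhost ~]# mdadm /dev/md0 -r /dev/sdd		# remove the member marked faulty
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdd		# once the physical disk is replaced, add it back as the new spare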

7.2 LVM (Logical Volume Manager)

Once disks have been partitioned, or deployed as a RAID array, resizing those partitions later is hard; that is where another very widespread storage-management technology comes in: LVM (Logical Volume Manager). LVM lets users adjust disk resources dynamically.
LVM inserts a logical layer between the disk partitions and the filesystem: it provides an abstract volume group into which multiple disks can be merged. Users then no longer need to care about the low-level layout of the physical disks and can resize their "partitions" on the fly.

The roles of the three layers can be understood as follows:

  • PV (physical volume): makes a disk usable by LVM;
  • VG (volume group): pools LVM-enabled disks into one large volume group;
  • LV (logical volume): a slice of the volume group, cut to the size you need;

How LVM is used:
first create the physical volumes (PV), then combine the PVs into a volume group (VG), and finally carve the VG into the logical volumes (LV) you need; an LV's size must be a multiple of the PE (physical extent, e.g. 4 MiB), as the rounding sketch below shows.
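Because LVs are allocated in whole PEs, LVM rounds a requested size up to the next extent boundary; a sketch of what that looks like with the default 4 MiB PE size (the demo LV name is made up, and storage1 is the volume group created in 7.2.1; the exact warning text may vary by version):

[root@localhost ~]# lvcreate -n demo -L 150M storage1	# 150 MiB is not a multiple of 4 MiB...
  Rounding up size to full physical extent 152.00 MiB	# ...so LVM rounds up to 38 PEs = 152 MiB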

Common LVM commands:

Function	PV management	VG management	LV management
Scan	pvscan	vgscan	lvscan
Create	pvcreate	vgcreate	lvcreate
Display	pvdisplay	vgdisplay	lvdisplay
Remove	pvremove	vgremove	lvremove
Extend	-	vgextend	lvextend
Shrink	-	vgreduce	lvreduce
Restore from snapshot	-	-	lvconvert

The xxscan commands show too little useful information; the xxdisplay variants are generally used instead.

lvcreate options:
-n: name of the new logical volume;
-L: size the LV by capacity, e.g. -L 150M;
-l: size the LV by number of PEs, e.g. -l 50 (with 4 MiB PEs that is 200 MiB);
-s: create a snapshot volume.
The resulting device is named /dev/<volume group>/<logical volume>.

7.2.1 Deploying a Logical Volume

Enable LVM on the disks (PV) -- add the PVs to a volume group (VG) -- carve out a logical volume (LV) -- format, mount, and record it in /etc/fstab
Do not format the LV as xfs here: xfs ships its own resize tooling, and an xfs filesystem cannot be shrunk at all.
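For contrast, had the LV been formatted as xfs, growing it would go through the xfs tools rather than resize2fs, with no shrink path at all; a sketch (reusing, as an assumption, the device and mount point set up below):

[root@localhost ~]# lvextend -L 2G /dev/storage1/tmp	# grow the LV as usual
[root@localhost ~]# xfs_growfs /logictmp		# xfs grows while mounted; it cannot be shrunk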

[root@localhost ~]# pvcreate /dev/sd[c-e]			# 1. Turn the three disks into physical volumes
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Physical volume "/dev/sde" successfully created.
[root@localhost ~]# vgcreate storage1 /dev/sd[c-e]	# 2. Create a volume group named storage1 from the three PVs
  Volume group "storage1" successfully created
[root@localhost ~]# vgdisplay storage1				# show the details of volume group storage1
  --- Volume group ---
  VG Name               storage1
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               <14.99 GiB		# total size of the volume group
  PE Size               4.00 MiB		# physical extent size
  Total PE              3837
  Alloc PE / Size       0 / 0   
  Free  PE / Size       3837 / <14.99 GiB	# free PEs
  VG UUID               DhFc4v-WQdv-LyEs-Kha2-HhxK-xkyk-XUE8fK
   
[root@localhost ~]# lvcreate -n tmp -L 1G storage1		# 3. Carve a 1 GiB logical volume named tmp out of the volume group
  Logical volume "tmp" created.
[root@localhost ~]# lvdisplay tmp		# note: lvdisplay treats a bare name as a volume group, so this fails
  Volume group "tmp" not found
  Cannot process volume group tmp
[root@localhost ~]# lvdisplay							
  --- Logical volume ---
  LV Path                /dev/storage1/tmp
  LV Name                tmp
  VG Name                storage1
  LV UUID                682mwu-QYKD-gmXY-kcGK-Z5Ef-dxIa-94kjkk
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2021-01-28 06:40:18 +0800
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
[root@localhost ~]# mkfs.ext4 /dev/storage1/tmp 		# 4. Format the LV, then mount it and record it in /etc/fstab
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 12665f90-7adc-4661-b765-7a07e7ab084e
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost ~]# mkdir /logictmp
[root@localhost ~]# echo "/dev/storage1/tmp /logictmp ext4 defaults 0 0" >> /etc/fstab
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  969M     0  969M   0% /dev
tmpfs                     984M     0  984M   0% /dev/shm
tmpfs                     984M  9.6M  974M   1% /run
tmpfs                     984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root      17G  4.0G   14G  24% /
/dev/sda1                1014M  152M  863M  15% /boot
tmpfs                     197M   16K  197M   1% /run/user/42
tmpfs                     197M  3.5M  194M   2% /run/user/0
/dev/sr0                  6.7G  6.7G     0 100% /run/media/root/RHEL-8-0-0-BaseOS-x86_64
/dev/mapper/storage1-tmp  976M  2.6M  907M   1% /logictmp		# mounted successfully; the size only looks smaller than 1 GiB because of unit conversion and filesystem overhead

7.2.2 Extending a Logical Volume

As noted earlier, unmount a device before operating on it, to keep the data safe;
unmount the device -- extend the LV -- check filesystem integrity -- resize the filesystem -- remount the device

[root@localhost ~]# umount /logictmp 					# 1. Unmount the device
[root@localhost ~]# lvextend -L 0.5G /dev/storage1/tmp 	# 2. Extend; note the size given is the size AFTER extension, otherwise you get the message below
  New size given (128 extents) not larger than existing size (256 extents)
[root@localhost ~]# lvextend -L 1.5G /dev/storage1/tmp 
  Size of logical volume storage1/tmp changed from 1.00 GiB (256 extents) to 1.50 GiB (384 extents).
  Logical volume storage1/tmp successfully resized.			
[root@localhost ~]# e2fsck -f /dev/storage1/tmp 		# 3. Check filesystem integrity and confirm there are no errors
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage1/tmp: 11/65536 files (0.0% non-contiguous), 12955/262144 blocks
[root@localhost ~]# resize2fs /dev/storage1/tmp 		# 4. Resize the filesystem so it learns that the logical volume has grown
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/storage1/tmp to 393216 (4k) blocks.
The filesystem on /dev/storage1/tmp is now 393216 (4k) blocks long.

[root@localhost ~]# mount -a							# 5. Remount the device
[root@localhost ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  969M     0  969M   0% /dev
tmpfs                     984M     0  984M   0% /dev/shm
tmpfs                     984M  9.6M  974M   1% /run
tmpfs                     984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root      17G  4.0G   14G  24% /
/dev/sda1                1014M  152M  863M  15% /boot
tmpfs                     197M   16K  197M   1% /run/user/42
tmpfs                     197M  3.4M  194M   2% /run/user/0
/dev/sr0                  6.7G  6.7G     0 100% /run/media/root/RHEL-8-0-0-BaseOS-x86_64
/dev/mapper/storage1-tmp  1.5G  3.0M  1.4G   1% /logictmp	# extension succeeded
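As a shortcut, lvextend's -r (--resizefs) option runs the matching filesystem resize for you, folding steps 2 through 4 into a single command; a sketch with the same device:

[root@localhost ~]# lvextend -r -L 1.5G /dev/storage1/tmp	# extends the LV and then grows the ext4 filesystem on it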

7.2.3 Shrinking a Logical Volume

The flow is the reverse of extension, because you must first prove the shrink is possible before performing it; that proof is the filesystem resize step.
Unmount the device -- check filesystem integrity -- resize the filesystem down to the target size -- shrink the LV -- remount the device

[root@localhost ~]# umount /logictmp 				# 1. Unmount the device
[root@localhost ~]# e2fsck -f /dev/storage1/tmp 	# 2. Check filesystem integrity (protects the data)
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage1/tmp: 11/98304 files (0.0% non-contiguous), 15140/393216 blocks
[root@localhost ~]# resize2fs /dev/storage1/tmp 1G		# 3. Resize the filesystem down to the target size and watch for errors; unlike extension, the target size is passed here
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/storage1/tmp to 262144 (4k) blocks.
The filesystem on /dev/storage1/tmp is now 262144 (4k) blocks long.

[root@localhost ~]# lvreduce -L 1G /dev/storage1/tmp 	# 4. Shrink the logical volume
  WARNING: Reducing active logical volume to 1.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce storage1/tmp? [y/n]: y
  Size of logical volume storage1/tmp changed from 1.50 GiB (384 extents) to 1.00 GiB (256 extents).
  Logical volume storage1/tmp successfully resized.
[root@localhost ~]# mount -a							# 5. Remount the device
[root@localhost ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  969M     0  969M   0% /dev
tmpfs                     984M     0  984M   0% /dev/shm
tmpfs                     984M  9.6M  974M   1% /run
tmpfs                     984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root      17G  4.0G   14G  24% /
/dev/sda1                1014M  152M  863M  15% /boot
tmpfs                     197M   16K  197M   1% /run/user/42
tmpfs                     197M  3.4M  194M   2% /run/user/0
/dev/sr0                  6.7G  6.7G     0 100% /run/media/root/RHEL-8-0-0-BaseOS-x86_64
/dev/mapper/storage1-tmp  976M  2.6M  907M   1% /logictmp	# the logical volume shrank successfully

7.2.4 Logical Volume Snapshots

This works like a virtual machine's restore-point feature: the logical volume can be rolled back to the state captured in a snapshot. LVM snapshots have two notable properties:

  • the snapshot volume's capacity must equal the capacity of the source logical volume;
  • a snapshot is valid only once: after a restore it is deleted automatically.

Check that the volume group has enough free space -- create the snapshot
Unmount the logical volume -- restore from the snapshot with: lvconvert --merge <snapshot device>
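To make the rollback visible, it helps to leave a marker file in place before the snapshot is taken and make a throwaway change after it; a hedged sketch wrapped around the commands below (the file names are made up):

[root@localhost ~]# echo snapshot-me > /logictmp/marker	# present when the snapshot is taken
## ...create the snapshot as shown below, then make a change to be undone:
[root@localhost ~]# dd if=/dev/zero of=/logictmp/junk bs=1M count=100	# written after the snapshot
## after lvconvert --merge and a remount, marker is back and junk is gone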

[root@localhost ~]# vgdisplay storage1 			# check whether the volume group has enough free space
  --- Volume group ---
  VG Name               storage1
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               <14.99 GiB
  PE Size               4.00 MiB
  Total PE              3837
  Alloc PE / Size       256 / 1.00 GiB
  Free  PE / Size       3581 / <13.99 GiB		# almost 14 GiB still free, plenty for a snapshot volume
  VG UUID               DhFc4v-WQdv-LyEs-Kha2-HhxK-xkyk-XUE8fK

## Create a 1 GiB snapshot of /dev/storage1/tmp named tmpSnap; -s makes it a snapshot volume
[root@localhost ~]# lvcreate -L 1G -s -n tmpSnap /dev/storage1/tmp
  Logical volume "tmpSnap" created.
[root@localhost ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/storage1/tmp
  LV Name                tmp
  VG Name                storage1
  LV UUID                682mwu-QYKD-gmXY-kcGK-Z5Ef-dxIa-94kjkk
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2021-01-28 06:40:18 +0800
  LV snapshot status     source of			
                         tmpSnap [active]				# the snapshot is active
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/storage1/tmpSnap
  LV Name                tmpSnap
  VG Name                storage1
  LV UUID                srwAu1-jyBB-Irvo-gsfj-6bKO-ZsDq-h8iBPf
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2021-01-28 08:23:12 +0800
  LV snapshot status     active destination for tmp			# this is the snapshot of tmp
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  COW-table size         1.00 GiB
  COW-table LE           256
  Allocated to snapshot  0.01%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5

[root@localhost ~]# umount /logictmp 							# unmount the device
[root@localhost ~]# lvconvert --merge /dev/storage1/tmpSnap 	# restore from the matching snapshot
  Merging of volume storage1/tmpSnap started.
  storage1/tmp: Merged: 100.00%
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage1/tmp
  LV Name                tmp
  VG Name                storage1
  LV UUID                682mwu-QYKD-gmXY-kcGK-Z5Ef-dxIa-94kjkk
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2021-01-28 06:40:18 +0800
  LV Status              available			# the snapshot information is gone; the snapshot volume itself has vanished
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

7.2.5 Deleting Logical Volumes

Unmount the device -- delete the entry in /etc/fstab -- remove the LV -- remove the VG -- remove the PVs -- verify

[root@localhost ~]# umount /logictmp 			# 1. Unmount the device
[root@localhost ~]# vim /etc/fstab 				# 2. Delete the mount entry
[root@localhost ~]# lvremove /dev/storage1/tmp 	# 3. Remove the logical volume
Do you really want to remove active logical volume storage1/tmp? [y/n]: y
  Logical volume "tmp" successfully removed
[root@localhost ~]# vgremove storage1			# 4. Remove the volume group
  Volume group "storage1" successfully removed
[root@localhost ~]# pvremove /dev/sd[c-e]		# 5. Remove the physical volumes
  Labels on physical volume "/dev/sdc" successfully wiped.
  Labels on physical volume "/dev/sdd" successfully wiped.
  Labels on physical volume "/dev/sde" successfully wiped.
[root@localhost ~]# lvdisplay					# 6. Run lvdisplay, vgdisplay, and pvdisplay to confirm everything is gone