RAID and LVM: Introduction and Exercises

RAID

RAID (Redundant Array of Independent Disks) combines many relatively inexpensive disks into a single disk group of very large capacity. For example, storing the same data in different places on multiple disks (hence "redundantly") improves read/write performance: because the data is spread across several disks, I/O operations can overlap in a balanced way, improving throughput. Using multiple disks also increases the mean time between failures (MTBF), and storing redundant data adds fault tolerance.

A disk array can also use the idea of parity checking: if any single disk in the array fails, the data can still be read, and during reconstruction the data is recomputed and written onto a new disk.

RAID implementations:

  • External disk array enclosures

External disk array enclosures are most often used with large servers. They support hot swapping, but such products are expensive.

  • Internal RAID controller cards

Internal RAID controller cards are cheap but demand more installation skill, so they are best suited to technical staff. Hardware arrays can provide online capacity expansion, dynamic RAID-level migration, automatic data recovery, drive roaming, high-speed caching, and similar features.

  • Software emulation

Software emulation means using the disk-management functionality of the network operating system itself to configure multiple disks attached to an ordinary SCSI card as logical volumes that form an array. A software array can provide data redundancy, but the performance of the disk subsystem drops, sometimes considerably, by around 30%, which slows the whole machine. It is therefore unsuitable for servers with heavy data traffic.

Common RAID levels:

  • RAID 0: [improved read/write performance; no fault tolerance (zero redundancy)]

Two or more physical disks are joined, in hardware or software, into one large volume group, and data is written across the disks in turn. In theory this greatly improves read/write performance (for example, with a three-disk RAID 0, read/write traffic is spread evenly over the 3 disks). The drawback is that when any one disk fails, the data on the entire volume group becomes unusable (the data is destroyed). Although the redundancy is zero, RAID 0 has the highest read/write performance of all RAID levels.
RAID 0 summary:
Disk space utilization: 100%, so the cost is the lowest.
Read performance: N × the read performance of a single disk.
Write performance: N × the write performance of a single disk.
Redundancy: none; the failure of any one disk makes the data unusable.
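
As a concrete illustration, a software RAID 0 array can be built with mdadm. This is a minimal sketch; the member devices /dev/sdb and /dev/sdc are placeholders for whatever idle disks the system actually has:
#mdadm -Cv /dev/md0 -a yes -l 0 -n 2 /dev/sdb /dev/sdc (stripe two disks into /dev/md0)
#mkfs.ext4 /dev/md0 (format the array)
#mount /dev/md0 /mnt/data (usable capacity is the sum of both disks)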

  • RAID 1: [data mirroring; no read/write performance gain]

Although RAID 0 improves read/write speed, it writes data across the disks in turn, i.e. the data is stored split up, so the failure of any one disk destroys the whole system's data. When the production environment does not demand high read/write speed but does require better data safety, RAID 1 is the answer.
RAID 1 is an array built from two disks whose capacity equals that of a single disk, because the second disk merely holds a "mirror" of the data. RAID 1 is clearly the most reliable kind of array, since it always keeps a complete backup copy of the data. Its performance is naturally not as good as RAID 0, but reads are somewhat faster than from a single disk, because data is read from whichever of the two disks responds faster. Write speed is usually slower, because data must be written to both disks and compared. RAID 1 arrays generally support hot swapping, i.e. a disk can be removed or replaced while the system is running, without shutting down. RAID 1 is very safe but also one of the more expensive RAID solutions, because two disks provide only one disk's worth of capacity. It is mainly used where data safety matters most and damaged data must be recoverable quickly.
Note that a given read is served from only one disk, not in parallel from both, so read performance depends on the faster disk. Writes are usually slower than on a single disk: although the two disks are written in parallel, the data on the two disks must be compared, which costs performance.

Summary:
Disk utilization is 50%, so the cost is higher, but the data is mirrored, which is safer. As for performance, a read is served from one of the two disks (the better-performing one), while a write goes to both disks in parallel; because consistency must be verified between them, write performance is below that of a single disk.
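
The corresponding RAID 1 sketch, again with /dev/sdb and /dev/sdc as placeholder disks:
#mdadm -Cv /dev/md0 -a yes -l 1 -n 2 /dev/sdb /dev/sdc (mirror two disks; usable capacity equals one disk)
#mdadm -D /dev/md0 (confirm that both members show as active sync)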

  • RAID 5: [data stored with parity; a compromise between RAID 0 and RAID 1]

RAID 5 is a compromise between RAID 0 and RAID 1. It has read speed close to RAID 0, with the addition of parity information, and writing is slightly slower than writing to a single disk. Because one piece of parity covers several pieces of data, RAID 5 uses disk space more efficiently than RAID 1 and has a relatively low storage cost, making it one of the most widely used solutions today.
RAID 5 saves the parity information for a disk's data onto the other disks in the set. The parity is not kept on one dedicated disk; it is distributed across every disk other than the one holding the corresponding data, so that the failure of any single device is not fatal. In other words, RAID 5 does not actually keep a backup of the real data; instead, when a disk fails, it attempts to rebuild the damaged data from the parity. This design "compromises" among read/write speed, data safety, and storage cost.
Note: all disks in a RAID 5 array should be the same size; with unequal sizes, the smallest capacity is used for every member. Ideally the disks should also spin at the same speed, otherwise performance suffers. The usable space is n-1 disks: RAID 5 has no dedicated parity disk, the parity is spread over all disks but consumes exactly one disk's worth of capacity. A RAID 5 array needs at least 3 disks.
Summary:
Disk space utilization: (N-1)/N, i.e. only one disk's worth of capacity is spent on parity.
Read performance: (N-1) × the read performance of a single disk, fairly close to RAID 0.
Write performance: writes are parallel, but composing and writing the parity can make it worse than a single disk.
Redundancy: at most one disk may fail.
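
A minimal RAID 5 sketch with a hot spare, assuming four idle placeholder disks /dev/sdb through /dev/sde:
#mdadm -Cv /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde (3 active members + 1 spare; usable capacity is (3-1) disks)
#mdadm -D /dev/md0 (verify the level and which member is the spare)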

  • RAID 10:

RAID 5 compromises on read/write speed and data safety because of disk cost, but most businesses care more about the value of the data itself than about the price of the disks, so production environments mainly use RAID 10.
As the name suggests, RAID 10 is a combination of RAID 1 + RAID 0. It needs at least 4 disks to build: the disks are first paired into RAID 1 mirrors to guarantee data safety, and RAID 0 is then applied across the RAID 1 pairs to raise read/write speed further. In theory, as long as the failed disks are not all within the same mirror pair, up to 50% of the disks can fail without losing data (for example, with four 2 TB disks the usable capacity is 4 TB, and the array survives two failures provided they hit different mirror pairs). Because RAID 10 inherits RAID 0's high read/write speed and RAID 1's data safety, it outperforms RAID 5 whenever cost is not the issue, and it has become a widely used storage technique.
Summary:
Disk space utilization: 50%.
Read performance: N/2 × the read performance of a single disk.
Write performance: N/2 × the write performance of a single disk.
Redundancy: fine as long as at least one disk in each mirror pair still works.

  • RAID 10 versus RAID 01:

RAID 01/10: depending on how they are combined, the result is RAID 10 or RAID 01; both are products of joining the RAID 0 and RAID 1 standards, striping data continuously in units of bits or bytes and reading/writing multiple disks in parallel while keeping a mirror of each disk for redundancy. The advantage is RAID 0's exceptional speed combined with RAID 1's high data reliability, at the cost of higher CPU usage and low disk utilization. RAID 1+0 mirrors first and stripes second: all disks are split into two groups, each pair forming a minimal RAID 1 set, and RAID 0 then runs across those mirror sets. RAID 0+1 does the opposite: it stripes first and then mirrors, i.e. all disks form two minimal RAID 0 sets, which are then treated as a RAID 1 pair. In read/write performance, RAID 0+1 is sometimes credited with a slight edge over RAID 1+0. In reliability they differ clearly: when one disk in a RAID 1+0 fails, the remaining three keep running; in RAID 0+1, a single disk failure takes the other disk of that RAID 0 group offline too, leaving only two disks running, so reliability is lower. RAID 10 is therefore far more commonly used than RAID 01; most retail motherboards support RAID 0/1/5/10 but not RAID 01.

Files and commands involved in RAID:

Command: #mdadm
Files:

  • /etc/mdadm.conf: // saved array information, read when assembling (starting) a RAID array

  • /proc/mdstat: // current status of active RAID arrays

The mdadm command is used to manage software RAID arrays on a Linux system.
Format: "mdadm [mode] <RAID device name> [options] [member devices]".

Common mdadm parameters and their functions:
Parameter  Function
-a         automatically create the device file (e.g. -a yes)
-n         specify the number of disks
-l         specify the RAID level
-C         create a RAID array
-v         show verbose output
-x         specify the number of extra spare disks
-Q         show summary information
-D         show detailed information
-r         remove a device
-f         simulate a device failure
-S         stop a RAID array; after -S /dev/md0, md0's running information is gone. If the array will be used again, save its information to /etc/mdadm.conf first.
-A         assemble (start) a RAID array; before starting an array, its saved information must exist in /etc/mdadm.conf
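
The -f, -r and -a options from the table combine into a disk-replacement drill. A sketch, assuming an existing array /dev/md0 whose member /dev/sdb5 we pretend has failed:
#mdadm /dev/md0 -f /dev/sdb5 (mark the member as faulty)
#mdadm /dev/md0 -r /dev/sdb5 (remove the faulty member from the array)
#mdadm /dev/md0 -a /dev/sdb5 (add it, or a replacement disk, back; the array rebuilds onto it)
#mdadm -D /dev/md0 (watch the rebuild progress)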

Example:

[root@linuxprobe ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

Here, -C creates a RAID array; -v shows the creation process; the device name /dev/md0 appended after it becomes the name of the resulting RAID array; -a yes automatically creates the device file; -n 4 says 4 disks are used to build the array; -l 10 selects the RAID 10 scheme; and finally the names of the 4 member disks are listed. That is all it takes.
Example: stop and then restart a given RAID array.
Note: after a RAID array has been stopped, its configuration must first be saved to the RAID configuration file /etc/mdadm.conf before the array can be started again.
How to save it: #mdadm -D --scan > /etc/mdadm.conf
Only after the information is saved in the configuration file can mdadm -A start the array.
#mdadm -D --scan /dev/md* > /etc/mdadm.conf
#mdadm -A /dev/md0
Example: stop a given RAID array without restarting it:
mdadm -S /dev/md0 (stop the assembled array)
rm -rf /dev/md0 (remove the RAID device file)
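
While an array is initializing, rebuilding or resyncing, /proc/mdstat shows the live status and progress. A small usage sketch:
#cat /proc/mdstat (one-off snapshot of all active arrays)
#watch -n 1 cat /proc/mdstat (refresh the status every second; press Ctrl+C to stop)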

LVM

The ext family of filesystems supports both growing and shrinking.
The XFS filesystem supports growing only.

1. LVM basics

The disk-management techniques covered so far improve read/write speed and data safety, but once a disk has been partitioned or built into a RAID array, resizing the partitions is hard. In other words, when users want to adjust partition sizes as real needs change, they run into the disks' lack of "flexibility". This is where another very widespread disk-resource management technique comes in: LVM (Logical Volume Manager). LVM lets users resize disk resources dynamically.
The Logical Volume Manager is a Linux mechanism for managing disk partitions. It is fairly theoretical; it was created to overcome the difficulty of resizing partitions after they have been created. Forcibly growing or shrinking a traditional partition is theoretically possible but risks losing data. LVM inserts a logical layer between partitions and filesystems: it provides an abstract volume group into which multiple disks can be merged. Users then no longer need to care about the layout of the underlying physical disks and can resize storage dynamically. The LVM architecture is layered as follows.
Physical volumes (PV) sit at the bottom of LVM; a PV can be a whole physical disk, a disk partition, or a RAID array.
A volume group (VG) is built on top of physical volumes; one VG can contain several PVs, and more PVs can be added after the VG has been created.
Logical volumes (LV) are built from the free space in a volume group, and an LV can be grown or shrunk dynamically after creation. That is the core idea of LVM.
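
Put together, the PV → VG → LV layering maps onto three commands run in order. A minimal sketch, assuming two spare partitions /dev/sdc1 and /dev/sdc2 that are already typed as Linux LVM (8e):
#pvcreate /dev/sdc1 /dev/sdc2 (initialize the partitions as physical volumes)
#vgcreate storage /dev/sdc1 /dev/sdc2 (pool them into a volume group named storage)
#lvcreate -n vo -L 500M storage (carve a 500 MB logical volume named vo out of the group)
#mkfs.ext4 /dev/storage/vo (from here on the LV is used like any other block device)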

2. Commands used to deploy LVM

Generally, it is impossible to predict precisely how each partition will be used in production, so the space originally allocated runs out. For example, as business volume grows, the database directory holding transaction records grows with it, and logging and analyzing user behaviour makes the log directory swell; either way the original partitions become too tight. The opposite also happens: an oversized partition needs to be shrunk.
LVM solves these problems. Deploying it means configuring, in order, the physical volumes, the volume group, and the logical volumes. The commonly used commands are the pvcreate/vgcreate/lvcreate families together with their display (pvdisplay/vgdisplay/lvdisplay), extend (vgextend/lvextend), and remove (pvremove/vgremove/lvremove) counterparts.
#e2fsck -f /dev/storage/vo (check filesystem integrity)
#resize2fs /dev/storage/vo (ext2/3/4 only; both grow and shrink; synchronizes the filesystem after a capacity change, i.e. tells the filesystem that vo's size changed; when shrinking, append the new, smaller size)
#xfs_growfs /dev/storage/vo (XFS only; growing only, no shrinking; synchronizes the filesystem after a capacity change)
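
Note that growing and shrinking an ext4 logical volume use opposite orders of operations: growing can be done online (extend the LV first, then resize the filesystem), while shrinking must be done offline and in reverse (shrink the filesystem first, then the LV). A sketch using the same /dev/storage/vo volume from above:
#lvextend -L 800M /dev/storage/vo (grow: enlarge the LV first)
#resize2fs /dev/storage/vo (then grow the filesystem to fill it; online is fine)
#umount /dev/storage/vo (shrink: unmount first)
#e2fsck -f /dev/storage/vo (force a filesystem check)
#resize2fs /dev/storage/vo 300M (shrink the filesystem to the target size first)
#lvreduce -L 300M /dev/storage/vo (then shrink the LV to match)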

Exercises:

RAID exercises:

Difference between using partitions and whole disks for RAID: before a disk partition can be used in an array, its partition type must be set to support RAID (type fd); a whole disk can be added to an array directly.

  • 1. Turn the entire /dev/sdb disk into logical partitions: create and use five partitions, /dev/sdb5 through /dev/sdb9, each 2 GB, and make them support RAID.
[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x75358bf9.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): e
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): 
Using default value 41943039
Partition 1 of type Extended and of size 20 GiB is set

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (4096-41943039, default 4096): 
Using default value 4096
Last sector, +sectors or +size{K,M,G} (4096-41943039, default 41943039): +2G
Partition 5 of type Linux and of size 2 GiB is set

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x75358bf9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496    5  Extended
/dev/sdb5            4096     4198399     2097152   83  Linux

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 6
First sector (4200448-41943039, default 4200448): 
Using default value 4200448
Last sector, +sectors or +size{K,M,G} (4200448-41943039, default 41943039): +2G
Partition 6 of type Linux and of size 2 GiB is set

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 7
First sector (8396800-41943039, default 8396800): 
Using default value 8396800
Last sector, +sectors or +size{K,M,G} (8396800-41943039, default 41943039): +2G
Partition 7 of type Linux and of size 2 GiB is set

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 8
First sector (12593152-41943039, default 12593152): 
Using default value 12593152
Last sector, +sectors or +size{K,M,G} (12593152-41943039, default 41943039): +2G
Partition 8 of type Linux and of size 2 GiB is set

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 9
First sector (16789504-41943039, default 16789504): 
Using default value 16789504
Last sector, +sectors or +size{K,M,G} (16789504-41943039, default 41943039): +2G
Partition 9 of type Linux and of size 2 GiB is set

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x75358bf9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496    5  Extended
/dev/sdb5            4096     4198399     2097152   83  Linux
/dev/sdb6         4200448     8394751     2097152   83  Linux
/dev/sdb7         8396800    12591103     2097152   83  Linux
/dev/sdb8        12593152    16787455     2097152   83  Linux
/dev/sdb9        16789504    20983807     2097152   83  Linux

Command (m for help): t
Partition number (1,5-9, default 9): 
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1,5-9, default 9): 8
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1,5-9, default 9): 7
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1,5-9, default 9): 6
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1,5-9, default 9): 5
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x75358bf9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496    5  Extended
/dev/sdb5            4096     4198399     2097152   fd  Linux raid autodetect
/dev/sdb6         4200448     8394751     2097152   fd  Linux raid autodetect
/dev/sdb7         8396800    12591103     2097152   fd  Linux raid autodetect
/dev/sdb8        12593152    16787455     2097152   fd  Linux raid autodetect
/dev/sdb9        16789504    20983807     2097152   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# partprobe 
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.
  • 2. Build /dev/sdb5, /dev/sdb6 and /dev/sdb7 into a RAID 5 array, filesystem ext4, mounted at /mnt/data.
[root@localhost ~]# mdadm -Cv /dev/md0  -a yes  -n 3  -l 5  /dev/sdb[5-7]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2094080K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Mar 14 20:26:09 2020
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Mar 14 20:26:16 2020
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 33% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : f428a987:df6f969d:3f84f7bc:216cb448
            Events : 6

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync   /dev/sdb5
       1       8       22        1      active sync   /dev/sdb6
       3       8       23        2      spare rebuilding   /dev/sdb7
[root@localhost ~]# mount /dev/md0 /mnt/data
mount: /dev/md0 is write-protected, mounting read-only
mount: unknown filesystem type '(null)'
[root@localhost ~]# rmdir /mnt/data/
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@localhost ~]# mkdir /mnt/data
[root@localhost ~]# mount /dev/md0 /mnt/data
[root@localhost ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        49G  2.8G   47G   6% /
devtmpfs       devtmpfs  685M     0  685M   0% /dev
tmpfs          tmpfs     696M     0  696M   0% /dev/shm
tmpfs          tmpfs     696M  9.7M  686M   2% /run
tmpfs          tmpfs     696M     0  696M   0% /sys/fs/cgroup
/dev/sr0       iso9660   8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3      xfs        40G   33M   40G   1% /data
/dev/sda1      xfs       497M  123M  375M  25% /boot
tmpfs          tmpfs     140M     0  140M   0% /run/user/0
/dev/md0       ext4      3.9G   16M  3.7G   1% /mnt/data

  • 3. Build /dev/sdb[5-8] into a RAID 10 array named /dev/md0, filesystem ext4, mounted at /mnt/data.
[root@localhost ~]# umount /dev/md0
[root@localhost ~]# mdadm -S /dev/md0 
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

[root@localhost ~]# mdadm -Cv  /dev/md0 -a yes  -n 4 -l 10  /dev/sdb[5-8]
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb5 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sat Mar 14 20:26:09 2020
mdadm: /dev/sdb6 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sat Mar 14 20:26:09 2020
mdadm: /dev/sdb7 appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sat Mar 14 20:26:09 2020
mdadm: size set to 2094080K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mkfs.ext4 /dev/md0 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 
[root@localhost ~]# mkdir /mnt/data
[root@localhost ~]# mount /dev/md0 /mnt/data
[root@localhost ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        49G  2.8G   47G   6% /
devtmpfs       devtmpfs  685M     0  685M   0% /dev
tmpfs          tmpfs     696M     0  696M   0% /dev/shm
tmpfs          tmpfs     696M  9.7M  686M   2% /run
tmpfs          tmpfs     696M     0  696M   0% /sys/fs/cgroup
/dev/sr0       iso9660   8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3      xfs        40G   33M   40G   1% /data
/dev/sda1      xfs       497M  123M  375M  25% /boot
tmpfs          tmpfs     140M     0  140M   0% /run/user/0
/dev/md0       ext4      3.9G   16M  3.7G   1% /mnt/data

  • 4. Continuing from task 3, add /dev/sdb9 as a hot spare to the RAID 10 array /dev/md0.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Mar 14 20:32:47 2020
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Mar 14 20:34:04 2020
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 1b5ad6cd:39cf1ff8:f622da2e:7e60f108
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync set-A   /dev/sdb5
       1       8       22        1      active sync set-B   /dev/sdb6
       2       8       23        2      active sync set-A   /dev/sdb7
       3       8       24        3      active sync set-B   /dev/sdb8
[root@localhost ~]# mdadm /dev/md0  --add  /dev/sdb9
mdadm: added /dev/sdb9
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Mar 14 20:32:47 2020
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Mar 14 20:35:43 2020
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 1b5ad6cd:39cf1ff8:f622da2e:7e60f108
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync set-A   /dev/sdb5
       1       8       22        1      active sync set-B   /dev/sdb6
       2       8       23        2      active sync set-A   /dev/sdb7
       3       8       24        3      active sync set-B   /dev/sdb8

       4       8       25        -      spare   /dev/sdb9

  • 5. Build /dev/sdb[5-8] into a RAID 10 array with /dev/sdb9 added as a hot spare at creation time, filesystem ext4, mounted at /mnt/data.
[root@localhost ~]# umount /dev/md0
[root@localhost ~]# mdadm -S /dev/md0 
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
[root@localhost ~]# mdadm -Cv /dev/md0 -a yes  -n 4 -l 10 -x 1 /dev/sdb[5-9]
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb5 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sat Mar 14 20:32:47 2020
mdadm: /dev/sdb6 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sat Mar 14 20:32:47 2020
mdadm: /dev/sdb7 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sat Mar 14 20:32:47 2020
mdadm: /dev/sdb8 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sat Mar 14 20:32:47 2020
mdadm: /dev/sdb9 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sat Mar 14 20:32:47 2020
mdadm: size set to 2094080K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Mar 14 20:41:00 2020
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Mar 14 20:41:10 2020
             State : clean, resyncing 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 52% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 377d4e51:ef27c3f7:c6c4b37c:f88eb3b1
            Events : 8

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync set-A   /dev/sdb5
       1       8       22        1      active sync set-B   /dev/sdb6
       2       8       23        2      active sync set-A   /dev/sdb7
       3       8       24        3      active sync set-B   /dev/sdb8

       4       8       25        -      spare   /dev/sdb9
[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@localhost ~]# mount /dev/md0 /mnt/data/
[root@localhost ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        49G  2.8G   47G   6% /
devtmpfs       devtmpfs  685M     0  685M   0% /dev
tmpfs          tmpfs     696M     0  696M   0% /dev/shm
tmpfs          tmpfs     696M  9.7M  686M   2% /run
tmpfs          tmpfs     696M     0  696M   0% /sys/fs/cgroup
/dev/sr0       iso9660   8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3      xfs        40G   33M   40G   1% /data
/dev/sda1      xfs       497M  123M  375M  25% /boot
tmpfs          tmpfs     140M     0  140M   0% /run/user/0
/dev/md0       ext4      3.9G   16M  3.7G   1% /mnt/data
  • 6. Continuing from task 5, stop the RAID 10 array /dev/md0 and then start it again.
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid10] 
md0 : active raid10 sdb9[4](S) sdb8[3] sdb7[2] sdb6[1] sdb5[0]
      4188160 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
unused devices: <none>
[root@localhost ~]# mdadm -D --scan /dev/md0 > /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=377d4e51:ef27c3f7:c6c4b37c:f88eb3b1
[root@localhost ~]# umount /dev/md0
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=377d4e51:ef27c3f7:c6c4b37c:f88eb3b1
[root@localhost ~]# mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 4 drives and 1 spare.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Mar 14 20:41:00 2020
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat Mar 14 20:46:00 2020
             State : clean 
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 377d4e51:ef27c3f7:c6c4b37c:f88eb3b1
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync set-A   /dev/sdb5
       1       8       22        1      active sync set-B   /dev/sdb6
       2       8       23        2      active sync set-A   /dev/sdb7
       3       8       24        3      active sync set-B   /dev/sdb8

       4       8       25        -      spare   /dev/sdb9
[root@localhost ~]# mount /dev/md0 /mnt/data
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        49G  2.8G   47G   6% /
devtmpfs        685M     0  685M   0% /dev
tmpfs           696M     0  696M   0% /dev/shm
tmpfs           696M  9.7M  686M   2% /run
tmpfs           696M     0  696M   0% /sys/fs/cgroup
/dev/sr0        8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3        40G   33M   40G   1% /data
/dev/sda1       497M  123M  375M  25% /boot
tmpfs           140M     0  140M   0% /run/user/0
/dev/md0        3.9G   16M  3.7G   1% /mnt/data

7. Continuing from task 5, stop the RAID 10 array /dev/md0 and delete its saved information; /dev/md0 will no longer be used.

[root@localhost ~]# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=377d4e51:ef27c3f7:c6c4b37c:f88eb3b1
[root@localhost ~]# > /etc/mdadm.conf 
[root@localhost ~]# cat /etc/mdadm.conf 
[root@localhost ~]# umount /mnt/data/
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid10] 
md0 : active raid10 sdb5[0] sdb9[4](S) sdb8[3] sdb7[2] sdb6[1]
      4188160 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
unused devices: <none>
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 not identified in config file.
[root@localhost ~]# cat /etc/mdadm.conf

8. Create a RAID 1 device with 10 GB of usable space, a chunk size of 128 KB, an ext4 filesystem and one spare disk, automatically mounted at boot (by UUID) at /backup.
/dev/sdc1=10G /dev/sdc2=10G /dev/sdc3=10G
#fdisk /dev/sdc
#partprobe
#mdadm -Cv /dev/md0 -a yes --chunk=128K -n 2 -l 1 -x 1 /dev/sdc1 /dev/sdc2 /dev/sdc3
#mkfs.ext4 /dev/md0
#mkdir /backup
#ls -l /dev/disk/by-uuid (this directory lists /dev/md0's UUID; blkid /dev/md0 also shows the UUID, label and filesystem type)
#echo "UUID=03ff48df-6a2f-47cc-8f53-8f89c79ff423 /backup ext4 defaults 0 0" >> /etc/fstab
#mount /dev/md0 /backup
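
Before rebooting, it is worth confirming that the new fstab entry actually works, so any mistake shows up now rather than at boot time:
#umount /backup
#mount -a (mount everything listed in /etc/fstab; errors appear here instead of at boot)
#df -h | grep /backup (verify that /backup is mounted again)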
9. Create a RAID 10 device with 10 GB of usable space, a chunk size of 256 KB and an ext4 filesystem, automatically mounted at boot at /mydata.
/dev/sdb1, /dev/sdb2, /dev/sdb3 and /dev/sdb5, each 5 GB;
#fdisk /dev/sdb
#partprobe
#mdadm -Cv /dev/md0 -a yes --chunk=256K -n 4 -l 10 /dev/sdb{1,2,3,5}
#mkfs.ext4 /dev/md0
#mkdir /mydata
#echo "/dev/md0 /mydata ext4 defaults 0 0" >> /etc/fstab
#mount /dev/md0 /mydata

LVM exercises:

When using disk partitions for LVM, set each partition's system ID to support LVM at partitioning time, i.e. change it to 8e.
#e2fsck -f /dev/storage/vo (check filesystem integrity)
#resize2fs /dev/storage/vo (ext2/3/4 only; both grow and shrink; synchronizes the filesystem after a capacity change, i.e. tells the filesystem that vo's size changed)
#xfs_growfs /dev/storage/vo (XFS only; growing only, no shrinking; synchronizes the filesystem after a capacity change)

  • 1. Partition /dev/sdc into /dev/sdc1, /dev/sdc2, /dev/sdc3 and /dev/sdc5, each 1 GB, and make them support LVM.
[root@localhost ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x26b06987.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +1G
Partition 1 of type Linux and of size 1 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (2-4, default 2): 
First sector (2099200-41943039, default 2099200): 
Using default value 2099200
Last sector, +sectors or +size{K,M,G} (2099200-41943039, default 41943039): +1G
Partition 2 of type Linux and of size 1 GiB is set

Command (m for help): t
Partition number (1,2, default 2): 
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x26b06987

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2099199     1048576   8e  Linux LVM
/dev/sdc2         2099200     4196351     1048576   8e  Linux LVM

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): p
Partition number (3,4, default 3): 
First sector (4196352-41943039, default 4196352): 
Using default value 4196352
Last sector, +sectors or +size{K,M,G} (4196352-41943039, default 41943039): +1G
Partition 3 of type Linux and of size 1 GiB is set

Command (m for help): t
Partition number (1-3, default 3): 
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): p

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x26b06987

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2099199     1048576   8e  Linux LVM
/dev/sdc2         2099200     4196351     1048576   8e  Linux LVM
/dev/sdc3         4196352     6293503     1048576   8e  Linux LVM

Command (m for help): n
Partition type:
   p   primary (3 primary, 0 extended, 1 free)
   e   extended
Select (default e): e
Selected partition 4
First sector (6293504-41943039, default 6293504): 
Using default value 6293504
Last sector, +sectors or +size{K,M,G} (6293504-41943039, default 41943039): 
Using default value 41943039
Partition 4 of type Extended and of size 17 GiB is set

Command (m for help): p

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x26b06987

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2099199     1048576   8e  Linux LVM
/dev/sdc2         2099200     4196351     1048576   8e  Linux LVM
/dev/sdc3         4196352     6293503     1048576   8e  Linux LVM
/dev/sdc4         6293504    41943039    17824768    5  Extended
Command (m for help): n
All primary partitions are in use
Adding logical partition 5
First sector (6295552-41943039, default 6295552): 
Using default value 6295552
Last sector, +sectors or +size{K,M,G} (6295552-41943039, default 41943039): +1G
Partition 5 of type Linux and of size 1 GiB is set
Command (m for help): t
Partition number (1-5, default 5): 
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x26b06987

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2099199     1048576   8e  Linux LVM
/dev/sdc2         2099200     4196351     1048576   8e  Linux LVM
/dev/sdc3         4196352     6293503     1048576   8e  Linux LVM
/dev/sdc4         6293504    41943039    17824768    5  Extended
/dev/sdc5         6295552     8392703     1048576   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost ~]# 
  • 2. Build LVM logical volumes: physical volumes /dev/sdc1 and /dev/sdc2; volume group named lvm; logical volume named example, with a PE size of 15 MB and a volume size of 150 MB; filesystem ext4; mount point /mnt/lvm.

Steps:
#pvcreate /dev/sdc1 /dev/sdc2
#vgcreate -s 15M lvm /dev/sdc1 /dev/sdc2
#lvcreate -l 10 -n example /dev/lvm/
#mkfs.ext4 /dev/lvm/example
#mkdir /mnt/lvm
#mount /dev/lvm/example /mnt/lvm
#df -h

[root@localhost ~]# pvcreate  /dev/sdc1 /dev/sdc2
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/sdc2" successfully created.
[root@localhost ~]# vgcreate  -s 15M  lvm  /dev/sdc1 /dev/sdc2
  Volume group "lvm" successfully created
[root@localhost ~]# vgdisplay 
  --- Volume group ---
  VG Name               lvm
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.99 GiB
  PE Size               15.00 MiB
  Total PE              136
  Alloc PE / Size       0 / 0   
  Free  PE / Size       136 / 1.99 GiB
  VG UUID               7zbAd1-abd7-a28T-IWgl-8z7f-np9l-9QHdOM
  [root@localhost ~]# lvcreate -n example -l 10  /dev/lvm
  Logical volume "example" created.
[root@localhost ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/lvm/example
  LV Name                example
  VG Name                lvm
  LV UUID                42bTc8-afMF-oUrw-HKgX-xZbA-84T9-EsA5JX
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2020-03-14 21:12:46 +0800
  LV Status              available
  # open                 0
  LV Size                150.00 MiB
  Current LE             10
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
[root@localhost ~]# mkfs.ext4 /dev/lvm/example 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
38456 inodes, 153600 blocks
7680 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
19 block groups
8192 blocks per group, 8192 fragments per group
2024 inodes per group
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 
[root@localhost ~]# mkdir /mnt/lvm
[root@localhost ~]# mount /dev/lvm/example  /mnt/lvm/
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 49G  2.8G   47G   6% /
devtmpfs                 685M     0  685M   0% /dev
tmpfs                    696M     0  696M   0% /dev/shm
tmpfs                    696M  9.7M  686M   2% /run
tmpfs                    696M     0  696M   0% /sys/fs/cgroup
/dev/sr0                 8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3                 40G   33M   40G   1% /data
/dev/sda1                497M  123M  375M  25% /boot
tmpfs                    140M     0  140M   0% /run/user/0
/dev/mapper/lvm-example  142M  1.6M  130M   2% /mnt/lvm

  • 3. Extend the /dev/lvm/example logical volume to 500 MB.

#lvextend -L 500M /dev/lvm/example
#resize2fs /dev/lvm/example
#df -h

[root@localhost ~]# lvextend -L 500M   /dev/lvm/example
  Rounding size to boundary between physical extents: 510.00 MiB.
  Size of logical volume lvm/example changed from 150.00 MiB (10 extents) to 510.00 MiB (34 extents).
  Logical volume lvm/example successfully resized.
[root@localhost ~]# resize2fs  /dev/lvm/example 
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/lvm/example is mounted on /mnt/lvm; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 4
The filesystem on /dev/lvm/example is now 522240 blocks long.

[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 49G  2.8G   47G   6% /
devtmpfs                 685M     0  685M   0% /dev
tmpfs                    696M     0  696M   0% /dev/shm
tmpfs                    696M  9.7M  686M   2% /run
tmpfs                    696M     0  696M   0% /sys/fs/cgroup
/dev/sr0                 8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3                 40G   33M   40G   1% /data
/dev/sda1                497M  123M  375M  25% /boot
tmpfs                    140M     0  140M   0% /run/user/0
/dev/mapper/lvm-example  491M  2.3M  463M   1% /mnt/lvm

  • 4. Shrink the extended /dev/lvm/example back down to 300 MB.

#umount /mnt/lvm
#e2fsck -f /dev/lvm/example
#resize2fs /dev/lvm/example 300M
#lvreduce -L 300M /dev/lvm/example
#mount /dev/lvm/example /mnt/lvm
#df -h

[root@localhost ~]# umount /mnt/lvm/
[root@localhost ~]# e2fsck -f /dev/lvm/example 
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/lvm/example: 11/129536 files (0.0% non-contiguous), 22762/522240 blocks
[root@localhost ~]# resize2fs  /dev/lvm///example 300M
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/lvm///example to 307200 (1k) blocks.
The filesystem on /dev/lvm///example is now 307200 blocks long.

[root@localhost ~]# lvreduce -L 300M   /dev/lvm/example 
  WARNING: Reducing active logical volume to 300.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvm/example? [y/n]: y
  Size of logical volume lvm/example changed from 510.00 MiB (34 extents) to 300.00 MiB (20 extents).
  Logical volume lvm/example successfully resized.
[root@localhost ~]# mount /dev/lvm/example /mnt/lvm
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 49G  2.8G   47G   6% /
devtmpfs                 685M     0  685M   0% /dev
tmpfs                    696M     0  696M   0% /dev/shm
tmpfs                    696M  9.7M  686M   2% /run
tmpfs                    696M     0  696M   0% /sys/fs/cgroup
/dev/sr0                 8.8G  8.8G     0 100% /mnt/cdrom
/dev/sda3                 40G   33M   40G   1% /data
/dev/sda1                497M  123M  375M  25% /boot
tmpfs                    140M     0  140M   0% /run/user/0
/dev/mapper/lvm-example  287M  2.1M  268M   1% /mnt/lvm

  • 5. Delete the logical volume example and the volume group lvm, and remove the physical volumes /dev/sdc[1-2].

#umount /mnt/lvm
#lvremove /dev/lvm/example
#vgremove /dev/lvm
#pvremove /dev/sdc1 /dev/sdc2

[root@localhost ~]# umount /mnt/lvm/
[root@localhost ~]# lvremove /dev/lvm/example 
Do you really want to remove active logical volume lvm/example? [y/n]: y
  Logical volume "example" successfully removed
[root@localhost ~]# lvscan 
[root@localhost ~]# vgremove  /dev/lvm
  Volume group "lvm" successfully removed
[root@localhost ~]# vgscan 
  Reading volume groups from cache.
[root@localhost ~]# pvremove /dev/sdc1 /dev/sdc2
  Labels on physical volume "/dev/sdc1" successfully wiped.
  Labels on physical volume "/dev/sdc2" successfully wiped.
[root@localhost ~]# pvscan 
  No matching physical volumes found

6. Create a snapshot of /dev/lvm/example (the logical volume being 300 MB at this point) and mount the snapshot at /users.
#lvcreate -n kz -L 300M -s /dev/lvm/example
#mkdir /users
#mount /dev/lvm/kz /users
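
A snapshot is usually taken so the original volume can be rolled back later. One way to do this is lvconvert --merge, which restores the origin from the snapshot and deletes the snapshot in the process; a sketch, assuming both volumes have been unmounted first:
#umount /users
#umount /mnt/lvm
#lvconvert --merge /dev/lvm/kz (roll /dev/lvm/example back to its snapshot state; kz disappears afterwards)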
