3. Resizing the LVM volumes (add a new 1 GB SCSI disk)

    3-1. Resizing the volume group

        3-1-1. View the VG and the free PV

 

[root@localhost ~]# vgdisplay LVMonRaid | grep 'VG Size'

  VG Size               2.98 GB

[root@localhost ~]# pvdisplay /dev/sdf1 | grep 'PV Size'

  PV Size               1019.72 MB
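
The transcript assumes the new 1 GB SCSI disk has already been partitioned and labelled as the physical volume /dev/sdf1. If it had not, a minimal preparation sketch (assuming the new disk appears as /dev/sdf) would be:

fdisk /dev/sdf        # create one primary partition and set its type to 8e (Linux LVM)
pvcreate /dev/sdf1    # write an LVM label so the partition can join a volume group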

        3-1-2. Add the physical volume /dev/sdf1 to the volume group LVMonRaid

 

[root@localhost ~]# vgextend LVMonRaid /dev/sdf1

  Attempt to close device '/dev/cdrom' which is not open.

  Volume group "LVMonRaid" successfully extended

[root@localhost ~]# vgdisplay LVMonRaid | grep 'VG Size'

  VG Size               3.97 GB

 

        3-1-3. Remove the physical volume /dev/sdf1 from the volume group LVMonRaid

 

[root@localhost ~]# vgreduce LVMonRaid /dev/sdf1

   Removed "/dev/sdf1" from volume group "LVMonRaid"

[root@localhost ~]# vgdisplay LVMonRaid | grep 'VG Size'

  VG Size               2.98 GB
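
vgreduce only succeeds here because no logical extents are allocated on /dev/sdf1; a PV that still holds LV data has to be emptied with pvmove first (see 4-2). A quick pre-check, assuming the same PV path:

pvdisplay /dev/sdf1 | grep 'Allocated PE'    # should report 0 before removing the PV from the VG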

 

    3-2. Resizing the logical volumes

Note: in real work, always back up before resizing a logical volume, because a wrong command can cause data loss. When shrinking a filesystem with resize2fs, the filesystem must be unmounted first, and the space actually in use must be smaller than the new, reduced size. Growing a filesystem can be done directly and is not subject to these restrictions.
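
As a concrete illustration of that rule, a cautious shrink of an ext3 logical volume would look roughly like the sketch below (the 900M target is only an example):

umount /mnt/lv1                            # shrinking requires the filesystem to be offline
e2fsck -f /dev/LVMonRaid/LogicLV1          # resize2fs wants a freshly checked filesystem
resize2fs /dev/LVMonRaid/LogicLV1 900M     # shrink the filesystem first...
lvreduce -L 900M /dev/LVMonRaid/LogicLV1   # ...then shrink the LV to match
mount /dev/LVMonRaid/LogicLV1 /mnt/lv1

Growing goes in the opposite order: lvextend first, then resize2fs, as sections 3-2-2 and 3-2-3 demonstrate.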

        3-2-1. Check the filesystem and LV sizes

 

[root@localhost ~]# tune2fs -l /dev/LVMonRaid/LogicLV1 | grep Block

Block count:              262144

Block size:               4096

Blocks per group:         32768

[root@localhost ~]# lvdisplay /dev/LVMonRaid/LogicLV1 | grep 'LV Size'

  LV Size                1.00 GB
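
The two views agree: 262144 blocks × 4096 bytes per block = 1073741824 bytes = 1 GiB, which matches the reported LV size.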

 

        3-2-2. Grow the LV by 100 MB, then check the filesystem and LV sizes again

 

[root@localhost ~]# lvextend -L +100M /dev/LVMonRaid/LogicLV1

  Extending logical volume LogicLV1 to 1.10 GB

  Logical volume LogicLV1 successfully resized

[root@localhost ~]# tune2fs -l /dev/LVMonRaid/LogicLV1 | grep Block

Block count:              262144

Block size:               4096

Blocks per group:         32768

[root@localhost ~]# lvdisplay /dev/LVMonRaid/LogicLV1 | grep 'LV Size'             

  LV Size                1.10 GB

The filesystem size is unchanged, while the LV has grown by 100 MB.
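
lvextend can only take the extra 100 MB from free extents in the volume group; if the VG were full, the command would fail. A quick check before extending:

vgdisplay LVMonRaid | grep Free    # free physical extents available to lvextend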

 

        3-2-3. Grow the filesystem and check the change

 

[root@localhost ~]# resize2fs -f /dev/LVMonRaid/LogicLV1                        

resize2fs 1.39 (29-May-2006)

Resizing the filesystem on /dev/LVMonRaid/LogicLV1 to 287744 (4k) blocks.

resize2fs: Can't read an block bitmap while trying to resize /dev/LVMonRaid/LogicLV1

[root@localhost ~]# tune2fs -l /dev/LVMonRaid/LogicLV1 | grep Block             

Block count:              307200

Block size:               4096

Blocks per group:         32768
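
When resize2fs complains, as it does above with the block-bitmap error, a common remedy is to take the filesystem offline, check it, and rerun the resize. A cautious sequence, assuming the LV can be unmounted:

umount /mnt/lv1
e2fsck -f /dev/LVMonRaid/LogicLV1    # force a full check so resize2fs sees a clean filesystem
resize2fs /dev/LVMonRaid/LogicLV1    # with no size argument it grows to fill the whole LV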

 

        3-2-4. Shrink the LV

 

[root@localhost ~]# lvreduce -L 1G /dev/LVMonRaid/LogicLV1

  /dev/cdrom: open failed: Read-only file system

  WARNING: Reducing active logical volume to 1.00 GB

  THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce LogicLV1? [y/n]: y

  Reducing logical volume LogicLV1 to 1.00 GB

  Logical volume LogicLV1 successfully resized
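
Note that this lvreduce takes the LV back to 1.00 GB without shrinking the filesystem first, which is exactly the situation the warning at the start of 3-2 cautions against; in real work the filesystem would be reduced before the LV. On newer LVM2 releases the two steps can also be combined, for example:

lvreduce -r -L 1G /dev/LVMonRaid/LogicLV1    # --resizefs/-r shrinks the filesystem and the LV together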

 

4. Advanced LVM features

    4-1. Volume snapshots

        4-1-1. Create a snapshot bakLV1 of /dev/LVMonRaid/LogicLV1

 

[root@localhost ~]# lvcreate -L 500M -s -n bakLV1 /dev/LVMonRaid/LogicLV1        

  Logical volume "bakLV1" created
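
The 500 MB given to the snapshot only has to hold blocks that change on the origin after the snapshot is created, so it can be much smaller than the 1 GB origin LV. Its fill level should be watched so the snapshot can be removed or enlarged before it overflows, for example:

lvdisplay /dev/LVMonRaid/bakLV1 | grep 'Allocated to snapshot'    # how full the copy-on-write area is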

 

        4-1-2. Read the logical volume snapshot

 

Data on the LV at the time the snapshot was taken:

[root@localhost ~]# mount /dev/LVMonRaid/LogicLV1 /mnt/lv1

[root@localhost mnt]# ll /mnt/lv1

total 32

-rw------- 1 root  root   7168 08-23 23:46 aquota.group

-rw------- 1 root  root   8192 08-23 23:31 aquota.user

drwxrwxrwx 2 root  root  16384 08-22 20:38 lost+found

 

Data on the snapshot:

[root@localhost ~]# mount /dev/LVMonRaid/bakLV1 /mnt/lvbak/      

[root@localhost ~]# ll /mnt/lvbak/

total 32

-rw------- 1 root  root   7168 08-23 23:46 aquota.group

-rw------- 1 root  root   8192 08-23 23:31 aquota.user

drwxrwxrwx 2 root  root  16384 08-22 20:38 lost+found
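
Because the snapshot is a frozen, consistent view of LogicLV1, it is a convenient source for backups: the origin LV can stay mounted and in use while the copy is taken. A minimal sketch, writing the archive to a hypothetical /backup directory:

tar czf /backup/lv1-snapshot.tar.gz -C /mnt/lvbak .    # archive the point-in-time contents via the snapshot mount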

 

        4-1-3. Unmount and remove the snapshot

 

[root@localhost mnt]# umount /mnt/lvbak/

[root@localhost mnt]# lvremove /dev/LVMonRaid/bakLV1

Do you really want to remove active logical volume bakLV1? [y/n]: y

  Logical volume "bakLV1" successfully removed

 

    4-2. Moving volumes

        4-2-1. Check the PV space

 

[root@localhost mnt]# pvscan

  PV /dev/md1    VG LVMonRaid   lvm2 [1016.00 MB / 0    free]

  PV /dev/md5    VG LVMonRaid   lvm2 [1.99 GB / 1004.00 MB free]

  PV /dev/sdf1   VG LVMonRaid   lvm2 [1016.00 MB / 1016.00 MB free]

  Total: 3 [3.97 GB] / in use: 3 [3.97 GB] / in no VG: 0 [0   ]

 

        4-2-2. Move the data from /dev/md1 to /dev/sdf1, then check the PV space again

 

[root@localhost mnt]# pvmove /dev/md1 /dev/sdf1

  /dev/md1: Moved: 100.0%

[root@localhost mnt]# pvscan

  PV /dev/md1    VG LVMonRaid   lvm2 [1016.00 MB / 1016.00 MB free]

  PV /dev/md5    VG LVMonRaid   lvm2 [1.99 GB / 1004.00 MB free]

  PV /dev/sdf1   VG LVMonRaid   lvm2 [1016.00 MB / 0    free]

  Total: 3 [3.97 GB] / in use: 3 [3.97 GB] / in no VG: 0 [0   ]
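
Emptying a physical volume like this is the standard preparation for retiring the disk behind it. With all of its extents now on /dev/sdf1, /dev/md1 could be withdrawn from the volume group, for example:

vgreduce LVMonRaid /dev/md1    # detach the now-empty PV from the VG
pvremove /dev/md1              # wipe its LVM label so the device can be reused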

 

5. Removing the LVM setup (perform the removal steps below only after completing the disk quota lab)

    5-1. Remove the LVs

 

[root@localhost ~]# umount /mnt/lv1

[root@localhost ~]# umount /mnt/lv2

[root@localhost ~]# lvremove /dev/LVMonRaid/LogicLV1

Do you really want to remove active logical volume LogicLV1? [y/n]: y

  Logical volume "LogicLV1" successfully removed

[root@localhost ~]# lvremove /dev/LVMonRaid/LogicLV2

Do you really want to remove active logical volume LogicLV2? [y/n]: y

  Logical volume "LogicLV2" successfully removed

 

    5-2. Remove the VG

 

[root@localhost mnt]# vgremove LVMonRaid

   Volume group "LVMonRaid" successfully removed

 

    5-3. Remove the PVs

 

[root@localhost mnt]# pvremove /dev/md1

  Labels on physical volume "/dev/md1" successfully wiped

[root@localhost mnt]# pvremove /dev/md5

  Labels on physical volume "/dev/md5" successfully wiped
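
If /dev/sdf1 still carries an LVM label from the earlier steps, it can be wiped the same way:

pvremove /dev/sdf1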