LVM lets you resize filesystems flexibly, but it provides neither a performance boost nor protection against hardware failure (a snapshot is not a backup). A RAID array, conversely, offers performance and redundancy, but none of LVM's flexibility. In this scenario we build LVM on top of RAID to get the benefits of both.
Goal: build and test an LVM setup on top of a RAID array;
Prerequisites: disk-management skills covering both RAID and LVM;
How do we proceed? Work through the following procedure one step at a time:
1. Clean up the previous configuration:
Use umount to unmount the previously mounted filesystems;
Edit /etc/fstab so they are no longer mounted automatically at boot;
Use fdisk to delete the old partitions.
When you are done, the system should be left looking like this (/dev/sd{b,c,d,e,f} had each previously been formatted as ext3 with mkfs):
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdf doesn't contain a valid partition table
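The three cleanup steps above might look like this in practice; /mnt/old and /dev/sdb1 are purely hypothetical names standing in for whatever was mounted before:

```
[root@linux ~]# umount /mnt/old
[root@linux ~]# vi /etc/fstab    <== delete or comment out the /mnt/old line
[root@linux ~]# fdisk /dev/sdb   <== 'd' to delete the partition, 'w' to write and quit
```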
2. Create the RAID array. We use five 8 GB disks (/dev/sd{b,c,d,e,f}) to build a RAID-5 with one spare disk:
[root@linux ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd{b,c,d,e,f}
mdadm: /dev/sdb appears to contain an ext2fs file system
size=8388608K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdc appears to contain an ext2fs file system
size=8388608K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdd appears to contain an ext2fs file system
size=8388608K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sde appears to contain an ext2fs file system
size=8388608K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdf appears to contain an ext2fs file system
size=8388608K mtime=Thu Jan 1 08:00:00 1970
Continue creating array? y
mdadm: array /dev/md0 started.
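Before moving on, it is worth checking the arithmetic. With --level=5 and --raid-devices=4, one device's worth of space holds parity and the spare sits idle until a failure, so the usable capacity is (4 − 1) × 8 GB = 24 GB. A quick shell check, using the per-device size of 8388544 KB that mdadm reports in step 6:

```shell
# RAID-5 usable capacity = (raid-devices - 1) * per-device size;
# the spare disk contributes nothing until a rebuild is needed.
echo $(( (4 - 1) * 8 ))       # usable size in GB -> 24
echo $(( 3 * 8388544 ))       # in KB -> 25165632, matching "Array Size" below
```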
3. Set up LVM. We accept the default values for everything, including the PE size, and name the VG raidvg and the LV raidlv. The basic procedure is:
[root@linux ~]# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
[root@linux ~]# vgcreate raidvg /dev/md0
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Volume group "raidvg" successfully created
(The /dev/cdrom warnings are harmless: LVM scans every block device it finds, and the CD-ROM simply cannot be opened for writing.)
[root@linux ~]# lvcreate -l 6143 -n raidlv raidvg
Logical volume "raidlv" created
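Why -l 6143 (a PE count) rather than -L 24G? The PV is not an exact multiple of the 4 MB PE size, so asking for a full 24 GB outright would fail for lack of free extents. Dividing the array size by the PE size gives the extent count; ignoring LVM's small metadata overhead, this back-of-the-envelope check matches the pvdisplay figures shown in step 6:

```shell
# array size in KB divided by the PE size (4 MB = 4096 KB)
echo $(( 25165632 / 4096 ))            # total PEs -> 6143
# the remainder is what pvdisplay reports as "not usable"
echo $(( 25165632 - 6143 * 4096 ))     # 3904 KB, i.e. about 3.81 MB
```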
[root@linux ~]# lvdisplay
--- Logical volume ---
LV Name /dev/raidvg/raidlv
VG Name raidvg
LV UUID rBySS0-JxZ6-ANYe-Vp8G-xlUd-Rz1x-G6NjnT
LV Write Access read/write
LV Status available
# open 0
LV Size 24.00 GB
Current LE 6143
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 253:0
4. Create a filesystem and mount it
[root@linux ~]# mkdir /mnt/raidlvm
[root@linux ~]# mkfs -t ext3 /dev/raidvg/raidlv
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
3145728 inodes, 6290432 blocks
314521 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
192 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
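The mkfs figures line up with the LVM ones: 6290432 blocks of 4 KB each is exactly the 6143 PEs of 4 MB that make up the LV:

```shell
echo $(( 6290432 * 4 ))     # filesystem size in KB -> 25161728
echo $(( 6143 * 4096 ))     # LV size in KB -> the same 25161728
```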
[root@linux ~]# mount /dev/raidvg/raidlv /mnt/raidlvm/
5. Set up automatic mounting at boot
[root@linux raidlvm]# mdadm --detail /dev/md0 | grep UUID
UUID : 99de722a:bfd56556:7b3978e1:3bf3f4f9
[root@linux raidlvm]# cat /etc/mdadm.conf
ARRAY /dev/md0 UUID=99de722a:bfd56556:7b3978e1:3bf3f4f9
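The ARRAY line above need not be typed by hand: mdadm can generate it. A common approach (the exact output format varies slightly between mdadm versions, so treat this as a sketch) is to append the scan output to the config file:

```
[root@linux raidlvm]# mdadm --detail --scan >> /etc/mdadm.conf
```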
[root@linux raidlvm]# cat /etc/fstab | grep /mnt/raidlvm
/dev/raidvg/raidlv /mnt/raidlvm ext3 defaults 1 2
6. Verify the result
[root@linux raidlvm]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Fri Feb 17 22:26:44 2012
Raid Level : raid5
Array Size : 25165632 (24.00 GiB 25.77 GB)
Used Dev Size : 8388544 (8.00 GiB 8.59 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Feb 17 22:39:11 2012
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 99de722a:bfd56556:7b3978e1:3bf3f4f9
Events : 0.2
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 80 - spare /dev/sdf
[root@linux raidlvm]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[3] sdf[4](S) sdd[2] sdc[1] sdb[0]
25165632 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
[root@linux raidlvm]# pvscan
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
PV /dev/md0 VG raidvg lvm2 [24.00 GB / 0 free]
Total: 1 [24.00 GB] / in use: 1 [24.00 GB] / in no VG: 0 [0 ]
[root@linux raidlvm]# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name raidvg
PV Size 24.00 GB / not usable 3.81 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 6143
Free PE 0
Allocated PE 6143
PV UUID KgwVH9-HwTG-q4it-z0Ps-ACac-Si1y-8RxTkx
[root@linux raidlvm]# vgscan
Reading all physical volumes. This may take a while...
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Found volume group "raidvg" using metadata type lvm2
[root@linux raidlvm]# vgdisplay
--- Volume group ---
VG Name raidvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 24.00 GB
PE Size 4.00 MB
Total PE 6143
Alloc PE / Size 6143 / 24.00 GB
Free PE / Size 0 / 0
VG UUID zlM0TJ-fjR0-b2kO-rCpO-D6L9-zw0m-W3SVzp
[root@linux raidlvm]# lvscan
ACTIVE '/dev/raidvg/raidlv' [24.00 GB] inherit
[root@linux raidlvm]# lvdisplay
--- Logical volume ---
LV Name /dev/raidvg/raidlv
VG Name raidvg
LV UUID rBySS0-JxZ6-ANYe-Vp8G-xlUd-Rz1x-G6NjnT
LV Write Access read/write
LV Status available
# open 1
LV Size 24.00 GB
Current LE 6143
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 768
Block device 253:0
[root@linux ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3 5991232 2662984 3019000 47% /
/dev/sda1 101086 11373 84494 12% /boot
tmpfs 517548 0 517548 0% /dev/shm
/dev/mapper/raidvg-raidlv
24766844 176204 23332556 1% /mnt/raidlvm
[root@linux ~]# cd /mnt/raidlvm/
[root@linux raidlvm]# ll
total 20
drwx------ 2 root root 16384 02-17 22:37 lost+found
-rw-r--r-- 1 root root 6 02-17 22:38 tt
[root@linux raidlvm]# cat tt
aaaaa