Combining LVM with RAID is a mainstream approach to storage management today. Drawing on some online posts and the official documentation, I first built RAID1 + LVM and got the basic functionality working.
If you are not yet familiar with LVM, see the linked article below:
LVM terminology primer and operation guide
Overview: all my tests ran on VMware Server 1.0.6, with CentOS 5.4 as the OS.
Let's start building; the steps are roughly as follows:
1. Create the RAID1 device with mdadm. If you are not familiar with mdadm, see these articles:
http://space.itpub.net/9240380/viewspace-630880
http://space.itpub.net/9240380/viewspace-630895
[root@localhost dev]# !223    -- bash history recall; it expands to the mdadm command on the next line
mdadm --create /dev/md8 -l1 -n2 /dev/sdd1 /dev/sdb1
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Feb 14 10:36:43 2015
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Feb 14 10:36:43 2015
Continue creating array? yes
mdadm: array /dev/md8 started.
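At this point the two mirror halves start their initial resync in the background. A standard way to watch it (not part of the original transcript) is:
cat /proc/mdstat          # shows md8 as an active raid1 with [2/2] members and the resync percentage
mdadm --detail /dev/md8   # per-device state and sync status of the array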
[root@localhost dev]# mdadm --detail --scan    -- scan and print the configuration of the RAID array built above
ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90 UUID=8a87573f:f0802515:f9d540fc:c4782e7c
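The scan output above is in exactly the format mdadm's configuration file expects, so appending it to /etc/mdadm.conf lets the array assemble automatically at boot. A minimal sketch (the DEVICE line is an assumption about this disk layout):
echo 'DEVICE /dev/sdb1 /dev/sdd1' >> /etc/mdadm.conf   # restrict scanning to the member partitions
mdadm --detail --scan >> /etc/mdadm.conf               # persist the ARRAY line shown above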
[root@localhost ~]# vgremove vg01    -- a VG had been configured on this OS before, so the old VG setup must be removed first
/dev/cdrom: open failed: Read-only file system
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lv01? [y/n]: y
Logical volume "lv01" successfully removed
Volume group "vg01" successfully removed
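Before removing a volume group it is worth confirming what it holds; a quick check, using the old vg01/lv01 names from the transcript (an active, mounted LV would have to be unmounted first):
vgs         # list all volume groups and their free space
lvs vg01    # list the logical volumes inside vg01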
2. Build the PV, VG, and LV with LVM
[root@localhost ~]# pvcreate /dev/md8
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Physical volume "/dev/md8" successfully created
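The recurring /dev/cdrom warnings are harmless: by default LVM probes every block device and fails when it tries to open the read-only CD-ROM. If they bother you, a device filter in /etc/lvm/lvm.conf silences them; a sketch (adjust the pattern to your own devices):
# in the devices { } section of /etc/lvm/lvm.conf
filter = [ "r|/dev/cdrom|", "a|.*|" ]   # reject the CD-ROM, accept everything else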
[root@localhost ~]# vgcreate vg1 /dev/md8
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Volume group "vg1" successfully created
[root@localhost ~]# lvcreate -L 2.5GB -n lv1 vg1
Logical volume "lv1" created
[root@localhost ~]# lvscan
ACTIVE '/dev/vg1/lv1' [2.50 GB] inherit
[root@localhost ~]# pvscan
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
PV /dev/md8 VG vg1 lvm2 [2.99 GB / 504.00 MB free]
Total: 1 [2.99 GB] / in use: 1 [2.99 GB] / in no VG: 0 [0 ]
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Found volume group "vg1" using metadata type lvm2
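To see how the PV's 2.99 GB is laid out into extents, vgdisplay gives the PE size and the allocated/free extent counts (a standard check, not in the original transcript):
vgdisplay vg1   # shows PE Size, Total PE, Alloc PE and Free PE for the group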
3. Run mke2fs to create the ext3 filesystem (the -j option adds the journal) and mount it on CentOS 5.4
[root@localhost ~]# mke2fs -j /dev/vg1/lv1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
327680 inodes, 655360 blocks
32768 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=671088640
20 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
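As the output itself suggests, the periodic check can be tuned with tune2fs; for example, disabling both the mount-count and the time-based fsck (a common choice on test boxes, not part of the original transcript):
tune2fs -c 0 -i 0 /dev/vg1/lv1   # -c 0 drops the every-28-mounts check, -i 0 the 180-day check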
[root@localhost ~]# mkdir -pv /lun
[root@localhost ~]# mount /dev/vg1/lv1 /lun
[root@localhost ~]# df -hk    -- check how the LVM partition is mounted
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda2 17856888 6497720 10437444 39% /
tmpfs 216476 0 216476 0% /dev/shm
none 216388 104 216284 1% /var/lib/xenstored
/dev/hdc 3906842 3906842 0 100% /media/CentOS_5.4_Final
/dev/mapper/vg1-lv1 2580272 69448 2379752 3% /lun
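pvscan earlier showed about 504 MB of vg1 still unallocated, so /lun can be grown later without touching the RAID layer. A hedged sketch (resize2fs 1.39 can usually grow a mounted ext3 online; unmount first if it refuses):
lvextend -L +500M /dev/vg1/lv1   # grow the LV into the remaining free extents
resize2fs /dev/vg1/lv1           # then grow the ext3 filesystem to match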
4. Make the system mount the LVM partition automatically at the next reboot:
[root@localhost ~]# vi /etc/fstab
LABEL=/ / ext3 defaults 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-hda1 swap swap defaults 0 0
/dev/vg1/lv1 /lun ext3 defaults 2 2
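The new entry can be tested without rebooting, since mount -a applies every fstab entry that is not yet mounted:
umount /lun    # drop the manual mount from step 3 first
mount -a       # remount everything listed in /etc/fstab
df -h /lun     # confirm /dev/mapper/vg1-lv1 is back on /lun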