LVM can grow storage, but it has no fault tolerance of its own.
Physical volumes:
Build the disks into a RAID array first, then turn the array into a PV, and build LVM on top of that (a physical volume does not have to be a whole disk; it can also be a RAID device).
RAID provides the fault tolerance and PVs can be extended, so combining the two works better.
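A minimal sketch of that idea, as a dry run that only echoes the commands (the VG name `datavg`, the LV name `data`, and the member disks are illustrative, not from the session below; clear DRYRUN and run as root on scratch disks to execute for real):

```shell
# Dry-run sketch: RAID first, then LVM on top of the array.
run() { echo "+ $*"; [ -n "$DRYRUN" ] || "$@"; }
DRYRUN=1
run mdadm -C /dev/md5 -l5 -n4 -x1 /dev/sd[bcdef]  # fault-tolerant layer
run pvcreate /dev/md5                             # the array becomes the PV
run vgcreate datavg /dev/md5                      # pool it into a VG
run lvcreate -L 2G -n data datavg                 # extensible LV on top
```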
[root@localhost ~]# pvscan
PV /dev/sda2 VG VolGroup lvm2 [19.51 GiB / 0 free]
Total: 1 [19.51 GiB] / in use: 1 [19.51 GiB] / in no VG: 0 [0 ]
[root@localhost ~]# pvcreate /dev/sd[bcd]
Turn disks sdb, sdc and sdd into physical volumes
Writing physical volume data to disk "/dev/sdb"
Physical volume "/dev/sdb" successfully created
Writing physical volume data to disk "/dev/sdc"
Physical volume "/dev/sdc" successfully created
Writing physical volume data to disk "/dev/sdd"
Physical volume "/dev/sdd" successfully created
[root@localhost ~]#
[root@localhost ~]# pvdisplay
Show physical volume information
--- Physical volume --- this one existed already
PV Name /dev/sda2
VG Name VolGroup
PV Size 19.51 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4994
Free PE 0
Allocated PE 4994
PV UUID Pap7Ba-SA2w-3Ymo-tLFS-QG8H-bPsl-Tf04iJ
The following were just created:
"/dev/sdb" is a new physical volume of "10.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 10.00 GiB
Allocatable NO (a PV is not allocatable until it joins a volume group)
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 3XM4hj-w10O-YgTi-xEKW-05tA-dAZr-Ql3Ica
"/dev/sdc" is a new physical volume of "10.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdc
VG Name
PV Size 10.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 3Qc3sW-Hol0-ekui-2CbJ-ECuh-X3ao-Y0fGr5
"/dev/sdd" is a new physical volume of "10.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdd
VG Name
PV Size 10.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID cTsZmX-mV5F-wYm2-l5O8-0pSD-otdT-rbaG5x
[root@localhost ~]# pvremove /dev/sdd
Labels on physical volume "/dev/sdd" successfully wiped
[root@localhost ~]# pvscan
PV /dev/sda2 VG VolGroup lvm2 [19.51 GiB / 0 free]
PV /dev/sdb lvm2 [10.00 GiB]
PV /dev/sdc lvm2 [10.00 GiB]
Total: 3 [39.51 GiB] / in use: 1 [19.51 GiB] / in no VG: 2 [20.00 GiB]
[root@localhost ~]#
Volume groups:
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup" using metadata type lvm2
Two equivalent ways to create the volume group:
[root@localhost ~]# vgcreate aligege /dev/sd[bc]
[root@localhost ~]# vgcreate aligege /dev/sdb /dev/sdc
Volume group "aligege" successfully created
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "aligege" using metadata type lvm2
Found volume group "VolGroup" using metadata type lvm2
[root@localhost ~]#
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name aligege
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 19.99 GiB
PE Size 4.00 MiB
Total PE 5118
Alloc PE / Size 0 / 0
Free PE / Size 5118 / 19.99 GiB
VG UUID U1swqi-DDR9-0jHR-1Lh5-yq6c-iav6-Wtj185
--- Volume group ---
VG Name VolGroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 19.51 GiB
PE Size 4.00 MiB    the PE size is set when the VG is created; before that there is none
Total PE 4994
Alloc PE / Size 4994 / 19.51 GiB
Free PE / Size 0 / 0
VG UUID j3HHZT-spWa-PQtD-mA3c-hNNK-t0gk-QfkENs
[root@localhost ~]#
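The vgdisplay numbers are internally consistent: VG Size = Total PE x PE Size. A quick check for the aligege VG (5118 extents of 4 MiB; the two 10 GiB PVs lose a little to LVM metadata):

```shell
# VG Size should equal Total PE x PE Size
total_pe=5118   # Total PE from vgdisplay of "aligege"
pe_mib=4        # PE Size is 4.00 MiB
vg_mib=$((total_pe * pe_mib))
echo "$vg_mib MiB"                                       # 20472 MiB
awk -v m="$vg_mib" 'BEGIN{printf "%.2f GiB\n", m/1024}'  # 19.99 GiB
```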
[root@localhost ~]# vgremove aligege
Volume group "aligege" successfully removed
[root@localhost ~]#
Logical volume operations:
[root@localhost ~]# lvcreate -L 15G -n HR-cost aligege
Logical volume "HR-cost" created
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/aligege/HR-cost
LV Name HR-cost
VG Name aligege
LV UUID HlViMr-3lCN-JAK3-muyt-y1Js-2CWW-G3JmaB
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2013-07-20 19:01:20 +0800
LV Status available
# open 0
LV Size 15.00 GiB
Current LE 3840
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/VolGroup/lv_root
LV Name lv_root
VG Name VolGroup
LV UUID APrXHZ-xATh-g51W-7Hpc-UQYL-50oM-xpY7py
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2013-07-18 21:12:47 +0800
LV Status available
# open 1
LV Size 17.48 GiB
Current LE 4474
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/VolGroup/lv_swap
LV Name lv_swap
VG Name VolGroup
LV UUID wG2cMD-ZHph-bgXH-3H1k-x04e-YKFi-bazXir
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2013-07-18 21:12:51 +0800
LV Status available
# open 1
LV Size 2.03 GiB
Current LE 520
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
[root@localhost ~]#
Format it:
[root@localhost ~]# mkfs.ext4 /dev/aligege/HR-cost
Mount and use it:
[root@localhost ~]# mkdir /HR
[root@localhost ~]# mount /dev/aligege/HR-cost /HR
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 18G 1.8G 15G 11% /
tmpfs tmpfs 516M 0 516M 0% /dev/shm
/dev/sda1 ext4 485M 31M 429M 7% /boot
/dev/mapper/aligege-HR--cost
ext4 15G 166M 14G 2% /HR
[root@localhost ~]#
You could also use partitions; here whole disks are used directly, and the principle is the same.
Growing capacity:
vgextend <volume group name> <physical volume name>
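Put together, the grow path looks like the dry run below (commands are only printed; clear DRYRUN and run as root on scratch disks to execute; the final resize2fs step is the one that is easy to forget):

```shell
# Dry-run sketch of the grow workflow: new PVs -> bigger VG -> bigger LV -> bigger fs.
run() { echo "+ $*"; [ -n "$DRYRUN" ] || "$@"; }
DRYRUN=1
run pvcreate /dev/sd[defg]                 # four new 10 GiB PVs
run vgextend aligege /dev/sd[defg]         # VG grows to ~60 GiB
run lvextend -L +20G /dev/aligege/HR-cost  # LV grows from 15G to 35G
run resize2fs /dev/aligege/HR-cost         # grow the ext4 filesystem too
```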
[root@localhost ~]# pvcreate /dev/sd[defg]
Writing physical volume data to disk "/dev/sdd"
Physical volume "/dev/sdd" successfully created
Writing physical volume data to disk "/dev/sde"
Physical volume "/dev/sde" successfully created
Writing physical volume data to disk "/dev/sdf"
Physical volume "/dev/sdf" successfully created
Writing physical volume data to disk "/dev/sdg"
Physical volume "/dev/sdg" successfully created
[root@localhost ~]# vgextend aligege /dev/sd[defg]
Volume group "aligege" successfully extended
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name aligege
System ID
Format lvm2
Metadata Areas 6
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 6
Act PV 6
VG Size 59.98 GiB    the expansion succeeded
PE Size 4.00 MiB
Total PE 15354
Alloc PE / Size 3840 / 15.00 GiB
Free PE / Size 11514 / 44.98 GiB
VG UUID sDhCUJ-XsJf-Yt0B-bKSt-kbF9-XU09-8jrzjT
[root@localhost ~]# lvextend -L +20G /dev/aligege/HR-cost
Extending logical volume HR-cost to 35.00 GiB
Logical volume HR-cost successfully resized
[root@localhost ~]#
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 18G 1.8G 15G 11% /
tmpfs tmpfs 516M 0 516M 0% /dev/shm
/dev/sda1 ext4 485M 31M 429M 7% /boot
/dev/mapper/aligege-HR--cost
ext4 15G 166M 14G 2% /HR
Still 15 G here?? Why?
The LV grew, but the ext4 filesystem still has its old size; it has to be grown to match:
[root@localhost ~]# resize2fs /dev/aligege/HR-cost
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/aligege/HR-cost is mounted on /HR; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 3
Performing an on-line resize of /dev/aligege/HR-cost to 9175040 (4k) blocks.
The filesystem on /dev/aligege/HR-cost is now 9175040 blocks long.
[root@localhost ~]#
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 18G 1.8G 15G 11% /
tmpfs tmpfs 516M 0 516M 0% /dev/shm
/dev/sda1 ext4 485M 31M 429M 7% /boot
/dev/mapper/aligege-HR--cost
ext4 35G 173M 33G 1% /HR
[root@localhost ~]#
Now it shows 35 G.
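The resize2fs message and the new df output agree: 9175040 blocks of 4 KiB each is exactly 35 GiB.

```shell
# resize2fs reports the new size in 4 KiB blocks; convert to GiB
blocks=9175040                            # from the resize2fs output above
echo $((blocks * 4 / 1024 / 1024)) GiB    # prints: 35 GiB
```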
[root@localhost ~]# umount /dev/aligege/HR-cost
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 18G 1.8G 15G 11% /
tmpfs tmpfs 516M 0 516M 0% /dev/shm
/dev/sda1 ext4 485M 31M 429M 7% /boot
[root@localhost ~]# lvremove /dev/aligege/HR-cost
Do you really want to remove active logical volume HR-cost? [y/n]: y
Logical volume "HR-cost" successfully removed
[root@localhost ~]# vgremove /dev/aligege
Volume group "aligege" successfully removed
[root@localhost ~]# pvremove /dev/sdb[bcdefg]    (typo in the pattern, so nothing matches)
Physical Volume /dev/sdb[bcdefg] not found
[root@localhost ~]# pvremove /dev/sd[bcdefg]
Labels on physical volume "/dev/sdb" successfully wiped
Labels on physical volume "/dev/sdc" successfully wiped
Labels on physical volume "/dev/sdd" successfully wiped
Labels on physical volume "/dev/sde" successfully wiped
Labels on physical volume "/dev/sdf" successfully wiped
Labels on physical volume "/dev/sdg" successfully wiped
[root@localhost ~]#
Steps 5 and 6 are optional; step 6 only applies when partitions are used, and is not needed when working directly on whole disks.
The PV -> VG -> LV process can be pictured as founding a company: first raise the capital (pool PVs into a VG), then plan how the money is spent (carve LVs out of it). Seen this way it is much easier to understand.
When installing the system, Red Hat uses LVM by default; the /boot partition cannot be created on LVM.
The advantage of LVM is that the / partition can still be extended when it runs out of space.
RAID
Redundant Array of Independent Disks (RAID), formerly known as Redundant Array of Inexpensive Disks, commonly just called a disk array. The basic idea is to combine several relatively cheap disks into one array whose performance reaches or exceeds that of a single expensive, high-capacity disk.
For details see: http://zh.wikipedia.org/wiki/RAID
RAID cards (figure omitted): in servers, hardware RAID is the common choice.
RAID 0: striped volume (striping only; no fault tolerance, 100% space utilization)
RAID 1: mirrored volume
50% space utilization, fault-tolerant: if one disk fails the other takes over. Usually two disks are mirrored (both disks hold identical data).
RAID 4: striping with parity; all the parity blocks are kept on one dedicated disk.
RAID 5
A storage solution that balances performance, data safety and cost. It uses disk striping and needs at least three disks.
(n-1)/n space utilization; fault-tolerant, at most one disk may fail.
RAID 6
Compared with RAID 5, RAID 6 adds a second independent parity block, so up to two disks may fail.
At least four disks; (n-2)/n space utilization.
RAID 10: mirroring + striping
With 4 disks it is fault-tolerant and can survive up to 2 failed disks (provided they are not both in the same mirror pair); 50% utilization.
RAID 10/01: subdivided into RAID 1+0 and RAID 0+1.
Performance-wise, RAID 0+1 has faster read/write speeds than RAID 1+0.
Reliability-wise, when RAID 1+0 loses one disk the other three keep running. With RAID 0+1, as soon as one disk fails the other disk in the same RAID 0 set also stops, leaving only two disks running, so reliability is lower.
RAID 10 is therefore far more common than RAID 01; most retail motherboards support RAID 0/1/5/10 but not RAID 01.
Hardware RAID can be rebuilt directly through the RAID card; software RAID has to go through the CPU.
For these setups RAID 6 is usually chosen: up to two disks may fail.
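The utilization figures above can be summarized in one small helper (a sketch, assuming n equal-size disks and not counting hot spares):

```shell
# usable <raid-level> <n-disks>: fraction of raw capacity left for data
usable() {
  level=$1; n=$2
  case $level in
    0)  echo "$n/$n"       ;;  # pure striping, no redundancy
    1)  echo "1/$n"        ;;  # every disk is a full copy
    5)  echo "$((n-1))/$n" ;;  # one disk's worth of parity
    6)  echo "$((n-2))/$n" ;;  # two independent parity blocks
    10) echo "$((n/2))/$n" ;;  # mirrored pairs, then striped
  esac
}
usable 5 4    # prints: 3/4
usable 6 4    # prints: 2/4
usable 10 4   # prints: 2/4
```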
The mdadm command
Purpose: create, modify and monitor RAID arrays.
1. Create a RAID 5 array: 4 disks active, 1 disk as hot spare
mdadm -C /dev/md1 -l5 -n4 -x1 /dev/sd[efghi]    one disk is automatically taken as the hot spare
2. Format the RAID 5 device
mkfs.ext3 /dev/md1
3. Mount and use it: mkdir /music; mount /dev/md1 /music
4. For mounting at boot, edit /etc/fstab and add:
/dev/md1 /music ext3 defaults 0 0
Fail one of the disks and check whether the RAID 5 array keeps working:
mdadm /dev/md1 -f /dev/sde
Watch the recovery with cat /proc/mdstat.
Remove the faulty disk and add a good one as hot spare (the disk capacities must match):
mdadm /dev/md1 -r /dev/sde -a /dev/sdk
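Steps 1-4 plus the failure drill, replayed as one dry-run script (the commands are only printed; clear DRYRUN and run as root on scratch disks to execute them for real):

```shell
# Dry-run sketch of the mdadm lifecycle described above.
run() { echo "+ $*"; [ -n "$DRYRUN" ] || "$@"; }
DRYRUN=1
run mdadm -C /dev/md1 -l5 -n4 -x1 /dev/sd[efghi]  # 4 active + 1 hot spare
run mkfs.ext3 /dev/md1
run mkdir -p /music
run mount /dev/md1 /music
run mdadm /dev/md1 -f /dev/sde   # simulate a failure
run mdadm /dev/md1 -r /dev/sde   # hot-remove the failed disk
run mdadm /dev/md1 -a /dev/sdk   # add a replacement as the new spare
```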
=======
[root@localhost ~]# cat /proc/mdstat    watch the array being built
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde[5] sdf[4](S) sdd[2] sdc[1] sdb[0]
3144192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
[=============>.......] recovery = 68.5% (719232/1048064) finish=0.0min speed=143846K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde[5] sdf[4](S) sdd[2] sdc[1] sdb[0]
3144192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
[root@localhost ~]#
[UUUU] ---> all 4 member devices are up and in use ('U' = up, '_' = failed or missing)
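The [m/n] status field can also be pulled out programmatically; here is a small awk sketch run against the degraded sample line from this section:

```shell
# Report array health from a /proc/mdstat status line
line='3144192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]'
echo "$line" | awk '
  match($0, /\[[0-9]+\/[0-9]+\]/) {
    # [4/3] means 4 member slots, 3 of them up
    split(substr($0, RSTART+1, RLENGTH-2), a, "/")
    printf "%d of %d devices up%s\n", a[2], a[1], (a[2] < a[1] ? " (degraded)" : "")
  }'
# prints: 3 of 4 devices up (degraded)
```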
For this RAID 5 setup it is best if all five disks have the same capacity and come from the same manufacturer.
/dev/md5:
Version : 1.2
Creation Time : Sun Jul 21 01:21:33 2013
Raid Level : raid5
Array Size : 3144192 (3.00 GiB 3.22 GB)
Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sun Jul 21 01:21:44 2013
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 4e379d32:20a354d7:c6baee5e:68bd25cb
Events : 18
(Major = major device number, Minor = minor device number)
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
5 8 64 3 active sync /dev/sde
4 8 80 - spare /dev/sdf    hot spare
[root@localhost ~]#
[root@localhost ~]# mdadm -Ds
ARRAY /dev/md5 metadata=1.2 spares=1 name=localhost.localdomain:5 UUID=4e379d32:20a354d7:c6baee5e:68bd25cb
[root@localhost ~]#
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdd    fail one disk
mdadm: set /dev/sdd faulty in /dev/md5
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde[5] sdf[4] sdd[2](F) sdc[1] sdb[0]
3144192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
[=>...................] recovery = 9.5% (100480/1048064) finish=0.1min speed=100480K/sec
unused devices: <none>
[root@localhost ~]#
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde[5] sdf[4] sdd[2](F) sdc[1] sdb[0]
3144192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]    a moment later the spare has rebuilt it
unused devices: <none>
[root@localhost ~]#
[root@localhost ~]# mdadm /dev/md5 -f /dev/sde    fail a second disk
mdadm: set /dev/sde faulty in /dev/md5
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sde[5](F) sdf[4] sdd[2](F) sdc[1] sdb[0]
3144192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
unused devices: <none>
[root@localhost ~]# cd /file    still accessible, the fault tolerance works
[root@localhost file]# ls
lost+found
[root@localhost file]# mkdir haha
[root@localhost file]# ls
haha lost+found
[root@localhost file]#
[root@localhost file]# mdadm -D /dev/md5
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 80 2 active sync /dev/sdf
3 0 0 3 removed
2 8 48 - faulty spare /dev/sdd
5 8 64 - faulty spare /dev/sde
[root@localhost file]# mdadm /dev/md5 -r /dev/sd[de]
mdadm: hot removed /dev/sdd from /dev/md5    hot removal
mdadm: hot removed /dev/sde from /dev/md5
[root@localhost file]#
[root@localhost file]# mdadm -D /dev/md5
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 80 2 active sync /dev/sdf
3 0 0 3 removed
[root@localhost file]# mdadm /dev/md5 -a /dev/sd[h]    add a new disk
mdadm: added /dev/sdh
[root@localhost file]# mdadm /dev/md5 -a /dev/sd[g]
mdadm: added /dev/sdg
[root@localhost file]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Jul 21 01:21:33 2013
Raid Level : raid5
Array Size : 3144192 (3.00 GiB 3.22 GB)
Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sun Jul 21 01:50:03 2013
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 4e379d32:20a354d7:c6baee5e:68bd25cb
Events : 72
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
4 8 80 2 active sync /dev/sdf
5 8 112 3 active sync /dev/sdh
6 8 96 - spare /dev/sdg    the disk added last becomes the spare
[root@localhost file]#