1. Background
Today a development/test database raised a critical alert: the Oracle software directory was over 99% full. This is a single-instance database on a local file system, so the directory had to be extended as soon as possible.
[root@test ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sysvg-root 9.9G 8.7G 716M 93% /
tmpfs 127G 23G 104G 18% /dev/shm
/dev/sda1 485M 39M 421M 9% /boot
/dev/mapper/sysvg-home 20G 19G 282M 99% /home
/dev/mapper/sysvg-tmp 9.9G 152M 9.2G 2% /tmp
/dev/mapper/sysvg-usr 20G 2.7G 16G 15% /usr
/dev/mapper/sysvg-var 9.9G 467M 8.9G 5% /var
/dev/mapper/sysvg-db 119G 110G 2.2G 99% /u01
/dev/mapper/datavg-omsdata 1008G 54G 903G 6% /home/nxyw/data
Why does a single instance use so much space? Because this database's data files live under the software directory (the default location, /u01).
[root@test ~]# vgdisplay
--- Volume group ---
VG Name datavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.64 TiB
PE Size 4.00 MiB
Total PE 428975
Alloc PE / Size 262144 / 1.00 TiB
Free PE / Size 166831 / 651.68 GiB
VG UUID 6rFimG-lC3z-v9FQ-HYEV-WofA-412n-Zy4vxE
--- Volume group ---
VG Name sysvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 7
Max PV 0
Cur PV 1
Act PV 1
VG Size 222.57 GiB
PE Size 4.00 MiB
Total PE 56978
Alloc PE / Size 56832 / 222.00 GiB
Free PE / Size 146 / 584.00 MiB
VG UUID 6rsWqU-maob-ii7D-zhnF-eu0t-y33t-oUaFaN
The software directory (/u01) sits on an LV in sysvg, and that VG has almost no free space left (584 MiB free), so the only option is to carve space out of datavg, which still has about 650 GiB unallocated.
The plan: take a cold backup of the software directory, then remount /u01 on a new LV created in datavg (unmount the old LV, then mount the new one in its place). Once the new, empty LV is mounted over /u01, the original contents are no longer reachable at that path, which is why everything under /u01 must be backed up first and restored afterwards. With the procedure clear, let's begin.
2. Procedure
Before doing anything, let's look at how the existing LVs are laid out:
[root@test enmo]# lvdisplay
--- Logical volume ---
LV Path /dev/datavg/omsdata
LV Name omsdata
VG Name datavg
LV UUID Kn6OjV-Nsja-BmQo-mQ5y-wbMx-uBrX-bB7Rzj
LV Write Access read/write
LV Creation host, time domstest, 2019-03-30 14:32:53 +0800
LV Status available
# open 1
LV Size 1.00 TiB
Current LE 262144
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Logical volume ---
LV Path /dev/sysvg/tmp
LV Name tmp
VG Name sysvg
LV UUID 6s7mGG-9dnc-MaYL-X9dG-WygH-oIwe-BLHhBc
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-09-05 11:48:31 +0800
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/sysvg/swap
LV Name swap
VG Name sysvg
LV UUID TADmCP-WIDx-rflP-3UqH-7DAv-xY2G-f6BdVK
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-09-05 11:48:33 +0800
LV Status available
# open 1
LV Size 32.00 GiB
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/sysvg/var
LV Name var
VG Name sysvg
LV UUID c64HrJ-f8st-cLdY-LirR-PcH9-mXsu-Uozart
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-09-05 11:48:33 +0800
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Logical volume ---
LV Path /dev/sysvg/home
LV Name home
VG Name sysvg
LV UUID qdXL4L-VgKo-QVU6-gZSA-pBML-eCvd-IyqA6J
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-09-05 11:48:34 +0800
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
--- Logical volume ---
LV Path /dev/sysvg/root
LV Name root
VG Name sysvg
LV UUID h3UR3v-UUPb-r9EF-1N6Z-lDCr-Qmro-KWZjNi
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-09-05 11:48:36 +0800
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/sysvg/usr
LV Name usr
VG Name sysvg
LV UUID aW5Fab-GJIS-MYnu-lN2m-exCz-UeKa-oVLEgU
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-09-05 11:48:38 +0800
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Logical volume ---
LV Path /dev/sysvg/db
LV Name db
VG Name sysvg
LV UUID GjIyhv-QzK2-AIoy-Fbag-xLQI-OAqV-aMjyv1
LV Write Access read/write
LV Creation host, time domstest, 2018-09-28 12:25:59 +0800
LV Status available
# open 1
LV Size 120.00 GiB
Current LE 30720
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
1. Stop the database (the whole point of a cold backup is that the instance must be down):
shutdown immediate
2. Tar up the software directory:
tar -cvf u01.tar /u01
Move the archive somewhere outside /u01:
mv u01.tar /data
3. This step uses a helper script that records the permissions and ownership of every file under /u01, so they can be restored after the copy back:
./permission.pl /u01
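permission.pl is a site-specific script whose source isn't shown here. As a rough sketch of what such a backup/restore pair might do, assuming GNU find: record each file's octal mode, owner, and group as replayable chmod/chown commands. The directory and file names in the demo are hypothetical.

```shell
#!/bin/sh
# Hypothetical stand-in for permission.pl (GNU find assumed):
# dump mode/owner/group of everything under a directory as a shell script.
backup_perms() {
    dir=$1; out=$2
    # %m = octal mode, %u = owner, %g = group, %p = path
    find "$dir" -printf 'chmod %m "%p"\nchown %u:%g "%p"\n' > "$out"
}

# Demo on a throwaway directory
tmp=$(mktemp -d)
mkdir -p "$tmp/u01/app"
touch "$tmp/u01/app/oracle.bin"
chmod 750 "$tmp/u01/app/oracle.bin"

backup_perms "$tmp/u01" "$tmp/restore_perms.sh"

chmod 600 "$tmp/u01/app/oracle.bin"   # simulate permissions broken by the restore
sh "$tmp/restore_perms.sh"            # replay the recorded permissions
restored_mode=$(stat -c '%a' "$tmp/u01/app/oracle.bin")
echo "$restored_mode"                 # → 750
rm -rf "$tmp"
```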
4. Create an LV named oradb in datavg sized at 102400 PEs. As the vgdisplay output above shows, one PE is 4 MiB, so 102400 PEs comes to 400 GiB:
lvcreate -l 102400 -n oradb datavg
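The sizing arithmetic is easy to double-check: with a 4 MiB PE, 102400 extents is exactly 400 GiB. (An equivalent and arguably less error-prone invocation would be `lvcreate -L 400G -n oradb datavg`.)

```shell
# 102400 PEs x 4 MiB per PE, expressed in GiB (1 GiB = 1024 MiB)
pe_count=102400
pe_size_mib=4
size_gib=$(( pe_count * pe_size_mib / 1024 ))
echo "${size_gib} GiB"   # → 400 GiB
```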
Of course, the new LV has to be formatted before it can be used. Note that the -c flag below runs a read-only bad-block scan of the whole device, which is why this mkfs took about 20 minutes; omit it if you want the format to finish in seconds:
[root@test ~]# mkfs -t ext4 -c /dev/datavg/oradb
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=64 blocks, Stripe width=64 blocks
26214400 inodes, 104857600 blocks
5242880 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
3200 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Checking for bad blocks (read-only test): 59.77% done, 99.41% done, 20:13 elapsed
done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
5. Kill every process still using /u01 so that it can be unmounted (you can run `fuser -mv /u01` first to see which processes those are):
fuser -m -k /u01
Unmount /u01:
umount /u01
6. Mount the newly created 400 GiB LV on /u01:
mount /dev/datavg/oradb /u01
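Note that a manual `mount` does not survive a reboot: for the change to persist, the old /u01 entry in /etc/fstab (pointing at /dev/sysvg/db) also has to be replaced. A plausible replacement line, with mount options that are an assumption to be adapted to the site's standard, would be:

```
/dev/datavg/oradb  /u01  ext4  defaults  0  0
```

After editing, `mount -a` is a quick way to confirm the entry parses and mounts cleanly.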
7. Extract the tar archive taken earlier back onto the new file system (GNU tar strips the leading "/" when archiving, so extracting with -C / recreates /u01):
tar -xvf u01.tar -C /
However, the permissions and ownership of the extracted files were wrong, which would leave the database unable to start, so restore the permissions recorded earlier:
chmod u+x permission.pl
./restore*****.cmd
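For future runs it may be worth knowing that when an archive is both created and extracted with tar's `-p` flag (and, as root, GNU tar preserves ownership on extract by default), file modes generally survive the round trip, which can make a separate permission backup unnecessary. A small sandbox demo, with a hypothetical file name:

```shell
# Demonstrate that tar preserves file modes across create/extract with -p.
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
touch "$tmp/src/listener.ora"        # hypothetical file for the demo
chmod 640 "$tmp/src/listener.ora"

tar -cpf "$tmp/backup.tar" -C "$tmp" src
mkdir "$tmp/restore"
tar -xpf "$tmp/backup.tar" -C "$tmp/restore"

mode=$(stat -c '%a' "$tmp/restore/src/listener.ora")
echo "$mode"                          # → 640
rm -rf "$tmp"
```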
3. Verification
[root@test /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/sysvg-root 9.9G 8.7G 714M 93% /
tmpfs 127G 23G 104G 18% /dev/shm
/dev/sda1 485M 39M 421M 9% /boot
/dev/mapper/sysvg-home 20G 19G 282M 99% /home
/dev/mapper/sysvg-tmp 9.9G 151M 9.2G 2% /tmp
/dev/mapper/sysvg-usr 20G 2.7G 16G 15% /usr
/dev/mapper/sysvg-var 9.9G 467M 8.9G 5% /var
/dev/mapper/datavg-omsdata 1008G 118G 839G 13% /home/nxyw/data
/dev/mapper/datavg-oradb 394G 111G 264G 30% /u01
/u01 has been successfully extended to 400 GiB and Oracle runs normally on it.