APPLIES TO:
Exadata X2-2
Exadata X3-2
GOAL:
Extend the disk space (drives) on an Exadata compute node after an Exadata upgrade, either to add new space to /u01 or to create a new file system.
DETAILS:
Check the physical volume sizes:
[root@gasdbadm01 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VGExaDb lvm2 a-- 114.00g 30.00g
/dev/sda3 VGExaDb lvm2 a-- 1.52t 1.42t
[root@gasdbadm01 ~]# fdisk -l /dev/sda
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 1797.0 GB, 1796997120000 bytes
255 heads, 63 sectors/track, 218472 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 218473 1754879999+ ee GPT
[root@gasdbadm01 ~]#
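The fdisk warning above appears because this node's /dev/sda uses a GPT partition table, which this version of fdisk cannot interpret. To inspect a GPT layout, parted can be used instead (a minimal sketch; the device name and output will vary by machine):

```shell
# Print the GPT partition table that fdisk cannot read
parted -s /dev/sda print
```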
Check the logical volume sizes:
[root@gasdbadm01 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LVDbOra1 VGExaDb -wi-ao---- 100.00g
LVDbSwap1 VGExaDb -wi-ao---- 24.00g
LVDbSys1 VGExaDb -wi-ao---- 30.00g
LVDbSys2 VGExaDb -wi-a----- 30.00g
[root@gasdbadm01 ~]#
DETAILS
Reclaim the 4th hard disk drive and extend the physical volume size
Check the sizes of the physical volume and the associated device(s). Note the volume group name (VGExaDb):
[root@exadb02 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VGExaDb lvm2 a-- 556.80G 372.80G
[root@exadb02 ~]# fdisk -l /dev/sda
Disk /dev/sda: 597.9 GB, 597998698496 bytes
255 heads, 63 sectors/track, 72702 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 72702 583850295 8e Linux LVM
Check the logical volume sizes in the volume group:
[root@exadb02 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
LVDbOra1 VGExaDb -wi-ao 100.00G
LVDbSwap1 VGExaDb -wi-ao 24.00G
LVDbSys1 VGExaDb -wi-ao 30.00G
To reclaim the 4th hard disk, either run the reclaim disk script manually, or check the progress if the reclaim was already started by dbnodeupdate.sh:
[INFO] This is SUN FIRE X4170 M2 SERVER machine
[INFO] Number of LSI controllers: 1
[INFO] Physical disks found: 4 (252:0 252:1 252:2 252:3)
[INFO] Logical drives found: 1
[WARNING] Reconstruction on the logical disk 0 is in progress: Completed 83%, Taken 160 min.
[INFO] Continue later when reconstruction is done
...
The reclaim normally takes 3-4 hours.
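The reclaim script invocation itself was elided above. On Exadata compute nodes the script normally lives under /opt/oracle.SupportTools (a hedged sketch — verify the path and flags for your image version before running):

```shell
# Check the current disk/RAID layout (read-only)
/opt/oracle.SupportTools/reclaimdisks.sh -check

# Reclaim the unused disk space (this is the long-running step)
/opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim
```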
Once the reclaim completes, all 4 disks are in the RAID 5 configuration. The check script still reports an [ERROR], because it expects RAID 5 built from 3 physical disks plus a global hot spare, while after the reclaim the RAID 5 volume correctly uses all 4 disks:
[INFO] This is SUN FIRE X4170 M2 SERVER machine
[INFO] Number of LSI controllers: 1
[INFO] Physical disks found: 4 (252:0 252:1 252:2 252:3)
[INFO] Logical drives found: 1
[INFO] Dual boot installation: no
[INFO] Linux logical drive: 0
[INFO] RAID Level for the Linux logical drive: 5
[INFO] Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
[INFO] Dedicated Hot Spares for the Linux logical drive: 0
[INFO] Global Hot Spares: 0
[ERROR] Expected RAID 5 from 3 physical disks and 1 global hot spare and no dedicated hot spare
Note that the /dev/sda device size is still 600GB and it still has 2 partitions:
Disk /dev/sda: 597.9 GB, 597998698496 bytes
255 heads, 63 sectors/track, 72702 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 72702 583850295 8e Linux LVM
The RAID 5 device correctly shows the new size:
RAID Level : Primary-5, Secondary-0, RAID Level Qualifier-3
Size : 835.394 GB
The compute node needs to be rebooted, in order for the new size for /dev/sda to take effect.
Shut down the clusterware first:
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb02'
CRS-2673: Attempting to stop 'ora.crsd' on 'exadb02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'exadb02'
...
CRS-4133: Oracle High Availability Services has been stopped.
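The command that produced the CRS shutdown messages above was elided. Clusterware is normally stopped as root with crsctl from the Grid Infrastructure home (a hedged sketch — the GI home path below is an assumption, adjust it for your installation; the matching `crsctl start crs` restarts it later):

```shell
# Stop Oracle Clusterware on this node (run as root);
# the GI home path is an example, not taken from this note
/u01/app/11.2.0/grid/bin/crsctl stop crs
```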
Reboot the server...
Broadcast message from root (pts/0) (Fri Sep 6 09:53:05 2013):
The system is going down for reboot NOW!
Verify the new size for /dev/sda after the reboot.
Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 72702 583850295 8e Linux LVM
Create the third partition (of type Linux LVM) on /dev/sda:
[root@exadb02 ~]# fdisk /dev/sda
The number of cylinders for this disk is set to 109053.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 72702 583850295 8e Linux LVM
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (72703-109053, default 72703):
Using default value 72703
Last cylinder or +size or +sizeM or +sizeK (72703-109053, default 109053):
Using default value 109053
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 72702 583850295 8e Linux LVM
/dev/sda3 72703 109053 291989407+ 8e Linux LVM
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
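As the fdisk warning notes, the kernel keeps using the old partition table until it rereads it. A reboot resolves this; where available, partprobe can also ask the kernel to reread the table without rebooting (a hedged sketch):

```shell
# Ask the kernel to reread the partition table on /dev/sda
partprobe /dev/sda
```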
/dev/sda now has three partitions:
[root@exadb02 ~]# fdisk -l /dev/sda
Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 16 128488+ 83 Linux
/dev/sda2 17 72702 583850295 8e Linux LVM
/dev/sda3 72703 109053 291989407+ 8e Linux LVM
Create the physical volume on the new partition and extend the volume group:
[root@exadb02 ~]# pvcreate /dev/sda3
Writing physical volume data to disk "/dev/sda3"
Physical volume "/dev/sda3" successfully created
[root@exadb02 ~]# vgextend VGExaDb /dev/sda3
Volume group "VGExaDb" successfully extended
[root@exadb02 ~]# vgdisplay
--- Volume group ---
VG Name VGExaDb
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 3
Max PV 0
Cur PV 2
Act PV 2
VG Size 835.26 GB
PE Size 4.00 MB
Total PE 213827
Alloc PE / Size 47104 / 184.00 GB
Free PE / Size 166723 / 651.26 GB
VG UUID a4MsSu-yB9U-5oxT-BuBC-mGjT-mSAc-V0pPb6
Resize an existing file system
To resize any of the existing file systems follow
Oracle® Exadata Database Machine Owner's Guide 11g Release 2 (11.2)
Chapter 7 Maintaining Oracle Exadata Database Machine and Oracle Exadata Storage Expansion Rack
Section 7.25 Resizing LVM Partitions
Resize file system /u01:
[root@exadb02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1 30G 4.7G 24G 17% /
/dev/sda1 124M 84M 35M 71% /boot
/dev/mapper/VGExaDb-LVDbOra1 99G 25G 70G 27% /u01
tmpfs 81G 0 81G 0% /dev/shm
[root@exadb02 ~]# lvscan
ACTIVE '/dev/VGExaDb/LVDbSys1' [30.00 GB] inherit
ACTIVE '/dev/VGExaDb/LVDbSwap1' [24.00 GB] inherit
ACTIVE '/dev/VGExaDb/LVDbOra1' [100.00 GB] inherit
[root@exadb02 ~]# lvdisplay /dev/VGExaDb/LVDbOra1
--- Logical volume ---
LV Name /dev/VGExaDb/LVDbOra1
VG Name VGExaDb
LV UUID SNn8Wd-NZoK-zAIG-1fyv-GU98-EJPd-Zy7nQE
LV Write Access read/write
LV Status available
# open 1
LV Size 100.00 GB
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
There is plenty of free space in the volume group VGExaDb:
"VGExaDb" 835.26 GB [184.00 GB used / 651.26 GB free]
Shut down clusterware and OSW...
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb02'
CRS-2673: Attempting to stop 'ora.crsd' on 'exadb02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'exadb02'
...
CRS-4133: Oracle High Availability Services has been stopped.
[root@exadb02 ~]# /opt/oracle.oswatcher/osw/stopOSW.sh
Unmount and check the file system to be resized (/u01):
[root@exadb02 ~]# umount /u01
[root@exadb02 ~]# e2fsck -f /dev/VGExaDb/LVDbOra1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
DBORA: 164726/13107200 files (0.9% non-contiguous), 6793630/26214400 blocks
Add 100GB to the logical volume (LVDbOra1), to make the total size 200GB:
Finding volume group VGExaDb
Archiving volume group "VGExaDb" metadata (seqno 9).
Extending logical volume LVDbOra1 to 200.00 GB
Found volume group "VGExaDb"
Loading VGExaDb-LVDbOra1 table (253:2)
Suspending VGExaDb-LVDbOra1 (253:2) with device flush
Found volume group "VGExaDb"
Resuming VGExaDb-LVDbOra1 (253:2)
Creating volume group backup "/etc/lvm/backup/VGExaDb" (seqno 10).
Logical volume LVDbOra1 successfully resized
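The lvextend command that produced the verbose output above was elided. A typical invocation adds 100GB to the logical volume (a hedged sketch — confirm the free space with vgdisplay first):

```shell
# Grow LVDbOra1 by 100GB, with verbose output as shown above
lvextend -v -L +100G /dev/VGExaDb/LVDbOra1
```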
Check the new size of logical volume (LVDbOra1):
ACTIVE '/dev/VGExaDb/LVDbOra1' [200.00 GB] inherit
Resize the file system:
[root@exadb02 ~]# resize2fs -p /dev/VGExaDb/LVDbOra1
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/VGExaDb/LVDbOra1 to 52428800 (4k) blocks.
Begin pass 1 (max = 800)
Extending the inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 2 (max = 30)
Relocating blocks XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 800)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 5 (max = 15)
Moving inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/VGExaDb/LVDbOra1 is now 52428800 blocks long.
Mount it back:
[root@exadb02 ~]# mount /u01
Verify the new file system size:
[root@exadb02 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1 30G 4.7G 24G 17% /
/dev/sda1 124M 84M 35M 71% /boot
tmpfs 81G 0 81G 0% /dev/shm
/dev/mapper/VGExaDb-LVDbOra1 197G 25G 163G 14% /u01
Restart the clusterware and OSW...
CRS-4123: Oracle High Availability Services has been started.
[root@exadb02 ~]# /opt/oracle.cellos/vldrun -script oswatcher
Logging started to /var/log/cellos/validations.log
Command line is /opt/oracle.cellos/validations/bin/vldrun.pl -quiet -script oswatcher
Run validation oswatcher - PASSED
The each boot completed with SUCCESS
Add a new file system using the free space in the extended volume group
There is still plenty of free space in the volume group:
Free PE / Size 141123 / 551.26 GB
Create a new 200GB logical volume for a new file system:
[root@exadb02 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VGExaDb lvm2 a-- 556.80G 272.80G
/dev/sda3 VGExaDb lvm2 a-- 278.46G 278.46G
[root@exadb02 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
LVDbOra1 VGExaDb -wi-ao 200.00G
LVDbSwap1 VGExaDb -wi-ao 24.00G
LVDbSys1 VGExaDb -wi-ao 30.00G
[root@exadb02 ~]# lvcreate -L200GB -n LVDbOra2 VGExaDb
Logical volume "LVDbOra2" created
[root@exadb02 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
LVDbOra1 VGExaDb -wi-ao 200.00G
LVDbOra2 VGExaDb -wi-a- 200.00G
LVDbSwap1 VGExaDb -wi-ao 24.00G
LVDbSys1 VGExaDb -wi-ao 30.00G
Create a new file system (and name it /u02):
mke2fs 1.39 (29-May-2006)
Filesystem label=u02
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
26214400 inodes, 52428800 blocks
2621440 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1600 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
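The mkfs command itself was elided above. Judging from the output (mke2fs with a journal and the label u02), it was likely something like the following (a hedged sketch — the exact flags are an assumption):

```shell
# Create an ext3 file system labeled "u02" on the new logical volume
mkfs.ext3 -L u02 /dev/VGExaDb/LVDbOra2
```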
Mount the new file system:
[root@exadb02 ~]# mount -t ext3 /dev/VGExaDb/LVDbOra2 /u02
[root@exadb02 ~]# df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1 30G 4.8G 24G 17% /
/dev/sda1 124M 84M 35M 71% /boot
tmpfs 81G 0 81G 0% /dev/shm
/dev/mapper/VGExaDb-LVDbOra1 197G 25G 163G 14% /u01
/dev/mapper/VGExaDb-LVDbOra2 197G 188M 187G 1% /u02
Note that there is still some free space in the volume group:
Free PE / Size 89923 / 351.26 GB
[root@exadb02 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VGExaDb lvm2 a-- 556.80G 72.80G
/dev/sda3 VGExaDb lvm2 a-- 278.46G 278.46G
That space can be used to extend the existing file systems or create new ones.
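For the new /u02 mount to survive reboots, an /etc/fstab entry is also needed (a hedged sketch — the entry below is an assumption, not shown in this note; verify it against your system before editing /etc/fstab):

```shell
# Append an fstab entry so /u02 is mounted automatically at boot
echo '/dev/VGExaDb/LVDbOra2 /u02 ext3 defaults 1 1' >> /etc/fstab
```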
REFERENCES
Oracle® Exadata Database Machine Owner's Guide 11g Release 2 (11.2), Chapter 7 Maintaining Oracle Exadata Database Machine and Oracle Exadata Storage Expansion Rack, Section 7.25 Resizing LVM Partitions
NOTE:1525286.1 - Exadata Database Compute Node Volume Is Smaller Than Expected
NOTE:1468877.1 - Exadata 11.2.3.2.0 release and patch (14212264) for Exadata 11.1.3.3, 11.2.1.2.x, 11.2.2.2.x, 11.2.2.3.x, 11.2.2.4.x, 11.2.3.1.x
NOTE:1485475.1 - Exadata 11.2.3.2.1 release and patch (14522699) for Exadata 11.1.3.3, 11.2.1.2.x, 11.2.2.2.x, 11.2.2.3.x, 11.2.2.4.x, 11.2.3.1.x, 11.2.3.2.x