The customer's database was an Oracle 11gR2 RAC on AIX, with storage managed by ASM. This is a real case: the customer deployed an IBM SVC storage virtualization product, which required the LUNs of the existing Oracle system to be mapped to the SVC device first and then re-mapped through SVC back to the servers. In itself this is a fairly simple, low-risk task, but the IBM field engineer gratuitously assigned PVIDs to the disks re-mapped through SVC, corrupting the ASM disk headers so that none of the ASM disk groups could be mounted.
"How assigning a PVID to a disk on AIX corrupts ASM disks": http://blog.itpub.net/23135684/viewspace-1079207/
This series discusses recovery approaches for that scenario. The test environment is an Oracle 11.2.0.3.0 Restart database using ASM for data management. This first article examines whether the md_backup and md_restore commands in ASMCMD can solve the problem.
I. Checking the test environment.
# pwd
/u01/app/oracle/product/11.2.0/db_1
# cd /u01/app/11.2.0/grid/bin
# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
ONLINE ONLINE localhost
ora.FRA01.dg
ONLINE ONLINE localhost
ora.LISTENER.lsnr
ONLINE ONLINE localhost
ora.OCRVDISK.dg
ONLINE ONLINE localhost
ora.asm
ONLINE ONLINE localhost Started
ora.ons
OFFLINE OFFLINE localhost
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE localhost
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE localhost
ora.orcl.db
1 ONLINE ONLINE localhost Open
All related resources are in the ONLINE state.
# lspv
hdisk0 00cc1ad4ef095bf0 rootvg active
hdisk1 00cc1ad46aff307f None
hdisk2 none None
hdisk3 none None
hdisk4 none None
Five disks are attached to the server in total.
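The ASM candidate disks are the ones lspv shows with no PVID and no volume group. A minimal sketch of filtering them out (the sample text below mimics the lspv output above so the filter can be exercised anywhere; on a real AIX host you would pipe `lspv` itself):

```shell
# Hypothetical helper: list disks with no PVID and no volume group.
# Column 2 of lspv is the PVID ("none" when unassigned), column 3 the
# volume group ("None" when the disk belongs to no VG).
lspv_output='hdisk0          00cc1ad4ef095bf0                    rootvg          active
hdisk1          00cc1ad46aff307f                    None
hdisk2          none                                None
hdisk3          none                                None
hdisk4          none                                None'

echo "$lspv_output" | awk '$2 == "none" && $3 == "None" { print $1 }'
```

On this host the filter prints hdisk2, hdisk3, and hdisk4, matching the three ASM disks shown below.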
# su - grid
$ asmcmd -p
ASMCMD [+] > lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 51200 49637 0 49637 0 N DATA01/
MOUNTED EXTERN N 512 4096 1048576 51200 50980 0 50980 0 N FRA01/
MOUNTED EXTERN N 512 4096 1048576 2048 1989 0 1989 0 N OCRVDISK/
ASM contains three disk groups: DATA01, FRA01, and OCRVDISK.
The mapping between each disk group and its disks is as follows:
ASMCMD [+] > lsdsk -G ocrvdisk
Path
/dev/rhdisk2
ASMCMD [+] > lsdsk -G data01
Path
/dev/rhdisk3
ASMCMD [+] > lsdsk -G fra01
Path
/dev/rhdisk4
Next, check the database status:
# su - oracle
$ sql
SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 19 15:37:14 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
SQL> set linesize 200
SQL> col name format a50
SQL> select name ,status from v$datafile;
NAME STATUS
-------------------------------------------------- -------
+DATA01/orcl/datafile/system.256.856020443 SYSTEM
+DATA01/orcl/datafile/sysaux.257.856020443 ONLINE
+DATA01/orcl/datafile/undotbs1.258.856020443 ONLINE
+DATA01/orcl/datafile/users.259.856020443 ONLINE
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
-------------------------------------------------- --------------------
ORCL READ WRITE
SQL> select status from v$instance;
STATUS
------------
OPEN
II. Backing up the ASM metadata with md_backup.
# su - grid
$ asmcmd -p
Run help md_backup for detailed usage information.
ASMCMD [+] > help md_backup
md_backup
The md_backup command creates a backup file containing metadata
for one or more disk groups.
Volume and Oracle Automatic Storage Management Cluster File System
(Oracle ACFS) file system information is not backed up.
md_backup backup_file [-G diskgroup [,diskgroup,...]]
The options for the md_backup command are described below.
backup_file - Specifies the backup file in which you want to
store the metadata.
-G diskgroup - Specifies the disk group name of the disk group
that must be backed up
By default all the mounted disk groups are included in the backup file,
which is saved in the current working directory.
The first example shows the use of the backup command when you run it
without the disk group option. This example backs up all of the mounted
disk groups and creates the backup image in the current working
directory. The second example creates a backup of disk group DATA and
FRA. The backup that this example creates is saved in the
/tmp/dgbackup20090716 file.
ASMCMD [+] > md_backup /tmp/dgbackup20090716
ASMCMD [+] > md_backup /tmp/dgbackup20090716 -G DATA,FRA
Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ASM
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ARCHIVELOG/2009_07_13
Current alias directory path: ORCL/BACKUPSET/2009_07_14
Current alias directory path: ORCL/ARCHIVELOG/2009_07_14
Current alias directory path: ORCL
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/ARCHIVELOG
Current alias directory path: ORCL/BACKUPSET
Current alias directory path: ORCL/ONLINELOG
Back up the metadata of all ASM disk groups:
ASMCMD [+] > md_backup /tmp/dggroup20140819
Disk group metadata to be backed up: OCRVDISK
Disk group metadata to be backed up: DATA01
Disk group metadata to be backed up: FRA01
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ASM
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ONLINELOG
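Since md_backup defaults to all mounted disk groups, taking these backups on a schedule is easy to script. A minimal sketch with a dated file name (the /backup path and the DRY_RUN switch are assumptions for illustration; on the real host you would unset DRY_RUN and run this as the grid user):

```shell
# Sketch: scheduled ASM metadata backup with a dated file name.
# DRY_RUN keeps the sketch runnable on a host without asmcmd.
DRY_RUN=1
backup_file="/backup/dgbackup_$(date +%Y%m%d)"

if [ -n "$DRY_RUN" ]; then
    echo "asmcmd md_backup $backup_file"   # show what would run
else
    asmcmd md_backup "$backup_file"        # back up all mounted disk groups
fi
```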
III. Assigning PVIDs to the ASM disks.
1). Stop all CRS services:
# pwd
/u01/app/11.2.0/grid/bin
# ./crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'localhost'
CRS-2673: Attempting to stop 'ora.OCRVDISK.dg' on 'localhost'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'localhost'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'localhost'
CRS-2677: Stop of 'ora.orcl.db' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.DATA01.dg' on 'localhost'
CRS-2673: Attempting to stop 'ora.FRA01.dg' on 'localhost'
CRS-2677: Stop of 'ora.DATA01.dg' on 'localhost' succeeded
CRS-2677: Stop of 'ora.FRA01.dg' on 'localhost' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'localhost' succeeded
CRS-2677: Stop of 'ora.OCRVDISK.dg' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'localhost'
CRS-2677: Stop of 'ora.asm' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'localhost'
CRS-2677: Stop of 'ora.cssd' on 'localhost' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'localhost'
CRS-2677: Stop of 'ora.evmd' on 'localhost' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'localhost' has completed
CRS-4133: Oracle High Availability Services has been stopped.
# ps -ef | grep d.bin
# ps -ef | grep ora
root 7536792 4456600 0 15:42:41 pts/0 0:00 grep ora
oracle 8650932 7667784 0 15:23:16 pts/1 0:00 -ksh
2). Assign a PVID to every ASM disk:
# lspv
hdisk0 00cc1ad4ef095bf0 rootvg active
hdisk1 00cc1ad46aff307f None
hdisk2 none None
hdisk3 none None
hdisk4 none None
# chdev -l hdisk2 -a pv=yes
hdisk2 changed
# chdev -l hdisk3 -a pv=yes
hdisk3 changed
# chdev -l hdisk4 -a pv=yes
hdisk4 changed
# lspv
hdisk0 00cc1ad4ef095bf0 rootvg active
hdisk1 00cc1ad46aff307f None
hdisk2 00cc1ad4f0022b6c None
hdisk3 00cc1ad4f0024a5c None
hdisk4 00cc1ad4f0026653 None
3). Restart all CRS services:
# pwd
/u01/app/11.2.0/grid/bin
# ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.
After a while the ASM instance starts normally, but none of the disk groups can be brought ONLINE:
#./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA01.dg
ONLINE OFFLINE localhost
ora.FRA01.dg
ONLINE OFFLINE localhost
ora.LISTENER.lsnr
ONLINE ONLINE localhost
ora.OCRVDISK.dg
ONLINE OFFLINE localhost
ora.asm
ONLINE ONLINE localhost Started
ora.ons
OFFLINE OFFLINE localhost
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE localhost
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE localhost
ora.orcl.db
1 ONLINE OFFLINE Instance Shutdown
4). Check the ASM instance alert log:
# su - grid
$ cd $ORACLE_HOME/log/diag/asm/+asm/+ASM/trace
$ ls *.log
alert_+ASM.log
$ tail -200 alert*.log
......
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/11.2.0/grid/dbs/arch
Autotune of undo retention is turned on.
IMODE=BR
ILAT =0
LICENSE_MAX_USERS = 0
SYS auditing is disabled
NOTE: Volume support enabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option.
ORACLE_HOME = /u01/app/11.2.0/grid
System name: AIX
Node name: localhost
Release: 1
Version: 6
Machine: 00CC1AD44C00
WARNING: using default parameter settings without any parameter file
Tue Aug 19 15:45:03 2014
PMON started with pid=2, OS id=7143492
Tue Aug 19 15:45:03 2014
PSP0 started with pid=3, OS id=7929858
Tue Aug 19 15:45:04 2014
VKTM started with pid=4, OS id=4259946 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
Tue Aug 19 15:45:04 2014
GEN0 started with pid=5, OS id=7077996
Tue Aug 19 15:45:04 2014
DIAG started with pid=6, OS id=6684742
Tue Aug 19 15:45:04 2014
DIA0 started with pid=7, OS id=6357128
Tue Aug 19 15:45:04 2014
MMAN started with pid=8, OS id=7012498
Tue Aug 19 15:45:04 2014
DBW0 started with pid=9, OS id=5636096
Tue Aug 19 15:45:04 2014
LGWR started with pid=10, OS id=5373998
Tue Aug 19 15:45:04 2014
CKPT started with pid=11, OS id=5898392
Tue Aug 19 15:45:05 2014
SMON started with pid=12, OS id=5308618
Tue Aug 19 15:45:05 2014
RBAL started with pid=13, OS id=8454162
Tue Aug 19 15:45:05 2014
GMON started with pid=14, OS id=12583112
Tue Aug 19 15:45:05 2014
MMON started with pid=15, OS id=5111896
Tue Aug 19 15:45:05 2014
MMNL started with pid=16, OS id=5046518
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Tue Aug 19 15:45:05 2014
SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:0:2} */
SQL> ALTER DISKGROUP ALL ENABLE VOLUME ALL /* asm agent *//* {0:0:2} */
SQL> ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group DATA01 number=1 incarn=0xbea398e5
NOTE: cache began mount (first) of group DATA01 number=1 incarn=0xbea398e5
Tue Aug 19 15:45:05 2014
SQL> ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
ERROR: no read quorum in group: required 2, found 0 disks
Tue Aug 19 15:45:06 2014
ALTER SYSTEM SET local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521))))' SCOPE=MEMORY SID='+ASM';
NOTE: cache dismounting (clean) group 1/0xBEA398E5 (DATA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7274670, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xBEA398E5 (DATA01)
NOTE: cache ending mount (fail) of group DATA01 number=1 incarn=0xbea398e5
NOTE: cache deleting context for group DATA01 1/0xbea398e5
GMON dismounting group 1 at 2 for pid 17, osid 7274670
ERROR: diskgroup DATA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA01"
ERROR: ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
Tue Aug 19 15:45:06 2014
SQL> ALTER DISKGROUP OCRVDISK MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group FRA01 number=1 incarn=0xc1e398e7
NOTE: cache began mount (first) of group FRA01 number=1 incarn=0xc1e398e7
Tue Aug 19 15:45:06 2014
NOTE: No asm libraries found in the system
ASM Health Checker found 1 new failures
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xC1E398E7 (FRA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7798844, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xC1E398E7 (FRA01)
NOTE: cache ending mount (fail) of group FRA01 number=1 incarn=0xc1e398e7
NOTE: cache deleting context for group FRA01 1/0xc1e398e7
GMON dismounting group 1 at 4 for pid 18, osid 7798844
ERROR: diskgroup FRA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "FRA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "FRA01"
ERROR: ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group OCRVDISK number=1 incarn=0xca3398ea
NOTE: cache began mount (first) of group OCRVDISK number=1 incarn=0xca3398ea
SQL> ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
SQL> ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
Tue Aug 19 15:45:07 2014
NOTE: No asm libraries found in the system
ASM Health Checker found 1 new failures
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xCA3398EA (OCRVDISK)
NOTE: messaging CKPT to quiesce pins Unix process pid: 9896018, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xCA3398EA (OCRVDISK)
NOTE: cache ending mount (fail) of group OCRVDISK number=1 incarn=0xca3398ea
NOTE: cache deleting context for group OCRVDISK 1/0xca3398ea
GMON dismounting group 1 at 6 for pid 19, osid 9896018
ERROR: diskgroup OCRVDISK was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "OCRVDISK" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCRVDISK"
ERROR: ALTER DISKGROUP OCRVDISK MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group DATA01 number=1 incarn=0xd22398ed
NOTE: cache began mount (first) of group DATA01 number=1 incarn=0xd22398ed
Tue Aug 19 15:45:08 2014
NOTE: No asm libraries found in the system
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xD22398ED (DATA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7798844, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xD22398ED (DATA01)
NOTE: cache ending mount (fail) of group DATA01 number=1 incarn=0xd22398ed
NOTE: cache deleting context for group DATA01 1/0xd22398ed
GMON dismounting group 1 at 8 for pid 18, osid 7798844
ERROR: diskgroup DATA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA01"
ERROR: ALTER DISKGROUP DATA01 MOUNT /* asm agent *//* {0:0:2} */
NOTE: cache registered group FRA01 number=1 incarn=0xd37398f0
NOTE: cache began mount (first) of group FRA01 number=1 incarn=0xd37398f0
Tue Aug 19 15:45:09 2014
NOTE: No asm libraries found in the system
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 1/0xD37398F0 (FRA01)
NOTE: messaging CKPT to quiesce pins Unix process pid: 7274670, image: oracle@localhost (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 1/0xD37398F0 (FRA01)
NOTE: cache ending mount (fail) of group FRA01 number=1 incarn=0xd37398f0
NOTE: cache deleting context for group FRA01 1/0xd37398f0
GMON dismounting group 1 at 10 for pid 17, osid 7274670
ERROR: diskgroup FRA01 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "FRA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "FRA01"
ERROR: ALTER DISKGROUP FRA01 MOUNT /* asm agent *//* {0:0:2} */
Tue Aug 19 15:45:09 2014
NOTE: No asm libraries found in the system
ASM Health Checker found 1 new failures
ASM Health Checker found 1 new failures
ASM Health Checker found 1 new failures
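The alert log is long and repetitive; the mount failures can be pulled out with a simple filter. A sketch (the sample file below mimics a few of the lines above so the filter can be exercised anywhere; on the real host you would point it at alert_+ASM.log):

```shell
# Build a tiny sample mimicking the alert-log excerpts above.
cat > /tmp/alert_sample.log <<'EOF'
ERROR: no read quorum in group: required 2, found 0 disks
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA01" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA01"
NOTE: cache dismounted group 1/0xBEA398E5 (DATA01)
EOF

# Keep only the mount-failure lines (ORA-15017/15032/15063 and the
# read-quorum errors); NOTE lines are dropped.
grep -E 'ORA-15(017|032|063)|no read quorum' /tmp/alert_sample.log
```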
IV. Restoring ASM disk group metadata with md_restore.
# su - grid
$ asmcmd -p
ASMCMD [+] > lsdg
ASMCMD [+] > lsdsk
ASMCMD [+] >
Run help md_restore for detailed usage information:
ASMCMD [+] > help md_restore
md_restore
This command restores a disk group metadata backup.
md_restore backup_file [--silent][--full|--nodg|--newdg -o 'old_diskgroup:new_diskgroup [,...]'][-S sql_script_file] [-G 'diskgroup [,diskgroup...]']
The options for the md_restore command are described below.
backup_file - Reads the metadata information from
backup_file.
--silent - Ignore errors. Normally, if md_restore
encounters an error, it will stop.
Specifying this flag ignores any errors.
--full - Specifies to create a disk group and restore
metadata.
--nodg - Specifies to restore metadata only.
--newdg -o old_diskgroup:new_diskgroup - Specifies to create a disk
group with a different name when restoring
metadata. The -o option is required
with --newdg.
-S sql_script_file - Write SQL commands to the specified SQL
script file instead of executing the commands.
-G diskgroup - Select the disk groups to be restored.
If no disk groups are defined, then all
disk groups will be restored.
The first example restores the disk group DATA from the backup script
and creates a copy. The second example takes an existing disk group
DATA and restores its metadata. The third example restores disk group
DATA completely but the new disk group that is created is called DATA2.
The fourth example restores from the backup file after applying the
overrides defined in the override.sql script file
ASMCMD [+] > md_restore --full -G data --silent /tmp/dgbackup20090714
ASMCMD [+] > md_restore --nodg -G data --silent /tmp/dgbackup20090714
ASMCMD [+] > md_restore --newdg -o 'data:data2' --silent /tmp/dgbackup20090714
ASMCMD [+] > md_restore -S override.sql --silent /tmp/dgbackup20090714
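Before letting md_restore execute anything against a damaged system, the -S option can write the SQL it would run to a script for review first. A minimal sketch (the preview-script path is an assumption; the command is echoed here rather than executed):

```shell
# Sketch: have md_restore emit its SQL to a script for review instead of
# executing it directly. Review the script, then rerun without -S.
backup_file=/tmp/dggroup20140819
sql_script=/tmp/md_restore_preview.sql

cmd="asmcmd md_restore -S $sql_script --silent $backup_file"
echo "$cmd"   # on the real host (as grid): run $cmd, then inspect $sql_script
```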
1). First, try restoring the metadata with the --nodg option:
ASMCMD [+] > md_restore --nodg --silent /tmp/dggroup20140819
Current Diskgroup metadata being restored: OCRVDISK
ASMCMD-9360: ADD or ALTER ATTRIBUTE failed
ORA-15032: not all alterations performed
ORA-15001: diskgroup "OCRVDISK" does not exist or is not mounted (DBD ERROR: OCIStmtExecute)
......
Current Diskgroup metadata being restored: DATA01
ASMCMD-9360: ADD or ALTER ATTRIBUTE failed
ORA-15032: not all alterations performed
ORA-15001: diskgroup "DATA01" does not exist or is not mounted (DBD ERROR: OCIStmtExecute)
......
Current Diskgroup metadata being restored: FRA01
ASMCMD-9360: ADD or ALTER ATTRIBUTE failed
ORA-15032: not all alterations performed
ORA-15001: diskgroup "FRA01" does not exist or is not mounted (DBD ERROR: OCIStmtExecute)
......
As this attempt shows, md_restore can only restore disk group metadata with --nodg when the disk group is already in the MOUNT state, so it does not help here.
2). Next, try the --full option:
ASMCMD [+] > md_restore --full --silent /tmp/dggroup20140819
Current Diskgroup metadata being restored: OCRVDISK
Diskgroup OCRVDISK created!
System template PARAMETERFILE modified!
System template XTRANSPORT modified!
System template CONTROLFILE modified!
System template DATAFILE modified!
System template BACKUPSET modified!
System template FLASHFILE modified!
System template CHANGETRACKING modified!
System template DATAGUARDCONFIG modified!
System template ARCHIVELOG modified!
System template TEMPFILE modified!
System template OCRFILE modified!
System template DUMPSET modified!
System template ONLINELOG modified!
System template FLASHBACK modified!
System template AUTOBACKUP modified!
System template ASMPARAMETERFILE modified!
Directory +OCRVDISK/ASM re-created!
Directory +OCRVDISK/ASM/ASMPARAMETERFILE re-created!
Current Diskgroup metadata being restored: DATA01
Diskgroup DATA01 created!
System template PARAMETERFILE modified!
System template CHANGETRACKING modified!
System template CONTROLFILE modified!
System template DATAFILE modified!
System template DATAGUARDCONFIG modified!
System template FLASHFILE modified!
System template XTRANSPORT modified!
System template TEMPFILE modified!
System template BACKUPSET modified!
System template OCRFILE modified!
System template ARCHIVELOG modified!
System template DUMPSET modified!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template FLASHBACK modified!
System template ASMPARAMETERFILE modified!
Directory +DATA01/ORCL re-created!
Directory +DATA01/ORCL/CONTROLFILE re-created!
Directory +DATA01/ORCL/DATAFILE re-created!
Directory +DATA01/ORCL/ONLINELOG re-created!
Directory +DATA01/ORCL/PARAMETERFILE re-created!
Directory +DATA01/ORCL/TEMPFILE re-created!
Current Diskgroup metadata being restored: FRA01
Diskgroup FRA01 created!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template CONTROLFILE modified!
System template DATAGUARDCONFIG modified!
System template CHANGETRACKING modified!
System template DUMPSET modified!
System template BACKUPSET modified!
System template ASMPARAMETERFILE modified!
System template DATAFILE modified!
System template FLASHBACK modified!
System template OCRFILE modified!
System template FLASHFILE modified!
System template PARAMETERFILE modified!
System template TEMPFILE modified!
System template XTRANSPORT modified!
System template ARCHIVELOG modified!
Directory +FRA01/ORCL re-created!
Directory +FRA01/ORCL/CONTROLFILE re-created!
Directory +FRA01/ORCL/ONLINELOG re-created!
With --full, md_restore first re-created the ASM disk groups and then restored their metadata.
ASMCMD [+] > lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 51200 51148 0 51148 0 N DATA01/
MOUNTED EXTERN N 512 4096 1048576 51200 51148 0 51148 0 N FRA01/
MOUNTED EXTERN N 512 4096 1048576 2048 1996 0 1996 0 N OCRVDISK/
ASMCMD [+] > lsdsk -G data01
Path
/dev/rhdisk3
ASMCMD [+] > lsdsk -G fra01
Path
/dev/rhdisk4
ASMCMD [+] > lsdsk -G ocrvdisk
Path
/dev/rhdisk2
ASMCMD [+] > ls
DATA01/
FRA01/
OCRVDISK/
ASMCMD [+] > cd data01/ORCL/orcl/
ASMCMD [+data01/orcl] > ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
ASMCMD [+data01/orcl] > cd datafile
ASMCMD [+data01/orcl/datafile] > ls
ASMCMD [+data01/orcl/datafile] > cd ..
ASMCMD [+data01/orcl] > cd controlfile
ASMCMD [+data01/orcl/controlfile] > ls
The metadata has been restored, but all of the ASM files are gone.
Clearly, md_backup and md_restore back up and restore only the metadata; the ASM files inside the disk groups are lost, so this approach does not achieve what we need.
Two further points are worth noting:
1). --newdg is effectively the same as --full, except that it renames the disk group; it still creates the disk group first and then restores the metadata.
2). On AIX, before restoring metadata or re-creating ASM disk groups, it is good practice to first clear the disks' PVIDs with chdev -l hdiskX -a pv=clear.
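That PVID cleanup is easy to loop over the affected disks. A sketch (the CHDEV indirection is an assumption that keeps the loop runnable off-host; on the real AIX server you would set CHDEV=chdev and run it as root with HAS stopped so the disks are closed):

```shell
# Sketch: clear the PVID on each ASM disk before restoring/re-creating
# the disk groups. "echo chdev" makes this a dry run; set CHDEV=chdev
# on the real host to actually change the devices.
CHDEV="echo chdev"
for d in hdisk2 hdisk3 hdisk4; do
    $CHDEV -l "$d" -a pv=clear
done
```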
--end--
Source: ITPUB blog, http://blog.itpub.net/23135684/viewspace-1260502/. Please credit the source when reposting.