select sid,event,p1,p2,p3,wait_time,seconds_in_wait from v$session_wait where sid=&sid;
log file switch(archiving needed)----hit during conn trans/system; insert into test select * from test; at around 5,000,000 rows
select name,space_limit/1024/1024,space_used/1024/1024,space_reclaimable,number_of_files from v$recovery_file_dest;---check the size and usage of the recovery file destination
Cause
~~~~~
We register all the information about what we place in the flash recovery
area in the rman repository/controlfile. If we determine that there is not
sufficient space in the recovery file destination, as set by dest_size then
we will fail.
Just deleting the old backups/archive logs from disk is not sufficient as
it's the rman repository/controlfile that holds the space used information.
Fix
~~~
There are a couple of possible options.
1) Increase the parameter db_recovery_file_dest_size
2) Stop using the db_recovery_file_dest by unsetting the parameter. (This
assumes you never really wanted to use this option)
3) Remove the Entries from the rman repository/Controlfile
The removal is described in the RMAN documentation, but the following is a quick
and dirty way if you don't have an rman repository - it could endanger your
ability to recover, so be careful.
a) delete unwanted archive log files from disk (rm, del commands)
b) connect to rman ( RMAN> connect target / )
c) RMAN> crosscheck archivelog all; - marks the controlfile that the archives
have been deleted
d) RMAN> delete noprompt expired archivelog all; - deletes the log entries identified
above.
Then the db file sequential read wait event appeared:
select b.sid,substr(a.object_name,1,30),a.object_type
from dba_objects a,v$session_wait b,x$bh c
where c.obj=a.object_id(+) and b.p1=c.file#(+) and b.p2=c.dbablk(+)
and b.event='db file sequential read';
SQL> conn trans/system; ----corrupt block encountered
Connected.
SQL> select count(*) from test;
select count(*) from test
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 7, block # 147809)
ORA-01110: data file 7: '/home/oracle/oracle10g/oradata/rac/trans.dbf'
Started RMAN block recovery for the corrupt block. "Mianhua" (a forum poster) said a full backup is required - incremental backups cannot be used to restore individual blocks.
blockrecover datafile 7 block 147809;---hit ORA-01193: file 7 is not the same file seen at start of recovery; after re-running RMAN it worked fine
ORA-01193: file string is not the same file seen at start of recovery
Cause: A different copy of the file was accessed the last time media recovery looked at the file header. A backup of the file
was restored or the meaning of the file name changed during recovery
----identify SQL with high disk_reads
select substr(sql_text,1,30),sharable_mem,runtime_mem,sorts,loaded_versions,
fetches,executions,loads,first_load_time,parse_calls,disk_reads,buffer_gets
from v$sql where address
in( select distinct sql_address from v$session where username='TRANS');
Dynamic PL/SQL
---A strange problem: I created the stored procedure, ran execute v_createtab, and got ORA-01031: insufficient privileges. I did conn / as sysdba; grant dba to trans; and still got the same error. (This is actually expected: privileges obtained through roles such as DBA are disabled inside definer-rights PL/SQL, so CREATE TABLE must be granted to the user directly: grant create table to trans;)
create or replace procedure v_createtab
as
  v_str varchar2(1000);
begin
  v_str := 'create table test (a int, b int)';
  execute immediate v_str;
  -- the FOR loop declares its own counter, so no explicit variable is needed
  for v_count in 1000001 .. 1000000000 loop
    v_str := 'insert into test values (:a, :b)';
    execute immediate v_str using v_count, v_count + 1;
    if mod(v_count, 10000) = 0 then
      commit;  -- commit every 10,000 rows
    end if;
  end loop;
end;
/
Summary: DML against large tables generates a lot of redo and undo, so the archived logs get very large. Put the online redo logs and archived logs on fast disks, separate from the datafiles, to balance I/O.
You can raise log_archive_max_processes (from the default 2 up to a maximum of 10). During the insert, events such as log file switch (checkpoint incomplete), db file sequential read and db file scattered read appear in turn, over and over.
To save disk space: check free space with df -k, then run backup archivelog all delete input;
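A minimal sketch of that space check (the archive destination path below is a stand-in for illustration, not the real one from these notes):

```shell
# Check free space on the archive log destination before deciding whether
# to run "backup archivelog all delete input;" in RMAN.
# ARCH_DEST is a hypothetical path for illustration.
ARCH_DEST=/tmp
# Print the filesystem usage line for that directory.
df -k "$ARCH_DEST" | tail -n 1
# Capture the available-KB column (field 4 of "df -k" output on Linux).
avail_kb=$(df -k "$ARCH_DEST" | tail -n 1 | awk '{print $4}')
echo "available KB on $ARCH_DEST: $avail_kb"
```

In practice you would substitute the real log_archive_dest path and compare `avail_kb` against the expected archive volume before deleting anything.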
latch (shared pool) wait event ---seen during a PL/SQL block doing insert /*+ append */ ... values (i, i+1) (note: in 10g the APPEND hint is ignored for single-row VALUES inserts; it only takes effect for INSERT ... SELECT)
---watch how undo usage changes
select to_char(begin_time,'yyyy-mm-dd hh24:mi:ss'),to_char(end_time,'yyyy-mm-dd hh24:mi:ss'),
undotsn,undoblks,txncount from v$undostat order by undoblks desc;
select bytes/1024/1024 from dba_segments where segment_name='TEST';---size of a given object in MB
alter database datafile 7 resize 5g;
select tablespace_name,autoextensible,bytes/1024/1024 mb from dba_data_files;---tablespace size and autoextend setting
In Oracle 10g, what is the performance impact of the autoextensible setting?
When should large tables be analyzed, by what standard, and with compute or estimate (sampled) statistics?
sga_target (dynamically resizable) cannot exceed sga_max_size (which only takes effect after a restart).
What are the differences between the PGA and the SGA, how are they divided, what does each do, and how should they be configured?
In my test with sga_max_size=900m, varying sga_target from 850m down to 100m, why did queries against my 100-million-row table take roughly two minutes in every case?
alter system flush shared_pool;
alter system flush buffer_cache;
Oracle auto-start script
su - root
vi /etc/init.d/dbora
#!/bin/sh
# description: Oracle auto start-stop script.
# chkconfig: - 20 80
#
# Set ORA_HOME to be equivalent to the $ORACLE_HOME
# from which you wish to execute dbstart and dbshut;
#
# Set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME.
ORA_HOME=/home/oracle/oracle10g/product/10.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
echo "Oracle startup: cannot start"
exit
fi
case "$1" in
'start')
# Start the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart"
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"
;;
'stop')
# Stop the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut"
;;
'restart')
$0 stop
$0 start
;;
esac
Grant execute permission:
chmod 750 /etc/init.d/dbora
Create the run-level links:
ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
Register the service:
chkconfig --level 345 dbora on
That's it: Oracle now starts and stops with the machine. Oddly, after the service booted there were two instances; I used dbca to delete the extra one, and then sqlplus raised:
ERROR:
ORA-09925: Unable to create audit trail file
Linux Error: 2: No such file or directory
Additional information: 9925
ORA-01075: you are currently logged on
Cause: deleting the other instance with dbca also removed its dump directories under $ORACLE_BASE/admin. After recreating the directories referenced by the pfile/spfile, the instance started normally.
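A minimal sketch of recreating those dump directories by hand; the base path and SID below are hypothetical placeholders (take the real ones from the *_dump_dest parameters in your pfile/spfile):

```shell
# Recreate the admin dump directories that dbca removed along with the
# deleted instance. BASE and SID here are hypothetical values, not the
# real paths from these notes.
BASE=/tmp/demo_oracle_base
SID=rac
for d in adump bdump cdump udump; do
    mkdir -p "$BASE/admin/$SID/$d"
done
ls "$BASE/admin/$SID"
```

With the directories back in place (adump in particular, for the audit trail), the ORA-09925 at startup goes away.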
By what standard should large_pool_size be sized when running RMAN?
I tested concurrent RMAN backups (a full backup plus a level 0 backup) while running insert into test select * from test;
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 04/23/2008 05:36:30
ORA-19566: exceeded limit of 0 corrupt blocks for file /home/oracle/oracle10g/oradata/rac/trans.dbf
19566, 00000, "exceeded limit of %s corrupt blocks for file %s"
// *Cause: The user specified limit of allowable corrupt blocks was exceeded
// while reading the specified datafile for a datafile copy or backup.
// *Action: None. The copy or backup operation fails.
// The session trace file contains detailed information
// about which blocks were corrupt.
---------------------------------------------------------------
output filename=/backup/ocfs_data/intel/tools01.dbf recid=362 stamp=615574424
channel dmd2: copied datafile 7
output filename=/backup/ocfs_data/intel/users01.dbf recid=363 stamp=615574428
channel dmd2: copied datafile 8
output filename=/backup/ocfs_data/intel/xdb01.dbf recid=364 stamp=615574437
channel dmd2: copied datafile 9
output filename=/backup/ocfs_data/intel/undotbs02.dbf recid=365 stamp=615574645
released channel: dmd2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of copy command on dmd2 channel at 02/26/2007 16:56:10
ORA-19566: exceeded limit of 0 corrupt blocks for file /ocfs_data/intel/base_data01.dbf
ORA-19600: input file is datafile 10 (/ocfs_data/intel/base_data01.dbf)
ORA-19601: output file is datafile-copy 0 (/backup/ocfs_data/intel/base_data01.dbf)
RMAN> **end-of-file**
***
Corrupt block relative dba: 0x0280cab8 (file 10, block 51896)
Fractured block found during datafile copy
Data in bad block -
type: 0 format: 2 rdba: 0x0000cab8
last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
consistency value in tail: 0x00000000
check value in block header: 0xcdb8, computed block checksum: 0x1
spare1: 0x0, spare2: 0x0, spare3: 0x0
***
Replies follow:
Block 51896 of file 10 appears to be corrupt.
If you have a backup, repair it with blockrecover.
-------------------------------------------------
Find the segment and rebuild it.
If it is an index, rebuild it directly.
If it is a table, can you query all rows from it?
Query which segment owns the corrupt block:
SELECT segment_name,segment_type,extent_id,block_id, blocks
from dba_extents t
where file_id = 41
AND 55692 between block_id and (block_id + blocks - 1);
tolywang commented on 2007.03.03 16:32
re: Handling ORA-19566: exceeded limit of 0 corrupt blocks from RMAN
This post describes how to handle corrupt blocks in an Oracle database. The handling differs depending on which kind of object the corruption lands on; the main approaches are outlined below. As a database ages and its hardware wears out, the probability of corrupt blocks grows, so knowing how to deal with them matters for any DBA.
I. What is a corrupt block?
First, a quick look at the format and structure of a database block.
Data blocks have a fixed format with three layers: the cache layer, the transaction layer and the data layer. Whenever a block is read or written, the database runs consistency checks on it, covering the block type, the block address, the block SCN, and the block head and tail. If any of these are inconsistent, the block is marked corrupt. Corruption comes in two kinds: logical and physical.
II. How corrupt blocks affect the database
When corruption occurs, the alert log reports errors such as Ora-1578 and Ora-600 (with a trace file in the bdump directory). For Ora-600, the first argument falls in the range 2000-8000, and its value indicates which block layer has the problem:
Block layer          Range
Cache layer          2000 - 4000
Transaction layer    4000 - 6000
Data layer           6000 - 8000
The affected object may be a data dictionary table, a rollback segment, a temporary segment, a user table or an index, and the handling differs by object type.
III. Causes of corrupt blocks
Oracle reads and writes blocks through standard C system calls, so corruption can come from:
hardware I/O errors
operating system I/O errors or buffering problems
memory or paging problems
disk repair tools
part of a datafile being overwritten
Oracle failing while accessing an unformatted block
partial datafile overflow
Oracle or operating system bugs
IV. Handling corrupt blocks
1. First collect information about the corruption from the alert<SID>.log or trace files, looking for entries such as:
Ora-1578 file# (RFN) block#
Ora-1110 file# (AFN) block#
Ora-600 file# (AFN) block#
where RFN is the relative_fno and AFN is the file_id:
Select file_name, tablespace_name, file_id "AFN", relative_fno "RFN"
From dba_data_files;
Select file_name, tablespace_name, file_id "AFN", relative_fno "RFN"
From dba_temp_files;
2. Identify the object containing the corrupt block (substitute the AFN and block# found in step 1):
SELECT tablespace_name, segment_type, owner, segment_name, partition_name FROM dba_extents WHERE file_id = <AFN> AND <block#> BETWEEN block_id AND block_id + blocks - 1;
This shows which object the corrupt block belongs to and what type it is. Note that if the corruption is in a temp file, no rows are returned.
3. Choose the handling method based on the object type found in step 2.
Common objects hit by corruption:
objects owned by SYS
rollback segments
temporary segments
indexes or partitioned indexes
tables
Common handling methods:
restore and recover the datafile
recover only the corrupt blocks (9i and later)
salvage the data with a ROWID RANGE SCAN
use DBMS_REPAIR
use a diagnostic EVENT
4. The methods in detail
Restoring the datafile:
Usable when the database is in archivelog mode and a complete physical backup exists.
Steps:
1) Take the affected datafile offline:
ALTER DATABASE DATAFILE 'name_of_file' OFFLINE;
2) Set the corrupt datafile aside and copy in the backup copy. If the restored file must live at a different path:
ALTER DATABASE RENAME FILE 'old_name' TO 'new_name';
3) Recover the datafile:
RECOVER DATAFILE 'name_of_file';
4) Bring the recovered datafile back online:
ALTER DATABASE DATAFILE 'name_of_file' ONLINE;
Recovering only the corrupt blocks (9i and later)
This requires database version 9.2.0 or later, an RMAN catalog database, archivelog mode, and a complete physical backup.
Steps:
Use RMAN's BLOCKRECOVER command:
RMAN> run { blockrecover datafile 5 block 11,16; }
You can also force a backup from before a given point to be used to recover the blocks:
RMAN> run { blockrecover datafile 5 block 11,16 restore until sequence 8505; }
Salvaging data with a ROWID RANGE SCAN
1) Build the lowest ROWID in the corrupt block (the placeholders stand for the values found above):
SELECT dbms_rowid.rowid_create(1, <data_object_id>, <RFN>, <block#>, 0) FROM dual;
2) Build the lowest ROWID above the corrupt block:
SELECT dbms_rowid.rowid_create(1, <data_object_id>, <RFN>, <block#> + 1, 0) FROM dual;
3) Create a salvage table to hold the clean rows:
CREATE TABLE salvage_table AS SELECT * FROM corrupt_tab WHERE 1 = 2;
4) Copy the clean rows into it, skipping the corrupt block's ROWID range:
INSERT INTO salvage_table SELECT /*+ ROWID(A) */ * FROM corrupt_tab A WHERE rowid < '<low_rowid>';
INSERT INTO salvage_table SELECT /*+ ROWID(A) */ * FROM corrupt_tab A WHERE rowid >= '<high_rowid>';
5) Rebuild the table from the salvage table, then recreate its indexes and constraints.
Using diagnostic event 10231 to skip corrupt blocks during full table scans
At session level:
ALTER SESSION SET EVENTS '10231 TRACE NAME CONTEXT FOREVER, LEVEL 10';
Or at database level, by adding event="10231 trace name context forever, level 10" to the initialization parameters and restarting.
Then pull the clean rows out of the corrupt table:
CREATE TABLE salvage_emp AS SELECT * FROM corrupt_table;
Finally rename the new table to the original name and recreate its indexes and constraints.
Recovering with the dbms_repair package
Mark the corrupt table with dbms_repair so that full scans skip the bad blocks:
EXECUTE DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('<schema>', '<table>');
Then extract the clean data with exp or CREATE TABLE ... AS SELECT, and rebuild the table with its indexes and constraints.
V. Detecting corrupt blocks in advance
1. To check every table, export the whole database with exp; corruption surfaces as export errors. The tool has blind spots, though - it will not detect:
corrupt blocks above the high-water mark
corrupt blocks in indexes
corrupt blocks in the data dictionary
2. To check only the important tables, use ANALYZE TABLE tablename VALIDATE STRUCTURE CASCADE. It checks for corruption without marking blocks as corrupt; the results go to the user trace file under USER_DUMP_DEST.
3. Use Oracle's dedicated dbv utility. Its syntax:
Keyword      Description                      (Default)
----------------------------------------------------
FILE         file to verify                   (none)
START        start block                      (first block of file)
END          end block                        (last block of file)
BLOCKSIZE    logical block size               (2048)
LOGFILE      output log                       (none)
FEEDBACK     display progress                 (0)
PARFILE      parameter file                   (none)
USERID       username/password                (none)
SEGMENT_ID   segment id (tsn.relfile.block)   (none)
For example:
dbv file=system01.dbf blocksize=8192
DBVERIFY: Release 9.2.0.5.0 - Production on Sat Nov 27 15:29:13 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = system01.dbf
DBVERIFY - Verification complete
Total Pages Examined          : 32000
Total Pages Processed (Data)  : 13261
Total Pages Failing (Data)    : 0
Total Pages Processed (Index) : 2184
Total Pages Failing (Index)   : 0
Total Pages Processed (Other) : 1369
Total Pages Processed (Seg)   : 0
Total Pages Failing (Seg)     : 0
Total Pages Empty             : 15186
Total Pages Marked Corrupt    : 0
Total Pages Influx            : 0
Note: dbv requires the file argument to carry a file extension, so for data stored on raw devices you must first ln the raw device to a regular file name and then run dbv against that link.
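A sketch of that workaround; the raw device is faked with an empty file here, purely to show the link trick (the real path would be something like a /dev/raw entry):

```shell
# dbv insists on a file name with an extension, so link the raw device
# to a name ending in .dbf and point dbv at the link instead.
# /tmp/demo_raw_device is a hypothetical stand-in for a real raw device.
touch /tmp/demo_raw_device
ln -sf /tmp/demo_raw_device /tmp/demo_link01.dbf
ls -l /tmp/demo_link01.dbf
# Then verify through the link (requires an Oracle installation):
# dbv file=/tmp/demo_link01.dbf blocksize=8192
```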
Continued: study notes, 2008-04-24
While running select count(*) from test as user trans, the alert log showed:
Thread 1 advanced to log sequence 184
Current log# 2 seq# 184 mem# 0: /home/oracle/oracle10g/oradata/rac/redo02.log
Thu Apr 24 01:29:30 2008
ARC2: Log corruption near block 71437 change 948099 time ?
Thu Apr 24 01:29:30 2008
Errors in file /home/oracle/oracle10g/admin/rac/bdump/10gdb_arc2_6341.trc:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 71437 change 948099 time 04/23/2008 01:16:17
ORA-00312: online log 3 thread 1: '/home/oracle/oracle10g/oradata/rac/redo03.log'
ARC2: All Archive destinations made inactive due to error 354
Thu Apr 24 01:29:30 2008
ARC2: Closing local archive destination LOG_ARCHIVE_DEST_10: '/home/oracle/newdest/RAC/archivelog/2008_04_24/o1_mf_1_122_4106osbp_.arc' (error 354)
(10gdb)
Committing creation of archivelog '/home/oracle/newdest/RAC/archivelog/2008_04_24/o1_mf_1_122_4106osbp_.arc' (error 354)
Deleted Oracle managed file /home/oracle/newdest/RAC/archivelog/2008_04_24/o1_mf_1_122_4106osbp_.arc
ARC2: Failed to archive thread 1 sequence 122 (354)
ARCH: Archival stopped, error occurred. Will continue retrying
Thu Apr 24 01:29:31 2008
ORACLE Instance 10gdb - Archival Error
Thu Apr 24 01:29:31 2008
ORA-16038: log 3 sequence# 122 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 3 thread 1: '/home/oracle/oracle10g/oradata/rac/redo03.log'
Thu Apr 24 01:29:31 2008
Errors in file /home/oracle/oracle10g/admin/rac/bdump/10gdb_arc2_6341.trc:
ORA-16038: log 3 sequence# 122 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 3 thread 1: '/home/oracle/oracle10g/oradata/rac/redo03.log'
select group#,thread#,sequence#,members,archived,status,to_char(first_time,'yyyy-mm-dd hh24:mi:ss') from v$log;---the redo log has corrupt blocks; the database keeps shutting down intermittently
Note:
If the lost logfile is the CURRENT one:
If the database was shut down consistently, start it to mount and open resetlogs directly; don't forget to back up the database afterwards!
If the shutdown was not consistent, run recover database until cancel; and then open resetlogs;
If the lost redo log is not the current one, start the instance to mount, then:
alter database clear logfile '..';
to regenerate it, open the database, and back it up right away.
startup --redo log 3 has corrupt blocks and is unarchived, so the database cannot open and stops at mount
alter database clear unarchived logfile group 3;
alter database open;
conn trans/system;
select count(*) from test;---raised:
ERROR at line 1:
ORA-00600: internal error code, arguments: [6856], [0], [213], [], [], [], [],
[]
Errors in file /home/oracle/oracle10g/admin/rac/bdump/10gdb_smon_6484.trc:
ORA-00600: internal error code, arguments: [6856], [0], [213], [], [], [], [], []
Using Oracle tkprof
1. Enabling SQL trace
(1) Instance level
Set the initialization parameter SQL_TRACE=TRUE
(2) Session level
ALTER SESSION SET SQL_TRACE=TRUE
DBMS_SESSION.SQL_TRACE(SQL_TRACE)
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID,SERIAL#,SQL_TRACE);
(3) Where the .trc files go
Check the user_dump_dest initialization parameter (the user trace file directory).
2. Enable timed statistics (so the elapsed time of each SQL statement is recorded)
alter system set timed_statistics=true scope=both;
3. Using tkprof
tkprof analyzes a .trc file and produces a report, and while doing so it can sort the statements by various options. A particularly useful sort option is fchela, which orders by elapsed time fetching (remember to set timed_statistics=true), so the generated .prf file lists the most time-consuming SQL first. Another useful parameter is sys: setting it to no suppresses all SQL run as the SYS user, which keeps the report smaller and easier to read.
Example:
(1) Trace the session with SID,SERIAL# = 12,73
exec dbms_system.SET_SQL_TRACE_IN_SESSION(12,73,true);
(2) Find the server process number (and hence the .trc file name)
select spid from v$process where addr = (select paddr from v$session where sid=12);
(3) Generate the report with tkprof
C:\>tkprof e:\oracle\admin\hunter\hunter_ora_4188.trc c:\test.prf aggregate=yes sys=no sort=fchela (this puts the most time-consuming SQL at the top of the report)
(4) Open the .prf file
Open the generated report in any text editor, e.g. UltraEdit.
select sid,opname,target,round(sofar/totalwork*100,2),start_time,time_remaining,elapsed_seconds
from v$session_longops where opname like 'RMAN%';
delete backup;
delete copy;
crosscheck backup;
crosscheck copy;
control file parallel write event:
Shows up alongside heavy log switching; during my 1-billion-row insert it alternated with log file switch completion.
The CKPT and ARCn processes generate this event.
---sessions waiting on control file writes
select /*+ ordered */ a.sid,b.time_waited,round(b.time_waited/b.total_waits,4) average_wait,round((sysdate-a.logon_time)*24) hours_connected
from v$session_event b,v$session a
where a.sid=b.sid and b.event='control file parallel write' order by time_waited;
----check redo log sizing; for this transaction volume the logs may be too small. The following shows how often the logs switch:
select thread#,to_char(first_time,'yyyy-mm-dd hh24:mi:ss') creation_time,sequence#,first_change# lowest_scn_in_log,
next_change# highest_scn_in_log,recid controlfile_record_id,stamp controlfile_record_stamp
from v$log_history order by first_time;
----find which sessions commit frequently ---note: do not place the redo logs on RAID 5 if you want LGWR to be fast
select sid,value from v$sesstat where statistic# =(select statistic# from v$statname where name='user commits')
Backing up large data volumes: 1 TB
Daily incremental backups.
One full backup per week, retained for one month.
The data is split into history and current tablespaces:
History tablespaces: back up once a month, retain for a quarter (these are inconsistent, i.e. hot, backups, so the archived logs must be backed up as well).
Current tablespaces: back up once a week, retain for one month.
Tables with heavy DML: use exp and imp (indexes=no buffers=large feedback=100000); to recover you also need backups of the datafiles behind the exported tables - the dmp file alone is not enough.
select bytes/1024/1024 from dba_segments where segment_name='TEST';
-----find sessions doing many hard parses
select a.sid,c.username,b.name,a.value,round((sysdate-c.logon_time)*24) hours_connected
from v$sesstat a,v$statname b,v$session c
where c.sid=a.sid and a.statistic#=b.statistic# and a.value>0 and b.name='parse count (hard)'
order by a.value desc;
----exploring v$transaction
select xidusn,xidsqn,ubafil
from v$transaction;
select ubablk,ubasqn,ubarec,status,start_time,
start_scnb,start_scnw,start_uext,start_ubafil,start_ubablk,start_ubasqn,
start_ubarec,ses_addr,flag,
used_ublk,used_urec,log_io,phy_io,cr_get,cr_change,
dscn_base,xid,prv_xid,ptx_xid
from v$transaction;
latch:library cache ---wait event
/
db_writer_processes is a static parameter (a restart is required to change it)
---RMAN v$ views
select session_key,command_id,to_char(start_time,'yyyymmdd hh24:mi:ss'),to_char(end_time,'yyyymmdd hh24:mi:ss'),
input_bytes,output_bytes,output_device_type,status,input_type as backuptype,
elapsed_seconds,optimized,compression_ratio,input_bytes_per_sec,output_bytes_per_sec,
input_bytes_display,output_bytes_display,time_taken_display
from v$rman_backup_job_details;
select session_key,to_char(start_time,'yyyymmdd hh24:mi:ss') as bt,to_char(end_time,'yyyymmdd hh24:mi:ss') as et,
input_bytes,output_bytes,output_device_type,status,input_type,
elapsed_seconds,compression_ratio
from v$rman_backup_job_details;
------------the error from running an incremental level 3 backup directly
RMAN> backup incremental level 3 database;
Starting backup at 25-APR-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 3 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00005 name=/home/oracle/bigts.dbf
input datafile fno=00001 name=/home/oracle/oracle10g/oradata/newdb/system01.dbf
input datafile fno=00003 name=/home/oracle/oracle10g/oradata/newdb/sysaux01.dbf
input datafile fno=00002 name=/home/oracle/oracle10g/oradata/newdb/undotbs01.dbf
input datafile fno=00004 name=/home/oracle/oracle10g/oradata/newdb/users01.dbf
channel ORA_DISK_1: starting piece 1 at 25-APR-08
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 04/25/2008 13:40:
04
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 52428800 bytes disk space from 2147483648 limit
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting incremental level 3 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current control file in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 25-APR-08
channel ORA_DISK_1: finished piece 1 at 25-APR-08
piece handle=/home/oracle/rmanbackup/NEWDB/backupset/2008_04_25/o1_mf_ncsn3_TAG2
0080425T133838_4145vppo_.bkp tag=TAG20080425T133838 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 04/25/2008 13:40:
04
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 52428800 bytes disk space from 2147483648 limit
log file switch(archiving needed)----在conn trans/system; insert into test select * from test;---500万记录时
select name,space_limit/1024/1024,space_used/1024/1024,space_reclaimable,number_of_files from v$recovery_file_dest;---查看恢复目录的大小和使用情况
Cause
~~~~~
We register all the information about what we place in the flash recovery
area in the rman repository/controlfile. If we determine that there is not
sufficient space in the recovery file destination, as set by dest_size then
we will fail.
Just deleting the old backups/archive logs from disk is not sufficient as
it's the rman repository/controlfile that holds the space used information.
Fix
~~~
There are a couple of possible options.
1) Increase the parameter db_recovery_file_dest_size
2) Stop using the db_recovery_file_dest by unsetting the parameter. (This
assumes you never really wanted to use this option)
3) Remove the Entries from the rman repository/Controlfile
The removal is desribed in the RMAN documentation but this is a quick and
dirty way if you don't have an rman repository - but could endanger your
ability to recover - so be careful.
a) delete unwanted archive log files from disk (rm, del commands)
b) connect to rman ( RMAN> connect target / )
c) RMAN> crosscheck archivelog all; - marks the controlfile that the archives
have been deleted
d) RMAN> delete noprompt expired archivelog all; - deletes the log entries identified
above.
然后,又出现db file sequential read 等待事件
select b.sid,substr(a.object_name,1,30),a.object_type
from dba_objects a,v$session_wait b,x$bh c
where c.obj=a.object_id(+) and b.p1=c.file#(+) and b.p2=c.dbablk(+)
and b.event='db file sequential read';
SQL> conn trans/system; ----出现环块
Connected.
SQL> select count(*) from test;
select count(*) from test
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 7, block # 147809)
ORA-01110: data file 7: '/home/oracle/oracle10g/oradata/rac/trans.dbf'
开始坏块rman recovery,棉花讲要进行full backup,do use incremental backup for block erroption
blockrecover datafile 7 block 147809;---出现ora-01193:file 7 is not the same seen at start recovery,重新运行rman,就正常了
ORA-01193: file string is not the same file seen at start of recovery
Cause: A different copy of the file was accessed the last time media recovery looked at the file header. A backup of the file
was restored or the meaning of the file name changed during recovery
----确认disk_reads高的sql
select substr(sql_text,1,30),sharable_mem,runtime_mem,sorts,loaded_versions,
fetches,executions,loads,first_load_time,parse_calls,disk_reads,buffer_gets
from v$sql where address
in( select distinct sql_address from v$session where username='TRANS');
动态plsql
---出个怪问题:我构建了存储过程,execute v_createdb,提示 ra-01031 insufficient privilege.我conn /as sysdba;grant dba to trans;还是同样错误.怪了.
create or replace procedure v_createtab
as
v_str varchar2(1000);
v_count number;
begin
v_str:='create table test (a int,b int)';
execute immediate v_str;
for v_count in 1000001..1000000000 loop
v_str:='insert into test values(:a,:b)';
execute immediate v_str using v_count,v_count+1;
if mod(v_count,10000)=0 then
commit;
end if;
end loop;
end;
总结:对于大表,dml时,会产生大量的redo,undo,所以归档会非常大,这时,你就要把log file and archived log file 放置到高速的磁盘上,不要和数据文件存储在一起,以balance io.
可加大log_archive_max_processes (from 2 to max 10),在insert 时,会产生checkpoint completion,db sequential read,log file scattered此些事件会依次反复循环出现等.
为了节省磁盘空间:可根据df -ku,backup archivelog all delete input;
latch (shared pool) wait event ---出现在pl sql block including insert into /*+append */ values(i,i+1)
---查看undo的变化情况
select to_char(begin_time,'yyyy-mm-dd hh24:mi:ss'),to_char(end_time,'yyyy-mm-dd hh24:mi:ss'),
undotsn,undoblks,txncount from v$undostat order by undoblks desc;
select bytes/1024/1024 from dba_segments where segment_name='TEST';---查某个对象的大小
alter database add datafile 7 resize 5g;
select tablespace_name,extensible,bytes/1024/1024 mb from dba_data_files;---查表空间的大小和自动扩展性
在oracle10g中,autoextensible的设置对性能的影响区别是什么
大表的分析,何时进行,如何标准,完全还是抽样.
sga_target(可以动态分配)不能大于sga_max_size(此参数要重启生效)
pga和sga的区别及分割标准及作用.如何配置
我作实验,sga_max_size=900m,sga_target=850 to 100m,为何我的1亿大表,响应查询时间差不多都是二分钟呢.
alter system flush shared_pool;
alter system flush buffer_cache;
oracle 自启动脚本
su - root
vi /etc/init.d/dbora
#!/bin/sh
# description: Oracle auto start-stop script.
# chkconfig: - 20 80
#
# Set ORA_HOME to be equivalent to the $ORACLE_HOME
# from which you wish to execute dbstart and dbshut;
#
# Set ORA_OWNER to the user id of the owner of the
# Oracle database in ORA_HOME.
ORA_HOME=/home/oracle/oracle10g/product/10.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
echo "Oracle startup: cannot start"
exit
fi
case "$1" in
'start')
# Start the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su - $ORA_OWNER -c $ORA_HOME/bin/dbstart
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"
;;
'stop')
# Stop the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
su - $ORA_OWNER -c $ORA_HOME/bin/dbshut
;;
'restart')
$0 stop
$0 start
;;
esac
赋予执行权限
chmod 750 /etc/init.d/dbora
作成以下链接:
ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
ln -s /etc/init.d/dbora /etc/rc3.d/S99dbora
执行以下命令:
chkconfig --level 345 dbora on
这样就OK了。下次开关机的时候,Oracle也会随之启动/停止。奇怪,oracle service bootup,but two instance,i use dbca delete other instance.then sqlplus err
ERROR:
ORA-09925: Unable to create audit trail file
Linux Error: 2: No such file or directory
Additional information: 9925
ORA-01075: you are currently logged on
此问题:因为我dbca delete other instance,lead to delete $ORACLE_BASE/admin下面的dest目录.我对应pfile(spfile)重建文件目录,启动就可以了
large_pool_size如进行rman时,到底按什么标准来配置呢
实验了同时并发rman备份(全库备份和0 level backup),同时进行insert into test select * from test;
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 04/23/2008 05:36:30
ORA-19566: exceeded limit of 0 corrupt blocks for file /home/oracle/oracle10g/oradata/rac/trans.dbf
19566, 00000, "exceeded limit of %s corrupt blocks for file %s"
// *Cause: The user specified limit of allowable corrupt blocks was exceeded
// while reading the specified datafile for a datafile copy or backup.
// *Action: None. The copy or backup operation fails.
// The session trace file contains detailed information
// about which blocks were corrupt.
---------------------------------------------------------------
output filename=/backup/ocfs_data/intel/tools01.dbf recid=362 stamp=615574424
channel dmd2: copied datafile 7
output filename=/backup/ocfs_data/intel/users01.dbf recid=363 stamp=615574428
channel dmd2: copied datafile 8
output filename=/backup/ocfs_data/intel/xdb01.dbf recid=364 stamp=615574437
channel dmd2: copied datafile 9
output filename=/backup/ocfs_data/intel/undotbs02.dbf recid=365 stamp=615574645
released channel: dmd2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of copy command on dmd2 channel at 02/26/2007 16:56:10
ORA-19566: exceeded limit of 0 corrupt blocks for file /ocfs_data/intel/base_data01.dbf
ORA-19600: input file is datafile 10 (/ocfs_data/intel/base_data01.dbf)
ORA-19601: output file is datafile-copy 0 (/backup/ocfs_data/intel/base_data01.dbf)
RMAN>
RMAN>
RMAN>
RMAN> **end-of-file**
***
Corrupt block relative dba: 0x0280cab8 (file 10, block 51896)
Fractured block found during datafile copy
Data in bad block -
type: 0 format: 2 rdba: 0x0000cab8
last change scn: 0x0000.00000000 seq: 0x1 flg: 0x05
consistency value in tail: 0x00000000
check value in block header: 0xcdb8, computed block checksum: 0x1
spare1: 0x0, spare2: 0x0, spare3: 0x0
***
一下是回复:
应该是file 10 的块51896损坏
如果有备份的话使用blockrecover修复一下
-------------------------------------------------
Find the segment and rebuild it.
If it is an index, rebuild it directly.
If it is a table, can you query all rows from it?
查詢壞塊:
SELECT segment_name,segment_type,extent_id,block_id, blocks
from dba_extents t
where file_id = 41
AND 55692 between block_id and (block_id + blocks - 1)
tolywang 评论于:2007.03.03 16:32
re: RMAN出现文件损坏ORA-19566: exceeded limit of 0 corrupt blocks 处理 [回复]
本文主要介绍如何去处理在Oracle数据库中出现坏块的问题,对于坏块产生在不同的对象上,处理的方法会有所不同,本文将大致对这些方法做一些介绍。因为数据库运行时间长了,由于硬件设备的老化,出现坏块的几率会越来越大,因此,做为一个DBA,怎么去解决数据库出现的坏块问题就成了一个重要的议题了。
一:什么是数据库的坏块
首先我们来大概看一下数据库块的格式和结构
数据库的数据块有固定的格式和结构,分三层:cache layer,transaction layer,data layer。在我们对数据块进行读取写入操作的时候,数据库会对要读写的数据块做一致性的检查,其中包括:数据块的类型、数据块的地址信息、数据块的SCN号以及数据块的头部和尾部。如果发现其中有不一致的信息,那数据库就会标记这个数据块为坏块了。数据库的坏块分为两种,逻辑坏块和物理坏块。
二:坏块对数据库产生的影响
如果数据库出现坏块,数据库的告警日志文件里面会存在有如下的一些报错信息:Ora-1578以及Ora-600 and trace file in bdump directory,其中Ora-600错误的第一个参数值的范围是[2000]-[8000],不同的值代表着数据块的不同的层出现问题,具体的如下表所示:
Range block layer
Cache layer 2000 – 4000
Transaction layer 4000 – 6000
Data layer 6000 - 8000
坏块产生影响的对象可能是数据字典表、回滚段表、临时段、用户数据表和索引等。不同的对象产生坏块后的处理方法不尽相同。
三:坏块产生的原因
Oracle调用标准C的系统函数,对数据块进行读写操作,因此,坏块是有可能由以下几种原因产生:
硬件的I/O错误
操作系统的I/O错误或缓冲问题
内存或paging问题
磁盘修复工具
一个数据文件的一部分正在被覆盖
Oracle试图访问一个未被格式化的系统块失败
数据文件部分溢出
Oracle或者操作系统的bug
四:坏块的处理方法
1.先收集相应的关于坏快的信息,从AlertSID.log文件或者从trace文件中查找,找到例如以下的一些信息:
Ora-1578 file# (RFN) block#
Ora-1110 file# (AFN) block#
Ora-600 file# (AFN) block#
其中RFN表示的是relative_fno
AFN表示的是file_id
Select file_name,tablespace_name,file_id “AFN”,relative_fno “RFN”
From dba_data_files;
Select file_name,tablespace_name,file_id, relative_fno“RFN”
From dba_temp_files;
2. 确定存在坏块的对象是什么:
SELECT tablespace_name, segment_type, owner, segment_name, partition_name FROM dba_extents WHERE file_id = and between block_id AND block_id + blocks – 1;
通过上面这个查询语句就可以查出当前存在坏块的对象是什么,是什么类型的对象。需要注意的是如果是temp文件中出现坏块,是没有记录返回的。
3. 根据2中查询出来的对象类型,确定相应的处理方法
出现坏块的常见对象有:
Sys用户下的对象
回滚段
临时段
索引或者分区索引
表
常用的处理方法有:
恢复数据文件
只恢复坏的block(9i以上版本可用)
通过ROWID RANGE SCAN 保存数据
使用DBMS_REPAIR
使用EVENT
4. 具体处理方法的介绍
恢复数据文件方法:
如果数据库是归档方式下,并且有完整的物理备份,就可以使用此方法来恢复。
步骤如下:
1) 先offline受影响的数据文件,执行以下的语句:
ALTER DATABASE DATAFILE 'name_file' OFFLINE;
2) 保留有坏块的数据文件,然后拷贝备份的数据文件。如果恢复的数据文件要求路径不同,执行以下的语句:
ALTER DATABASE RENAME FILE 'old_name' TO 'new_name';
3) 恢复数据文件,执行以下语句:
RECOVER DATAFILE 'name_of_file';
4) Online恢复后的数据文件,执行以下的语句:
ALTER DATABASE DATAFILE 'name_of_file' ONLINE;
只恢复坏的block(9i以上版本可用)
使用这种方法要求数据库版本是9.2.0以上,要求配置了Rman的catalog数据库,数据库为归档方式,并且有完整的物理备份。
步骤如下:
使用RMAN的BLOCKRECOVER命令 :
Rman>run{blockrecover datafile 5 block 11,16;}
也可以强制使用某个SCN号之前的备份,恢复数据块。
Rman>run{blockrecover datafile 5 block 11,16 restore until sequence 8505;}
通过ROWID RANGE SCAN 保存数据
1) 先取得坏块中ROW ID的最小值,执行以下的语句:
SELECT dbms_rowid.rowid_create(1,,,,0) from DUAL;
2)取得坏块中的ROW ID的最大值,执行以下的语句:
SELECT dbms_rowid.rowid_create(1,,,+1,0) from DUAL;
3)建议一个临时表存储那些没有坏块的数据,执行以下的语句:
CREATE TABLE salvage_table AS SELECT * FROM corrupt_tab Where 1=2;
4)保存那些不存在坏块的数据到临时表中,执行以下的语句:
INSERT INTO salvage_table SELECT /*+ ROWID(A) */ * FROM A WHERE rowid = '‘;
5) 根据临时表中的数据重建表,重建表上的索引,限制。
使用10231诊断事件,在做全表扫描的时候跳过坏块
可以在session级别设定:
ALTER SESSION SET EVENTS '10231 TRACE NAME CONTEXT FOREVER, LEVEL 10';
也可以在数据库级别上设定,在初始化参数中加入:event="10231 trace name context forever, level 10" ,然后重启数据库。
然后从存在坏块的表中取出不存在坏块的数据,执行以下的语句:
CREATE TABLE salvage_emp AS SELECT * FROM corrupt_table;
最后rename生成的corrupt_table为原来表的名字,并重建表上的索引和限制。
使用dbms_repair包进行恢复
使用dbms_repair标记有坏块的表,在做全表扫描的时候跳过坏块,执行以下的语句:
Execute DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('','');
然后使用exp工具或者createtable as select的方法取出没有坏块数据,然后重建表,表上的索引和限制。
5. Detecting corrupt blocks in advance
1. To check every table in the database, export the whole database with exp; the export reads every table block and reports corruption. The tool has some blind spots; it will not detect:
corrupt blocks above the HWM
corrupt blocks in indexes
corrupt blocks in the data dictionary
2. To check only the important tables, use ANALYZE TABLE tablename VALIDATE STRUCTURE CASCADE. It checks for corrupt blocks but does not mark them corrupt; the results go to the user trace file under USER_DUMP_DEST.
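A minimal sketch of the ANALYZE check from point 2, run against the TEST table used earlier in these notes (the trans owner is taken from the notes):

```sql
-- Raises ORA-01498 if block corruption is found; block dumps go to the
-- user trace file under user_dump_dest:
ANALYZE TABLE trans.test VALIDATE STRUCTURE CASCADE;
SHOW PARAMETER user_dump_dest
```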
3. Use Oracle's dedicated tool dbv to check for corrupt blocks. The syntax is:
Keyword     Description                    (Default)
----------------------------------------------------
FILE        file to verify                 (none)
START       first block                    (first block of file)
END         last block                     (last block of file)
BLOCKSIZE   logical block size             (2048)
LOGFILE     output log                     (none)
FEEDBACK    display progress               (0)
PARFILE     parameter file                 (none)
USERID      username/password              (none)
SEGMENT_ID  segment ID (tsn.relfile.block) (none)
Example:
dbv file=system01.dbf blocksize=8192
DBVERIFY: Release 9.2.0.5.0 - Production on Sat Nov 27 15:29:13 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = system01.dbf
DBVERIFY - Verification complete
Total Pages Examined         : 32000
Total Pages Processed (Data) : 13261
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 2184
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 1369
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 15186
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Note: dbv requires the FILE argument to point at an ordinary file, so when raw devices are used you must first ln the raw device to a file and run dbv against that link.
Study notes continued, 2008-04-24
While running select count(*) from test ----as user trans, the alert log shows:
Thread 1 advanced to log sequence 184
Current log# 2 seq# 184 mem# 0: /home/oracle/oracle10g/oradata/rac/redo02.log
Thu Apr 24 01:29:30 2008
ARC2: Log corruption near block 71437 change 948099 time ?
Thu Apr 24 01:29:30 2008
Errors in file /home/oracle/oracle10g/admin/rac/bdump/10gdb_arc2_6341.trc:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 71437 change 948099 time 04/23/2008 01:16:17
ORA-00312: online log 3 thread 1: '/home/oracle/oracle10g/oradata/rac/redo03.log'
ARC2: All Archive destinations made inactive due to error 354
Thu Apr 24 01:29:30 2008
ARC2: Closing local archive destination LOG_ARCHIVE_DEST_10: '/home/oracle/newdest/RAC/archivelog/2008_04_24/o1_mf_1_122_4106osbp_.arc' (error 354)
(10gdb)
Committing creation of archivelog '/home/oracle/newdest/RAC/archivelog/2008_04_24/o1_mf_1_122_4106osbp_.arc' (error 354)
Deleted Oracle managed file /home/oracle/newdest/RAC/archivelog/2008_04_24/o1_mf_1_122_4106osbp_.arc
ARC2: Failed to archive thread 1 sequence 122 (354)
ARCH: Archival stopped, error occurred. Will continue retrying
Thu Apr 24 01:29:31 2008
ORACLE Instance 10gdb - Archival Error
Thu Apr 24 01:29:31 2008
ORA-16038: log 3 sequence# 122 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 3 thread 1: '/home/oracle/oracle10g/oradata/rac/redo03.log'
Thu Apr 24 01:29:31 2008
Errors in file /home/oracle/oracle10g/admin/rac/bdump/10gdb_arc2_6341.trc:
ORA-16038: log 3 sequence# 122 cannot be archived
ORA-00354: corrupt redo log block header
ORA-00312: online log 3 thread 1: '/home/oracle/oracle10g/oradata/rac/redo03.log'
select group#,thread#,sequence#,members,archived,status,to_char(first_time,'yyyy-mm-dd hh24:mi:ss') from v$log;---the redo log has corrupt blocks; the database keeps shutting down intermittently
Note:
If the lost logfile is the CURRENT one:
If the database was shut down consistently, start it to mount and simply open resetlogs; do not forget to back up the database afterwards!
If the database was not shut down consistently, you must recover database until cancel; and then open resetlogs;
If the lost redo log is not the current one, start the instance to mount:
then alter database clear logfile '...';
which regenerates it; then open the database, and remember to back it up right away.
startup --because of the corrupt block in redo log 3, which was never archived, the database cannot open and stays mounted
alter database clear unarchived logfile group 3;
alter database open;
conn trans/system;
select count(*) from test;---raises:
ERROR at line 1:
ORA-00600: internal error code, arguments: [6856], [0], [213], [], [], [], [],
[]
Errors in file /home/oracle/oracle10g/admin/rac/bdump/10gdb_smon_6484.trc:
ORA-00600: internal error code, arguments: [6856], [0], [213], [], [], [], [], []
Using oracle tkprof
1. Enabling SQL trace
(1) Instance level
Set the initialization parameter SQL_TRACE=TRUE
(2) Session level
ALTER SESSION SET SQL_TRACE=TRUE
DBMS_SESSION.SET_SQL_TRACE(TRUE)
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID,SERIAL#,SQL_TRACE);
(3) Where the .trc files are written
Check the initialization parameter user_dump_dest (the user trace file directory)
2. Enabling timed statistics (so the elapsed time of each SQL is recorded)
alter system set timed_statistics=true scope=both;
3. Using tkprof
tkprof analyzes a .trc file and produces a report. The analysis can be sorted by various options; a particularly useful one is fchela, which sorts by elapsed time fetching (remember to set timed_statistics=true), so the generated .prf file lists the most time-consuming SQL first. Another useful parameter is sys: setting it to no suppresses SQL executed as the SYS user, which reduces the clutter in the report and makes it easier to read.
Example:
(1) Trace the session with SID,SERIAL# = 12,73
exec dbms_system.SET_SQL_TRACE_IN_SESSION(12,73,true);
(2) Find the number of the generated .trc file
select spid from v$process where addr = (select paddr from v$session where sid=12);
(3) Generate the report with tkprof
c:\>tkprof e:\oracle\admin\hunter\hunter_ora_4188.trc c:\test.prf aggregate=yes sys=no sort=fchela (this puts the most time-consuming SQL at the top of the report)
(4) Open the .prf file
The generated report can be viewed with UltraEdit or any text editor.
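The same workflow on the Linux box used in these notes can be sketched as follows; the udump path and the <spid> placeholder follow the usual <sid>_ora_<spid>.trc naming and are assumptions, not taken from the original:

```sql
-- In SQL*Plus, as a DBA: start and later stop tracing of session 12,73
exec dbms_system.set_sql_trace_in_session(12, 73, true);
-- find the OS process id that names the trace file
select p.spid from v$process p, v$session s
 where p.addr = s.paddr and s.sid = 12;
exec dbms_system.set_sql_trace_in_session(12, 73, false);
-- Then at the shell:
--   tkprof /home/oracle/oracle10g/admin/rac/udump/10gdb_ora_<spid>.trc \
--          /tmp/test.prf aggregate=yes sys=no sort=fchela
```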
select sid,opname,target,round(sofar/totalwork*100,2),start_time,time_remaining,elapsed_seconds
from v$session_longops where opname like 'RMAN%';
delete backup;
delete copy;
crosscheck backup;
crosscheck copy;
The control file parallel write event:
It shows up together with frequent log switches; I saw it while inserting 1 billion rows, alternating with log file switch completion.
The CKPT and ARCH processes can produce this event.
---find the sessions performing controlfile update activity
select /*+ ordered */ a.sid,b.time_waited,round(b.time_waited/b.total_waits,4) average_wait,round((sysdate-a.logon_time)*24) hours_connected
from v$session_event b,v$session a
where a.sid=b.sid and b.event='control file parallel write' order by time_waited;
----check the redo log size; the logs may be too small for the transaction volume. The following shows how often the logs switch:
select thread#,to_char(first_time,'yyyy-mm-dd hh24:mi:ss') creation_time,sequence#,first_change# lowest_scn_in_log,
next_change# highest_scn_in_log,recid controlfile_record_id,stamp controlfile_record_stamp
from v$log_history order by first_time;
----find which user is committing frequently ---note: do not place the redo logs on RAID 5 if LGWR speed matters
select sid,value from v$sesstat where statistic# =(select statistic# from v$statname where name='user commits')
Backing up a large database: 1 TB
Incremental backup every day
Full backup once a week, retained for one month
The data is split into historical and current tablespaces:
Historical tablespaces can be backed up once a month and retained for a quarter (this is an inconsistent backup, also called a hot backup, so the archivelogs must be backed up as well)
Current tablespaces: backed up once a week, retained for one month
Tables with heavy DML: use exp and imp (indexes=no buffers=large feedback=100000); to restore you also need a backup of the datafiles behind the table, because the dmp file alone is not enough
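A sketch of the exp/imp pair described above for the TEST table owned by trans (file paths and the buffer size are illustrative assumptions; note the parameter is buffer=, not buffers=):

```shell
# export the table
exp trans/system tables=TEST file=/home/oracle/test.dmp buffer=10485760 feedback=100000
# import without indexes, then rebuild the indexes separately afterwards
imp trans/system file=/home/oracle/test.dmp ignore=y indexes=n buffer=10485760 feedback=100000
```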
select bytes/1024/1024 from dba_segments where segment_name='TEST';
-----find the sessions doing many hard parses
select * from (
select a.sid,c.username,b.name,a.value,round((sysdate-c.logon_time)*24) hours_connected
from v$sesstat a,v$statname b,v$session c
where c.sid=a.sid and a.statistic#=b.statistic# and a.value>0 and b.name='parse count (hard)'
order by a.value desc
) where rownum < 2;
----studying v$transaction
select xidusn,xidsqn,ubafil
from v$transaction;
select ubablk,ubasqn,ubarec,status,start_time,
start_scnb,start_scnw,start_uext,start_ubafil,start_ubablk,start_ubasqn,
start_ubarec,ses_addr,flag,
used_ublk,used_urec,log_io,phy_io,cr_get,cr_change,
dscn_base,xid,prv_xid,ptx_xid
from v$transaction;
latch:library cache ---wait event
/
db_writer_processes is a static parameter
---RMAN v$ views
select session_key,command_id,to_char(start_time,'yyyymmdd hh24:mi:ss'),to_char(end_time,'yyyymmdd hh24:mi:ss'),
input_bytes,output_bytes,output_device_type,status,input_type as backuptype,
elapsed_seconds,optimized,compression_ratio,input_bytes_per_sec,output_bytes_per_sec,
input_bytes_display,output_bytes_display,time_taken_display
from v$rman_backup_job_details;
select session_key,to_char(start_time,'yyyymmdd hh24:mi:ss') as bt,to_char(end_time,'yyyymmdd hh24:mi:ss') as et,
input_bytes,output_bytes,output_device_type,status,input_type,
elapsed_seconds,compression_ratio
from v$rman_backup_job_details;
------------error output when I ran a level 3 incremental backup directly
RMAN> backup incremental level 3 database;
Starting backup at 25-APR-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 3 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00005 name=/home/oracle/bigts.dbf
input datafile fno=00001 name=/home/oracle/oracle10g/oradata/newdb/system01.dbf
input datafile fno=00003 name=/home/oracle/oracle10g/oradata/newdb/sysaux01.dbf
input datafile fno=00002 name=/home/oracle/oracle10g/oradata/newdb/undotbs01.dbf
input datafile fno=00004 name=/home/oracle/oracle10g/oradata/newdb/users01.dbf
channel ORA_DISK_1: starting piece 1 at 25-APR-08
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 04/25/2008 13:40:04
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 52428800 bytes disk space from 2147483648 limit
continuing other job steps, job failed will not be re-run
channel ORA_DISK_1: starting incremental level 3 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current control file in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 25-APR-08
channel ORA_DISK_1: finished piece 1 at 25-APR-08
piece handle=/home/oracle/rmanbackup/NEWDB/backupset/2008_04_25/o1_mf_ncsn3_TAG20080425T133838_4145vppo_.bkp tag=TAG20080425T133838 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 04/25/2008 13:40:04
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 52428800 bytes disk space from 2147483648 limit
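This ORA-19809/ORA-19804 pair is the same flash recovery area exhaustion handled at the top of these notes; a sketch of the two usual fixes (the 4G value is an arbitrary example, not from the original):

```sql
-- either grow the recovery area...
ALTER SYSTEM SET db_recovery_file_dest_size = 4G SCOPE=BOTH;
-- ...or reclaim space by expiring what the controlfile still accounts for (RMAN):
--   RMAN> crosscheck backup;
--   RMAN> delete noprompt expired backup;
--   RMAN> delete noprompt obsolete;
-- verify:
SELECT name, space_limit/1024/1024 mb_limit, space_used/1024/1024 mb_used
FROM   v$recovery_file_dest;
```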
From the ITPUB blog: http://blog.itpub.net/9240380/viewspace-350553/ (credit the source when republishing)