The experiment below uses two SQL*Plus sessions: session 1 runs the queries (with autotrace statistics enabled), while session 2 issues DELETEs that are left uncommitted.
Session 1:

SQL> SELECT /*+ rowid(a) */ * FROM ctais2.test a
  2  WHERE ROWID = dbms_rowid.rowid_create(1, 6388, 6, 13, 2);

        ID NAME
---------- ------------------------------
         2 aa

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        455  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
Session 1:

SQL> select rowid from test where id=10;

ROWID
------------------
AAABj0AAGAAAAANAAK
Session 2:

SQL> delete from test where id=1;

1 row deleted.
Session 1:

SQL> set autotrace on stat
SQL> select * from test where rowid='AAABj0AAGAAAAANAAK';

        ID NAME
---------- ------------------------------
        10 aa

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
         52  redo size
        455  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
Session 2:

SQL> delete from test where id=2;

1 row deleted.
Session 1:

SQL> /

        ID NAME
---------- ------------------------------
        10 aa

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
         52  redo size
        455  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
Session 2:

SQL> delete test;

998 rows deleted.
Session 1:

SQL> /

        ID NAME
---------- ------------------------------
        10 aa

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        616  consistent gets
          0  physical reads
         52  redo size
        455  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
Session 1:

SQL> SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) FILE#,
  2         DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) BLOCK#,
  3         COUNT(*)
  4    FROM TEST
  5   GROUP BY DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID),
  6            DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID);

     FILE#     BLOCK#   COUNT(*)
---------- ---------- ----------
         6         13        614
         6         14        386
The experiment above shows that the statement's consistent gets generally equal the number of undo records applied plus 2.
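As a quick check, that rule can be tested against the three autotrace runs above. The figures come straight from the transcript; the "+2" offset is the author's empirical observation, not documented Oracle behaviour:

```python
# (uncommitted undo records for the visited block, consistent gets observed)
observations = [
    (1, 3),      # after the first uncommitted DELETE
    (2, 4),      # after the second uncommitted DELETE
    (614, 616),  # after DELETE of all rows: block 13 held 614 rows
]

for undo_records, consistent_gets in observations:
    # the article's rule: consistent gets = undo records applied + 2
    assert consistent_gets == undo_records + 2

print("rule holds for all three runs")
```

Note that the very first query (before any delete) took only 1 consistent get, which is why the rule is stated as "generally": with no undo to apply, no consistent-read reconstruction happens at all.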
I tried dumping the undo and saw that one undo block holds roughly 30 undo records for this data block.
Increasing arraysize leaves the consistent gets unchanged.
With event 10201 (Dump Consistent Read Undo Application) set, the trace shows Oracle applying the undo records one at a time.
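That one-record-at-a-time behaviour suggests a simple model of consistent-read block construction. The sketch below is only an illustration of the bookkeeping, not Oracle's actual implementation; in particular, counting the "+2" as the current-block read plus the CR clone is an assumption:

```python
def consistent_read(current_block, undo_chain):
    """Toy model: clone the current block, then roll it back one undo
    record at a time, counting each step as one logical read."""
    reads = 1                      # get the current version of the block
    cr_copy = dict(current_block)  # clone it for the consistent read
    reads += 1                     # assumption: the clone is the second get
    for row, old_value in undo_chain:
        if old_value is None:
            cr_copy.pop(row, None)    # undoing an INSERT removes the row
        else:
            cr_copy[row] = old_value  # undoing a DELETE restores the row
        reads += 1                    # one logical read per undo record
    return cr_copy, reads

# Re-create the last run: block 13 held 614 rows, all deleted but not
# committed, so a reader must apply 614 undo records to rebuild the block.
undo_chain = [(row, "aa") for row in range(1, 615)]
cr_block, gets = consistent_read({}, undo_chain)
print(gets)  # 616, matching the autotrace figure
```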
With event 10200 (Dump Consistent Reads) set, or with a buffer trace, I could not work out which block Oracle was actually reading so many times.
I was quite puzzled about which block was being read that often. A friend (熊哥) looked at it with OraTracer and found the session was mostly reading undo blocks. I run Windows 7 x64, on which OraTracer keeps dying, which is frustrating.
It reminded me of a passage from TOP (Troubleshooting Oracle Performance):
For every SQL statement, the database engine has to guarantee the consistency of the processed
data. For that purpose, based on current blocks and undo information, consistent copies of
blocks might be created at runtime. To execute such an operation, several logical reads are
performed. Therefore, the number of logical reads performed by a SQL statement is strongly
dependent on the number of blocks that have to be reconstructed.
I had always assumed that consistent reads depend mainly on the number of blocks that have to be reconstructed. While studying undo today, I unexpectedly found that they in fact depend largely on the number of undo records that have to be applied.
Some time ago I captured a batch of abnormal SQL statements on Oracle ERP. The tables they access are bulk-DELETEd and then re-INSERTed at regular intervals. The Statspack report showed an average of over 2,000 logical reads per execution, yet when I tested the same statements I saw only around 30. The execution plans in v$sql_plan showed no differences, and the column distributions were not skewed. Since these were not yet top SQL at the time, I did not investigate further; in hindsight, it was probably this consistent-read effect.
How can the problem be mitigated? First, avoid large transactions and commit in stages. In a highly concurrent system, a smaller block size can also help: fewer rows per block means that index-driven statements touching only a few data blocks do not end up with very high logical reads.
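The "commit in stages" idea can be sketched generically. The snippet below uses SQLite purely to illustrate the batching pattern (SQLite has no Oracle-style undo, so only the short-transaction structure carries over, and the batch size of 200 is an arbitrary choice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test(id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO test VALUES(?, ?)",
                 [(i, "aa") for i in range(1, 1001)])
conn.commit()

BATCH = 200  # arbitrary batch size; tune so each transaction stays short
deleted = 0
while True:
    # delete one batch of rows, then commit, so concurrent readers never
    # face more than one batch's worth of uncommitted changes at a time
    cur = conn.execute(
        "DELETE FROM test WHERE id IN (SELECT id FROM test LIMIT ?)",
        (BATCH,))
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount
    conn.commit()

print(deleted)  # 1000
```

In Oracle terms, each intermediate commit caps the length of the undo chain a concurrent consistent read has to apply, which is exactly what drove the 616-get figure in the experiment.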
From the ITPUB blog: http://blog.itpub.net/8242091/viewspace-671343/. When reposting, please credit the source.