LogMiner redo log


This note provides simple steps for mining the redo logs, for example to troubleshoot excessive redo generation.
 

SOLUTION

1. Enable supplemental logging for the database.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
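
As a quick sanity check, you can confirm that minimal supplemental logging is enabled; the SUPPLEMENTAL_LOG_DATA_MIN column is part of the standard V$DATABASE view:

```sql
-- Should return YES once minimal supplemental logging has been added
SELECT supplemental_log_data_min FROM v$database;
```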

 
2. As SYSDBA, install the LogMiner package (if not already installed) from the following path:

SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql

NOTE:
You can simply check whether LogMiner is already available using

SQL> desc dbms_logmnr

 
3. Create a list of logs by specifying the NEW option when executing the DBMS_LOGMNR.ADD_LOGFILE procedure.

NOTE:
You can query the available archived redo log files from v$archived_log
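
For instance, a minimal query to list candidate archived logs (using standard V$ARCHIVED_LOG columns) might look like:

```sql
-- List archived redo logs with their sequence numbers and times
SELECT name, thread#, sequence#, first_time
FROM   v$archived_log
ORDER  BY first_time;
```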

 
For example, enter the following:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '+DATA1/CDB/ARCHIVELOG/2024_04_13/thread_1_seq_242.566.1166174789', -
     OPTIONS => DBMS_LOGMNR.NEW);

 
4. If desired, add more logs by specifying the ADDFILE option.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '+FRA/v1120/archivelog/2012_11_09/thread_1_seq_564.261.798899763', -
     OPTIONS => DBMS_LOGMNR.ADDFILE);


5. Start LogMiner and specify the dictionary to use.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

NOTE:
When using DICT_FROM_ONLINE_CATALOG, the database must be open, and redo can only be mined for the latest versions of the tables.

For more information, please refer to
  Oracle® Database Utilities
  11g Release 2 (11.2)
  Chapter 19 Using LogMiner to Analyze Redo Log Files
  http://docs.oracle.com/cd/E11882_01/server.112/e22490/logminer.htm#SUTIL019

 
6. Query the V$LOGMNR_CONTENTS view.

SQL> SELECT username AS USR, 
     (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, 
     operation, 
     SQL_REDO, 
     SQL_UNDO 
     FROM V$LOGMNR_CONTENTS 
     WHERE username IN ('<Username>');

NOTE:
For other possible columns to query, please issue:

SQL> desc v$logmnr_contents


7. End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

This article provides guidelines DBAs can use to determine which OPERATION codes are generating lots of redo information.

This article is intended for DBAs. The article assumes the reader is familiar with LogMiner and has basic skills in mining redo logs.

SOLUTION

--- How to determine the cause of lots of redo generation using LogMiner ---

Using OPERATION Codes to Understand Redo Information

Multiple operation codes can generate redo information. Using the following guidelines, you can identify the operation codes that are causing the high redo generation and then take appropriate action to reduce it.

NOTE:
Redo records are not all equally sized. So remember that just because certain statements show up a lot in the LogMiner output, this does not guarantee that you have found the area of functionality generating the excessive redo.


What are these OPERATION codes?

  • INSERT / UPDATE / DELETE -- DML operations. Note that DML performed on SYS objects is also considered an internal operation.
  • COMMIT -- This is also an "internal" operation; you will see the line "commit;" in the SQL_REDO column.
  • START -- This is also an "internal" operation; you will see the line "set transaction read write;" in SQL_REDO.
  • INTERNAL -- Dictionary updates.
  • SELECT_FOR_UPDATE -- This is also an internal operation; Oracle generates redo information for SELECT statements that have a FOR UPDATE clause.

In general, INTERNAL operations are not relevant, so to query the relevant data, use SEG_OWNER = '<schema>' in the WHERE clause.

Examples:

How to extract relevant information from the view v$logmnr_contents?


1. This SQL lists operations performed by user SCOTT

SQL> select distinct operation,username,seg_owner from v$logmnr_contents where seg_owner='SCOTT';

OPERATION                  USERNAME                  SEG_OWNER
-------------------------- ------------------------- ---------------------
DDL                        SCOTT                     SCOTT
DELETE                     SCOTT                     SCOTT
INSERT                     SCOTT                     SCOTT
UPDATE                     SCOTT                     SCOTT



2. This SQL lists the undo and redo associated with operations that user SCOTT performed

SQL> select seg_owner,operation,sql_redo,sql_undo from v$logmnr_contents where SEG_owner='SCOTT';

SCOTT      DDL
create table LM1 (c1 number, c2 varchar2(10));

SCOTT      INSERT
insert into "SCOTT"."LM1"("C1","C2") values ('101','AAAA');
delete from "SCOTT"."LM1" where "C1" = '101' and "C2" = 'AAAA'
and ROWID = 'AAAHfBAABAAAMUqAAA';

SCOTT      UPDATE
update "SCOTT"."LM1" set "C2" = 'YYY'
where "C2" = 'EEE' and ROWID = 'AAAHfBAABAAAMUqAAE';
update "SCOTT"."LM1" set "C2" = 'EEE' where "C2" = 'YYY'
and ROWID = 'AAAHfBAABAAAMUqAAE';



3. This SQL lists undo and redo generated for UPDATE statements issued by user SCOTT

SQL> select username, seg_owner,operation,sql_redo,sql_undo from v$logmnr_contents where operation ='UPDATE' and USERNAME='SCOTT';

USERNAME   SEG_OWNER  OPERATION  SQL_REDO                                 SQL_UNDO
---------- ---------- ---------- ---------------------------------------- -------------
SCOTT      SYS        UPDATE     update "SYS"."OBJ$" set "OBJ#" = '1'.... update ....
SCOTT      SYS        UPDATE     update "SYS"."TSQ$" set "GRANTO.....     update .......
SCOTT      SYS        UPDATE     update "SYS"."SEG$" set "TYPE#" = '5'..  update......

As the above result shows, user SCOTT's statements also updated SYS objects, so if you query on USERNAME alone, you may get misleading results. It is therefore better to query v$logmnr_contents on SEG_OWNER.


4. Identifying Operation Counts

Run the following query to get the row count per OPERATION code from v$logmnr_contents, in order to understand which OPERATION codes have generated the most redo records.

SQL> select operation,count(*) from v$logmnr_contents group by operation;

OPERATION              COUNT(*)
-------------------- ----------
COMMIT                    22236
DDL                           2
DELETE                        1
INSERT                       11
INTERNAL                     11
SELECT_FOR_UPDATE         32487
START                     22236
UPDATE                      480

8 rows selected.



5. Identifying User Counts

Run the following query to check user activity and operation counts:

SQL> select seg_owner,operation,count(*) from v$logmnr_contents group by seg_owner,operation;

SEG_OWNER            OPERATION          COUNT(*)
-------------------- ---------------- ----------
SCOTT                COMMIT                22236
SCOTT                DDL                       2
SCOTT                DELETE                    1
...
BILLY                COMMIT                12899
BILLY                DDL                       5
BILLY                DELETE                    2
...

NOTE: 
Be aware of the following known issue:

If you are not using "select for update" statements often in your application and yet find a high operation count for the SELECT_FOR_UPDATE operation code, you might be hitting a known issue.

To confirm this, check whether SQL_REDO shows SELECT and UPDATE statements on AQ$_QUEUE_TABLE_AFFINITIES and AQ$_QUEUE_TABLES.

If you see these selects and updates, then check the value of the init.ora parameter AQ_TM_PROCESSES. The default value is AQ_TM_PROCESSES = 0, meaning that the queue monitor is not created.

If you are not using Advanced Queuing, then set AQ_TM_PROCESSES back to zero to avoid lots of redo generation on objects AQ$_QUEUE_TABLE_AFFINITIES and AQ$_QUEUE_TABLES.
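
As a sketch, the parameter can be changed dynamically with ALTER SYSTEM (verify the appropriate scope for your environment before applying):

```sql
-- Disable the queue monitor processes if Advanced Queuing is not in use
ALTER SYSTEM SET aq_tm_processes = 0 SCOPE=BOTH;
```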

Even though FORCE_LOGGING is set to "NO" on the database, archive logs are still being generated when running a Data Pump import job with "transform=disable_archive_logging:y".

Why?
 

SOLUTION

This transform parameter is intended to reduce redo generation, not to disable archive logging entirely.

The operations stored in the redo logs are internal operations done by SYS. The tables involved are internal tables such as "SYS"."ACCESS$", "SYS"."DEPENDENCY$", "SYS"."PROCEDURE$", "SYS"."SETTINGS$", "SYS"."SOURCE$", "SYS"."SEG$", etc.

The internal operations must be archived in order to be recovered if the import is failing. This is considered an expected behavior.

Below is a scenario showing this behavior:

1. Enable minimum supplemental logging:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

2. Install LogMiner (if not installed):
SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql

Note:  You can simply check whether LogMiner is already available using
SQL> desc dbms_logmnr;

3. Run an import to reproduce the issue:
> impdp <LOGIN>/<PASSWORD> directory=<DATA_PUMP_DIR> dumpfile=<DUMP_NAME>.dmp logfile=<LOG_NAME>.log schemas=<SCHEMA_NAME> transform=disable_archive_logging:y

4. Identify the archived log created during the import execution.

SQL> select name from v$archived_log;
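
If many logs are listed, you can narrow the result to the import window; the one-hour window below is only illustrative, so adjust it to your actual import times:

```sql
-- Archived logs generated in roughly the last hour
SELECT name
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1/24;
```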

5. Create the list of logs by specifying the NEW and ADDFILE options when executing the DBMS_LOGMNR.ADD_LOGFILE procedure.

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '<PATH>/<ARCHIVED_LOG_FILE1>',OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '<PATH>/<ARCHIVED_LOG_FILE2>', OPTIONS => DBMS_LOGMNR.ADDFILE);

6. Start LogMiner:

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

7. Query V$LOGMNR_CONTENTS.

8. You can also use the following query:

SQL> select
   count(*) total,
   sum (case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end) SYS_count,
   sum (case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end)/count(*)*100 SYS_percent
from v$logmnr_contents where rowid is not null;

 

This shows the total number of DML statements, the number of DMLs that affect SYS-owned tables ending in '$' (which should be data dictionary tables), and the percentage of DMLs against data dictionary tables relative to the total number of DMLs. The higher the percentage of operations by SYS against data dictionary tables, the more this note applies. Here is an example of the query output:

 

SQL> select
   count(*) total,
   sum (case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end) SYS_count,
   sum (case when seg_owner = 'SYS' and seg_name like '%$' then 1 else 0 end)/count(*)*100 SYS_percent
from v$logmnr_contents where rowid is not null;

     TOTAL  SYS_COUNT SYS_PERCENT
---------- ---------- -----------
    201741       9577  4.74717583


.......
.......
 

Note: logging of SYS operations is only one possible reason for excess redo generation even if transform=disable_archive_logging:Y is set and the fix for Bug 25143018 is installed. There are other possible reasons for this behavior.
