Oracle Supplemental Logging

Oracle Server – Enterprise Edition – Version: 10.2.0.1 to 10.2.0.4
Information in this document applies to any platform.
Goal

Supplemental Logging:

Redo log files are generally used for instance recovery and media recovery. The data needed for such operations is automatically recorded in the redo log files. However, a redo-based application may require that additional columns be logged in the redo log files. The process of logging these additional columns is called supplemental logging.

The following Oracle Utilities documentation makes two important points about supplemental logging.

Oracle® Database
Utilities
10g Release 2 (10.2)
B14215-01

1) You must enable supplemental logging prior to generating log files that will be analyzed by LogMiner.

When you enable supplemental logging, additional information is recorded in the redo stream that is needed to make the information in the redo log files useful to you. Therefore, at the very least, you must enable minimal supplemental logging, as the following SQL statement shows:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

To determine whether supplemental logging is enabled, query the V$DATABASE view, as the following SQL statement shows:

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If the query returns a value of YES or IMPLICIT, minimal supplemental logging is enabled.

You can turn off supplemental logging with the following command:

SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

2) By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable.

The following example illustrates this, using the Online Catalog option for the LogMiner dictionary.

Part A : Shows the result from V$LOGMNR_CONTENTS when supplemental logging is turned OFF (the default).

Part B : Shows the result from V$LOGMNR_CONTENTS when supplemental logging is turned ON.
Solution

Part A :

Shows the result from V$LOGMNR_CONTENTS when supplemental logging is turned OFF (the default).

-----------------------------------------------------------------------------------------------------------

This example assumes the database is in ARCHIVELOG mode.

SQL> connect / as sysdba
Connected.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 8
Next log sequence to archive 10
Current log sequence 10
SQL> alter system switch logfile;

System altered.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
2 WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
--------------------------------------------------------------------------------

C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2008_11_17\O1_MF_1_10_4L2PCZNJ_.ARC

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
NO

SQL> connect scott/tiger
Connected.

SQL> create table test ( n number );

Table created.

SQL> insert into test values ( 1);

1 row created.

SQL> insert into test values (2);

1 row created.

SQL> commit;

Commit complete.

SQL> connect / as sysdba
Connected.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
2 WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
--------------------------------------------------------------------------------

C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2008_11_17\O1_MF_1_12_4L2PPL4D_.ARC

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
> LOGFILENAME => 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2008_11_17\O1_MF_1_12_4L2PPL4D_.ARC' , -
> OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
> OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

SQL> SELECT SQL_REDO , SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username='SCOTT';

no rows selected

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

PL/SQL procedure successfully completed.

-----------------------------------------------------------------------------------------------------------

Part B :

Shows result from V$LOGMNR_CONTENTS when supplemental logging is turned ON.

The same example as above but with Supplemental Logging turned ON.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

SUPPLEME
--------
YES

SQL> alter system switch logfile;

System altered.

SQL> connect scott/tiger
Connected.

SQL> create table test2 ( n2 number );

Table created.

SQL> insert into test2 values ( 2211);

1 row created.

SQL> insert into test2 values ( 2222);

1 row created.

SQL> commit;

Commit complete.

SQL> connect / as sysdba
Connected.
SQL> alter system switch logfile;

System altered.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
2 WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
--------------------------------------------------------------------------------
C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2008_11_17\O1_MF_1_14_4L2QFMRF_.ARC

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
> LOGFILENAME => 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2008_11_17\O1_MF_1_14_4L2QFMRF_.ARC' , -
> OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
> OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

SQL> SELECT SQL_REDO , SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username='SCOTT';

SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------

commit;

set transaction read write;

insert into "SCOTT"."TEST2"("N2") values ('2211');
delete from "SCOTT"."TEST2" where "N2" = '2211' and ROWID = 'AAAM7oAAEAAAAGtAAA';

insert into "SCOTT"."TEST2"("N2") values ('2222');
delete from "SCOTT"."TEST2" where "N2" = '2222' and ROWID = 'AAAM7oAAEAAAAGtAAB';

commit;

23 rows selected.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

PL/SQL procedure successfully completed.

Note that V$LOGMNR_CONTENTS also returns data dictionary activity (such as the recursive inserts generated when the table was created). Those rows have been removed from this example, and only the relevant rows are shown.

 

Solution
There are two types of supplemental logging: database supplemental logging and table supplemental logging.

Database supplemental logging specifies supplemental logging for an entire database, while table supplemental logging enables you to specify log groups for supplemental logging for a particular table. If you use table supplemental logging, then you can choose between unconditional and conditional log groups.

Unconditional log groups log the before images of specified columns any time the table is updated, regardless of whether the update affected any of the specified columns. This is sometimes referred to as an ALWAYS log group. Conditional log groups log the before images of all specified columns only if at least one of the columns in the log group is updated.
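For illustration, the two kinds of table-level log group could be created as follows (the table and group names here are hypothetical, used only for the example):

-- Unconditional (ALWAYS): before images of EMPLOYEE_ID are logged
-- on every update to the table, whether or not EMPLOYEE_ID changes
ALTER TABLE HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_always_grp (EMPLOYEE_ID) ALWAYS;

-- Conditional (no ALWAYS clause): before images are logged only when
-- a column in the group is itself updated
ALTER TABLE HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_cond_grp (EMPLOYEE_ID, LAST_NAME);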

Supplemental logging at the database level, unconditional log groups at the table level, and conditional log groups at the table level together determine which old values are logged in an update statement or piecewise LOB update.

To check whether one or more log groups are specified for the table at the source database, run the following query:

COLUMN LOG_GROUP_NAME HEADING 'Log Group' FORMAT A20
COLUMN TABLE_NAME HEADING 'Table' FORMAT A20
COLUMN ALWAYS HEADING 'Type of Log Group' FORMAT A30

SELECT LOG_GROUP_NAME,
       TABLE_NAME,
       DECODE(ALWAYS,
              'ALWAYS', 'Unconditional',
              NULL, 'Conditional') ALWAYS
FROM DBA_LOG_GROUPS;

Your output looks similar to the following:

Log Group            Table                Type of Log Group
-------------------- -------------------- ------------------------------
LOG_GROUP_DEP_PK     DEPARTMENTS          Unconditional
LOG_GROUP_JOBS_CR    JOBS                 Conditional

To list the columns in a particular log group, query the DBA_LOG_GROUP_COLUMNS data dictionary view. You can also query the V$DATABASE dynamic performance view to display supplemental logging specified at the database level.

The GoldenGate ADD TRANDATA command is equivalent to the following SQL statement:

alter table [TABLE_NAME] add supplemental log group [GROUP_NAME] ([COLUMN_LIST]) always;

GROUP_NAME is usually GGS_TABLENAME; COLUMN_LIST is the primary key columns or the unique index columns, or, if the table has neither a primary key nor a unique index, all columns.
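As a concrete sketch (the table and column names are hypothetical), for a table SCOTT.EMP whose primary key is EMPNO, ADD TRANDATA would issue roughly:

ALTER TABLE SCOTT.EMP
ADD SUPPLEMENTAL LOG GROUP GGS_EMP (EMPNO) ALWAYS;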

The GoldenGate DELETE TRANDATA command is equivalent to the following SQL statement:

alter table [TABLE_NAME] drop supplemental log group [GROUP_NAME];

To check database-level supplemental logging:

select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, force_logging from v$database;

GoldenGate requires at least SUPPLEMENTAL_LOG_DATA_MIN to be YES so that Extract can read chained rows correctly from the Oracle redo log. See the attached document for a detailed explanation of the other options.

 

Purpose
Oracle Streams enables the sharing of data and events in a data stream, either within a database or from one database to another. This article is intended to assist replication DBAs in understanding the supplemental logging requirements of Streams replication.
Scope and Application
To be used by Oracle support analysts and replication DBAs to set up replication using Streams in Oracle 10.1 or higher. Most of the information in this note is taken from the Oracle documentation; links to the documentation can be found in the REFERENCES section.
Streams Replication Supplemental Logging Requirements

INTRODUCTION

Oracle redo log files contain the redo information needed for instance and media recovery. However, some redo-based applications, such as Streams, logical standby, and ad hoc LogMiner, need additional information logged in the redo log files. The process of logging this additional information into the redo files is called supplemental logging.

SUPPLEMENTAL LOGGING LEVELS

There are two levels of Supplemental Logging.

1. Database Level Supplemental Logging – There are two types of Database level logging.

Minimal supplemental logging – This places the information needed to identify rows into the redo logs. It can be enabled with the following command:
SQL> Alter database add supplemental log data;

Identification key logging – This places the before and after images of the specified types of columns into the redo log files. This type of logging can be specified for ALL, PRIMARY KEY, UNIQUE, and FOREIGN KEY columns, and can be enabled with the following command:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL, PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;

You can check database-level supplemental logging with the following query:

select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI,
       SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL
from v$database;

2. Table-level Supplemental Logging – Creates individual log groups for each table. Logging can be unconditional or conditional.

Unconditional logging means the before images of the columns are logged regardless of whether the columns are updated. Conditional logging means the before images are logged only when the corresponding columns are updated. After images are always captured for the columns specified in the log group. The following queries list the table-level log groups defined in the database:
select * from DBA_LOG_GROUPS;
select owner, log_group_name, table_name, column_name, logging_property from DBA_LOG_GROUP_COLUMNS;

SUPPLEMENTAL LOGGING REQUIREMENTS FOR STREAMS REPLICATION

Supplemental logging should always be enabled on the source database. The following types of columns used at the APPLY site are candidates for supplemental logging on the source:

1. All columns used in primary keys at the source site, for tables whose changes are applied on the target, must be unconditionally logged at the table level or at the database level.

2. All columns used as substitute key columns at the APPLY site must be unconditionally logged.

3. All columns used in DML handlers, error handlers, rules, rule-based transformations, virtual dependency definitions, and subset rules must be unconditionally logged.

4. All columns used in the column list of a conflict resolution method must be conditionally logged if more than one column from the source is part of the column list.

5. If APPLY parallelism is greater than 1, then all columns at the source that are part of foreign key or unique key constraints defined on more than one column, or of bitmap indexes defined on more than one column, must be conditionally logged.

You can enable table-level supplemental logging by creating a user-defined log group with the following command; omitting the ALWAYS clause creates a conditional log group:
alter table HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID);

Alternatively, you can enable table-level identification key logging with the following command:
Alter table HR.EMPLOYEES
add SUPPLEMENTAL LOG data (PRIMARY KEY, UNIQUE, FOREIGN KEY, ALL) columns;

Note: LOB, LONG, and ADT columns cannot be supplementally logged; they are ignored when the ALL clause is specified. You can exclude a column from a log group with the NO LOG option:
Alter table HR.EMPLOYEES
ADD SUPPLEMENTAL LOG GROUP emp_parttime(
DEPARTMENT_ID NO LOG, EMPLOYEE_ID);

In versions 10g and above, the prepare_xxx_instantiation procedures implicitly create supplemental log groups. The type of supplemental logging enabled implicitly by these procedures can be checked using the SQL shown in the documentation. However, additional supplemental logging might need to be enabled, depending on the requirements listed above.
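As a sketch, reusing the HR.EMPLOYEES table from the examples above, a table can be prepared for capture, implicitly enabling key-column supplemental logging (the supplemental_logging parameter also accepts 'none' and 'all'):

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.employees',
    supplemental_logging => 'keys');
END;
/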

 

Symptoms

As part of the Streams setup you can specify supplemental logging at the database level.

It can be done manually:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

or implicitly by running procedures such as DBMS_STREAMS_ADM.ADD_SCHEMA_RULES,

or, if you set up Streams using Grid Control, it is executed automatically by SrcSchemaRule1 Stage Step 12.

In some cases, this step hangs and the statement ALTER DATABASE ADD SUPPLEMENTAL LOG DATA remains waiting for a TX lock in shared mode.
Cause

The statement ALTER DATABASE ADD SUPPLEMENTAL LOG DATA waits for a TX lock in shared mode whenever there is an uncommitted transaction. This is expected behavior.

You can issue ALTER DATABASE ADD SUPPLEMENTAL LOG DATA while the database is open. However, Oracle Database will invalidate all DML cursors in the cursor cache, which affects performance until the cache is repopulated. In addition, the statement must wait for all in-flight transactions to complete so that any redo generated after the DDL carries the correct supplemental logging attributes.
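To see which in-flight transactions the DDL is waiting on, one approach (a sketch; adjust the column list as needed) is to list the open transactions together with their sessions:

SELECT s.sid, s.serial#, s.username, t.start_time
FROM v$transaction t, v$session s
WHERE s.taddr = t.addr
ORDER BY t.start_time;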

Solution

You need to wait for all in-flight transactions to complete.

In databases with high activity, where there are always active transactions, supplemental logging can be enabled by bouncing the database and running the statement manually:

STARTUP MOUNT
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE OPEN;
References
ORA-32593: database supplemental logging attributes in flux

 

Question
Why does GoldenGate for Oracle require Supplemental Logging at the Database Level for Capture?

Answer
GoldenGate version 7.3.1 for Oracle 9i and above enhanced log-based capture, and in the process it became necessary to enable supplemental logging at the database level. With this feature enabled, Oracle adds a few extra bytes of information to the redo logs for some DML operations.

 

 

 - THE END -

Source: ITPUB blog, link: http://blog.itpub.net/11417069/viewspace-683848/. Please credit the source when reposting.
