On the source database side of a Streams replication environment, some care is needed when purging archived redo logs: not every archived log can simply be deleted. If an archived log that the capture process still needs to read is removed by mistake, the capture process will still start and show an ENABLED status, but data will no longer be replicated.
Two values matter here. The first is REQUIRED_CHECKPOINT_SCN (exposed in DBA_CAPTURE), the lowest SCN from which the capture process must scan the redo when it restarts; it tells you which archived logs have to be kept for the capture process. The second is the capture process parameter _CHECKPOINT_FREQUENCY, which specifies, in megabytes of mined redo, how often LogMiner takes a checkpoint. Oracle recommends setting it to 500, meaning that after LogMiner has processed about 500 MB of redo it takes a LogMiner checkpoint; REQUIRED_CHECKPOINT_SCN is advanced after each checkpoint. By tuning _CHECKPOINT_FREQUENCY you can therefore control how many archived logs must be retained. The parameter can be changed with the DBMS_CAPTURE_ADM.SET_PARAMETER procedure.
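As a minimal sketch (assuming the capture process is named capture_a, as in the test below), the parameter could be set like this; note that whether an underscore parameter then shows up in DBA_CAPTURE_PARAMETERS varies by version, so the second query is only a plausibility check:
BEGIN
  -- take a LogMiner checkpoint roughly every 500 MB of mined redo
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture_a',
    parameter    => '_CHECKPOINT_FREQUENCY',
    value        => '500');
END;
/
-- list the parameters currently set for the capture process
SELECT CAPTURE_NAME, PARAMETER, VALUE, SET_BY_USER
  FROM DBA_CAPTURE_PARAMETERS
 WHERE CAPTURE_NAME = 'CAPTURE_A';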
Below is a test example.
Use REQUIRED_CHECKPOINT_SCN to determine which archived logs must be retained:
SQL> conn strmadmin/strmadmin
Connected.
SQL> COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
SQL> COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
SQL> COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
SQL> COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40
SQL> SELECT r.CONSUMER_NAME,
2 r.SOURCE_DATABASE,
3 r.SEQUENCE#,
4 r.NAME
5 FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
6 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME AND
7 r.NEXT_SCN >= c.REQUIRED_CHECKPOINT_SCN;
Capture Required
Process Source Sequence Archived Redo Log
Name Database Number File Name
--------------- ---------- -------- ----------------------------------------
CAPTURE_A SOURCE 100 /arch/1_100_714317792.dbf
CAPTURE_A SOURCE 101 /arch/1_101_714317792.dbf
CAPTURE_A SOURCE 102 /arch/1_102_714317792.dbf
CAPTURE_A SOURCE 103 /arch/1_103_714317792.dbf
CAPTURE_A SOURCE 104 /arch/1_104_714317792.dbf
The output shows that the capture process will need the logs from sequence 100 onward when it restarts. Now we first stop the capture process, rename /arch/1_100_714317792.dbf,
and then restart the capture process to observe what happens.
SQL> exec dbms_capture_adm.stop_capture('capture_a');
PL/SQL procedure successfully completed.
SQL> host
[oracle@primary arch]$ cd /arch
[oracle@primary arch]$ ls
1_100_714317792.dbf 1_51_714317792.dbf 1_60_714317792.dbf 1_69_714317792.dbf 1_78_714317792.dbf 1_87_714317792.dbf 1_96_714317792.dbf
1_101_714317792.dbf 1_52_714317792.dbf 1_61_714317792.dbf 1_70_714317792.dbf 1_79_714317792.dbf 1_88_714317792.dbf 1_97_714317792.dbf
1_102_714317792.dbf 1_53_714317792.dbf 1_62_714317792.dbf 1_71_714317792.dbf 1_80_714317792.dbf 1_89_714317792.dbf 1_98_714317792.dbf
1_103_714317792.dbf 1_54_714317792.dbf 1_63_714317792.dbf 1_72_714317792.dbf 1_81_714317792.dbf 1_90_714317792.dbf 1_99_714317792.dbf
1_104_714317792.dbf 1_55_714317792.dbf 1_64_714317792.dbf 1_73_714317792.dbf 1_82_714317792.dbf 1_91_714317792.dbf
1_47_714317792.dbf 1_56_714317792.dbf 1_65_714317792.dbf 1_74_714317792.dbf 1_83_714317792.dbf 1_92_714317792.dbf
1_48_714317792.dbf 1_57_714317792.dbf 1_66_714317792.dbf 1_75_714317792.dbf 1_84_714317792.dbf 1_93_714317792.dbf
1_49_714317792.dbf 1_58_714317792.dbf 1_67_714317792.dbf 1_76_714317792.dbf 1_85_714317792.dbf 1_94_714317792.dbf
1_50_714317792.dbf 1_59_714317792.dbf 1_68_714317792.dbf 1_77_714317792.dbf 1_86_714317792.dbf 1_95_714317792.dbf
[oracle@primary arch]$ mv 1_100_714317792.dbf 1_100_714317792.dbfdfdfdfd
[oracle@primary arch]$ ls
1_100_714317792.dbfdfdfdfd 1_52_714317792.dbf 1_62_714317792.dbf 1_72_714317792.dbf 1_82_714317792.dbf 1_92_714317792.dbf
1_101_714317792.dbf 1_53_714317792.dbf 1_63_714317792.dbf 1_73_714317792.dbf 1_83_714317792.dbf 1_93_714317792.dbf
1_102_714317792.dbf 1_54_714317792.dbf 1_64_714317792.dbf 1_74_714317792.dbf 1_84_714317792.dbf 1_94_714317792.dbf
1_103_714317792.dbf 1_55_714317792.dbf 1_65_714317792.dbf 1_75_714317792.dbf 1_85_714317792.dbf 1_95_714317792.dbf
1_104_714317792.dbf 1_56_714317792.dbf 1_66_714317792.dbf 1_76_714317792.dbf 1_86_714317792.dbf 1_96_714317792.dbf
1_47_714317792.dbf 1_57_714317792.dbf 1_67_714317792.dbf 1_77_714317792.dbf 1_87_714317792.dbf 1_97_714317792.dbf
1_48_714317792.dbf 1_58_714317792.dbf 1_68_714317792.dbf 1_78_714317792.dbf 1_88_714317792.dbf 1_98_714317792.dbf
1_49_714317792.dbf 1_59_714317792.dbf 1_69_714317792.dbf 1_79_714317792.dbf 1_89_714317792.dbf 1_99_714317792.dbf
1_50_714317792.dbf 1_60_714317792.dbf 1_70_714317792.dbf 1_80_714317792.dbf 1_90_714317792.dbf
1_51_714317792.dbf 1_61_714317792.dbf 1_71_714317792.dbf 1_81_714317792.dbf 1_91_714317792.dbf
Start the capture process:
[oracle@primary arch]$ sqlplus /nolog
SQL*Plus: Release 10.2.0.3.0 - Production on Wed Mar 31 14:46:39 2010
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
SQL> conn strmadmin/strmadmin
Connected.
SQL> exec dbms_capture_adm.start_capture('capture_a');
PL/SQL procedure successfully completed.
SQL>
After restarting the capture process, the following messages appear in the alert log:
Streams CAPTURE C001 started with pid=23, OS id=7950
Wed Mar 31 14:46:55 2010
LOGMINER: Parameters summary for session# = 84
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 3000M
The capture process starts, but LogMiner never begins working normally.
SQL> SET LINESIZE 200
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
SQL> COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
SQL> COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
SQL> COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
SQL> COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
SQL> COLUMN ERROR_MESSAGE HEADING 'Capture|Process|E_MESSAGE' FORMAT A20
SQL> SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS, ERROR_NUMBER, ERROR_MESSAGE
  2  FROM DBA_CAPTURE;
Capture Capture Capture Capture
Process Process Positive Negative Process Process
Name Queue Rule Set Rule Set Status ERROR_NUMBER E_MESSAGE
--------------- --------------- --------------- --------------- --------------- ------------ --------------------
CAPTURE_A Q_CAP_QUEUE RULESET$_60 ENABLED
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
SQL> COLUMN STATE HEADING 'State' FORMAT A27
SQL> COLUMN STATE_CHANGED HEADING 'State|Change Time'
SQL> COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'
SQL> SELECT CAPTURE_NAME,
2 STATE,
3 TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
4 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
5 FROM V$STREAMS_CAPTURE;
Capture State Last Message
Name State Change Time Create Time
--------------- --------------------------- ----------------- -----------------
CAPTURE_A DICTIONARY INITIALIZATION 14:49:52 03/31/10
SQL>
We can see that although the restarted capture process is ENABLED, it remains stuck in DICTIONARY INITIALIZATION and cannot synchronize data.
Now we rename the archived log file back to its original name:
[oracle@primary arch]$ ls
1_100_714317792.dbfdfdfdfd 1_52_714317792.dbf 1_62_714317792.dbf 1_72_714317792.dbf 1_82_714317792.dbf 1_92_714317792.dbf
1_101_714317792.dbf 1_53_714317792.dbf 1_63_714317792.dbf 1_73_714317792.dbf 1_83_714317792.dbf 1_93_714317792.dbf
1_102_714317792.dbf 1_54_714317792.dbf 1_64_714317792.dbf 1_74_714317792.dbf 1_84_714317792.dbf 1_94_714317792.dbf
1_103_714317792.dbf 1_55_714317792.dbf 1_65_714317792.dbf 1_75_714317792.dbf 1_85_714317792.dbf 1_95_714317792.dbf
1_104_714317792.dbf 1_56_714317792.dbf 1_66_714317792.dbf 1_76_714317792.dbf 1_86_714317792.dbf 1_96_714317792.dbf
1_47_714317792.dbf 1_57_714317792.dbf 1_67_714317792.dbf 1_77_714317792.dbf 1_87_714317792.dbf 1_97_714317792.dbf
1_48_714317792.dbf 1_58_714317792.dbf 1_68_714317792.dbf 1_78_714317792.dbf 1_88_714317792.dbf 1_98_714317792.dbf
1_49_714317792.dbf 1_59_714317792.dbf 1_69_714317792.dbf 1_79_714317792.dbf 1_89_714317792.dbf 1_99_714317792.dbf
1_50_714317792.dbf 1_60_714317792.dbf 1_70_714317792.dbf 1_80_714317792.dbf 1_90_714317792.dbf
1_51_714317792.dbf 1_61_714317792.dbf 1_71_714317792.dbf 1_81_714317792.dbf 1_91_714317792.dbf
[oracle@primary arch]$ mv 1_100_714317792.dbfdfdfdfd 1_100_714317792.dbf
[oracle@primary arch]$
Below is the alert log after the rename:
LOGMINER: Parameters summary for session# = 84
LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
LOGMINER: Memory Size = 10M, Checkpoint interval = 3000M
LOGMINER: session# = 84, reader process P000 started with pid=13 OS id=7972
LOGMINER: session# = 84, preparer process P002 started with pid=25 OS id=7976
LOGMINER: session# = 84, builder process P001 started with pid=24 OS id=7974
Wed Mar 31 14:52:41 2010
LOGMINER: Begin mining logfile: /arch/1_100_714317792.dbf
Wed Mar 31 14:52:54 2010
LOGMINER: End mining logfile: /arch/1_100_714317792.dbf
Wed Mar 31 14:52:54 2010
LOGMINER: Begin mining logfile: /arch/1_101_714317792.dbf
We can see that the capture process has resumed normal mining.
Now let's look at the capture process status again:
SQL> conn strmadmin/strmadmin
Connected.
SQL> SET LINESIZE 200
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A15
SQL> COLUMN QUEUE_NAME HEADING 'Capture|Process|Queue' FORMAT A15
SQL> COLUMN RULE_SET_NAME HEADING 'Positive|Rule Set' FORMAT A15
SQL> COLUMN NEGATIVE_RULE_SET_NAME HEADING 'Negative|Rule Set' FORMAT A15
SQL> COLUMN STATUS HEADING 'Capture|Process|Status' FORMAT A15
SQL> COLUMN ERROR_MESSAGE HEADING 'Capture|Process|E_MESSAGE' FORMAT A20
SQL> SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS,ERROR_NUMBER,ERROR_MESSAGE
2 FROM DBA_CAPTURE;
Capture Capture Capture Capture
Process Process Positive Negative Process Process
Name Queue Rule Set Rule Set Status ERROR_NUMBER E_MESSAGE
--------------- --------------- --------------- --------------- --------------- ------------ --------------------
CAPTURE_A Q_CAP_QUEUE RULESET$_60 ENABLED
SQL> COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A15
SQL> COLUMN STATE HEADING 'State' FORMAT A27
SQL> COLUMN STATE_CHANGED HEADING 'State|Change Time'
SQL> COLUMN CREATE_MESSAGE HEADING 'Last Message|Create Time'
SQL> SELECT CAPTURE_NAME,
2 STATE,
3 TO_CHAR(STATE_CHANGED_TIME, 'HH24:MI:SS MM/DD/YY') STATE_CHANGED,
4 TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_MESSAGE
5 FROM V$STREAMS_CAPTURE;
Capture State Last Message
Name State Change Time Create Time
--------------- --------------------------- ----------------- -----------------
CAPTURE_A CAPTURING CHANGES 14:55:05 03/31/10 14:55:03 03/31/10
The capture process is healthy again: its state has changed to CAPTURING CHANGES and data synchronization has resumed.
The takeaway is that when managing archived logs in a Streams environment, you must always retain the archived logs the capture process needs in order to restart; otherwise the capture process starts but cannot resume replication.
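Conversely, the join used at the top of this test can be inverted to get a rough list of registered archived logs whose NEXT_SCN is already below REQUIRED_CHECKPOINT_SCN and which the capture process should therefore no longer need at restart. This is only a sketch of a cleanup check, not an official purge script; verify the candidates (for example against the PURGEABLE column where your version provides it) before deleting any files:
SELECT r.CONSUMER_NAME,
       r.SOURCE_DATABASE,
       r.SEQUENCE#,
       r.NAME
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
 WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
   AND r.NEXT_SCN < c.REQUIRED_CHECKPOINT_SCN;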
Reposted from: https://blog.51cto.com/luruoyu/929376