Oracle 10g --> 11g schema-level downstream real-time capture Streams configuration
http://space.itpub.net/25618347/viewspace-717062
A business requirement calls for a reporting database fed from the application schema of the production database. The primary database runs Oracle 10.2.0.5.0, while the target database to be configured (the reporting database) runs 11.2.0.1.0; both sides are on Linux x86-64. I carried out the configuration in a test environment yesterday; the detailed steps are as follows:
1. The preliminary parameter settings are omitted here; see my earlier articles for the details;
2. Most of the configuration is done on the target database, as follows:
Perform the corresponding user, tablespace, and grant operations first, then log in as the strmadmin user:
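For reference, those prerequisite objects on the target look roughly like the sketch below; the datafile path, password, and connect string are placeholders rather than the values used in the original environment:

-- Run as SYS on the target (downstream) database; path and password are placeholders
create tablespace tbs_stream datafile '/oracle/oradata/myrpt/tbs_stream01.dbf' size 500m;
create user strmadmin identified by strmadmin default tablespace tbs_stream;
grant dba to strmadmin;
exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');

-- Run as strmadmin: a database link back to the source, named after the source
-- global name ('uat'), is needed later by CREATE_CAPTURE and by the impdp step;
-- it assumes a strmadmin user with matching privileges already exists on the source
create database link uat connect to strmadmin identified by strmadmin using 'uat';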
Queue setup:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table    => 'strmadmin.DOWNSTREAM_Q_TABLE',
    queue_name     => 'strmadmin.DOWNSTREAM_Q',
    storage_clause => 'tablespace tbs_stream',
    queue_user     => 'STRMADMIN');
END;
/
Create the apply process:
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'DOWNSTREAM_Q',
    apply_name     => 'DOWNSTREAM_APPLY',
    apply_captured => TRUE);
END;
/
Create the capture process:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'DOWNSTREAM_Q',
    capture_name       => 'DOWNSTREAM_CAPTURE',
    rule_set_name      => NULL,
    start_scn          => NULL,
    source_database    => 'uat',
    use_database_link  => true,
    first_scn          => NULL,
    logfile_assignment => 'implicit'); -- Refer to Note below.
END;
/
It is at this point that we specifically focus on the logfile_assignment parameter. We set this parameter to 'implicit' to instruct the CAPTURE process to scan all redo log files added by redo transport services or manually from the source database to the downstream database.
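For comparison, had logfile_assignment been set to 'explicit', each archived log shipped to the downstream database would have to be registered manually for the capture process, roughly as sketched below (the file name is simply the one that shows up in the alert log later in this post, used here as a placeholder):

ALTER DATABASE REGISTER LOGICAL LOGFILE '/arch/1_9665_716985918.dbf'
  FOR 'DOWNSTREAM_CAPTURE';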
Add rules for the capture process:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'MYNET_APP',
    streams_type       => 'capture',
    streams_name       => 'downstream_capture',
    queue_name         => 'downstream_q',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'uat',
    inclusion_rule     => TRUE);
END;
/
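The rules generated by ADD_SCHEMA_RULES can be inspected through the Streams dictionary views; this query is an extra check of my own, not part of the original procedure:

SELECT streams_name, streams_type, rule_type, rule_name
  FROM dba_streams_rules
 WHERE streams_name = 'DOWNSTREAM_CAPTURE';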
Instantiate the schema:
DECLARE
  -- Variable to hold instantiation SCN value
  iscn NUMBER;
BEGIN
  -- Get current SCN from Source
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@uat;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'MYNET_APP',
    source_database_name => 'uat',
    instantiation_scn    => iscn,
    recursive            => TRUE);
END;
/
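The recorded instantiation SCN can be verified on the target before running the import (again an optional check I add here):

SELECT source_schema, source_database, instantiation_scn
  FROM dba_apply_instantiated_schemas;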
Then perform the data import at the operating system level:
impdp strmadmin directory=DPDATA1 network_link=uat schemas=mynet_app STREAMS_CONFIGURATION=Y table_exists_action=replace PARALLEL=2
After the import completes, start the apply process:
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');
Start the capture process, then enable real-time downstream capture:
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'DOWNSTREAM_CAPTURE',
    parameter    => 'downstream_real_time_mine',
    value        => 'y');
END;
/
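Whether real-time mining actually took effect can be confirmed against the capture parameters; this is an optional verification query of my own:

SELECT capture_name, parameter, value
  FROM dba_capture_parameters
 WHERE parameter = 'DOWNSTREAM_REAL_TIME_MINE';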
At this point the configuration is essentially complete, and the result can be checked with the following query:
SQL> select CAPTURE_NAME, status, ERROR_MESSAGE from dba_capture
  2  union all
  3  select apply_name, status, error_message from dba_apply;

CAPTURE_NAME                   STATUS   ERROR_MESSAGE
------------------------------ -------- --------------------------------------------------------------------------------
DOWNSTREAM_CAPTURE             ENABLED
DOWNSTREAM_APPLY               ENABLED
After running DML or DDL on the primary and archiving the current redo log, messages like the following appear in the target database's alert log:
RFS LogMiner: Registered logfile [/arch/1_9665_716985918.dbf] to LogMiner session id [18]
Fri Feb 24 09:45:40 2012
LOGMINER: End mining logfile for session 18 thread 1 sequence 9665, /oracle/oradata/myrpt/stdbyredo02.log
LOGMINER: Begin mining logfile for session 18 thread 1 sequence 9666, /oracle/oradata/myrpt/stdbyredo01.log
Fri Feb 24 09:48:26 2012
This shows that changes are being mined and applied in real time.
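Besides the alert log, the capture state can be watched from the dynamic view on the target; when real-time mine is working the state usually shows something like CAPTURING CHANGES or WAITING FOR REDO. A sketch of an optional monitoring query:

SELECT capture_name, state, total_messages_captured
  FROM v$streams_capture;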
One thing worth pointing out: after the configuration there is no propagation process on the target database. The reason is simple. A propagation process is only needed for non-downstream capture, where its job is to forward the redo changes captured by the capture process to the target database; here both the capture and apply processes live on the target database, so there is no need to create one.

Another caveat: when the primary runs a very large batch update, i.e. a large transaction, the target database struggles to apply it, which is one of the biggest weaknesses of Streams. After finishing this configuration I ran a test. I created a table with well over a hundred thousand rows on the primary, and it was transferred to the target and applied quickly. I then updated every row of that table; the redo changes reached the target but could not be applied, and as long as that large transaction cannot be applied, nothing after it can be applied either. The relevant query showed an active spilled transaction:
SELECT t.xidusn||'.'||t.xidslt||'.'||t.xidsqn transaction_id, message_count, t.first_message_create_time FROM DBA_APPLY_SPILL_TXN t;
That was exactly the earlier full-table update. The main workaround is to skip the offending transaction, although that is not a real fix. What I actually did was adjust a few parameters on the target database, after which the large transaction began to apply, though it took a while, roughly 10 minutes, before it was fully applied. The root cause is still being investigated.
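For reference, skipping a specific spilled transaction is normally done with a hidden apply parameter described in Oracle Support notes. The transaction id below is a placeholder (it would be the value returned by the DBA_APPLY_SPILL_TXN query above), and the original notes do not say which parameters were actually changed, so treat this purely as an illustrative sketch:

-- Sketch only: skip one problem transaction by id (hidden parameter, use with care)
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'DOWNSTREAM_APPLY',
    parameter  => '_ignore_transaction',
    value      => '12.34.5678');   -- placeholder transaction id
END;
/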