To meet a business requirement, we need to build a reporting database fed from the application schema of the production database. The current situation: the primary database is Oracle 10.2.0.5.0, while the target database to be configured, i.e. the reporting database, is 11.2.0.1.0; the operating system is Linux x86-64 in both cases. I ran through the configuration in a test environment yesterday; the detailed steps are as follows:
1. The preliminary parameter settings and so on are omitted here; see my earlier articles for the details;
2. The main configuration is done on the target database, specifically:
First perform the corresponding user, tablespace, and grant operations (a minimal sketch follows), then log in as the strmadmin user:
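The sketch below is mine, not from the original post: the tablespace name tbs_stream matches the storage_clause used later, the datafile path, sizes, and passwords are placeholder assumptions, and the database link must be named uat because use_database_link => true requires the link name to match the source database's global name.
CREATE TABLESPACE tbs_stream DATAFILE '/oracle/oradata/myrpt/tbs_stream01.dbf' SIZE 500M AUTOEXTEND ON;
CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE tbs_stream QUOTA UNLIMITED ON tbs_stream;
GRANT DBA TO strmadmin;
-- Grant the Streams administrator privilege
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'STRMADMIN');
END;
/
-- As strmadmin: link back to the source; the TNS alias 'uat' is an assumption
CREATE DATABASE LINK uat CONNECT TO strmadmin IDENTIFIED BY strmadmin USING 'uat';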
Queue setup:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.DOWNSTREAM_Q_TABLE',
queue_name => 'strmadmin.DOWNSTREAM_Q',
STORAGE_CLAUSE => 'tablespace tbs_stream',
queue_user => 'STRMADMIN');
END;
/
Create the apply process:
BEGIN
DBMS_APPLY_ADM.CREATE_APPLY(
queue_name => 'DOWNSTREAM_Q',
apply_name => 'DOWNSTREAM_APPLY',
apply_captured => TRUE
);
END;
/
Create the capture process:
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'DOWNSTREAM_Q',
capture_name => 'DOWNSTREAM_CAPTURE',
rule_set_name => NULL,
start_scn => NULL,
source_database => 'uat',
use_database_link => true,
first_scn => NULL,
logfile_assignment => 'implicit'); -- Refer to Note below.
END;
/
Note the logfile_assignment parameter here: setting it to 'implicit' instructs the
capture process to scan every redo log file added to the downstream database,
whether shipped by redo transport services or registered manually from the
source database.
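As a quick check (not part of the original walkthrough), the log files registered with the downstream capture process can be listed from DBA_REGISTERED_ARCHIVED_LOG:
SELECT source_database, thread#, sequence#, name, dictionary_begin
FROM dba_registered_archived_log
WHERE consumer_name = 'DOWNSTREAM_CAPTURE';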
Add schema rules for the capture process:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'MYNET_APP',
streams_type => 'capture',
streams_name => 'downstream_capture',
queue_name => 'downstream_q',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'uat',
inclusion_rule => TRUE);
END;
/
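To confirm the DML and DDL rules that ADD_SCHEMA_RULES generated (an optional check I am adding here), query DBA_STREAMS_RULES:
SELECT streams_name, streams_type, rule_name, rule_type, schema_name
FROM dba_streams_rules
WHERE streams_name = 'DOWNSTREAM_CAPTURE';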
Instantiate the schema:
DECLARE
-- Variable to hold instantiation SCN value
iscn NUMBER;
BEGIN
-- Get current SCN from Source
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@uat;
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'MYNET_APP',
source_database_name => 'uat',
instantiation_scn => iscn,
recursive => TRUE);
END;
/
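The instantiation SCN recorded on the target can be verified afterwards (again an optional check, not in the original post):
SELECT source_schema, source_database, instantiation_scn
FROM dba_apply_instantiated_schemas;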
Then import the data at the operating-system level:
impdp strmadmin directory=DPDATA1 network_link=uat schemas=mynet_app STREAMS_CONFIGURATION=Y table_exists_action=replace PARALLEL=2
After the import completes, start the apply process:
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');
Start the capture process and enable real-time capture:
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');
BEGIN
DBMS_CAPTURE_ADM.SET_PARAMETER(
capture_name => 'DOWNSTREAM_CAPTURE',
parameter => 'downstream_real_time_mine',
value => 'y');
END;
/
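Whether the parameter took effect can be confirmed via DBA_CAPTURE_PARAMETERS (a verification query added here for illustration):
SELECT capture_name, parameter, value, set_by_user
FROM dba_capture_parameters
WHERE capture_name = 'DOWNSTREAM_CAPTURE'
AND parameter = 'DOWNSTREAM_REAL_TIME_MINE';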
At this point the configuration is essentially complete, and you can verify it with queries such as:
SQL> select CAPTURE_NAME,status,ERROR_MESSAGE from dba_capture
2 union all
3 SELECT apply_name,status,error_message from dba_apply;
CAPTURE_NAME STATUS ERROR_MESSAGE
------------------------------ -------- --------------------------------------------------------------------------------
DOWNSTREAM_CAPTURE ENABLED
DOWNSTREAM_APPLY ENABLED
After performing DML or DDL on the primary and archiving the current log, entries like the following appear in the target database's alert log:
RFS LogMiner: Registered logfile [/arch/1_9665_716985918.dbf] to LogMiner session id [18]
Fri Feb 24 09:45:40 2012
LOGMINER: End mining logfile for session 18 thread 1 sequence 9665, /oracle/oradata/myrpt/stdbyredo02.log
LOGMINER: Begin mining logfile for session 18 thread 1 sequence 9666, /oracle/oradata/myrpt/stdbyredo01.log
Fri Feb 24 09:48:26 2012
This shows the changes are being applied in real time!
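Besides the alert log, the capture state can be checked directly; with real-time mining active it should normally show CAPTURING CHANGES while mining the standby redo log (an extra check, not from the original post):
SELECT capture_name, state, total_messages_captured
FROM v$streams_capture;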
One thing worth pointing out: after this configuration there is no propagation process on the target database. The reason is simple: propagation exists for non-downstream capture, where it forwards the redo changes collected by the capture process to the target database. Here both the capture and apply processes live on the target database, so there is no need to create a propagation process.
Another caveat: when the primary runs a large bulk update, i.e. a big transaction, the target struggles to apply it, which is arguably the biggest weakness of Streams. After finishing this configuration I ran a test: I created a table with a hundred-odd thousand rows on the primary, and it was quickly transferred to the target and applied. I then updated every row of that table; the redo changes reached the target but could not be applied. And if such a big transaction cannot be applied, everything queued behind it is blocked as well. The following query showed an active large (spilled) transaction:
SELECT t.xidusn||'.'||t.xidslt||'.'||t.xidsqn transaction_id, message_count, t.first_message_create_time FROM DBA_APPLY_SPILL_TXN t;
It was the earlier full-table update. The usual workaround is to skip that transaction, although that is not a real fix. What I did was change some parameters on the target database, after which the big transaction began to apply; it took a while, roughly 10 minutes to apply completely. The root cause is still being investigated.
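The post above does not name the parameters that were changed. For reference only, two levers commonly used against large transactions are the apply-side parallelism and spill threshold, plus, as a last resort, skipping the offending transaction by the ID returned from DBA_APPLY_SPILL_TXN. The values below are illustrative, not the author's actual settings; '_ignore_transaction' is an undocumented parameter best used under Oracle Support guidance, and '12.34.5678' is a placeholder:
BEGIN
-- Illustrative tuning values, not the author's actual settings
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'DOWNSTREAM_APPLY',
parameter => 'parallelism',
value => '4');
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'DOWNSTREAM_APPLY',
parameter => 'txn_lcr_spill_threshold',
value => '10000');
-- Last resort: skip one transaction by its ID (placeholder shown)
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'DOWNSTREAM_APPLY',
parameter => '_ignore_transaction',
value => '12.34.5678');
END;
/
If a large update like this is unavoidable, breaking it into smaller committed batches on the source side also keeps the apply process from spilling in the first place.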