Most of us have used TimesTen's cache connect feature and found it works well enough for what we need. But one problem has always bothered me: there is no way to tell how long it takes data to propagate from Oracle to TimesTen, and TimesTen provides no command or GUI for observing it. For a real-time telecom billing system this is a serious blind spot: after loading a few million rows you have no idea how many have been synchronized or how much longer the sync will take. One day, curious about this, I dug around and learned a thing or two, which I share below. Comments and corrections are welcome.
TimesTen's cache group feature is essentially trigger-based: every time you create a cache group, a trigger is created on the Oracle side, and reading that trigger's contents tells us quite a bit. Here is the experiment:
SQL> SELECT TRIGGER_NAME, BASE_OBJECT_TYPE, TABLE_NAME
2 FROM USER_TRIGGERS
3 WHERE TABLE_NAME IN ('CUST_ACCOUNT', 'MDSE_SPEC', 'ACCT_RELA',
4 'TELECOM_ACCOUNT', 'PAYMENT_PLAN', 'PROD', 'MDSE');
TRIGGER_NAME BASE_OBJECT_TYPE TABLE_NAME
------------------------------ ---------------- ------------------------------
TT_03_17497_T TABLE CUST_ACCOUNT
TT_03_17501_T TABLE MDSE_SPEC
TT_03_17481_T TABLE ACCT_RELA
TT_03_17487_T TABLE TELECOM_ACCOUNT
TT_03_17491_T TABLE PAYMENT_PLAN
TT_03_17476_T TABLE PROD
TT_03_18385_T TABLE MDSE
7 rows selected
From the listing above we can tell which trigger serves each cached base table. The TRIGGER_BODY column of USER_TRIGGERS shows us how the trigger actually works.
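To dump a trigger's body in SQL*Plus, for example for the PROD trigger from the listing above (TRIGGER_BODY is a LONG column, so raise the display limit first):

```sql
SET LONG 20000
SELECT TRIGGER_BODY
  FROM USER_TRIGGERS
 WHERE TRIGGER_NAME = 'TT_03_17476_T';
```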
The trigger_body content is as follows:
DECLARE
  MAX_INT CONSTANT INT := 99999999999999999999999999999999999999;
  MINREPORTTS DATE;
  MINBOOKMARK NUMBER;
  REQDEMPTYSPACES INT := 2;
  ERR_NUM NUMBER;
  DUMMY INT;
  DUMMYNAME CHAR(100);
BEGIN
  <<RETRY>>
  BEGIN
    IF (INSERTING) THEN
      INSERT INTO ABMBILL.TT_03_17476_L
        (LOGSEQ, FT_CACHEGROUP, "PK_ID", XID)
      VALUES
        (MAX_INT, 0, :NEW."PK_ID", DBMS_TRANSACTION.LOCAL_TRANSACTION_ID);
    ELSE
      IF (UPDATING AND ((:OLD."PK_ID" <> :NEW."PK_ID"))) THEN
        INSERT INTO ABMBILL.TT_03_17476_L
          (LOGSEQ, FT_CACHEGROUP, "PK_ID", XID)
        VALUES
          (MAX_INT, 0, :NEW."PK_ID", DBMS_TRANSACTION.LOCAL_TRANSACTION_ID);
      END IF;
      INSERT INTO ABMBILL.TT_03_17476_L
        (LOGSEQ, FT_CACHEGROUP, "PK_ID", XID)
      VALUES
        (MAX_INT, 0, :OLD."PK_ID", DBMS_TRANSACTION.LOCAL_TRANSACTION_ID);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
      BEGIN
        EXECUTE IMMEDIATE 'select objectname from abmbill.TT_03_SYNC_OBJS where objectname = :objectName'
          INTO DUMMYNAME
          USING 'DONOTTRUNCATE';
        IF (SQL%ROWCOUNT > 0) THEN
          RAISE;
        END IF;
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          DUMMY := 0;
      END;
      ERR_NUM := SQLCODE;
      IF (ERR_NUM = -1536 OR ERR_NUM = -1653 OR ERR_NUM = -1688 OR
          ERR_NUM = -3233) THEN
        EXECUTE IMMEDIATE 'select min(bookmark) from abmbill.TT_03_agent_status where object_id = 17476'
          INTO MINBOOKMARK;
        DELETE FROM ABMBILL.TT_03_17476_L WHERE LOGSEQ <= MINBOOKMARK;
        IF (SQL%ROWCOUNT >= REQDEMPTYSPACES) THEN
          INSERT INTO ABMBILL.TT_03_17476_L
            (LOGSEQ, FT_CACHEGROUP)
          VALUES
            (MINBOOKMARK, 0);
          GOTO RETRY;
        ELSE
          EXECUTE IMMEDIATE 'select min(reportTS) from abmbill.TT_03_agent_status where object_id = 17476'
            INTO MINREPORTTS;
          EXECUTE IMMEDIATE 'select bookmark from abmbill.TT_03_agent_status where object_id = 17476 and reportTS = :minReportTS'
            INTO MINBOOKMARK
            USING MINREPORTTS;
          DELETE FROM ABMBILL.TT_03_17476_L WHERE LOGSEQ <= MINBOOKMARK;
          IF (SQL%ROWCOUNT >= REQDEMPTYSPACES) THEN
            INSERT INTO ABMBILL.TT_03_17476_L
              (LOGSEQ, FT_CACHEGROUP)
            VALUES
              (MINBOOKMARK, 0);
            GOTO RETRY;
          ELSE
            DELETE FROM ABMBILL.TT_03_17476_L;
            IF (SQL%ROWCOUNT < REQDEMPTYSPACES) THEN
              RAISE;
            END IF;
            GOTO RETRY;
          END IF;
        END IF;
      ELSE
        RAISE;
      END IF;
  END;
END;
From this code we can see that Oracle writes every changed row into the TT_03_17476_L change-log table, while TT_03_agent_status tracks the synchronization state between Oracle and TimesTen: it records the last bookmark synchronized and the time it was reported.
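Given those two tables, a rough per-table backlog estimate is the gap between the newest log sequence and the bookmark the agent has acknowledged. A sketch for the PROD table (object_id 17476; the ABMBILL schema and IDs are from this system, yours will differ; new rows carry the MAX_INT sentinel as LOGSEQ until renumbered, so treat the result as approximate):

```sql
SELECT (SELECT MAX(LOGSEQ) FROM ABMBILL.TT_03_17476_L) -
       (SELECT MIN(BOOKMARK)
          FROM ABMBILL.TT_03_AGENT_STATUS
         WHERE OBJECT_ID = 17476) AS PENDING_LOGSEQ
  FROM DUAL;
```

A result of 0 means the table is caught up; a positive value is roughly the number of change-log entries still waiting to be refreshed into TimesTen.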
SQL> SELECT MAX(LOGSEQ) PROD, TO_CHAR(SYSDATE, 'yyyy-mm-dd hh24:mi:ss') "date"
2 FROM TT_03_17476_L;
PROD date
---------- -------------------
1043 2010-01-05 10:06:36
SQL> SELECT DECODE(OBJECT_ID,
2 19497,
3 'CUST_ACCOUNT',
4 17501,
5 'MDSE_SPEC',
6 17481,
7 'ACCT_RELA',
8 17487,
9 'TELECOM_ACCOUNT',
10 17491,
11 'PAYMENT_PLAN',
12 17476,
13 'PROD',
14 18385,
15 'MDSE'),
16 BOOKMARK,
17 TO_CHAR(REPORTTS, 'yyyy-mm-dd hh24:mi:ss') "date"
18 FROM TT_03_AGENT_STATUS
19 WHERE OBJECT_ID IN (17497, 17501, 17481, 17487, 17491, 17476, 18385);
DECODE(OBJECT_ID,19497,'CUST_A BOOKMARK date
------------------------------ ---------- -------------------
PAYMENT_PLAN 1025 2010-01-05 10:07:58
PROD 1043 2010-01-05 10:08:04
ACCT_RELA 1026 2010-01-05 10:08:04
TELECOM_ACCOUNT 1024 2010-01-05 10:08:04
MDSE_SPEC 1 2010-01-05 10:08:04
MDSE 0 2010-01-05 10:08:04
PAYMENT_PLAN 1025 2010-01-05 10:08:04
1016 2010-01-05 10:08:04
From these two queries we can see that the latest bookmark for the PROD table is 1043 in both the change-log table and the status table; the two agree, which indicates synchronization is complete. (Note that the DECODE above lists 19497 where it should list 17497, which is why the CUST_ACCOUNT row comes back with a blank name.)
Next, a small synchronization experiment to verify this:
10:10:44 SQL> UPDATE prod SET stop_status = 0
2 WHERE pk_id >=305657 AND pk_id < 306657;
1000 rows updated
Executed in 0.25 seconds
10:10:45 SQL> commit;
Commit complete
Executed in 0.015 seconds
10:10:47 SQL>
From the statement timestamps we can see the commit completed at 10:10:47.
10:12:42 SQL> SELECT DECODE(OBJECT_ID,
2 19497,
3 'CUST_ACCOUNT',
4 17501,
5 'MDSE_SPEC',
6 17481,
7 'ACCT_RELA',
8 17487,
9 'TELECOM_ACCOUNT',
10 17491,
11 'PAYMENT_PLAN',
12 17476,
13 'PROD',
14 18385,
15 'MDSE'),
16 BOOKMARK,
17 TO_CHAR(REPORTTS, 'yyyy-mm-dd hh24:mi:ss') "date"
18 FROM TT_03_AGENT_STATUS
19 WHERE OBJECT_ID IN (17497, 17501, 17481, 17487, 17491, 17476, 18385);
DECODE(OBJECT_ID,19497,'CUST_A BOOKMARK date
------------------------------ ---------- -------------------
PAYMENT_PLAN 1025 2010-01-05 10:12:58
PROD 1044 2010-01-05 10:13:04
ACCT_RELA 1026 2010-01-05 10:13:04
TELECOM_ACCOUNT 1024 2010-01-05 10:13:04
MDSE_SPEC 1 2010-01-05 10:13:04
MDSE 0 2010-01-05 10:13:04
PAYMENT_PLAN 1025 2010-01-05 10:13:04
The PROD bookmark advanced from 1043 to 1044, which shows the transaction has been synchronized; the completion time was 10:13:04.
Subtracting the commit time from the sync completion time gives 137 seconds for this refresh. You may think 137 seconds for 1,000 rows is awfully slow. The explanation: this cache group was configured with an autorefresh interval of 90 seconds, so the observed latency can be longer or shorter depending on where in the refresh cycle the commit lands.
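The 137-second figure is plain date arithmetic; you can let Oracle compute it (DATE subtraction yields days, so multiply by 86400 to get seconds):

```sql
SELECT (TO_DATE('2010-01-05 10:13:04', 'yyyy-mm-dd hh24:mi:ss') -
        TO_DATE('2010-01-05 10:10:47', 'yyyy-mm-dd hh24:mi:ss')) * 86400
       AS SYNC_SECONDS
  FROM DUAL;
-- SYNC_SECONDS = 137
```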
TIPS:
The trigger name encodes both the object_id and the change-log table the trigger writes to.
Eg: trigger: TT_03_17476_T
object_id: 17476
change-log table: TT_03_17476_L
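This naming convention can be exploited directly in SQL to list every cache-group trigger together with its object_id and change-log table. A sketch, assuming the TT_03_<object_id>_T pattern seen above (the fixed 6-character TT_03_ prefix is specific to this system):

```sql
SELECT TRIGGER_NAME,
       SUBSTR(TRIGGER_NAME, 7, LENGTH(TRIGGER_NAME) - 8)        AS OBJECT_ID,
       SUBSTR(TRIGGER_NAME, 1, LENGTH(TRIGGER_NAME) - 1) || 'L' AS LOG_TABLE
  FROM USER_TRIGGERS
 WHERE TRIGGER_NAME LIKE 'TT\_03\_%\_T' ESCAPE '\';
```

The object_id column then joins straight to TT_03_AGENT_STATUS, so one query can show the bookmark status for every cached table without hard-coding the IDs.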