Oracle GoldenGate - Heartbeat process to monitor lag and performance in GoldenGate [ID 1299679.1]
Modified: 24-MAY-2011   Type: HOWTO   Status: PUBLISHED
Applies to:
Oracle GoldenGate - Version: 10.4.0.0 and later [Release: 10.4.0 and later]
Information in this document applies to any platform.
Affected Database: Oracle
Goal
This document describes a procedure to reliably measure lag between source and target systems through the use of "heartbeat" tables.
Solution
The basic idea is to have a table on the source system that gets updated at a predetermined interval. Your capture processes capture the update from the heartbeat table, and tokens are used to add information to the heartbeat record so that you can tell which extract process captured the update. This information is used downstream to calculate the real lag time between the source and target systems for a given extract, and by checking the last update time on the heartbeat at the target you can also determine whether data has stopped flowing between the source and target.
Configuration Overview
For the heartbeat table to function you will need to create a "heartbeat" table on both the source and the target. The source table can be very simple, since all you are doing is updating a single timestamp on the source. In this version of the heartbeat, additional information is added that is obtained by issuing an "INFO SHOWCH" command in GGSCI; the output is parsed and the values are written to the heartbeat table. Extract then captures the updated row and adds a couple of tokens to the record, which is passed in the trail down to the target. On the target system the record is inserted into the target GGS_HEARTBEAT table, and additional information is added via tokens and a trigger. For this example, the target includes information from the source, from the data pump if one is used, and from the target itself. This information can be used to determine where the lag is developing and to create a history of the flow of data.
Step 1 - Source and Target users
Both the source and target need a schema in which to create the heartbeat tables. The users below are only suggestions: any schema that does not contain user tables being replicated can be used, and it is suggested to use the GoldenGate user that you have already set up as part of the base GoldenGate configuration.
On Source system -
SQL> create user source identified by ggs;
SQL> grant connect, resource, dba to source;
On Target system -
SQL> create user target identified by ggs;
SQL> grant connect, resource, dba to target;
Step 2 - Heartbeat table Source
The source and target heartbeat tables have the same columns. The difference is that the target has two tables: a history table, which receives every row (insert only), and a current-status table that shows only the latest update. A row-level trigger performs the lag calculations as the data is loaded into the target tables. On the source, only the extract-related columns are populated.
Column | Contents |
ID | Unique Sequence number |
Extract_name | Name of the extract process |
Source_commit | Commit time in the source |
Target_commit | Commit time on the target |
CAPTIME | Time the row was extracted on the source |
CAPGROUP | Same as the extract name |
CAPLAG | The difference between the Source Commit time and the capture time. |
PMPTIME | Timestamp when the record passes through the data pump |
PMPGROUP | Data Pump process name |
PMPLAG | Lag at the data pump |
DELTIME | Replicat time |
DELGROUP | Replicat Group name |
DELLAG | Lag in replicat |
TOTALLAG | Total lag |
Lag_hh_mi_ss | Current lag time as reported by the GGSCI INFO command |
Lag_as_of_hh_mi_ss | Time of lag |
Thread | Redo thread number. If the source database is RAC you will see thread numbers greater than 1 |
Status | Current status of extract process. |
Current_seqno | Current redo log that extract is reading from. |
Current_rba | Current position inside the log file that extract is reading from. |
Start_time | Time that the extract was started |
Report_time | From the showch "Last Update Time =" under the status section. |
Recovery_seqno | This is the recovery checkpoint. It shows the log file in which the oldest outstanding transaction is located. |
Recovery_rba | The position that recovery must start at. |
Recovery_timestamp | The recovery timestamp |
Update_timestamp | The time that the heartbeat table was updated. |
Run the SQL*Plus script to create the heartbeat table and add a row to that table -
SQL> @/heartbeat_table_v9.sql
Heartbeat table Sql script -
-- Heartbeat table V9
-- This is created on the SOURCE system
-- Last update
-- 10-30-08 SGEORGE
-- 11-25-08 Table updated to match target.
-- PK is different on source.
-- 3-19-10 SGEORGE - changed format, updated for timestamp.
--
drop table ggs_heartbeat;
-- Create table statement
CREATE TABLE GGS_HEARTBEAT
( ID NUMBER ,
extract_name varchar2(8),
SOURCE_COMMIT TIMESTAMP,
TARGET_COMMIT TIMESTAMP,
CAPTIME TIMESTAMP,
CAPGROUP VARCHAR2(8 BYTE),
CAPLAG NUMBER,
PMPTIME TIMESTAMP,
PMPGROUP VARCHAR2(8 BYTE),
PMPLAG NUMBER,
DELTIME TIMESTAMP,
DELGROUP VARCHAR2(8 BYTE),
DELLAG NUMBER,
TOTALLAG NUMBER,
lag_hh_mi_ss varchar2(15),
lag_as_of_hh_mi_ss varchar2(15),
thread number,
status varchar2(10),
current_seqno number,
current_rba number,
start_time date,
report_time date,
recovery_seqno number,
recovery_rba number,
recovery_timestamp timestamp,
update_timestamp timestamp,
CONSTRAINT GGS_HEARTBEAT_PK PRIMARY KEY (extract_name, thread) ENABLE
)
/
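The listing above only creates the table; the step description also calls for adding a row. A minimal seed insert, assuming one row per extract group and redo thread (the ID, group name, and thread number shown are placeholders for your environment):
INSERT INTO GGS_HEARTBEAT (ID, EXTRACT_NAME, CAPGROUP, THREAD, UPDATE_TIMESTAMP)
VALUES (1, 'EXT_HB', 'EXT_HB', 1, SYSTIMESTAMP);
COMMIT;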
Add trandata to the heartbeat table -
GGSCI> ADD TRANDATA SOURCE.GGS_HEARTBEAT
Shell script to Update heartbeat table on the source -
See attached shell script - heartbeat_timestamp_TS_v9.sh (10Kb) - Heartbeat Shell script to update heartbeat on source system.
This shell script is to be run in the background -
nohup sh ./scripts/heartbeat_timestamp_TS_v9.sh &
This script issues a GGSCI INFO command, parses the output, and writes the results into the heartbeat table.
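The attached script is not reproduced here. The sketch below is only a rough, hypothetical outline of the approach it implements; the GoldenGate home, credentials, group name, sleep interval, and the GGSCI output labels used for parsing ("Sequence #:", "RBA:") are all assumptions and will differ from the real script.
#!/bin/sh
# Hypothetical sketch of a heartbeat updater; see the attached
# heartbeat_timestamp_TS_v9.sh for the real implementation.
GGS_HOME=/export/home/ggudb/GGS_10   # GoldenGate home (assumed)
EXT=EXT_HB                           # extract group to monitor (assumed)
while true
do
    # Ask GGSCI for the extract's checkpoint details
    CHK=`echo "info extract $EXT, showch" | $GGS_HOME/ggsci`
    # Parse a sequence number and RBA out of the output (label text assumed)
    SEQNO=`echo "$CHK" | awk '/Sequence #:/ {print $NF; exit}'`
    RBA=`echo "$CHK" | awk '/RBA:/ {print $NF; exit}'`
    # Write the parsed values and a fresh timestamp into the heartbeat row;
    # extract captures this update and sends it down the trail
    sqlplus -s source/ggs <<EOF
update ggs_heartbeat
   set current_seqno    = $SEQNO,
       current_rba      = $RBA,
       update_timestamp = systimestamp
 where extract_name = '$EXT';
commit;
exit
EOF
    sleep 60   # heartbeat interval (assumed)
done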
Step 3 - Heartbeat table setup - Target
This table has added columns to capture the history of inserts. The additional columns are added from the tokens and the firing of the trigger which does the calculations of lag times.
NOTE: The system times across all systems must be synchronized. If they are not, the calculated lag times will be inaccurate. If you see negative times, check the clocks.
Note: lag times are stored in seconds, calculated with microsecond precision
In sqlplus run the SQL script to create the target heartbeat tables -
SQL> @dirsql/heartbeat_table_target_TS_v9.sql
This is the SQL script (also attached) -
-----------------------------------------------------------------
--
-- Heartbeat table create script
--
-----------------------------------------------------------------
-- dirsql/heartbeat_table_target_TS_v9.sql
-- Usage: heartbeat_table_target_TS_v9.sql
-- This table collects the heartbeats that are captured on the source and
-- adds in the commit time and group name (replicat name).
-- The trigger adds the ID and target commit time and calculates the lag times.
-- updated 11-17-08 changed target table to GGS_HEARTBEAT
-- 11-25-08 Table updated to match source.
-- PK is different on source.
-- 03-11-10 updated for timestamp
--
DROP SEQUENCE SEQ_GGS_HEARTBEAT_ID ;
CREATE SEQUENCE SEQ_GGS_HEARTBEAT_ID INCREMENT BY 1 START WITH 1 ORDER ;
DROP TABLE GGS_HEARTBEAT;
CREATE TABLE GGS_HEARTBEAT
( ID NUMBER NOT NULL ENABLE,
EXTRACT VARCHAR2(8),
SOURCE_COMMIT TIMESTAMP,
TARGET_COMMIT TIMESTAMP,
CAPTIME TIMESTAMP,
CAPGROUP VARCHAR2(8 BYTE),
CAPLAG NUMBER,
PMPTIME TIMESTAMP,
PMPGROUP VARCHAR2(8 BYTE),
PMPLAG NUMBER,
DELTIME TIMESTAMP,
DELGROUP VARCHAR2(8 BYTE),
DELLAG NUMBER,
TOTALLAG NUMBER,
lag_hh_mi_ss varchar2(15),
lag_as_of_hh_mi_ss varchar2(15),
thread number,
status varchar2(10),
current_seqno number,
current_rba number,
start_time date,
report_time date,
recovery_seqno number,
recovery_rba number,
recovery_timestamp timestamp,
update_timestamp timestamp,
CONSTRAINT GGS_HEARTBEAT_PK PRIMARY KEY (DELGROUP) ENABLE
);
CREATE OR REPLACE TRIGGER GGS_HEARTBEAT_TRIG
BEFORE INSERT OR UPDATE ON GGS_HEARTBEAT
FOR EACH ROW
BEGIN
select seq_ggs_HEARTBEAT_id.nextval
into :NEW.ID
from dual;
select systimestamp
into :NEW.target_COMMIT
from dual;
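-- Each of the SELECTs below converts a TIMESTAMP difference (an INTERVAL
-- DAY TO SECOND, e.g. '+000000000 00:00:01.234567') into seconds by
-- parsing its string representation.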
select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.CAPLAG
from dual;
select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000
into :NEW.PMPLAG
from dual;
select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1, instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000
into :NEW.DELLAG
from dual;
select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1, instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.TOTALLAG
from dual;
end ;
/
ALTER TRIGGER "GGS_HEARTBEAT_TRIG" ENABLE;
--
-- This is for the History heartbeat table
--
DROP SEQUENCE SEQ_GGS_HEARTBEAT_HIST ;
CREATE SEQUENCE SEQ_GGS_HEARTBEAT_HIST INCREMENT BY 1 START WITH 1 ORDER ;
DROP TABLE GGS_HEARTBEAT_HISTORY;
CREATE TABLE GGS_HEARTBEAT_HISTORY
( ID NUMBER NOT NULL ENABLE,
EXTRACT VARCHAR2(8),
SOURCE_COMMIT TIMESTAMP,
TARGET_COMMIT TIMESTAMP,
CAPTIME TIMESTAMP,
CAPGROUP VARCHAR2(8 BYTE),
CAPLAG NUMBER,
PMPTIME TIMESTAMP,
PMPGROUP VARCHAR2(8 BYTE),
PMPLAG NUMBER,
DELTIME TIMESTAMP,
DELGROUP VARCHAR2(8 BYTE),
DELLAG NUMBER,
TOTALLAG NUMBER,
lag_hh_mi_ss varchar2(15),
lag_as_of_hh_mi_ss varchar2(15),
thread number,
status varchar2(10),
current_seqno number,
current_rba number,
start_time date,
report_time date,
recovery_seqno number,
recovery_rba number,
recovery_timestamp timestamp,
update_timestamp timestamp,
CONSTRAINT GGS_HEARTBEAT_HIST_PK PRIMARY KEY (ID) ENABLE
);
CREATE OR REPLACE TRIGGER GGS_HEARTBEAT_TRIG_HIST
BEFORE INSERT OR UPDATE ON GGS_HEARTBEAT_HISTORY
FOR EACH ROW
BEGIN
select seq_ggs_HEARTBEAT_HIST.nextval
into :NEW.ID
from dual;
select systimestamp
into :NEW.target_COMMIT
from dual;
select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.CAPLAG
from dual;
select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000
into :NEW.PMPLAG
from dual;
select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1, instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000
into :NEW.DELLAG
from dual;
select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1, instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.TOTALLAG
from dual;
end ;
/
ALTER TRIGGER "GGS_HEARTBEAT_TRIG_HIST" ENABLE;
Step 4 - Extract Configuration
In the extract parameter file a TABLE statement will need to be added in order to capture the update to the heartbeat table. Along with the update, a couple of tokens are added to record which extract captured the update and when. The best way to add the heartbeat statement is to use an include file; this way you can add the same include file to all of your extract parameter files.
Here is the include file with the heartbeat TABLE statement for the heartbeat table:
Parameter file - OGG_HOME/dirprm/HB_Extract.inc
-- HB_Extract.inc
-- Heartbeat Table
TABLE source.GGS_HEARTBEAT,
TOKENS (
CAPGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
CAPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP"))
),
FILTER ( @STREQ ( EXTRACT_NAME,@GETENV ("GGENVIRONMENT", "GROUPNAME")) );
This is an example of a complete extract parameter file with the include file for the heartbeat:
Extract Parameter file - OGG_HOME/dirprm/ext_hb.prm
-- Extract example for source DB protected by Oracle Data Guard
-- 4-9-10 SGEORGE
--
EXTRACT ext_hb
-- Since this may be for a Data Guard environment you should set the SID
SETENV (ORACLE_SID=ora11g)
-- Use USERID to specify the type of database authentication for GoldenGate to use.
USERID source, PASSWORD ggs
EXTTRAIL ./dirdat/db
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/ext_hb.dsc, APPEND
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started
-- REPORTCOUNT [EVERY] {RECORDS | SECONDS | MINUTES | HOURS} [, RATE]
REPORTCOUNT EVERY 5 MINUTES, RATE
-- Use FETCHOPTIONS to control certain aspects of the way that GoldenGate fetches
FETCHOPTIONS, NOUSESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
-- Use STATOPTIONS to specify information to be included in statistical displays
-- generated by the STATS EXTRACT or STATS REPLICAT command.
STATOPTIONS REPORTFETCH
-- This is the Heartbeat table
include dirprm/HB_Extract.inc
-- The implementation of this parameter varies depending on the process.
TABLE app_schema.*;
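The note does not show the GGSCI commands that create the extract group itself. A typical registration, assuming the trail name from the parameter file above (the begin point and trail options are site-specific):
GGSCI> ADD EXTRACT ext_hb, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/db, EXTRACT ext_hb
GGSCI> START EXTRACT ext_hb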
Step 5 - Data Pump Configuration
In the Data Pump parameter file you will need to include a TABLE statement so that the pump name and current timestamp are added to the record as it passes through the data pump. In the data pump you do not need the FILTER clause that was used in the extract, because the filtering was already done there. Again, the best practice is to use an include file for the data pump heartbeat statement.
The include file for the heartbeat in the data pump is as follows:
Include file for Data Pump - dirprm/HB_pmp.inc
-- HB_pmp.inc
-- Heartbeat Table
table ~ggsuser~.ggs_heartbeat,
TOKENS (
PMPGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"),
PMPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP"))
);
Example of a complete Data Pump parameter file
./dirprm/pmp_hb.prm
-- Data Pump configuration file
-- last update
-- 11-3-08 SGEORGE
--
extract PMP_hb
-- Database login info
userid source, password ggs
-- Just in case we can't process a record we'll dump info here
discardfile ./dirrpt/PMP_hb.dsc, append
-- Remote host and remote manager port to write trail
rmthost localhost, mgrport 9000
-- This is the Trail to where we output
rmttrail ./dirdat/H1
-- Heartbeat
include dirprm/HB_pmp.inc
Table app_schema.*;
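The pump group likewise has to be registered against the local trail it reads and the remote trail it writes. An illustrative sketch, using the trail names from the files above:
GGSCI> ADD EXTRACT pmp_hb, EXTTRAILSOURCE ./dirdat/db
GGSCI> ADD RMTTRAIL ./dirdat/H1, EXTRACT pmp_hb
GGSCI> START EXTRACT pmp_hb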
Step 6 - Replicat Configuration
The replicat will need to have the heartbeat added to the map statements along with the token mapping. When the replicat inserts the row into the table a "before insert" trigger will fire and update the values in the GGS_HEARTBEAT table.
There are two heartbeat tables, the first is the heartbeat table that has the current heartbeat. It will have only one row for each replicat. The second table is the history table that contains all of the heartbeats. This table can be used to graph the lag time in each replicat end to end.
As with the extract and data pump, we add data to the record when the row is inserted into the target heartbeat table. Again, an include file is used for the MAP statements.
This is the include file for the Map statement:
./dirprm/HB_Rep.inc
-- Heartbeat table
MAP ~ggsuser~.GGS_HEARTBEAT, TARGET ~ggsuser~.GGS_HEARTBEAT,
KEYCOLS (DELGROUP),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
CAPGROUP = @TOKEN ("CAPGROUP"),
CAPTIME = @TOKEN ("CAPTIME"),
PMPGROUP = @TOKEN ("PMPGROUP"),
PMPTIME = @TOKEN ("PMPTIME"),
DELGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
DELTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP"))
);
MAP ~ggsuser~.GGS_HEARTBEAT, TARGET ~ggsuser~.GGS_HEARTBEAT_HISTORY,
KEYCOLS (ID),
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
CAPGROUP = @TOKEN ("CAPGROUP"),
CAPTIME = @TOKEN ("CAPTIME"),
PMPGROUP = @TOKEN ("PMPGROUP"),
PMPTIME = @TOKEN ("PMPTIME"),
DELGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
DELTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP"))
);
This is an example of a complete Replicat parameter file:
./dirprm/rep_hb.prm
-- Updates
-- 3-17-10 - SGEORGE - Added reporting and heartbeat table and some comments.
-- 3-18-10 - SGEORGE - Updated the heartbeat table
--
replicat rep_hb
-- Use ASSUMETARGETDEFS when the source and target tables specified with a MAP
-- statement have the same column structure, such as when synchronizing a hot
-- site. DO NOT USE if you use a COLMAP statement; use a SOURCEDEFS file instead.
assumetargetdefs
--setting oracle_environment variable
--useful in multi oracle home situations
setenv (ORACLE_SID="ora11g")
-- userid and password; the password can be encrypted using the GGSCI ENCRYPT PASSWORD command
userid target,password ggs
-- change to handle collisions during initial load
-- nohandlecollisions
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started
-- REPORTCOUNT [EVERY] {RECORDS | SECONDS | MINUTES | HOURS} [, RATE]
REPORTCOUNT EVERY 5 MINUTES, RATE
include ./dirprm/HB_Rep.inc
map app_source.*, target app_target.*;
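The replicat group is registered against the remote trail written by the pump. An illustrative sketch (this assumes a checkpoint table has already been configured, for example through the GLOBALS file):
GGSCI> ADD REPLICAT rep_hb, EXTTRAIL ./dirdat/H1
GGSCI> START REPLICAT rep_hb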
Step 7 - Helpful SQL*Plus queries on the target system
-----------------------------------------------------------------
--
-- Heartbeat SQL query for Replicat
--
-----------------------------------------------------------------
./dirsql/del_lag.sql
set pagesize 200
col "Total Lag" format a30
col "Extract Lag" format a30
col "Pump Lag" format a30
select DELGROUP,
(CAPTIME - SOURCE_COMMIT ) "Extract Lag",
(PMPTIME - CAPTIME ) "Pump Lag",
(TARGET_COMMIT - SOURCE_COMMIT ) "Total Lag"
from target.ggs_heartbeat_history order by id;
./dirsql/HB_table.sql
set pagesize 200
col Lag format a30
col SOURCE_COMMIT format a30
col TARGET_COMMIT format a30
col CAPTIME format a30
col PMPTIME format a30
col DELTIME format a30
col START_TIME format a30
col RECOVERY_TIMESTAMP format a30
col UPDATE_TIMESTAMP format a30
select * from target.ggs_heartbeat;
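Beyond measuring lag, the current-status table can flag a stalled feed, as described in the overview: if the last heartbeat is older than a few multiples of the update interval, data has stopped flowing. A sketch, with an illustrative five-minute threshold:
-- Flag replicats whose heartbeat has not been refreshed recently
select delgroup, target_commit,
       systimestamp - target_commit as staleness
from target.ggs_heartbeat
where target_commit < systimestamp - numtodsinterval(5, 'MINUTE');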
Conclusion
In order to calculate the true lag you will need to add the heartbeat table to both the extracts and the replicats. By using the tokens added to the trail and the commit time on the target, you can tell the true lag between systems even with low data flow. Using this method you can also tell on the target whether the data flow from the source has been interrupted, because you can check the last update time and compare it to the current time. It is critical that the clocks on the source and target systems are in sync. Note that OGG corrects the commit timestamp for differences between the source and target systems.
Appendix - Data Guard Scripts
The following scripts are used to position the extract on a standby database after a database failover. The idea is that if you fail over to the standby database you will need to reposition the extract on the standby system to start where the primary system stopped. If the primary system is still available, you would use the checkpoint information in the checkpoint file to position the extract on the standby system. You can get the current checkpoint information by issuing the GGSCI command:
GGSCI> INFO EXTRACT <group>, SHOWCH
However, if the source system is not available, you can use the heartbeat table to determine the recovery position at which to start the extract. Note that since the heartbeat runs on an interval, you will likely have some overlap in the data when you reposition the extract; in that case you may have to use the HANDLECOLLISIONS parameter on the target database.
More details on how to use these scripts can be found in the Best Practice document GGS Best Practice - GoldenGate and Oracle Data Guard
Shell script alter_ext_begin.sh
You will need to alter the GGS_DIR parameter to reflect the GoldenGate home directory.
#! /bin/ksh
# last updated 4-28-09 SGEORGE
# You will need to update the GGS_DIR parameter to reflect the current
# GoldenGate home directory.
# You only want to run this script ONCE. It will reset the extract
# start position to the recovery point of the old primary system.
#
GGS_DIR=/export/home/ggudb/GGS_10/
GGS_SCR_DIR=$GGS_DIR/dirsql
cd $GGS_SCR_DIR
sqlplus ggudb/ggudb @alter_ext_begin.sql
cd $GGS_DIR
./ggsci < $GGS_SCR_DIR/alter.obey
exit
Sql script alter_ext_begin.sql
This script will need to be placed in the "dirsql" subdirectory in the GoldenGate home directory. You will also need to edit the extract name to match your extract names.
-- this script is used to set the extract to a starting position after
-- a switchover or failover
--
-- Source extract EXT_EB maps to Standby extract EXT_EE
-- Make sure the spooled file is a clean obey script (no headings or feedback)
set heading off feedback off pagesize 0 trimspool on
spool alter.obey
select 'alter extract ext_ee, extseqno ' || RECOVERY_SEQNO ||', extrba '|| RECOVERY_RBA
from ggs_heartbeat
where extract_name ='EXT_EB';
-- Source extract EXT_XB maps to Standby extract EXT_XP
select 'alter extract ext_xp, extseqno ' || RECOVERY_SEQNO ||', extrba '|| RECOVERY_RBA
from ggs_heartbeat
where extract_name ='EXT_XB';
spool off
Appendix II - Troubleshooting
Problem: Two rows in target heartbeat table
Solution: Check your TABLE statements in the data pump and make sure the heartbeat table is not captured twice, once by a wildcard and once by the explicit statement.
Problem: Lag time is in negative numbers
Solution: The system clocks on the source and target are out of sync. Note that by default the commit timestamp is corrected for time differences between source and target, so if the systems are out of sync you will notice that the intermediate lag times are wrong but the overall lag is correct.