oracle streams note

streams capture process:
capture process rules
local capture and downstream capture
streams capture processes and oracle rac
a capture process can run on the source database (local capture process) or on a remote database (downstream capture process).


logical change records (lcrs): an lcr is a message with a specific format that describes a database change.

for improved performance, captured messages are always stored in a buffered queue.

row lcrs: describe a change to the data in a single row (the result of a dml statement).


dbms_capture_adm package: the include_extra_attribute procedure instructs a capture process to capture one or more extra attributes.
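a minimal sketch of capturing an extra attribute; the capture process name strm01_capture is an assumed example:

```sql
-- instruct an existing capture process (name assumed here) to also
-- capture the serial# attribute in each lcr it creates
begin
  dbms_capture_adm.include_extra_attribute(
    capture_name   => 'strm01_capture',  -- hypothetical capture process
    attribute_name => 'serial#',
    include        => true);
end;
/
```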


capture process rules: a capture process captures or discards changes based on rules that you define.
                       


capture process: does not capture the results of dml changes to columns of the following datatypes:
                bfile, rowid, and user-defined types, nor to columns that have been encrypted using
                transparent data encryption.


types of dml changes captured: insert, update, delete, merge, piecewise updates to lobs

ddl changes and capture processes: a capture process captures ddl changes, except for the following types of ddl changes:
                                     alter database, create controlfile, create database, create pfile, create spfile, flashback database


a capture process can capture ddl statements, but not the results of ddl statements, unless the ddl statement is
         a create table as select statement.

other types of changes ignored by a capture process:
   alter session and set role control statements
   alter system control statements
   invocation of pl/sql procedures
   online redefinition performed with the dbms_redefinition package

changes made by nologging or unrecoverable operations cannot be captured by a capture process.


archived-log downstream capture: you can copy the archived redo log files to the downstream database using
                                 redo transport services, the dbms_file_transfer package, ftp, or another mechanism.


the start scn must be greater than or equal to the first scn.



dba_queue_tables --- contains information about the owner instance for a queue table.

cross instance archival (cia): in rac, each instance archives its redo log files to all other instances.

capture process: an optional oracle background process. its name is cnnn, where nnn is a capture process number (from c001 to c999).

a capture process can be associated with only one capture user, but one user can be associated with many capture processes.

capture process components:
  reader server that reads the redo log and divides it into regions.
  preparer servers that scan the regions defined by the reader server in parallel and
     perform prefiltering of changes found in the redo log.
  builder server that merges redo records from the preparer servers. it preserves the scn order
    of these redo records and passes the merged redo records to the capture process.
  
the capture process (cnnn) performs the following actions when it receives merged redo records from
    the builder server:
  formats the change into an lcr.
  if prefiltering was inconclusive, sends the lcr to the rules engine for full evaluation.
  receives the results of the full evaluation of the lcr, if it was performed.
  enqueues the lcr into the queue associated with the capture process if the lcr satisfies the rules
    in the positive rule set for the capture process, or discards the lcr otherwise.


capture process states:
  the state describes what the capture process is doing currently. query the state column
  in the v$streams_capture view.
  all possible values:
    initializing --- starting up.
    waiting for dictionary redo --- waiting for redo log files containing the dictionary
      build related to the first scn to be added to the capture process session.
    dictionary initialization --- processing a dictionary build.
    mining (processed scn = scn-value) --- mining a dictionary build at the scn scn-value.
    loading (step x of y) --- where x and y are numbers, processing information from a dictionary build
      and currently at step x in a process that involves y steps.
    capturing changes --- scanning the redo log for changes that evaluate to true against the capture process
      rule sets.
    waiting for redo --- waiting for new redo log files to be added to the capture process session.
    evaluating rule --- evaluating a change against a capture process rule set.
    creating lcr --- converting a change into an lcr.
    enqueuing message --- enqueuing an lcr that satisfies the capture process rule sets into the capture
      process queue.
    paused for flow control --- unable to enqueue lcrs either because of low memory or
      because propagations and apply processes are consuming messages slower than the capture
      process is creating them. this state indicates flow control, which is used to reduce
      spilling of captured messages when apply has fallen behind or is unavailable.
    shutting down --- stopping.
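the current state can be checked with a simple query (view and column names per the oracle reference):

```sql
-- show what each capture process is currently doing
select capture_name,
       state,
       total_messages_captured
  from v$streams_capture;
```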





multiple capture processes in a single database
  if you run multiple capture processes in a single database,consider increasing the size of the sga
  for each instance.




capture process checkpoints:
  required checkpoint scn: corresponds to the lowest checkpoint for which a capture process requires
                          redo data. determine the required checkpoint scn for a capture process by
                          querying the required_checkpoint_scn column in the dba_capture view.

  maximum checkpoint scn: corresponds to the last checkpoint recorded by a capture process.
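both checkpoint scns, along with the first scn and start scn mentioned earlier, can be read from dba_capture:

```sql
-- first_scn <= start_scn must hold; required_checkpoint_scn is the
-- lowest checkpoint for which the capture process still needs redo data
select capture_name,
       first_scn,
       start_scn,
       required_checkpoint_scn,
       max_checkpoint_scn
  from dba_capture;
```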
  

checkpoint retention time:
   the amount of time that a capture process retains checkpoints before purging them automatically.
   dba_registered_archived_log displays the first_time and next_time for archived redo log files.



capture process creation: you can create a capture process using the dbms_streams_adm package or
        the dbms_capture_adm package. dbms_streams_adm is simpler because it supplies defaults for you;
        the dbms_capture_adm package is more flexible.
        to create a capture process at a downstream database, you must use the dbms_capture_adm package.
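a minimal local-capture sketch using dbms_streams_adm; the capture name, queue, and table below are assumed examples:

```sql
-- creates the capture process implicitly (the simple path, defaults filled in)
-- and adds positive rules to capture dml changes to hr.employees
begin
  dbms_streams_adm.add_table_rules(
    table_name   => 'hr.employees',               -- example table
    streams_type => 'capture',
    streams_name => 'strm01_capture',             -- hypothetical capture process
    queue_name   => 'strmadmin.streams_queue',    -- hypothetical anydata queue
    include_dml  => true,
    include_ddl  => false);
end;
/
```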
        





note: after creating a capture process, avoid changing the dbid or global name of the source database for
    the capture process. if you do, the capture process must be dropped and re-created.


the logminer data dictionary for a capture process:  

   

logminer data dictionary: a capture process requires a data dictionary that is
     separate from the primary data dictionary for the source database.
     it corresponds to the capture process.
   the dbms_capture_adm.build procedure extracts data dictionary information to
     the redo log.
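a sketch of running a dictionary build and recording the returned scn:

```sql
-- extract the data dictionary to the redo log; the out parameter returns
-- the scn that can be used as the first_scn for a new capture process
set serveroutput on
declare
  l_scn number;
begin
  dbms_capture_adm.build(first_scn => l_scn);
  dbms_output.put_line('first_scn: ' || l_scn);
end;
/
```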


if you specify null for the first_scn parameter, the new capture process attempts to share a logminer data
dictionary with one or more existing capture processes that capture changes from the same source database.
null is the default for the first_scn parameter.
if you specify a non-null value, the new capture process uses a new logminer data dictionary that is created when
  the new capture process is started for the first time.


when you reset the first scn value for an existing capture process, oracle automatically purges
logminer data dictionary information prior to the new first scn setting.


the streams data dictionary: propagations and apply processes use it to keep track of the
  database objects from a particular source database.

a database can contain multiple streams data dictionaries if it propagates or applies changes from
multiple source databases, but it contains only one streams data dictionary for
a particular source database.



query the dba_logmnr_purged_log data dictionary view to determine which archived redo log
files will never be needed by any capture process.


each reader server,preparer server,and builder server is a parallel execution server.

resetting the parallelism parameter automatically stops and restarts the capture process.

the time_limit capture process parameter specifies the amount of time a capture process runs,
and the message_limit capture process parameter specifies the number of messages a capture process can capture.
the capture process stops automatically when it reaches either of these limits.

when a capture process is restarted,it starts to capture changes at the point where it last stopped.
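these parameters are set with dbms_capture_adm.set_parameter; the capture name below is an assumed example:

```sql
-- run the (hypothetical) capture process for at most 4 hours (14400 seconds);
-- parameter values are passed as strings
begin
  dbms_capture_adm.set_parameter(
    capture_name => 'strm01_capture',
    parameter    => 'time_limit',
    value        => '14400');
end;
/
```

setting the parallelism parameter the same way automatically stops and restarts the capture process, as noted above.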


message propagation between queues

commit-time queues

streams staging


streams uses queues to stage messages.
a queue of anydata type can stage messages of almost any type and is called an anydata queue.
a typed queue can store messages of only one specific type.
streams clients always use anydata queues.


the queue from which the messages are propagated is called the source queue,and the
queue that receives the messages is called the destination queue.
there can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.


source queue --- propagate messages ---> destination queue


if the message does not contain an lcr, then the apply process can invoke a user-specified procedure called
a message handler to process it.


streams uses job queues to propagate messages.


you need to ensure that your environment is configured to avoid cycling a change in an endless loop. you
can use streams tags to avoid such a change-cycling loop. --- i don't understand this yet


if a propagation has both a positive and a negative rule set, then the negative rule set is always evaluated first.


a propagation can be queue-to-queue or queue-to-database link (queue-to-dblink).


a queue-to-queue propagation always has its own exclusive propagation job to propagate messages from
the source queue to the destination queue.
  because each propagation job has its own propagation schedule, the propagation schedule can be managed separately.


a single database link can be used by multiple queue-to-queue propagations.

in contrast, a queue-to-dblink propagation shares a propagation job with other queue-to-dblink propagations
from the same source queue that use the same database link.
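a sketch of creating a queue-to-queue propagation with dbms_streams_adm; the queue names, database link, and table are assumed examples:

```sql
-- queue_to_queue => true selects a queue-to-queue propagation,
-- which gets its own exclusive propagation job and schedule
begin
  dbms_streams_adm.add_table_propagation_rules(
    table_name             => 'hr.employees',             -- example table
    streams_name           => 'strm01_propagation',       -- hypothetical
    source_queue_name      => 'strmadmin.streams_queue',  -- hypothetical
    destination_queue_name => 'strmadmin.streams_queue@dest.example.com',  -- hypothetical dblink
    include_dml            => true,
    include_ddl            => false,
    source_database        => 'src.example.com',          -- hypothetical
    queue_to_queue         => true);
end;
/
```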

currently, a queue service is created when the database is a rac database and the queue is a buffered queue.

directed networks: a directed network is one in which propagated messages pass through one or more intermediate databases before
arriving at a destination database.


an intermediate database in a directed network can propagate messages using either queue forwarding
or apply forwarding. queue forwarding means that the messages being forwarded at an intermediate
database are the messages received by the intermediate database.
apply forwarding means that the messages being forwarded at an intermediate database are first
processed by an apply process. these messages are then recaptured by a capture process at the
  intermediate database and forwarded. when you use apply forwarding, the intermediate database becomes the
  new source database for the messages.


binary file propagation:


messaging client: dequeues user-enqueued messages when it is invoked by an application or a user.


a messaging client can be associated with only one user, but one user can be associated with
many messaging clients.


the dba_services data dictionary view contains the service name for a queue.
the network_name column in the dba_queues data dictionary view contains the network name for a queue.




commit-time queues: the sort_list parameter of the dbms_aqadm.create_queue_table procedure determines how user-enqueued messages are
  ordered. oracle 10g r2 introduced commit-time queues. each message in a commit-time queue is ordered by
  an approximate commit system change number (approximate cscn).
  commit-time ordering is specified for a queue table, and queues that use the queue table are called
  commit-time queues.
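a sketch of creating a commit-time queue table; the names below are assumed examples:

```sql
-- sort_list => 'commit_time' makes every queue in this queue table
-- a commit-time queue, ordered by approximate cscn
begin
  dbms_aqadm.create_queue_table(
    queue_table        => 'strmadmin.streams_queue_table',  -- hypothetical
    queue_payload_type => 'sys.anydata',
    sort_list          => 'commit_time',
    multiple_consumers => true,
    compatible         => '10.2');
end;
/
```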



streams pool: a portion of memory in the sga that is used by streams. it stores buffered queue messages in memory
and provides memory for capture processes and apply processes.

with automatic shared memory management, if streams_pool_size is also set to a nonzero value, then oracle uses this value
as a minimum for the streams pool.
if sga_target=0, an appropriate streams_pool_size can be determined from v$streams_pool_advice.

if sga_target=0 and streams_pool_size=0 (the default), the first use of streams in a database transfers an
amount of memory equal to 10% of the shared pool from the buffer cache to the streams pool.
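the advice view can be queried like this (column names per the oracle reference):

```sql
-- estimated spill activity for a range of candidate streams pool sizes
select streams_pool_size_for_estimate,
       estd_spill_count,
       estd_spill_time
  from v$streams_pool_advice
 order by streams_pool_size_for_estimate;
```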


messages that spill from memory are stored in the appropriate aq$_&lt;queue_table_name&gt;_p table.
for each spilled message, information about any propagations and apply processes that are eligible for
  processing the message is stored in the aq$_&lt;queue_table_name&gt;_d table.



a propagation schedule specifies how often a propagation job propagates messages from a source queue to
a destination queue.


secure queues: queues for which aq agents must be associated explicitly with one or more database users who
can perform queue operations. other users cannot perform queue operations on a secure queue unless
  they are configured as secure queue users.


you might also want to drop the agent if it is no longer needed. you can view the aq agents and their
associated users by querying the dba_aq_agent_privs view.
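a sketch of associating an aq agent with a database user; the agent and user names are assumed examples:

```sql
-- create an agent and let the database user hr act as that agent,
-- which makes hr a secure queue user for queues the agent can access
begin
  dbms_aqadm.create_aq_agent(agent_name => 'streams_agent');  -- hypothetical
  dbms_aqadm.enable_db_access(
    agent_name  => 'streams_agent',
    db_username => 'hr');
end;
/

-- verify the association
select agent_name, db_username from dba_aq_agent_privs;
```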











a transactional queue is a queue in which user-enqueued messages can be grouped into a set that
is applied as one transaction. that is, an apply process performs a commit after it applies all
the user-enqueued messages in a group.


a nontransactional queue is one in which each user-enqueued message is its own transaction. that is,
an apply process performs a commit after each user-enqueued message it applies.

message types: captured messages, user-enqueued messages


streams apply process:

these messages can be logical change records (lcrs) or user messages.
an apply process either applies each message directly or passes it as a parameter to
an apply handler. an apply handler is a user-defined procedure used by
an apply process for customized processing of messages.




dml handlers,ddl handlers,and message handlers


apply process components: reader server, coordinator process, apply server

you can view the state of the reader server for an apply process by querying the v$streams_apply_reader view.
  states: initializing, idle, dequeue messages, schedule messages, spilling (spilling unapplied messages from memory to hard disk), paused
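the reader server state can be checked like this:

```sql
-- state of the reader server for each apply process
select apply_name,
       state,
       total_messages_dequeued
  from v$streams_apply_reader;
```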



rule: a database object that enables a client to perform an action when an event
occurs and a condition is satisfied. a rule consists of the following components:
rule condition
rule evaluation context(optional)
rule action context (optional)

[ last edited by wisdomone1 on 2008-5-8 21:21 ]

source: ITPUB blog, http://blog.itpub.net/9240380/viewspace-350557/ (please credit the source when reprinting).

