Tom is just Tom

You Asked

Good Morning Tom.

I need your expertise in this regard. I have a table which contains millions of records. 
I want to update and commit every so many records (say every 10,000 records). I 
don't want to do it in one stroke, as I may end up with rollback segment issue(s). Any 
suggestions, please!

Murali

and we said...

If I had to update millions of records I would probably opt to NOT update.

I would more likely do:

CREATE TABLE new_table as select <do the update "here"> from old_table;

index new_table
grant on new_table
add constraints on new_table
etc on new_table

drop table old_table
rename new_table to old_table;


You can do that using parallel query, with NOLOGGING on most operations, generating very 
little redo and no undo at all -- in a fraction of the time it would take to update the 
data.
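As a concrete sketch of that recipe (old_table, its columns, and the UPPER() transformation are all made up for illustration -- the "update" is whatever expression you put in the select list):

create table new_table
  parallel 8 nologging
as
select pk_col,
       upper(status_col) as status_col,   -- the "update" happens here, not afterwards
       created_dt
  from old_table;

create index new_table_idx on new_table(pk_col) parallel 8 nologging;
-- re-issue grants and constraints against new_table, then:

drop table old_table;
rename new_table to old_table;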
 
 
Another reader's follow-up on the parallel load approach:
This is absolutely a viable approach, and one we have used repeatedly.  One of our apps updates a 
table of several hundred million records.  

The cursor FOR loop approach for the update was calculated to take 53.7 years to complete!
We instituted the insert into a dummy table with APPEND and NOLOGGING, and were able to complete the 
"update" in under 30 minutes.  

With NOLOGGING, if the system aborts, you simply re-run the 'update' again, as you have the 
original data in the main table.  When done, we swap the partition of original data with the 
'dummy' table (the one containing the new values), rebuild indexes in parallel, and voila!  Our 
update is complete.

i.e., to update field2 in a table:

1) First create your dummy hold table:

   create table xyz_hold as select * from xyz where rownum < 1;
   alter table xyz_hold nologging;

2) Insert the 'updated' rows with a direct-path, parallel insert:

   insert /*+ append parallel(xyzhold,12) */ into xyz_hold xyzhold
     (field1, field2, field3)
   select /*+ parallel(x,12) */
          x.field1, my_new_value_for_field2, x.field3
     from xyz x
    where blah blah blah;

3) When done, either rename the table, or swap the partition if your original table is partitioned 
and you only updated one partition, as we do. Obviously you need to rebuild indexes, etc., as 
required.

Hope this helps!
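For step 3, a minimal sketch of the swap itself (p_current and xyz_idx1 are hypothetical names; adjust to your own partitioning scheme):

-- partitioned case: swap the modified partition with the hold table
alter table xyz exchange partition p_current with table xyz_hold
  excluding indexes without validation;

-- local index partitions touched by the exchange are left unusable; rebuild them
alter index xyz_idx1 rebuild partition p_current parallel 12;

-- unpartitioned case: rename instead
-- drop table xyz;
-- rename xyz_hold to xyz;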
 
 
We have a similar situation. We delete around 3 million records from a 30-million-row table 
every day. 
There is no logical column to partition on. 
I guess the insert into a new table will take considerable time with 27 million records. Please let me 
know what is the best approach.


Followup:

wait 10 days so that you are deleting 30 million records from a 60 million record table, and then 
this will be much more efficient.

Time it some day, though.  Deleting 3 million records from an indexed table will take considerable 
time.  There is a chance that an INSERT /*+ append */ select of the rows you want to keep, done 
NOLOGGING and in parallel, with the indexes then created in parallel, could beat the delete.
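A minimal sketch of that alternative (big_table, the keep-predicate, the id column, and the degree of 8 are all hypothetical):

create table keep_rows as select * from big_table where 1=0;
alter table keep_rows nologging;

insert /*+ append */ into keep_rows
select /*+ parallel(b,8) */ *
  from big_table b
 where b.created_dt >= sysdate - 10;   -- hypothetical "rows to keep" predicate
commit;

create index keep_rows_idx on keep_rows(id) parallel 8 nologging;
drop table big_table;
rename keep_rows to big_table;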
 
  
 
  
In response to the review by Jack Silvey (from Richardson, TX), where he wrote: "It is the only way to 
fly. We also have an absolutely incredible stored procedure that rebuilds all of our indexes 
concurrently after the load, using the Oracle job scheduler as the mechanism of allowing separate 
threads in pl/sql":

Could you provide more information about that procedure and how to rebuild multiple same-table 
indexes concurrently using Oracle job scheduler?

Thanks,
Boris.
  


Followup:

instead of 

begin
   execute immediate 'alter index idx1 rebuild';
   execute immediate 'alter index idx2 rebuild';
end;

you can code:

declare
  l_job number;
begin
  -- each call queues an independent job; the rebuilds run concurrently
  -- in separate job-queue processes
  dbms_job.submit( l_job, 'execute immediate ''alter index idx1 rebuild'';' );
  commit;  -- the commit makes the job visible to the job-queue processes
  dbms_job.submit( l_job, 'execute immediate ''alter index idx2 rebuild'';' );
  commit;
end;


Now, just set job_queue_processes > 0 to set the "degree of threads", and in 8i and before set 
job_queue_interval to, say, 60 or so -- and there you go.
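For instance, in the init.ora (values illustrative only):

job_queue_processes = 4    -- up to 4 concurrent job-queue processes
job_queue_interval  = 60   -- 8i and before: wake the queue every 60 seconds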
 
   
 
  

From the ITPUB blog: http://blog.itpub.net/412/viewspace-891657/ (please credit the source when reposting).

