The "classic" ways to speed up INSERTs of millions of rows
There are two tables with identical structure, table1 and table2.
I want to insert the rows of table1 into table2.
I currently use two methods:
1. Assign a dedicated rollback segment that is large enough:
set transaction use rollback segment RBS1;
-- Note: the direction was reversed in my original statement, and a bare
-- NOLOGGING keyword after the table name is parsed as a table alias, not
-- a logging directive; a direct-path insert via the APPEND hint is the
-- usual way to get the intended effect.
INSERT /*+ APPEND */ INTO table2
SELECT * FROM table1;
commit;
2. Use an explicit cursor and commit every 5,000 or 10,000 rows:
declare
    cursor cur_select is
        select t1, t2, t3..... from table1;  -- fixed typo: was "tabl1"
    v_id number := 0;   -- must be initialized, otherwise v_id + 1 stays NULL
    v_t table1%rowtype;
begin
    open cur_select;
    loop
        fetch cur_select into v_t.t1, v_t.t2, v_t.t3..... ;
        exit when cur_select%notfound;  -- test AFTER the fetch; testing
                                        -- before it never fires on entry
                                        -- and mishandles the last row
        insert into table2
            (t1, t2, t3......)
        values
            (v_t.t1, v_t.t2, v_t.t3..... );
        v_id := v_id + 1;
        if v_id = 10000 then
            commit;  -- commit every 10000 rows
            v_id := 0;
        end if;
    end loop;
    close cur_select;  -- was missing: release the cursor
    commit;
end;
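For what it's worth, the row-at-a-time loop above can usually be batched: here is a minimal sketch of the same commit-every-N idea restructured with BULK COLLECT and FORALL (it assumes, as stated above, that table1 and table2 have identical structure; the LIMIT value of 10000 is illustrative, matching the commit batch size):

```sql
declare
    -- Sketch only: relies on table1 and table2 having the same columns.
    cursor cur_select is
        select * from table1;
    type t_rows is table of table1%rowtype;
    v_rows t_rows;
begin
    open cur_select;
    loop
        -- Fetch up to 10000 rows in one round trip.
        fetch cur_select bulk collect into v_rows limit 10000;
        exit when v_rows.count = 0;
        -- Bulk-bind the whole batch into a single INSERT execution.
        forall i in 1 .. v_rows.count
            insert into table2 values v_rows(i);
        commit;  -- one commit per batch
    end loop;
    close cur_select;
end;
```

This cuts the context switches between the PL/SQL and SQL engines from one per row to one per batch, while keeping the same periodic-commit behavior.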
These two methods are all I have, and they feel pretty clumsy! Could any of the experts here suggest other approaches?