1. For the SQL DEL_ORACLE_DATA, I think we can replace DELETE with TRUNCATE, because TRUNCATE is much faster than DELETE (it deallocates the data blocks instead of removing rows one by one and generating undo). We can update it to:

public final static String DEL_ORACLE_DATA =
    "begin for cur in (select owner, table_name from all_tables where owner = ''{0}'') loop execute immediate ''truncate table ''||cur.owner||''.''||cur.table_name || '' drop storage '';end loop; end;";
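As a side note, DEL_ORACLE_DATA is a MessageFormat template: the doubled single quotes collapse to single quotes and {0} becomes the schema owner when it is formatted. A minimal sketch of the expansion (the MIGRATE owner name is just a placeholder of mine):

```java
import java.text.MessageFormat;

public class TruncateSql {
    // Same template as DEL_ORACLE_DATA above: '' is a MessageFormat
    // escape for a literal quote, {0} is the schema owner.
    static final String DEL_ORACLE_DATA =
        "begin for cur in (select owner, table_name from all_tables where owner = ''{0}'') "
      + "loop execute immediate ''truncate table ''||cur.owner||''.''||cur.table_name"
      + " || '' drop storage '';end loop; end;";

    public static void main(String[] args) {
        // Expands to a PL/SQL block that truncates every table in the schema.
        String sql = MessageFormat.format(DEL_ORACLE_DATA, "MIGRATE");
        System.out.println(sql);
    }
}
```

With a live connection the result would then be run via Statement.execute(sql). Note that TRUNCATE is DDL, so unlike DELETE it cannot be rolled back.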
2. For the exception:

at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:25)
at oracle.jdbc.driver.OraclePreparedStatement.setupBindBuffers(II)V (OraclePreparedStatement.java:2764)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch()[I (OraclePreparedStatement.java:9314)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch()[I (OracleStatementWrapper.java:211)
I took a thread dump when the tool threw the OutOfMemoryError and analyzed it with Eclipse Memory Analyzer. It showed that inputStream used 73.51% of the memory and lastBoundChars used 25.18% (see the picture below). inputStream is used to store the BLOB columns, and lastBoundChars is used to hold the bound characters. I found that the size of the inputStream array is 32768, which is a little strange, because my batch size is 20000.

Looking at the driver source code, I found that when the number of bind rows allocated is less than the number of rows needed, numberOfBindRowsAllocated is doubled and growBinds is called. So once the batch size grows past 16384, numberOfBindRowsAllocated jumps to 32768. That is why my batch size is 20000 but the inputStream array size is 32768. In this case, I think we can set the batch size to 16380 so that the allocated buffers are fully used.
if (i2 >= this.numberOfBindRowsAllocated) {
    // double the current allocation
    int i3 = this.numberOfBindRowsAllocated << 1;
    if (i3 <= i2) {
        i3 = i2 + 1;
    }
    growBinds(i3);
    this.currentBatchNeedToPrepareBinds = true;
}
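To see how this doubling ends up at 32768 for a batch of 20000, here is a simplified model of the growth rule above (the starting allocation of 1 is my assumption; only the doubling matters for the final size):

```java
public class BindGrowth {

    // Simplified model of the driver's growth rule: keep doubling the
    // allocation until it covers the number of rows we need to bind.
    static int allocatedFor(int rowsNeeded) {
        int allocated = 1; // assumed initial allocation; the real driver's differs
        while (allocated < rowsNeeded) {
            allocated <<= 1; // numberOfBindRowsAllocated << 1
        }
        return allocated;
    }

    public static void main(String[] args) {
        System.out.println(allocatedFor(20000)); // -> 32768
        System.out.println(allocatedFor(16384)); // -> 16384
    }
}
```

So any batch size between 16385 and 32768 pays for 32768 bind rows; keeping the batch at or below 16384 (16380 in my case) avoids allocating bind buffers that are never filled.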
3. I found that in our tool we neither disable logging nor drop the indexes. After disabling logging and dropping the indexes, the speed improved a lot.

public final static String DISABLE_ORACLE_LOG =
    "begin for cur in (select owner, table_name from all_tables where owner = ''{0}'') loop execute immediate ''ALTER table ''||cur.owner||''.''||cur.table_name || '' nologging '';end loop; end;";

public final static String DROP_ORACLE_INDEX =
    "begin for cur in (select owner, object_name from all_objects where object_type=''INDEX'' and owner = ''{0}'') loop execute immediate '' drop index ''||cur.owner||''.''||cur.object_name;end loop; end;";
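One thing to keep in mind (my addition, not part of the tool): nologging and the dropped indexes have to be restored after the load. A sketch of the inverse of DISABLE_ORACLE_LOG, built the same way; the MIGRATE owner is a placeholder:

```java
import java.text.MessageFormat;

public class EnableOracleLog {
    // Hypothetical inverse of DISABLE_ORACLE_LOG: switch every table in
    // the schema back to logging mode once the bulk load finishes.
    static final String ENABLE_ORACLE_LOG =
        "begin for cur in (select owner, table_name from all_tables where owner = ''{0}'') "
      + "loop execute immediate ''ALTER table ''||cur.owner||''.''||cur.table_name"
      + " || '' logging '';end loop; end;";

    public static void main(String[] args) {
        System.out.println(MessageFormat.format(ENABLE_ORACLE_LOG, "MIGRATE"));
    }
}
```

The dropped indexes also need to be recreated, so their DDL should be captured before running DROP_ORACLE_INDEX (for example with DBMS_METADATA.GET_DDL).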
Line 81: 2012-09-10 09:46:46,984 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] Dynamic SQL template in TABLE endpoint is: insert into endpoint (model,locale,hardware_id,brandid,trigger_port,type,num_triggers,version,codec6,id,noc_id,codec7,codec8,codec9,created,description,trigger_prefix,last_trigger_ts,os,nf_delivery_type,last_connection_date,vendor,msisdn,msisdn_validated,codec1,codec0,codec5,codec4,codec3,modified,device_uid,os_version,codec2,ip,extra_codecs_id,csc,provisioning_id,imsi,device_group,public_key,ekey) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
Line 125: 2012-09-10 09:46:51,815 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 16380 start commit time:Mon Sep 10 09:46:51 CST 2012
Line 128: 2012-09-10 09:46:55,406 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 16380 end commit time:Mon Sep 10 09:46:55 CST 2012
Line 143: 2012-09-10 09:47:03,123 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 32760 start commit time:Mon Sep 10 09:47:03 CST 2012
Line 146: 2012-09-10 09:47:04,306 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 32760 end commit time:Mon Sep 10 09:47:04 CST 2012
Line 159: 2012-09-10 09:47:12,439 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 49140 start commit time:Mon Sep 10 09:47:12 CST 2012
Line 160: 2012-09-10 09:47:12,730 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 49140 end commit time:Mon Sep 10 09:47:12 CST 2012
Line 175: 2012-09-10 09:47:22,810 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 65520 start commit time:Mon Sep 10 09:47:22 CST 2012
Line 176: 2012-09-10 09:47:23,250 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 65520 end commit time:Mon Sep 10 09:47:23 CST 2012
Line 191: 2012-09-10 09:47:33,913 [pool-4-thread-1] [INFO com.seven.migrate.job.RollbackJobForBlobTable] ========== Table Name : endpoint, Now counter = 81900 start commit time:Mon Sep 10 09:47:33 CST 2012
4. If the speed is still not good enough: for Oracle, we can also consider putting the tables in a dedicated tablespace and using Oracle parallel execution. For MySQL, we can consider tuning the MySQL configuration to improve its read speed.
2012-09-10 10:45