The batchUpdate operation of JdbcTemplate:
Spring JdbcTemplate's batch operation ultimately relies on the batch facilities that JDBC itself provides; Spring merely wraps the raw JDBC batch API in a more convenient form:
final List<ExpressFreightBillImportDetail> tempOrderList = records;
jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
    @Override
    public int getBatchSize() {
        return tempOrderList.size();
    }

    @Override
    public void setValues(PreparedStatement ps, int i) throws SQLException {
        ExpressFreightBillImportDetail detail = tempOrderList.get(i);
        ps.setString(1, StringUtil.nullConvertToEmptyString(detail.getOrderNo()));
        ps.setInt(2, detail.getGoodNum());
        ps.setString(3, StringUtil.nullConvertToEmptyString(detail.getCollectPerson()));
        ps.setString(4, StringUtil.nullConvertToEmptyString(detail.getDeliveryPerson()));
        ps.setBigDecimal(5, detail.getSumFee());
        ps.setBigDecimal(6, detail.getDeliveryFee());
        ps.setBigDecimal(7, detail.getPackageFee());
        ps.setBigDecimal(8, detail.getInsuredFee());
        ps.setBigDecimal(9, detail.getCod());
        ps.setLong(10, StringUtil.nullConvertToLong(detail.getDeliveryBranchId()));
        ps.setString(11, StringUtil.nullConvertToEmptyString(detail.getDeliveryBranch()));
        ps.setLong(12, detail.getBillId());
        ps.setString(13, StringUtil.nullConvertToEmptyString(detail.getBillNo()));
        ps.setInt(14, detail.getEffectFlag());
        ps.setString(15, StringUtil.nullConvertToEmptyString(detail.getDismatchReason()));
        ps.setLong(16, StringUtil.nullConvertToLong(detail.getImportPersonId()));
        ps.setString(17, StringUtil.nullConvertToEmptyString(detail.getImportPerson()));
        ps.setDate(18, new java.sql.Date(detail.getImportTime().getTime()));
    }
});
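The sql variable passed to batchUpdate is not shown in the snippet above. It would be an INSERT statement with 18 positional placeholders, one per setXxx call in setValues. A hypothetical version, with a table name and column names I made up purely for illustration, might look like this:

String sql = "INSERT INTO express_freight_bill_import_detail ("
        + "order_no, good_num, collect_person, delivery_person, sum_fee, delivery_fee, "
        + "package_fee, insured_fee, cod, delivery_branch_id, delivery_branch, bill_id, "
        + "bill_no, effect_flag, dismatch_reason, import_person_id, import_person, import_time"
        + ") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";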
Why is there no conn.setAutoCommit(false) call anywhere in Spring JdbcTemplate's batchUpdate?
This is because Spring has its own transaction management mechanism. If you have configured JDBC transaction management, DataSourceTransactionManager handles it for you: its doBegin method switches the connection to autocommit=false when the transaction starts, and the transaction manager commits (or rolls back) the whole batch once at the end.
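As a rough sketch (not from the original code): if you want the entire batch to commit as a single unit, you can run the batchUpdate inside a Spring-managed transaction, for example via TransactionTemplate backed by a DataSourceTransactionManager. The class name BatchImportService and its method are made up for illustration; only the Spring APIs are real.

import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

public class BatchImportService {
    private final JdbcTemplate jdbcTemplate;
    private final TransactionTemplate transactionTemplate;

    public BatchImportService(JdbcTemplate jdbcTemplate, PlatformTransactionManager txManager) {
        this.jdbcTemplate = jdbcTemplate;
        this.transactionTemplate = new TransactionTemplate(txManager);
    }

    public int[] importBatch(final String sql, final BatchPreparedStatementSetter setter) {
        // When the transaction begins, DataSourceTransactionManager.doBegin turns off autocommit
        // on the connection bound to this thread, so every statement in the batch shares one commit.
        return transactionTemplate.execute(new TransactionCallback<int[]>() {
            public int[] doInTransaction(TransactionStatus status) {
                return jdbcTemplate.batchUpdate(sql, setter);
            }
        });
    }
}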
Now let's take a look at the relevant Spring source code:
public int[] batchUpdate(String sql, final BatchPreparedStatementSetter pss) throws DataAccessException {
    if (logger.isDebugEnabled()) {
        logger.debug("Executing SQL batch update [" + sql + "]");
    }
    return execute(sql, new PreparedStatementCallback<int[]>() {
        public int[] doInPreparedStatement(PreparedStatement ps) throws SQLException {
            try {
                int batchSize = pss.getBatchSize();
                InterruptibleBatchPreparedStatementSetter ipss =
                        (pss instanceof InterruptibleBatchPreparedStatementSetter ?
                        (InterruptibleBatchPreparedStatementSetter) pss : null);
                if (JdbcUtils.supportsBatchUpdates(ps.getConnection())) {
                    for (int i = 0; i < batchSize; i++) {
                        pss.setValues(ps, i);
                        if (ipss != null && ipss.isBatchExhausted(i)) {
                            break;
                        }
                        ps.addBatch();
                    }
                    return ps.executeBatch();
                }
                else {
                    List<Integer> rowsAffected = new ArrayList<Integer>();
                    for (int i = 0; i < batchSize; i++) {
                        pss.setValues(ps, i);
                        if (ipss != null && ipss.isBatchExhausted(i)) {
                            break;
                        }
                        rowsAffected.add(ps.executeUpdate());
                    }
                    int[] rowsAffectedArray = new int[rowsAffected.size()];
                    for (int i = 0; i < rowsAffectedArray.length; i++) {
                        rowsAffectedArray[i] = rowsAffected.get(i);
                    }
                    return rowsAffectedArray;
                }
            }
            finally {
                if (pss instanceof ParameterDisposer) {
                    ((ParameterDisposer) pss).cleanupParameters();
                }
            }
        }
    });
}
Note that the code first checks whether the connection actually supports batch updates (JdbcUtils.supportsBatchUpdates); if it does, the statements are sent as a single batch, otherwise Spring falls back to executing the updates one row at a time.
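How that support check works is not shown above. Roughly speaking, JDBC exposes the capability through DatabaseMetaData; a minimal sketch of an equivalent check follows (Spring's JdbcUtils adds more guards, so treat this as an approximation, not the actual implementation):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

public final class BatchSupport {
    // Returns true if the driver reports that it can send statements in batches.
    public static boolean supportsBatchUpdates(Connection con) {
        try {
            DatabaseMetaData metaData = con.getMetaData();
            return metaData != null && metaData.supportsBatchUpdates();
        }
        catch (SQLException ex) {
            // If the driver cannot tell us, assume no batch support and fall back to single updates.
            return false;
        }
    }
}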
While writing a program to import Excel data into the database, I planned to use JDBC batch inserts because the data volume was large. So I called preparedStatement.addBatch(), and after accumulating 10,000 rows, executed preparedStatement.executeBatch(). I expected this to be fast, but inserting 65,536 rows took more than 30 minutes, which completely surprised me. Searching online for how others handle this kind of bulk import, I found that most of them also use JDBC batch inserts, with one difference: they call con.setAutoCommit(false) first, and after preparedStatement.executeBatch() they call con.commit(). Here is a note I excerpted from the web (a minimal JDBC sketch of this pattern follows the quoted note):
* When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled because that
requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with
SET autocommit and COMMIT statements:
SET autocommit=0;
... SQL import statements ...
COMMIT;
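To make the pattern concrete, here is a minimal plain-JDBC sketch of a bulk insert with autocommit disabled and a single commit at the end. The table name, column names, row type, and the 1,000-row flush interval are illustrative assumptions, not taken from the original import code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsertExample {
    public static void importRows(Connection con, List<String[]> rows) throws SQLException {
        String sql = "INSERT INTO import_detail (col_a, col_b) VALUES (?, ?)"; // hypothetical table
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);                       // avoid a log flush to disk per insert
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            int count = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++count % 1000 == 0) {              // flush in chunks to bound memory use
                    ps.executeBatch();
                }
            }
            ps.executeBatch();                          // flush the remaining rows
            con.commit();                               // single commit for the whole import
        } catch (SQLException ex) {
            con.rollback();
            throw ex;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}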