The database currently has two tables: a group table, cluster, and a group-user table, clusteruser (there is also a user table, which we don't need yet). Their structures are as follows:
cluster table: has one foreign key pointing to the user id
mysql> desc cluster;
+--------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| clusterId | varchar(15) | NO | PRI | NULL | |
| clusterName | varchar(15) | YES | | NULL | |
| owner | varchar(15) | YES | MUL | NULL | |
| info | varchar(15) | YES | | NULL | |
| num | int(2) | YES | | NULL | |
| registerTime | datetime | YES | | NULL | |
+--------------+-------------+------+-----+---------+-------+
6 rows in set (0.00 sec)
clusteruser table: has two foreign keys, one pointing to the user id and one pointing to clusterId
mysql> desc clusteruser;
+-----------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+----------------+
| id | int(10) | NO | PRI | NULL | auto_increment |
| clusterId | varchar(15) | YES | MUL | NULL | |
| uaccount | varchar(15) | YES | MUL | NULL | |
| role | int(2) | YES | | NULL | |
| cnickname | varchar(15) | YES | | NULL | |
+-----------+-------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)
When a user creates a group, a record must be inserted into both the cluster table and the clusteruser table, and the two inserts must either both succeed or both fail; in other words, we need transaction control. The business layer looks like this:
@org.junit.Test
public void testSql1() {
    String sql = "insert into cluster values('6666','ceshi','8888','batch',0,now());insert into clusteruser values(null,'6666','1516',1,null);";
    long start = System.currentTimeMillis();
    // cast avoids the varargs-vs-null ambiguity warning
    if (DBUtil.execuBatchUpdate(sql, (Object[]) null)) {
        System.out.println("success");
    } else {
        System.out.println("failed");
    }
    long end = System.currentTimeMillis();
    System.out.println("Approach 1 took " + (end - start) + " ms");
}
The business layer only needs to call the wrapped database utility class. In every variant a transaction must be opened; there are several ways to implement it:
Approach 1: split the SQL first, then prepare each fragment, add it to a batch, and execute.
public static boolean execuBatchUpdate(String sql, Object... args) {
    boolean flag = false;
    try {
        con = getConn();
        if (con == null) return false;
        con.setAutoCommit(false); // open the transaction
        String[] sqls = sql.split(";");
        for (int k = 0; k < sqls.length; k++) {
            pstm = con.prepareStatement(sqls[k]);
            if (args != null) { // note: the same args are bound to every fragment
                for (int i = 0; i < args.length; i++) {
                    pstm.setObject(i + 1, args[i]);
                }
            }
            pstm.addBatch();
            pstm.executeBatch(); // a one-element batch per fragment
            pstm.close(); // avoid leaking one statement per iteration
        }
        flag = true;
        con.commit();
        con.setAutoCommit(true);
    } catch (SQLException e) {
        e.printStackTrace();
        try {
            con.rollback(); // any failed fragment undoes all of them
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        close(con, pstm, null);
    }
    return flag;
}
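All three approaches split the script with sql.split(";"), which breaks as soon as a string literal contains a semicolon (e.g. info = 'a;b'). A minimal splitter that skips semicolons inside single-quoted literals could look like the sketch below; it is not a full SQL parser (escaped quotes and comments are not handled):

```java
import java.util.ArrayList;
import java.util.List;

public class SqlSplitter {
    // Splits a script on ';' while ignoring semicolons inside single-quoted literals.
    public static List<String> split(String script) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuote = false;
        for (char c : script.toCharArray()) {
            if (c == '\'') inQuote = !inQuote;     // toggle on every quote
            if (c == ';' && !inQuote) {            // statement boundary
                if (!cur.toString().trim().isEmpty()) out.add(cur.toString().trim());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        if (!cur.toString().trim().isEmpty()) out.add(cur.toString().trim());
        return out;
    }
}
```

With this, split("insert into t values('a;b');delete from t") yields two statements instead of three broken fragments.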
Approach 2: prepare the parameters first, then split the SQL into individual statements, add each one to the batch, and execute.
public static boolean execuTransActionUpdate(String sql, Object... args) {
    boolean flag = false;
    try {
        con = getConn();
        if (con == null) return false;
        con.setAutoCommit(false);
        // preparing the whole multi-statement string requires allowMultiQueries=true on MySQL
        pstm = con.prepareStatement(sql);
        if (args != null) {
            for (int i = 0; i < args.length; i++) {
                pstm.setObject(i + 1, args[i]);
            }
        }
        String[] singles = sql.split(";");
        for (int j = 0; j < singles.length; j++) {
            // addBatch(String) is inherited from Statement; the JDBC spec forbids it
            // on a PreparedStatement, though MySQL Connector/J tolerates it
            pstm.addBatch(singles[j]);
        }
        pstm.executeBatch();
        flag = true;
        con.commit();
        con.setAutoCommit(true);
    } catch (SQLException e) {
        e.printStackTrace();
        try {
            con.rollback();
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        close(con, pstm, null);
    }
    return flag;
}
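Since addBatch(String) on a PreparedStatement is driver-dependent, a more portable variant of approach 2 uses a plain Statement for the string batch. The sketch below assumes the caller already holds a Connection (e.g. from the same getConn()); statement filtering is pulled into a helper:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class PortableBatch {
    // Splits a ';'-separated script and drops empty fragments.
    static List<String> statements(String script) {
        List<String> out = new ArrayList<>();
        for (String s : script.split(";")) {
            if (!s.trim().isEmpty()) out.add(s.trim());
        }
        return out;
    }

    // Runs every statement of the script as one string batch inside one transaction.
    public static boolean executeScript(Connection con, String script) {
        try (Statement stm = con.createStatement()) {
            con.setAutoCommit(false);
            for (String single : statements(script)) {
                stm.addBatch(single); // addBatch(String) is legal on a plain Statement
            }
            stm.executeBatch();
            con.commit();
            return true;
        } catch (SQLException e) {
            try { con.rollback(); } catch (SQLException ignored) { }
            return false;
        }
    }
}
```

This keeps the all-or-nothing semantics but no longer depends on Connector/J's lenient PreparedStatement behavior.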
Approach 3: don't use pstm.executeBatch() at all; use pstm.executeUpdate() instead. Just open a transaction, split the incoming SQL, and if an exception is caught,
then at least one statement failed, so simply roll back:
public static boolean executeLowBatchUpdate(String sql, Object... args) {
    boolean flag = false;
    try {
        con = getConn();
        if (con == null) return false;
        con.setAutoCommit(false);
        pstm = con.prepareStatement(sql);
        if (args != null) { // these bindings are never used by executeUpdate(String) below
            for (int i = 0; i < args.length; i++) {
                pstm.setObject(i + 1, args[i]);
            }
        }
        String[] singles = sql.split(";");
        for (int j = 0; j < singles.length; j++) {
            // executeUpdate(String) is inherited from Statement; the JDBC spec forbids
            // calling it on a PreparedStatement, so some drivers will throw here
            pstm.executeUpdate(singles[j]);
        }
        flag = true;
        con.commit();
        con.setAutoCommit(true);
    } catch (SQLException e) {
        e.printStackTrace();
        try {
            con.rollback(); // at least one statement failed: undo everything
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        close(con, pstm, null);
    }
    return flag;
}
So which of the three should we use? Here we only have two SQL statements, and the parameters are written into them in advance, as if none were passed at all, so no difference shows up. With a large data volume, however, executeUpdate() is not recommended; use executeBatch() instead. For example, to insert the rows of a table with one million users into another table, we obviously cannot write a million SQL statements up front and then split the string; that is clearly impractical.
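For that large-copy scenario, the usual pattern is a single parameterized INSERT executed via addBatch() and flushed every N rows, inside one transaction. The sketch below makes assumptions beyond the source: the destination table user_archive and the user table's column names (uaccount, unickname) are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BulkCopy {
    static final int BATCH_SIZE = 1000;

    // True when the batch should be flushed after the rowsAdded-th row (1-based).
    static boolean shouldFlush(int rowsAdded) {
        return rowsAdded % BATCH_SIZE == 0;
    }

    public static void copyUsers(Connection con) throws SQLException {
        con.setAutoCommit(false);
        try (Statement src = con.createStatement();
             ResultSet rs = src.executeQuery("select uaccount, unickname from user");
             PreparedStatement dst = con.prepareStatement(
                     "insert into user_archive(uaccount, unickname) values(?, ?)")) {
            int rows = 0;
            while (rs.next()) {
                dst.setString(1, rs.getString(1));
                dst.setString(2, rs.getString(2));
                dst.addBatch();
                if (shouldFlush(++rows)) {
                    dst.executeBatch(); // one round trip per BATCH_SIZE rows
                }
            }
            dst.executeBatch(); // flush the remainder
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}
```

One prepared statement reused a million times avoids both the giant string and the split; with MySQL Connector/J, adding rewriteBatchedStatements=true to the JDBC URL additionally rewrites each batch into multi-row INSERTs.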