In the previous post we walked through MyBatis's query flow, but we never covered how the parameters referenced in the XML are actually resolved. This post fills that gap and then looks at MyBatis's first-level and second-level caches. Let's start with the parameter-name resolution code:
public class ParamNameResolver {

  // fields used below (their declarations were elided in the original excerpt)
  private final SortedMap<Integer, String> names;
  private boolean hasParamAnnotation;

  public ParamNameResolver(Configuration config, Method method) {
    final Class<?>[] paramTypes = method.getParameterTypes();
    // all parameter annotations, one row per parameter
    final Annotation[][] paramAnnotations = method.getParameterAnnotations();
    final SortedMap<Integer, String> map = new TreeMap<>();
    int paramCount = paramAnnotations.length;
    // get names from @Param annotations
    // walk the two-dimensional annotation array
    for (int paramIndex = 0; paramIndex < paramCount; paramIndex++) {
      if (isSpecialParameter(paramTypes[paramIndex])) {
        // skip special parameters (RowBounds, ResultHandler)
        continue;
      }
      String name = null;
      // look for an @Param annotation on this parameter
      for (Annotation annotation : paramAnnotations[paramIndex]) {
        if (annotation instanceof Param) {
          hasParamAnnotation = true;
          // take the name from the annotation's value
          name = ((Param) annotation).value();
          break;
        }
      }
      if (name == null) {
        // @Param was not specified.
        // (Spring MVC gets real names by parsing bytecode instead of using the JDK API;
        // plain reflection only returns real names when the code is compiled with
        // -parameters on JDK 8+, otherwise getName() yields arg0, arg1, ...)
        if (config.isUseActualParamName()) {
          // fall back to the compiled parameter name, i.e. arg0, arg1 by default
          name = getActualParamName(method, paramIndex);
        }
        if (name == null) {
          // use the parameter index as the name ("0", "1", ...)
          // gcode issue #71
          name = String.valueOf(map.size());
        }
      }
      map.put(paramIndex, name);
    }
    names = Collections.unmodifiableSortedMap(map);
  }
}
Those are the rules for generating parameter names. Because of how the JDK exposes them, a parameter without an @Param annotation ends up named arg0, arg1 and so on; on JDK 8 and later, though, compiling with an extra flag makes the real parameter names available. A quick test class shows the difference:
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class Test {

  public void test(String name, String age) {}

  public static void main(String[] args) throws Exception {
    Method test = Test.class.getMethod("test", String.class, String.class);
    for (Parameter parameter : test.getParameters()) {
      System.out.println(parameter.getName());
    }
  }
}
First compile and run it without any extra flags:
javac Test.java
java Test
The output this time is:
arg0
arg1
So without the flag we only get the synthetic names arg0 and arg1. If we compile and run with the following commands instead, the result is different:
javac -parameters Test.java
java Test
This time the output is:
name
age
Now we get the real parameter names. So as long as the mapper interfaces are compiled with this flag (and useActualParamName keeps its default value of true), MyBatis can use the real parameter names directly and the @Param annotation becomes unnecessary.
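As a quick illustration, here is a hypothetical mapper (the interface, methods and placeholders are mine, not from the original post) showing what the resolved names let you write in the XML, assuming the interface itself is compiled with -parameters and useActualParamName keeps its default of true:
import java.util.List;
import java.util.Map;
import org.apache.ibatis.annotations.Param;

public interface BookMapper {
    // with @Param the placeholders are #{name} and #{limit}, whatever the compiler flags
    List<Map<String, Object>> selectByName(@Param("name") String name, @Param("limit") int limit);

    // without @Param: compiled with -parameters these can be referenced as #{title} and #{author};
    // compiled without it, only #{arg0}/#{arg1} (or #{param1}/#{param2}) will work
    List<Map<String, Object>> selectByTitleAndAuthor(String title, String author);
}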
With that covered, let's move on to today's main topic: MyBatis's caching mechanism. First, the differences between the two caches:
The first-level cache cannot be turned off, but its default scope can be changed.
The main difference between them is scope:
First-level cache: a single SqlSession (it can be narrowed to a single Statement).
Second-level cache: shared across all SqlSessions; the unit of each cache is one mapper namespace.
Let's look at the first-level cache first. Since it cannot be turned off, there is no such thing as enabling or disabling it. We start with a test case:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.DemoMapper">
<cache></cache>
<select id="selectById" parameterType="int" resultType="map">
select * from mybooks where id =#{id};
</select>
</mapper>
public interface DemoMapper {
List<Map<String,Object>> selectById(int id);
}
public class TestCache {
public static void main(String[] args) throws IOException {
String resource = "mybatis.xml";
InputStream inputStream = Resources.getResourceAsStream(resource);
SqlSessionFactory sessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
SqlSession sqlSession1 = sessionFactory.openSession();
SqlSession sqlSession2 = sessionFactory.openSession();
DemoMapper mapper1 = sqlSession1.getMapper(DemoMapper.class);
DemoMapper mapper2 = sqlSession2.getMapper(DemoMapper.class);
System.out.println("测试一级缓存");
System.out.println(mapper1.selectById(1));
System.out.println(mapper1.selectById(1));
sqlSession1.commit();
System.out.println("测试二级缓存");
System.out.println(mapper2.selectById(1));
}
}
Running this, the log shows only one actual SQL statement for the three selectById calls, which proves that both the first-level and the second-level cache took effect. Why the transaction has to be committed here will be explained in detail later.
Let's start with the first-level cache and follow the query code again. Only the important parts are shown, because the execution flow itself was covered in the previous post; the early steps are identical, so we go straight to how the CacheKey is generated:
public class CachingExecutor implements Executor {
@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
BoundSql boundSql = ms.getBoundSql(parameterObject);
//determine the cache key for this statement
CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}
@Override
public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
return delegate.createCacheKey(ms, parameterObject, rowBounds, boundSql);
}
}
Following the delegate call:
public abstract class BaseExecutor implements Executor {
@Override
public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
//fail fast if the executor has already been closed
if (closed) {
throw new ExecutorException("Executor was closed.");
}
//create the CacheKey object
CacheKey cacheKey = new CacheKey();
//the hashCode accumulation below is what decides whether two queries count as "the same":
// the statement id must match (e.g. com.DemoMapper.selectById),
// with paging, the offset and the limit must match,
// the bound SQL must match,
// and the parameter values must match
cacheKey.update(ms.getId());
cacheKey.update(rowBounds.getOffset());
cacheKey.update(rowBounds.getLimit());
cacheKey.update(boundSql.getSql());
List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
TypeHandlerRegistry typeHandlerRegistry = ms.getConfiguration().getTypeHandlerRegistry();
// mimic DefaultParameterHandler logic
for (ParameterMapping parameterMapping : parameterMappings) {
if (parameterMapping.getMode() != ParameterMode.OUT) {
Object value;
String propertyName = parameterMapping.getProperty();
if (boundSql.hasAdditionalParameter(propertyName)) {
value = boundSql.getAdditionalParameter(propertyName);
} else if (parameterObject == null) {
value = null;
} else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
value = parameterObject;
} else {
MetaObject metaObject = configuration.newMetaObject(parameterObject);
value = metaObject.getValue(propertyName);
}
cacheKey.update(value);
}
}
if (configuration.getEnvironment() != null) {
// issue #176
cacheKey.update(configuration.getEnvironment().getId());
}
return cacheKey;
}
}
All of these calls go into the same update() method, just with different arguments, so let's step into it:
public class CacheKey implements Cloneable, Serializable {

  public void update(Object object) {
    // a null component contributes 1; otherwise take the (array-aware) hashCode of the value
    int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);
    // two CacheKeys are compared the way map keys are: hashCode first, then equals,
    // so update() accumulates a running hash and also records the raw values for equals()
    count++;
    checksum += baseHashCode;      // running checksum of all component hashes
    baseHashCode *= count;         // weight each component by its position (1, 2, 3, ...)
    hashcode = multiplier * hashcode + baseHashCode; // hashcode = 37 * hashcode + baseHashCode (starts at 17)
    // remember the raw value so equals() can compare component by component
    updateList.add(object);
  }
}
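To make the accumulation concrete, here is a small runnable sketch of my own (not from the original post); the values fed in mimic what createCacheKey passes for the selectById query used in the test above:
import org.apache.ibatis.cache.CacheKey;

public class CacheKeyDemo {

    private static CacheKey keyFor(int id) {
        // the components createCacheKey feeds in: statement id, offset, limit, SQL, parameter value
        CacheKey key = new CacheKey();
        key.update("com.DemoMapper.selectById");
        key.update(0);                 // RowBounds.NO_ROW_OFFSET
        key.update(Integer.MAX_VALUE); // RowBounds.NO_ROW_LIMIT
        key.update("select * from mybooks where id =?;");
        key.update(id);
        return key;
    }

    public static void main(String[] args) {
        System.out.println(keyFor(1).equals(keyFor(1))); // true  -> same query, cache hit
        System.out.println(keyFor(1).equals(keyFor(2))); // false -> different parameter, cache miss
    }
}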
Back in createCacheKey, every component is pushed through update(): the id of the statement in the XML, the paging offset and the page size, the bound SQL, the parameter values, and finally the id of the configured environment. The assembled CacheKey is then returned, so whenever any of these differ the cache is not hit. The code that then reads from the cache is:
public abstract class BaseExecutor implements Executor {
@Override
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
if (closed) {
throw new ExecutorException("Executor was closed.");
}
if (queryStack == 0 && ms.isFlushCacheRequired()) {
clearLocalCache();
}
List<E> list;
try {
queryStack++;
//try the local (first-level) cache first
list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
if (list != null) {
//handle OUT parameters for locally cached stored procedure calls
handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
} else {
list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
}
} finally {
queryStack--;
}
if (queryStack == 0) {
for (DeferredLoad deferredLoad : deferredLoads) {
deferredLoad.load();
}
// issue #601
deferredLoads.clear();
if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
// issue #482
clearLocalCache();
}
}
return list;
}
}
This code looks the freshly generated key up in the local cache, which is just a map: on a hit the cached list is returned, on a miss the database is queried and the result stored. Note also the last few lines: when localCacheScope is set to STATEMENT, the local cache is cleared after every query, which is how the scope can be narrowed. That wraps up the first-level cache.
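Since the first-level cache can only be narrowed rather than switched off, here is a minimal sketch of that narrowing (my own example; the usual way is the localCacheScope entry under <settings> in mybatis.xml, which the settingsElement method shown below reads):
import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.LocalCacheScope;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class LocalCacheScopeDemo {
    public static void main(String[] args) throws Exception {
        InputStream inputStream = Resources.getResourceAsStream("mybatis.xml");
        SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(inputStream);
        // narrow the first-level cache from the default SESSION scope to STATEMENT,
        // equivalent to <setting name="localCacheScope" value="STATEMENT"/> in the config file
        Configuration configuration = factory.getConfiguration();
        configuration.setLocalCacheScope(LocalCacheScope.STATEMENT);
        // with STATEMENT scope, the clearLocalCache() branch at the end of query() above runs
        // after every statement, so repeated identical queries hit the database each time
    }
}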
Next up is the second-level cache. The important thing to remember is that its scope is all SqlSessions, so transactions have to be taken into account.
Consider the following scenario: the table initially holds A, B and C. A transaction is opened and a row D is inserted, so the database now contains A, B, C and D. If a query executed at this point wrote its result straight into the second-level cache, and the commit then failed and the transaction was rolled back, the database would be back to A, B and C while the cache still claimed A, B, C and D. How does MyBatis avoid this? That is explained in detail below.
The second-level cache is enabled by default, which the source code confirms:
private void settingsElement(Properties props) {
configuration.setAutoMappingBehavior(AutoMappingBehavior.valueOf(props.getProperty("autoMappingBehavior", "PARTIAL")));
configuration.setAutoMappingUnknownColumnBehavior(AutoMappingUnknownColumnBehavior.valueOf(
props.getProperty("autoMappingUnknownColumnBehavior", "NONE")));
configuration.setCacheEnabled(booleanValueOf(props.getProperty("cacheEnabled"), true));
configuration.setProxyFactory((ProxyFactory) createInstance(props.getProperty("proxyFactory")));
configuration.setLazyLoadingEnabled(booleanValueOf(props.getProperty("lazyLoadingEnabled"), false));
configuration.setAggressiveLazyLoading(booleanValueOf(props.getProperty("aggressiveLazyLoading"), false));
configuration.setMultipleResultSetsEnabled(booleanValueOf(props.getProperty("multipleResultSetsEnabled"), true));
configuration.setUseColumnLabel(booleanValueOf(props.getProperty("useColumnLabel"), true));
configuration.setUseGeneratedKeys(booleanValueOf(props.getProperty("useGeneratedKeys"), false));
configuration.setDefaultExecutorType(ExecutorType.valueOf(props.getProperty("defaultExecutorType", "SIMPLE")));
configuration.setDefaultStatementTimeout(integerValueOf(props.getProperty("defaultStatementTimeout"), null));
configuration.setDefaultFetchSize(integerValueOf(props.getProperty("defaultFetchSize"), null));
configuration.setDefaultResultSetType(resolveResultSetType(props.getProperty("defaultResultSetType")));
configuration.setMapUnderscoreToCamelCase(booleanValueOf(props.getProperty("mapUnderscoreToCamelCase"), false));
configuration.setSafeRowBoundsEnabled(booleanValueOf(props.getProperty("safeRowBoundsEnabled"), false));
configuration.setLocalCacheScope(LocalCacheScope.valueOf(props.getProperty("localCacheScope", "SESSION")));
configuration.setJdbcTypeForNull(JdbcType.valueOf(props.getProperty("jdbcTypeForNull", "OTHER")));
configuration.setLazyLoadTriggerMethods(stringSetValueOf(props.getProperty("lazyLoadTriggerMethods"),
"equals,clone,hashCode,toString"));
configuration.setSafeResultHandlerEnabled(booleanValueOf(props.getProperty("safeResultHandlerEnabled"), true));
configuration.setDefaultScriptingLanguage(resolveClass(props.getProperty("defaultScriptingLanguage")));
configuration.setDefaultEnumTypeHandler(resolveClass(props.getProperty("defaultEnumTypeHandler")));
configuration.setCallSettersOnNulls(booleanValueOf(props.getProperty("callSettersOnNulls"), false));
configuration.setUseActualParamName(booleanValueOf(props.getProperty("useActualParamName"), true));
configuration.setReturnInstanceForEmptyRow(booleanValueOf(props.getProperty("returnInstanceForEmptyRow"), false));
configuration.setLogPrefix(props.getProperty("logPrefix"));
configuration.setConfigurationFactory(resolveClass(props.getProperty("configurationFactory")));
}
As you can see, cacheEnabled defaults to true; this property is the global switch for the second-level cache. The cache of an individual mapper file, however, is only created when the mapper declares a <cache/> node, as the parsing code shows:
private void cacheElement(XNode context) {
if (context != null) {
String type = context.getStringAttribute("type", "PERPETUAL");
Class<? extends Cache> typeClass = typeAliasRegistry.resolveAlias(type);
String eviction = context.getStringAttribute("eviction", "LRU");
//LruCache.class
Class<? extends Cache> evictionClass = typeAliasRegistry.resolveAlias(eviction);
Long flushInterval = context.getLongAttribute("flushInterval");
Integer size = context.getIntAttribute("size");
boolean readWrite = !context.getBooleanAttribute("readOnly", false);
boolean blocking = context.getBooleanAttribute("blocking", false);
Properties props = context.getChildrenAsProperties();
builderAssistant.useNewCache(typeClass, evictionClass, flushInterval, size, readWrite, blocking, props);
}
}
The per-mapper second-level cache is only built when the context argument is non-null, and that context is exactly the <cache/> node. Next, let's look at builderAssistant.useNewCache(typeClass, evictionClass, flushInterval, size, readWrite, blocking, props), the method that builds the cache object for a single mapper file:
public Cache useNewCache(Class<? extends Cache> typeClass,
Class<? extends Cache> evictionClass,
Long flushInterval,
Integer size,
boolean readWrite,
boolean blocking,
Properties props) {
//builder pattern
Cache cache = new CacheBuilder(currentNamespace)
.implementation(valueOrDefault(typeClass, PerpetualCache.class))
.addDecorator(valueOrDefault(evictionClass, LruCache.class))
.clearInterval(flushInterval)
.size(size)
.readWrite(readWrite)
.blocking(blocking)
.properties(props)
.build();
configuration.addCache(cache);
currentCache = cache;
return cache;
}
public class CacheBuilder {
public Cache build() {
setDefaultImplementations();
Cache cache = newBaseCacheInstance(implementation, id);
setCacheProperties(cache);
// issue #352, do not apply decorators to custom caches
if (PerpetualCache.class.equals(cache.getClass())) {
for (Class<? extends Cache> decorator : decorators) {
cache = newCacheDecoratorInstance(decorator, cache);
setCacheProperties(cache);
}
cache = setStandardDecorators(cache);
} else if (!LoggingCache.class.isAssignableFrom(cache.getClass())) {
cache = new LoggingCache(cache);
}
return cache;
}
//decorator pattern
private Cache setStandardDecorators(Cache cache) {
try {
MetaObject metaCache = SystemMetaObject.forObject(cache);
if (size != null && metaCache.hasSetter("size")) {
metaCache.setValue("size", size);
}
if (clearInterval != null) {
cache = new ScheduledCache(cache);
((ScheduledCache) cache).setClearInterval(clearInterval);
}
if (readWrite) {
cache = new SerializedCache(cache);
}
cache = new LoggingCache(cache);
cache = new SynchronizedCache(cache);
if (blocking) {
cache = new BlockingCache(cache);
}
return cache;
} catch (Exception e) {
throw new CacheException("Error building standard cache decorators. Cause: " + e, e);
}
}
}
So the second-level cache object is assembled with the decorator pattern: the base cache is wrapped layer by layer. For a bare <cache/> element the final object is SynchronizedCache -> LoggingCache -> SerializedCache -> LruCache -> PerpetualCache (a small sketch after the list below reconstructs this chain). Here is what each wrapper does:
SynchronizedCache: thread safety; the implementation is simple, its methods are just marked synchronized.
LoggingCache: logging; it records the cache hit ratio and prints it when DEBUG logging is enabled.
SerializedCache: serialization; values are serialized before being stored, so every read returns a copy of the instance, which keeps cached objects safe to share.
LruCache: a Cache implementation using the LRU algorithm; it evicts the least recently used key/value pairs.
Eviction policies:
LRU (least recently used): remove the entries that have gone unused the longest; this is the default.
FIFO (first in, first out): remove entries in the order they entered the cache.
SOFT (soft references): remove entries based on the garbage collector state and soft-reference rules.
WEAK (weak references): remove entries more aggressively, based on the garbage collector state and weak-reference rules.
PerpetualCache: the most basic cache implementation; under the hood it is simply a HashMap.
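A small sketch of my own (not from the post) that rebuilds the same chain useNewCache() produces for a bare <cache/> element and confirms the outermost wrapper:
import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.LruCache;
import org.apache.ibatis.cache.impl.PerpetualCache;
import org.apache.ibatis.mapping.CacheBuilder;

public class CacheChainDemo {
    public static void main(String[] args) {
        // the same defaults useNewCache() falls back to: PerpetualCache, LRU eviction, readWrite=true
        Cache cache = new CacheBuilder("com.DemoMapper")
                .implementation(PerpetualCache.class)
                .addDecorator(LruCache.class)
                .readWrite(true)
                .build();
        // expected chain: SynchronizedCache -> LoggingCache -> SerializedCache -> LruCache -> PerpetualCache
        System.out.println(cache.getClass().getSimpleName()); // SynchronizedCache
    }
}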
Now let's follow the code path of the second-level cache. The full code is long, so only the key parts are listed:
public class CachingExecutor implements Executor {
@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey
key, BoundSql boundSql)
throws SQLException {
//the second-level cache for this mapper's namespace
Cache cache = ms.getCache();
if (cache != null) {
flushCacheIfRequired(ms);
if (ms.isUseCache() && resultHandler == null) {
ensureNoOutParams(ms, boundSql);
@SuppressWarnings("unchecked")
List<E> list = (List<E>) tcm.getObject(cache, key);
if (list == null) {
list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
tcm.putObject(cache, key, list); // issue #578 and #116
}
return list;
}
}
return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}
}
Notice that values are read from the second-level cache through tcm, and this tcm field is a TransactionalCacheManager. That object is what solves the transaction problem described above, so let's look at its source:
public class TransactionalCacheManager {
private final Map<Cache, TransactionalCache> transactionalCaches = new HashMap<>();
public void clear(Cache cache) {
getTransactionalCache(cache).clear();
}
public Object getObject(Cache cache, CacheKey key) {
return getTransactionalCache(cache).getObject(key);
}
public void putObject(Cache cache, CacheKey key, Object value) {
getTransactionalCache(cache).putObject(key, value);
}
public void commit() {
for (TransactionalCache txCache : transactionalCaches.values()) {
txCache.commit();
}
}
public void rollback() {
for (TransactionalCache txCache : transactionalCaches.values()) {
txCache.rollback();
}
}
private TransactionalCache getTransactionalCache(Cache cache) {
return transactionalCaches.computeIfAbsent(cache, TransactionalCache::new);
}
}
This class just maintains a map whose keys are the real Cache objects and whose values are TransactionalCache wrappers, so TransactionalCache is where the interesting work happens:
public class TransactionalCache implements Cache {
private static final Log log = LogFactory.getLog(TransactionalCache.class);
//the real (shared) cache being wrapped
private final Cache delegate;
//whether the real cache must be cleared when the transaction commits
private boolean clearOnCommit;
//entries staged here until the transaction commits
private final Map<Object, Object> entriesToAddOnCommit;
//keys that missed the cache during this transaction
private final Set<Object> entriesMissedInCache;
public TransactionalCache(Cache delegate) {
this.delegate = delegate;
this.clearOnCommit = false;
this.entriesToAddOnCommit = new HashMap<>();
this.entriesMissedInCache = new HashSet<>();
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
return delegate.getSize();
}
@Override
public Object getObject(Object key) {
// issue #116
Object object = delegate.getObject(key);
if (object == null) {
entriesMissedInCache.add(key);
}
// issue #146
if (clearOnCommit) {
return null;
} else {
return object;
}
}
//instead of putting the value straight into the real cache,
//stage it in the to-be-committed map
@Override
public void putObject(Object key, Object object) {
entriesToAddOnCommit.put(key, object);
}
@Override
public Object removeObject(Object key) {
return null;
}
@Override
public void clear() {
clearOnCommit = true;
entriesToAddOnCommit.clear();
}
public void commit() {
if (clearOnCommit) {
delegate.clear();
}
flushPendingEntries();
reset();
}
public void rollback() {
unlockMissedEntries();
reset();
}
private void reset() {
clearOnCommit = false;
entriesToAddOnCommit.clear();
entriesMissedInCache.clear();
}
private void flushPendingEntries() {
for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
//flush staged entries into the real cache
delegate.putObject(entry.getKey(), entry.getValue());
}
for (Object entry : entriesMissedInCache) {
//missed keys are also written (as null) to the real cache
if (!entriesToAddOnCommit.containsKey(entry)) {
delegate.putObject(entry, null);
}
}
}
private void unlockMissedEntries() {
for (Object entry : entriesMissedInCache) {
try {
//remove this transaction's missed keys from the real cache
delegate.removeObject(entry);
} catch (Exception e) {
log.warn("Unexpected exception while notifiying a rollback to the cache adapter."
+ "Consider upgrading your cache adapter to the latest version. Cause: " + e);
}
}
}
}
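Before walking through the methods one by one, here is a minimal hedged demo of the staging behaviour (the names are mine, and TransactionalCache is normally driven by TransactionalCacheManager rather than used directly like this):
import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.TransactionalCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class TransactionalCacheDemo {
    public static void main(String[] args) {
        Cache realCache = new PerpetualCache("com.DemoMapper");
        TransactionalCache txCache = new TransactionalCache(realCache);

        txCache.putObject("key", "value");              // staged only
        System.out.println(realCache.getObject("key")); // null: nothing has been published yet
        System.out.println(txCache.getObject("key"));   // null: reads go to the real cache, so this is a miss

        txCache.commit();                               // flush the staged entry
        System.out.println(realCache.getObject("key")); // value
    }
}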
Look first at the method that stores a value: putObject() simply calls entriesToAddOnCommit.put(key, object), so the entry goes into the entriesToAddOnCommit map, the set of entries waiting to be committed, instead of into the real cache. Next, the read path:
@Override
public Object getObject(Object key) {
// issue #116
Object object = delegate.getObject(key);
if (object == null) {
entriesMissedInCache.add(key);
}
// issue #146 (clearOnCommit defaults to false)
if (clearOnCommit) {
return null;
} else {
return object;
}
}
getObject() reads from the real cache; if nothing is found, the key is recorded in entriesMissedInCache, the set of keys that missed during this transaction. It then checks the clearOnCommit flag: if a clear is pending, null is returned even on a hit, otherwise the cached value is returned. Now the commit method:
public void commit() {
//a clear was requested during the transaction, so wipe the real cache first
if (clearOnCommit) {
delegate.clear();
}
flushPendingEntries();
reset();
}
private void flushPendingEntries() {
for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
//flush staged entries into the real cache
delegate.putObject(entry.getKey(), entry.getValue());
}
for (Object entry : entriesMissedInCache) {
//missed keys are also written (as null)
if (!entriesToAddOnCommit.containsKey(entry)) {
delegate.putObject(entry, null);
}
}
}
private void reset() {
clearOnCommit = false;
entriesToAddOnCommit.clear();
entriesMissedInCache.clear();
}
commit() flushes every staged entry into the real cache, and also writes the missed keys into it, then calls reset(), which sets clearOnCommit back to false and empties both collections. Next, the rollback method:
public void rollback() {
unlockMissedEntries();
reset();
}
private void unlockMissedEntries() {
for (Object entry : entriesMissedInCache) {
try {
//remove this transaction's missed keys from the real cache
delegate.removeObject(entry);
} catch (Exception e) {
log.warn("Unexpected exception while notifiying a rollback to the cache adapter."+ "Consider upgrading your cache adapter to the latest version. Cause: " + e);
}
}
}
private void reset() {
clearOnCommit = false;
entriesToAddOnCommit.clear();
entriesMissedInCache.clear();
}
rollback() removes this transaction's missed keys from the real cache and then calls reset(), which sets clearOnCommit back to false and empties both collections. That covers every method of the class; the methods on TransactionalCacheManager simply delegate to the TransactionalCache methods of the same name. Now that their meaning is clear, let's see where they get called, back in the main flow:
public class CachingExecutor implements Executor {
@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey
key, BoundSql boundSql)
throws SQLException {
//the second-level cache for this namespace
Cache cache = ms.getCache();
if (cache != null) {
flushCacheIfRequired(ms);
if (ms.isUseCache() && resultHandler == null) {
ensureNoOutParams(ms, boundSql);
@SuppressWarnings("unchecked")
List<E> list = (List<E>) tcm.getObject(cache, key);
if (list == null) {
//cache miss: run the actual query
list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
//stage the result in the to-be-committed map
tcm.putObject(cache, key, list); // issue #578 and #116
}
return list;
}
}
return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}
}
So query results are always staged in the to-be-committed map first. When are they actually committed to the cache? Clearly when the transaction commits; and when the transaction rolls back, the staged entries are discarded. The relevant code:
public class DefaultSqlSession implements SqlSession {
@Override
public void commit() {
commit(false);
}
@Override
public void commit(boolean force) {
try {
//isCommitOrRollbackRequired(force) decides whether a real commit/rollback is needed
executor.commit(isCommitOrRollbackRequired(force));
dirty = false;
} catch (Exception e) {
throw ExceptionFactory.wrapException("Error committing transaction. Cause: " + e, e);
} finally {
ErrorContext.instance().reset();
}
}
private boolean isCommitOrRollbackRequired(boolean force) {
//autoCommit is whatever was passed to openSession (false by default);
//dirty is flipped to true by insert/update/delete
return (!autoCommit && dirty) || force;
}
}
public class CachingExecutor implements Executor {
@Override
public void commit(boolean required) throws SQLException {
delegate.commit(required);
//publish the staged cache entries to the real second-level cache
tcm.commit();
}
}
public abstract class BaseExecutor implements Executor {
@Override
public void commit(boolean required) throws SQLException {
if (closed) {
throw new ExecutorException("Cannot commit, transaction is already closed");
}
clearLocalCache();
flushStatements();
if (required) {
//commit the underlying transaction
transaction.commit();
}
}
}
So the staged entries are only moved into the real cache when sqlSession.commit() is called. Now let's look at the rollback path.
public class DefaultSqlSession implements SqlSession {
@Override
public void rollback() {
rollback(false);
}
@Override
public void rollback(boolean force) {
try {
//isCommitOrRollbackRequired(force) decides whether a real commit/rollback is needed
executor.rollback(isCommitOrRollbackRequired(force));
dirty = false;
} catch (Exception e) {
throw ExceptionFactory.wrapException("Error rolling back transaction. Cause: " + e, e);
} finally {
ErrorContext.instance().reset();
}
}
private boolean isCommitOrRollbackRequired(boolean force) {
//autoCommit is whatever was passed to openSession (false by default);
//dirty is flipped to true by insert/update/delete
return (!autoCommit && dirty) || force;
}
}
public class CachingExecutor implements Executor {
@Override
public void rollback(boolean required) throws SQLException {
try {
delegate.rollback(required);
} finally {
if (required) {
tcm.rollback();
}
}
}
}
public abstract class BaseExecutor implements Executor {
@Override
public void rollback(boolean required) throws SQLException {
if (!closed) {
try {
clearLocalCache();
flushStatements(true);
} finally {
if (required) {
//roll back the underlying transaction
transaction.rollback();
}
}
}
}
}
So on rollback the staged entries are discarded and the missed keys are removed from the real cache. The same logic also runs when sqlSession.close() is called:
public class DefaultSqlSession implements SqlSession {
@Override
public void close() {
try {
executor.close(isCommitOrRollbackRequired(false));
closeCursors();
dirty = false;
} finally {
ErrorContext.instance().reset();
}
}
}
public class CachingExecutor implements Executor {
@Override
public void close(boolean forceRollback) {
try {
//issues #499, #524 and #573
if (forceRollback) {
tcm.rollback();
} else {
tcm.commit();
}
} finally {
delegate.close(forceRollback);
}
}
}
public abstract class BaseExecutor implements Executor {
@Override
public void close(boolean forceRollback) {
try {
try {
rollback(forceRollback);
} finally {
if (transaction != null) {
transaction.close();
}
}
} catch (SQLException e) {
// Ignore. There's nothing that can be done at this point.
log.warn("Unexpected exception on closing transaction. Cause: " + e);
} finally {
transaction = null;
deferredLoads = null;
localCache = null;
localOutputParameterCache = null;
closed = true;
}
}
}
So closing the session also finishes the staged cache: close() calls executor.close(isCommitOrRollbackRequired(false)), the same check used for commit and rollback, and the CachingExecutor either rolls the staged entries back (if there is uncommitted work) or commits them. That completes the whole caching mechanism. As for the problem raised earlier, MyBatis's answer is that publishing to the second-level cache is tied to the transaction: results are staged per session and only become visible to other sessions once the transaction commits. Putting it all together, a query first passes through the CachingExecutor (second-level cache), then the BaseExecutor's local cache (first-level cache), and only then reaches the database.
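To make that concrete, here is a hedged end-to-end sketch reusing the DemoMapper from the test at the top; the comments describe the behaviour expected from the walkthrough above, not output captured from the original post:
import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class TestSecondLevelCacheCommit {
    public static void main(String[] args) throws Exception {
        InputStream inputStream = Resources.getResourceAsStream("mybatis.xml");
        SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(inputStream);

        SqlSession session1 = factory.openSession();
        SqlSession session2 = factory.openSession();
        session1.getMapper(DemoMapper.class).selectById(1); // hits the database; result staged in session1's TransactionalCache
        session2.getMapper(DemoMapper.class).selectById(1); // hits the database again: session1 has not committed yet

        session1.commit();                                   // staged entries are published to the shared namespace cache

        SqlSession session3 = factory.openSession();
        session3.getMapper(DemoMapper.class).selectById(1); // served from the second-level cache, no SQL issued

        session1.close();
        session2.close();
        session3.close();
    }
}
If session1 rolled back instead of committing, the staged entries would simply be discarded and session3 would go back to the database, which is exactly how MyBatis avoids the dirty-cache scenario described earlier.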