An Analysis of the Cache Implementation Classes
1. PerpetualCache (the default class that actually stores cached data)
As noted earlier, most cache implementations are backed by a HashMap, and this one is no exception: every operation on the cache simply calls the corresponding HashMap method. Only the commonly used methods are shown below (putObject, getObject, removeObject). Also note the constructor, which takes a single String parameter: the cache id. Beyond that, the storage itself needs little explanation.
What deserves attention are the hashCode and equals methods.
hashCode hashes the id, because the namespace is unique and a cache's scope is its namespace.
This is a good reminder of the contract to observe when overriding equals:
1 Reflexive: for any reference value x, x.equals(x) must return true.
2 Symmetric: for any reference values x and y, x.equals(y) returns true if and only if y.equals(x) returns true.
3 Transitive: if x.equals(y) == true and y.equals(z) == true, then x.equals(z) must return true.
4 Consistent: as long as the objects being compared are not modified, repeated comparisons must keep returning the same result.
5 Non-nullity: for any non-null reference value x, x.equals(null) must return false.
/** Perpetual (never-evicting) cache
 * @author Clinton Begin
 */
public class PerpetualCache implements Cache {
private final String id;
private final Map<Object, Object> cache = new HashMap<>();
public PerpetualCache(String id) {
this.id = id;
}
// ... some methods omitted ...
@Override
public void putObject(Object key, Object value) {
cache.put(key, value);
}
@Override
public Object getObject(Object key) {
return cache.get(key);
}
@Override
public Object removeObject(Object key) {
return cache.remove(key);
}
@Override
public void clear() {
cache.clear();
}
@Override
public boolean equals(Object o) {
if (getId() == null) {
throw new CacheException("Cache instances require an ID.");
}
if (this == o) {
return true;
}
if (!(o instanceof Cache)) {
return false;
}
Cache otherCache = (Cache) o;
return getId().equals(otherCache.getId());
}
@Override
public int hashCode() {
if (getId() == null) {
throw new CacheException("Cache instances require an ID.");
}
return getId().hashCode();
}
}
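To see why the id-based equality matters, here is a minimal, self-contained sketch (a stand-in for illustration, not the MyBatis class itself): two instances wrapping different maps still compare equal when their ids match, because equality is decided by the namespace id alone.

```java
import java.util.HashMap;
import java.util.Map;

public class IdEqualityDemo {
    // A stripped-down stand-in for PerpetualCache: equality depends only on the id.
    static final class SimpleCache {
        final String id;
        final Map<Object, Object> cache = new HashMap<>();
        SimpleCache(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof SimpleCache && id.equals(((SimpleCache) o).id);
        }
        @Override public int hashCode() { return id.hashCode(); }
    }

    public static boolean sameNamespaceIsEqual() {
        SimpleCache a = new SimpleCache("com.example.UserMapper");
        SimpleCache b = new SimpleCache("com.example.UserMapper");
        a.cache.put("k", "v"); // contents differ...
        return a.equals(b);    // ...but equality is decided by the id alone
    }

    public static void main(String[] args) {
        System.out.println(sameNamespaceIsEqual());
    }
}
```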
2. LruCache (selected with the LRU eviction attribute)
The heart of LruCache is a LinkedHashMap, with a default capacity of 1024. On every put, the key is also recorded in the LinkedHashMap. Does this cause data redundancy? Only mildly: keyMap stores the key as both key and value, so just keys are duplicated; the cached values live only in the delegate. The put then checks whether an entry must be evicted and, if so, removes that key from the delegate. On get, the key is first looked up in the LinkedHashMap before reading from the delegate; the sole purpose of that lookup is to keep the entry "hot" (with access ordering enabled, LinkedHashMap moves an accessed entry to the tail, so over time the head of the map is the least recently used entry).
/**
* Lru (least recently used) cache decorator.
*
* @author Clinton Begin
*/
public class LruCache implements Cache {
// the decorated cache, usually a PerpetualCache
private final Cache delegate;
// keyMap is what actually implements the LRU ordering
private Map<Object, Object> keyMap;
// the eldest key, flagged for eviction
private Object eldestKey;
// note: unlike PerpetualCache, this constructor takes a Cache rather than a String; this class exists to wrap another Cache
public LruCache(Cache delegate) {
this.delegate = delegate;
// default capacity is 1024
setSize(1024);
}
// the id is simply the namespace
@Override
public String getId() {
return delegate.getId();
}
// again, the operation is delegated
@Override
public int getSize() {
return delegate.getSize();
}
// Configure the capacity. The LRU policy is implemented with a LinkedHashMap; note the last constructor argument, accessOrder = true.
// Building an LRU cache with LinkedHashMap was covered earlier.
public void setSize(final int size) {
keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) {
private static final long serialVersionUID = 4267176411845948333L;
// when this method returns true, the head (eldest) entry is removed
@Override
protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
// has the size limit been exceeded?
boolean tooBig = size() > size;
if (tooBig) {
// eldest is the head entry of the LinkedHashMap
eldestKey = eldest.getKey();
}
return tooBig;
}
};
}
// put stores directly into the delegate; note that cycleKeyList does extra work, shown below
@Override
public void putObject(Object key, Object value) {
delegate.putObject(key, value);
cycleKeyList(key);
}
@Override
public Object getObject(Object key) {
// Why call get here without using the return value or assigning anything?
// Because the access itself refreshes the entry's recency; see the earlier analysis of LinkedHashMap as an LRU.
keyMap.get(key);
return delegate.getObject(key);
}
@Override
public Object removeObject(Object key) {
return delegate.removeObject(key);
}
@Override
public void clear() {
delegate.clear();
keyMap.clear();
}
// Called from putObject: record the key in keyMap and, if removeEldestEntry flagged an eldest key, evict it from the delegate.
private void cycleKeyList(Object key) {
keyMap.put(key, key);
if (eldestKey != null) {
delegate.removeObject(eldestKey);
eldestKey = null;
}
}
}
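The LinkedHashMap mechanism LruCache relies on can be demonstrated on its own. This is a standalone sketch, not MyBatis code: with accessOrder = true and removeEldestEntry overridden, accessing an entry refreshes it, and the head of the map is evicted first.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapLruDemo {
    public static LinkedHashMap<String, String> buildLru(int capacity) {
        // accessOrder = true: get() moves the entry to the tail (most recently used)
        return new LinkedHashMap<String, String>(capacity, .75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > capacity; // evict the head once over capacity
            }
        };
    }

    public static String demo() {
        LinkedHashMap<String, String> lru = buildLru(2);
        lru.put("a", "1");
        lru.put("b", "2");
        lru.get("a");        // refresh "a": now "b" is the eldest
        lru.put("c", "3");   // over capacity: "b" is evicted, not "a"
        return String.join(",", lru.keySet());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // a,c
    }
}
```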
3. FifoCache
FifoCache is straightforward: it uses a Deque to implement FIFO, with a default limit of 1024. The queue is only touched on put; unlike LruCache, a get does not change the eviction order.
/**
* FIFO (first in, first out) cache decorator.
*
* @author Clinton Begin
*/
public class FifoCache implements Cache {
private final Cache delegate;
// a queue implements the FIFO order
private final Deque<Object> keyList;
private int size;
public FifoCache(Cache delegate) {
this.delegate = delegate;
this.keyList = new LinkedList<>();
this.size = 1024;
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
return delegate.getSize();
}
public void setSize(int size) {
this.size = size;
}
@Override
public void putObject(Object key, Object value) {
cycleKeyList(key);
delegate.putObject(key, value);
}
@Override
public Object getObject(Object key) {
return delegate.getObject(key);
}
@Override
public Object removeObject(Object key) {
return delegate.removeObject(key);
}
@Override
public void clear() {
delegate.clear();
keyList.clear();
}
// The key method: append the key to the queue; if the queue exceeds the configured size, remove the head of the queue and evict that key from the delegate.
private void cycleKeyList(Object key) {
keyList.addLast(key);
if (keyList.size() > size) {
Object oldestKey = keyList.removeFirst();
delegate.removeObject(oldestKey);
}
}
}
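The queue bookkeeping can be sketched in isolation (a standalone illustration, not the MyBatis class): keys are appended on put, and the oldest key is evicted once the limit is exceeded, regardless of how often it is read.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class FifoDemo {
    public static String demo() {
        int limit = 2;
        Deque<String> keys = new ArrayDeque<>();
        Map<String, String> cache = new HashMap<>();
        for (String k : new String[] {"a", "b", "c"}) {
            keys.addLast(k);
            if (keys.size() > limit) {
                cache.remove(keys.removeFirst()); // evict the insertion-order head
            }
            cache.put(k, k.toUpperCase());
        }
        // "a" was inserted first, so it is the one evicted
        return String.join(",", keys);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // b,c
    }
}
```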
4. LoggingCache
This one is trivial: it counts requests and hits on getObject and logs the hit ratio. Nothing else to note.
/**
* @author Clinton Begin
*/
public class LoggingCache implements Cache {
private final Log log;
private final Cache delegate;
protected int requests = 0;
protected int hits = 0;
public LoggingCache(Cache delegate) {
this.delegate = delegate;
this.log = LogFactory.getLog(getId());
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
return delegate.getSize();
}
@Override
public void putObject(Object key, Object object) {
delegate.putObject(key, object);
}
@Override
public Object getObject(Object key) {
requests++;
final Object value = delegate.getObject(key);
if (value != null) {
hits++;
}
if (log.isDebugEnabled()) {
log.debug("Cache Hit Ratio [" + getId() + "]: " + getHitRatio());
}
return value;
}
@Override
public Object removeObject(Object key) {
return delegate.removeObject(key);
}
@Override
public void clear() {
delegate.clear();
}
@Override
public int hashCode() {
return delegate.hashCode();
}
@Override
public boolean equals(Object obj) {
return delegate.equals(obj);
}
private double getHitRatio() {
return (double) hits / (double) requests;
}
}
5. SoftCache
Keys and values are wrapped in a SoftEntry before being stored in the delegate. The class also holds hardLinksToAvoidGarbageCollection (a deque of strong references) and queueOfGarbageCollectedEntries (a ReferenceQueue). On put and remove, before calling the delegate, it drains the reference queue and evicts from the delegate every entry whose value has already been garbage-collected. On get, it first reads from the delegate; if the entry exists and its value has not been collected, the value is added to the front of hardLinksToAvoidGarbageCollection to pin it with a strong reference, and if that deque grows beyond numberOfHardLinks (256 by default) the element at its tail is dropped.
/**
* Soft Reference cache decorator
* Thanks to Dr. Heinz Kabutz for his guidance here.
*
* @author Clinton Begin
*/
public class SoftCache implements Cache {
// a deque holding strong references to recently used values
private final Deque<Object> hardLinksToAvoidGarbageCollection;
// reference queue that SoftEntry objects are appended to after their referents are collected
private final ReferenceQueue<Object> queueOfGarbageCollectedEntries;
private final Cache delegate;
// how many strong references to keep; 256 by default
private int numberOfHardLinks;
public SoftCache(Cache delegate) {
this.delegate = delegate;
this.numberOfHardLinks = 256;
this.hardLinksToAvoidGarbageCollection = new LinkedList<>();
this.queueOfGarbageCollectedEntries = new ReferenceQueue<>();
}
@Override
public String getId() {
return delegate.getId();
}
// note the call to removeGarbageCollectedItems, explained below
@Override
public int getSize() {
removeGarbageCollectedItems();
return delegate.getSize();
}
public void setSize(int size) {
this.numberOfHardLinks = size;
}
@Override
public void putObject(Object key, Object value) {
removeGarbageCollectedItems();
// wrap the key and value in a SoftEntry
delegate.putObject(key, new SoftEntry(key, value, queueOfGarbageCollectedEntries));
}
@Override
public Object getObject(Object key) {
Object result = null;
@SuppressWarnings("unchecked") // assumed delegate cache is totally managed by this cache
// fetch from the delegate first; the stored element is a SoftReference
SoftReference<Object> softReference = (SoftReference<Object>) delegate.getObject(key);
if (softReference != null) {
result = softReference.get();
if (result == null) {
delegate.removeObject(key);
} else {
// a non-null result means the value has not been collected yet, so pin it in hardLinksToAvoidGarbageCollection with a strong reference
// See #586 (and #335) modifications need more than a read lock
synchronized (hardLinksToAvoidGarbageCollection) {
hardLinksToAvoidGarbageCollection.addFirst(result);
// if the deque exceeds numberOfHardLinks, drop the element at its tail
if (hardLinksToAvoidGarbageCollection.size() > numberOfHardLinks) {
hardLinksToAvoidGarbageCollection.removeLast();
}
}
}
}
return result;
}
@Override
public Object removeObject(Object key) {
removeGarbageCollectedItems();
return delegate.removeObject(key);
}
@Override
public void clear() {
synchronized (hardLinksToAvoidGarbageCollection) {
hardLinksToAvoidGarbageCollection.clear();
}
removeGarbageCollectedItems();
delegate.clear();
}
// Called from most operations: drain queueOfGarbageCollectedEntries; any SoftEntry that comes out has had its value garbage-collected, so remove it from the delegate as well. Question: does hardLinksToAvoidGarbageCollection need cleaning too?
// No: a reference is only enqueued after its referent has no strong references left, so the value cannot still be pinned there.
private void removeGarbageCollectedItems() {
SoftEntry sv;
while ((sv = (SoftEntry) queueOfGarbageCollectedEntries.poll()) != null) {
delegate.removeObject(sv.key);
}
}
private static class SoftEntry extends SoftReference<Object> {
private final Object key;
SoftEntry(Object key, Object value, ReferenceQueue<Object> garbageCollectionQueue) {
super(value, garbageCollectionQueue);
this.key = key;
}
}
}
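The SoftReference plus ReferenceQueue pattern can be sketched standalone (illustrative only; when the JVM actually collects soft references is up to it, so this sketch only shows the mechanics while the referent is still strongly reachable):

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;

public class SoftRefDemo {
    // Mirrors SoftEntry: a SoftReference that remembers its cache key.
    static final class KeyedSoftRef extends SoftReference<Object> {
        final Object key;
        KeyedSoftRef(Object key, Object value, ReferenceQueue<Object> q) {
            super(value, q);
            this.key = key;
        }
    }

    public static boolean demo() {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object value = new Object();               // a strong reference keeps it alive
        KeyedSoftRef ref = new KeyedSoftRef("k", value, queue);
        // While the referent is strongly reachable, get() returns it and
        // nothing appears on the reference queue.
        return ref.get() == value && queue.poll() == null;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```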
6. SynchronizedCache
This one is trivial: every operation takes the intrinsic lock (synchronized) and then delegates.
/**
* @author Clinton Begin
*/
public class SynchronizedCache implements Cache {
private final Cache delegate;
public SynchronizedCache(Cache delegate) {
this.delegate = delegate;
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public synchronized int getSize() {
return delegate.getSize();
}
@Override
public synchronized void putObject(Object key, Object object) {
delegate.putObject(key, object);
}
@Override
public synchronized Object getObject(Object key) {
return delegate.getObject(key);
}
@Override
public synchronized Object removeObject(Object key) {
return delegate.removeObject(key);
}
@Override
public synchronized void clear() {
delegate.clear();
}
@Override
public int hashCode() {
return delegate.hashCode();
}
@Override
public boolean equals(Object obj) {
return delegate.equals(obj);
}
}
7. ScheduledCache
Two fields: the clearing interval (clearInterval) and the time of the last clear (lastClear). On remove, get, put (and getSize), it first checks whether the current time minus lastClear exceeds the interval; if so, it clears the whole cache and stores the current time in lastClear.
A question:
- Is there a problem with this clearing strategy?
Yes: clearing is triggered only by cache operations, so if the cache is never touched, the interval never fires and nothing is ever cleared, no matter how stale the entries are.
/**
* @author Clinton Begin
*/
public class ScheduledCache implements Cache {
private final Cache delegate;
// clearing interval
protected long clearInterval;
// timestamp of the last clear
protected long lastClear;
public ScheduledCache(Cache delegate) {
this.delegate = delegate;
this.clearInterval = TimeUnit.HOURS.toMillis(1);
this.lastClear = System.currentTimeMillis();
}
public void setClearInterval(long clearInterval) {
this.clearInterval = clearInterval;
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
clearWhenStale();
return delegate.getSize();
}
@Override
public void putObject(Object key, Object object) {
clearWhenStale();
delegate.putObject(key, object);
}
@Override
public Object getObject(Object key) {
return clearWhenStale() ? null : delegate.getObject(key);
}
@Override
public Object removeObject(Object key) {
clearWhenStale();
return delegate.removeObject(key);
}
@Override
public void clear() {
lastClear = System.currentTimeMillis();
delegate.clear();
}
@Override
public int hashCode() {
return delegate.hashCode();
}
@Override
public boolean equals(Object obj) {
return delegate.equals(obj);
}
// The logic is simple: if the time elapsed since the last clear exceeds clearInterval, clear the whole cache.
private boolean clearWhenStale() {
if (System.currentTimeMillis() - lastClear > clearInterval) {
clear();
return true;
}
return false;
}
}
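The clearWhenStale logic can be reproduced with an injectable clock so it becomes testable (a standalone sketch under that assumption, not the MyBatis class, which reads System.currentTimeMillis directly):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

public class StaleClearDemo {
    private final Map<Object, Object> cache = new HashMap<>();
    private final long clearInterval;
    private final LongSupplier clock;   // injected so tests can control time
    private long lastClear;

    StaleClearDemo(long clearInterval, LongSupplier clock) {
        this.clearInterval = clearInterval;
        this.clock = clock;
        this.lastClear = clock.getAsLong();
    }

    boolean clearWhenStale() {
        if (clock.getAsLong() - lastClear > clearInterval) {
            lastClear = clock.getAsLong();
            cache.clear();
            return true;
        }
        return false;
    }

    Object get(Object key) {
        // mirrors ScheduledCache.getObject: a stale cache answers null
        return clearWhenStale() ? null : cache.get(key);
    }

    public static boolean demo() {
        long[] now = {0L};
        StaleClearDemo c = new StaleClearDemo(100, () -> now[0]);
        c.cache.put("k", "v");
        now[0] = 50;                    // within the interval: the entry survives
        boolean hitBefore = "v".equals(c.get("k"));
        now[0] = 200;                   // past the interval: the cache is cleared
        boolean missAfter = c.get("k") == null;
        return hitBefore && missAfter;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```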
8. WeakCache
This works almost exactly like SoftCache, with WeakReference in place of SoftReference, so it is not analyzed separately here.
9. SerializedCache
To use this cache, every value must implement the Serializable interface: put serializes the value and get deserializes it. The point of the round trip is a deep copy, and the point of the deep copy is safety: callers receive their own copy, so they cannot mutate the object stored in the cache.
/**
* @author Clinton Begin
*/
public class SerializedCache implements Cache {
private final Cache delegate;
public SerializedCache(Cache delegate) {
this.delegate = delegate;
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
return delegate.getSize();
}
// the key method: the value must implement Serializable, since it is serialized on the way in
@Override
public void putObject(Object key, Object object) {
if (object == null || object instanceof Serializable) {
delegate.putObject(key, serialize((Serializable) object));
} else {
throw new CacheException("SharedCache failed to make a copy of a non-serializable object: " + object);
}
}
@Override
public Object getObject(Object key) {
Object object = delegate.getObject(key);
// deserialize on the way out
return object == null ? null : deserialize((byte[]) object);
}
@Override
public Object removeObject(Object key) {
return delegate.removeObject(key);
}
@Override
public void clear() {
delegate.clear();
}
@Override
public int hashCode() {
return delegate.hashCode();
}
@Override
public boolean equals(Object obj) {
return delegate.equals(obj);
}
// Serialization here is used purely to copy the value: a defensive deep copy,
// somewhat reminiscent of copy-on-write.
private byte[] serialize(Serializable value) {
try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(bos)) {
oos.writeObject(value);
oos.flush();
return bos.toByteArray();
} catch (Exception e) {
throw new CacheException("Error serializing object. Cause: " + e, e);
}
}
// deserialization
private Serializable deserialize(byte[] value) {
SerialFilterChecker.check();
Serializable result;
try (ByteArrayInputStream bis = new ByteArrayInputStream(value);
ObjectInputStream ois = new CustomObjectInputStream(bis)) {
result = (Serializable) ois.readObject();
} catch (Exception e) {
throw new CacheException("Error deserializing object. Cause: " + e, e);
}
return result;
}
public static class CustomObjectInputStream extends ObjectInputStream {
public CustomObjectInputStream(InputStream in) throws IOException {
super(in);
}
@Override
protected Class<?> resolveClass(ObjectStreamClass desc) throws ClassNotFoundException {
return Resources.classForName(desc.getName());
}
}
}
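The deep-copy effect of the serialize/deserialize round trip can be shown with a standalone sketch: mutating the original after copying leaves the copy untouched.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class DeepCopyDemo {
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepCopy(T value) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);                  // serialize...
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) ois.readObject();             // ...and read back a fresh copy
        }
    }

    public static boolean demo() throws Exception {
        ArrayList<String> original = new ArrayList<>();
        original.add("cached");
        ArrayList<String> copy = deepCopy(original);
        original.add("mutated-later");               // mutate the original...
        return copy.size() == 1;                     // ...the copy is unaffected
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```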
10. BlockingCache
It has a timeout and a map (locks) from CacheKey to CountDownLatch. On get, a thread first acquires the lock for the key. Acquiring means successfully inserting a new latch into locks via putIfAbsent; if a latch is already present, the thread waits on it until it is counted down, then retries the insertion in a loop. Note that get and put are ordered: a thread is expected to get first (miss, and keep holding the lock) and only then put; releasing a lock that was never acquired throws an exception, as the code in putObject's release path shows.
Also note that put does not acquire any lock. Why?
Because the thread that missed on get still holds the lock for that key: its latch in locks is exactly what other readers are waiting on. If another CacheKey hashes and compares equal, the lookup by key finds that same latch, and counting it down works as intended.
For example: a thread queries a key and misses, leaving its CacheKey -> CountDownLatch(1) in locks; other threads that get the same key see the latch and wait. The first thread then puts the value (no extra locking needed) and, in the finally block, releases the lock: the latch is removed from locks and counted down. Only then can the waiting readers acquire the lock, find the value, and release the lock in turn.
The effect is that the put is made visible to the blocked gets.
public class BlockingCache implements Cache {
// wait timeout in milliseconds
private long timeout;
private final Cache delegate;
// key is the CacheKey, value is the CountDownLatch other readers wait on
private final ConcurrentHashMap<Object, CountDownLatch> locks;
public BlockingCache(Cache delegate) {
this.delegate = delegate;
this.locks = new ConcurrentHashMap<>();
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
return delegate.getSize();
}
@Override
public void putObject(Object key, Object value) {
try {
delegate.putObject(key, value);
} finally {
// release the lock; note it sits in a finally block, so it is released even if the put fails
releaseLock(key);
}
}
@Override
public Object getObject(Object key) {
// acquire the lock for this key
acquireLock(key);
Object value = delegate.getObject(key);
if (value != null) {
// release the lock only on a hit; on a miss, keep holding it until putObject is called
releaseLock(key);
}
return value;
}
@Override
public Object removeObject(Object key) {
// despite of its name, this method is called only to release locks
releaseLock(key);
return null;
}
@Override
public void clear() {
delegate.clear();
}
// the lock-acquisition logic
private void acquireLock(Object key) {
CountDownLatch newLatch = new CountDownLatch(1);
// a retry loop: releaseLock counts the latch down, which wakes waiters so they can retry
while (true) {
// putIfAbsent: atomically install our latch if no thread currently holds the lock for this key.
// The ConcurrentHashMap provides the atomicity; the CountDownLatch(1) only sequences the waiting threads.
CountDownLatch latch = locks.putIfAbsent(key, newLatch);
// null means no one held the lock for this key; our latch is now installed, so we own it
if (latch == null) {
break;
}
try {
// wait up to timeout if one is configured, otherwise wait indefinitely
if (timeout > 0) {
boolean acquired = latch.await(timeout, TimeUnit.MILLISECONDS);
// if the wait timed out without the lock being released, fail
if (!acquired) {
throw new CacheException(
"Couldn't get a lock in " + timeout + " for the key " + key + " at the cache " + delegate.getId());
}
} else {
// wait until the holder releases
latch.await();
}
} catch (InterruptedException e) {
throw new CacheException("Got interrupted while trying to acquire lock for key " + key, e);
}
}
}
// look up and remove the latch for this key
private void releaseLock(Object key) {
CountDownLatch latch = locks.remove(key);
if (latch == null) {
throw new IllegalStateException("Detected an attempt at releasing unacquired lock. This should never happen.");
}
// counting down wakes the threads blocked in await above; they re-enter the while loop and try to acquire again
latch.countDown();
}
public long getTimeout() {
return timeout;
}
public void setTimeout(long timeout) {
this.timeout = timeout;
}
}
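The putIfAbsent + CountDownLatch locking pattern can be exercised on its own (a standalone sketch of the same scheme, not the MyBatis class): the second thread blocks in acquire until the first thread releases.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class LatchLockDemo {
    private final ConcurrentHashMap<Object, CountDownLatch> locks = new ConcurrentHashMap<>();

    void acquire(Object key) throws InterruptedException {
        CountDownLatch myLatch = new CountDownLatch(1);
        while (true) {
            CountDownLatch existing = locks.putIfAbsent(key, myLatch);
            if (existing == null) {
                return;                // our latch is installed: we own the lock
            }
            existing.await();          // someone else owns it: wait, then retry
        }
    }

    void release(Object key) {
        CountDownLatch latch = locks.remove(key);
        if (latch != null) {
            latch.countDown();         // wake every thread blocked in acquire()
        }
    }

    public static boolean demo() throws InterruptedException {
        LatchLockDemo lock = new LatchLockDemo();
        lock.acquire("k");
        boolean held = lock.locks.containsKey("k");

        // A second thread blocks until the first releases the lock.
        Thread waiter = new Thread(() -> {
            try {
                lock.acquire("k");
                lock.release("k");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        lock.release("k");
        waiter.join(5000);
        return held && !waiter.isAlive() && lock.locks.isEmpty();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```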
11. TransactionalCache
put does not write to the delegate directly; entries are staged in an intermediate map (entriesToAddOnCommit), and the keys that missed the cache are recorded in entriesMissedInCache.
On commit, everything in entriesToAddOnCommit is flushed into the delegate, and every missed key without a staged entry is also stored, with a null value. After the commit, both entriesToAddOnCommit and entriesMissedInCache are cleared. On rollback, the keys in entriesMissedInCache are removed from the delegate, and both collections are cleared.
/**
* The 2nd level cache transactional buffer.
* <p>
* This class holds all cache entries that are to be added to the 2nd level cache during a Session.
* Entries are sent to the cache when commit is called or discarded if the Session is rolled back.
* Blocking cache support has been added. Therefore any get() that returns a cache miss
* will be followed by a put() so any lock associated with the key can be released.
*
* @author Clinton Begin
* @author Eduardo Macarron
*/
public class TransactionalCache implements Cache {
private static final Log log = LogFactory.getLog(TransactionalCache.class);
private final Cache delegate;
// flag: clear the delegate on commit
private boolean clearOnCommit;
// entries staged to be written to the cache on commit
private final Map<Object, Object> entriesToAddOnCommit;
// keys that missed the cache during this session
private final Set<Object> entriesMissedInCache;
public TransactionalCache(Cache delegate) {
this.delegate = delegate;
this.clearOnCommit = false;
this.entriesToAddOnCommit = new HashMap<>();
this.entriesMissedInCache = new HashSet<>();
}
@Override
public String getId() {
return delegate.getId();
}
@Override
public int getSize() {
return delegate.getSize();
}
// on a miss, record the key in entriesMissedInCache
@Override
public Object getObject(Object key) {
// issue #116
Object object = delegate.getObject(key);
if (object == null) {
entriesMissedInCache.add(key);
}
// issue #146
if (clearOnCommit) {
return null;
} else {
return object;
}
}
// put does not touch the delegate; it stages the entry in entriesToAddOnCommit
@Override
public void putObject(Object key, Object object) {
entriesToAddOnCommit.put(key, object);
}
@Override
public Object removeObject(Object key) {
return null;
}
@Override
public void clear() {
clearOnCommit = true;
entriesToAddOnCommit.clear();
}
// on commit: clear the delegate first if requested, then flush entriesToAddOnCommit into it
public void commit() {
if (clearOnCommit) {
delegate.clear();
}
flushPendingEntries();
reset();
}
public void rollback() {
unlockMissedEntries();
reset();
}
private void reset() {
clearOnCommit = false;
entriesToAddOnCommit.clear();
entriesMissedInCache.clear();
}
private void flushPendingEntries() {
// flush each staged entry into the delegate (the Cache interface has no bulk-put)
for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
delegate.putObject(entry.getKey(), entry.getValue());
}
// for keys that missed and were never put, store a null placeholder
for (Object entry : entriesMissedInCache) {
if (!entriesToAddOnCommit.containsKey(entry)) {
delegate.putObject(entry, null);
}
}
}
// remove each missed key from the delegate (for a BlockingCache, this releases its lock)
private void unlockMissedEntries() {
for (Object entry : entriesMissedInCache) {
try {
delegate.removeObject(entry);
} catch (Exception e) {
log.warn("Unexpected exception while notifying a rollback to the cache adapter. "
+ "Consider upgrading your cache adapter to the latest version. Cause: " + e);
}
}
}
}
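The staging behavior can be sketched with plain maps (a standalone illustration of the same commit buffering, not the MyBatis class): staged puts are invisible until commit flushes them through.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TxBufferDemo {
    private final Map<Object, Object> delegate = new HashMap<>();
    private final Map<Object, Object> entriesToAddOnCommit = new HashMap<>();
    private final Set<Object> entriesMissedInCache = new HashSet<>();

    Object get(Object key) {
        Object value = delegate.get(key);
        if (value == null) {
            entriesMissedInCache.add(key);   // remember the miss
        }
        return value;
    }

    void put(Object key, Object value) {
        entriesToAddOnCommit.put(key, value); // staged, not yet visible
    }

    void commit() {
        delegate.putAll(entriesToAddOnCommit);
        for (Object missed : entriesMissedInCache) {
            delegate.putIfAbsent(missed, null); // null placeholder for pure misses
        }
        entriesToAddOnCommit.clear();
        entriesMissedInCache.clear();
    }

    public static boolean demo() {
        TxBufferDemo tx = new TxBufferDemo();
        tx.put("k", "v");
        boolean invisibleBeforeCommit = tx.delegate.get("k") == null;
        tx.commit();
        boolean visibleAfterCommit = "v".equals(tx.delegate.get("k"));
        return invisibleBeforeCommit && visibleAfterCommit;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```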
Questions
Why does commit store a null for every key that missed the cache?
The class javadoc above actually explains it: blocking-cache support was added, so any get() that returns a miss must be followed by a put() on the same key, so that the lock a BlockingCache associated with that key can be released. The null value is not meant to be useful data; it exists so the put happens and the lock is freed.
Why does rollback not need to remove the keys of the intermediate map (entriesToAddOnCommit) from the delegate?
Because those entries were only staged in the buffer and never reached the delegate; only commit writes them through. The keys in entriesMissedInCache, on the other hand, are removed on rollback for the same lock-releasing reason as above: as BlockingCache's own comment notes, its removeObject exists, despite the name, only to release locks.
Because each of these classes wraps a Cache and adds exactly one concern, MyBatis can stack them layer upon layer, composing the features: this is the decorator design pattern (it also resembles proxying, just with many layers of proxies).
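The stacking idea can be shown with a self-contained sketch (a tiny stand-in interface and decorators, not the MyBatis classes): each wrapper adds one capability on top of the cache it wraps.

```java
import java.util.HashMap;
import java.util.Map;

public class DecoratorStackDemo {
    // A minimal stand-in for the MyBatis Cache interface.
    interface Cache {
        void put(Object k, Object v);
        Object get(Object k);
    }

    static final class MapCache implements Cache {        // like PerpetualCache
        private final Map<Object, Object> map = new HashMap<>();
        public void put(Object k, Object v) { map.put(k, v); }
        public Object get(Object k) { return map.get(k); }
    }

    static final class SyncCache implements Cache {       // like SynchronizedCache
        private final Cache delegate;
        SyncCache(Cache delegate) { this.delegate = delegate; }
        public synchronized void put(Object k, Object v) { delegate.put(k, v); }
        public synchronized Object get(Object k) { return delegate.get(k); }
    }

    static final class CountingCache implements Cache {   // like LoggingCache
        private final Cache delegate;
        int requests;
        CountingCache(Cache delegate) { this.delegate = delegate; }
        public void put(Object k, Object v) { delegate.put(k, v); }
        public Object get(Object k) { requests++; return delegate.get(k); }
    }

    public static boolean demo() {
        // Each decorator adds one concern on top of the cache it wraps.
        CountingCache counting = new CountingCache(new MapCache());
        Cache cache = new SyncCache(counting);
        cache.put("k", "v");
        return "v".equals(cache.get("k")) && counting.requests == 1;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```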
That concludes the analysis of the Cache implementation classes. If anything here is incorrect, corrections are welcome. Thanks.