Which data structure in Java is best suited for an in-memory object cache where each object has its own expiration time?
Basically, for the cache I could use a Map providing put and get methods (the key could be a String), and manage expiration with a sorted list of "timestamp" + "object" pairs. A cleanup thread could then check the first list entry and remove the object once its expiration time has passed. (Removing the first element should take O(1) time.)
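The design described above can be sketched directly in Java (all names here are illustrative, not from any library). Note that a PriorityQueue gives O(1) peek but O(log n) removal of the head, and a real implementation would also need to handle keys that are re-put while an older queue entry for them is still pending:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Minimal sketch: a HashMap for O(1) get/put plus a PriorityQueue
// ordered by expiry time, so the next entry to expire is always at the head.
class TimedCache<K, V> {
    private static final class Entry<K> {
        final K key;
        final long expiresAt;
        Entry(K key, long expiresAt) { this.key = key; this.expiresAt = expiresAt; }
    }

    private final Map<K, V> store = new HashMap<>();
    private final PriorityQueue<Entry<K>> expiryQueue =
            new PriorityQueue<>((a, b) -> Long.compare(a.expiresAt, b.expiresAt));

    public synchronized void put(K key, V value, long ttlMillis) {
        store.put(key, value);
        expiryQueue.add(new Entry<>(key, System.currentTimeMillis() + ttlMillis));
    }

    public synchronized V get(K key) {
        evictExpired();
        return store.get(key);
    }

    // A cleanup thread could call this periodically instead.
    public synchronized void evictExpired() {
        long now = System.currentTimeMillis();
        while (!expiryQueue.isEmpty() && expiryQueue.peek().expiresAt <= now) {
            store.remove(expiryQueue.poll().key);
        }
    }
}
```

Caveat: if the same key is re-put with a longer TTL, the stale queue entry would evict it early; a production version should check the stored expiry before removing.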
What you are describing is essentially ExpiringMap. There are other similar implementations, such as Guava (see CacheBuilder) - though I don't believe it supports per-entry expiration the way ExpiringMap does.
+1 for Guava's CacheBuilder. I think it is the most fitting suggestion, since it is easy to use and lightweight.
The reason for the suggestion is that the OP is not asking for a cache server farm but for an in-memory cache structure. Guava's Cache fits best here.
How does Guava set an expiration time per object?
@sodik I don't believe Guava supports per-entry expiration, while ExpiringMap does. Updated the answer to reflect this.
Caching frameworks are quite mature by now:
EhCache: http://ehcache.org/
Memcached: http://memcached.org/
However, if you insist on reinventing the wheel, remember to consider memory usage. I have often seen incorrectly implemented caches (HashMaps) that effectively turn into memory leaks.
See Cowan's answer here: Java WeakHashMap and caching: why does it reference the keys, not the values?
Guava CacheBuilder:
LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
    .maximumSize(10000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .removalListener(MY_LISTENER)
    .build(
        new CacheLoader<Key, Graph>() {
            public Graph load(Key key) throws AnyException {
                return createExpensiveGraph(key);
            }
        });
Since WeakHashMap is not suitable for caching (it weakly references the keys, not the values), you can instead use a Map<K, WeakReference<V>> whose values become eligible for GC through the weak references.
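A minimal sketch of the weak-value approach (names are illustrative; for caches, SoftReference is usually preferred over WeakReference, since weak referents may be collected at any GC cycle):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Values are held through WeakReference, so the GC may reclaim them
// once no strong references to them remain elsewhere.
class WeakValueCache<K, V> {
    private final Map<K, WeakReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new WeakReference<>(value));
    }

    public V get(K key) {
        WeakReference<V> ref = map.get(key);
        if (ref == null) return null;
        V value = ref.get();
        if (value == null) map.remove(key); // referent was collected; drop the stale entry
        return value;
    }
}
```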
On top of that, EhCache, Memcached and Coherence remain the most popular choices.
I would consider using an existing library such as Ehcache.
However, if you want to write your own, I would not use a background thread unless you need one, as it adds complexity. Instead I would have the foreground threads remove expired entries.
If you only need an LRU cache, I would use LinkedHashMap. If you want timed expiry, I would use a HashMap together with a PriorityQueue (so you can check whether the next entry due to expire has in fact expired).
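For the LRU case, a LinkedHashMap in access order with removeEldestEntry overridden is all that is needed, as a sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache: an access-order LinkedHashMap evicts the
// least-recently-used entry whenever the capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // called after each put/putAll
    }
}
```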
Actually, LinkedHashMap is a good choice. But one thing to clarify: to remove expired entries, you would combine the LinkedHashMap with some thread doing that work, right? If so, don't you think it would slightly degrade the cache's performance? And secondly: why a foreground thread rather than a background thread?
@GrzesiekD. You can remove expired entries whenever the map is accessed, without using a background thread.
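The expire-on-access idea from that comment might look like this (an illustrative sketch, not from any library; the sweep is O(n) per call, so a real implementation might bound it to a few entries per access):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Expiry without a background thread: each get() first sweeps out
// entries whose time-to-live has elapsed.
class ExpireOnAccessCache<K, V> {
    private static final class Timestamped<V> {
        final V value;
        final long expiresAt;
        Timestamped(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Timestamped<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    ExpireOnAccessCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        map.put(key, new Timestamped<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        long now = System.currentTimeMillis();
        map.values().removeIf(t -> t.expiresAt <= now); // sweep on access
        Timestamped<V> t = map.get(key);
        return t == null ? null : t.value;
    }
}
```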
As stated in earlier answers, it is usually better to use one of the popular in-memory caches such as EhCache, Memcached, etc.
However, if you would like to implement it yourself, with per-object expiration and low time complexity, I have tried to implement it like this (any test comments/suggestions are much appreciated):
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
public class ObjectCache<K, V> {
private volatile boolean shutdown;
private final long maxObjects;
private final long timeToLive;
private final long removalThreadRunDelay;
private final long objectsToRemovePerRemovalThreadRun;
private final AtomicLong objectsCount;
private final Map<K, CacheEntryWrapper> cachedDataStore;
private final BlockingQueue<CacheEntryReference> queue;
private final Object lock = new Object();
private ScheduledExecutorService executorService;
public ObjectCache(long maxObjects, long timeToLive, long removalThreadRunDelay, long objectsToRemovePerRemovalThreadRun) {
this.maxObjects = maxObjects;
this.timeToLive = timeToLive;
this.removalThreadRunDelay = removalThreadRunDelay;
this.objectsToRemovePerRemovalThreadRun = objectsToRemovePerRemovalThreadRun;
this.objectsCount = new AtomicLong(0);
this.cachedDataStore = new HashMap<>();
this.queue = new LinkedBlockingQueue<>();
}
public void put(K key, V value) {
if (key == null || value == null) {
throw new IllegalArgumentException("Key and Value both should be not null");
}
if (objectsCount.get() + 1 > maxObjects) {
throw new RuntimeException("Max objects limit reached. Can not store more objects in cache.");
}
// create a value wrapper and add it to data store map
CacheEntryWrapper entryWrapper = new CacheEntryWrapper(key, value);
synchronized (lock) {
cachedDataStore.put(key, entryWrapper);
}
// add the cache entry reference to queue which will be used by removal thread
queue.add(entryWrapper.getCacheEntryReference());
objectsCount.incrementAndGet();
// start the removal thread if not started already
if (executorService == null) {
synchronized (lock) {
if (executorService == null) {
executorService = Executors.newSingleThreadScheduledExecutor();
executorService.scheduleWithFixedDelay(new CacheEntryRemover(), 0, removalThreadRunDelay, TimeUnit.MILLISECONDS);
}
}
}
}
public V get(K key) {
if (key == null) {
throw new IllegalArgumentException("Key can not be null");
}
CacheEntryWrapper entryWrapper;
synchronized (lock) {
entryWrapper = cachedDataStore.get(key);
if (entryWrapper != null) {
// reset the last access time
entryWrapper.resetLastAccessedTime();
// reset the reference (so the weak reference is cleared)
entryWrapper.resetCacheEntryReference();
// add the new reference to queue
queue.add(entryWrapper.getCacheEntryReference());
}
}
return entryWrapper == null ? null : entryWrapper.getValue();
}
public void remove(K key) {
if (key == null) {
throw new IllegalArgumentException("Key can not be null");
}
CacheEntryWrapper entryWrapper;
synchronized (lock) {
entryWrapper = cachedDataStore.remove(key);
if (entryWrapper != null) {
// reset the reference (so the weak reference is cleared)
entryWrapper.resetCacheEntryReference();
}
}
objectsCount.decrementAndGet();
}
public void shutdown() {
shutdown = true;
if (executorService != null) {
executorService.shutdown();
}
queue.clear();
cachedDataStore.clear();
}
public static void main(String[] args) throws Exception {
ObjectCache<Long, Long> cache = new ObjectCache<>(1000000, 60000, 1000, 1000);
long i = 0;
while (i++ < 10000) {
cache.put(i, i);
}
i = 0;
while(i++ < 100) {
Thread.sleep(1000);
System.out.println("Data store size:" + cache.cachedDataStore.size() +", queue size:" + cache.queue.size());
}
cache.shutdown();
}
private class CacheEntryRemover implements Runnable {
public void run() {
if (!shutdown) {
try {
int count = 0;
CacheEntryReference entryReference;
while ((entryReference = queue.peek()) != null && count++ < objectsToRemovePerRemovalThreadRun) {
long currentTime = System.currentTimeMillis();
CacheEntryWrapper cacheEntryWrapper = entryReference.getWeakReference().get();
if (cacheEntryWrapper == null || !cachedDataStore.containsKey(cacheEntryWrapper.getKey())) {
queue.poll(100, TimeUnit.MILLISECONDS); // remove the reference object from queue as value is removed from cache
} else if (currentTime - cacheEntryWrapper.getLastAccessedTime().get() > timeToLive) {
synchronized (lock) {
// get the cacheEntryWrapper again just to find if put() has overridden the same key or remove() has removed it already
CacheEntryWrapper newCacheEntryWrapper = cachedDataStore.get(cacheEntryWrapper.getKey());
// poll the queue if -
// case 1 - value is removed from cache
// case 2 - value is overridden by new value
// case 3 - value is still in cache but it is old now
if (newCacheEntryWrapper == null || newCacheEntryWrapper != cacheEntryWrapper || currentTime - cacheEntryWrapper.getLastAccessedTime().get() > timeToLive) {
queue.poll(100, TimeUnit.MILLISECONDS);
newCacheEntryWrapper = newCacheEntryWrapper == null ? cacheEntryWrapper : newCacheEntryWrapper;
if (currentTime - newCacheEntryWrapper.getLastAccessedTime().get() > timeToLive) {
remove(newCacheEntryWrapper.getKey());
}
} else {
break; // try next time
}
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private class CacheEntryWrapper {
private K key;
private V value;
private AtomicLong lastAccessedTime;
private CacheEntryReference cacheEntryReference;
public CacheEntryWrapper(K key, V value) {
this.key = key;
this.value = value;
this.lastAccessedTime = new AtomicLong(System.currentTimeMillis());
this.cacheEntryReference = new CacheEntryReference(this);
}
public K getKey() {
return key;
}
public V getValue() {
return value;
}
public AtomicLong getLastAccessedTime() {
return lastAccessedTime;
}
public CacheEntryReference getCacheEntryReference() {
return cacheEntryReference;
}
public void resetLastAccessedTime() {
lastAccessedTime.set(System.currentTimeMillis());
}
public void resetCacheEntryReference() {
cacheEntryReference.clear();
cacheEntryReference = new CacheEntryReference(this);
}
}
private class CacheEntryReference {
private WeakReference<CacheEntryWrapper> weakReference;
public CacheEntryReference(CacheEntryWrapper entryWrapper) {
this.weakReference = new WeakReference<>(entryWrapper);
}
public WeakReference<CacheEntryWrapper> getWeakReference() {
return weakReference;
}
public void clear() {
weakReference.clear();
}
}
}
I think your decision is right.
To be precise, I would use a HashMap.
Poor choice. LinkedHashMap would be better here, since it has a facility for removing stale entries, which reduces memory consumption.