Unlike LRU, LFU evicts based on how frequently a key is used. I wanted to write one myself. The difference:
With LRU and capacity 2, the input ABCBBBBBBCAA leaves C and A in the cache at the end.
But with LFU, also at capacity 2, the same input ABCBBBBBBCAA leaves [A, B], because B is used most frequently. In other words, even if a key was evicted earlier for being infrequent, it comes back once it shows up again.
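The LRU half of that trace is easy to check with Java's LinkedHashMap in access order. This is just an illustration I'm adding, not part of the original post; `lruSurvivors` is a made-up helper name:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class LruDemo {
    // Returns the keys left in an LRU cache of the given capacity
    // after feeding it each character of `input` in order.
    static Set<Character> lruSurvivors(String input, final int capacity) {
        // accessOrder=true: iteration order is least-recently-used first;
        // removeEldestEntry evicts the LRU entry once we exceed capacity
        Map<Character, Integer> lru =
                new LinkedHashMap<Character, Integer>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Character, Integer> eldest) {
                        return size() > capacity;
                    }
                };
        for (char c : input.toCharArray()) {
            lru.merge(c, 1, Integer::sum); // insert, or touch on re-access
        }
        return lru.keySet();
    }

    public static void main(String[] args) {
        System.out.println(lruSurvivors("ABCBBBBBBCAA", 2)); // prints [C, A]
    }
}
```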
So the design I had in mind was: the LFUCache sits on top of a HashMap, and the cache proper is a doubly linked list or a PriorityQueue. Whenever a new key arrives, first update it in the HashMap, then use the updated (key, value) pair (the value being the key's frequency) to update the doubly linked list or priority queue.
So I wrote something like this:
package testAndfun;
import java.util.Comparator;
import java.util.HashMap;
import java.util.PriorityQueue;
import java.util.Queue;
/*
 * Least-frequently-used cache (the problem's signature apparently says lruCache):
 * given an array of keys and a cache capacity len, the keys are fed in one by
 * one; return how many times a key in the cache gets replaced. E.g. len = 2,
 * array = {2,1,3,1}: output is 1. 2 comes in and is stored, 1 comes in and is
 * stored, 3 comes in and finds the cache full, so 2 is kicked out and
 * replaceCount++; then 1 comes in, is found to be present already, and only
 * its LFU index needs updating.
 */
public class LFUCache {
    HashMap<Integer, Element> map = new HashMap<Integer, Element>();
    Cache cache;

    private void set(int a) {
        if (!map.containsKey(a)) {
            // first occurrence: frequency starts at 1
            Element e = new Element(a, 1);
            map.put(a, e);
            cache.queue.offer(e);
        } else {
            // existing key: remove the stale queue entry, bump the
            // frequency, and re-offer so the heap re-orders it
            Element old = map.get(a);
            cache.queue.remove(old);
            Element updated = new Element(a, old.freq + 1);
            map.put(a, updated);
            cache.queue.offer(updated);
        }
    }
}
class Element {
    int key;
    int freq;

    Element(int key, int freq) {
        this.key = key;
        this.freq = freq;
    }
}
class Cache {
    int capacity;
    Comparator<Element> cmp = new Comparator<Element>() {
        @Override
        public int compare(Element o1, Element o2) {
            return o1.freq - o2.freq; // min-heap: least frequent at the head
        }
    };
    // built in the constructors: at field-initialization time `capacity`
    // is still 0, which PriorityQueue rejects
    PriorityQueue<Element> queue;

    Cache(int capacity, Element e) {
        this.capacity = capacity;
        queue = new PriorityQueue<Element>(capacity, cmp);
        queue.offer(e);
    }

    Cache(int capacity, Element[] arr) {
        this.capacity = capacity;
        queue = new PriorityQueue<Element>(capacity, cmp);
        for (Element e : arr)
            queue.offer(e);
    }
}
Halfway through writing this I realized how silly it was: why wrap an Element inside the map at all, when the map could just go from key straight to frequency...
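With that simplification, the update step shrinks to a few lines. A sketch under my own naming (`SimpleLfu` is made up; eviction is omitted). Note that a PriorityQueue does not re-sort an entry in place, so a stale entry still has to be removed and re-offered:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class SimpleLfu {
    final Map<Integer, Integer> freq = new HashMap<>();  // key -> frequency
    final PriorityQueue<int[]> queue =                   // {key, freq}, min-heap on freq
            new PriorityQueue<>((x, y) -> x[1] - y[1]);

    void set(int key) {
        int f = freq.merge(key, 1, Integer::sum); // bump (or create) the count
        // the heap cannot see the changed frequency, so replace the entry
        queue.removeIf(e -> e[0] == key);
        queue.offer(new int[]{key, f});
    }
}
```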
Later I came across someone else's design that is much more complete, with decent performance too:
import java.util.LinkedHashMap;
import java.util.Map;
public class LFUCache {
class CacheEntry
{
private String data;
private int frequency;
// default constructor
private CacheEntry()
{}
public String getData() {
return data;
}
public void setData(String data) {
this.data = data;
}
public int getFrequency() {
return frequency;
}
public void setFrequency(int frequency) {
this.frequency = frequency;
}
}
// note: static fields mean every LFUCache instance shares one map and
// one capacity; fine for a single cache, surprising otherwise
private static int initialCapacity = 10;
private static LinkedHashMap<Integer, CacheEntry> cacheMap = new LinkedHashMap<Integer, CacheEntry>();
/* LinkedHashMap is used because it has features of both HashMap and LinkedList.
* Thus, we can get an entry in O(1) and also, we can iterate over it easily.
* */
public LFUCache(int initialCapacity)
{
this.initialCapacity = initialCapacity;
}
public void addCacheEntry(int key, String data)
{
if(cacheMap.containsKey(key))
{
// key already cached: refresh its data, no eviction needed
cacheMap.get(key).setData(data);
return;
}
if(isFull())
{
// evict the least frequently used entry to make room
int entryKeyToBeRemoved = getLFUKey();
cacheMap.remove(entryKeyToBeRemoved);
}
CacheEntry temp = new CacheEntry();
temp.setData(data);
temp.setFrequency(0);
cacheMap.put(key, temp);
}
public int getLFUKey()
{
int key = 0;
int minFreq = Integer.MAX_VALUE;
for(Map.Entry<Integer, CacheEntry> entry : cacheMap.entrySet())
{
if(minFreq > entry.getValue().frequency)
{
key = entry.getKey();
minFreq = entry.getValue().frequency;
}
}
return key;
}
public String getCacheEntry(int key)
{
CacheEntry temp = cacheMap.get(key);
if(temp != null) // cache hit
{
// temp is the stored reference, so no re-put is needed
temp.frequency++;
return temp.data;
}
return null; // cache miss
}
public static boolean isFull()
{
return cacheMap.size() >= initialCapacity;
}
}
I find this design much clearer: the bottom layer is a hash table, and the cache itself is a LinkedHashMap, which makes it convenient to remove any entry.
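Coming back to the interview problem quoted in my first attempt (count how many times a cache entry gets replaced), the same flat key-to-frequency idea gives a short sketch. This is my own illustration, not code from the design above; `replaceCount` is a made-up name, and frequency ties are broken by insertion order via LinkedHashMap, which is what the {2,1,3,1} example assumes when 2 is evicted before 1:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LfuReplaceCount {
    // Feeds `keys` into an LFU cache of the given capacity and returns
    // how many times an existing entry had to be replaced.
    static int replaceCount(int[] keys, int capacity) {
        // key -> use frequency; LinkedHashMap iteration order breaks
        // frequency ties in favor of evicting the oldest entry
        Map<Integer, Integer> freq = new LinkedHashMap<>();
        int replaced = 0;
        for (int k : keys) {
            if (freq.containsKey(k)) {
                freq.merge(k, 1, Integer::sum); // cache hit: bump the count
                continue;
            }
            if (freq.size() == capacity) {
                // linear scan for the least frequently used key
                int lfuKey = 0, minFreq = Integer.MAX_VALUE;
                for (Map.Entry<Integer, Integer> e : freq.entrySet()) {
                    if (e.getValue() < minFreq) {
                        minFreq = e.getValue();
                        lfuKey = e.getKey();
                    }
                }
                freq.remove(lfuKey);
                replaced++;
            }
            freq.put(k, 1);
        }
        return replaced;
    }

    public static void main(String[] args) {
        System.out.println(replaceCount(new int[]{2, 1, 3, 1}, 2)); // prints 1
    }
}
```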