Three Cache Eviction Algorithms: FIFO, LRU, and LFU

This article introduces three commonly used cache eviction algorithms and simple implementations of each.

Introduction

A cache keeps objects that a program or system calls frequently in memory, so that later calls can fetch them quickly instead of creating new, duplicate instances.
When the amount of cached data grows past a certain threshold, a cache eviction algorithm is usually applied to decide what to discard.
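
To make the idea concrete, here is a minimal cache-aside sketch (the CacheAsideExample class and its loadUser method are made up for illustration and are not part of the implementations below): look in the cache first, and only compute and store the value on a miss.

import java.util.HashMap;
import java.util.Map;

public class CacheAsideExample {
    // Hypothetical in-memory cache: key -> cached object.
    private static final Map<String, String> CACHE = new HashMap<>();

    // Expensive operation we want to avoid repeating (hypothetical stand-in for a database call).
    private static String loadUser(String id) {
        return "user-" + id;
    }

    public static String getUser(String id) {
        // Return the cached value, or load and cache it on a miss.
        return CACHE.computeIfAbsent(id, CacheAsideExample::loadUser);
    }

    public static void main(String[] args) {
        System.out.println(getUser("42")); // miss: loads and caches
        System.out.println(getUser("42")); // hit: served from the map
    }
}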

FIFO (First In, First Out)

FIFO is the first-in, first-out algorithm. A queue has exactly this property, so a LinkedList alone could implement FIFO; however, lookups in a LinkedList take O(n) time, so a HashMap combined with a LinkedList is a better fit: the map gives O(1) lookup and the list preserves insertion order.

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class FIFOCache<K, V> {
    private final Map<K, V> cache;      // key -> value, O(1) lookup
    private final LinkedList<K> list;   // keys in insertion order
    private volatile int maxCapacity;
    private final Lock lock;

    public FIFOCache() {
        this(1000);                     // default capacity
    }

    public FIFOCache(int maxCapacity) {
        this.maxCapacity = maxCapacity;
        this.lock = new ReentrantLock();
        this.cache = new HashMap<>();
        this.list = new LinkedList<>();
    }

    public V get(K key) {
        this.lock.lock();
        V value;
        try {
            value = this.cache.get(key);
        } finally {
            this.lock.unlock();
        }
        return value;
    }

    public void put(K key, V value) {
        this.lock.lock();
        try {
            // If the key is already cached, drop its old position so the list holds no duplicates.
            if (this.cache.containsKey(key)) {
                this.list.remove(key);
            }
            this.list.addLast(key);
            if (maxCapacity < this.list.size()) {
                // Over capacity: evict the oldest key (first in, first out).
                K eldest = this.list.removeFirst();
                this.cache.remove(eldest);
            }
            this.cache.put(key, value);
        } finally {
            this.lock.unlock();
        }
    }

    @Override
    public String toString() {
        return "FIFOCache{" +
                "cache=" + cache +
                ", list=" + list +
                ", maxCapacity=" + maxCapacity +
                ", lock=" + lock +
                '}';
    }
}

Test class

public class TestFIFO {
    public static void main(String[] args) {
        FIFOCache<String,String> fifoCache = new FIFOCache<String,String>(3);

        fifoCache.put("1","a");
        fifoCache.put("2","b");
        fifoCache.put("3","c");
        fifoCache.put("4","d");

        System.out.println(fifoCache.toString());

        System.out.println(fifoCache.get("2"));
    }
}

Result (screenshot omitted): once "4" is inserted, the oldest key "1" is evicted, so toString shows keys 2, 3 and 4, and get("2") prints b.
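
Incidentally, the same FIFO policy can be expressed more compactly on top of LinkedHashMap in its default insertion-order mode, letting removeEldestEntry discard the oldest entry. This is a minimal, non-thread-safe sketch (the class name FIFOLinkedCache is made up for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

public class FIFOLinkedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxCapacity;

    public FIFOLinkedCache(int maxCapacity) {
        // accessOrder = false (the default): iteration order is insertion order, i.e. FIFO.
        super(16, 0.75F, false);
        this.maxCapacity = maxCapacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the oldest inserted entry once the cache grows past maxCapacity.
        return size() > maxCapacity;
    }

    public static void main(String[] args) {
        FIFOLinkedCache<String, String> cache = new FIFOLinkedCache<>(3);
        cache.put("1", "a");
        cache.put("2", "b");
        cache.put("3", "c");
        cache.put("4", "d");        // "1" is evicted
        System.out.println(cache);  // {2=b, 3=c, 4=d}
    }
}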

LRU (Least Recently Used)

LRU (Least Recently Used) is a common cache algorithm that is widely used in distributed cache systems such as Redis and Memcached.

The idea behind LRU is that if a piece of data has not been accessed recently, it is unlikely to be accessed in the near future either. Therefore, when the cache is full, the data that has gone unused for the longest time is evicted first.

Implementation

  1. Array + timestamps (high time complexity, O(n))
  2. Linked list (lookup is still O(n)); move each accessed node to the tail of the list
  3. Hash table + doubly linked list (LinkedHashMap); a hand-rolled sketch of this approach follows below
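
For reference, approach 3 can also be hand-rolled without LinkedHashMap: a HashMap provides O(1) lookup, and a doubly linked list keeps entries ordered by recency, with the most recently used entry at the head and eviction from the tail. A minimal, non-thread-safe sketch (the SimpleLRUCache class and its helpers are made up for illustration):

import java.util.HashMap;
import java.util.Map;

public class SimpleLRUCache<K, V> {
    private static class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // most recently used side
    private final Node<K, V> tail = new Node<>(null, null); // least recently used side

    public SimpleLRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) {
            return null;
        }
        moveToHead(node);                 // refresh recency on every hit
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {
            node.value = value;
            moveToHead(node);
            return;
        }
        node = new Node<>(key, value);
        map.put(key, node);
        addAfterHead(node);
        if (map.size() > capacity) {      // evict the least recently used entry
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
    }

    private void moveToHead(Node<K, V> node) { unlink(node); addAfterHead(node); }

    private void addAfterHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }
}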

The LRU implementation in Dubbo

package cache;

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * @author zousy
 * @version v1.0
 * @Description
 * @date 2021-03-16 14:46
 */
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    
    private static final float DEFAULT_LOAD_FACTOR = 0.75F;
    private static final int DEFAULT_MAX_CAPACITY = 1000;
    private final Lock lock;
    private volatile int maxCapacity;

    public LRUCache() {
        this(DEFAULT_MAX_CAPACITY);
    }

    public LRUCache(int maxCapacity) {
        // accessOrder = true: iteration order follows access order, which is exactly what LRU needs.
        super(16, DEFAULT_LOAD_FACTOR, true);
        this.lock = new ReentrantLock();
        this.maxCapacity = maxCapacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by LinkedHashMap after each put: evict the least recently used entry when over capacity.
        return this.size() > this.maxCapacity;
    }

    @Override
    public boolean containsKey(Object key) {
        this.lock.lock();

        boolean var2;
        try {
            var2 = super.containsKey(key);
        } finally {
            this.lock.unlock();
        }

        return var2;
    }

    @Override
    public V get(Object key) {
        this.lock.lock();

        Object var2;
        try {
            var2 = super.get(key);
        } finally {
            this.lock.unlock();
        }

        return (V) var2;
    }

    @Override
    public V put(K key, V value) {
        this.lock.lock();

        Object var3;
        try {
            var3 = super.put(key, value);
        } finally {
            this.lock.unlock();
        }

        return (V) var3;
    }

    @Override
    public V remove(Object key) {
        this.lock.lock();

        Object var2;
        try {
            var2 = super.remove(key);
        } finally {
            this.lock.unlock();
        }

        return (V) var2;
    }

    @Override
    public int size() {
        this.lock.lock();

        int var1;
        try {
            var1 = super.size();
        } finally {
            this.lock.unlock();
        }

        return var1;
    }

    @Override
    public void clear() {
        this.lock.lock();

        try {
            super.clear();
        } finally {
            this.lock.unlock();
        }

    }

    public int getMaxCapacity() {
        return this.maxCapacity;
    }

    public void setMaxCapacity(int maxCapacity) {
        this.maxCapacity = maxCapacity;
    }
}

Test class

public class TestLRUCache {
    private static LRUCache<String, Integer> cache = new LRUCache<>(10);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            cache.put("k" + i, i);
        }
        System.out.println("all cache :'{"+cache+"}'");
        cache.get("k3");
        System.out.println("get k3 :'{"+cache+"}'");
        cache.get("k4");
        System.out.println("get k4 :'{"+cache+"}'");
        cache.get("k4");
        System.out.println("get k4 :'{"+cache+"}'");
        cache.put("k" + 10, 10);
        System.out.println("After running the LRU algorithm cache :'{"+cache+"}'");
    }
}

Result (screenshot omitted): inserting k10 pushes the cache over its capacity of 10, so k0, the least recently used entry, is evicted; k3 and k4 survive because the get calls moved them to the most recently used end.

LFU (Least Frequently Used)

LFU (Least Frequently Used) is another common cache algorithm.

The idea behind LFU is that if a piece of data has rarely been accessed recently, it is unlikely to be accessed much in the future either. Therefore, when the cache is full, the data with the lowest access frequency is evicted first.

Implementation strategy: since LFU evicts the data with the lowest access frequency, we need a suitable way to keep access frequencies ordered by size.

Implementation

  1. Counter + linked list + hash table (a minimal counter-based sketch follows this list)
  2. An array of deques (one per access count) + hash table + linked list
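
For reference, approach 1 needs nothing more than a hash map and a per-entry access counter; eviction scans for the minimum count, so it is O(n) on eviction but easy to follow. A minimal, non-thread-safe sketch (the SimpleLFUCache class is made up for illustration):

import java.util.HashMap;
import java.util.Map;

public class SimpleLFUCache<K, V> {
    private static class Entry<V> {
        V value;
        long count;                       // access frequency counter
        Entry(V value) { this.value = value; }
    }

    private final int capacity;
    private final Map<K, Entry<V>> map = new HashMap<>();

    public SimpleLFUCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        e.count++;                        // every hit raises the frequency
        return e.value;
    }

    public void put(K key, V value) {
        Entry<V> e = map.get(key);
        if (e != null) {
            e.value = value;
            e.count++;
            return;
        }
        if (map.size() >= capacity) {
            evictLeastFrequent();         // make room before inserting a new key
        }
        map.put(key, new Entry<>(value));
    }

    private void evictLeastFrequent() {
        K victim = null;
        long min = Long.MAX_VALUE;
        for (Map.Entry<K, Entry<V>> e : map.entrySet()) {
            if (e.getValue().count < min) {
                min = e.getValue().count;
                victim = e.getKey();
            }
        }
        if (victim != null) {
            map.remove(victim);
        }
    }
}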

Below is the LFU implementation from Dubbo.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class LFUCache<K, V> {
    private Map<K, LFUCache.CacheNode<K, V>> map;       // key -> node
    private LFUCache.CacheDeque<K, V>[] freqTable;      // freqTable[i] holds entries accessed i times
    private final int capacity;
    private int evictionCount;
    private int curSize;
    private final ReentrantLock lock;
    private static final int DEFAULT_INITIAL_CAPACITY = 1000;
    private static final float DEFAULT_EVICTION_CAPACITY = 0.75F; // default eviction factor

    public LFUCache() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_EVICTION_CAPACITY);
    }

    public LFUCache(final int maxCapacity, final float evictionFactor) {
        this.curSize = 0;
        this.lock = new ReentrantLock();
        if (maxCapacity <= 0) {
            throw new IllegalArgumentException("Illegal initial capacity: " + maxCapacity);
        } else {
            // The eviction factor must lie in (0, 1].
            boolean factorInRange = evictionFactor <= 1.0F && evictionFactor > 0.0F;
            if (factorInRange && !Float.isNaN(evictionFactor)) {
                this.capacity = maxCapacity;
                this.evictionCount = (int) ((float) this.capacity * evictionFactor);
                this.map = new HashMap<>();
                this.freqTable = new LFUCache.CacheDeque[this.capacity + 1];

                int i;
                for (i = 0; i <= this.capacity; ++i) {
                    this.freqTable[i] = new LFUCache.CacheDeque();
                }

                // Chain each deque to the next higher-frequency one ...
                for (i = 0; i < this.capacity; ++i) {
                    this.freqTable[i].nextDeque = this.freqTable[i + 1];
                }

                // ... and let the last deque point to itself so the frequency stops growing there.
                this.freqTable[this.capacity].nextDeque = this.freqTable[this.capacity];
            } else {
                throw new IllegalArgumentException("Illegal eviction factor value:" + evictionFactor);
            }
        }
    }

    public int getCapacity() {
        return this.capacity;
    }

    public V put(final K key, final V value) {
        this.lock.lock();

        LFUCache.CacheNode<K, V> node;
        try {
            if (this.map.containsKey(key)) {
                node = this.map.get(key);
                if (node != null) {
                    LFUCache.CacheNode.withdrawNode(node);
                }

                // Re-inserting an existing key resets its frequency to 0.
                node.value = value;
                this.freqTable[0].addLastNode(node);
                this.map.put(key, node);
            } else {
                node = this.freqTable[0].addLast(key, value);
                this.map.put(key, node);
                ++this.curSize;
                if (this.curSize > this.capacity) {
                    this.proceedEviction();
                }
            }
        } finally {
            this.lock.unlock();
        }

        return node.value;
    }

    public V remove(final K key) {
        LFUCache.CacheNode<K, V> node = null;
        this.lock.lock();

        try {
            if (this.map.containsKey(key)) {
                node = this.map.remove(key);
                if (node != null) {
                    LFUCache.CacheNode.withdrawNode(node);
                }

                --this.curSize;
            }
        } finally {
            this.lock.unlock();
        }

        return node != null ? node.value : null;
    }

    public V get(final K key) {
        LFUCache.CacheNode<K, V> node = null;
        this.lock.lock();

        try {
            if (this.map.containsKey(key)) {
                node = this.map.get(key);
                LFUCache.CacheNode.withdrawNode(node);
                // Move the node to the deque at the next array index: the index records how many times
                // the entry has been accessed. Eviction later starts from the low-frequency deques.
                node.owner.nextDeque.addLastNode(node);
            }
        } finally {
            this.lock.unlock();
        }

        return node != null ? node.value : null;
    }

    private int proceedEviction() {
        int targetSize = this.capacity - this.evictionCount; // target size = capacity - capacity * evictionFactor
        int evictedElements = 0;

        for(int i = 0; i <= this.capacity; ++i) {
            while(!this.freqTable[i].isEmpty()) {
                LFUCache.CacheNode<K, V> node = this.freqTable[i].pollFirst();
                this.remove(node.key);
                if (targetSize >= this.curSize) {
                    return evictedElements;
                }

                ++evictedElements;
            }
        }

        return evictedElements;
    }

    public int getSize() {
        return this.curSize;
    }
	
    static class CacheDeque<K, V> {
        // "last" and "first" are sentinel nodes: new nodes are linked in next to "last"
        // and polled from next to "first", so the oldest node in a deque leaves first.
        LFUCache.CacheNode<K, V> last = new LFUCache.CacheNode();
        LFUCache.CacheNode<K, V> first = new LFUCache.CacheNode();
        LFUCache.CacheDeque<K, V> nextDeque;

        CacheDeque() {
            this.last.next = this.first;
            this.first.prev = this.last;
        }

        LFUCache.CacheNode<K, V> addLast(final K key, final V value) {
            LFUCache.CacheNode<K, V> node = new LFUCache.CacheNode(key, value);
            node.owner = this;
            node.next = this.last.next;
            node.prev = this.last;
            node.next.prev = node;
            this.last.next = node;
            return node;
        }

        LFUCache.CacheNode<K, V> addLastNode(final LFUCache.CacheNode<K, V> node) {
            node.owner = this;
            node.next = this.last.next;
            node.prev = this.last;
            node.next.prev = node;
            this.last.next = node;
            return node;
        }

        LFUCache.CacheNode<K, V> pollFirst() {
            LFUCache.CacheNode<K, V> node = null;
            if (this.first.prev != this.last) {
                node = this.first.prev;
                this.first.prev = node.prev;
                this.first.prev.next = this.first;
                node.prev = null;
                node.next = null;
            }

            return node;
        }

        boolean isEmpty() {
            return this.last.next == this.first;
        }
    }

    static class CacheNode<K, V> {
        LFUCache.CacheNode<K, V> prev;
        LFUCache.CacheNode<K, V> next;
        K key;
        V value;
        LFUCache.CacheDeque owner;

        CacheNode() {
        }

        CacheNode(final K key, final V value) {
            this.key = key;
            this.value = value;
        }

        static <K, V> LFUCache.CacheNode<K, V> withdrawNode(final LFUCache.CacheNode<K, V> node) {
            if (node != null && node.prev != null) {
                node.prev.next = node.next;
                if (node.next != null) {
                    node.next.prev = node.prev;
                }
            }

            return node;
        }
    }
}

Data structure (figure omitted): a map from key to node, plus the freqTable array of doubly linked deques indexed by access count; each node keeps a reference to the deque that currently owns it.

Test class

public class TestLFUCache {

    private static LFUCache<String, Integer> cache = new LFUCache<String, Integer>(3, 0.1F); // eviction factor must be in (0, 1]

    public static void main(String[] args) {
        cache.put("k1",1);
        cache.get("k1");
        cache.put("k2",2);
        cache.get("k2");
        cache.put("k3",3);
        cache.put("k4",4);
        cache.put("k5",5);
        cache.put("k6",6);

        System.out.println(cache.get("k1"));
        System.out.println(cache.get("k2"));
        System.out.println(cache.get("k3"));
        System.out.println(cache.get("k4"));
        System.out.println(cache.get("k5"));
        System.out.println(cache.get("k6"));
    }
}

Result (screenshot omitted): k3, k4 and k5 are evicted one after another as k4, k5 and k6 are inserted, so the six gets print 1, 2, null, null, null and 6; k1 and k2 survive because each was accessed once and therefore sits in a higher-frequency deque.

The CacheDeque array effectively acts as an access counter: every time an entry is accessed, its node is moved to the deque at the next array index.
