C++ LRU Cache Implementation

LRU

A quick review of LRU (Least Recently Used): evict whatever has gone unused the longest.
An LRU cache should have the following properties:

  • A newly inserted entry goes to the front of the cache
  • An entry that is accessed is also moved to the front of the cache
  • When capacity is exceeded, the entry at the tail is evicted

I was recently reading the mediapipe source and came across image_multi_pool.cc, which essentially uses an LRU cache to manage image object pools. Its LRU, however, is implemented with a deque + unordered_map.
The key code:

class ImageMultiPool {
 public:
  ImageMultiPool() {}
  explicit ImageMultiPool(void* ignored) {}
  ~ImageMultiPool();

  // Obtains a buffer. May either be reused or created anew.
  Image GetBuffer(int width, int height, bool use_gpu,
                  ImageFormat::Format format /*= ImageFormat::SRGBA*/)
                  {
    IBufferSpec key(width, height, format);
    auto pool_it = pools_cpu_.find(key);
    if (pool_it == pools_cpu_.end()) {
      // Discard the least recently used pool in LRU cache.
      if (pools_cpu_.size() >= kMaxPoolCount) {
        auto old_spec = buffer_specs_cpu_.front();  // Front has LRU.
        buffer_specs_cpu_.pop_front();
        pools_cpu_.erase(old_spec);
      }
      buffer_specs_cpu_.push_back(key);  // Push new spec to back.
      std::tie(pool_it, std::ignore) = pools_cpu_.emplace(
          std::piecewise_construct, std::forward_as_tuple(key),
          std::forward_as_tuple(MakeSimplePoolCpu(key)));
    } else {
      // Find and move current 'key' spec to back, keeping others in same order.
      auto specs_it = buffer_specs_cpu_.begin();
      while (specs_it != buffer_specs_cpu_.end()) {
        if (*specs_it == key) {
          buffer_specs_cpu_.erase(specs_it);
          break;
        }
        ++specs_it;
      }
      buffer_specs_cpu_.push_back(key);
    }
    return GetBufferFromSimplePool(pool_it->first, pool_it->second);
  }

  struct IBufferSpec {
    IBufferSpec(int w, int h, mediapipe::ImageFormat::Format f)
        : width(w), height(h), format(f) {}
    int width;
    int height;
    mediapipe::ImageFormat::Format format;
  };

 private:
  std::unordered_map<IBufferSpec, SimplePoolCpu, IBufferSpecHash> pools_cpu_;
  std::deque<IBufferSpec> buffer_specs_cpu_;
};

The problem

Using an LRU cache to manage the object pools is fine, but implementing the LRU with a deque is a questionable choice, and the linear traversal of the deque above to find and move the touched pool is clearly not optimal.

The implementation

The fix is simply to replace the deque with a list. A list guarantees O(1) insertion and erasure at any position, while a deque only guarantees (amortized) O(1) at its two ends; erasing from the middle of a deque, as the traversal above does, is O(n).
The other change is to store, alongside each value, an iterator into the list so the LRU bookkeeping needs no search at all. This is also the real reason a deque cannot be used here: insertions and erasures can invalidate deque iterators, whereas in a list only the iterator of the erased element itself is invalidated.

#include <iostream>
#include <unordered_map>
#include <list>
#include <mutex>
#include <thread>

template <typename KeyType, typename ValueType, int capacity = 10>
class LRUCache {
 private:
  std::unordered_map<KeyType, std::pair<ValueType, typename std::list<KeyType>::iterator>> cache;
  std::list<KeyType> lruList;
  std::mutex mtx;  // Mutex for synchronization

 public:
  ValueType get(const KeyType& key) {
    std::lock_guard<std::mutex> lock(mtx);  // Lock for thread safety
    if (cache.find(key) != cache.end()) {
      // Move the accessed item to the front of the list
      lruList.erase(cache[key].second);
      lruList.push_front(key);
      cache[key].second = lruList.begin();
      return cache[key].first;
    }
    return ValueType();  // Return a default-constructed value if the key is not in the cache
  }

  void put(const KeyType& key, const ValueType& value) {
    std::lock_guard<std::mutex> lock(mtx);  // Lock for thread safety
    if (cache.find(key) != cache.end()) {
      // If key exists, update its value and move to the front
      cache[key].first = value;
      lruList.erase(cache[key].second);
      lruList.push_front(key);
      cache[key].second = lruList.begin();
    } else {
      // If key does not exist
      if (cache.size() >= capacity) {
        // Remove the least recently used item
        KeyType lruKey = lruList.back();
        cache.erase(lruKey);
        lruList.pop_back();
      }
      // Add the new key-value pair
      lruList.push_front(key);
      cache[key] = std::make_pair(value, lruList.begin());
    }
  }

  // Note: iterates lruList without taking mtx, so it is not safe to call
  // concurrently with get/put; acceptable here for demo output only.
  friend std::ostream& operator<<(std::ostream& os,
                                  const LRUCache<KeyType, ValueType, capacity>& rhs) {
    for (const auto& p : rhs.lruList) {
      os << p << " ";
    }
    return os;
  }
};

int main() {
  LRUCache<std::string, int, 2> cache;  // Capacity is set to 2

  std::thread t1([&]() {
    cache.put("one", 1);
    cache.put("two", 2);
    std::cout << "Thread 1 cache: " << cache << std::endl;
    std::cout << "Thread 1: " << cache.get("one") << std::endl;
    std::cout << "Thread 1 cache: " << cache << std::endl;
  });

  std::thread t2([&]() {
    cache.put("three", 3);
    std::cout << "Thread 2 cache: " << cache << std::endl;
    std::cout << "Thread 2: " << cache.get("two") << std::endl;
    std::cout << "Thread 2 cache: " << cache << std::endl;
    std::cout << "Thread 2: " << cache.get("one") << std::endl;
    std::cout << "Thread 2 cache: " << cache << std::endl;
  });

  t1.join();
  t2.join();

  return 0;
}


