Design and implement a data structure for a Least Recently Used (LRU) cache. It should support the following operations: get and put.

get(key) - Get the value (which will always be positive) of the key if the key exists in the cache; otherwise return -1.
put(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.

The cache is initialized with a positive capacity.

Follow up:
Could you do both operations in O(1) time complexity?
The problem asks for O(1) time complexity. For get, fast lookup calls for a hash map; for put, once the cache is full we must evict the least recently used entry, which means keeping the entries ordered by recency, and a linked list handles that kind of reordering efficiently.
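The map-plus-list combination hinges on one standard-library operation: std::list::splice, which relinks a node in O(1) without copying it and without invalidating iterators stored in the hash map. A minimal sketch of that primitive (move_to_back is my own name, not part of the solutions below):

```cpp
#include <cassert>
#include <list>
#include <utility>

using Entry = std::pair<int, int>;   // (key, value)
using List  = std::list<Entry>;

// Relink the node at pos to the back of lst in O(1). splice moves the
// node itself, so any iterator to it (e.g. one stored in a hash map)
// stays valid and now refers to the tail element.
void move_to_back(List& lst, List::iterator pos) {
    lst.splice(lst.end(), lst, pos);
}
```

This is why iterators into the list can safely live inside the hash map: splicing never reallocates or copies the spliced element.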
#include <list>
#include <unordered_map>
using namespace std;

class LRUCache {
public:
    LRUCache(int capacity) {
        size = capacity;
    }

    int get(int key) {
        auto it = data.find(key);
        if (it == data.end())                 // not found
            return -1;
        // Refresh recency: re-insert a copy at the tail (most recently used),
        // erase the old node, and update the stored iterator.
        data_queue.push_back(make_pair(it->first, it->second->second));
        data_queue.erase(it->second);
        it->second = --data_queue.end();
        return it->second->second;
    }

    void put(int key, int value) {
        auto it = data.find(key);
        if (it == data.end()) {               // not found
            if ((int)data.size() >= size) {   // full: evict the front (LRU) entry
                data.erase(data_queue.begin()->first);
                data_queue.pop_front();
            }
            data_queue.push_back(make_pair(key, value));
            data.insert(make_pair(key, --data_queue.end()));
        } else {                              // found: refresh recency and value
            data_queue.push_back(make_pair(it->first, value));
            data_queue.erase(it->second);
            it->second = --data_queue.end();
        }
    }

private:
    int size;                                                // capacity
    list<pair<int, int>> data_queue;                         // front = LRU, back = MRU
    unordered_map<int, list<pair<int, int>>::iterator> data; // key -> list node
};
Here I use iterators in place of raw pointers.
This version runs in 178 ms.
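To check the behavior quickly, here is a condensed, splice-based sketch of the same map-plus-list design (LruSketch is my own name, not from the submission; splice replaces the push_back/erase pair so no node is ever copied on access), exercised below with the classic capacity-2 example:

```cpp
#include <cassert>
#include <list>
#include <unordered_map>
#include <utility>

// Minimal sketch of the hash-map + list LRU design described above.
class LruSketch {
public:
    explicit LruSketch(int capacity) : cap(capacity) {}

    int get(int key) {
        auto it = index.find(key);
        if (it == index.end()) return -1;
        order.splice(order.end(), order, it->second);   // mark most recently used
        return it->second->second;
    }

    void put(int key, int value) {
        auto it = index.find(key);
        if (it != index.end()) {                        // found: update and refresh
            it->second->second = value;
            order.splice(order.end(), order, it->second);
            return;
        }
        if ((int)index.size() >= cap) {                 // full: evict front = LRU
            index.erase(order.front().first);
            order.pop_front();
        }
        order.emplace_back(key, value);
        index[key] = --order.end();
    }

private:
    int cap;
    std::list<std::pair<int, int>> order;               // front = LRU, back = MRU
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index;
};
```

With capacity 2, the sequence put(1,1), put(2,2), get(1), put(3,3), get(2), put(4,4), get(1), get(3), get(4) returns 1, -1, -1, 3, 4, matching the expected eviction order.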
The fastest submission (68 ms) is shown below:
#include <unordered_map>
using namespace std;

// Hand-rolled node for a circular doubly linked list.
class LRUNode {
public:
    int key;
    int value;
    LRUNode* prev;
    LRUNode* next;
};

class LRUCache {
public:
    unordered_map<int, LRUNode*> m_map;
    LRUNode* m_root;        // head of the circular list (most recently used)
    int m_capacity;

    LRUCache(int capacity) {
        m_capacity = capacity;
        m_root = nullptr;
    }
    inline LRUNode* findNode(int key) {
        auto it = m_map.find(key);
        return (it != m_map.end()) ? it->second : nullptr;
    }

    inline void moveNodeToHead(LRUNode* node) {
        // If node is already the head there is nothing to do.
        if (node == m_root)
            return;
        // Unlink node from the list, stitching its neighbours together.
        if (node->prev) {
            node->prev->next = node->next;
            node->next->prev = node->prev;
        }
        // Relink node just before the current head, then make it the head.
        node->next = m_root;
        node->prev = m_root->prev;
        m_root->prev->next = node;
        m_root->prev = node;
        m_root = node;
    }
    int get(int key) {
        LRUNode* node = findNode(key);
        if (!node)
            return -1;
        moveNodeToHead(node);
        return m_root->value;
    }

    void put(int key, int value) {
        LRUNode* node = findNode(key);
        if (node) {
            // Key found: move it to the front and update its value.
            moveNodeToHead(node);
            node->value = value;
        } else {
            if ((int)m_map.size() >= m_capacity) {
                // At capacity: recycle the oldest node (the head's prev in the
                // circular list) as the new node, after removing its old key
                // from the hash table.
                m_map.erase(m_root->prev->key);
                node = m_root->prev;
                node->key = key;
                node->value = value;
            } else {
                // Otherwise allocate a fresh node and link it in.
                node = new LRUNode;
                node->key = key;
                node->value = value;
                if (!m_root) {
                    // First element: the node is its own neighbour.
                    m_root = node;
                    m_root->next = m_root;
                    m_root->prev = m_root;
                } else {
                    node->next = m_root;
                    node->prev = m_root->prev;
                    node->prev->next = node;
                    m_root->prev = node;
                }
            }
            // Register the node in the hash map and make it the new head.
            m_map[key] = node;
            m_root = node;
        }
    }
};
// Decouple C++ streams from C stdio and untie cin/cout before main runs.
auto speedup = []() {
    std::ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    return nullptr;
}();
This version builds its own doubly linked list node class; LRUCache adds two internal helpers, findNode and moveNodeToHead, and get and put spell out every pointer operation explicitly. My guess is that the iterator-based version carries some overhead that slows it down.
The final speedup snippet disables stdio synchronization and unties cin from cout, which typically saves around 10 ms.
However, when I submitted exactly this code it was reported as 140 ms, so the timing figures are only a rough reference.