Using Redis as an LRU cache

When Redis is used as a cache, it is often handy to let it automatically evict old data as new data is added. This behavior is well known in the developer community, since it is the default behavior of the popular memcached system.

LRU is actually only one of the supported eviction methods. This page covers the more general topic of the Redis maxmemory directive, which is used to limit the memory usage to a fixed amount, and it also covers in depth the LRU algorithm used by Redis, which is actually an approximation of exact LRU.

Maxmemory configuration directive

The maxmemory configuration directive is used to configure Redis to use a specified amount of memory for the data set. It is possible to set the configuration directive using the redis.conf file, or later using the CONFIG SET command at runtime.

For example, in order to configure a memory limit of 100 megabytes, the following directive can be used inside the redis.conf file:

maxmemory 100mb

Setting maxmemory to zero results in no memory limit. This is the default behavior for 64 bit systems, while 32 bit systems use an implicit memory limit of 3GB.
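The same limit can also be applied at runtime. A minimal redis-cli session (assuming a local instance on the default port) might look like this; note that CONFIG GET reports the value in bytes:

```
# set the limit at runtime (not persisted to redis.conf unless CONFIG REWRITE is used)
127.0.0.1:6379> CONFIG SET maxmemory 100mb
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"
```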

When the specified amount of memory is reached, it is possible to select among different behaviors, called policies. Redis can either return errors for commands that could result in more memory being used, or it can evict some old data in order to return back to the specified limit every time new data is added.

Eviction policies  

The exact behavior Redis follows when the maxmemory limit is reached is configured using the maxmemory-policy configuration directive.

The following policies are available:

  • noeviction: return errors when the memory limit is reached and the client tries to execute commands that could result in more memory being used (most write commands, but DEL and a few more exceptions).
  • allkeys-lru: evict keys trying to remove the least recently used (LRU) keys first, in order to make space for the new data added.
  • volatile-lru: evict keys trying to remove the least recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
  • allkeys-random: evict random keys in order to make space for the new data added.
  • volatile-random: evict random keys in order to make space for the new data added, but only evict keys with an expire set.
  • volatile-ttl: in order to make space for the new data, evict only keys with an expire set, and try to evict keys with a shorter time to live (TTL) first.

The policies volatile-lru, volatile-random and volatile-ttl behave like noeviction if there are no keys to evict matching the prerequisites.
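The policy is set with the same directive style as the memory limit. For example, a cache-oriented redis.conf might contain:

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```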

Picking the right eviction policy is important and depends on the access pattern of your application; however, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis INFO output in order to tune your setup.

In general, as a rule of thumb:

  • Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, you expect that a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.
  • Use the allkeys-random policy if you have cyclic access where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements likely accessed with the same probability).
  • Use the volatile-ttl policy if you want to be able to provide hints to Redis about good candidates for expiration by using different TTL values when you create your cache objects.

The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance both for caching and for a set of persistent keys. However, it is usually a better idea to run two separate Redis instances to solve such a problem.

It is also worth noting that setting an expire on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since there is no need to set an expire for a key to be evicted under memory pressure.

How the eviction process works

It is important to understand that the eviction process works like this:

  • A client runs a new command, resulting in more data added.
  • Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
  • A new command is executed, and so forth.

So we continuously cross the boundaries of the memory limit, by going over it, and then by evicting keys to return back under the limits.
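The loop above can be sketched in Python. This is a toy model, not Redis internals: `used_memory` is a crude stand-in for Redis's real memory accounting, and `evict_random_key` is a hypothetical callback modeling the simplest policy, allkeys-random.

```python
import random

def used_memory(store):
    # crude stand-in for Redis's memory accounting: bytes of keys plus values
    return sum(len(k) + len(v) for k, v in store.items())

def process_command(store, key, value, maxmemory, evict_one):
    """Toy model of the eviction loop: apply the write (which may push
    usage over the limit), then evict keys according to the configured
    policy until usage is back under the limit."""
    store[key] = value
    while used_memory(store) > maxmemory and len(store) > 1:
        evict_one(store)  # cross the boundary, then return under it

def evict_random_key(store):
    # hypothetical policy callback modeling allkeys-random
    store.pop(random.choice(list(store)))
```

A volatile-* policy would differ only in restricting the candidate set inside the callback to keys that have an expire set.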

If a command results in a lot of memory being used (like a big set intersection stored into a new key), the memory limit can be surpassed by a noticeable amount for some time.

Approximated LRU algorithm

The Redis LRU algorithm is not an exact implementation. This means that Redis is not able to pick the best candidate for eviction, that is, the key whose last access is the oldest. Instead it runs an approximation of the LRU algorithm, by sampling a small number of keys and evicting the best one (the one with the oldest access time) among the sampled keys.

However, since Redis 3.0 the algorithm was improved to also keep a pool of good candidates for eviction. This improved the performance of the algorithm, making it able to approximate the behavior of a real LRU algorithm more closely.

What is important about the Redis LRU algorithm is that you are able to tune its precision by changing the number of samples checked for every eviction. This parameter is controlled by the following configuration directive:

maxmemory-samples 5
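The sampling idea can be illustrated with a short Python sketch. This is a simplified model, assuming we track a plain last-access timestamp per key (real Redis uses a compact per-object LRU clock instead):

```python
import random

def evict_sampled_lru(last_access, samples=5):
    """Approximated LRU: sample `samples` random keys and evict the one
    with the oldest access time, rather than scanning every key."""
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])  # oldest access wins
    del last_access[victim]
    return victim
```

When `samples` covers the whole key space this degenerates to exact LRU; smaller values trade precision for CPU, which is exactly the trade-off maxmemory-samples controls.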

The reason why Redis does not use a true LRU implementation is that it costs more memory. However, the approximation is virtually equivalent for applications using Redis. The following is a graphical comparison of the LRU approximation used by Redis versus true LRU.

LRU comparison

The test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so that the first keys are the best candidates for eviction under an LRU algorithm. Later, 50% more keys were added, in order to force half of the old keys to be evicted.

You can see three kind of dots in the graphs, forming three distinct bands.

  • The light gray band are objects that were evicted.
  • The gray band are objects that were not evicted.
  • The green band are objects that were added.

In a theoretical LRU implementation we expect that, among the old keys, the first half will be expired. The Redis LRU algorithm will instead only probabilistically expire the older keys.

As you can see, Redis 3.0 does a better job with 5 samples compared to Redis 2.8; however, most objects that are among the latest accessed are still retained by Redis 2.8. Using a sample size of 10 in Redis 3.0, the approximation is very close to theoretical LRU performance.

Note that LRU is just a model to predict how likely a given key will be accessed in the future. Moreover, if your data access pattern closely resembles the power law, most of the accesses will be in the set of keys that the LRU approximated algorithm will be able to handle well.

In simulations we found that using a power law access pattern, the difference between true LRU and Redis approximation were minimal or non-existent.

However, you can raise the sample size to 10, at the cost of some additional CPU usage, in order to closely approximate true LRU, and check if this makes a difference in your cache miss rate.

It is very simple to experiment in production with different values for the sample size by using the CONFIG SET maxmemory-samples <count> command.
