Continual Learning

Current mainstream continual learning methods for neural network models fall into the following categories:

1. Regularization: constrain parameter updates so that learning a new task does not disturb knowledge from previous tasks. The most representative algorithm in this class is EWC (Elastic Weight Consolidation).
   EWC: https://github.com/GMvandeVen/continual-learning
2. Ensembling: when the model learns a new task, add a new sub-model (explicitly or implicitly), so that multiple tasks effectively correspond to multiple models, and the predictions of all the models are combined at the end. Adding sub-models works, but since every new task adds another sub-model, this is a serious challenge for both training efficiency and storage. PathNet, published by Google, is a typical ensembling algorithm.
3. Rehearsal: the idea here is very intuitive. If we worry that the model will forget old tasks while learning new ones, we can address this directly through continual review: while the model trains on a new task, we mix in data from the old tasks, so it learns the new task while still attending to the old ones. The downside is that we must keep all old-task data around indefinitely, and the same examples end up being trained on repeatedly. GeppNet is a classic rehearsal-based algorithm.
4. Dual-memory: inspired by the mechanism of human memory, this approach uses two networks: a fast-memory (short-term memory) network and a slow-memory (long-term memory) network. Newly learned knowledge is stored in fast memory, and fast memory continually consolidates and transfers it into slow memory. GeppNet+STM is an algorithm that combines rehearsal with dual memory.
5. Sparse-coding: catastrophic forgetting occurs because, while learning a new task (updating its parameters), the model modifies parameters that matter greatly to old tasks. If we deliberately make the model's parameters sparse during training (storing knowledge in a small number of neurons), we reduce the chance that recording new knowledge interferes with old knowledge. Sensitivity-Driven is a classic algorithm in this class.
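The regularization idea behind EWC (item 1) reduces to a quadratic penalty that anchors parameters important to the old task. A minimal sketch, assuming a diagonal Fisher-information estimate has already been computed after the old task (this is an illustration, not the implementation from the linked repository):

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC penalty: lam/2 * sum_i F_i * (theta_i - theta*_i)^2.
    `fisher` approximates how important each parameter was to the old task;
    drifting on high-Fisher dimensions is penalized most."""
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

# Toy check: same drift on every dimension, but the penalty is dominated
# by the dimension the old task cared about most.
old = np.zeros(3)
fisher = np.array([10.0, 1.0, 0.0])   # importance estimated after the old task
new = np.array([0.1, 0.1, 0.1])
penalty = ewc_penalty(new, old, fisher)  # 0.5 * (10*0.01 + 1*0.01 + 0) ≈ 0.055
```

During new-task training this penalty is simply added to the new task's loss, steering updates toward directions the old task is insensitive to.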
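The rehearsal idea (item 3) can be sketched as a small replay buffer that mixes stored old-task examples into each new-task batch. The class and method names below are illustrative, not taken from GeppNet; reservoir sampling is one common way to cap memory instead of storing all old data:

```python
import random

class ReplayBuffer:
    """Fixed-size memory of old-task examples, mixed into new-task batches."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling: keeps a uniform random sample of all
        # examples seen so far without ever exceeding `capacity`.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def mixed_batch(self, new_batch, k):
        # Mix k stored old-task examples into the current new-task batch.
        old = random.sample(self.data, min(k, len(self.data)))
        return list(new_batch) + old
```

A capped buffer trades the "keep all old data" cost noted above for a bounded memory footprint, at the price of replaying only a sample of the old tasks.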

Related papers on incremental learning

  • CVPR2017_iCaRL- Incremental Classifier and Representation Learning
  • CVPR2019_Learning a Unified Classifier Incrementally via Rebalancing
  • ICCV2019_Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation
  • 2
    点赞
  • 27
    收藏
    觉得还不错? 一键收藏
  • 0
    评论
Continual learning through synaptic intelligence is a form of machine learning that mimics the way the human brain learns and adapts to new information. It involves artificial neural networks that can learn from new data without forgetting previously learned knowledge.

In traditional machine learning, a model is trained on a fixed dataset; once training is complete, the model is deployed and cannot be updated or improved without retraining. This approach is unsuitable for applications where new data is constantly generated or where the model must adapt to changing conditions.

Continual learning through synaptic intelligence addresses this limitation by letting models learn incrementally from new data while retaining previously learned knowledge. This is achieved through dynamic synapses that adapt in response to new input. The model is trained on a small initial dataset, and as new data becomes available, it updates its synapses to incorporate that information. Because the synapses are flexible and adaptive, the model can learn new concepts and patterns without overwriting prior knowledge.

A key benefit of this approach is that it can improve a model's accuracy and robustness over time: by continually refining the model on new data, it can track changes in the environment or in user behavior, leading to better performance and more accurate predictions. Overall, continual learning through synaptic intelligence is an exciting area of research with the potential to let models learn and adapt in a more human-like way.
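The mechanism described above can be sketched following the core idea of Synaptic Intelligence: accumulate each parameter's contribution to loss reduction along the training trajectory, then use that as an importance weight when regularizing future tasks. The class name, `xi` damping term, and simplified update below are illustrative assumptions, not a faithful reproduction of the original algorithm:

```python
import numpy as np

class SIImportance:
    """Per-parameter importance accumulated online during training."""
    def __init__(self, n_params, xi=0.1):
        self.w = np.zeros(n_params)      # running path integral for the task
        self.omega = np.zeros(n_params)  # consolidated importance across tasks
        self.xi = xi                     # damping term to avoid division by zero

    def step(self, grad, delta):
        # After each optimizer update: grad is dL/dtheta before the step,
        # delta the parameter change. A step that reduced the loss
        # (grad and delta of opposite sign) increases that parameter's credit.
        self.w += -grad * delta

    def consolidate(self, total_delta):
        # At a task boundary, normalize the path integral by the total
        # squared displacement to obtain an importance weight, then reset.
        self.omega += self.w / (total_delta ** 2 + self.xi)
        self.w[:] = 0.0
```

The resulting `omega` plays the same role as the Fisher term in EWC: parameters with large accumulated importance are anchored by a quadratic penalty when the next task is learned.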
