Continual Learning | Automatic Recall Machines: Internal Replay, Continual Learning and the Brain (arXiv, June 2020)

Author Information

Abstract

Replay-based methods: this paper presents one in which the auxiliary samples are generated on the fly (the starting point being to reduce memory overhead); neuroscience-inspired arguments are also brought in to strengthen the motivation.

Introduction

The introduction contrasts humans and neural networks on the ability to learn from sequential or non-stationary data, and surveys the replay family of methods. The goal of this work, Automatic Recall Machines, is to optimally exploit the implicit memory in the task model for not forgetting, by using its parameters for both inference and generation. (The core idea is to generate the replay from the current task model's own parameters: earlier generative-replay methods need a separate generative model that is hard to train, while directly storing samples for replay carries a large memory overhead.) For every batch, the most dissonant related samples are generated from the current real samples.
The paper provides a formal explanation of why training with the most dissonant related samples is optimal for not forgetting, building on the intuition previously used for buffer selection in《Online continual learning with maximal interfered retrieval》(NeurIPS 2019).
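
As a concrete illustration, below is a minimal sketch of what such a recall step could look like, assuming dissonance is measured as the KL divergence between the current model and a frozen snapshot of it from before the latest update (in the spirit of the MIR criterion). The function name `recall_dissonant_samples`, the SGD optimiser, and the step count are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def recall_dissonant_samples(model, prev_model, x_real, steps=10, lr=0.1):
    """Gradient ascent on the inputs so that the current model and a frozen
    pre-update snapshot disagree as much as possible (illustrative objective)."""
    x = x_real.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)  # optimises only the inputs, not the weights
    model.eval()
    prev_model.eval()
    for _ in range(steps):
        opt.zero_grad()
        log_p_curr = F.log_softmax(model(x), dim=1)
        p_prev = F.softmax(prev_model(x), dim=1)
        # KL(prev || curr) as a stand-in "dissonance" score; negated for ascent.
        dissonance = F.kl_div(log_p_curr, p_prev, reduction="batchmean")
        (-dissonance).backward()  # only x is stepped; stray param grads are cleared later
        opt.step()
    return x.detach()
```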

Method

The method section is very short, only about one page; a rough training-step sketch is given below.
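
To make that one page more tangible, here is a hypothetical training step that mixes the real batch with the recalled samples and distils the snapshot's predictions on them, so that replay needs no stored data or separate generator. It reuses `recall_dissonant_samples` from the sketch above; `replay_weight`, the distillation loss, and the per-batch snapshot refresh are my assumptions, not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def continual_step(model, prev_model, x_real, y_real, optimizer, replay_weight=1.0):
    # Recall dissonant samples from the current batch (sketch above).
    x_recall = recall_dissonant_samples(model, prev_model, x_real)
    model.train()
    optimizer.zero_grad()  # also clears gradients left over from the recall loop
    # Supervised loss on the real batch.
    loss = F.cross_entropy(model(x_real), y_real)
    # Distil the snapshot's predictions on the recalled samples.
    with torch.no_grad():
        soft_targets = F.softmax(prev_model(x_recall), dim=1)
    log_probs = F.log_softmax(model(x_recall), dim=1)
    loss = loss + replay_weight * F.kl_div(log_probs, soft_targets, reduction="batchmean")
    loss.backward()
    optimizer.step()
    prev_model.load_state_dict(model.state_dict())  # refresh the snapshot per batch
    return loss.item()
```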

Conclusion

conditional replay
Key points: the writing is mediocre; the idea is somewhat similar to《Dreaming to Distill: Data-free Knowledge Transfer via Deep Inversion》(a contrast sketch is given below). I skimmed the paper once with the three-pass approach. The most important insight is that training with the most dissonant related samples is optimal for not forgetting; the authors' replay then amounts to a procedure that can generate such samples from the current model alone.
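
For the DeepInversion comparison, the sketch below shows only the BatchNorm-statistic-matching core of that style of data-free synthesis (omitting its image priors and adaptive competition term): inputs are optimised from pure noise using a trained classifier alone, whereas ARM starts from the current real batch. Function name and loss weights are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def deep_inversion(model, labels, shape, steps=200, lr=0.05, bn_weight=1.0):
    """Synthesise inputs for `labels` by matching BN running statistics.
    Example: deep_inversion(net, torch.tensor([3, 7]), (3, 32, 32))."""
    stats, hooks = [], []
    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)  # biased, as in BN normalisation
            stats.append(F.mse_loss(mean, module.running_mean)
                         + F.mse_loss(var, module.running_var))
        return hook
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))
    x = torch.randn(len(labels), *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    model.eval()
    for _ in range(steps):
        stats.clear()
        opt.zero_grad()
        # Classification loss on target labels + BN feature-statistics regulariser.
        loss = F.cross_entropy(model(x), labels) + bn_weight * sum(stats)
        loss.backward()
        opt.step()
    for h in hooks:
        h.remove()
    return x.detach()
```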
