Incremental Learning: Paper Quick Reads

Reading time: 8:51-9:39

 

iCaRL: Incremental Classifier and Representation Learning


Abstract: A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.

Code: https://github.com/Khurramjaved96/incremental-learning

Platform: PyTorch

Learns in a class-incremental way: new classes can be added progressively, so many classes can be learned incrementally over a long period of time.
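At test time, iCaRL classifies with a nearest-mean-of-exemplars rule: each class keeps a small exemplar set, and a feature vector is assigned to the class whose (normalized) exemplar mean is closest. A minimal sketch, with the CNN feature extraction omitted and all names illustrative:

```python
import numpy as np

def nearest_mean_classify(feature, exemplar_sets):
    """Assign `feature` to the class whose exemplar mean is nearest.

    exemplar_sets: dict mapping class id -> array of feature vectors
    (in iCaRL these come from a shared, incrementally trained CNN).
    """
    feature = feature / np.linalg.norm(feature)
    best_cls, best_dist = None, np.inf
    for cls, exemplars in exemplar_sets.items():
        mu = exemplars.mean(axis=0)
        mu = mu / np.linalg.norm(mu)          # normalized class prototype
        dist = np.linalg.norm(feature - mu)
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls
```

Because prototypes are recomputed from the stored exemplars with the current network, the classifier automatically tracks the evolving representation.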

 

From N to N+1: Multiclass Transfer Incremental Learning

Abstract: Since the seminal work of Thrun [16], the learning to learn paradigm has been defined as the ability of an agent to improve its performance at each task with experience, with the number of tasks. Within the object categorization domain, the visual learning community has actively declined this paradigm in the transfer learning setting. Almost all proposed methods focus on category detection problems, addressing how to learn a new target class from few samples by leveraging over the known source. But if one thinks of learning over multiple tasks, there is a need for multiclass transfer learning algorithms able to exploit previous source knowledge when learning a new class, while at the same time optimizing their overall performance. This is an open challenge for existing transfer learning algorithms. The contribution of this paper is a discriminative method that addresses this issue, based on a Least-Squares Support Vector Machine formulation. Our approach is designed to balance between transferring to the new class and preserving what has already been learned on the source models. Extensive experiments on subsets of publicly available datasets prove the effectiveness of our approach.

Uses a Least-Squares Support Vector Machine formulation to exploit previous source knowledge when learning a new class, while at the same time optimizing overall performance.
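The appeal of the LS-SVM formulation is that the usual SVM quadratic program collapses to a single linear system. A minimal sketch of the function-estimation form with a linear kernel (variable names are illustrative; the paper's transfer-learning terms are omitted):

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Solve the LS-SVM linear system
       [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y]."""
    n = X.shape[0]
    K = X @ X.T                              # linear kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma        # ridge-like regularization
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                   # alpha, b

def lssvm_predict(X_train, alpha, b, X_test):
    """f(x) = sum_i alpha_i k(x, x_i) + b."""
    return X_test @ X_train.T @ alpha + b
```

The closed-form solve is what makes the incremental "N to N+1" update cheap: adding a class only requires re-solving a linear system rather than a full QP.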

 

Recent Advances in Zero-shot Recognition


Abstract: With the recent renaissance of deep convolution neural networks, encouraging breakthroughs have been achieved on the supervised recognition tasks, where each class has sufficient training data and fully annotated training data. However, to scale the recognition to a large number of classes with few or no training samples for each class remains an unsolved problem. One approach to scaling up the recognition is to develop models capable of recognizing unseen categories without any training instances, or zero-shot recognition/learning. This article provides a comprehensive review of existing zero-shot recognition techniques covering various aspects ranging from representations of models, and from datasets and evaluation settings. We also overview related recognition tasks including one-shot and open set recognition which can be used as natural extensions of zero-shot recognition when a limited number of class samples become available or when zero-shot recognition is implemented in a real-world setting. Importantly, we highlight the limitations of existing approaches and point out future research directions in this exciting new research area.

Zero-shot recognition techniques: recognizing unseen categories without any training instances.
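A common family of zero-shot methods covered by such surveys scores image features against class attribute vectors through a learned compatibility F(x, y) = xᵀ W a_y: an unseen class can be predicted as long as its attribute description a_y is known. A toy sketch (W would normally be learned on seen classes; all names are illustrative):

```python
import numpy as np

def zero_shot_predict(img_feat, W, class_attributes):
    """Predict the class whose attribute vector is most compatible
    with the image feature under the bilinear score x^T W a_y.

    class_attributes: dict class name -> attribute vector
    (e.g. "has stripes", "lives in snow"), available even for
    classes with zero training images.
    """
    scores = {c: float(img_feat @ W @ attr)
              for c, attr in class_attributes.items()}
    return max(scores, key=scores.get)
```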

 

Incremental Learning of Object Detectors without Catastrophic Forgetting

Abstract: Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes, in the absence of the initial training data. They suffer from "catastrophic forgetting": an abrupt degradation of performance on the original set of classes, when the training objective is adapted to the new classes. We present a method to address this issue, and learn object detectors incrementally, when neither the original training data nor annotations for the original classes in the new training set are available. The core of our proposed solution is a loss function to balance the interplay between predictions on the new classes and a new distillation loss which minimizes the discrepancy between responses for old classes from the original and the updated networks. This incremental learning can be performed multiple times, for a new set of classes in each step, with a moderate drop in performance compared to the baseline network trained on the ensemble of data. We present object detection results on the PASCAL VOC 2007 and COCO datasets, along with a detailed empirical analysis of the approach.

When the original model is progressively adapted to detect new classes by training on them, it exhibits catastrophic forgetting: performance on the original classes degrades abruptly. The balancing loss proposed here lets incremental learning be performed multiple times, with only a moderate drop in performance compared to a baseline network trained on the full ensemble of data.
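The anti-forgetting ingredient is the distillation term: the frozen original network's responses on old classes serve as soft targets for the updated network. A minimal logits-only sketch (the paper applies this inside a detection pipeline; the temperature parameter is a common distillation convention, not taken from this abstract):

```python
import numpy as np

def distillation_loss(old_logits, new_logits, T=2.0):
    """Penalize divergence between the frozen original network's
    old-class responses and the updated network's, via the
    cross-entropy of temperature-softened distributions."""
    def softened(z):
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p_old = softened(old_logits)                # soft targets (frozen net)
    p_new = softened(new_logits)                # current predictions
    return float(-(p_old * np.log(p_new + 1e-12)).sum(axis=-1).mean())
```

The loss is minimized when the updated network reproduces the original responses exactly, which is what keeps old-class performance from collapsing while the new-class classification loss pulls the weights elsewhere.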

Learning multiple visual domains with residual adapters

Abstract: There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.

There is growing interest in learning data representations that work well across many kinds of problems and data. This paper handles very different types of images, from dog breeds to stop signs and digits, by developing a tunable deep network architecture that, through adapter residual modules, can be steered on the fly to different visual domains. It also introduces the Visual Decathlon Challenge, a benchmark that evaluates whether a representation can simultaneously capture ten very different visual domains and measures how uniformly well it recognizes across them.
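The adapter idea in one line: a frozen, shared layer's output is corrected by a small per-domain residual branch, so only the adapter weights are learned for each new domain. A dense-layer sketch (the paper uses 1x1 convolutions inside a ResNet; names are illustrative):

```python
import numpy as np

def adapter_forward(x, W_shared, W_adapter):
    """Shared layer plus a domain-specific residual adapter.

    W_shared is frozen and common to all domains; W_adapter is the
    only per-domain parameter, keeping parameter sharing high.
    """
    h = x @ W_shared            # domain-agnostic backbone layer
    return h + h @ W_adapter    # residual correction for this domain
```

With `W_adapter` at zero the module is exactly the shared layer, so a new domain starts from the shared representation and learns only a cheap correction.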

 

Summary: for progressively adding new classes during object recognition, deep learning methods are still not quite practical; are the traditional approaches more reliable?
