Meta Learning: Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness (Part 2)

This post looks at the forgetting problem in meta learning under non-stationary and imbalanced task distributions. The paper proposes a memory management mechanism, M2D3, which jointly considers task difficulty and domain distribution when deciding which tasks to store and which to evict. It also introduces an adaptive task sampling method that reduces training variance and accelerates learning. The experiments demonstrate the effectiveness of these methods on meta-learning models such as ANIL and Prototypical Network, showing improved generalization over conventional continual learning and incremental learning methods.

The previous post covered the background on meta learning and what this conference paper sets out to do, ending with the notion of online domain changes; the authors' work here is certainly meaningful. For top-conference papers, what matters most to me right now is to first absorb the ideas and see whether I can put them to use, and only then reproduce and adapt the code.

In this section, we design the memory management mechanism for determining which task to be stored in the memory and which task to be moved out. The mechanism, named Memory Management with Domain Distribution and Difficulty Awareness (M2D3), jointly considers the difficulty and distribution of few-shot tasks in our setting. M2D3 first estimates the probability of the current task Tt to be moved into the memory. The model will then determine the task to be moved out in the event that a new task move-in happens. To improve efficiency, we utilize the obtained latent domain information associated with each task (as described in the previous section) to first estimate this move-out probability at cluster-level before sampling a single task, as in Figure 3.

The figure illustrates the memory management process. The authors design a memory management mechanism to determine which task to store in memory and which to move out. Each colored circle represents a cluster in the buffer, and each dot represents a task. The mechanism, named Memory Management with Domain Distribution and Difficulty Awareness (M2D3), jointly considers the difficulty and domain distribution of few-shot tasks in this setting. M2D3 first estimates the probability that the current task Tt should be moved into memory. When a new task is moved in, the model then determines which task to move out. To improve efficiency, the latent domain information associated with each task (described in the previous section) is used to estimate this move-out probability at the cluster level before sampling a single task, as shown in the figure.

The authors define the notation involved as follows: each task Tt in memory is associated with a latent domain label Lt, and all tasks with the same latent domain label form a cluster. Mi denotes the cluster consisting of all tasks in memory M with latent domain label i, ni = |Mi| denotes the number of tasks in Mi, n = |M| denotes the total number of tasks in memory, and Ii denotes the importance score of cluster Mi.
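To make the notation concrete, here is a minimal sketch of such a clustered memory buffer in Python. The class name, the move-in rule, and the importance formula are my own placeholders for illustration; the paper derives the actual move-in probability and importance score Ii from task difficulty and domain distribution, which I do not reproduce here.

```python
import random
from collections import defaultdict

class ClusteredMemory:
    """Sketch of an M2D3-style memory buffer (hypothetical implementation).

    Tasks are grouped into clusters M_i by latent domain label. When a new
    task is admitted and the buffer is full, eviction happens in two stages:
    a cluster is chosen first (less important clusters are more likely to be
    picked), then a single task is sampled from that cluster.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.clusters = defaultdict(list)  # latent label i -> tasks in M_i

    def __len__(self):
        return sum(len(c) for c in self.clusters.values())  # n = |M|

    def importance(self, label):
        # Placeholder importance score I_i: here, rarer domains count as
        # more important. The paper's score also uses task difficulty.
        return 1.0 / (1 + len(self.clusters[label]))

    def maybe_add(self, task, label, move_in_prob):
        """Admit task T_t with the given move-in probability."""
        if random.random() > move_in_prob:
            return
        if len(self) >= self.capacity:
            self._evict()
        self.clusters[label].append(task)

    def _evict(self):
        # Cluster-level move-out: lower importance -> higher eviction odds,
        # then a uniform draw of one task inside the chosen cluster.
        labels = list(self.clusters)
        weights = [1.0 / (1e-8 + self.importance(l)) for l in labels]
        victim = random.choices(labels, weights=weights, k=1)[0]
        self.clusters[victim].pop(random.randrange(len(self.clusters[victim])))
        if not self.clusters[victim]:
            del self.clusters[victim]
```

The two-stage draw is what makes the scheme efficient: the buffer only scores a handful of clusters instead of every stored task.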

In my understanding, the memory management mechanism is undoubtedly one of the cores of this paper. It is precisely this mechanism and its internal operations that make a real step forward in meta learning possible, and this paper is, as far as I know, the first to propose it.

Next comes "Adaptive Memory Task Sampling for Training".

During meta training, a mini-batch of tasks are sampled from the memory and are jointly trained with current tasks to mitigate catastrophic forgetting. Direct uniform sampling tasks from memory incurs high variance, and results in unstable training [32, 9]. On the other hand, our intuition for non-uniform task sampling mechanism is that the tasks are not equally important for retaining the knowledge from previous domains. The tasks that carry more information are more beneficial for the model to remember previous domains, and should be sampled more frequently. To achieve this goal, we propose an efficient adaptive task sampling scheme in memory that accelerates training and reduces gradient estimation variance. As shown in Figure 4, the sampling probability of MiniImagenet and Aircraft are adjusted and increased based on the scheme, suggesting the importance of these domains is higher than that of Omniglot for retaining knowledge.
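The idea above, replaying informative tasks more often than uniform sampling would, can be sketched as importance-weighted sampling. Note this is only an illustration under my own assumptions: the weight definition and the temperature knob are mine, not the paper's gradient-based scheme.

```python
import random

def adaptive_sample(tasks, importances, batch_size, temperature=1.0):
    """Importance-weighted task sampling from a memory buffer (sketch).

    Each stored task is drawn with probability proportional to an assumed
    importance weight, so tasks that better preserve knowledge of earlier
    domains are replayed more frequently than under uniform sampling.
    """
    # Temperature sharpens (<1) or flattens (>1) the sampling distribution.
    weights = [w ** (1.0 / temperature) for w in importances]
    return random.choices(tasks, weights=weights, k=batch_size)
```

With a very large temperature the weights flatten out and the scheme degenerates to the uniform baseline shown on the left of Figure 4; concentrating weight on high-importance domains (MiniImagenet, Aircraft in the figure) is what the adaptive variant on the right does.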

Figure 4: A simple example of uniform task sampling and our adaptive memory task sampling method for sampling tasks from memory buffer during meta training.