Learning Embedding Adaptation for Few-Shot Learning

Abstract

  • Learn an instance embedding function from seen classes, then apply it to instances from unseen classes with limited labels.
  • Existing methods usually learn a discriminative instance embedding model from the SEEN categories and apply that model to visual data in the UNSEEN categories.
  • Non-parametric classifiers avoid learning complicated recognition models from a small number of examples.
  • The most useful features for discerning “cat” versus “tiger” could be irrelevant and noisy for the task of discerning “cat” versus “dog”.
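The non-parametric route mentioned above can be sketched as a nearest-centroid rule in the embedding space (a ProtoNet-style classifier; the function names and toy embeddings below are hypothetical, not the paper's code):

```python
import math

def centroid(embeddings):
    """Element-wise mean of a class's support embeddings (its prototype)."""
    n, d = len(embeddings), len(embeddings[0])
    return [sum(e[i] for e in embeddings) / n for i in range(d)]

def classify(query, support_by_class):
    """Non-parametric nearest-centroid rule: assign the query to the class
    whose prototype is closest in Euclidean distance. No per-task weights
    are learned, so a handful of labeled examples suffices."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    prototypes = {c: centroid(es) for c, es in support_by_class.items()}
    return min(prototypes, key=lambda c: dist(query, prototypes[c]))
```

Since all the work is done by the embedding function, the quality of the embedding space decides whether this simple rule succeeds.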

Introduction

  • What is lacking in current approaches to few-shot learning is an adaptation strategy that tailors the visual knowledge extracted from the SEEN classes to the UNSEEN ones in a target task. In other words, we desire separate embedding spaces, each customized so that the visual features are most discriminative for a given task.
  • The key assumption is that the embeddings capture all the discriminative representations of the data that are needed, so that simple classifiers suffice.
  • We use the Transformer architecture to implement T. In particular, we employ a self-attention mechanism to improve each instance embedding by taking its contextual embeddings into account.
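A minimal sketch of that self-attention step over a set of instance embeddings, assuming identity query/key/value projections (the paper's T additionally uses learned projections, multiple heads, and a residual connection; all names here are illustrative):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(embeddings):
    """Adapt each instance embedding using every embedding in the set as
    context: each output is an attention-weighted convex combination of
    the whole set, so instances co-adapt to the task at hand."""
    d = len(embeddings[0])
    scale = math.sqrt(d)
    adapted = []
    for q in embeddings:
        # Attention weights of this instance over the whole set.
        weights = softmax([dot(q, k) / scale for k in embeddings])
        # Adapted embedding = weighted sum of the set's embeddings.
        adapted.append([sum(w * v[i] for w, v in zip(weights, embeddings))
                        for i in range(d)])
    return adapted
```

Because every output attends over the same unordered set, relabeling or reordering the inputs permutes the outputs correspondingly, which is exactly the set-to-set property the notes below refer to.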

Experiments

Does it use pre-training??

  • Demonstrates the effectiveness of using a permutation-invariant set function instead of a sequence model; see the supplementary material for details.
  • The Transformer is a set-to-set transformation.
  • It customizes a task-specific embedding space via a self-attention architecture.
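The permutation-invariance point can be checked with two toy aggregators (both hypothetical stand-ins, not the models compared in the paper): a set-pooling function whose output ignores input order, and a decayed running average that, like a sequence model, does not.

```python
def set_pool(embeddings):
    """Permutation-invariant set function: element-wise mean over the set."""
    n, d = len(embeddings), len(embeddings[0])
    return [sum(e[i] for e in embeddings) / n for i in range(d)]

def sequence_pool(embeddings, decay=0.5):
    """Sequence-model stand-in: exponentially decayed running average,
    whose output depends on the order in which inputs arrive."""
    state = list(embeddings[0])
    for e in embeddings[1:]:
        state = [decay * s + (1 - decay) * x for s, x in zip(state, e)]
    return state

support = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
shuffled = [support[2], support[0], support[1]]

# The set function gives identical outputs under reordering...
assert set_pool(support) == set_pool(shuffled)
# ...while the sequence model does not.
assert sequence_pool(support) != sequence_pool(shuffled)
```

A support set has no meaningful order, so an order-sensitive aggregator bakes an arbitrary choice into the adapted embeddings, which is the motivation for preferring the set function.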