Study notes on the multimodal Transformer survey "Multimodal Learning with Transformers: A Survey"

My goal: survey the current applications of Transformers in multimodal learning; these notes focus on that topic (not a full walkthrough of the paper).

Paper title: Multimodal Learning with Transformers: A Survey

Paper link: https://arxiv.org/abs/2206.06488

Authors: Peng Xu, Xiatian Zhu, and David A. Clifton

Abstract: the paper's main contributions include:

  • a background survey of multimodal learning, the Transformer ecosystem, and the era of multimodal big data;
  • a theoretical review of the Vanilla Transformer, the Vision Transformer, and multimodal Transformers from a geometrically topological perspective;
  • a review of multimodal Transformer applications through two important paradigms: multimodal pretraining and specific multimodal tasks;
  • a summary of the common challenges and designs shared by multimodal Transformer models and applications;
  • a discussion of open problems and potential research directions.

The paper focuses on multimodal learning with Transformers because of their inherent advantages and scalability: modeling different modalities and tasks requires few modality-specific architectural assumptions. Concretely, a Transformer can take one or more token sequences, together with the attributes of each sequence, which naturally supports MML (Multimodal Learning) without architectural modification.

Background highlights (the paper's introduction leans toward the history of the vision track)

Transformer theory

A general view: given input from any modality, before feeding the data into a Transformer the user only needs to perform two main steps: (1) tokenize the input, and (2) choose an embedding space to represent the tokens.

Taking images as an example:

Tokens can be chosen or designed at multiple granularities, coarse-grained vs. fine-grained. For example: use RoIs (obtained via an object detector) with CNN features as the tokens and token embeddings; use patches with a linear projection as the tokens and token embeddings; or use graph nodes (obtained via an object detector plus a graph generator) with GNN features as the tokens and token embeddings.
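The patch-plus-linear-projection option above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the image size, patch size, embedding dimension, and the random matrix standing in for a learned projection are all assumptions.

```python
import numpy as np

def patch_tokenize(image, patch_size, embed_dim, rng=None):
    """Split an (H, W, C) image into non-overlapping patches and
    project each flattened patch into an embedding space.
    The random projection stands in for a learned linear layer."""
    rng = np.random.default_rng(0) if rng is None else rng
    H, W, C = image.shape
    P = patch_size
    assert H % P == 0 and W % P == 0, "image dims must be divisible by patch size"
    # reshape to (H//P, P, W//P, P, C), regroup, then flatten each patch
    patches = (image.reshape(H // P, P, W // P, P, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, P * P * C))
    W_proj = rng.standard_normal((P * P * C, embed_dim)) * 0.02
    return patches @ W_proj  # (num_patches, embed_dim)

image = np.random.default_rng(1).standard_normal((32, 32, 3))
tokens = patch_tokenize(image, patch_size=8, embed_dim=64)
print(tokens.shape)  # (16, 64): 4x4 grid of patches, each embedded in 64 dims
```

The same two-step recipe (tokenize, then embed) applies to the RoI and graph-node variants; only the tokenizer and the embedding function change.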

Comparison of tokenization and token embedding methods for multimodal inputs

Multimodal Transformer fusion modes (see the figure in the paper):

  • early summation (token-wise, weighted)
  • early concatenation
  • hierarchical attention (multi-stream to one-stream)
  • hierarchical attention (one-stream to multi-stream)
  • cross-attention
  • cross-attention to concatenation
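Several of the fusion modes listed above can be sketched with plain numpy. This is an illustrative sketch under assumptions: the sequence lengths, the mixing weights, and the simplified single-head attention (no learned projections) are not the paper's exact formulations.

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention (no learned weights)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 16
za = rng.standard_normal((5, d))   # e.g. 5 visual token embeddings
zb = rng.standard_normal((7, d))   # e.g. 7 text token embeddings

# early summation: token-wise weighted sum (needs equal sequence lengths,
# so we truncate zb here purely for illustration)
alpha, beta = 0.6, 0.4
z_sum = alpha * za + beta * zb[:5]                # (5, d)

# early concatenation: one joint sequence, one attention stream over it
z_cat = np.concatenate([za, zb], axis=0)          # (12, d)
out_cat = attention(z_cat, z_cat, z_cat)          # (12, d)

# cross-attention: each stream queries the other's keys/values
out_a = attention(za, zb, zb)                     # (5, d): A attends to B
out_b = attention(zb, za, za)                     # (7, d): B attends to A

# cross-attention to concatenation: concatenate the cross-attended streams
out_x2c = np.concatenate([out_a, out_b], axis=0)  # (12, d)
print(z_sum.shape, out_cat.shape, out_x2c.shape)
```

The hierarchical variants (multi-stream to one-stream and back) compose these same primitives across layers, e.g. per-modality self-attention followed by a concatenated joint stream.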

Graphical representation of the fusion modes (figure in the paper)

Formulaic representation of the multimodal fusion modes (table in the paper)

Notes on the remaining parts of the paper to follow later.
