Latest Transformer Advances at Top Conferences | Papers from IJCAI, CVPR, AAAI 2023, and More

The Transformer is a state-of-the-art model in natural language processing. It relies on the attention mechanism to process input text of variable length and to learn linguistic patterns and regularities. In recent years, as computing power has continued to grow, Transformer models have been widely applied to tasks such as machine translation, text generation, and text classification. They have also been applied to code generation, where trained models produce high-quality code.
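For readers less familiar with the mechanism, the sketch below shows scaled dot-product self-attention, the core operation of the Transformer, in plain NumPy. It is a minimal illustration only; the function and variable names are our own and are not taken from any of the papers listed later in this post.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                          # weighted sum of values

# Toy example: a "sequence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)                             # (4, 8)
```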

As a powerful tool for boosting NLP, the Transformer has had an enormous impact on the field: today's popular large language models, such as GPT-3, are built on the Transformer architecture. In recent years, researchers have studied the attention mechanism extensively, proposing new variants such as adaptive attention and global attention, which offer fresh directions for improving Transformer models. At the same time, to make better use of computing resources, many model compression methods have been proposed, including pruning, quantization, and distillation. These methods help researchers get more out of existing Transformer models and improve both their performance and efficiency.
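As a concrete illustration of one of the compression techniques mentioned above, here is a minimal sketch of magnitude-based weight pruning in NumPy: entries below a magnitude threshold are zeroed out. The function name and the sparsity level are illustrative assumptions, not the method of any specific paper in the list below.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries, keeping (1 - sparsity) of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)  # cut-off below which weights are dropped
    mask = np.abs(weights) >= threshold                 # True where a weight survives
    return weights * mask, mask

W = np.random.default_rng(0).standard_normal((64, 64))
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print(f"kept {mask.mean():.0%} of weights")             # roughly 10% remain
```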

Recent progress shows that the Transformer has become one of the most important models in natural language processing, achieving state-of-the-art performance on many tasks.

A list of top-conference papers on Transformer models follows (due to space constraints, this post shows only part of the list; copy the link at the end of the post to go directly to the conference pages and browse all papers).

1.Singularformer: Learning to Decompose Self-Attention to Linearize the Complexity of Transformer

2.Towards Long-delayed Sparsity: Learning a Better Transformer through Reward Redistribution

3.HDFormer: High-order Directed Transformer for 3D Human Pose Estimation

4.CiT-Net: Convolutional Neural Networks Hand in Hand with Vision Transformers for Medical Image Segmentation

5.Learning Attention from Attention: Efficient Self-refinement Transformer for Face Super-resolution

6.FedET: A Communication-Efficient Federated Class-Incremental Learning Framework Based on Enhanced Transformer

7.Towards Incremental NER Data Augmentation via Syntactic-aware Insertion Transformer

8.Neighborhood Attention Transformer

9.EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention

10.RGB no more: Minimally-decoded JPEG Vision Transformers

11.BiFormer: Vision Transformer with Bi-Level Routing Attention

12.Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors

13.DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets

14.OneFormer: One Transformer to Rule Universal Image Segmentation

15.Graph Transformer GANs for Graph-Constrained House Generation

16.DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with Higher Quality

17.Vision Transformers are Parameter-Efficient Audio-Visual Learners

18.In-context Reinforcement Learning with Algorithm Distillation

19.Language Modelling with Pixels

20.A Time Series is Worth 64 Words: Long-term Forecasting with Transformers

21.Relational Attention: Generalizing Transformers for Graph-Structured Tasks

22.Encoding Recurrence into Transformers

23.Specformer: Spectral Graph Neural Networks Meet Transformers

24.MaskViT: Masked Visual Pre-Training for Video Prediction

25.Efficient Attention via Control Variates

26.What Do Self-Supervised Vision Transformers Learn?

27.Are More Layers Beneficial to Graph Transformers?

28.User Retention-oriented Recommendation with Decision Transformer

29.Tracing Knowledge Instead of Patterns: Stable Knowledge Tracing with Diagnostic Transformer

30.MetaTroll: Few-shot Detection of State-Sponsored Trolls with Transformer Adapters

31.Compact Transformer Tracker with Correlative Masked Modeling

32.Learning Progressive Modality-shared Transformers for Effective Visible-Infrared Person Re-identification

33.An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling

34.Vision Transformers Are Good Mask Auto-Labelers

35.Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

36.Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization

37.Lite-Mono: A Lightweight CNN and Transformer Architecture for Self-Supervised Monocular Depth Estimation

38.Burstormer: Burst Image Restoration and Enhancement Transformer

39.Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers

40.Recurrent Vision Transformers for Object Detection with Event Cameras

41.Q-DETR: An Efficient Low-Bit Quantized Detection Transformer

42.Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers

43.A Light Touch Approach to Teaching Transformers Multi-view Geometry

44.DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text Spotting

45.Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention

46.CompletionFormer: Depth Completion with Convolutions and Vision Transformers

47.Devil is in the Queries: Advancing Mask Transformers for Real-world Medical Image Segmentation and Out-of-Distribution Localization

48.Vision Transformer with Super Token Sampling

49.POTTER: Pooling Attention Transformer for Efficient Human Mesh Recovery

50.Supervised Masked Knowledge Distillation for Few-Shot Transformers

Click the link to go directly to the conference pages: https://www.aminer.cn/conf

————————————————————————————————

How to use ChatPaper?

To help more researchers acquire knowledge from the literature efficiently, AMiner has built ChatPaper on top of the GLM-130B large model. It helps researchers retrieve and read papers more quickly and keep up with the latest developments in their fields, making research work easier to manage.

ChatPaper is a conversational private knowledge base that combines retrieval, reading, and question answering. With it, AMiner hopes to use technology to help everyone acquire knowledge more efficiently.

ChatPaper: https://www.aminer.cn/chat/g
