Paper Reading [CVPR-2022] A ConvNet for the 2020s


Abstract

The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation.


It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks.


However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than to the inherent inductive biases of convolutions.


In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt.

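The modernization steps described above converge on a simple block design. Below is a minimal NumPy sketch, assuming the block layout the ConvNeXt paper arrives at: a large 7x7 depthwise convolution for spatial mixing, a single LayerNorm, an inverted-bottleneck MLP with 4x expansion and GELU, and a residual connection. The helper names and shapes here are illustrative, not the authors' code (which omits details such as layer scale and stochastic depth).

```python
import numpy as np

def depthwise_conv(x, w):
    """Per-channel 2D convolution with 'same' zero padding.
    x: (C, H, W) input; w: (C, k, k), one filter per channel."""
    C, H, W = x.shape
    k = w.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * w[c])
    return out

def layer_norm(x, eps=1e-6):
    """Normalize each spatial position across the channel axis."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def convnext_block(x, w_dw, w1, w2):
    """x: (C, H, W); w_dw: (C, 7, 7); w1: (4C, C); w2: (C, 4C)."""
    y = depthwise_conv(x, w_dw)    # spatial mixing with a large 7x7 kernel
    y = layer_norm(y)              # a single normalization per block
    C, H, W = y.shape
    y = y.reshape(C, H * W)        # a 1x1 conv is just a matmul over channels
    y = gelu(w1 @ y)               # expand to 4C channels (inverted bottleneck)
    y = (w2 @ y).reshape(C, H, W)  # project back down to C channels
    return x + y                   # residual connection

rng = np.random.default_rng(0)
C, H, W = 8, 5, 5
x = rng.standard_normal((C, H, W))
out = convnext_block(
    x,
    rng.standard_normal((C, 7, 7)) * 0.1,
    rng.standard_normal((4 * C, C)) * 0.1,
    rng.standard_normal((C, 4 * C)) * 0.1,
)
print(out.shape)  # (8, 5, 5): same shape as the input, as the residual requires
```

Note how little Transformer-specific machinery remains: the depthwise convolution plays the role of spatial token mixing, while the two pointwise projections mirror a Transformer's MLP block, which is exactly the kind of correspondence the paper exploits.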
