A ConvNet for the 2020s
Abstract
The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation.
It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks.
However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than to the inherent inductive biases of convolutions.
In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt.
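To make the "modernized ResNet" concrete, the sketch below shows the basic residual block that results from this exploration: a large-kernel depthwise convolution followed by LayerNorm and an inverted-bottleneck MLP with GELU. This is a minimal PyTorch illustration of the published ConvNeXt block design; details such as LayerScale and stochastic depth from the full model are omitted here for brevity.

```python
import torch
import torch.nn as nn


class ConvNeXtBlock(nn.Module):
    """Minimal sketch of a ConvNeXt residual block (LayerScale/DropPath omitted)."""

    def __init__(self, dim: int):
        super().__init__()
        # 7x7 depthwise conv: groups=dim makes each channel use its own filter
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        # LayerNorm applied over the channel dimension (channels-last layout)
        self.norm = nn.LayerNorm(dim)
        # Inverted bottleneck: expand 4x, GELU, then project back
        self.pwconv1 = nn.Linear(dim, 4 * dim)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)   # NCHW -> NHWC so Linear/LayerNorm act on channels
        x = self.norm(x)
        x = self.pwconv2(self.act(self.pwconv1(x)))
        x = x.permute(0, 3, 1, 2)   # NHWC -> NCHW
        return shortcut + x


block = ConvNeXtBlock(dim=96)
out = block(torch.randn(2, 96, 14, 14))  # spatial size and channel count are preserved
```

The large depthwise kernel plays a role analogous to self-attention's spatial mixing in a Transformer block, while the two pointwise (Linear) layers mirror its MLP; this correspondence is what the paper's step-by-step modernization uncovers.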