Preface
This article shares ten highly influential top papers from 2020. Some of them I have already read, others I haven't yet; I plan to work through them all and publish related blog posts afterward. The links are listed below for easy access. You will recognize familiar names such as EfficientDet, ResNeSt, and YOLOv4, all excellent networks that achieved SOTA results at the time.
Links
1、EfficientDet: Scalable and Efficient Object Detection — Tan et al
https://paperswithcode.com/paper/efficientdet-scalable-and-efficient-object
2、Fixing the train-test resolution discrepancy — Touvron et al
https://paperswithcode.com/paper/fixing-the-train-test-resolution-discrepancy-2
3、ResNeSt: Split-Attention Networks — Zhang et al
https://paperswithcode.com/paper/resnest-split-attention-networks
4、Big Transfer (BiT) — Kolesnikov et al
https://paperswithcode.com/paper/large-scale-learning-of-general-visual
5、Object-Contextual Representations for Semantic Segmentation — Yuan et al
https://paperswithcode.com/paper/object-contextual-representations-for
6、Self-training with Noisy Student improves ImageNet classification — Xie et al
https://paperswithcode.com/paper/self-training-with-noisy-student-improves
7、YOLOv4: Optimal Speed and Accuracy of Object Detection — Bochkovskiy et al
https://paperswithcode.com/paper/yolov4-optimal-speed-and-accuracy-of-object
8、An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale — Dosovitskiy et al
https://paperswithcode.com/paper/an-image-is-worth-16x16-words-transformers-1
9、Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer — Raffel et al
https://paperswithcode.com/paper/exploring-the-limits-of-transfer-learning
10、Hierarchical Multi-Scale Attention for Semantic Segmentation — Tao et al
https://paperswithcode.com/paper/hierarchical-multi-scale-attention-for