Classic Papers in Computer Vision (CV)

This post is mainly meant to help students who have started reading the literature but have no idea where to begin. All of the papers listed here are ones I selected and read myself, offered as a reference; there are bound to be omissions, so please bear with me.

1. Where to Download Papers
1.1 https://arxiv.org/list/cs.CV/recent — arXiv hosts preprints posted before formal publication, so nearly all of the latest work shows up there. Almost every paper can be found by searching its title (see the small search sketch after this list).
1.2 Google Scholar mirror http://scholar.hedasudi.com — a backup option.
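As a convenience, here is a minimal sketch of searching arXiv by title through its public export API. It assumes network access and the third-party `requests` package; the helper name `search_arxiv_by_title` and the example title are only illustrative, not part of any official tooling.

```python
# Minimal sketch: look up a paper on arXiv by title via the public export API.
# Assumptions: network access and the `requests` package; the example title
# and `max_results` value are placeholders.
import xml.etree.ElementTree as ET

import requests

ATOM = "{http://www.w3.org/2005/Atom}"


def search_arxiv_by_title(title: str, max_results: int = 3) -> None:
    """Print matching titles and their PDF links from the arXiv Atom feed."""
    resp = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": f'ti:"{title}"', "max_results": max_results},
        timeout=10,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    for entry in root.findall(f"{ATOM}entry"):
        found_title = " ".join(entry.find(f"{ATOM}title").text.split())
        pdf_link = next(
            (link.get("href") for link in entry.findall(f"{ATOM}link")
             if link.get("title") == "pdf"),
            None,
        )
        print(found_title, "->", pdf_link)


if __name__ == "__main__":
    search_arxiv_by_title("Deep Residual Learning for Image Recognition")
```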
2. Paper List
It is best to read the papers in order; try not to jump around too much.

| No. | Reading Method | Title | Model | Year |
| --- | --- | --- | --- | --- |
| 1 | Intensive Reading | Deep Learning | / | 2015 |
| 2 | Intensive Reading | ImageNet Classification with Deep Convolutional Neural Networks | AlexNet | 2012 |
| 3 | Intensive Reading | Very Deep Convolutional Networks for Large-Scale Image Recognition | VGG | 2014 |
| 4 | Intensive Reading | Deep Residual Learning for Image Recognition | ResNet | 2016 |
| 5 | Intensive Reading | Going Deeper with Convolutions | GoogLeNet | 2014 |
| 6 | Selective Reading | Rethinking the Inception Architecture for Computer Vision | Inception V3 | 2016 |
| 7 | Skimming | Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning | Inception V4 | 2017 |
| 8 | Intensive Reading | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | Inception V2 (BN) | 2015 |
| 9 | Intensive Reading | Aggregated Residual Transformations for Deep Neural Networks | ResNeXt | 2016 |
| 10 | Intensive Reading | Squeeze-and-Excitation Networks | SENet | 2017 |
| 11 | Intensive Reading | Densely Connected Convolutional Networks | DenseNet | 2017 |
| 12 | Intensive Reading | Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation | R-CNN | 2014 |
| 13 | Intensive Reading | Fast R-CNN | Fast R-CNN | 2015 |
| 14 | Skimming | Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks | Faster R-CNN | 2016 |
| 15 | Intensive Reading | You Only Look Once: Unified, Real-Time Object Detection | YOLO V1 | 2015 |
| 16 | Intensive Reading | YOLO9000: Better, Faster, Stronger | YOLO V2 | 2016 |
| 17 | Intensive Reading | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | MobileNet V1 | 2017 |
| 18 | Selective Reading | MobileNetV2: Inverted Residuals and Linear Bottlenecks | MobileNet V2 | 2018 |
| 19 | Skimming | Searching for MobileNetV3 | MobileNet V3 | 2019 |
| 20 | Intensive Reading | ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices | ShuffleNet V1 | 2017 |
| 21 | Intensive Reading | ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design | ShuffleNet V2 | 2018 |
| 22 | Intensive Reading | EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | EfficientNet V1 | 2019 |
| 23 | Intensive Reading | EfficientNetV2: Smaller Models and Faster Training | EfficientNet V2 | 2021 |
| 24 | Selective Reading | Efficient Estimation of Word Representations in Vector Space | Word2Vec | 2013 |
| 25 | Skimming | Neural Architecture Search with Reinforcement Learning | NAS | 2017 |
| 26 | Intensive Reading | Learning Transferable Architectures for Scalable Image Recognition | NASNet | 2018 |
| 27 | Intensive Reading | MixUp: Beyond Empirical Risk Minimization | MixUp | 2018 |
| 28 | Intensive Reading | Fully Convolutional Networks for Semantic Segmentation | FCN | 2015 |
| 29 | Intensive Reading | U-Net: Convolutional Networks for Biomedical Image Segmentation | U-Net | 2015 |
| 30 | Selective Reading | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | SegNet | 2016 |
| 31 | Intensive Reading | Attention Is All You Need | Transformer | 2017 |
| 32 | Intensive Reading | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | ViT | 2020 |
| 33 | Skimming | Generative Adversarial Networks | GAN | 2014 |
| 34 | Selective Reading | Rethinking Atrous Convolution for Semantic Image Segmentation | DeepLab V3 | 2017 |
| 35 | Intensive Reading | Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation | DeepLab V3+ | 2018 |
| 36 | Intensive Reading | Pyramid Scene Parsing Network | PSPNet | 2017 |
| 37 | Intensive Reading | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | Swin Transformer | 2021 |
| 38 | Selective Reading | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | DCGAN | 2016 |
| 39 | Intensive Reading | Bottleneck Transformers for Visual Recognition | BoTNet | 2021 |
| 40 | Intensive Reading | MLP-Mixer: An All-MLP Architecture for Vision | MLP-Mixer | 2021 |
| 41 | Selective Reading | Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet | / | 2021 |
| 42 | Selective Reading | Distilling the Knowledge in a Neural Network | Knowledge Distillation | 2015 |
| 43 | Intensive Reading | Training Data-Efficient Image Transformers & Distillation Through Attention | DeiT | 2021 |
| 44 | Intensive Reading | CvT: Introducing Convolutions to Vision Transformers | CvT | 2021 |

3. Summarizing What You Read
3.1 After you finish a paper, remember to summarize it. Keep a literature-reading record table with the header below, and try to capture the essence of the paper in just a few short sentences (a minimal code sketch of such a log follows the header).

| No. | Reading Method | Time | Model | Title | Authors | Year | Journal | Key Words | Abstract | Approach/Technique | Method Overview | Conclusion | Gap | My Comments | Data and Code | Mind Mapping |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |

3.2 Mind Maps
I use the free version of XMind. Not being able to insert images and formulas is a bit inconvenient, so consider buying the paid version if you can afford it.
4. Code Implementation
The source code for a paper is usually open source, and the authors typically provide a GitHub link in the paper itself. However, sometimes only a PyTorch or only a TensorFlow version exists, which can be hard on readers who happen to be unfamiliar with the other framework. In that case, look the paper up on Papers with Code, which usually collects implementations across multiple frameworks. The site https://paperswithcode.com/sota also shows accuracy leaderboards, which helps us keep up with the state of the art. A short sketch of trying out a reference implementation follows.
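Below is a minimal sketch of sanity-checking a reference implementation once you have found one. It assumes PyTorch and torchvision (0.13 or newer, for the weights API) are installed, and uses ResNet-50 (No. 4 in the list above) purely as a stand-in for whichever model you are studying.

```python
# Minimal sketch: load a pretrained classic model and run a dummy forward pass.
# Assumes torchvision >= 0.13; ResNet-50 is only an example stand-in.
import torch
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT        # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()  # switch to inference mode

preprocess = weights.transforms()                # matching preprocessing pipeline

dummy = torch.rand(3, 224, 224)                  # fake RGB image in [0, 1]
batch = preprocess(dummy).unsqueeze(0)           # resize/normalize, add batch dim
with torch.no_grad():
    logits = model(batch)

print(logits.shape)  # torch.Size([1, 1000]) -- ImageNet class scores
```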