Paper download links for common neural network models: AlexNet, VGGNet, GoogLeNet, ResNet, Inception-v3, Inception-v4, Xception, etc.

LeNet 1998 (LeNet-5), Gradient-Based Learning Applied to Document Recognition. Authors: Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
AlexNet 2012 http://pan.baidu.com/s/1NpEG2 (ImageNet Classification with Deep Convolutional Neural Networks). Authors: Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton

VGGNet 2014 https://arxiv.org/pdf/1409.1556.pdf 6.8% test error. Authors: Karen Simonyan, Andrew Zisserman

GoogLeNet (Inception v1) 2014 http://arxiv.org/pdf/1409.4842v1.pdf Going Deeper with Convolutions, 6.67% test error
Authors: Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich

ResNet 2015 https://arxiv.org/pdf/1512.03385.pdf 3.57% test error. Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
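ResNet's core idea is the shortcut connection: a block learns a residual mapping and adds its input back, so the output is relu(f(x) + x). A minimal PyTorch sketch of a basic residual block (the channel count and toy input are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Basic residual block: out = relu(f(x) + x), f = conv-bn-relu-conv-bn."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = x                          # identity shortcut
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + residual)         # add shortcut, then activate

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```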

Inception v2/v3 2015 http://arxiv.org/abs/1512.00567 (Rethinking the Inception Architecture for Computer Vision), 3.6% test error
Authors: Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens

Batch Normalization (BN layer) 2015 https://arxiv.org/pdf/1502.03167.pdf 4.8% test error. Authors: Sergey Ioffe, Christian Szegedy
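The BN paper's transform normalizes each feature over the mini-batch, then applies a learned scale gamma and shift beta. A minimal NumPy sketch of the training-time forward pass (toy shapes; eps is the usual small stabilizer):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch norm over axis 0 (the mini-batch)."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # learned scale and shift

x = np.random.randn(8, 4)                  # batch of 8 samples, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 mean, ~1 std
```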

Inception-v4, Inception-ResNet 2016 http://arxiv.org/abs/1602.07261 3.08% test error
Authors: Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi

Xception 2017 https://arxiv.org/pdf/1610.02357.pdf 94.5% top-5 accuracy. Author: François Chollet
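Xception's building block (also the core of MobileNet below) is the depthwise separable convolution: a per-channel spatial convolution followed by a 1x1 pointwise convolution. A minimal PyTorch sketch (channel sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable conv: per-channel 3x3 conv, then 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # groups=in_ch -> one spatial filter per input channel (depthwise)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        # 1x1 conv mixes information across channels (pointwise)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 28, 28)
print(SeparableConv2d(32, 64)(x).shape)  # torch.Size([1, 64, 28, 28])
```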

DenseNet 2017 (CVPR 2017 best paper) https://arxiv.org/abs/1608.06993 5.29% test error
MobileNet 2017 https://arxiv.org/pdf/1704.04861.pdf top-1 70.6%
NASNet 2018 https://arxiv.org/abs/1707.07012 top-1 82.7%, top-5 96.2%
SqueezeNet 2016 http://arxiv.org/abs/1602.07360 top-1 60.4%, top-5 82.5%
FCN 2016 https://arxiv.org/abs/1605.06211 end-to-end image semantic segmentation
DCN 2017 https://arxiv.org/abs/1703.06211 deformable convolution
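Deformable convolution (DCN above) samples the input at learned per-tap offsets rather than a fixed grid. A minimal usage sketch with torchvision's deform_conv2d (shapes are illustrative; zero offsets reduce it to an ordinary convolution, and in practice the offsets are predicted by a separate conv layer):

```python
import torch
from torchvision.ops import deform_conv2d

x = torch.randn(1, 8, 16, 16)       # (N, C_in, H, W)
weight = torch.randn(4, 8, 3, 3)    # (C_out, C_in, kH, kW)
# One learned (dy, dx) offset per kernel tap per output position: 2*kH*kW channels.
offset = torch.zeros(1, 2 * 3 * 3, 16, 16)  # zero offsets -> ordinary conv
out = deform_conv2d(x, offset, weight, padding=1)
print(out.shape)  # torch.Size([1, 4, 16, 16])
```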

Object Detection
R-CNN 2013 http://arxiv.org/abs/1311.2524
SPPNet 2014 https://arxiv.org/abs/1406.4729v2
Fast R-CNN 2015 https://arxiv.org/abs/1504.08083
Faster R-CNN 2015 https://arxiv.org/abs/1506.01497
YOLO 2015 https://arxiv.org/abs/1506.02640
SSD 2015 https://arxiv.org/abs/1512.02325
YOLO9000 2016 https://arxiv.org/abs/1612.08242
R-FCN 2016 https://arxiv.org/abs/1605.06409
Deformable ConvNets 2017 https://arxiv.org/abs/1703.06211
Mask R-CNN 2017 https://arxiv.org/abs/1703.06870
FPN (Feature Pyramid Networks for Object Detection) 2017 https://arxiv.org/abs/1612.03144 fuses feature maps from different convolutional layers
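The FPN fusion is a top-down pathway: each coarser map is upsampled 2x and added to the next finer backbone map after a 1x1 lateral convolution. A minimal PyTorch sketch of one merge step (channel counts are illustrative assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fpn_merge(top_down, lateral_feat, lateral_conv):
    """One FPN step: upsample coarser map 2x, add 1x1-projected finer map."""
    up = F.interpolate(top_down, scale_factor=2, mode="nearest")  # 2x upsample
    return up + lateral_conv(lateral_feat)                        # element-wise add

c4 = torch.randn(1, 256, 14, 14)   # coarser pyramid level (already 256-d)
c3 = torch.randn(1, 512, 28, 28)   # finer backbone feature map
lat = nn.Conv2d(512, 256, 1)       # 1x1 lateral conv to match channels
p3 = fpn_merge(c4, c3, lat)
print(p3.shape)  # torch.Size([1, 256, 28, 28])
```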

GANs (Generative Adversarial Networks)
GAN 2014 https://arxiv.org/abs/1406.2661
DCGAN 2015 http://arxiv.org/abs/1511.06434
LSGAN 2016 https://arxiv.org/abs/1611.04076
WGAN 2017 https://arxiv.org/abs/1701.07875
WGAN-GP 2017 https://arxiv.org/abs/1704.00028 (Improved Training of Wasserstein GANs; see the gradient-penalty sketch after this list)
BEGAN 2017 https://arxiv.org/abs/1703.10717
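The gradient penalty that "improves" WGAN pushes the critic's gradient norm toward 1 at points interpolated between real and fake samples. A minimal PyTorch sketch (the toy critic and data shapes are assumptions; lambda = 10 is the paper's default):

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    """WGAN-GP penalty: ((||grad critic(x_hat)|| - 1)^2), x_hat between real/fake."""
    eps = torch.rand(real.size(0), 1)                 # per-sample mixing factor
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(x_hat).sum()
    grad, = torch.autograd.grad(score, x_hat, create_graph=True)
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

# Toy critic on 2-d data, purely for illustration.
critic = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
real, fake = torch.randn(4, 2), torch.randn(4, 2)
print(gradient_penalty(critic, real, fake))
```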

Word Embeddings
word2vec: fixed (static) word vectors; captures distance relationships between words (a toy similarity sketch follows the links below)
2013, Efficient Estimation of Word Representations in Vector Space, https://arxiv.org/abs/1301.3781, ICLR
2013, Distributed Representations of Words and Phrases and their Compositionality, https://arxiv.org/abs/1310.4546, NIPS
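The word-to-word distance relationships above are usually measured with cosine similarity between the fixed vectors, which is also how the famous king - man + woman ≈ queen analogy is computed. A toy NumPy sketch (the 3-d vectors are made up purely for illustration):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-d embeddings; real word2vec vectors are 100-300 dimensional.
vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}
print(cosine(vec["king"], vec["queen"]))                               # related words
print(cosine(vec["king"] - vec["man"] + vec["woman"], vec["queen"]))   # analogy
```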

ELMo: dynamic (contextual) word vectors; the same word gets a different vector in different contexts
2018, Deep contextualized word representations, https://arxiv.org/abs/1802.05365, NAACL

BERT: moves from plain word vectors to word (token) vectors + sentence (segment) vectors + position vectors
2018, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, https://arxiv.org/abs/1810.04805
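The "word + sentence + position vector" description above is literal: BERT's input is the element-wise sum of token, segment, and position embeddings. A minimal PyTorch sketch (the toy sizes are assumptions; BERT-base actually uses a 30522-word vocab, 512 positions, and hidden size 768):

```python
import torch
import torch.nn as nn

vocab_size, max_len, hidden = 100, 16, 32   # toy sizes, not BERT's real config
tok_emb = nn.Embedding(vocab_size, hidden)  # token (word) embeddings
seg_emb = nn.Embedding(2, hidden)           # segment (sentence A/B) embeddings
pos_emb = nn.Embedding(max_len, hidden)     # learned position embeddings

tokens = torch.randint(0, vocab_size, (1, 8))   # a batch of 8 token ids
segments = torch.zeros(1, 8, dtype=torch.long)  # all from sentence A
positions = torch.arange(8).unsqueeze(0)        # positions 0..7

x = tok_emb(tokens) + seg_emb(segments) + pos_emb(positions)  # BERT input
print(x.shape)  # torch.Size([1, 8, 32])
```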

Attention
2014 Recurrent Models of Visual Attention https://arxiv.org/abs/1406.6247 motivates attention from how images are recognized (sequential glimpses), but is not attention in the modern sense
2014 Neural Machine Translation by Jointly Learning to Align and Translate https://arxiv.org/abs/1409.0473 first use of the attention mechanism in NLP; available in TensorFlow as BahdanauAttention
2015 Effective Approaches to Attention-based Neural Machine Translation https://arxiv.org/abs/1508.04025 available in TensorFlow as LuongAttention
2017 Attention Is All You Need https://arxiv.org/abs/1706.03762 the Transformer's self-attention and encoder-decoder attention (see the sketch below)
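The Transformer's self-attention is scaled dot-product attention, softmax(QK^T / sqrt(d)) V, with Q, K, V all projected from the same sequence. A minimal single-head NumPy sketch (toy dimensions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (Attention Is All You Need)."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq, seq) attention logits
    return softmax(scores) @ V               # weighted sum of value vectors

rng = np.random.default_rng(0)
seq_len, d = 5, 8
x = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)  # (5, 8)
```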
