126 Landmark Deep Learning Papers, Organized by Topic: From Fundamentals to Applications (Part 2)

3 Applications

3.1 Natural Language Processing (NLP)

█[1] Antoine Bordes, et al. "Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing." AISTATS(2012) [pdf] ★★★★

Link: hds.utc.fr/~bordesan/do

█[2] Mikolov, et al. "Distributed representations of words and phrases and their compositionality." ANIPS(2013): 3111-3119 [pdf] (word2vec) ★★★

Link: papers.nips.cc/paper/50
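The skip-gram-with-negative-sampling objective behind word2vec (entry [2]) fits in a few lines of NumPy. The sketch below is only an illustration of the objective: the vocabulary size, dimensions, learning rate, and the single training pair are made-up toy values, not anything from the paper.

```python
import numpy as np

# Toy skip-gram with negative sampling (word2vec): push sigmoid(u_c . v_w)
# toward 1 for an observed (word, context) pair and toward 0 for sampled
# negative context words.
rng = np.random.default_rng(0)
vocab, dim, lr = 10, 8, 0.1
W_in = rng.normal(scale=0.1, size=(vocab, dim))   # input (word) vectors
W_out = rng.normal(scale=0.1, size=(vocab, dim))  # output (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(word, context, negatives):
    """One SGD step on a single (word, context) pair plus negative samples."""
    v = W_in[word]
    ids = np.array([context] + list(negatives))
    labels = np.array([1.0] + [0.0] * len(negatives))
    u = W_out[ids]                      # (k+1, dim) context vectors
    scores = sigmoid(u @ v)             # model's P(pair is genuine)
    grad = (scores - labels)[:, None]   # dLoss/dscore for each pair
    W_in[word] -= lr * (grad * u).sum(axis=0)
    W_out[ids] -= lr * grad * v

# Repeatedly train on one genuine pair (word 1, context 2) with random negatives.
for _ in range(200):
    sgns_step(word=1, context=2, negatives=rng.integers(3, vocab, size=3))

# The trained pair now scores higher than a negative-sampled word.
print(sigmoid(W_out[2] @ W_in[1]) > sigmoid(W_out[5] @ W_in[1]))  # True
```

In the real system the pairs come from sliding windows over a large corpus, and negatives are drawn from a smoothed unigram distribution rather than uniformly.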

█[3] Sutskever, et al. "Sequence to sequence learning with neural networks." ANIPS(2014) [pdf] ★★★

Link: papers.nips.cc/paper/53

█[4] Ankit Kumar, et al. "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing." arXiv preprint arXiv:1506.07285(2015) [pdf] ★★★★

Link: arxiv.org/abs/1506.0728

█[5] Yoon Kim, et al. "Character-Aware Neural Language Models." AAAI(2016); arXiv preprint arXiv:1508.06615(2015) [pdf] ★★★

Link: arxiv.org/abs/1508.06615

█[6] Jason Weston, et al. "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks." arXiv preprint arXiv:1502.05698(2015) [pdf] (bAbI tasks) ★★★

Link: arxiv.org/abs/1502.05698

█[7] Karl Moritz Hermann, et al. "Teaching Machines to Read and Comprehend." arXiv preprint arXiv:1506.03340(2015) [pdf] (CNN/Daily Mail cloze-style questions) ★★

Link: arxiv.org/abs/1506.03340

█[8] Alexis Conneau, et al. "Very Deep Convolutional Networks for Text Classification." arXiv preprint arXiv:1606.01781(2016) [pdf] (state of the art in text classification) ★★★

Link: arxiv.org/abs/1606.01781

█[9] Armand Joulin, et al. "Bag of Tricks for Efficient Text Classification." arXiv preprint arXiv:1607.01759(2016) [pdf] (slightly behind the state of the art, but much faster) ★★★

Link: arxiv.org/abs/1607.01759
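Entry [9]'s core recipe is strikingly simple: represent a document as the average of its word embeddings and train a linear classifier on top. The sketch below assumes a made-up two-document toy corpus; it illustrates the idea only, not the paper's implementation (which adds n-gram features and a hierarchical softmax).

```python
import numpy as np

# fastText-style classifier sketch: document = mean of its word vectors,
# followed by a softmax over labels; embeddings and classifier are
# trained jointly by SGD.
rng = np.random.default_rng(0)
vocab, dim, n_labels, lr = 6, 4, 2, 0.5
E = rng.normal(scale=0.1, size=(vocab, dim))     # word embeddings
W = rng.normal(scale=0.1, size=(dim, n_labels))  # linear classifier

docs = [([0, 1, 2], 0), ([3, 4, 5], 1)]          # (word ids, label)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(100):
    for words, label in docs:
        h = E[words].mean(axis=0)                # averaged embedding
        p = softmax(h @ W)
        g = p.copy(); g[label] -= 1.0            # dLoss/dlogits
        gh = W @ g                               # dLoss/dh
        W -= lr * np.outer(h, g)
        E[words] -= lr * gh / len(words)         # spread grad over words

for words, label in docs:
    print(int(np.argmax(E[words].mean(axis=0) @ W)) == label)  # True, True
```

Because the per-document work is one embedding average and one matrix-vector product, training and inference scale to very large corpora, which is the paper's point.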

3.2 Object Detection

█[1] Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "Deep neural networks for object detection." Advances in Neural Information Processing Systems. 2013. [pdf] ★★★

Link: papers.nips.cc/paper/52

█[2] Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. [pdf] (RCNN) ★★★★★

Link: cv-foundation.org/opena

█[3] He, Kaiming, et al. "Spatial pyramid pooling in deep convolutional networks for visual recognition." European Conference on Computer Vision. Springer International Publishing, 2014. [pdf] (SPPNet) ★★★★

Link: arxiv.org/pdf/1406.4729

█[4] Girshick, Ross. "Fast r-cnn." Proceedings of the IEEE International Conference on Computer Vision. 2015. [pdf] ★★★★

Link: pdfs.semanticscholar.org

█[5] Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015. [pdf] ★★★★

Link: papers.nips.cc/paper/56

█[6] Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." arXiv preprint arXiv:1506.02640 (2015). [pdf] (YOLO; outstanding work with great practical value) ★★★★★

Link: homes.cs.washington.edu

█[7] Liu, Wei, et al. "SSD: Single Shot MultiBox Detector." arXiv preprint arXiv:1512.02325 (2015). [pdf] ★★★

Link: arxiv.org/pdf/1512.0232

█[8] Dai, Jifeng, et al. "R-FCN: Object Detection via Region-based Fully Convolutional Networks." arXiv preprint arXiv:1605.06409 (2016). [pdf] ★★★★

Link: arxiv.org/abs/1605.06409
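Two computations shared by essentially every detector above, from R-CNN through SSD and R-FCN, are intersection-over-union (IoU) between boxes and greedy non-maximum suppression (NMS) of overlapping detections. A minimal sketch, with illustrative boxes and a threshold chosen for the demo:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep boxes in descending score
    order, dropping any box whose IoU with a kept box exceeds thresh."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 and is suppressed
```

The papers differ mainly in how candidate boxes are proposed and scored; this post-processing step is common to all of them.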

3.3 Visual Tracking

█[1] Wang, Naiyan, and Dit-Yan Yeung. "Learning a deep compact image representation for visual tracking." Advances in neural information processing systems. 2013. [pdf] (the first paper to apply deep learning to visual tracking; DLT Tracker) ★★★

Link: papers.nips.cc/paper/51

█[2] Wang, Naiyan, et al. "Transferring rich feature hierarchies for robust visual tracking." arXiv preprint arXiv:1501.04587 (2015). [pdf] (SO-DLT) ★★★★

Link: arxiv.org/pdf/1501.0458

█[3] Wang, Lijun, et al. "Visual tracking with fully convolutional networks." Proceedings of the IEEE International Conference on Computer Vision. 2015. [pdf] (FCNT) ★★★★

Link: cv-foundation.org/opena

█[4] Held, David, Sebastian Thrun, and Silvio Savarese. "Learning to Track at 100 FPS with Deep Regression Networks." arXiv preprint arXiv:1604.01802 (2016). [pdf] (GOTURN; very fast among deep learning trackers, though still much slower than non-deep-learning methods) ★★★★

Link: arxiv.org/pdf/1604.0180

█[5] Bertinetto, Luca, et al. "Fully-Convolutional Siamese Networks for Object Tracking." arXiv preprint arXiv:1606.09549 (2016). [pdf] (SiameseFC; the latest state of the art in real-time object tracking) ★★★★

Link: arxiv.org/pdf/1606.0954

█[6] Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. "Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking." ECCV (2016) [pdf] (C-COT) ★★★★

Link: cvl.isy.liu.se/research

█[7] Nam, Hyeonseob, Mooyeol Baek, and Bohyung Han. "Modeling and Propagating CNNs in a Tree Structure for Visual Tracking." arXiv preprint arXiv:1608.07242 (2016). [pdf] (VOT2016 winner; TCNN) ★★★★

Link: arxiv.org/pdf/1608.0724

3.4 Image Captioning

█[1] Farhadi, Ali, et al. "Every picture tells a story: Generating sentences from images". In Computer Vision – ECCV 2010. Springer Berlin Heidelberg:15-29, 2010. [pdf] ★★★

Link: cs.cmu.edu/~afarhadi/pa

█[2] Kulkarni, Girish, et al. "Baby talk: Understanding and generating image descriptions". In Proceedings of the 24th CVPR, 2011. [pdf] ★★★★

Link: tamaraberg.com/papers/g

█[3] Vinyals, Oriol, et al. "Show and tell: A neural image caption generator". In arXiv preprint arXiv:1411.4555, 2014. [pdf] ★★★

Link: arxiv.org/pdf/1411.4555

█[4] Donahue, Jeff, et al. "Long-term recurrent convolutional networks for visual recognition and description". In arXiv preprint arXiv:1411.4389 ,2014. [pdf]

Link: arxiv.org/pdf/1411.4389

█[5] Karpathy, Andrej, and Li Fei-Fei. "Deep visual-semantic alignments for generating image descriptions". In arXiv preprint arXiv:1412.2306, 2014. [pdf] ★★★★★

Link: cs.stanford.edu/people/

█[6] Karpathy, Andrej, Armand Joulin, and Fei Fei F. Li. "Deep fragment embeddings for bidirectional image sentence mapping". In Advances in neural information processing systems, 2014. [pdf] ★★★★

Link: arxiv.org/pdf/1406.5679

█[7] Fang, Hao, et al. "From captions to visual concepts and back". In arXiv preprint arXiv:1411.4952, 2014. [pdf] ★★★★★

Link: arxiv.org/pdf/1411.4952

█[8] Chen, Xinlei, and C. Lawrence Zitnick. "Learning a recurrent visual representation for image caption generation". In arXiv preprint arXiv:1411.5654, 2014. [pdf] ★★★★

Link: arxiv.org/pdf/1411.5654

█[9] Mao, Junhua, et al. "Deep captioning with multimodal recurrent neural networks (m-rnn)". In arXiv preprint arXiv:1412.6632, 2014. [pdf] ★★★

Link: arxiv.org/pdf/1412.6632

█[10] Xu, Kelvin, et al. "Show, attend and tell: Neural image caption generation with visual attention". In arXiv preprint arXiv:1502.03044, 2015. [pdf] ★★★★★

Link: arxiv.org/pdf/1502.0304

3.5 Machine Translation

Some of the milestone work in this area is listed in the RNN / Seq-to-Seq section.

█[1] Luong, Minh-Thang, et al. "Addressing the rare word problem in neural machine translation." arXiv preprint arXiv:1410.8206 (2014). [pdf] ★★★★

Link: arxiv.org/pdf/1410.8206

█[2] Sennrich, et al. "Neural Machine Translation of Rare Words with Subword Units". In arXiv preprint arXiv:1508.07909, 2015. [pdf] ★★★

Link: arxiv.org/pdf/1508.0790

█[3] Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015). [pdf] ★★★★

Link: arxiv.org/pdf/1508.0402
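At its simplest, entry [3]'s global attention scores the decoder state against every encoder state with a dot product, softmax-normalizes the scores, and takes the weighted sum as the context vector. The sketch below uses deterministic one-hot vectors as toy stand-ins for the paper's RNN hidden states:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def global_attention(dec_state, enc_states):
    """Luong-style dot-product attention: score each source position
    against the decoder state, normalize with softmax, and return the
    weighted context vector plus the alignment weights."""
    scores = enc_states @ dec_state      # (src_len,)
    weights = softmax(scores)
    context = weights @ enc_states       # (dim,)
    return context, weights

# Deterministic demo: one-hot "encoder states" over 5 source positions,
# with the decoder state aligned to position 3.
enc = np.eye(5, 8)
dec = 3.0 * enc[3]
ctx, w = global_attention(dec, enc)
print(int(np.argmax(w)))  # 3: attention peaks at the aligned position
```

The paper also studies "local" variants that restrict attention to a window of source positions, and alternative scoring functions (general and concat) beyond the plain dot product.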

█[4] Chung, et al. "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation". In arXiv preprint arXiv:1603.06147, 2016. [pdf] ★★

Link: arxiv.org/pdf/1603.0614

█[5] Lee, et al. "Fully Character-Level Neural Machine Translation without Explicit Segmentation". In arXiv preprint arXiv:1610.03017, 2016. [pdf] ★★★★★

Link: arxiv.org/pdf/1610.0301

█[6] Wu, Schuster, Chen, Le, et al. "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". In arXiv preprint arXiv:1609.08144v2, 2016. [pdf] (Milestone) ★★★★

Link: arxiv.org/pdf/1609.0814

3.6 Robotics

█[1] Koutník, Jan, et al. "Evolving large-scale neural networks for vision-based reinforcement learning." Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013. [pdf] ★★★

Link: repository.supsi.ch/455

█[2] Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." Journal of Machine Learning Research 17.39 (2016): 1-40. [pdf] ★★★★★

Link: jmlr.org/papers/volume1

█[3] Pinto, Lerrel, and Abhinav Gupta. "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours." arXiv preprint arXiv:1509.06825 (2015). [pdf] ★★★

Link: arxiv.org/pdf/1509.0682

█[4] Levine, Sergey, et al. "Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection." arXiv preprint arXiv:1603.02199 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1603.0219

█[5] Zhu, Yuke, et al. "Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning." arXiv preprint arXiv:1609.05143 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1609.0514

█[6] Yahya, Ali, et al. "Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search." arXiv preprint arXiv:1610.00673 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.0067

█[7] Gu, Shixiang, et al. "Deep Reinforcement Learning for Robotic Manipulation." arXiv preprint arXiv:1610.00633 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.0063

█[8] A Rusu, M Vecerik, Thomas Rothörl, N Heess, R Pascanu, R Hadsell. "Sim-to-Real Robot Learning from Pixels with Progressive Nets." arXiv preprint arXiv:1610.04286 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.0428

█[9] Mirowski, Piotr, et al. "Learning to navigate in complex environments." arXiv preprint arXiv:1611.03673 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1611.0367

3.7 Art

█[1] Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (2015). "Inceptionism: Going Deeper into Neural Networks". Google Research. [html] (Deep Dream) ★★★★

Link: research.googleblog.com

█[2] Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. "A neural algorithm of artistic style." arXiv preprint arXiv:1508.06576 (2015). [pdf] (outstanding work; the most successful method to date) ★★★★★

Link: arxiv.org/pdf/1508.0657
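Entry [2] represents "style" as Gram matrices of CNN feature maps: channel-wise correlations that discard where things are in the image. The sketch below uses random arrays as stand-ins for CNN activations, and the normalization constant is a choice made for this demo rather than the paper's exact scaling:

```python
import numpy as np

def gram(features):
    """Gram matrix of a feature map with shape (channels, height, width):
    channel-channel correlations, summed over all spatial positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between the two Gram matrices."""
    return float(((gram(feat_a) - gram(feat_b)) ** 2).mean())

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4, 4))
# Shuffle spatial positions identically across channels.
shuffled = feats.reshape(3, 16)[:, rng.permutation(16)].reshape(3, 4, 4)

# The spatially shuffled map has an (essentially) identical Gram matrix:
print(style_loss(feats, shuffled) < 1e-12)  # True: style ignores layout
```

In the actual method this loss is computed on VGG activations at several layers, combined with a content loss on deeper activations, and minimized by gradient descent on the pixels of the output image.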

█[3] Zhu, Jun-Yan, et al. "Generative Visual Manipulation on the Natural Image Manifold." European Conference on Computer Vision. Springer International Publishing, 2016. [pdf] (iGAN) ★★★★

Link: arxiv.org/pdf/1609.0355

█[4] Champandard, Alex J. "Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artworks." arXiv preprint arXiv:1603.01768 (2016). [pdf] (Neural Doodle) ★★★★

Link: arxiv.org/pdf/1603.0176

█[5] Zhang, Richard, Phillip Isola, and Alexei A. Efros. "Colorful Image Colorization." arXiv preprint arXiv:1603.08511 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1603.0851

█[6] Johnson, Justin, Alexandre Alahi, and Li Fei-Fei. "Perceptual losses for real-time style transfer and super-resolution." arXiv preprint arXiv:1603.08155 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1603.0815

█[7] Vincent Dumoulin, Jonathon Shlens and Manjunath Kudlur. "A learned representation for artistic style." arXiv preprint arXiv:1610.07629 (2016). [pdf] ★★★★

Link: arxiv.org/pdf/1610.0762

█[8] Gatys, Leon A., Alexander S. Ecker, et al. "Controlling Perceptual Factors in Neural Style Transfer." arXiv preprint arXiv:1611.07865 (2016). [pdf] (controls style transfer over spatial location, colour information, and spatial scale) ★★★★

Link: arxiv.org/pdf/1611.0786

█[9] Ulyanov, Dmitry and Lebedev, Vadim, et al. "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images." arXiv preprint arXiv:1603.03417(2016). [pdf] (texture synthesis and style transfer) ★★★★

Link: arxiv.org/pdf/1603.0341

3.8 Object Segmentation

█[1] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation.” in CVPR, 2015. [pdf] ★★★★★

Link: arxiv.org/pdf/1411.4038

█[2] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. "Semantic image segmentation with deep convolutional nets and fully connected crfs." In ICLR, 2015. [pdf] ★★★★★

Link: arxiv.org/pdf/1606.0091

█[3] Pinheiro, P.O., Collobert, R., Dollar, P. "Learning to segment object candidates." In: NIPS. 2015. [pdf] ★★★★

Link: arxiv.org/pdf/1506.0620

█[4] Dai, J., He, K., Sun, J. "Instance-aware semantic segmentation via multi-task network cascades." in CVPR. 2016 [pdf] ★★★

Link: arxiv.org/pdf/1512.0441

█[5] Dai, J., He, K., Sun, J. "Instance-sensitive Fully Convolutional Networks." arXiv preprint arXiv:1603.08678 (2016). [pdf] ★★★

Link: arxiv.org/pdf/1603.0867



Original source: songrotek/Deep-Learning-Papers-Reading-Roadmap

https://zhuanlan.zhihu.com/p/25549585


