深度之眼 Paper Reading Notes: Table of Contents

Introduction

This paper-study camp has two tracks, CV and NLP. Each track divides its papers into three categories: intensive reading, key reading, and recommended reading. Each intensive-reading paper is generally covered in three parts: paper overview, detailed reading, and code walkthrough.
Since my goal is a quick overview of the various methods and frameworks, I mostly skip the code parts and will come back to them when I run into a real problem.
The first paper is the same in both tracks.
All papers can be found on Google Scholar.
Update 2020-07-15: an NLP Baseline series has been added on top of the original NLP intensive-reading papers. It is more detailed and includes code walkthroughs; look for the entries marked Baseline.

Graph Neural Networks (complete)

01.Node2Vec:Node2Vec: Scalable Feature Learning for Networks
02.LINE:LINE: Large-scale Information Network Embedding
03.SDNE:Structural Deep Network Embedding
04.metapath2vec:metapath2vec: Scalable Representation Learning for Heterogeneous Networks
05.TransE/H/R/D:
TransE:Translating Embeddings for Modeling Multi-relational Data
TransH:Knowledge Graph Embedding by Translating on Hyperplanes
TransR:Learning Entity and Relation Embeddings for Knowledge Graph Completion
TransD:Knowledge Graph Embedding via Dynamic Mapping Matrix
06.GAT:Graph Attention Networks
07.GraphSAGE:Inductive Representation Learning on Large Graphs
08.GCN:Semi-Supervised Classification with Graph Convolutional Networks
09.GGNN:Gated Graph Sequence Neural Networks
10.MPNN:Neural Message Passing for Quantum Chemistry

NLP Intensive Reading Papers (complete)

01.Deep learning:Deep learning
02.word2vec:Efficient Estimation of Word Representations in Vector Space
03.Sentence and document embeddings:Distributed Representations of Sentences and Documents
04.machine translation:Neural Machine Translation by Jointly Learning to Align and Translate
05.Transformer:Attention Is All You Need
06.GloVe:GloVe: Global Vectors for Word Representation
07.Skip-Thought:Skip-Thought Vectors
08.TextCNN:Convolutional Neural Networks for Sentence Classification
09.Character-level text classification with CNNs:Character-level Convolutional Networks for Text Classification
10.DCNN:A Convolutional Neural Network for Modelling Sentences
11.FastText:Bag of Tricks for Efficient Text Classification
12.HAN:Hierarchical Attention Networks for Document Classification
13.PCNNATT:Neural Relation Extraction with Selective Attention over Instances
14.E2ECRF:End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF
15.Multi-layer LSTM:Sequence to Sequence Learning with Neural Networks
16.Convolutional seq2seq:Convolutional Sequence to Sequence Learning
17.GNMT:Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
18.UMT:Phrase-Based & Neural Unsupervised Machine Translation
19.Pointer-generator network:Get To The Point: Summarization with Pointer-Generator Networks
20.End-to-End Memory Networks:End-to-End Memory Networks
21.QANet:QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
22.Bi-directional attention (BiDAF):Bi-Directional Attention Flow for Machine Comprehension
23.Dialogue:Adversarial Learning for Neural Dialogue Generation
24.(missing)
25.R-GCNs:Modeling Relational Data with Graph Convolutional Networks
26.Large-scale language modeling:Exploring the Limits of Language Modeling
27.Transformer-XL:Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
28.TCN:An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
29.ELMo:Deep contextualized word representations
30.BERT:Pre-training of Deep Bidirectional Transformers for Language Understanding

NLP Baseline (complete)

1.Word2Vec.Efficient Estimation of Word Representations in Vector Space
2.GloVe.GloVe: Global Vectors for Word Representation
3.C2W.Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation
4.TextCNN.Convolutional Neural Networks for Sentence Classification
5.CharCNN.Character-level Convolutional Networks for Text Classification
6.FastText.Bag of Tricks for Efficient Text Classification
7.Seq2Seq.Sequence to Sequence Learning with Neural Networks
8.Attention NMT.Neural Machine Translation by Jointly Learning to Align and Translate
9.HAN.Hierarchical Attention Network for Document Classification
10.SGM.SGM: Sequence Generation Model for Multi-Label Classification

CV Papers (discontinued)

The content here differs from the earlier CV paper-study camp, and updates have stopped.
01.Deep learning:Deep learning
02.AlexNet:ImageNet Classification with Deep Convolutional Neural Networks
