NLP
DarrenXf
PyTorch implementation: classifying names with a character-level RNN
Papers: The Unreasonable Effectiveness of Recurrent Neural Networks (https://karpathy.github.io/2015/05/21/rnn-effectiveness/), Understanding LSTM Networks (https://colah.github.io/posts/2015-08-Understan...) Original · 2019-02-19 14:40:45
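A minimal PyTorch sketch of the kind of character-level RNN classifier this post implements; the class name CharRNN and the toy sizes below are assumptions for illustration, not the post's actual code.

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    """Plain RNN cell: consume one one-hot character per step, emit class scores."""
    def __init__(self, n_chars, hidden_size, n_classes):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(n_chars + hidden_size, hidden_size)  # input + hidden -> next hidden
        self.i2o = nn.Linear(n_chars + hidden_size, n_classes)    # input + hidden -> class scores

    def forward(self, char, hidden):
        combined = torch.cat((char, hidden), dim=1)
        return self.i2o(combined), torch.tanh(self.i2h(combined))

    def init_hidden(self):
        return torch.zeros(1, self.hidden_size)

# Toy usage: a "name" is a (length, 1, n_chars) one-hot tensor, fed one character at a time.
n_chars, hidden_size, n_classes = 57, 128, 18    # assumed sizes (character set, 18 languages)
rnn = CharRNN(n_chars, hidden_size, n_classes)
name = torch.zeros(6, 1, n_chars)                # stand-in for an encoded 6-letter name
hidden = rnn.init_hidden()
for ch in name:
    logits, hidden = rnn(ch, hidden)
print(logits.shape)                              # torch.Size([1, 18]); argmax picks the language
```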
XLNet: Generalized Autoregressive PreTraining for Language Understanding
XLNet: Generalized Autoregressive PreTraining for Language Understanding. Personal translation, not professional. Paper: https://arxiv.org/pdf/1906.08237.pdf. XLNet: generalized autoregressive pretraining for language understanding. Abstract: with the capability of bidirectional context modeling and denoising autoencoding, compared with pretraining methods based on autoregressive language modeling, BERT-based pretr... Translated · 2019-07-18 20:57:42
Introduction to the GLUE multi-task datasets
GLUE is a collection of natural language tasks, including the following datasets (name / full name / task): MNLI, Multi-Genre NLI, natural language inference; QQP, Quora Question Pairs, semantic textual similarity / paraphrase identifica... Original · 2019-04-07 18:14:43
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
BERT. Personal translation, not authoritative. Paper: https://arxiv.org/pdf/1810.04805.pdf. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Abstract: we introduce a new language representation model called BERT, ... Translated · 2019-04-10 15:23:15
PyTorch implementation of GPT-2
Papers: Gaussian Error Linear Units (translated to Chinese), Attention Is All You Need (translated to Chinese), Improving Language Understanding by Generative Pre-Training (translated to Chinese), Language Models ... Original · 2019-03-23 21:47:14
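One of the papers listed here, Gaussian Error Linear Units, comes down to a single activation function. A small sketch of the tanh approximation commonly used in GPT/GPT-2 style code; whether the post uses this exact form is an assumption.

```python
import math
import torch

def gelu(x):
    """Tanh approximation of the Gaussian Error Linear Unit."""
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi)
                                       * (x + 0.044715 * torch.pow(x, 3.0))))

print(gelu(torch.linspace(-3.0, 3.0, 7)))   # smooth ReLU-like curve, near-identity for large x
```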
OpenAI GPT: Improving Language Understanding by Generative Pre-Training
Paper: OpenAI GPT, Improving Language Understanding by Generative Pre-Training, https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. Personal ... Translated · 2019-03-12 17:07:15
OpenAI GPT: PyTorch implementation of fine-tuning on the ROCStories dataset
Implementing OpenAI GPT. Papers: Gaussian Error Linear Units (translated to Chinese), Attention Is All You Need (translated to Chinese), Improving Language Understanding by Generative Pre-Training (translated to Chi...) Original · 2019-03-20 17:46:56
Transformer: Attention Is All You Need
Attention Is All You Need. Paper: https://arxiv.org/pdf/1706.03762.pdf. Abstract: the dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new, simple network architecture, the Transformer, based solely on attention mechanisms, with no need for recurrence or convolutions at all. Two... Translated · 2019-03-14 16:50:43
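A minimal sketch of the scaled dot-product attention the abstract refers to, softmax(QK^T / sqrt(d_k)) V; the function name and toy tensor shapes are illustrative, not taken from the post.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """softmax(Q K^T / sqrt(d_k)) V, optionally masking out disallowed positions."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights

q = k = v = torch.randn(2, 5, 64)                 # (batch, sequence length, d_k) toy tensors
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)                      # torch.Size([2, 5, 64]) torch.Size([2, 5, 5])
```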
OpenAI GPT-2: Language Models are Unsupervised Multitask Learners
Paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf. Personal translation, not authoritative. Language models are unsupervised multitask learners. Abstract: natural language processing tasks such as question answering, machine translation, reading comprehension, and summarization are typically handled with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a... Translated · 2019-03-09 18:48:22
NLP (natural language processing) datasets, a rough list
Collected in a hurry; accuracy not guaranteed. Dataset table (index / dataset / abbreviation / task / note): 1, LibriSpeech, automatic speech recognition; 2, WSJ, automatic speech recognition; 3, Hub5'00 Evaluation, automatic speech recognition... Original · 2019-02-26 17:16:41
NLP (natural language processing) task list with Chinese translations
Table. I translated it myself; it may not be authoritative. (index / English / Chinese): 1, Automatic speech recognition, 自动语音识别; 2, CCG supertagging, CCG 超级标记; 3, Common sense, 常识; 4, Constituency parsing... Original · 2019-02-26 12:48:21
NLP (natural language processing) task list
Task list: automatic speech recognition, CCG supertagging, common sense, constituency parsing, coreference resolution, dependency parsing, dialogue, domain adaptation, entity linking, grammatical error correct... Original · 2019-02-26 12:42:08
Transformer summary
Attention Is All You Need. Transformer. LayerNorm(x + Sublayer(x)). Cleaned-up Transformer pseudocode, inputs: Inputs, outputs: Outputs. X = Positional_Encoding(Input_Embedding(Inputs)); X = LayerNorm(X + Multi-Head_Attention(X)); X ... Original · 2019-07-24 16:40:18
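A small PyTorch sketch of one encoder layer matching the pseudocode above: LayerNorm(X + Multi-Head_Attention(X)) followed by LayerNorm(X + Feed_Forward(X)). It leans on nn.MultiheadAttention (batch_first requires a reasonably recent PyTorch) rather than a hand-written attention, and the default sizes (d_model=512, 8 heads, d_ff=2048) are the paper's base configuration, not necessarily the post's.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """Post-norm Transformer encoder layer: LayerNorm(x + Sublayer(x)) for both sublayers."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)   # self-attention: queries, keys, values all come from x
        x = self.norm1(x + attn_out)       # X = LayerNorm(X + Multi-Head_Attention(X))
        x = self.norm2(x + self.ff(x))     # X = LayerNorm(X + Feed_Forward(X))
        return x

layer = EncoderLayer()
x = torch.randn(2, 10, 512)                # (batch, sequence length, d_model)
print(layer(x).shape)                      # torch.Size([2, 10, 512])
```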