nlp
FocusYang55
Word Embedding Preparation 5. BERT
BERT: Published by Google in 2018. Bidirectional Encoder Representations from Transformers. Two phases: pre-training and fine-tuning. Uses the Transformer, proposed in Attention Is All You Need (Google, 2017), to replace the RNN. BERT takes advantage of multiple mode…
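A minimal sketch of what the pre-training/fine-tuning split looks like in practice, assuming the Hugging Face transformers package and the bert-base-uncased checkpoint (neither is named in the post); a task-specific head would then be fine-tuned on top of these contextual vectors.

    # Load a pre-trained BERT and take its contextual token representations.
    # Assumes the Hugging Face `transformers` package and the
    # `bert-base-uncased` checkpoint (choices not taken from the post).
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("word embeddings in context", return_tensors="pt")
    outputs = model(**inputs)

    # last_hidden_state: one vector per (sub)word token, conditioned on both
    # left and right context by the bidirectional Transformer encoder.
    print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)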
Word Embedding Preparation 4: ElMo
ELMo: Published in 2018 and named Embeddings from Language Models. Deep contextualized word representations that model complex characteristics of word use and how these uses vary across linguistic contexts. It enables models to better disambiguate betw…
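A toy sketch of the core ELMo idea, that each token's contextual embedding is a softmax-weighted, task-scaled sum of the biLM layer outputs; the layer activations and scalar weights below are random, hypothetical stand-ins, not real model values.

    import numpy as np

    # Stand-in for biLM activations: one vector per layer per token.
    num_layers, num_tokens, dim = 3, 4, 8
    layer_outputs = np.random.randn(num_layers, num_tokens, dim)

    s = np.array([0.2, 0.5, 1.0])        # hypothetical learned weight per layer
    s = np.exp(s) / np.exp(s).sum()      # softmax-normalize the layer weights
    gamma = 1.0                          # task-specific scale

    # Weighted sum over layers -> one contextual vector per token.
    elmo_embeddings = gamma * np.einsum("l,ltd->td", s, layer_outputs)
    print(elmo_embeddings.shape)  # (4, 8)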
Word Embedding Preparation 3. Glove
GloVe: Global Vectors for Word Representation. Same goal as Word2Vec, but training is performed on aggregated global word-word co-occurrence statistics from a corpus. Must be trained offline.
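A rough sketch of what training on aggregated global co-occurrence statistics means: count distance-weighted co-occurrences over a toy corpus and evaluate the GloVe weighted least-squares objective once; the corpus, window size, and hyperparameters are illustrative, not from the post.

    import numpy as np
    from collections import defaultdict

    # Toy corpus and window size chosen only for illustration.
    corpus = ["the cat sat on the mat".split(), "the dog sat on the rug".split()]
    window = 2

    vocab = {w: i for i, w in enumerate(sorted({w for s in corpus for w in s}))}
    X = defaultdict(float)  # global word-word co-occurrence counts

    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    X[(vocab[w], vocab[sent[j]])] += 1.0 / abs(j - i)

    def glove_weight(x, x_max=100.0, alpha=0.75):
        # f(x) from the GloVe paper: down-weights rare pairs, caps frequent ones.
        return (x / x_max) ** alpha if x < x_max else 1.0

    dim = 10
    rng = np.random.default_rng(0)
    W = rng.normal(size=(len(vocab), dim))    # word vectors
    Wc = rng.normal(size=(len(vocab), dim))   # context vectors
    b = np.zeros(len(vocab)); bc = np.zeros(len(vocab))

    # GloVe objective: sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2,
    # normally minimized offline (e.g. with AdaGrad) over the counts above.
    loss = sum(glove_weight(x) * (W[i] @ Wc[j] + b[i] + bc[j] - np.log(x)) ** 2
               for (i, j), x in X.items())
    print(round(loss, 2))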
Word Embedding Preparation 2: Word2Vec
Word2Vec. Abstract: a structure similar to NNLM, but focused on word embeddings. Two learning approaches: Continuous Bag of Words (CBOW) and Continuous Skip-gram (Skip-gram). 1. CBOW: given its context w_{i-n}, w_{i-n+1}, w_{i-n+2}, …
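A small numeric sketch of one CBOW step, predicting the center word from the average of its context vectors; the vocabulary size, dimensions, and word ids are invented for illustration.

    import numpy as np

    # Toy CBOW step: average the context word vectors and score the center
    # word with a softmax over the vocabulary (sizes and ids are made up).
    vocab_size, dim = 50, 16
    rng = np.random.default_rng(0)
    W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # input (context) embeddings
    W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # output (center) embeddings

    context_ids = [3, 7, 12, 25]   # w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}
    center_id = 9                  # w_i

    h = W_in[context_ids].mean(axis=0)   # CBOW hidden layer: average of context vectors
    scores = W_out @ h                   # one score per vocabulary word
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                 # softmax
    loss = -np.log(probs[center_id])     # negative log-likelihood of the true center word
    print(loss)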
Word Embedding Preparation 1: From Hard-code to NNLM
algorithm->machine learning->nlp->word embeddingAbstracthard encodeBag of Wordonehot embedding1. Hard-codedWord is represent by ID. IDs arejust symbolic data.For example Enum, unicode stringCons Hard-code...原创 2020-12-17 00:58:09 · 207 阅读 · 0 评论