Original: Word Embedding Preparation 1: From Hard-coding to NNLM
Tags: algorithm → machine learning → NLP → word embedding. Abstract: hard encoding, Bag of Words, one-hot embedding. 1. Hard-coding: a word is represented by an ID, and IDs are just symbolic data (for example an enum or a Unicode string). Cons of hard-coding...
2020-12-17 00:58:09
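The two representations the post contrasts can be sketched in a few lines. This is a toy illustration with a hypothetical three-word vocabulary, not code from the post:

```python
# Toy vocabulary (hypothetical) and the two representations:
# hard-coded symbolic IDs vs. one-hot vectors.
vocab = ["cat", "dog", "apple"]
word_to_id = {w: i for i, w in enumerate(vocab)}  # hard-coding: word -> symbolic ID

def one_hot(word):
    """Return a one-hot vector for `word` over the toy vocabulary."""
    vec = [0] * len(vocab)
    vec[word_to_id[word]] = 1
    return vec

print(one_hot("dog"))  # [0, 1, 0]
```

Both representations carry no notion of similarity: "cat" is exactly as far from "dog" as from "apple", which is the limitation NNLM-style dense embeddings address.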
Original: Word Embedding Preparation 2: Word2Vec
Word2Vec. Abstract: the same network structure as NNLM, but focused on the word embedding itself. Two learning approaches: Continuous Bag of Words (CBOW) and Continuous Skip-gram (Skip-gram). 1. CBOW: given its context w_{i-n}, w_{i-n+1}, w_{i-n+2}, ...
2020-12-17 00:56:30
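The difference between the two approaches shows up directly in how training pairs are built from a sentence: CBOW maps a context window to its center word, Skip-gram maps a center word to each context word. A minimal sketch (the tokenized sentence is a made-up example):

```python
def training_pairs(tokens, window=2):
    """Build (context, center) pairs for CBOW and (center, context_word)
    pairs for Skip-gram from one token sequence."""
    cbow, skipgram = [], []
    for i, center in enumerate(tokens):
        # words within `window` positions on either side of the center word
        context = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
        cbow.append((context, center))                 # CBOW: context -> center
        skipgram.extend((center, c) for c in context)  # Skip-gram: center -> context
    return cbow, skipgram

cbow, sg = training_pairs(["the", "cat", "sat", "on", "mat"], window=1)
print(cbow[0])  # (['cat'], 'the')
```

The actual models then train a shallow network over these pairs; only the pair construction is shown here.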
Original: Word Embedding Preparation 5: BERT
BERT. Published by Google in 2018: Bidirectional Encoder Representations from Transformers. Two phases: pre-training and fine-tuning. Uses the Transformer, proposed in "Attention Is All You Need" (Google, 2017), to replace RNNs. BERT takes advantage of multiple modes.
2020-12-17 00:55:56
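The Transformer building block that BERT uses in place of RNNs is scaled dot-product self-attention. Below is a minimal NumPy sketch of that single operation on toy random vectors; it is an illustration of the mechanism, not BERT itself (which stacks many multi-head attention layers):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
    the core operation from "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarity
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of all value vectors

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 toy token vectors, dim 8
out = scaled_dot_product_attention(x, x, x)       # self-attention: Q = K = V
print(out.shape)  # (4, 8)
```

Because every token attends to every other token in both directions at once, the encoder is bidirectional by construction, which is what the "B" in BERT refers to.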
Original: Word Embedding Preparation 4: ELMo
ELMo. Published in 2018 and named for Embeddings from Language Models. Deep contextualized word representations that model complex characteristics of word use and how these uses vary across linguistic contexts. It enables models to better disambiguate between...
2020-12-17 00:55:26
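The key contrast with the earlier methods is that a static lookup table assigns one vector per word type, while a contextualized model assigns a different vector per occurrence. The toy sketch below is a stand-in for that idea only: it mixes a word's static vector with the mean of its context vectors, whereas real ELMo uses the hidden states of a deep bidirectional language model.

```python
import numpy as np

rng = np.random.default_rng(1)
static = {}  # static embedding table: one fixed vector per word type

def static_vec(word):
    """Lazily assign a random static vector to each word type (toy stand-in)."""
    if word not in static:
        static[word] = rng.normal(size=4)
    return static[word]

def contextual_vec(tokens, i):
    """Toy 'contextualized' vector: the word's static vector mixed with the
    mean of its context -- NOT ELMo's biLM, just the same qualitative effect."""
    context = [static_vec(t) for j, t in enumerate(tokens) if j != i]
    return 0.5 * static_vec(tokens[i]) + 0.5 * np.mean(context, axis=0)

a = contextual_vec(["river", "bank"], 1)    # "bank" near "river"
b = contextual_vec(["savings", "bank"], 1)  # "bank" near "savings"
```

Here `a` and `b` differ even though both are vectors for "bank", which is the property that lets downstream models disambiguate word senses.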
Original: Word Embedding Preparation 3: GloVe
GloVe. Global Vectors for Word Representation. A model similar to Word2Vec, but training is performed on aggregated global word-word co-occurrence statistics from a corpus. Must be trained offline.
2020-12-17 00:15:48
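Those global statistics are just a symmetric word-word co-occurrence count over a window. A minimal sketch of building them (the sentence is a made-up example; the GloVe regression model trained on these counts is omitted):

```python
from collections import Counter

def cooccurrence(tokens, window=2):
    """Count symmetric word-word co-occurrences within `window` positions --
    the aggregated global statistics GloVe is trained on."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), i):  # only look left; add both directions
            counts[(w, tokens[j])] += 1
            counts[(tokens[j], w)] += 1
    return counts

counts = cooccurrence("the cat sat on the mat".split(), window=2)
print(counts[("the", "cat")])  # 1
```

Because the whole corpus must be scanned to accumulate these counts before fitting, training is inherently an offline batch process, as the post notes.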
ffmpeg video decoder: installing the decoding support libraries
2015-01-20