Word Embedding Preparation 4: ELMo

ELMo

  • Published in 2018; the name stands for Embeddings from Language Models.
  • Deep contextualized word representations that model complex characteristics of word use and how these uses vary across linguistic contexts.
  • It enables models to better disambiguate between the different senses of a given word.
  • ELMo determines word embeddings dynamically, based on the context in the downstream task.
  • ELMo generates three embeddings: (1) the context-independent word embedding, (2) the 1st LSTM layer embedding, and (3) the 2nd LSTM layer embedding.
  • Pre-training -> obtain the three embeddings (v1, v2, v3) per word (big-data environment).
  • Fine-tuning -> freeze the embeddings and train the weights (w1, w2, w3) for (v1, v2, v3) (local environment).
  • The final embedding is w1*v1 + w2*v2 + w3*v3 (a minimal sketch of this weighted sum follows below).
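
A minimal sketch of this weighted combination, assuming PyTorch. The class name `ScalarMix`, the 5-token sentence, and the random vectors are illustrative assumptions, not ELMo's released code; in the ELMo paper the task-specific weights are softmax-normalized and the mix is scaled by a learned scalar gamma, which the sketch includes:

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted sum of the frozen ELMo layer embeddings
    (v1, v2, v3). The weights (w1, w2, w3) and the scale gamma are the
    only parameters trained during fine-tuning."""

    def __init__(self, num_layers: int = 3):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # w1, w2, w3
        self.gamma = nn.Parameter(torch.ones(1))              # global scale

    def forward(self, layer_embeddings):
        # Softmax keeps the weights positive and summing to one.
        w = torch.softmax(self.weights, dim=0)
        mixed = sum(wi * vi for wi, vi in zip(w, layer_embeddings))
        return self.gamma * mixed

# Hypothetical pre-computed layer embeddings for a 5-token sentence;
# 1024 matches the dimensionality of ELMo's biLM layer outputs.
v1, v2, v3 = (torch.randn(5, 1024) for _ in range(3))
final = ScalarMix()([v1, v2, v3])   # shape: (5, 1024)
```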

Two-layer bidirectional LSTM backbone

Two-layer: the stacked layers learn different characteristics of word use (in the ELMo paper, the lower layer captures more syntactic information and the higher layer more semantic information).

Bidirectional: to learn from context on both sides, i.e., the words before and the words after the target word.
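
A minimal sketch of such a backbone, assuming PyTorch; the embedding and hidden sizes are illustrative, not ELMo's exact ones. Note that the actual ELMo biLM trains a forward and a backward language model separately, so a single bidirectional `nn.LSTM` is a simplification here:

```python
import torch
import torch.nn as nn

embed_dim, hidden_dim = 512, 256   # illustrative sizes, not ELMo's exact ones

lstm = nn.LSTM(
    input_size=embed_dim,
    hidden_size=hidden_dim,
    num_layers=2,          # two stacked layers learn different uses
    bidirectional=True,    # reads the sentence left-to-right and right-to-left
    batch_first=True,
)

tokens = torch.randn(1, 5, embed_dim)   # a batch of one 5-token sentence
outputs, _ = lstm(tokens)
print(outputs.shape)   # torch.Size([1, 5, 512]): both directions concatenated
```

Note that `nn.LSTM` only exposes the top layer's states; to collect the per-layer embeddings v2 and v3 described above, one would stack two single-layer LSTMs and keep each layer's output.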
