【NLP】Representation Learning for Natural Language Processing

One-hot representation

  • assigns a unique index to each word → a high-dimensional sparse representation
  • cannot capture the semantic relatedness among words: in one-hot representation, cat is exactly as different from dog as it is from bed (see the sketch after this list)
  • inflexible when dealing with new words in real-world scenarios
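
A minimal sketch (Python with numpy; the three-word vocabulary is made up for illustration) of why one-hot vectors cannot encode relatedness: every pair of distinct words is orthogonal, so cat is no closer to dog than to bed.

```python
import numpy as np

# Toy vocabulary; each word gets a unique index and a one-hot vector.
vocab = ["cat", "dog", "bed"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(one_hot["cat"], one_hot["dog"]))  # 0.0
print(cosine(one_hot["cat"], one_hot["bed"]))  # 0.0 -- no semantic relatedness captured
```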

Distributed representation

  • Representation learning aims to learn informative representations of objects from raw data automatically.
  • Distributed representation has proved more efficient because it usually has low dimensionality, which avoids the sparsity issue.
  • Deep learning is a typical approach for representation learning.
Development of representation learning in NLP

  • N-gram Model: predicts the next item in a sequence based on its previous n-1 items; a probabilistic language model.
  • Bag-of-words (BoW): disregards the order of the words in the document: ① each word that appears in the document corresponds to a unique, nonzero dimension; ② a score (e.g., the number of occurrences) can be computed for each word to indicate its weight.
  • TF-IDF: extends BoW by taking the importance of different words into consideration, rather than treating all words equally (see the sketch after this list).
  • Neural Probabilistic Language Model (NPLM): first assigns a distributed vector to each word, then uses a neural network to predict the next word. Examples include feed-forward, recurrent, and LSTM-based neural network language models.
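
A small sketch, assuming scikit-learn is installed (the two-document corpus is made up), of the bag-of-words and TF-IDF items above: word order is discarded, each word becomes a dimension, and the weight is either a raw count (BoW) or a count reweighted by how rare the word is across documents (TF-IDF).

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]  # toy corpus

# Bag-of-words: raw occurrence counts, one column per vocabulary word.
bow = CountVectorizer()
print(bow.fit_transform(docs).toarray())
print(bow.get_feature_names_out())

# TF-IDF: words that appear in every document (e.g., "the") get lower weight.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray())
```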

Word embeddings: 

Word2Vec, GloVe, fastText

Inspired by NPLM, many methods have emerged that embed words into distributed representations. In the NLP pipeline, word embeddings map discrete words into informative low-dimensional vectors.
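
A minimal sketch, assuming gensim 4.x is installed (the toy corpus and hyperparameters are illustrative only), of training skip-gram Word2Vec and querying word similarity:

```python
from gensim.models import Word2Vec

# Tokenized toy corpus; real training would use a much larger corpus.
sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "log"]]

# sg=1 selects the skip-gram objective; vector_size is the embedding dimension.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

vec = model.wv["cat"]                      # a dense 50-dimensional vector
print(model.wv.similarity("cat", "dog"))   # cosine similarity between word vectors
```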

Pre-trained Language Models (PLM):

ELMo, BERT

  • take complicated context in text into consideration
  • calculate dynamic representations for words based on their context, which is especially useful for words with multiple meanings (see the sketch below)
  • follow the pretraining-fine-tuning pipeline
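
A sketch, assuming the HuggingFace transformers library and the public bert-base-uncased checkpoint, of what "dynamic representations" means: the same surface word receives different vectors in different contexts.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    # Encode the sentence and return the hidden state of the given token.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_size)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

v1 = word_vector("I sat by the river bank.", "bank")
v2 = word_vector("I deposited cash at the bank.", "bank")
print(torch.cosine_similarity(v1, v2, dim=0))  # below 1: context shifts the representation
```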

The Pre-trained language model family


Applications

Neural Relation Extraction

  • Sentence-Level NRE: a basic form of sentence-level NRE consists of three components: (a) an input encoder that gives a representation for each input word (word embeddings, position embeddings, part-of-speech (POS) tag embeddings, WordNet hypernym embeddings); (b) a sentence encoder that computes either a single vector or a sequence of vectors to represent the original sentence; (c) a relation classifier that calculates the conditional probability distribution over all relations. (A simplified sketch follows this list.)

  • Bag-Level NRE: utilizing information from multiple sentences (bag-level) rather than a single sentence (sentence-level) to decide if a relation holds between two entities. A basic form of bag-level NRE consists of four components: (a) an input encoder similar to sentence-level NRE, (b) a sentence encoder similar to sentence-level NRE, (c) a bag encoder which computes a vector representing all related sentences in a bag, and (d) a relation classifier similar to sentence-level NRE which takes bag vectors as input instead of sentence vectors.
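
A simplified PyTorch sketch (the module names, sizes, and the choice of a CNN encoder are illustrative, not a specific published system) of the three sentence-level components above: (a) an input encoder built from word and position embeddings, (b) a convolutional sentence encoder with max pooling, and (c) a softmax relation classifier.

```python
import torch
import torch.nn as nn

class SentenceNRE(nn.Module):
    def __init__(self, vocab_size, n_relations, word_dim=50, pos_dim=5,
                 max_len=100, hidden=128):
        super().__init__()
        # (a) input encoder: word embedding plus two position embeddings
        #     (relative distances to the head and tail entities)
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos1_emb = nn.Embedding(2 * max_len, pos_dim)
        self.pos2_emb = nn.Embedding(2 * max_len, pos_dim)
        # (b) sentence encoder: 1-D convolution followed by max pooling over time
        self.conv = nn.Conv1d(word_dim + 2 * pos_dim, hidden, kernel_size=3, padding=1)
        # (c) relation classifier: linear layer plus softmax over relation types
        self.classifier = nn.Linear(hidden, n_relations)

    def forward(self, words, pos1, pos2):
        x = torch.cat([self.word_emb(words),
                       self.pos1_emb(pos1),
                       self.pos2_emb(pos2)], dim=-1)              # (batch, len, dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))              # (batch, hidden, len)
        sent_vec = h.max(dim=2).values                            # single vector per sentence
        return torch.softmax(self.classifier(sent_vec), dim=-1)   # P(relation | sentence)

# Toy forward pass: batch of 2 sentences, length 10, 5 relation types.
model = SentenceNRE(vocab_size=1000, n_relations=5)
words = torch.randint(0, 1000, (2, 10))
pos1 = torch.randint(0, 200, (2, 10))
pos2 = torch.randint(0, 200, (2, 10))
print(model(words, pos1, pos2).shape)  # torch.Size([2, 5])
```

A bag-level model would add a bag encoder between (b) and (c), e.g. averaging or attending over the sentence vectors of all sentences that mention the same entity pair.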

Topic Model

  • Topic modeling algorithms do not require any prior annotations or labeling of the documents.
  • A topic model is a generative model: every word in a document is produced by the process of "choosing a topic with some probability, then choosing a word from that topic with some probability."
  • Given a document, LDA works backward to infer its topic distribution. In LDA, a document is generated as follows:

For each document in the collection, we generate the words in a two-stage process (a toy simulation of this process follows the steps below):

1. Randomly choose a distribution over topics.

2. For each word in the document,

    • Randomly choose a topic from the distribution over topics in step #1.

    • Randomly choose a word from the corresponding distribution over the vocabulary.
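
A toy simulation (numpy only; the vocabulary, the two topic-word distributions, and the Dirichlet hyperparameter are made up) of exactly this two-stage process:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = np.array(["gene", "dna", "cell", "ball", "team", "game"])

# Fixed topic-word distributions: topic 0 ~ biology, topic 1 ~ sports.
topics = np.array([[0.4, 0.3, 0.3, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.4, 0.3, 0.3]])

def generate_document(n_words=8, alpha=0.5):
    theta = rng.dirichlet([alpha] * len(topics))   # step 1: distribution over topics
    words = []
    for _ in range(n_words):
        z = rng.choice(len(topics), p=theta)       # step 2a: choose a topic
        w = rng.choice(vocab, p=topics[z])         # step 2b: choose a word from that topic
        words.append(w)
    return theta, words

theta, words = generate_document()
print(np.round(theta, 2), words)
```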

Diagram of the LDA generative process

Assumptions of LDA

  • One assumption that LDA makes is the bag-of-words assumption that the order of the words in the document does not matter.
  • Another assumption is that the order of documents does not matter. This assumption may be unrealistic when analyzing long-running collections that span years or centuries. In such collections, we may want to assume that the topics change over time. One approach to this problem is the dynamic topic model, a model that respects the ordering of the documents and gives a richer posterior topical structure than LDA.
  • The third assumption about LDA is that the number of topics is assumed known and fixed (see the sketch after this list).
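
A small sketch, assuming scikit-learn, of fitting LDA on word counts; the first and third assumptions above are visible in the code: the input is a bag-of-words count matrix (word order discarded) and n_components, the number of topics, must be chosen in advance.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["gene dna cell dna", "team game ball team", "cell gene game"]  # toy corpus
counts = CountVectorizer().fit_transform(docs)      # bag-of-words count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # topics fixed at 2
doc_topic = lda.fit_transform(counts)               # per-document topic distributions
print(doc_topic.round(2))
```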


Other

Key points

  • To build an effective machine learning system, we first transform useful information on raw data into internal representations such as feature vectors.
  • Conventional machine learning systems adopt careful feature engineering as preprocessing to build feature representations from raw data.
  • The distributional hypothesis, which states that linguistic objects with similar distributions have similar meanings, is the basis of distributed word representation learning (a toy illustration follows this list).
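
A toy illustration (numpy; the three-sentence corpus is made up) of the distributional hypothesis: words that occur in similar contexts end up with similar co-occurrence vectors.

```python
import numpy as np

corpus = ["the cat drinks milk", "the dog drinks milk", "the car needs fuel"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within each sentence.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for w in sent:
        for c in sent:
            if w != c:
                cooc[index[w], index[c]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

print(cosine(cooc[index["cat"]], cooc[index["dog"]]))  # 1.0: identical contexts
print(cosine(cooc[index["cat"]], cooc[index["car"]]))  # ~0.33: different contexts
```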

Chapter 6: Sememe Knowledge Representation

  • For example, the meaning of man can be considered as the combination of the meanings of the sememes human, male, and adult (a toy sketch of this idea follows below).
  • WordNet is a large lexical database for the English language; HowNet annotates both Chinese and English words with sememes.
An example of a word annotated with sememes in HowNet
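
A hypothetical sketch (hand-written sememe sets, not HowNet's actual annotations or API) of the idea that a word's meaning is the combination of its sememes, so word similarity can be estimated from sememe overlap:

```python
# Toy sememe annotations; real annotations come from HowNet.
sememes = {
    "man":   {"human", "male", "adult"},
    "woman": {"human", "female", "adult"},
    "boy":   {"human", "male", "child"},
}

def sememe_jaccard(w1, w2):
    # Jaccard overlap of the two words' sememe sets.
    a, b = sememes[w1], sememes[w2]
    return len(a & b) / len(a | b)

print(sememe_jaccard("man", "woman"))  # 0.5 -- share "human" and "adult"
print(sememe_jaccard("man", "boy"))    # 0.5 -- share "human" and "male"
```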
