CS224N Notes 01: word2vec

CS224n: Natural Language Processing with Deep Learning
Lecture Notes: Part 1
Word Vectors 1: Introduction, SVD, and Word2Vec
Authors: Francois Chaubard et al.

1. Introduction to Natural Language Processing

We begin with a general discussion of what NLP is.

1.1 What is so special about NLP?

What is so special about human (natural) language? Human language is a system specifically constructed to convey meaning, and it is not produced by a physical manifestation of any kind. In that way, it is very different from vision or any other machine learning task.
Most words are just symbols for an extra-linguistic entity: the word is a signifier that maps to a signified idea or thing.
For instance, the word “rocket” refers to the concept of a rocket, and by extension can designate an instance of a rocket. There are some exceptions, when we use words and letters for expressive signaling, as in “Whooompaa”. On top of this, the symbols of language can be encoded in several modalities: voice, gesture, writing, etc., that are transmitted via continuous signals to the brain, which itself appears to encode things in a continuous manner.

1.2 Example of tasks

There are several different levels of tasks in NLP, from speech processing to semantic interpretation and discourse processing. The goal of NLP is to design algorithms that allow computers to “understand” natural language in order to perform some task. Example tasks come in varying levels of difficulty.
Easy:
- Spell Checking
- Keyword Search
- Finding Synonyms
Medium:
- Parsing information from websites, documents, etc.
Hard:
- Machine Translation
- Semantic Analysis (What is the meaning of a query statement?)
- Coreference (e.g. What does “he” or “it” refer to given a document?)
- Question Answering

1.3 How to represent words?

The first and arguably most important common denominator across all NLP tasks is how we represent words as input to any of our models. Much of the earlier NLP work, which we will not cover, treats words as atomic symbols. To perform well on most NLP tasks we first need to have some notion of similarity and difference between words. With word vectors, we can quite easily encode this ability in the vectors themselves.

2. Word Vectors

There are an estimated 13 million tokens in the English language, but are they all completely unrelated? Feline to cat, hotel to motel? We think not. Thus, we want to encode word tokens each into some vector that represents a point in some sort of N-dimensional “word” space (such that $N \ll$ 13 million) that is sufficient to encode all the semantics of our language. Each dimension would encode some meaning that we transfer using speech. For instance, semantic dimensions might indicate tense (past vs. present vs. future), count (singular vs. plural), and gender (masculine vs. feminine).
So let’s dive into our first word vector, arguably the simplest: the one-hot vector. Represent every word as an $\mathbb{R}^{|V|\times 1}$ vector with all 0s and one 1 at the index of that word in the sorted English vocabulary. In this notation, $|V|$ is the size of our vocabulary. Word vectors in this type of encoding would appear as follows:

$$w^{\text{aardvark}} = \begin{bmatrix}1\\0\\0\\\vdots\\0\end{bmatrix},\quad w^{\text{a}} = \begin{bmatrix}0\\1\\0\\\vdots\\0\end{bmatrix},\quad \cdots,\quad w^{\text{zebra}} = \begin{bmatrix}0\\0\\0\\\vdots\\1\end{bmatrix}$$

We represent each word as a completely independent entity. As we previously discussed, this word representation does not directly give us any notion of similarity. So maybe we can reduce the size of this space from $\mathbb{R}^{|V|}$ to something smaller and thus find a subspace that encodes the relationships between words.
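To make the notation concrete, here is a minimal numpy sketch, using a hypothetical five-word vocabulary in place of the full sorted English vocabulary, showing one-hot vectors and why they carry no notion of similarity:

```python
import numpy as np

# Toy sorted vocabulary; |V| = 5 here instead of ~13 million.
vocab = ["aardvark", "cat", "hotel", "motel", "zebra"]
word_to_index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Return the R^{|V|} one-hot vector for `word`."""
    v = np.zeros(len(vocab))
    v[word_to_index[word]] = 1.0
    return v

# The dot product of any two distinct one-hot vectors is 0,
# so "hotel" and "motel" look exactly as unrelated as "hotel" and "zebra".
print(one_hot("hotel") @ one_hot("motel"))  # 0.0
print(one_hot("hotel") @ one_hot("zebra"))  # 0.0
```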

3. SVD Based Methods

For this class of methods to find word embeddings, we first loop over a massive dataset and accumulate word co-occurrence counts in some form of a matrix $X$, and then perform Singular Value Decomposition on $X$ to get a $USV^T$ decomposition. We then use the rows of $U$ as the word embeddings for all words in our dictionary. Let us discuss a few choices of $X$.

3.1 Word-Document Matrix

As our first attempt, we make the bold conjecture that words that are related will often appear in the same documents. For instance, “banks”, “bonds”, “stocks”, “money”, etc. are probably likely to appear together. But “banks”, “octopus”, “banana”, and “hockey” would probably not consistently appear together. We use this fact to build a word-document matrix $X$ in the following manner: loop over billions of documents, and for each time word $i$ appears in document $j$, we add one to entry $X_{ij}$. This is obviously a very large matrix and it scales with the number of documents. So perhaps we can try something better.

3.2 Window based Co-occurrence Matrix

The same kind of logic applies here; however, the matrix $X$ stores co-occurrences of words, thereby becoming an affinity matrix. In this method we count the number of times each word appears inside a window of a particular size around the word of interest. We calculate this count for all the words in the corpus. We display an example below. Let our corpus contain just three sentences and the window size be 1.

  1. I enjoy flying.
  2. I like NLP.
  3. I like deep learning.
The resulting counts matrix will then be:

$$X = \begin{array}{c|cccccccc} & \text{I} & \text{like} & \text{enjoy} & \text{deep} & \text{learning} & \text{NLP} & \text{flying} & \text{.} \\ \hline \text{I} & 0 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ \text{like} & 2 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ \text{enjoy} & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \text{deep} & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ \text{learning} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \text{NLP} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ \text{flying} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ \text{.} & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \end{array}$$
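A short sketch of how such a matrix could be built, assuming whitespace tokenization with the period kept as its own token (the row/column order here follows first appearance rather than the table above):

```python
import numpy as np

corpus = ["I enjoy flying .", "I like NLP .", "I like deep learning ."]
window = 1

# Build the vocabulary in order of first appearance.
tokens = [sent.split() for sent in corpus]
vocab = []
for sent in tokens:
    for tok in sent:
        if tok not in vocab:
            vocab.append(tok)
index = {w: i for i, w in enumerate(vocab)}

# Accumulate symmetric co-occurrence counts X_ij within the window.
X = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, center in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                X[index[center], index[sent[j]]] += 1

print(vocab)
print(X)
```
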
3.3 Applying SVD to the Co-occurrence Matrix

We now perform SVD on $X$, observe the singular values (the diagonal entries of the resulting $S$ matrix), and cut them off at some index $k$ based on the desired percentage of variance captured:

$$\frac{\sum_{i=1}^{k}\sigma_i}{\sum_{i=1}^{|V|}\sigma_i}$$

We then take the submatrix $U_{1:|V|,\,1:k}$ to be our word embedding matrix. This gives us a $k$-dimensional representation of every word in the vocabulary.
Applying SVD to $X$:

$$X = \underbrace{\begin{bmatrix} | & | & \\ u_1 & u_2 & \cdots \\ | & | & \end{bmatrix}}_{U}\ \underbrace{\begin{bmatrix} \sigma_1 & 0 & \cdots \\ 0 & \sigma_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}}_{S}\ \underbrace{\begin{bmatrix} -\ v_1\ - \\ -\ v_2\ - \\ \vdots \end{bmatrix}}_{V^T}$$

Reducing dimensionality by selecting the first $k$ singular vectors:

$$\hat X = \begin{bmatrix} | & & | \\ u_1 & \cdots & u_k \\ | & & | \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_k \end{bmatrix} \begin{bmatrix} -\ v_1\ - \\ \vdots \\ -\ v_k\ - \end{bmatrix}$$

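A minimal numpy sketch of this truncated-SVD recipe, using a tiny hypothetical co-occurrence matrix (the $X$ built in the earlier sketch would work the same way):

```python
import numpy as np

# Toy symmetric co-occurrence matrix; any |V| x |V| count matrix works here.
X = np.array([[0, 2, 1],
              [2, 0, 1],
              [1, 1, 0]], dtype=float)

# Full SVD: X = U S V^T.
U, S, Vt = np.linalg.svd(X)

# Choose k so the retained singular values capture enough "variance".
k = 2
captured = S[:k].sum() / S.sum()
print(f"first {k} singular values capture {captured:.1%} of the total")

# The first k columns of U give a k-dimensional embedding for each word.
word_vectors = U[:, :k]
print(word_vectors)
```
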
Both of these methods give us word vectors that are more than sufficient to encode semantic and syntactic information, but they are associated with many other problems:

  • The dimensions of the matrix change very often (new words are added and the corpus changes in size).
  • The matrix is extremely sparse since most words do not co-occur.
  • The matrix is very high dimensional in general.
  • Quadratic cost to train (i.e. to perform SVD).
  • Requires the incorporation of some hacks on $X$ to account for the drastic imbalance in word frequency.

Some solutions exist to resolve some of the issues discussed above:

  • Ignore function words such as “the”, “he”, “has”, etc.
  • Apply a ramp window, i.e. weight the co-occurrence counts based on the distance between the words in the document.
  • Use Pearson correlation and set negative counts to 0 instead of using just raw counts.

As we will see in the next section, iteration-based methods solve many of these issues in a far more elegant manner.

4. Iteration Based Methods – Word2vec

Let’s step back and try a new approach. Instead of computing and storing global information about some huge dataset (which might be billions of sentences), we can try to create a model that will be able to learn one iteration at a time and eventually be able to encode the probability of a word given its context.
The idea is to design a model whose parameters are the word vectors. Then, train the model on a certain objective. At every iteration we run our model, evaluate the errors, and follow an update rule that penalizes the model parameters that caused the error. Thus, we learn our word vectors. This idea is a very old one, dating back to 1986. We call this method “backpropagating” the errors. The simpler the model and the task, the faster it will be to train.
Several approaches have been tested. Collobert et al. designed models for NLP whose first step is to transform each word into a vector. For each particular task they train not only the model’s parameters but also the word vectors, and achieve great performance, while computing good word vectors!
In this class, we will present a simpler, more recent, probabilistic method by Mikolov et al.: word2vec. Word2vec is a software package that actually includes:

  • 2 algorithms: continuous bag-of-words (CBOW) and skip-gram. CBOW aims to predict a center word from the surrounding context in terms of word vectors. Skip-gram does the opposite, and predicts the distribution (probability) of context words given a center word.
  • 2 training methods: negative sampling and hierarchical softmax. Negative sampling defines an objective by sampling negative examples, while hierarchical softmax defines an objective using an efficient tree structure to compute probabilities for all words in the vocabulary.

4.1 Language Models

First, we need to create a model that assigns a probability to a sequence of tokens. Let’s start with an example:

The cat jumped over the puddle.

A good language model will give this sentence a high probability because it is a completely valid sentence, syntactically and semantically. Similarly, the sentence “stock boil fish is toy” should have a very low probability because it makes no sense. Mathematically, we can call this probability on any given sequence of $n$ words:

$$P(w_1, w_2, \cdots, w_n)$$

We can take the unary language model approach and break apart this probability by assuming the word occurrences are completely independent:

$$P(w_1, w_2, \cdots, w_n) = \prod_{i=1}^{n} P(w_i)$$

However, we know this is a bit ludicrous because the next word is highly contingent upon the previous sequence of words, and the silly sentence example might actually score highly. So perhaps we should let the probability of the sequence depend on the pairwise probability of a word in the sequence and the word next to it. We call this the bigram model and represent it as:

$$P(w_1, w_2, \cdots, w_n) = \prod_{i=2}^{n} P(w_i \mid w_{i-1})$$

Again, this is certainly a bit naïve since we are only concerning ourselves with pairs of neighboring words rather than evaluating a whole sentence, but as we will see, this representation gets us pretty far along. Note that with a word-word matrix with a context of size 1, we can basically learn these pairwise probabilities. But again, this would require computing and storing global information about a massive dataset.
Now that we understand how we can think about a sequence of tokens having a probability, let us observe some example models that could learn these probabilities.
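As a toy illustration of these factorizations, the sketch below estimates unigram and bigram probabilities from raw counts on a hypothetical two-sentence corpus and scores a sentence under both models (no smoothing):

```python
from collections import Counter

corpus = [
    "the cat jumped over the puddle".split(),
    "the dog jumped over the cat".split(),
]

unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
total = sum(unigrams.values())

def p_unigram(w):
    return unigrams[w] / total

def p_bigram(w, prev):
    # Maximum-likelihood estimate of P(w | prev).
    return bigrams[(prev, w)] / unigrams[prev]

sent = "the cat jumped over the puddle".split()

p_uni = 1.0
for w in sent:
    p_uni *= p_unigram(w)

p_bi = p_unigram(sent[0])
for prev, w in zip(sent, sent[1:]):
    p_bi *= p_bigram(w, prev)

print(p_uni, p_bi)  # the bigram model scores the valid sentence much higher
```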

4.2 Continuous Bag of Words Model (CBOW)

One approach is to treat {“The”, “cat”, “over”, “the”, “puddle”} as a context and, from these words, be able to predict or generate the center word “jumped”. This type of model we call a Continuous Bag of Words (CBOW) Model.
Let’s discuss the CBOW model above in greater detail. First, we set up our known parameters. Let the known parameters in our model be the sentence represented by one-hot word vectors. The input one-hot vectors, or context, we will represent with $x^{(c)}$, and the output as $y^{(c)}$. In the CBOW model, since we only have one output, we just call this $y$, which is the one-hot vector of the known center word. Now let’s define the unknowns in our model.
We create two matrices, $V\in\mathbb{R}^{n\times|V|}$ and $U\in\mathbb{R}^{|V|\times n}$, where $n$ is an arbitrary size which defines the size of our embedding space. $V$ is the input word matrix such that the $i$-th column of $V$ is the $n$-dimensional embedded vector for word $w_i$ when it is an input to this model. We denote this $n\times 1$ vector as $v_i$. Similarly, $U$ is the output word matrix. The $j$-th row of $U$ is an $n$-dimensional embedded vector for word $w_j$ when it is an output of the model. We denote this row of $U$ as $u_j$. Note that we do in fact learn two vectors for every word $w_i$ (i.e. input word vector $v_i$ and output word vector $u_i$).
We break down the way this model works in these steps:

  1. Generate the one-hot word vectors for the input context of size $m$: $(x^{(c-m)},\dots,x^{(c-1)},x^{(c+1)},\dots,x^{(c+m)}\in\mathbb{R}^{|V|})$.
  2. Get our embedded word vectors for the context: $v_{c-m}=Vx^{(c-m)},\ v_{c-m+1}=Vx^{(c-m+1)},\dots,\ v_{c+m}=Vx^{(c+m)}\in\mathbb{R}^{n}$.
  3. Average these vectors to get $\hat v=\frac{v_{c-m}+v_{c-m+1}+\cdots+v_{c+m}}{2m}\in\mathbb{R}^{n}$.
  4. Generate a score vector $z=U\hat v\in\mathbb{R}^{|V|}$. As the dot product of similar vectors is higher, it will push similar words close to each other in order to achieve a high score.
  5. Turn the scores into probabilities $\hat y=\operatorname{softmax}(z)\in\mathbb{R}^{|V|}$.
  6. We desire the probabilities we generate, $\hat y\in\mathbb{R}^{|V|}$, to match the true probabilities, $y\in\mathbb{R}^{|V|}$, which also happens to be the one-hot vector of the actual center word.

So now that we have an understanding of how our model would work if we had $V$ and $U$, how would we learn these two matrices? Well, we need to create an objective function. Very often when we are trying to learn a probability from some true probability, we look to information theory to give us a measure of distance between two distributions. Here, we use a popular choice of distance/loss measure, the cross entropy $H(\hat y, y)$.
The intuition for the use of cross entropy in the discrete case can be derived from the formulation of the loss function:

$$H(\hat y, y) = -\sum_{j=1}^{|V|} y_j \log(\hat y_j)$$

Let us concern ourselves with the case at hand, where $y$ is a one-hot vector. Thus we know that the above loss simplifies to:

$$H(\hat y, y) = -y_c \log(\hat y_c)$$

In this formulation, $c$ is the index where the correct word’s one-hot vector is 1. We can now consider the case where our prediction was perfect and thus $\hat y_c = 1$. We can then calculate $H(\hat y, y) = -1\log(1) = 0$. Thus, for a perfect prediction, we face no penalty or loss. Now let us consider the opposite case where our prediction was very bad and thus $\hat y_c = 0.01$. As before, we calculate our loss to be $H(\hat y, y) = -1\log(0.01) \approx 4.605$. We can thus see that for probability distributions, cross entropy provides us with a good measure of distance. We thus formulate our optimization objective as:

$$\begin{aligned} \text{minimize } J&=-\log P(w_c \mid w_{c-m},\dots,w_{c-1},w_{c+1},\dots,w_{c+m})\\ &=-\log P(u_c \mid \hat v)\\ &=-\log \frac{\exp(u_c^T\hat v)}{\sum_{j=1}^{|V|}\exp(u_j^T\hat v)}\\ &=-u_c^T\hat v + \log \sum_{j=1}^{|V|}\exp(u_j^T\hat v)\end{aligned}$$

We use stochastic gradient descent to update all relevant word vectors $u_c$ and $v_j$.
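A minimal numpy sketch of one CBOW forward pass and its cross-entropy loss, with a hypothetical tiny vocabulary and randomly initialized $V$ and $U$ (a real implementation would then backpropagate into both matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "jumped", "over", "puddle"]
V_size, n = len(vocab), 4                 # |V| and embedding dimension
V = rng.normal(size=(n, V_size))          # input word matrix (columns = v_i)
U = rng.normal(size=(V_size, n))          # output word matrix (rows = u_j)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Context {"the", "cat", "over", "puddle"}, center word "jumped".
context_ids = [0, 1, 3, 4]
center_id = 2

v_hat = V[:, context_ids].mean(axis=1)    # steps 2-3: average context embeddings
z = U @ v_hat                             # step 4: scores
y_hat = softmax(z)                        # step 5: probabilities
loss = -np.log(y_hat[center_id])          # cross entropy against the one-hot y
print(loss)
```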

4.3 Skip-gram Model

Another approach is to create a model such that given the center word “jumped”, the model will be able to predict or generate the surrounding words “The”, “cat”, “over”, “the”, “puddle”. Here we call the word “jumped” the context. We call this type of model a Skip-Gram model.
Let’s discuss the Skip-Gram model above. The setup is largely the same, but we essentially swap our $x$ and $y$, i.e. what was $x$ in the CBOW model is now $y$ and vice-versa. The input one-hot vector (the center word) we will represent with an $x$ (since there is only one), and the output vectors as $y^{(j)}$. We define $V$ and $U$ the same as in CBOW.
We break down the way this model works in these steps:

  1. We generate our one-hot input vector $x\in\mathbb{R}^{|V|}$ of the center word.
  2. We get our embedded word vector for the center word $v_c=Vx\in\mathbb{R}^{n}$.
  3. Generate a score vector $z=Uv_c$.
  4. Turn the score vector into probabilities, $\hat y=\operatorname{softmax}(z)$. Note that $\hat y_{c-m},\dots,\hat y_{c-1},\hat y_{c+1},\dots,\hat y_{c+m}$ are the probabilities of observing each context word.
  5. We desire our generated probability vector to match the true probabilities, which are $y^{(c-m)},\dots,y^{(c-1)},y^{(c+1)},\dots,y^{(c+m)}$, the one-hot vectors of the actual output.
As in CBOW, we need to generate an objective function for us to evaluate the model. A key difference here is that we invoke a Naïve Bayes assumption to break out the probabilities. If you have not seen this before, then simply put, it is a strong conditional independence assumption. In other words, given the center word, all output words are completely independent.

$$\begin{aligned} \text{minimize } J&=-\log P(w_{c-m},\dots,w_{c-1},w_{c+1},\dots,w_{c+m}\mid w_c)\\ &=-\log\prod_{j=0,\,j\neq m}^{2m}P(w_{c-m+j}\mid w_c)\\ &=-\log\prod_{j=0,\,j\neq m}^{2m}P(u_{c-m+j}\mid v_c)\\ &=-\log\prod_{j=0,\,j\neq m}^{2m}\frac{\exp(u_{c-m+j}^Tv_c)}{\sum_{k=1}^{|V|}\exp(u_k^Tv_c)}\\ &=-\sum_{j=0,\,j\neq m}^{2m}u_{c-m+j}^Tv_c+2m\log\sum_{k=1}^{|V|}\exp(u_k^Tv_c)\end{aligned}$$

With this objective function, we can compute the gradients with respect to the unknown parameters and at each iteration update them via stochastic gradient descent.
Note that

$$J=-\sum_{j=0,\,j\neq m}^{2m}\log P(u_{c-m+j}\mid v_c)=\sum_{j=0,\,j\neq m}^{2m}H(\hat y,y_{c-m+j})$$

where $H(\hat y,y_{c-m+j})$ is the cross entropy between the probability vector $\hat y$ and the one-hot vector $y_{c-m+j}$.
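A matching numpy sketch of the skip-gram loss for one (center word, context window) pair, again with hypothetical randomly initialized $V$ and $U$:

```python
import numpy as np

rng = np.random.default_rng(1)
V_size, n = 5, 4                          # toy vocabulary size and embedding dim
V = rng.normal(size=(n, V_size))          # input word matrix
U = rng.normal(size=(V_size, n))          # output word matrix

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

center_id = 2                             # e.g. "jumped"
context_ids = [0, 1, 3, 4]                # the 2m surrounding words

v_c = V[:, center_id]                     # step 2: center embedding
y_hat = softmax(U @ v_c)                  # steps 3-4: scores -> probabilities

# Naive Bayes assumption: total loss is the sum of per-context cross entropies.
loss = -np.log(y_hat[context_ids]).sum()
print(loss)
```
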
4.4 Negative Sampling

Let’s take a second to look at the objective function. Note that the summation over $|V|$ is computationally huge! Any update we do or evaluation of the objective function would take $O(|V|)$ time, which if we recall is in the millions. A simple idea is that we could instead just approximate it.
For every training step, instead of looping over the entire vocabulary, we can just sample several negative examples! We “sample” from a noise distribution $P_n(w)$ whose probabilities match the ordering of the frequencies of the vocabulary. To augment our formulation of the problem to incorporate negative sampling, all we need to do is update the:

  • Objective function
  • Gradients
  • Update rules

While negative sampling is based on the Skip-Gram model, it in fact optimizes a different objective. Consider a pair $(w,c)$ of word and context. Did this pair come from the training data? Let’s denote by $P(D=1\mid w,c)$ the probability that $(w,c)$ came from the corpus data. Correspondingly, $P(D=0\mid w,c)$ will be the probability that $(w,c)$ did not come from the corpus data. First, let’s model $P(D=1\mid w,c)$ with the sigmoid function:

$$P(D=1\mid w,c,\theta)=\sigma(u_w^Tv_c)=\frac{1}{1+e^{-u_w^Tv_c}}$$

Now we build a new objective function that tries to maximize the probability of a word and context being in the corpus data if it indeed is, and to maximize the probability of a word and context not being in the corpus data if it indeed is not. We take a simple maximum likelihood approach to these two probabilities. Here we take $\theta$ to be the parameters of the model, and in our case they are $V$ and $U$.
$$\begin{aligned}\theta&=\operatorname*{argmax}_{\theta}\prod_{(w,c)\in D}P(D=1\mid w,c,\theta)\prod_{(w,c)\in\hat D}P(D=0\mid w,c,\theta)\\&=\operatorname*{argmax}_{\theta}\sum_{(w,c)\in D}\log\frac{1}{1+\exp(-u_w^Tv_c)}+\sum_{(w,c)\in\hat D}\log\frac{1}{1+\exp(u_w^Tv_c)}\end{aligned}$$
Note that maximizing the likelihood is the same as minimizing the negative log likelihood

$$J=-\sum_{(w,c)\in D}\log\frac{1}{1+\exp(-u_w^Tv_c)}-\sum_{(w,c)\in\hat D}\log\frac{1}{1+\exp(u_w^Tv_c)}$$

Note that $\hat D$ is a “false” or “negative” corpus, where we would have sentences like “stock boil fish is toy”: unnatural sentences that should get a low probability of ever occurring. We can generate $\hat D$ on the fly by randomly sampling negatives from the word bank.
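A small numpy sketch of this negative log likelihood for a handful of hypothetical (word, context) pairs, with the positive pairs standing in for $D$ and the negative pairs for $\hat D$:

```python
import numpy as np

def log_sigmoid(x):
    # Numerically stable log(1 / (1 + exp(-x))).
    return -np.logaddexp(0.0, -x)

rng = np.random.default_rng(0)
n = 4                                     # embedding dimension

# Hypothetical embeddings for a few (w, c) pairs.
pos_u, pos_v = rng.normal(size=(3, n)), rng.normal(size=(3, n))  # pairs from D
neg_u, neg_v = rng.normal(size=(3, n)), rng.normal(size=(3, n))  # pairs from D-hat

J = -log_sigmoid(np.sum(pos_u * pos_v, axis=1)).sum() \
    - log_sigmoid(-np.sum(neg_u * neg_v, axis=1)).sum()
print(J)
```
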
For skip-gram, our new objective function for observing the context word $c-m+j$ given the center word $c$ would be

$$-\log\sigma(u_{c-m+j}^T\cdot v_c)-\sum_{k=1}^{K}\log\sigma(-\tilde u_k^T\cdot v_c)$$

For CBOW, our new objective function for observing the center word $u_c$ given the context vector $\hat v=\frac{v_{c-m}+v_{c-m+1}+\cdots+v_{c+m}}{2m}$ would be

$$-\log\sigma(u_c^T\cdot\hat v)-\sum_{k=1}^{K}\log\sigma(-\tilde u_k^T\cdot\hat v)$$

In the above formulation, $\{\tilde u_k\mid k=1,\dots,K\}$ are sampled from $P_n(w)$. Let’s discuss what $P_n(w)$ should be. While there is much discussion of what makes the best approximation, what seems to work best is the unigram model raised to the power of 3/4. Why 3/4? Here is an example that might help gain some intuition:
is: $0.9^{3/4}=0.92$
Constitution: $0.09^{3/4}=0.16$
bombastic: $0.01^{3/4}=0.032$
“Bombastic” is now 3x more likely to be sampled, while “is” only went up marginally.
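A sketch of the skip-gram negative-sampling loss, with $K$ negatives drawn from a hypothetical unigram distribution raised to the 3/4 power:

```python
import numpy as np

rng = np.random.default_rng(0)
V_size, n, K = 10, 4, 5                   # vocab size, embedding dim, #negatives
U = rng.normal(size=(V_size, n))          # output vectors u_j (rows)
V = rng.normal(size=(n, V_size))          # input vectors v_j (columns)

# Noise distribution P_n(w): unigram counts raised to 3/4, renormalized.
unigram_counts = rng.integers(1, 100, size=V_size).astype(float)
P_n = unigram_counts ** 0.75
P_n /= P_n.sum()

def log_sigmoid(x):
    return -np.logaddexp(0.0, -x)

center_id, context_id = 2, 3
v_c = V[:, center_id]
neg_ids = rng.choice(V_size, size=K, p=P_n)   # K negative samples from P_n(w)

loss = -log_sigmoid(U[context_id] @ v_c) \
       - log_sigmoid(-(U[neg_ids] @ v_c)).sum()
print(loss)
```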

4.5 Hierarchical Softmax

Mikolov et al. also present hierarchical softmax as a much more efficient alternative to the normal softmax. In practice, hierarchical softmax tends to be better for infrequent words, while negative sampling works better for frequent words and lower-dimensional vectors.
Hierarchical softmax uses a binary tree to represent all words in the vocabulary. Each leaf of the tree is a word, and there is a unique path from the root to each leaf. In this model, there is no output representation for words. Instead, each node of the graph (except the root and the leaves) is associated with a vector that the model is going to learn.
In this model, the probability of a word $w$ given a vector $w_i$, $P(w\mid w_i)$, is equal to the probability of a random walk starting at the root and ending at the leaf node corresponding to $w$. The main advantage of computing the probability this way is that the cost is only $O(\log(|V|))$, corresponding to the length of the path.
Let’s introduce some notation. Let $L(w)$ be the number of nodes in the path from the root to the leaf $w$. For instance, $L(w_2)$ in Figure 4 is 3. Let’s write $n(w,i)$ as the $i$-th node on this path with associated vector $v_{n(w,i)}$. So $n(w,1)$ is the root, while $n(w,L(w))$ is the father of $w$. Now for each inner node $n$, we arbitrarily choose one of its children and call it $ch(n)$ (e.g. always the left node). Then, we can compute the probability as

$$P(w\mid w_i)=\prod_{j=1}^{L(w)-1}\sigma\big([\,n(w,j+1)=ch(n(w,j))\,]\cdot v_{n(w,j)}^Tv_{w_i}\big)$$

where

$$[x]=\begin{cases}1&\text{if }x\text{ is true}\\-1&\text{otherwise}\end{cases}$$

and $\sigma(\cdot)$ is the sigmoid function.
This formula is fairly dense, so let’s examine it more closely. First, we compute a product of terms based on the shape of the path from the root $(n(w,1))$ to the leaf $(w)$. If we assume $ch(n)$ is always the left child of $n$, then if we sum the probabilities of going to the left and to the right node, you can check that for any value of $v_n^Tv_{w_i}$,

$$\sigma(v_n^Tv_{w_i})+\sigma(-v_n^Tv_{w_i})=1$$

The normalization also ensures that $\sum_{w=1}^{|V|}P(w\mid w_i)=1$, just as in the original softmax.
Finally, we compare the similarity of our input vector $v_{w_i}$ to each inner node vector $v_{n(w,j)}$ using a dot product. Let’s run through an example. Taking $w_2$ in Figure 4, we must take two left edges and then a right edge to reach $w_2$ from the root, so
$$\begin{aligned}P(w_2\mid w_i)&=p(n(w_2,1),\text{left})\cdot p(n(w_2,2),\text{left})\cdot p(n(w_2,3),\text{right})\\&=\sigma(v_{n(w_2,1)}^Tv_{w_i})\cdot\sigma(v_{n(w_2,2)}^Tv_{w_i})\cdot\sigma(-v_{n(w_2,3)}^Tv_{w_i})\end{aligned}$$
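A sketch of this path-probability computation for a hypothetical three-node path, where the direction signs encode whether each step goes to the designated child $ch(n)$ (+1) or not (−1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n = 4                                     # embedding dimension

# Inner-node vectors on the path n(w,1), ..., n(w,L(w)-1) from the root down
# to the parent of leaf w, plus the input word vector v_{w_i}.
path_vectors = rng.normal(size=(3, n))
v_wi = rng.normal(size=n)

# +1 where the path goes to ch(n(w,j)) (say, the left child), -1 otherwise.
# For w_2 in Figure 4 this would be: left, left, right.
directions = np.array([1.0, 1.0, -1.0])

P_w_given_wi = np.prod(sigmoid(directions * (path_vectors @ v_wi)))
print(P_w_given_wi)
```
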
To train the model, our goal is still to minimize the negative log likelihood $-\log P(w\mid w_i)$. But instead of updating output vectors per word, we update the vectors of the nodes in the binary tree that are on the path from the root to the leaf node.
The speed of this method is determined by the way in which the binary tree is constructed and words are assigned to leaf nodes. Mikolov et al. use a binary Huffman tree, which assigns frequent words shorter paths in the tree.
