Computational Efficiency
- The output layer of a sequence generation model produces the conditional probability of every word in the vocabulary, which requires a Softmax normalization; when the vocabulary is large, this computation is expensive
Hierarchical Softmax (H-Softmax)
- First, consider organizing the vocabulary with a two-level tree: partition the words into $K$ groups, each word belonging to exactly one group, so every group has $\frac{|\mathcal{V}|}{K}$ words. Let $c(w)$ denote the group containing word $w$; then
$$
\begin{aligned}
p(w \mid \tilde{h}) &= p(w, c(w) \mid \tilde{h}) \\
&= p(w \mid c(w), \tilde{h})\, p(c(w) \mid \tilde{h})
\end{aligned}
$$
- The probability of a word thus factorizes into the product of two probabilities, $p(w \mid c(w), \tilde{h})$ and $p(c(w) \mid \tilde{h})$, each of which can be estimated by a neural network. Computing these two Softmax functions requires only $K$ and $\frac{|\mathcal{V}|}{K}$ summations respectively, which greatly speeds up the Softmax computation
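As a sketch of the two-level factorization (all sizes, parameter matrices, and the group assignment below are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, d = 8, 4, 5              # |V| words, K groups, hidden size (toy values)
group_of = np.arange(V) % K    # c(w): each word belongs to exactly one group

W_group = rng.normal(size=(K, d))  # parameters scoring p(c(w) | h)
W_word = rng.normal(size=(V, d))   # parameters scoring p(w | c(w), h)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def two_level_prob(w, h):
    c = group_of[w]
    p_group = softmax(W_group @ h)[c]          # K-way Softmax over groups
    members = np.flatnonzero(group_of == c)    # the |V|/K words in group c
    p_in_group = softmax(W_word[members] @ h)  # Softmax within the group only
    p_word = p_in_group[list(members).index(w)]
    return p_group * p_word                    # p(c(w)|h) * p(w|c(w),h)

h = rng.normal(size=d)
probs = np.array([two_level_prob(w, h) for w in range(V)])
print(round(probs.sum(), 6))   # → 1.0 (a valid distribution over the vocabulary)
```

Each lookup touches only $K + \frac{|\mathcal{V}|}{K}$ scores instead of $|\mathcal{V}|$, which is where the speedup comes from.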
- Deeper tree structures can also be used to organize the vocabulary. For example, a binary tree can be used in which the leaf nodes represent the words and the internal nodes represent categories at different levels
- Label every left edge of the binary tree with 0 and every right edge with 1. Each word can then be encoded by the edge labels on the path from the root to its leaf. In the four-word example tree, the encodings are
$$
v_{1}=00, \quad v_{2}=01, \quad v_{3}=10, \quad v_{4}=11
$$
- Suppose the path from the root to the leaf of word $v$ has length $M$; its encoding is a bit vector $[b_1, \cdots, b_M]^{\mathsf{T}}$, and the conditional probability of $v$ is
$$
\begin{aligned}
P(v \mid \tilde{h}) &= p(b_{1}, \cdots, b_{M} \mid \tilde{h}) \\
&= \prod_{m=1}^{M} p(b_{m} \mid b_{1}, \cdots, b_{m-1}, \tilde{h}) \\
&= \prod_{m=1}^{M} p(b_{m} \mid b_{m-1}, \tilde{h})
\end{aligned}
$$
- Since $b_m \in \{0,1\}$ is a binary variable, $p(b_{m} \mid b_{m-1}, \tilde{h})$ can be treated as a binary classification problem and predicted with logistic regression
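A minimal sketch of this path factorization (the per-node logistic-regression parameters and the four-word tree are illustrative assumptions); each bit is predicted by a logistic regression at the internal node reached by the prefix of earlier bits:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5  # hidden size (toy value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Four-word vocabulary encoded as in the example: v1=00, v2=01, v3=10, v4=11.
codes = {"v1": (0, 0), "v2": (0, 1), "v3": (1, 0), "v4": (1, 1)}

# One logistic-regression parameter vector per internal node, indexed by the
# bit prefix that leads to it: the root () and its two children (0,), (1,).
node_params = {(): rng.normal(size=d),
               (0,): rng.normal(size=d),
               (1,): rng.normal(size=d)}

def word_prob(code, h):
    p, prefix = 1.0, ()
    for b in code:
        p_one = sigmoid(node_params[prefix] @ h)  # p(b_m = 1 | prefix, h)
        p *= p_one if b == 1 else 1.0 - p_one
        prefix += (b,)
    return p

h = rng.normal(size=d)
total = sum(word_prob(c, h) for c in codes.values())
print(round(total, 6))   # → 1.0: the leaf probabilities form a distribution
```

Computing one word's probability costs $M = \log_2 |\mathcal{V}|$ sigmoid evaluations rather than a $|\mathcal{V}|$-way Softmax.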
- The tree and its encoding can be constructed using resources such as WordNet or Huffman coding
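For instance, a Huffman tree assigns frequent words shorter codes, and hence shorter paths in the hierarchy. A small sketch with made-up word frequencies:

```python
import heapq
import itertools

# Toy word frequencies (illustrative assumptions, not from the text).
freqs = {"the": 50, "on": 20, "cat": 10, "sat": 8, "mat": 5}

tie = itertools.count()  # tie-breaker so heapq never compares the code dicts
heap = [(f, next(tie), {w: ""}) for w, f in freqs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    f1, _, left = heapq.heappop(heap)   # merge the two least-frequent subtrees
    f2, _, right = heapq.heappop(heap)
    merged = {w: "0" + c for w, c in left.items()}         # left edges -> 0
    merged.update({w: "1" + c for w, c in right.items()})  # right edges -> 1
    heapq.heappush(heap, (f1 + f2, next(tie), merged))

codes = heap[0][2]
print(codes)   # frequent words receive shorter codes ("the" gets the shortest)
```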
Importance Sampling
- Importance sampling can be used to approximate the gradient, avoiding the full Softmax computation
- The gradient of the objective with respect to $\theta$ is
$$
\begin{aligned}
\frac{\partial \log p_{\theta}(x_{t} \mid \tilde{h}_{t})}{\partial \theta}
&= \frac{\partial s(x_{t}, \tilde{h}_{t} ; \theta)}{\partial \theta}-\frac{\partial \log \left(\sum_{v} \exp \left(s(v, \tilde{h}_{t} ; \theta)\right)\right)}{\partial \theta} \\
&= \frac{\partial s(x_{t}, \tilde{h}_{t} ; \theta)}{\partial \theta}-\frac{1}{\sum_{v} \exp \left(s(v, \tilde{h}_{t} ; \theta)\right)} \frac{\partial \sum_{v} \exp \left(s(v, \tilde{h}_{t} ; \theta)\right)}{\partial \theta} \\
&= \frac{\partial s(x_{t}, \tilde{h}_{t} ; \theta)}{\partial \theta}-\sum_{v} \frac{\exp \left(s(v, \tilde{h}_{t} ; \theta)\right)}{\sum_{w} \exp \left(s(w, \tilde{h}_{t} ; \theta)\right)} \frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&= \frac{\partial s(x_{t}, \tilde{h}_{t} ; \theta)}{\partial \theta}-\sum_{v} p_{\theta}(v \mid \tilde{h}_{t}) \frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&= \frac{\partial s(x_{t}, \tilde{h}_{t} ; \theta)}{\partial \theta}-\mathbb{E}_{p_{\theta}(v \mid \tilde{h}_{t})}\left[\frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta}\right]
\end{aligned}
$$
- The expectation above can be approximated by sampling
- Importance sampling approximates an expectation under a distribution $p$ using samples from an easy-to-sample proposal distribution $q$
$$
\begin{aligned}
\mathbb{E}_{p_{\theta}(v \mid \tilde{h}_{t})}\left[\frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta}\right]
&= \sum_{v \in \mathcal{V}} p_{\theta}(v \mid \tilde{h}_{t}) \frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&= \sum_{v \in \mathcal{V}} q(v \mid \tilde{h}_{t}) \frac{p_{\theta}(v \mid \tilde{h}_{t})}{q(v \mid \tilde{h}_{t})} \frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&= \mathbb{E}_{q(v \mid \tilde{h}_{t})}\left[\frac{p_{\theta}(v \mid \tilde{h}_{t})}{q(v \mid \tilde{h}_{t})} \frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta}\right]
\end{aligned}
$$
- This converts an expectation under the original distribution $p_{\theta}(v \mid \tilde{h}_{t})$ into one under the proposal distribution $q(v \mid \tilde{h}_{t})$. The proposal should be as close as possible to the original distribution while being cheap to sample from; in practice, $q(v \mid \tilde{h}_{t})$ is often taken to be the distribution of an $n$-gram model
- Drawing $K$ independent samples $v_1, \cdots, v_K$ from $q(v \mid \tilde{h}_{t})$ gives the approximation
$$
\mathbb{E}_{p_{\theta}(v \mid \tilde{h}_{t})}\left[\frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta}\right] \approx \frac{1}{K} \sum_{k=1}^{K} \frac{p_{\theta}(v_{k} \mid \tilde{h}_{t})}{q(v_{k} \mid \tilde{h}_{t})} \frac{\partial s(v_{k}, \tilde{h}_{t} ; \theta)}{\partial \theta}
$$
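A generic numerical check of this estimator on a toy problem (the distributions and the stand-in "gradient" values below are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

V = 10
p = np.arange(1.0, V + 1); p /= p.sum()   # stand-in for p_theta(v | h)
q = np.full(V, 1.0 / V)                   # uniform, easy-to-sample proposal q
f = np.sin(np.arange(V))                  # stand-in for ds(v, h)/dtheta

exact = float(p @ f)                      # E_p[f], the quantity to estimate

K = 100_000
v = rng.choice(V, size=K, p=q)            # sample from q, never from p
estimate = float(np.mean(p[v] / q[v] * f[v]))  # reweight each sample by p/q

print(exact, estimate)   # the estimate is close to the exact expectation
```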
- However, this estimate still requires computing $p_{\theta}(v_k \mid \tilde{h}_{t})$, i.e.
$$
p_{\theta}(v_{k} \mid \tilde{h}_{t})=\frac{\exp \left(s(v_{k}, \tilde{h}_{t} ; \theta)\right)}{Z(\tilde{h}_{t})}
$$
where $Z(\tilde{h}_{t})=\sum_{w} \exp \left(s(w, \tilde{h}_{t} ; \theta)\right)$ is the partition function. To avoid this, importance sampling is also applied to the partition function:
$$
\begin{aligned}
Z(\tilde{h}_{t}) &= \sum_{w} \exp \left(s(w, \tilde{h}_{t} ; \theta)\right) \\
&= \sum_{w} q(w \mid \tilde{h}_{t}) \frac{1}{q(w \mid \tilde{h}_{t})} \exp \left(s(w, \tilde{h}_{t} ; \theta)\right) \\
&= \mathbb{E}_{q(w \mid \tilde{h}_{t})}\left[\frac{1}{q(w \mid \tilde{h}_{t})} \exp \left(s(w, \tilde{h}_{t} ; \theta)\right)\right] \\
&\approx \frac{1}{K} \sum_{k=1}^{K} \frac{\exp \left(s(v_{k}, \tilde{h}_{t} ; \theta)\right)}{q(v_{k} \mid \tilde{h}_{t})} \\
&= \frac{1}{K} \sum_{k=1}^{K} r(v_{k})
\end{aligned}
$$
where $r(v_{k})=\frac{\exp \left(s(v_{k}, \tilde{h}_{t} ; \theta)\right)}{q(v_{k} \mid \tilde{h}_{t})}$, and this proposal distribution can be taken to be the same as the one above. Substituting this estimate of $Z(\tilde{h}_{t})$ into the gradient approximation gives
$$
\begin{aligned}
\mathbb{E}_{p_{\theta}(v \mid \tilde{h}_{t})}\left[\frac{\partial s(v, \tilde{h}_{t} ; \theta)}{\partial \theta}\right]
&\approx \frac{1}{K} \sum_{k=1}^{K} \frac{p_{\theta}(v_{k} \mid \tilde{h}_{t})}{q(v_{k} \mid \tilde{h}_{t})} \frac{\partial s(v_{k}, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&= \frac{1}{K} \sum_{k=1}^{K} \frac{\exp \left(s(v_{k}, \tilde{h}_{t} ; \theta)\right)}{Z(\tilde{h}_{t})} \frac{1}{q(v_{k} \mid \tilde{h}_{t})} \frac{\partial s(v_{k}, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&= \frac{1}{K} \sum_{k=1}^{K} \frac{1}{Z(\tilde{h}_{t})} r(v_{k}) \frac{\partial s(v_{k}, \tilde{h}_{t} ; \theta)}{\partial \theta} \\
&\approx \frac{1}{\sum_{k=1}^{K} r(v_{k})} \sum_{k=1}^{K} r(v_{k}) \frac{\partial s(v_{k}, \tilde{h}_{t} ; \theta)}{\partial \theta}
\end{aligned}
$$
- The gradient is therefore approximated as
$$
\frac{\partial \log p_{\theta}(x_{t} \mid \tilde{h}_{t})}{\partial \theta} \approx \frac{\partial s(x_{t}, \tilde{h}_{t} ; \theta)}{\partial \theta}-\frac{1}{\sum_{k=1}^{K} r(v_{k})} \sum_{k=1}^{K} r(v_{k}) \frac{\partial s(v_{k}, \tilde{h}_{t} ; \theta)}{\partial \theta}
$$
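A numerical sketch of the self-normalized estimator (the linear score function, uniform proposal, and all sizes are toy assumptions; the gradient with respect to $h$ stands in for the gradient with respect to $\theta$, since the estimator has the same form). It weights samples from $q$ by $r(v_k)$ and normalizes by $\sum_k r(v_k)$, so the partition function is never computed:

```python
import numpy as np

rng = np.random.default_rng(3)

V, d = 50, 8
W = 0.1 * rng.normal(size=(V, d))   # toy score s(v, h) = W[v] @ h, so ds/dh = W[v]
h = rng.normal(size=d)
scores = W @ h

# Exact E_{p_theta}[ds/dh] via the full Softmax (what we want to avoid).
p = np.exp(scores - scores.max()); p /= p.sum()
exact = p @ W

# Self-normalized importance sampling with weights r(v_k) = exp(s)/q.
q = np.full(V, 1.0 / V)             # uniform proposal (a unigram stand-in)
K = 20_000
vk = rng.choice(V, size=K, p=q)
r = np.exp(scores[vk]) / q[vk]
estimate = (r[:, None] * W[vk]).sum(axis=0) / r.sum()

print(np.abs(exact - estimate).max())   # small: the two agree closely
```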
- Importance sampling effectively samples a subset of the vocabulary and computes the expected gradient over that subset. The more samples drawn, the closer the approximation to the true value; in practice, $K \approx 100$ already estimates the gradient with sufficiently high precision
- A poorly chosen proposal distribution can make the gradient estimate very unstable; the proposal is often taken to be the distribution of a unigram model
Noise-Contrastive Estimation (NCE)
- Noise-contrastive estimation is another method for approximately estimating the gradient
- Consider three distributions: the true data distribution $p_r(x)$ to be modeled; the model distribution $p_{\theta}(x)$, whose parameters $\theta$ we adjust so that $p_{\theta}(x)$ fits $p_r(x)$; and a noise distribution $q(x)$ used for contrastive learning. A sample $x$ drawn from $p_r(x)$ is called a real sample, and one drawn from $q(x)$ a noise sample. To decide whether $x$ is a real or a noise sample, a discriminator $D$ is introduced
- Noise-contrastive estimation adjusts the model $p_{\theta}(x)$ so that the discriminator $D$ can easily tell which distribution a sample $x$ came from. Let $y \in \{1,0\}$ indicate whether a sample $x$ is a real or a noise sample; the class-conditional probabilities are
$$
\begin{aligned}
p(x \mid y=1) &= p_{\theta}(x) \\
p(x \mid y=0) &= q(x)
\end{aligned}
$$
- In general there are many more noise samples than real samples. To improve efficiency, assume the number of noise samples is $K$ times the number of real samples, i.e. the prior over $y$ satisfies
$$
p(y=0)=K\, p(y=1)
$$
- By Bayes' rule, the posterior probability that a sample $x$ comes from the true data distribution is
$$
\begin{aligned}
p(y=1 \mid x) &= \frac{p(x \mid y=1)\, p(y=1)}{p(x \mid y=1)\, p(y=1)+p(x \mid y=0)\, p(y=0)} \\
&= \frac{p_{\theta}(x)\, p(y=1)}{p_{\theta}(x)\, p(y=1)+q(x)\, K\, p(y=1)} \\
&= \frac{p_{\theta}(x)}{p_{\theta}(x)+K q(x)}
\end{aligned}
$$
- Draw $N$ samples $x_1, \cdots, x_N$ from the true distribution $p_r(x)$ and label them $y=1$; draw $KN$ samples $x_1', \cdots, x_{KN}'$ from the noise distribution and label them $y=0$. The goal of noise-contrastive estimation is to separate the real samples from the noise samples, which can be viewed as a binary classification problem with loss
$$
\mathcal{L}(\theta)=-\frac{1}{N(K+1)}\left(\sum_{n=1}^{N} \log p(y=1 \mid x_{n})+\sum_{n=1}^{K N} \log p(y=0 \mid x_{n}^{\prime})\right)
$$
- By repeatedly sampling real and noise samples and applying gradient descent, the parameters $\theta$ can be learned so that $p_{\theta}(x)$ approaches the true distribution $p_r(x)$
- In noise-contrastive estimation, the discriminator $D$ is obtained via Bayes' rule rather than learned separately
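A toy end-to-end sketch of this training loop (the discrete distribution, learning rate, and batch sizes are all made-up assumptions): an unnormalized model $p_\theta(x)=\exp(\theta_x)$ is fit to a known distribution by gradient descent on the NCE loss over sampled real and noise batches. Notably, $\exp(\theta)$ ends up approximately normalized without any explicit Softmax:

```python
import numpy as np

rng = np.random.default_rng(4)

V = 6
p_r = np.array([0.4, 0.25, 0.15, 0.1, 0.06, 0.04])  # true data distribution
q = np.full(V, 1.0 / V)                              # noise distribution
K = 10                                               # noise : real sample ratio
N = 2000                                             # real samples per step
theta = np.zeros(V)                                  # unnormalized model exp(theta)

for step in range(500):
    real = rng.choice(V, size=N, p=p_r)        # y = 1 batch
    noise = rng.choice(V, size=K * N, p=q)     # y = 0 batch
    # Posterior p(y=1|x) = p_theta(x) / (p_theta(x) + K q(x)) for every x.
    post = np.exp(theta) / (np.exp(theta) + K * q)
    grad = np.zeros(V)
    np.add.at(grad, real, -(1.0 - post[real]))  # d(-log p(y=1|x)) / dtheta_x
    np.add.at(grad, noise, post[noise])         # d(-log p(y=0|x)) / dtheta_x
    theta -= 0.5 * grad / N

print(np.round(np.exp(theta), 2))   # close to p_r, and sums to about 1
```

No partition function appears anywhere in the loop: only the unnormalized scores and the cheap noise distribution are evaluated, which is the source of the speedup.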
- The same idea can be used when computing the conditional probabilities of a sequence model to improve efficiency
- A notable property of noise-contrastive estimation is that it encourages the unnormalized distribution $\exp(s(\cdot))$ to learn an approximately normalized distribution on its own, so no further Softmax normalization is needed
- Sampling-based methods do not change the model structure; they only approximate the parameter gradients. They can significantly speed up training, but the partition function must still be computed at test time. Hierarchical Softmax, in contrast, changes the model structure and speeds up computation at both training and test time