Noise Contrastive Estimation

The concept of entropy comes up frequently in statistical machine learning, so before introducing NCE and InfoNCE we briefly review entropy and related notions. Information content measures the amount of uncertainty, and entropy can be viewed as the expectation of information content. Shannon's definitions: for a random variable $X$, the Shannon information is $I(X) = -\log P(X)$, and the Shannon entropy is the expectation of the Shannon information, $H(X) = E(I(X)) = \sum_{x} P(x)I(x) = -\sum_{x} P(x)\log P(x)$. Entropy is maximized when the random variable is uniformly distributed. Shannon's source coding theorem shows that the entropy is a lower bound on the number of bits needed to transmit the state of a random variable.
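As a quick illustration (a minimal numpy sketch, not part of the original derivation), entropy can be computed directly from a probability vector, and the uniform distribution indeed attains the maximum:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log p(x), in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                               # treat 0 * log(0) as 0
    return -np.sum(p * np.log(p))

print(entropy([0.25, 0.25, 0.25, 0.25]))       # uniform over 4 states: log(4) ~= 1.386, the maximum
print(entropy([0.70, 0.10, 0.10, 0.10]))       # a more peaked distribution has smaller entropy
```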

For multi-dimensional random variables, the joint entropy is defined as $H(X,Y) = E(I(X,Y)) = \sum_{x,y}P(x,y)I(x,y) = -\sum_{x,y}P(x,y)\log P(x,y)$.

Conditional entropy:

$$H(Y|X) = E_{x}\big(H(Y|X=x)\big) = -\sum_{x} p(x)\sum_{y}p(y|x)\log p(y|x)$$

$$= -\sum_{x}\sum_{y}p(x)p(y|x)\log p(y|x)$$

$$= -\sum_{x,y}p(x,y)\log p(y|x)$$

From the definition we can derive the following property of conditional entropy:

$$H(Y|X) = -\sum_{x,y}p(x,y)\log p(y|x)$$

$$= -\sum_{x,y}p(x,y)\log\big(p(x,y)/p(x)\big)$$

$$= -\sum_{x,y}p(x,y)\log p(x,y) + \sum_{x,y}p(x,y)\log p(x)$$

$$= -\sum_{x,y}p(x,y)\log p(x,y) + \sum_{x}\log p(x)\sum_{y}p(x,y)$$

$$= H(X,Y) + \sum_{x}p(x)\log p(x)$$

$$= H(X,Y) - H(X)$$
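The chain rule $H(Y|X) = H(X,Y) - H(X)$ is easy to verify numerically on a small joint table (a sketch; the joint distribution below is made up for illustration):

```python
import numpy as np

# Hypothetical joint distribution p(x, y) over 2 x 3 states (entries sum to 1).
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.25, 0.05, 0.30]])

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p_x = p_xy.sum(axis=1)                                       # marginal p(x)
H_y_given_x = -np.sum(p_xy * np.log(p_xy / p_x[:, None]))    # H(Y|X) from the definition
print(H_y_given_x, H(p_xy.ravel()) - H(p_x))                 # the two sides of the identity agree
```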

Relative entropy (also called KL divergence):

Let $p(x)$ and $q(x)$ be two probability distributions of a discrete random variable. The relative entropy of $p$ with respect to $q$ is

$$KL(p\|q) = E_{x\sim p(x)}\Big(\log\frac{p(x)}{q(x)}\Big) = \sum_{x}p(x)\log\frac{p(x)}{q(x)}$$

  • If $p$ and $q$ are identical, then $KL(p\|q) = KL(q\|p) = 0$
  • In general $KL(p\|q) \ne KL(q\|p)$
  • $KL(p\|q) \ge 0$

Proof:

$$KL(p\|q) = E_{x\sim p(x)}\Big(\log\frac{p(x)}{q(x)}\Big)$$

$$= \sum_{x}p(x)\log\frac{p(x)}{q(x)}$$

$$= -\sum_{x}p(x)\log\frac{q(x)}{p(x)}$$

$$= -E_{x\sim p(x)}\Big(\log\frac{q(x)}{p(x)}\Big)$$

$$\ge -\log E_{x\sim p(x)}\Big(\frac{q(x)}{p(x)}\Big) \quad\text{(Jensen's inequality)}$$

$$= -\log\sum_{x}p(x)\frac{q(x)}{p(x)}$$

$$= -\log\sum_{x}q(x) = 0$$
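A small numerical check of these properties (sketch; the two distributions are chosen arbitrarily):

```python
import numpy as np

def kl(p, q):
    """KL(p || q) = sum_x p(x) log(p(x) / q(x)); assumes q(x) > 0 wherever p(x) > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print(kl(p, p))             # 0 when the two distributions coincide
print(kl(p, q), kl(q, p))   # both non-negative, and generally not equal (asymmetry)
```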

Cross entropy:

$$H(p,q) = -\sum_x p(x)\log q(x)$$

Cross entropy has the following property:

$$KL(p\|q) = \sum_{x}p(x)\log\frac{p(x)}{q(x)}$$

$$= \sum_x p(x)\log p(x) - \sum_x p(x)\log q(x)$$

$$= -\sum_x p(x)\log q(x) - \Big(-\sum_x p(x)\log p(x)\Big)$$

$$= H(p,q) - H(p)$$

$$\le H(p,q)$$
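Numerically, the decomposition $H(p,q) = KL(p\|q) + H(p)$ can be checked in a few lines (sketch; $p$ and $q$ are arbitrary distributions):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
cross_entropy = -np.sum(p * np.log(q))      # H(p, q)
entropy_p = -np.sum(p * np.log(p))          # H(p)
kl_pq = np.sum(p * np.log(p / q))           # KL(p || q)
print(cross_entropy, kl_pq + entropy_p)     # identical: H(p, q) = KL(p || q) + H(p)
```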

Therefore minimizing the cross entropy $H(p,q)$ minimizes an upper bound on the KL divergence. In machine learning we can regard $p(x)$ as the true data distribution and $q(x)$ as the distribution modeled by our model, whose parameters are to be estimated (learned). A common estimation (learning) strategy is to minimize the cross entropy, and it can be shown that, with the samples fixed, minimizing the cross entropy is equivalent to minimizing the relative entropy (KL divergence) and also equivalent to maximizing the likelihood.

In practice the true data distribution $p(x)$ is unknown, so during learning it is approximated with samples $\{x_i, i=1,2,\ldots,n\}$ drawn from the population, and we want the learned distribution $q(x)$ to agree with the sample distribution.

Under the minimum cross-entropy strategy, the loss function is defined as:

$$Loss(\theta) = H(p,q) = -\sum_x p(x)\log q(x;\theta) = -E_{x\sim p(x)}\big(\log q(x;\theta)\big)$$

$$\approx -\frac{1}{n}\sum_{i=1}^n \log q(x_i;\theta)$$

$$\theta^* = \arg\min_{\theta}\, -\frac{1}{n}\sum_{i=1}^n \log q(x_i;\theta)$$

Under the maximum likelihood strategy, the log-likelihood of the samples is:

$$L(\theta) = \sum_{i=1}^n \log q(x_i;\theta)$$

$$\theta_{MLE} = \arg\max_{\theta}\sum_{i=1}^n \log q(x_i;\theta)$$

It follows that minimum cross-entropy estimation is equivalent to maximum likelihood estimation.
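A tiny sketch of this equivalence (illustrative only: the model $q(x;\theta)$ is a softmax-parameterized categorical and the "optimizer" is a crude random search, just to make the point that the objective being minimized is exactly the average negative log-likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.choice(3, size=1000, p=[0.6, 0.3, 0.1])   # data from the (unknown) true p(x)

def neg_log_likelihood(theta, xs):
    """Average NLL of the data = empirical cross-entropy between the data and q(x; theta)."""
    q = np.exp(theta - theta.max())
    q = q / q.sum()                                      # q(x; theta) via softmax
    return -np.mean(np.log(q[xs]))

# Crude random search over logits; any gradient-based optimizer would do in practice.
candidates = [rng.normal(size=3) for _ in range(5000)]
losses = [neg_log_likelihood(t, samples) for t in candidates]
theta_hat = candidates[int(np.argmin(losses))]
q_hat = np.exp(theta_hat - theta_hat.max()); q_hat /= q_hat.sum()
print(q_hat)   # close to the empirical sample frequencies, i.e. roughly [0.6, 0.3, 0.1]
```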

Mutual information:

The mutual information of two random variables $X$ and $Y$ is defined as

$$I(X;Y) = E_{x,y\sim p(x,y)}\Big(\log\frac{p(x,y)}{p(x)p(y)}\Big) = \sum_x\sum_y p(x,y)\log\frac{p(x,y)}{p(x)p(y)}$$

Mutual information satisfies:

  • Symmetry: $I(X;Y) = I(Y;X)$
  • Non-negativity: $I(X;Y) \ge 0$, with $I(X;Y) = 0$ when $X$ and $Y$ are independent

Proof:

$$I(X;Y) = E_{x,y\sim p(x,y)}\Big(\log\frac{p(x,y)}{p(x)p(y)}\Big)$$

$$= -E_{x,y\sim p(x,y)}\Big(\log\frac{p(x)p(y)}{p(x,y)}\Big)$$

$$\ge -\log E_{x,y\sim p(x,y)}\Big(\frac{p(x)p(y)}{p(x,y)}\Big) \quad\text{(Jensen's inequality)}$$

$$= -\log\sum_{x,y}p(x,y)\frac{p(x)p(y)}{p(x,y)}$$

$$= -\log 1 = 0$$
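Both properties can be checked numerically from a joint probability table (sketch; the joint distributions below are made up):

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ]."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log((p_xy / (p_x * p_y))[mask]))

p_dep = np.array([[0.4, 0.1],
                  [0.1, 0.4]])              # X and Y are correlated
p_ind = np.outer([0.5, 0.5], [0.5, 0.5])    # X and Y are independent
print(mutual_information(p_dep))            # strictly positive
print(mutual_information(p_ind))            # 0 (up to floating-point error)
```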

Noise Contrastive Estimation

A language model models the conditional probability of a word $w$ given its context $c$:

$$p(w|c) = p_{\theta}(w|c) = \frac{\exp(s_{\theta}(w,c))}{\sum_{w'\in V}\exp(s_{\theta}(w',c))}\qquad(1)$$

Below we consider estimating the parameter $\theta$ given the context $c$.

Using maximum likelihood estimation, the log-likelihood of the samples is:

$$L(\theta) = \sum_{i=1}^n \log p_{\theta}(w_i|c)$$

$$= \sum_{i=1}^n \log\exp(s_{\theta}(w_i,c)) - \sum_{i=1}^n \log\sum_{w'\in V}\exp(s_{\theta}(w',c))$$

$$= \sum_{i=1}^n s_{\theta}(w_i,c) - n\log\sum_{w'\in V}\exp(s_{\theta}(w',c))$$

where $n$ is the sample size.

The maximum likelihood estimate is:

$$\hat{\theta}_{MLE} = \arg\max_{\theta}L(\theta)$$

The estimate $\hat{\theta}_{MLE}$ is obtained with gradient-based optimization; the gradient is:

$$\frac{\partial L(\theta)}{\partial \theta}=\sum_{i=1}^n\frac{\partial s_{\theta}(w_i,c)}{\partial \theta} - n\sum_{w'\in V}\frac{\exp(s_{\theta}(w',c))}{\sum_{w''\in V}\exp(s_{\theta}(w'',c))}\frac{\partial s_{\theta}(w',c)}{\partial \theta}$$

$$=\sum_{i=1}^n\frac{\partial s_{\theta}(w_i,c)}{\partial \theta} - n\sum_{w'\in V}p_{\theta}(w'|c)\frac{\partial s_{\theta}(w',c)}{\partial \theta}$$

The computation above splits into two parts: the first part involves only the individual training samples $(w_i,c)$, while the second part requires a sum over the entire vocabulary $V$, which is expensive when the vocabulary is large. Some approximate computation schemes have therefore been proposed:

  • Compute the second part by sampling: draw samples from $p_{\theta}(w|c)$ and approximate it with $\frac{1}{k}\sum_{i=1}^k\frac{\partial s_{\theta}(w_i,c)}{\partial\theta}$. In practice, sampling from $p_{\theta}(w|c)$ itself is cumbersome, so a simpler proposal distribution $q(w|c)$ is used for sampling instead; this approach is called sampled softmax (a small sketch follows the equations below).

$$\frac{\partial L(\theta)}{\partial\theta} =\sum_{i=1}^n\frac{\partial s_{\theta}(w_i,c)}{\partial \theta} - n\sum_{w'\in V}p_{\theta}(w'|c)\frac{\partial s_{\theta}(w',c)}{\partial \theta}$$

$$=\sum_{i=1}^n\frac{\partial s_{\theta}(w_i,c)}{\partial \theta} - n\,E_{w'\sim p_{\theta}(w|c)}\Big[\frac{\partial s_{\theta}(w',c)}{\partial \theta}\Big]$$

$$\approx \sum_{i=1}^n\frac{\partial s_{\theta}(w_i,c)}{\partial \theta} - n\,\frac{1}{k}\sum_{i=1}^k\frac{\partial s_{\theta}(w'_i,c)}{\partial \theta}$$
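A rough sketch of this approximation for a toy score model $s_\theta(w,c)=\theta_w$ (everything here is hypothetical: the model, the vocabulary size, and the choice of sampling from $p_\theta(w|c)$ itself rather than from a separate proposal $q(w|c)$ as real sampled-softmax implementations do), just to contrast the exact full-vocabulary term with its $k$-sample Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
V, k = 50_000, 200                      # vocabulary size, number of sampled words
theta = rng.normal(size=V) * 0.01       # toy model: s_theta(w, c) = theta[w]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Exact second term of the gradient (per component of theta, ignoring the factor n):
# sum over the whole vocabulary, which costs O(V).
p = softmax(theta)                      # p_theta(w | c)
grad_exact = p                          # since d s_theta(w', c) / d theta_w = 1 iff w' == w

# Monte Carlo estimate using k sampled words.
ws = rng.choice(V, size=k, p=p)
grad_estimate = np.bincount(ws, minlength=V) / k

print(np.abs(grad_exact - grad_estimate).sum())   # the L1 error shrinks as k grows
```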

  • Noise Contrastive Estimation (NCE)

NCE treats each training sample $(w_i,c)$ as a positive example, labeled $D=1$, and draws negative examples, labeled $D=0$, from a known, fixed noise distribution $q(w)$ that is independent of $c$. With $k$ negative samples drawn per positive sample, in the new sample set:

$$p(D=1) = \frac{n}{n+kn}$$

$$p(D=0) = 1 - p(D=1) = \frac{kn}{n+kn}$$

$$p(w|D=1,c) = p_{\theta}(w|c)$$

$$p(w|D=0,c) = q(w)$$

$$p(D=1|w,c) = \frac{p(D=1)p(w|D=1,c)}{p(D=0)p(w|D=0,c)+p(D=1)p(w|D=1,c)}$$

$$= \frac{p_{\theta}(w|c)}{p_{\theta}(w|c)+kq(w)}$$

$$p(D=0|w,c) = 1 - p(D=1|w,c) = \frac{p(D=0)p(w|D=0,c)}{p(D=0)p(w|D=0,c)+p(D=1)p(w|D=1,c)}$$

$$= \frac{kq(w)}{p_{\theta}(w|c)+kq(w)}$$

The original samples together with the randomly sampled negatives form a new sample set of size $n + kn$. Applying maximum likelihood again, the log-likelihood of the new samples is:

$$L^c(\theta) = \sum_{i=1}^{n}\log p_{\theta}(D=1|w_i,c) + \sum_{i=n+1}^{n+kn}\log p_{\theta}(D=0|w_i,c)$$

where

$$p_{\theta}(w|c) = \frac{\exp(s_{\theta}(w,c))}{\sum_{w'\in V}\exp(s_{\theta}(w',c))}$$

$$= \frac{u_{\theta}(w,c)}{Z(c)}$$

$$p(D=1|w,c) = \frac{p_{\theta}(w|c)}{p_{\theta}(w|c)+kq(w)}$$

$$p(D=0|w,c) = \frac{kq(w)}{p_{\theta}(w|c)+kq(w)}$$

Notice that this likelihood still requires the normalization constant $Z(c) = \sum_{w'\in V}\exp(s_{\theta}(w',c))$, which remains expensive to compute, so we modify the model as follows:

$$Z(c) = \theta^c$$

$$p_{\theta}(w|c) = \exp(s_{\theta}(w,c))/\theta^c = p_{\theta^0}(w|c)/\theta^c$$

The new model's parameters consist of two parts, $\theta = \{\theta^0, \theta^c\}$,

where $p_{\theta^0}(w|c)$ is the unnormalized function $p_{\theta^0}(w|c) = u_{\theta^0}(w|c)$.

$$L^c(\theta) = \sum_{i=1}^{n}\log p_{\theta}(D=1|w_i,c) + \sum_{i=n+1}^{n+kn}\log p_{\theta}(D=0|w_i,c)$$

$$= \sum_{i=1}^{n}\log\frac{p_{\theta}(w_i|c)}{p_{\theta}(w_i|c)+kq(w_i)} + \sum_{w_j\sim q(w)}\log\frac{kq(w_j)}{p_{\theta}(w_j|c)+kq(w_j)}$$

The authors found experimentally that simply fixing $\theta^c = 1$ also works well, leaving only the parameters $\theta^0$ to be estimated.
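A minimal sketch of the resulting NCE objective for one context (illustrative only: the unnormalized score is a toy lookup table, the noise distribution $q(w)$ is uniform, and the per-context normalizer is fixed to $\theta^c = 1$ as described above):

```python
import numpy as np

rng = np.random.default_rng(0)
V, k = 10_000, 10                        # vocabulary size, negatives per positive
theta0 = rng.normal(size=V) * 0.01       # toy scores: u_theta(w, c) = exp(theta0[w]), Z(c) := 1
q = np.full(V, 1.0 / V)                  # noise distribution q(w), uniform for simplicity

def nce_loss(positive_words):
    """Negative log-likelihood of the binary D = 1 / D = 0 classification problem."""
    loss = 0.0
    for w in positive_words:
        u_pos = np.exp(theta0[w])
        loss -= np.log(u_pos / (u_pos + k * q[w]))                   # log p(D = 1 | w, c)
        negatives = rng.choice(V, size=k, p=q)                       # k samples from q(w)
        u_neg = np.exp(theta0[negatives])
        loss -= np.sum(np.log(k * q[negatives] / (u_neg + k * q[negatives])))  # log p(D = 0 | w', c)
    return loss / len(positive_words)

print(nce_loss([3, 17, 42]))             # per-positive NCE loss for three toy "training words"
```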

InfoNCE


  • Representation learning: learn good representations through the task of predicting the future.

  • The authors argue that directly modeling the conditional probability $p(x|c)$ is not the optimal way to extract the information shared between $x$ and $c$.

  • Instead they propose to model the density ratio between $x$ and $c$, $\frac{p(x|c)}{p(x)}$, via $f_k(x_{t+k},c_t) \propto \frac{p(x_{t+k}|c_t)}{p(x_{t+k})}$; increasing this density ratio increases the mutual information between $x$ and $c$: $I(X,C) = \sum_x\sum_c p(x,c)\log\frac{p(x|c)}{p(x)}$

  • In practice the authors use the model $f_k(x_{t+k},c_t) = \exp(z^T_{t+k}W_k c_t)$, with $z_t = g_{enc}(x_t)$ and $c_t = g_{ar}(z_{\le t})$

  • Given a training batch $X=\{x_1,x_2,\ldots,x_N\}$ containing one sample drawn from $p(x_{t+k}|c_t)$, treated as the positive sample, and $N-1$ samples from $p(x_{t+k})$, treated as negatives, the InfoNCE loss is defined as $L_N = -E\Big[\log\frac{f_k(x_{t+k},c_t)}{\sum_{x_j\in X}f_k(x_j,c_t)}\Big]$

  • $I(x_{t+k},c_t) \ge \log N - L_N$

Proof:

$$L_N = -E\Big[\log\frac{f_k(x_{t+k},c_t)}{f_k(x_{t+k},c_t)+\sum_{x_j\in X_{neg}}f_k(x_j,c_t)}\Big]$$

$$= E\Big[\log\frac{f_k(x_{t+k},c_t)+\sum_{x_j\in X_{neg}}f_k(x_j,c_t)}{f_k(x_{t+k},c_t)}\Big]$$

$$= E\log\Big[1+\frac{p(x_{t+k})}{p(x_{t+k}|c_t)}\sum_{x_j\in X_{neg}}\frac{p(x_j|c_t)}{p(x_j)}\Big]$$

$$\approx E\log\Big[1+\frac{p(x_{t+k})}{p(x_{t+k}|c_t)}(N-1)E_{x_j\sim p(x_j)}\frac{p(x_j|c_t)}{p(x_j)}\Big]$$

$$= E\log\Big[1+\frac{p(x_{t+k})}{p(x_{t+k}|c_t)}(N-1)\Big] \quad\text{(since } E_{x_j\sim p(x_j)}\tfrac{p(x_j|c_t)}{p(x_j)} = 1\text{)}$$

$$\ge E\log\Big[\frac{p(x_{t+k})}{p(x_{t+k}|c_t)}N\Big]$$

$$= -I(x_{t+k},c_t)+\log N$$

Therefore minimizing $L_N$ is equivalent to maximizing a lower bound on the mutual information $I(x_{t+k},c_t)$.
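In implementations, the InfoNCE loss for one prediction step is just a softmax cross-entropy over the batch, with the positive pair playing the role of the correct class. A minimal numpy sketch (all shapes and the random encodings below are hypothetical stand-ins for $z_{t+k}=g_{enc}(x_{t+k})$, $c_t=g_{ar}(z_{\le t})$ and $W_k$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 16                              # batch size (1 positive + N-1 negatives), embedding dim

z = rng.normal(size=(N, d))               # candidate encodings; by convention index 0 is the positive x_{t+k}
c = rng.normal(size=d)                    # context encoding c_t
W = rng.normal(size=(d, d))               # bilinear map W_k

scores = z @ (W @ c)                      # log f_k(x_j, c_t) = z_j^T W_k c_t
logsumexp = scores.max() + np.log(np.exp(scores - scores.max()).sum())
loss = -(scores[0] - logsumexp)           # -log [ f_k(positive) / sum_j f_k(x_j) ]
print(loss, "lower bound on I(x_{t+k}; c_t):", np.log(N) - loss)
```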
