This article is mainly based on shuhuai008's whiteboard-derivation video on Bilibili: VI Variational Inference (126 min).
Index of the full note series: Machine Learning - Whiteboard Derivation Series Notes
1. Background
For a probabilistic model:
- from the frequentist perspective, learning becomes an optimization problem;
- from the Bayesian perspective, inference becomes an integration problem.
From the Bayesian perspective, the posterior predictive distribution is
$$p(\hat{x}|x)=\int_{\theta}p(\hat{x},\theta|x)\,\mathrm{d}\theta=\int_{\theta}p(\hat{x}|\theta,x)\,p(\theta|x)\,\mathrm{d}\theta=\int_{\theta}p(\hat{x}|\theta)\,p(\theta|x)\,\mathrm{d}\theta=E_{\theta|x}[p(\hat{x}|\theta)]$$
where the third equality uses the conditional independence of $\hat{x}$ and $x$ given $\theta$.
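Since the posterior predictive is just an expectation under the posterior, it can be estimated by Monte Carlo whenever we can sample $\theta\sim p(\theta|x)$. A minimal sketch with a conjugate Beta-Bernoulli model (the data and prior here are made up for illustration; conjugacy gives an exact answer to compare against):

```python
import numpy as np

rng = np.random.default_rng(0)

# Beta(1, 1) prior on theta, Bernoulli likelihood
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])              # observed data
a_post = 1 + x.sum()                                # Beta posterior parameters
b_post = 1 + len(x) - x.sum()

# Monte Carlo: p(x_hat = 1 | x) = E_{theta|x}[p(x_hat = 1 | theta)] = E_{theta|x}[theta]
theta_samples = rng.beta(a_post, b_post, size=100_000)
mc_estimate = np.mean(theta_samples)                # average of p(x_hat=1 | theta) over samples

exact = a_post / (a_post + b_post)                  # closed-form posterior mean (conjugacy)
print(mc_estimate, exact)                           # the two should be close
```

The integral over $\theta$ is replaced by an average over posterior samples; in non-conjugate models those samples would come from MCMC, which is exactly where approximate inference enters.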
Inference methods split into:
- exact inference;
- approximate inference (deterministic approximation: VI; stochastic approximation: MCMC methods such as MH and Gibbs sampling).
On the optimization side, typical examples include:
- Regression
  model: $f(w)=w^Tx$
  loss function (unconstrained):
  $$L(w)=\sum_{i=1}^{N}\|w^Tx_i-y_i\|^2,\qquad \hat{w}=\arg\min_w L(w)$$
  Solutions:
  1. Analytical: set the derivative to $0$, giving $w^*=(X^TX)^{-1}X^TY$.
  2. Numerical: GD, SGD.
- SVM (classification)
  model: $f(w)=\mathrm{sign}(w^Tx+b)$
  loss function (constrained):
  $$\min \frac{1}{2}w^Tw\quad s.t.\ y_i(w^Tx_i+b)\geq 1,\ i=1,2,\cdots,N$$
  Solved as a convex optimization problem via its dual.
- EM
  $$\hat{\theta}=\arg\max_\theta\,\log p(x|\theta)$$
  $$\theta^{(t+1)}=\underset{\theta}{\arg\max}\int \log p(x,z|\theta)\cdot p(z|x,\theta^{(t)})\,\mathrm{d}z$$
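For the regression example above, the analytical solution and the gradient-descent solution can be checked against each other on synthetic data (a sketch; the data, step size, and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: y = X w_true + small noise
N, d = 200, 3
X = rng.normal(size=(N, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=N)

# 1. Analytical solution: w* = (X^T X)^{-1} X^T y
#    (solve the normal equations instead of forming the inverse explicitly)
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# 2. Numerical solution: plain gradient descent on L(w) = ||Xw - y||^2
w_gd = np.zeros(d)
lr = 0.001
for _ in range(5000):
    grad = 2 * X.T @ (X @ w_gd - y)     # gradient of the squared loss
    w_gd -= lr * grad

print(np.max(np.abs(w_closed - w_gd)))  # should be ~0
```

Both routes recover essentially the same $\hat{w}$; the numerical route is what survives once the model stops having a closed form.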
2. Formulation
Data:
- $x$: observed variable, $X=\{x_i\}_{i=1}^{N}$
- $z$: latent variables + parameters, $Z=\{z_i\}_{i=1}^{N}$
- $(X,Z)$: complete data

Introduce a distribution $q(z)$:
$$\log p(x)=\log p(x,z)-\log p(z|x)=\log\frac{p(x,z)}{q(z)}-\log\frac{p(z|x)}{q(z)}$$
Take the expectation of both sides with respect to $q(z)$ (writing the parameter $\theta$ explicitly from here on; the left-hand side does not depend on $z$):
LHS $=\int_{z}q(z)\cdot\log p(x|\theta)\,\mathrm{d}z=\log p(x|\theta)\int_{z}q(z)\,\mathrm{d}z=\log p(x|\theta)$

RHS $=\underbrace{\int_{z}q(z)\log\frac{p(x,z|\theta)}{q(z)}\,\mathrm{d}z}_{ELBO\ (Evidence\ Lower\ Bound)}\underbrace{-\int_{z}q(z)\log\frac{p(z|x,\theta)}{q(z)}\,\mathrm{d}z}_{KL(q(z)\|p(z|x,\theta))}=\underbrace{L(q)}_{\text{variational}}+\underbrace{KL(q\|p)}_{\geq 0}$
When $q$ equals $p(z|x,\theta)$, $KL(q\|p)=0$, its minimum. Since $\log p(x|\theta)$ is fixed with respect to $q$, minimizing the KL divergence is equivalent to making $L(q)$ as large as possible:
$$\tilde{q}(z)=\underset{q(z)}{\arg\max}\,L(q)\Rightarrow\tilde{q}(z)\approx p(z|x)$$
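The identity $\log p(x)=L(q)+KL(q\|p)$ and the lower-bound property can be checked numerically on a tiny discrete model, where the integrals become sums (the joint table and the choice of $q$ below are made up purely for illustration):

```python
import numpy as np

# A made-up joint p(x, z) over one observed x and a discrete z in {0, 1, 2}
p_joint = np.array([0.2, 0.3, 0.1])      # p(x, z) for z = 0, 1, 2 at the observed x
p_x = p_joint.sum()                      # evidence p(x) = sum_z p(x, z)
p_post = p_joint / p_x                   # posterior p(z|x)

q = np.array([0.5, 0.3, 0.2])            # an arbitrary distribution q(z)

elbo = np.sum(q * np.log(p_joint / q))   # L(q) = E_q[log p(x,z)/q(z)]
kl = np.sum(q * np.log(q / p_post))      # KL(q || p(z|x)) >= 0

print(np.log(p_x), elbo + kl)            # identical: log p(x) = ELBO + KL
```

The sum is exactly $\log p(x)$ for any choice of $q$, and since the KL term is nonnegative, the ELBO is always a lower bound on the evidence — tight precisely when $q$ matches the posterior.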
We now place a mean-field assumption on $q(z)$: partition the dimensions of the multivariate variable $z$ into $M$ groups that are mutually independent, so that
$$q(z)=\prod_{i=1}^{M}q_i(z_i)$$
We then fix $q_i(z_i)$ for all $i\neq j$ and solve for $q_j(z_j)$, so:
$$L(q)=\underbrace{\int_{z}q(z)\log p(x,z)\,\mathrm{d}z}_{①}-\underbrace{\int_{z}q(z)\log q(z)\,\mathrm{d}z}_{②}$$
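Maximizing $L(q)$ one factor at a time with the others fixed leads to the standard coordinate-ascent update $q_j(z_j)\propto\exp\!\left(E_{q_{i\neq j}}[\log p(x,z)]\right)$. A sketch on a made-up two-variable discrete posterior (the joint table and iteration count are arbitrary):

```python
import numpy as np

# Made-up joint p(x, z1, z2) at the observed x: a 2x2 table over (z1, z2)
p_joint = np.array([[0.30, 0.10],
                    [0.15, 0.45]])
log_p = np.log(p_joint)

# Mean-field: q(z1, z2) = q1(z1) q2(z2); start from uniform factors
q1 = np.array([0.5, 0.5])
q2 = np.array([0.5, 0.5])

def elbo(q1, q2):
    q = np.outer(q1, q2)                       # the factorized joint q(z1, z2)
    return np.sum(q * (log_p - np.log(q)))     # E_q[log p(x,z)] + H[q]

# Coordinate ascent: q_j(z_j) ∝ exp(E_{q_{-j}}[log p(x, z)])
for _ in range(50):
    q1 = np.exp(log_p @ q2); q1 /= q1.sum()    # update q1 with q2 fixed
    q2 = np.exp(q1 @ log_p); q2 /= q2.sum()    # update q2 with q1 fixed

print(elbo(q1, q2), np.log(p_joint.sum()))     # ELBO <= log p(x)
```

Each update can only increase the ELBO, but because the true posterior here is not factorized, the bound stays strictly below $\log p(x)$ — the gap is the price of the mean-field assumption.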