ML step by step | 2020-01-03

Machine Learning problem discussion

Proof that minimizing the regularized error function is equivalent to minimizing the unregularized sum-of-squares error

(2 points) Using the technique of Lagrange multipliers, show that minimization of the regularized error function

$$\frac{1}{2}\sum_{i=1}^{n}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)^{2}+\frac{\lambda}{2}\sum_{j}\left|\omega_j\right|^{q}$$

is equivalent to minimizing the unregularized sum-of-squares error

$$\frac{1}{2}\sum_{i=1}^{n}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)^{2}$$

subject to the constraint

$$\sum_{j}\left|\omega_j\right|^{q}\leqslant \eta$$

Proof:

The constrained problem is

$$\min_{\boldsymbol{\omega}}\ \frac{1}{2}\sum_{i=1}^{n}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)^{2}\qquad \text{s.t.}\qquad \sum_{j}\left|\omega_j\right|^{q}\leqslant \eta$$

For $q\geqslant 1$ this is a convex optimization problem. Its Lagrangian is

$$L(\boldsymbol{\omega},\lambda) = \frac{1}{2}\sum_{i=1}^{n}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)^{2}+\frac{\lambda}{2}\left(\sum_{j}\left|\omega_j\right|^{q}-\eta\right)$$

where the factor $\frac{1}{2}$ on the multiplier is just a rescaling of $\lambda$, chosen so that it matches the regularized form. Let $\boldsymbol{\omega}^{*}$ and $\lambda^{*}$ be optimal for the primal and dual problems. The KKT conditions require

$$\lambda^{*}\geqslant 0$$

and stationarity in each $\omega_j$ (writing $x_{ij}$ for the $j$-th component of $\mathbf{x}_i$ and dropping the stars for readability):

$$0=\frac{\partial L}{\partial \omega_j} = -\sum_{i=1}^{n}x_{ij}\left(y_i-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)+\frac{\lambda q}{2}\left|\omega_j\right|^{q-1}k_j,
\qquad
k_j=\begin{cases} \in[-1,1] & \text{if } \omega_j=0 \\ \ \ 1 & \text{if } \omega_j>0 \\ -1 & \text{if } \omega_j<0 \end{cases}$$

where $k_j$ is a subgradient of $\left|\omega_j\right|$. Therefore

$$\frac{\lambda q}{2}\left|\omega_j\right|^{q-1}k_j= \sum_{i=1}^{n}x_{ij}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)$$

For the unconstrained regularized error function, the first-order condition is

$$\frac{\partial}{\partial \omega_j}\left[\frac{1}{2}\sum_{i=1}^{n}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)^{2}+\frac{\lambda}{2}\sum_{j}\left|\omega_j\right|^{q}\right]
=-\sum_{i=1}^{n}x_{ij}\left(y_i-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)+\frac{\lambda q}{2}\left|\omega_j\right|^{q-1}k_j=0$$

$$\therefore\ \frac{\lambda q}{2}\left|\omega_j\right|^{q-1}k_j =\sum_{i=1}^{n}x_{ij}\left(y_{i}-\boldsymbol{\omega}^{T}\mathbf{x}_{i}\right)$$

This is exactly the KKT stationarity condition of the constrained problem. Hence, if $\boldsymbol{\omega}^{*}$ minimizes the regularized objective for a given $\lambda$, then choosing $\eta=\sum_{j}\left|\omega_j^{*}\right|^{q}$ makes $\boldsymbol{\omega}^{*}$ satisfy all KKT conditions (stationarity, primal feasibility, $\lambda\geqslant 0$, complementary slackness) of the constrained problem, so the two minimizations are equivalent.
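The equivalence can also be checked numerically. Below is a minimal sketch of my own (not part of the assignment), assuming $q=2$ so that both problems are easy to solve: the $\lambda$-regularized ridge solution, with $\eta$ set to the resulting $\sum_j|\omega_j|^2$, is recovered by projected gradient descent on the constrained problem.

```python
import numpy as np

# Sketch for q = 2 (ridge): the lambda-regularized minimizer, with
# eta = sum_j |w_j|^2 taken from that minimizer, is also the solution of the
# constrained problem; projected gradient descent on the eta-ball recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

lam = 1.0
# Closed-form minimizer of (1/2)||y - Xw||^2 + (lam/2)||w||^2
w_reg = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
eta = np.sum(np.abs(w_reg) ** 2)          # constraint level induced by lambda

# Projected gradient descent on (1/2)||y - Xw||^2  s.t.  ||w||^2 <= eta
w = np.zeros(3)
for _ in range(20000):
    w -= 1e-3 * (-(X.T @ (y - X @ w)))    # gradient step on the squared loss
    norm_sq = np.sum(w ** 2)
    if norm_sq > eta:                     # project back onto the feasible ball
        w *= np.sqrt(eta / norm_sq)

print(np.allclose(w, w_reg, atol=1e-4))   # expected: True
```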

MAP of LASSO

(2 points) (MAP Estimation) We mentioned that when the prior on $\theta$ is an isotropic Laplace distribution, MAP estimation corresponds to LASSO (L1-regularization). Now you are maximizing the likelihood function $\prod_{i=1}^{n}p(x_i\mid\theta)$ with the prior distribution

$$p(\theta)= \frac{\lambda}{2}\exp\left(-\lambda\left|\theta\right|\right),\qquad \lambda>0$$

Please prove that this is equivalent to maximizing

$$\log\prod_{i=1}^{n}p(x_i\mid\theta)-\lambda\left|\theta\right|$$
**Proof:**

$$\begin{aligned}
\mathop{\arg\max}_{\theta}\ \prod_{i=1}^{n}p(x_i\mid\theta)\,p(\theta)
&=\mathop{\arg\max}_{\theta}\ \log\left(\prod_{i=1}^{n}p(x_i\mid\theta)\,p(\theta)\right) \\
&=\mathop{\arg\max}_{\theta}\ \log\prod_{i=1}^{n}p(x_i\mid\theta)+\log p(\theta) \\
&=\mathop{\arg\max}_{\theta}\ \log\prod_{i=1}^{n}p(x_i\mid\theta)+\log\frac{\lambda}{2}-\lambda\left|\theta\right| \\
&=\mathop{\arg\max}_{\theta}\ \log\prod_{i=1}^{n}p(x_i\mid\theta)-\lambda\left|\theta\right|
\end{aligned}$$

since $\log\frac{\lambda}{2}$ is a constant that does not depend on $\theta$.
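As a quick numerical illustration (my own sketch, assuming a Gaussian likelihood $x_i \sim N(\theta, 1)$, which the problem does not specify), the MAP objective with the Laplace prior and the penalized log-likelihood above pick the same maximizer, because they differ only by the constant $\log\frac{\lambda}{2}$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: with an assumed Gaussian likelihood x_i ~ N(theta, 1), maximizing the
# posterior (likelihood * Laplace prior) and maximizing
# log prod p(x_i|theta) - lam*|theta| select the same theta.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.7, scale=1.0, size=100)
lam = 5.0

def log_lik(theta):
    return -0.5 * np.sum((x - theta) ** 2)      # Gaussian log-likelihood (up to a constant)

def neg_log_posterior(theta):
    return -(log_lik(theta) + np.log(lam / 2) - lam * abs(theta))

def neg_penalized(theta):
    return -(log_lik(theta) - lam * abs(theta))

theta_map = minimize_scalar(neg_log_posterior, bounds=(-5, 5), method="bounded").x
theta_pen = minimize_scalar(neg_penalized, bounds=(-5, 5), method="bounded").x
print(np.isclose(theta_map, theta_pen, atol=1e-4))  # expected: True
```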

Bias-Variance Tradeoff and its applications

(2 points) (Mean Square Error) We mentioned the Bias-Variance Tradeoff in class. We define the MSE of $\hat{X}$, an estimator of $X$, as

$$MSE(\hat{X}) \triangleq E\left[(\hat{X}-X)^{2}\right]$$

The variance of $\hat{X}$ is defined as

$$Var(\hat{X}) \triangleq E\left[(\hat{X}-E[\hat{X}])^{2}\right]$$

and the bias is defined as

$$Bias(\hat{X}) \triangleq E[\hat{X}]-X.$$

(a) Please prove that

$$MSE[\hat{X}]=Var[\hat{X}]+\left(Bias[\hat{X}]\right)^{2}$$

(b) Our data are corrupted by independent Gaussian noise, say $X + N$, where $E[N] = 0$ and $E[N^{2}] = \sigma^{2}$, and the estimator is $\hat{X}$. We define the empirical MSE as $E\left[(\hat{X} - X - N)^{2}\right]$.

Please prove that

$$E\left[(\hat{X} - X - N)^{2}\right]=MSE[\hat{X}]+\sigma^{2}$$

This equation tells us that the empirical error is a good estimate of the true error; thus, we can minimize the empirical error in order to properly minimize the true error.

Proof:

(a)

$$\begin{aligned}
MSE[\hat{X}]&=E\left[(\hat{X}-X)^{2}\right] \\
&=E\left[\left((\hat{X}-E[\hat{X}])+(E[\hat{X}]-X)\right)^{2}\right] \\
&=E\left[(\hat{X}-E[\hat{X}])^{2}\right]+E\left[(E[\hat{X}]-X)^{2}\right]+2\,E\left[(\hat{X}-E[\hat{X}])(E[\hat{X}]-X)\right] \\
&=Var[\hat{X}]+\left(Bias[\hat{X}]\right)^{2}+2\,(E[\hat{X}]-X)\,E\left[\hat{X}-E[\hat{X}]\right] \\
&=Var[\hat{X}]+\left(Bias[\hat{X}]\right)^{2}
\end{aligned}$$

where the cross term vanishes because $E[\hat{X}]-X$ is a constant and $E\left[\hat{X}-E[\hat{X}]\right]=0$.
(b) Since $N$ is independent of $\hat{X}$ and $X$, with $E[N]=0$ and $E[N^{2}]=\sigma^{2}$,

$$\begin{aligned}
E\left[(\hat{X} - X - N)^{2}\right]&=E\left[(\hat{X}-X)^{2}\right]-2\,E\left[(\hat{X}-X)N\right]+E\left[N^{2}\right] \\
&=MSE[\hat{X}]-2\,E\left[\hat{X}-X\right]E[N]+\sigma^{2} \\
&=MSE[\hat{X}]+\sigma^{2}
\end{aligned}$$
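Both identities are easy to confirm by simulation. The following is a small Monte Carlo sketch of my own (the biased estimator and all constants are arbitrary choices): it checks $MSE = Var + Bias^{2}$ exactly on the sample moments and checks that the empirical MSE is close to $MSE + \sigma^{2}$.

```python
import numpy as np

# Monte Carlo sketch: estimate a fixed X with a deliberately biased estimator
# and check both identities empirically.
rng = np.random.default_rng(2)
X, sigma, trials = 2.0, 0.5, 200_000

samples = X + sigma * rng.normal(size=(trials, 10))
X_hat = 0.9 * samples.mean(axis=1)         # biased estimator of X
N = sigma * rng.normal(size=trials)        # independent noise on the target

mse = np.mean((X_hat - X) ** 2)
var = np.var(X_hat)
bias = np.mean(X_hat) - X
emp_mse = np.mean((X_hat - X - N) ** 2)

print(np.isclose(mse, var + bias ** 2))                  # MSE = Var + Bias^2
print(np.isclose(emp_mse, mse + sigma ** 2, rtol=2e-2))  # empirical MSE ~ MSE + sigma^2
```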

VC Dimension’s application

(4 points) (VC Dimension) Given some finite domain set $\chi$ and a number $k \leq \left|\chi\right|$, please figure out the VC-dimension of each of the following classes:

(a) (2 points)

$$H_{=k}^{\chi}=\left\{ h\in \left\{0,1\right\}^{\chi}: \left|\left\{ x:h(x)=1 \right\}\right|=k \right\}$$

That is, the set of all functions that assign the value 1 to exactly $k$ elements of $\chi$.

(b) (2 points)

$$H_{\leq k}^{\chi}=\left\{ h\in \left\{0,1\right\}^{\chi}: \left|\left\{ x:h(x)=0 \right\}\right|\leq k \ \text{ or } \ \left|\left\{ x:h(x)=1 \right\}\right|\leq k \right\}$$
Solution:

(a)

Every hypothesis in $H_{=k}^{\chi}$ assigns the label 1 to exactly $k$ elements of $\chi$. A set of $k+1$ points labeled all "1" cannot be realized, since that would require more than $k$ ones; likewise, a set of more than $\left|\chi\right|-k$ points labeled all "0" cannot be realized, since the $k$ ones would not fit outside the set. So no set larger than $\min(k,\left|\chi\right|-k)$ can be shattered. Conversely, any set $C$ with $|C|=\min(k,\left|\chi\right|-k)$ is shattered: for a labeling that marks $m\leq|C|\leq k$ points of $C$ as 1, pick a hypothesis that assigns 1 to those $m$ points and to $k-m$ further points outside $C$ (possible because $\left|\chi\right|-|C|\geq k\geq k-m$), and 0 everywhere else. Hence the VC-dimension of $H_{=k}^{\chi}$ is $\min(k,\left|\chi\right|-k)$.
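The answer can be verified by brute force on a small domain. The sketch below is my own illustration (the domain size 6 and $k=2$ are arbitrary choices): it enumerates every hypothesis with exactly $k$ ones and computes the VC-dimension directly.

```python
from itertools import combinations, product

# Brute-force VC-dimension: a set is shattered if the hypotheses induce all
# 2^|C| labelings on it; the VC-dimension is the size of the largest such set.
def vc_dimension(domain, hypotheses):
    def shattered(subset):
        labelings = {tuple(h[x] for x in subset) for h in hypotheses}
        return len(labelings) == 2 ** len(subset)
    return max((d for d in range(1, len(domain) + 1)
                if any(shattered(c) for c in combinations(domain, d))),
               default=0)

domain, k = list(range(6)), 2
# All h: domain -> {0,1} assigning the label 1 to exactly k points
H_exactly_k = [dict(zip(domain, bits))
               for bits in product([0, 1], repeat=len(domain))
               if sum(bits) == k]

print(vc_dimension(domain, H_exactly_k), min(k, len(domain) - k))  # expected: 2 2
```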

(b)

Every hypothesis in $H_{\leq k}^{\chi}$ uses the label 0 at most $k$ times or the label 1 at most $k$ times. Assume first that $\left|\chi\right|\geq 2k+2$. Any set $C$ of $2k+1$ points is shattered: in any labeling of $C$, the minority label appears at most $k$ times, so extending the labeling by giving every point outside $C$ the majority label yields a hypothesis in the class that realizes it. However, a set of $2k+2$ points cannot be shattered: the labeling with $k+1$ ones and $k+1$ zeros would force both labels to appear more than $k$ times, so no hypothesis in the class realizes it. If instead $\left|\chi\right|\leq 2k+1$, then for every $h\in\{0,1\}^{\chi}$ the two label counts sum to at most $2k+1$, so one of them is at most $k$; the class then contains all functions on $\chi$ and shatters the whole domain. Hence the VC-dimension of $H_{\leq k}^{\chi}$ is $\min(2k+1,\left|\chi\right|)$.
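The same brute-force check works for part (b). This is again my own sketch, with the arbitrary choices $\left|\chi\right|=6$ and $k=1$, so the expected answer is $\min(2k+1,\left|\chi\right|)=3$.

```python
from itertools import combinations, product

# Brute-force check of part (b): hypotheses with at most k zeros or at most
# k ones on a 6-point domain; the computed VC-dimension should be min(2k+1, 6).
def vc_dimension(domain, hypotheses):
    def shattered(subset):
        labelings = {tuple(h[x] for x in subset) for h in hypotheses}
        return len(labelings) == 2 ** len(subset)
    return max((d for d in range(1, len(domain) + 1)
                if any(shattered(c) for c in combinations(domain, d))),
               default=0)

domain, k = list(range(6)), 1
# All h with at most k points labeled 0 OR at most k points labeled 1
H_at_most_k = [dict(zip(domain, bits))
               for bits in product([0, 1], repeat=len(domain))
               if bits.count(0) <= k or bits.count(1) <= k]

print(vc_dimension(domain, H_at_most_k), min(2 * k + 1, len(domain)))  # expected: 3 3
```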
