Lec2.2 Regularization

Classical Regularization: Parameter Norm Penalty


Most classical regularization approaches are based on limiting the capacity of the model by adding a parameter norm penalty $\Omega(\theta)$ to the objective function $J$.

$$\tilde{J}(\theta;X,y) = J(\theta;X,y) + \alpha\,\Omega(\theta)$$

$L^2$ Parameter Regularization (weight decay)

Regularization term: $\Omega(\theta) = \frac{1}{2}\left\|w\right\|_2^2$

Gradient of the total objective function:

$$\nabla_w \tilde{J}(w;X,y) = \alpha w + \nabla_w J(w;X,y)$$

The corresponding gradient-descent update is

$$w := w - \epsilon\left(\alpha w + \nabla_w J(w;X,y)\right)$$
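As a concrete illustration, here is a minimal NumPy sketch of this update, assuming a simple least-squares loss $J(w;X,y)=\frac{1}{2n}\|Xw-y\|^2$; the data, $\alpha$, and $\epsilon$ below are illustrative, not from the lecture.

```python
import numpy as np

# Minimal sketch of gradient descent with L2 weight decay on an assumed
# least-squares loss J(w; X, y) = (1/2n) * ||X w - y||^2.
rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

alpha = 0.1     # weight-decay coefficient (illustrative)
epsilon = 0.1   # learning rate (illustrative)
w = np.zeros(d)

for _ in range(500):
    grad_J = X.T @ (X @ w - y) / n           # gradient of the unregularized loss
    w -= epsilon * (alpha * w + grad_J)      # w := w - eps * (alpha*w + grad_w J)
```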

A quick aside, the Taylor expansion: $f(x) = f(x_0) + f'(x_0)(x-x_0) + \frac{1}{2}f''(x_0)(x-x_0)^2 + \dots$

Assume $w^\star$ is the optimal solution of $J$, that is, $\nabla J(w^\star) = 0$.

The second-order Taylor expansion of $J(w)$ around $w^\star$ is $\hat{J}(w) = J(w^\star) + \frac{1}{2}(w-w^\star)^T H (w-w^\star)$, where $H$ is the Hessian of $J$ at $w^\star$; the first-order term vanishes because $\nabla J(w^\star)=0$.

Substituting into the previous gradient equation,

$\nabla_w \tilde{J}(w;X,y) = \alpha w + H(w - w^\star)$; set it to $0$:

α w + H ( w − w ⋆ ) = = 0 w ~ = ( H + α I ) − 1 H w ⋆ \alpha w+H(w-w^{\star})==0 \\ \tilde{w}=(H+\alpha I)^{-1}H w^{\star} αw+H(ww)==0w~=(H+αI)1Hw
$\tilde{w}$ is the new solution; it shows how the solution changes after weight decay is applied.

If $\alpha \rightarrow 0$, then $\tilde{w} \rightarrow w^\star$. What happens if $\alpha$ is not $0$?

Since $H$ is symmetric, it can be eigendecomposed as $H = Q\Lambda Q^T$. Substituting into the previous equation gives

$$\begin{aligned}
\tilde{w} &= (Q\Lambda Q^T + \alpha I)^{-1} H w^\star \\
&= \left(Q(\Lambda + \alpha I)Q^T\right)^{-1} Q\Lambda Q^T w^\star \\
&= Q(\Lambda + \alpha I)^{-1} Q^T Q\Lambda Q^T w^\star \\
&= Q(\Lambda + \alpha I)^{-1} \Lambda\, Q^T w^\star
\end{aligned}$$

The middle term $(\Lambda + \alpha I)^{-1}\Lambda$ is also a diagonal matrix, with entries $\frac{\lambda_i}{\lambda_i + \alpha}$. $Q$ and $Q^T$ can be viewed as rotation matrices that rotate in opposite directions. The equation can be read as follows: the solution after weight decay is the original solution expressed in a different basis, with each direction scaled by a coefficient. If $\lambda_i \gg \alpha$, the coefficient is close to 1; otherwise it is close to 0, so those components are effectively removed.
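A quick numerical check, with an illustrative random Hessian, that the direct form $(H+\alpha I)^{-1}Hw^\star$ and the eigendecomposition form $Q(\Lambda+\alpha I)^{-1}\Lambda Q^T w^\star$ agree, and that the per-direction shrinkage coefficients are $\frac{\lambda_i}{\lambda_i+\alpha}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
H = A @ A.T                      # a symmetric positive semi-definite "Hessian"
w_star = rng.normal(size=d)      # pretend optimum of the unregularized loss
alpha = 0.5

lam, Q = np.linalg.eigh(H)       # H = Q diag(lam) Q^T

w_direct = np.linalg.solve(H + alpha * np.eye(d), H @ w_star)
w_eig = Q @ np.diag(lam / (lam + alpha)) @ Q.T @ w_star

print(np.allclose(w_direct, w_eig))   # True
print(lam / (lam + alpha))            # per-direction shrinkage coefficients
```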

The Hessian matrix of $J$ describes how quickly $J$ changes along different directions: $H = Q\Lambda Q^T$.


Directions along which the parameters contribute significantly to reducing the objective function are preserved relatively intact; in the other directions, a small eigenvalue of the Hessian tells us that movement in that direction will not significantly increase the gradient, so the corresponding components of the weight vector are decayed away.

The effective number of parameters is defined to be

$$\gamma = \sum_i \frac{\lambda_i}{\lambda_i + \alpha}$$

As $\alpha$ is increased, the effective number of parameters decreases.
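A small sketch with illustrative eigenvalues, showing how $\gamma$ shrinks as $\alpha$ grows:

```python
import numpy as np

# Illustrative Hessian eigenvalues; gamma = sum_i lam_i / (lam_i + alpha)
lam = np.array([10.0, 5.0, 1.0, 0.1, 0.01])

for alpha in [0.0, 0.1, 1.0, 10.0]:
    gamma = np.sum(lam / (lam + alpha))
    print(f"alpha={alpha:5.1f}  effective parameters gamma={gamma:.2f}")
# gamma starts at 5 (all parameters effective) and decreases toward 0.
```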

Dataset Augmentation

The best way to make a machine learning model generalize better is to train it on more data.

The amount of data we have is limited.

Create fake data and add it to the training set.

Not applicable to all tasks.

For example, it is difficult to generate new fake data for a density estimation task unless we have already solved the density estimation problem.

Operations like translating the training images a few pixels in each direction can often greatly improve generalization.
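A minimal sketch of this kind of augmentation, assuming the images are stored as an (N, H, W) array; the ±2 pixel range and the wrap-around shift are illustrative choices (zero-padding is another common option).

```python
import numpy as np

def random_translate(images, max_shift=2, rng=np.random.default_rng()):
    """Shift each image by a random offset of up to max_shift pixels."""
    out = np.empty_like(images)
    for i, img in enumerate(images):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[i] = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wrap-around shift
    return out

augmented = random_translate(np.zeros((8, 28, 28)))  # e.g. MNIST-sized images
```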

One way to improve the robustness of neural networks is simply to train them with random noise applied to their inputs.

This same approach also works when the noise is applied to the hidden units, which can be seen as doing dataset augmentation at multiple levels of abstraction.
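A minimal sketch of injecting Gaussian noise at the input and at the hidden layer of an assumed two-layer MLP during training; the shapes and noise scales below are illustrative.

```python
import numpy as np

def noisy_forward(x, W1, b1, W2, b2, sigma_in=0.1, sigma_h=0.1,
                  rng=np.random.default_rng()):
    x = x + sigma_in * rng.normal(size=x.shape)   # noise on the input
    h = np.maximum(0, x @ W1 + b1)                # ReLU hidden layer
    h = h + sigma_h * rng.normal(size=h.shape)    # noise on the hidden units
    return h @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(784, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)) * 0.01, np.zeros(10)
x = rng.normal(size=(32, 784))                    # a batch of 32 inputs
out = noisy_forward(x, W1, b1, W2, b2)
```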

Noise Injection

Two ways that noise can be used as part of a regularization strategy.

Adding noise to the input.

This can be interpreted simply as a form of dataset augmentation.

It can also be interpreted as being equivalent to more traditional forms of regularization.

Adding noise to the weights.

This technique has been used primarily in the context of recurrent neural networks (Jim et al., 1996; Graves, 2011a).

This can be interpreted as a stochastic implementation of Bayesian inference over the weights.
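A minimal sketch of weight-noise injection: at each step the gradient is evaluated at perturbed weights $W+\eta$ with $\eta\sim\mathcal{N}(0,\sigma^2 I)$. The `grad_loss` callable, learning rate, and $\sigma$ are illustrative placeholders, not a specific published recipe.

```python
import numpy as np

def weight_noise_step(W, grad_loss, lr=0.01, sigma=0.01,
                      rng=np.random.default_rng()):
    eta = sigma * rng.normal(size=W.shape)   # sample noise for the weights
    g = grad_loss(W + eta)                   # gradient at the perturbed weights
    return W - lr * g

# Toy usage: a quadratic loss ||W||^2 whose gradient is 2W.
W = np.ones((3, 3))
grad_quadratic = lambda W: 2 * W
W = weight_noise_step(W, grad_quadratic)
```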

Manifold Tangent Classifier

It is assumed that we are trying to classify examples and that examples on the same manifold share the same category.

The classifier should be invariant to the local factors of variation that correspond to movement on the manifold.

Use as the nearest-neighbor distance between points $x_1$ and $x_2$ the distance between the manifolds $M_1$ and $M_2$ to which they respectively belong.

Approximate $M_i$ by its tangent plane at $x_i$ and measure the distance between the two tangent planes.

Train a neural net classifier with an extra penalty to make the output $f(x)$ of the neural net locally invariant to known factors of variation (the Tangent Prop algorithm, Simard et al., 1992).
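A minimal sketch of a Tangent Prop-style penalty, assuming the gradient $\nabla_x f(x)$ and a set of tangent directions $v_1,\dots,v_k$ are given (both are illustrative placeholders here): the penalty sums the squared directional derivatives of $f$ along the tangent directions.

```python
import numpy as np

def tangent_prop_penalty(grad_f_x, tangents):
    """Sum of (grad_x f(x) . v_i)^2 over tangent directions v_i.

    grad_f_x: array of shape (d,), tangents: array of shape (k, d).
    """
    return np.sum((tangents @ grad_f_x) ** 2)

grad_f_x = np.random.default_rng(2).normal(size=10)   # placeholder gradient
tangents = np.random.default_rng(3).normal(size=(3, 10))  # placeholder tangents
penalty = tangent_prop_penalty(grad_f_x, tangents)
```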
