Improving Deep Neural Networks Study Notes (1)

Author: Tyan
Blog: noahsnail.com  |  CSDN  |  简书

1. Setting up your Machine Learning Application

1.1 Train/Dev/Test sets

Make sure that the dev and test sets come from the same distribution.

Not having a test set might be okay (only a dev set).

So having set up a train, dev, and test set will allow you to iterate more quickly. It will also allow you to more efficiently measure the bias and variance of your algorithm, so you can more efficiently select ways to improve your algorithm.
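
A minimal sketch of such a split using only NumPy; the function name, the small dev/test fractions (typical for large datasets), and the data shapes are illustrative assumptions, not from the course.

```python
import numpy as np

def split_dataset(X, y, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle and split (X, y) into train/dev/test partitions.

    X: (m, n_x) examples, y: (m,) labels. With large datasets, small
    dev/test fractions (e.g. 98/1/1) are common; these defaults are
    illustrative only.
    """
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(m)                       # shuffle before splitting
    n_dev = int(m * dev_frac)
    n_test = int(m * test_frac)
    dev_idx = idx[:n_dev]
    test_idx = idx[n_dev:n_dev + n_test]
    train_idx = idx[n_dev + n_test:]
    return ((X[train_idx], y[train_idx]),
            (X[dev_idx], y[dev_idx]),
            (X[test_idx], y[test_idx]))
```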

1.2 Bias/Variance

High Bias: underfitting
High Variance: overfitting

Assumption: human error is ~0% (the optimal/Bayes error), and the train set and dev set are drawn from the same distribution.

| Train set error | Dev set error | Result |
| --- | --- | --- |
| 1% | 11% | high variance |
| 15% | 16% | high bias |
| 15% | 30% | high bias and high variance |
| 0.5% | 1% | low bias and low variance |

1.3 Basic Recipe for Machine Learning

High bias –> Bigger network, training longer, advanced optimization algorithms, try a different network architecture.

High variance –> More data, Try regularization, Find a more appropriate neural network architecture.

2. Regularizing your neural network

2.1 Regularization

In logistic regression,

$w \in \mathbb{R}^{n_x}, \quad b \in \mathbb{R}$

$$J(w,b)=\frac{1}{m}\sum_{i=1}^{m}L(\hat{y}^{(i)},y^{(i)})+\frac{\lambda}{2m}\|w\|_2^2$$

$$\|w\|_2^2=\sum_{j=1}^{n_x}w_j^2=w^Tw$$

This is called L2 regularization.

$$J(w,b)=\frac{1}{m}\sum_{i=1}^{m}L(\hat{y}^{(i)},y^{(i)})+\frac{\lambda}{2m}\|w\|_1$$

This is called L1 regularization. $w$ will end up being sparse. $\lambda$ is called the regularization parameter.
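
The two penalized costs above can be sketched for logistic regression as follows; the helper name and the $(n_x, m)$ data layout are assumptions for illustration.

```python
import numpy as np

def regularized_cost(w, b, X, y, lam, norm="l2"):
    """Cross-entropy cost for logistic regression plus an L1 or L2 penalty.

    X: (n_x, m) features, y: (m,) labels in {0, 1}; lam is the
    regularization parameter lambda from the formulas above.
    """
    m = X.shape[1]
    y_hat = 1.0 / (1.0 + np.exp(-(w @ X + b)))           # sigmoid(w^T x + b)
    loss = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    if norm == "l2":
        penalty = (lam / (2 * m)) * np.sum(w ** 2)       # (lambda/2m) ||w||_2^2
    else:
        penalty = (lam / (2 * m)) * np.sum(np.abs(w))    # (lambda/2m) ||w||_1
    return loss + penalty
```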

In neural network, the formula is

$$J(w^{[1]},b^{[1]},...,w^{[L]},b^{[L]})=\frac{1}{m}\sum_{i=1}^{m}L(\hat{y}^{(i)},y^{(i)})+\frac{\lambda}{2m}\sum_{l=1}^{L}\|w^{[l]}\|_F^2$$

$$\|w^{[l]}\|_F^2=\sum_{i=1}^{n^{[l-1]}}\sum_{j=1}^{n^{[l]}}(w_{ij}^{[l]})^2, \quad w^{[l]}:(n^{[l-1]},n^{[l]})$$

This matrix norm turns out to be called the Frobenius norm of the matrix, denoted with an F in the subscript.

L2 norm regularization is also called weight decay.
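
A minimal sketch of the Frobenius penalty term, plus a comment on why it acts as "weight decay" in the gradient step; the function name is an illustrative assumption.

```python
import numpy as np

def frobenius_penalty(weights, lam, m):
    """(lambda / 2m) * sum over layers l of ||W^[l]||_F^2."""
    return (lam / (2 * m)) * sum(np.sum(W ** 2) for W in weights)

# Why "weight decay": with this penalty, the gradient step on each W becomes
#   W = W - alpha * (dW_backprop + (lam / m) * W)
#     = (1 - alpha * lam / m) * W - alpha * dW_backprop
# i.e. W is first shrunk by a factor slightly below 1 at every step.
```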

2.2 Why regularization reduces overfitting?

If $\lambda$ is set too large, the weight matrices $W$ are pushed reasonably close to zero, which zeroes out the impact of many hidden units. In that case, the much-simplified neural network becomes a much smaller neural network. It can take you from overfitting to underfitting, but there is a just-right case in the middle.

2.3 Dropout regularization

Dropout goes through each layer of the network and sets some probability of eliminating each node. By far the most common implementation of dropout today is inverted dropout.

Inverted dropout, where kp stands for keep-prob: after zeroing out the dropped units in layer $l$, scale the surviving activations

$$a^{[l]} = a^{[l]} / kp$$

so that the expected value of

$$z^{[l+1]} = w^{[l+1]}a^{[l]} + b^{[l+1]}$$

stays the same. In the test phase, we use neither dropout nor the keep-prob scaling.
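
A sketch of inverted dropout for one layer's activations; the function name and seeded generator are illustrative assumptions.

```python
import numpy as np

def inverted_dropout(a, keep_prob, rng=None):
    """Apply inverted dropout to activations `a` at training time.

    Dividing by keep_prob keeps the expected value of `a` unchanged,
    which is why no rescaling is needed at test time.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    d = rng.random(a.shape) < keep_prob   # mask: keep each unit w.p. keep_prob
    a = a * d                             # zero out the dropped units
    a = a / keep_prob                     # scale up the survivors
    return a
```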

2.4 Understanding dropout

Why does dropout work? Intuition: a unit can't rely on any one feature, so it has to spread out its weights.

By spreading out the weights, dropout tends to have the effect of shrinking the squared norm of the weights.

2.5 Other regularization methods
  • Data augmentation.
  • Early stopping
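
A generic early-stopping loop, as a sketch; `train_step` and `dev_error` are caller-supplied callables and the `patience` parameter is an assumption, not from the course.

```python
def train_with_early_stopping(train_step, dev_error, max_epochs, patience=5):
    """Stop training when dev-set error has not improved for `patience` epochs.

    Returns (best_epoch, best_err): the epoch with the lowest dev error,
    whose parameters you would keep.
    """
    best_err, best_epoch, wait = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step(epoch)              # one epoch of gradient descent
        err = dev_error(epoch)         # evaluate on the dev set
        if err < best_err:
            best_err, best_epoch, wait = err, epoch, 0
        else:
            wait += 1
            if wait >= patience:       # no improvement for `patience` epochs
                break
    return best_epoch, best_err
```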

3. Setting up your optimization problem

3.1 Normalizing inputs

Normalizing inputs can speed up training. Normalizing inputs corresponds to two steps. The first is to subtract out or to zero out the mean. And then the second step is to normalize the variances.
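
The two steps can be sketched as follows; returning `mu` and `sigma` matters because the dev and test sets must be normalized with the statistics computed on the training set (the function name is an illustrative assumption).

```python
import numpy as np

def normalize_inputs(X):
    """Zero-mean, unit-variance normalization of X with shape (n_x, m).

    Returns the normalized data plus (mu, sigma) so the *same* training-set
    statistics can be reused on the dev and test sets. Assumes no feature
    has zero variance.
    """
    mu = X.mean(axis=1, keepdims=True)       # step 1: subtract out the mean
    sigma = X.std(axis=1, keepdims=True)     # step 2: normalize the variance
    return (X - mu) / sigma, mu, sigma
```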

3.2 Vanishing/Exploding gradients

If the network is very deep, it can suffer from the problems of vanishing or exploding gradients.
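
The effect can be demonstrated numerically with linear activations and weight matrices that are a scaled identity: values slightly above 1 blow up exponentially with depth, values slightly below 1 shrink to nothing. This is a toy illustration, not from the course.

```python
import numpy as np

def forward_depth(scale, depth=50):
    """Forward pass through `depth` linear layers with W = scale * I.

    The output is scale**depth, so it explodes for scale > 1 and
    vanishes for scale < 1 as depth grows.
    """
    W = scale * np.eye(2)
    a = np.ones((2, 1))
    for _ in range(depth):
        a = W @ a
    return float(a[0, 0])
```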

3.3 Weight initialization for deep networks

If the activation function is ReLU, a common $w$ initialization is

`w[l] = np.random.randn(shape) * np.sqrt(2 / n[l-1])`

(often called He initialization). For tanh, the variance term $\sqrt{1/n^{[l-1]}}$ is used instead; that variant is called Xavier initialization.

Another formula is

`w[l] = np.random.randn(shape) * np.sqrt(2 / (n[l-1] + n[l]))`
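
A sketch putting the fan-in scaling into a full-network initializer; the function name, dict layout, and seeded generator are illustrative assumptions.

```python
import numpy as np

def init_weights(layer_dims, activation="relu", seed=0):
    """Initialize W^[l], b^[l] for l = 1..L with fan-in-scaled variance.

    relu -> sqrt(2 / n^[l-1])  (He initialization)
    tanh -> sqrt(1 / n^[l-1])  (Xavier initialization)
    layer_dims: [n^[0], n^[1], ..., n^[L]].
    """
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        fan_in = layer_dims[l - 1]
        scale = np.sqrt(2.0 / fan_in) if activation == "relu" else np.sqrt(1.0 / fan_in)
        params[f"W{l}"] = rng.standard_normal((layer_dims[l], fan_in)) * scale
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))  # biases start at zero
    return params
```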

3.4 Numerical approximation of gradients

In order to build up to gradient checking, you need to numerically approximate computations of gradients.

$$g(\theta) \approx \frac{f(\theta+\epsilon)-f(\theta-\epsilon)}{2\epsilon}$$
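The two-sided difference above can be sketched in one line; the function name is an illustrative assumption.

```python
def numeric_grad(f, theta, eps=1e-7):
    """Two-sided difference approximation of f'(theta).

    The two-sided formula has O(eps^2) error, versus O(eps) for the
    one-sided (f(theta + eps) - f(theta)) / eps version.
    """
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)
```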

3.5 Gradient checking

Take the matrices $W$ and vectors $b$, reshape them into vectors, and concatenate them, so you have a giant vector $\theta$. For each $i$:

$$d\theta_{approx}[i]=\frac{J(\theta_1,...,\theta_i+\epsilon,...)-J(\theta_1,...,\theta_i-\epsilon,...)}{2\epsilon} \approx d\theta[i]=\frac{\partial J}{\partial \theta_i}$$

If

$$\frac{\|d\theta_{approx}-d\theta\|_2}{\|d\theta_{approx}\|_2+\|d\theta\|_2} \approx 10^{-7}$$

that's great. If it is around $10^{-5}$, you need to double check; if it is around $10^{-3}$, there may be a bug.
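
The whole check can be sketched as below, using the relative-difference formula above; `J` is the cost as a function of the flattened parameter vector, and the function name is an illustrative assumption.

```python
import numpy as np

def gradient_check(J, theta, dtheta, eps=1e-7):
    """Compare the analytic gradient dtheta against a numeric one.

    J: cost as a function of the 1-D parameter vector theta.
    Returns ||approx - dtheta|| / (||approx|| + ||dtheta||).
    """
    approx = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps                 # perturb one component at a time
        minus[i] -= eps
        approx[i] = (J(plus) - J(minus)) / (2 * eps)
    num = np.linalg.norm(approx - dtheta)
    den = np.linalg.norm(approx) + np.linalg.norm(dtheta)
    return num / den
```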

3.6 Gradient checking implementation notes
  • Don’t use gradient check in training, only to debug.
  • If algorithm fails gradient check, look at components to try to identify bug.
  • Remember regularization.
  • Doesn't work with dropout (turn dropout off, e.g. keep-prob = 1.0, while checking).
  • Run at random initialization; perhaps again after some training.