[coursera/ImprovingDL/week1] Practical Aspects of Deep Learning (summary & questions)

1.1 Setting up your Machine Learning Application

Train/Dev/Test sets

How to split depends on how much data you have: with millions of examples, a 98/1/1 train/dev/test split is common, since 1% is already plenty for evaluation; with small datasets the classic 60/20/20 is more typical.
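As a minimal sketch of such a split (the 98/1/1 ratio follows the lecture's large-dataset example; the NumPy arrays X and y, and the function name, are mine):

```python
import numpy as np

def split_dataset(X, y, train=0.98, dev=0.01, seed=0):
    """Shuffle, then split into train/dev/test; test gets the remainder."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train * len(X))
    n_dev = int(dev * len(X))
    tr, dv, te = np.split(idx, [n_train, n_train + n_dev])
    return (X[tr], y[tr]), (X[dv], y[dv]), (X[te], y[te])

# With 1,000,000 examples, 98/1/1 still leaves 10,000 each for dev and test.
```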

Diagnosing the two failure modes from the errors:

Train set error high: high bias, i.e. underfitting.

Dev set error far above the train set error: high variance, i.e. overfitting.

Worst case: both at once (a high train error and an even higher dev error).

Basic recipe for ML: if bias is high, try a bigger network or training longer; if variance is high, get more data or add regularization. A toy decision procedure is sketched below.

In the deep learning era we no longer need to trade bias against variance: a bigger network attacks bias and more data attacks variance, largely independently of each other.
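As an illustrative sketch only (train_error, dev_error, target_error and the gap test are my placeholders, not the lecture's):

```python
def basic_recipe(train_error, dev_error, target_error):
    """Ng's basic recipe written as a decision procedure (illustrative)."""
    if train_error > target_error:
        # High bias: the model cannot even fit the training set.
        return "bigger network / train longer / new architecture"
    if dev_error - train_error > target_error:
        # High variance: fits the train set but fails to generalize.
        return "more data / regularization / new architecture"
    return "done"

print(basic_recipe(train_error=0.15, dev_error=0.16, target_error=0.01))
```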


1.2 Regularizing your neural network

high variance: regularization / add training data

Without a penalty on the weights, a flexible network may overfit; L2 regularization adds (λ/(2m))‖w‖² to the cost J to push the weights down.

L2 regularization is also known as "weight decay": with the penalty term, each gradient-descent update effectively multiplies w by (1 − αλ/m), i.e. subtracts a small percentage of w, before the usual gradient step (see the sketch below).
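A minimal NumPy sketch of one such update (alpha, lam, m and dW_unreg, the gradient of the unregularized cost, are assumed to come from your training loop):

```python
import numpy as np

def l2_update(W, dW_unreg, alpha=0.01, lam=0.7, m=1000):
    """One gradient step with L2 regularization.

    dJ/dW = dW_unreg + (lam / m) * W, so W -= alpha * dJ/dW is the
    same as first shrinking W by (1 - alpha * lam / m): weight decay.
    """
    return (1 - alpha * lam / m) * W - alpha * dW_unreg

W = np.random.randn(3, 3)
W = l2_update(W, dW_unreg=np.zeros((3, 3)))  # pure decay: W shrinks slightly
```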

So why does regularization reduce overfitting so effectively?


In a word, a large λ pushes the weights toward zero, so z = Wa + b stays small and lands in the near-linear region of tanh/sigmoid; every layer then behaves almost like a linear function, and a roughly linear network cannot carve out the elaborate decision boundaries that overfitting requires.
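A quick numeric check of that argument: near zero, tanh(z) is almost exactly z.

```python
import numpy as np

# Small weights keep z = Wa + b small, where tanh is nearly linear.
for z in [0.01, 0.1, 0.5, 2.0]:
    print(f"z={z:<4}: tanh(z)={np.tanh(z):.4f}, gap={(z - np.tanh(z)) / z:.1%}")
```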

Dropout regularization: at first sight it seems like a highly risky operation, randomly killing units on every iteration.

Inverted dropout: for each layer, draw a random 0/1 mask with P(1) = keep_prob, multiply the activations by the mask, then divide by keep_prob so the expected activation is unchanged.

Each layer can have its own keep_prob, and a flag can switch dropout off entirely; in particular, dropout is applied only during training, never at test time (see the sketch below).
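A minimal sketch of one layer's inverted dropout (the training flag plays the role of that switch; shapes are illustrative):

```python
import numpy as np

def inverted_dropout(a, keep_prob=0.8, training=True):
    """Inverted dropout on a layer's activations `a`.

    Training: zero each unit with probability 1 - keep_prob, then
    divide by keep_prob so E[a] is unchanged. Test: do nothing.
    """
    if not training:
        return a
    d = np.random.rand(*a.shape) < keep_prob  # random 0/1 mask
    return (a * d) / keep_prob

a3 = np.random.randn(50, 32)                  # layer-3 activations, batch of 32
a3 = inverted_dropout(a3, keep_prob=0.8)
```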


Other regularization methods

data augmentation: horizontal and vertical flips, rotations, and similar distortions of existing images

early stopping: stop training when the dev-set error starts to rise. It curbs overfitting, but it also stops minimizing the cost function early, which can leave the bias higher; it couples the fit-the-cost and don't-overfit tasks instead of handling them separately (see the sketch below).
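A sketch of the idea, assuming you supply a step() that runs one training update and a dev_error() that evaluates the dev set (the patience counter is a common add-on, not from the lecture):

```python
def train_with_early_stopping(step, dev_error, max_iters=10_000, patience=5):
    """Stop once the dev error has not improved for `patience` checks."""
    best, bad_rounds = float("inf"), 0
    for _ in range(max_iters):
        step()
        err = dev_error()
        if err < best:
            best, bad_rounds = err, 0   # also snapshot the model here
        else:
            bad_rounds += 1
            if bad_rounds >= patience:  # dev error stopped improving
                break
    return best

# Toy run with dummy callables: dev error improves, then drifts up.
errs = iter([0.30, 0.25, 0.22, 0.23, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28])
print(train_with_early_stopping(lambda: None, lambda: next(errs)))
```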


1.3 Setting up your optimization problem

Normalizing inputs: subtract the per-feature mean, then divide by the per-feature variance. (The lecture's plots show why: with unnormalized features the cost surface is an elongated bowl that gradient descent zigzags across, while after normalization it is round and symmetric, so a larger learning rate converges faster.)
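A minimal sketch; the key point is that the test set is normalized with the training set's μ and σ², not with its own:

```python
import numpy as np

def normalize(X_train, X_test, eps=1e-8):
    """Zero-mean, unit-variance normalization per feature (column)."""
    mu = X_train.mean(axis=0)
    sigma2 = X_train.var(axis=0)
    X_train = (X_train - mu) / np.sqrt(sigma2 + eps)
    X_test = (X_test - mu) / np.sqrt(sigma2 + eps)   # same mu/sigma2!
    return X_train, X_test
```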


Deep networks: vanishing/exploding gradients. In a very deep network, activations and gradients can shrink or grow exponentially with the number of layers (see the toy demo below).
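A toy demonstration with linear activations and identical weights W = c·I, so the activations scale like c^L:

```python
import numpy as np

x = np.ones(4)
for c, label in [(1.5, "exploding"), (0.5, "vanishing")]:
    a = x.copy()
    W = c * np.eye(4)                 # every layer uses the same W
    for _ in range(50):               # a 50-layer linear "network"
        a = W @ a
    print(f"{label}: |a| after 50 layers = {np.linalg.norm(a):.3e}")
# 1.5**50 ≈ 6e8 while 0.5**50 ≈ 9e-16: tiny deviations from 1 compound fast.
```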


Weight initialization for deep networks: pick the initial scale of w so that activations neither vanish nor explode, roughly Var(w) = 2/n_in for ReLU (He initialization) or 1/n_in for tanh (Xavier), as sketched below.
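A minimal sketch (the function name and the activation switch are mine; the variances follow the lecture):

```python
import numpy as np

def init_layer(n_in, n_out, activation="relu", seed=0):
    """He initialization for ReLU (Var = 2/n_in), Xavier-style for tanh."""
    rng = np.random.default_rng(seed)
    var = 2.0 / n_in if activation == "relu" else 1.0 / n_in
    W = rng.standard_normal((n_out, n_in)) * np.sqrt(var)
    b = np.zeros((n_out, 1))          # biases can simply start at zero
    return W, b
```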



Gradient checking: approximate each partial derivative with the two-sided difference (J(θ+ε) − J(θ−ε)) / (2ε) and compare it to the backprop gradient; a large relative difference points to a bug. Use it only for debugging (it is far too slow for training, and it does not work with dropout turned on).
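A minimal sketch over a flattened parameter vector (J is the cost as a function of θ; the 1e-7 rule of thumb follows the lecture):

```python
import numpy as np

def grad_check(J, theta, grad, eps=1e-7):
    """Compare analytic `grad` of J at `theta` to a numeric estimate."""
    approx = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        approx[i] = (J(plus) - J(minus)) / (2 * eps)
    num = np.linalg.norm(grad - approx)
    den = np.linalg.norm(grad) + np.linalg.norm(approx)
    return num / den   # ~1e-7 is great, >1e-3 suggests a bug

# Example: J(theta) = sum(theta**2) has analytic gradient 2*theta.
theta = np.random.randn(5)
print(grad_check(lambda t: np.sum(t**2), theta, 2 * theta))
```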

Quiz notes, questions I got wrong:

2. The dev and test sets should come from the same distribution.

6. Increasing the regularization hyperparameter λ pushes the weights toward becoming smaller (closer to 0).







