Andrew Ng Deep Learning Notes - C2W1 - Neural Network Optimization Basics and Regularization - Quiz

C2W1 Quiz - Practical aspects of deep learning

20230609 updated:

20230531 updated:
Ans: C

Ans: A (Come from the same distribution)

Ans: C, D

Ans: A, C

Note: refer to the diagram below

Ans: A

Ans: A

Ans: D

Ans: B, D

Ans: B、E、G (Data augmentation, L2 regularization, Dropout)

Ans: B

  1. If you have 10,000,000 examples, how would you split the train/dev/test set?
    • 98% train, 1% dev, 1% test
  2. The dev and test set should:
    • Come from the same distribution
  3. If your Neural Network model seems to have high variance, which of the following would be promising things to try?
    • Add regularization
    • Get more training data
  4. You are working on an automated check-out kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5%, and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)
    • Increase the regularization parameter lambda
    • Get more training data
  5. What is weight decay?
    • A regularization technique (such as L2 regularization) that results in gradient descent shrinking the weights on every iteration.
  6. What happens when you increase the regularization hyperparameter lambda?
    • Weights are pushed toward becoming smaller (closer to 0)
  7. With the inverted dropout technique, at test time:
    • You do not apply dropout (do not randomly eliminate units) and do not keep the 1/keep_prob factor in the calculations used in training
  8. Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply)
    • Reducing the regularization effect
    • Causing the neural network to end up with a lower training set error
  9. Which of these techniques are useful for reducing variance (reducing overfitting)? (Check all that apply.)
    • Dropout
    • L2 regularization
    • Data augmentation
  10. Why do we normalize the inputs x?
    • It makes the cost function faster to optimize
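Questions 5 and 6 can be made concrete with a short sketch (not from the original notes; the values of `lambd`, `alpha`, `m`, and the shape of `W` are illustrative). With L2 regularization, the cost gains a `(lambd / (2*m)) * ||W||^2` term, so the gradient gains `(lambd / m) * W` and each gradient-descent step multiplies the weights by `(1 - alpha * lambd / m)`, shrinking them toward 0:

```python
import numpy as np

# Sketch of weight decay (L2 regularization) in a gradient-descent update.
np.random.seed(0)
m = 100       # number of training examples (illustrative)
lambd = 0.7   # regularization hyperparameter lambda (illustrative)
alpha = 0.1   # learning rate (illustrative)
W = np.random.randn(4, 3)
W0 = W.copy()

for _ in range(50):
    dW_loss = np.zeros_like(W)        # data gradient set to 0 to isolate the decay
    dW = dW_loss + (lambd / m) * W    # regularization term of the gradient
    W = W - alpha * dW                # equivalent to W *= (1 - alpha * lambd / m)
```

After the loop every entry of `W` is strictly smaller in magnitude than it started, which is exactly the "weights pushed toward 0" effect of increasing lambda in question 6.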
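For questions 7 and 8, a minimal sketch of inverted dropout (an illustrative helper, not the course's assignment code): at training time units are zeroed with probability `1 - keep_prob` and the survivors are divided by `keep_prob`, so the expected activation is unchanged; at test time nothing is applied.

```python
import numpy as np

def dropout_forward(a, keep_prob, train=True):
    """Inverted dropout on activations a (hypothetical helper).

    Training: zero units with probability 1 - keep_prob, then divide the
    survivors by keep_prob so E[output] == E[input].
    Test time: return a unchanged -- no masking, no 1/keep_prob factor.
    """
    if not train:
        return a
    mask = np.random.rand(*a.shape) < keep_prob  # keep each unit w.p. keep_prob
    return a * mask / keep_prob                  # rescale survivors
```

Raising `keep_prob` from 0.5 to 0.6 keeps more units alive, which weakens the regularization effect and lets the network fit the training set more closely (question 8).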
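Question 10 can likewise be sketched (synthetic data, illustrative shapes): normalize the inputs by subtracting the training-set mean and dividing by the standard deviation, so all features share a similar scale and the cost function becomes faster to optimize.

```python
import numpy as np

# Sketch of input normalization: columns are examples, rows are features.
np.random.seed(2)
X = np.random.randn(3, 1000) * 5.0 + 2.0  # synthetic data on an arbitrary scale

mu = X.mean(axis=1, keepdims=True)    # per-feature mean from the training set
sigma = X.std(axis=1, keepdims=True)  # per-feature standard deviation
X_norm = (X - mu) / sigma             # zero mean, unit variance per feature
```

The same `mu` and `sigma` computed on the training set should be reused to normalize the dev and test sets, consistent with question 2's requirement that dev and test come from the same distribution as each other.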