Deep Learning Course 2, Week 1 Quiz Summary: Practical Aspects of Deep Learning

Practical aspects of Deep Learning

  1. If you have 10,000,000 examples, how would you split the train/dev/test set?
  • 33% train. 33% dev. 33% test
  • 60% train. 20% dev. 20% test
  • 98% train. 1% dev. 1% test
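With very large datasets, the dev and test sets only need to be big enough to give reliable estimates, so 98/1/1 is preferred over the classic 60/20/20. A minimal NumPy sketch (hypothetical variable names) of what that split looks like:

```python
import numpy as np

# With 10,000,000 examples, a 98/1/1 split still leaves
# 100,000 examples each for the dev and test sets.
m = 10_000_000
indices = np.random.permutation(m)

n_train = int(0.98 * m)
n_dev = int(0.01 * m)

train_idx = indices[:n_train]
dev_idx = indices[n_train:n_train + n_dev]
test_idx = indices[n_train + n_dev:]

print(len(train_idx), len(dev_idx), len(test_idx))  # 9800000 100000 100000
```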
  2. When designing a neural network to detect whether a house cat is present in a picture, 500,000 pictures of cats taken by their owners are used to build the training, dev, and test sets. To increase the size of the test set, it is decided that 10,000 new images of cats taken from security cameras will be added to the test set. Which of the following is true?
  • This will increase the bias of the model so the new images shouldn’t be used.
  • This will be harmful to the project since now dev and test sets have different distributions.
  • This will reduce the bias of the model and help improve it.
  3. If your neural network model seems to have high variance, which of the following would be promising things to try?
  • Make the Neural Network deeper
  • Get more training data
  • Add regularization
  • Get more test data
  • Increase the number of units in each hidden layer
  4. You are working on an automated check-out kiosk for a supermarket, and are building a classifier for apples, bananas, and oranges. Suppose your classifier obtains a training set error of 0.5% and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)
  • Increase the regularization parameter lambda
  • Decrease the regularization parameter lambda
  • Get more training data
  • Use a bigger neural network
  5. In every case it is a good practice to use dropout when training a deep neural network because it can help to prevent overfitting. True/False?
  • True
  • False
  6. The regularization hyperparameter must be set to zero during testing to avoid getting random results. True/False?
  • True
  • False
  7. With the inverted dropout technique, at test time:
  • You apply dropout (randomly eliminating units) but keep the 1/keep_prob factor in the calculations used in training.
  • You do not apply dropout (do not randomly eliminate units), but keep the 1/keep_prob factor in the calculations used in training.
  • You apply dropout (randomly eliminating units) and do not keep the 1/keep_prob factor in the calculations used in training.
  • You do not apply dropout (do not randomly eliminate units) and do not keep the 1/keep_prob factor in the calculations used in training.
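The point of inverted dropout is that the 1/keep_prob rescaling happens during training, so at test time you neither drop units nor rescale. A minimal sketch on a single activation matrix (hypothetical helper names):

```python
import numpy as np

def forward_with_dropout(a, keep_prob):
    # Training time (inverted dropout): randomly eliminate units,
    # then divide by keep_prob so the expected activation is unchanged.
    d = np.random.rand(*a.shape) < keep_prob  # boolean dropout mask
    a = a * d          # zero out dropped units
    a = a / keep_prob  # the 1/keep_prob factor lives in training only
    return a

def forward_at_test(a):
    # Test time: no dropout and no 1/keep_prob factor.
    return a
```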
  8. Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply)
  • Increasing the regularization effect
  • Reducing the regularization effect
  • Causing the neural network to end up with a higher training set error
  • Causing the neural network to end up with a lower training set error
  9. Which of the following actions increase the regularization of a model? (Check all that apply)
  • Decrease the value of the hyperparameter lambda.
  • Decrease the value of keep_prob in dropout.
    Correct. When decreasing the keep_prob value, the probability that a node gets discarded during training is higher, thus increasing the regularization effect.
  • Increase the value of the hyperparameter lambda.
    Correct. When increasing the hyperparameter lambda, we increase the weight of the L2 penalty term.
  • Increase the value of keep_prob in dropout.
  • Use Xavier initialization.
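To make the lambda direction concrete, here is a minimal sketch (hypothetical l2_penalty helper, with lambd spelled as in the course notebooks) of the L2 term that lambda scales; a larger lambda means a larger penalty, while a smaller keep_prob means more dropped units, and both increase regularization:

```python
import numpy as np

def l2_penalty(weights, lambd, m):
    # L2 regularization term added to the cost:
    # (lambd / (2 * m)) * sum over layers of the squared Frobenius norm of W.
    return (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)

# Increasing lambd makes this penalty larger (more regularization);
# decreasing keep_prob in dropout drops more units (also more regularization).
```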
  10. Which of the following is the correct expression to normalize the input $\mathbf{x}$?
  • $x = \frac{x - \mu}{\sigma}$
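A minimal NumPy sketch (hypothetical normalize_inputs helper) of this normalization formula, applied per feature:

```python
import numpy as np

def normalize_inputs(X):
    # X has shape (n_features, m_examples); subtract the per-feature mean
    # and divide by the per-feature standard deviation: x = (x - mu) / sigma.
    mu = np.mean(X, axis=1, keepdims=True)
    sigma = np.std(X, axis=1, keepdims=True)
    return (X - mu) / sigma
```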