Coursera - Andrew Ng - Deep Learning - Neural Networks and Deep Learning - Week 3 Quiz

This article covers the Week 3 quiz (Shallow Neural Networks) from Course 1 of Andrew Ng's Deep Learning specialization on Coursera, Neural Networks and Deep Learning, including the questions and answer explanations.
 

F: correct.

C: incorrect; training examples are denoted with round parentheses ().

As seen in lecture, the output of tanh is between -1 and 1; it thus centers the data, which makes learning simpler for the next layer.
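A minimal numpy sketch (my own illustration, not part of the quiz) showing that tanh activations are roughly zero-centered while sigmoid activations are not:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(10000)          # pre-activations, symmetric around 0

tanh_out = np.tanh(z)                   # values in (-1, 1)
sigmoid_out = 1 / (1 + np.exp(-z))      # values in (0, 1)

print(np.mean(tanh_out))     # ~0: zero-centered, easier for the next layer
print(np.mean(sigmoid_out))  # ~0.5: all outputs positive, not centered
```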

Sigmoid outputs a value between 0 and 1, which makes it a very good choice for binary classification: classify as 0 if the output is less than 0.5 and as 1 if it is more than 0.5. This can be done with tanh as well, but it is less convenient since tanh's output is between -1 and 1.
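As a quick illustration (a sketch with made-up pre-activation values), thresholding the sigmoid output at 0.5 gives the binary prediction directly:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.array([-2.0, -0.1, 0.3, 4.0])   # example output-layer pre-activations
probs = sigmoid(z)                      # values in (0, 1)
preds = (probs > 0.5).astype(int)       # classify as 1 if output > 0.5, else 0
print(probs)   # [0.119 0.475 0.574 0.982]
print(preds)   # [0 0 1 1]
```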

We use keepdims=True to make sure that A.shape is (4, 1) and not (4,). This makes our code more rigorous.
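For example (a minimal sketch; the (4, 5) shape is just an assumed layer size and batch size):

```python
import numpy as np

A = np.random.randn(4, 5)               # activations: 4 units, 5 examples

s1 = np.sum(A, axis=1)                  # shape (4,)  -- rank-1 array, error-prone
s2 = np.sum(A, axis=1, keepdims=True)   # shape (4, 1) -- proper column vector
print(s1.shape, s2.shape)               # (4,) (4, 1)
```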

Logistic Regression doesn't have a hidden layer. If you initialize the weights to zeros, the first example x fed into the logistic regression will output zero, but the derivatives of the Logistic Regression depend on the input x (because there's no hidden layer), which is not zero. So at the second iteration, the weight values follow x's distribution and are different from each other if x is not a constant vector.
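A small sketch of one gradient step (with hypothetical numbers) showing why zero initialization is fine for logistic regression: the gradient dw = (a - y) x depends on x, so the weights already differ from each other after the first update as long as x is not constant:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([[1.0], [2.0], [3.0]])   # one training example, not constant
y = 1.0
w = np.zeros((3, 1))                  # zero initialization
b = 0.0

a = sigmoid(w.T @ x + b)              # first forward pass: a = 0.5
dw = (a - y) * x                      # gradient depends on x, hence not zero
w = w - 0.1 * dw                      # after one step the weights differ
print(w.ravel())                      # [0.05 0.1  0.15]
```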

With large initial weight values, tanh becomes flat (saturated), which causes its gradient to be close to zero. This slows down the optimization algorithm.
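A quick numeric check (my own sketch): the derivative tanh'(z) = 1 - tanh(z)^2 is nearly zero once |z| is large, so large initial weights saturate the units:

```python
import numpy as np

for z in [0.5, 2.0, 5.0, 10.0]:
    grad = 1 - np.tanh(z) ** 2          # derivative of tanh at z
    print(z, grad)
# 0.5  ~0.786
# 2.0  ~0.0707
# 5.0  ~1.8e-04
# 10.0 ~8.2e-09
```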

 

9.

F: incorrect.

H: correct; the dimension of b is (number of nodes in the current layer, 1). See the shape sketch after this list.

A: correct.

C: incorrect.
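As referenced above, a minimal shape check (with assumed layer sizes n_x = 3 inputs, n_h = 4 hidden units, n_y = 1 output):

```python
import numpy as np

n_x, n_h, n_y = 3, 4, 1                 # assumed layer sizes

W1 = np.random.randn(n_h, n_x) * 0.01   # (4, 3): (nodes in layer 1, nodes in layer 0)
b1 = np.zeros((n_h, 1))                 # (4, 1): (nodes in layer 1, 1)
W2 = np.random.randn(n_y, n_h) * 0.01   # (1, 4)
b2 = np.zeros((n_y, 1))                 # (1, 1)
print(b1.shape, b2.shape)               # (4, 1) (1, 1)
```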

Remember that Z^{[1]} and A^{[1]} are quantities computed over a batch of training examples, not only one.
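To make this concrete (a sketch with an assumed batch of m = 5 examples), the columns of Z^{[1]} and A^{[1]} correspond to the m examples, not to a single one:

```python
import numpy as np

m = 5                                   # batch of 5 training examples
X = np.random.randn(3, m)               # (n_x, m): one column per example

W1 = np.random.randn(4, 3) * 0.01
b1 = np.zeros((4, 1))

Z1 = W1 @ X + b1                        # (4, m): b1 broadcasts over the batch
A1 = np.tanh(Z1)                        # (4, m): one activation column per example
print(Z1.shape, A1.shape)               # (4, 5) (4, 5)
```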

 
