Neural Networks and Deep Learning - Week 4: deep-neural-network


Note

This is my personal note for the 4th week of the course neural-networks-deep-learning; the copyright belongs to deeplearning.ai.

01_deep-neural-network

Welcome to the fourth week of this course. By now, you've seen forward propagation and back propagation in the context of a neural network with a single hidden layer, as well as logistic regression, and you've learned about vectorization and when it's important to initialize the weights randomly. If you've done the past couple weeks' homework, you've also implemented and seen some of these ideas work for yourself. So by now, you've actually seen most of the ideas you need to implement a deep neural network. What we're going to do this week is take those ideas and put them together so that you'll be able to implement your own deep neural network. Because this week's problem exercise is longer and is just more work, I'm going to keep the videos for this week shorter so you can get through them a little bit more quickly and then have more time to do a significant problem exercise at the end, which I hope will leave you having built a deep neural network that you feel proud of.

But over the last several years, the AI and machine learning community has realized that there are functions that very deep neural networks can learn that shallower models are often unable to. Although for any given problem it might be hard to predict in advance exactly how deep a network you would want, it would be reasonable to try logistic regression, then one and then two hidden layers, and view the number of hidden layers as another hyperparameter that you could try a variety of values of and evaluate on cross-validation data, or on your development set; a minimal sketch of such a search follows below. We'll see more about that later as well.
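As a concrete illustration of treating depth as a hyperparameter, here is a minimal sketch that trains networks of increasing depth and keeps the one that scores best on a held-out development set. It uses scikit-learn's MLPClassifier and synthetic data purely as stand-ins; the course itself builds the networks from scratch.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # a non-linear target
X_train, X_dev = X[:800], X[800:]
y_train, y_dev = y[:800], y[800:]

# Treat the number of hidden layers as a hyperparameter and
# pick the value that does best on the development set.
best_depth, best_score = None, -np.inf
for depth in [1, 2, 3, 4]:
    clf = MLPClassifier(hidden_layer_sizes=(16,) * depth,
                        max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    score = clf.score(X_dev, y_dev)          # dev-set accuracy
    if score > best_score:
        best_depth, best_score = depth, score
print("best number of hidden layers on the dev set:", best_depth)
```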

02_forward-propagation-in-a-deep-network

In the last video, we described what a deep neural network is and talked about the notation we use to describe such networks. In this video, you see how you can perform forward propagation in a deep network.
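This note doesn't reproduce the equations from the video, so for reference: forward propagation computes, layer by layer, z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]} and a^{[l]} = g^{[l]}(z^{[l]}), with a^{[0]} = x. Below is a minimal NumPy sketch of the vectorized version over m examples; the layer sizes and the ReLU-then-sigmoid activation choice are my own illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_propagation(X, parameters):
    """Forward pass through an L-layer network.
    parameters holds W1..WL and b1..bL; ReLU on the hidden layers,
    sigmoid on the output layer (an illustrative choice).
    X has shape (n^[0], m) for m training examples."""
    L = len(parameters) // 2   # number of layers
    A = X                      # a^[0] = x
    for l in range(1, L + 1):
        # z^[l] = W^[l] a^[l-1] + b^[l]
        Z = parameters["W" + str(l)] @ A + parameters["b" + str(l)]
        # a^[l] = g^[l](z^[l])
        A = sigmoid(Z) if l == L else relu(Z)
    return A

# Example: a 3-layer network on m = 5 examples, layer sizes (4, 3, 2, 1).
rng = np.random.default_rng(1)
dims = [4, 3, 2, 1]
params = {}
for l in range(1, len(dims)):
    params["W" + str(l)] = rng.normal(size=(dims[l], dims[l - 1])) * 0.01
    params["b" + str(l)] = np.zeros((dims[l], 1))
print(forward_propagation(rng.normal(size=(4, 5)), params).shape)  # (1, 5)
```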

One of the ways to increase your odds of having a bug-free implementation is to think very systematically and carefully about the matrix dimensions you're working with. When I'm trying to debug my own code, I often pull out a piece of paper and just think carefully through the dimensions of the matrices I'm working with. Let's see how you can do that in the next video.

03_getting-your-matrix-dimensions-right

When implementing a deep neural network, one of the debugging tools I often use to check the correctness of my code is to pull out a piece of paper and just work through the dimensions of the matrices I'm working with. So let me show you how to do that, since I hope this will make it easier for you to implement your deep nets as well.

one training example

\because \text{the dimensions of } x \, (= a^{[0]}) \text{ are } (n^{[0]}, 1)

\therefore

W^{[l]}: (n^{[l]}, n^{[l-1]})

b^{[l]}: (n^{[l]}, 1)

dW^{[l]}: (n^{[l]}, n^{[l-1]})
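To turn the paper-and-pencil check into code, here is a minimal sketch (my own, not from the course) that asserts these shapes for every layer. Note that dW^{[l]} and db^{[l]} always share the shapes of W^{[l]} and b^{[l]}, and with m training examples Z^{[l]} and A^{[l]} generalize to (n^{[l]}, m) while the parameter shapes stay the same.

```python
import numpy as np

def check_dimensions(parameters, layer_dims):
    """Assert W^[l]: (n^[l], n^[l-1]) and b^[l]: (n^[l], 1) for every layer.
    layer_dims = [n^[0], n^[1], ..., n^[L]]."""
    L = len(layer_dims) - 1
    for l in range(1, L + 1):
        W = parameters["W" + str(l)]
        b = parameters["b" + str(l)]
        assert W.shape == (layer_dims[l], layer_dims[l - 1]), f"W{l} has shape {W.shape}"
        assert b.shape == (layer_dims[l], 1), f"b{l} has shape {b.shape}"
    # dW^[l] / db^[l] have the same shapes as W^[l] / b^[l],
    # so the same checks apply to the gradients.

# Example: an illustrative network with sizes n^[0]..n^[3] = 5, 4, 3, 1.
dims = [5, 4, 3, 1]
rng = np.random.default_rng(2)
params = {f"W{l}": rng.normal(size=(dims[l], dims[l - 1])) * 0.01 for l in range(1, 4)}
params.update({f"b{l}": np.zeros((dims[l], 1)) for l in range(1, 4)})
check_dimensions(params, dims)
print("all parameter shapes check out")
```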
