Note
This is my personal note from the 4th week of studying the course neural-networks-deep-learning; the copyright belongs to deeplearning.ai.
01_deep-neural-network
Welcome to the fourth week of this course. By now, you've seen forward propagation and back propagation in the context of a neural network with a single hidden layer, as well as logistic regression, and you've learned about vectorization and when it's important to initialize the weights randomly. If you've done the past couple of weeks' homework, you've also implemented and seen some of these ideas work for yourself. So by now, you've actually seen most of the ideas you need to implement a deep neural network. What we're going to do this week is take those ideas and put them together so that you'll be able to implement your own deep neural network. Because this week's programming exercise is longer and is simply more work, I'm going to keep the videos for this week shorter, so you can get through them a little more quickly and then have more time to do a significant programming exercise at the end, which I hope will leave you having built a deep neural network that you feel proud of.
But over the last several years, the AI and machine learning community has realized that there are functions that very deep neural networks can learn which shallower models are often unable to. Although for any given problem it might be hard to predict in advance exactly how deep a network you would want, it would be reasonable to try logistic regression, then one and two hidden layers, and view the number of hidden layers as another hyperparameter that you could try a variety of values of, and evaluate on cross-validation data, or on your development set. We'll see more about that later as well.
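The depth search described above can be sketched as follows. This is my own toy code, not the course's: the helper name `train_and_dev_score`, the XOR-style dataset, and all hyperparameters are illustrative assumptions. It trains small numpy networks with zero, one, and two hidden layers and scores each candidate depth on a held-out development set.

```python
# Toy sketch: treating the number of hidden layers as a hyperparameter
# and picking the depth that scores best on a held-out dev set.
# All names, data, and hyperparameters here are illustrative assumptions.
import numpy as np

def train_and_dev_score(X_tr, y_tr, X_dev, y_dev, hidden, iters=2000, lr=0.5, seed=0):
    """Train a tiny ReLU/sigmoid net with the given hidden-layer sizes
    by full-batch gradient descent; return dev-set accuracy."""
    rng = np.random.default_rng(seed)
    dims = [X_tr.shape[0]] + list(hidden) + [1]
    W = [rng.standard_normal((dims[l + 1], dims[l])) * np.sqrt(2.0 / dims[l])
         for l in range(len(dims) - 1)]
    b = [np.zeros((dims[l + 1], 1)) for l in range(len(dims) - 1)]
    m = X_tr.shape[1]
    for _ in range(iters):
        # forward pass: ReLU hidden layers, sigmoid output
        As, Zs = [X_tr], []
        for l in range(len(W)):
            Z = W[l] @ As[-1] + b[l]
            Zs.append(Z)
            As.append(1 / (1 + np.exp(-Z)) if l == len(W) - 1 else np.maximum(0, Z))
        # backward pass for the cross-entropy loss
        dZ = As[-1] - y_tr
        for l in reversed(range(len(W))):
            dW = dZ @ As[l].T / m
            db = dZ.sum(axis=1, keepdims=True) / m
            if l > 0:
                dZ = (W[l].T @ dZ) * (Zs[l - 1] > 0)
            W[l] -= lr * dW
            b[l] -= lr * db
    # dev-set accuracy
    A = X_dev
    for l in range(len(W)):
        Z = W[l] @ A + b[l]
        A = 1 / (1 + np.exp(-Z)) if l == len(W) - 1 else np.maximum(0, Z)
    return float(np.mean((A > 0.5) == y_dev))

# XOR-style data: the label depends on the product of the two inputs,
# which no linear model can separate
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(2, 600))
y = (X[0] * X[1] > 0).astype(float).reshape(1, -1)
X_tr, y_tr, X_dev, y_dev = X[:, :400], y[:, :400], X[:, 400:], y[:, 400:]

# candidate depths: no hidden layer (logistic regression), one, two
for hidden in ([], [8], [8, 8]):
    score = train_and_dev_score(X_tr, y_tr, X_dev, y_dev, hidden)
    print(f"hidden layers {hidden}: dev accuracy {score:.2f}")
```

On a problem like this, the zero-hidden-layer candidate (plain logistic regression) tends to do poorly, which is exactly the kind of signal the dev-set comparison is meant to surface.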
02_forward-propagation-in-a-deep-network
In the last video we described what a deep neural network is, and also talked about the notation we use to describe such networks. In this video you'll see how you can perform forward propagation in a deep network.
One of the ways to increase your odds of having a bug-free implementation is to think very systematically and carefully about the matrix dimensions you're working with. When I'm trying to debug my own code, I often pull out a piece of paper and just think carefully through the dimensions of the matrices I'm working with. Let's see how you can do that in the next video.
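As a minimal sketch of forward propagation in an L-layer network (my own illustrative code, with made-up layer sizes), each layer just repeats Z^[l] = W^[l] A^[l-1] + b^[l], A^[l] = g(Z^[l]), with ReLU for the hidden layers and a sigmoid at the output:

```python
# Minimal numpy sketch of forward propagation through an L-layer net.
# Layer sizes and helper names are my own examples, not the course's code.
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(layer_dims, seed=0):
    """W[l] has shape (n[l], n[l-1]); b[l] has shape (n[l], 1)."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f"W{l}"] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

def forward(X, params, layer_dims):
    """Repeat Z = W A_prev + b, A = g(Z) for each layer."""
    A = X
    L = len(layer_dims) - 1
    for l in range(1, L + 1):
        Z = params[f"W{l}"] @ A + params[f"b{l}"]
        A = sigmoid(Z) if l == L else relu(Z)
    return A

layer_dims = [3, 4, 4, 1]  # n[0]..n[3]: 3 inputs, two hidden layers, 1 output
params = init_params(layer_dims)
X = np.random.default_rng(1).standard_normal((3, 5))  # 5 training examples
AL = forward(X, params, layer_dims)
print(AL.shape)  # (1, 5): one output activation per example
```

Note that with the examples stacked as columns of X, the same code handles one example or a whole batch without change.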
03_getting-your-matrix-dimensions-right
When implementing a deep neural network, one of the debugging tools I often use to check the correctness of my code is to pull out a piece of paper and just work through the dimensions of the matrices I'm working with. So let me show you how to do that, since I hope this will make it easier for you to implement your deep nets as well.
one training example
\because \text{the dimensions of } x \ (a^{[0]}) \text{: } (n^{[0]}, 1)
\therefore
W^{[l]}: (n^{[l]}, n^{[l-1]})
b^{[l]}: (n^{[l]}, 1)
dW^{[l]}: (n^{[l]}, n^{[l-1]})
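The paper-and-pencil dimension check can also be scripted. In this sketch the layer sizes are arbitrary examples of mine; for sizes n = [n[0], ..., n[L]], each W^[l] must be (n[l], n[l-1]) and each b^[l] must be (n[l], 1), and the gradients dW^[l], db^[l] share those shapes:

```python
# Scripting the dimension check: for layer sizes n = [n[0], ..., n[L]],
# W[l] is (n[l], n[l-1]) and b[l] is (n[l], 1); dW[l] and db[l]
# have the same shapes as W[l] and b[l]. The sizes below are arbitrary.
import numpy as np

n = [2, 3, 5, 1]  # example: n[0] = 2 inputs, L = 3 layers
shapes = {}
for l in range(1, len(n)):
    W = np.zeros((n[l], n[l - 1]))
    b = np.zeros((n[l], 1))
    shapes[f"W{l}"], shapes[f"b{l}"] = W.shape, b.shape
    print(f"layer {l}: W{l} {W.shape}, b{l} {b.shape}")
```

An assertion like this at the top of a forward-propagation function catches shape bugs immediately instead of letting broadcasting silently produce the wrong matrix.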