Neural Networks and Deep Learning
王彩旗 edwardwangcq.com
Ian Goodfellow
CUDA: CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers and engineers to use a CUDA-enabled graphics processing unit for general-purpose processing – an approach term… (2021-07-31)
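Not from the interview notes — just a minimal sketch of one common way Python code ends up running on a CUDA GPU, using PyTorch (the library choice here is my own assumption):

```python
import torch  # PyTorch dispatches tensor operations to CUDA kernels when a GPU is present

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(2000, 2000, device=device)   # allocated on the GPU if one is available
y = x @ x                                    # executed by a CUDA matrix-multiply kernel on "cuda"
print(device, y.shape)
```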
Pieter Abbeel
Deep reinforcement learning; reinforcement learning; autonomous helicopter flight; ImageNet (Geoffrey Hinton, Toronto team); AlexNet. Supervised learning is about learning an input-to-output mapping. Reinforcement learning: where does the data even come fr… (2021-07-28)
Geoffrey Hinton
The notes from Andrew Ng's interview with Geoffrey Hinton, the godfather of deep learning. You can definitely see many advanced techniques in deep learning and some good advice on how to get started with your own deep learning! Restricted Boltzmann… (2021-07-27)
Deep Neural Networks - Parameters vs Hyperparameters
Being effective in developing your deep NN requires that you organize not only your parameters well, but also your hyperparameters. So, what are hyperparameters? Parameters; hyperparameters; learning rate: it determines how the parameters evolve. N… (2021-07-25)
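To make the distinction concrete, here is a small numpy sketch (my own illustration, not code from the post): the hyperparameters are chosen up front, while the parameters are what gradient descent updates.

```python
import numpy as np

# Hyperparameters: chosen by the practitioner before training.
learning_rate = 0.01      # alpha: controls how fast the parameters evolve
num_iterations = 1000     # number of gradient-descent steps
hidden_units = 4          # size of the single hidden layer

# Parameters: learned from the data by gradient descent.
n_x = 3                   # number of input features (illustrative)
W1 = np.random.randn(hidden_units, n_x) * 0.01
b1 = np.zeros((hidden_units, 1))

# One schematic update step: the hyperparameter alpha controls how the
# parameters W1, b1 change, but is never updated itself.
dW1, db1 = np.zeros_like(W1), np.zeros_like(b1)   # gradients would come from backprop
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
```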
Deep Neural Networks - Building blocks of deep neural networks
Take the deep NN of Figure-1 as the example. Figure-2 shows the building blocks of this deep NN, generalizing the picture to an NN with L layers. (2021-07-22)
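A minimal numpy sketch of one forward/backward building-block pair for a ReLU layer, in the spirit of Figure-2 (function names and the cache layout are my own assumptions):

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def linear_activation_forward(A_prev, W, b):
    # Forward block for layer l: Z[l] = W[l] A[l-1] + b[l], A[l] = g(Z[l]).
    Z = W @ A_prev + b
    A = relu(Z)
    cache = (A_prev, W, b, Z)   # cached values consumed by the matching backward block
    return A, cache

def linear_activation_backward(dA, cache):
    # Backward block for layer l: takes dA[l] and the forward cache,
    # produces dA[l-1], dW[l], db[l].
    A_prev, W, b, Z = cache
    m = A_prev.shape[1]
    dZ = dA * (Z > 0)                          # ReLU derivative
    dW = (dZ @ A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db
```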
Deep Neural Networks - Forward Propagation in a Deep Network
Take the deep NN in Figure-1 as an example. The post writes out forward propagation concretely, first for a single training example x and then for multiple (m) training examples. (2021-07-21)
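For reference, a vectorized forward-propagation sketch following the recursion that section describes, Z[l] = W[l] A[l-1] + b[l] and A[l] = g(Z[l]); the layer sizes and the ReLU/sigmoid choice below are illustrative assumptions:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def L_model_forward(X, parameters, L):
    # A[0] = X, with one training example per column; then for l = 1..L:
    #   Z[l] = W[l] A[l-1] + b[l],  A[l] = g(Z[l])
    A = X
    for l in range(1, L + 1):
        W, b = parameters["W" + str(l)], parameters["b" + str(l)]
        Z = W @ A + b
        A = np.maximum(0, Z) if l < L else sigmoid(Z)   # ReLU hidden layers, sigmoid output
    return A

# Tiny usage example with hypothetical layer sizes [2, 3, 1]:
rng = np.random.default_rng(0)
params = {"W1": rng.standard_normal((3, 2)) * 0.01, "b1": np.zeros((3, 1)),
          "W2": rng.standard_normal((1, 3)) * 0.01, "b2": np.zeros((1, 1))}
X = rng.standard_normal((2, 5))                 # 2 features, 5 training examples
print(L_model_forward(X, params, L=2).shape)    # (1, 5)
```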
Deep Neural Networks - Deep L-layer Neural network
Notes from studying Andrew Ng's Coursera class "Neural Networks & Deep Learning", section 4.1 "Deep L-layer Neural network". It shows what a deep NN looks like and the notation used to describe and compute it. Sharing it with you and hoping it helps! (2021-07-20)
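A quick illustration of the notation with made-up layer sizes: L is the number of layers, n[l] the number of units in layer l, so W[l] has shape (n[l], n[l-1]) and b[l] has shape (n[l], 1). A minimal sketch:

```python
import numpy as np

# Hypothetical 4-layer network (L = 4): n[0] = 3 input features, n[4] = 1 output unit.
layer_dims = [3, 5, 5, 3, 1]      # [n0, n1, n2, n3, n4]
L = len(layer_dims) - 1           # the input layer is not counted

parameters = {}
for l in range(1, L + 1):
    # W[l] has shape (n[l], n[l-1]); b[l] has shape (n[l], 1).
    parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
    parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))

for l in range(1, L + 1):
    print(l, parameters["W" + str(l)].shape, parameters["b" + str(l)].shape)
```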
One hidden layer Neural Network - Random Initialization
When you train your NN, it's important to initialize the weights (the W matrices, etc.) randomly. For logistic regression, it's OK to initialize the weights to 0; but for an NN, if you initialize the weights all to 0 and then apply gradient descent, it won't work! (2021-07-12)
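A minimal numpy sketch of the random initialization that section recommends for a one-hidden-layer NN (the helper name is my own; the small 0.01 scale follows the lecture's convention):

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y, seed=2):
    # Symmetry breaking: weights small and random, biases can stay at zero.
    # If W were all zeros, every hidden unit would compute the same function and
    # receive the same gradient, so gradient descent could never separate them.
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((n_h, n_x)) * 0.01   # small values keep tanh/sigmoid unsaturated
    b1 = np.zeros((n_h, 1))
    W2 = rng.standard_normal((n_y, n_h)) * 0.01
    b2 = np.zeros((n_y, 1))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
```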
One hidden layer Neural Network - Gradient descent for neural networks
Notes from studying Andrew Ng's Coursera class "Neural Networks & Deep Learning", section 3.9 "Gradient descent for neural networks". It shows the computation graph for an NN and how to compute back propagation when there is one training example and when there are multiple… (2021-07-12)
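A numpy sketch of the vectorized backward-propagation equations for a one-hidden-layer network with tanh hidden units and a sigmoid output (the cache/parameter dictionary layout is my own assumption):

```python
import numpy as np

def backward_propagation(X, Y, cache, parameters):
    # Gradients for a one-hidden-layer NN, following the vectorized equations
    # from the lecture:
    #   dZ2 = A2 - Y
    #   dW2 = dZ2 A1^T / m,           db2 = mean of dZ2 over examples
    #   dZ1 = W2^T dZ2 * (1 - A1^2)   (since tanh'(z) = 1 - tanh(z)^2)
    #   dW1 = dZ1 X^T / m,            db1 = mean of dZ1 over examples
    m = X.shape[1]
    A1, A2 = cache["A1"], cache["A2"]
    W2 = parameters["W2"]

    dZ2 = A2 - Y
    dW2 = (dZ2 @ A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
    dW1 = (dZ1 @ X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
```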
One hidden layer Neural Network - Derivatives of activation functions
When you implement back propagation for your NN, you need to compute the slope/derivative of the activation function. Let's take a look at how to compute the slope of those activation functions, starting with the sigmoid function (Figure-1). Following is the deriv… (2021-07-09)
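For reference, the standard derivatives that section walks through, restated here rather than taken from the post's figures:

```latex
\begin{aligned}
\text{sigmoid: } g(z) &= \frac{1}{1+e^{-z}}, & g'(z) &= g(z)\bigl(1-g(z)\bigr) = a(1-a)\\
\text{tanh: } g(z) &= \tanh(z), & g'(z) &= 1-\tanh^{2}(z) = 1-a^{2}\\
\text{ReLU: } g(z) &= \max(0,z), & g'(z) &= \begin{cases} 0, & z<0\\ 1, & z\ge 0 \end{cases}
\end{aligned}
```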
One hidden layer Neural Network - Activation functions
When building an NN, one of the choices to make is what activation function to use in the hidden layers as well as the output layer. Besides the sigmoid activation function, sometimes other choices can work much better. The tanh function: als… (2021-07-07)
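A small numpy sketch of the activation functions discussed in that section (my own illustrative implementations):

```python
import numpy as np

def sigmoid(z):                 # typical choice for the output layer in binary classification
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):                    # zero-centered; usually works better than sigmoid in hidden layers
    return np.tanh(z)

def relu(z):                    # common default for hidden layers; no saturation for z > 0
    return np.maximum(0, z)

def leaky_relu(z, alpha=0.01):  # small slope for z < 0 so the gradient never becomes exactly zero
    return np.where(z > 0, z, alpha * z)
```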
One hidden layer Neural Network - Vectorizing across multiple examples
Notes from studying Andrew Ng's Coursera class "Neural Networks & Deep Learning", section 3.4 "Vectorizing across multiple examples". It shows how to compute the NN output via vectorization when there are multiple training examples. Sharing it with… (2021-07-06)
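A minimal numpy sketch of the idea: stack the m training examples as columns of X, and one matrix product replaces the per-example loop (the sizes below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_h, m = 3, 4, 5                         # features, hidden units, examples
W1 = rng.standard_normal((n_h, n_x))
b1 = rng.standard_normal((n_h, 1))
X = rng.standard_normal((n_x, m))             # training examples stacked as columns

# Loop version: compute z[1](i) = W1 x(i) + b1 one example at a time.
Z_loop = np.zeros((n_h, m))
for i in range(m):
    Z_loop[:, i:i+1] = W1 @ X[:, i:i+1] + b1

# Vectorized version: Z[1] = W1 X + b1 (broadcasting adds b1 to every column).
Z_vec = W1 @ X + b1

print(np.allclose(Z_loop, Z_vec))   # True
```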
One hidden layer Neural Network - Computing a Neural Network‘s Output
Let's see how the Neural Network computes its output. It's like logistic regression, but repeated a lot of times! Figure-1 shows how to compute the output of logistic regression. Figure-2 shows how to compute the activation units of the hidden la… (2021-07-05)
One hidden layer Neural Network - Neural Network Representation
Notes from studying Andrew Ng's Coursera class "Neural Networks & Deep Learning", section 3.2 "Neural Network Representation". Sharing it with you and hoping it helps! Figure-1 shows the names of the different parts of a neura… (2021-07-05)
One hidden layer Neural Network - Neural Networks Overview
Let's give a quick overview of how you implement your neural network. Figure-1 (logistic regression) shows how we compute logistic regression using a computation graph. Figure-2 (neural network) shows what the Neural Network looks like. We'll u… (2021-07-05)
Basics of Neural Network Programming - Vectorizing Logistic Regression and its Gradient Computation
Let's talk about how to vectorize the implementation of logistic regression. With that, we can implement a single iteration of gradient descent over the entire training set without using even a single explicit for loop. Figure-1: in gradient descent, … (2021-06-23)
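A numpy sketch of one fully vectorized gradient-descent iteration for logistic regression, following the equations that section derives (the function name and argument layout are my own assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_iteration(w, b, X, Y, alpha):
    # One gradient-descent step on the whole training set, with no explicit for loop.
    # X: (n_x, m) examples as columns; Y: (1, m) labels; w: (n_x, 1); b: scalar.
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)        # (1, m) predictions for all m examples at once
    dZ = A - Y                      # (1, m)
    dw = (X @ dZ.T) / m             # (n_x, 1)
    db = np.sum(dZ) / m             # scalar
    return w - alpha * dw, b - alpha * db
```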
Basics of Neural Network Programming - More vectorization examples
The rule of thumb to keep in mind is: when you are programming your neural networks or logistic regression, whenever possible, avoid explicit for loops. Let's look at another example. Figure-1 shows two different ways to compute …, where u and v are vec… (2021-06-22)
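A small numpy illustration of the same rule of thumb (my own sketch, not the post's code): replace the explicit loops with a single library call.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

# Explicit for loops (slow): u[i] = sum over j of A[i, j] * v[j]
u_loop = np.zeros(n)
for i in range(n):
    for j in range(n):
        u_loop[i] += A[i, j] * v[j]

# Vectorized: one call to optimized library code.
u_vec = np.dot(A, v)
print(np.allclose(u_loop, u_vec))   # True

# The same idea applies element-wise: np.exp(v), np.maximum(v, 0), v ** 2
# each replace a for loop over the entries of v.
```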
Basics of Neural Network Programming - Vectorization
Vectorization is the art of getting rid of the for loops in your code. The ability to perform vectorization has become a key skill. Figure-1 shows two different ways to calculate… (2021-06-22)
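A re-creation of the classic loop-versus-np.dot comparison (my own sketch, not the post's exact code):

```python
import numpy as np
import time

# Computing w^T x with an explicit loop versus with np.dot; the vectorized
# version is typically orders of magnitude faster.
n = 1_000_000
rng = np.random.default_rng(0)
w, x = rng.standard_normal(n), rng.standard_normal(n)

t0 = time.time()
z_loop = 0.0
for i in range(n):
    z_loop += w[i] * x[i]
t1 = time.time()

z_vec = np.dot(w, x)
t2 = time.time()

print(f"loop: {1000*(t1-t0):.1f} ms, vectorized: {1000*(t2-t1):.1f} ms")
print(np.isclose(z_loop, z_vec))   # True
```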
Basics of Neural Network Programming - Gradient descent on m examples
Last class, you saw how to compute derivatives and implement gradient descent with respect to just one training example for logistic regression. Now, we'll do it for m training examples. Recap the cost function of logistic regression, and accordin… (2021-06-21)
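A numpy sketch of the explicit-loop version that this section writes out before vectorization is introduced: accumulate the loss and the gradients over the m examples, then average (names and structure are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradients_m_examples(w, b, X, Y):
    # Averaged cost and gradients over m examples, written with an explicit loop.
    # X: (n_x, m), Y: (1, m), w: (n_x, 1), b: scalar.
    n_x, m = X.shape
    J, dw, db = 0.0, np.zeros((n_x, 1)), 0.0
    for i in range(m):
        x_i = X[:, i:i+1]
        a_i = sigmoid((w.T @ x_i).item() + b)
        y_i = Y[0, i]
        J  += -(y_i * np.log(a_i) + (1 - y_i) * np.log(1 - a_i))
        dz  = a_i - y_i                   # dL/dz for this example
        dw += x_i * dz
        db += dz
    return J / m, dw / m, db / m          # average over the training set
```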
Basics of Neural Network Programming - Logistic Regression Gradient descent
These are the notes from studying the class Neural Networks & Deep Learning, section Logistic Regression Gradient Descent. Share it with anyone who is interested. The following two figures show how to calculate derivatives for logistic regression when one tra… (2021-06-21)
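For reference, the single-example quantities that section derives (standard results, restated here):

```latex
\begin{aligned}
z &= w^{T}x + b, \qquad a = \sigma(z), \qquad
\mathcal{L}(a, y) = -\bigl(y\log a + (1-y)\log(1-a)\bigr)\\
dz &= \frac{\partial \mathcal{L}}{\partial z} = a - y, \qquad
dw_{j} = \frac{\partial \mathcal{L}}{\partial w_{j}} = x_{j}\,dz, \qquad
db = \frac{\partial \mathcal{L}}{\partial b} = dz
\end{aligned}
```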
Basics of Neural Network Programming - Derivatives with a Computation Graph
In the last class, we worked through an example of using a computation graph to compute the function J. (2021-06-20)
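Recapping the worked chain-rule calculation, assuming the usual J = 3(a + bc) graph from the lecture:

```latex
\begin{aligned}
u &= bc, \qquad v = a + u, \qquad J = 3v\\
\frac{dJ}{dv} &= 3, \qquad \frac{dJ}{da} = \frac{dJ}{dv}\cdot\frac{dv}{da} = 3\\
\frac{dJ}{du} &= \frac{dJ}{dv}\cdot\frac{dv}{du} = 3, \qquad
\frac{dJ}{db} = \frac{dJ}{du}\cdot\frac{du}{db} = 3c, \qquad
\frac{dJ}{dc} = \frac{dJ}{du}\cdot\frac{du}{dc} = 3b
\end{aligned}
```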
Basics of Neural Network Programming - Computation Graph
The computations of a neural network are organized in terms of a forward propagation step, in which we compute the… (2021-06-20)
Basics of Neural Network Programming - More derivatives examples
Let's see more complex examples, where the slope of the function can be different at different points of the function. (2021-06-17)
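A few standard examples of functions whose slope changes from point to point, in the spirit of that section:

```latex
\begin{aligned}
f(a) &= a^{2}: \quad \frac{df}{da} = 2a \qquad (\text{slope } 4 \text{ at } a=2,\ 10 \text{ at } a=5)\\
f(a) &= a^{3}: \quad \frac{df}{da} = 3a^{2}\\
f(a) &= \ln(a): \quad \frac{df}{da} = \frac{1}{a}
\end{aligned}
```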
Basics of Neural Network Programming - Derivatives
Let's try to get an intuitive understanding of calculus and derivatives. (2021-06-17)
Basics of Neural Network Programming - Gradient Descent
Let's talk about how you can use gradient descent to train/learn the parameters w and b on your training set. (2021-06-16)
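For reference, the gradient-descent update rule that section introduces, where alpha is the learning rate:

```latex
\text{repeat until convergence: } \quad
w := w - \alpha\,\frac{\partial J(w, b)}{\partial w}, \qquad
b := b - \alpha\,\frac{\partial J(w, b)}{\partial b}
```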
Basics of Neural Network Programming - Logistic Regression cost function
For logistic regression, to train the parameters w and b, we need to define a cost function. (2021-06-11)
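For reference, the loss and cost that section defines (standard formulas, restated here):

```latex
\begin{aligned}
\mathcal{L}(\hat{y}, y) &= -\bigl(y\log\hat{y} + (1-y)\log(1-\hat{y})\bigr) && \text{(loss on a single example)}\\
J(w, b) &= \frac{1}{m}\sum_{i=1}^{m}\mathcal{L}\bigl(\hat{y}^{(i)}, y^{(i)}\bigr) && \text{(cost over the training set)}
\end{aligned}
```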
Basics of Neural Network Programming - Logistic Regression
Logistic Regression: logistic regression is a learning algorithm used in a supervised learning problem when the output labels y are all either zero or one. The goal of logistic regression is to minimize the error between its predictions and the training data. Examp… (2021-06-10)
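For reference, the logistic-regression model that section introduces (standard definition, restated here):

```latex
\hat{y} = \sigma\bigl(w^{T}x + b\bigr), \qquad
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
\hat{y} = P(y = 1 \mid x), \quad 0 \le \hat{y} \le 1
```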
Basics of Neural Network Programming - Binary Classification
Binary Classification: in a binary classification problem, the result is a discrete value output. For example: an account hacked (1) or compromised (0); a tumor malignant (1) or benign (0). Example: Cat vs Non-Cat. The goal is to train a classifier for which the input… (2021-06-09)
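A small numpy sketch of the cat/non-cat input representation described there: a 64x64x3 image is unrolled into a feature vector of dimension 12288 (the random pixels are purely illustrative):

```python
import numpy as np

# A 64x64 RGB image becomes a single feature vector x with n_x = 64 * 64 * 3 = 12288 entries.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
x = image.reshape(-1, 1)             # shape (12288, 1)
print(x.shape)                       # (12288, 1)

# A training set of m such images is stacked column-wise into X of shape (12288, m),
# with labels Y of shape (1, m) where y = 1 means "cat" and y = 0 means "non-cat".
```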
Introduction to Deep Learning - About this Course
The following are the five courses of this specialization: Neural Networks and Deep Learning; Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization; Structuring your Machine Learning Project; Convolutional Neural Networks; Natu… (2021-05-26)
Introduction to Deep Learning - Why is deep learning taking off?
If the basic technical ideas behind deep learning and neural networks have been around for decades, why are they only just now taking off? In this class, let's go over some of the main drivers behind the rise of deep learning. This will help you better spot… (2021-05-25)
Introduction to Deep Learning - Supervised Learning with Neural Networks
It turns out that, so far, almost all the economic value created by neural networks has been through supervised learning. Let's see what that means, and we'll go through some examples in this class. In supervised learning, you have some input x, and you w… (2021-05-18)
Introduction to Deep Learning - What is a Neural Network?
The term deep learning refers to training neural networks, sometimes very large neural networks. So what exactly is a neural network? Figure-1: let's start with a housing price prediction example. Say you have a data set with six houses. You know the size o… (2021-05-15)
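The single-neuron intuition behind the housing example, as a tiny sketch (the weight and bias values are made up): the price is predicted as a ReLU applied to a linear function of the size.

```python
def tiny_neuron(size_sqft, w=0.12, b=-20.0):
    # A single "neuron": a linear function of the input followed by a ReLU,
    # so the predicted price can never go negative. Coefficients are illustrative only.
    return max(0.0, w * size_sqft + b)

for s in [100, 500, 1000, 2000]:
    print(s, tiny_neuron(s))
```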