Looking at other people's answers violates the Coursera honor code; understanding a solution and getting it right yourself are two different things.
Which of the following are true? (Check all that apply.)
- a^[2](12) denotes the activation vector of the 12th layer on the 2nd training example.
- a^[2](12) denotes the activation vector of the 2nd layer for the 12th training example.
- X is a matrix in which each column is one training example.
- a^[2]_4 is the activation output by the 4th neuron of the 2nd layer
- X is a matrix in which each row is one training example.
- a^[2]_4 is the activation output of the 2nd layer for the 4th training example
- a^[2] denotes the activation vector of the 2nd layer.
Notation: a is the activation; the superscript [x] is the layer number, the superscript (y) is the training-example index, and the subscript z is the index of the unit within that layer.
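A compact restatement of the notation (my own summary of the course convention, not quoted from the quiz):

```latex
% a^{[l](i)}_j : activation of unit j in layer l, evaluated on training example i
% X stacks the m training examples as columns:
a^{[l](i)}_j , \qquad
X = \bigl[\, x^{(1)} \;\; x^{(2)} \;\; \cdots \;\; x^{(m)} \,\bigr] \in \mathbb{R}^{n_x \times m}
```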
The tanh activation usually works better than sigmoid activation function for hidden units because the mean of its output is closer to zero, and so it centers the data better for the next layer. True/False?
True
Clearly: tanh outputs lie in (-1, 1) while sigmoid outputs lie in (0, 1), so the mean of tanh's output is closer to zero.
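A quick numerical check (a minimal sketch; the inputs are arbitrary zero-mean random numbers, not course data):

```python
import numpy as np

# Compare the mean of sigmoid vs. tanh outputs on the same zero-centered inputs.
np.random.seed(0)
z = np.random.randn(10000)           # zero-mean pre-activations

sigmoid_out = 1 / (1 + np.exp(-z))   # values in (0, 1)
tanh_out = np.tanh(z)                # values in (-1, 1)

print(sigmoid_out.mean())            # ~0.5
print(tanh_out.mean())               # ~0.0, i.e. better centered for the next layer
```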
Which of these is a correct vectorized implementation of forward propagation for layer l, where 1 ≤ l ≤ L?
Keep each layer's quantities straight: Z^[l] is the current layer's linear output, A^[l] is the current layer's activation, and W^[l], b^[l] are the current layer's parameters; current-layer output = current-layer weights × previous-layer output + current-layer bias, i.e. Z^[l] = W^[l] A^[l-1] + b^[l] and A^[l] = g^[l](Z^[l]).
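A minimal sketch of that rule for a single layer (the function name and the default tanh activation are my choices, not part of the quiz):

```python
import numpy as np

def forward_layer(A_prev, W, b, g=np.tanh):
    """One vectorized forward-propagation step for layer l:
    Z^[l] = W^[l] A^[l-1] + b^[l],  A^[l] = g^[l](Z^[l])

    A_prev : (n_prev, m)      activations of the previous layer (A^[0] = X)
    W      : (n_curr, n_prev) weights of the current layer
    b      : (n_curr, 1)      bias, broadcast across the m example columns
    """
    Z = W @ A_prev + b
    A = g(Z)
    return Z, A
```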
You are building a binary classifier for recognizing cucumbers (y=1) vs. watermelons (y=0). Which one of these activation functions would you recommend using for the output layer?
- ReLU
- Leaky ReLU
- sigmoid
- tanh
For binary classification use sigmoid on the output layer: its output lies in (0, 1) and can be read as the probability that y = 1.
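Sketch of the output layer (the numbers are hypothetical, just to show the probability/threshold interpretation):

```python
import numpy as np

z_out = np.array([[2.3, -0.7, 0.1]])    # hypothetical output-layer pre-activations for 3 examples
y_hat = 1 / (1 + np.exp(-z_out))        # sigmoid -> values in (0, 1), read as P(y = 1 | x)
prediction = (y_hat > 0.5).astype(int)  # 1 = cucumber, 0 = watermelon
print(y_hat)
print(prediction)
```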
Consider the following code. What will be B.shape? (If you're not sure, feel free to run this in python to find out.)
import numpy as np
A = np.random.randn(4,3)
B = np.sum(A, axis = 1, keepdims = True)
- (1, 3)
- (4, 1)
- (, 3)
- (4, )
Shapes like (, 3) and (4, ) come up very easily; these rank-1 arrays have an ambiguous size, so look carefully whenever you use them. In most cases nothing goes wrong, but in rare cases there are errors: either the sizes don't match, or the computation runs but the final result is wrong.
axis=0 sums down each column; axis=1 sums across each row.
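To make the axis/keepdims behaviour concrete (a runnable sketch using the same A as the quiz):

```python
import numpy as np

A = np.random.randn(4, 3)

print(np.sum(A, axis=1, keepdims=True).shape)  # (4, 1)  column vector -> the quiz's B
print(np.sum(A, axis=1).shape)                 # (4,)    rank-1 array, easy to misuse
print(np.sum(A, axis=0, keepdims=True).shape)  # (1, 3)  axis=0 sums down each column
print(np.sum(A, axis=0).shape)                 # (3,)    rank-1 array
```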
Suppose you have built a neural network. You decide to initialize the weights and biases to be zero. Which of the following statements is true?
- Each neuron in the first hidden layer will perform the same computation. So even after multiple iterations of gradient descent each neuron in the layer will be computing the same thing as other neurons.
- Each neuron in the first hidden layer will perform the same computation in the first iteration. But after one iteration of gradient descent they will learn to compute different things because we have “broken symmetry”.
- Each neuron in the first hidden layer will compute the same thing, but neurons in different layers will compute different things, thus we have accomplished “symmetry breaking” as described in lecture.
- The first hidden layer’s neurons will perform different computations from each other even in the first iteration; their parameters will thus keep evolving in their own way.
This is the symmetry problem: with identical inputs and identical computations there is nothing to make the neurons differ, so random initialization is needed to break the symmetry.
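A small demonstration of the symmetry (the layer sizes here are hypothetical, not the quiz's network):

```python
import numpy as np

n_x, n_h, m = 3, 4, 5                    # hypothetical: 3 inputs, 4 hidden units, 5 examples
X = np.random.randn(n_x, m)

# All-zero initialization: every hidden unit computes exactly the same thing,
# and their gradients are identical too, so they stay identical after updates.
W1, b1 = np.zeros((n_h, n_x)), np.zeros((n_h, 1))
A1 = np.tanh(W1 @ X + b1)
print(np.allclose(A1, A1[0]))            # True: all rows (units) are equal

# Small random initialization breaks the symmetry.
W1 = np.random.randn(n_h, n_x) * 0.01
A1 = np.tanh(W1 @ X + b1)
print(np.allclose(A1, A1[0]))            # False
```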
Logistic regression’s weights w should be initialized randomly rather than to all zeros, because if you initialize to all zeros, then logistic regression will fail to learn a useful decision boundary because it will fail to “break symmetry”, True/False?
False
Logistic regression has no hidden layer. If you initialize the weights to zero, the first example x fed into logistic regression outputs zero, but the derivatives of logistic regression depend on the input x (because there is no hidden layer), and x is not zero. So on the second iteration the weight values follow the distribution of x, and they differ from one another as long as x is not a constant vector.
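A sketch of the first update (the input values and the y label are made up; the point is that dw = x * (a - y) is non-zero even when w starts at zero):

```python
import numpy as np

x = np.array([[1.5], [-2.0], [0.3]])    # (n_x, 1), not a constant/zero vector
y = 1
w, b = np.zeros((3, 1)), 0.0            # zero initialization

a = 1 / (1 + np.exp(-(w.T @ x + b)))    # = 0.5 on the first iteration
dw = x * (a - y)                        # gradient follows x, so components differ
db = a - y
print(dw.ravel(), db)                   # non-zero, different per component
```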
You have built a network using the tanh activation for all the hidden units. You initialize the weights to relatively large values, using np.random.randn(..,..)*1000. What will happen?
- This will cause the inputs of the tanh to also be very large, causing the units to be “highly activated” and thus speed up learning compared to if the weights had to start from small values.
- This will cause the inputs of the tanh to also be very large, thus causing gradients to also become large. You therefore have to set α to be very small to prevent divergence; this will slow down learning.
- It doesn’t matter. So long as you initialize the weights randomly gradient descent is not affected by whether the weights are large or small.
- This will cause the inputs of the tanh to also be very large, thus causing gradients to be close to zero. The optimization algorithm will thus become slow.
To finish training faster you would like large gradients, but for tanh the steep region is around zero, so the weights need small initial values (multiply by 0.01, not by 1000); with weights this large, tanh saturates and the gradients are close to zero, so optimization becomes slow.
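A quick check of the saturation (tanh'(z) = 1 - tanh(z)^2, which vanishes for large |z|; the layer sizes are arbitrary, only the *1000 scale mirrors the question):

```python
import numpy as np

x = np.random.randn(3, 1)
for scale in (0.01, 1000):
    W = np.random.randn(4, 3) * scale
    z = W @ x
    grad = 1 - np.tanh(z) ** 2        # derivative of tanh at the pre-activations
    print(scale, grad.ravel())        # ~1 for small weights, ~0 (saturated) for *1000
```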
Consider the following 1 hidden layer neural network (2 input features, 4 hidden units, and a single output unit):
Which of the following statements are True? (Check all that apply).
- W^[1] will have shape (2, 4)
- b^[1] will have shape (4, 1)
- W^[1] will have shape (4, 2)
- b^[1] will have shape (2, 1)
- W^[2] will have shape (1, 4)
- W^[2] will have shape (4, 1)
- b^[2] will have shape (4, 1)
- b^[2] will have shape (1, 1)
Same as question 1: be clear about what each index means ([x] = layer number, (y) = example index, z = unit index within that layer); the shapes follow from Z^[l] = W^[l] A^[l-1] + b^[l].
The shapes depend on the number of units in the previous layer and in the current layer: W^[l] is (n^[l], n^[l-1]) and b^[l] is (n^[l], 1). Note that to make the code run faster we vectorize, stacking the parameters into matrices; if the size relationships are unclear, draw the network (see the sketch below).
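Checking the shapes in code (a sketch for this 2-4-1 network; the *0.01 scale is the usual initialization convention, not something the quiz specifies):

```python
import numpy as np

n_x, n_h, n_y = 2, 4, 1                  # 2 inputs, 4 hidden units, 1 output

W1 = np.random.randn(n_h, n_x) * 0.01    # (4, 2)
b1 = np.zeros((n_h, 1))                  # (4, 1)
W2 = np.random.randn(n_y, n_h) * 0.01    # (1, 4)
b2 = np.zeros((n_y, 1))                  # (1, 1)

assert W1.shape == (4, 2) and b1.shape == (4, 1)
assert W2.shape == (1, 4) and b2.shape == (1, 1)
```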
In the same network as the previous question, what are the dimensions of Z^[1] and A^[1]?
- Z^[1] and A^[1] are (4, m)
- Z^[1] and A^[1] are (4, 1)
- Z^[1] and A^[1] are (4, 2)
- Z^[1] and A^[1] are (1, 4)
Z^[1] = W^[1] X + b^[1]; with the m training examples stacked as the columns of X, both Z^[1] and A^[1] have shape (4, m).
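The corresponding forward pass over a batch (a sketch; m = 7 is an arbitrary number of examples):

```python
import numpy as np

m = 7
X = np.random.randn(2, m)                # each column is one training example
W1 = np.random.randn(4, 2) * 0.01
b1 = np.zeros((4, 1))

Z1 = W1 @ X + b1                         # (4, 2) @ (2, m) + (4, 1) -> (4, m)
A1 = np.tanh(Z1)                         # same shape as Z1
print(Z1.shape, A1.shape)                # (4, 7) (4, 7), i.e. (4, m)
```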