Andrew Ng's "Neural Networks and Deep Learning": Week 2 Quiz

Question 1

What does a neuron compute?

【A】 A neuron computes an activation function followed by a linear function ($z = Wx + b$)

【B】 A neuron computes a linear function ($z = Wx + b$) followed by an activation function

【C】 A neuron computes a function $g$ that scales the input $x$ linearly ($Wx + b$)

【D】 A neuron computes the mean of all features before applying the output to an activation function

Note: The output of a neuron is $a = g(Wx + b)$, where $g$ is the activation function (sigmoid, tanh, ReLU, ...).

Answer: B
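As a sketch of answer B, a single neuron in NumPy first applies the linear step, then the activation. The sigmoid and the tiny weight values below are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(z):
    # Activation function g, applied element-wise
    return 1.0 / (1.0 + np.exp(-z))

def neuron(W, x, b):
    # Linear step z = Wx + b first, then activation a = g(z)
    z = np.dot(W, x) + b
    return sigmoid(z)

W = np.array([[0.5, -0.5]])   # weights for 2 input features
x = np.array([[1.0], [2.0]])  # one example as a column vector
b = np.array([[0.0]])
a = neuron(W, x, b)           # z = -0.5, so a = sigmoid(-0.5)
```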


Question 2

Which of these is the "Logistic Loss"?

【A】 $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = |y^{(i)} - \hat{y}^{(i)}|^2$

【B】 $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = \max(0,\, y^{(i)} - \hat{y}^{(i)})$

【C】 $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = |y^{(i)} - \hat{y}^{(i)}|$

【D】 $L^{(i)}(\hat{y}^{(i)}, y^{(i)}) = -\left(y^{(i)} \log \hat{y}^{(i)} + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)})\right)$

Note: We are using a cross-entropy loss function.

Answer: D
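A minimal sketch of option D for a single prediction; the probability values 0.9 and 0.1 below are just illustrative:

```python
import numpy as np

def logistic_loss(y_hat, y):
    # Cross-entropy loss for one example (option D)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# A confident correct prediction gives a small loss;
# a confident wrong prediction gives a large loss.
low = logistic_loss(0.9, 1)   # -log(0.9) ~ 0.105
high = logistic_loss(0.1, 1)  # -log(0.1) ~ 2.303
```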


Question 3

Suppose img is a (32, 32, 3) array, representing a 32x32 image with 3 color channels: red, green and blue. How do you reshape this into a column vector?

【A】 x = img.reshape((32 * 32 * 3, 1))

【B】 x = img.reshape((1, 32 * 32, 3))

【C】 x = img.reshape((3, 32 * 32))

【D】 x = img.reshape((32 * 32, 3))

Answer: A
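Option A can be checked directly: only it produces a single column of 32 * 32 * 3 = 3072 entries.

```python
import numpy as np

img = np.random.randn(32, 32, 3)   # height x width x channels
x = img.reshape((32 * 32 * 3, 1))  # flatten into one column vector
```

reshape preserves the element order, so the first entry of x is img[0, 0, 0].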


Question 4

Consider the two following random arrays "a" and "b":

a = np.random.randn(2, 3) # a.shape = (2, 3)
b = np.random.randn(2, 1) # b.shape = (2, 1)
c = a + b

What will be the shape of "c"?

Note: b (a column vector) is broadcast, i.e. copied 3 times, so that it can be added to each column of a. Therefore, c.shape = (2, 3).

Answer: c.shape = (2, 3)
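The broadcasting described in the note can be verified by comparing against an explicitly tiled copy of b:

```python
import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b  # b is broadcast across the 3 columns of a

# Equivalent explicit version: copy b three times, then add
c_explicit = a + np.tile(b, (1, 3))
```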

Question 5

Consider the two following random arrays "a" and "b":

a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
c = a * b

What will be the shape of "c"?

Note: The "*" operator indicates element-wise multiplication, which requires the two arrays to have compatible (broadcastable) shapes. (4, 3) and (3, 2) are not compatible, so this raises an error.

Answer: The computation cannot happen because the sizes don't match. It's going to be an error!
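The error can be demonstrated directly; NumPy raises a ValueError when the shapes cannot broadcast:

```python
import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)
try:
    c = a * b  # shapes (4, 3) and (3, 2) are not broadcastable
    failed = False
except ValueError:
    failed = True  # element-wise "*" rejects these shapes
```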


Question 6

Suppose you have $n_x$ input features per example. Recall that $X = [x^{(1)}, x^{(2)}, \dots, x^{(m)}]$. What is the dimension of X?

Note: A simple way to validate this is to use the formula $Z^{(l)} = W^{(l)} A^{(l)}$. When $l = 1$, we have:

$A^{(1)} = X$

$X.\text{shape} = (n_x, m)$

$Z^{(1)}.\text{shape} = (n^{(1)}, m)$

$W^{(1)}.\text{shape} = (n^{(1)}, n_x)$

Answer: $(n_x, m)$
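The shape bookkeeping in the note can be checked numerically; the sizes n_x = 4, m = 10, and the first-layer width n1 = 5 below are arbitrary illustrative choices:

```python
import numpy as np

n_x, m, n1 = 4, 10, 5           # arbitrary sizes for illustration
X = np.random.randn(n_x, m)     # each column is one example x^(i)
W1 = np.random.randn(n1, n_x)
Z1 = np.dot(W1, X)              # (n1, n_x) @ (n_x, m) -> (n1, m)
```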


Question 7

Recall that np.dot(a, b) performs a matrix multiplication on a and b, whereas a * b performs an element-wise multiplication. Consider the two following random arrays "a" and "b":

a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a, b)

What is the shape of c?

Answer: c.shape = (12288, 45); this is a simple matrix multiplication example.
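As a sanity check: in a matrix product the inner dimensions must match, and the result takes the outer dimensions.

```python
import numpy as np

a = np.random.randn(12288, 150)
b = np.random.randn(150, 45)
c = np.dot(a, b)  # inner dims (150) match; result is (12288, 45)
```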


Question 8

Consider the following code snippet:

# a.shape = (3,4)
# b.shape = (4,1)
for i in range(3):
    for j in range(4):
        c[i][j] = a[i][j] + b[j]

How do you vectorize this?

Answer:

c = a + b.T
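The loop and the vectorized form can be compared directly: b.T has shape (1, 4) and broadcasts over the 3 rows of a, matching what the double loop computes.

```python
import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

# Loop version from the question
c_loop = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c_loop[i][j] = a[i][j] + b[j]

# Vectorized version: b.T is (1, 4) and broadcasts over the rows
c_vec = a + b.T
```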

Question 9

Consider the following code:

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b

What will c be?

Note: This invokes broadcasting: b is copied three times to become (3, 3), and * is an element-wise product, so c.shape = (3, 3).

Answer: c.shape = (3, 3)
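Unlike Question 5, these shapes are broadcastable: (3, 1) stretches to (3, 3), so the element-wise product succeeds.

```python
import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b  # b broadcasts to (3, 3) before the element-wise product

# Row i of c is row i of a scaled by the scalar b[i, 0]
```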


Question 10

Consider the following computation graph. What is the output J?

[Computation graph: $u = ab$, $v = ac$, $w = b + c$, $J = u + v - w$]

Answer: $J = u + v - w = ab + ac - b - c = (a - 1)(b + c)$
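The algebraic simplification in the answer can be verified numerically; the values of a, b, c below are arbitrary:

```python
# Arbitrary test values for the graph inputs
a, b, c = 2.0, 3.0, 4.0

# Forward pass through the computation graph
u = a * b        # 6.0
v = a * c        # 8.0
w = b + c        # 7.0
J = u + v - w    # 7.0

# The simplified closed form from the answer
J_simplified = (a - 1) * (b + c)
```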
