Neural Networks for Machine Learning: Lecture 3 Quiz


Warning: The hard deadline has passed. You can attempt it, but you will not get credit for it. You are welcome to try it as a learning exercise.


Question 1

Which of the following neural networks are examples of a feed-forward neural network?

Question 2

Consider a neural network with only one training case with input x = (x1, x2, …, xn) and correct output t. There is only one output neuron, which is linear, i.e. y = w^T x (notice that there are no biases). The loss function is squared error. The network has no hidden units, so the inputs are directly connected to the output neuron with weights w = (w1, w2, …, wn). We're in the process of training the neural network with the backpropagation algorithm. What will the algorithm add to wi for the next iteration if we use a step size (also known as a learning rate) of ϵ?
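The update the question asks about can be sketched directly. Assuming the conventional squared-error cost E = (1/2)(t − y)^2 (without the 1/2 factor the update gains a factor of 2), the gradient of E with respect to wi is −(t − y)xi, so gradient descent adds ϵ(t − y)xi to each weight. A minimal sketch with made-up values:

```python
# One backpropagation step for a single linear output neuron with no bias,
# assuming squared error E = (1/2) * (t - y)**2.

def backprop_step(w, x, t, eps):
    y = sum(wi * xi for wi, xi in zip(w, x))   # y = w^T x
    return [wi + eps * (t - y) * xi            # w_i += eps * (t - y) * x_i
            for wi, xi in zip(w, x)]

w = [0.5, -1.0]    # made-up weights
x = [2.0, 1.0]     # made-up input
t = 1.0            # correct output
eps = 0.1          # step size
print(backprop_step(w, x, t, eps))
```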

Question 3

Suppose we have a set of examples and Brian comes in and duplicates every example, then randomly reorders the examples. We now have twice as many examples, but no more information about the problem than we had before. If we do not remove the duplicate entries, which one of the following methods will not be affected by this change, in terms of the computer time (time in seconds, for example) it takes to come close to convergence?
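One way to reason about this: duplicating every example leaves the average gradient unchanged, but a full-batch sweep must now touch twice as many examples per update. A minimal sketch, using a one-weight linear model with squared error and made-up data:

```python
# Check that duplicating the dataset leaves the *mean* gradient unchanged,
# while doubling the number of examples per full-batch sweep.
data = [(2.0, 1.0), (1.0, 0.0), (3.0, 2.0)]   # made-up (x, t) pairs
w = 0.4                                        # made-up weight

def mean_grad(examples, w):
    # gradient of (1/2)(t - w*x)^2 w.r.t. w, averaged over the set
    return sum(-(t - w * x) * x for x, t in examples) / len(examples)

doubled = data + data
print(mean_grad(data, w), mean_grad(doubled, w))   # agree up to float rounding
print(len(data), len(doubled))                     # twice the work per sweep
```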

Question 4

Consider a linear output unit versus a logistic output unit for a feed-forward network with no hidden layer, shown below. The network has a set of inputs x and an output neuron y connected to the input by weights w and bias b.
[Figure: inputs x connected directly to a single output neuron y via weights w, with bias b]
We're using the squared error cost function even though the task that we care about, in the end, is binary classification. At training time, the target output values are 1 (for one class) and 0 (for the other class). At test time we will use the classifier to make decisions in the standard way: the class of an input x according to our model after training is as follows:
class of x = 1 if w^T x + b ≥ 0, and 0 otherwise
Note that we will be training the network using y, but that the decision rule shown above will be the same at test time, regardless of the type of output neuron we use for training. Which of the following statements is true?
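A fact worth keeping in mind for this question: the logistic function is monotonic, so thresholding σ(w^T x + b) at 0.5 gives exactly the same decision as thresholding w^T x + b at 0. A quick check (a sketch, not part of the quiz):

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def decide(z):
    # test-time rule from the question: class 1 iff w^T x + b >= 0
    return 1 if z >= 0 else 0

# Because the logistic is monotonic and logistic(0) == 0.5, thresholding
# logistic(z) at 0.5 matches thresholding z at 0 for every z.
for z in [-3.0, -0.5, 0.0, 0.7, 4.0]:
    assert decide(z) == (1 if logistic(z) >= 0.5 else 0)
print("decision rules agree")
```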

Question 5

Consider a neural network with one layer of logistic hidden units (intended to be fully connected to the input units) and a linear output unit. Suppose there are n input units and m hidden units. Which of the following statements are true? Check all that apply.
[Figure: n input units fully connected to m logistic hidden units, which feed a single linear output unit]
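For reference, the learnable parameters of this architecture can be tallied as follows, assuming every hidden unit and the output unit has a bias (the question's figure may or may not include biases):

```python
# Parameter count for n inputs -> m logistic hidden units -> 1 linear output,
# assuming biases on hidden and output units (an assumption, not stated above).
def num_params(n, m):
    input_to_hidden = n * m + m     # weights plus m hidden biases
    hidden_to_output = m + 1        # weights plus one output bias
    return input_to_hidden + hidden_to_output

print(num_params(3, 4))   # 3*4 + 4 + 4 + 1 = 21
```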

Question 6

Brian wants to make his feed-forward network (with no hidden units) using a linear output neuron more powerful. He decides to combine the predictions of two networks by averaging them. The first network has weights w1 and the second network has weights w2. The predictions of this network for an example x are therefore:
y = (1/2) w1^T x + (1/2) w2^T x
Can we get the exact same predictions as this combination of networks by using a single feed-forward network (again with no hidden units) using a linear output neuron and weights w3 = (1/2)(w1 + w2)?
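Because the network is linear, averaging the two predictions is the same as predicting with the averaged weights. A quick numeric sanity check, with made-up weights and input:

```python
# Averaging two linear (no-hidden-unit) networks vs. one network with
# averaged weights; all values are made up for illustration.
w1 = [1.0, -2.0, 0.5]
w2 = [0.0, 4.0, 1.5]
x  = [3.0, 1.0, 2.0]

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))   # linear output: w^T x

avg_of_preds = 0.5 * predict(w1, x) + 0.5 * predict(w2, x)
w3 = [0.5 * (a + b) for a, b in zip(w1, w2)]
print(avg_of_preds, predict(w3, x))   # the two numbers match
```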