Lecture 2 Quiz - Neural Networks for Machine Learning (Hinton)

Question 1

If the output of a model is given by y = f(\mathbf{x}; W), then which of the following choices for f are most appropriate when the task is binary classification?
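
The answer options are not preserved in this copy. As background, the logistic (sigmoid) output unit is a standard choice for binary classification because it maps any real-valued score to a probability in (0, 1); a minimal sketch (the variable values here are purely illustrative):

```python
import numpy as np

def logistic(z):
    """Logistic (sigmoid) function: maps a real-valued score to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Score a single input with weights W (here just a dot product),
# then squash the score into a probability of the positive class.
x = np.array([0.5, -1.2, 2.0])
W = np.array([0.8, 0.1, -0.4])
p_positive = logistic(W @ x)          # probability that y = 1
prediction = int(p_positive >= 0.5)   # hard label via a 0.5 threshold
print(p_positive, prediction)
```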

Question 2

After learning using the Perceptron algorithm, how easy is it to express the learned weight vector in terms of the input vectors and the initial weight vector? Assume the input vectors have real-valued components.

Explanation: For every training case, the update to the weight vector is determined by the output of the perceptron unit, which is 1 bit of information. We can therefore also represent the learned model with, for each training case, an integer that records whether we added the input vector, subtracted it, or left the weights unchanged when we looked at that example (-1, 0, +1).
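
As an illustration of that explanation, here is a minimal sketch (the function name and the training-loop details are assumptions, not part of the original quiz): it runs the standard perceptron update on the data from Question 3 below and records, for each training case, the net number of times that case's input vector was added to or subtracted from the weights, then checks that the learned weights equal the initial weights plus that integer-weighted sum of the inputs.

```python
import numpy as np

def perceptron_train(X, t, w, epochs=10):
    """Standard perceptron learning; also record, per training case, the net
    number of times its input vector was added (+1) or subtracted (-1)."""
    counts = np.zeros(len(X), dtype=int)
    for _ in range(epochs):
        for i, (x, target) in enumerate(zip(X, t)):
            y = 1 if w @ x >= 0 else 0       # binary threshold output
            if y != target:
                step = 1 if target == 1 else -1
                w = w + step * x
                counts[i] += step
    return w, counts

X = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
t = np.array([1, 1, 0])
w0 = np.array([0.0, 3.0])
w, counts = perceptron_train(X, t, w0.copy())

# The learned weights are the initial weights plus an integer-weighted
# sum of the input vectors: w = w0 + sum_i counts[i] * X[i].
assert np.allclose(w, w0 + counts @ X)
print(w, counts)
```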

Question 3

Suppose we are given three data points:

x = (1, 0), t = 1
x = (1, 1), t = 1
x = (0, 1), t = 0

Furthermore, we are given the following weight vector (where the bias is set to 0):

\mathbf{w} = (0, 3)

Let ||\mathbf{w}(t) - \mathbf{w}(t-1)||_2 be the distance between the weight vectors at iteration t and iteration t-1 of the perceptron learning algorithm. Here, for a given 2D vector \mathbf{v}, ||\mathbf{v}||_2 = \sqrt{v_1^2 + v_2^2} (this is also called the Euclidean norm). What is the maximum amount by which the weight vectors can change between successive iterations? Note that in this example we are not learning the bias.

Answer: \sqrt{2}. A single perceptron update either adds or subtracts the current input vector or leaves the weights unchanged, so the largest possible change is the largest input norm, ||(1, 1)||_2 = \sqrt{2}.
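
A minimal numerical check of that bound (a sketch, not part of the original quiz):

```python
import numpy as np

# Each perceptron update changes w by +x, -x, or 0 for the current input x,
# so the largest possible per-iteration change is the largest input norm.
X = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
norms = np.linalg.norm(X, axis=1)
print(norms)        # [1.0, 1.41421356, 1.0]
print(norms.max())  # sqrt(2)
```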


Question 4

Suppose that we have a perceptron with weight vector \mathbf{w} and we create a new set of weights \mathbf{w}^* = c\mathbf{w} by scaling \mathbf{w} by some positive constant c.

Assume that the bias is zero.

True or false: if the perceptron now uses \mathbf{w}^* instead, then its classification decisions might change (that is, we have moved the classification boundary).

False. Scaling by a positive constant c does not change the sign of \mathbf{w} \cdot \mathbf{x}, so the decision boundary \mathbf{w} \cdot \mathbf{x} = 0 stays exactly where it was.
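
A quick numerical check (an illustrative sketch; the particular weights and random inputs are assumptions): scaling by a positive constant never changes the sign of \mathbf{w} \cdot \mathbf{x}, so every classification decision stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([2.0, -1.0])
c = 3.7                              # any positive constant
X = rng.normal(size=(1000, 2))       # random test points

before = np.sign(X @ w)
after = np.sign(X @ (c * w))
print(np.all(before == after))       # True: no decision changes
```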

Question 5

Suppose that we have a perceptron with weight vector \mathbf{w} and we create a new set of weights \mathbf{w}^* = \mathbf{w} + \mathbf{c} by adding some constant vector \mathbf{c} to \mathbf{w}. Assume that the bias is zero.

True or false: if the perceptron now uses \mathbf{w}^* instead, then its classification decisions might change (that is, we have moved the classification boundary).

True. Unless \mathbf{c} happens to be parallel to \mathbf{w}, the new boundary (\mathbf{w} + \mathbf{c}) \cdot \mathbf{x} = 0 is a different hyperplane, so some classification decisions can change.
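
By contrast, a small numerical example (the specific vectors are hand-picked for illustration) shows that adding a vector that is not parallel to \mathbf{w} can flip a decision:

```python
import numpy as np

w = np.array([2.0, -1.0])
c = np.array([-4.0, 0.0])      # not parallel to w, so the boundary moves
x = np.array([1.0, 1.0])

print(np.sign(w @ x))          # +1.0 : classified positive before
print(np.sign((w + c) @ x))    # -1.0 : the same point is now negative
```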

Question 6

Suppose we are given four training cases:

x = (1, 1), t = 1
x = (1, 0), t = 0
x = (0, 1), t = 0
x = (0, 0), t = 1

It is impossible for a binary threshold unit to produce the desired target outputs for all four cases. Now suppose that we add an extra input dimension so that each of the four input vectors consists of three numbers instead of two.

Which of the following ways of setting the value of the extra input will create a set of four input vectors that is linearly separable (i.e., one that can be given the right target values by a binary threshold unit with appropriate weights and bias)?
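
The answer choices are not preserved in this copy. As one hedged illustration of the idea (the extra input x_1 x_2 and the weights below are hand-picked, not taken from the quiz options): adding the product of the two inputs as a third dimension makes the four cases linearly separable, and a binary threshold unit can then produce all four targets.

```python
import numpy as np

# The four original cases (the XNOR pattern) and their targets.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
t = np.array([1, 0, 0, 1])

# Append the extra input dimension x3 = x1 * x2.
X3 = np.column_stack([X, X[:, 0] * X[:, 1]])

# Hand-picked binary threshold unit: output 1 iff w @ x + b >= 0.
w = np.array([-1.0, -1.0, 3.0])
b = 0.5
outputs = (X3 @ w + b >= 0).astype(int)
print(outputs)                        # [1 0 0 1]
print(np.array_equal(outputs, t))     # True: all four targets matched
```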

Question 7

Brian wants to use a neural network to predict the price of a stock tomorrow given today's price and the prices over the last 10 days. The inputs to this network are the prices over the last 10 days, and the output is tomorrow's price. The hidden units in this network receive information from the layer below, transmit information to the layer above, and do not send information within the same layer. Is this an example of a feed-forward network or a recurrent network?

Feed-forward. Information only flows from the layer below to the layer above, with no connections within a layer or back to earlier layers, which is the definition of a feed-forward network.
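
A minimal sketch of such a network (the layer sizes, activation, and random weights are purely illustrative): each layer only receives input from the layer below and passes its output to the layer above, which is the feed-forward structure described.

```python
import numpy as np

def feed_forward(x, weights):
    """One forward pass: each layer feeds only the layer above it."""
    h = x
    for W in weights[:-1]:
        h = np.tanh(W @ h)       # hidden layers (activation is illustrative)
    return weights[-1] @ h       # linear output layer for the predicted price

# 10 inputs (prices over the last 10 days) -> one hidden layer -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 10)), rng.normal(size=(1, 8))]
prices = rng.normal(size=10)
print(feed_forward(prices, weights))
```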

Question 8

Brian and Andy are having an argument about the perceptron algorithm. They have a dataset that the perceptron cannot seem to classify (that is, it fails to converge to a solution). Andy reasons that if he could collect more examples, that might make the data set linearly separable, and then the perceptron algorithm would converge. Brian claims that collecting more examples will not help. Which one of them is correct?

Brian. Adding more examples can only add constraints: if the existing cases are not linearly separable, no superset of them can be, so collecting more data cannot make the perceptron converge.

