Machine Learning Quiz

Week 2 | Linear Regression with Multiple Variables

Question 1

Suppose m=4 students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

midterm exam    (midterm exam)^2    final exam
89              7921                96
72              5184                74
94              8836                87
69              4761                78

You’d like to use polynomial regression to predict a student’s final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2$, where $x_1$ is the midterm score and $x_2$ is $(\text{midterm score})^2$. Further, you plan to use both feature scaling (dividing by the “max-min”, or range, of a feature) and mean normalization.
What is the normalized feature $x_2^{(4)}$? (Hint: midterm = 89, final = 96 is training example 1.) Please enter your answer in the text box below. If applicable, please provide at least two digits after the decimal place.
Answer: -0.47
Explanation: Plug the data into the normalization formula $x_i := \frac{x_i - \mu_i}{s_i}$,
where $\mu_i$ is the mean of the column: $(7921 + 5184 + 8836 + 4761)/4 = 6675.5$,
and $s_i$ is the range of values (max - min): $8836 - 4761 = 4075$.
So the final result is $(4761 - 6675.5)/4075 = -0.46981\ldots \approx -0.47$.
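A quick numerical check of this computation, sketched in NumPy (variable names are illustrative):

```python
import numpy as np

# Squared-midterm column from the table above
x2 = np.array([7921.0, 5184.0, 8836.0, 4761.0])

mu = x2.mean()           # 6675.5
s = x2.max() - x2.min()  # 8836 - 4761 = 4075

# Mean normalization plus range scaling for training example 4
print((x2[3] - mu) / s)  # -0.4698... ~= -0.47
```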


Question 2

You run gradient descent for 15 iterations with α=0.3 and compute J(θ) after each iteration. You find that the value of J(θ) decreases slowly and is still decreasing after 15 iterations. Based on this, which of the following conclusions seems most plausible?

  • α=0.3 is an effective choice of learning rate.
  • Rather than use the current value of α, it’d be more promising to try a smaller value of α (say α=0.1).
  • Rather than use the current value of α, it’d be more promising to try a larger value of α (say α=1.0).
    Answer: 3
    Explanation: Since J(θ) is still decreasing after 15 iterations, α=0.3 is not overshooting the minimum; the problem is that convergence is slow, which suggests the learning rate is too small. Trying a larger value (say α=1.0) should speed up convergence. If J(θ) were instead increasing from iteration to iteration, that would be the sign of a step size that is too large, and a smaller α would be called for.
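To see this behavior concretely, here is a small experiment on synthetic data; a sketch only, since the dataset and constants are made up for illustration:

```python
import numpy as np

# Toy dataset: y ~ 2*x plus noise; watch J(theta) after 15 iterations for several alphas
rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.uniform(0, 1, 50)]  # intercept column plus one feature
y = 2 * X[:, 1] + rng.normal(0, 0.1, 50)

def cost(theta):
    r = X @ theta - y
    return r @ r / (2 * len(y))

for alpha in (0.1, 0.3, 1.0):
    theta = np.zeros(2)
    for _ in range(15):
        theta -= alpha / len(y) * (X.T @ (X @ theta - y))
    print(f"alpha={alpha}: J(theta) after 15 iterations = {cost(theta):.5f}")
# The largest alpha reaches the lowest cost within the same 15 iterations.
```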

Question 3

Suppose you have m=14 training examples with n=3 features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is $\theta = (X^T X)^{-1} X^T y$. For the given values of m and n, what are the dimensions of θ, X, and y in this equation?

  • X is 14×3, y is 14×1, θ is 3×3
  • X is 14×4, y is 14×1, θ is 4×1
  • X is 14×3, y is 14×1, θ is 3×1
  • X is 14×4, y is 14×4, θ is 4×4
    Answer: 2 (X is 14×4, y is 14×1, θ is 4×1)
    Explanation: With m=14 examples and n=3 features, X has 14 rows and 4 columns; don't forget the extra column that is identically 1 for the intercept term:
    $h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3$
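A minimal shape check in NumPy (random data, purely illustrative):

```python
import numpy as np

m, n = 14, 3
rng = np.random.default_rng(0)
X = np.c_[np.ones(m), rng.normal(size=(m, n))]  # prepend the all-ones column: 14x4
y = rng.normal(size=(m, 1))                     # 14x1

# Normal equation; np.linalg.solve is preferred over forming the inverse explicitly
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(X.shape, y.shape, theta.shape)            # (14, 4) (14, 1) (4, 1)
```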

Question 4

Suppose you have a dataset with m=1000000 examples and n=200000 features for each example. You want to use multivariate linear regression to fit the parameters θ to our data. Should you prefer gradient descent or the normal equation?

  • Gradient descent, since $(X^T X)^{-1}$ will be very slow to compute in the normal equation.
  • The normal equation, since it provides an efficient way to directly find the solution.
  • Gradient descent, since it will always converge to the optimal θ.
  • The normal equation, since gradient descent might be unable to find the optimal θ.
    Answer: 1
    Explanation: With this many examples and features, computing $(X^T X)^{-1}$ for the normal equation takes roughly $O(n^3)$ time. The normal equation is usually practical when n < 10000, but here n=200000 is far beyond that, so it would be very slow. Gradient descent, by contrast, still works well when the number of features n is very large.
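A rough back-of-the-envelope comparison of the two approaches (the gradient-descent iteration count below is an assumption, not part of the question):

```python
# Rough flop counts for dense X with m examples and n features
m, n = 1_000_000, 200_000

normal_eq = m * n**2 + n**3        # form X^T X, then invert it
grad_desc = 400 * m * n            # ~400 iterations at O(m*n) each (assumed)

print(f"normal equation:  ~{normal_eq:.1e} flops")  # ~4.8e+16
print(f"gradient descent: ~{grad_desc:.1e} flops")  # ~8.0e+13
```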

Question 5

Which of the following are reasons for using feature scaling?

  • It is necessary to prevent gradient descent from getting stuck in local optima.
  • It speeds up gradient descent by making each iteration of gradient descent less expensive to compute.
  • It speeds up gradient descent by making it require fewer iterations to get to a good solution.
  • It prevents the matrix $X^T X$ (used in the normal equation) from being non-invertible (singular/degenerate).
    Answer: 3
    Explanation:
    Option 1: Feature scaling does not prevent gradient descent from getting stuck in local optima (the linear regression cost is convex anyway); without scaling, gradient descent simply needs more iterations.
    Option 2: Scaling does speed up convergence, but not for this reason: changing the magnitude of the feature values has no effect on the cost of computing each iteration.
    Option 3: With multiple features, the features should be on similar scales. Rescaling every feature to roughly the range -1 to 1 makes the contours of the cost function close to circles rather than very elongated ellipses; with near-circular contours, gradient descent needs fewer iterations to reach the minimum.
    Option 4: Scaling only changes the magnitudes of the entries; it cannot make a singular matrix invertible.
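The effect described in option 3 is easy to reproduce. Below is a small sketch on made-up data with two features of very different scales; all constants and the convergence criterion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
raw = np.c_[rng.uniform(0, 1, m), rng.uniform(0, 2000, m)]  # x1 in [0,1], x2 in [0,2000]
y = 3 * raw[:, 0] + 0.002 * raw[:, 1] + rng.normal(0, 0.01, m)

def iters_to_converge(features, alpha, tol=1e-9, max_iter=200_000):
    X = np.c_[np.ones(m), features]
    theta = np.zeros(X.shape[1])
    for i in range(max_iter):
        grad = X.T @ (X @ theta - y) / m
        if np.linalg.norm(grad) < tol:
            return i
        theta -= alpha * grad
    return max_iter  # hit the cap without converging

# Mean normalization + range scaling, as in question 1
scaled = (raw - raw.mean(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

print("unscaled:", iters_to_converge(raw, alpha=1e-7))   # needs a tiny alpha; hits the cap
print("scaled:  ", iters_to_converge(scaled, alpha=0.5)) # converges in a few hundred iterations
```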