Chapter 04 Linear Regression with Multiple Variables



4.1 Multiple Features 多特征

Hypothesis 假设
Example: predicting the price of a house from several features at once, e.g. size, number of bedrooms, number of floors, and age of the home.
Linear regression with multiple variables is also known as “multivariate linear regression”.

We now introduce notation for equations where we can have any number of input variables.
xj^(i) = value of feature j in the i-th training example
x^(i) = the input (features) of the i-th training example
m = the number of training examples
n = the number of features

The multivariable form of the hypothesis function accommodating these multiple features is as follows:

hθ(x) = θ0 + θ1x1 + θ2x2 + ⋯ + θnxn
In order to develop intuition about this function, we can think about
θ0 as the basic price of a house,
θ1 as the price per square meter,
θ2 as the price per floor, etc.

x1 will be the number of square meters in the house,
x2 the number of floors, etc.

Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:

hθ(x) = [θ0 θ1 ⋯ θn] · [x0, x1, …, xn]ᵀ = θᵀx

Remark: Note that for convenience reasons in this course we assume x0^(i) = 1 for (i ∈ 1, …, m). This allows us to do matrix operations with θ and x, making the two vectors θ and x^(i) match each other element-wise (that is, have the same number of elements: n + 1).
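As a minimal sketch of this vectorized form (in Python/NumPy rather than the course's Octave; the numbers for θ and x below are made up for illustration):

```python
import numpy as np

# Hypothetical parameter vector and feature vector (n = 2 features).
theta = np.array([50.0, 0.2, 25.0])   # [theta_0, theta_1, theta_2]
x = np.array([1.0, 120.0, 2.0])       # [x_0 = 1, square meters, floors]

# h_theta(x) = theta^T x; x_0 = 1 makes theta_0 the intercept (basic price).
h = theta @ x
print(h)  # 50 + 0.2*120 + 25*2 = 124.0
```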

4.2 Gradient Descent for Multiple Variables 多元梯度下降法

The cost function takes the same form as in the single-variable case, now with the multivariable hypothesis:

J(θ) = (1/(2m)) · Σ_{i=1..m} (hθ(x^(i)) − y^(i))²

The gradient descent equation itself is generally the same form; we just have to repeat it for our ‘n’ features:

repeat until convergence: {
    θj := θj − α · (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · xj^(i)    (simultaneously update θj for j = 0, …, n)
}
In other words:
repeat until convergence: {
    θ0 := θ0 − α · (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x0^(i)
    θ1 := θ1 − α · (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x1^(i)
    θ2 := θ2 − α · (1/m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · x2^(i)
    …
}
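A minimal NumPy sketch of this simultaneous update, vectorized over all n+1 parameters (the toy data set, learning rate and iteration count below are illustrative assumptions, not from the course):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """X is m x (n+1) with the first column all ones; y has length m."""
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        error = X @ theta - y                 # h_theta(x^(i)) - y^(i), for all i at once
        gradient = (X.T @ error) / m          # one entry per theta_j
        theta = theta - alpha * gradient      # simultaneous update of every theta_j
    return theta

# Tiny illustrative data set: y = 1 + 2*x1
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([3.0, 5.0, 7.0])
print(gradient_descent(X, y, alpha=0.1, num_iters=5000))  # approx [1., 2.]
```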
There is no need to do feature scaling with the normal equation.

The following is a comparison of gradient descent and the normal equation:
Gradient descent: need to choose α; needs many iterations; costs O(kn²); works well even when n is large.
Normal equation: no need to choose α; no need to iterate; needs to compute (XᵀX)⁻¹, which is O(n³); slow if n is very large.

With the normal equation, computing the inversion has complexity O(n³).
So if we have a very large number of features, the normal equation will be slow.
In practice, when n exceeds 10,000 it might be a good time to go from a normal solution to an iterative process.

4.3 Gradient Descent in Practice I: Feature Scaling 多元梯度下降法 I: 特征缩放

We can speed up gradient descent by having each of our input values in roughly the same range.
This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.

The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:
−1 ≤ x(i) ≤ 1    or    −0.5 ≤ x(i) ≤ 0.5
These aren’t exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.

Normalization (mean normalization)

Two techniques to help with this are feature scaling and mean normalization.

Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1.

Mean normalization involves subtracting the average value for an input variable from the values for that input variable resulting in a new average value for the input variable of just zero.

To implement both of these techniques, adjust your input values as shown in this formula:
xi := (xi − μi) / si
where μi is the average of all the values for feature (i), and si is the range of values (max − min) or si is the standard deviation.

Note that dividing by the range, or dividing by the standard deviation, give different results.
The quizzes in this course use range - the programming exercises use standard deviation.

For example, if xi represents housing prices with a range of 100 to 2000 and a mean value of 1000, then,
xi := (price − 1000) / 1900
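A small sketch of the same adjustment in NumPy (the house sizes below are made up to roughly mirror the example above; the use_std flag switches between dividing by the range and dividing by the standard deviation):

```python
import numpy as np

def scale_features(x, use_std=False):
    """Return (x - mean) / s, where s is the range (max - min) or the standard deviation."""
    mu = x.mean()
    s = x.std() if use_std else (x.max() - x.min())
    return (x - mu) / s

# Hypothetical house sizes spanning roughly 100 to 2000 with a mean near 1000.
sizes = np.array([100.0, 500.0, 1000.0, 1500.0, 2000.0])
print(scale_features(sizes))                 # divided by the range (2000 - 100 = 1900)
print(scale_features(sizes, use_std=True))   # divided by the standard deviation
```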

Note: [in the lecture video, the average size of a house is 1000, but 100 is accidentally written instead.]

4.4 Gradient Descent in Practice II: Learning Rate 多元梯度下降法 II: 学习率

Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ), over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.

Automatic convergence test. Declare convergence if J(θ) decreases by less than E in one iteration, where E is some small value such as 10⁻³. However, in practice it is difficult to choose this threshold value.

It has been proven that if learning rate α is sufficiently small, then J(θ) will decrease on every iteration.

To summarize: if α is too small, convergence is slow; if α is too large, J(θ) may not decrease on every iteration and may not converge at all.
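A sketch of both the convergence plot and the automatic convergence test, assuming a simple vectorized gradient descent loop as above (the data, α and threshold are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def compute_cost(X, y, theta):
    """J(theta) = 1/(2m) * sum((X theta - y)^2)."""
    m = len(y)
    error = X @ theta - y
    return (error @ error) / (2 * m)

def gradient_descent_with_history(X, y, alpha, num_iters, epsilon=1e-3):
    m = len(y)
    theta = np.zeros(X.shape[1])
    cost_history = []
    for _ in range(num_iters):
        theta = theta - alpha * (X.T @ (X @ theta - y)) / m
        cost_history.append(compute_cost(X, y, theta))
        # Automatic convergence test: stop once J decreases by less than epsilon.
        if len(cost_history) > 1 and cost_history[-2] - cost_history[-1] < epsilon:
            break
    return theta, cost_history

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([3.0, 5.0, 7.0])
theta, costs = gradient_descent_with_history(X, y, alpha=0.1, num_iters=1000)

plt.plot(costs)                       # J(theta) should decrease on every iteration
plt.xlabel("No. of iterations")
plt.ylabel("J(theta)")
plt.show()
```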

Note: [5:20 - the x-axis label in the right graph should be θ rather than No. of iterations.]


4.5 Features and Polynomial Regression 特征和多项式回归

We can improve our features and the form of the hypothesis function in a couple of different ways.

We can combine multiple features into one. For example, we can combine x1 and x2 into a new feature x3 by taking x1 · x2.

The hypothesis function also need not be linear (a straight line) if that does not fit the data well. We can change its behavior or curve by making it a quadratic, cubic or square-root function (or any other form). For example, if our hypothesis function is hθ(x) = θ0 + θ1x1, we can create additional features based on x1 to get the quadratic function hθ(x) = θ0 + θ1x1 + θ2x1² or the cubic function hθ(x) = θ0 + θ1x1 + θ2x1² + θ3x1³. In the cubic version we have created new features x2 and x3, where x2 = x1² and x3 = x1³.

One important thing to keep in mind is that if you choose your features this way, then feature scaling becomes very important: if x1 has range 1 to 1000, then the range of x1² becomes 1 to 1,000,000 and that of x1³ becomes 1 to 1,000,000,000.
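A minimal sketch of this idea in NumPy: build x² and x³ columns from a single feature, scale them, and fit with the same linear-regression machinery (the data below is made up, and the fit uses the normal equation from the next section for brevity):

```python
import numpy as np

# Hypothetical single feature (e.g. house size) and target values.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.5, 9.2, 17.0, 26.3])

# New features x2 = x1^2 and x3 = x1^3; scaling matters because their ranges explode.
features = np.column_stack([x1, x1 ** 2, x1 ** 3])
features = (features - features.mean(axis=0)) / (features.max(axis=0) - features.min(axis=0))

# Add the x0 = 1 column and solve with the normal equation for brevity.
X = np.column_stack([np.ones(len(x1)), features])
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(X @ theta)   # fitted values, compare with y
```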

4.6 Normal Equation 正规方程

A direct solution method, as opposed to the iterative approach.
In the normal equation method, we minimize J(θ) by explicitly taking its derivatives with respect to the θj's and setting them to zero, which yields the optimal θ in one step, without iteration:

θ = (XᵀX)⁻¹ Xᵀy

where X is the m × (n+1) design matrix (one row per training example, with x0 = 1) and y is the m-dimensional vector of target values.
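A short NumPy sketch of this formula (the X and y values are made-up training examples; np.linalg.pinv is used instead of a plain inverse so the line still works when XᵀX is non-invertible, as discussed in the next section):

```python
import numpy as np

# Design matrix X: m x (n+1), first column all ones; y: length m.
X = np.array([[1.0, 2104.0, 3.0],
              [1.0, 1600.0, 3.0],
              [1.0, 2400.0, 3.0],
              [1.0, 1416.0, 2.0]])
y = np.array([400.0, 330.0, 369.0, 232.0])

# theta = (X^T X)^(-1) X^T y, computed with the pseudo-inverse for numerical safety.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)
print(X @ theta)   # predictions for the training examples
```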
Note: [8:00 to 8:44 - the design matrix X (in the bottom right side of the slide) given in the example should have elements x with subscript 1 and superscripts varying from 1 to m, because for all m training examples there are only 2 features, x0 and x1.]
[12:56 - the X matrix is m × (n+1), NOT n × n.]


4.7 Normal Equation and Non-invertibility (optional) 正规方程以及不可逆性

When implementing the normal equation in Octave we want to use the pinv function rather than inv; pinv will give a value of θ even if XᵀX is not invertible.

If XᵀX is non-invertible, the common causes are redundant features (two features that are linearly dependent, e.g. the same quantity in different units) and too many features (e.g. m ≤ n). Solutions include deleting a redundant or excess feature, or using regularization (to be explained in a later lesson).
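A small sketch of the redundant-feature case in NumPy: x2 is just x1 in different units, so the columns of X are linearly dependent and XᵀX is singular, yet the pseudo-inverse still returns a usable θ (all numbers are made up):

```python
import numpy as np

# x1 = size in feet^2 and x2 = size in m^2 are linearly dependent (redundant features).
x1 = np.array([1000.0, 1500.0, 2000.0])
x2 = x1 * 0.092903
X = np.column_stack([np.ones_like(x1), x1, x2])
y = np.array([200.0, 300.0, 400.0])

# X^T X is singular here, so inv() is unreliable; pinv() still gives a least-squares theta.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)
print(X @ theta)   # fits the training data despite the redundancy
```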

4.8 Working on and Submitting Programming Exercises 编程技巧
