Notes on Andrew Ng: Machine Learning, Week 2
1. Linear Regression with Multiple Variables
Welcome to week 2! I hope everyone has been enjoying the course and learning a lot! This week we're covering linear regression with multiple variables. We'll show how linear regression can be extended to accommodate multiple input features. We also discuss best practices for implementing linear regression.
We’re also going to go over how to use Octave. You’ll work on programming assignments designed to help you understand how to implement the learning algorithms in practice. To complete the programming assignments, you will need to use Octave or MATLAB.
As always, if you get stuck on the quiz and programming assignment, you should post on the Discussions to ask for help. (And if you finish early, I hope you’ll go there to help your fellow classmates as well.)
(1) Multivariate Linear Regression
Multiple Features
Gradient Descent for Multiple Variables
Taking the derivative, we obtain the update rule:

θj := θj − α · (1/m) · Σ_{i=1}^{m} ( hθ(x^(i)) − y^(i) ) · xj^(i)

(simultaneously update θj for j = 0, 1, …, n)
Python: computing the cost function

import numpy as np

def computeCost(X, y, theta):
    # squared errors between the predictions X @ theta.T and the targets y
    inner = np.power((X @ theta.T) - y, 2)
    return np.sum(inner) / (2 * len(X))
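A quick sanity check of computeCost on a toy dataset (the definition is repeated so the snippet runs standalone; X is assumed to already contain the bias column of ones):

```python
import numpy as np

def computeCost(X, y, theta):
    inner = np.power((X @ theta.T) - y, 2)
    return np.sum(inner) / (2 * len(X))

# Toy data: X already contains the bias column of ones.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([[1.0], [2.0], [3.0]])
theta = np.array([[0.0, 1.0]])  # row vector, matching the theta.T convention

print(computeCost(X, y, theta))  # 0.0 for a perfect fit
```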
Gradient Descent in Practice 1 - Feature Scaling
We can speed up gradient descent by having each of our input values in roughly the same range.
This is because θ will descend quickly on small ranges and slowly on large ranges,
and so will oscillate inefficiently down to the optimum when the variables are very uneven. The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same.
Ideally: −1 ≤ x(i) ≤ 1, or −0.5 ≤ x(i) ≤ 0.5
These aren’t exact requirements; we are only trying to speed things up.
The goal is to get all input variables into roughly one of these ranges, give or take a few.
Two techniques to help with this:
- feature scaling
Feature scaling involves dividing the input values by the range
(i.e. the maximum value minus the minimum value)
of the input variable, resulting in a new range of just 1.
- mean normalization
Mean normalization involves subtracting the average value for an input variable from the values
for that input variable resulting in a new average value for the input variable of just zero.
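Both techniques can be combined in a few lines of NumPy; normalize_features below is a hypothetical helper name, not part of the course code:

```python
import numpy as np

def normalize_features(X):
    # mean normalization: subtract each column's mean...
    mu = X.mean(axis=0)
    # feature scaling: ...and divide by each column's range (max - min)
    rng = X.max(axis=0) - X.min(axis=0)
    return (X - mu) / rng, mu, rng

X = np.array([[1.0, 2000.0],
              [2.0, 1600.0],
              [3.0, 2400.0]])
X_norm, mu, rng = normalize_features(X)
print(X_norm)  # every column now averages 0 and spans a range of 1
```

Keeping mu and rng around matters in practice: the same shift and scale must be applied to any new example before making a prediction.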
Gradient Descent in Practice 2 - Learning rate
Debugging gradient descent
Make a plot with number of iterations on the x-axis.
Now plot the cost function, J(θ) over the number of iterations of gradient descent.
If J(θ) ever increases, then you probably need to decrease α.
Automatic convergence test
Declare convergence if J(θ) decreases by less than E in one iteration,
where E is some small value such as 10^-3.
However in practice it’s difficult to choose this threshold value.
It has been proven that if learning rate α is sufficiently small,
then J(θ) will decrease on every iteration.
To summarize:
- If α is too small: slow convergence.
- If α is too large: may not decrease on every iteration and thus may not converge.
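The points above can be sketched as a batch gradient descent loop that records J(θ) each iteration and applies the automatic convergence test; gradientDescent and its tol parameter are illustrative names, not course-provided code:

```python
import numpy as np

def gradientDescent(X, y, theta, alpha, num_iters, tol=1e-3):
    """Batch gradient descent with the automatic convergence test."""
    m = len(X)
    J_history = []
    for _ in range(num_iters):
        # simultaneous update of all theta_j
        theta = theta - (alpha / m) * ((X @ theta.T - y).T @ X)
        # record J(theta) so it can be plotted against the iteration number
        J_history.append(float(np.sum((X @ theta.T - y) ** 2) / (2 * m)))
        # declare convergence if J decreased by less than tol in one iteration
        if len(J_history) > 1 and J_history[-2] - J_history[-1] < tol:
            break
    return theta, J_history

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias column included
y = np.array([[2.0], [4.0], [6.0]])
theta, J_history = gradientDescent(X, y, np.zeros((1, 2)), alpha=0.1, num_iters=1000)
print(J_history[0], J_history[-1])  # the cost shrinks across iterations
```

Plotting J_history against its index gives exactly the debugging plot described above; a curve that rises instead of falling signals that α should be decreased.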
Features and Polynomial Regression
We can improve our features and the form of our hypothesis function in a couple different ways.
We can combine multiple features into one. For example, we can combine x1 and x2 into a new feature x3 by taking x1*x2.
Polynomial Regression
Our hypothesis function need not be linear (a straight line) if that does not fit the data well.
We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic
or square root function (or any other form).
One important thing: feature scaling
One important thing to keep in mind is, if you choose your features this way
then feature scaling becomes very important.
E.g., if x1 has range 1–1000, then the range of x1^2 becomes 1–1,000,000 and that of x1^3 becomes 1–1,000,000,000.
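A small NumPy sketch of why scaling matters for polynomial features (illustrative, not course code):

```python
import numpy as np

x1 = np.array([1.0, 10.0, 100.0, 1000.0])
# higher powers blow up the range: x1^2 reaches 1e6, x1^3 reaches 1e9
X_poly = np.column_stack([x1, x1 ** 2, x1 ** 3])
# divide each column by its range so all features span the same scale
X_scaled = X_poly / (X_poly.max(axis=0) - X_poly.min(axis=0))
print(X_scaled.max(axis=0) - X_scaled.min(axis=0))  # every column now has range 1
```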
(2) Computing Parameters Analytically
Normal Equation
The normal equation formula:

θ = (XᵀX)⁻¹ Xᵀ y
Formula derivation process
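One common derivation, assuming XᵀX is invertible: write the cost in matrix form and set its gradient to zero.

```latex
J(\theta) = \frac{1}{2m}(X\theta - y)^{\top}(X\theta - y)

\nabla_{\theta} J(\theta) = \frac{1}{m} X^{\top}(X\theta - y) = 0
\;\Rightarrow\; X^{\top}X\,\theta = X^{\top}y
\;\Rightarrow\; \theta = (X^{\top}X)^{-1}X^{\top}y
```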
A comparison of gradient descent and the normal equation:

| | Gradient Descent | Normal Equation |
| --- | --- | --- |
| Choosing α | needed | not needed |
| Iterations | many | none |
| Cost | O(kn²) | O(n³), to compute (XᵀX)⁻¹ |
| Large n | works well even for large n | slow if n is very large |

There is also no need to do feature scaling with the normal equation.
Normal Equation Noninvertibility
XᵀX may be noninvertible (singular). The common causes are redundant features (linearly dependent columns) and too many features (m ≤ n); deleting redundant features, or using regularization, resolves this. In practice, computing the pseudoinverse (e.g. np.linalg.pinv) still yields a usable θ.
Python: implementing the normal equation

import numpy as np

def normalEqn(X, y):
    theta = np.linalg.inv(X.T @ X) @ X.T @ y  # X.T @ X is equivalent to X.T.dot(X)
    return theta
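A usage sketch of normalEqn on a toy dataset where y = 2·x exactly (the definition is repeated so the snippet runs standalone):

```python
import numpy as np

def normalEqn(X, y):
    theta = np.linalg.inv(X.T @ X) @ X.T @ y
    return theta

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # bias column plus one feature
y = np.array([2.0, 4.0, 6.0])
print(normalEqn(X, y))  # close to [0, 2], since y = 2 * x exactly
```

Note that the result matches the gradient descent solution without choosing a learning rate or iterating; for singular XᵀX, swapping np.linalg.inv for np.linalg.pinv keeps the computation well defined.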