Notes on Andrew Ng's Machine Learning, Week 2

I. Linear Regression with Multiple Variables

Welcome to week 2! I hope everyone has been enjoying the course and learning a lot! This week we’re covering linear regression with multiple variables. We’ll show how linear regression can be extended to accommodate multiple input features. We also discuss best practices for implementing linear regression.
We’re also going to go over how to use Octave. You’ll work on programming assignments designed to help you understand how to implement the learning algorithms in practice. To complete the programming assignments, you will need to use Octave or MATLAB.
As always, if you get stuck on the quiz and programming assignment, you should post on the Discussions to ask for help. (And if you finish early, I hope you’ll go there to help your fellow classmates as well.)

(1) Multivariate Linear Regression

Multiple Features

With n input features x1, …, xn, the hypothesis becomes
hθ(x) = θ0 + θ1·x1 + θ2·x2 + … + θn·xn = θᵀx (taking x0 = 1),
where x(i) denotes the feature vector of the i-th training example and xj(i) the value of feature j in that example.

Gradient Descent for Multiple Variables

The cost function for multiple variables is
J(θ) = (1/2m) · Σ (hθ(x(i)) − y(i))², summed over the m training examples.

Taking the derivative gives the gradient descent update rule for multiple variables:
θj := θj − α · (1/m) · Σ (hθ(x(i)) − y(i)) · xj(i)
(simultaneously update θj for j=0,1,…,n)

Python: Compute the Cost Function

import numpy as np

def computeCost(X, y, theta):
    # X: (m, n+1) design matrix with a leading column of ones,
    # y: (m, 1) target vector, theta: (1, n+1) row vector of parameters
    inner = np.power((X @ theta.T) - y, 2)    # squared error for each example
    return np.sum(inner) / (2 * len(X))       # J(theta) = (1/2m) * sum of squared errors
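
For reference, here is a minimal batch gradient descent sketch built on the computeCost above; the function name gradientDescent, the cost_history list, and the (1, n+1) shape assumed for theta are additions made for illustration, not part of the original notes.

def gradientDescent(X, y, theta, alpha, num_iters):
    # X: (m, n+1), y: (m, 1), theta: (1, n+1); alpha is the learning rate
    m = len(X)
    cost_history = []
    for _ in range(num_iters):
        error = (X @ theta.T) - y                     # (m, 1) residuals
        theta = theta - (alpha / m) * (error.T @ X)   # simultaneous update of every theta_j
        cost_history.append(computeCost(X, y, theta))
    return theta, cost_history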

Gradient Descent in Practice 1 - Feature Scaling

We can speed up gradient descent by having each of our input values in roughly the same range.
This is because θ will descend quickly on small ranges and slowly on large ranges,
and so will oscillate inefficiently down to the optimum when the variables are very uneven.

The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same.
Ideally: -1<=x(i)<=1 or -0.5<=x(i)<=0.5
These aren’t exact requirements; we are only trying to speed things up.
The goal is to get all input variables into roughly one of these ranges, give or take a few.

Two techniques to help with this are feature scaling and mean normalization.

  • feature scaling

Feature scaling involves dividing the input values by the range
(i.e. the maximum value minus the minimum value)
of the input variable, resulting in a new range of just 1.

  • mean normalization

Mean normalization involves subtracting the average value for an input variable from the values
for that input variable resulting in a new average value for the input variable of just zero.
Both adjustments can be implemented together with the formula
xi := (xi − μi) / si
where μi is the average of all values for feature i and si is the range of values (max − min), or alternatively the standard deviation.
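
As an illustration, here is a small NumPy sketch of mean normalization combined with range scaling; the function name featureNormalize and the choice of (max − min) as the scale are assumptions made for this example.

def featureNormalize(X):
    # X: (m, n) matrix of raw features (without the column of ones)
    mu = X.mean(axis=0)                    # per-feature mean
    s = X.max(axis=0) - X.min(axis=0)      # per-feature range (the std would also work)
    X_norm = (X - mu) / s                  # x_i := (x_i - mu_i) / s_i
    return X_norm, mu, s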

Gradient Descent in Practice 2 - Learning Rate

Debugging gradient descent

Make a plot with number of iterations on the x-axis.
Now plot the cost function, J(θ) over the number of iterations of gradient descent.
If J(θ) ever increases, then you probably need to decrease α.

Automatic convergence test

Declare convergence if J(θ) decreases by less than E in one iteration,
where E is some small value such as 10⁻³.
However in practice it’s difficult to choose this threshold value.

It has been proven that if learning rate α is sufficiently small,
then J(θ) will decrease on every iteration.

To summarize:
  • If α is too small: slow convergence.
  • If α is too large: may not decrease on every iteration and thus may not converge.
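
To make this debugging advice concrete, the sketch below tries a few candidate learning rates and plots J(θ) against the iteration number; it assumes the gradientDescent and computeCost functions sketched earlier, matplotlib for plotting, and purely illustrative values of α.

import numpy as np
import matplotlib.pyplot as plt

# Assumes X is the (m, n+1) design matrix with a bias column and y is (m, 1).
for alpha in [0.001, 0.01, 0.1, 1.0]:                      # candidate learning rates
    theta0 = np.zeros((1, X.shape[1]))
    _, cost_history = gradientDescent(X, y, theta0, alpha, num_iters=100)
    plt.plot(cost_history, label="alpha = " + str(alpha))  # J(theta) vs. iteration

plt.xlabel("iteration")
plt.ylabel("J(theta)")
plt.legend()
plt.show()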

Features and Polynomial Regression

We can improve our features and the form of our hypothesis function in a couple different ways.
We can combine multiple features into one. For example, we can combine x1 and x2 into a new feature x3 by taking x1*x2

Polynomial Regression

Our hypothesis function need not be linear (a straight line) if that does not fit the data well.
We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic
or square root function (or any other form).
For example, if the hypothesis is hθ(x) = θ0 + θ1·x1, we can create additional features based on x1 to get the quadratic function hθ(x) = θ0 + θ1·x1 + θ2·x1^2, the cubic function hθ(x) = θ0 + θ1·x1 + θ2·x1^2 + θ3·x1^3, or the square root function hθ(x) = θ0 + θ1·x1 + θ2·√x1.

One important thing: feature scaling

One important thing to keep in mind is, if you choose your features this way
then feature scaling becomes very important.
e.g. if x1 has range 1–1000, then the range of x1^2 becomes 1–1,000,000 and that of x1^3 becomes 1–1,000,000,000.
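
As a quick sketch of this, the snippet below builds polynomial features from x1 and normalizes them with the featureNormalize helper sketched in the feature-scaling section; the sample values in x1 are made up purely for illustration.

import numpy as np

x1 = np.array([1.0, 50.0, 200.0, 1000.0])        # original feature values

# Create polynomial features x1, x1^2, x1^3 as separate columns
X_poly = np.column_stack([x1, x1**2, x1**3])

# Without scaling, the columns span wildly different magnitudes, so normalize
# them before running gradient descent, then add the bias column x0 = 1.
X_norm, mu, s = featureNormalize(X_poly)
X = np.column_stack([np.ones(len(x1)), X_norm])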

(2) Computing Parameters Analytically

Normal Equation

The normal equation solves for θ analytically:

θ = (XᵀX)⁻¹ Xᵀ y

Formula derivation process

Minimizing J(θ) = (1/2m)·(Xθ − y)ᵀ(Xθ − y) by setting its gradient to zero gives
∇θ J(θ) = (1/m)·Xᵀ(Xθ − y) = 0  ⇒  XᵀXθ = Xᵀy  ⇒  θ = (XᵀX)⁻¹Xᵀy.

A comparison of gradient descent and the normal equation:

Gradient descent:
  • need to choose the learning rate α
  • needs many iterations
  • works well even when the number of features n is large

Normal equation:
  • no need to choose α
  • no need to iterate
  • must compute (XᵀX)⁻¹, which costs O(n^3), so it becomes slow when n is very large

And there is no need to do feature scaling with the normal equation.

Normal Equation Noninvertibility

XᵀX may be noninvertible (singular/degenerate). The common causes are:
  • redundant features, i.e. some features are linearly dependent
  • too many features (m ≤ n); in that case, delete some features or use regularization
In practice, computing θ with a pseudo-inverse (pinv in Octave, np.linalg.pinv in NumPy) still gives a correct result even when XᵀX is noninvertible.

Python: Implement the Normal Equation

# Using Python to implement the normal equation
import numpy as np

def normalEqn(X, y):
    # theta = (X^T X)^(-1) X^T y ; note that X.T @ X is equivalent to X.T.dot(X)
    theta = np.linalg.inv(X.T @ X) @ X.T @ y
    return theta
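
Following up on the noninvertibility discussion, a variant using the pseudo-inverse still returns a sensible θ when XᵀX is singular; the small data set below is invented purely to show the call.

# Assumed example usage; X includes the bias column of ones.
X = np.array([[1.0, 2104.0, 3.0],
              [1.0, 1600.0, 3.0],
              [1.0, 2400.0, 3.0],
              [1.0, 1416.0, 2.0]])
y = np.array([[400.0], [330.0], [369.0], [232.0]])

theta = np.linalg.pinv(X.T @ X) @ X.T @ y    # robust even if X.T @ X is noninvertible
print(theta)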