Linear regression

m = number of training examples
h = hypothesis, the function that maps an input x to a predicted output
Linear regression with one variable is also called univariate linear regression.

Idea: choose θ_0, θ_1 so that h_θ(x) = θ_0 + θ_1·x is close to y for our training examples (x, y).

Cost function (squared error function):

J(θ_0, θ_1) = (1/2m) · Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)})²

minimize J(θ_0, θ_1) — the overall objective for linear regression
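As a concrete illustration, here is a minimal NumPy sketch of this cost function (the function name and the toy data are my own choices):

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Squared error cost J(theta0, theta1) for univariate linear regression."""
    m = len(y)                         # number of training examples
    predictions = theta0 + theta1 * x  # h_theta(x^(i)) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy data lying exactly on y = 2x, so theta0=0, theta1=2 should give J = 0.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(compute_cost(0.0, 2.0, x, y))  # 0.0
print(compute_cost(0.0, 0.0, x, y))  # > 0, worse fit costs more
```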

Gradient Descent
Outline:

Start with some θ_0, θ_1 (e.g., initialize them both to 0), then repeatedly change them to reduce J(θ_0, θ_1).
(In general the end result depends on where you start; for linear regression, however, J(θ_0, θ_1) is a convex function, so there is a single global minimum.)

repeat until convergence {
    θ_j := θ_j − α · ∂/∂θ_j J(θ_0, θ_1)    (for j = 0 and j = 1)
}

∂/∂θ_j J(θ_0, θ_1) is the derivative term; α is the learning rate.
*Simultaneous update: compute both new values before assigning either.
temp0 := θ_0 − α · ∂/∂θ_0 J(θ_0, θ_1)
temp1 := θ_1 − α · ∂/∂θ_1 J(θ_0, θ_1)
θ_0 := temp0
θ_1 := temp1
If you update the parameters one after another instead, it may still happen to work, but it is no longer the gradient descent algorithm.
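The temp-variable pattern matters in code too. A sketch of correct versus incorrect update order, using placeholder gradient functions of my own invention:

```python
# Illustrative gradients only (not the real derivatives of J):
d_theta0 = lambda t0, t1: 2 * t0
d_theta1 = lambda t0, t1: 2 * t1
theta0, theta1, alpha = 1.0, 1.0, 0.1

# Correct: both gradients are evaluated at the OLD (theta0, theta1).
temp0 = theta0 - alpha * d_theta0(theta0, theta1)
temp1 = theta1 - alpha * d_theta1(theta0, theta1)
theta0, theta1 = temp0, temp1

# Wrong: the second line would see the already-updated theta0.
# theta0 = theta0 - alpha * d_theta0(theta0, theta1)
# theta1 = theta1 - alpha * d_theta1(theta0, theta1)
```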

Plugging the partial derivatives of J into the update rule compresses it into concrete updates:

θ_0 := θ_0 − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)})
θ_1 := θ_1 − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)}) · x^{(i)}

If α is too small, gradient descent can be slow.
If α is too large, gradient descent can overshoot the minimum; it may fail to converge, or even diverge.
Gradient descent can converge to a local minimum even with the learning rate α fixed, because as we approach a minimum the derivative term shrinks, so gradient descent automatically takes smaller steps.
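Putting the pieces together, a minimal sketch of batch gradient descent for the univariate case (the function name, learning rate, and toy data are my own choices):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iterations=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0                # start at zero, as in the outline
    for _ in range(iterations):
        error = (theta0 + theta1 * x) - y    # h_theta(x^(i)) - y^(i)
        temp0 = theta0 - alpha * error.sum() / m
        temp1 = theta1 - alpha * (error * x).sum() / m
        theta0, theta1 = temp0, temp1        # simultaneous update
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 + 2.0 * x                 # data on the exact line theta0=3, theta1=2
print(gradient_descent(x, y))     # approx (3.0, 2.0)
```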


Linear Regression with Multiple Features
x_j^{(i)} is the value of feature j in the i-th training example.

h_θ(x) = θ_0·x_0 + θ_1·x_1 + ... + θ_n·x_n = θᵀx

(θ and x are both vectors with n+1 elements; define x_0 = 1)
Gradient descent for multiple variables:

repeat until convergence {
    θ_j := θ_j − α · (1/m) · Σ_{i=1}^{m} (h_θ(x^{(i)}) − y^{(i)}) · x_j^{(i)}    (simultaneously update θ_j for j = 0, ..., n)
}
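A vectorized NumPy sketch of this multivariate update, assuming a design matrix X whose first column is all ones (names and data are mine):

```python
import numpy as np

def gradient_descent_multi(X, y, alpha=0.3, iterations=5000):
    """Batch gradient descent for h_theta(x) = X @ theta (X has a ones column)."""
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(iterations):
        gradient = X.T @ (X @ theta - y) / m  # (1/m) * sum of error * x_j
        theta -= alpha * gradient             # updates every theta_j simultaneously
    return theta

# Toy data generated from y = 1 + 2*x1 + 3*x2
rng = np.random.default_rng(0)
features = rng.uniform(0, 1, size=(100, 2))
X = np.column_stack([np.ones(100), features])  # x_0 = 1
y = X @ np.array([1.0, 2.0, 3.0])
print(gradient_descent_multi(X, y))            # approx [1. 2. 3.]
```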


feature scaling
If different features take on similar ranges of values, gradient descent will converge more quickly: it requires fewer iterations to get to a good solution.
Ranges such as −3 to 3 or −1/3 to 1/3 are fine; but if, say, x_1 ranges from 0 to 10,000 while another feature ranges from 0 to 1, rescale.
Get every feature into approximately a −1 ≤ x_i ≤ 1 range, e.g. by dividing by the range (max − min) or by the standard deviation.

mean normalization
Replace x_i with x_i − μ_i so that features have approximately zero mean (do not apply this to x_0 = 1).

To implement both measures above:
x_i := (value − average_value) / (max_value − min_value)

Feature scaling does not have to be too exact, as long as gradient descent runs faster.
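A small sketch of the mean normalization plus range scaling described above (NumPy; the function name and sample values are mine):

```python
import numpy as np

def normalize_features(X):
    """Mean-normalize and range-scale each column of X (one column per feature)."""
    mu = X.mean(axis=0)                     # per-feature average
    spread = X.max(axis=0) - X.min(axis=0)  # per-feature range; assumes no constant column
    return (X - mu) / spread

X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [1416.0, 2.0]])
print(normalize_features(X))  # every column now roughly within [-1, 1]
```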

judging convergence

An example automatic convergence test: declare convergence if J(θ) decreases by less than some small threshold ε (e.g., 10⁻³) in one iteration. But a good threshold is not easy to find, so we had better plot J(θ) against the number of iterations and check that the curve flattens out.
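A sketch of that diagnostic: record J after every iteration and plot it (matplotlib; builds on the univariate example above, all names mine):

```python
import numpy as np
import matplotlib.pyplot as plt

def gradient_descent_with_history(x, y, alpha=0.1, iterations=200):
    """Univariate GD that also records J(theta) after every iteration."""
    m = len(y)
    theta0, theta1, history = 0.0, 0.0, []
    for _ in range(iterations):
        error = (theta0 + theta1 * x) - y
        theta0, theta1 = (theta0 - alpha * error.sum() / m,
                          theta1 - alpha * (error * x).sum() / m)
        history.append(np.sum(((theta0 + theta1 * x) - y) ** 2) / (2 * m))
    return theta0, theta1, history

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 + 2.0 * x
_, _, history = gradient_descent_with_history(x, y)
plt.plot(history)                 # should decrease every iteration, then flatten
plt.xlabel("iteration")
plt.ylabel("J(theta)")
plt.show()
```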
