Coursera Machine Learning Note - Week 1

Linear Regression with One Variable

Hypothesis function: $h_\theta(x) = \theta_0 + \theta_1 x$
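
For reference, a minimal sketch of the hypothesis in Python (the parameter values and NumPy usage below are illustrative assumptions, not from the course):

```python
import numpy as np

def h(x, theta0, theta1):
    """Hypothesis h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

# Hypothetical parameters, chosen only for illustration.
theta0, theta1 = 1.0, 2.0
x = np.array([0.0, 1.0, 2.0])
print(h(x, theta0, theta1))  # [1. 3. 5.]
```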

Idea: choose $\theta_0, \theta_1$ so that $h_\theta(x)$ is close to $y$ for our training examples $(x, y)$

$\min_{\theta_0, \theta_1} \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$

Parameters: $\theta_0, \theta_1$

Cost function: $J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$, where $m$ is the size of the training set

So, the goal: $\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$

Note that: $J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 = \frac{1}{2m} \sum_{i=1}^{m} \left( (\theta_0 + \theta_1 x^{(i)})^2 + (y^{(i)})^2 - 2(\theta_0 + \theta_1 x^{(i)}) y^{(i)} \right)$
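
A rough sketch of how $J(\theta_0, \theta_1)$ could be computed in Python (the toy data and function name are assumptions for illustration):

```python
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """J(theta0, theta1) = 1/(2m) * sum_i (h_theta(x^(i)) - y^(i))^2."""
    m = len(y)
    errors = theta0 + theta1 * x - y   # h_theta(x^(i)) - y^(i) for every example
    return np.sum(errors ** 2) / (2 * m)

# Toy training set (assumed): y = 2x exactly, so theta0 = 0, theta1 = 2 gives J = 0.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(compute_cost(x, y, 0.0, 2.0))  # 0.0
print(compute_cost(x, y, 0.0, 0.0))  # (4 + 16 + 36) / 6 ≈ 9.33
```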

$J$ is a function of the parameters $\theta_0, \theta_1$; its graph looks like the following:

[Figure: surface plot of the cost function $J(\theta_0, \theta_1)$]
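
A sketch of how such a surface could be plotted with Matplotlib (the toy data and grid ranges are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy data (assumed), roughly y = 1 + 2x.
x = np.array([1.0, 2.0, 3.0])
y = np.array([3.1, 4.9, 7.2])
m = len(y)

# Evaluate J on a grid of (theta0, theta1) values.
t0, t1 = np.linspace(-3, 5, 100), np.linspace(-1, 5, 100)
T0, T1 = np.meshgrid(t0, t1)
J = np.zeros_like(T0)
for i in range(T0.shape[0]):
    for j in range(T0.shape[1]):
        errors = T0[i, j] + T1[i, j] * x - y
        J[i, j] = np.sum(errors ** 2) / (2 * m)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(T0, T1, J, cmap="viridis")
ax.set_xlabel(r"$\theta_0$")
ax.set_ylabel(r"$\theta_1$")
ax.set_zlabel(r"$J(\theta_0, \theta_1)$")
plt.show()
```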

Gradient descent algorithm:

  1. Start with some $\theta_0, \theta_1$
  2. Keep changing $\theta_0, \theta_1$ to reduce $J(\theta_0, \theta_1)$ until we hopefully end up at a minimum

repeat until convergence {

$\theta_0 := \theta_0 - \alpha \frac{\partial}{\partial \theta_0} J(\theta_0, \theta_1) = \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$
$\theta_1 := \theta_1 - \alpha \frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1) = \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$
(update $\theta_0, \theta_1$ simultaneously)
}
where $\alpha$ is the learning rate (a common starting value is 0.03)
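
Putting the pieces together, a minimal sketch of batch gradient descent with the simultaneous update (toy data, function name, and iteration count are assumptions for illustration):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.03, num_iters=1500):
    """Batch gradient descent for h_theta(x) = theta0 + theta1 * x."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        errors = theta0 + theta1 * x - y       # h_theta(x^(i)) - y^(i)
        # Compute both gradients before updating either parameter:
        # this is what "update theta0, theta1 simultaneously" means.
        grad0 = np.sum(errors) / m
        grad1 = np.sum(errors * x) / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# Toy training set (assumed): exactly y = 1 + 2x, so the result should
# approach theta0 = 1, theta1 = 2.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(gradient_descent(x, y))  # approximately (1.0, 2.0)
```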
