Machine Learning Notes (Washington University) - Regression Specialization - Week Six

1. Fit locally

If the true model varies a lot across the input space, we want to fit our function locally to different regions of the input space.

 

2. Scaled distance

We put a weight a_j on each input to define its relative importance, giving the scaled Euclidean distance:

distance(x_i, x_q) = sqrt( a_1 (x_i[1] - x_q[1])^2 + ... + a_d (x_i[d] - x_q[d])^2 )
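A minimal NumPy sketch of this distance (the weight vector a and the function name scaled_distance are illustrative, not from the course):

```python
import numpy as np

def scaled_distance(x_i, x_q, a):
    """Scaled Euclidean distance: sqrt(sum_j a_j * (x_i[j] - x_q[j])^2)."""
    diff = np.asarray(x_i, dtype=float) - np.asarray(x_q, dtype=float)
    return np.sqrt(np.sum(np.asarray(a, dtype=float) * diff ** 2))

# Example: the second input counts twice as much as the first.
print(scaled_distance([1.0, 2.0], [0.0, 0.0], a=[1.0, 2.0]))  # sqrt(1 + 8) = 3.0
```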

 

3. KNN

KNN is very sensitive to regions with little data, and also to noise in the data.

With an infinite amount of noiseless data, the 1-NN fit has no bias and no variance (its error goes to zero).

Boundary effect: near the boundary, the prediction tends to average over the same data points, so the fit flattens out.

Discontinuities: the predicted value jumps whenever the set of nearest neighbors changes.

The more dimensions d you have, the more points N you need to cover the space.

Procedure (sketched in code below):

1. find the k closest points x_i in the dataset;

2. predict the value as the average of those k samples' targets.
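A minimal NumPy sketch of this procedure, assuming a training matrix X with one row per observation and a target vector y (the name knn_predict and the toy data are illustrative):

```python
import numpy as np

def knn_predict(X, y, x_q, k):
    """k-NN regression: average the targets of the k nearest neighbors."""
    dists = np.sqrt(((X - x_q) ** 2).sum(axis=1))  # Euclidean distance to the query
    nearest = np.argsort(dists)[:k]                # step 1: indices of the k closest x_i
    return y[nearest].mean()                       # step 2: average their y values

# Toy usage on noisy sine data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=100)
print(knn_predict(X, y, x_q=np.array([5.0]), k=5))
```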

 

Weighted KNN:

weight the more similar data points more heavily than the less similar ones among the k nearest neighbors.
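One simple way to realize this is inverse-distance weighting, sketched below (kernel weights, covered next, are another choice; the helper name is hypothetical):

```python
import numpy as np

def weighted_knn_predict(X, y, x_q, k):
    """Weighted k-NN: closer neighbors get larger weights."""
    dists = np.sqrt(((X - x_q) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    w = 1.0 / (dists[nearest] + 1e-8)          # inverse-distance weights; epsilon avoids divide-by-zero
    return np.sum(w * y[nearest]) / np.sum(w)  # weighted average of the k targets
```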

 

4. Kernel

The kernel defines how the weights decay as a function of the distance between a given point and the query point.

If the kernel has bounded support, only a subset of the data is needed to compute the local fit.

We can also use a validation set or cross-validation to choose the bandwidth lambda.

Gaussian kernel: K_lambda(xi) = exp(-xi^2 / lambda), where xi is the distance to the query point.

The weights never go exactly to zero for the Gaussian kernel, so every observation influences the fit.
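A minimal sketch of kernel regression with the Gaussian kernel above, including choosing lambda on a validation set as just noted (the toy data, split, and candidate bandwidths are all illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(dist, lam):
    # Smooth decay with distance; strictly positive, never exactly zero.
    return np.exp(-dist ** 2 / lam)

def kernel_predict(X, y, x_q, lam):
    """Kernel regression: kernel-weighted average over all training targets."""
    dists = np.sqrt(((X - x_q) ** 2).sum(axis=1))
    w = gaussian_kernel(dists, lam)
    return np.sum(w * y) / np.sum(w)

# Toy data, split into train/validation to pick the bandwidth lambda.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

def val_mse(lam):
    preds = np.array([kernel_predict(X_tr, y_tr, xq, lam) for xq in X_val])
    return np.mean((preds - y_val) ** 2)

best_lam = min([0.01, 0.1, 0.5, 1.0, 2.0], key=val_mse)
print("chosen lambda:", best_lam)
```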

 

Reposted from: https://www.cnblogs.com/climberclimb/p/6821431.html
