Stanford Machine Learning Problem Set #1: Supervised Learning

2. Locally-weighted logistic regression
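For reference, lwlr.m maximizes, for each query point x, the weighted and l2-regularized log-likelihood as stated in the problem handout (the \lambda = 10^{-4} regularizer below matches the 1e-4 constants in the code):

\ell(\theta) = -\frac{\lambda}{2}\theta^T\theta + \sum_{i=1}^{m} w^{(i)}\left[ y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right) \right]

where h_\theta(x) = \frac{1}{1+e^{-\theta^T x}} and w^{(i)} = \exp\left(-\frac{\|x-x^{(i)}\|^2}{2\tau^2}\right).

Its gradient and Hessian, which the Newton loop in the code computes directly, are

\nabla_\theta \ell(\theta) = X^T z - \lambda\theta, \quad z_i = w^{(i)}\left(y^{(i)} - h_\theta(x^{(i)})\right)

H = X^T D X - \lambda I, \quad D_{ii} = -w^{(i)}\, h_\theta(x^{(i)})\left(1 - h_\theta(x^{(i)})\right)

and each Newton step is \theta := \theta - H^{-1}\nabla_\theta \ell(\theta).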
Code:
 lwlr.m

function y = lwlr(X_train, y_train, x, tau)
% LWLR  Locally-weighted logistic regression: fit a logistic regression
% classifier weighted around the query point x (an n x 1 vector) and
% return the predicted label (0 or 1).

m = size(X_train,1);
n = size(X_train,2);

theta = zeros(n,1);

% compute weights: w(i) = exp(-||x - x(i)||^2 / (2*tau^2))
w = exp(-sum((X_train - repmat(x',m,1)).^2,2)/(2*tau^2));

% maximize the weighted, l2-regularized log-likelihood (lambda = 1e-4)
% with Newton's method: theta := theta - H \ g
g = ones(n,1);
while (norm(g) > 1e-6)
    h = 1./(1+exp(-X_train*theta));                          % h_theta(x(i)) for all i
    g = X_train'*(w.*(y_train - h)) - 1e-4*theta;            % gradient of l(theta)
    H = -X_train'*diag(w.*h.*(1-h))*X_train - 1e-4*eye(n);   % Hessian of l(theta)
    theta = theta - H\g;
end

% return predicted y: class 1 iff theta'*x > 0
y = double(x'*theta > 0);
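A minimal usage sketch (assumes q1x.dat and q1y.dat from the problem set are on the MATLAB path; the query point [3; -1] and bandwidth 0.5 are arbitrary choices for illustration):

load('q1x.dat');
load('q1y.dat');
y_pred = lwlr(q1x, q1y, [3; -1], 0.5);   % predicted label (0 or 1) at the query point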

Test.m

clc; clear; close all;
load('q1x.dat');
load('q1y.dat');

X = q1x; Y = q1y;
m = size(X,1);
n = size(X,2);
% plot window chosen to cover the data (min(X)/max(X) would also work)
minX = [0 -6];   maxX = [8 4];
resolution = 200;
tau = [0.01 0.05 0.1 0.5 1 5];
for i = 1:6
    fprintf('plotting subplot %d of 6...\n', i);
    subplot(2,3,i);
    Title = sprintf('tau = %g', tau(i));
    title(Title);  hold on;

    % classify every point of a resolution x resolution grid
    for x1 = linspace(minX(1),maxX(1),resolution)
        for x2 = linspace(minX(2),maxX(2),resolution)
            y = lwlr(X,Y,[x1;x2],tau(i));
            if y == 0
                plot(x1,x2,'r.');
            else
                plot(x1,x2,'g.');
            end
        end
    end

    % overlay the training data: crosses for class 0, circles for class 1
    for j = 1:m
        if Y(j) == 0
            plot(X(j,1),X(j,2),'kx');
        else
            plot(X(j,1),X(j,2),'ko');
        end
    end

    hold off;
end
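Since Test.m calls lwlr 200 x 200 = 40,000 times per subplot and issues one plot call per grid point, rendering is slow. Below is a sketch of a faster way to draw a single panel, reusing the variables from Test.m (X, Y, minX, maxX): the grid predictions are collected into a matrix and drawn with one imagesc call. The grid size and the bandwidth value here are illustrative, not part of the original solution.

% Sketch: render one decision map with imagesc instead of per-point plot calls.
% Assumes X, Y, minX, maxX from Test.m are in the workspace.
res = 100;                                  % coarser grid, illustrative
x1s = linspace(minX(1), maxX(1), res);
x2s = linspace(minX(2), maxX(2), res);
pred = zeros(res, res);
for a = 1:res
    for b = 1:res
        pred(b,a) = lwlr(X, Y, [x1s(a); x2s(b)], 0.5);  % tau = 0.5, illustrative
    end
end
imagesc(x1s, x2s, pred);                    % one draw call for the whole grid
set(gca, 'YDir', 'normal');                 % imagesc flips the y-axis by default
colormap([1 0 0; 0 1 0]);                   % red = class 0, green = class 1
hold on;
plot(X(Y==0,1), X(Y==0,2), 'kx');           % overlay training data
plot(X(Y==1,1), X(Y==1,2), 'ko');
hold off;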

Conclusion: For smaller τ, the classifier appears to overfit the data set, obtaining zero training error but outputting a sporadic-looking decision boundary. As τ grows, the resulting decision boundary becomes smoother, eventually converging (in the limit as τ → ∞, where all weights approach 1) to the unweighted logistic regression solution.
