Linear Regression Study Notes

Just getting started, so this is only a record of my process of working through http://cs229.stanford.edu/materials.html. I record the learning code here; for the theory I'm taking a shortcut and referring to: http://blog.csdn.net/yangliuy/article/details/18455525
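For reference (standard definitions, stated here for convenience rather than quoted from the links above), the code below minimizes the least-squares cost by batch gradient descent:

$$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(\theta^{\top}x^{(i)} - y^{(i)}\bigr)^{2}, \qquad \theta \leftarrow \theta - \frac{\alpha}{m}\,X^{\top}(X\theta - y)$$

where $m$ is the number of training examples and $\alpha$ is the learning rate.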

Contents of linear_grad_ascent.m:

function [theta, J_history] = linear_grad_ascent(X, y, theta, alpha, num_iters)
%LINEAR_GRAD_ASCENT Performs gradient descent to learn theta
%   theta = LINEAR_GRAD_ASCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha


% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
%Two approaches: the first uses an explicit loop over the training examples
%(note: this commented-out version scales the gradient by 2 and omits the 1/m normalization),
% for iter = 1:num_iters
%     % Batch gradient descent
%      grad = 0;
%      for i = 1:m
%          grad = grad + (X(i,:) * theta - y(i)) * X(i, :)';
%      end 
%     theta = theta - alpha *2 *grad;
%     % Save the cost J in every iteration    
%     J_history(iter) = computeCost(X, y, theta);
% end
%the second uses vectorized operations, following ng
x = X;
sample_num = m;
for iter = 1:num_iters
    grad = x'*(x*theta-y) ;
    theta = theta - alpha /sample_num * grad;
    
    J_history(iter) = computeCost(x, y, theta); % equals (1/(2*sample_num))*(x*theta-y)'*(x*theta-y); J_history is a column vector of per-iteration costs
end


end
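The function above calls computeCost, which is not listed in this post. A minimal sketch consistent with the (1/(2*sample_num))*(x*theta-y)'*(x*theta-y) expression in the comment (the actual computeCost.m from the exercise may differ):

function J = computeCost(X, y, theta)
%COMPUTECOST Least-squares cost for linear regression
%   Sketch only; the original computeCost.m is not included in this post.
m = length(y);              % number of training examples
residual = X * theta - y;   % m x 1 vector of prediction errors
J = (1 / (2 * m)) * (residual' * residual);
end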

Test file code, linear_grad_ascent_test.m:

function linear_grad_ascent_test
%LINEAR_GRAD_ASCENT_TEST Fits a line to the ex2 data and plots the result
%   Uses either ng's linear_regress or my linear_grad_ascent (commented out below)
x = load('ex2x.dat');
y = load('ex2y.dat');
plotData(x,y);
x = [ones(size(x,1),1),x];


alpha = 0.0001;
max_iters = 5000;


theta = [0.722254032225002;2.585491252616242]; % fixed initial values (could also use randn(2,1))


% andrew ng's code (linear_regress expects samples as columns, hence the transposes)
x = x';
y = y';
theta = linear_regress(x,y);
x = x';
% end of andrew ng's code


% my own code (uncomment to use linear_grad_ascent instead; it also returns the cost history J)
% [theta,J] = linear_grad_ascent(x, y, theta, alpha, max_iters);


y1 = x * theta;


hold on;
plot(x(:,2)',y1','b');
hold off;



% Plot the cost history. J is only defined when the linear_grad_ascent
% path above is used; linear_regress does not return a cost history.
if exist('J', 'var')
    t = 1:max_iters;
    figure;
    plot(t(1:20)', J(1:20)', 'b');
    xlabel('Number of iterations');
    ylabel('Cost J');
end

end




function plotData(x, y)
%PLOTDATA Plots the data points x and y into a new figure 
%   PLOTDATA(x,y) plots the data points and gives the figure axes labels of
%   population and profit.


% ====================== YOUR CODE HERE ======================
% Instructions: Plot the training data into a figure using the 
%               "figure" and "plot" commands. Set the axes labels using
%               the "xlabel" and "ylabel" commands. Assume the 
%               population and profit data have been passed in
%               as the x and y arguments of this function.
%
% Hint: You can use the 'rx' option with plot to have the markers
%       appear as red crosses. Furthermore, you can make the
%       markers larger by using plot(..., 'rx', 'MarkerSize', 10);


figure; % open a new figure window


plot(x, y, 'rx', 'MarkerSize', 10); % Plot the data


% ============================================================


end

linear_regress.m (ng's code):

% linear_regress.m
function w_learned = linear_regress(X,y)
% linear regression model 


% learning rate and iteration budget
epsilon = 0.0001;
max_iters = 5000;


% Use gradient descent to learn a set of parameters w_learned
% initialize w_learned randomly
w_learned = randn(2,1);
% iterate for max_iters # of iterations (could use other convergence
% criteria)


for iteration = 1:max_iters
    grad = 2*sum(repmat(w_learned'*X-y,size(X,1),1).*X,2);
    w_learned = w_learned - epsilon*grad; % step size epsilon (= 0.0001)
    err = sum((y-w_learned'*X).^2);       % squared error (computed but not otherwise used)
end
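The loop above always runs for the full max_iters iterations; as the comment says, other convergence criteria could be used. A minimal variant of my own that stops once the gradient is small (tol is a hypothetical tolerance, not part of ng's code):

% Variant with a convergence test instead of a fixed iteration count
tol = 1e-6;   % hypothetical tolerance, not from ng's original code
for iteration = 1:max_iters
    grad = 2*sum(repmat(w_learned'*X-y,size(X,1),1).*X,2);
    if norm(grad) < tol   % stop once the gradient is numerically negligible
        break;
    end
    w_learned = w_learned - epsilon*grad;
end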


Test data link:

http://openclassroom.stanford.edu/MainFolder/courses/DeepLearning/exercises/ex2materials/ex2Data.zip
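Since least squares also has a closed-form solution, the gradient-descent result can be cross-checked against the normal equations. This is my own sanity check, not part of the original exercise:

% Closed-form least-squares fit via the normal equations (MATLAB backslash)
x = load('ex2x.dat');
y = load('ex2y.dat');
X = [ones(size(x,1),1), x];   % prepend the intercept column, as in the test file
theta_exact = X \ y;          % solves min ||X*theta - y||^2 in the least-squares sense
disp(theta_exact);            % should be close to the theta found by gradient descent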

