Gradient Descent

clear all; close all; clc

x = load('ex2x.dat');    % ages of the training examples
y = load('ex2y.dat');    % heights of the training examples

plot(x,y,'o')
ylabel('Height in meters')
xlabel('Age')
m = length(y);           % number of training samples
x = [ones(m,1), x];      % prepend a column of ones for the intercept term

theta0 = 0;
theta1 = 0;
alpha = 0.07;                    % learning rate
n = 1500;                        % number of gradient-descent iterations
theta = [theta0; theta1];
theta_array = [];                % record theta at each iteration for the contour plot
for k = 1:n
    grad = x' * (x*theta - y);              % vectorized gradient over all m samples
    theta_array = [theta_array, theta];     % store the current theta before updating
    theta = theta - alpha * (1/m) * grad;   % simultaneous update of theta0 and theta1
end
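
For reference, the vectorized gradient above is just the per-sample sums written as one matrix product; the element-wise form below computes the same thing:

grad = zeros(2, 1);
for i = 1:m                                            % loop over the m training samples
    grad = grad + (theta'*x(i,:)' - y(i)) * x(i,:)';   % accumulate (h(x_i) - y_i) * x_i
end
% grad is now identical to x'*(x*theta - y)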

theta                            % display the learned parameters

hold on
plot(x(:,2), theta(2)*x(:,2) + theta(1)*x(:,1))   % overlay the fitted line on the training data
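
With theta in hand, a prediction is just an inner product, and the closed-form normal equation gives a useful sanity check on convergence. A minimal sketch (the age 3.5 is only an illustrative input):

pred_age = 3.5;                      % illustrative input, not from the dataset
pred_height = [1, pred_age] * theta  % h(x) = theta0 + theta1 * age
theta_exact = (x'*x) \ (x'*y)        % normal equation; should be close to theta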

% compute the cost function over a grid of (theta0, theta1) values
J_vals = zeros(100, 100);            % initialize J_vals to a 100x100 matrix of zeros
theta0_vals = linspace(-3, 3, 100);
theta1_vals = linspace(-1, 1, 100);
for i = 1:length(theta0_vals)
    for j = 1:length(theta1_vals)
        t = [theta0_vals(i); theta1_vals(j)];
        J_vals(i,j) = 1/(2*m) * (x*t - y)' * (x*t - y);   % J(theta) = (1/2m) * sum of squared errors
    end
end
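
As a quick check that the grid agrees with the descent, the smallest entry of J_vals should sit near the learned theta; a small sketch (run before the transpose below):

[~, idx] = min(J_vals(:));                     % index of the smallest cost on the grid
[i_min, j_min] = ind2sub(size(J_vals), idx);
fprintf('grid minimum near theta0 = %.3f, theta1 = %.3f\n', ...
    theta0_vals(i_min), theta1_vals(j_min));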

% Plot the surface plot
% Because of the way meshgrids work in the surf command, we need to 
% transpose J_vals before calling surf, or else the axes will be flipped
J_vals = J_vals';
figure;
surf(theta0_vals, theta1_vals, J_vals)
xlabel('\theta_0'); ylabel('\theta_1')

% Contour plot
figure;
% Plot J_vals as 15 contours spaced logarithmically between 0.01 and 100
contour(theta0_vals, theta1_vals, J_vals, logspace(-2, 2, 15))   % draw the contour lines
hold on
plot(theta_array(1,:), theta_array(2,:), 'r');   % overlay the theta values from each iteration on the contour plot
xlabel('\theta_0'); ylabel('\theta_1');          % TeX-style labels; this shorthand only covers parameters 0 through 9
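
One optional touch: marking the final theta on the contour plot makes the end of the red trajectory explicit (the marker style here is an arbitrary choice):

plot(theta(1), theta(2), 'rx', 'MarkerSize', 10)   % final theta from gradient descent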



I really spent a long time debugging this code, but it finally works. From now on, using gradient descent for regression is nothing to worry about. My next step is to implement gradient descent in C++.

Summary:

In machine learning we often speak of supervised and unsupervised learning. Supervised learning splits into classification and regression; the difference between them is that classification's output is discrete, while regression's output is continuous.

The cost function minimized by gradient descent here is the squared-error cost J(theta) = 1/(2m) * sum((x*theta - y).^2), which is exactly what the grid code above computes.
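
For reference, the gradient that drives the update inside the loop is the standard linear-regression result:

\frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( \theta^T x^{(i)} - y^{(i)} \right) x_j^{(i)}, \qquad j = 0, 1

which is exactly what x'*(x*theta - y)/m computes for both components at once.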
