gradientDescent ---- Andrew Ng Machine Learning Assignment

1.Question

Next, you will implement gradient descent in the file gradientDescent.m. The loop structure has been written for you, and you only need to supply the updates to θ within each iteration. As you program, make sure you understand what you are trying to optimize and what is being updated. Keep in mind that the cost J(θ) is parameterized by the vector θ, not X and y. That is, we minimize the value of J(θ) by changing the values of the vector θ, not by changing X or y. Refer to the equations in this handout and to the video lectures if you are uncertain.

A good way to verify that gradient descent is working correctly is to look at the value of J(θ) and check that it is decreasing with each step. The starter code for gradientDescent.m calls computeCost on every iteration and prints the cost. Assuming you have implemented gradient descent and computeCost correctly, your value of J(θ) should never increase, and it should converge to a steady value by the end of the algorithm.

After you are finished, ex1.m will use your final parameters to plot the linear fit. The result should look something like Figure 2.

Your final values for θ will also be used to make predictions on profits in areas of 35,000 and 70,000 people. Note how the following lines in ex1.m use matrix multiplication, rather than explicit summation or looping, to calculate the predictions; this is an example of code vectorization in Octave/MATLAB. You should now submit your solutions.
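For reference, the update that the loop must perform in each iteration is the standard batch gradient descent rule for linear regression, applied to θ₀ and θ₁ simultaneously:

\theta_j := \theta_j - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \bigl( h_\theta(x^{(i)}) - y^{(i)} \bigr) \, x_j^{(i)}, \qquad h_\theta(x) = \theta^{T} x, \quad j = 0, 1.

Each step moves θ in the direction of steepest descent of J(θ), with the step size controlled by the learning rate α.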

2.Code

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by 
%   taking num_iters gradient steps with learning rate alpha
%   J_history stores the value of the cost J after each iteration
%   theta is updated in place on every iteration; only the final value is returned

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta. 
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %
    % Compute the gradient with the current (old) theta first, so that both
    % components are updated simultaneously.
    s  = 1/m * (X*theta - y) .* X;   % m x 2: per-example gradient terms
    s0 = sum(s);                     % 1 x 2: sum over the m training examples
    s1 = s0(:,1);                    % partial derivative of J w.r.t. theta(1)
    s2 = s0(:,2);                    % partial derivative of J w.r.t. theta(2)
    theta(1,:) = theta(1,:) - alpha*s1;
    theta(2,:) = theta(2,:) - alpha*s2;
    % ============================================================

    % Save the cost J in every iteration    
    J_history(iter) = computeCost(X, y, theta);

end
% Optional debugging: plot the cost history to check that J decreases.
% iters = (1:num_iters)';
% plot(iters, J_history);
end
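As a side note, the same update can be written as a single vectorized line. The sketch below is a hypothetical alternative (the function name gradientDescentVectorized is mine, not part of the assignment), assuming the same X (m x 2 with a leading column of ones), y (m x 1), and theta (2 x 1) as above:

function [theta, J_history] = gradientDescentVectorized(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTVECTORIZED Hypothetical vectorized variant of the update above.
m = length(y);
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    % X' * (X*theta - y) is the 2 x 1 vector of summed gradient terms, so this
    % one line performs the same simultaneous update as s1/s2 in the code above.
    theta = theta - (alpha / m) * (X' * (X * theta - y));
    J_history(iter) = computeCost(X, y, theta);
end
end

The vectorized prediction mentioned in the question works the same way: a row vector such as [1, 3.5] (the bias term plus a population of 35,000, since population is given in units of 10,000) is multiplied by the learned theta, e.g. something like predict1 = [1, 3.5] * theta in ex1.m.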

3.Result

(Result screenshots omitted in this post.)
