First, let's look at what the assignment asks for:
The first four functions are the required ones.
warmUpExercise.m is the practice function from the videos; it needs little explanation (a minimal sketch follows).
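If I remember the PDF correctly, the warm-up just asks you to return a 5 x 5 identity matrix; treat this as a sketch of that assumption rather than the official solution:

function A = warmUpExercise()
%WARMUPEXERCISE Example function that returns the 5x5 identity matrix
A = eye(5); % eye(n) builds an n x n identity matrix
end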
plotData.m: the requirements are as follows:
The gist of the task is to load the x and y values from ex1data1.txt and plot them.
Copy the code from the PDF with minor modifications:
function plotData(x, y)
%PLOTDATA Plots the data points x and y into a new figure
% PLOTDATA(x,y) plots the data points and gives the figure axes labels of
% population and profit.
figure; % open a new figure window
% ====================== YOUR CODE HERE ======================
% Instructions: Plot the training data into a figure using the
% "figure" and "plot" commands. Set the axes labels using
% the "xlabel" and "ylabel" commands. Assume the
% population and revenue data have been passed in
% as the x and y arguments of this function.
%
% Hint: You can use the 'rx' option with plot to have the markers
% appear as red crosses. Furthermore, you can make the
% markers larger by using plot(..., 'rx', 'MarkerSize', 10);
% Note: x and y are already passed in as arguments (ex1.m loads
% ex1data1.txt), so there is no need to reload the file here.
plot(x, y, 'rx', 'MarkerSize', 10); % plot the data as red crosses
xlabel('Population of City in 10,000s');
ylabel('Profit in $10,000s');
% ============================================================
end
Running it produces the scatter plot of the training data.
The second function:
computeCost.m // computes the cost for a given theta
Taking a look at its signature:
function J = computeCost(X, y, theta)
So what exactly is X?
x is originally an m x 1 column vector; a column of ones is prepended to it to form the m x 2 matrix X, so the ones column pairs with theta0 and the x column with theta1.
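This happens in ex1.m before computeCost is ever called. A sketch of that setup (the variable names are my assumption of the course script, not a verbatim copy):

data = load('ex1data1.txt'); % each row: population, profit
x = data(:, 1);              % m x 1 column of populations
y = data(:, 2);              % m x 1 column of profits
m = length(y);               % number of training examples
X = [ones(m, 1), x];         % prepend a column of ones, giving an m x 2 matrix
theta = zeros(2, 1);         % [theta0; theta1], initialized to zero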
The PDF gives the cost function, so we can translate it directly into code.
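For reference, that cost function is

    J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2, \qquad h_\theta(x) = \theta_0 + \theta_1 x

which in vectorized MATLAB becomes: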
J = sum(((X * theta) - y).^2) / (2*m);
To explain this line: X is an m x 2 matrix and theta is a 2 x 1 vector, so X * theta is an m x 1 vector of predictions. Subtracting y gives the m x 1 vector of errors, .^2 squares each entry elementwise, sum adds them up, and dividing by 2*m yields the cost.
The complete code is as follows:
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
% J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
% parameter for linear regression to fit the data points in X and y
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
% You should set J to the cost.
J = sum(((X * theta) - y).^2) / (2*m); % semicolon keeps J from printing on every call
% =========================================================================
end
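An equivalent fully vectorized form (my own rewrite, not taken from the PDF) replaces the elementwise square-and-sum with an inner product:

J = (X * theta - y)' * (X * theta - y) / (2 * m); % (1 x m) * (m x 1) gives a scalar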
gradientDescent.m
That is, it repeatedly updates theta and computes J at every step.
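The update rule from the PDF, applied to theta0 and theta1 simultaneously, is

    \theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

where x_0^{(i)} = 1, so for theta0 the sum is just the sum of the errors.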
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
% theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
% preallocate a (num_iters x 1) vector of zeros to record the cost per iteration
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
% theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCost) and gradient here.
%
temp1 = theta(1) - alpha * (1/m) * sum((X * theta) - y);
temp2 = theta(2) - alpha * (1/m) * sum(((X * theta) - y) .* X(:,2));
% assign after computing both, so both updates use the old theta
theta(1) = temp1;
theta(2) = temp2;
% X(:,2) is the second column of X, i.e., the original x values
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCost(X, y, theta);
end
end
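The two temp lines can also be collapsed into a single vectorized update that works for any number of parameters (a sketch of my own, not the PDF's formulation):

theta = theta - (alpha / m) * X' * (X * theta - y); % X' * errors computes every partial sum at once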
OK, the remaining functions will be given later.