Andrew Ng Machine Learning: Week 2 Programming Assignment
computeCost
When computing the cost, note that you must use the element-wise .^ operator, because each entry of the vector is squared individually; afterwards, use sum to add the results together.
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.
h = X*theta - y;              % residuals h_theta(x) - y for every example
J = 1/(2*m) * sum(h .^ 2);    % squared-error cost J(theta)
% =========================================================================
end
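As a quick sanity check (made-up numbers, not part of the assignment), points that lie exactly on a line should give a cost of 0:

% Three examples on y = 1 + 2x, so the fit is perfect and J = 0.
X = [1 1; 1 2; 1 3];          % first column is the intercept term
y = [3; 5; 7];
computeCost(X, y, [1; 2])     % ans = 0
computeCost(X, y, [0; 0])     % ans = (9 + 25 + 49) / (2*3), about 13.83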
computeCostMulti
Essentially the same as the previous exercise.
function J = computeCostMulti(X, y, theta)
%COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
%   J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.
h = X*theta;                      % hypothesis h_theta(x) for every example
J = 1/(2*m) * sum((h - y).^2);    % same squared-error cost as computeCost
% =========================================================================
end
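A fully vectorized alternative replaces sum with an inner product; this is just a stylistic variant, not something the assignment asks for:

err = X*theta - y;          % residual vector
J = (err' * err) / (2*m);   % err'*err is the same as sum(err.^2)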
featureNormalize
Normalize each feature according to the formula x = (x - mean) / std.
function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X
% FEATURENORMALIZE(X) returns a normalized version of X where
% the mean value of each feature is 0 and the standard deviation
% is 1. This is often a good preprocessing step to do when
% working with learning algorithms.
% You need to set these values correctly
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
% ====================== YOUR CODE HERE ======================
% Instructions: First, for each feature dimension, compute the mean
%               of the feature and subtract it from the dataset,
% storing the mean value in mu. Next, compute the
% standard deviation of each feature and divide
% each feature by its standard deviation, storing
% the standard deviation in sigma.
%
% Note that X is a matrix where each column is a
% feature and each row is an example. You need
% to perform the normalization separately for
% each feature.
%
% Hint: You might find the 'mean' and 'std' functions useful.
%
mu = mean(X);                      % per-column means
sigma = std(X);                    % per-column standard deviations
X_norm = (X_norm - mu) ./ sigma;   % normalize every column at once
% ============================================================
end
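One caveat: X_norm = (X_norm - mu) ./ sigma relies on implicit broadcasting of the row vectors mu and sigma, which works in Octave and in MATLAB R2016b or later. On older MATLAB versions an equivalent form uses bsxfun (a sketch under that assumption):

% Expand mu and sigma over every row of X explicitly.
X_norm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma);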
gradientDescent
Just implement the update rule obtained by differentiating the cost function: theta := theta - (alpha/m) * X' * (X*theta - y).
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
% theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
%               theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCost) and gradient here.
%
h = X*theta - y;                    % residuals for the current theta
theta = theta - alpha*(1/m)*X'*h;   % vectorized gradient step
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCost(X, y, theta);
end
end
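A simple way to verify the implementation (made-up data and a hypothetical learning rate of 0.05): fit y = 2x and confirm that the recorded cost never increases:

X = [ones(5,1), (1:5)'];          % intercept column plus one feature
y = [2; 4; 6; 8; 10];             % exactly y = 2x
[theta, J_history] = gradientDescent(X, y, zeros(2,1), 0.05, 500);
theta                             % approaches [0; 2]
all(diff(J_history) <= 0)         % 1 (true): the cost decreases monotonically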
gradientDescentMulti
The multivariate case is essentially the same as the single-variable one; the key is to grasp the idea of vectorization.
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
% theta = GRADIENTDESCENTMULTI(x, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
%               theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCostMulti) and gradient here.
%
theta = theta - alpha*(1/m)*X'*(X*theta - y);   % identical vectorized update to the single-variable case
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCostMulti(X, y, theta);
end
end
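In practice the pieces chain together: normalize the features, add the intercept column, then run gradient descent. A sketch with invented housing-style rows (only the (size, bedrooms, price) format matches ex1data2; the numbers are made up):

data = [2104 3 399900; 1600 2 329900; 2400 4 369000];
X = data(:, 1:2);  y = data(:, 3);  m = length(y);
[X, mu, sigma] = featureNormalize(X);   % scale features first
X = [ones(m, 1), X];                    % then add the intercept column
theta = gradientDescentMulti(X, y, zeros(3, 1), 0.1, 400);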
normalEqn
The normal equation gives the closed-form least-squares solution in one step, with no learning rate, iteration, or feature scaling: theta = pinv(X' * X) * X' * y.
function [theta] = normalEqn(X, y)
%NORMALEQN Computes the closed-form solution to linear regression
% NORMALEQN(X,y) computes the closed-form solution to linear
% regression using the normal equations.
theta = zeros(size(X, 2), 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Complete the code to compute the closed form solution
% to linear regression and put the result in theta.
%
% ---------------------- Sample Solution ----------------------
theta = pinv(X'*X)*X'*y;
% -------------------------------------------------------------
% ============================================================
end
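Using pinv rather than inv keeps the solution well-defined even when X'*X is singular (for example, with redundant features). On the same toy data as the gradientDescent check above, the closed form recovers the exact answer in one step:

X = [ones(5,1), (1:5)'];
y = [2; 4; 6; 8; 10];
normalEqn(X, y)        % returns [0; 2] up to floating-point rounding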
The point of this assignment is learning to express the formulas directly in code; for that, MATLAB really is more direct and readable than languages like C.