Since I'm on a Mac, the Octave installed via Homebrew is version 5.1.0, which has a known gotcha: the pause() function does not respond to key presses (see https://www.mobibrw.com/2019/18501). For now the only workaround is to comment out each pause; in ex1.m one by one, which is a bit annoying. Here's hoping 5.2.0 ships soon.
1 warmUpExercise.m
Following the handout, the code just needs to return a $5\times5$ identity matrix:
function A = warmUpExercise()
%WARMUPEXERCISE Example function in octave
% A = WARMUPEXERCISE() is an example function that returns the 5x5 identity matrix
A = [];
% ============= YOUR CODE HERE ==============
% Instructions: Return the 5x5 identity matrix
% In octave, we return values by defining which variables
% represent the return values (at the top of the file)
% and then set them accordingly.
A = eye(5);
% ===========================================
end
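A quick sanity check at the Octave prompt (a hypothetical session; note that eye(5) prints as a Diagonal Matrix in Octave):

>> A = warmUpExercise()
A =

Diagonal Matrix

   1   0   0   0   0
   0   1   0   0   0
   0   0   1   0   0
   0   0   0   1   0
   0   0   0   0   1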
2.1 plotData.m
Again, just follow the handout:
function plotData(x, y)
%PLOTDATA Plots the data points x and y into a new figure
% PLOTDATA(x,y) plots the data points and gives the figure axes labels of
% population and profit.
figure; % open a new figure window
% ====================== YOUR CODE HERE ======================
% Instructions: Plot the training data into a figure using the
% "figure" and "plot" commands. Set the axes labels using
% the "xlabel" and "ylabel" commands. Assume the
% population and revenue data have been passed in
% as the x and y arguments of this function.
%
% Hint: You can use the 'rx' option with plot to have the markers
% appear as red crosses. Furthermore, you can make the
% markers larger by using plot(..., 'rx', 'MarkerSize', 10);
plot(x, y, 'rx', 'MarkerSize', 10); % Plot the data
ylabel('Profit in $10,000s'); % Set the y-axis label
xlabel('Population of City in 10,000s'); % Set the x-axis label
% ============================================================
end
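A minimal driver, assuming the assignment's ex1data1.txt (comma-separated population, profit pairs), would look like this:

data = load('ex1data1.txt');   % each row: population, profit
X = data(:, 1);  y = data(:, 2);
plotData(X, y);                % red crosses with axis labels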
2.2.3 computeCost.m
Compute the cost from $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)^2$ with $h_\theta(x)=\theta^T x$. Note that the assignment works on the whole training set as a matrix, so the second formula has to be adapted to operate on the design matrix $X$ rather than on one example at a time.
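Concretely, with $X$ holding one training example per row (intercept column of ones included), the vectorized forms used below are:

$$h_\theta = X\theta, \qquad J(\theta) = \frac{1}{2m}\,(X\theta - y)^T(X\theta - y)$$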
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
% J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
% parameter for linear regression to fit the data points in X and y
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
% You should set J to the cost.
h_theta = X*theta;
J = 1/2/m * sum((h_theta-y).^2);
% =========================================================================
end
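As a sanity check, compute the cost with theta initialized to zeros; the handout says this should come out to roughly 32.07 on ex1data1.txt:

data = load('ex1data1.txt');
m = length(data(:, 1));
X = [ones(m, 1), data(:, 1)];  % prepend the intercept column of ones
y = data(:, 2);
J = computeCost(X, y, zeros(2, 1))  % expect about 32.07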
2.2.4 gradientDescent.m
Since this is gradient descent, the update rule is $\theta_j := \theta_j - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}$ (note that the error term is not squared here: the square in the cost function disappears when taking the derivative).
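In vectorized form, which is what the code below implements for all components of $\theta$ at once:

$$\theta := \theta - \frac{\alpha}{m}\,X^T\left(X\theta - y\right)$$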
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
% theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
% theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCost) and gradient here.
%
theta = theta - alpha*(1/m)*X'*(X*theta-y);
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCost(X, y, theta);
end
end
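Running it with what I believe are ex1.m's defaults ($\alpha = 0.01$, 1500 iterations), continuing from the computeCost snippet above, should land near the handout's expected parameters:

alpha = 0.01;  num_iters = 1500;
theta = gradientDescent(X, y, zeros(2, 1), alpha, num_iters);
% the handout lists expected values of roughly [-3.6303; 1.1664]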
3.1 featureNormalize.m
Here the data needs mean normalization, which reduces the number of iterations needed to converge: $x_i = \frac{x_i - \mu_i}{s_i}$, where $\mu_i$ is the mean of feature $i$ and $s_i$ its standard deviation.
function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X
% FEATURENORMALIZE(X) returns a normalized version of X where
% the mean value of each feature is 0 and the standard deviation
% is 1. This is often a good preprocessing step to do when
% working with learning algorithms.
% You need to set these values correctly
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
% ====================== YOUR CODE HERE ======================
% Instructions: First, for each feature dimension, compute the mean
% of the feature and subtract it from the dataset,
% storing the mean value in mu. Next, compute the
% standard deviation of each feature and divide
% each feature by its standard deviation, storing
% the standard deviation in sigma.
%
% Note that X is a matrix where each column is a
% feature and each row is an example. You need
% to perform the normalization separately for
% each feature.
%
% Hint: You might find the 'mean' and 'std' functions useful.
%
mu = mean(X);
sigma = std(X);
X_norm = (X-mu)./sigma;
% ============================================================
end
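Note that (X-mu)./sigma relies on Octave's automatic broadcasting, which expands the row vectors mu and sigma across all rows of X. A tiny hand-checkable example with hypothetical values; each column of the result should have mean 0 and standard deviation 1:

X = [1 200; 2 300; 3 400];
[X_norm, mu, sigma] = featureNormalize(X);
% mu -> [2 300], sigma -> [1 100]
disp(mean(X_norm));   % ~[0 0]
disp(std(X_norm));    %  [1 1]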
3.2 computeCostMulti.m
Since the single-variable version was already written with matrix operations in mind, the exact same code works here:
function J = computeCostMulti(X, y, theta)
%COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
% J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
% parameter for linear regression to fit the data points in X and y
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
% You should set J to the cost.
h_theta = X*theta;
J = 1/2/m * sum((h_theta-y).^2);
% =========================================================================
end
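If memory serves, the handout also shows a fully vectorized alternative for the multivariate case that avoids the explicit sum; either form gives the same J:

h = X * theta;
J = (h - y)' * (h - y) / (2 * m);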
3.2 gradientDescentMulti.m
This one is easy: the update step is identical to the single-variable case, and the cost is recorded each iteration by calling the computeCostMulti function we just wrote.
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
% theta = GRADIENTDESCENTMULTI(x, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
% theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCostMulti) and gradient here.
%
theta = theta - alpha*(1/m)*X'*(X*theta-y); % same vectorized update as in gradientDescent.m
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCostMulti(X, y, theta);
end
end
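A hypothetical end-to-end driver, assuming the assignment's ex1data2.txt (house size, number of bedrooms, price) and what I believe are the script's default settings:

data = load('ex1data2.txt');
X = data(:, 1:2);  y = data(:, 3);  m = length(y);
[X, mu, sigma] = featureNormalize(X);  % scale features first
X = [ones(m, 1), X];                   % then add the intercept column
alpha = 0.01;  num_iters = 400;
[theta, J_history] = gradientDescentMulti(X, y, zeros(3, 1), alpha, num_iters);
plot(1:numel(J_history), J_history, '-b', 'LineWidth', 2);  % J should decrease monotonically
xlabel('Number of iterations');  ylabel('Cost J');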
3.3 normalEqn.m (optional)
From the formula given in the handout, $\theta = (X^TX)^{-1}X^T\vec{y}$, the code follows directly:
function [theta] = normalEqn(X, y)
%NORMALEQN Computes the closed-form solution to linear regression
% NORMALEQN(X,y) computes the closed-form solution to linear
% regression using the normal equations.
theta = zeros(size(X, 2), 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Complete the code to compute the closed form solution
% to linear regression and put the result in theta.
%
% ---------------------- Sample Solution ----------------------
theta = pinv(X'*X)*X'*y;
% -------------------------------------------------------------
% ============================================================
end
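Two things worth noting: pinv is used instead of inv so the computation survives a singular $X^TX$ (e.g. with redundant features), and, unlike gradient descent, the normal equation needs no feature scaling and no learning rate. A hypothetical check on the same data:

data = load('ex1data2.txt');
X = [ones(size(data, 1), 1), data(:, 1:2)];  % raw features, no normalization needed
y = data(:, 3);
theta = normalEqn(X, y);  % one shot, no alpha, no iterations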
Submission
Run submit at the Octave prompt, then enter your email and token to see the score:
$ submit
== Submitting solutions | Linear Regression with Multiple Variables...
Use token from last successful submission (wuchuansheng@yeah.net)? (Y/n): Y
warning: findstr is obsolete; use strfind instead
warning: strmatch is obsolete; use strncmp or strcmp instead
==
== Part Name | Score | Feedback
== --------- | ----- | --------
== Warm-up Exercise | 10 / 10 | Nice work!
== Computing Cost (for One Variable) | 40 / 40 | Nice work!
== Gradient Descent (for One Variable) | 50 / 50 | Nice work!
== Feature Normalization | 0 / 0 | Nice work!
== Computing Cost (for Multiple Variables) | 0 / 0 | Nice work!
== Gradient Descent (for Multiple Variables) | 0 / 0 |
== Normal Equations | 0 / 0 | Nice work!
== --------------------------------
== | 100 / 100 |
==