3 Linear Regression with Multiple Variables
In this part we predict housing prices from multiple variables.
The data is stored in ex1data2.txt: the first column is the size of the house (in square feet), the second column is the number of bedrooms, and the third column is the price. The main script for this part is ex1_multi.m.
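For reference, ex1_multi.m loads the data along these lines (a sketch; the variable names X, y, and m follow the usual course convention and are my assumption here):

data = load('ex1data2.txt');   % m x 3 matrix: size, bedrooms, price
X = data(:, 1:2);              % features: size and number of bedrooms
y = data(:, 3);                % target: price
m = length(y);                 % number of training examples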
3.1 Feature Normalization
We notice that house sizes are numerically much larger than the number of bedrooms. When features differ by orders of magnitude, they should usually be normalized so that gradient descent converges faster.
Our first task here is to complete the code in featureNormalize.m, which does two things:
1) compute the mean of each feature;
2) compute the standard deviation of each feature.
The standard deviation measures how much the data deviates from the mean.
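In formula form, each feature value is normalized as

x_j := \frac{x_j - \mu_j}{\sigma_j}

where \mu_j is the mean of feature j over the training set and \sigma_j is its standard deviation.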
The exercise points out that when the normalization is computed, the sample data X does not yet include the first column of all ones.
It also hints at using the built-in functions mean and std:
>> help mean
'mean' is a function from the file C:\Octave\OCTAVE~1.2\share\octave\4.2.2\m\statistics\base\mean.m
-- mean (X)
-- mean (X, DIM)
-- mean (X, OPT)
-- mean (X, DIM, OPT)
Compute the mean of the elements of the vector X.
The mean is defined as
mean (X) = SUM_i X(i) / N
where N is the length of the X vector.
If X is a matrix, compute the mean for each column and return them
in a row vector.
If the optional argument DIM is given, operate along this
dimension.
The optional argument OPT selects the type of mean to compute. The
following options are recognized:
"a"
Compute the (ordinary) arithmetic mean. [default]
"g"
Compute the geometric mean.
"h"
Compute the harmonic mean.
Both DIM and OPT are optional. If both are supplied, either may
appear first.
See also: median, mode.
>> help std
'std' is a function from the file C:\Octave\OCTAVE~1.2\share\octave\4.2.2\m\statistics\base\std.m
-- std (X)
-- std (X, OPT)
-- std (X, OPT, DIM)
Compute the standard deviation of the elements of the vector X.
The standard deviation is defined as
std (X) = sqrt ( 1/(N-1) SUM_i (X(i) - mean(X))^2 )
where N is the number of elements of the X vector.
If X is a matrix, compute the standard deviation for each column
and return them in a row vector.
The argument OPT determines the type of normalization to use.
Valid values are
0:
normalize with N-1, provides the square root of the best
unbiased estimator of the variance [default]
1:
normalize with N, this provides the square root of the second
moment around the mean
If the optional argument DIM is given, operate along this
dimension.
See also: var, range, iqr, mean, median.
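As a quick sanity check of how these behave on a matrix (the values below are made up purely for illustration):

A = [1 2; 3 4; 5 6];
mean(A)   % ans = 3 4   -- one mean per column, returned as a row vector
std(A)    % ans = 2 2   -- one standard deviation per column (N-1 normalization)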
My code is below:
function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X
% FEATURENORMALIZE(X) returns a normalized version of X where
% the mean value of each feature is 0 and the standard deviation
% is 1. This is often a good preprocessing step to do when
% working with learning algorithms.
% You need to set these values correctly
X_norm = X;
mu = zeros(1, size(X, 2)); %1x2
sigma = zeros(1, size(X, 2));
% ====================== YOUR CODE HERE ======================
% Instructions: First, for each feature dimension, compute the mean
% of the feature and subtract it from the dataset,
% storing the mean value in mu. Next, compute the
% standard deviation of each feature and divide
% each feature by it's standard deviation, storing
% the standard deviation in sigma.
%
% Note that X is a matrix where each column is a
% feature and each row is an example. You need
% to perform the normalization separately for
% each feature.
%
% Hint: You might find the 'mean' and 'std' functions useful.
%
mu = mean(X_norm);       % 1 x 2 row vector of column means
sigma = std(X_norm);     % 1 x 2 row vector of column standard deviations
X_norm(:,1) = (X_norm(:,1) - mu(1)) ./ sigma(1);
X_norm(:,2) = (X_norm(:,2) - mu(2)) ./ sigma(2);
% ============================================================
end
Here we first compute the mean and standard deviation of each feature, then normalize each feature column separately. Could the normalization be done with a single matrix expression? I haven't figured that out yet.
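One possibility, noted here only as a sketch, is to let Octave's automatic broadcasting expand the 1 x 2 row vectors mu and sigma against the m x 2 matrix:

mu = mean(X);
sigma = std(X);
X_norm = (X - mu) ./ sigma;   % broadcasting subtracts/divides column-wise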
After normalization we compute the cost function and run gradient descent. At this point the assignment gives a matrix (vectorized) expression for the partial derivatives, which avoids computing the derivative of each theta separately.
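Written out, the vectorized expressions are

\nabla J(\theta) = \frac{1}{m} X^{T}(X\theta - y), \qquad \theta := \theta - \frac{\alpha}{m} X^{T}(X\theta - y)

so every component of theta is updated in one matrix operation.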
The cost function can be used as-is, since it is already written in matrix form. The gradient descent code is below:
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
% theta = GRADIENTDESCENTMULTI(x, y, theta, alpha, num_iters) updates theta by
% taking num_iters gradient steps with learning rate alpha
% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
% ====================== YOUR CODE HERE ======================
% Instructions: Perform a single gradient step on the parameter vector
% theta.
%
% Hint: While debugging, it can be useful to print out the values
% of the cost function (computeCostMulti) and gradient here.
%
H_theta = X * theta;                                % hypothesis for all training examples (m x 1)
theta = theta - (alpha / m) * X' * (H_theta - y);   % simultaneous vectorized update of all theta
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCostMulti(X, y, theta);
end
end
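computeCostMulti is called inside the loop but not listed here; a minimal sketch consistent with the vectorized cost J(\theta) = \frac{1}{2m}(X\theta - y)^{T}(X\theta - y) would be:

function J = computeCostMulti(X, y, theta)
%COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
m = length(y);                                      % number of training examples
J = (X * theta - y)' * (X * theta - y) / (2 * m);   % vectorized squared-error cost
end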
Writing the Octave code really is just a matter of transcribing the math formulas.
The exercise also covers adjusting the learning rate, which I won't expand on in detail here. Next, theta is computed directly in closed form (the normal equation):
function [theta] = normalEqn(X, y)
%NORMALEQN Computes the closed-form solution to linear regression
% NORMALEQN(X,y) computes the closed-form solution to linear
% regression using the normal equations.
theta = zeros(size(X, 2), 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Complete the code to compute the closed form solution
% to linear regression and put the result in theta.
%
% ---------------------- Sample Solution ----------------------
theta = inv(X' * X) * X' * y;   % normal equation: theta = (X'X)^(-1) X'y
% -------------------------------------------------------------
% ============================================================
end
This uses matrix inversion; pinv(X' * X) would be a more robust choice if X'X were singular or close to singular.
Finally, let's see how much the predictions from the two methods differ.
Predicted price of a 1650 sq-ft, 3 br house (using gradient descent):
$289314.620338
Program paused. Press enter to continue.
Solving with normal equations...
Theta computed from the normal equations:
89597.909543
139.210674
-8738.019112
Predicted price of a 1650 sq-ft, 3 br house (using normal equations):
$293081.464335
The two results are not identical, but they are close. The small gap is likely because gradient descent has not fully converged after the fixed number of iterations; since the two methods roughly agree, they effectively cross-check each other.
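For reference, the two predictions above are produced roughly like this (a sketch; theta_gd, theta_ne, mu, and sigma are names I am assuming for the gradient descent parameters, the normal equation parameters, and the normalization statistics):

% Gradient descent was trained on normalized features, so the query point
% must be normalized with the same mu and sigma before adding the intercept term.
price_gd = [1, ([1650 3] - mu) ./ sigma] * theta_gd;

% The normal equation was solved on the raw features, so only the intercept is added.
price_ne = [1 1650 3] * theta_ne;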