This assignment asks us to implement regularized linear regression and, building on it, to diagnose and fix the overfitting and underfitting problems that appear during training.
1 Regularized Linear Regression
In this part we use regularized linear regression to predict the amount of water flowing out of a dam.
1.1 Visualizing the dataset
The training data given here has a one-dimensional X; we will map it to polynomial features in a later exercise.
The training data is plotted below:
1.2 Regularized linear regression cost function
The cost function of regularized linear regression is:
J(\theta) = \frac{1}{2m}(\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})^2)+\frac{\lambda}{2m}(\sum_{j=1}^{n}\theta^2_j)
We implement the cost computation in linearRegCostFunction.m.
The code is as follows:
linearRegCostFunction.m
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost and gradient of regularized linear
% regression for a particular choice of theta.
%
% You should set J to the cost and grad to the gradient.
%
% Compute the regularized cost; the bias term theta(1) is excluded from
% the penalty by subtracting theta(1)^2 from theta'*theta
h = X*theta;
J = (1/(2*m))*((h-y)'*(h-y))+((lambda/(2*m))*(theta'*theta-theta(1)^2));
1.3 Regularized linear regression gradient
This part asks us to compute the gradient of regularized linear regression.
The gradient is given by:
\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j \qquad j=0
\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j+\frac{\lambda}{m}\theta_j \qquad j>0
We compute the gradient in linearRegCostFunction.m:
linearRegCostFunction.m
% Compute the regularized gradient, then undo the penalty on the bias term
grad = 1/m*(X'*(h-y))+lambda/m*theta;
grad(1) = grad(1) - lambda/m*theta(1);
The complete linearRegCostFunction.m is as follows:
function [J, grad] = linearRegCostFunction(X, y, theta, lambda)
%LINEARREGCOSTFUNCTION Compute cost and gradient for regularized linear
%regression with multiple variables
% [J, grad] = LINEARREGCOSTFUNCTION(X, y, theta, lambda) computes the
% cost of using theta as the parameter for linear regression to fit the
% data points in X and y. Returns the cost in J and the gradient in grad
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost and gradient of regularized linear
% regression for a particular choice of theta.
%
% You should set J to the cost and grad to the gradient.
%
h = X*theta;   % hypothesis for all examples
% Regularized cost, excluding the bias term theta(1) from the penalty
J = (1/(2*m))*((h-y)'*(h-y))+((lambda/(2*m))*(theta'*theta-theta(1)^2));
% Regularized gradient; remove the penalty from the bias component
grad = 1/m*(X'*(h-y))+lambda/m*theta;
grad(1) = grad(1) - lambda/m*theta(1);
% =========================================================================
grad = grad(:);
end
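As a quick sanity check, ex5.m evaluates this function at θ = [1; 1] on the bias-augmented training data. A minimal sketch of that check, assuming X and y have been loaded from ex5data1.mat as in the course script (the expected values in the comments are from my recollection of ex5.m):
m = size(X, 1);
theta = [1; 1];
[J, grad] = linearRegCostFunction([ones(m, 1) X], y, theta, 1);
fprintf('Cost at theta = [1; 1]: %f\n', J);                        % should be about 303.993
fprintf('Gradient at theta = [1; 1]: %f %f\n', grad(1), grad(2));  % about -15.30, 598.25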
1.4 Fitting linear regression
Once the cost function and gradient are implemented, we can use the fmincg function to compute the θ that achieves min J(θ).
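For reference, the scaffold's trainLinearReg.m is a thin wrapper around fmincg; a sketch of that wrapper, assuming the standard ex5 scaffold in which fmincg.m is provided:
function [theta] = trainLinearReg(X, y, lambda)
% Train linear regression by minimizing linearRegCostFunction with fmincg
initial_theta = zeros(size(X, 2), 1);
costFunction = @(t) linearRegCostFunction(X, y, t, lambda);
options = optimset('MaxIter', 200, 'GradObj', 'on');
theta = fmincg(costFunction, initial_theta, options);
end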
In this part we set λ to 0: with such a low-dimensional θ, regularization provides negligible benefit and can be ignored.
The best-fit line is plotted below.
The fitted h_θ(x) clearly does not fit the training data well. This tells us the problem cannot be solved with a first-degree (linear) hypothesis; we need to map the feature to polynomial terms, which we implement in the exercises below.
2 Bias-variance
In this part of the exercise, we plot training and cross-validation errors on a learning curve to diagnose bias-variance problems.
2.1 Learning curves
In this part we plot learning curves, which are very useful for debugging learning algorithms.
On a learning curve, the X axis is the size of the training set and the Y axis is the error.
By plotting the J(θ) curves of the training data and the cross-validation data separately, we can judge whether an overfitting or underfitting problem exists.
J_train(θ) and J_cv(θ) are computed as follows:
J_{train}(\theta) = \frac{1}{2m}(\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})^2)
J_{cv}(\theta) = \frac{1}{2m_{cv}}(\sum_{i=1}^{m_{cv}}(h_\theta(x_{cv}^{(i)})-y_{cv}^{(i)})^2)
We need to compute J_train(θ) and J_cv(θ); this is done in learningCurve.m.
The code is as follows:
learningCurve.m
function [error_train, error_val] = ...
learningCurve(X, y, Xval, yval, lambda)
%LEARNINGCURVE Generates the train and cross validation set errors needed
%to plot a learning curve
% [error_train, error_val] = ...
% LEARNINGCURVE(X, y, Xval, yval, lambda) returns the train and
% cross validation set errors for a learning curve. In particular,
% it returns two vectors of the same length - error_train and
% error_val. Then, error_train(i) contains the training error for
% i examples (and similarly for error_val(i)).
%
% In this function, you will compute the train and test errors for
% dataset sizes from 1 up to m. In practice, when working with larger
% datasets, you might want to do this in larger intervals.
%
% Number of training examples
m = size(X, 1);
% You need to return these values correctly
error_train = zeros(m, 1);
error_val = zeros(m, 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return training errors in
% error_train and the cross validation errors in error_val.
% i.e., error_train(i) and
% error_val(i) should give you the errors
% obtained after training on i examples.
%
% Note: You should evaluate the training error on the first i training
% examples (i.e., X(1:i, :) and y(1:i)).
%
% For the cross-validation error, you should instead evaluate on
% the _entire_ cross validation set (Xval and yval).
%
% Note: If you are using your cost function (linearRegCostFunction)
% to compute the training and cross validation error, you should
% call the function with the lambda argument set to 0.
% Do note that you will still need to use lambda when running
% the training to obtain the theta parameters.
%
% Hint: You can loop over the examples with the following:
%
% for i = 1:m
% % Compute train/cross validation errors using training examples
% % X(1:i, :) and y(1:i), storing the result in
% % error_train(i) and error_val(i)
% ....
%
% end
%
% ---------------------- Sample Solution ----------------------
for i=1:m
  % Train on the first i examples, using the given lambda
  theta = trainLinearReg(X(1:i,:), y(1:i), lambda);
  % Evaluate both errors with lambda = 0 (the errors carry no penalty term)
  [error_train(i), grad] = linearRegCostFunction(X(1:i,:), y(1:i), theta, 0);
  [error_val(i), grad] = linearRegCostFunction(Xval, yval, theta, 0);
endfor
% -------------------------------------------------------------
% =========================================================================
end
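To produce the learning-curve plot, ex5.m calls learningCurve roughly as follows (a sketch; the variable names follow the ex5 scaffold):
% Plot the learning curve for lambda = 0 (assumes X, y, Xval, yval are loaded)
m = size(X, 1);
lambda = 0;
[error_train, error_val] = learningCurve([ones(m, 1) X], y, ...
                                         [ones(size(Xval, 1), 1) Xval], yval, lambda);
plot(1:m, error_train, 1:m, error_val);
title('Learning curve for linear regression');
legend('Train', 'Cross Validation');
xlabel('Number of training examples');
ylabel('Error');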
Notice that as the number of training examples grows, the training error and the cross-validation error both stay high. This reflects a high-bias (underfitting) problem, which we can address with polynomial features.
3 Polynomial regression
In this part, we add more features to the training data by raising the original feature to higher powers.
We let h_θ(x) take the following form:
h_\theta(x)=\theta_0+\theta_1x+\theta_2x^2+\theta_3x^3+...+\theta_px^p
We implement the polynomial feature mapping in polyFeatures.m.
The code is as follows:
polyFeatures.m
function [X_poly] = polyFeatures(X, p)
%POLYFEATURES Maps X (1D vector) into the p-th power
% [X_poly] = POLYFEATURES(X, p) takes a data matrix X (size m x 1) and
% maps each example into its polynomial features where
% X_poly(i, :) = [X(i) X(i).^2 X(i).^3 ... X(i).^p];
%
% You need to return the following variables correctly.
X_poly = zeros(numel(X), p);
% ====================== YOUR CODE HERE ======================
% Instructions: Given a vector X, return a matrix X_poly where the p-th
% column of X contains the values of X to the p-th power.
%
%
for i=1:p
  X_poly(:,i) = X.^i;   % i-th column holds X raised to the i-th power
endfor
% =========================================================================
end
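As an aside, the loop can be replaced by a single broadcast over the powers 1..p; a sketch of an equivalent vectorized version:
% Raise the m x 1 column X to each power 1..p at once (m x p result)
X_poly = bsxfun(@power, X, 1:p);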
3.1 Learning Polynomial Regression
After completing the above tasks, we need to feature-normalize the training data (a sketch of the helper is given below) and then retrain; with λ = 0 the resulting fit is shown in the figure below.
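Normalization is needed because the polynomial terms x, x^2, ..., x^p span several orders of magnitude. A sketch of the usual z-score helper, assuming the featureNormalize.m provided by the ex5 scaffold:
function [X_norm, mu, sigma] = featureNormalize(X)
% Zero-mean, unit-variance normalization, column by column
mu = mean(X);
X_norm = bsxfun(@minus, X, mu);
sigma = std(X_norm);
X_norm = bsxfun(@rdivide, X_norm, sigma);
end
The same mu and sigma computed on the training set must also be applied to the validation and test sets.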
We then plot the learning curve again:
It shows an overfitting (high-variance) problem, so we need to tune λ to an appropriate value to resolve it.
3.3 Selecting λ using a cross validation set
This part asks us to select a suitable λ to fix the overfitting. We train with a range of λ values to obtain different θ, compute J_train and J_cv for each, and plot them to see which λ works best.
We need to implement the computation of J_train and J_cv in validationCurve.m.
The code is as follows:
validationCurve.m
function [lambda_vec, error_train, error_val] = ...
validationCurve(X, y, Xval, yval)
%VALIDATIONCURVE Generate the train and validation errors needed to
%plot a validation curve that we can use to select lambda
% [lambda_vec, error_train, error_val] = ...
% VALIDATIONCURVE(X, y, Xval, yval) returns the train
% and validation errors (in error_train, error_val)
% for different values of lambda. You are given the training set (X,
% y) and validation set (Xval, yval).
%
% Selected values of lambda (you should not change this)
lambda_vec = [0 0.001 0.003 0.01 0.03 0.1 0.3 1 3 10]';
% You need to return these variables correctly.
error_train = zeros(length(lambda_vec), 1);
error_val = zeros(length(lambda_vec), 1);
% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return training errors in
% error_train and the validation errors in error_val. The
% vector lambda_vec contains the different lambda parameters
% to use for each calculation of the errors, i.e,
% error_train(i), and error_val(i) should give
% you the errors obtained after training with
% lambda = lambda_vec(i)
%
% Note: You can loop over lambda_vec with the following:
%
% for i = 1:length(lambda_vec)
% lambda = lambda_vec(i);
% % Compute train / val errors when training linear
% % regression with regularization parameter lambda
% % You should store the result in error_train(i)
% % and error_val(i)
% ....
%
% end
%
%
for i=1:length(lambda_vec)
  lambda = lambda_vec(i);
  % Train with this lambda, then evaluate the unregularized errors
  theta = trainLinearReg(X, y, lambda);
  [error_train(i), grad] = linearRegCostFunction(X, y, theta, 0);
  [error_val(i), grad] = linearRegCostFunction(Xval, yval, theta, 0);
endfor
% =========================================================================
end
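ex5.m then plots both error curves against lambda_vec, roughly as follows (a sketch; X_poly and X_poly_val are assumed to be the normalized, bias-augmented polynomial features from the earlier steps):
[lambda_vec, error_train, error_val] = validationCurve(X_poly, y, X_poly_val, yval);
plot(lambda_vec, error_train, lambda_vec, error_val);
legend('Train', 'Cross Validation');
xlabel('lambda');
ylabel('Error');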
Note: J_train and J_cv here are computed exactly as before, without the regularization term (i.e., linearRegCostFunction is called with lambda set to 0).
The resulting plot is shown below:
From the plot, the best value of λ is 3.
3.4 Optional (ungraded) exercise: Computing test set error
This part is optional. With λ = 3, we compute J_test. The procedure is the same as before: solve for θ, then plug it into the J_test formula; J_test is likewise computed without regularization.
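A minimal sketch of that computation, assuming the scaffold's X_poly, y and the normalized test-set features X_poly_test, ytest:
lambda = 3;
theta = trainLinearReg(X_poly, y, lambda);
% Evaluate the test error without the regularization term (lambda = 0)
[error_test, grad] = linearRegCostFunction(X_poly_test, ytest, theta, 0);
fprintf('Test error at lambda = 3: %f\n', error_test);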