[Coursera Machine Learning] Regularized Linear Regression and Bias vs. Variance (Week 6 Programming Assignment)

1.2 Regularized linear regression cost function

Recall that regularized linear regression has the following cost function:

$$J(\theta) = \frac{1}{2m}\left(\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2\right) + \frac{\lambda}{2m}\left(\sum_{j=1}^{n}\theta_j^2\right)$$

You should now complete the code in the file linearRegCostFunction.m. Your task is to write a function that calculates the regularized linear regression cost. If possible, vectorize your code and avoid writing loops.

1.3 Regularized linear regression gradient

Correspondingly, the partial derivative of the regularized linear regression cost with respect to θj is defined as:

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_0^{(i)} \qquad \text{for } j = 0$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad \text{for } j \ge 1$$

In linearRegCostFunction.m, add code to calculate the gradient, returning it in the variable grad.

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost and gradient of regularized linear 
%               regression for a particular choice of theta.
%
%               You should set J to the cost and grad to the gradient.
%
predictions = X * theta;             % h_theta(x) for every training example
sqrErrors = (predictions - y) .^ 2;  % squared errors
theta_r = [0; theta(2:end)];         % theta with theta(1) zeroed: the intercept is not regularized
J = 1 / (2 * m) * sum(sqrErrors) + lambda / (2 * m) * sum(theta_r .^ 2);

grad = X' * (predictions - y) / m + lambda / m * theta_r;
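As a sanity check, the same vectorized cost and gradient can be sketched in NumPy. The tiny dataset below is hypothetical, chosen only so the numbers are easy to verify by hand:

```python
import numpy as np

# Hypothetical tiny dataset: m = 3 examples, intercept column plus one feature.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.array([0.5, 1.5])
lam = 1.0
m = len(y)

predictions = X @ theta                       # h_theta(x) for every example
sqr_errors = (predictions - y) ** 2           # squared errors
theta_r = np.concatenate(([0.0], theta[1:]))  # theta with theta_0 zeroed out

# Regularized cost and gradient, mirroring the MATLAB expressions above.
J = sqr_errors.sum() / (2 * m) + lam * (theta_r ** 2).sum() / (2 * m)
grad = X.T @ (predictions - y) / m + lam * theta_r / m
```

Because `theta_r` zeroes the intercept term, the penalty `λ/m · θ_r` leaves `grad[0]` untouched, matching the separate j = 0 case in the gradient formula.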

2.1 Learning curves

You will now implement code to generate the learning curves that will be useful in debugging learning algorithms. Recall that a learning curve plots training and cross validation error as a function of training set size. Your job is to fill in learningCurve.m so that it returns a vector of errors for the training set and cross validation set.

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return training errors in 
%               error_train and the cross validation errors in error_val. 
%               i.e., error_train(i) and 
%               error_val(i) should give you the errors
%               obtained after training on i examples.
%
% Note: You should evaluate the training error on the first i training
%       examples (i.e., X(1:i, :) and y(1:i)).
%
%       For the cross-validation error, you should instead evaluate on
%       the _entire_ cross validation set (Xval and yval).
%
% Note: If you are using your cost function (linearRegCostFunction)
%       to compute the training and cross validation error, you should 
%       call the function with the lambda argument set to 0. 
%       Do note that you will still need to use lambda when running
%       the training to obtain the theta parameters.
%
% Hint: You can loop over the examples with the following:
%
%       for i = 1:m
%           % Compute train/cross validation errors using training examples 
%           % X(1:i, :) and y(1:i), storing the result in 
%           % error_train(i) and error_val(i)
%           ....
%           
%       end
%

% ---------------------- Sample Solution ----------------------
for i = 1:m
    % Train on the first i examples (with an intercept column), then evaluate
    % the unregularized error on those i examples and on the full validation set.
    theta = trainLinearReg([ones(i,1), X(1:i, :)], y(1:i), lambda);
    error_train(i) = linearRegCostFunction([ones(i,1), X(1:i, :)], y(1:i), theta, 0);
    error_val(i) = linearRegCostFunction([ones(size(Xval,1),1), Xval], yval, theta, 0);
end
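The same loop can be sketched in NumPy. This is a minimal sketch, not the assignment's implementation: a regularized normal equation stands in for trainLinearReg, and `reg_cost` plays the role of linearRegCostFunction:

```python
import numpy as np

def reg_cost(X, y, theta, lam):
    # Regularized linear regression cost; lambda does not touch theta_0.
    m = len(y)
    err = X @ theta - y
    theta_r = np.concatenate(([0.0], theta[1:]))
    return err @ err / (2 * m) + lam * (theta_r @ theta_r) / (2 * m)

def train(X, y, lam):
    # Regularized normal equation standing in for trainLinearReg.
    n = X.shape[1]
    L = lam * np.eye(n)
    L[0, 0] = 0.0                    # do not regularize the intercept
    return np.linalg.solve(X.T @ X + L, X.T @ y)

def learning_curve(X, y, Xval, yval, lam):
    m = X.shape[0]
    error_train = np.zeros(m)
    error_val = np.zeros(m)
    for i in range(1, m + 1):
        # Train on the first i examples; regularize only during training.
        theta = train(X[:i], y[:i], lam)
        error_train[i - 1] = reg_cost(X[:i], y[:i], theta, 0)  # lambda = 0
        error_val[i - 1] = reg_cost(Xval, yval, theta, 0)      # whole val set
    return error_train, error_val
```

Note the pattern the exercise emphasizes: λ is used when fitting θ, but both errors are measured with λ = 0 so the curves compare raw fit quality.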

3 Polynomial regression

Now, you will add more features using the higher powers of the existing feature x in the dataset. Your task in this part is to complete the code in polyFeatures.m so that the function maps the original training set X of size m × 1 into its higher powers. Specifically, when a training set X of size m × 1 is passed into the function, the function should return an m × p matrix X_poly, where column 1 holds the original values of X, column 2 holds the values of X.^2, column 3 holds the values of X.^3, and so on. Note that you don’t have to account for the zeroth power in this function.

% ====================== YOUR CODE HERE ======================
% Instructions: Given a vector X, return a matrix X_poly where the p-th 
%               column of X_poly contains the values of X to the p-th power.
%
% 
for i = 1:p
    X_poly(:,i) = X .^ i;
end
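A NumPy sketch of the same mapping (a hypothetical `poly_features` helper, not part of the assignment files):

```python
import numpy as np

def poly_features(X, p):
    # Map a column vector X of length m to an (m, p) matrix whose j-th column
    # is X**j (1-based power): column 1 is X itself, no zeroth-power column.
    X = np.asarray(X, dtype=float).ravel()
    return np.column_stack([X ** j for j in range(1, p + 1)])
```

Because the powers grow quickly, these features will need normalization before training, which the exercise handles separately with featureNormalize.m.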

3.3 Selecting λ using a cross validation set

In this section, you will implement an automated method to select the λ parameter. Concretely, you will use a cross validation set to evaluate how good each λ value is. After selecting the best λ value using the cross validation set, we can then evaluate the model on the test set to estimate how well the model will perform on actual unseen data.
Your task is to complete the code in validationCurve.m. Specifically, you should use the trainLinearReg function to train the model with different values of λ and compute the training error and cross validation error. You should try λ in the following range: {0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10}.

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return training errors in 
%               error_train and the validation errors in error_val. The 
%               vector lambda_vec contains the different lambda parameters 
%               to use for each calculation of the errors, i.e, 
%               error_train(i), and error_val(i) should give 
%               you the errors obtained after training with 
%               lambda = lambda_vec(i)
%
% Note: You can loop over lambda_vec with the following:
%
%       for i = 1:length(lambda_vec)
%           lambda = lambda_vec(i);
%           % Compute train / val errors when training linear 
%           % regression with regularization parameter lambda
%           % You should store the result in error_train(i)
%           % and error_val(i)
%           ....
%           
%       end
%
%
for i = 1:length(lambda_vec)
    % Train with lambda_vec(i), then measure both errors with lambda = 0.
    theta = trainLinearReg(X, y, lambda_vec(i));
    error_train(i) = linearRegCostFunction(X, y, theta, 0);
    error_val(i) = linearRegCostFunction(Xval, yval, theta, 0);
end
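Once the loop has filled error_val, the selection step itself is just an argmin over the candidates. The error values below are hypothetical placeholders shaped like a typical validation curve (high at both extremes, lowest in the middle), standing in for the numbers the loop above would produce:

```python
import numpy as np

# The ten candidate lambda values from the exercise.
lambda_vec = np.array([0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10])

# Hypothetical cross validation errors, one per candidate lambda.
error_val = np.array([0.9, 0.85, 0.8, 0.6, 0.5, 0.4, 0.35, 0.35, 0.6, 1.2])

# Pick the lambda with the lowest cross validation error.
best_lambda = lambda_vec[np.argmin(error_val)]
```

After choosing best_lambda this way, the exercise has you report the error on the held-out test set, since the cross validation error is now an optimistic estimate.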