[Original] Andrew Ng ML Series Exercise Solutions 1: machine_learning_ex2


Without further ado, let's get started.
Part 1 Exercises
(only the code portions are kept, to save space)
1. Implement plotData.m

% ====================== YOUR CODE HERE ======================
% Instructions: Plot the positive and negative examples on a
%               2D plot, using the option 'k+' for the positive
%               examples and 'ko' for the negative examples.
%
pos = find(y==1); neg = find(y == 0);
% Plot Examples 
plot(X(pos, 1), X(pos, 2), 'k+','LineWidth', 2,  'MarkerSize', 7); 
plot(X(neg, 1), X(neg, 2), 'ko', 'MarkerFaceColor', 'y',  'MarkerSize', 7);
% =========================================================================
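
For context, here is a minimal driver sketch showing how plotData is typically called (this mirrors the corresponding lines of ex2.m; the axis labels and legend entries are specific to this dataset):

data = load('ex2data1.txt');
X = data(:, 1:2); y = data(:, 3);   % two exam scores and a 0/1 admission label
plotData(X, y);
hold on;
xlabel('Exam 1 score');
ylabel('Exam 2 score');
legend('Admitted', 'Not admitted');
hold off;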

2. Implement sigmoid.m

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
%               vector or scalar).

g = 1./(1 + exp(-z));

% =============================================================
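
A quick sanity check, not part of the assignment: because ./ and exp work element-wise, the same expression handles scalars, vectors, and matrices.

sigmoid(0)            % ans = 0.5
sigmoid([-10 0 10])   % ans is approx [0.0000 0.5000 1.0000]
sigmoid(zeros(2))     % 2x2 matrix, every entry 0.5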

3. Implement costFunction.m

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
%
thetaNum = length(theta);

% Accumulate the logistic regression cost over all m training examples
for i = 1:m
    J = J + (-y(i)*log(sigmoid(theta' * X(i,:)')) - (1-y(i))*log(1-sigmoid(theta' * X(i,:)')))/m;
end

% Gradient: for each parameter j, average (h(x_i) - y_i) * x_ij over the examples
for j = 1:thetaNum
    for i = 1:m
        grad(j) = grad(j) + (sigmoid(theta' * X(i,:)') - y(i))*X(i,j)/m;
    end
end
% =============================================================

A quick note here:
In the lecture notation the hypothesis for a single example is theta' * x with x a column vector. Since each example is stored as a row of X, X(i,:) is a row vector and must be transposed before multiplying, hence theta' * X(i,:)'. The same applies below.
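
For reference, the loops above can also be written in fully vectorized form (a sketch; X * theta computes all m dot products at once, so no per-example transposes are needed):

h = sigmoid(X * theta);                          % m x 1 vector of hypotheses
J = (-y' * log(h) - (1 - y)' * log(1 - h)) / m;  % scalar cost
grad = (X' * (h - y)) / m;                       % same dimensions as theta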

4. Call fminunc, MATLAB's built-in optimizer

%  Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);

%  Run fminunc to obtain the optimal theta
%  This function will return theta and the cost 
[theta, cost] = ...
	fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);

This snippet comes with Professor Ng's exercise. On what fminunc requires:

If you have completed the costFunction correctly, fminunc will converge on the right optimization parameters and return the final values of the cost and θ. Notice that by using fminunc, you did not have to write any loops yourself, or set a learning rate like you did for gradient descent. This is all done by fminunc: you only needed to provide a function calculating the cost and the gradient.

So once costFunction is complete, fminunc can be called directly.
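One detail worth noting: the anonymous function @(t)(costFunction(t, X, y)) fixes the data X, y and leaves t as the variable fminunc optimizes over, which is exactly the single-argument function fminunc expects. theta also needs a starting point; a sketch of the setup, mirroring ex2.m:

[m, n] = size(X);                 % m examples, n raw features
X = [ones(m, 1) X];               % prepend the intercept column of ones
initial_theta = zeros(n + 1, 1);  % n feature weights plus the intercept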
That completes Part 1. Let's look at the output (each computed result is printed alongside its expected value, so checking is easy):

Plotting data with + indicating (y = 1) examples and o indicating (y = 0) examples.

Program paused. Press enter to continue.
Cost at initial theta (zeros): 0.693147
Expected cost (approx): 0.693
Gradient at initial theta (zeros): 
 -0.100000 
 -12.009217 
 -11.262842 
Expected gradients (approx):
 -0.1000
 -12.0092
 -11.2628

Cost at test theta: 0.218330
Expected cost (approx): 0.218
Gradient at test theta: 
 0.042903 
 2.566234 
 2.646797 
Expected gradients (approx):
 0.043
 2.566
 2.647

Program paused. Press enter to continue.

Press Enter again to continue execution:

Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.

<stopping criteria details>
Cost at theta found by fminunc: 0.203498
Expected cost (approx): 0.203
theta: 
 -25.161343 
 0.206232 
 0.201472 
Expected theta (approx):
 -25.161
 0.206
 0.201

Program paused. Press enter to continue.

A local minimum was found, and the decision boundary has been plotted as well.
[Figure: training data with the fitted linear decision boundary]
5. Implement predict.m

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned logistic regression parameters. 
%               You should set p to a vector of 0's and 1's
%
for i=1:m
    if sigmoid(theta' * X(i,:)') >= 0.5
        p(i) = 1;
    else
        p(i) = 0;
    end
end
% =========================================================================
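
The loop works, but the same logic fits on one vectorized line (a sketch; the >= comparison returns a logical vector of 0s and 1s, as required):

p = sigmoid(X * theta) >= 0.5;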

Let's look at the prediction results:

For a student with scores 45 and 85, we predict an admission probability of 0.776291
Expected value: 0.775 +/- 0.002

Train Accuracy: 89.000000
Expected accuracy (approx): 89.0
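
For reference, those two numbers come from lines like the following in ex2.m (a sketch; note the 1 prepended to the scores for the intercept term):

prob = sigmoid([1 45 85] * theta);       % admission probability for scores 45 and 85
p = predict(theta, X);
accuracy = mean(double(p == y)) * 100;   % percent of training examples classified correctly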

At this point the program no longer pauses and simply exits.

Part 2 Exercises
6. Implement costFunctionReg.m

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta

thetaNum = length(theta);

% Unregularized part of the cost, accumulated over all m examples
for i = 1:m
    J = J + (-y(i)*log(sigmoid(theta' * X(i,:)')) - (1-y(i))*log(1-sigmoid(theta' * X(i,:)')))/m;
end

% Regularization term: starts at j = 2 because the intercept theta(1)
% must not be regularized
for j = 2:thetaNum
    J = J + lambda*theta(j)*theta(j)/(2*m);
end

for j = 1:thetaNum
    for i = 1:m
        grad(j) = grad(j) + (sigmoid(theta' * X(i,:)') - y(i))*X(i,j)/m;
    end

    % Likewise, regularize every gradient component except the intercept's
    if (j >= 2)
        grad(j) = grad(j) + lambda*theta(j)/m;
    end
end
% =============================================================
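
A vectorized equivalent (a sketch; zeroing out the first entry of theta is a compact way to exclude the intercept from the penalty):

theta_reg = [0; theta(2:end)];   % intercept excluded from regularization
h = sigmoid(X * theta);
J = (-y' * log(h) - (1 - y)' * log(1 - h)) / m + lambda / (2*m) * (theta_reg' * theta_reg);
grad = (X' * (h - y)) / m + (lambda / m) * theta_reg;

Excluding theta(1) from the penalty matters: regularizing it as well inflates the cost, giving roughly 3.21 instead of the expected 3.16 at the lambda = 10 test point shown in the output below.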

7. Professor Ng implements plotDecisionBoundary.m in the exercise:

function plotDecisionBoundary(theta, X, y)
%PLOTDECISIONBOUNDARY Plots the data points X and y into a new figure with
%the decision boundary defined by theta
%   PLOTDECISIONBOUNDARY(theta, X,y) plots the data points with + for the 
%   positive examples and o for the negative examples. X is assumed to be 
%   either
%   1) an Mx3 matrix, where the first column is an all-ones column for the
%      intercept, or
%   2) an MxN matrix, N > 3, where the first column is all-ones

% Plot Data
plotData(X(:,2:3), y);
hold on

if size(X, 2) <= 3
    % Only need 2 points to define a line, so choose two endpoints
    plot_x = [min(X(:,2))-2,  max(X(:,2))+2];

    % Calculate the decision boundary line
    plot_y = (-1./theta(3)).*(theta(2).*plot_x + theta(1));

    % Plot, and adjust axes for better viewing
    plot(plot_x, plot_y)
    
    % Legend, specific for the exercise
    legend('Admitted', 'Not admitted', 'Decision Boundary')
    axis([30, 100, 30, 100])
else
    % Here is the grid range
    u = linspace(-1, 1.5, 50);
    v = linspace(-1, 1.5, 50);

    z = zeros(length(u), length(v));
    % Evaluate z = theta*x over the grid
    for i = 1:length(u)
        for j = 1:length(v)
            z(i,j) = mapFeature(u(i), v(j))*theta;
        end
    end
    z = z'; % important to transpose z before calling contour

    % Plot z = 0
    % Notice you need to specify the range [0, 0]
    contour(u, v, z, [0, 0], 'LineWidth', 2)
end
hold off

end
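
The mapFeature called inside the contour loop is another helper supplied with the exercise: it maps the two raw features into all polynomial terms up to degree 6 (28 features in total), which is what makes a nonlinear decision boundary possible. A sketch of it, essentially the provided implementation:

function out = mapFeature(X1, X2)
%MAPFEATURE Maps the two input features to all polynomial terms of
%   X1 and X2 up to the sixth power: 1, X1, X2, X1.^2, X1.*X2, X2.^2, ...
degree = 6;
out = ones(size(X1(:,1)));
for i = 1:degree
    for j = 0:i
        out(:, end+1) = (X1.^(i-j)).*(X2.^j);
    end
end
end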

A neat implementation worth studying.
Finally, the output:

Cost at initial theta (zeros): 0.693147
Expected cost (approx): 0.693
Gradient at initial theta (zeros) - first five values only:
 0.008475 
 0.018788 
 0.000078 
 0.050345 
 0.011501 
Expected gradients (approx) - first five values only:
 0.0085
 0.0188
 0.0001
 0.0503
 0.0115

Program paused. Press enter to continue.

Cost at test theta (with lambda = 10): 3.206882
Expected cost (approx): 3.16
Gradient at test theta - first five values only:
 0.346045 
 0.161352 
 0.194796 
 0.226863 
 0.092186 
Expected gradients (approx) - first five values only:
 0.3460
 0.1614
 0.1948
 0.2269
 0.0922

Program paused. Press enter to continue.

Local minimum possible.

fminunc stopped because it cannot decrease the objective function
along the current search direction.

<stopping criteria details>
Train Accuracy: 83.050847
Expected accuracy (with lambda = 1): 83.1 (approx)

Part 3 Extended Exercises
Underfitting:
[Figure: decision boundary that underfits the data]
Good fit:
[Figure: decision boundary that fits the data well]
Overfitting:
[Figure: decision boundary that overfits the data]
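
These three plots come from re-running the regularized fit with different values of lambda (the exercise suggests trying values such as 0 and 100). A sketch of the experiment, assuming the variables from ex2_reg.m are in scope:

initial_theta = zeros(size(X, 2), 1);
options = optimset('GradObj', 'on', 'MaxIter', 400);
for lambda = [100 1 0]   % underfit / good fit / overfit
    [theta, J, exit_flag] = ...
        fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
    plotDecisionBoundary(theta, X, y);
    title(sprintf('lambda = %g', lambda));
end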

Thanks, everyone.
