Starting this post to log my machine learning homework code and to keep myself honest: no more blind copy-pasting!! Of course, it's still fine to learn from the experts' solutions..
week4 ex3
- function [J, grad] = lrCostFunction(theta, X, y, lambda)
// lrCostFunction.m
function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with
%regularization
% J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
% Initialize some useful values
m = length(y); % number of training examples
% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
% You should set J to the cost.
% Compute the partial derivatives and set grad to the partial
% derivatives of the cost w.r.t. each parameter in theta
%
% Hint: The computation of the cost function and gradients can be
% efficiently vectorized. For example, consider the computation
%
% sigmoid(X * theta)
%
% Each row of the resulting matrix will contain the value of the
% prediction for that example. You can make use of this to vectorize
% the cost function and gradient computations.
%
% Hint: When computing the gradient of the regularized cost function,
% there're many possible vectorized solutions, but one solution
% looks like:
% grad = (unregularized gradient for logistic regression)
% temp = theta;
% temp(1) = 0; % because we don't add anything for j = 0
% grad = grad + YOUR_CODE_HERE (using the temp variable)
h = sigmoid(X*theta);   % hypothesis values for all m examples
J = 1/m*(-y'*log(h)-(1-y)'*log(1-h)) + lambda/(2*m)*(sum(theta.^2)-theta(1)^2);
grad = X'*(h-y)/m + lambda/m*theta;
grad(1) = grad(1) - lambda/m*theta(1);  % the bias term theta(1) is not regularized
% =============================================================
grad = grad(:);
end
Since I didn't study last week's material properly, this part is borrowed from an expert's solution...
No learning rate alpha shows up here. That's because lrCostFunction only returns the cost and the gradient; the optimizer (fmincg, below) picks its own step sizes via line search, so no fixed alpha is needed.
Note that the regularization sums all start from j = 1; the bias term theta(1) (j = 0 in the lecture notation) is left out.
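For reference, these are the regularized cost and gradient from the lectures that the vectorized lines implement (the penalty sum starting at $j = 1$ is exactly why theta(1) gets special-cased):

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log h_\theta(x^{(i)}) - (1-y^{(i)})\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

$$\frac{\partial J}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)x_0^{(i)}, \qquad \frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \ge 1)$$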
The expert's write-up is here: https://blog.csdn.net/Only_Wolfy/article/details/89893734?spm=1001.2014.3001.5506
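A quick way to sanity-check lrCostFunction, using the small test case ex3.m runs (if I remember the handout correctly); sigmoid.m and the function above must be on the path:
// sanity check
theta_t = [-2; -1; 1; 2];
X_t = [ones(5,1) reshape(1:15,5,3)/10];   % 5 examples, bias column + 3 features
y_t = [1; 0; 1; 0; 1];
lambda_t = 3;
[J, grad] = lrCostFunction(theta_t, X_t, y_t, lambda_t);
fprintf('Cost: %f (expected about 2.534819)\n', J);
fprintf('Gradient: '); fprintf('%f ', grad); fprintf('\n');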
- function [all_theta] = oneVsAll(X, y, num_labels, lambda)
// The assignment template
function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta
%corresponds to the classifier for label i
% [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
% logistic regression classifiers and returns each of these classifiers
% in a matrix all_theta, where the i-th row of all_theta corresponds
% to the classifier for label i
% Some useful variables
m = size(X, 1); % number of rows = training examples
n = size(X, 2); % number of columns = features
% You need to return the following variables correctly
all_theta = zeros(num_labels, n + 1);
% Add ones to the X data matrix
X = [ones(m, 1) X];
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the following code to train num_labels
% logistic regression classifiers with regularization
% parameter lambda.
%
% Hint: theta(:) will return a column vector.
%
% Hint: You can use y == c to obtain a vector of 1's and 0's that tell you
% whether the ground truth is true/false for this class.
%
% Note: For this assignment, we recommend using fmincg to optimize the cost
% function. It is okay to use a for-loop (for c = 1:num_labels) to
% loop over the different classes.
%
% fmincg works similarly to fminunc, but is more efficient when we
% are dealing with large number of parameters.
%
% Example Code for fmincg:
%
% % Set Initial theta
% initial_theta = zeros(n + 1, 1);
%
% % Set options for fminunc
% options = optimset('GradObj', 'on', 'MaxIter', 50);
%
% % Run fmincg to obtain the optimal theta
% % This function will return theta and the cost
% [theta] = ...
% fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
% initial_theta, options);
for c = 1:num_labels
    initial_theta = zeros(n + 1, 1);
    options = optimset('GradObj', 'on', 'MaxIter', 50);
    [theta] = ...
        fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
                initial_theta, options);
    all_theta(c,:) = theta;   % row c holds the trained classifier for label c
end
%=========================================================================
end
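The (y == c) hint is worth a quick illustration of why a single comparison builds the binary labels for classifier c:
// the (y == c) trick
y = [3; 1; 3; 2];
c = 3;
y == c    % ans = [1; 0; 1; 0]: class 3 vs. everything else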
This lousy campus network nearly drove me up the wall...
Luckily Coursera programming assignments can still be submitted after the due date; passing them any time before the course ends is good enough.
Note to self: next time use the phone hotspot with 5G off, plus the best veee route.
The fmincg function is from week 3.
This one honestly just amounts to copying the example code from the assignment instructions.
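For completeness, here is a minimal sketch of how all_theta gets used afterwards; this is roughly what ex3's predictOneVsAll asks for (the body below is my own, assuming sigmoid.m is on the path):
// prediction sketch
function p = predictOneVsAll(all_theta, X)
m = size(X, 1);
X = [ones(m, 1) X];                           % add the bias column, as in oneVsAll
% sigmoid(X * all_theta') is m x num_labels: one score per class per example.
[~, p] = max(sigmoid(X * all_theta'), [], 2); % predicted label = index of the strongest classifier
end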
About the fmincg function:
In short: fmincg minimizes a continuously differentiable multivariate function. The starting point is given by X (D by 1), and the function named by f must return both a function value and a vector of partial derivatives. Search directions come from the Polack-Ribiere flavour of conjugate gradients; a line search using quadratic and cubic polynomial approximations and the Wolfe-Powell stopping criteria is combined with the slope-ratio method for guessing initial step sizes, plus a bunch of checks to make sure exploration is taking place and extrapolation never gets unboundedly large. The length parameter sets the run length: if positive, the maximum number of line searches; if negative, its absolute value caps the number of function evaluations (an optional second component gives the expected reduction in function value for the first line search, default 1.0). The function returns when its length is used up or when no further progress can be made (we are at a minimum, or so close that numerical problems prevent getting any closer). Terminating within only a few iterations usually means the function value and derivatives are inconsistent, i.e. there is probably a bug in your implementation of f. It returns the solution X, a vector fX of function values showing the progress made, and the iteration count i (line searches or function evaluations, depending on the sign of length). The original English docstring is reproduced below.
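Since the "f must return a value and a gradient" contract is the part that matters in practice, here is a tiny toy run (quadCost is a made-up test function of mine, not part of the exercise; fmincg.m must be on the path):
// toy fmincg run
% quadCost.m: toy objective with its minimum at [3; -1]
function [J, grad] = quadCost(x)
J = (x(1) - 3)^2 + (x(2) + 1)^2;         % function value
grad = [2*(x(1) - 3); 2*(x(2) + 1)];     % vector of partial derivatives
end

options = optimset('GradObj', 'on', 'MaxIter', 20);
[xOpt, fX, iters] = fmincg(@quadCost, zeros(2, 1), options);
% xOpt ends up near [3; -1]; fX records the cost after each line search.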
// The fmincg function
function [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
% Minimize a continuously differentiable multivariate function. Starting point
% is given by "X" (D by 1), and the function named in the string "f", must
% return a function value and a vector of partial derivatives. The Polack-
% Ribiere flavour of conjugate gradients is used to compute search directions,
% and a line search using quadratic and cubic polynomial approximations and the
% Wolfe-Powell stopping criteria is used together with the slope ratio method
% for guessing initial step sizes. Additionally a bunch of checks are made to
% make sure that exploration is taking place and that extrapolation will not
% be unboundedly large. The "length" gives the length of the run: if it is
% positive, it gives the maximum number of line searches, if negative its
% absolute gives the maximum allowed number of function evaluations. You can
% (optionally) give "length" a second component, which will indicate the
% reduction in function value to be expected in the first line-search (defaults
% to 1.0). The function returns when either its length is up, or if no further
% progress can be made (ie, we are at a minimum, or so close that due to
% numerical problems, we cannot get any closer). If the function terminates
% within a few iterations, it could be an indication that the function value
% and derivatives are not consistent (ie, there may be a bug in the
% implementation of your "f" function). The function returns the solution "X",
% a vector of function values "fX" indicating the progress made, and "i" the
% number of iterations (line searches or function evaluations, depending on
% the sign of "length") used.