function [theta, J_history] = newtonmethod(X, y, theta, num_iters)
% NEWTONMETHOD  Fit linear regression by Newton's method.
m = length(y);   % number of training examples
J_history = zeros(num_iters, 1);
H = X' * X;      % Hessian of the least-squares cost; constant, so computed once
for iter = 1:num_iters
    h = X * theta;                           % current predictions
    theta = theta - pinv(H) * X' * (h - y);  % Newton update
    J_history(iter) = sum((X * theta - y) .^ 2) / (2 * m);  % record the cost
end
end
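The update above can be checked numerically: for a linear least-squares cost, a single Newton step from any starting point lands exactly on the normal-equation solution. Below is a small illustrative sketch in Python/NumPy (the problem data and variable names `X`, `y`, `theta` are hypothetical, chosen to mirror the MATLAB code above):

```python
import numpy as np

# Hypothetical small regression problem (assumed data, not from the article)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])  # design matrix with intercept
true_theta = np.array([1.0, 2.0, -3.0])
y = X @ true_theta               # noiseless targets, so an exact fit exists

theta = np.zeros(3)
H = X.T @ X                      # Hessian of the least-squares cost (up to a 1/m factor)
grad = X.T @ (X @ theta - y)     # gradient at the current theta (same scaling)
theta = theta - np.linalg.pinv(H) @ grad  # one Newton step

print(np.allclose(theta, true_theta))  # True: one step reaches the exact solution
```

Because the cost is exactly quadratic in `theta`, the quadratic model Newton's method minimizes is the cost itself, which is why one step suffices here.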
The main difficulty is computing the Hessian matrix. When the sample size is not large, Newton's method converges much faster than gradient descent.
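The convergence claim can be made precise for this cost. Writing the least-squares cost and differentiating twice (a standard derivation, consistent with the code above):

```latex
J(\theta) = \frac{1}{2m}\,\lVert X\theta - y \rVert^2, \qquad
\nabla J(\theta) = \frac{1}{m}\,X^{\top}(X\theta - y), \qquad
H = \nabla^2 J(\theta) = \frac{1}{m}\,X^{\top}X .
```

The Newton step is $\theta \leftarrow \theta - H^{-1}\nabla J(\theta) = \theta - (X^{\top}X)^{-1}X^{\top}(X\theta - y)$; the $1/m$ factors cancel, which is why the code can use $X^{\top}X$ directly. Since $H$ does not depend on $\theta$, the cost is exactly quadratic and Newton's method reaches the minimizer in one step, whereas gradient descent needs many iterations.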
Using Newton's method in place of gradient descent in machine learning