Optimization Algorithms 2: Least Squares and Gradient Descent (Fixed Step Size)

This article was written and published with Zhihu On VSCode.

1. Least Squares

Given a data set

$$D=\{(x_1,y_1),(x_2,y_2),\dots,(x_m,y_m)\},$$

where $x_i=(x_{i1};x_{i2};\dots;x_{id})$,

we fit the data with the linear model

$$f(x_i)=\hat\omega^T \hat x_i,$$

where

$$\hat\omega = (\omega_1;\omega_2;\dots;\omega_d;b), \qquad \hat x_i = (x_{i1};x_{i2};\dots;x_{id};1).$$

Least squares measures the fit with the mean squared error; since this function is convex and differentiable, we can set its derivative to zero and solve for the parameters in closed form. The mean squared error is

$$E(\hat\omega)=\frac{1}{m}\sum_{i=1}^m\bigl(f(x_i)-y_i\bigr)^2.$$
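As a quick sanity check, this error is a one-liner in MATLAB (a sketch; it assumes the design matrix X already carries the appended bias column, as in the scripts below):

% Mean squared error of a candidate parameter vector omega_hat,
% with X of size m-by-(d+1) and Y of size m-by-1.
mse = @(omega_hat) mean((X*omega_hat - Y).^2);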

Let $X=(\hat x_1^T;\hat x_2^T;\dots;\hat x_m^T)$ and $Y=(y_1;y_2;\dots;y_m)$; then

$$E(\hat\omega)=\frac{1}{m}(X\hat\omega-Y)^T(X\hat\omega-Y).$$

Differentiating with respect to $\hat\omega$ gives

$$\frac{\partial E}{\partial\hat\omega}=\frac{2}{m}X^T(X\hat\omega-Y).$$

Setting this derivative to zero yields the normal equations $X^TX\hat\omega=X^TY$; when $X^TX$ is invertible, the solution is

$$\hat\omega^*=(X^TX)^{-1}X^TY.$$

This is the optimal solution we were after.
Using this result, we randomly generate a 3-D data set and fit it by least-squares linear regression:

clc;
M = 50;                                % number of samples
dim = 2;                               % feature dimension

X = 10*randn(M,dim);                   % random features
Y = 10*rand(M,1);                      % random targets
figure(1);
scatter3(X(:,1),X(:,2),Y,'filled');

X_2 = ones(M,1);                       % bias column
X = [X,X_2];                           % augmented design matrix

omega = (X'*X)\(X'*Y);                 % closed-form solution of the normal equations
[xx,yy] = meshgrid(-20:0.2:20,-20:0.2:20);

zz = omega(1,1)*xx+omega(2,1)*yy+omega(3,1);   % fitted plane
hold on;
surf(xx,yy,zz);
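To validate the result, compare it against MATLAB's built-in least-squares solve (a sketch; omega and the augmented X come from the script above):

% X\Y solves the same least-squares problem via a QR factorization
% and is the numerically preferred form; the two should agree.
omega_check = X\Y;
disp(norm(omega - omega_check));   % should be near zero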

Result:

[Figure: 3-D scatter of the samples with the fitted plane]

2. Gradient Descent

Starting again from the mean squared error, take the partial derivative with respect to each component $\omega_j$ of $\hat\omega$ ($j = 1,\dots,d+1$, writing the bias $b$ as $\omega_{d+1}$):

$$\frac{\partial E}{\partial\omega_j}=\frac{2}{m}\sum_{i=1}^m \hat x_{ij}\,(\hat\omega^T\hat x_i-y_i).$$
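Stacking these components recovers the matrix form $\frac{2}{m}X^T(X\hat\omega-Y)$ from the previous section. A minimal vectorized sketch (the script below absorbs the constant factor 2 into the step size):

% Full gradient of the MSE in one matrix product.
grad = (2/M) * X' * (X*omega - Y);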

To check that the program works, we again generate random 3-D data in MATLAB:

clc;
close all;
M = 50;                      % 50 samples
dim = 2;
N = dim+1;                   % number of parameters (weights + bias)

X = 10*randn(M,dim);
Y = 10*rand(M,1);
figure(1);
scatter3(X(:,1),X(:,2),Y,'filled');

X_2 = ones(M,1);
X = [X,X_2];                 % augmented design matrix
iterate = 300;               % at most 300 iterations
count = 0;
omega = zeros(dim+1,1);
err = 1000;
delta_t = 0.01;              % fixed step size
loss_data = zeros(1,iterate);
while count < iterate && err > 0.1

    count = count+1;
    delta_omega = zeros(N,1);
    for i = 1:N
        temp_omega = 0;
        for j = 1:M
            % residual uses the full prediction X(j,:)*omega,
            % not only the i-th coordinate
            temp_omega = temp_omega + X(j,i)*(X(j,:)*omega - Y(j,1));
        end

        % mean over the samples; the factor 2 from the derivative
        % is absorbed into the step size delta_t
        delta_omega(i,1) = temp_omega/M;
    end
    omega = omega - delta_t*delta_omega;

    disp(omega);
    err = (Y-X*omega)'*(Y-X*omega);   % sum of squared errors

    disp(err);
    loss_data(1,count) = err;

end
[xx,yy] = meshgrid(-20:0.2:20,-20:0.2:20);

zz = omega(1,1)*xx+omega(2,1)*yy+omega(3,1);   % fitted plane
hold on;
surf(xx,yy,zz);
figure(2);
x_t = 1:count;                    % plot only the iterations actually run
plot(x_t,loss_data(1:count));
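The i/j double loop above can be collapsed into a single matrix product. A vectorized sketch of the same fixed-step iteration (same variables and 1/M scaling as above):

% Vectorized fixed-step gradient descent; one matrix product per step.
omega_v = zeros(N,1);
for k = 1:iterate
    grad = X' * (X*omega_v - Y) / M;         % same scaling as delta_omega
    omega_v = omega_v - delta_t * grad;
    if (Y-X*omega_v)'*(Y-X*omega_v) <= 0.1   % same stopping test
        break;
    end
end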

Results:

[Figure: samples and the fitted plane]

[Figure: loss as a function of iteration count]

Conclusion

1. When computing $\Delta\omega$, note that the error is the mean squared error: if you do not divide by the number of samples, the norm of the resulting gradient vector can be so large that the program fails to converge.
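To see the scale difference directly (a sketch; omega, X, Y, M as in the script above):

% The summed gradient is M times larger than the averaged one.
grad_sum  = X' * (X*omega - Y);    % no division by M
grad_mean = grad_sum / M;          % what the script above uses
% A step size delta_t tuned for grad_mean overshoots with grad_sum
% unless it is shrunk by the same factor M.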