The Fletcher–Reeves (FR) Conjugate Gradient Algorithm


Lemma

Let $f(x)=\frac{1}{2}x^TGx+\delta^Tx+\gamma$ with $G$ positive definite. Then the exact line search problem $\phi(\alpha_k)=\min\limits_{\alpha} f(x_k+\alpha d_k)$ has the closed-form solution

$\alpha_k=-\dfrac{\nabla f(x_k)^Td_k}{d_k^TGd_k}$
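As a quick numerical check of the lemma (the quadratic $G$, $\delta$, $\gamma$ and the point $x_k$ below are illustrative choices, not from the text), the following Python sketch computes the closed-form step and verifies that the directional derivative $\phi'(\alpha_k)=\nabla f(x_k+\alpha_k d_k)^Td_k$ vanishes:

```python
import numpy as np

# Small positive-definite quadratic f(x) = 0.5*x^T G x + delta^T x + gamma
G = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
delta = np.array([1.0, 2.0])
gamma = 0.5

f = lambda x: 0.5 * x @ G @ x + delta @ x + gamma
grad = lambda x: G @ x + delta

x_k = np.array([2.0, -1.0])
d_k = -grad(x_k)                          # a descent direction

# Closed-form exact step from the lemma
alpha_k = -(grad(x_k) @ d_k) / (d_k @ G @ d_k)

# phi'(alpha_k) = grad f(x_k + alpha_k d_k)^T d_k should be (numerically) zero
phi_prime = grad(x_k + alpha_k * d_k) @ d_k
print(alpha_k, phi_prime)
```

Because $\phi$ is a one-dimensional quadratic in $\alpha$, setting $\phi'(\alpha)=\nabla f(x_k)^Td_k+\alpha\, d_k^TGd_k=0$ gives the formula directly.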

The FR Algorithm

Core idea:

  • Iterate with conjugate gradients, updating the search direction at every step.

Algorithm Steps

  • Step 1: choose an initial point $x_1$ and a tolerance $\epsilon>0$; set $k=1$, $d_0=1$, $\beta_0=1$;
  • Step 2: if $k=1$, set $d_k=-\nabla f(x_k)$;
    otherwise ($k>1$), compute $\beta_{k-1}=\dfrac{\nabla f(x_k)^T\nabla f(x_k)}{\nabla f(x_{k-1})^T\nabla f(x_{k-1})}$ and $d_k=-\nabla f(x_k)+\beta_{k-1}d_{k-1}$;
  • Step 3: compute $\alpha_k$ such that $\phi(\alpha_k)=\min\limits_{\alpha} f(x_k+\alpha d_k)$, i.e.
    $\alpha_k=-\dfrac{\nabla f(x_k)^Td_k}{d_k^TGd_k}$;
  • Step 4: set $x_{k+1}=x_k+\alpha_kd_k$, $k=k+1$, and return to Step 2 (stop once $\|\nabla f(x_k)\|<\epsilon$).

Implementation

Below is MATLAB code for the FR conjugate gradient algorithm with a strong-Wolfe line search. (The original snippet mixed symbolic expressions with function handles and evaluated the gradient incorrectly inside the line search; the version here uses an analytic gradient of the Rosenbrock test function and evaluates the gradient at the trial point. Local functions are placed at the end of the script, as MATLAB requires.)

```matlab
% Rosenbrock test function and its analytic gradient
f = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
g = @(x) [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
           200*(x(2) - x(1)^2)];

% Initial point, maximum iterations, and tolerance
x0 = [-1.2; 1]; max_iter = 1000; tol = 1e-6;
[x, fval, iter] = fr_cg(f, g, x0, max_iter, tol);
disp(['The minimum value of the function is ', num2str(fval), ...
      ' at x = [', num2str(x(1)), ', ', num2str(x(2)), ...
      '] with ', num2str(iter), ' iterations.']);

% FR conjugate gradient with a strong-Wolfe line search
function [x, fval, iter] = fr_cg(f, g, x0, max_iter, tol)
    x = x0;
    g0 = g(x);
    d = -g0;
    for iter = 1:max_iter
        alpha = strong_wolfe(f, g, x, d, 1e-4, 0.9, 1, 1000);
        x = x + alpha*d;
        g1 = g(x);
        if norm(g1) < tol
            break
        end
        beta = (g1'*g1)/(g0'*g0);   % Fletcher-Reeves coefficient
        d = -g1 + beta*d;
        g0 = g1;
    end
    fval = f(x);
end

% Strong-Wolfe line search: bracket a step, then refine with zoom
function alpha = strong_wolfe(f, g, x, d, c1, c2, alpha_max, max_iter)
    phi  = @(a) f(x + a*d);          % 1-D restriction of f along d
    dphi = @(a) g(x + a*d)'*d;       % its derivative
    phi0 = phi(0); dphi0 = dphi(0);
    alpha_prev = 0;
    alpha = alpha_max/2;
    for k = 1:max_iter
        if phi(alpha) > phi0 + c1*alpha*dphi0 || (k > 1 && phi(alpha) >= phi(alpha_prev))
            alpha = zoom_step(phi, dphi, alpha_prev, alpha, phi0, dphi0, c1, c2, max_iter);
            return
        end
        if abs(dphi(alpha)) <= -c2*dphi0   % curvature condition satisfied
            return
        end
        if dphi(alpha) >= 0
            alpha = zoom_step(phi, dphi, alpha, alpha_prev, phi0, dphi0, c1, c2, max_iter);
            return
        end
        alpha_prev = alpha;
        alpha = (alpha + alpha_max)/2;
    end
end

% Bisection zoom between a low and a high trial step
function alpha = zoom_step(phi, dphi, alo, ahi, phi0, dphi0, c1, c2, max_iter)
    for k = 1:max_iter
        alpha = (alo + ahi)/2;
        if phi(alpha) > phi0 + c1*alpha*dphi0 || phi(alpha) >= phi(alo)
            ahi = alpha;
        else
            if abs(dphi(alpha)) <= -c2*dphi0
                return
            end
            if dphi(alpha)*(ahi - alo) >= 0
                ahi = alo;
            end
            alo = alpha;
        end
    end
end
```