Application and Comparison of Proximal Operator Methods: Optimization Algorithms

I use three methods to solve the problem and compare them:

1. Proximal Gradient Method

2. Alternating Direction Method of Multipliers (ADMM)

3. CVX

 

The problem:

    minimize  (1/2)·xᵀAx + bᵀx + c + γ‖x‖₁

where A ∈ Rⁿˣⁿ, b, x ∈ Rⁿ, and γ > 0; this is the objective passed to all three solvers below.
(1). Proximal Gradient Method

Introduction: Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form

    minimize  f(x) + g(x)

where f is smooth and convex and g is convex but possibly non-differentiable; here f(x) = (1/2)·xᵀAx + bᵀx + c and g(x) = γ‖x‖₁.
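The update rule, as implemented in proximalGradient.m below, is the standard proximal gradient iteration

    x⁺ = prox_{λg}( x − λ·∇f(x) )

and for g(x) = γ‖x‖₁ the proximal operator is componentwise soft thresholding,

    prox_{λγ‖·‖₁}(v)ᵢ = sign(vᵢ)·max(|vᵢ| − λγ, 0).

Since f(x) = (1/2)·xᵀAx + bᵀx + c, the gradient is ∇f(x) = (1/2)(A + Aᵀ)x + b, which matches grad_x in the code; the step size λ is chosen by a backtracking line search.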
(2). ADMM

Introduction: ADMM solves the problem in split form, minimize f(x) + g(z) subject to x = z, with f(x) = (1/2)·xᵀAx + bᵀx + c and g(z) = γ‖z‖₁. Each iteration alternates a quadratic x-minimization, a soft-thresholding z-update, and a dual-variable update.
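In scaled form the updates, matching the implementation in ADMM.m below (with ρ = 1/λ), are

    x⁺ = (A + ρI)⁻¹ ( ρ(z − u) − b )
    z⁺ = soft_threshold( x⁺ + u, γ/ρ )
    u⁺ = u + x⁺ − z⁺

and the iteration stops once the primal residual ‖x − z‖ and the dual residual ‖ρ(z − z_old)‖ fall below the tolerances eps_pri and eps_dual.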
(3). CVX

Introduction: I use the CVX toolbox for MATLAB to solve the problem and treat its result as a reference solution. In the plot of the decrement of the objective function w.r.t. iteration, I use the CVX optimum as the true solution and draw it as a horizontal line.

The algorithm:

 

cvx_begin quiet

        cvx_precision low

        variable x(n)

        minimize(0.5*x.'*A*x + b.'*x + c + gamma*norm(x,1))

cvx_end

 

Results:

Figure: the 2D contour plot of the objective function and the trajectory of the iterate updates.



Figure: decrement of the objective function w.r.t. iteration.


CVX time elapsed: 0.19 seconds.

Proximal gradient time elapsed: 0.00 seconds.

ADMM time elapsed: 0.02 seconds.

 

 

Attachments:

1. Proximal Gradient Method (proximalGradient.m)

function [x,h] = proximalGradient(A,b,c,gamma)
%% Solve min 1/2*x'*A*x + b'*x + c + gamma*norm(x,1)
%% A: n*n, x: n*1

MAX_ITER = 400;

ABSTOL   = 1e-6;

RELTOL   = 1e-2;

 

f = @(x) 0.5*(x'*A*x) + b'*x + c;  %% smooth part, used for the backtracking line search

lambda = 1;

beta = 0.5;

[~,n]=size(A);

tic;

x = zeros(n,1);

xprev = x;

for k = 1:MAX_ITER

    while 1
        grad_x = 0.5*(A+A')*x + b;  %% gradient of the smooth part f
        z = soft_threshold(x - lambda*grad_x, lambda*gamma);  %% proximal step to update x
        %% backtracking line search: accept z when the quadratic upper bound holds
        if f(z) <= f(x) + grad_x'*(z - x) + (1/(2*lambda))*(norm(z - x))^2
            break;
        end
        lambda = beta*lambda;  %% otherwise shrink the step size
    end

    xprev = x;
    x = z;
    h.x_iter(k,:) = x;
    h.prox_optval(k) = objective(A, b, c, gamma, x, x);
    %% stop when the objective change between iterations is small
    if k > 1 && abs(h.prox_optval(k) - h.prox_optval(k-1)) < ABSTOL
        break;
    end

end

 

h.x_prox = x;
h.p_prox = h.prox_optval(end);
h.prox_iter = length(h.prox_optval);
h.prox_grad_toc = toc;

 

end

 

2. ADMM (ADMM.m)

function [x,h] = ADMM(A,b,c,gamma)
%% Solve min 1/2*x'*A*x + b'*x + c + gamma*norm(x,1)
%% A: n*n, x: n*1

MAX_ITER = 400;

ABSTOL   = 1e-4;

RELTOL   = 1e-2;

 

%f = @(x) 0.5*(x'*A*x) + b'*x + c;  %% not needed here: ADMM uses no line search

lambda = 1;

beta = 0.5;

[~,n]=size(A);

tic

x = zeros(n,1);

z = zeros(n,1);

u = zeros(n,1);

rho = 1/lambda;

for k = 1:MAX_ITER
    %% x-update: solve the quadratic subproblem (assumes A is symmetric;
    %% backslash is preferred over inv for solving the linear system)
    x = (A + rho*eye(n)) \ (rho*(z - u) - b);
    z_old = z;
    z = soft_threshold(x + u, lambda*gamma);  %% z-update; threshold gamma/rho since rho = 1/lambda
    u = u + x - z;                            %% scaled dual update

    h.admm_optval(k) = objective(A, b, c, gamma, x, x);
    h.r_norm(k)   = norm(x - z);              %% primal residual
    h.s_norm(k)   = norm(-rho*(z - z_old));   %% dual residual
    h.eps_pri(k)  = sqrt(n)*ABSTOL + RELTOL*max(norm(x), norm(-z));
    h.eps_dual(k) = sqrt(n)*ABSTOL + RELTOL*norm(rho*u);
    h.x_iter(k,:) = z;

    %% stop once both residuals are below their tolerances
    if h.r_norm(k) < h.eps_pri(k) && h.s_norm(k) < h.eps_dual(k)
        break;
    end
end

h.x_admm = z;

h.admm_iter = length(h.admm_optval);
h.p_admm = h.admm_optval(end);

h.admm_toc = toc;

end
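A small design note: since rho never changes, the matrix A + rho*eye(n) in the x-update is constant, so one could factor it once before the loop instead of solving the system from scratch every iteration. A sketch of this refinement (my own suggestion, not part of the original code; it assumes A is symmetric positive semidefinite so that A + rho*eye(n) is positive definite and the Cholesky factorization exists):

L = chol(A + rho*eye(n), 'lower');   % factor once, outside the for-loop
% then inside the loop the x-update becomes two triangular solves:
x = L' \ (L \ (rho*(z - u) - b));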

 

3. CVX (cvx_optimal.m)

function [x,h] = cvx_optimal(A,b,c,gamma)

    [~,n]=size(A);

    tic

    cvx_begin quiet

        cvx_precision low

        variable x(n)

        minimize(0.5*x.'*A*x + b.'*x + c + gamma*norm(x,1))

    cvx_end

    h.x_cvx = x;
    h.p_cvx = cvx_optval;
    %h.p_cvx = objective(A, b, c, gamma, x, x);
    h.cvx_toc = toc;

end

 

4. Objective function (objective.m)

function p = objective(A, b, c, gamma, x, z)
%% Full objective: the smooth quadratic part evaluated at x plus gamma*norm(z,1)
    p = 0.5*(x'*A*x) + b'*x + c + gamma*norm(z,1);

end

 

5. Soft threshold (soft_threshold.m)

function [X] = soft_threshold(b, lambda)
%% Componentwise soft thresholding: the proximal operator of lambda*norm(.,1)
    X = sign(b).*max(abs(b) - lambda, 0);

end
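For example, with a threshold of 1 each component is shrunk toward zero (values chosen for illustration):

soft_threshold([3; -0.5; 1], 1)   % returns [2; 0; 0]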

 
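The post does not include the driver script that produced the figures and timings; a minimal sketch of one (the file name, random problem data, and plotting calls are my own assumptions) could look like:

% demo_compare.m (hypothetical driver): build a small test problem and run all three solvers
n = 2;                       % 2D so the contour/trajectory plot is possible
rng(0);                      % reproducible random data (illustrative only)
R = randn(n);
A = R'*R + eye(n);           % symmetric positive definite
b = randn(n,1);
c = 0;
gamma = 1;

[~, h_pg]   = proximalGradient(A, b, c, gamma);
[~, h_admm] = ADMM(A, b, c, gamma);
[~, h_cvx]  = cvx_optimal(A, b, c, gamma);

fprintf('CVX time elapsed: %.2f seconds.\n', h_cvx.cvx_toc);
fprintf('Proximal gradient time elapsed: %.2f seconds.\n', h_pg.prox_grad_toc);
fprintf('ADMM time elapsed: %.2f seconds.\n', h_admm.admm_toc);

% objective decrement w.r.t. iteration, with the CVX optimum as a horizontal line
figure; hold on;
plot(1:h_pg.prox_iter,   h_pg.prox_optval,   'b-');
plot(1:h_admm.admm_iter, h_admm.admm_optval, 'r-');
yline(h_cvx.p_cvx, 'k--');   % yline needs R2018b+; otherwise plot a constant vector
legend('Proximal gradient', 'ADMM', 'CVX optimum');
xlabel('iteration'); ylabel('objective value');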
