Optimization MATLAB Homework: MATLAB Implementations of Modern Design Optimization Algorithms

Preface

A while back, while working through the Modern Design Methods course, I noticed that simple, homework-style implementations of these algorithms are hard to find online, so I decided to write one up here on Jianshu. There are two sets of code. One is mine (embarrassingly, most of it was collected from the internet; I just ran each script once on my own machine and called it done if it produced output. Worse, my laptop died at problem 2-11 and hasn't been repaired yet, so my code is really just material from the web, tidied up and lightly adjusted, and I can't vouch for its accuracy). The other set is my roommate's; he says it has all been debugged, and although it still contains quite a few errors, it should be somewhat better than mine.

[Image: my computer]

Also, because my computer crashed I couldn't verify the results of my own code; to hand in the assignment I simply pasted in my roommate's run results. There may well be some discrepancies, so please bear with me; the important part is the code.

Main Text

2-10

[Figure: problem 2-10 statement]

Mine:

Golden section method:

f = @(x) x + 20/x;
golden(f,2,10,0.01)

function [xmin,fmin] = golden(f,a,b,e)
k = 0;
a1 = b - 0.618*(b-a);              % interior (trial) points
a2 = a + 0.618*(b-a);
while b-a > e                      % loop until the interval is short enough
    y1 = f(a1);
    y2 = f(a2);
    if y1 > y2                     % compare the function values at the trial points
        a = a1;                    % discard the left part of the interval
        a1 = a2;
        y1 = y2;
        a2 = a + 0.618*(b-a);
    else
        b = a2;                    % discard the right part of the interval
        a2 = a1;
        y2 = y1;
        a1 = b - 0.618*(b-a);
    end
    k = k + 1;
end                                % stop once the convergence criterion is met
xmin = (a+b)/2;
fmin = f(xmin);                    % optimal function value
fprintf('k=\n');                   % number of iterations
disp(k);
end

f = @(x)x+20/x

>> [x,y]=golden(f,2,10,0.01)

x =

4.4683

y =

8.9443
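For f(x) = x + 20/x the exact minimizer on [2,10] follows from f'(x) = 1 - 20/x^2 = 0, i.e. x* = sqrt(20) ≈ 4.4721 with f(x*) = 2*sqrt(20) ≈ 8.9443, so the result above is within the 0.01 tolerance. As a quick sanity check (my own addition, not part of the assignment), the same interval can be handed to MATLAB's built-in bounded minimizer:

% Cross-check with the built-in fminbnd (sanity check only).
f = @(x) x + 20/x;
[xstar,fstar] = fminbnd(f,2,10)   % expected: xstar close to 4.4721, fstar close to 8.9443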

Quadratic interpolation method:

function [xmin,fmin] = main(f,a,b,eps)
% Quadratic (parabolic) interpolation method on [a,b] with tolerance eps
x1 = a; x3 = b;
x2 = (a+b)/2;
f1 = f(x1); f2 = f(x2); f3 = f(x3);
while 1
    C1 = (x2^2-x3^2)*f1 + (x3^2-x1^2)*f2 + (x1^2-x2^2)*f3;
    C2 = (x2-x3)*f1 + (x3-x1)*f2 + (x1-x2)*f3;
    xp = 0.5*C1/C2;               % minimizer of the interpolating parabola
    fp = f(xp);
    if abs(x2-xp) <= eps          % stop when the new point is close enough to x2
        if fp <= f2
            xmin = xp;
            fmin = f(xp);         % minimum value
        else
            xmin = x2;
            fmin = f(x2);         % minimum value
        end
        break;
    else                          % otherwise shrink the bracket [x1,x3]
        if fp <= f2
            if xp <= x2
                x3 = x2; x2 = xp;
                f3 = f2; f2 = fp;
            else
                x1 = x2; x2 = xp;
                f1 = f2; f2 = fp;
            end
        else
            if xp <= x2
                x1 = xp; f1 = fp;
            else
                x3 = xp; f3 = fp;
            end
        end
    end
end
end

f = @(x) x + 20/x; [xmin,fmin] = main(f,2,10,0.01)

xmin =

4.4869

fmin =

8.9443

My roommate's:

Golden section function:

f = @(x) x + 20/x;
yellowking(f,2,10,0.01)

function [xmin,fmin] = yellowking(f,a,b,e)
k = 0;
a1 = b - 0.618*(b-a);              % interior (trial) points
a2 = a + 0.618*(b-a);
while b-a > e                      % loop until the interval is short enough
    y1 = f(a1);
    y2 = f(a2);
    if y1 > y2                     % compare the function values at the trial points
        a = a1;                    % discard the left part of the interval
        a1 = a2;
        y1 = y2;
        a2 = a + 0.618*(b-a);
    else
        b = a2;                    % discard the right part of the interval
        a2 = a1;
        y2 = y1;
        a1 = b - 0.618*(b-a);
    end
    k = k + 1;
end                                % stop once the convergence criterion is met
xmin = (a+b)/2;
fmin = f(xmin);                    % optimal function value
fprintf('k=\n');                   % number of iterations
disp(k);
end

Commands and output:

f=@(x)x+20/x

f =

@(x)x+20/x

>> [x,y]=yellowking(f,2,10,0.01)

x =

4.4683

y =

8.9443

Quadratic interpolation function:

function [xmin,fmin] = main(f,a0,b0,epsilon)
a = a0;
b = b0;
x1 = a;  f1 = f(x1);
x3 = b;  f3 = f(x3);
x2 = (a+b)/2;  f2 = f(x2);              % interior starting point
c1 = (f3-f1)/(x3-x1);
c2 = ((f2-f1)/(x2-x1)-c1)/(x2-x3);
xp = 0.5*(x1+x3-c1/c2);  fp = f(xp);    % minimizer of the interpolating parabola
while (abs(xp-x2) >= epsilon)
    if xp > x2                          % new point lies to the right of x2
        if f2 > fp
            f1 = f2; x1 = x2;
            x2 = xp; f2 = fp;
        else
            f3 = fp; x3 = xp;
        end
    else                                % new point lies to the left of x2
        if f2 > fp
            f3 = f2; x3 = x2;
            f2 = fp; x2 = xp;
        else
            f1 = fp; x1 = xp;
        end
    end
    c1 = (f3-f1)/(x3-x1);
    c2 = ((f2-f1)/(x2-x1)-c1)/(x2-x3);
    xp = 0.5*(x1+x3-c1/c2);
    fp = f(xp);
end
if f2 > fp
    xmin = xp; fmin = f(xp);
else
    xmin = x2; fmin = f(x2);
end
end

Result:

clear all; f = @(x) x + 20/x; [xmin,fmin] = main(f,2,10,0.01)

xmin =

4.4869

fmin =

8.9443

2-11

[Figure: problem 2-11 statement]

Mine:


function [k,ender] = steepest(f,x,e)
% Steepest-descent (gradient) method. f is the objective in the two symbolic
% variables x1 and x2, x is the starting point (e.g. [3;4]), e is the tolerance.
syms x1 x2 m;                        % m is the step size (learning rate)
d = -[diff(f,x1); diff(f,x2)];       % negative gradient = descent direction
flag = 1;                            % loop flag
k = 0;                               % iteration counter
while (flag)
    d_temp = subs(d,x1,x(1));        % evaluate the descent direction at the
    d_temp = subs(d_temp,x2,x(2));   % current point
    nor = norm(d_temp);              % gradient norm
    if (nor >= e)
        x_temp = x + m*d_temp;       % trial point along the descent direction
        f_temp = subs(f,x1,x_temp(1));       % substitute the trial point into f
        f_temp = subs(f_temp,x2,x_temp(2));
        h = diff(f_temp,m);          % differentiate w.r.t. m to find the best step
        m_temp = solve(h);           % solve for the optimal step size
        x = x + m_temp*d_temp;       % move to the new point
        k = k + 1;
    else
        flag = 0;
    end
end
ender = double(x);                   % final point
end

syms x1 x2;
f = x1^2+x2^2-x1*x2-10*x1-4*x2+60;
x = [0;0];
e = 0.01;
[k,ender] = steepest(f,x,e)

ender =

7.9961

5.9971

My roommate's:

2-11:

Steepest-descent (gradient) function:

function [k,ender] = tidu(f,x,e)
% Steepest-descent method: f is the objective in the symbolic variables x1, x2,
% x is the starting point and e is the tolerance on the gradient norm.
syms x1 x2 m;
d = -[diff(f,x1); diff(f,x2)];       % descent direction (negative gradient)
flag = 1;
k = 0;
while (flag)
    d_temp = subs(d,x1,x(1));
    d_temp = subs(d_temp,x2,x(2));
    nor = norm(d_temp);
    if (nor >= e)
        x_temp = x + m*d_temp;               % trial point with symbolic step m
        f_temp = subs(f,x1,x_temp(1));
        f_temp = subs(f_temp,x2,x_temp(2));
        h = diff(f_temp,m);                  % optimality condition for the step
        m_temp = solve(h);
        x = x + m_temp*d_temp;
        k = k + 1;
    else
        flag = 0;
    end
end
ender = double(x);
end

Commands and output:

syms x1 x2;
f = x1^2+x2^2-x1*x2-10*x1-4*x2+60;
x = [0;0];
e = 0.01;
[k,ender] = tidu(f,x,e)

ender =

7.9961

5.9971
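Setting the gradient of f = x1^2 + x2^2 - x1*x2 - 10*x1 - 4*x2 + 60 to zero gives x1 = 8, x2 = 6 with f = 8, so both runs above stop within the 0.01 tolerance of the true minimizer. A minimal derivative-free cross-check with the built-in fminsearch (again my own sanity check, not part of the assignment):

% Cross-check the steepest-descent result with fminsearch (Nelder-Mead).
f = @(x) x(1)^2 + x(2)^2 - x(1)*x(2) - 10*x(1) - 4*x(2) + 60;
[xstar,fstar] = fminsearch(f,[0 0])   % expected: xstar close to [8 6], fstar close to 8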

2-12

[Figure: problem 2-12 statement]

Mine:


Expand into a second-order Taylor series about the point (1,1):

syms x1 x2;
taylor(x1^4+2*x2^3-3*x1^2*x2,[x1 x2],[1 1],'Order',3)

ans =

3*x2 - 2*x1 - 6*(x1 - 1)*(x2 - 1) + 3*(x1 - 1)^2 + 6*(x2 - 1)^2 - 1

Solve with Newton's method:

function all = newton(f,x,e)
% Newton's method: f is the (quadratic) objective in the symbolic variables
% x1 and x2, x is the starting point, e is the tolerance on the gradient norm.
syms x1 x2;
d = -[diff(f,x1); diff(f,x2)];       % negative gradient
h = hessian(f,[x1,x2]);              % Hessian matrix
flag = 1;
h1 = h^-1;                           % inverse Hessian (constant for a quadratic)
while (flag)
    d_temp = subs(d,x1,x(1));        % evaluate the negative gradient at x
    d_temp = subs(d_temp,x2,x(2));
    nor = norm(d_temp);
    if (nor >= e)
        x = x + h1*d_temp;           % Newton step
    else
        flag = 0;
    end
end
all = double(x);
end

Commands and output:

clear all
>> syms x1 x2;
f = 3*x2 - 2*x1 - 6*(x1 - 1)*(x2 - 1) + 3*(x1 - 1)^2 + 6*(x2 - 1)^2 - 1;
x = [1;1];
e = 0.01; all = newton(f,x,e)

all =

1.1667

0.8333

My roommate's:

2-12:

Expand into a second-order Taylor series about the point (1,1):

syms x1 x2;
taylor(x1^4+2*x2^3-3*x1^2*x2,[x1 x2],[1 1],'Order',3)

ans =

3*x2 - 2*x1 - 6*(x1 - 1)*(x2 - 1) + 3*(x1 - 1)^2 + 6*(x2 - 1)^2 - 1

Newton's method function:

function all = newton(f,x,e)
% Newton's method: f is the (quadratic) objective in the symbolic variables
% x1 and x2, x is the starting point, e is the tolerance on the gradient norm.
syms x1 x2;
d = -[diff(f,x1); diff(f,x2)];       % negative gradient
h = hessian(f,[x1,x2]);              % Hessian matrix
flag = 1;
h1 = h^-1;                           % inverse Hessian (constant for a quadratic)
while (flag)
    d_temp = subs(d,x1,x(1));        % evaluate the negative gradient at x
    d_temp = subs(d_temp,x2,x(2));
    nor = norm(d_temp);
    if (nor >= e)
        x = x + h1*d_temp;           % Newton step
    else
        flag = 0;
    end
end
all = double(x);
end

Commands and output:

clear all
>> syms x1 x2;
f = 3*x2 - 2*x1 - 6*(x1 - 1)*(x2 - 1) + 3*(x1 - 1)^2 + 6*(x2 - 1)^2 - 1;
x = [1;1];
e = 0.01; all = newton(f,x,e)

all =

1.1667

0.8333
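Since the second-order Taylor model is a quadratic, Newton's method reaches its minimizer in essentially one step; setting the model's gradient to zero gives x1 = 7/6 ≈ 1.1667 and x2 = 5/6 ≈ 0.8333, matching the output above. A small symbolic check of this (my own addition, using the Symbolic Math Toolbox already assumed throughout the post):

% Solve grad(ftaylor) = 0 symbolically to verify the Newton result.
syms x1 x2
ftaylor = 3*x2 - 2*x1 - 6*(x1-1)*(x2-1) + 3*(x1-1)^2 + 6*(x2-1)^2 - 1;
sol = solve(gradient(ftaylor,[x1 x2]) == 0, [x1 x2]);
double([sol.x1, sol.x2])   % expected: 1.1667  0.8333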

2-13(1)

[Figure: problem 2-13(1) statement]

Mine:


Exterior penalty function method:

function [ x,y ] = Epfm_min( fx,gx,hx,xx0,s,c,a)
% fx  - objective function
% gx  - inequality constraints (convention g >= 0)
% hx  - equality constraints (convention h = 0)
% xx0 - starting point
% s   - accuracy (s > 0)
% c   - amplification factor for the penalty factor (c > 1)
% a   - initial penalty factor (default 1)
syms x1 x2
xx1 = xx0;
v = [x1,x2];
a1 = a;
Pxk = 1;                          % initialize the penalty value so the loop starts
G = -subs(gx,v,xx1);              % used to evaluate max{0,-g(x)}
while Pxk > s
    if (G < 0)                    % inequality satisfied: penalize only the equality
        Px = hx*hx;
    else                          % inequality violated: penalize g as well
        Px = hx*hx + gx*gx;
    end
    Fx = fx + a1*Px;              % unconstrained auxiliary problem
    % minimize F(x) by setting its gradient to zero
    dFx1 = diff(Fx,x1);           % partial derivatives w.r.t. x1 and x2
    dFx2 = diff(Fx,x2);
    [k,b] = solve([dFx1, dFx2],[x1, x2]);   % stationary point of F
    xx2 = [k,b];
    Pxk = a1*subs(Px,v,xx2);      % penalty value at the new point
    xx1 = xx2;                    % equivalent to setting k = k+1
    a1 = c*a1;                    % amplify the penalty factor
    G = -subs(gx,v,xx1);          % re-evaluate max{0,-g(x)}
end
x = xx1;
y = a1/c;                         % penalty factor used in the last iteration
end

syms x1 x2;
fx = x1+x2;
gx = -x1;
hx = x1^2-x2;
s = 10^-5;
c = 10;
xx0 = [0,0];
a = 1;
[x,y] = Epfm_min( fx,gx,hx,xx0,s,c,a)


x =

0.0015

minf =

0.0030

Ans=

0.0045
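For clarity, the exterior penalty method minimizes an unconstrained auxiliary function whose penalty term is zero on the feasible set and grows with the constraint violation. A minimal numeric sketch for this problem, following the g >= 0 / h = 0 convention used in Epfm_min above (my own illustration, not part of the assignment):

% Sketch of the exterior-penalty auxiliary function F(x,M) for this problem.
f = @(x) x(1) + x(2);
g = @(x) -x(1);                               % inequality constraint, g(x) >= 0
h = @(x) x(1)^2 - x(2);                       % equality constraint, h(x) = 0
F = @(x,M) f(x) + M*( h(x)^2 + min(0,g(x))^2 );   % penalty vanishes when feasible
F([0.5 0.5], 10)                              % example evaluation at an infeasible point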

Interior penalty function method:

function [x,minf] = minNF(f,x0,g,u,v,var,eps)
% Interior (barrier) penalty method. f is the objective, x0 the starting point,
% g the inequality constraints (g >= 0), u the initial barrier factor, v the
% reduction factor, var the symbolic variables, eps the tolerance.
format long;
if nargin == 6
    eps = 1.0e-4;
end
k = 0;
FE = 0;
for i = 1:length(g)
    FE = FE + 1/g(i);             % reciprocal barrier term
end
x1 = transpose(x0);
x2 = inf;
while 1
    FF = u*FE;
    SumF = f + FF;                % barrier-augmented objective
    [x2,minf] = minNT(SumF,transpose(x1),var);   % unconstrained minimization
    Bx = Funval(FE,var,x2);       % barrier term at the new point
    if norm(x2-x1) <= eps
        x = x2;
        break;
    else
        u = v*u;                  % shrink the barrier factor
        x1 = x2;
    end
end
minf = Funval(f,var,x);
format short;
end
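minNF relies on two helpers that are not listed in this section: Funval (shown later under 2-14) and an unconstrained Newton minimizer minNT with the interface [x,minf] = minNT(f,x0,var). Since minNT itself never appears in the post, here is a minimal sketch of a stand-in with that interface (my own reconstruction, assuming f is symbolic in the variables var and x0 is a column vector; it is also what the 2-14 code below would call):

function [x,minf] = minNT(f,x0,var,eps)
% Minimal Newton iteration on a symbolic function f(var); a stand-in sketch
% for the helper called by minNF and minMixFun, not the original routine.
if nargin == 3
    eps = 1.0e-6;
end
gradf = jacobian(f,var);                % 1-by-n symbolic gradient
hesf  = jacobian(gradf,var);            % n-by-n symbolic Hessian
x = x0;
for it = 1:200                          % iteration cap to avoid hanging
    g = double(subs(gradf,var,transpose(x)));
    if norm(g) <= eps, break; end
    H = double(subs(hesf,var,transpose(x)));
    x = x - H\transpose(g);             % Newton step
end
minf = double(subs(f,var,transpose(x)));
end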

syms x1 x2 r1;

>> p=taylor(x1+x2-r1*(1/(x1^2-x2)-1/x1),[x1 x2],[0.001 0.002],'Order',3);

>> x=[0.1;0.2];k=0.1;e=0.01;r=1;[x,minf]= minNF(p,x,k,r,e)

x =

0.0015

minf =

0.0030

Ans=

0.0045

My roommate's:

2-13:

Interior point:

Penalty function:

function anll = neicheng(p,x,k,r,e)
% Interior penalty method: p is the penalized objective containing the symbolic
% penalty factor r1, x the starting point, k the reduction factor for r,
% r the initial penalty factor and e the tolerance.
syms x1 x2 r1;
flag1 = 1;
while (flag1)
    pd = subs(p,r1,r);            % fix the current penalty factor
    xold = x;
    flag2 = 1;
    while (flag2)                 % inner Newton iteration on pd
        dp = -[diff(pd,x1); diff(pd,x2)];
        h = hessian(pd,[x1,x2]);
        h1 = h^-1;
        dp_temp = subs(dp,x1,x(1));
        dp_temp = subs(dp_temp,x2,x(2));
        nor = norm(dp_temp);
        if (nor >= e)
            x = x + h1*dp_temp;   % Newton step
        else
            flag2 = 0;
        end
    end
    x_temp = x;
    nor2 = norm(x_temp - xold);
    if double(nor2) >= e
        r = k*r;                  % reduce the penalty factor and repeat
    else
        flag1 = 0;
    end
end
anll = double(x);
end

Result:

clear all; syms x1 x2 r1;
>> p=taylor(x1+x2-r1*(1/(x1^2-x2)-1/x1),[x1 x2],[0.001 0.002],'Order',3);
>> x=[0.1;0.2];k=0.1;e=0.01;r=1;anll=neicheng(p,x,k,r,e)

anll =

0.0015

0.0030

Ans=

0.0045

Exterior point:

Penalty function:

function annn = waicheng(p,x,k,r,e)
% Exterior penalty method (same Newton inner loop as neicheng above): p is the
% penalized objective containing the symbolic factor r1, x the starting point,
% k the factor applied to r each outer iteration, r the initial penalty factor
% and e the tolerance.
syms x1 x2 r1;
flag1 = 1;
while (flag1)
    pd = subs(p,r1,r);
    xold = x;
    flag2 = 1;
    while (flag2)
        dp = -[diff(pd,x1); diff(pd,x2)];
        h = hessian(pd,[x1,x2]);
        h1 = h^-1;
        dp_temp = subs(dp,x1,x(1));
        dp_temp = subs(dp_temp,x2,x(2));
        nor = norm(dp_temp);
        if (nor >= e)
            x = x + h1*dp_temp;
        else
            flag2 = 0;
        end
    end
    x_temp = x;
    nor2 = norm(x_temp - xold);
    if double(nor2) >= e
        r = k*r;
    else
        flag1 = 0;
    end
end
annn = double(x);
end

Result:

clear all; syms x1 x2 r1;
p=taylor(x1+x2,[x1 x2],[0.001 0.002],'Order',3);
x=[0.1;0.2];k=0.1;e=0.01;r=1;annn=waicheng(p,x,k,r,e)

annn =

0.0015

0.0030

Ans=

0.0045
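Judging from the barrier terms 1/(x1^2 - x2) and 1/x1 used above, the underlying problem appears to be min x1 + x2 subject to x1 >= 0 and x2 >= x1^2, whose solution is the origin with objective value 0, which is what the penalty iterates above are approaching. If the Optimization Toolbox is available, a cross-check with fmincon could look like the following (the constraints are my reading of the barrier terms, not taken from the problem statement):

% Cross-check of the constrained problem as inferred from the barrier terms;
% needs the Optimization Toolbox and is only a sanity check.
f      = @(x) x(1) + x(2);
nonlin = @(x) deal(x(1)^2 - x(2), []);   % c(x) = x1^2 - x2 <= 0, no equalities
lb     = [0 -Inf];                        % x1 >= 0
[xstar,fstar] = fmincon(f,[0.1 0.2],[],[],[],[],lb,[],nonlin)
% expected: xstar near [0 0], fstar near 0, consistent with the iterates above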

2-14

[Figure: problem 2-14 statement]

Mine (I think I copied this one from him):


function [x,minf] = minMixFun(f,g,h,x0,r0,c,var,eps)
% Mixed penalty method: interior barrier for the inequality constraints g >= 0
% and exterior-style penalty for the equality constraints h = 0.
gx0 = Funval(g,var,x0);
if gx0 >= 0
    % the starting point satisfies the inequality constraints
else
    disp('The initial point must satisfy the inequality constraints!');
    x = NaN;
    minf = NaN;
    return;
end
if r0 <= 0
    disp('The initial barrier factor must be greater than 0!');
    x = NaN;
    minf = NaN;
    return;
end
if c >= 1 || c < 0
    disp('The reduction factor must be greater than 0 and less than 1!');
    x = NaN;
    minf = NaN;
    return;
end
if nargin == 7
    eps = 1.0e-6;
end
FE = 0;
for i = 1:length(g)
    FE = FE + 1/g(i);             % interior barrier term
end
FH = transpose(h)*h;              % quadratic penalty for the equality constraints
x1 = transpose(x0);
x2 = inf;
while 1
    FF = r0*FE + FH/sqrt(r0);     % mixed penalty term
    SumF = f + FF;
    [x2,minf] = minNT(SumF,transpose(x1),var);   % unconstrained minimization
    if norm(x2 - x1) <= eps
        x = x2;
        break;
    else
        r0 = c*r0;                % shrink the barrier factor
        x1 = x2;
    end
end
minf = Funval(f,var,x);
end

Funval.m

function fv = Funval(f,varvec,varval)
% Evaluate the symbolic expression f at the point varval, where varvec lists
% the symbolic variables; handles the case where f omits some of them.
var = findsym(f);
varc = findsym(varvec);
s1 = length(var);
s2 = length(varc);
m = floor((s1-1)/3+1);            % number of variables actually appearing in f
varv = zeros(1,m);
if s1 ~= s2
    for i = 0:((s1-1)/3)
        k = findstr(varc,var(3*i+1));
        index = (k-1)/3;
        varv(i+1) = varval(index+1);
    end
    fv = subs(f,var,varv);
else
    fv = subs(f,varvec,varval);
end
end

syms x1 x2;
f=x1^2-x2^2-3*x2;
g=1-x1;
h=x2-2;
[x,minf]=minMixFun(f,g,h,[2,2],2,0.5,[x1 x2],0.001)

x =

1.0015

minf=

2.0002
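For reference, the augmented function that minMixFun (and MixPunish below) hands to minNT at each iteration is F = f + r*sum(1/g_i) + (h'*h)/sqrt(r): an interior barrier on the inequality constraints plus a penalty on the equality constraints. Written out for this problem (a small sketch of what the code above builds, with the factor r kept symbolic):

% The augmented function assembled inside minMixFun for this problem
% (FF = r0*FE + FH/sqrt(r0) in the code above), with r left symbolic.
syms x1 x2 r
f = x1^2 - x2^2 - 3*x2;
g = 1 - x1;                        % inequality constraint, g >= 0
h = x2 - 2;                        % equality constraint, h = 0
F = f + r*(1/g) + (transpose(h)*h)/sqrt(r)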

My roommate's:

2-14:

Mixed:

Penalty function:

function [x,minf] = MixPunish(f,g,h,x0,r0,c,var,eps)
% Mixed penalty method: interior barrier for the inequality constraints g >= 0
% and exterior-style penalty for the equality constraints h = 0.
gx0 = Funval(g,var,x0);
if gx0 >= 0
    % the starting point satisfies the inequality constraints
else
    disp('The initial point must satisfy the inequality constraints!');
    x = NaN;
    minf = NaN;
    return;
end
if r0 <= 0
    disp('The initial barrier factor must be greater than 0!');
    x = NaN;
    minf = NaN;
    return;
end
if c >= 1 || c < 0
    disp('The reduction factor must be greater than 0 and less than 1!');
    x = NaN;
    minf = NaN;
    return;
end
if nargin == 7
    eps = 1.0e-6;
end
FE = 0;
for i = 1:length(g)
    FE = FE + 1/g(i);             % interior barrier term
end
FH = transpose(h)*h;              % quadratic penalty for the equality constraints
x1 = transpose(x0);
x2 = inf;
while 1
    FF = r0*FE + FH/sqrt(r0);     % mixed penalty term
    SumF = f + FF;
    [x2,minf] = minNT(SumF,transpose(x1),var);   % unconstrained minimization
    if norm(x2 - x1) <= eps
        x = x2;
        break;
    else
        r0 = c*r0;                % shrink the barrier factor
        x1 = x2;
    end
end
minf = Funval(f,var,x);
end

Funval.m

function fv = Funval(f,varvec,varval)
% Evaluate the symbolic expression f at the point varval, where varvec lists
% the symbolic variables; handles the case where f omits some of them.
var = findsym(f);
varc = findsym(varvec);
s1 = length(var);
s2 = length(varc);
m = floor((s1-1)/3+1);            % number of variables actually appearing in f
varv = zeros(1,m);
if s1 ~= s2
    for i = 0:((s1-1)/3)
        k = findstr(varc,var(3*i+1));
        index = (k-1)/3;
        varv(i+1) = varval(index+1);
    end
    fv = subs(f,var,varv);
else
    fv = subs(f,varvec,varval);
end
end

syms x1 x2;
f=x1^2-x2^2-3*x2;
g=1-x1;
h=x2-2;
[x,minf]=MixPunish(f,g,h,[2,2],2,0.5,[x1 x2],0.001)

x =

1.0015

minf=

2.0002

Closing Remarks

I'm probably the only one bored enough to do this. Still, exams just ended and I can't spend the whole time on my phone; rereading Panlong (《盘龙》) already kept me glued to it for a week. I shouldn't keep slacking off like that, but I don't feel like studying either, so writing on Jianshu it is~~

In any case, there really aren't many resources of this kind online, so consider this a small favor for those who come after me, to make these a little easier to find~~

Personal Motto

Knowledge transmits power, technology knows no borders, and culture changes lives!
