01 Simulated Annealing Algorithm

In one sentence: the essence of simulated annealing is a double loop (an outer cooling loop and an inner Metropolis loop).

[Figure: table of correspondences between simulated annealing and the metal annealing process]

[Figure: flow chart of the simulated annealing algorithm]

Core steps of simulated annealing

  • The Metropolis criterion: accept worse solutions with some probability, so the search can escape local optima.
  • Metropolis criterion: accept a worse candidate with probability $p = e^{-\Delta E / T}$, where $\Delta E = E_2 - E_1$ is the increase in the objective value.
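The acceptance rule above can be sketched as a small helper (a hypothetical function written for illustration, not code from the book):

```python
import numpy as np

def metropolis_accept(E1, E2, T, rng=np.random.default_rng()):
    """Decide whether to move from a solution with energy E1 to one with E2
    at temperature T, following the Metropolis criterion."""
    if E2 <= E1:                      # better (or equal) solution: always accept
        return True
    p = np.exp(-(E2 - E1) / T)       # worse solution: accept with probability p
    return rng.random() < p
```

Note that a better solution is always accepted; only worse ones are gated by the probability $p$.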

An example

  • When the outer-loop temperature $T$ is large, even if $|E_2 - E_1|$ is large, i.e. the candidate is a much worse solution, the ratio $\Delta E / T$ is still small, so the computed probability $p$ is large.
import numpy as np

# Here E2 is a much worse candidate than E1
E1 = 5
E2 = 30
T = 1000
p = np.exp(-(E2-E1)/T)
print(p)
#0.9753099120283326

As you can see, the computed probability of accepting this worse solution is not low at all.

# Here E2 is only slightly worse than E1, again at a high outer-loop temperature
E1 = 5
E2 = 10
T = 1000
p = np.exp(-(E2-E1)/T)
print(p)
#0.9950124791926823

This shows that when the outer-loop temperature is high, the algorithm readily accepts worse solutions.

# When the outer-loop temperature drops significantly
E1 = 5
E2 = 30
T = 100
p = np.exp(-(E2-E1)/T)
print(p)
#0.7788007830714049

Compared with the high temperature of 1000, the probability has dropped substantially.

# Going further and lowering the temperature again
E1 = 5
E2 = 30
T = 50
p = np.exp(-(E2-E1)/T)
print(p)
#0.6065306597126334
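The three snippets above can be consolidated into a single sweep over temperatures, making the trend explicit:

```python
import numpy as np

E1, E2 = 5, 30                    # E2 is a much worse solution than E1
for T in (1000, 100, 50):
    p = np.exp(-(E2 - E1) / T)    # Metropolis acceptance probability
    print(f"T={T:4d}  p={p:.4f}")
# T=1000  p=0.9753
# T= 100  p=0.7788
# T=  50  p=0.6065
```

As the temperature falls, the same worse solution becomes less and less likely to be accepted.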

Simulation example

$$f(x)=\sum_{i=1}^{n} x_{i}^{2} \qquad (-20 \leqslant x_{i} \leqslant 20)$$

MATLAB code

This is the example and code from the book. I think parts of the book's code are questionable, and some parts I don't understand. For instance, the boundary handling: retrying only once does not guarantee the new point ends up inside the bounds. That is the main thing I changed in my Python rewrite.
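What I mean by retry-until-valid boundary handling can be sketched as follows (a hypothetical helper written for illustration, mirroring the retry loop used in the Python rewrite later in this post):

```python
import numpy as np

def clamp_by_resample(next_x, prev_x, S, Xs, Xx, rng=np.random.default_rng(0)):
    """Resample out-of-range components of next_x until every one of them
    lies inside [Xx, Xs].

    next_x: candidate solution; prev_x: current solution;
    S: step-size factor; Xs / Xx: upper / lower bounds.
    """
    next_x = next_x.copy()
    bad = (next_x > Xs) | (next_x < Xx)
    while bad.any():                        # keep retrying, not just once
        n = int(bad.sum())
        next_x[bad] = prev_x[bad] + S * (rng.random(n) * (Xs - Xx) + Xx)
        bad = (next_x > Xs) | (next_x < Xx)
    return next_x
```

With a small step factor like S = 0.01 the resampled components land near the (in-range) current solution, so the loop terminates quickly.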

%%%%%%%%%%%%%%%%%%%%%% Simulated annealing for function minimization %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Initialization %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear;                          %clear all variables
close all;                      %close all figures
clc;                            %clear the command window
D=10;                           %number of variables (dimension)
Xs=20;                          %upper bound
Xx=-20;                         %lower bound
%%%%%%%%%%%%%%%%%%%%%%%%%%% Cooling-schedule parameters %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
L = 200;                        %Markov chain length
K = 0.998;                      %decay (cooling) factor
S = 0.01;                       %step-size factor
T=100;                          %initial temperature
YZ = 1e-8;                      %tolerance
P = 0;                          %total points accepted in the Metropolis process
%%%%%%%%%%%%%%%%%%%%%%%%%% Random initial points %%%%%%%%%%%%%%%%%%%%%%%%%
PreX = rand(D,1)*(Xs-Xx)+Xx;
PreBestX = PreX;
PreX =  rand(D,1)*(Xs-Xx)+Xx;
BestX = PreX;
%%%%%%%%%%% Cool once per outer iteration until the stopping criteria are met %%%%%%%%%%%%
deta=abs( func1( BestX)-func1(PreBestX));
while (deta > YZ) && (T>0.001)
    T=K*T; 
    %%%%%%%%%%%%%%%%%%%%% Inner iterations at the current temperature T %%%%%%%%%%%%%%%%%%%%%%
    for i=1:L  
        %%%%%%%%%%%%%%%%% Pick a random next point near the current one %%%%%%%%%%%%%%%%%%%%%
            NextX = PreX + S* (rand(D,1) *(Xs-Xx)+Xx);
            %%%%%%%%%%%%%%%%% Boundary handling %%%%%%%%%%%%%%%%%%%%%%%%%%
            for ii=1:D
                if NextX(ii)>Xs || NextX(ii)<Xx
                    NextX(ii)=PreX(ii) + S* (rand *(Xs-Xx)+Xx);
                end
            end            
        %%%%%%%%%%%%%%%%%%%%%%% Is this a new global best? %%%%%%%%%%%%%%%%%%%%%%
        if (func1(BestX) > func1(NextX))
            %%%%%%%%%%%%%%%%%% Keep the previous best %%%%%%%%%%%%%%%%%%%%%
            PreBestX = BestX;
            %%%%%%%%%%%%%%%%%%% This is the new best %%%%%%%%%%%%%%%%%%%%%%
            BestX=NextX;
        end
        %%%%%%%%%%%%%%%%%%%%%%%% Metropolis process %%%%%%%%%%%%%%%%%%%
        if( func1(PreX) - func1(NextX) > 0 )
            %%%%%%%%%%%%%%%%%%%%%%% Accept the new solution %%%%%%%%%%%%%%%%%%%%%%%%
            PreX=NextX;
            P=P+1;
        else
            changer = -1*(func1(NextX)-func1(PreX))/ T ;
            p1=exp(changer);
            %%%%%%%%%%%%%%%%%%%% Accept the worse solution with probability p1 %%%%%%%%%%%%%%%%%%%%
            if p1 > rand        
                PreX=NextX;
                P=P+1;         
            end
        end
    trace(P+1)=func1( BestX);    
    end
    deta=abs( func1( BestX)-func1 (PreBestX)); 
end
disp('Minimum found at point:');
BestX
disp( 'Minimum value:');
func1(BestX)
figure
plot(trace(2:end))
xlabel('Iteration')
ylabel('Objective value')
title('Fitness evolution curve')
%%%% Fitness function %%%%
function result = func1(x)
summ = sum(x.^2);
result = summ;
end
Analysis of the experimental results
  • Number of accepted moves
    [Figure]
  • Result
    [Figure]
  • Fitness curve
    [Figure]

Python code

import time
import numpy as np
import matplotlib.pyplot as plt

class SAA:
    def __init__(self,D,Xs,Xx,L,K,S,T,YZ):
        self.D = D # number of variables (dimension)
        self.Xs = Xs # upper bound
        self.Xx = Xx # lower bound
        self.L = L # Markov chain length
        self.K = K # decay (cooling) factor
        self.S = S # step-size factor
        self.T = T # initial temperature
        self.YZ = YZ # tolerance
        self.drawer()


    # objective function 1 for the simulation
    def fun1(self,prex):
         return sum(prex ** 2)

    # run the optimization
    def run(self):
        start = time.time()
        # generate the initial solutions
        prex = np.random.random(self.D) * (self.Xs - self.Xx) +  self.Xx
        preBestx = prex
        prex = np.random.random(self.D) * (self.Xs - self.Xx) +  self.Xx
        Bestx = prex # Bestx holds the current best solution
        deta = abs(self.fun1(Bestx) - self.fun1(preBestx)) # tolerance for the stopping criterion
        record_y = [] # record the best fitness (minimum function value) at each accepted move
        numbers = 0
        while deta > self.YZ and self.T > 0.001:
            self.T = self.K*self.T # current temperature
            for i in range(self.L): # inner loop
                Nextx = prex + self.S*(np.random.random(self.D) * (self.Xs - self.Xx) +  self.Xx) # generate a new solution near the current one
                # track the global best at the current temperature, to compute deta for the next temperature
                if self.fun1(Bestx) > self.fun1(Nextx):
                    preBestx = Bestx # keep the previous global best
                    Bestx = Nextx # store the current global best

                # boundary handling: resample until every component is in range
                condition = (Nextx > self.Xs).sum() + (Nextx < self.Xx).sum()
                while condition > 0:
                    c1 = np.where(Nextx > self.Xs)[0] # indices above the upper bound
                    for i in c1:
                        Nextx[i] = prex[i] +  self.S*(np.random.rand() * (self.Xs - self.Xx) +  self.Xx)
                    c2 = np.where(Nextx < self.Xx)[0] # indices below the lower bound
                    for i in c2:
                        Nextx[i] = prex[i] +  self.S*(np.random.rand() * (self.Xs - self.Xx) +  self.Xx)

                    condition = (Nextx > self.Xs).sum() + (Nextx < self.Xx).sum()

                # solution update
                if self.fun1(prex) >= self.fun1(Nextx):
                    prex = Nextx
                    record_y.append(self.fun1(Nextx))
                    numbers += 1
                # Metropolis process
                else:
                    changer = (self.fun1(prex)-self.fun1(Nextx))/self.T
                    p = np.exp(changer) # probability of accepting the worse solution
                    if p > np.random.rand():
                        prex = Nextx
                        record_y.append(self.fun1(Nextx))
                        numbers += 1
            deta = abs(self.fun1(Bestx) - self.fun1(preBestx))
        stop = time.time()
        print("Optimization took %d seconds" % (stop - start))
        return record_y,numbers,Bestx

    # plotting
    def drawer(self):
        y,numbers,Bestx = self.run()
        print(min(y))  # same as the value below: the last accepted move is the best result
        print(Bestx)
        print("Accepted %d moves in total" % numbers)
        plt.plot(y)
        plt.show()

saa = SAA(10,20,-20,200,0.998,0.01,100,1e-8)
Analysis of the experimental results
  • Python takes a full 23 seconds; MATLAB takes about two.
    [Figure]
  • The number of accepted moves is exactly the same, but MATLAB's search curve is smoother; I don't know why.
    [Figure]
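One likely cause of the runtime gap is that the inner loop above calls fun1 on the same point several times per iteration and handles boundaries with per-element Python loops. Caching the current objective value is one fix; below is a minimal sketch of a single cached inner-loop move (a hypothetical refactor, not the class above, using simple clipping for the boundary step):

```python
import numpy as np

def anneal_step_cached(prex, f_pre, fun, S, Xs, Xx, T, rng):
    """One inner-loop move that reuses the cached objective value f_pre
    instead of re-evaluating fun(prex) on every iteration."""
    # generate a candidate near the current solution
    nextx = prex + S * (rng.random(prex.size) * (Xs - Xx) + Xx)
    nextx = np.clip(nextx, Xx, Xs)          # simple boundary handling for the sketch
    f_next = fun(nextx)                     # evaluated exactly once per move
    if f_next <= f_pre or np.exp((f_pre - f_next) / T) > rng.random():
        return nextx, f_next                # move accepted
    return prex, f_pre                      # move rejected
```

The caller threads (x, f(x)) through the loop, so each move costs exactly one objective evaluation instead of three or four.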