[Intelligent Optimization Algorithms] Improved Grey Wolf Optimizer (I-GWO)

Contents

01. Introduction

02. Code Flowchart

03. Partial Code

04. Result Plots


The Improved Grey Wolf Optimizer (I-GWO) was proposed in the paper "An improved grey wolf optimizer for solving engineering problems", published in Expert Systems with Applications (a CAS Q2 journal).

01. Introduction

The Improved Grey Wolf Optimizer (I-GWO) targets global optimization and engineering design problems. The improvements address three weaknesses of the original GWO: lack of population diversity, an imbalance between exploitation and exploration, and premature convergence. I-GWO benefits from a new movement strategy, the dimension learning-based hunting (DLH) search strategy, inspired by the individual hunting behavior of wolves in nature. DLH constructs a neighborhood for each wolf in a distinct way, within which neighboring wolves can share information. The dimension learning used in the DLH search strategy enhances the balance between local and global search and preserves diversity. The performance of the proposed I-GWO algorithm was evaluated on the CEC 2018 benchmark suite and four engineering problems. In all experiments, I-GWO was compared with six other state-of-the-art metaheuristics, and the results were further analyzed with the Friedman and MAE statistical tests. The experiments and statistical tests show that I-GWO is highly competitive and frequently outperforms the compared algorithms; its application to engineering design problems demonstrates its effectiveness and applicability.
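For reference when reading the code in Section 03, the DLH-related equations that the implementation cites by number (Equations (10)-(13)) can be written out as follows. This is a reconstruction from the code itself, so treat the exact notation as approximate:

R_i(t) = \lVert X_i(t) - X_{i\text{-GWO}}(t+1) \rVert \tag{10}

N_i(t) = \{\, X_j(t) \mid D(X_i(t), X_j(t)) \le R_i(t),\ X_j(t) \in \mathrm{Pop}(t) \,\} \tag{11}

X_{i\text{-DLH},d}(t+1) = X_{i,d}(t) + \mathrm{rand} \cdot \bigl( X_{n,d}(t) - X_{r,d}(t) \bigr) \tag{12}

X_i(t+1) = \begin{cases} X_{i\text{-GWO}}(t+1) & \text{if } f(X_{i\text{-GWO}}) < f(X_{i\text{-DLH}}) \\ X_{i\text{-DLH}}(t+1) & \text{otherwise} \end{cases} \tag{13}

Here R_i(t) is the radius of wolf i's neighborhood: the Euclidean distance between its current position and its GWO candidate. X_n is a neighbor drawn at random from that neighborhood independently for each dimension, and X_r is a wolf drawn at random from the whole population.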

02. Code Flowchart

03. Partial Code

% Improved Grey Wolf Optimizer (I-GWO)
function [Alpha_score,Alpha_pos,Convergence_curve]=IGWO(N,Max_iter,lb,ub,dim,fobj)
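% Note: initialization and boundConstraint are separate helper functions
% that ship with the original I-GWO code package; they are not shown in
% this partial listing.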
lu = [lb .* ones(1, dim); ub .* ones(1, dim)];
% Initialize alpha, beta, and delta positions
Alpha_pos=zeros(1,dim);
Alpha_score=inf; %change this to -inf for maximization problems
Beta_pos=zeros(1,dim);
Beta_score=inf; %change this to -inf for maximization problems
Delta_pos=zeros(1,dim);
Delta_score=inf; %change this to -inf for maximization problems
% Initialize the positions of wolves
Positions=initialization(N,dim,ub,lb);
Positions = boundConstraint(Positions, Positions, lu);
% Calculate objective function for each wolf
for i=1:size(Positions,1)
    Fit(i) = fobj(Positions(i,:));
end
% Personal best fitness and position obtained by each wolf
pBestScore = Fit;
pBest = Positions;
neighbor = zeros(N,N);
Convergence_curve=zeros(1,Max_iter);
iter = 0; % Loop counter
%% Main loop
while iter < Max_iter
    for i=1:size(Positions,1)
        fitness = Fit(i);
        
        % Update Alpha, Beta, and Delta
        if fitness<Alpha_score
            Alpha_score=fitness; % Update alpha
            Alpha_pos=Positions(i,:);
        end
        
        if fitness>Alpha_score && fitness<Beta_score
            Beta_score=fitness; % Update beta
            Beta_pos=Positions(i,:);
        end
        
        if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score
            Delta_score=fitness; % Update delta
            Delta_pos=Positions(i,:);
        end
    end
    
    %% Calculate the candidate position Xi-GWO
    a=2-iter*((2)/Max_iter); % a decreases linearly from 2 to 0
    
    % Update the Position of search agents including omegas
    for i=1:size(Positions,1)
        for j=1:size(Positions,2)
            
            r1=rand(); % r1 is a random number in [0,1]
            r2=rand(); % r2 is a random number in [0,1]
            
            A1=2*a*r1-a;                                    % Equation (3.3)
            C1=2*r2;                                        % Equation (3.4)
            
            D_alpha=abs(C1*Alpha_pos(j)-Positions(i,j));    % Equation (3.5)-part 1
            X1=Alpha_pos(j)-A1*D_alpha;                     % Equation (3.6)-part 1
            
            r1=rand();
            r2=rand();
            
            A2=2*a*r1-a;                                    % Equation (3.3)
            C2=2*r2;                                        % Equation (3.4)
            
            D_beta=abs(C2*Beta_pos(j)-Positions(i,j));      % Equation (3.5)-part 2
            X2=Beta_pos(j)-A2*D_beta;                       % Equation (3.6)-part 2
            
            r1=rand();
            r2=rand();
            
            A3=2*a*r1-a;                                    % Equation (3.3)
            C3=2*r2;                                        % Equation (3.4)
            
            D_delta=abs(C3*Delta_pos(j)-Positions(i,j));    % Equation (3.5)-part 3
            X3=Delta_pos(j)-A3*D_delta;                     % Equation (3.6)-part 3
            
            X_GWO(i,j)=(X1+X2+X3)/3;                        % Equation (3.7)
            
        end
        X_GWO(i,:) = boundConstraint(X_GWO(i,:), Positions(i,:), lu);
        Fit_GWO(i) = fobj(X_GWO(i,:));
    end
    
    %% Calculate the candidate position Xi-DLH
    radius = pdist2(Positions, X_GWO, 'euclidean');         % Equation (10)
    dist_Position = squareform(pdist(Positions));
    r1 = randperm(N,N);
    
    for t=1:N
        neighbor(t,:) = (dist_Position(t,:)<=radius(t,t));
        [~,Idx] = find(neighbor(t,:)==1);                   % Equation (11)             
        random_Idx_neighbor = randi(size(Idx,2),1,dim);
        
        for d=1:dim
            X_DLH(t,d) = Positions(t,d) + rand .*(Positions(Idx(random_Idx_neighbor(d)),d)...
                - Positions(r1(t),d));                      % Equation (12)
        end
        X_DLH(t,:) = boundConstraint(X_DLH(t,:), Positions(t,:), lu);
        Fit_DLH(t) = fobj(X_DLH(t,:));
    end
    
    %% Selection  
    tmp = Fit_GWO < Fit_DLH;                                % Equation (13)
    tmp_rep = repmat(tmp',1,dim);
    
    tmpFit = tmp .* Fit_GWO + (1-tmp) .* Fit_DLH;
    tmpPositions = tmp_rep .* X_GWO + (1-tmp_rep) .* X_DLH;
    
    %% Updating
    tmp = pBestScore <= tmpFit;                             % Equation (13)
    tmp_rep = repmat(tmp',1,dim);
    
    pBestScore = tmp .* pBestScore + (1-tmp) .* tmpFit;
    pBest = tmp_rep .* pBest + (1-tmp_rep) .* tmpPositions;
    
    Fit = pBestScore;
    Positions = pBest;
    
    %%
    iter = iter+1;
    neighbor = zeros(N,N);
    Convergence_curve(iter) = Alpha_score;  
end
end
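
For completeness, here is a minimal usage sketch. Since the listing above is partial, simplified stand-in versions of the two helper functions (uniform random initialization and plain clamping to the bounds) are included as assumptions; they are not the originals from the I-GWO package, and the sphere function serves only as a toy objective.

% demo_IGWO.m -- minimal usage sketch (assumes IGWO.m is on the path)
N = 30;                  % population size
Max_iter = 500;          % maximum number of iterations
dim = 10;                % problem dimension
lb = -100; ub = 100;     % search bounds
fobj = @(x) sum(x.^2);   % sphere test function

[best_score, best_pos, curve] = IGWO(N, Max_iter, lb, ub, dim, fobj);
semilogy(curve); xlabel('Iteration'); ylabel('Best fitness so far');

% Simplified stand-in helpers (assumptions; the originals in the I-GWO
% package may differ, e.g. their boundConstraint repairs an out-of-bound
% value using the parent position rather than clamping):
function X = initialization(N, dim, ub, lb)
X = rand(N, dim) .* (ub - lb) + lb;      % uniform random positions
end

function X = boundConstraint(X, ~, lu)
X = max(min(X, lu(2,:)), lu(1,:));       % clamp each row to [lb, ub]
end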

04. Result Plots

To get the full code, follow the personal WeChat official account "MATLAB科研小白" (the QR code below the article) and reply: 智能优化算法. This account is dedicated to readers who find it hard to track down code or daunting to write it. If you urgently need a particular piece of code, feel free to leave a message in the backend. Research-tip posts are published from time to time, and you are welcome to discuss research, writing, literature, code, and other academic topics so we can improve together.
