【Improved Algorithm】Hybrid Whale Optimization (WOA) and BAT Algorithm (Matlab Code Implementation)

💥💥💞💞Welcome to this blog❤️❤️💥💥

🏆Blogger's strength: 🌞🌞🌞The content of this blog strives to be rigorous and logically clear, for the convenience of readers.

⛳️Motto: On a journey of a hundred li, ninety li is only the halfway mark.

📋📋📋The table of contents of this article is as follows: 🎁🎁🎁

Contents

💥1 Overview

📚2 Running Results

🎉3 References

🌈4 Matlab Code and Literature


💥1 Overview

Literature source:

The whale optimization algorithm (WOA) is a nature-inspired metaheuristic optimization algorithm proposed by Mirjalili and Lewis in 2016, and it has demonstrated its ability to solve many problems. Comprehensive surveys exist for other nature-inspired algorithms such as ABC and PSO, but no such survey had been conducted for WOA. This paper therefore presents a systematic, meta-analysis style survey of WOA to help researchers apply it in different fields or hybridize it with other common algorithms. WOA is introduced in depth in terms of its algorithmic background, characteristics, limitations, modifications, hybridizations, and applications, and its performance on different problems is then examined. Statistical results for WOA modifications and hybridizations are established and compared with the most common optimization algorithms and with WOA itself. The survey results show that WOA outperforms other common algorithms in convergence speed and in balancing exploration and exploitation, and that WOA modifications and hybridizations also perform well compared with WOA. In addition, this investigation paves the way for a new technique that hybridizes the WOA and BAT algorithms: the BAT algorithm is used in the exploration phase, while the WOA algorithm is used in the exploitation phase. Finally, the statistical results obtained by WOA-BAT are very competitive and better than WOA on 16 benchmark functions, and WOA-BAT also performs well on 13 functions from CEC2005 and 7 functions from CEC2019.

Original abstract:

The whale optimization algorithm (WOA) is a nature-inspired metaheuristic optimization algorithm, which was proposed by Mirjalili and Lewis in 2016. This algorithm has shown its ability to solve many problems. Comprehensive surveys have been conducted about some other nature-inspired algorithms, such as ABC and PSO. Nonetheless, no survey search work has been conducted on WOA. Therefore, in this paper, a systematic and meta-analysis survey of WOA is conducted to help researchers to use it in different areas or hybridize it with other common algorithms. Thus, WOA is presented in depth in terms of algorithmic backgrounds, its characteristics, limitations, modifications, hybridizations, and applications. Next, WOA performances are presented to solve different problems. Then, the statistical results of WOA modifications and hybridizations are established and compared with the most common optimization algorithms and WOA. The survey's results indicate that WOA performs better than other common algorithms in terms of convergence speed and balancing between exploration and exploitation. WOA modifications and hybridizations also perform well compared to WOA. In addition, our investigation paves a way to present a new technique by hybridizing both WOA and BAT algorithms. The BAT algorithm is used for the exploration phase, whereas the WOA algorithm is used for the exploitation phase. Finally, statistical results obtained from WOA-BAT are very competitive and better than WOA in 16 benchmark functions. WOA-BAT also performs well in 13 functions from CEC2005 and 7 functions from CEC2019.
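
To make the hybrid scheme concrete, the following is a minimal sketch of a single position update, written against the variable names used in the listing of Section 2. It condenses the per-dimension loop of that listing into vector operations, so it illustrates the control flow rather than reproducing the authors' exact code: a BAT-style frequency/velocity move around a random agent handles exploration when |A| ≥ 1, a BAT-style move relative to the leader is used when |A| < 1, and WOA's spiral (bubble-net) update exploits around the leader when p ≥ 0.5.

% Minimal sketch of one WOA-BAT position update (illustrative only; assumes
% vectorized updates, unlike the per-dimension loop in the full listing below)
function [X_new,v_i]=woabat_update(X_i,X_rand,Leader_pos,v_i,A,p,Q)
% X_i        current agent position (1 x dim)
% X_rand     position of a randomly chosen agent (1 x dim)
% Leader_pos best position found so far (1 x dim)
% v_i        BAT velocity of this agent (1 x dim)
% A, p       WOA control parameters; Q is the BAT frequency
b=1;                 % spiral shape constant, Eq. (2.5)
l=2*rand-1;          % random number in [-1,1]
if p<0.5
    if abs(A)>=1     % exploration: BAT move built around a random agent
        v_i=v_i+(X_rand-Leader_pos)*Q;
        X_new=X_i+v_i;
    else             % BAT move relative to the leader
        v_i=v_i+(X_i-Leader_pos)*Q;
        X_new=X_i+v_i;
    end
else                 % exploitation: WOA spiral (bubble-net) around the leader
    D=abs(Leader_pos-X_i);
    X_new=D.*exp(b*l).*cos(2*pi*l)+Leader_pos;
end
end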

📚2 Running Results

Partial code:

% WOABAT
function [Leader_score,Leader_pos,Convergence_curve]=WOABAT(SearchAgents_no,Max_iter,lb,ub,dim,fobj)
% initialize position vector and score for the leader
Leader_pos=zeros(1,dim);
Leader_score=inf; %change this to -inf for maximization problems
%Initialize the positions of search agents
Positions=initialization(SearchAgents_no,dim,ub,lb);
Convergence_curve=zeros(1,Max_iter);
%bat algorithm addition
Qmin=0;         % Frequency minimum
Qmax=2;         % Frequency maximum
Q=zeros(SearchAgents_no,1);   % Frequency
v=zeros(SearchAgents_no,dim);   % Velocities
r=0.5;
A1=0.5;
t=0;% Loop counter
% summ=0;
% Main loop
while t<Max_iter
    for i=1:size(Positions,1)
        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));
        % Update the leader
        if fitness<Leader_score % Change this to > for maximization problem
            Leader_score=fitness; % Update alpha
            Leader_pos=Positions(i,:);
        end
    end
    a=2-t*((2)/Max_iter); % a decreases linearly from 2 to 0 in Eq. (2.3)
    % a2 decreases linearly from -1 to -2 (used below to compute the random number l)
    a2=-1+t*((-1)/Max_iter);
    % Update the Position of search agents
    for i=1:size(Positions,1)
        r1=rand(); % r1 is a random number in [0,1]
        r2=rand(); % r2 is a random number in [0,1]
        A=2*a*r1-a;  % Eq. (2.3) in the paper
        C=2*r2;      % Eq. (2.4) in the paper
        b=1;               %  parameters in Eq. (2.5)
        l=(a2-1)*rand+1;   %  parameters in Eq. (2.5)
        p = rand();        % p in Eq. (2.6)
        for j=1:size(Positions,2)
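            % Branch logic of the hybrid update:
            %   p < 0.5 and |A| >= 1  -> BAT-style exploration around a random agent
            %   p < 0.5 and |A| < 1   -> BAT-style move relative to the leader
            %   p >= 0.5              -> WOA spiral (bubble-net) exploitation, Eq. (2.5)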
            if p<0.5
                if abs(A)>=1
                    rand_leader_index = floor(SearchAgents_no*rand()+1);
                    X_rand = Positions(rand_leader_index, :);
                    Q(i)=Qmin+(Qmin-Qmax)*rand;
                    v(i,:)=v(i,j)+(X_rand(j)-Leader_pos(j))*Q(i);
                    z(i,:)= Positions(i,:)+ v(i,:);
                    % Local random walk around the leader (BAT local search)
                    if rand>r
                        % The factor 0.001 limits the step sizes of random walks
                        z(i,:)=Leader_pos(j)+0.001*randn(1,dim);
                    end
                    % Evaluate new solutions
                    Fnew=fobj(z(i,:));
                    % Update if the solution improves, or not too loud
                    if (Fnew<=fitness) && (rand<A1)
                        Positions(i,:)=z(i,:);
                        fitness=Fnew;
                    end
                elseif abs(A)<1
                    Q(i)=Qmin+(Qmin-Qmax)*rand;
                    v(i,:)=v(i,j)+(Positions(i,:)-Leader_pos(j))*Q(i);
                    z(i,:)= Positions(i,:)+ v(i,:);
                    % Local random walk around the leader (BAT local search)
                    if rand>r
                        % The factor 0.001 limits the step sizes of random walks
                        z(i,:)=Leader_pos(j)+0.001*randn(1,dim);
                    end
                    % Evaluate new solutions
                    Fnew=fobj(z(i,:));
                    % Update if the solution improves, or not too loud
                    if (Fnew<=fitness) && (rand<A1)
                        Positions(i,:)=z(i,:);
                        fitness=Fnew;
                    end
                end
            elseif p>=0.5
                distance2Leader=abs(Leader_pos(j)-Positions(i,j));
                % Eq. (2.5)
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Leader_pos(j);
            end

        end
    end
    t=t+1;
    Convergence_curve(t)=Leader_score;
    disp([t Leader_score]); % display current iteration and best score so far
end
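
The listing above calls an initialization helper that is not shown. Below is a minimal sketch of such a helper (an assumption: uniform random sampling between the bounds, as in helpers commonly shipped with WOA implementations; lb and ub may be scalars or 1-by-dim vectors), followed by a hypothetical call that minimizes the sphere function and plots the convergence curve. Neither snippet is taken from the original post.

% initialization.m -- minimal sketch of the missing helper (assumed behavior)
function Positions=initialization(SearchAgents_no,dim,ub,lb)
if isscalar(ub), ub=ub*ones(1,dim); end   % allow scalar bounds
if isscalar(lb), lb=lb*ones(1,dim); end
% uniform random positions inside [lb, ub] for every search agent
Positions=rand(SearchAgents_no,dim).*(ub-lb)+lb;
end

% Example call (hypothetical test setup): 30 agents, 500 iterations, 30-D sphere function
fobj=@(x) sum(x.^2);
[best_score,best_pos,curve]=WOABAT(30,500,-100,100,30,fobj);
semilogy(curve); xlabel('Iteration'); ylabel('Best fitness'); title('WOA-BAT convergence');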

🎉3 References

Part of the theory comes from online sources; in case of infringement, please contact us for removal.

🌈4 Matlab Code and Literature

