【matlab】【Latest 2024 Multi-Objective Optimization Algorithms】【Multi-Objective Horse Herd Optimization Algorithm】【MOHOA】【Code Included】

Code download (paid): https://download.csdn.net/download/wq6qeg88/89016339

Multi-objective optimization algorithms solve problems with several conflicting objectives. Rather than a single optimum, they seek a set of best trade-off solutions: the Pareto-optimal set, in which no solution can be improved in one objective without sacrificing another.
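The Pareto-dominance test described above can be sketched as follows (a minimal Python illustration of the concept, not part of the MATLAB code below; function names are hypothetical and cost vectors are assumed distinct):

```python
def dominates(a, b):
    """True if cost vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(costs):
    """Return the Pareto-optimal subset of a list of distinct cost vectors."""
    return [c for c in costs if not any(dominates(o, c) for o in costs if o != c)]

costs = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(non_dominated(costs))  # (3, 4) and (5, 5) are dominated and drop out
```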

This post implements a multi-objective optimization algorithm based on HOA (Horse Herd Optimization Algorithm), called MOHOA (Multi-Objective Horse Herd Optimization Algorithm).

HOA is a nature-inspired optimization algorithm that mimics the behavior of a horse herd to search for optimal solutions.

MOHOA extends HOA to handle multi-objective problems: it augments the search process to find a diverse set of Pareto-optimal solutions and incorporates mechanisms that balance exploration and exploitation, allowing the algorithm to converge to the problem's Pareto front.
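In this implementation the decision variables are binary feature-selection masks, so velocities are mapped to bit probabilities through a sigmoid transfer function, as the MATLAB code below does in its position-update step. A hedged Python sketch of that update (names are illustrative):

```python
import math
import random

def sigmoid(v):
    """Standard logistic transfer function, mapping a velocity to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def binary_update(velocity, rng=random.random):
    """Set each bit to 1 with probability sigmoid(v) for its velocity component."""
    return [1 if rng() < sigmoid(v) else 0 for v in velocity]

# Large positive velocity -> bit almost surely 1; large negative -> almost surely 0.
print(binary_update([10.0, -10.0, 0.0]))
```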

clear all
clc

global Data trn vald;
d=input('Select dataset: 1=zoo   2=tic-tac-toe  3=wine: ');
switch d
    case 1
        Data=load('zoo.dat');
    case 2
        Data=load('tic-tac-toe.data');
    case 3
        Data=load('wine.data');
        Data=[Data(:,2:end),Data(:,1)];
end
nVar=size(Data,2)-1; % number of decision variables
costFunction=@FitFun;
foldNum=5;
sampleNum= size(Data,1);
c = cvpartition(sampleNum,'KFold',foldNum); % partition data into k folds
for fold=1:foldNum
    tic
    trn=c.training(fold);
    vald=c.test(fold);
    VarSize=[1 nVar];

    Archive=MOHOA(nVar,costFunction);   
end
function Archive=MOHOA(varNum,costFunction)
Archive_size=100;   % Repository Size
alpha=0.1;  % Grid Inflation Parameter
nGrid=10;   % Number of Grids per each Dimension
beta=4; %=4;    % Leader Selection Pressure Parameter
gamma=2;    % Extra (to be deleted) Repository Member Selection Pressure

%%Define problem parameters

%% HOA Parameters
N=150; % Number of search agents
maxLoop=100; % Maximum number of iterations

VelMax=0.2*ones(1,varNum);
VelMin=-VelMax;

w=.999;
Pn=0.04; % Fraction of best horses used in imitation
Qn=0.04; % Fraction of worst horses used in defense

Alpha.g=1.50;      % Grazing
Alpha.d=0.5;       % Defense Mechanism
Alpha.h=1.5;       % Hierarchy

Beta.g=1.50;       % Grazing
Beta.d=0.20;       % Defense Mechanism
Beta.h=0.9;        % Hierarchy
Beta.s=0.20;       % Sociability

Gamma.g=1.50;      % Grazing
Gamma.d=0.10;      % Defense Mechanism
Gamma.h=0.50;      % Hierarchy
Gamma.s=0.10;      % Sociability
Gamma.i=0.30;      % Imitation
Gamma.r=0.05;      % Random (Wandering and Curiosity)

Delta.g=1.50;      % Grazing
Delta.r=0.10;      % Random (Wandering and Curiosity)

%% Define Function and Solution
solution=[];
solution.Position=[];
solution.Cost=0;
solution.Velocity=ones(1,varNum);
%% Initialization Step
Hourse=CreateEmptyParticle(N);

for i=1:N
    Hourse(i).Position=round(rand(1,varNum));
    Hourse(i).Cost=costFunction(Hourse(i).Position);
end
PersonalBest=Hourse;
% Sort horses by cost
[value,index]=sort([Hourse.Cost]);
Hourse=DetermineDomination(Hourse);

Archive=GetNonDominatedParticles(Hourse);

Archive_costs=GetCosts(Archive);
G=CreateHypercubes(Archive_costs,nGrid,alpha);
for i=1:numel(Archive)
    [Archive(i).GridIndex, Archive(i).GridSubIndex]=GetGridIndex(Archive(i),G);
    ArchivePos(i,:)=Archive(i).Position;
end
% Main loop
for it=1:maxLoop
    for i=1:N
        Costi(i)=Hourse(i).Cost(2);
    end
    [value,index]=sort(Costi);
    
    Hourse=Hourse(index);
    for i=1:N
        AllPosition(i,:)=Hourse(i).Position;
    end

    P=floor(Pn*N);
    Q=floor(Qn*N);

    GoodPosition=mean(ArchivePos);
    BadPosition=mean(AllPosition(N-Q:N,:));
    MeanPosition=mean(AllPosition);
    GlobalBest=SelectLeader(Archive,beta);
    for i=1:N
        if i<=.1*N % Alpha horses
            Hourse(i).Velocity = +Alpha.h*rand([1,varNum]).*(GlobalBest.Position-Hourse(i).Position)...
                -Alpha.d*rand([1,varNum]).*(Hourse(i).Position)...
                +Alpha.g*(0.95+0.1*rand)*(PersonalBest(i).Position-Hourse(i).Position);
            
        elseif i<=.3*N % Beta horses
            Hourse(i).Velocity = Beta.s*rand([1,varNum]).*(MeanPosition-Hourse(i).Position)...
                -Beta.d*rand([1,varNum]).*(BadPosition-Hourse(i).Position)...
                +Beta.h*rand([1,varNum]).*(GlobalBest.Position-Hourse(i).Position)...
                +Beta.g*(0.95+0.1*rand)*(PersonalBest(i).Position-Hourse(i).Position);
        elseif i<=.6*N % Gamma horses
            Hourse(i).Velocity = Gamma.s*rand([1,varNum]).*(MeanPosition-Hourse(i).Position)...
                +Gamma.r*rand([1,varNum]).*(Hourse(i).Position)...
                -Gamma.d*rand([1,varNum]).*(BadPosition-Hourse(i).Position)...
                +Gamma.h*rand([1,varNum]).*(GlobalBest.Position-Hourse(i).Position)...
                +Gamma.i*rand([1,varNum]).*(GoodPosition-Hourse(i).Position)...
                +Gamma.g*(0.95+0.1*rand)*(PersonalBest(i).Position-Hourse(i).Position);
            
        else              % Delta horses
            Hourse(i).Velocity = +Delta.r*rand([1,varNum]).*(Hourse(i).Position)...
                +Delta.g*(0.95+0.1*rand)*(PersonalBest(i).Position-Hourse(i).Position);
        end
        % Clamp velocity to bounds
        Hourse(i).Velocity=max(Hourse(i).Velocity,VelMin);
        Hourse(i).Velocity=min(Hourse(i).Velocity,VelMax);
        
        % Update position: sigmoid transfer maps velocity to a bit probability
        R = rand(1,varNum);
        cStep=1./(1+exp(-Hourse(i).Velocity));
        Hourse(i).Position = R<cStep;
        
        % Positions are binary, so no explicit position bounds are needed

        % Evaluate fitness
        Hourse(i).Cost=costFunction(Hourse(i).Position);
        % Update personal best (Pareto dominance; ties broken at random)
        if Dominates( Hourse(i),PersonalBest(i))
            PersonalBest(i)=Hourse(i);
        elseif Dominates(PersonalBest(i),Hourse(i))
            % Do Nothing
        else
            if rand<0.5
                PersonalBest(i)=Hourse(i);
            end
        end
        % Update Global Best
        if Dominates( Hourse(i),GlobalBest)
            GlobalBest=Hourse(i);
        end
        
    end
    Hourse=DetermineDomination(Hourse);
    non_dominated=GetNonDominatedParticles(Hourse);

    Archive=[Archive; non_dominated];
    
    Archive=DetermineDomination(Archive);
    Archive=GetNonDominatedParticles(Archive);
    
    for i=1:numel(Archive)
        [Archive(i).GridIndex, Archive(i).GridSubIndex]=GetGridIndex(Archive(i),G);
    end
    
    if numel(Archive)>Archive_size
        EXTRA=numel(Archive)-Archive_size;
        Archive=DeleteFromRep(Archive,EXTRA,gamma);
        
        Archive_costs=GetCosts(Archive);
        G=CreateHypercubes(Archive_costs,nGrid,alpha);
        
    end
    
    disp(['In iteration ' num2str(it) ': Number of solutions in the archive = ' num2str(numel(Archive))]);
    save results
    
    % Results
    
    costs=GetCosts(Hourse);
    Archive_costs=GetCosts(Archive);
    
    
    hold off
    plot(costs(1,:),costs(2,:),'k.');
    hold on
    plot(Archive_costs(1,:),Archive_costs(2,:),'rd');
    legend('Horses','Non-dominated solutions');
    drawnow
    
    
    % Update Parameters
    Alpha.d=Alpha.d*w;
    Alpha.g=Alpha.g*w;
    Beta.d=Beta.d*w;
    Beta.s=Beta.s*w;
    Beta.g=Beta.g*w;
    Gamma.d=Gamma.d*w;
    Gamma.s=Gamma.s*w;
    Gamma.r=Gamma.r*w;
    Gamma.i=Gamma.i*w;
    Gamma.g=Gamma.g*w;
    Delta.r=Delta.r*w;
    Delta.g=Delta.g*w;
end
end % local functions in a script must be closed with end
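The `SelectLeader(Archive,beta)` call above picks a guide from the archive using grid-based selection pressure. A hedged Python sketch of one common scheme (roulette over occupied hypercubes, where a cell holding n members gets weight n^(−β), favouring sparse regions of the front; the names here are illustrative, not the MATLAB helpers):

```python
import random
from collections import Counter

def select_leader(grid_indices, beta=4, rng=random):
    """Pick an archive member's index: choose an occupied grid cell by
    roulette with weight n**(-beta) for a cell of n members, then pick
    a member of that cell uniformly at random."""
    occupancy = Counter(grid_indices)                   # members per occupied cell
    cells = list(occupancy)
    weights = [occupancy[c] ** (-beta) for c in cells]  # fewer members -> larger weight
    cell = rng.choices(cells, weights=weights, k=1)[0]
    members = [i for i, g in enumerate(grid_indices) if g == cell]
    return rng.choice(members)

# Cell 7 is crowded (3 members), cell 2 holds one horse:
# the lone member (index 3) is selected far more often.
leader = select_leader([7, 7, 7, 2], beta=4)
```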
