Competitive Swarm Optimizer (CSO) Learning Notes
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Prerequisites
Particle swarm optimization (PSO):
https://blog.csdn.net/chensanwa/article/details/79284698?spm=1001.2014.3001.5506
Reference:
R. Cheng and Y. Jin, A competitive swarm optimizer for large scale optimization, IEEE Transactions on Cybernetics, 2014, 45(2): 191-204.
How CSO differs from PSO
Unlike PSO, particles in CSO are no longer updated from a personal best (pbest) and a global best (gbest). Instead, a pairwise competition mechanism drives the search: after each competition, the loser is updated using information from the winner.
Algorithm idea
A swarm P(t) contains m particles, where t is the generation index.
The position of particle i is X_i(t) = (x_{i,1}(t), x_{i,2}(t), ..., x_{i,n}(t)),
and its velocity is V_i(t) = (v_{i,1}(t), v_{i,2}(t), ..., v_{i,n}(t)).
The swarm is randomly split into m/2 pairs (assuming m is even), and the two particles in each pair compete.
The particle with the better fitness is the winner and passes directly into the next generation; the loser updates its velocity and position by learning from the winner, and then also enters the next generation.
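The pairing-and-competition step above can be sketched in NumPy (an illustrative sketch, not code from the paper; `pair_and_compete` and the sphere fitness are my own names):

```python
import numpy as np

def pair_and_compete(fitness, rng):
    """Randomly pair particle indices and split each pair into winners
    and losers by fitness (lower is better, i.e. minimization)."""
    m = len(fitness)                    # assumed even, as in the text
    perm = rng.permutation(m)           # random pairing
    a, b = perm[:m // 2], perm[m // 2:]
    a_wins = fitness[a] < fitness[b]
    winners = np.where(a_wins, a, b)
    losers  = np.where(a_wins, b, a)
    return winners, losers

rng = np.random.default_rng(0)
pos = rng.normal(size=(6, 3))
fit = (pos ** 2).sum(axis=1)            # sphere fitness as a toy example
winners, losers = pair_and_compete(fit, rng)
# every winner is at least as fit as its paired loser
print(fit[winners] <= fit[losers])
```

Note that each particle appears in exactly one pair, so exactly half of the swarm (the losers) is updated per generation.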
The update rules for the loser of the k-th pair are:

V_{l,k}(t+1) = R_1(k,t) ∘ V_{l,k}(t) + R_2(k,t) ∘ (X_{w,k}(t) − X_{l,k}(t)) + φ R_3(k,t) ∘ (X̄_k(t) − X_{l,k}(t))    (1)

X_{l,k}(t+1) = X_{l,k}(t) + V_{l,k}(t+1)    (2)

Here l denotes the loser and w the winner of the k-th pair, R_1(k,t), R_2(k,t), R_3(k,t) ∈ [0,1]^n are randomly generated vectors (∘ is element-wise multiplication), X̄_k(t) is the mean position of the whole swarm, and φ is a parameter controlling the influence of X̄_k(t).
The first term of Eq. (1) plays the role of the inertia term in PSO; the second term makes the loser learn from the winner; the third term is a global-learning term pulling the loser toward the swarm mean. The three terms can be compared one-to-one with the three parts of the PSO velocity update.
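Putting Eqs. (1)-(2) together, one full CSO generation can be sketched in NumPy (a minimal illustration, not PlatEMO code; following the PlatEMO listing in this post, each R here is one random scalar per pair broadcast across dimensions):

```python
import numpy as np

def cso_step(X, V, fitness, phi=0.1, rng=None):
    """One CSO generation: winners pass through unchanged, losers are
    updated from the winners and the swarm mean via Eqs. (1)-(2)."""
    rng = rng if rng is not None else np.random.default_rng()
    m, n = X.shape
    fit = fitness(X)
    perm = rng.permutation(m)                    # random pairing
    a, b = perm[:m // 2], perm[m // 2:]
    b_wins = fit[b] < fit[a]
    winners = np.where(b_wins, b, a)
    losers  = np.where(b_wins, a, b)
    R1, R2, R3 = rng.random((3, m // 2, 1))      # one scalar per pair
    Xmean = X.mean(axis=0)                       # swarm mean position
    V[losers] = (R1 * V[losers]                          # inertia-like term
                 + R2 * (X[winners] - X[losers])         # learn from winner
                 + phi * R3 * (Xmean - X[losers]))       # global (mean) term
    X[losers] += V[losers]
    return X, V

# Minimize the sphere function with a small swarm.
rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(40, 10))
V = np.zeros_like(X)
sphere = lambda P: (P ** 2).sum(axis=1)
start = sphere(X).min()
for _ in range(200):
    X, V = cso_step(X, V, sphere, phi=0.1, rng=rng)
print(sphere(X).min() <= start)
```

Because the best particle always wins its pair and passes through unchanged, the best fitness in the swarm never gets worse from one generation to the next.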
The pseudocode follows directly from the above (see the pseudocode figure in the paper).
All figures in this article are from: R. Cheng and Y. Jin, A competitive swarm optimizer for large scale optimization, IEEE Transactions on Cybernetics, 2014, 45(2): 191-204.
Example code
The code below is from the PlatEMO toolbox, available at: https://github.com/BIMK/PlatEMO
classdef CSO < ALGORITHM
% <large/none> <constrained/none>
% Competitive swarm optimizer
% phi --- 0.1 --- Social factor

%------------------------------- Reference --------------------------------
% R. Cheng and Y. Jin, A competitive swarm optimizer for large scale
% optimization, IEEE Transactions on Cybernetics, 2014, 45(2): 191-204.
%------------------------------- Copyright --------------------------------
% Copyright (c) 2021 BIMK Group. You are free to use the PlatEMO for
% research purposes. All publications which use this platform or any code
% in the platform should acknowledge the use of "PlatEMO" and reference "Ye
% Tian, Ran Cheng, Xingyi Zhang, and Yaochu Jin, PlatEMO: A MATLAB platform
% for evolutionary multi-objective optimization [educational forum], IEEE
% Computational Intelligence Magazine, 2017, 12(4): 73-87".
%--------------------------------------------------------------------------

    methods
        function main(Algorithm,Problem)
            %% Parameter setting
            phi = Algorithm.ParameterSet(0.1);
            %% Generate random population
            Population = Problem.Initialization();
            %% Optimization
            while Algorithm.NotTerminated(Population)
                % Determine the losers and winners in each random pair
                rank   = randperm(Problem.N);
                loser  = rank(1:end/2);
                winner = rank(end/2+1:end);
                % Swap so that 'winner' always holds the better particle
                replace         = FitnessSingle(Population(loser)) < FitnessSingle(Population(winner));
                temp            = loser(replace);
                loser(replace)  = winner(replace);
                winner(replace) = temp;
                % Update the losers by learning from the winners
                LoserDec  = Population(loser).decs;
                WinnerDec = Population(winner).decs;
                LoserVel  = Population(loser).adds(zeros(size(LoserDec)));
                R1 = repmat(rand(Problem.N/2,1),1,Problem.D);
                R2 = repmat(rand(Problem.N/2,1),1,Problem.D);
                R3 = repmat(rand(Problem.N/2,1),1,Problem.D);
                LoserVel = R1.*LoserVel + R2.*(WinnerDec-LoserDec) + phi.*R3.*(repmat(mean(Population.decs,1),Problem.N/2,1)-LoserDec);
                LoserDec = LoserDec + LoserVel;
                Population(loser) = SOLUTION(LoserDec,LoserVel);
            end
        end
    end
end