1 Introduction
Particle swarm optimization (PSO) is a heuristic optimization algorithm. PSO treats each candidate solution of an optimization problem as a particle; every particle has an initial velocity and position as well as a fitness value. A particle's velocity determines the direction and distance of its movement, and its fitness value is computed by the algorithm's fitness function. By comparing its current fitness against its personal best and the global best, each particle continually updates its own velocity and position. In this work, the connection weights between the input layer and the hidden-layer neurons of the extreme learning machine (ELM), together with the hidden-layer neuron biases, are taken as the particles to be optimized by PSO. Since the root mean square error (RMSE) measures the deviation between observed and true values, the RMSE is used as the fitness function. The optimization steps are as follows.
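The velocity/position update described above can be written as a minimal sketch in Python, assuming the standard PSO form with an inertia weight w (all names and parameter values here are illustrative, not taken from the article's code):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.2, c2=1.2, rng=None):
    """One PSO update for all particles.

    x, v   : (n_particles, dim) positions and velocities
    pbest  : (n_particles, dim) personal-best positions
    gbest  : (dim,) global-best position
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # fresh random factors per particle and dimension
    r2 = rng.random(x.shape)
    # inertia + attraction to personal best + attraction to global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

When a particle already sits at both its personal best and the global best, the attraction terms vanish and the velocity simply decays by the inertia factor.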
(1) Feed the training samples into the model as input samples and expected output samples.
(2) Set the initial parameters of the ELM, including the number of hidden-layer neurons, the activation function, and so on.
(3) Take the weights and biases as particles and pass them into the PSO algorithm, setting the corresponding parameters such as the learning factors and the number of iterations.
(4) Use the RMSE as the fitness function and compute the fitness value of every particle. Since RMSE is a smaller-is-better objective, select the particle with the smallest RMSE that satisfies the conditions as the optimal particle and output it. The output particle corresponds to the optimal weights and biases.
(5) Substitute the output weights and biases back into the ELM to complete the optimization. The technical flow of PSO-optimized ELM is shown in Figure 3.
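The five steps above can be sketched end-to-end as follows (an illustrative Python sketch, not the article's MATLAB implementation; the function names, the tanh activation, and the parameter ranges are assumptions). Each particle encodes one candidate set of ELM input weights and hidden biases; its fitness is the training RMSE obtained after solving the output weights in closed form:

```python
import numpy as np

def elm_rmse(params, X, y, n_hidden):
    """Fitness: build an ELM from the particle's (W, b), solve the output
    weights by least squares, and return the training RMSE (smaller is better)."""
    n_in = X.shape[1]
    W = params[:n_in * n_hidden].reshape(n_in, n_hidden)  # input-to-hidden weights
    b = params[n_in * n_hidden:]                          # hidden-neuron biases
    H = np.tanh(X @ W + b)                                # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                          # output weights (pseudo-inverse)
    return np.sqrt(np.mean((H @ beta - y) ** 2))

def pso_elm(X, y, n_hidden=10, pop=20, iters=50, w=0.7, c1=1.2, c2=1.2, seed=0):
    """Steps (1)-(5): optimize the ELM's input weights/biases with PSO."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden
    x = rng.uniform(-1, 1, (pop, dim))  # step (3): particles = candidate (W, b)
    v = np.zeros((pop, dim))
    pbest = x.copy()
    pcost = np.array([elm_rmse(p, X, y, n_hidden) for p in x])  # step (4)
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, pop, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        cost = np.array([elm_rmse(p, X, y, n_hidden) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()  # smallest RMSE = optimal particle
    return g, pcost.min()  # step (5): best (W, b) and its RMSE
```

Decoding the returned vector back into W and b and substituting them into the ELM completes step (5).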
2 Partial code
% function [time, Et] = ANFISELMELM(adaptive_mode)
tic; clc; clearvars -except adaptive_mode; close all;
adaptive_mode = 'none';

% Load dataset
data        = csvread('iris.csv');
input_data  = data(:, 1:end-1);
output_data = data(:, end);

% Parameter initialization
[center, U] = fcm(input_data, 3, [2 100 1e-6]);  % center = cluster centers, U = membership degrees
[total_examples, total_features] = size(input_data);
class    = 3;    % [Changeable]
epoch    = 0;
epochmax = 400;  % [Changeable]
Et = zeros(epochmax, 1);
[Yy, Ii] = max(U);  % Yy = maximum membership value, Ii = class index of that maximum

% Population initialization
pop_size   = 10;
population = zeros(pop_size, 3, class, total_features);  % population size x 3 x classes x features
velocity   = zeros(pop_size, 3, class, total_features);  % velocity matrix of an iteration
pBest_fitness  = inf(pop_size, 1);                       % personal-best fitness (smaller is better)
pBest_position = zeros(pop_size, 3, class, total_features);
c1 = 1.2; c2 = 1.2;
original_c1 = c1; original_c2 = c2;
r1 = 0.4; r2 = 0.6;
max_c1c2 = 2;  % upper bound for adaptive c1/c2
iteration_tolerance = 50;
iteration_counter   = 0;
change_tolerance    = 10;
is_first_on = 1;
is_trapped  = 0;

for particle = 1:pop_size
    a = zeros(class, total_features);
    b = repmat(2, class, total_features);
    c = zeros(class, total_features);
    for k = 1:class
        for i = 1:total_features  % loop over all features
            % Premise parameter: a
            aTemp  = (max(input_data(:, i)) - min(input_data(:, i))) / (2*sum(Ii' == k) - 2);
            aLower = aTemp * 0.5;
            aUpper = aTemp * 1.5;
            a(k, i) = (aUpper - aLower) .* rand() + aLower;
            % Premise parameter: c
            dcc    = (2.1 - 1.9) .* rand() + 1.9;
            cLower = center(k, total_features) - dcc/2;
            cUpper = center(k, total_features) + dcc/2;
            c(k, i) = (cUpper - cLower) .* rand() + cLower;
        end
    end
    % Store the sampled premise parameters (a, b, c) as this particle's position
    population(particle, 1, :, :) = a;
    population(particle, 2, :, :) = b;
    population(particle, 3, :, :) = c;
end

% Main PSO loop (the velocity/position update step is omitted in this excerpt)
while epoch < epochmax
    epoch = epoch + 1;

    % Calculate fitness values and update pBest
    % get_fitness: user-supplied RMSE fitness function (not shown in this excerpt)
    for i = 1:pop_size
        particle_position = squeeze(population(i, :, :, :));
        e = get_fitness(particle_position, class, input_data, output_data);
        if e < pBest_fitness(i)
            pBest_fitness(i) = e;
            pBest_position(i, :, :, :) = particle_position;
        end
    end

    % Find gBest
    [gBest_fitness, idx] = min(pBest_fitness);
    gBest_position = squeeze(pBest_position(idx, :, :, :));
    Et(epoch) = gBest_fitness;

    % Adaptive c1 and c2
    if strcmpi('none', adaptive_mode) == 0
        if (epoch > 1) && (Et(epoch) == Et(epoch-1))
            iteration_counter = iteration_counter + 1;
        else
            iteration_counter = 0;
            c1 = original_c1;
            c2 = original_c2;
        end
        if iteration_counter > iteration_tolerance
            if is_first_on == 1
                iteration_left_since_on = (epochmax - epoch) - change_tolerance;
                epoch_when_on = epoch;
                is_first_on = 0;
            end
            curr_pos = epoch - epoch_when_on;
            if curr_pos <= iteration_left_since_on
                if strcmpi('c1', adaptive_mode) == 1
                    c1 = original_c1 + (max_c1c2 - original_c1)/iteration_left_since_on*curr_pos;
                elseif strcmpi('c2', adaptive_mode) == 1
                    c2 = original_c2 + (max_c1c2 - original_c2)/iteration_left_since_on*curr_pos;
                elseif strcmpi('c1c2', adaptive_mode) == 1
                    c1 = original_c1 + (max_c1c2 - original_c1)/iteration_left_since_on*curr_pos;
                    c2 = original_c2 + (max_c1c2 - original_c2)/iteration_left_since_on*curr_pos;
                end
            end
        end
    end

    % Draw the error plot
    plot(1:epoch, Et(1:epoch));
    title(['Epoch ' int2str(epoch) ' -> RMSE = ' num2str(Et(epoch))]);
    grid on;
    pause(0.001);
end
time = toc;
3 Simulation results