💥💥💞💞Welcome to this blog❤️❤️💥💥
🏆Blogger's strengths: 🌞🌞🌞 the posts strive to be thorough in reasoning and clear in logic, for the reader's convenience.
⛳️Motto: In a journey of a hundred li, ninety is only the halfway point.
Contents
💥1 Overview
This post trains a conventional autoencoder with particle swarm optimization (PSO), one of the best-known stochastic-search-based optimization algorithms. The algorithm itself is not covered in detail here.
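The canonical PSO velocity/position updates that the training below relies on can be sketched in a few lines. This is a hypothetical NumPy helper, not the `PSO` function shipped with the post's MATLAB code; the names `pso_minimize`, `w`, `c1`, `c2` and all defaults are illustrative:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=10, lb=-10.0, ub=10.0, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [lb, ub]^dim with a basic PSO (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.array([f(p) for p in x])         # their objective values
    g = pbest[pbest_f.argmin()].copy()            # global best position
    history = []
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # canonical update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f                   # keep only improvements
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
        history.append(pbest_f.min())             # best-so-far loss curve
    return g, history

# usage: minimize the sphere function sum(p^2)
best, hist = pso_minimize(lambda p: np.sum(p**2), dim=5)
```

Because `pbest` only ever updates on improvement, the returned `history` is non-increasing, which is why the loss curve plotted later in the post trends downward.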
📚2 Results
Partial code:
clear all;
clc;
addpath('NEW_PSO','AE');
%% data preparation
original=imread('tu_pian.png');
original=imresize(original,[150,90]);
x=rgb2gray(original);
Inputs=double(x);
%% network initialization
number_neurons=89; % number of hidden neurons
LB=-10; % lower bound of the weights
UB=10;  % upper bound of the weights
n=10;   % population size
%% training process
[net]=PSO_AE(Inputs,number_neurons,LB,UB,n);
%% Illustration
regenerated=net.code*pinv(net.B');
subplot(121)
imagesc(regenerated);
colormap(gray);
Tc=num2str(net.prefomance);
Tc= ['RMSE = ' Tc];
xlabel('regenerated image')
title(Tc)
subplot(122)
plot(smooth(net.errors,52),'LineWidth',2);
xlabel('iterations')
ylabel('RMSE')
title('loss function behavior')
axis([0 length(net.errors) min(net.errors) max(net.errors)])
grid
function [net]=PSO_AE(Inputs,number_neurons,LB,UB,n)
% PSO_AE: trains an auto-encoder using a random-search tool (PSO).
% Inputs:         the training set.
% number_neurons: number of neurons in the hidden layer.
% LB:  lower bound constraints for the weights.
% UB:  upper bound constraints for the weights.
% n:   population size (in PSO).
% net: struct holding the main characteristics of the trained network.
Inputs = scaledata(Inputs,0,1);% data Normalization
alpha=size(Inputs);
% Initialize the PSO parameters
m=number_neurons*alpha(2);
LB=ones(1,m)*(LB);
UB=ones(1,m)*(UB);
% Solving Optimization problem based on random search
[Fvalue,B,nb_iterations,fit_behavior]=PSO(m,n,LB,UB,Inputs,number_neurons);%
% prepare the problem solution
B=reshape(B,number_neurons,alpha(2));
% calculate Inputs_hat: unlike other networks, the AE reuses the same weight
% matrix B as the input weights for coding and (via its pseudo-inverse) as
% the output weights for decoding; the old input weights are no longer used.
Hnew=Inputs*B'; % the hidden layer
Inputs_hat=Hnew*pinv(B');% the estimated Input
% store the network Characteristics
net.errors=fit_behavior;% the training loss function behavior
net.prefomance=sqrt(mse(Inputs-Inputs_hat));% the training performance (RMSE)
net.B=B;% the reconstruction weights
net.code=Hnew;% the training hidden layer
end
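The tied-weight decoding trick in `PSO_AE` — code with `B'`, decode with its Moore–Penrose pseudo-inverse — can be mirrored in NumPy. In this sketch `B` is random rather than PSO-trained, so the RMSE only illustrates the computation, not the trained performance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((150, 90))          # stand-in for the normalized 150x90 image
k = 89                             # number_neurons in the original script
B = rng.uniform(-10, 10, (k, 90))  # weights that PSO would optimize (random here)

H = X @ B.T                        # hidden layer: Hnew = Inputs*B'
X_hat = H @ np.linalg.pinv(B.T)    # decode with the pseudo-inverse of B'
rmse = np.sqrt(np.mean((X - X_hat) ** 2))  # same metric as net.prefomance
```

Since `H @ pinv(B.T)` orthogonally projects each row of `X` onto the 89-dimensional column space of `B'`, the reconstruction error measures only the component of the data outside that subspace; PSO then searches for the `B` that makes this residual smallest.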
🎉3 References
Some of the theory is drawn from online sources; in case of infringement, please contact us for removal.
[1] M. N. Alam, "Particle Swarm Optimization: Algorithm and its Codes in MATLAB," Mar. 2016.
[2] Y. Liu, B. He, D. Dong, Y. Shen, and T. Yan, "ROS-ELM: A Robust Online Sequential Extreme Learning Machine for Big Data Analytics," Proc. ELM-2014, vol. 1, pp. 325–344, 2015.
[3] H. Zhou, G.-B. Huang, Z. Lin, H. Wang, and Y. C. Soh, "Stacked Extreme Learning Machines," IEEE Trans. Cybern., vol. PP, no. 99, p. 1, 2014.
[4] Lu Q., Teng J., Li J., Ling L., Ding C., and Huang J., "A hybrid autoencoder-based model for time series prediction," Computer Systems & Applications, vol. 31, no. 7, pp. 55–65, 2022.