Training a Conventional Autoencoder with PSO (MATLAB Implementation)

💥💥💞💞Welcome to this blog❤️❤️💥💥

🏆Blogger's strengths: 🌞🌞🌞The content of this blog aims to be rigorous in reasoning and clear in logic, for the convenience of readers.

⛳️Motto: On a journey of a hundred li, ninety li is only the halfway point.

Contents

💥1 Overview

📚2 Results

🎉3 References

🌈4 MATLAB Code Implementation

💥1 Overview

This post trains a conventional autoencoder with particle swarm optimization (PSO), one of the best-known stochastic-search-based optimization algorithms. In brief, PSO maintains a population of candidate solutions (particles) that move through the search space, each pulled toward its own best position found so far and toward the swarm's global best; here each particle encodes the autoencoder's weight matrix, and the fitness is the reconstruction error. A more detailed introduction of PSO is omitted here.
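As a rough illustration of the idea (a minimal self-contained Python sketch, not the MATLAB code shipped with this post; the function name, inertia weight, and acceleration coefficients are assumptions, not values taken from the original project):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, lb=-10.0, ub=10.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: returns (best position, best fitness)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # each particle's best position
    pbest_f = np.array([f(p) for p in x])         # and its fitness
    g = pbest[np.argmin(pbest_f)].copy()          # swarm's global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                # keep particles inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```

In the MATLAB project the role of `f` is played by the autoencoder's reconstruction error, and each particle's position vector holds the flattened weight matrix.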

📚2 Results

Partial code:

clear all;
clc;
addpath('NEW_PSO','AE');
%% data preparation
original=imread('tu_pian.png');
original=imresize(original,[150,90]);
x=rgb2gray(original);
Inputs=double(x);
%% network initialization
number_neurons=89;% number of hidden neurons
LB=-10;           % lower bound of the weights
UB=10;            % upper bound of the weights
n=10;             % population size (number of particles)
%% training process
[net]=PSO_AE(Inputs,number_neurons,LB,UB,n);
%% Illustration
regenerated=net.code*pinv(net.B');
subplot(121)
imagesc(regenerated);
colormap(gray);
Tc=num2str(net.prefomance);
Tc= ['RMSE = ' Tc];
xlabel('regenerated image')
title(Tc)
subplot(122)
plot(smooth(net.errors,52),'LineWidth',2);
xlabel('iterations')
ylabel('RMSE')
title('loss function behavior')
axis([0 length(net.errors) min(net.errors) max(net.errors)])
grid

function[net]=PSO_AE(Inputs,number_neurons,LB,UB,n)
% PSO_AE: trains an auto-encoder using a random-search tool (PSO).
% Inputs: the training set.
% number_neurons: number of neurons in the hidden layer.
% LB: lower bound constraints for the weights.
% UB: upper bound constraints for the weights.
% n: population size (in PSO).
% net: a struct holding the important characteristics of training.
Inputs = scaledata(Inputs,0,1);% normalize data to [0,1] (scaledata is a helper on the added path)
alpha=size(Inputs);
% Initialize the PSO parameters
m=number_neurons*alpha(2);
LB=ones(1,m)*(LB);
UB=ones(1,m)*(UB);
% Solving Optimization problem based on random search
[Fvalue,B,nb_iterations,fit_behavior]=PSO(m,n,LB,UB,Inputs,number_neurons);%
% prepare the problem solution 
B=reshape(B,number_neurons,alpha(2));
% calculate Inputs_hat: unlike other networks, the AE uses the same weights
% B as input weights for coding and as output weights for decoding,
% so the old input weights (input_weights) are no longer used.
Hnew=Inputs*B';          % the hidden layer
Inputs_hat=Hnew*pinv(B');% the estimated Input
% store the network Characteristics 
net.errors=fit_behavior;% the training loss-function behavior
net.prefomance=sqrt(mse(Inputs-Inputs_hat));% the training performance (RMSE)
net.B=B;% the reconstruction weights
net.code=Hnew;% the training hidden layer
end 
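For readers who want to see the objective that PSO is minimizing, here is a hedged Python sketch of the tied-weight reconstruction fitness implied by the MATLAB code above (the function name and the NumPy port are mine, not part of the original project):

```python
import numpy as np

def ae_reconstruction_rmse(weights_flat, X, n_hidden):
    """Fitness for a PSO-trained tied-weight autoencoder.

    The flat particle vector is reshaped to B (n_hidden x n_features);
    encoding is H = X @ B.T and decoding reuses the same weights via
    the pseudo-inverse, X_hat = H @ pinv(B.T), mirroring the MATLAB
    lines Hnew = Inputs*B' and Inputs_hat = Hnew*pinv(B')."""
    n_features = X.shape[1]
    B = weights_flat.reshape(n_hidden, n_features)
    H = X @ B.T                       # hidden code
    X_hat = H @ np.linalg.pinv(B.T)   # tied-weight decoding
    return float(np.sqrt(np.mean((X - X_hat) ** 2)))
```

Note that because decoding uses the pseudo-inverse of the same weight matrix, the network only has one weight matrix to learn, which is exactly why a single flat particle vector of length `number_neurons*alpha(2)` suffices in the MATLAB code.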

🎉3 References

Some of the theory comes from online sources; if there is any infringement, please contact us for removal.

[1] M. N. Alam, “Particle Swarm Optimization: Algorithm and its Codes in MATLAB,” Mar. 2016.
[2] Y. Liu, B. He, D. Dong, Y. Shen, and T. Yan, “ROS-ELM: A Robust Online Sequential Extreme Learning Machine for Big Data Analytics,” Proc. ELM-2014 Vol. 1, Algorithms Theor., vol. 3, pp. 325–344, 2015.
[3] H. Zhou, G.-B. Huang, Z. Lin, H. Wang, and Y. C. Soh, “Stacked Extreme Learning Machines.,” IEEE Trans. Cybern., vol. PP, no. 99, p. 1, 2014.

[4] Lu Qiang, Teng Jinfeng, Li Jie, Ling Liang, Ding Chao, Huang Jiangang. A hybrid model based on autoencoders for time-series forecasting [J]. Computer Systems & Applications, 2022, 31(7): 55-65.

🌈4 MATLAB Code Implementation
