【Improved Extreme Learning Machine】Bidirectional Extreme Learning Machine (B-ELM) (Matlab Implementation)

📝Personal homepage: 研学社的博客

💥💥💞💞Welcome to this blog❤️❤️💥💥

🏆Blogger's strength: 🌞🌞🌞the posts strive for careful reasoning and clear logic, for the reader's convenience.

⛳️Motto: in a journey of a hundred li, ninety li is only the halfway point.

Contents

💥1 Overview

📚2 Results

🎉3 References

🌈4 Matlab Implementation

💥1 Overview

Source literature:

Clearly, the learning efficiency and learning speed of neural networks often fall far short of what applications demand, and this has long been a major bottleneck. Huang recently proposed a simple and efficient learning method called the extreme learning machine (ELM), which showed that the training time of a neural network can be cut by a factor of up to a thousand compared with some conventional methods. An open question in ELM research, however, is whether the number of hidden nodes can be reduced further without degrading learning performance. This brief proposes a new learning algorithm, called the bidirectional extreme learning machine (B-ELM), in which some hidden nodes are not chosen at random. In theory, the algorithm tends to drive the network output error toward zero at a very early stage of learning. Furthermore, a relationship between the network output error and the network output weights is established for the proposed B-ELM. Simulation results show that the proposed method is tens to hundreds of times faster than other incremental ELM algorithms.
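As a brief sketch of the scheme (a condensed reading of reference [1] and of the code in Section 2, not the authors' exact notation): hidden nodes are added in pairs. The odd-indexed node $2n-1$ is generated randomly, as in incremental ELM, and its output weight is fit by least squares; the even-indexed node $2n$ is then computed deterministically from the current residual by inverting the output weight and the activation function:

$$\beta_{2n-1} = H_{2n-1}^{\dagger}\, e_{2n-2}, \qquad \hat{H}_{2n} = e_{2n-1}\, \beta_{2n-1}^{\dagger},$$

$$\hat{W}_{2n} = (P^{\top})^{\dagger}\, g^{-1}(\hat{H}_{2n}), \qquad e_{2n} = e_{2n-1} - H_{2n}\, \beta_{2n},$$

where $e_L$ is the residual error after $L$ nodes, $P$ is the input matrix, $g^{-1}$ is the inverse activation function (the logit for the sigmoid, arcsin for the sine), and $\dagger$ denotes the Moore-Penrose pseudoinverse. The normalization and bias-correction details appear in the listing below.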

📚2 Results

Partial code:

load fried.mat;                                     % Friedman benchmark dataset
data=fried';
data=mapminmax(data);                               % scale every attribute row to [-1,1]
data(11,:)=(data(11,:)/2)+0.5;                      % rescale the target to [0,1]
data=[data(11,:); data(1:10,:)]';                   % move the target into column 1

% The two loop headers below do not appear in the original excerpt (only
% their matching "end" statements do); they are reconstructed here with
% assumed bounds K (hidden-node pairs) and R (random repetitions).
K=50;                                               % assumed; not shown in the excerpt
R=20;                                               % assumed; not shown in the excerpt
for kkk=1:K
    for rnd=1:R
        rand_sequence=randperm(size(data,1));       % reshuffle the samples on each trial
        temp_data=data;
        data=temp_data(rand_sequence, :);
        Training=data(1:20768,:);                   % 20768 training samples
        Testing=data(20769:40768,:);                % 20000 testing samples

        [train_time, test_time, train_accuracy11, test_accuracy11]=B_ELM(Training,Testing,0,1,'sig',kkk);

        D_ELM_test(rnd,1)=test_accuracy11;
        D_ELM_train(rnd,1)=train_accuracy11;
        D_ELM_train_time(rnd,1)=train_time;
    end

    DD_ELM_learn_time(kkk)=mean(D_ELM_train_time);  % average training time over the R trials
    DD_ELM_train_accuracy(kkk)=mean(D_ELM_train);   % average training RMSE
    DD_ELM_test_accuracy(kkk)=mean(D_ELM_test);     % average testing RMSE
end

%%%%%%%%%%% Load training dataset
train_data=TrainingData_File;
T=train_data(:,1)';
aaa=T;
P=train_data(:,2:size(train_data,2))';
clear train_data;                                   %   Release raw training data array

%%%%%%%%%%% Load testing dataset
test_data=TestingData_File;
TV.T=test_data(:,1)';
TV.P=test_data(:,2:size(test_data,2))';
clear test_data;                                    %   Release raw testing data array

NumberofTrainingData=size(P,2);
NumberofTestingData=size(TV.P,2);
NumberofInputNeurons=size(P,1);


%%%%%%%%%%% Randomly generate input weights InputWeight (w_i) and hidden biases BiasofHiddenNeurons (b_i)
InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;
BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
tempH=InputWeight*P;
ind=ones(1,NumberofTrainingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Extend the bias vector to match the dimension of H
tempH=tempH+BiasMatrix;

% Accumulators for the trained parameters (reused in the test phase)
D_YYM=[];
D_Input=[];
D_beta=[];
D_beta1=[];
TY=[];
FY=[];
BiasofHiddenNeurons1=[];

%%%%%%%%%%% Start the training timer
start_time_train=cputime;

for i=1:kkk
    %%%%%%%%%% B-ELM step for the hidden node with odd index L=2n-1 %%%%%%%
InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;   % random input weights in [-1,1]
BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
BiasofHiddenNeurons1=[BiasofHiddenNeurons1;BiasofHiddenNeurons];    % stack the biases for the test phase
tempH=P'*InputWeight';
YYM=pinv(P')*tempH;
YJX=P'*YYM;

tempH=tempH';
ind=ones(1,NumberofTrainingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Extend the bias vector to match the dimension of H
tempH=tempH+BiasMatrix;

%%%%%%%%%%% Calculate hidden neuron output matrix H
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid 
        H = 1 ./ (1 + exp(-tempH));
    case {'sin','sine'}
        %%%%%%%% Sine
        H = sin(tempH);    
            %%%%%%%% More activation functions can be added here                
end
% (tempH could be cleared here to release memory)

%%%%%%%%%%% Calculate output weights OutputWeight (beta_i)
OutputWeight=pinv(H') * T';                        % slower but numerically robust implementation
% OutputWeight=inv(H * H') * H * T';               % faster implementation
Y=(H' * OutputWeight)';                            % output of the new odd node on the training set


%%%%%%%%%% B-ELM step for the hidden node with even index L=2n %%%%%%%
if i==1
    FY=Y;
else
    FY=FY+Y;
end
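% FY holds the accumulated output of all hidden nodes added so far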
E1=T-Y;
E1_2n_1(i)=norm(E1,2);
TrainingAccuracy2=sqrt(mse(E1));
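% Even-step feedback: the desired hidden output of node 2n is obtained by
% inverting the previous output weight, H_hat(2n) = e(2n-1)*pinv(beta(2n-1))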
Y2=E1'*pinv(OutputWeight);
Y2=Y2';
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid 
        [Y22,PS(i)]=mapminmax(Y2,0.1,0.9);
    case {'sin','sine'}
        %%%%%%%% Sine
       [Y22,PS(i)]=mapminmax(Y2,0,1);
end
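% Y22 rescales the desired hidden output into the invertible range of the
% activation ((0.1,0.9) for the sigmoid, (0,1) for the sine); the mapminmax
% settings PS(i) are stored so the scaling can be reversed later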

Y222=Y2;
Y2=Y22';

T1=(Y2* OutputWeight)';
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid 
Y3=1./Y2; 
Y3=Y3-1;
Y3=log(Y3);
Y3=-Y3';
    case {'sin','sine'}
        %%%%%%%% Sine
       Y3=asin(Y2)';
end
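% Y3 is the pre-activation target, obtained by applying the inverse
% activation: the logit -log(1./h - 1) for the sigmoid, asin(h) for the sine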

T2=(Y3'* OutputWeight)';

Y4=Y3;

YYM=pinv(P')*Y4';
YJX=P'*YYM;
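% Least-squares fit of the even node's input weights: YYM = pinv(P')*Y4'
% solves P'*YYM ≈ Y4', where Y4 is the pre-activation target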

BB1=size(Y4);
BB(i)=sum(YJX-Y4')/BB1(2);

GXZ1=P'*YYM-BB(i);

cc=pinv(P')*(GXZ1-Y4');
Y5=P'*cc-(GXZ1-Y4');
GXZ11=P'*(YYM-cc)-BB(i);
BBB(i)=mean(GXZ11-Y4');
GXZ111=P'*(YYM-cc)-BB(i)-BBB(i);
BBBB(i)=BB(i)+BBB(i);
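% cc, BB(i) and BBB(i) are successive correction terms; the combined bias
% BBBB(i) is stored so the even node can be reproduced in the test phase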
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid 
GXZ2=1./(1+exp(-GXZ111'));
    case {'sin','sine'}
        %%%%%%%% Sine
GXZ2=sin(GXZ111');
end


FYY = mapminmax('reverse',GXZ2,PS(i));
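% Reverse the earlier mapminmax scaling so that FYY approximates the desired
% hidden output of the even node on its original scale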

%FYY=GXZ2;
OutputWeight1=pinv(FYY') * E1'; 
FT1=FYY'*OutputWeight1;
FY=FY+FT1';
TrainingAccuracy=sqrt(mse(FT1'-E1));
D_Input=[D_Input;InputWeight];
D_beta=[D_beta;OutputWeight];
D_beta1=[D_beta1;OutputWeight1];
D_YYM=[D_YYM;(YYM-cc)'];
T=FT1'-E1;
E1_2n(i)=norm(T,2);
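% T is updated to the residual after node 2n (note the sign convention,
% T = FT1' - E1) and becomes the fitting target for the next odd node;
% E1_2n(i) records its l2-norm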

end
end_time_train=cputime;
TrainingTime=end_time_train-start_time_train;

    


start_time_test=cputime;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%test%%%%%%%%%%%%%%%%%%%%%

tempH_test=D_Input*TV.P;
% clear TV.P;                                      %   (disabled: struct fields cannot be cleared this way, and TV.P is still needed below)
ind=ones(1,NumberofTestingData);
BiasMatrix=BiasofHiddenNeurons1(:,ind);              
tempH_test=tempH_test + BiasMatrix;
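
The excerpt breaks off here. Purely as a sketch of how the test pass could be completed, mirroring the training loop above and assuming one hidden neuron per step (NumberofHiddenNeurons = 1, so D_beta and D_beta1 are kkk-by-1 and D_YYM stores one weight row per pair); this reconstruction is not part of the original listing:

switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        H_test = 1 ./ (1 + exp(-tempH_test));      % odd-node outputs on the test set
    case {'sin','sine'}
        H_test = sin(tempH_test);
end
TY = (H_test' * D_beta)';                          % summed contribution of the odd nodes

for i=1:kkk                                        % add the even nodes one by one
    GXZ_t = TV.P' * D_YYM(i,:)' - BBBB(i);         % pre-activation of even node 2i
    switch lower(ActivationFunction)
        case {'sig','sigmoid'}
            G_t = 1 ./ (1 + exp(-GXZ_t'));
        case {'sin','sine'}
            G_t = sin(GXZ_t');
    end
    F_t = mapminmax('reverse', G_t, PS(i));        % undo the training-time scaling
    TY = TY + F_t * D_beta1(i);
end

end_time_test=cputime;
TestingTime=end_time_test-start_time_test;
TestingAccuracy=sqrt(mse(TY - TV.T));              % test RMSE

Each even node is reproduced from exactly the quantities saved during training: its fitted input weights D_YYM(i,:), combined bias BBBB(i), scaling settings PS(i), and output weight D_beta1(i).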
 

🎉3 References

Some of the theory above is drawn from online sources; please contact us for removal in case of any infringement.

[1] Y. Yang, Y. Wang, and X. Yuan, "Bidirectional extreme learning machine for regression problem and its learning effectiveness," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, pp. 1498-1505, 2012.

🌈4 Matlab Implementation
