MATLAB Code for BP and ELM

Overview

This post provides MATLAB code for a BP neural network and an ELM (extreme learning machine), collected from earlier work, along with a usage example and links to well-written introductions to the underlying algorithms.

Code package:
Link: https://pan.baidu.com/s/1i_ztrqEmyUDUTQKlAMZGBw
Extraction code: o5yv

Example

%% Machine learning - example
%% Brief introduction
%
% Purpose: build a prediction model from the training data with a chosen learner.
%
% Input: training data (features + labels), testing data (features + labels).
%
% Output: predicted labels.
%
% Choice of learner: prefer ELM. (You can also visualize the classification data and pick a classifier that matches its distribution.)
%
% Note: in the example below the test data carries 'features + labels', and the outputs are the testing accuracy and the predicted labels. In a real application you can set the test-sample labels to 0 and then predict them.
%% Data preparation

load iris.mat       % Load the data set: features first, label last; each row is a sample, each column a feature
tr=iris(1:100,:);   % first 100 rows of iris as training samples
te=iris(101:end,:); % remaining 50 rows as testing samples
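% Alternative split (sketch): if the rows of iris.mat are grouped by class,
% a random split avoids training on only a subset of the classes:
% idx = randperm(size(iris,1));   % shuffle the sample order
% tr  = iris(idx(1:100), :);
% te  = iris(idx(101:end), :);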
%% Learners
% 1. BP neural network

    hiddenSizes = 5; % number of hidden-layer units
    [A_Bp,bplabel] = f_Bp(tr,te,hiddenSizes);
    % Theory: https://blog.csdn.net/u014303046/article/details/78200010

% 2. ELM

    NumberofHiddenNeurons = 10; % number of hidden neurons
    [A_ELM, Label_ELM, TrainingAccuracy, TrainingTime, TestingTime] = f_ELM(tr,te,NumberofHiddenNeurons);
    % Theory: https://blog.csdn.net/qq_32892383/article/details/90760481

Subroutines

function [A,bplabel]=f_Bp(tr,te,hiddenSizes)

%Input:      tr: training set
%            te: testing set
%            Note: each row is an instance; the last column is the label, starting from 1
%            hiddenSizes: number of hidden-layer neurons (default = 10)
%Output:     A: testing accuracy
%            bplabel: labels predicted by the BP network for the testing data
    if ~exist('hiddenSizes', 'var')
        hiddenSizes = 10;
    end

input_train=tr(:,1:end-1)';  % training features, one column per sample
output_train=tr(:,end);      % training labels
input_test=te(:,1:end-1)';   % testing features
output_test=te(:,end);       % testing labels
label=[output_train;output_test];
L=unique(label);  % distinct class labels
ls=length(L(:));  % total number of classes

ns=length(output_train);
output=zeros(ns,ls);   % one-hot (1-of-K) coding of the training labels
for j=1:ns
    output(j,output_train(j))=1;
end
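% Vectorized equivalent, assuming the ind2vec helper of the Neural Network
% Toolbox is available:
% output = full(ind2vec(output_train'))';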

%% BP network training
% Initialize the network structure
net=newff(input_train,output',[hiddenSizes]);
%The legacy newff syntax is:
%net=newff(PR,[S1 S2 ...SN],{TF1 TF2...TFN},BTF,BLF,PF), which builds a trainable feed-forward network. Arguments:
%PR: an Rx2 matrix giving the minimum and maximum of each of the R input variables;
%Si: number of neurons in layer i;
%TFi: transfer function of layer i (default: tansig);
%BTF: training function (default: trainlm);
%BLF: weight/bias learning function (default: learngdm);
%PF: performance function (default: mse).
%(The call above uses the newer newff(P,T,S) form, which infers the input ranges from the data.)
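%Note: newff is deprecated in recent MATLAB releases; a modern equivalent
%(sketch, not used here) would be net = feedforwardnet(hiddenSizes);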
%% Training parameters
net.trainParam.epochs=1000; % maximum number of training epochs
net.trainParam.lr=0.01;     % learning rate
net.trainParam.goal=0.01;   % target mean squared error
%% Choose the hidden-layer transfer function and the training function
%  hiddenFcn: hidden-layer transfer function (default = 'tansig')
%  trainFcn:  training function (default = 'trainlm')
net.trainFcn = 'trainlm'; % Levenberg-Marquardt algorithm
% net.trainFcn = 'traingd';  % gradient descent
% net.trainFcn = 'traingdm'; % gradient descent with momentum
% net.trainFcn = 'traingda'; % gradient descent with adaptive learning rate
% net.trainFcn = 'traingdx'; % gradient descent with momentum and adaptive learning rate
% (preferred for large networks:)
% net.trainFcn = 'trainrp';  % RPROP (resilient backpropagation), smallest memory requirement
% (conjugate gradient algorithms:)
% net.trainFcn = 'traincgf'; % Fletcher-Reeves update
% net.trainFcn = 'traincgp'; % Polak-Ribiere update, slightly more memory than Fletcher-Reeves
% net.trainFcn = 'traincgb'; % Powell-Beale restarts, slightly more memory than Polak-Ribiere
% (preferred for large networks:)
% net.trainFcn = 'trainscg'; % scaled conjugate gradient, same memory as Fletcher-Reeves, far less computation than the three above
% net.trainFcn = 'trainbfg'; % quasi-Newton (BFGS), more computation and memory than conjugate gradient, but faster convergence
% net.trainFcn = 'trainoss'; % one-step secant, less computation and memory than BFGS, slightly more than conjugate gradient
% (preferred for medium-sized networks:)
% net.trainFcn = 'trainlm';  % Levenberg-Marquardt, largest memory requirement, fastest convergence
% net.trainFcn = 'trainbr';  % Bayesian regularization
% Five representative choices: 'traingdx','trainrp','trainscg','trainoss','trainlm'
%% Train the network
net=train(net,input_train,output');
%% BP network prediction
% network outputs for the test set
Y=sim(net,input_test);
%% Analyze the results
% assign each test sample to the class with the largest network output
s=size(Y,2);
bplabel=zeros(s,1);      % preallocate the predicted labels
for i=1:s
    [~,I]=max(Y(:,i));   % class with the largest output
    bplabel(i,1)=I;
end
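% Vectorized equivalent of the loop above:
% [~, bplabel] = max(Y, [], 1); bplabel = bplabel';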
net.iw{1,1}; % input-to-hidden weights (reference only; drop the semicolon to display)
net.b{1};    % hidden-layer biases
net.lw{2,1}; % hidden-to-output weights
net.b{2};    % output-layer biases

%testing accuracy
rightnumber=0;
for i=1:length(output_test)
    if bplabel(i)==output_test(i)
        rightnumber=rightnumber+1;
    end
end
A=rightnumber/length(output_test);
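% One-line equivalent: A = mean(bplabel == output_test);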

end
function [TestingAccuracy, elmlabel, TrainingAccuracy, TrainingTime, TestingTime] = f_ELM(tr,te,NumberofHiddenNeurons)

% Input:
% tr                    - Training data matrix
% te                    - Testing data matrix
% Note: each row is an instance; the last column is the label, starting from 1
% Elm_Type              - 0 for regression; 1 for (both binary and multi-class) classification (fixed to 1 below)
% NumberofHiddenNeurons - Number of hidden neurons assigned to the ELM
% ActivationFunction    - Type of activation function (fixed to 'sig' below):
%                           'sig' for Sigmoidal function
%                           'sin' for Sine function
%                           'hardlim' for Hardlim function
%                           'tribas' for Triangular basis function
%                           'radbas' for Radial basis function (for additive type of SLFNs instead of RBF type of SLFNs)
%
% Output: 
% TrainingTime          - Time (seconds) spent on training ELM
% TestingTime           - Time (seconds) spent on predicting ALL testing data
% TrainingAccuracy      - Training accuracy: 
%                           RMSE for regression or correct classification rate for classification
% TestingAccuracy       - Testing accuracy: 
%                           RMSE for regression or correct classification rate for classification
% elmlabel              - labels predicted by the ELM for the testing data

Elm_Type=1;                 % classification (fixed in this wrapper)
ActivationFunction='sig';   % sigmoid activation (fixed in this wrapper)
Tr=tr;
Te=te;
    if ~exist('NumberofHiddenNeurons', 'var')
        NumberofHiddenNeurons = 10;
    end

%%%%%%%%%%% Macro definition
REGRESSION=0;
CLASSIFIER=1;
%%%%%%%%%%% Load training dataset
T=Tr(:,end)';
P=Tr(:,1:end-1)';
clear Tr;                                   %   Release raw training data array
%%%%%%%%%%% Load testing dataset
TV.T=Te(:,end)';
TV.P=Te(:,1:end-1)';
clear Te;                                    %   Release raw testing data array

NumberofTrainingData=size(P,2);
NumberofTestingData=size(TV.P,2);
NumberofInputNeurons=size(P,1);

if Elm_Type~=REGRESSION
    %%%%%%%%%%%% Preprocessing the data of classification
    sorted_target=sort(cat(2,T,TV.T),2);
    label=zeros(1,1);                     %   Collect the distinct class labels from the training and testing sets
    label(1,1)=sorted_target(1,1);
    j=1;
    for i = 2:(NumberofTrainingData+NumberofTestingData)
        if sorted_target(1,i) ~= label(1,j)
            j=j+1;
            label(1,j) = sorted_target(1,i);
        end
    end
    number_class=j;
    NumberofOutputNeurons=number_class;
       
    %%%%%%%%%% Processing the targets of training
    temp_T=zeros(NumberofOutputNeurons, NumberofTrainingData);
    for i = 1:NumberofTrainingData
        for j = 1:number_class
            if label(1,j) == T(1,i)
                break; 
            end
        end
        temp_T(j,i)=1;
    end
    T=temp_T*2-1;

    %%%%%%%%%% Processing the targets of testing
    temp_TV_T=zeros(NumberofOutputNeurons, NumberofTestingData);
    for i = 1:NumberofTestingData
        for j = 1:number_class
            if label(1,j) == TV.T(1,i)
                break; 
            end
        end
        temp_TV_T(j,i)=1;
    end
    TV.T=temp_TV_T*2-1;
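    % The targets are now one-vs-all coded in {-1,+1}: +1 in the row of the
    % true class and -1 elsewhere (the temp*2-1 step above).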

end                                                 %   end if of Elm_Type

%%%%%%%%%%% Calculate weights & biases
start_time_train=cputime;

%%%%%%%%%%% Random generate input weights InputWeight (w_i) and biases BiasofHiddenNeurons (b_i) of hidden neurons
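% (For reproducible runs, the generator can be seeded first, e.g. rng(0);
% assumption: the original code leaves the seed unset.)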
InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;
BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
tempH=InputWeight*P;
clear P;                                    %   Release input of training data 
ind=ones(1,NumberofTrainingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);      %   Extend the bias vector BiasofHiddenNeurons to match the dimensions of H
tempH=tempH+BiasMatrix;
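% In MATLAB R2016b or later, implicit expansion allows simply
% tempH = tempH + BiasofHiddenNeurons; without building BiasMatrix.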

%%%%%%%%%%% Calculate hidden neuron output matrix H
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid 
        H = 1 ./ (1 + exp(-tempH));
    case {'sin','sine'}
        %%%%%%%% Sine
        H = sin(tempH);    
    case {'hardlim'}
        %%%%%%%% Hard Limit
        H = double(hardlim(tempH));
    case {'tribas'}
        %%%%%%%% Triangular basis function
        H = tribas(tempH);
    case {'radbas'}
        %%%%%%%% Radial basis function
        H = radbas(tempH);
        %%%%%%%% More activation functions can be added here                
end
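% (An "otherwise" branch erroring on unknown activation names could be added
% to the switch above; it is not part of the original code.)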
clear tempH;                           %   Release the temporary array used to compute the hidden-layer output matrix H

%%%%%%%%%%% Calculate output weights OutputWeight (beta_i)
OutputWeight=pinv(H') * T';         % implementation without regularization factor //refer to 2006 Neurocomputing paper
%OutputWeight=inv(eye(size(H,1))/C+H * H') * H * T';   % faster method 1 //refer to 2012 IEEE TSMC-B paper
%implementation; one can set the regularization factor C properly in classification applications
%OutputWeight=(eye(size(H,1))/C+H * H') \ H * T';      % faster method 2 //refer to 2012 IEEE TSMC-B paper
%implementation; one can set the regularization factor C properly in classification applications
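% Mathematically, this is the minimum-norm least-squares solution
% beta = pinv(H') * T' of the linear system H' * beta = T'.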

%If you use faster methods or kernel method, PLEASE CITE in your paper properly: 
%Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang, 
%"Extreme Learning Machine for Regression and Multi-Class Classification," 
%submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence, October 2010. 

end_time_train=cputime;
TrainingTime=end_time_train-start_time_train;      %   Calculate CPU time (seconds) spent for training ELM

%%%%%%%%%%% Calculate the training accuracy
Y=(H' * OutputWeight)';                          %   Y: the actual output of the training data
if Elm_Type == REGRESSION
    TrainingAccuracy=sqrt(mse(T - Y));           %   Calculate training accuracy (RMSE) for regression case
end
clear H;

%%%%%%%%%%% Calculate the output of testing input
start_time_test=cputime;
tempH_test=InputWeight*TV.P;
TV.P = [];                                     %   Release input of testing data (note: "clear TV.P" cannot clear a struct field)
ind=ones(1,NumberofTestingData);
BiasMatrix=BiasofHiddenNeurons(:,ind);         %   Extend the bias vector BiasofHiddenNeurons to match the dimensions of H_test
tempH_test=tempH_test + BiasMatrix;
switch lower(ActivationFunction)
    case {'sig','sigmoid'}
        %%%%%%%% Sigmoid 
        H_test = 1 ./ (1 + exp(-tempH_test));
    case {'sin','sine'}
        %%%%%%%% Sine
        H_test = sin(tempH_test);        
    case {'hardlim'}
        %%%%%%%% Hard Limit
        H_test = hardlim(tempH_test);        
    case {'tribas'}
        %%%%%%%% Triangular basis function
        H_test = tribas(tempH_test);        
    case {'radbas'}
        %%%%%%%% Radial basis function
        H_test = radbas(tempH_test);        
        %%%%%%%% More activation functions can be added here        
end
TY=(H_test' * OutputWeight)';                   %   TY: the actual output of the testing data
[~,elmlabel]=max(TY);                           %   predicted class = row index of the largest output
elmlabel=elmlabel';
end_time_test=cputime;
TestingTime=end_time_test-start_time_test;      %   Calculate CPU time (seconds) spent by ELM predicting the whole testing data
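% Note: cputime measures CPU seconds; tic/toc would measure wall-clock time.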

if Elm_Type == REGRESSION
    TestingAccuracy=sqrt(mse(TV.T - TY));       %   Calculate testing accuracy (RMSE) for regression case
end

if Elm_Type == CLASSIFIER
%%%%%%%%%% Calculate training & testing classification accuracy
    MissClassificationRate_Training=0;
    MissClassificationRate_Testing=0;

    for i = 1 : size(T, 2)
        [~, label_index_expected]=max(T(:,i));
        [~, label_index_actual]=max(Y(:,i));
        if label_index_actual~=label_index_expected
            MissClassificationRate_Training=MissClassificationRate_Training+1;
        end
    end
    TrainingAccuracy=1-MissClassificationRate_Training/size(T,2);
    for i = 1 : size(TV.T, 2)
        [~, label_index_expected]=max(TV.T(:,i));
        [~, label_index_actual]=max(TY(:,i));
        if label_index_actual~=label_index_expected
            MissClassificationRate_Testing=MissClassificationRate_Testing+1;
        end
    end
    TestingAccuracy=1-MissClassificationRate_Testing/size(TV.T,2);  
end