【Data Analysis】Floating-Point Quantization Analysis of a Multilayer Perceptron Artificial Neural Network【Matlab Source Code Included, No. 7211】


⛄ 1. How to Obtain the Code

Method 1:
The complete code has been uploaded to my resources: 【Data Analysis】Floating-point quantization analysis of a Matlab-based multilayer perceptron artificial neural network【Matlab source code included, No. 7211】
Click the blue text above and pay to download it directly.

Method 2:
Paid column: Data Analysis (Matlab)

Note:
Click the blue text above for the paid column Data Analysis (Matlab), scan the QR code above, and pay 299.9 yuan to subscribe to the 海神之光 blog's paid column. With proof of payment, send a private message to the blogger to receive 5 of the CSDN resource codes uploaded on this blog free of charge (valid for three days from the subscription date);
Click the CSDN resource download link: 5 CSDN resource codes uploaded on this blog

⛄ 2. Partial Source Code

clear
clc
load trainedANNexample;
%% Floating-Point Quantizer
p=24 ; expW=8; lenW=p+expW;
q= quantizer('float','round',[lenW expW]);
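% With p = 24 significand bits (the implicit leading bit included) and an 8-bit
% exponent, [lenW expW] = [32 8] matches IEEE-754 single precision. quantize(q,x)
% rounds x to this format; for illustration, quantize(q,pi) gives approximately
% 3.1415927, the single-precision value of pi.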
%% Extracting Coefficients
L= length(net.layers); %L = No. of Layers
B= net.b; % Extracting biases
LW= net.LW; % Extracting layer weights
IW= net.IW; % Extracting input layer weights
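% In MATLAB's network object, IW{1,1} holds the input-to-first-layer weights,
% LW{k,k-1} holds the weights from layer k-1 into layer k, and net.b{k} is the
% bias vector of layer k; these are the coefficients read back inside the layer
% loop further below.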
N_in_o=quantize(q,x); % Quantization of the input.
%% Normalization
in_range=net.inputs{:}.range; % The range of the training
in_min= in_range(:,1);        % dataset should be extracted
                              % here from the network variable,
in_max= in_range(:,2);        % and quantized so that they will
                              % also be neglected

f2=(in_max-in_min); % The input process has
K=2./f2; % a K = [-1 to 1] / [input range]

f1=N_in_o-in_min; % first operation in the normalization part:
                  % subtracting in_min
f4=f1.*K;         % then multiplying by K

f5=f4-1; % then just subtracting one (adding y_min which is {-1})

N_in_Nor=f5; % N_in_Nor is the variable that always holds the input matrix to each layer;
             % it is updated after each layer
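% For reference, the three steps above are equivalent to the single expression
%   N_in_Nor = 2*(N_in_o - in_min)./(in_max - in_min) - 1;
% i.e. a mapminmax-style scaling of each input row into the range [-1, 1].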

%----------- The same operations above will be repeated but with quantization-----------%
N_in= N_in_o ;

in_minq=quantize(q,in_min);
in_maxq=quantize(q,in_max);

qf2=(in_maxq-in_minq); qf2=quantize(q,qf2);
Kq=2./f2; Kq=quantize(q,Kq);

qf1=N_in-in_minq;   thf1= 0.18*(2^(-2*p))*((qf1.^2));   qf1=quantize(q,qf1);
b_qf1= -(in_minq-in_min).*ones(size(qf1));
% The th* variables all over the code refer to the theoretical QNP of the corresponding stage

qf4=qf1.*Kq;   thf4=(thf1.*Kq.^2) + 0.18*(2^(-2*p)).*((qf4.^2));   qf44=quantize(q,qf4);
b_qf4=(Kq-K).*qf1 + b_qf1.*Kq;
qf5=qf44-1;    thf5=thf4 + 0.18*(2^(-2*p))*((qf5.^2));             qf5=quantize(q,qf5);
b_qf5= b_qf4;
N_in_Norq=qf5; % qf5 is the output of the quantized input-process block; it is
               % the same as f5 above, but quantized

z_th_qnp=0;
z_sim_qnp=0;
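% Throughout the code, each rounding of a value x is modeled as adding noise with
% power 0.18*(2^(-2*p))*x.^2 (a relative-error model of floating-point rounding;
% with p = 24 this constant 0.18*2^(-48) is roughly 6.4e-16). The th* variables
% accumulate this theoretical QNP stage by stage, while the b_* variables track
% the deterministic offset caused by quantizing the constants themselves
% (in_min, K, the weights and the biases).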

%% One neuron QNP Loop
for k=1:L % L = No. of Layers

 if k==1                          % Here the weights are combined into one
    W=IW{1,1};                    % variable that will be used inside the
    qW=quantize(q,W);             % loops; they are also quantized so that
 else                             % their quantization will be ignored
    W=LW{k,k-1};                  % in these calculations
    qW=quantize(q,W);
 end
 
 m=width(W);                                        % m defines the number of inputs to the current layer
 n= height(W);                                        % n is the number of neurons in this layer
 acfu=net.layers{k}.transferFcn;               % acfu is the AF of this layer

 N_out=zeros(n,length(x(1,:)));                 % Resetting the neuron output vectors and defining
 N_outq=zeros(n,length(x(1,:)));                % their new size.
     
      if k ~=1                    % thf5 is the QNP at the output of the previous
      thf5=thnuu;                 % block (the input process); it is updated after
      b_qf5=b_nuu;                % the first layer.
      end

thnu=zeros(n,length(x(1,:))); % resetting the QNP at the output of a neuron (size)
b_nu=zeros(n,length(x(1,:)));
for j=1:n % n = No. of Neurons in the Layer k

     Mul=zeros(m,length(x(1,:)));                 % resetting and size definition of the multiplication
     Mulq=zeros(m,length(x(1,:)));                % vector, which will be the input * weight
      
     for i=1:m                                                      % m is the no. of Multiplications in this neuron
            Mul(i,:)= N_in_Nor(i,:).*W(j,i);                % multiply each input by the corresponding weight
            Mulq(i,:)= N_in_Norq(i,:).*qW(j,i);           Mulqq=quantize(q,Mulq);    
     end                                                                                               % here, thmul finds QNP
            thmul=(W(j,:).^2)*thf5 + sum( 0.18*(2^(-2*p))*((Mulq.^2)) );    % of all multiplications
      % thmul sums the QNP of the multiplications, that sum should be
      % done later after the SoE, but it is the same, so this is easier.
          d_w=qW(j,:)-W(j,:);     
      b_mul =    (d_w'.*ones(size(N_in_Nor))).*N_in_Nor + ...
          ((W(j,:)'.*ones(size(N_in_Nor)))).* b_qf5;


      SoE= sum(Mul,1);           
      SoEq= sum(Mulqq,1);  thsom=thmul + 0.18*(2^(-2*p))*((SoEq.^2));
      SoEqq=quantize(q,SoEq);
      b_soe= sum(b_mul);                                            %   b_soe= vecnorm(b_mul)

            b=B{k} ;                                                           % The bias vector is quantized here to
      Biased= SoE + b(j);                                                % neglect the coefficient quantization
            bq=  quantize(q,b);  
      Biasedq= SoEqq + bq(j);    thbi=thsom + 0.18*(2^(-2*p))*((Biasedq.^2));
      Biasedqq=quantize(q,Biasedq);                 %Adding the Bias and quantizing the result.
     
      b_bis= b_soe + (bq(j)-b(j));
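      % The activation-function branches below all follow the same first-order
      % pattern: for y = g(x), the theoretical QNP propagates as
      %   th(y) = (g'(x)).^2 .* th(x) + 0.18*(2^(-2*p))*y.^2,
      % and the coefficient offset as b(y) = g'(x).*b(x); e.g. for exp the factor
      % is Ae.^2, and for 1./x it is Ar.^4 (since (1/x^2)^2 = 1/x^4).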

      %   Activation Function
          switch acfu

              case 'purelin'                                % For each AF there are two paths: one for
             N_out(j,:)=Biased;                          % the normal BPANN and the other for the
             N_outq(j,:)=Biasedqq;                    % quantized one, so there will be
             thnu(j,:)=thbi;                                   % 'N_out' and 'N_outq' inside each case.
            %The PureLin function is just G(z)=z (paper notation)
            %QNP(G)= QNP(z) , no added QNP.
             b_nu(j,:)=b_bis;

                case 'tansig'
             Ag=(-2)*Biased;
             Ae=exp(Ag);
             As=Ae+1;
             Ar=1./ As;
             Agg=(2)*Ar;
             Af=Agg-1;
             N_out(j,:)= Af; 

            qAg=(-2)*Biasedqq;   thAg=thbi.*4;  Agq=quantize(q,qAg);   b_Ag= -b_bis.*2;     

            qAe=exp(Agq);  thAe=thAg.*((qAe.^2)) + 0.18*(2^(-2*p))*((qAe.^2));
              Aeq=quantize(q,qAe); b_Ae=qAe.*b_Ag;
            qAs=Aeq+1;      thAs=thAe + 0.18*(2^(-2*p))*((qAs.^2)); 
              Asq=quantize(q,qAs);   b_As=b_Ae;  
            qAr=1./ Asq;      thAr=((qAr.^4)).*thAs + 0.18*(2^(-2*p))*((qAr.^2)); 
              Arq=quantize(q,qAr);    b_Ar=-b_As./(Asq.^2);
            Agg=(2)*Arq;     thAgg=thAr.*4;                                      
             Agg= quantize(q,Agg);  b_Agg=b_Ar.*2;
            qAf=Agg-1;        thAf=thAgg + 0.18*(2^(-2*p))*((qAf.^2));         
              qAf=quantize(q,qAf);      b_Af=  b_Agg;        
             N_outq(j,:)= qAf;                                        
            thnu(j,:)=thAf;                                            
            b_nu(j,:)=b_Af;

              case 'logsig'
             Am=(-1)*Biased;                              
             Ae=exp(Am);
             As=Ae+1;
             Ar=1./ As;
             N_out(j,:)= Ar;
                                                                                                                                                  
             Am=(-1)*Biasedqq;          b_Ag= -b_bis;          
             Ae=exp(Am);   thAe=thbi.*(Ae.^2) + 0.18*(2^(-2*p))*(Ae.^2);  
                Ae=quantize(q,Ae);      b_Ae=Ae.*b_Ag;
             As=Ae+1;        thAs=thAe + 0.18*(2^(-2*p))*((As.^2));     
                As=quantize(q,As);      b_As=b_Ae;
             Ar=1./ As;        thAr= (Ar.^4).*thAs + 0.18*(2^(-2*p))*((Ar.^2)); 
                Ar=quantize(q,Ar);        b_Ar=-b_As./(As.^2);
             N_outq(j,:)= Ar; 
             thnu(j,:)=thAr;
             b_nu(j,:)=b_Ar;

                case 'radbas'
              Ap=Biased.*Biased;
              Am=(-1)*Ap;
              Ae=exp(Am);
              N_out(j,:)= Ae;         

             Ap=Biasedqq.*Biasedqq;  thAp=(4.*Ap.*thbi) + 0.18*(2^(-2*p))*((Ap.^2));
                   Ap=quantize(q,Ap);       b_Ap=2*Ap.*b_bis./Biasedqq;
              Am=(-1)*Ap;                        b_Am=b_Ap.*(-1);
              Ae=exp(Am);               thAe=thAp.*((Ae.^2)) + 0.18*(2^(-2*p))*((Ae.^2)); 
                  Ae=quantize(q,Ae);         b_Ae=Ae.*b_Am;
              N_outq(j,:)= Ae;
              thnu(j,:)=thAe;
             b_nu(j,:)=b_Ae;

                case 'tribas'

                  A_abs=abs(Biased);
                  A_1minus=1-A_abs;
                  A_max=max(A_1minus,0);
                  N_out(j,:)= A_max;               
                  A_absq=abs(Biasedqq);                                  
              ii=size(Biasedqq);
             ii=ii(2);
             b_abs=zeros(size(Biasedqq));
            for z= 1:ii
                if Biasedqq(1,z) < 0
                   b_abs(1,z)=-b_bis(1,z);
                elseif Biasedqq(1,z) > 0
                  b_abs(1,z)=b_bis(1,z);
                else
                  b_abs(1,z)=0;
                end
            end 
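
The listing above is only an excerpt and stops inside the 'tribas' branch. As a purely illustrative sketch (not part of the original listing), the per-layer bookkeeping implied by the variables thnuu, b_nuu, z_th_qnp and z_sim_qnp would typically be closed out along these lines:

% Illustrative sketch only; the closing code is not shown in the excerpt above.
% -- at the end of each pass through the layer loop --
N_in_Nor  = N_out;      % outputs of layer k become the inputs of layer k+1
N_in_Norq = N_outq;
thnuu     = thnu;       % theoretical QNP handed to the next layer
b_nuu     = b_nu;       % coefficient-quantization offset handed to the next layer
% -- after the final layer --
z_sim_qnp = mean((N_outq(:) - N_out(:)).^2);   % measured quantization-noise power
z_th_qnp  = mean(thnu(:));                     % predicted (theoretical) QNP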

⛄ 3. Results


⛄ 4. MATLAB Version and References

1. MATLAB version
2014a

2. References
[1] 由伟, 刘亚秀. MATLAB数据分析教程[M]. 清华大学出版社, 2020.
[2] 王岩, 隋思涟. 试验设计与MATLAB数据分析[M]. 清华大学出版社, 2012.

3. Note
The introduction in this part is taken from the Internet and is for reference only; if it infringes any rights, please contact us for removal.
