MATLAB: netsum summation, dotprod dot product, and the newcf cascade-forward ANN

>> help dotprod
 dotprod Dot product weight function.
 
   Weight functions apply weights to an input to get weighted inputs.
 
   dotprod(W,P) returns the dot product W * P of a weight matrix W and
   an input P.
 
   dotprod('size',S,R) returns the size of a weight matrix required by
   this function to weight an input vector with R elements for a layer
   with S neurons.
 
   dotprod('dp',W,P,Z,FP) returns the derivative of Z with respect to P.
   dotprod('dw',W,P,Z,FP) returns the derivative of Z with respect to W.
 
   Here we define a random weight matrix W and input vector P
   and calculate the corresponding weighted input Z.
 
     W = rand(4,3);
     P = rand(3,1);
     Z = dotprod(W,P)
 
  See also sim, ddotprod, dist, negdist, normprod.

    Reference page in Help browser
       doc dotprod

>> a=[1 2;3 4]

a =

     1     2
     3     4

>> b=[3;9]

b =

     3
     9

>> dotprod(a,b)

ans =

    21
    45

>>
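
The other calling modes quoted in the help can be exercised the same way. A minimal sketch, assuming the NNET 6.x syntax shown above; FP is dotprod's function-parameter argument, passed here as an empty placeholder (an assumption):

    W = rand(4,3);
    P = rand(3,1);
    Z = dotprod(W,P);        % weighted input, identical to W*P
    dotprod('size',4,3)      % weight size for a layer of S=4 neurons, R=3 inputs
    FP = struct;             % dotprod defines no function parameters
    dotprod('dp',W,P,Z,FP)   % per the help: derivative of Z with respect to P
    dotprod('dw',W,P,Z,FP)   % per the help: derivative of Z with respect to W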

Characteristics of cascade-forward ANNs

1) Each layer receives weights from every preceding layer (the layer before it, the one before that, and so on); the first layer's weights come from the network input (see the connectivity sketch after the help excerpt below).

2) DOTPROD is the weight function.

3) NETSUM is the net input function.

4) The transfer functions are user-specified.

5) Each layer's weights and biases are initialized with INITNW.

6) Adaption is done with TRAINS; training uses the specified training function.

 

 Cascade-forward networks consist of Nl layers using the DOTPROD
     weight function, NETSUM net input function, and the specified
     transfer functions.
 
     The first layer has weights coming from the input.  Each subsequent
     layer has weights coming from the input and all previous layers.
     All layers have biases.  The last layer is the network output.
 
     Each layer's weights and biases are initialized with INITNW.
 
     Adaption is done with TRAINS which updates weights with the
     specified learning function. Training is done with the specified
     training function. Performance is measured according to the specified
     performance function.
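
Both descriptions can be verified on a constructed network. In the NNET 6.x network object, inputConnect marks which layers receive the raw input, and layerConnect(i,j) marks whether layer i receives weights from layer j; a sketch, assuming those object fields:

    P = [0 1 2 3 4 5 6 7 8 9 10];
    T = [0 1 2 3 4 3 2 1 2 3 4];
    net = newcf(P,T,[5 5]);   % two hidden layers + output layer = 3 layers
    net.inputConnect          % expected [1; 1; 1]: every layer sees the input
    net.layerConnect          % expected [0 0 0; 1 0 0; 1 1 0]: each layer
                              % receives weights from all earlier layers
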
  newcf Create a cascade-forward backpropagation network.
 
   Obsoleted in R2010b NNET 7.0.  Last used in R2010a NNET 6.0.4.
   The recommended function is cascadeforwardnet.
 
   Syntax
 
    net = newcf(P,T,[S1 S2...S(N-1)],{TF1 TF2...TFN},BTF,BLF,PF,IPF,OPF,DDF)
 
   Description
 
     newcf(P,T,[S1 S2...S(N-1)],{TF1 TF2...TFN},BTF,BLF,PF,IPF,OPF,DDF) takes,
       P  - RxQ1 matrix of Q1 representative R-element input vectors.
       T  - SNxQ2 matrix of Q2 representative SN-element target vectors.
       Si  - Sizes of N-1 hidden layers, S1 to S(N-1), default = [].
             (Output layer size SN is determined from T.)
       TFi - Transfer function of ith layer. Default is 'tansig' for
             hidden layers, and 'purelin' for output layer.
       BTF - Backprop network training function, default = 'trainlm'.
       BLF - Backprop weight/bias learning function, default = 'learngdm'.
       PF  - Performance function, default = 'mse'.
       IPF - Row cell array of input processing functions.
             Default is {'fixunknowns','remconstantrows','mapminmax'}.
       OPF - Row cell array of output processing functions.
             Default is {'remconstantrows','mapminmax'}.
       DDF - Data division function, default = 'dividerand'.
     and returns an N layer cascade-forward backprop network.
 
     The transfer functions TFi can be any differentiable transfer
     function such as TANSIG, LOGSIG, or PURELIN.
 
     The training function BTF can be any of the backprop training
     functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
 
     *WARNING*: TRAINLM is the default training function because it
     is very fast, but it requires a lot of memory to run.  If you get
     an "out-of-memory" error when training try doing one of these:
 
     (1) Slow TRAINLM training, but reduce memory requirements, by
         setting NET.efficiency.memoryReduction to 2 or more. (See HELP TRAINLM.)
     (2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
     (3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
 
     The learning function BLF can be either of the backpropagation
     learning functions such as LEARNGD, or LEARNGDM.
 
     The performance function can be any of the differentiable performance
     functions such as MSE or MSEREG.
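
Following the *WARNING* above, the memory/speed trade-offs can be applied as below (a sketch; NET.efficiency.memoryReduction is the property named in the help):

    net = newcf(P,T,5);                  % BTF defaults to 'trainlm'
    net.efficiency.memoryReduction = 2;  % option (1): slower trainlm, less memory
    net.trainFcn = 'trainbfg';           % option (2): more memory-efficient than trainlm
    net.trainFcn = 'trainrp';            % option (3): slower again, least memory
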
Examples

1) One hidden layer with 5 neurons

>>  P = [0 1 2 3 4 5 6 7 8 9 10]

P =

     0     1     2     3     4     5     6     7     8     9    10

>>  T = [0 1 2 3 4 3 2 1 2 3 4]

T =

     0     1     2     3     4     3     2     1     2     3     4

>> net = newcf(P,T,5);

>> net.trainParam.epochs = 50;
>> net = train(net,P,T);

2) Two hidden layers with 5 neurons each

>> net = newcf(P,T,[5 5]);
>> net.trainParam.epochs = 50;
>> net = train(net,P,T);
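
After training, the network response can be compared with the targets. And since newcf was obsoleted in R2010b, the same two-hidden-layer model can be built with the replacement the help recommends, cascadeforwardnet, which takes only the hidden-layer sizes (a sketch):

    >> Y = sim(net,P);          % simulate the trained network
    >> plot(P,T,'o',P,Y,'-')    % targets vs. network response

    % Modern equivalent (R2010b and later):
    net2 = cascadeforwardnet([5 5]);
    net2.trainParam.epochs = 50;
    net2 = train(net2,P,T);
    Y2 = net2(P);               % a trained network is called like a function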

How netsum performs its summation

>> a

a =

     1     2
     3     4

>> b=[5,6;7,8]

b =

     5     6
     7     8

>> netsum(a,b)

ans =

     6     8
    10    12

>>
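
This element-wise sum is exactly how a layer's net input is formed: netsum combines the weighted input produced by dotprod with the bias before the transfer function is applied. A sketch of one layer's forward pass, using the varargs calling convention from the transcript above (newer releases expect the summands in a cell array instead):

    W = rand(4,3);  b = rand(4,1);  P = rand(3,1);
    Z = dotprod(W,P);   % weighted input, W*P
    n = netsum(Z,b);    % net input: element-wise sum Z + b
    a = tansig(n);      % layer output through a tansig transfer function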
