MATLAB Neural Networks and Their Applications (matlab神经网络及其应用.ppt)

MATLAB Neural Networks and Their Applications: The BP Network as an Example
Lecturer: Wang Maozhi, Associate Professor (wangmz@)

1 A Prediction Problem

Given: a set of standard input and output data (see the attachment).
Find: the predicted outputs for another set of inputs.
Background: omitted.

2 The BP Network

3 The newff Command in MATLAB

NEWFF creates a feed-forward backpropagation network.

Syntax:
  net = newff
  net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)

Parameters of newff:
NET = NEWFF creates a new network with a dialog box.
NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes
  PR  - Rx2 matrix of min and max values for R input elements.
  Si  - Size of the ith layer, for Nl layers.
  TFi - Transfer function of the ith layer, default = 'tansig'.
  BTF - Backprop network training function, default = 'trainlm'.
  BLF - Backprop weight/bias learning function, default = 'learngdm'.
  PF  - Performance function, default = 'mse'.
and returns an Nl-layer feed-forward backprop network.

The transfer functions TFi can be any differentiable transfer function, such as TANSIG, LOGSIG, or PURELIN. The training function BTF can be any of the backprop training functions, such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.

*WARNING*: TRAINLM is the default training function because it is very fast, but it requires a lot of memory to run. If you get an "out-of-memory" error when training, try one of the following:
(1) Slow TRAINLM training, but reduce memory requirements, by setting NET.trainParam.mem_reduc to 2 or more (see HELP TRAINLM).
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP, which is slower but more memory efficient than TRAINBFG.

The learning function BLF can be either of the backpropagation learning functions LEARNGD or LEARNGDM. The performance function PF can be any of the differentiable performance functions, such as MSE or MSEREG.

4 The train Command in MATLAB

TRAIN trains a neural network.

Syntax:
  [net,tr,Y,E,Pf,Af] = train(NET,P,T,Pi,Ai,VV,TV)

Description:
TRAIN trains a network NET according to NET.trainFcn and NET.trainParam.

Input parameters:
TRAIN(NET,P,T,Pi,Ai) takes
  NET - Network.
  P   - Network inputs.
  T   - Network targets, default = zeros.
  Pi  - Initial input delay conditions, default = zeros.
  Ai  - Initial layer delay conditions, default = zeros.
  VV  - Structure of validation vectors, default = [].
  TV  - Structure of test vectors, default = [].

Output parameters:
and returns
  NET - New network.
  TR  - Training record (epoch and perf).
  Y   - Network outputs.
  E   - Network errors.
  Pf  - Final input delay conditions.
  Af  - Final layer delay conditions.

Notes:
Note that T is optional and need only be used for networks that require targets. Pi and Pf are also optional and need only be used for networks that have input or layer delays.

Input data structures:
The cell array format is easiest to describe. It is most convenient for networks with multiple inputs and outputs, and allows sequences of inputs to be presented:
  P  - NixTS cell array, each element P{i,ts} is an RixQ matrix.
  T  - NtxTS cell array, each element T{i,ts} is a VixQ matrix.
  Pi - NixID cell array, each element Pi{i,k} is an RixQ matrix.
  Ai - NlxLD cell array, each element Ai{i,k} is an SixQ matrix.
  Y  - NOxTS cell array, each element Y{i,ts} is a UixQ matrix.
  E  - NtxTS cell array, each element E{i,ts} is a VixQ matrix.
  Pf - NixID cell array, each element Pf{i,k} is an RixQ matrix.
  Af - NlxLD cell array, each element Af{i,k} is an SixQ matrix.
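Putting newff and train together, a minimal sketch of the prediction workflow from Section 1. The data values, layer sizes, and training settings below are illustrative assumptions standing in for the attachment's actual standard data, not the lecture's real dataset:

```matlab
% Illustrative training data: each column of P is one input sample
% (2 features per sample), and T holds the corresponding target outputs.
P = [0.1 0.3 0.5 0.7 0.9;
     0.2 0.4 0.6 0.8 1.0];
T = [0.12 0.28 0.55 0.70 0.88];

% Create a two-layer BP network: 10 tansig hidden neurons, 1 purelin output.
% minmax(P) supplies the Rx2 matrix of input ranges expected by PR.
net = newff(minmax(P), [10 1], {'tansig', 'purelin'}, 'trainlm');

% Training parameters (illustrative values).
net.trainParam.epochs = 1000;   % maximum number of training epochs
net.trainParam.goal   = 1e-4;   % performance (mse) goal

% Train on the standard data, then predict outputs for new inputs with sim.
[net, tr] = train(net, P, T);
Pnew = [0.15 0.65;
        0.25 0.75];             % two new input samples, one per column
Ynew = sim(net, Pnew);          % predicted outputs for the new inputs
```

Note the column-per-sample convention: train and sim both expect inputs as an RxQ matrix (R features, Q samples), matching the PR range matrix passed to newff.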
