
  • This series draws on Professor Xue Dingyu's earlier monograph "MATLAB Solutions to Advanced Applied Mathematical Problems" (《高等应用数学问题的MATLAB求解》). Selected exercises are reproduced for beginners as study material only; copyright belongs to the original author, with our thanks.

  • If anything here is inappropriate, please contact us for removal.

  • Main content adapted from: Xue Dingyu, MATLAB Solutions to Advanced Applied Mathematical Problems: Solutions to Exercises (preprint).


Non-Traditional Solutions to Mathematical Problems | Comparing Old and New Versions of the Neural Network Toolbox (1)

Universities today run many different MATLAB releases, and students often find older versions installed in computer labs and classrooms. This post compares the older neural network toolbox API with the newer one; if you are interested, try both.


Problem:

Given the sample points (xi, yi) listed below, use a neural network to fit the data and plot the corresponding function curve over x ∈ (1, 10).

You can also try different network structures and training algorithms, and compare the neural-network fit with the piecewise cubic polynomial interpolation introduced earlier.

xi    1      2      3      4      5      6      7      8      9      10
yi    244.0  221.0  208.0  208.0  211.5  216.0  219.0  221.0  221.5  220.0


Solution:

Choose five hidden-layer nodes, with tansig() as the hidden-layer transfer function and purelin() at the output layer; the network can then be constructed and trained as follows.

>> x=1:10;                                        % sample inputs
y=[244.0,221.0,208.0,208.0,211.5,216.0,219.0,221.0,221.5,220.0];  % sample outputs
net=newff([0,10],[5,1],{'tansig','purelin'});     % input range [0,10]; 5 tansig hidden nodes, 1 purelin output
net.trainParam.epochs=100;                        % maximum number of training epochs
net=train(net,x,y);                               % train the network
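To draw the fitted curve the problem asks for, and to compare it against piecewise cubic interpolation, one might continue along these lines (a sketch for the old toolbox; sim() evaluates a trained network, and interp1() with the 'pchip' option gives the piecewise cubic fit):

     xx=1:0.1:10;                        % dense grid over (1,10)
     yy=sim(net,xx);                     % evaluate the trained network
     yp=interp1(x,y,xx,'pchip');         % piecewise cubic interpolation, for comparison
     plot(x,y,'o',xx,yy,'-',xx,yp,'--')  % samples, network fit, pchip fit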

If you run the newff commands above in a newer release, you will get an error, and the following windows pop up:

[Screenshots: the error and warning dialogs produced when running newff in a newer release]

Clearly, the new toolbox is much more pleasant to use!
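In newer releases the same fit can be written with feedforwardnet, as the help excerpts below recommend. A minimal sketch (the hidden-layer size and epoch limit simply mirror the old script):

     x=1:10;
     y=[244.0,221.0,208.0,208.0,211.5,216.0,219.0,221.0,221.5,220.0];
     net=feedforwardnet(5);              % 5 hidden neurons; trainlm by default
     net.trainParam.epochs=100;          % maximum number of training epochs
     net=train(net,x,y);                 % train() also configures input/output sizes
     xx=1:0.1:10;
     plot(x,y,'o',xx,net(xx),'-')        % evaluate with net(xx) and plot the curve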


Extension problem:

Suppose the measured data are given in the table below. Use a neural network to interpolate at points (x, y) over the region from (0.1, 0.1) to (1.1, 1.1), and plot the neural-network interpolation result as a three-dimensional surface.

[Table of measured data (image not reproduced here)]

Hint: the function newff has been deprecated; in newer versions, use feedforwardnet instead.
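A rough sketch of how this might be set up with the new toolbox, assuming the table's measurements are stored in vectors x0, y0 and a matrix z0 with z0(i,j) the value at (x0(j), y0(i)) (hypothetical names, since the table is not reproduced here):

     [xg0,yg0]=meshgrid(x0,y0);          % grid of measurement points
     P=[xg0(:)'; yg0(:)'];               % 2xN inputs: each column is one (x,y) point
     T=z0(:)';                           % 1xN targets
     net=feedforwardnet(10);             % hidden-layer size chosen arbitrarily
     net=train(net,P,T);
     [xg,yg]=meshgrid(0.1:0.1:1.1);      % points to interpolate
     zg=net([xg(:)'; yg(:)']);           % evaluate the network on the grid
     surf(xg,yg,reshape(zg,size(xg)))    % 3-D surface of the interpolation result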


>> help newff

newff Create a feed-forward backpropagation network.

Obsoleted in R2010b NNET 7.0.  Last used in R2010a NNET 6.0.4. 

The recommended function is feedforwardnet.

Syntax

     net = newff(P,T,S)

     net = newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF)

Description

newff(P,T,S) takes,

P  - RxQ1 matrix of Q1 representative R-element input vectors.

T  - SNxQ2 matrix of Q2 representative SN-element target vectors.

Si  - Sizes of N-1 hidden layers, S1 to S(N-1), default = []. (Output layer size SN is determined from T.) and returns an N layer feed-forward backprop network.

newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF) takes optional inputs, 

TFi - Transfer function of ith layer. Default is 'tansig' for hidden layers, and 'purelin' for output layer.

BTF - Backprop network training function, default = 'trainlm'.

BLF - Backprop weight/bias learning function, default = 'learngdm'.

PF  - Performance function, default = 'mse'.

IPF - Row cell array of input processing functions.

Default is {'fixunknowns','remconstantrows','mapminmax'}.

OPF - Row cell array of output processing functions.

Default is {'remconstantrows','mapminmax'}.

DDF - Data division function, default = 'dividerand'; and returns an N layer feed-forward backprop network.

The transfer functions TF{i} can be any differentiable transfer function such as TANSIG, LOGSIG, or PURELIN.

The training function BTF can be any of the backprop training functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.

*WARNING*: 

TRAINLM is the default training function because it is very fast, but it requires a lot of memory to run.  If you get an "out-of-memory" error when training try doing one of these:

(1) Slow TRAINLM training, but reduce memory requirements, by setting NET.efficiency.memoryReduction to 2 or more. (See HELP TRAINLM.)

(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.

(3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.

The learning function BLF can be either of the backpropagation learning functions such as LEARNGD, or LEARNGDM.

The performance function can be any of the differentiable performance functions such as MSE or MSEREG.

Examples

     [inputs,targets] = simplefitdata;

     net = newff(inputs,targets,20);

     net = train(net,inputs,targets);

     outputs = net(inputs);

     errors = outputs - targets;

     perf = perform(net,outputs,targets)

Algorithm

Feed-forward networks consist of Nl layers using the DOTPROD weight function, NETSUM net input function, and the specified transfer functions. 

The first layer has weights coming from the input. Each subsequent layer has a weight coming from the previous layer.  All layers have biases.  The last layer is the network output.

Each layer's weights and biases are initialized with INITNW.

Adaption is done with TRAINS which updates weights with the specified learning function. Training is done with the specified training function. Performance is measured according to the specified performance function.

See also newcf, newelm, sim, init, adapt, train, trains
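Per the memory warning in the help above, a quick sketch of the suggested workarounds (the property and training-function names come from the quoted help; the hidden-layer size of 20 is arbitrary):

     net=feedforwardnet(20);             % trainlm is the default trainer
     net.efficiency.memoryReduction=2;   % slower trainlm, but lower memory use
     net2=feedforwardnet(20,'trainbfg'); % or switch to a more memory-efficient trainer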


For comparison:

>> help feedforwardnet

 feedforwardnet Feedforward neural network.

   Two (or more) layer feedforward networks can implement any finite

   input-output function arbitrarily well given enough hidden neurons.

   feedforwardnet(hiddenSizes,trainFcn) takes a 1xN vector of N hidden

   layer sizes, and a backpropagation training function, and returns

   a feed-forward neural network with N+1 layers.

   Input, output and output layers sizes are set to 0.  These sizes will

   automatically be configured to match particular data by train. Or the

   user can manually configure inputs and outputs with configure.

   Defaults are used if feedforwardnet is called with fewer arguments.

   The default arguments are (10,'trainlm').

   Here a feed-forward network is used to solve a simple fitting problem:

     [x,t] = simplefit_dataset;

     net = feedforwardnet(10);

     net = train(net,x,t);

     view(net)

     y = net(x);

     perf = perform(net,t,y)

   See also fitnet, patternnet, cascadeforwardnet.


Go give it a try!
