MATLAB neural network training accuracy: how the amount of training data affects the training error in a MATLAB neural network...

This article walks through creating an Elman network, covering the syntax and parameters of the newelm function (input ranges, layer sizes, transfer functions, and so on). It also shows how to train the network on training data and uses an example to explain how an Elman network recognizes consecutive events. A warning notes that training algorithms taking large step sizes may be unsuitable for Elman networks, because the gradient they use is only an approximation, which can make learning difficult. Finally, it outlines how Elman networks are trained and adapted, and how performance is measured.

The help text varies between toolbox versions, so the description here differs from newer releases; this is how it reads in the 2007 version.

help newelm

NEWELM Create an Elman backpropagation network.

Syntax

net = newelm

net = newelm(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)

Description

NET = NEWELM creates a new network with a dialog box.

NET = NEWELM(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes several arguments,

PR  - Rx2 matrix of min and max values for R input elements.

Si  - Size of ith layer, for Nl layers.

TFi - Transfer function of ith layer, default = 'tansig'.

BTF - Backprop network training function, default = 'traingdx'.

BLF - Backprop weight/bias learning function, default = 'learngdm'.

PF  - Performance function, default = 'mse'.

and returns an Elman network.
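For reference, here is a call that spells out every argument explicitly (a minimal sketch: the 3-input range and layer sizes are illustrative choices, and the last three arguments simply restate the defaults listed above):

% Two-layer Elman network for 3 inputs in [-1,1], with all functions named explicitly.
PR  = [-1 1; -1 1; -1 1];                    % Rx2 min/max matrix for R = 3 inputs
net = newelm(PR,[8 1],{'tansig','purelin'},'traingdx','learngdm','mse');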

The training function BTF can be any of the backprop training

functions such as TRAINGD, TRAINGDM, TRAINGDA, TRAINGDX, etc.

*WARNING*: Algorithms which take large step sizes, such as TRAINLM,

and TRAINRP, etc., are not recommended for Elman networks.  Because

of the delays in Elman networks the gradient of performance used

by these algorithms is only approximated making learning difficult

for large step algorithms.
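In practice this means keeping the trainer as one of the small-step gradient-descent functions; a hedged sketch of switching an existing network back (the epoch and goal values are illustrative, not prescribed by the help text):

% Use a small-step gradient-descent trainer rather than trainlm/trainrp.
net.trainFcn = 'traingdx';            % adaptive learning rate with momentum
net.trainParam.epochs = 500;          % illustrative stopping criteria
net.trainParam.goal   = 0.01;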

The learning function BLF can be either of the backpropagation

learning functions such as LEARNGD, or LEARNGDM.

The performance function can be any of the differentiable performance

functions such as MSE or MSEREG.
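On a created network these choices are also visible (and changeable) as object properties; a hedged sketch of inspecting them, assuming a two-layer net as above:

net.performFcn                        % e.g. 'mse'
net.performFcn = 'msereg';            % switch to regularized MSE (illustrative)
net.inputWeights{1,1}.learnFcn        % learning function of the input weight, e.g. 'learngdm'
net.layerWeights{2,1}.learnFcn        % learning function of the layer 1 -> layer 2 weight
net.biases{1}.learnFcn                % learning function of layer 1's bias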

Examples

Here is a series of Boolean inputs P, and another sequence T

which is 1 wherever P has had two 1's in a row.

P = round(rand(1,20));

T = [0 (P(1:end-1)+P(2:end) == 2)];

We would like the network to recognize whenever two 1's

occur in a row.  First we arrange these values as sequences.

Pseq = con2seq(P);

Tseq = con2seq(T);
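con2seq only reshapes the data: each 1x20 row vector becomes a 1x20 cell array with one scalar per time step, and seq2con converts back. A quick check, assuming the P and T above:

isequal(cell2mat(Pseq),P)             % 1 (true): same values, sequence form
P2 = seq2con(Pseq);                   % inverse conversion; P2{1} equals P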

Next we create an Elman network whose input varies from 0 to 1,

and has ten hidden neurons and 1 output.

net = newelm([0 1],[10 1],{'tansig','logsig'});

Then we train the network with a mean squared error goal of

0.1, and simulate it.

net.trainParam.goal = 0.1;

net = train(net,Pseq,Tseq);

Y = sim(net,Pseq)
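To judge how well the network tracks the target, the simulated sequence can be converted back to a row vector and thresholded; a hedged sketch (the 0.5 cutoff is an arbitrary choice, not part of the help example):

Yvec = cell2mat(Y);                   % logsig outputs lie between 0 and 1
Ybin = Yvec > 0.5;                    % illustrative decision threshold
acc  = sum(Ybin == T)/numel(T)        % fraction of time steps matched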

Algorithm

Elman networks consist of Nl layers using the DOTPROD

weight function, NETSUM net input function, and the specified

transfer functions.

The first layer has weights coming from the input.  Each subsequent

layer has a weight coming from the previous layer.  All layers except

the last have a recurrent weight. All layers have biases.  The last

layer is the network output.

Each layer's weights and biases are initialized with INITNW.

Adaptation is done with TRAINS, which updates weights with the

specified learning function. Training is done with the specified

training function. Performance is measured according to the specified

performance function.
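Those steps map onto the usual toolbox calls; a hedged sketch of re-initializing and incrementally adapting the example network (the number of adapt passes is arbitrary):

net = init(net);                      % re-initialize weights/biases via initnw
for k = 1:10                          % illustrative number of passes over the sequence
    [net,Y,E] = adapt(net,Pseq,Tseq); % weights updated by the learning function
end
mse(cell2mat(E))                      % performance after the last pass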

See also newff, newcf, sim, init, adapt, train, trains

Reference page in Help browser

doc newelm
