Training method | Training function |
---|---|
Gradient descent | traingd |
Gradient descent with momentum | traingdm |
Resilient backpropagation | trainrp |
Quasi-Newton (BFGS) | trainbfg |
Network construction
net = newff(PR,[S1 S2 ··· SN],{TF1 TF2 ··· TFN},BTF)
- PR: an R×2 matrix holding the minimum and maximum of each of the R input dimensions
- [S1 S2 ··· SN]: the number of neurons in each layer
- {TF1 TF2 ··· TFN}: the transfer function of each layer
- BTF: the name of the training function
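To make the layer-size and transfer-function arguments concrete, here is a minimal NumPy sketch of the forward pass such a network computes. This is an illustrative reimplementation, not the toolbox itself; the helper names `init_net` and `forward` are made up for this example, and `tansig`/`purelin` mirror the MATLAB transfer functions of the same names.

```python
import numpy as np

def tansig(x):
    # MATLAB's tansig is the hyperbolic tangent sigmoid
    return np.tanh(x)

def purelin(x):
    # purelin is the identity (linear) transfer function
    return x

def init_net(layer_sizes, rng):
    # One (W, b) pair per layer; layer_sizes plays the role of [R S1 ... SN]
    Ws, bs = [], []
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        Ws.append(rng.standard_normal((fan_out, fan_in)) * 0.1)
        bs.append(np.zeros((fan_out, 1)))
    return Ws, bs

def forward(Ws, bs, funcs, P):
    # Each column of P is one sample; apply each layer's affine map + transfer function
    a = P
    for W, b, f in zip(Ws, bs, funcs):
        a = f(W @ a + b)
    return a

rng = np.random.default_rng(0)
Ws, bs = init_net([3, 5, 1], rng)            # R=3 inputs, S1=5 hidden, S2=1 output
P = rng.standard_normal((3, 4))              # 4 samples of a 3-dimensional input
Y = forward(Ws, bs, [tansig, purelin], P)
print(Y.shape)                               # (1, 4): one output per sample
```

The list of transfer functions lines up with the list of layer sizes, which is exactly the pairing the `{TF1 TF2 ··· TFN}` argument expresses.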
Network training
net = train(net,P,T)
- P: input data
- T: target (output) data
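With `'traingd'` as the training function, `train` runs batch gradient descent on the network error. The sketch below shows that idea on the simplest possible case, a linear model fit by gradient descent in NumPy; it is an illustration of the algorithm, not the toolbox code, and the `lr`/`epochs` constants only mirror the roles of `net.trainParam.lr` and `net.trainParam.epochs`.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((2, 50))          # inputs, one column per sample
T = np.array([[1.0, -2.0]]) @ P + 0.5     # targets from a known linear map
W = np.zeros((1, 2))
b = 0.0
lr = 0.05                                  # analogous to net.trainParam.lr
for epoch in range(1000):                  # analogous to net.trainParam.epochs
    E = W @ P + b - T                      # batch error on all samples
    W -= lr * (E @ P.T) / P.shape[1]       # gradient step on the weights
    b -= lr * E.mean()                     # gradient step on the bias
mse = ((W @ P + b - T) ** 2).mean()        # final mean squared error
```

Because the targets really are a linear function of the inputs, the loop recovers the generating weights `[1, -2]` and bias `0.5`, and `mse` drops essentially to zero, which is what `net.trainParam.goal` checks for during real training.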
Network simulation
Y2 = sim(net,P2)
- P2: the data to be tested.
pn = p1';                                        % transpose p1 into pn
tn = t1';                                        % transpose t1 into tn
[m,n] = size(tn);                                % matrix dimensions
[pn,minp,maxp,tn,mint,maxt] = premnmx(pn,tn);    % normalize to [-1,1] so scales match
net = newff(minmax(pn),[5,1],{'tansig','purelin'},'traingd'); % build the network
net.trainParam.show = 50;                        % progress display interval
net.trainParam.lr = 0.01;                        % learning rate
net.trainParam.epochs = 1000;                    % maximum number of epochs
net.trainParam.goal = 1e-5;                      % target mean squared error
[net,tr] = train(net,pn,tn);
anewn = sim(net,pn);                             % simulate on the training inputs
figure;
hold on
plot(anewn,'b',tn,'r');                          % network output vs. targets
wucha = sum(abs(anewn - tn))/n;                  % mean absolute error of the output
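The `premnmx` call in the script rescales each row of the data to [-1,1] so that all variables share one scale before training. A small NumPy sketch of that mapping (an assumed reimplementation; the function name `premnmx` is borrowed from MATLAB, and each row is assumed non-constant so the denominator is nonzero):

```python
import numpy as np

def premnmx(x):
    # Map each row of x linearly onto [-1, 1], like MATLAB's premnmx;
    # assumes no row is constant (maxx > minx everywhere)
    minx = x.min(axis=1, keepdims=True)
    maxx = x.max(axis=1, keepdims=True)
    xn = 2 * (x - minx) / (maxx - minx) - 1
    return xn, minx, maxx

x = np.array([[0.0, 5.0, 10.0]])
xn, minx, maxx = premnmx(x)
print(xn)   # → [[-1.  0.  1.]]
```

Keeping `minx` and `maxx` (the script's `minp`/`maxp` and `mint`/`maxt`) matters: new test data must be rescaled with the training set's extremes, and network outputs must be mapped back through the inverse transform to recover original units.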