Python: accuracy of a Keras classifier steadily improves during training, then suddenly collapses

I have the following neural network, written in Keras using the TensorFlow backend, which I am running on Python 3.5 (Anaconda) on Windows 10:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

model = Sequential()
# five fully connected hidden layers with ReLU activations, each followed
# by dropout (Keras 1 API: init= is the weight initializer)
model.add(Dense(100, input_dim=283, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(150, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
# four-way output layer -- note the sigmoid (rather than softmax) activation
model.add(Dense(4, init='normal', activation='sigmoid'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

I am training on my GPU. During training (10000 epochs), the network's accuracy steadily improves from 0.25 to somewhere between 0.7 and 0.9, before suddenly dropping back to 0.25 and staying there:
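The log below comes from an ordinary model.fit run; it was presumably launched with something like the following sketch (X_train, y_train, and the batch size are assumptions, since the post does not show this call):

# hypothetical training call (Keras 1 API, matching the code above);
# X_train would be a (6120, 283) float array, y_train a (6120, 4) one-hot
# matrix, and the batch size is a guess
model.fit(X_train, y_train, nb_epoch=10000, batch_size=32)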

Epoch 1/10000
6120/6120 [==============================] - 1s - loss: 1.5329 - acc: 0.2665
Epoch 2/10000
6120/6120 [==============================] - 1s - loss: 1.2985 - acc: 0.3784
Epoch 3/10000
6120/6120 [==============================] - 1s - loss: 1.2259 - acc: 0.4891
Epoch 4/10000
6120/6120 [==============================] - 1s - loss: 1.1867 - acc: 0.5208
Epoch 5/10000
6120/6120 [==============================] - 1s - loss: 1.1494 - acc: 0.5199
Epoch 6/10000
6120/6120 [==============================] - 1s - loss: 1.1042 - acc: 0.4953
Epoch 7/10000
6120/6120 [==============================] - 1s - loss: 1.0491 - acc: 0.4982
Epoch 8/10000
6120/6120 [==============================] - 1s - loss: 1.0066 - acc: 0.5065
Epoch 9/10000
6120/6120 [==============================] - 1s - loss: 0.9749 - acc: 0.5338
Epoch 10/10000
6120/6120 [==============================] - 1s - loss: 0.9456 - acc: 0.5696
Epoch 11/10000
6120/6120 [==============================] - 1s - loss: 0.9252 - acc: 0.5995
Epoch 12/10000
6120/6120 [==============================] - 1s - loss: 0.9111 - acc: 0.6106
Epoch 13/10000
6120/6120 [==============================] - 1s - loss: 0.8772 - acc: 0.6160
Epoch 14/10000
6120/6120 [==============================] - 1s - loss: 0.8517 - acc: 0.6245
Epoch 15/10000
6120/6120 [==============================] - 1s - loss: 0.8170 - acc: 0.6345
Epoch 16/10000
6120/6120 [==============================] - 1s - loss: 0.7850 - acc: 0.6428
Epoch 17/10000
6120/6120 [==============================] - 1s - loss: 0.7633 - acc: 0.6580
Epoch 18/10000
6120/6120 [==============================] - 4s - loss: 0.7375 - acc: 0.6717
Epoch 19/10000
6120/6120 [==============================] - 1s - loss: 0.7058 - acc: 0.6850
Epoch 20/10000
6120/6120 [==============================] - 1s - loss: 0.6787 - acc: 0.7018
Epoch 21/10000
6120/6120 [==============================] - 1s - loss: 0.6557 - acc: 0.7093
Epoch 22/10000
6120/6120 [==============================] - 1s - loss: 0.6304 - acc: 0.7208
Epoch 23/10000
6120/6120 [==============================] - 1s - loss: 0.6052 - acc: 0.7270
Epoch 24/10000
6120/6120 [==============================] - 1s - loss: 0.5848 - acc: 0.7371
Epoch 25/10000
6120/6120 [==============================] - 1s - loss: 0.5564 - acc: 0.7536
Epoch 26/10000
6120/6120 [==============================] - 1s - loss: 0.1787 - acc: 0.4163
Epoch 27/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 28/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 29/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 30/10000
6120/6120 [==============================] - 2s - loss: 1.1921e-07 - acc: 0.2500
Epoch 31/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
Epoch 32/10000
6120/6120 [==============================] - 1s - loss: 1.1921e-07 - acc: 0.2500
...

My guess is that the optimizer is getting stuck in a local minimum where it assigns all of the data to a single class. How can I prevent it from doing this?
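One concrete clue in the log: the stuck loss value of 1.1921e-07 is not arbitrary. It is exactly float32 machine epsilon, which is what clipped cross-entropy evaluates to in float32 when the predicted probability of the true class sits at Keras's clip ceiling (Keras clips predictions to [1e-7, 1 - 1e-7] before taking the log). In other words, the loss has hit its numerical floor and can no longer provide a useful gradient, which would explain why training never recovers. A quick numpy check of that arithmetic (illustration only, not code from the post):

import numpy as np

# the stuck loss is exactly float32 machine epsilon
print(np.finfo(np.float32).eps)            # 1.1920929e-07

# Keras clips predictions to [1e-7, 1 - 1e-7] before the log; in float32,
# -log(1 - 1e-7) lands exactly on machine epsilon
p = np.float32(1.0) - np.float32(1e-7)     # rounds to 1 - 1.1920929e-07
print(-np.log(p))                          # 1.1920929e-07

# a constant 0.25 accuracy on a balanced 4-class set simply means the
# network now predicts the same class for every sample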

Things I have tried (none of which seemed to prevent this from happening):

> Using a different optimizer (adam)
> Making sure the training data contains an equal number of examples from each class
> Increasing the amount of training data (currently at 6000 examples)
> Varying the number of classes between 2 and 5
> Increasing the number of hidden layers in the network from 1 to 5
> Changing the width of the layers (from 50 to 500)

None of these helped. Any other ideas why this happens and/or how to suppress it? Could it be a bug in Keras? Many thanks in advance for any suggestions.
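None of the above tackles the root cause, but while experimenting it can save time to abort a 10000-epoch run as soon as the collapse occurs instead of letting it run out. A minimal callback sketch (the class name and thresholds are mine, not from the original question):

from keras.callbacks import Callback

class AbortOnCollapse(Callback):
    """Stop training once the loss collapses to (near) zero while
    accuracy is still poor -- the degenerate state seen in the log."""
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('loss', 1.0) < 1e-6 and logs.get('acc', 0.0) < 0.5:
            print('Collapse detected at epoch %d, stopping.' % (epoch + 1))
            self.model.stop_training = True

# used as: model.fit(..., callbacks=[AbortOnCollapse()])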

EDIT:

The problem appears to have been fixed by changing the final activation to softmax (from sigmoid) and adding maxnorm(3) regularization to the last two hidden layers:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD
from keras.constraints import maxnorm

model = Sequential()
model.add(Dense(100, input_dim=npoints, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(150, init='normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu'))
model.add(Dropout(0.2))
# maxnorm(3) weight constraints on the last two hidden layers
model.add(Dense(200, init='normal', activation='relu', W_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(200, init='normal', activation='relu', W_constraint=maxnorm(3)))
model.add(Dropout(0.2))
# softmax output layer (was sigmoid); npoints and ncat hold the input
# dimension and the number of classes
model.add(Dense(ncat, init='normal', activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
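For readers on current TensorFlow/Keras: the code above uses the long-deprecated Keras 1 API. A rough sketch of the same fixed model ported to the Keras 2+ API (init becomes kernel_initializer, W_constraint becomes kernel_constraint, maxnorm is now max_norm, and SGD takes learning_rate; the original's decay=1e-6 is omitted here, as recent versions dropped it in favor of learning-rate schedules):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.constraints import max_norm

# npoints and ncat as in the original post
model = Sequential()
model.add(Dense(100, input_shape=(npoints,), kernel_initializer='random_normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(150, kernel_initializer='random_normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, kernel_initializer='random_normal', activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(200, kernel_initializer='random_normal', activation='relu',
                kernel_constraint=max_norm(3)))
model.add(Dropout(0.2))
model.add(Dense(200, kernel_initializer='random_normal', activation='relu',
                kernel_constraint=max_norm(3)))
model.add(Dropout(0.2))
model.add(Dense(ncat, kernel_initializer='random_normal', activation='softmax'))

sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
# the original post compiled with mean_squared_error; with a softmax output,
# categorical_crossentropy is the more conventional choice and is used here
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])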

Thanks a lot for the suggestions.
