Neural Networks for Warped Text Correction: Part 3

0. Preface

Picking up from the previous post's analysis of the dataFormat experiments (https://blog.csdn.net/qq_35546153/article/details/80393277), this post records how isRandom affects the experimental results.

1. Experiment Log

Assume the first category of settings (the network structure) stays unchanged. In the second category, fix dataFormat to 1, since it converges faster and its data are more strongly correlated, and keep samplesNum unchanged at the previous 11000. Then observe and record how changing isRandom affects the results.
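Presumably isRandom simply controls whether the sample order is shuffled before the fixed 8000/3000 train/test split. Below is a minimal sketch of that assumption; load_data, its arguments, and the zero-filled placeholder arrays are hypothetical stand-ins, not the actual loader:

```python
import numpy as np

def load_data(samples_num=11000, is_random=True):
    # Placeholder arrays with the shapes reported in the log:
    # data (11000, 858), label (11000, 5).
    data = np.zeros((samples_num, 858), dtype=np.float32)
    label = np.zeros((samples_num, 5), dtype=np.float32)
    if is_random:
        idx = np.random.permutation(samples_num)  # shuffle the sample order
        data, label = data[idx], label[idx]
    # First 8000 samples for training, last 3000 for testing,
    # matching the split printed in the log below.
    return (data[:8000], label[:8000]), (data[8000:], label[8000:])
```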

1.1.1 Network Training Log

Using TensorFlow backend.
load data begin...
dataFormat = 1
load data..., random!
load done
Data.shape= (11000, 858)
label.shape= (11000, 5)
reshape data shape= (11000, 858)
reshape label shape= (11000, 5)
data_train.shape= (8000, 858)
data_test.shape= (3000, 858)
label_train.shape= (8000, 5)
label_test.shape= (3000, 5)
construct net begin...
Training...
Epoch 1/200
 200/8000 [..............................] - ETA: 1:29 - loss: 0.1844 - acc: 0.0850
1600/8000 [=====>........................] - ETA: 9s - loss: 0.4693 - acc: 0.6169  
3000/8000 [==========>...................] - ETA: 4s - loss: 0.3116 - acc: 0.6453
4400/8000 [===============>..............] - ETA: 2s - loss: 0.2220 - acc: 0.6764
5800/8000 [====================>.........] - ETA: 0s - loss: 0.1735 - acc: 0.7021
7200/8000 [==========================>...] - ETA: 0s - loss: 0.1425 - acc: 0.7278
8000/8000 [==============================] - 3s 328us/step - loss: 0.1294 - acc: 0.7406
Epoch 2/200

 200/8000 [..............................] - ETA: 0s - loss: 0.0094 - acc: 0.8300
1600/8000 [=====>........................] - ETA: 0s - loss: 0.0109 - acc: 0.8625
3000/8000 [==========>...................] - ETA: 0s - loss: 0.0106 - acc: 0.8637
4400/8000 [===============>..............] - ETA: 0s - loss: 0.0104 - acc: 0.8655
5600/8000 [====================>.........] - ETA: 0s - loss: 0.0107 - acc: 0.8548
6800/8000 [========================>.....] - ETA: 0s - loss: 0.0103 - acc: 0.8590
8000/8000 [==============================] - 0s 42us/step - loss: 0.0100 - acc: 0.8616
Epoch 3/200

 200/8000 [..............................] - ETA: 0s - loss: 0.0078 - acc: 0.8500
1400/8000 [====>.........................] - ETA: 0s - loss: 0.0070 - acc: 0.8836
2600/8000 [========>.....................] - ETA: 0s - loss: 0.0077 - acc: 0.8762
4000/8000 [==============>...............] - ETA: 0s - loss: 0.0080 - acc: 0.8728
5400/8000 [===================>..........] - ETA: 0s - loss: 0.0079 - acc: 0.8669
6600/8000 [=======================>......] - ETA: 0s - loss: 0.0079 - acc: 0.8642
7800/8000 [============================>.] - ETA: 0s - loss: 0.0077 - acc: 0.8632
8000/8000 [==============================] - 0s 42us/step - loss: 0.0077 - acc: 0.8638
...(epochs 4-181 omitted)...
Epoch 182/200

 200/8000 [..............................] - ETA: 0s - loss: 0.0023 - acc: 0.9350
1600/8000 [=====>........................] - ETA: 0s - loss: 0.0025 - acc: 0.9163
3000/8000 [==========>...................] - ETA: 0s - loss: 0.0024 - acc: 0.9220
4400/8000 [===============>..............] - ETA: 0s - loss: 0.0024 - acc: 0.9264
5800/8000 [====================>.........] - ETA: 0s - loss: 0.0025 - acc: 0.9257
7200/8000 [==========================>...] - ETA: 0s - loss: 0.0025 - acc: 0.9240
8000/8000 [==============================] - 0s 40us/step - loss: 0.0025 - acc: 0.9234
Epoch 183/200

 200/8000 [..............................] - ETA: 0s - loss: 0.0025 - acc: 0.9200
1600/8000 [=====>........................] - ETA: 0s - loss: 0.0025 - acc: 0.9288
3000/8000 [==========>...................] - ETA: 0s - loss: 0.0025 - acc: 0.9320
4400/8000 [===============>..............] - ETA: 0s - loss: 0.0025 - acc: 0.9286
5600/8000 [====================>.........] - ETA: 0s - loss: 0.0025 - acc: 0.9234
7000/8000 [=========================>....] - ETA: 0s - loss: 0.0025 - acc: 0.9244
8000/8000 [==============================] - 0s 39us/step - loss: 0.0025 - acc: 0.9234
Epoch 184/200

 200/8000 [..............................] - ETA: 0s - loss: 0.0025 - acc: 0.9000
1600/8000 [=====>........................] - ETA: 0s - loss: 0.0023 - acc: 0.9287
3000/8000 [==========>...................] - ETA: 0s - loss: 0.0024 - acc: 0.9263
4400/8000 [===============>..............] - ETA: 0s - loss: 0.0025 - acc: 0.9255
5800/8000 [====================>.........] - ETA: 0s - loss: 0.0025 - acc: 0.9245
7200/8000 [==========================>...] - ETA: 0s - loss: 0.0025 - acc: 0.9251
8000/8000 [==============================] - 0s 39us/step - loss: 0.0025 - acc: 0.9234
Epoch 00184: early stopping

Testing ------------

 200/3000 [=>............................] - ETA: 0s
2400/3000 [=======================>......] - ETA: 0s
3000/3000 [==============================] - 0s 27us/step
test cost: [0.0033225238323211668, 0.92233333190282185]
./saveProcessResult/processb_200FIL_858FO_572SO_572EO_5_0521-17_17dataFormat1.png
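The run stops at epoch 184 with the "early stopping" message, so the training loop evidently uses Keras's EarlyStopping callback under a 200-epoch cap, and the progress bars advance in steps of 200 samples, which suggests batch_size=200. Here is a minimal sketch consistent with the log above, assuming dense layers of widths 858 -> 572 -> 572 -> 5 (read off the saved-figure filename); the activations, optimizer, loss, and patience are not shown in the log and are assumptions:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

# data_train/label_train/data_test/label_test come from the loader sketch above
(data_train, label_train), (data_test, label_test) = load_data(is_random=True)

model = Sequential([
    Dense(572, activation='sigmoid', input_dim=858),  # widths guessed from the filename
    Dense(572, activation='sigmoid'),
    Dense(5, activation='sigmoid'),                   # 5 output labels
])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])

early_stop = EarlyStopping(monitor='loss', patience=10, verbose=1)  # patience assumed
model.fit(data_train, label_train, epochs=200, batch_size=200,
          callbacks=[early_stop])

# evaluate() returns [loss, accuracy], i.e. the "test cost" pair seen in the log
print('test cost:', model.evaluate(data_test, label_test, batch_size=200))
```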

Combining these numbers with the non-random run from the previous post, we can draw the following conclusions:

(1) In both runs, the loss drops sharply during epoch 1;

(2) With random sample selection, training takes longer than with sequential selection, because of the extra data-loading work;

(3) The random run stopped at epoch 184 with loss 0.0025 and acc 0.9234, while the sequential run stopped at epoch 175 with loss 0.0020 and acc 0.9210 (see the side-by-side table after this list). So random sample selection can hurt the final result, but it can also help: here, compared with the sequential run, the final loss is 0.0005 (0.05 percentage points) higher, while the accuracy is 0.24 percentage points higher;

(4) Conjecture: with random selection the outcome is not controllable; sometimes it is better, sometimes worse;
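For reference, the two runs side by side (the sequential numbers are taken from the previous post):

| isRandom | stopping epoch | final loss | final acc |
|---|---|---|---|
| off (sequential) | 175 | 0.0020 | 0.9210 |
| on (random) | 184 | 0.0025 | 0.9234 |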

To sum up: at the current sample count of 11000, the network has now been tested under both dataFormat options and with both random and sequential sample selection. Next, increase the sample count and see whether the accuracy can break past 92%; it is also worth reducing the sample count to see whether the accuracy drops. Perhaps with a small sample set, random selection will outperform sequential selection. We shall see!
