SAP Leonardo machine learning model retraining log

Scanning dataset flowersjerry …
Dataset used: flowersjerry
Dataset has labels: ['sunflowers', 'roses', 'daisy', 'dandelion', 'tulips']
364 images are used for test
2832 images are used for training
362 images are used for validation
Scale of 0 disables regularizer.
Layer 'InceptionV3/Logits' will be retrained.
Restoring parameters from base-model/inception_v3.ckpt
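The lines above show that the base InceptionV3 weights are restored from base-model/inception_v3.ckpt and that only the 'InceptionV3/Logits' layer is retrained for the five flower labels. As a rough illustration only (this is not the SAP Leonardo retraining code; the optimizer, learning rate, and input placeholders are assumptions), the split between restored and retrained variables can look like this in TensorFlow 1.x:

```python
# Sketch only: restore every InceptionV3 variable except the final logits layer,
# then train just that layer for the 5 flower classes named in the log.
import tensorflow as tf
from tensorflow.contrib.slim.nets import inception

NUM_CLASSES = 5  # sunflowers, roses, daisy, dandelion, tulips

images = tf.placeholder(tf.float32, [None, 299, 299, 3])
labels = tf.placeholder(tf.int64, [None])

with tf.contrib.slim.arg_scope(inception.inception_v3_arg_scope()):
    logits, _ = inception.inception_v3(images, num_classes=NUM_CLASSES, is_training=True)

# Restore everything except the retrained logits layer from the base checkpoint.
restore_vars = [v for v in tf.global_variables()
                if not v.name.startswith('InceptionV3/Logits')]
saver = tf.train.Saver(restore_vars)

# Only the logits layer's variables are updated during this retraining step.
train_vars = [v for v in tf.trainable_variables()
              if v.name.startswith('InceptionV3/Logits')]
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss, var_list=train_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, 'base-model/inception_v3.ckpt')  # path as reported in the log
    # ...feed image batches and run train_op...
```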
Serialized 64 features vectors to training1.tfrecords
Serialized 128 features vectors to training1.tfrecords
Serialized 192 features vectors to training1.tfrecords
Serialized 256 features vectors to training1.tfrecords
Serialized 320 features vectors to training1.tfrecords
Serialized 384 features vectors to training1.tfrecords
Serialized 448 features vectors to training1.tfrecords
Serialized 512 features vectors to training1.tfrecords
Serialized 576 features vectors to training1.tfrecords
Serialized 640 features vectors to training1.tfrecords
Serialized 704 features vectors to training1.tfrecords
Serialized 768 features vectors to training1.tfrecords
Serialized 832 features vectors to training1.tfrecords
Serialized 896 features vectors to training1.tfrecords
Serialized 960 features vectors to training1.tfrecords
Serialized 1024 features vectors to training1.tfrecords
Serialized 1088 features vectors to training1.tfrecords
Serialized 1152 features vectors to training1.tfrecords
Serialized 1216 features vectors to training1.tfrecords
Serialized 1280 features vectors to training1.tfrecords
Serialized 1344 features vectors to training1.tfrecords
Serialized 1408 features vectors to training1.tfrecords
Serialized 1472 features vectors to training1.tfrecords
Serialized 1536 features vectors to training1.tfrecords
Serialized 1600 features vectors to training1.tfrecords
Serialized 1664 features vectors to training1.tfrecords
Serialized 1728 features vectors to training1.tfrecords
Serialized 1792 features vectors to training1.tfrecords
Serialized 1856 features vectors to training1.tfrecords
Serialized 1920 features vectors to training1.tfrecords
Serialized 1984 features vectors to training1.tfrecords
Serialized 2048 features vectors to training1.tfrecords
Serialized 2112 features vectors to training1.tfrecords
Serialized 2176 features vectors to training1.tfrecords
Serialized 2240 features vectors to training1.tfrecords
Serialized 2304 features vectors to training1.tfrecords
Serialized 2368 features vectors to training1.tfrecords
Serialized 2432 features vectors to training1.tfrecords
Serialized 2496 features vectors to training1.tfrecords
Serialized 2560 features vectors to training1.tfrecords
Serialized 2624 features vectors to training1.tfrecords
Serialized 2688 features vectors to training1.tfrecords
Serialized 2752 features vectors to training1.tfrecords
Serialized 2816 features vectors to training1.tfrecords
Serialized 2832 features vectors to training1.tfrecords
Total number of training features vectors: 2832.
Serialized 64 features vectors to validation1.tfrecords
Serialized 128 features vectors to validation1.tfrecords
Serialized 192 features vectors to validation1.tfrecords
Serialized 256 features vectors to validation1.tfrecords
Serialized 320 features vectors to validation1.tfrecords
Serialized 362 features vectors to validation1.tfrecords
Total number of validation features vectors: 362.
Serialized 64 features vectors to test1.tfrecords
Serialized 128 features vectors to test1.tfrecords
Serialized 192 features vectors to test1.tfrecords
Serialized 256 features vectors to test1.tfrecords
Serialized 320 features vectors to test1.tfrecords
Serialized 364 features vectors to test1.tfrecords
Total number of test features vectors: 364.
2019-07-15 09:06:11
tf records cache duration: 0:01:00.919958
List of consumed tfrecord files:
['./training1.tfrecords']
['./validation1.tfrecords']
['./test1.tfrecords']
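The serialization step above caches pre-computed feature vectors in batches of 64 into training1.tfrecords, validation1.tfrecords, and test1.tfrecords, which are then consumed by the training loop. A minimal sketch of how such a cache could be written and read back with the TensorFlow TFRecord API follows; the record keys ('features', 'label') and the 2048-dimensional feature size are assumptions, not details taken from the service.

```python
# Hedged sketch, not the actual SAP Leonardo code: serialize bottleneck feature
# vectors plus labels to a .tfrecords file, then read them back for training.
import tensorflow as tf

def write_feature_vectors(filename, feature_batches):
    """feature_batches yields (features, labels) arrays; one Example per image."""
    written = 0
    with tf.python_io.TFRecordWriter(filename) as writer:
        for features, labels in feature_batches:
            for vec, label in zip(features, labels):
                example = tf.train.Example(features=tf.train.Features(feature={
                    'features': tf.train.Feature(
                        float_list=tf.train.FloatList(value=vec.tolist())),
                    'label': tf.train.Feature(
                        int64_list=tf.train.Int64List(value=[int(label)])),
                }))
                writer.write(example.SerializeToString())
            written += len(labels)
            print('Serialized %d features vectors to %s' % (written, filename))

def parse_example(serialized, feature_dim=2048):  # 2048 assumed (InceptionV3 bottleneck)
    parsed = tf.parse_single_example(serialized, {
        'features': tf.FixedLenFeature([feature_dim], tf.float32),
        'label': tf.FixedLenFeature([], tf.int64),
    })
    return parsed['features'], parsed['label']

# Reading the cached records back for the retraining loop:
dataset = tf.data.TFRecordDataset(['./training1.tfrecords']).map(parse_example).batch(64)
```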
Restoring parameters from base-model/inception_v3.ckpt
Training Epoch_0 Batch_0, Accuracy: 0.109375, Loss: 2.563277
Training Epoch_0 Batch_1, Accuracy: 0.117188, Loss: 2.537273
Training Epoch_0 Batch_2, Accuracy: 0.145833, Loss: 2.378087
Training Epoch_0 Batch_3, Accuracy: 0.152344, Loss: 2.422476
Training Epoch_0 Batch_4, Accuracy: 0.156250, Loss: 2.419325
Training Epoch_0 Batch_5, Accuracy: 0.164062, Loss: 2.354400
Training Epoch_0 Batch_6, Accuracy: 0.178571, Loss: 2.295770
Training Epoch_0 Batch_7, Accuracy: 0.189453, Loss: 2.244730
Training Epoch_0 Batch_8, Accuracy: 0.213542, Loss: 2.176006
Training Epoch_0 Batch_9, Accuracy: 0.212500, Loss: 2.151114
Training Epoch_0 Batch_10, Accuracy: 0.232955, Loss: 2.090990
Training Epoch_0 Batch_11, Accuracy: 0.255208, Loss: 2.031858
Training Epoch_0 Batch_12, Accuracy: 0.259615, Loss: 1.995807
Training Epoch_0 Batch_13, Accuracy: 0.275670, Loss: 1.944686
Training Epoch_0 Batch_14, Accuracy: 0.293750, Loss: 1.902863
Training Epoch_0 Batch_15, Accuracy: 0.310547, Loss: 1.866501
Training Epoch_0 Batch_16, Accuracy: 0.314338, Loss: 1.844065
Training Epoch_0 Batch_17, Accuracy: 0.321181, Loss: 1.825293
Training Epoch_0 Batch_18, Accuracy: 0.327303, Loss: 1.795689
Training Epoch_0 Batch_19, Accuracy: 0.337500, Loss: 1.767071
Training Epoch_0 Batch_20, Accuracy: 0.345238, Loss: 1.737025
Training Epoch_0 Batch_21, Accuracy: 0.352273, Loss: 1.719176
Training Epoch_0 Batch_22, Accuracy: 0.362772, Loss: 1.695234
Training Epoch_0 Batch_23, Accuracy: 0.370443, Loss: 1.672519
Training Epoch_0 Batch_24, Accuracy: 0.380000, Loss: 1.647754
Training Epoch_0 Batch_25, Accuracy: 0.387019, Loss: 1.627843
Training Epoch_0 Batch_26, Accuracy: 0.393519, Loss: 1.608370
Training Epoch_0 Batch_27, Accuracy: 0.402344, Loss: 1.585529
Training Epoch_0 Batch_28, Accuracy: 0.408405, Loss: 1.567893
Training Epoch_0 Batch_29, Accuracy: 0.414583, Loss: 1.548632
Training Epoch_0 Batch_30, Accuracy: 0.420867, Loss: 1.529036
Training Epoch_0 Batch_31, Accuracy: 0.426758, Loss: 1.511770
Training Epoch_0 Batch_32, Accuracy: 0.432765, Loss: 1.494842
Training Epoch_0 Batch_33, Accuracy: 0.442096, Loss: 1.474720
Training Epoch_0 Batch_34, Accuracy: 0.449107, Loss: 1.456457
Training Epoch_0 Batch_35, Accuracy: 0.454861, Loss: 1.438937
Training Epoch_0 Batch_36, Accuracy: 0.462838, Loss: 1.422089
Training Epoch_0 Batch_37, Accuracy: 0.469161, Loss: 1.404765
Training Epoch_0 Batch_38, Accuracy: 0.479567, Loss: 1.381244
Training Epoch_0 Batch_39, Accuracy: 0.484375, Loss: 1.367808
Training Epoch_0 Batch_40, Accuracy: 0.491235, Loss: 1.352589
Training Epoch_0 Batch_41, Accuracy: 0.496652, Loss: 1.340101
Training Epoch_0 Batch_42, Accuracy: 0.498183, Loss: 1.329238
Training Epoch_0 Batch_43, Accuracy: 0.502841, Loss: 1.319494
Training Epoch_0 Batch_44, Accuracy: 0.504237, Loss: 1.309505
********************* Summary for epoch: 0 *********************
2019-07-15 09:06:15: Step 0: Training accuracy = 50.4%
2019-07-15 09:06:15: Step 0: Training cross entropy = 1.309505
2019-07-15 09:06:15: Step 0: Validation accuracy = 77.9%
2019-07-15 09:06:15: Step 0: Validation cross entropy = 0.654041


Saving intermediate result.
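Note that the per-batch Accuracy and Loss printed during an epoch are cumulative averages over all examples seen so far in that epoch, not the metrics of a single batch: the values at Epoch_0 Batch_44 (0.504237, 1.309505) match the 50.4% training accuracy and 1.309505 cross entropy reported in the epoch-0 summary. A small bookkeeping sketch that reproduces this kind of running average (the surrounding loop and variable names are hypothetical):

```python
# Running average over all examples seen so far in the current epoch.
class RunningAverage:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value, n):
        self.total += value * n
        self.count += n

    @property
    def value(self):
        return self.total / max(self.count, 1)

# Hypothetical use inside the training loop:
# acc_avg, loss_avg = RunningAverage(), RunningAverage()
# for batch_idx, (features, labels) in enumerate(batches):
#     batch_acc, batch_loss, _ = sess.run([accuracy, loss, train_op], feed_dict=...)
#     acc_avg.update(batch_acc, len(labels))
#     loss_avg.update(batch_loss, len(labels))
#     print('Training Epoch_%d Batch_%d, Accuracy: %.6f, Loss: %.6f'
#           % (epoch, batch_idx, acc_avg.value, loss_avg.value))
```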
Training Epoch_1 Batch_0, Accuracy: 0.671875, Loss: 0.751208
Training Epoch_1 Batch_1, Accuracy: 0.695312, Loss: 0.711504
Training Epoch_1 Batch_2, Accuracy: 0.682292, Loss: 0.759423
Training Epoch_1 Batch_3, Accuracy: 0.699219, Loss: 0.731099
Training Epoch_1 Batch_4, Accuracy: 0.703125, Loss: 0.725658
Training Epoch_1 Batch_5, Accuracy: 0.705729, Loss: 0.717074
Training Epoch_1 Batch_6, Accuracy: 0.718750, Loss: 0.702291
Training Epoch_1 Batch_7, Accuracy: 0.726562, Loss: 0.687430
Training Epoch_1 Batch_8, Accuracy: 0.717014, Loss: 0.714519
Training Epoch_1 Batch_9, Accuracy: 0.723437, Loss: 0.718709
Training Epoch_1 Batch_10, Accuracy: 0.727273, Loss: 0.717455
Training Epoch_1 Batch_11, Accuracy: 0.730469, Loss: 0.713418
Training Epoch_1 Batch_12, Accuracy: 0.733173, Loss: 0.701071
Training Epoch_1 Batch_13, Accuracy: 0.724330, Loss: 0.713550
Training Epoch_1 Batch_14, Accuracy: 0.731250, Loss: 0.699249
Training Epoch_1 Batch_15, Accuracy: 0.735352, Loss: 0.691954
Training Epoch_1 Batch_16, Accuracy: 0.728860, Loss: 0.700308
Training Epoch_1 Batch_17, Accuracy: 0.733507, Loss: 0.694882
Training Epoch_1 Batch_18, Accuracy: 0.730263, Loss: 0.705729
Training Epoch_1 Batch_19, Accuracy: 0.734375, Loss: 0.691516
Training Epoch_1 Batch_20, Accuracy: 0.734375, Loss: 0.685375
Training Epoch_1 Batch_21, Accuracy: 0.738636, Loss: 0.676779
Training Epoch_1 Batch_22, Accuracy: 0.741848, Loss: 0.667718
Training Epoch_1 Batch_23, Accuracy: 0.744141, Loss: 0.662658
Training Epoch_1 Batch_24, Accuracy: 0.743750, Loss: 0.662525
Training Epoch_1 Batch_25, Accuracy: 0.744591, Loss: 0.663695
Training Epoch_1 Batch_26, Accuracy: 0.743634, Loss: 0.664986
Training Epoch_1 Batch_27, Accuracy: 0.742746, Loss: 0.667460
Training Epoch_1 Batch_28, Accuracy: 0.744073, Loss: 0.661358
Training Epoch_1 Batch_29, Accuracy: 0.746354, Loss: 0.655288
Training Epoch_1 Batch_30, Accuracy: 0.747480, Loss: 0.651553
Training Epoch_1 Batch_31, Accuracy: 0.749023, Loss: 0.650370
Training Epoch_1 Batch_32, Accuracy: 0.750473, Loss: 0.649224
Training Epoch_1 Batch_33, Accuracy: 0.752757, Loss: 0.643674
Training Epoch_1 Batch_34, Accuracy: 0.750893, Loss: 0.644554
Training Epoch_1 Batch_35, Accuracy: 0.751302, Loss: 0.644887
Training Epoch_1 Batch_36, Accuracy: 0.752111, Loss: 0.645837
Training Epoch_1 Batch_37, Accuracy: 0.754112, Loss: 0.639875
Training Epoch_1 Batch_38, Accuracy: 0.754808, Loss: 0.636543
Training Epoch_1 Batch_39, Accuracy: 0.757812, Loss: 0.629792
Training Epoch_1 Batch_40, Accuracy: 0.757241, Loss: 0.629104
Training Epoch_1 Batch_41, Accuracy: 0.759301, Loss: 0.624449
Training Epoch_1 Batch_42, Accuracy: 0.759811, Loss: 0.621141
Training Epoch_1 Batch_43, Accuracy: 0.762429, Loss: 0.615316
Training Epoch_1 Batch_44, Accuracy: 0.762712, Loss: 0.612268
********************* Summary for epoch: 1 *********************
2019-07-15 09:06:18: Step 1: Training accuracy = 76.3%
2019-07-15 09:06:18: Step 1: Training cross entropy = 0.612268
2019-07-15 09:06:18: Step 1: Validation accuracy = 83.1%
2019-07-15 09:06:18: Step 1: Validation cross entropy = 0.482805


Saving intermediate result.
Training Epoch_2 Batch_0, Accuracy: 0.765625, Loss: 0.545952
Training Epoch_2 Batch_1, Accuracy: 0.820312, Loss: 0.483130
Training Epoch_2 Batch_2, Accuracy: 0.833333, Loss: 0.457798
Training Epoch_2 Batch_3, Accuracy: 0.832031, Loss: 0.500564
Training Epoch_2 Batch_4, Accuracy: 0.831250, Loss: 0.480784
Training Epoch_2 Batch_5, Accuracy: 0.835938, Loss: 0.482301
Training Epoch_2 Batch_6, Accuracy: 0.825893, Loss: 0.513604
Training Epoch_2 Batch_7, Accuracy: 0.814453, Loss: 0.515944
Training Epoch_2 Batch_8, Accuracy: 0.814236, Loss: 0.511148
Training Epoch_2 Batch_9, Accuracy: 0.821875, Loss: 0.497770
Training Epoch_2 Batch_10, Accuracy: 0.821023, Loss: 0.490167
Training Epoch_2 Batch_11, Accuracy: 0.821615, Loss: 0.489295
Training Epoch_2 Batch_12, Accuracy: 0.830529, Loss: 0.471221
Training Epoch_2 Batch_13, Accuracy: 0.829241, Loss: 0.464837
Training Epoch_2 Batch_14, Accuracy: 0.832292, Loss: 0.461480
Training Epoch_2 Batch_15, Accuracy: 0.832031, Loss: 0.458648
Training Epoch_2 Batch_16, Accuracy: 0.830882, Loss: 0.455557
Training Epoch_2 Batch_17, Accuracy: 0.830729, Loss: 0.455328
Training Epoch_2 Batch_18, Accuracy: 0.830592, Loss: 0.455617
Training Epoch_2 Batch_19, Accuracy: 0.830469, Loss: 0.460975
Training Epoch_2 Batch_20, Accuracy: 0.830357, Loss: 0.455672
Training Epoch_2 Batch_21, Accuracy: 0.829545, Loss: 0.453876
Training Epoch_2 Batch_22, Accuracy: 0.828125, Loss: 0.457756
Training Epoch_2 Batch_23, Accuracy: 0.830078, Loss: 0.451772
Training Epoch_2 Batch_24, Accuracy: 0.825625, Loss: 0.463894
Training Epoch_2 Batch_25, Accuracy: 0.824519, Loss: 0.466378
Training Epoch_2 Batch_26, Accuracy: 0.822917, Loss: 0.472907
Training Epoch_2 Batch_27, Accuracy: 0.821429, Loss: 0.474513
Training Epoch_2 Batch_28, Accuracy: 0.822737, Loss: 0.475553
Training Epoch_2 Batch_29, Accuracy: 0.825000, Loss: 0.472901
Training Epoch_2 Batch_30, Accuracy: 0.826109, Loss: 0.470144
Training Epoch_2 Batch_31, Accuracy: 0.826660, Loss: 0.468937
Training Epoch_2 Batch_32, Accuracy: 0.825758, Loss: 0.471495
Training Epoch_2 Batch_33, Accuracy: 0.828125, Loss: 0.466930
Training Epoch_2 Batch_34, Accuracy: 0.828571, Loss: 0.464859
Training Epoch_2 Batch_35, Accuracy: 0.829427, Loss: 0.463874
Training Epoch_2 Batch_36, Accuracy: 0.829392, Loss: 0.463486
Training Epoch_2 Batch_37, Accuracy: 0.828947, Loss: 0.464327
Training Epoch_2 Batch_38, Accuracy: 0.828526, Loss: 0.465110
Training Epoch_2 Batch_39, Accuracy: 0.826563, Loss: 0.465563
Training Epoch_2 Batch_40, Accuracy: 0.826219, Loss: 0.466115
Training Epoch_2 Batch_41, Accuracy: 0.827381, Loss: 0.465076
Training Epoch_2 Batch_42, Accuracy: 0.827762, Loss: 0.466185
Training Epoch_2 Batch_43, Accuracy: 0.827060, Loss: 0.468324
Training Epoch_2 Batch_44, Accuracy: 0.826624, Loss: 0.475583
********************* Summary for epoch: 2 *********************
2019-07-15 09:06:20: Step 2: Training accuracy = 82.7%
2019-07-15 09:06:20: Step 2: Training cross entropy = 0.475583
2019-07-15 09:06:21: Step 2: Validation accuracy = 87.3%
2019-07-15 09:06:21: Step 2: Validation cross entropy = 0.387852


Saving intermediate result.
Training Epoch_3 Batch_0, Accuracy: 0.843750, Loss: 0.386550
Training Epoch_3 Batch_1, Accuracy: 0.875000, Loss: 0.381919
Training Epoch_3 Batch_2, Accuracy: 0.869792, Loss: 0.399332
Training Epoch_3 Batch_3, Accuracy: 0.871094, Loss: 0.406080
Training Epoch_3 Batch_4, Accuracy: 0.871875, Loss: 0.390268
Training Epoch_3 Batch_5, Accuracy: 0.861979, Loss: 0.386229
Training Epoch_3 Batch_6, Accuracy: 0.857143, Loss: 0.398981
Training Epoch_3 Batch_7, Accuracy: 0.871094, Loss: 0.375413
Training Epoch_3 Batch_8, Accuracy: 0.868056, Loss: 0.380946
Training Epoch_3 Batch_9, Accuracy: 0.865625, Loss: 0.386623
Training Epoch_3 Batch_10, Accuracy: 0.862216, Loss: 0.401170
Training Epoch_3 Batch_11, Accuracy: 0.860677, Loss: 0.395036
Training Epoch_3 Batch_12, Accuracy: 0.854567, Loss: 0.405177
Training Epoch_3 Batch_13, Accuracy: 0.854911, Loss: 0.398718
Training Epoch_3 Batch_14, Accuracy: 0.857292, Loss: 0.394599
Training Epoch_3 Batch_15, Accuracy: 0.854492, Loss: 0.415238
Training Epoch_3 Batch_16, Accuracy: 0.852022, Loss: 0.421683
Training Epoch_3 Batch_17, Accuracy: 0.849826, Loss: 0.423800
Training Epoch_3 Batch_18, Accuracy: 0.851974, Loss: 0.424755
Training Epoch_3 Batch_19, Accuracy: 0.850000, Loss: 0.428699
Training Epoch_3 Batch_20, Accuracy: 0.849702, Loss: 0.427310
Training Epoch_3 Batch_21, Accuracy: 0.849432, Loss: 0.425332
Training Epoch_3 Batch_22, Accuracy: 0.849185, Loss: 0.427090
Training Epoch_3 Batch_23, Accuracy: 0.850911, Loss: 0.425830
Training Epoch_3 Batch_24, Accuracy: 0.852500, Loss: 0.422880
Training Epoch_3 Batch_25, Accuracy: 0.852163, Loss: 0.423236
Training Epoch_3 Batch_26, Accuracy: 0.852431, Loss: 0.420517
Training Epoch_3 Batch_27, Accuracy: 0.851004, Loss: 0.420552
Training Epoch_3 Batch_28, Accuracy: 0.852371, Loss: 0.417895
Training Epoch_3 Batch_29, Accuracy: 0.853646, Loss: 0.413613
Training Epoch_3 Batch_30, Accuracy: 0.853831, Loss: 0.411079
Training Epoch_3 Batch_31, Accuracy: 0.854004, Loss: 0.412024
Training Epoch_3 Batch_32, Accuracy: 0.854640, Loss: 0.409635
Training Epoch_3 Batch_33, Accuracy: 0.852941, Loss: 0.408398
Training Epoch_3 Batch_34, Accuracy: 0.854018, Loss: 0.406726
Training Epoch_3 Batch_35, Accuracy: 0.856337, Loss: 0.404702
Training Epoch_3 Batch_36, Accuracy: 0.857264, Loss: 0.403092
Training Epoch_3 Batch_37, Accuracy: 0.857319, Loss: 0.400782
Training Epoch_3 Batch_38, Accuracy: 0.859776, Loss: 0.395582
Training Epoch_3 Batch_39, Accuracy: 0.860547, Loss: 0.392964
Training Epoch_3 Batch_40, Accuracy: 0.862043, Loss: 0.389462
Training Epoch_3 Batch_41, Accuracy: 0.861979, Loss: 0.389961
Training Epoch_3 Batch_42, Accuracy: 0.861192, Loss: 0.391154
Training Epoch_3 Batch_43, Accuracy: 0.860440, Loss: 0.393648
Training Epoch_3 Batch_44, Accuracy: 0.860876, Loss: 0.391895
********************* Summary for epoch: 3 *********************
2019-07-15 09:06:23: Step 3: Training accuracy = 86.1%
2019-07-15 09:06:23: Step 3: Training cross entropy = 0.391895
2019-07-15 09:06:23: Step 3: Validation accuracy = 87.8%
2019-07-15 09:06:23: Step 3: Validation cross entropy = 0.367210


Saving intermediate result.
Training Epoch_4 Batch_0, Accuracy: 0.890625, Loss: 0.329162
Training Epoch_4 Batch_1, Accuracy: 0.890625, Loss: 0.326003
Training Epoch_4 Batch_2, Accuracy: 0.880208, Loss: 0.327425
Training Epoch_4 Batch_3, Accuracy: 0.875000, Loss: 0.350870
Training Epoch_4 Batch_4, Accuracy: 0.893750, Loss: 0.317453
Training Epoch_4 Batch_5, Accuracy: 0.893229, Loss: 0.320997
Training Epoch_4 Batch_6, Accuracy: 0.892857, Loss: 0.326266
Training Epoch_4 Batch_7, Accuracy: 0.894531, Loss: 0.313522
Training Epoch_4 Batch_8, Accuracy: 0.887153, Loss: 0.330112
Training Epoch_4 Batch_9, Accuracy: 0.882812, Loss: 0.341584
Training Epoch_4 Batch_10, Accuracy: 0.883523, Loss: 0.335757
Training Epoch_4 Batch_11, Accuracy: 0.888021, Loss: 0.329095
Training Epoch_4 Batch_12, Accuracy: 0.891827, Loss: 0.325028
Training Epoch_4 Batch_13, Accuracy: 0.887277, Loss: 0.327699
Training Epoch_4 Batch_14, Accuracy: 0.884375, Loss: 0.334582
Training Epoch_4 Batch_15, Accuracy: 0.886719, Loss: 0.329636
Training Epoch_4 Batch_16, Accuracy: 0.880515, Loss: 0.339447
Training Epoch_4 Batch_17, Accuracy: 0.880208, Loss: 0.340498
Training Epoch_4 Batch_18, Accuracy: 0.879112, Loss: 0.341115
Training Epoch_4 Batch_19, Accuracy: 0.880469, Loss: 0.337844
Training Epoch_4 Batch_20, Accuracy: 0.878720, Loss: 0.346900
Training Epoch_4 Batch_21, Accuracy: 0.879261, Loss: 0.344490
Training Epoch_4 Batch_22, Accuracy: 0.876359, Loss: 0.348916
Training Epoch_4 Batch_23, Accuracy: 0.877604, Loss: 0.343538
Training Epoch_4 Batch_24, Accuracy: 0.877500, Loss: 0.343013
Training Epoch_4 Batch_25, Accuracy: 0.878005, Loss: 0.343667
Training Epoch_4 Batch_26, Accuracy: 0.876736, Loss: 0.348186
Training Epoch_4 Batch_27, Accuracy: 0.877790, Loss: 0.345688
Training Epoch_4 Batch_28, Accuracy: 0.879849, Loss: 0.343217
Training Epoch_4 Batch_29, Accuracy: 0.880729, Loss: 0.339540
Training Epoch_4 Batch_30, Accuracy: 0.880544, Loss: 0.339423
Training Epoch_4 Batch_31, Accuracy: 0.882324, Loss: 0.337853
Training Epoch_4 Batch_32, Accuracy: 0.882102, Loss: 0.340176
Training Epoch_4 Batch_33, Accuracy: 0.881893, Loss: 0.339913
Training Epoch_4 Batch_34, Accuracy: 0.881696, Loss: 0.341317
Training Epoch_4 Batch_35, Accuracy: 0.881510, Loss: 0.339798
Training Epoch_4 Batch_36, Accuracy: 0.880912, Loss: 0.342544
Training Epoch_4 Batch_37, Accuracy: 0.881168, Loss: 0.341945
Training Epoch_4 Batch_38, Accuracy: 0.880208, Loss: 0.343122
Training Epoch_4 Batch_39, Accuracy: 0.880078, Loss: 0.345260
Training Epoch_4 Batch_40, Accuracy: 0.880335, Loss: 0.345862
Training Epoch_4 Batch_41, Accuracy: 0.880208, Loss: 0.345118
Training Epoch_4 Batch_42, Accuracy: 0.880087, Loss: 0.343386
Training Epoch_4 Batch_43, Accuracy: 0.877841, Loss: 0.347044
Training Epoch_4 Batch_44, Accuracy: 0.878178, Loss: 0.346361
********************* Summary for epoch: 4 *********************
2019-07-15 09:06:24: Step 4: Training accuracy = 87.8%
2019-07-15 09:06:24: Step 4: Training cross entropy = 0.346361
2019-07-15 09:06:25: Step 4: Validation accuracy = 87.8%
2019-07-15 09:06:25: Step 4: Validation cross entropy = 0.352124


Training Epoch_5 Batch_0, Accuracy: 0.953125, Loss: 0.216654
Training Epoch_5 Batch_1, Accuracy: 0.937500, Loss: 0.262810
Training Epoch_5 Batch_2, Accuracy: 0.916667, Loss: 0.297077
Training Epoch_5 Batch_3, Accuracy: 0.933594, Loss: 0.267867
Training Epoch_5 Batch_4, Accuracy: 0.934375, Loss: 0.246724
Training Epoch_5 Batch_5, Accuracy: 0.924479, Loss: 0.260343
Training Epoch_5 Batch_6, Accuracy: 0.924107, Loss: 0.253318
Training Epoch_5 Batch_7, Accuracy: 0.921875, Loss: 0.259803
Training Epoch_5 Batch_8, Accuracy: 0.916667, Loss: 0.266396
Training Epoch_5 Batch_9, Accuracy: 0.906250, Loss: 0.280965
Training Epoch_5 Batch_10, Accuracy: 0.906250, Loss: 0.280554
Training Epoch_5 Batch_11, Accuracy: 0.904948, Loss: 0.285538
Training Epoch_5 Batch_12, Accuracy: 0.906250, Loss: 0.287169
Training Epoch_5 Batch_13, Accuracy: 0.901786, Loss: 0.293962
Training Epoch_5 Batch_14, Accuracy: 0.903125, Loss: 0.293311
Training Epoch_5 Batch_15, Accuracy: 0.907227, Loss: 0.283366
Training Epoch_5 Batch_16, Accuracy: 0.905331, Loss: 0.289903
Training Epoch_5 Batch_17, Accuracy: 0.905382, Loss: 0.289019
Training Epoch_5 Batch_18, Accuracy: 0.899671, Loss: 0.299447
Training Epoch_5 Batch_19, Accuracy: 0.900781, Loss: 0.297205
Training Epoch_5 Batch_20, Accuracy: 0.900298, Loss: 0.294994
Training Epoch_5 Batch_21, Accuracy: 0.898438, Loss: 0.298418
Training Epoch_5 Batch_22, Accuracy: 0.898777, Loss: 0.296848
Training Epoch_5 Batch_23, Accuracy: 0.895833, Loss: 0.303306
Training Epoch_5 Batch_24, Accuracy: 0.894375, Loss: 0.309395
Training Epoch_5 Batch_25, Accuracy: 0.893029, Loss: 0.311368
Training Epoch_5 Batch_26, Accuracy: 0.893519, Loss: 0.312818
Training Epoch_5 Batch_27, Accuracy: 0.893973, Loss: 0.319733
Training Epoch_5 Batch_28, Accuracy: 0.892780, Loss: 0.323022
Training Epoch_5 Batch_29, Accuracy: 0.892187, Loss: 0.324150
Training Epoch_5 Batch_30, Accuracy: 0.892641, Loss: 0.322375
Training Epoch_5 Batch_31, Accuracy: 0.893555, Loss: 0.318701
Training Epoch_5 Batch_32, Accuracy: 0.892519, Loss: 0.317574
Training Epoch_5 Batch_33, Accuracy: 0.892463, Loss: 0.315459
Training Epoch_5 Batch_34, Accuracy: 0.894643, Loss: 0.312008
Training Epoch_5 Batch_35, Accuracy: 0.895833, Loss: 0.308733
Training Epoch_5 Batch_36, Accuracy: 0.896537, Loss: 0.307614
Training Epoch_5 Batch_37, Accuracy: 0.897615, Loss: 0.304330
Training Epoch_5 Batch_38, Accuracy: 0.896635, Loss: 0.306598
Training Epoch_5 Batch_39, Accuracy: 0.894531, Loss: 0.309921
Training Epoch_5 Batch_40, Accuracy: 0.894436, Loss: 0.311751
Training Epoch_5 Batch_41, Accuracy: 0.894345, Loss: 0.313383
Training Epoch_5 Batch_42, Accuracy: 0.894985, Loss: 0.310908
Training Epoch_5 Batch_43, Accuracy: 0.894886, Loss: 0.309949
Training Epoch_5 Batch_44, Accuracy: 0.895127, Loss: 0.306966
********************* Summary for epoch: 5 *********************
2019-07-15 09:06:25: Step 5: Training accuracy = 89.5%
2019-07-15 09:06:25: Step 5: Training cross entropy = 0.306966
2019-07-15 09:06:25: Step 5: Validation accuracy = 88.4%
2019-07-15 09:06:25: Step 5: Validation cross entropy = 0.383488


Saving intermediate result.
Training Epoch_6 Batch_0, Accuracy: 0.890625, Loss: 0.333912
Training Epoch_6 Batch_1, Accuracy: 0.867188, Loss: 0.342801
Training Epoch_6 Batch_2, Accuracy: 0.859375, Loss: 0.343220
Training Epoch_6 Batch_3, Accuracy: 0.882812, Loss: 0.322664
Training Epoch_6 Batch_4, Accuracy: 0.887500, Loss: 0.315726
Training Epoch_6 Batch_5, Accuracy: 0.895833, Loss: 0.296717
Training Epoch_6 Batch_6, Accuracy: 0.899554, Loss: 0.290483
Training Epoch_6 Batch_7, Accuracy: 0.898438, Loss: 0.293599
Training Epoch_6 Batch_8, Accuracy: 0.894097, Loss: 0.297049
Training Epoch_6 Batch_9, Accuracy: 0.900000, Loss: 0.288423
Training Epoch_6 Batch_10, Accuracy: 0.901989, Loss: 0.286477
Training Epoch_6 Batch_11, Accuracy: 0.899740, Loss: 0.284916
Training Epoch_6 Batch_12, Accuracy: 0.899038, Loss: 0.286076
Training Epoch_6 Batch_13, Accuracy: 0.900670, Loss: 0.296784
Training Epoch_6 Batch_14, Accuracy: 0.905208, Loss: 0.288400
Training Epoch_6 Batch_15, Accuracy: 0.909180, Loss: 0.282342
Training Epoch_6 Batch_16, Accuracy: 0.911765, Loss: 0.276499
Training Epoch_6 Batch_17, Accuracy: 0.909722, Loss: 0.274532
Training Epoch_6 Batch_18, Accuracy: 0.913651, Loss: 0.269754
Training Epoch_6 Batch_19, Accuracy: 0.911719, Loss: 0.271768
Training Epoch_6 Batch_20, Accuracy: 0.910714, Loss: 0.273091
Training Epoch_6 Batch_21, Accuracy: 0.911222, Loss: 0.275602
Training Epoch_6 Batch_22, Accuracy: 0.908967, Loss: 0.280339
Training Epoch_6 Batch_23, Accuracy: 0.909505, Loss: 0.279066
Training Epoch_6 Batch_24, Accuracy: 0.908125, Loss: 0.280438
Training Epoch_6 Batch_25, Accuracy: 0.908053, Loss: 0.281015
Training Epoch_6 Batch_26, Accuracy: 0.907407, Loss: 0.281546
Training Epoch_6 Batch_27, Accuracy: 0.908482, Loss: 0.278066
Training Epoch_6 Batch_28, Accuracy: 0.906789, Loss: 0.280328
Training Epoch_6 Batch_29, Accuracy: 0.908333, Loss: 0.276936
Training Epoch_6 Batch_30, Accuracy: 0.909274, Loss: 0.276086
Training Epoch_6 Batch_31, Accuracy: 0.909668, Loss: 0.275050
Training Epoch_6 Batch_32, Accuracy: 0.908617, Loss: 0.276027
Training Epoch_6 Batch_33, Accuracy: 0.908088, Loss: 0.274154
Training Epoch_6 Batch_34, Accuracy: 0.907589, Loss: 0.274132
Training Epoch_6 Batch_35, Accuracy: 0.907118, Loss: 0.274794
Training Epoch_6 Batch_36, Accuracy: 0.907095, Loss: 0.276556
Training Epoch_6 Batch_37, Accuracy: 0.906661, Loss: 0.276158
Training Epoch_6 Batch_38, Accuracy: 0.908253, Loss: 0.273295
Training Epoch_6 Batch_39, Accuracy: 0.908984, Loss: 0.272844
Training Epoch_6 Batch_40, Accuracy: 0.909680, Loss: 0.272887
Training Epoch_6 Batch_41, Accuracy: 0.909226, Loss: 0.272746
Training Epoch_6 Batch_42, Accuracy: 0.908430, Loss: 0.275067
Training Epoch_6 Batch_43, Accuracy: 0.908026, Loss: 0.275922
Training Epoch_6 Batch_44, Accuracy: 0.908192, Loss: 0.273958
********************* Summary for epoch: 6 *********************
2019-07-15 09:06:27: Step 6: Training accuracy = 90.8%
2019-07-15 09:06:27: Step 6: Training cross entropy = 0.273958
2019-07-15 09:06:27: Step 6: Validation accuracy = 89.2%
2019-07-15 09:06:27: Step 6: Validation cross entropy = 0.331215


Saving intermediate result.
Training Epoch_7 Batch_0, Accuracy: 0.890625, Loss: 0.311307
Training Epoch_7 Batch_1, Accuracy: 0.937500, Loss: 0.236115
Training Epoch_7 Batch_2, Accuracy: 0.921875, Loss: 0.267698
Training Epoch_7 Batch_3, Accuracy: 0.929688, Loss: 0.253164
Training Epoch_7 Batch_4, Accuracy: 0.931250, Loss: 0.245653
Training Epoch_7 Batch_5, Accuracy: 0.924479, Loss: 0.247045
Training Epoch_7 Batch_6, Accuracy: 0.921875, Loss: 0.251042
Training Epoch_7 Batch_7, Accuracy: 0.923828, Loss: 0.254129
Training Epoch_7 Batch_8, Accuracy: 0.923611, Loss: 0.255598
Training Epoch_7 Batch_9, Accuracy: 0.926562, Loss: 0.252969
Training Epoch_7 Batch_10, Accuracy: 0.928977, Loss: 0.248094
Training Epoch_7 Batch_11, Accuracy: 0.930990, Loss: 0.237894
Training Epoch_7 Batch_12, Accuracy: 0.929087, Loss: 0.238857
Training Epoch_7 Batch_13, Accuracy: 0.927455, Loss: 0.241279
Training Epoch_7 Batch_14, Accuracy: 0.929167, Loss: 0.238359
Training Epoch_7 Batch_15, Accuracy: 0.927734, Loss: 0.238520
Training Epoch_7 Batch_16, Accuracy: 0.925551, Loss: 0.243740
Training Epoch_7 Batch_17, Accuracy: 0.924479, Loss: 0.242944
Training Epoch_7 Batch_18, Accuracy: 0.925164, Loss: 0.240230
Training Epoch_7 Batch_19, Accuracy: 0.925000, Loss: 0.243683
Training Epoch_7 Batch_20, Accuracy: 0.924107, Loss: 0.242883
Training Epoch_7 Batch_21, Accuracy: 0.924716, Loss: 0.242641
Training Epoch_7 Batch_22, Accuracy: 0.925272, Loss: 0.242854
Training Epoch_7 Batch_23, Accuracy: 0.924479, Loss: 0.241415
Training Epoch_7 Batch_24, Accuracy: 0.923125, Loss: 0.243632
Training Epoch_7 Batch_25, Accuracy: 0.921274, Loss: 0.248577
Training Epoch_7 Batch_26, Accuracy: 0.919560, Loss: 0.252238
Training Epoch_7 Batch_27, Accuracy: 0.919643, Loss: 0.252530
Training Epoch_7 Batch_28, Accuracy: 0.919720, Loss: 0.252360
Training Epoch_7 Batch_29, Accuracy: 0.919792, Loss: 0.250872
Training Epoch_7 Batch_30, Accuracy: 0.919859, Loss: 0.249566
Training Epoch_7 Batch_31, Accuracy: 0.918457, Loss: 0.250885
Training Epoch_7 Batch_32, Accuracy: 0.916193, Loss: 0.255996
Training Epoch_7 Batch_33, Accuracy: 0.918199, Loss: 0.254316
Training Epoch_7 Batch_34, Accuracy: 0.917857, Loss: 0.253371
Training Epoch_7 Batch_35, Accuracy: 0.918403, Loss: 0.251104
Training Epoch_7 Batch_36, Accuracy: 0.919764, Loss: 0.250108
Training Epoch_7 Batch_37, Accuracy: 0.919408, Loss: 0.251324
Training Epoch_7 Batch_38, Accuracy: 0.920673, Loss: 0.249901
Training Epoch_7 Batch_39, Accuracy: 0.921094, Loss: 0.248663
Training Epoch_7 Batch_40, Accuracy: 0.920732, Loss: 0.248181
Training Epoch_7 Batch_41, Accuracy: 0.920387, Loss: 0.248065
Training Epoch_7 Batch_42, Accuracy: 0.919695, Loss: 0.250186
Training Epoch_7 Batch_43, Accuracy: 0.919034, Loss: 0.250639
Training Epoch_7 Batch_44, Accuracy: 0.919138, Loss: 0.249605
********************* Summary for epoch: 7 *********************
2019-07-15 09:06:29: Step 7: Training accuracy = 91.9%
2019-07-15 09:06:29: Step 7: Training cross entropy = 0.249605
2019-07-15 09:06:29: Step 7: Validation accuracy = 89.0%
2019-07-15 09:06:29: Step 7: Validation cross entropy = 0.336089


Training Epoch_8 Batch_0, Accuracy: 0.906250, Loss: 0.279785
Training Epoch_8 Batch_1, Accuracy: 0.914062, Loss: 0.255684
Training Epoch_8 Batch_2, Accuracy: 0.927083, Loss: 0.246078
Training Epoch_8 Batch_3, Accuracy: 0.929688, Loss: 0.234279
Training Epoch_8 Batch_4, Accuracy: 0.937500, Loss: 0.218643
Training Epoch_8 Batch_5, Accuracy: 0.937500, Loss: 0.207166
Training Epoch_8 Batch_6, Accuracy: 0.941964, Loss: 0.189717
Training Epoch_8 Batch_7, Accuracy: 0.939453, Loss: 0.193084
Training Epoch_8 Batch_8, Accuracy: 0.937500, Loss: 0.202597
Training Epoch_8 Batch_9, Accuracy: 0.934375, Loss: 0.214143
Training Epoch_8 Batch_10, Accuracy: 0.936080, Loss: 0.214512
Training Epoch_8 Batch_11, Accuracy: 0.934896, Loss: 0.214855
Training Epoch_8 Batch_12, Accuracy: 0.933894, Loss: 0.219330
Training Epoch_8 Batch_13, Accuracy: 0.935268, Loss: 0.218076
Training Epoch_8 Batch_14, Accuracy: 0.933333, Loss: 0.224000
Training Epoch_8 Batch_15, Accuracy: 0.935547, Loss: 0.217346
Training Epoch_8 Batch_16, Accuracy: 0.936581, Loss: 0.215468
Training Epoch_8 Batch_17, Accuracy: 0.935764, Loss: 0.217060
Training Epoch_8 Batch_18, Accuracy: 0.934211, Loss: 0.218617
Training Epoch_8 Batch_19, Accuracy: 0.934375, Loss: 0.219308
Training Epoch_8 Batch_20, Accuracy: 0.931548, Loss: 0.221882
Training Epoch_8 Batch_21, Accuracy: 0.930398, Loss: 0.225982
Training Epoch_8 Batch_22, Accuracy: 0.930707, Loss: 0.228091
Training Epoch_8 Batch_23, Accuracy: 0.927734, Loss: 0.235832
Training Epoch_8 Batch_24, Accuracy: 0.927500, Loss: 0.239090
Training Epoch_8 Batch_25, Accuracy: 0.929087, Loss: 0.234835
Training Epoch_8 Batch_26, Accuracy: 0.929398, Loss: 0.235682
Training Epoch_8 Batch_27, Accuracy: 0.929129, Loss: 0.235079
Training Epoch_8 Batch_28, Accuracy: 0.929957, Loss: 0.232526
Training Epoch_8 Batch_29, Accuracy: 0.930208, Loss: 0.233583
Training Epoch_8 Batch_30, Accuracy: 0.931452, Loss: 0.231901
Training Epoch_8 Batch_31, Accuracy: 0.931152, Loss: 0.232252
Training Epoch_8 Batch_32, Accuracy: 0.931345, Loss: 0.230556
Training Epoch_8 Batch_33, Accuracy: 0.931066, Loss: 0.232163
Training Epoch_8 Batch_34, Accuracy: 0.930357, Loss: 0.232022
Training Epoch_8 Batch_35, Accuracy: 0.930122, Loss: 0.233254
Training Epoch_8 Batch_36, Accuracy: 0.930321, Loss: 0.232956
Training Epoch_8 Batch_37, Accuracy: 0.929688, Loss: 0.233153
Training Epoch_8 Batch_38, Accuracy: 0.929487, Loss: 0.232950
Training Epoch_8 Batch_39, Accuracy: 0.929688, Loss: 0.233736
Training Epoch_8 Batch_40, Accuracy: 0.931021, Loss: 0.231494
Training Epoch_8 Batch_41, Accuracy: 0.929688, Loss: 0.234876
Training Epoch_8 Batch_42, Accuracy: 0.928779, Loss: 0.235670
Training Epoch_8 Batch_43, Accuracy: 0.928622, Loss: 0.235667
Training Epoch_8 Batch_44, Accuracy: 0.928672, Loss: 0.233655
********************* Summary for epoch: 8 *********************
2019-07-15 09:06:29: Step 8: Training accuracy = 92.9%
2019-07-15 09:06:29: Step 8: Training cross entropy = 0.233655
2019-07-15 09:06:29: Step 8: Validation accuracy = 87.8%
2019-07-15 09:06:29: Step 8: Validation cross entropy = 0.349216


Training Epoch_9 Batch_0, Accuracy: 0.937500, Loss: 0.194806
Training Epoch_9 Batch_1, Accuracy: 0.921875, Loss: 0.272636
Training Epoch_9 Batch_2, Accuracy: 0.937500, Loss: 0.230672
Training Epoch_9 Batch_3, Accuracy: 0.941406, Loss: 0.206742
Training Epoch_9 Batch_4, Accuracy: 0.946875, Loss: 0.197886
Training Epoch_9 Batch_5, Accuracy: 0.937500, Loss: 0.202434
Training Epoch_9 Batch_6, Accuracy: 0.935268, Loss: 0.203474
Training Epoch_9 Batch_7, Accuracy: 0.931641, Loss: 0.223197
Training Epoch_9 Batch_8, Accuracy: 0.932292, Loss: 0.217900
Training Epoch_9 Batch_9, Accuracy: 0.926562, Loss: 0.225021
Training Epoch_9 Batch_10, Accuracy: 0.927557, Loss: 0.222901
Training Epoch_9 Batch_11, Accuracy: 0.928385, Loss: 0.223239
Training Epoch_9 Batch_12, Accuracy: 0.927885, Loss: 0.230671
Training Epoch_9 Batch_13, Accuracy: 0.926339, Loss: 0.229222
Training Epoch_9 Batch_14, Accuracy: 0.928125, Loss: 0.225232
Training Epoch_9 Batch_15, Accuracy: 0.928711, Loss: 0.224511
Training Epoch_9 Batch_16, Accuracy: 0.929228, Loss: 0.222772
Training Epoch_9 Batch_17, Accuracy: 0.927951, Loss: 0.224251
Training Epoch_9 Batch_18, Accuracy: 0.928454, Loss: 0.221315
Training Epoch_9 Batch_19, Accuracy: 0.928125, Loss: 0.222695
Training Epoch_9 Batch_20, Accuracy: 0.927827, Loss: 0.223084
Training Epoch_9 Batch_21, Accuracy: 0.927557, Loss: 0.222552
Training Epoch_9 Batch_22, Accuracy: 0.926630, Loss: 0.223097
Training Epoch_9 Batch_23, Accuracy: 0.927734, Loss: 0.222380
Training Epoch_9 Batch_24, Accuracy: 0.925625, Loss: 0.226589
Training Epoch_9 Batch_25, Accuracy: 0.925481, Loss: 0.227073
Training Epoch_9 Batch_26, Accuracy: 0.925926, Loss: 0.226659
Training Epoch_9 Batch_27, Accuracy: 0.926897, Loss: 0.228365
Training Epoch_9 Batch_28, Accuracy: 0.928340, Loss: 0.223999
Training Epoch_9 Batch_29, Accuracy: 0.927604, Loss: 0.225889
Training Epoch_9 Batch_30, Accuracy: 0.926915, Loss: 0.226362
Training Epoch_9 Batch_31, Accuracy: 0.926270, Loss: 0.228748
Training Epoch_9 Batch_32, Accuracy: 0.926610, Loss: 0.229843
Training Epoch_9 Batch_33, Accuracy: 0.926011, Loss: 0.231054
Training Epoch_9 Batch_34, Accuracy: 0.927679, Loss: 0.227772
Training Epoch_9 Batch_35, Accuracy: 0.929253, Loss: 0.225137
Training Epoch_9 Batch_36, Accuracy: 0.928632, Loss: 0.226325
Training Epoch_9 Batch_37, Accuracy: 0.928043, Loss: 0.226191
Training Epoch_9 Batch_38, Accuracy: 0.927083, Loss: 0.229360
Training Epoch_9 Batch_39, Accuracy: 0.926953, Loss: 0.229410
Training Epoch_9 Batch_40, Accuracy: 0.927973, Loss: 0.226688
Training Epoch_9 Batch_41, Accuracy: 0.928199, Loss: 0.224598
Training Epoch_9 Batch_42, Accuracy: 0.928779, Loss: 0.222762
Training Epoch_9 Batch_43, Accuracy: 0.929332, Loss: 0.222159
Training Epoch_9 Batch_44, Accuracy: 0.929732, Loss: 0.218934
********************* Summary for epoch: 9 *********************
2019-07-15 09:06:29: Step 9: Training accuracy = 93.0%
2019-07-15 09:06:29: Step 9: Training cross entropy = 0.218934
2019-07-15 09:06:29: Step 9: Validation accuracy = 90.1%
2019-07-15 09:06:29: Step 9: Validation cross entropy = 0.315098


Saving intermediate result.
Training Epoch_10 Batch_0, Accuracy: 0.937500, Loss: 0.237889
Training Epoch_10 Batch_1, Accuracy: 0.945312, Loss: 0.205256
Training Epoch_10 Batch_2, Accuracy: 0.937500, Loss: 0.237911
Training Epoch_10 Batch_3, Accuracy: 0.941406, Loss: 0.229666
Training Epoch_10 Batch_4, Accuracy: 0.931250, Loss: 0.224035
Training Epoch_10 Batch_5, Accuracy: 0.932292, Loss: 0.219702
Training Epoch_10 Batch_6, Accuracy: 0.930804, Loss: 0.221524
Training Epoch_10 Batch_7, Accuracy: 0.929688, Loss: 0.228570
Training Epoch_10 Batch_8, Accuracy: 0.932292, Loss: 0.226865
Training Epoch_10 Batch_9, Accuracy: 0.931250, Loss: 0.235821
Training Epoch_10 Batch_10, Accuracy: 0.936080, Loss: 0.224559
Training Epoch_10 Batch_11, Accuracy: 0.933594, Loss: 0.221562
Training Epoch_10 Batch_12, Accuracy: 0.933894, Loss: 0.220616
Training Epoch_10 Batch_13, Accuracy: 0.936384, Loss: 0.217183
Training Epoch_10 Batch_14, Accuracy: 0.938542, Loss: 0.216205
Training Epoch_10 Batch_15, Accuracy: 0.939453, Loss: 0.214542
Training Epoch_10 Batch_16, Accuracy: 0.939338, Loss: 0.215251
Training Epoch_10 Batch_17, Accuracy: 0.935764, Loss: 0.221595
Training Epoch_10 Batch_18, Accuracy: 0.934211, Loss: 0.222027
Training Epoch_10 Batch_19, Accuracy: 0.933594, Loss: 0.220579
Training Epoch_10 Batch_20, Accuracy: 0.933036, Loss: 0.221454
Training Epoch_10 Batch_21, Accuracy: 0.933239, Loss: 0.220472
Training Epoch_10 Batch_22, Accuracy: 0.935462, Loss: 0.216979
Training Epoch_10 Batch_23, Accuracy: 0.936849, Loss: 0.214729
Training Epoch_10 Batch_24, Accuracy: 0.937500, Loss: 0.212175
Training Epoch_10 Batch_25, Accuracy: 0.939904, Loss: 0.208994
Training Epoch_10 Batch_26, Accuracy: 0.938657, Loss: 0.209581
Training Epoch_10 Batch_27, Accuracy: 0.937500, Loss: 0.209968
Training Epoch_10 Batch_28, Accuracy: 0.937500, Loss: 0.208737
Training Epoch_10 Batch_29, Accuracy: 0.938542, Loss: 0.205931
Training Epoch_10 Batch_30, Accuracy: 0.938508, Loss: 0.205851
Training Epoch_10 Batch_31, Accuracy: 0.938965, Loss: 0.205354
Training Epoch_10 Batch_32, Accuracy: 0.938920, Loss: 0.204226
Training Epoch_10 Batch_33, Accuracy: 0.939338, Loss: 0.202809
Training Epoch_10 Batch_34, Accuracy: 0.940179, Loss: 0.200688
Training Epoch_10 Batch_35, Accuracy: 0.940972, Loss: 0.199172
Training Epoch_10 Batch_36, Accuracy: 0.940456, Loss: 0.199054
Training Epoch_10 Batch_37, Accuracy: 0.941201, Loss: 0.197849
Training Epoch_10 Batch_38, Accuracy: 0.940705, Loss: 0.197073
Training Epoch_10 Batch_39, Accuracy: 0.939844, Loss: 0.198861
Training Epoch_10 Batch_40, Accuracy: 0.940168, Loss: 0.197776
Training Epoch_10 Batch_41, Accuracy: 0.939732, Loss: 0.199591
Training Epoch_10 Batch_42, Accuracy: 0.938590, Loss: 0.200299
Training Epoch_10 Batch_43, Accuracy: 0.938565, Loss: 0.198847
Training Epoch_10 Batch_44, Accuracy: 0.938559, Loss: 0.199099
********************* Summary for epoch: 10 *********************
2019-07-15 09:06:31: Step 10: Training accuracy = 93.9%
2019-07-15 09:06:31: Step 10: Training cross entropy = 0.199099
2019-07-15 09:06:31: Step 10: Validation accuracy = 90.6%
2019-07-15 09:06:31: Step 10: Validation cross entropy = 0.301814


Saving intermediate result.
Training Epoch_11 Batch_0, Accuracy: 0.968750, Loss: 0.150117
Training Epoch_11 Batch_1, Accuracy: 0.960938, Loss: 0.164467
Training Epoch_11 Batch_2, Accuracy: 0.963542, Loss: 0.167218
Training Epoch_11 Batch_3, Accuracy: 0.968750, Loss: 0.153234
Training Epoch_11 Batch_4, Accuracy: 0.962500, Loss: 0.162519
Training Epoch_11 Batch_5, Accuracy: 0.963542, Loss: 0.158034
Training Epoch_11 Batch_6, Accuracy: 0.966518, Loss: 0.152684
Training Epoch_11 Batch_7, Accuracy: 0.966797, Loss: 0.156313
Training Epoch_11 Batch_8, Accuracy: 0.967014, Loss: 0.152906
Training Epoch_11 Batch_9, Accuracy: 0.967188, Loss: 0.153032
Training Epoch_11 Batch_10, Accuracy: 0.965909, Loss: 0.153625
Training Epoch_11 Batch_11, Accuracy: 0.963542, Loss: 0.158549
Training Epoch_11 Batch_12, Accuracy: 0.962740, Loss: 0.161283
Training Epoch_11 Batch_13, Accuracy: 0.959821, Loss: 0.166773
Training Epoch_11 Batch_14, Accuracy: 0.960417, Loss: 0.168265
Training Epoch_11 Batch_15, Accuracy: 0.958008, Loss: 0.176387
Training Epoch_11 Batch_16, Accuracy: 0.957721, Loss: 0.174992
Training Epoch_11 Batch_17, Accuracy: 0.959201, Loss: 0.171310
Training Epoch_11 Batch_18, Accuracy: 0.957237, Loss: 0.175314
Training Epoch_11 Batch_19, Accuracy: 0.957031, Loss: 0.176632
Training Epoch_11 Batch_20, Accuracy: 0.957589, Loss: 0.174082
Training Epoch_11 Batch_21, Accuracy: 0.955966, Loss: 0.180657
Training Epoch_11 Batch_22, Accuracy: 0.956522, Loss: 0.179538
Training Epoch_11 Batch_23, Accuracy: 0.953125, Loss: 0.188285
Training Epoch_11 Batch_24, Accuracy: 0.952500, Loss: 0.189223
Training Epoch_11 Batch_25, Accuracy: 0.950721, Loss: 0.190942
Training Epoch_11 Batch_26, Accuracy: 0.950231, Loss: 0.189913
Training Epoch_11 Batch_27, Accuracy: 0.948103, Loss: 0.191394
Training Epoch_11 Batch_28, Accuracy: 0.948815, Loss: 0.190835
Training Epoch_11 Batch_29, Accuracy: 0.948438, Loss: 0.190155
Training Epoch_11 Batch_30, Accuracy: 0.949093, Loss: 0.189772
Training Epoch_11 Batch_31, Accuracy: 0.947754, Loss: 0.189681
Training Epoch_11 Batch_32, Accuracy: 0.946970, Loss: 0.190391
Training Epoch_11 Batch_33, Accuracy: 0.947151, Loss: 0.189689
Training Epoch_11 Batch_34, Accuracy: 0.947768, Loss: 0.187281
Training Epoch_11 Batch_35, Accuracy: 0.947483, Loss: 0.187329
Training Epoch_11 Batch_36, Accuracy: 0.948057, Loss: 0.186801
Training Epoch_11 Batch_37, Accuracy: 0.947780, Loss: 0.187480
Training Epoch_11 Batch_38, Accuracy: 0.947516, Loss: 0.188583
Training Epoch_11 Batch_39, Accuracy: 0.947656, Loss: 0.188714
Training Epoch_11 Batch_40, Accuracy: 0.946265, Loss: 0.189383
Training Epoch_11 Batch_41, Accuracy: 0.946429, Loss: 0.188984
Training Epoch_11 Batch_42, Accuracy: 0.947311, Loss: 0.188126
Training Epoch_11 Batch_43, Accuracy: 0.947088, Loss: 0.188116
Training Epoch_11 Batch_44, Accuracy: 0.947387, Loss: 0.184483
********************* Summary for epoch: 11 *********************
2019-07-15 09:06:33: Step 11: Training accuracy = 94.7%
2019-07-15 09:06:33: Step 11: Training cross entropy = 0.184483
2019-07-15 09:06:33: Step 11: Validation accuracy = 90.6%
2019-07-15 09:06:33: Step 11: Validation cross entropy = 0.304847


Training Epoch_12 Batch_0, Accuracy: 0.937500, Loss: 0.334266
Training Epoch_12 Batch_1, Accuracy: 0.937500, Loss: 0.295809
Training Epoch_12 Batch_2, Accuracy: 0.947917, Loss: 0.256578
Training Epoch_12 Batch_3, Accuracy: 0.953125, Loss: 0.221270
Training Epoch_12 Batch_4, Accuracy: 0.953125, Loss: 0.211997
Training Epoch_12 Batch_5, Accuracy: 0.955729, Loss: 0.195105
Training Epoch_12 Batch_6, Accuracy: 0.953125, Loss: 0.193769
Training Epoch_12 Batch_7, Accuracy: 0.953125, Loss: 0.186174
Training Epoch_12 Batch_8, Accuracy: 0.951389, Loss: 0.191602
Training Epoch_12 Batch_9, Accuracy: 0.951563, Loss: 0.189179
Training Epoch_12 Batch_10, Accuracy: 0.953125, Loss: 0.188748
Training Epoch_12 Batch_11, Accuracy: 0.955729, Loss: 0.181060
Training Epoch_12 Batch_12, Accuracy: 0.955529, Loss: 0.176602
Training Epoch_12 Batch_13, Accuracy: 0.952009, Loss: 0.179099
Training Epoch_12 Batch_14, Accuracy: 0.954167, Loss: 0.178699
Training Epoch_12 Batch_15, Accuracy: 0.956055, Loss: 0.175381
Training Epoch_12 Batch_16, Accuracy: 0.956801, Loss: 0.173719
Training Epoch_12 Batch_17, Accuracy: 0.958333, Loss: 0.171617
Training Epoch_12 Batch_18, Accuracy: 0.958059, Loss: 0.171296
Training Epoch_12 Batch_19, Accuracy: 0.957031, Loss: 0.171655
Training Epoch_12 Batch_20, Accuracy: 0.957589, Loss: 0.171064
Training Epoch_12 Batch_21, Accuracy: 0.958097, Loss: 0.169875
Training Epoch_12 Batch_22, Accuracy: 0.957880, Loss: 0.170037
Training Epoch_12 Batch_23, Accuracy: 0.958333, Loss: 0.169265
Training Epoch_12 Batch_24, Accuracy: 0.958750, Loss: 0.168306
Training Epoch_12 Batch_25, Accuracy: 0.958534, Loss: 0.167550
Training Epoch_12 Batch_26, Accuracy: 0.959491, Loss: 0.165609
Training Epoch_12 Batch_27, Accuracy: 0.958705, Loss: 0.166711
Training Epoch_12 Batch_28, Accuracy: 0.957435, Loss: 0.169317
Training Epoch_12 Batch_29, Accuracy: 0.957812, Loss: 0.169684
Training Epoch_12 Batch_30, Accuracy: 0.957157, Loss: 0.172305
Training Epoch_12 Batch_31, Accuracy: 0.955078, Loss: 0.176072
Training Epoch_12 Batch_32, Accuracy: 0.954545, Loss: 0.175845
Training Epoch_12 Batch_33, Accuracy: 0.954044, Loss: 0.176809
Training Epoch_12 Batch_34, Accuracy: 0.954464, Loss: 0.177008
Training Epoch_12 Batch_35, Accuracy: 0.953559, Loss: 0.177009
Training Epoch_12 Batch_36, Accuracy: 0.953125, Loss: 0.176423
Training Epoch_12 Batch_37, Accuracy: 0.952714, Loss: 0.176682
Training Epoch_12 Batch_38, Accuracy: 0.952324, Loss: 0.176326
Training Epoch_12 Batch_39, Accuracy: 0.952344, Loss: 0.176329
Training Epoch_12 Batch_40, Accuracy: 0.951601, Loss: 0.177348
Training Epoch_12 Batch_41, Accuracy: 0.952009, Loss: 0.176648
Training Epoch_12 Batch_42, Accuracy: 0.952762, Loss: 0.174998
Training Epoch_12 Batch_43, Accuracy: 0.952770, Loss: 0.174853
Training Epoch_12 Batch_44, Accuracy: 0.952684, Loss: 0.175119
********************* Summary for epoch: 12 *********************
2019-07-15 09:06:34: Step 12: Training accuracy = 95.3%
2019-07-15 09:06:34: Step 12: Training cross entropy = 0.175119
2019-07-15 09:06:34: Step 12: Validation accuracy = 90.3%
2019-07-15 09:06:34: Step 12: Validation cross entropy = 0.298666


Training Epoch_13 Batch_0, Accuracy: 0.921875, Loss: 0.286614
Training Epoch_13 Batch_1, Accuracy: 0.914062, Loss: 0.237163
Training Epoch_13 Batch_2, Accuracy: 0.916667, Loss: 0.262085
Training Epoch_13 Batch_3, Accuracy: 0.929688, Loss: 0.231108
Training Epoch_13 Batch_4, Accuracy: 0.937500, Loss: 0.209875
Training Epoch_13 Batch_5, Accuracy: 0.945312, Loss: 0.194161
Training Epoch_13 Batch_6, Accuracy: 0.948661, Loss: 0.186328
Training Epoch_13 Batch_7, Accuracy: 0.955078, Loss: 0.176772
Training Epoch_13 Batch_8, Accuracy: 0.949653, Loss: 0.189306
Training Epoch_13 Batch_9, Accuracy: 0.950000, Loss: 0.184783
Training Epoch_13 Batch_10, Accuracy: 0.951705, Loss: 0.182025
Training Epoch_13 Batch_11, Accuracy: 0.949219, Loss: 0.183897
Training Epoch_13 Batch_12, Accuracy: 0.947115, Loss: 0.184625
Training Epoch_13 Batch_13, Accuracy: 0.947545, Loss: 0.180960
Training Epoch_13 Batch_14, Accuracy: 0.945833, Loss: 0.182025
Training Epoch_13 Batch_15, Accuracy: 0.948242, Loss: 0.178886
Training Epoch_13 Batch_16, Accuracy: 0.950368, Loss: 0.175698
Training Epoch_13 Batch_17, Accuracy: 0.952257, Loss: 0.172556
Training Epoch_13 Batch_18, Accuracy: 0.953125, Loss: 0.172449
Training Epoch_13 Batch_19, Accuracy: 0.952344, Loss: 0.173429
Training Epoch_13 Batch_20, Accuracy: 0.953125, Loss: 0.174218
Training Epoch_13 Batch_21, Accuracy: 0.954545, Loss: 0.172915
Training Epoch_13 Batch_22, Accuracy: 0.955842, Loss: 0.169834
Training Epoch_13 Batch_23, Accuracy: 0.956380, Loss: 0.167966
Training Epoch_13 Batch_24, Accuracy: 0.957500, Loss: 0.165282
Training Epoch_13 Batch_25, Accuracy: 0.957933, Loss: 0.162620
Training Epoch_13 Batch_26, Accuracy: 0.958333, Loss: 0.160673
Training Epoch_13 Batch_27, Accuracy: 0.958147, Loss: 0.161229
Training Epoch_13 Batch_28, Accuracy: 0.956897, Loss: 0.162169
Training Epoch_13 Batch_29, Accuracy: 0.957292, Loss: 0.163345
Training Epoch_13 Batch_30, Accuracy: 0.957157, Loss: 0.163238
Training Epoch_13 Batch_31, Accuracy: 0.957520, Loss: 0.161539
Training Epoch_13 Batch_32, Accuracy: 0.958333, Loss: 0.159570
Training Epoch_13 Batch_33, Accuracy: 0.958640, Loss: 0.159223
Training Epoch_13 Batch_34, Accuracy: 0.956250, Loss: 0.165246
Training Epoch_13 Batch_35, Accuracy: 0.954427, Loss: 0.166700
Training Epoch_13 Batch_36, Accuracy: 0.953125, Loss: 0.166883
Training Epoch_13 Batch_37, Accuracy: 0.953125, Loss: 0.166908
Training Epoch_13 Batch_38, Accuracy: 0.952324, Loss: 0.169745
Training Epoch_13 Batch_39, Accuracy: 0.951563, Loss: 0.170131
Training Epoch_13 Batch_40, Accuracy: 0.952363, Loss: 0.168914
Training Epoch_13 Batch_41, Accuracy: 0.952753, Loss: 0.167565
Training Epoch_13 Batch_42, Accuracy: 0.952762, Loss: 0.167044
Training Epoch_13 Batch_43, Accuracy: 0.953125, Loss: 0.167643
Training Epoch_13 Batch_44, Accuracy: 0.953390, Loss: 0.164880
********************* Summary for epoch: 13 *********************
2019-07-15 09:06:34: Step 13: Training accuracy = 95.3%
2019-07-15 09:06:34: Step 13: Training cross entropy = 0.164880
2019-07-15 09:06:34: Step 13: Validation accuracy = 90.9%
2019-07-15 09:06:34: Step 13: Validation cross entropy = 0.304545


Saving intermediate result.
Training Epoch_14 Batch_0, Accuracy: 0.984375, Loss: 0.142403
Training Epoch_14 Batch_1, Accuracy: 0.968750, Loss: 0.170713
Training Epoch_14 Batch_2, Accuracy: 0.968750, Loss: 0.150472
Training Epoch_14 Batch_3, Accuracy: 0.957031, Loss: 0.171528
Training Epoch_14 Batch_4, Accuracy: 0.943750, Loss: 0.205558
Training Epoch_14 Batch_5, Accuracy: 0.937500, Loss: 0.217575
Training Epoch_14 Batch_6, Accuracy: 0.939732, Loss: 0.205680
Training Epoch_14 Batch_7, Accuracy: 0.943359, Loss: 0.197289
Training Epoch_14 Batch_8, Accuracy: 0.944444, Loss: 0.191572
Training Epoch_14 Batch_9, Accuracy: 0.945312, Loss: 0.188940
Training Epoch_14 Batch_10, Accuracy: 0.948864, Loss: 0.188887
Training Epoch_14 Batch_11, Accuracy: 0.951823, Loss: 0.183650
Training Epoch_14 Batch_12, Accuracy: 0.950721, Loss: 0.182705
Training Epoch_14 Batch_13, Accuracy: 0.952009, Loss: 0.184409
Training Epoch_14 Batch_14, Accuracy: 0.950000, Loss: 0.182855
Training Epoch_14 Batch_15, Accuracy: 0.952148, Loss: 0.177978
Training Epoch_14 Batch_16, Accuracy: 0.952206, Loss: 0.176891
Training Epoch_14 Batch_17, Accuracy: 0.951389, Loss: 0.180681
Training Epoch_14 Batch_18, Accuracy: 0.951480, Loss: 0.181727
Training Epoch_14 Batch_19, Accuracy: 0.953125, Loss: 0.175289
Training Epoch_14 Batch_20, Accuracy: 0.953869, Loss: 0.175034
Training Epoch_14 Batch_21, Accuracy: 0.952415, Loss: 0.176190
Training Epoch_14 Batch_22, Accuracy: 0.953125, Loss: 0.173999
Training Epoch_14 Batch_23, Accuracy: 0.951823, Loss: 0.176524
Training Epoch_14 Batch_24, Accuracy: 0.951250, Loss: 0.175858
Training Epoch_14 Batch_25, Accuracy: 0.951923, Loss: 0.173824
Training Epoch_14 Batch_26, Accuracy: 0.951389, Loss: 0.173826
Training Epoch_14 Batch_27, Accuracy: 0.950335, Loss: 0.173979
Training Epoch_14 Batch_28, Accuracy: 0.950431, Loss: 0.173571
Training Epoch_14 Batch_29, Accuracy: 0.951042, Loss: 0.172441
Training Epoch_14 Batch_30, Accuracy: 0.952117, Loss: 0.171335
Training Epoch_14 Batch_31, Accuracy: 0.953125, Loss: 0.168280
Training Epoch_14 Batch_32, Accuracy: 0.953598, Loss: 0.166376
Training Epoch_14 Batch_33, Accuracy: 0.952206, Loss: 0.167657
Training Epoch_14 Batch_34, Accuracy: 0.953125, Loss: 0.165528
Training Epoch_14 Batch_35, Accuracy: 0.954427, Loss: 0.163560
Training Epoch_14 Batch_36, Accuracy: 0.955236, Loss: 0.161957
Training Epoch_14 Batch_37, Accuracy: 0.954770, Loss: 0.164693
Training Epoch_14 Batch_38, Accuracy: 0.954728, Loss: 0.164252
Training Epoch_14 Batch_39, Accuracy: 0.955469, Loss: 0.162498
Training Epoch_14 Batch_40, Accuracy: 0.955793, Loss: 0.161618
Training Epoch_14 Batch_41, Accuracy: 0.955357, Loss: 0.161408
Training Epoch_14 Batch_42, Accuracy: 0.954215, Loss: 0.162374
Training Epoch_14 Batch_43, Accuracy: 0.954545, Loss: 0.162072
Training Epoch_14 Batch_44, Accuracy: 0.954802, Loss: 0.160140
********************* Summary for epoch: 14 *********************
2019-07-15 09:06:36: Step 14: Training accuracy = 95.5%
2019-07-15 09:06:36: Step 14: Training cross entropy = 0.160140
2019-07-15 09:06:36: Step 14: Validation accuracy = 89.5%
2019-07-15 09:06:36: Step 14: Validation cross entropy = 0.333367


Training Epoch_15 Batch_0, Accuracy: 0.937500, Loss: 0.160379
Training Epoch_15 Batch_1, Accuracy: 0.937500, Loss: 0.173555
Training Epoch_15 Batch_2, Accuracy: 0.937500, Loss: 0.188902
Training Epoch_15 Batch_3, Accuracy: 0.937500, Loss: 0.200560
Training Epoch_15 Batch_4, Accuracy: 0.934375, Loss: 0.195234
Training Epoch_15 Batch_5, Accuracy: 0.940104, Loss: 0.190065
Training Epoch_15 Batch_6, Accuracy: 0.946429, Loss: 0.175979
Training Epoch_15 Batch_7, Accuracy: 0.941406, Loss: 0.184002
Training Epoch_15 Batch_8, Accuracy: 0.937500, Loss: 0.188133
Training Epoch_15 Batch_9, Accuracy: 0.940625, Loss: 0.179754
Training Epoch_15 Batch_10, Accuracy: 0.941761, Loss: 0.175408
Training Epoch_15 Batch_11, Accuracy: 0.944010, Loss: 0.170843
Training Epoch_15 Batch_12, Accuracy: 0.947115, Loss: 0.167966
Training Epoch_15 Batch_13, Accuracy: 0.947545, Loss: 0.164618
Training Epoch_15 Batch_14, Accuracy: 0.948958, Loss: 0.162176
Training Epoch_15 Batch_15, Accuracy: 0.949219, Loss: 0.161943
Training Epoch_15 Batch_16, Accuracy: 0.949449, Loss: 0.162085
Training Epoch_15 Batch_17, Accuracy: 0.952257, Loss: 0.158458
Training Epoch_15 Batch_18, Accuracy: 0.953125, Loss: 0.156774
Training Epoch_15 Batch_19, Accuracy: 0.952344, Loss: 0.159827
Training Epoch_15 Batch_20, Accuracy: 0.952381, Loss: 0.162613
Training Epoch_15 Batch_21, Accuracy: 0.953125, Loss: 0.160653
Training Epoch_15 Batch_22, Accuracy: 0.952446, Loss: 0.162462
Training Epoch_15 Batch_23, Accuracy: 0.953776, Loss: 0.162307
Training Epoch_15 Batch_24, Accuracy: 0.954375, Loss: 0.163360
Training Epoch_15 Batch_25, Accuracy: 0.951923, Loss: 0.166535
Training Epoch_15 Batch_26, Accuracy: 0.951389, Loss: 0.168186
Training Epoch_15 Batch_27, Accuracy: 0.952567, Loss: 0.167227
Training Epoch_15 Batch_28, Accuracy: 0.952586, Loss: 0.167125
Training Epoch_15 Batch_29, Accuracy: 0.953125, Loss: 0.165982
Training Epoch_15 Batch_30, Accuracy: 0.954637, Loss: 0.162561
Training Epoch_15 Batch_31, Accuracy: 0.955566, Loss: 0.160969
Training Epoch_15 Batch_32, Accuracy: 0.955492, Loss: 0.160066
Training Epoch_15 Batch_33, Accuracy: 0.956342, Loss: 0.158313
Training Epoch_15 Batch_34, Accuracy: 0.956250, Loss: 0.157995
Training Epoch_15 Batch_35, Accuracy: 0.956163, Loss: 0.157806
Training Epoch_15 Batch_36, Accuracy: 0.956926, Loss: 0.156073
Training Epoch_15 Batch_37, Accuracy: 0.957648, Loss: 0.155241
Training Epoch_15 Batch_38, Accuracy: 0.957131, Loss: 0.155084
Training Epoch_15 Batch_39, Accuracy: 0.957422, Loss: 0.154692
Training Epoch_15 Batch_40, Accuracy: 0.957317, Loss: 0.154463
Training Epoch_15 Batch_41, Accuracy: 0.957217, Loss: 0.154181
Training Epoch_15 Batch_42, Accuracy: 0.957849, Loss: 0.152889
Training Epoch_15 Batch_43, Accuracy: 0.957741, Loss: 0.151992
Training Epoch_15 Batch_44, Accuracy: 0.957627, Loss: 0.153577
********************* Summary for epoch: 15 *********************
2019-07-15 09:06:37: Step 15: Training accuracy = 95.8%
2019-07-15 09:06:37: Step 15: Training cross entropy = 0.153577
2019-07-15 09:06:37: Step 15: Validation accuracy = 89.0%
2019-07-15 09:06:37: Step 15: Validation cross entropy = 0.351252


Training Epoch_16 Batch_0, Accuracy: 0.937500, Loss: 0.186099
Training Epoch_16 Batch_1, Accuracy: 0.968750, Loss: 0.125357
Training Epoch_16 Batch_2, Accuracy: 0.973958, Loss: 0.114145
Training Epoch_16 Batch_3, Accuracy: 0.972656, Loss: 0.122107
Training Epoch_16 Batch_4, Accuracy: 0.965625, Loss: 0.144967
Training Epoch_16 Batch_5, Accuracy: 0.958333, Loss: 0.153131
Training Epoch_16 Batch_6, Accuracy: 0.959821, Loss: 0.150852
Training Epoch_16 Batch_7, Accuracy: 0.960938, Loss: 0.152720
Training Epoch_16 Batch_8, Accuracy: 0.961806, Loss: 0.145985
Training Epoch_16 Batch_9, Accuracy: 0.959375, Loss: 0.146676
Training Epoch_16 Batch_10, Accuracy: 0.961648, Loss: 0.142186
Training Epoch_16 Batch_11, Accuracy: 0.962240, Loss: 0.142396
Training Epoch_16 Batch_12, Accuracy: 0.961538, Loss: 0.142948
Training Epoch_16 Batch_13, Accuracy: 0.960938, Loss: 0.143694
Training Epoch_16 Batch_14, Accuracy: 0.959375, Loss: 0.146099
Training Epoch_16 Batch_15, Accuracy: 0.960938, Loss: 0.143040
Training Epoch_16 Batch_16, Accuracy: 0.961397, Loss: 0.142155
Training Epoch_16 Batch_17, Accuracy: 0.960069, Loss: 0.144070
Training Epoch_16 Batch_18, Accuracy: 0.960526, Loss: 0.142350
Training Epoch_16 Batch_19, Accuracy: 0.960938, Loss: 0.143688
Training Epoch_16 Batch_20, Accuracy: 0.960565, Loss: 0.142514
Training Epoch_16 Batch_21, Accuracy: 0.960938, Loss: 0.143026
Training Epoch_16 Batch_22, Accuracy: 0.961277, Loss: 0.142532
Training Epoch_16 Batch_23, Accuracy: 0.959635, Loss: 0.145693
Training Epoch_16 Batch_24, Accuracy: 0.961250, Loss: 0.142736
Training Epoch_16 Batch_25, Accuracy: 0.962139, Loss: 0.140889
Training Epoch_16 Batch_26, Accuracy: 0.962963, Loss: 0.139675
Training Epoch_16 Batch_27, Accuracy: 0.963170, Loss: 0.141098
Training Epoch_16 Batch_28, Accuracy: 0.962823, Loss: 0.141132
Training Epoch_16 Batch_29, Accuracy: 0.960938, Loss: 0.145423
Training Epoch_16 Batch_30, Accuracy: 0.962198, Loss: 0.143426
Training Epoch_16 Batch_31, Accuracy: 0.963379, Loss: 0.142315
Training Epoch_16 Batch_32, Accuracy: 0.964015, Loss: 0.141290
Training Epoch_16 Batch_33, Accuracy: 0.964154, Loss: 0.141063
Training Epoch_16 Batch_34, Accuracy: 0.962500, Loss: 0.142437
Training Epoch_16 Batch_35, Accuracy: 0.963108, Loss: 0.141336
Training Epoch_16 Batch_36, Accuracy: 0.962416, Loss: 0.142452
Training Epoch_16 Batch_37, Accuracy: 0.962171, Loss: 0.143772
Training Epoch_16 Batch_38, Accuracy: 0.962740, Loss: 0.144064
Training Epoch_16 Batch_39, Accuracy: 0.962109, Loss: 0.145030
Training Epoch_16 Batch_40, Accuracy: 0.962652, Loss: 0.144816
Training Epoch_16 Batch_41, Accuracy: 0.962426, Loss: 0.145519
Training Epoch_16 Batch_42, Accuracy: 0.962573, Loss: 0.144802
Training Epoch_16 Batch_43, Accuracy: 0.963423, Loss: 0.142877
Training Epoch_16 Batch_44, Accuracy: 0.962218, Loss: 0.149453
********************* Summary for epoch: 16 *********************
2019-07-15 09:06:37: Step 16: Training accuracy = 96.2%
2019-07-15 09:06:37: Step 16: Training cross entropy = 0.149453
2019-07-15 09:06:37: Step 16: Validation accuracy = 91.2%
2019-07-15 09:06:37: Step 16: Validation cross entropy = 0.296960


Saving intermediate result.
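
"Saving intermediate result." appears here after step 16 and again later after step 27, which coincides with the epochs whose validation accuracy (91.2%, then 91.7%) is the highest reported in this part of the log, while later ties do not trigger a save; so the intermediate checkpointing may be tied to a strict improvement in validation accuracy. A hedged sketch of that pattern; the saver is assumed to be a TensorFlow tf.train.Saver over the retrained InceptionV3/Logits variables, and none of these names come from the actual service code:

def maybe_save_intermediate(sess, saver, step, val_accuracy, best_so_far,
                            checkpoint_path="intermediate.ckpt"):
    # Write a checkpoint only when validation accuracy improves on the best
    # value seen so far; returns the (possibly updated) best accuracy.
    if val_accuracy > best_so_far:
        print("Saving intermediate result.")
        saver.save(sess, checkpoint_path, global_step=step)
        return val_accuracy
    return best_so_far
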
Training Epoch_17 Batch_0, Accuracy: 0.984375, Loss: 0.085774
Training Epoch_17 Batch_1, Accuracy: 0.984375, Loss: 0.098889
Training Epoch_17 Batch_2, Accuracy: 0.984375, Loss: 0.095262
Training Epoch_17 Batch_3, Accuracy: 0.984375, Loss: 0.103728
Training Epoch_17 Batch_4, Accuracy: 0.981250, Loss: 0.114544
Training Epoch_17 Batch_5, Accuracy: 0.979167, Loss: 0.117575
Training Epoch_17 Batch_6, Accuracy: 0.975446, Loss: 0.124934
Training Epoch_17 Batch_7, Accuracy: 0.972656, Loss: 0.138373
Training Epoch_17 Batch_8, Accuracy: 0.973958, Loss: 0.133701
Training Epoch_17 Batch_9, Accuracy: 0.975000, Loss: 0.131907
Training Epoch_17 Batch_10, Accuracy: 0.974432, Loss: 0.133045
Training Epoch_17 Batch_11, Accuracy: 0.975260, Loss: 0.138625
Training Epoch_17 Batch_12, Accuracy: 0.974760, Loss: 0.136561
Training Epoch_17 Batch_13, Accuracy: 0.974330, Loss: 0.136458
Training Epoch_17 Batch_14, Accuracy: 0.973958, Loss: 0.134930
Training Epoch_17 Batch_15, Accuracy: 0.974609, Loss: 0.132352
Training Epoch_17 Batch_16, Accuracy: 0.975184, Loss: 0.131192
Training Epoch_17 Batch_17, Accuracy: 0.973090, Loss: 0.134148
Training Epoch_17 Batch_18, Accuracy: 0.970395, Loss: 0.138444
Training Epoch_17 Batch_19, Accuracy: 0.970312, Loss: 0.138471
Training Epoch_17 Batch_20, Accuracy: 0.971726, Loss: 0.136180
Training Epoch_17 Batch_21, Accuracy: 0.970881, Loss: 0.137314
Training Epoch_17 Batch_22, Accuracy: 0.971467, Loss: 0.135269
Training Epoch_17 Batch_23, Accuracy: 0.970703, Loss: 0.137878
Training Epoch_17 Batch_24, Accuracy: 0.970625, Loss: 0.138404
Training Epoch_17 Batch_25, Accuracy: 0.971154, Loss: 0.136176
Training Epoch_17 Batch_26, Accuracy: 0.971065, Loss: 0.136189
Training Epoch_17 Batch_27, Accuracy: 0.971540, Loss: 0.135353
Training Epoch_17 Batch_28, Accuracy: 0.970366, Loss: 0.135775
Training Epoch_17 Batch_29, Accuracy: 0.970312, Loss: 0.136823
Training Epoch_17 Batch_30, Accuracy: 0.969758, Loss: 0.135676
Training Epoch_17 Batch_31, Accuracy: 0.970703, Loss: 0.135030
Training Epoch_17 Batch_32, Accuracy: 0.971117, Loss: 0.133639
Training Epoch_17 Batch_33, Accuracy: 0.971048, Loss: 0.133917
Training Epoch_17 Batch_34, Accuracy: 0.970536, Loss: 0.134635
Training Epoch_17 Batch_35, Accuracy: 0.970486, Loss: 0.135886
Training Epoch_17 Batch_36, Accuracy: 0.970861, Loss: 0.134647
Training Epoch_17 Batch_37, Accuracy: 0.970395, Loss: 0.135364
Training Epoch_17 Batch_38, Accuracy: 0.970753, Loss: 0.134751
Training Epoch_17 Batch_39, Accuracy: 0.970703, Loss: 0.134096
Training Epoch_17 Batch_40, Accuracy: 0.969512, Loss: 0.135673
Training Epoch_17 Batch_41, Accuracy: 0.969122, Loss: 0.135625
Training Epoch_17 Batch_42, Accuracy: 0.968750, Loss: 0.135518
Training Epoch_17 Batch_43, Accuracy: 0.969105, Loss: 0.135082
Training Epoch_17 Batch_44, Accuracy: 0.969280, Loss: 0.135205
********************* Summary for epoch: 17 *********************
2019-07-15 09:06:39: Step 17: Training accuracy = 96.9%
2019-07-15 09:06:39: Step 17: Training cross entropy = 0.135205
2019-07-15 09:06:39: Step 17: Validation accuracy = 90.6%
2019-07-15 09:06:39: Step 17: Validation cross entropy = 0.317019


Training Epoch_18 Batch_0, Accuracy: 0.984375, Loss: 0.113303
Training Epoch_18 Batch_1, Accuracy: 0.992188, Loss: 0.107064
Training Epoch_18 Batch_2, Accuracy: 0.984375, Loss: 0.109660
Training Epoch_18 Batch_3, Accuracy: 0.980469, Loss: 0.115614
Training Epoch_18 Batch_4, Accuracy: 0.984375, Loss: 0.106066
Training Epoch_18 Batch_5, Accuracy: 0.986979, Loss: 0.097507
Training Epoch_18 Batch_6, Accuracy: 0.982143, Loss: 0.103460
Training Epoch_18 Batch_7, Accuracy: 0.984375, Loss: 0.099452
Training Epoch_18 Batch_8, Accuracy: 0.984375, Loss: 0.099346
Training Epoch_18 Batch_9, Accuracy: 0.984375, Loss: 0.101221
Training Epoch_18 Batch_10, Accuracy: 0.981534, Loss: 0.103243
Training Epoch_18 Batch_11, Accuracy: 0.980469, Loss: 0.103204
Training Epoch_18 Batch_12, Accuracy: 0.980769, Loss: 0.106429
Training Epoch_18 Batch_13, Accuracy: 0.982143, Loss: 0.103191
Training Epoch_18 Batch_14, Accuracy: 0.980208, Loss: 0.108763
Training Epoch_18 Batch_15, Accuracy: 0.978516, Loss: 0.113447
Training Epoch_18 Batch_16, Accuracy: 0.977941, Loss: 0.115759
Training Epoch_18 Batch_17, Accuracy: 0.977431, Loss: 0.115010
Training Epoch_18 Batch_18, Accuracy: 0.976974, Loss: 0.117320
Training Epoch_18 Batch_19, Accuracy: 0.976562, Loss: 0.116334
Training Epoch_18 Batch_20, Accuracy: 0.976190, Loss: 0.116101
Training Epoch_18 Batch_21, Accuracy: 0.975852, Loss: 0.118609
Training Epoch_18 Batch_22, Accuracy: 0.974864, Loss: 0.120901
Training Epoch_18 Batch_23, Accuracy: 0.973307, Loss: 0.121894
Training Epoch_18 Batch_24, Accuracy: 0.972500, Loss: 0.122194
Training Epoch_18 Batch_25, Accuracy: 0.970553, Loss: 0.125660
Training Epoch_18 Batch_26, Accuracy: 0.971644, Loss: 0.124600
Training Epoch_18 Batch_27, Accuracy: 0.972656, Loss: 0.123915
Training Epoch_18 Batch_28, Accuracy: 0.971983, Loss: 0.125247
Training Epoch_18 Batch_29, Accuracy: 0.971354, Loss: 0.127322
Training Epoch_18 Batch_30, Accuracy: 0.970766, Loss: 0.128387
Training Epoch_18 Batch_31, Accuracy: 0.970215, Loss: 0.129167
Training Epoch_18 Batch_32, Accuracy: 0.970170, Loss: 0.128624
Training Epoch_18 Batch_33, Accuracy: 0.970129, Loss: 0.130786
Training Epoch_18 Batch_34, Accuracy: 0.970982, Loss: 0.128632
Training Epoch_18 Batch_35, Accuracy: 0.970920, Loss: 0.128630
Training Epoch_18 Batch_36, Accuracy: 0.971706, Loss: 0.127896
Training Epoch_18 Batch_37, Accuracy: 0.972451, Loss: 0.126214
Training Epoch_18 Batch_38, Accuracy: 0.972756, Loss: 0.125230
Training Epoch_18 Batch_39, Accuracy: 0.973047, Loss: 0.125848
Training Epoch_18 Batch_40, Accuracy: 0.972942, Loss: 0.127689
Training Epoch_18 Batch_41, Accuracy: 0.973214, Loss: 0.126827
Training Epoch_18 Batch_42, Accuracy: 0.972747, Loss: 0.127159
Training Epoch_18 Batch_43, Accuracy: 0.973011, Loss: 0.126601
Training Epoch_18 Batch_44, Accuracy: 0.972811, Loss: 0.126125
********************* Summary for epoch: 18 *********************
2019-07-15 09:06:40: Step 18: Training accuracy = 97.3%
2019-07-15 09:06:40: Step 18: Training cross entropy = 0.126125
2019-07-15 09:06:40: Step 18: Validation accuracy = 90.3%
2019-07-15 09:06:40: Step 18: Validation cross entropy = 0.305406


Training Epoch_19 Batch_0, Accuracy: 0.984375, Loss: 0.110507
Training Epoch_19 Batch_1, Accuracy: 0.976562, Loss: 0.124849
Training Epoch_19 Batch_2, Accuracy: 0.984375, Loss: 0.117236
Training Epoch_19 Batch_3, Accuracy: 0.972656, Loss: 0.133644
Training Epoch_19 Batch_4, Accuracy: 0.975000, Loss: 0.120101
Training Epoch_19 Batch_5, Accuracy: 0.971354, Loss: 0.129340
Training Epoch_19 Batch_6, Accuracy: 0.970982, Loss: 0.126905
Training Epoch_19 Batch_7, Accuracy: 0.972656, Loss: 0.122389
Training Epoch_19 Batch_8, Accuracy: 0.973958, Loss: 0.118838
Training Epoch_19 Batch_9, Accuracy: 0.976562, Loss: 0.112783
Training Epoch_19 Batch_10, Accuracy: 0.977273, Loss: 0.111738
Training Epoch_19 Batch_11, Accuracy: 0.977865, Loss: 0.108880
Training Epoch_19 Batch_12, Accuracy: 0.978365, Loss: 0.107675
Training Epoch_19 Batch_13, Accuracy: 0.978795, Loss: 0.109838
Training Epoch_19 Batch_14, Accuracy: 0.979167, Loss: 0.107701
Training Epoch_19 Batch_15, Accuracy: 0.972656, Loss: 0.116396
Training Epoch_19 Batch_16, Accuracy: 0.970588, Loss: 0.118918
Training Epoch_19 Batch_17, Accuracy: 0.972222, Loss: 0.116928
Training Epoch_19 Batch_18, Accuracy: 0.972862, Loss: 0.117127
Training Epoch_19 Batch_19, Accuracy: 0.973437, Loss: 0.117480
Training Epoch_19 Batch_20, Accuracy: 0.973958, Loss: 0.116257
Training Epoch_19 Batch_21, Accuracy: 0.973722, Loss: 0.117222
Training Epoch_19 Batch_22, Accuracy: 0.974864, Loss: 0.115886
Training Epoch_19 Batch_23, Accuracy: 0.974609, Loss: 0.114484
Training Epoch_19 Batch_24, Accuracy: 0.974375, Loss: 0.114708
Training Epoch_19 Batch_25, Accuracy: 0.974159, Loss: 0.115787
Training Epoch_19 Batch_26, Accuracy: 0.973958, Loss: 0.116079
Training Epoch_19 Batch_27, Accuracy: 0.974888, Loss: 0.115221
Training Epoch_19 Batch_28, Accuracy: 0.974138, Loss: 0.114839
Training Epoch_19 Batch_29, Accuracy: 0.974479, Loss: 0.113843
Training Epoch_19 Batch_30, Accuracy: 0.974798, Loss: 0.113428
Training Epoch_19 Batch_31, Accuracy: 0.974609, Loss: 0.114462
Training Epoch_19 Batch_32, Accuracy: 0.974905, Loss: 0.114376
Training Epoch_19 Batch_33, Accuracy: 0.975184, Loss: 0.117073
Training Epoch_19 Batch_34, Accuracy: 0.974554, Loss: 0.118026
Training Epoch_19 Batch_35, Accuracy: 0.974826, Loss: 0.117457
Training Epoch_19 Batch_36, Accuracy: 0.974240, Loss: 0.117732
Training Epoch_19 Batch_37, Accuracy: 0.974507, Loss: 0.117725
Training Epoch_19 Batch_38, Accuracy: 0.974359, Loss: 0.117418
Training Epoch_19 Batch_39, Accuracy: 0.973828, Loss: 0.117866
Training Epoch_19 Batch_40, Accuracy: 0.972561, Loss: 0.121359
Training Epoch_19 Batch_41, Accuracy: 0.973214, Loss: 0.121370
Training Epoch_19 Batch_42, Accuracy: 0.973110, Loss: 0.120997
Training Epoch_19 Batch_43, Accuracy: 0.973011, Loss: 0.120931
Training Epoch_19 Batch_44, Accuracy: 0.973164, Loss: 0.119984
********************* Summary for epoch: 19 *********************
2019-07-15 09:06:40: Step 19: Training accuracy = 97.3%
2019-07-15 09:06:40: Step 19: Training cross entropy = 0.119984
2019-07-15 09:06:40: Step 19: Validation accuracy = 89.0%
2019-07-15 09:06:40: Step 19: Validation cross entropy = 0.337119


Training Epoch_20 Batch_0, Accuracy: 0.968750, Loss: 0.138611
Training Epoch_20 Batch_1, Accuracy: 0.960938, Loss: 0.157745
Training Epoch_20 Batch_2, Accuracy: 0.963542, Loss: 0.139920
Training Epoch_20 Batch_3, Accuracy: 0.968750, Loss: 0.147386
Training Epoch_20 Batch_4, Accuracy: 0.968750, Loss: 0.141428
Training Epoch_20 Batch_5, Accuracy: 0.973958, Loss: 0.131997
Training Epoch_20 Batch_6, Accuracy: 0.977679, Loss: 0.121816
Training Epoch_20 Batch_7, Accuracy: 0.976562, Loss: 0.120974
Training Epoch_20 Batch_8, Accuracy: 0.977431, Loss: 0.124707
Training Epoch_20 Batch_9, Accuracy: 0.973437, Loss: 0.128788
Training Epoch_20 Batch_10, Accuracy: 0.974432, Loss: 0.126465
Training Epoch_20 Batch_11, Accuracy: 0.975260, Loss: 0.128341
Training Epoch_20 Batch_12, Accuracy: 0.975962, Loss: 0.125162
Training Epoch_20 Batch_13, Accuracy: 0.975446, Loss: 0.124382
Training Epoch_20 Batch_14, Accuracy: 0.976042, Loss: 0.121095
Training Epoch_20 Batch_15, Accuracy: 0.977539, Loss: 0.118328
Training Epoch_20 Batch_16, Accuracy: 0.977941, Loss: 0.117769
Training Epoch_20 Batch_17, Accuracy: 0.978299, Loss: 0.116741
Training Epoch_20 Batch_18, Accuracy: 0.977796, Loss: 0.117937
Training Epoch_20 Batch_19, Accuracy: 0.976562, Loss: 0.118451
Training Epoch_20 Batch_20, Accuracy: 0.974702, Loss: 0.121373
Training Epoch_20 Batch_21, Accuracy: 0.975142, Loss: 0.118976
Training Epoch_20 Batch_22, Accuracy: 0.976223, Loss: 0.116589
Training Epoch_20 Batch_23, Accuracy: 0.976562, Loss: 0.114402
Training Epoch_20 Batch_24, Accuracy: 0.975000, Loss: 0.115162
Training Epoch_20 Batch_25, Accuracy: 0.974760, Loss: 0.114718
Training Epoch_20 Batch_26, Accuracy: 0.975116, Loss: 0.115149
Training Epoch_20 Batch_27, Accuracy: 0.973772, Loss: 0.118241
Training Epoch_20 Batch_28, Accuracy: 0.974138, Loss: 0.117966
Training Epoch_20 Batch_29, Accuracy: 0.975000, Loss: 0.117006
Training Epoch_20 Batch_30, Accuracy: 0.975302, Loss: 0.116795
Training Epoch_20 Batch_31, Accuracy: 0.975586, Loss: 0.116523
Training Epoch_20 Batch_32, Accuracy: 0.975379, Loss: 0.116869
Training Epoch_20 Batch_33, Accuracy: 0.975643, Loss: 0.117128
Training Epoch_20 Batch_34, Accuracy: 0.975446, Loss: 0.118139
Training Epoch_20 Batch_35, Accuracy: 0.975260, Loss: 0.117795
Training Epoch_20 Batch_36, Accuracy: 0.974662, Loss: 0.118050
Training Epoch_20 Batch_37, Accuracy: 0.974918, Loss: 0.116895
Training Epoch_20 Batch_38, Accuracy: 0.974760, Loss: 0.116518
Training Epoch_20 Batch_39, Accuracy: 0.975000, Loss: 0.115650
Training Epoch_20 Batch_40, Accuracy: 0.974848, Loss: 0.115287
Training Epoch_20 Batch_41, Accuracy: 0.974702, Loss: 0.115672
Training Epoch_20 Batch_42, Accuracy: 0.973110, Loss: 0.117357
Training Epoch_20 Batch_43, Accuracy: 0.973366, Loss: 0.116947
Training Epoch_20 Batch_44, Accuracy: 0.973517, Loss: 0.117189
********************* Summary for epoch: 20 *********************
2019-07-15 09:06:41: Step 20: Training accuracy = 97.4%
2019-07-15 09:06:41: Step 20: Training cross entropy = 0.117189
2019-07-15 09:06:41: Step 20: Validation accuracy = 90.9%
2019-07-15 09:06:41: Step 20: Validation cross entropy = 0.294643


Training Epoch_21 Batch_0, Accuracy: 0.984375, Loss: 0.110458
Training Epoch_21 Batch_1, Accuracy: 0.976562, Loss: 0.131621
Training Epoch_21 Batch_2, Accuracy: 0.958333, Loss: 0.173354
Training Epoch_21 Batch_3, Accuracy: 0.949219, Loss: 0.159698
Training Epoch_21 Batch_4, Accuracy: 0.953125, Loss: 0.153367
Training Epoch_21 Batch_5, Accuracy: 0.955729, Loss: 0.147718
Training Epoch_21 Batch_6, Accuracy: 0.953125, Loss: 0.147352
Training Epoch_21 Batch_7, Accuracy: 0.953125, Loss: 0.143559
Training Epoch_21 Batch_8, Accuracy: 0.956597, Loss: 0.137734
Training Epoch_21 Batch_9, Accuracy: 0.959375, Loss: 0.136341
Training Epoch_21 Batch_10, Accuracy: 0.960227, Loss: 0.134317
Training Epoch_21 Batch_11, Accuracy: 0.963542, Loss: 0.130774
Training Epoch_21 Batch_12, Accuracy: 0.966346, Loss: 0.125531
Training Epoch_21 Batch_13, Accuracy: 0.966518, Loss: 0.124334
Training Epoch_21 Batch_14, Accuracy: 0.967708, Loss: 0.122937
Training Epoch_21 Batch_15, Accuracy: 0.968750, Loss: 0.121501
Training Epoch_21 Batch_16, Accuracy: 0.970588, Loss: 0.119214
Training Epoch_21 Batch_17, Accuracy: 0.972222, Loss: 0.117729
Training Epoch_21 Batch_18, Accuracy: 0.973684, Loss: 0.114910
Training Epoch_21 Batch_19, Accuracy: 0.974219, Loss: 0.112509
Training Epoch_21 Batch_20, Accuracy: 0.974702, Loss: 0.111567
Training Epoch_21 Batch_21, Accuracy: 0.975142, Loss: 0.111766
Training Epoch_21 Batch_22, Accuracy: 0.974185, Loss: 0.112561
Training Epoch_21 Batch_23, Accuracy: 0.974609, Loss: 0.111796
Training Epoch_21 Batch_24, Accuracy: 0.974375, Loss: 0.113986
Training Epoch_21 Batch_25, Accuracy: 0.974159, Loss: 0.113139
Training Epoch_21 Batch_26, Accuracy: 0.974537, Loss: 0.112168
Training Epoch_21 Batch_27, Accuracy: 0.973772, Loss: 0.112638
Training Epoch_21 Batch_28, Accuracy: 0.974677, Loss: 0.111739
Training Epoch_21 Batch_29, Accuracy: 0.975000, Loss: 0.110790
Training Epoch_21 Batch_30, Accuracy: 0.974294, Loss: 0.111676
Training Epoch_21 Batch_31, Accuracy: 0.974609, Loss: 0.110322
Training Epoch_21 Batch_32, Accuracy: 0.974905, Loss: 0.109266
Training Epoch_21 Batch_33, Accuracy: 0.975184, Loss: 0.109871
Training Epoch_21 Batch_34, Accuracy: 0.975446, Loss: 0.109530
Training Epoch_21 Batch_35, Accuracy: 0.975694, Loss: 0.108393
Training Epoch_21 Batch_36, Accuracy: 0.976351, Loss: 0.108094
Training Epoch_21 Batch_37, Accuracy: 0.975740, Loss: 0.109441
Training Epoch_21 Batch_38, Accuracy: 0.975962, Loss: 0.108991
Training Epoch_21 Batch_39, Accuracy: 0.975781, Loss: 0.108370
Training Epoch_21 Batch_40, Accuracy: 0.976372, Loss: 0.107470
Training Epoch_21 Batch_41, Accuracy: 0.976935, Loss: 0.106331
Training Epoch_21 Batch_42, Accuracy: 0.976744, Loss: 0.107791
Training Epoch_21 Batch_43, Accuracy: 0.976562, Loss: 0.107752
Training Epoch_21 Batch_44, Accuracy: 0.976695, Loss: 0.107333
********************* Summary for epoch: 21 *********************
2019-07-15 09:06:41: Step 21: Training accuracy = 97.7%
2019-07-15 09:06:41: Step 21: Training cross entropy = 0.107333
2019-07-15 09:06:41: Step 21: Validation accuracy = 90.9%
2019-07-15 09:06:41: Step 21: Validation cross entropy = 0.293601


Training Epoch_22 Batch_0, Accuracy: 0.921875, Loss: 0.166458
Training Epoch_22 Batch_1, Accuracy: 0.953125, Loss: 0.121733
Training Epoch_22 Batch_2, Accuracy: 0.942708, Loss: 0.139395
Training Epoch_22 Batch_3, Accuracy: 0.949219, Loss: 0.130641
Training Epoch_22 Batch_4, Accuracy: 0.956250, Loss: 0.125795
Training Epoch_22 Batch_5, Accuracy: 0.958333, Loss: 0.133377
Training Epoch_22 Batch_6, Accuracy: 0.955357, Loss: 0.133411
Training Epoch_22 Batch_7, Accuracy: 0.958984, Loss: 0.123612
Training Epoch_22 Batch_8, Accuracy: 0.960069, Loss: 0.118971
Training Epoch_22 Batch_9, Accuracy: 0.964063, Loss: 0.116373
Training Epoch_22 Batch_10, Accuracy: 0.965909, Loss: 0.116452
Training Epoch_22 Batch_11, Accuracy: 0.967448, Loss: 0.114959
Training Epoch_22 Batch_12, Accuracy: 0.969952, Loss: 0.111753
Training Epoch_22 Batch_13, Accuracy: 0.972098, Loss: 0.108702
Training Epoch_22 Batch_14, Accuracy: 0.971875, Loss: 0.107713
Training Epoch_22 Batch_15, Accuracy: 0.973633, Loss: 0.105043
Training Epoch_22 Batch_16, Accuracy: 0.972426, Loss: 0.107669
Training Epoch_22 Batch_17, Accuracy: 0.972222, Loss: 0.113006
Training Epoch_22 Batch_18, Accuracy: 0.972039, Loss: 0.110856
Training Epoch_22 Batch_19, Accuracy: 0.973437, Loss: 0.109555
Training Epoch_22 Batch_20, Accuracy: 0.973214, Loss: 0.108373
Training Epoch_22 Batch_21, Accuracy: 0.974432, Loss: 0.107316
Training Epoch_22 Batch_22, Accuracy: 0.974185, Loss: 0.107661
Training Epoch_22 Batch_23, Accuracy: 0.974609, Loss: 0.107784
Training Epoch_22 Batch_24, Accuracy: 0.975000, Loss: 0.108988
Training Epoch_22 Batch_25, Accuracy: 0.975962, Loss: 0.107853
Training Epoch_22 Batch_26, Accuracy: 0.976273, Loss: 0.107458
Training Epoch_22 Batch_27, Accuracy: 0.976004, Loss: 0.108412
Training Epoch_22 Batch_28, Accuracy: 0.976832, Loss: 0.106632
Training Epoch_22 Batch_29, Accuracy: 0.977083, Loss: 0.106408
Training Epoch_22 Batch_30, Accuracy: 0.977319, Loss: 0.106136
Training Epoch_22 Batch_31, Accuracy: 0.976562, Loss: 0.106562
Training Epoch_22 Batch_32, Accuracy: 0.976799, Loss: 0.105565
Training Epoch_22 Batch_33, Accuracy: 0.977022, Loss: 0.105128
Training Epoch_22 Batch_34, Accuracy: 0.977232, Loss: 0.104927
Training Epoch_22 Batch_35, Accuracy: 0.977431, Loss: 0.104891
Training Epoch_22 Batch_36, Accuracy: 0.977618, Loss: 0.103883
Training Epoch_22 Batch_37, Accuracy: 0.977385, Loss: 0.103924
Training Epoch_22 Batch_38, Accuracy: 0.977163, Loss: 0.103918
Training Epoch_22 Batch_39, Accuracy: 0.977734, Loss: 0.103176
Training Epoch_22 Batch_40, Accuracy: 0.977896, Loss: 0.104536
Training Epoch_22 Batch_41, Accuracy: 0.978051, Loss: 0.103622
Training Epoch_22 Batch_42, Accuracy: 0.978198, Loss: 0.103417
Training Epoch_22 Batch_43, Accuracy: 0.978693, Loss: 0.102958
Training Epoch_22 Batch_44, Accuracy: 0.978814, Loss: 0.104149
********************* Summary for epoch: 22 *********************
2019-07-15 09:06:41: Step 22: Training accuracy = 97.9%
2019-07-15 09:06:41: Step 22: Training cross entropy = 0.104149
2019-07-15 09:06:41: Step 22: Validation accuracy = 90.6%
2019-07-15 09:06:41: Step 22: Validation cross entropy = 0.296015


Training Epoch_23 Batch_0, Accuracy: 0.984375, Loss: 0.114337
Training Epoch_23 Batch_1, Accuracy: 0.984375, Loss: 0.102329
Training Epoch_23 Batch_2, Accuracy: 0.968750, Loss: 0.116497
Training Epoch_23 Batch_3, Accuracy: 0.972656, Loss: 0.100771
Training Epoch_23 Batch_4, Accuracy: 0.978125, Loss: 0.094894
Training Epoch_23 Batch_5, Accuracy: 0.971354, Loss: 0.096644
Training Epoch_23 Batch_6, Accuracy: 0.975446, Loss: 0.098260
Training Epoch_23 Batch_7, Accuracy: 0.976562, Loss: 0.097029
Training Epoch_23 Batch_8, Accuracy: 0.970486, Loss: 0.103822
Training Epoch_23 Batch_9, Accuracy: 0.971875, Loss: 0.099653
Training Epoch_23 Batch_10, Accuracy: 0.973011, Loss: 0.100441
Training Epoch_23 Batch_11, Accuracy: 0.973958, Loss: 0.097693
Training Epoch_23 Batch_12, Accuracy: 0.975962, Loss: 0.095859
Training Epoch_23 Batch_13, Accuracy: 0.973214, Loss: 0.101321
Training Epoch_23 Batch_14, Accuracy: 0.975000, Loss: 0.100698
Training Epoch_23 Batch_15, Accuracy: 0.975586, Loss: 0.101074
Training Epoch_23 Batch_16, Accuracy: 0.977022, Loss: 0.097814
Training Epoch_23 Batch_17, Accuracy: 0.978299, Loss: 0.096322
Training Epoch_23 Batch_18, Accuracy: 0.978618, Loss: 0.095854
Training Epoch_23 Batch_19, Accuracy: 0.979688, Loss: 0.095368
Training Epoch_23 Batch_20, Accuracy: 0.979911, Loss: 0.095982
Training Epoch_23 Batch_21, Accuracy: 0.980114, Loss: 0.096678
Training Epoch_23 Batch_22, Accuracy: 0.980978, Loss: 0.095752
Training Epoch_23 Batch_23, Accuracy: 0.979167, Loss: 0.098110
Training Epoch_23 Batch_24, Accuracy: 0.978125, Loss: 0.099250
Training Epoch_23 Batch_25, Accuracy: 0.978365, Loss: 0.097921
Training Epoch_23 Batch_26, Accuracy: 0.978009, Loss: 0.098601
Training Epoch_23 Batch_27, Accuracy: 0.978237, Loss: 0.097345
Training Epoch_23 Batch_28, Accuracy: 0.977910, Loss: 0.099494
Training Epoch_23 Batch_29, Accuracy: 0.978646, Loss: 0.098946
Training Epoch_23 Batch_30, Accuracy: 0.978831, Loss: 0.097882
Training Epoch_23 Batch_31, Accuracy: 0.979492, Loss: 0.097098
Training Epoch_23 Batch_32, Accuracy: 0.977746, Loss: 0.101716
Training Epoch_23 Batch_33, Accuracy: 0.977941, Loss: 0.101468
Training Epoch_23 Batch_34, Accuracy: 0.978571, Loss: 0.101099
Training Epoch_23 Batch_35, Accuracy: 0.979167, Loss: 0.100480
Training Epoch_23 Batch_36, Accuracy: 0.978885, Loss: 0.100296
Training Epoch_23 Batch_37, Accuracy: 0.979030, Loss: 0.099665
Training Epoch_23 Batch_38, Accuracy: 0.979567, Loss: 0.099380
Training Epoch_23 Batch_39, Accuracy: 0.980078, Loss: 0.098917
Training Epoch_23 Batch_40, Accuracy: 0.980564, Loss: 0.098204
Training Epoch_23 Batch_41, Accuracy: 0.980655, Loss: 0.098604
Training Epoch_23 Batch_42, Accuracy: 0.980015, Loss: 0.100967
Training Epoch_23 Batch_43, Accuracy: 0.980469, Loss: 0.100278
Training Epoch_23 Batch_44, Accuracy: 0.980579, Loss: 0.099203
********************* Summary for epoch: 23 *********************
2019-07-15 09:06:42: Step 23: Training accuracy = 98.1%
2019-07-15 09:06:42: Step 23: Training cross entropy = 0.099203
2019-07-15 09:06:42: Step 23: Validation accuracy = 91.2%
2019-07-15 09:06:42: Step 23: Validation cross entropy = 0.303983


Training Epoch_24 Batch_0, Accuracy: 1.000000, Loss: 0.045694
Training Epoch_24 Batch_1, Accuracy: 0.984375, Loss: 0.079725
Training Epoch_24 Batch_2, Accuracy: 0.979167, Loss: 0.076127
Training Epoch_24 Batch_3, Accuracy: 0.980469, Loss: 0.075484
Training Epoch_24 Batch_4, Accuracy: 0.981250, Loss: 0.082493
Training Epoch_24 Batch_5, Accuracy: 0.984375, Loss: 0.078489
Training Epoch_24 Batch_6, Accuracy: 0.986607, Loss: 0.074970
Training Epoch_24 Batch_7, Accuracy: 0.984375, Loss: 0.082443
Training Epoch_24 Batch_8, Accuracy: 0.980903, Loss: 0.089273
Training Epoch_24 Batch_9, Accuracy: 0.981250, Loss: 0.090496
Training Epoch_24 Batch_10, Accuracy: 0.977273, Loss: 0.094243
Training Epoch_24 Batch_11, Accuracy: 0.979167, Loss: 0.093460
Training Epoch_24 Batch_12, Accuracy: 0.980769, Loss: 0.090806
Training Epoch_24 Batch_13, Accuracy: 0.981027, Loss: 0.091759
Training Epoch_24 Batch_14, Accuracy: 0.982292, Loss: 0.089425
Training Epoch_24 Batch_15, Accuracy: 0.981445, Loss: 0.093083
Training Epoch_24 Batch_16, Accuracy: 0.981618, Loss: 0.090832
Training Epoch_24 Batch_17, Accuracy: 0.980903, Loss: 0.092571
Training Epoch_24 Batch_18, Accuracy: 0.981086, Loss: 0.092348
Training Epoch_24 Batch_19, Accuracy: 0.981250, Loss: 0.090851
Training Epoch_24 Batch_20, Accuracy: 0.981399, Loss: 0.088859
Training Epoch_24 Batch_21, Accuracy: 0.980824, Loss: 0.091236
Training Epoch_24 Batch_22, Accuracy: 0.981658, Loss: 0.090248
Training Epoch_24 Batch_23, Accuracy: 0.981771, Loss: 0.090452
Training Epoch_24 Batch_24, Accuracy: 0.981875, Loss: 0.090225
Training Epoch_24 Batch_25, Accuracy: 0.980769, Loss: 0.090040
Training Epoch_24 Batch_26, Accuracy: 0.980903, Loss: 0.090496
Training Epoch_24 Batch_27, Accuracy: 0.981027, Loss: 0.090624
Training Epoch_24 Batch_28, Accuracy: 0.980603, Loss: 0.092308
Training Epoch_24 Batch_29, Accuracy: 0.980208, Loss: 0.093044
Training Epoch_24 Batch_30, Accuracy: 0.979839, Loss: 0.092685
Training Epoch_24 Batch_31, Accuracy: 0.979980, Loss: 0.093499
Training Epoch_24 Batch_32, Accuracy: 0.980114, Loss: 0.092334
Training Epoch_24 Batch_33, Accuracy: 0.979320, Loss: 0.092327
Training Epoch_24 Batch_34, Accuracy: 0.978125, Loss: 0.093045
Training Epoch_24 Batch_35, Accuracy: 0.977431, Loss: 0.094070
Training Epoch_24 Batch_36, Accuracy: 0.978041, Loss: 0.093838
Training Epoch_24 Batch_37, Accuracy: 0.978207, Loss: 0.093854
Training Epoch_24 Batch_38, Accuracy: 0.978365, Loss: 0.094096
Training Epoch_24 Batch_39, Accuracy: 0.978906, Loss: 0.094028
Training Epoch_24 Batch_40, Accuracy: 0.979040, Loss: 0.094085
Training Epoch_24 Batch_41, Accuracy: 0.978423, Loss: 0.094446
Training Epoch_24 Batch_42, Accuracy: 0.978561, Loss: 0.096468
Training Epoch_24 Batch_43, Accuracy: 0.977983, Loss: 0.096937
Training Epoch_24 Batch_44, Accuracy: 0.978107, Loss: 0.095626
********************* Summary for epoch: 24 *********************
2019-07-15 09:06:42: Step 24: Training accuracy = 97.8%
2019-07-15 09:06:42: Step 24: Training cross entropy = 0.095626
2019-07-15 09:06:42: Step 24: Validation accuracy = 90.6%
2019-07-15 09:06:42: Step 24: Validation cross entropy = 0.319605


Training Epoch_25 Batch_0, Accuracy: 0.984375, Loss: 0.094298
Training Epoch_25 Batch_1, Accuracy: 0.984375, Loss: 0.108426
Training Epoch_25 Batch_2, Accuracy: 0.989583, Loss: 0.089599
Training Epoch_25 Batch_3, Accuracy: 0.984375, Loss: 0.091933
Training Epoch_25 Batch_4, Accuracy: 0.984375, Loss: 0.088868
Training Epoch_25 Batch_5, Accuracy: 0.984375, Loss: 0.089187
Training Epoch_25 Batch_6, Accuracy: 0.986607, Loss: 0.084327
Training Epoch_25 Batch_7, Accuracy: 0.984375, Loss: 0.088403
Training Epoch_25 Batch_8, Accuracy: 0.984375, Loss: 0.085688
Training Epoch_25 Batch_9, Accuracy: 0.985937, Loss: 0.086277
Training Epoch_25 Batch_10, Accuracy: 0.985795, Loss: 0.089176
Training Epoch_25 Batch_11, Accuracy: 0.981771, Loss: 0.094181
Training Epoch_25 Batch_12, Accuracy: 0.980769, Loss: 0.094157
Training Epoch_25 Batch_13, Accuracy: 0.981027, Loss: 0.093670
Training Epoch_25 Batch_14, Accuracy: 0.981250, Loss: 0.093606
Training Epoch_25 Batch_15, Accuracy: 0.981445, Loss: 0.093567
Training Epoch_25 Batch_16, Accuracy: 0.982537, Loss: 0.094025
Training Epoch_25 Batch_17, Accuracy: 0.981771, Loss: 0.094519
Training Epoch_25 Batch_18, Accuracy: 0.982730, Loss: 0.093246
Training Epoch_25 Batch_19, Accuracy: 0.982813, Loss: 0.092844
Training Epoch_25 Batch_20, Accuracy: 0.982143, Loss: 0.093753
Training Epoch_25 Batch_21, Accuracy: 0.982955, Loss: 0.091708
Training Epoch_25 Batch_22, Accuracy: 0.983016, Loss: 0.090671
Training Epoch_25 Batch_23, Accuracy: 0.983073, Loss: 0.091544
Training Epoch_25 Batch_24, Accuracy: 0.982500, Loss: 0.092996
Training Epoch_25 Batch_25, Accuracy: 0.982572, Loss: 0.092328
Training Epoch_25 Batch_26, Accuracy: 0.983218, Loss: 0.091261
Training Epoch_25 Batch_27, Accuracy: 0.983259, Loss: 0.090911
Training Epoch_25 Batch_28, Accuracy: 0.982759, Loss: 0.092662
Training Epoch_25 Batch_29, Accuracy: 0.983333, Loss: 0.091862
Training Epoch_25 Batch_30, Accuracy: 0.983871, Loss: 0.091052
Training Epoch_25 Batch_31, Accuracy: 0.983887, Loss: 0.089966
Training Epoch_25 Batch_32, Accuracy: 0.984375, Loss: 0.088547
Training Epoch_25 Batch_33, Accuracy: 0.983915, Loss: 0.088611
Training Epoch_25 Batch_34, Accuracy: 0.983929, Loss: 0.087800
Training Epoch_25 Batch_35, Accuracy: 0.984375, Loss: 0.087970
Training Epoch_25 Batch_36, Accuracy: 0.984375, Loss: 0.089299
Training Epoch_25 Batch_37, Accuracy: 0.983964, Loss: 0.089882
Training Epoch_25 Batch_38, Accuracy: 0.983574, Loss: 0.091301
Training Epoch_25 Batch_39, Accuracy: 0.983984, Loss: 0.089893
Training Epoch_25 Batch_40, Accuracy: 0.984375, Loss: 0.089193
Training Epoch_25 Batch_41, Accuracy: 0.984003, Loss: 0.090131
Training Epoch_25 Batch_42, Accuracy: 0.983648, Loss: 0.090745
Training Epoch_25 Batch_43, Accuracy: 0.983310, Loss: 0.090645
Training Epoch_25 Batch_44, Accuracy: 0.983051, Loss: 0.092350
********************* Summary for epoch: 25 *********************
2019-07-15 09:06:42: Step 25: Training accuracy = 98.3%
2019-07-15 09:06:42: Step 25: Training cross entropy = 0.092350
2019-07-15 09:06:42: Step 25: Validation accuracy = 90.6%
2019-07-15 09:06:42: Step 25: Validation cross entropy = 0.308258


Training Epoch_26 Batch_0, Accuracy: 0.968750, Loss: 0.107592
Training Epoch_26 Batch_1, Accuracy: 0.976562, Loss: 0.088650
Training Epoch_26 Batch_2, Accuracy: 0.984375, Loss: 0.084874
Training Epoch_26 Batch_3, Accuracy: 0.984375, Loss: 0.088146
Training Epoch_26 Batch_4, Accuracy: 0.984375, Loss: 0.084663
Training Epoch_26 Batch_5, Accuracy: 0.984375, Loss: 0.082778
Training Epoch_26 Batch_6, Accuracy: 0.984375, Loss: 0.084132
Training Epoch_26 Batch_7, Accuracy: 0.984375, Loss: 0.085810
Training Epoch_26 Batch_8, Accuracy: 0.984375, Loss: 0.086639
Training Epoch_26 Batch_9, Accuracy: 0.981250, Loss: 0.091645
Training Epoch_26 Batch_10, Accuracy: 0.982955, Loss: 0.087930
Training Epoch_26 Batch_11, Accuracy: 0.983073, Loss: 0.087738
Training Epoch_26 Batch_12, Accuracy: 0.981971, Loss: 0.091457
Training Epoch_26 Batch_13, Accuracy: 0.983259, Loss: 0.091711
Training Epoch_26 Batch_14, Accuracy: 0.984375, Loss: 0.089520
Training Epoch_26 Batch_15, Accuracy: 0.985352, Loss: 0.088094
Training Epoch_26 Batch_16, Accuracy: 0.986213, Loss: 0.087341
Training Epoch_26 Batch_17, Accuracy: 0.986979, Loss: 0.087536
Training Epoch_26 Batch_18, Accuracy: 0.987664, Loss: 0.087144
Training Epoch_26 Batch_19, Accuracy: 0.987500, Loss: 0.087615
Training Epoch_26 Batch_20, Accuracy: 0.988095, Loss: 0.087417
Training Epoch_26 Batch_21, Accuracy: 0.987926, Loss: 0.086451
Training Epoch_26 Batch_22, Accuracy: 0.987772, Loss: 0.087451
Training Epoch_26 Batch_23, Accuracy: 0.988281, Loss: 0.085882
Training Epoch_26 Batch_24, Accuracy: 0.987500, Loss: 0.088882
Training Epoch_26 Batch_25, Accuracy: 0.987380, Loss: 0.089907
Training Epoch_26 Batch_26, Accuracy: 0.987847, Loss: 0.088745
Training Epoch_26 Batch_27, Accuracy: 0.987165, Loss: 0.090691
Training Epoch_26 Batch_28, Accuracy: 0.987608, Loss: 0.089826
Training Epoch_26 Batch_29, Accuracy: 0.988021, Loss: 0.089038
Training Epoch_26 Batch_30, Accuracy: 0.987399, Loss: 0.088958
Training Epoch_26 Batch_31, Accuracy: 0.987305, Loss: 0.088802
Training Epoch_26 Batch_32, Accuracy: 0.987689, Loss: 0.087876
Training Epoch_26 Batch_33, Accuracy: 0.988051, Loss: 0.087000
Training Epoch_26 Batch_34, Accuracy: 0.987500, Loss: 0.087953
Training Epoch_26 Batch_35, Accuracy: 0.987847, Loss: 0.088048
Training Epoch_26 Batch_36, Accuracy: 0.987753, Loss: 0.088877
Training Epoch_26 Batch_37, Accuracy: 0.987253, Loss: 0.088776
Training Epoch_26 Batch_38, Accuracy: 0.987179, Loss: 0.088602
Training Epoch_26 Batch_39, Accuracy: 0.987500, Loss: 0.087693
Training Epoch_26 Batch_40, Accuracy: 0.987805, Loss: 0.087352
Training Epoch_26 Batch_41, Accuracy: 0.987723, Loss: 0.087553
Training Epoch_26 Batch_42, Accuracy: 0.988009, Loss: 0.087030
Training Epoch_26 Batch_43, Accuracy: 0.987926, Loss: 0.087414
Training Epoch_26 Batch_44, Accuracy: 0.987641, Loss: 0.088824
********************* Summary for epoch: 26 *********************
2019-07-15 09:06:43: Step 26: Training accuracy = 98.8%
2019-07-15 09:06:43: Step 26: Training cross entropy = 0.088824
2019-07-15 09:06:43: Step 26: Validation accuracy = 90.9%
2019-07-15 09:06:43: Step 26: Validation cross entropy = 0.299158


Training Epoch_27 Batch_0, Accuracy: 0.968750, Loss: 0.135192
Training Epoch_27 Batch_1, Accuracy: 0.976562, Loss: 0.123551
Training Epoch_27 Batch_2, Accuracy: 0.973958, Loss: 0.111881
Training Epoch_27 Batch_3, Accuracy: 0.976562, Loss: 0.109741
Training Epoch_27 Batch_4, Accuracy: 0.981250, Loss: 0.096825
Training Epoch_27 Batch_5, Accuracy: 0.981771, Loss: 0.097974
Training Epoch_27 Batch_6, Accuracy: 0.984375, Loss: 0.093442
Training Epoch_27 Batch_7, Accuracy: 0.986328, Loss: 0.091948
Training Epoch_27 Batch_8, Accuracy: 0.984375, Loss: 0.094663
Training Epoch_27 Batch_9, Accuracy: 0.985937, Loss: 0.091069
Training Epoch_27 Batch_10, Accuracy: 0.987216, Loss: 0.088407
Training Epoch_27 Batch_11, Accuracy: 0.988281, Loss: 0.087459
Training Epoch_27 Batch_12, Accuracy: 0.989183, Loss: 0.084626
Training Epoch_27 Batch_13, Accuracy: 0.988839, Loss: 0.085817
Training Epoch_27 Batch_14, Accuracy: 0.989583, Loss: 0.083844
Training Epoch_27 Batch_15, Accuracy: 0.990234, Loss: 0.083851
Training Epoch_27 Batch_16, Accuracy: 0.990809, Loss: 0.082939
Training Epoch_27 Batch_17, Accuracy: 0.991319, Loss: 0.083162
Training Epoch_27 Batch_18, Accuracy: 0.990954, Loss: 0.082484
Training Epoch_27 Batch_19, Accuracy: 0.990625, Loss: 0.081392
Training Epoch_27 Batch_20, Accuracy: 0.991071, Loss: 0.082054
Training Epoch_27 Batch_21, Accuracy: 0.989347, Loss: 0.084774
Training Epoch_27 Batch_22, Accuracy: 0.988451, Loss: 0.084112
Training Epoch_27 Batch_23, Accuracy: 0.988932, Loss: 0.084690
Training Epoch_27 Batch_24, Accuracy: 0.988750, Loss: 0.084831
Training Epoch_27 Batch_25, Accuracy: 0.989183, Loss: 0.083220
Training Epoch_27 Batch_26, Accuracy: 0.989583, Loss: 0.082135
Training Epoch_27 Batch_27, Accuracy: 0.989955, Loss: 0.081937
Training Epoch_27 Batch_28, Accuracy: 0.989763, Loss: 0.081243
Training Epoch_27 Batch_29, Accuracy: 0.989583, Loss: 0.080823
Training Epoch_27 Batch_30, Accuracy: 0.989919, Loss: 0.080252
Training Epoch_27 Batch_31, Accuracy: 0.989746, Loss: 0.079385
Training Epoch_27 Batch_32, Accuracy: 0.990057, Loss: 0.079562
Training Epoch_27 Batch_33, Accuracy: 0.989430, Loss: 0.080384
Training Epoch_27 Batch_34, Accuracy: 0.989732, Loss: 0.080017
Training Epoch_27 Batch_35, Accuracy: 0.989583, Loss: 0.080057
Training Epoch_27 Batch_36, Accuracy: 0.989020, Loss: 0.081471
Training Epoch_27 Batch_37, Accuracy: 0.988487, Loss: 0.083327
Training Epoch_27 Batch_38, Accuracy: 0.988381, Loss: 0.082801
Training Epoch_27 Batch_39, Accuracy: 0.988281, Loss: 0.082811
Training Epoch_27 Batch_40, Accuracy: 0.988186, Loss: 0.082538
Training Epoch_27 Batch_41, Accuracy: 0.988095, Loss: 0.082700
Training Epoch_27 Batch_42, Accuracy: 0.988372, Loss: 0.082647
Training Epoch_27 Batch_43, Accuracy: 0.988281, Loss: 0.084223
Training Epoch_27 Batch_44, Accuracy: 0.988347, Loss: 0.084933
********************* Summary for epoch: 27 *********************
2019-07-15 09:06:43: Step 27: Training accuracy = 98.8%
2019-07-15 09:06:43: Step 27: Training cross entropy = 0.084933
2019-07-15 09:06:43: Step 27: Validation accuracy = 91.7%
2019-07-15 09:06:43: Step 27: Validation cross entropy = 0.294613


Saving intermediate result.
Training Epoch_28 Batch_0, Accuracy: 1.000000, Loss: 0.090878
Training Epoch_28 Batch_1, Accuracy: 1.000000, Loss: 0.070669
Training Epoch_28 Batch_2, Accuracy: 1.000000, Loss: 0.066178
Training Epoch_28 Batch_3, Accuracy: 0.996094, Loss: 0.073883
Training Epoch_28 Batch_4, Accuracy: 0.993750, Loss: 0.071189
Training Epoch_28 Batch_5, Accuracy: 0.994792, Loss: 0.068525
Training Epoch_28 Batch_6, Accuracy: 0.995536, Loss: 0.069277
Training Epoch_28 Batch_7, Accuracy: 0.996094, Loss: 0.068824
Training Epoch_28 Batch_8, Accuracy: 0.994792, Loss: 0.067216
Training Epoch_28 Batch_9, Accuracy: 0.995313, Loss: 0.065284
Training Epoch_28 Batch_10, Accuracy: 0.995739, Loss: 0.064069
Training Epoch_28 Batch_11, Accuracy: 0.996094, Loss: 0.066422
Training Epoch_28 Batch_12, Accuracy: 0.995192, Loss: 0.067674
Training Epoch_28 Batch_13, Accuracy: 0.992188, Loss: 0.073613
Training Epoch_28 Batch_14, Accuracy: 0.992708, Loss: 0.073787
Training Epoch_28 Batch_15, Accuracy: 0.991211, Loss: 0.075496
Training Epoch_28 Batch_16, Accuracy: 0.990809, Loss: 0.075441
Training Epoch_28 Batch_17, Accuracy: 0.991319, Loss: 0.074365
Training Epoch_28 Batch_18, Accuracy: 0.991776, Loss: 0.073863
Training Epoch_28 Batch_19, Accuracy: 0.990625, Loss: 0.076298
Training Epoch_28 Batch_20, Accuracy: 0.990327, Loss: 0.076533
Training Epoch_28 Batch_21, Accuracy: 0.989347, Loss: 0.078319
Training Epoch_28 Batch_22, Accuracy: 0.989810, Loss: 0.077475
Training Epoch_28 Batch_23, Accuracy: 0.988932, Loss: 0.078396
Training Epoch_28 Batch_24, Accuracy: 0.989375, Loss: 0.077516
Training Epoch_28 Batch_25, Accuracy: 0.989183, Loss: 0.080418
Training Epoch_28 Batch_26, Accuracy: 0.989005, Loss: 0.080379
Training Epoch_28 Batch_27, Accuracy: 0.989397, Loss: 0.080536
Training Epoch_28 Batch_28, Accuracy: 0.989224, Loss: 0.080100
Training Epoch_28 Batch_29, Accuracy: 0.989062, Loss: 0.080905
Training Epoch_28 Batch_30, Accuracy: 0.989415, Loss: 0.080162
Training Epoch_28 Batch_31, Accuracy: 0.989746, Loss: 0.079959
Training Epoch_28 Batch_32, Accuracy: 0.989110, Loss: 0.079009
Training Epoch_28 Batch_33, Accuracy: 0.988511, Loss: 0.079314
Training Epoch_28 Batch_34, Accuracy: 0.988839, Loss: 0.079093
Training Epoch_28 Batch_35, Accuracy: 0.988281, Loss: 0.079375
Training Epoch_28 Batch_36, Accuracy: 0.987753, Loss: 0.079484
Training Epoch_28 Batch_37, Accuracy: 0.987664, Loss: 0.079903
Training Epoch_28 Batch_38, Accuracy: 0.987981, Loss: 0.079946
Training Epoch_28 Batch_39, Accuracy: 0.987891, Loss: 0.080989
Training Epoch_28 Batch_40, Accuracy: 0.988186, Loss: 0.081108
Training Epoch_28 Batch_41, Accuracy: 0.988095, Loss: 0.081308
Training Epoch_28 Batch_42, Accuracy: 0.987645, Loss: 0.082446
Training Epoch_28 Batch_43, Accuracy: 0.987926, Loss: 0.081818
Training Epoch_28 Batch_44, Accuracy: 0.987994, Loss: 0.081409
********************* Summary for epoch: 28 *********************
2019-07-15 09:06:45: Step 28: Training accuracy = 98.8%
2019-07-15 09:06:45: Step 28: Training cross entropy = 0.081409
2019-07-15 09:06:45: Step 28: Validation accuracy = 91.2%
2019-07-15 09:06:45: Step 28: Validation cross entropy = 0.296750


Training Epoch_29 Batch_0, Accuracy: 1.000000, Loss: 0.053512
Training Epoch_29 Batch_1, Accuracy: 1.000000, Loss: 0.063451
Training Epoch_29 Batch_2, Accuracy: 1.000000, Loss: 0.065303
Training Epoch_29 Batch_3, Accuracy: 1.000000, Loss: 0.068061
Training Epoch_29 Batch_4, Accuracy: 1.000000, Loss: 0.066575
Training Epoch_29 Batch_5, Accuracy: 0.997396, Loss: 0.066818
Training Epoch_29 Batch_6, Accuracy: 0.993304, Loss: 0.069728
Training Epoch_29 Batch_7, Accuracy: 0.992188, Loss: 0.073571
Training Epoch_29 Batch_8, Accuracy: 0.993056, Loss: 0.071739
Training Epoch_29 Batch_9, Accuracy: 0.993750, Loss: 0.070035
Training Epoch_29 Batch_10, Accuracy: 0.994318, Loss: 0.068725
Training Epoch_29 Batch_11, Accuracy: 0.993490, Loss: 0.071371
Training Epoch_29 Batch_12, Accuracy: 0.993990, Loss: 0.070469
Training Epoch_29 Batch_13, Accuracy: 0.993304, Loss: 0.070800
Training Epoch_29 Batch_14, Accuracy: 0.991667, Loss: 0.071956
Training Epoch_29 Batch_15, Accuracy: 0.991211, Loss: 0.070500
Training Epoch_29 Batch_16, Accuracy: 0.991728, Loss: 0.069884
Training Epoch_29 Batch_17, Accuracy: 0.990451, Loss: 0.070784
Training Epoch_29 Batch_18, Accuracy: 0.990132, Loss: 0.070029
Training Epoch_29 Batch_19, Accuracy: 0.989844, Loss: 0.069821
Training Epoch_29 Batch_20, Accuracy: 0.990327, Loss: 0.069477
Training Epoch_29 Batch_21, Accuracy: 0.989347, Loss: 0.071854
Training Epoch_29 Batch_22, Accuracy: 0.989810, Loss: 0.070619
Training Epoch_29 Batch_23, Accuracy: 0.990234, Loss: 0.070737
Training Epoch_29 Batch_24, Accuracy: 0.989375, Loss: 0.074036
Training Epoch_29 Batch_25, Accuracy: 0.989784, Loss: 0.073542
Training Epoch_29 Batch_26, Accuracy: 0.989583, Loss: 0.073038
Training Epoch_29 Batch_27, Accuracy: 0.989955, Loss: 0.072789
Training Epoch_29 Batch_28, Accuracy: 0.989763, Loss: 0.072554
Training Epoch_29 Batch_29, Accuracy: 0.989583, Loss: 0.072577
Training Epoch_29 Batch_30, Accuracy: 0.989919, Loss: 0.072237
Training Epoch_29 Batch_31, Accuracy: 0.989258, Loss: 0.073429
Training Epoch_29 Batch_32, Accuracy: 0.989583, Loss: 0.072707
Training Epoch_29 Batch_33, Accuracy: 0.989430, Loss: 0.072977
Training Epoch_29 Batch_34, Accuracy: 0.989286, Loss: 0.073908
Training Epoch_29 Batch_35, Accuracy: 0.989149, Loss: 0.074533
Training Epoch_29 Batch_36, Accuracy: 0.989443, Loss: 0.074182
Training Epoch_29 Batch_37, Accuracy: 0.988487, Loss: 0.075704
Training Epoch_29 Batch_38, Accuracy: 0.988381, Loss: 0.075819
Training Epoch_29 Batch_39, Accuracy: 0.987891, Loss: 0.076007
Training Epoch_29 Batch_40, Accuracy: 0.987805, Loss: 0.076303
Training Epoch_29 Batch_41, Accuracy: 0.988095, Loss: 0.076173
Training Epoch_29 Batch_42, Accuracy: 0.988009, Loss: 0.076455
Training Epoch_29 Batch_43, Accuracy: 0.988281, Loss: 0.076225
Training Epoch_29 Batch_44, Accuracy: 0.988347, Loss: 0.075715
********************* Summary for epoch: 29 *********************
2019-07-15 09:06:46: Step 29: Training accuracy = 98.8%
2019-07-15 09:06:46: Step 29: Training cross entropy = 0.075715
2019-07-15 09:06:46: Step 29: Validation accuracy = 91.2%
2019-07-15 09:06:46: Step 29: Validation cross entropy = 0.302572


Training Epoch_30 Batch_0, Accuracy: 0.984375, Loss: 0.096098
Training Epoch_30 Batch_1, Accuracy: 0.984375, Loss: 0.069876
Training Epoch_30 Batch_2, Accuracy: 0.984375, Loss: 0.079016
Training Epoch_30 Batch_3, Accuracy: 0.988281, Loss: 0.074249
Training Epoch_30 Batch_4, Accuracy: 0.990625, Loss: 0.070932
Training Epoch_30 Batch_5, Accuracy: 0.992188, Loss: 0.067350
Training Epoch_30 Batch_6, Accuracy: 0.993304, Loss: 0.069983
Training Epoch_30 Batch_7, Accuracy: 0.992188, Loss: 0.067371
Training Epoch_30 Batch_8, Accuracy: 0.989583, Loss: 0.069260
Training Epoch_30 Batch_9, Accuracy: 0.990625, Loss: 0.069572
Training Epoch_30 Batch_10, Accuracy: 0.991477, Loss: 0.069056
Training Epoch_30 Batch_11, Accuracy: 0.992188, Loss: 0.068111
Training Epoch_30 Batch_12, Accuracy: 0.991587, Loss: 0.070340
Training Epoch_30 Batch_13, Accuracy: 0.992188, Loss: 0.069042
Training Epoch_30 Batch_14, Accuracy: 0.992708, Loss: 0.068532
Training Epoch_30 Batch_15, Accuracy: 0.993164, Loss: 0.068812
Training Epoch_30 Batch_16, Accuracy: 0.993566, Loss: 0.069390
Training Epoch_30 Batch_17, Accuracy: 0.991319, Loss: 0.070788
Training Epoch_30 Batch_18, Accuracy: 0.991776, Loss: 0.070675
Training Epoch_30 Batch_19, Accuracy: 0.991406, Loss: 0.071399
Training Epoch_30 Batch_20, Accuracy: 0.991071, Loss: 0.071105
Training Epoch_30 Batch_21, Accuracy: 0.991477, Loss: 0.070976
Training Epoch_30 Batch_22, Accuracy: 0.991848, Loss: 0.070199
Training Epoch_30 Batch_23, Accuracy: 0.991536, Loss: 0.074578
Training Epoch_30 Batch_24, Accuracy: 0.991250, Loss: 0.073303
Training Epoch_30 Batch_25, Accuracy: 0.991587, Loss: 0.072342
Training Epoch_30 Batch_26, Accuracy: 0.991319, Loss: 0.072844
Training Epoch_30 Batch_27, Accuracy: 0.991629, Loss: 0.072694
Training Epoch_30 Batch_28, Accuracy: 0.991918, Loss: 0.072104
Training Epoch_30 Batch_29, Accuracy: 0.991146, Loss: 0.072257
Training Epoch_30 Batch_30, Accuracy: 0.990927, Loss: 0.072430
Training Epoch_30 Batch_31, Accuracy: 0.989746, Loss: 0.073376
Training Epoch_30 Batch_32, Accuracy: 0.990057, Loss: 0.073097
Training Epoch_30 Batch_33, Accuracy: 0.989890, Loss: 0.072705
Training Epoch_30 Batch_34, Accuracy: 0.990179, Loss: 0.072226
Training Epoch_30 Batch_35, Accuracy: 0.990017, Loss: 0.072824
Training Epoch_30 Batch_36, Accuracy: 0.990287, Loss: 0.072250
Training Epoch_30 Batch_37, Accuracy: 0.990543, Loss: 0.072014
Training Epoch_30 Batch_38, Accuracy: 0.990785, Loss: 0.071223
Training Epoch_30 Batch_39, Accuracy: 0.990625, Loss: 0.070778
Training Epoch_30 Batch_40, Accuracy: 0.990091, Loss: 0.071712
Training Epoch_30 Batch_41, Accuracy: 0.989583, Loss: 0.073573
Training Epoch_30 Batch_42, Accuracy: 0.989826, Loss: 0.073184
Training Epoch_30 Batch_43, Accuracy: 0.989702, Loss: 0.072543
Training Epoch_30 Batch_44, Accuracy: 0.989760, Loss: 0.072462
********************* Summary for epoch: 30 *********************
2019-07-15 09:06:46: Step 30: Training accuracy = 99.0%
2019-07-15 09:06:46: Step 30: Training cross entropy = 0.072462
2019-07-15 09:06:46: Step 30: Validation accuracy = 91.7%
2019-07-15 09:06:46: Step 30: Validation cross entropy = 0.285933


Training Epoch_31 Batch_0, Accuracy: 0.984375, Loss: 0.075138
Training Epoch_31 Batch_1, Accuracy: 0.992188, Loss: 0.062828
Training Epoch_31 Batch_2, Accuracy: 0.994792, Loss: 0.055388
Training Epoch_31 Batch_3, Accuracy: 0.996094, Loss: 0.053737
Training Epoch_31 Batch_4, Accuracy: 0.993750, Loss: 0.060783
Training Epoch_31 Batch_5, Accuracy: 0.992188, Loss: 0.058767
Training Epoch_31 Batch_6, Accuracy: 0.993304, Loss: 0.059756
Training Epoch_31 Batch_7, Accuracy: 0.994141, Loss: 0.060875
Training Epoch_31 Batch_8, Accuracy: 0.994792, Loss: 0.058525
Training Epoch_31 Batch_9, Accuracy: 0.995313, Loss: 0.057453
Training Epoch_31 Batch_10, Accuracy: 0.995739, Loss: 0.058465
Training Epoch_31 Batch_11, Accuracy: 0.994792, Loss: 0.059771
Training Epoch_31 Batch_12, Accuracy: 0.993990, Loss: 0.062480
Training Epoch_31 Batch_13, Accuracy: 0.994420, Loss: 0.062366
Training Epoch_31 Batch_14, Accuracy: 0.994792, Loss: 0.061906
Training Epoch_31 Batch_15, Accuracy: 0.993164, Loss: 0.070527
Training Epoch_31 Batch_16, Accuracy: 0.993566, Loss: 0.069466
Training Epoch_31 Batch_17, Accuracy: 0.993056, Loss: 0.069970
Training Epoch_31 Batch_18, Accuracy: 0.991776, Loss: 0.070410
Training Epoch_31 Batch_19, Accuracy: 0.991406, Loss: 0.072244
Training Epoch_31 Batch_20, Accuracy: 0.991071, Loss: 0.072070
Training Epoch_31 Batch_21, Accuracy: 0.990767, Loss: 0.071966
Training Epoch_31 Batch_22, Accuracy: 0.991168, Loss: 0.071221
Training Epoch_31 Batch_23, Accuracy: 0.991536, Loss: 0.071378
Training Epoch_31 Batch_24, Accuracy: 0.991250, Loss: 0.071153
Training Epoch_31 Batch_25, Accuracy: 0.990986, Loss: 0.070841
Training Epoch_31 Batch_26, Accuracy: 0.991319, Loss: 0.070672
Training Epoch_31 Batch_27, Accuracy: 0.991629, Loss: 0.069668
Training Epoch_31 Batch_28, Accuracy: 0.990840, Loss: 0.069645
Training Epoch_31 Batch_29, Accuracy: 0.990104, Loss: 0.070586
Training Epoch_31 Batch_30, Accuracy: 0.989919, Loss: 0.069856
Training Epoch_31 Batch_31, Accuracy: 0.990234, Loss: 0.069641
Training Epoch_31 Batch_32, Accuracy: 0.990057, Loss: 0.069838
Training Epoch_31 Batch_33, Accuracy: 0.989430, Loss: 0.071614
Training Epoch_31 Batch_34, Accuracy: 0.989286, Loss: 0.071847
Training Epoch_31 Batch_35, Accuracy: 0.989583, Loss: 0.071733
Training Epoch_31 Batch_36, Accuracy: 0.989443, Loss: 0.072047
Training Epoch_31 Batch_37, Accuracy: 0.989720, Loss: 0.071405
Training Epoch_31 Batch_38, Accuracy: 0.989583, Loss: 0.073024
Training Epoch_31 Batch_39, Accuracy: 0.989844, Loss: 0.073425
Training Epoch_31 Batch_40, Accuracy: 0.990091, Loss: 0.072772
Training Epoch_31 Batch_41, Accuracy: 0.990327, Loss: 0.072494
Training Epoch_31 Batch_42, Accuracy: 0.990189, Loss: 0.073043
Training Epoch_31 Batch_43, Accuracy: 0.990057, Loss: 0.073470
Training Epoch_31 Batch_44, Accuracy: 0.990113, Loss: 0.072853
********************* Summary for epoch: 31 *********************
2019-07-15 09:06:46: Step 31: Training accuracy = 99.0%
2019-07-15 09:06:46: Step 31: Training cross entropy = 0.072853
2019-07-15 09:06:46: Step 31: Validation accuracy = 91.4%
2019-07-15 09:06:46: Step 31: Validation cross entropy = 0.291798


Training Epoch_32 Batch_0, Accuracy: 1.000000, Loss: 0.056957
Training Epoch_32 Batch_1, Accuracy: 1.000000, Loss: 0.059223
Training Epoch_32 Batch_2, Accuracy: 1.000000, Loss: 0.061711
Training Epoch_32 Batch_3, Accuracy: 0.988281, Loss: 0.079640
Training Epoch_32 Batch_4, Accuracy: 0.990625, Loss: 0.072816
Training Epoch_32 Batch_5, Accuracy: 0.992188, Loss: 0.067494
Training Epoch_32 Batch_6, Accuracy: 0.993304, Loss: 0.065176
Training Epoch_32 Batch_7, Accuracy: 0.994141, Loss: 0.064520
Training Epoch_32 Batch_8, Accuracy: 0.994792, Loss: 0.063583
Training Epoch_32 Batch_9, Accuracy: 0.995313, Loss: 0.063494
Training Epoch_32 Batch_10, Accuracy: 0.994318, Loss: 0.062915
Training Epoch_32 Batch_11, Accuracy: 0.993490, Loss: 0.070225
Training Epoch_32 Batch_12, Accuracy: 0.991587, Loss: 0.068727
Training Epoch_32 Batch_13, Accuracy: 0.992188, Loss: 0.067901
Training Epoch_32 Batch_14, Accuracy: 0.992708, Loss: 0.067998
Training Epoch_32 Batch_15, Accuracy: 0.993164, Loss: 0.067926
Training Epoch_32 Batch_16, Accuracy: 0.993566, Loss: 0.068300
Training Epoch_32 Batch_17, Accuracy: 0.993056, Loss: 0.068446
Training Epoch_32 Batch_18, Accuracy: 0.992599, Loss: 0.069073
Training Epoch_32 Batch_19, Accuracy: 0.992188, Loss: 0.068606
Training Epoch_32 Batch_20, Accuracy: 0.992560, Loss: 0.067410
Training Epoch_32 Batch_21, Accuracy: 0.992898, Loss: 0.066095
Training Epoch_32 Batch_22, Accuracy: 0.993207, Loss: 0.065819
Training Epoch_32 Batch_23, Accuracy: 0.993490, Loss: 0.066093
Training Epoch_32 Batch_24, Accuracy: 0.993750, Loss: 0.065209
Training Epoch_32 Batch_25, Accuracy: 0.992788, Loss: 0.065732
Training Epoch_32 Batch_26, Accuracy: 0.991898, Loss: 0.067061
Training Epoch_32 Batch_27, Accuracy: 0.991629, Loss: 0.066735
Training Epoch_32 Batch_28, Accuracy: 0.991918, Loss: 0.067276
Training Epoch_32 Batch_29, Accuracy: 0.992188, Loss: 0.067028
Training Epoch_32 Batch_30, Accuracy: 0.991431, Loss: 0.067145
Training Epoch_32 Batch_31, Accuracy: 0.991211, Loss: 0.068295
Training Epoch_32 Batch_32, Accuracy: 0.991004, Loss: 0.068748
Training Epoch_32 Batch_33, Accuracy: 0.990349, Loss: 0.068980
Training Epoch_32 Batch_34, Accuracy: 0.990625, Loss: 0.067872
Training Epoch_32 Batch_35, Accuracy: 0.990885, Loss: 0.067971
Training Epoch_32 Batch_36, Accuracy: 0.990709, Loss: 0.068345
Training Epoch_32 Batch_37, Accuracy: 0.990954, Loss: 0.068433
Training Epoch_32 Batch_38, Accuracy: 0.991186, Loss: 0.068902
Training Epoch_32 Batch_39, Accuracy: 0.991406, Loss: 0.068812
Training Epoch_32 Batch_40, Accuracy: 0.991235, Loss: 0.069804
Training Epoch_32 Batch_41, Accuracy: 0.991443, Loss: 0.069519
Training Epoch_32 Batch_42, Accuracy: 0.990916, Loss: 0.069525
Training Epoch_32 Batch_43, Accuracy: 0.991122, Loss: 0.069546
Training Epoch_32 Batch_44, Accuracy: 0.991172, Loss: 0.070339
********************* Summary for epoch: 32 *********************
2019-07-15 09:06:47: Step 32: Training accuracy = 99.1%
2019-07-15 09:06:47: Step 32: Training cross entropy = 0.070339
2019-07-15 09:06:47: Step 32: Validation accuracy = 90.3%
2019-07-15 09:06:47: Step 32: Validation cross entropy = 0.313330


Training Epoch_33 Batch_0, Accuracy: 1.000000, Loss: 0.062636
Training Epoch_33 Batch_1, Accuracy: 1.000000, Loss: 0.046387
Training Epoch_33 Batch_2, Accuracy: 0.994792, Loss: 0.063752
Training Epoch_33 Batch_3, Accuracy: 0.996094, Loss: 0.063284
Training Epoch_33 Batch_4, Accuracy: 0.996875, Loss: 0.061521
Training Epoch_33 Batch_5, Accuracy: 0.997396, Loss: 0.061060
Training Epoch_33 Batch_6, Accuracy: 0.997768, Loss: 0.060422
Training Epoch_33 Batch_7, Accuracy: 0.996094, Loss: 0.057772
Training Epoch_33 Batch_8, Accuracy: 0.993056, Loss: 0.065262
Training Epoch_33 Batch_9, Accuracy: 0.993750, Loss: 0.062960
Training Epoch_33 Batch_10, Accuracy: 0.988636, Loss: 0.073824
Training Epoch_33 Batch_11, Accuracy: 0.988281, Loss: 0.072892
Training Epoch_33 Batch_12, Accuracy: 0.989183, Loss: 0.071307
Training Epoch_33 Batch_13, Accuracy: 0.989955, Loss: 0.069506
Training Epoch_33 Batch_14, Accuracy: 0.989583, Loss: 0.070672
Training Epoch_33 Batch_15, Accuracy: 0.990234, Loss: 0.070079
Training Epoch_33 Batch_16, Accuracy: 0.990809, Loss: 0.068974
Training Epoch_33 Batch_17, Accuracy: 0.991319, Loss: 0.067877
Training Epoch_33 Batch_18, Accuracy: 0.990954, Loss: 0.067689
Training Epoch_33 Batch_19, Accuracy: 0.991406, Loss: 0.066585
Training Epoch_33 Batch_20, Accuracy: 0.991815, Loss: 0.065685
Training Epoch_33 Batch_21, Accuracy: 0.992188, Loss: 0.064977
Training Epoch_33 Batch_22, Accuracy: 0.991848, Loss: 0.064987
Training Epoch_33 Batch_23, Accuracy: 0.992188, Loss: 0.064237
Training Epoch_33 Batch_24, Accuracy: 0.992500, Loss: 0.063986
Training Epoch_33 Batch_25, Accuracy: 0.992788, Loss: 0.063313
Training Epoch_33 Batch_26, Accuracy: 0.993056, Loss: 0.064215
Training Epoch_33 Batch_27, Accuracy: 0.992746, Loss: 0.064369
Training Epoch_33 Batch_28, Accuracy: 0.992457, Loss: 0.064140
Training Epoch_33 Batch_29, Accuracy: 0.992188, Loss: 0.063658
Training Epoch_33 Batch_30, Accuracy: 0.992440, Loss: 0.063119
Training Epoch_33 Batch_31, Accuracy: 0.992188, Loss: 0.064620
Training Epoch_33 Batch_32, Accuracy: 0.992424, Loss: 0.064598
Training Epoch_33 Batch_33, Accuracy: 0.992647, Loss: 0.064720
Training Epoch_33 Batch_34, Accuracy: 0.992857, Loss: 0.064451
Training Epoch_33 Batch_35, Accuracy: 0.993056, Loss: 0.064697
Training Epoch_33 Batch_36, Accuracy: 0.992821, Loss: 0.064508
Training Epoch_33 Batch_37, Accuracy: 0.993010, Loss: 0.063919
Training Epoch_33 Batch_38, Accuracy: 0.992788, Loss: 0.064834
Training Epoch_33 Batch_39, Accuracy: 0.992578, Loss: 0.065245
Training Epoch_33 Batch_40, Accuracy: 0.992759, Loss: 0.064796
Training Epoch_33 Batch_41, Accuracy: 0.992932, Loss: 0.065040
Training Epoch_33 Batch_42, Accuracy: 0.992369, Loss: 0.065060
Training Epoch_33 Batch_43, Accuracy: 0.992543, Loss: 0.064656
Training Epoch_33 Batch_44, Accuracy: 0.992585, Loss: 0.066176
********************* Summary for epoch: 33 *********************
2019-07-15 09:06:47: Step 33: Training accuracy = 99.3%
2019-07-15 09:06:47: Step 33: Training cross entropy = 0.066176
2019-07-15 09:06:47: Step 33: Validation accuracy = 91.2%
2019-07-15 09:06:47: Step 33: Validation cross entropy = 0.297880


Training Epoch_34 Batch_0, Accuracy: 1.000000, Loss: 0.035336
Training Epoch_34 Batch_1, Accuracy: 1.000000, Loss: 0.046866
Training Epoch_34 Batch_2, Accuracy: 0.994792, Loss: 0.051939
Training Epoch_34 Batch_3, Accuracy: 0.996094, Loss: 0.051345
Training Epoch_34 Batch_4, Accuracy: 0.996875, Loss: 0.053034
Training Epoch_34 Batch_5, Accuracy: 0.994792, Loss: 0.055089
Training Epoch_34 Batch_6, Accuracy: 0.995536, Loss: 0.055083
Training Epoch_34 Batch_7, Accuracy: 0.996094, Loss: 0.054374
Training Epoch_34 Batch_8, Accuracy: 0.996528, Loss: 0.053800
Training Epoch_34 Batch_9, Accuracy: 0.996875, Loss: 0.053510
Training Epoch_34 Batch_10, Accuracy: 0.997159, Loss: 0.053247
Training Epoch_34 Batch_11, Accuracy: 0.996094, Loss: 0.055052
Training Epoch_34 Batch_12, Accuracy: 0.993990, Loss: 0.057939
Training Epoch_34 Batch_13, Accuracy: 0.993304, Loss: 0.058775
Training Epoch_34 Batch_14, Accuracy: 0.993750, Loss: 0.058140
Training Epoch_34 Batch_15, Accuracy: 0.993164, Loss: 0.058242
Training Epoch_34 Batch_16, Accuracy: 0.993566, Loss: 0.056787
Training Epoch_34 Batch_17, Accuracy: 0.993056, Loss: 0.057365
Training Epoch_34 Batch_18, Accuracy: 0.992599, Loss: 0.059777
Training Epoch_34 Batch_19, Accuracy: 0.992188, Loss: 0.059498
Training Epoch_34 Batch_20, Accuracy: 0.992560, Loss: 0.059106
Training Epoch_34 Batch_21, Accuracy: 0.992898, Loss: 0.058321
Training Epoch_34 Batch_22, Accuracy: 0.992527, Loss: 0.059243
Training Epoch_34 Batch_23, Accuracy: 0.992188, Loss: 0.059195
Training Epoch_34 Batch_24, Accuracy: 0.992500, Loss: 0.059333
Training Epoch_34 Batch_25, Accuracy: 0.992188, Loss: 0.060100
Training Epoch_34 Batch_26, Accuracy: 0.992477, Loss: 0.059796
Training Epoch_34 Batch_27, Accuracy: 0.992746, Loss: 0.059443
Training Epoch_34 Batch_28, Accuracy: 0.992996, Loss: 0.060257
Training Epoch_34 Batch_29, Accuracy: 0.993229, Loss: 0.060770
Training Epoch_34 Batch_30, Accuracy: 0.993448, Loss: 0.059991
Training Epoch_34 Batch_31, Accuracy: 0.993164, Loss: 0.062242
Training Epoch_34 Batch_32, Accuracy: 0.992898, Loss: 0.062583
Training Epoch_34 Batch_33, Accuracy: 0.993107, Loss: 0.062389
Training Epoch_34 Batch_34, Accuracy: 0.992411, Loss: 0.063321
Training Epoch_34 Batch_35, Accuracy: 0.991753, Loss: 0.063463
Training Epoch_34 Batch_36, Accuracy: 0.991554, Loss: 0.062999
Training Epoch_34 Batch_37, Accuracy: 0.991776, Loss: 0.062317
Training Epoch_34 Batch_38, Accuracy: 0.991987, Loss: 0.061954
Training Epoch_34 Batch_39, Accuracy: 0.992188, Loss: 0.062489
Training Epoch_34 Batch_40, Accuracy: 0.991616, Loss: 0.062804
Training Epoch_34 Batch_41, Accuracy: 0.991815, Loss: 0.063255
Training Epoch_34 Batch_42, Accuracy: 0.992006, Loss: 0.063149
Training Epoch_34 Batch_43, Accuracy: 0.992188, Loss: 0.062715
Training Epoch_34 Batch_44, Accuracy: 0.992232, Loss: 0.061839
********************* Summary for epoch: 34 *********************
2019-07-15 09:06:47: Step 34: Training accuracy = 99.2%
2019-07-15 09:06:47: Step 34: Training cross entropy = 0.061839
2019-07-15 09:06:47: Step 34: Validation accuracy = 91.2%
2019-07-15 09:06:47: Step 34: Validation cross entropy = 0.296852


Training Epoch_35 Batch_0, Accuracy: 1.000000, Loss: 0.077878
Training Epoch_35 Batch_1, Accuracy: 0.992188, Loss: 0.088464
Training Epoch_35 Batch_2, Accuracy: 0.994792, Loss: 0.082249
Training Epoch_35 Batch_3, Accuracy: 0.996094, Loss: 0.068983
Training Epoch_35 Batch_4, Accuracy: 0.993750, Loss: 0.068331
Training Epoch_35 Batch_5, Accuracy: 0.994792, Loss: 0.067186
Training Epoch_35 Batch_6, Accuracy: 0.995536, Loss: 0.067935
Training Epoch_35 Batch_7, Accuracy: 0.994141, Loss: 0.067316
Training Epoch_35 Batch_8, Accuracy: 0.994792, Loss: 0.067171
Training Epoch_35 Batch_9, Accuracy: 0.995313, Loss: 0.065328
Training Epoch_35 Batch_10, Accuracy: 0.995739, Loss: 0.061632
Training Epoch_35 Batch_11, Accuracy: 0.996094, Loss: 0.061530
Training Epoch_35 Batch_12, Accuracy: 0.996394, Loss: 0.061014
Training Epoch_35 Batch_13, Accuracy: 0.996652, Loss: 0.060418
Training Epoch_35 Batch_14, Accuracy: 0.996875, Loss: 0.059863
Training Epoch_35 Batch_15, Accuracy: 0.997070, Loss: 0.058485
Training Epoch_35 Batch_16, Accuracy: 0.996324, Loss: 0.059284
Training Epoch_35 Batch_17, Accuracy: 0.996528, Loss: 0.059607
Training Epoch_35 Batch_18, Accuracy: 0.996711, Loss: 0.059333
Training Epoch_35 Batch_19, Accuracy: 0.996875, Loss: 0.059276
Training Epoch_35 Batch_20, Accuracy: 0.997024, Loss: 0.059794
Training Epoch_35 Batch_21, Accuracy: 0.995739, Loss: 0.064801
Training Epoch_35 Batch_22, Accuracy: 0.995245, Loss: 0.065137
Training Epoch_35 Batch_23, Accuracy: 0.995443, Loss: 0.064576
Training Epoch_35 Batch_24, Accuracy: 0.995625, Loss: 0.064450
Training Epoch_35 Batch_25, Accuracy: 0.994591, Loss: 0.064691
Training Epoch_35 Batch_26, Accuracy: 0.994213, Loss: 0.064118
Training Epoch_35 Batch_27, Accuracy: 0.994420, Loss: 0.064018
Training Epoch_35 Batch_28, Accuracy: 0.994612, Loss: 0.064561
Training Epoch_35 Batch_29, Accuracy: 0.994271, Loss: 0.064834
Training Epoch_35 Batch_30, Accuracy: 0.994456, Loss: 0.064498
Training Epoch_35 Batch_31, Accuracy: 0.994141, Loss: 0.064343
Training Epoch_35 Batch_32, Accuracy: 0.994318, Loss: 0.063794
Training Epoch_35 Batch_33, Accuracy: 0.994485, Loss: 0.063764
Training Epoch_35 Batch_34, Accuracy: 0.994643, Loss: 0.062812
Training Epoch_35 Batch_35, Accuracy: 0.994792, Loss: 0.062247
Training Epoch_35 Batch_36, Accuracy: 0.994510, Loss: 0.062460
Training Epoch_35 Batch_37, Accuracy: 0.994655, Loss: 0.061708
Training Epoch_35 Batch_38, Accuracy: 0.994792, Loss: 0.061652
Training Epoch_35 Batch_39, Accuracy: 0.994531, Loss: 0.061580
Training Epoch_35 Batch_40, Accuracy: 0.994665, Loss: 0.061305
Training Epoch_35 Batch_41, Accuracy: 0.994792, Loss: 0.061218
Training Epoch_35 Batch_42, Accuracy: 0.994549, Loss: 0.061412
Training Epoch_35 Batch_43, Accuracy: 0.994673, Loss: 0.061115
Training Epoch_35 Batch_44, Accuracy: 0.994703, Loss: 0.060576
********************* Summary for epoch: 35 *********************
2019-07-15 09:06:48: Step 35: Training accuracy = 99.5%
2019-07-15 09:06:48: Step 35: Training cross entropy = 0.060576
2019-07-15 09:06:48: Step 35: Validation accuracy = 90.9%
2019-07-15 09:06:48: Step 35: Validation cross entropy = 0.311814


Training Epoch_36 Batch_0, Accuracy: 0.984375, Loss: 0.059544
Training Epoch_36 Batch_1, Accuracy: 0.992188, Loss: 0.057562
Training Epoch_36 Batch_2, Accuracy: 0.994792, Loss: 0.053105
Training Epoch_36 Batch_3, Accuracy: 0.996094, Loss: 0.055322
Training Epoch_36 Batch_4, Accuracy: 0.990625, Loss: 0.068423
Training Epoch_36 Batch_5, Accuracy: 0.992188, Loss: 0.061937
Training Epoch_36 Batch_6, Accuracy: 0.993304, Loss: 0.062378
Training Epoch_36 Batch_7, Accuracy: 0.994141, Loss: 0.060484
Training Epoch_36 Batch_8, Accuracy: 0.993056, Loss: 0.063105
Training Epoch_36 Batch_9, Accuracy: 0.992188, Loss: 0.062673
Training Epoch_36 Batch_10, Accuracy: 0.992898, Loss: 0.061398
Training Epoch_36 Batch_11, Accuracy: 0.993490, Loss: 0.060080
Training Epoch_36 Batch_12, Accuracy: 0.993990, Loss: 0.058518
Training Epoch_36 Batch_13, Accuracy: 0.994420, Loss: 0.057908
Training Epoch_36 Batch_14, Accuracy: 0.994792, Loss: 0.056568
Training Epoch_36 Batch_15, Accuracy: 0.994141, Loss: 0.057131
Training Epoch_36 Batch_16, Accuracy: 0.994485, Loss: 0.057424
Training Epoch_36 Batch_17, Accuracy: 0.994792, Loss: 0.056927
Training Epoch_36 Batch_18, Accuracy: 0.994243, Loss: 0.056544
Training Epoch_36 Batch_19, Accuracy: 0.994531, Loss: 0.056197
Training Epoch_36 Batch_20, Accuracy: 0.994048, Loss: 0.055689
Training Epoch_36 Batch_21, Accuracy: 0.994318, Loss: 0.054455
Training Epoch_36 Batch_22, Accuracy: 0.994565, Loss: 0.054508
Training Epoch_36 Batch_23, Accuracy: 0.994792, Loss: 0.054216
Training Epoch_36 Batch_24, Accuracy: 0.995000, Loss: 0.053877
Training Epoch_36 Batch_25, Accuracy: 0.994591, Loss: 0.053811
Training Epoch_36 Batch_26, Accuracy: 0.994213, Loss: 0.054319
Training Epoch_36 Batch_27, Accuracy: 0.994420, Loss: 0.054347
Training Epoch_36 Batch_28, Accuracy: 0.994073, Loss: 0.054353
Training Epoch_36 Batch_29, Accuracy: 0.994271, Loss: 0.053932
Training Epoch_36 Batch_30, Accuracy: 0.994456, Loss: 0.053955
Training Epoch_36 Batch_31, Accuracy: 0.994141, Loss: 0.053848
Training Epoch_36 Batch_32, Accuracy: 0.994318, Loss: 0.054071
Training Epoch_36 Batch_33, Accuracy: 0.992647, Loss: 0.055727
Training Epoch_36 Batch_34, Accuracy: 0.992857, Loss: 0.055586
Training Epoch_36 Batch_35, Accuracy: 0.992622, Loss: 0.056158
Training Epoch_36 Batch_36, Accuracy: 0.992821, Loss: 0.056777
Training Epoch_36 Batch_37, Accuracy: 0.992599, Loss: 0.057714
Training Epoch_36 Batch_38, Accuracy: 0.992388, Loss: 0.059862
Training Epoch_36 Batch_39, Accuracy: 0.992188, Loss: 0.060178
Training Epoch_36 Batch_40, Accuracy: 0.992378, Loss: 0.060041
Training Epoch_36 Batch_41, Accuracy: 0.992560, Loss: 0.060564
Training Epoch_36 Batch_42, Accuracy: 0.992733, Loss: 0.059821
Training Epoch_36 Batch_43, Accuracy: 0.992898, Loss: 0.059344
Training Epoch_36 Batch_44, Accuracy: 0.992938, Loss: 0.059015
********************* Summary for epoch: 36 *********************
2019-07-15 09:06:48: Step 36: Training accuracy = 99.3%
2019-07-15 09:06:48: Step 36: Training cross entropy = 0.059015
2019-07-15 09:06:48: Step 36: Validation accuracy = 90.9%
2019-07-15 09:06:48: Step 36: Validation cross entropy = 0.298097


Training Epoch_37 Batch_0, Accuracy: 1.000000, Loss: 0.064870
Training Epoch_37 Batch_1, Accuracy: 1.000000, Loss: 0.051294
Training Epoch_37 Batch_2, Accuracy: 0.994792, Loss: 0.050364
Training Epoch_37 Batch_3, Accuracy: 0.992188, Loss: 0.051708
Training Epoch_37 Batch_4, Accuracy: 0.993750, Loss: 0.048819
Training Epoch_37 Batch_5, Accuracy: 0.989583, Loss: 0.052106
Training Epoch_37 Batch_6, Accuracy: 0.991071, Loss: 0.051820
Training Epoch_37 Batch_7, Accuracy: 0.992188, Loss: 0.049768
Training Epoch_37 Batch_8, Accuracy: 0.991319, Loss: 0.050537
Training Epoch_37 Batch_9, Accuracy: 0.992188, Loss: 0.049462
Training Epoch_37 Batch_10, Accuracy: 0.992898, Loss: 0.050315
Training Epoch_37 Batch_11, Accuracy: 0.992188, Loss: 0.053010
Training Epoch_37 Batch_12, Accuracy: 0.991587, Loss: 0.052968
Training Epoch_37 Batch_13, Accuracy: 0.992188, Loss: 0.052689
Training Epoch_37 Batch_14, Accuracy: 0.992708, Loss: 0.051919
Training Epoch_37 Batch_15, Accuracy: 0.993164, Loss: 0.052518
Training Epoch_37 Batch_16, Accuracy: 0.993566, Loss: 0.052469
Training Epoch_37 Batch_17, Accuracy: 0.993924, Loss: 0.052867
Training Epoch_37 Batch_18, Accuracy: 0.994243, Loss: 0.052106
Training Epoch_37 Batch_19, Accuracy: 0.994531, Loss: 0.052098
Training Epoch_37 Batch_20, Accuracy: 0.994792, Loss: 0.051780
Training Epoch_37 Batch_21, Accuracy: 0.995028, Loss: 0.051421
Training Epoch_37 Batch_22, Accuracy: 0.995245, Loss: 0.051178
Training Epoch_37 Batch_23, Accuracy: 0.995443, Loss: 0.051241
Training Epoch_37 Batch_24, Accuracy: 0.995000, Loss: 0.052234
Training Epoch_37 Batch_25, Accuracy: 0.995192, Loss: 0.052542
Training Epoch_37 Batch_26, Accuracy: 0.994792, Loss: 0.053066
Training Epoch_37 Batch_27, Accuracy: 0.994978, Loss: 0.053005
Training Epoch_37 Batch_28, Accuracy: 0.995151, Loss: 0.052848
Training Epoch_37 Batch_29, Accuracy: 0.994792, Loss: 0.053089
Training Epoch_37 Batch_30, Accuracy: 0.994960, Loss: 0.052776
Training Epoch_37 Batch_31, Accuracy: 0.994629, Loss: 0.053809
Training Epoch_37 Batch_32, Accuracy: 0.994792, Loss: 0.053676
Training Epoch_37 Batch_33, Accuracy: 0.994945, Loss: 0.053238
Training Epoch_37 Batch_34, Accuracy: 0.995089, Loss: 0.053397
Training Epoch_37 Batch_35, Accuracy: 0.994792, Loss: 0.053061
Training Epoch_37 Batch_36, Accuracy: 0.994932, Loss: 0.053141
Training Epoch_37 Batch_37, Accuracy: 0.993832, Loss: 0.055635
Training Epoch_37 Batch_38, Accuracy: 0.993990, Loss: 0.055650
Training Epoch_37 Batch_39, Accuracy: 0.994141, Loss: 0.055300
Training Epoch_37 Batch_40, Accuracy: 0.993902, Loss: 0.056439
Training Epoch_37 Batch_41, Accuracy: 0.993304, Loss: 0.057290
Training Epoch_37 Batch_42, Accuracy: 0.993459, Loss: 0.056880
Training Epoch_37 Batch_43, Accuracy: 0.993253, Loss: 0.057184
Training Epoch_37 Batch_44, Accuracy: 0.993291, Loss: 0.057351
********************* Summary for epoch: 37 *********************
2019-07-15 09:06:48: Step 37: Training accuracy = 99.3%
2019-07-15 09:06:48: Step 37: Training cross entropy = 0.057351
2019-07-15 09:06:48: Step 37: Validation accuracy = 91.2%
2019-07-15 09:06:48: Step 37: Validation cross entropy = 0.306366


Training Epoch_38 Batch_0, Accuracy: 0.984375, Loss: 0.066696
Training Epoch_38 Batch_1, Accuracy: 0.992188, Loss: 0.052272
Training Epoch_38 Batch_2, Accuracy: 0.989583, Loss: 0.057254
Training Epoch_38 Batch_3, Accuracy: 0.992188, Loss: 0.051500
Training Epoch_38 Batch_4, Accuracy: 0.993750, Loss: 0.053012
Training Epoch_38 Batch_5, Accuracy: 0.994792, Loss: 0.053062
Training Epoch_38 Batch_6, Accuracy: 0.995536, Loss: 0.052568
Training Epoch_38 Batch_7, Accuracy: 0.996094, Loss: 0.051209
Training Epoch_38 Batch_8, Accuracy: 0.996528, Loss: 0.049250
Training Epoch_38 Batch_9, Accuracy: 0.996875, Loss: 0.048512
Training Epoch_38 Batch_10, Accuracy: 0.997159, Loss: 0.047260
Training Epoch_38 Batch_11, Accuracy: 0.997396, Loss: 0.047529
Training Epoch_38 Batch_12, Accuracy: 0.997596, Loss: 0.047190
Training Epoch_38 Batch_13, Accuracy: 0.997768, Loss: 0.048097
Training Epoch_38 Batch_14, Accuracy: 0.996875, Loss: 0.053892
Training Epoch_38 Batch_15, Accuracy: 0.996094, Loss: 0.055209
Training Epoch_38 Batch_16, Accuracy: 0.995404, Loss: 0.054860
Training Epoch_38 Batch_17, Accuracy: 0.995660, Loss: 0.054087
Training Epoch_38 Batch_18, Accuracy: 0.993421, Loss: 0.055386
Training Epoch_38 Batch_19, Accuracy: 0.993750, Loss: 0.054870
Training Epoch_38 Batch_20, Accuracy: 0.993304, Loss: 0.056833
Training Epoch_38 Batch_21, Accuracy: 0.993608, Loss: 0.056586
Training Epoch_38 Batch_22, Accuracy: 0.993886, Loss: 0.056596
Training Epoch_38 Batch_23, Accuracy: 0.994141, Loss: 0.057062
Training Epoch_38 Batch_24, Accuracy: 0.994375, Loss: 0.057017
Training Epoch_38 Batch_25, Accuracy: 0.994591, Loss: 0.056788
Training Epoch_38 Batch_26, Accuracy: 0.994213, Loss: 0.057092
Training Epoch_38 Batch_27, Accuracy: 0.994420, Loss: 0.056386
Training Epoch_38 Batch_28, Accuracy: 0.992996, Loss: 0.057827
Training Epoch_38 Batch_29, Accuracy: 0.993229, Loss: 0.057035
Training Epoch_38 Batch_30, Accuracy: 0.992440, Loss: 0.056990
Training Epoch_38 Batch_31, Accuracy: 0.992676, Loss: 0.056252
Training Epoch_38 Batch_32, Accuracy: 0.992424, Loss: 0.056091
Training Epoch_38 Batch_33, Accuracy: 0.992647, Loss: 0.055372
Training Epoch_38 Batch_34, Accuracy: 0.992857, Loss: 0.055191
Training Epoch_38 Batch_35, Accuracy: 0.993056, Loss: 0.055624
Training Epoch_38 Batch_36, Accuracy: 0.993243, Loss: 0.054813
Training Epoch_38 Batch_37, Accuracy: 0.993421, Loss: 0.054662
Training Epoch_38 Batch_38, Accuracy: 0.993590, Loss: 0.054623
Training Epoch_38 Batch_39, Accuracy: 0.993750, Loss: 0.054813
Training Epoch_38 Batch_40, Accuracy: 0.993521, Loss: 0.055600
Training Epoch_38 Batch_41, Accuracy: 0.993304, Loss: 0.055955
Training Epoch_38 Batch_42, Accuracy: 0.993459, Loss: 0.055944
Training Epoch_38 Batch_43, Accuracy: 0.993608, Loss: 0.055851
Training Epoch_38 Batch_44, Accuracy: 0.993644, Loss: 0.054891
********************* Summary for epoch: 38 *********************
2019-07-15 09:06:49: Step 38: Training accuracy = 99.4%
2019-07-15 09:06:49: Step 38: Training cross entropy = 0.054891
2019-07-15 09:06:49: Step 38: Validation accuracy = 91.2%
2019-07-15 09:06:49: Step 38: Validation cross entropy = 0.300030


Training Epoch_39 Batch_0, Accuracy: 0.984375, Loss: 0.038292
Training Epoch_39 Batch_1, Accuracy: 0.992188, Loss: 0.040538
Training Epoch_39 Batch_2, Accuracy: 0.994792, Loss: 0.038144
Training Epoch_39 Batch_3, Accuracy: 0.996094, Loss: 0.039898
Training Epoch_39 Batch_4, Accuracy: 0.996875, Loss: 0.039866
Training Epoch_39 Batch_5, Accuracy: 0.992188, Loss: 0.044601
Training Epoch_39 Batch_6, Accuracy: 0.993304, Loss: 0.043706
Training Epoch_39 Batch_7, Accuracy: 0.994141, Loss: 0.044500
Training Epoch_39 Batch_8, Accuracy: 0.993056, Loss: 0.046525
Training Epoch_39 Batch_9, Accuracy: 0.992188, Loss: 0.047661
Training Epoch_39 Batch_10, Accuracy: 0.992898, Loss: 0.048357
Training Epoch_39 Batch_11, Accuracy: 0.993490, Loss: 0.048859
Training Epoch_39 Batch_12, Accuracy: 0.993990, Loss: 0.048877
Training Epoch_39 Batch_13, Accuracy: 0.993304, Loss: 0.049638
Training Epoch_39 Batch_14, Accuracy: 0.993750, Loss: 0.050211
Training Epoch_39 Batch_15, Accuracy: 0.994141, Loss: 0.050552
Training Epoch_39 Batch_16, Accuracy: 0.994485, Loss: 0.049765
Training Epoch_39 Batch_17, Accuracy: 0.994792, Loss: 0.048907
Training Epoch_39 Batch_18, Accuracy: 0.995066, Loss: 0.048173
Training Epoch_39 Batch_19, Accuracy: 0.995313, Loss: 0.047850
Training Epoch_39 Batch_20, Accuracy: 0.995536, Loss: 0.047749
Training Epoch_39 Batch_21, Accuracy: 0.995739, Loss: 0.048017
Training Epoch_39 Batch_22, Accuracy: 0.995245, Loss: 0.049126
Training Epoch_39 Batch_23, Accuracy: 0.994792, Loss: 0.049873
Training Epoch_39 Batch_24, Accuracy: 0.995000, Loss: 0.049875
Training Epoch_39 Batch_25, Accuracy: 0.995192, Loss: 0.049627
Training Epoch_39 Batch_26, Accuracy: 0.994792, Loss: 0.049732
Training Epoch_39 Batch_27, Accuracy: 0.994978, Loss: 0.049711
Training Epoch_39 Batch_28, Accuracy: 0.995151, Loss: 0.049360
Training Epoch_39 Batch_29, Accuracy: 0.994792, Loss: 0.051169
Training Epoch_39 Batch_30, Accuracy: 0.994960, Loss: 0.051330
Training Epoch_39 Batch_31, Accuracy: 0.995117, Loss: 0.052019
Training Epoch_39 Batch_32, Accuracy: 0.995265, Loss: 0.051900
Training Epoch_39 Batch_33, Accuracy: 0.995404, Loss: 0.051128
Training Epoch_39 Batch_34, Accuracy: 0.995089, Loss: 0.051402
Training Epoch_39 Batch_35, Accuracy: 0.995226, Loss: 0.051231
Training Epoch_39 Batch_36, Accuracy: 0.994932, Loss: 0.051397
Training Epoch_39 Batch_37, Accuracy: 0.995066, Loss: 0.051554
Training Epoch_39 Batch_38, Accuracy: 0.994391, Loss: 0.054144
Training Epoch_39 Batch_39, Accuracy: 0.994531, Loss: 0.054169
Training Epoch_39 Batch_40, Accuracy: 0.994284, Loss: 0.053987
Training Epoch_39 Batch_41, Accuracy: 0.994048, Loss: 0.053957
Training Epoch_39 Batch_42, Accuracy: 0.994186, Loss: 0.054339
Training Epoch_39 Batch_43, Accuracy: 0.994318, Loss: 0.054062
Training Epoch_39 Batch_44, Accuracy: 0.994350, Loss: 0.053988
********************* Summary for epoch: 39 *********************
2019-07-15 09:06:49: Step 39: Training accuracy = 99.4%
2019-07-15 09:06:49: Step 39: Training cross entropy = 0.053988
2019-07-15 09:06:49: Step 39: Validation accuracy = 90.9%
2019-07-15 09:06:49: Step 39: Validation cross entropy = 0.307623


Training Epoch_40 Batch_0, Accuracy: 1.000000, Loss: 0.034501
Training Epoch_40 Batch_1, Accuracy: 1.000000, Loss: 0.041418
Training Epoch_40 Batch_2, Accuracy: 0.994792, Loss: 0.050829
Training Epoch_40 Batch_3, Accuracy: 0.996094, Loss: 0.051816
Training Epoch_40 Batch_4, Accuracy: 0.996875, Loss: 0.050120
Training Epoch_40 Batch_5, Accuracy: 0.994792, Loss: 0.052038
Training Epoch_40 Batch_6, Accuracy: 0.995536, Loss: 0.049090
Training Epoch_40 Batch_7, Accuracy: 0.996094, Loss: 0.046684
Training Epoch_40 Batch_8, Accuracy: 0.996528, Loss: 0.044483
Training Epoch_40 Batch_9, Accuracy: 0.996875, Loss: 0.047052
Training Epoch_40 Batch_10, Accuracy: 0.997159, Loss: 0.047000
Training Epoch_40 Batch_11, Accuracy: 0.997396, Loss: 0.047128
Training Epoch_40 Batch_12, Accuracy: 0.997596, Loss: 0.046301
Training Epoch_40 Batch_13, Accuracy: 0.997768, Loss: 0.046788
Training Epoch_40 Batch_14, Accuracy: 0.997917, Loss: 0.048624
Training Epoch_40 Batch_15, Accuracy: 0.997070, Loss: 0.048999
Training Epoch_40 Batch_16, Accuracy: 0.997243, Loss: 0.049601
Training Epoch_40 Batch_17, Accuracy: 0.997396, Loss: 0.049117
Training Epoch_40 Batch_18, Accuracy: 0.997533, Loss: 0.050096
Training Epoch_40 Batch_19, Accuracy: 0.997656, Loss: 0.049638
Training Epoch_40 Batch_20, Accuracy: 0.997768, Loss: 0.050901
Training Epoch_40 Batch_21, Accuracy: 0.997869, Loss: 0.050469
Training Epoch_40 Batch_22, Accuracy: 0.997962, Loss: 0.050674
Training Epoch_40 Batch_23, Accuracy: 0.998047, Loss: 0.049726
Training Epoch_40 Batch_24, Accuracy: 0.998125, Loss: 0.049268
Training Epoch_40 Batch_25, Accuracy: 0.998197, Loss: 0.049070
Training Epoch_40 Batch_26, Accuracy: 0.998264, Loss: 0.048981
Training Epoch_40 Batch_27, Accuracy: 0.997768, Loss: 0.049312
Training Epoch_40 Batch_28, Accuracy: 0.997306, Loss: 0.050257
Training Epoch_40 Batch_29, Accuracy: 0.996875, Loss: 0.052892
Training Epoch_40 Batch_30, Accuracy: 0.996976, Loss: 0.052201
Training Epoch_40 Batch_31, Accuracy: 0.997070, Loss: 0.051617
Training Epoch_40 Batch_32, Accuracy: 0.997159, Loss: 0.051317
Training Epoch_40 Batch_33, Accuracy: 0.997243, Loss: 0.051550
Training Epoch_40 Batch_34, Accuracy: 0.996429, Loss: 0.052838
Training Epoch_40 Batch_35, Accuracy: 0.996094, Loss: 0.052750
Training Epoch_40 Batch_36, Accuracy: 0.995777, Loss: 0.053241
Training Epoch_40 Batch_37, Accuracy: 0.995888, Loss: 0.052340
Training Epoch_40 Batch_38, Accuracy: 0.995593, Loss: 0.052484
Training Epoch_40 Batch_39, Accuracy: 0.995313, Loss: 0.052318
Training Epoch_40 Batch_40, Accuracy: 0.995046, Loss: 0.052619
Training Epoch_40 Batch_41, Accuracy: 0.994792, Loss: 0.052725
Training Epoch_40 Batch_42, Accuracy: 0.994913, Loss: 0.052446
Training Epoch_40 Batch_43, Accuracy: 0.995028, Loss: 0.052168
Training Epoch_40 Batch_44, Accuracy: 0.995057, Loss: 0.051903
********************* Summary for epoch: 40 *********************
2019-07-15 09:06:49: Step 40: Training accuracy = 99.5%
2019-07-15 09:06:49: Step 40: Training cross entropy = 0.051903
2019-07-15 09:06:49: Step 40: Validation accuracy = 89.8%
2019-07-15 09:06:49: Step 40: Validation cross entropy = 0.311860


Training Epoch_41 Batch_0, Accuracy: 1.000000, Loss: 0.045238
Training Epoch_41 Batch_1, Accuracy: 1.000000, Loss: 0.032117
Training Epoch_41 Batch_2, Accuracy: 1.000000, Loss: 0.037885
Training Epoch_41 Batch_3, Accuracy: 1.000000, Loss: 0.038287
Training Epoch_41 Batch_4, Accuracy: 1.000000, Loss: 0.041582
Training Epoch_41 Batch_5, Accuracy: 1.000000, Loss: 0.042222
Training Epoch_41 Batch_6, Accuracy: 1.000000, Loss: 0.040808
Training Epoch_41 Batch_7, Accuracy: 1.000000, Loss: 0.041590
Training Epoch_41 Batch_8, Accuracy: 0.998264, Loss: 0.043802
Training Epoch_41 Batch_9, Accuracy: 0.998438, Loss: 0.043770
Training Epoch_41 Batch_10, Accuracy: 0.998580, Loss: 0.042354
Training Epoch_41 Batch_11, Accuracy: 0.997396, Loss: 0.043632
Training Epoch_41 Batch_12, Accuracy: 0.997596, Loss: 0.043600
Training Epoch_41 Batch_13, Accuracy: 0.997768, Loss: 0.044624
Training Epoch_41 Batch_14, Accuracy: 0.997917, Loss: 0.043784
Training Epoch_41 Batch_15, Accuracy: 0.998047, Loss: 0.042860
Training Epoch_41 Batch_16, Accuracy: 0.998162, Loss: 0.042946
Training Epoch_41 Batch_17, Accuracy: 0.998264, Loss: 0.042312
Training Epoch_41 Batch_18, Accuracy: 0.998355, Loss: 0.043454
Training Epoch_41 Batch_19, Accuracy: 0.998438, Loss: 0.042966
Training Epoch_41 Batch_20, Accuracy: 0.998512, Loss: 0.043243
Training Epoch_41 Batch_21, Accuracy: 0.998580, Loss: 0.043204
Training Epoch_41 Batch_22, Accuracy: 0.998641, Loss: 0.042517
Training Epoch_41 Batch_23, Accuracy: 0.998698, Loss: 0.042444
Training Epoch_41 Batch_24, Accuracy: 0.998750, Loss: 0.042306
Training Epoch_41 Batch_25, Accuracy: 0.998197, Loss: 0.042797
Training Epoch_41 Batch_26, Accuracy: 0.998264, Loss: 0.043077
Training Epoch_41 Batch_27, Accuracy: 0.998326, Loss: 0.042975
Training Epoch_41 Batch_28, Accuracy: 0.997845, Loss: 0.043260
Training Epoch_41 Batch_29, Accuracy: 0.997917, Loss: 0.043916
Training Epoch_41 Batch_30, Accuracy: 0.997480, Loss: 0.044309
Training Epoch_41 Batch_31, Accuracy: 0.997559, Loss: 0.044268
Training Epoch_41 Batch_32, Accuracy: 0.997633, Loss: 0.045234
Training Epoch_41 Batch_33, Accuracy: 0.997702, Loss: 0.045671
Training Epoch_41 Batch_34, Accuracy: 0.997768, Loss: 0.045314
Training Epoch_41 Batch_35, Accuracy: 0.997830, Loss: 0.045978
Training Epoch_41 Batch_36, Accuracy: 0.997466, Loss: 0.046707
Training Epoch_41 Batch_37, Accuracy: 0.997533, Loss: 0.046399
Training Epoch_41 Batch_38, Accuracy: 0.997196, Loss: 0.047355
Training Epoch_41 Batch_39, Accuracy: 0.996875, Loss: 0.048770
Training Epoch_41 Batch_40, Accuracy: 0.996951, Loss: 0.048869
Training Epoch_41 Batch_41, Accuracy: 0.997024, Loss: 0.048376
Training Epoch_41 Batch_42, Accuracy: 0.997093, Loss: 0.048226
Training Epoch_41 Batch_43, Accuracy: 0.996449, Loss: 0.049328
Training Epoch_41 Batch_44, Accuracy: 0.996469, Loss: 0.049255
********************* Summary for epoch: 41 *********************
2019-07-15 09:06:50: Step 41: Training accuracy = 99.6%
2019-07-15 09:06:50: Step 41: Training cross entropy = 0.049255
2019-07-15 09:06:50: Step 41: Validation accuracy = 90.6%
2019-07-15 09:06:50: Step 41: Validation cross entropy = 0.296016


Training Epoch_42 Batch_0, Accuracy: 1.000000, Loss: 0.040280
Training Epoch_42 Batch_1, Accuracy: 0.992188, Loss: 0.073096
Training Epoch_42 Batch_2, Accuracy: 0.994792, Loss: 0.061634
Training Epoch_42 Batch_3, Accuracy: 0.992188, Loss: 0.059888
Training Epoch_42 Batch_4, Accuracy: 0.993750, Loss: 0.059358
Training Epoch_42 Batch_5, Accuracy: 0.994792, Loss: 0.054415
Training Epoch_42 Batch_6, Accuracy: 0.995536, Loss: 0.051803
Training Epoch_42 Batch_7, Accuracy: 0.996094, Loss: 0.051426
Training Epoch_42 Batch_8, Accuracy: 0.996528, Loss: 0.048975
Training Epoch_42 Batch_9, Accuracy: 0.996875, Loss: 0.050396
Training Epoch_42 Batch_10, Accuracy: 0.997159, Loss: 0.050148
Training Epoch_42 Batch_11, Accuracy: 0.997396, Loss: 0.049472
Training Epoch_42 Batch_12, Accuracy: 0.996394, Loss: 0.051555
Training Epoch_42 Batch_13, Accuracy: 0.995536, Loss: 0.051639
Training Epoch_42 Batch_14, Accuracy: 0.995833, Loss: 0.050224
Training Epoch_42 Batch_15, Accuracy: 0.996094, Loss: 0.049613
Training Epoch_42 Batch_16, Accuracy: 0.996324, Loss: 0.050139
Training Epoch_42 Batch_17, Accuracy: 0.996528, Loss: 0.049966
Training Epoch_42 Batch_18, Accuracy: 0.996711, Loss: 0.049722
Training Epoch_42 Batch_19, Accuracy: 0.996875, Loss: 0.049934
Training Epoch_42 Batch_20, Accuracy: 0.997024, Loss: 0.049468
Training Epoch_42 Batch_21, Accuracy: 0.997159, Loss: 0.048496
Training Epoch_42 Batch_22, Accuracy: 0.996603, Loss: 0.048740
Training Epoch_42 Batch_23, Accuracy: 0.996745, Loss: 0.048207
Training Epoch_42 Batch_24, Accuracy: 0.996875, Loss: 0.047460
Training Epoch_42 Batch_25, Accuracy: 0.996995, Loss: 0.047701
Training Epoch_42 Batch_26, Accuracy: 0.997106, Loss: 0.047170
Training Epoch_42 Batch_27, Accuracy: 0.996652, Loss: 0.048138
Training Epoch_42 Batch_28, Accuracy: 0.996767, Loss: 0.047338
Training Epoch_42 Batch_29, Accuracy: 0.996875, Loss: 0.047137
Training Epoch_42 Batch_30, Accuracy: 0.996976, Loss: 0.047077
Training Epoch_42 Batch_31, Accuracy: 0.997070, Loss: 0.047607
Training Epoch_42 Batch_32, Accuracy: 0.997159, Loss: 0.047384
Training Epoch_42 Batch_33, Accuracy: 0.997243, Loss: 0.047580
Training Epoch_42 Batch_34, Accuracy: 0.996875, Loss: 0.047777
Training Epoch_42 Batch_35, Accuracy: 0.996962, Loss: 0.047098
Training Epoch_42 Batch_36, Accuracy: 0.997044, Loss: 0.047349
Training Epoch_42 Batch_37, Accuracy: 0.997122, Loss: 0.046938
Training Epoch_42 Batch_38, Accuracy: 0.996795, Loss: 0.047569
Training Epoch_42 Batch_39, Accuracy: 0.996875, Loss: 0.047540
Training Epoch_42 Batch_40, Accuracy: 0.996570, Loss: 0.048078
Training Epoch_42 Batch_41, Accuracy: 0.996652, Loss: 0.048032
Training Epoch_42 Batch_42, Accuracy: 0.996730, Loss: 0.047536
Training Epoch_42 Batch_43, Accuracy: 0.996804, Loss: 0.048185
Training Epoch_42 Batch_44, Accuracy: 0.996822, Loss: 0.047740
********************* Summary for epoch: 42 *********************
2019-07-15 09:06:50: Step 42: Training accuracy = 99.7%
2019-07-15 09:06:50: Step 42: Training cross entropy = 0.047740
2019-07-15 09:06:50: Step 42: Validation accuracy = 91.2%
2019-07-15 09:06:50: Step 42: Validation cross entropy = 0.295156


The training results have stopped improving in last 15 epochs.
Restoring parameters from /data/jerry-flow-training-2019-07-15t0858z810007/interval-model-27
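
This is the early-stopping logic at work: validation accuracy peaked at epoch 27, no improvement followed for 15 epochs, so training stops after epoch 42 and the checkpoint of the best epoch is restored. A minimal sketch of that behaviour (the saver and helper names are assumptions, not the SAP Leonardo source):

```python
# Sketch only: stop after `patience` epochs without validation improvement
# and roll back to the best checkpoint (here epoch 27 -> interval-model-27).
best_accuracy, best_epoch, patience = 0.0, 0, 15

for epoch in range(total_epochs):
    train_one_epoch()                                            # assumed helper
    val_accuracy = evaluate_validation()                         # assumed helper
    if val_accuracy > best_accuracy:
        best_accuracy, best_epoch = val_accuracy, epoch
        saver.save(sess, "interval-model", global_step=epoch)    # keep the best checkpoint
    elif epoch - best_epoch >= patience:
        print("The training results have stopped improving in last %d epochs." % patience)
        saver.restore(sess, "interval-model-%d" % best_epoch)    # restore the best epoch
        break
```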

##########################################
########### Retraining Summary ###########
##########################################
Job id : jerry-flow-training-2019-07-15t0858z810007
Training depth : 1
Training batch size : 64
Learning rate : 0.001000
Total retraining epochs : 150
Stop after no improved epochs: 15
Epoch with best accuracy : 27
Best validation accuracy : 0.917127
Final test accuracy is : 0.917355
Predict top classes : 5
Retraining started at : 2019-07-15 09:05:10
Retraining ended at : 2019-07-15 09:06:52
Retraining lasted : 0:01:42.118730
Scale of 0 disables regularizer.
Restoring parameters from /data/jerry-flow-training-2019-07-15t0858z810007/interval-model-27
No assets to save.
No assets to write.
SavedModel written to: b'/data/jerry-flow-training-2019-07-15t0858z810007/tfs/saved_model.pb'
TF Serving model saved.
Model is uploaded to repository with name flowerjerrymodel and version 1.
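
For reference, the artifact exported under tfs/ is a standard TensorFlow SavedModel, so it can also be inspected or served outside of Leonardo. A hedged TF 1.x sketch of loading it and running one prediction (only the export path comes from the log above; the signature key, input shape and preprocessing are assumptions):

```python
import numpy as np
import tensorflow as tf

export_dir = "/data/jerry-flow-training-2019-07-15t0858z810007/tfs"  # path from the log

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel with the standard "serve" tag.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    sig = meta_graph.signature_def["serving_default"]          # assumed signature key

    # Resolve the (single) input and output tensors from the signature.
    input_tensor = sess.graph.get_tensor_by_name(list(sig.inputs.values())[0].name)
    output_tensor = sess.graph.get_tensor_by_name(list(sig.outputs.values())[0].name)

    # Assumed placeholder input: one 299x299 RGB image, already preprocessed.
    image_batch = np.zeros((1, 299, 299, 3), dtype=np.float32)
    scores = sess.run(output_tensor, feed_dict={input_tensor: image_batch})
    print(scores)  # per-class scores for the 5 flower labels
```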
