Alchemy Notes 1: exploring how hyperparameters affect training-set convergence for a VGG16-style network on CIFAR-10/100 in TensorFlow


The dataset:

The images are tiny: the training set is 50000 × 32 × 32 × 3.
CIFAR-10 has 10 classes and CIFAR-100 has 100. Official page:
http://www.cs.toronto.edu/~kriz/cifar.html
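
The repo has its own data reader, but just to show the shapes involved, here is a minimal loading sketch using tf.keras.datasets (an assumption on my part; the repo may read the pickle files directly):

# Sketch only: load CIFAR-10 via tf.keras.datasets and normalize to [0, 1].
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype(np.float32) / 255.0   # shape (50000, 32, 32, 3)
y_train = y_train.reshape(-1)                  # 50000 integer labels in [0, 10)
print(x_train.shape, y_train.shape)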

Network structure:

Roughly VGG16, with the extra fully-connected layers and the last conv block (conv5) dropped.
Built with the high-level tf.layers API.

Source code:

https://github.com/kaixindelele/tensorflow_cifar10_vgg16_keras_read
A star would be appreciated~

First, a note on the structure:

I haven't touched the test set yet; everything below is on the training set only. But when training from scratch, if the hyperparameters or the structure are off, even the training set simply will not converge!

For instance, I build the network inside a CNN class and already expose the output node as the attribute self.predict. If, on top of that, I also return self.predict from the build call and use that returned variable, the run may fail to converge!
I'll try toggling just this one thing later and see whether it converges. See Experiment 11.

def _build_net(self, batch_images):
    # VGG16-style stack of 3x3 conv blocks; num_classes comes from the class configuration
    with tf.name_scope("CNN"):
        with tf.name_scope("conv1"):
            self.conv1_1 = tf.layers.conv2d(batch_images, filters=32, kernel_size=3, activation=tf.nn.relu, name="conv1_1", padding="Same")
            self.conv1_2 = tf.layers.conv2d(self.conv1_1, filters=32, kernel_size=3, activation=tf.nn.relu, name="conv1_2", padding="Same")
            self.pool_1 = tf.layers.max_pooling2d(self.conv1_2, 2, 2, name="pool_1")

        with tf.name_scope("conv2"):
            self.conv2_1 = tf.layers.conv2d(self.pool_1, filters=64, kernel_size=3, activation=tf.nn.relu, name="conv2_1", padding="Same")
            self.conv2_2 = tf.layers.conv2d(self.conv2_1, filters=64, kernel_size=3, activation=tf.nn.relu, name="conv2_2", padding="Same")
            self.pool_2 = tf.layers.max_pooling2d(self.conv2_2, 2, 2, name="pool_2")

        with tf.name_scope("conv3"):
            self.conv3_1 = tf.layers.conv2d(self.pool_2, filters=128, kernel_size=3, activation=tf.nn.relu, name="conv3_1", padding="Same")
            self.conv3_2 = tf.layers.conv2d(self.conv3_1, filters=128, kernel_size=3, activation=tf.nn.relu, name="conv3_2", padding="Same")
            self.pool_3 = tf.layers.max_pooling2d(self.conv3_2, 2, 2, name="pool_3")

        with tf.name_scope("conv4"):
            self.conv4_1 = tf.layers.conv2d(self.pool_3, filters=256, kernel_size=3, activation=tf.nn.relu, name="conv4_1", padding="Same")
            self.conv4_2 = tf.layers.conv2d(self.conv4_1, filters=256, kernel_size=3, activation=tf.nn.relu, name="conv4_2", padding="Same")
            self.pool_4 = tf.layers.max_pooling2d(self.conv4_2, 2, 2, name="pool_4")

        with tf.name_scope("FC"):
            self.flatten = tf.layers.flatten(self.pool_4, "flatten")
            self.fc = tf.layers.dense(self.flatten, 256, activation=tf.nn.relu, name="fc")
            self.predict = tf.layers.dense(self.fc, num_classes, name="predict")

On the cross-entropy loss

I use this function:

tf.losses.softmax_cross_entropy(self.y_, self.predict, reduction=tf.losses.Reduction.MEAN)

It's easy to overlook, but this function already applies the softmax internally, so there is no need to put a softmax activation on the final output layer; pass in the raw logits.
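
For context, a minimal sketch of how the loss and a training-accuracy op could be wired up around self.predict (the placeholder name self.y_ and the accuracy op are my own illustration, not necessarily exactly what the repo does):

# one-hot labels in, raw logits out of the network; softmax happens inside the loss
self.y_ = tf.placeholder(tf.float32, [None, num_classes], name="labels_onehot")
self.loss = tf.losses.softmax_cross_entropy(onehot_labels=self.y_,
                                            logits=self.predict,
                                            reduction=tf.losses.Reduction.MEAN)
# batch accuracy: compare argmax of the logits with argmax of the one-hot labels
correct = tf.equal(tf.argmax(self.predict, axis=1), tf.argmax(self.y_, axis=1))
self.accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))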

On the momentum coefficient:

self.train_op = tf.train.MomentumOptimizer(self.learning_rate, momentum=0.9).minimize(self.loss)

Here momentum has to be 0.9; 0.99 simply does not work!
I don't know why, but on CIFAR-100 with 0.99, accuracy climbs to at most 0.14, stalls, and then falls back... even though the loss drops quite a bit.

On the learning rate and learning-rate decay

With a fixed learning rate

0.001 is the best value I've tried so far;
0.01 doesn't seem to work: it barely converges, accuracy gets to about 0.1 and then it's done;
0.0001 is probably too slow? I haven't tried it on its own.

Learning-rate decay

At first the new learning-rate value was never actually passed into the graph, so I thought I was updating the learning rate when in fact it never changed; several experiments were wasted because of that!
Following my roommate's advice, the learning rate now goes through a placeholder, so a new value can be fed in at any time:

self.learning_rate = tf.placeholder(tf.float32, name="learningRate")
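
A minimal sketch of the hand-rolled schedule used in the experiments below, dividing the rate by divisor once every frequency epochs and feeding the result through this placeholder (the helper function and the feed_dict line are my own illustration, not code copied from the repo):

def decayed_lr(epoch, start_lr, frequency, divisor):
    # learning rate after being divided by `divisor` once every `frequency` epochs
    return start_lr / (divisor ** (epoch // frequency))

# inside the training loop, for example:
# lr = decayed_lr(epoch, Start_learning_rate, frequency, divisor)
# sess.run(self.train_op, feed_dict={..., self.learning_rate: lr})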

On image augmentation

Before I got the momentum setting right, I trained without augmentation and it didn't converge;
after that I never ran a separate with/without-augmentation comparison.
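
For reference, the kind of light augmentation I have in mind is just random flips plus pad-and-crop; a minimal sketch with tf.image (my own illustration of what Augmentation = True could mean, not necessarily the exact ops in the repo):

def augment(image):
    # light CIFAR-style augmentation: random horizontal flip,
    # then pad 32x32 -> 40x40 and take a random 32x32 crop
    image = tf.image.random_flip_left_right(image)
    image = tf.image.resize_image_with_crop_or_pad(image, 40, 40)
    image = tf.random_crop(image, [32, 32, 3])
    return image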

Now for some raw numbers:

Experiment 1: CIFAR-100, converges

Fixed learning rate.
Hyperparameters:

Momentum = 0.9
Start_learning_rate = 0.001
Augmentation = True
num_classes = 100

Training results:

epochs: 0
mean acc 0.083266646 mean loss 4.137112
epochs: 1
mean acc 0.12757082 mean loss 3.8559039
epochs: 2
mean acc 0.15922429 mean loss 3.658945
epochs: 3
mean acc 0.18565442 mean loss 3.5044305
epochs: 4
mean acc 0.20904289 mean loss 3.3744433
epochs: 5
mean acc 0.22995359 mean loss 3.260925
epochs: 6
mean acc 0.24909972 mean loss 3.1600833
epochs: 7
mean acc 0.26639524 mean loss 3.069504
epochs: 8
mean acc 0.28243038 mean loss 2.9882329
epochs: 9
mean acc 0.29706305 mean loss 2.9148538
epochs: 10
mean acc 0.31069943 mean loss 2.8467572
------
epochs: 90
mean acc 0.63300145 mean loss 1.4312437
epochs: 91
mean acc 0.6345324 mean loss 1.4255744
epochs: 92
mean acc 0.636067 mean loss 1.4199661
epochs: 93
mean acc 0.6375434 mean loss 1.4144751
epochs: 94
mean acc 0.63900554 mean loss 1.4091035
epochs: 95
mean acc 0.6404208 mean loss 1.4039328
epochs: 96
mean acc 0.6418316 mean loss 1.3987502
epochs: 97
mean acc 0.6432277 mean loss 1.3935468
epochs: 98
mean acc 0.6445592 mean loss 1.3885933
epochs: 99
mean acc 0.64591026 mean loss 1.3836055

Experiment 2: CIFAR-10, no augmentation, converges (the baseline)

Baseline configuration for the later experiments; the learning rate is held fixed!
Hyperparameters:

# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.001
Augmentation = False
num_classes = 10

Training log:

epochs: 0
mean acc 0.43882042 mean loss 1.6053573
epochs: 1
mean acc 0.52449787 mean loss 1.3848304
epochs: 2
mean acc 0.58076584 mean loss 1.2375178
-----
epochs: 19
mean acc 0.8541833 mean loss 0.4846778
epochs: 39
mean acc 0.9165743 mean loss 0.3164173
epochs: 59
mean acc 0.94105047 mean loss 0.25184512
epochs: 79
mean acc 0.95396703 mean loss 0.21821181
epochs: 199
mean acc 0.9801561 mean loss 0.14653033

Experiment 3: CIFAR-10, momentum changed, does not converge

Hyperparameters:
Compared with Experiment 2, only momentum changed, from 0.9 to 0.99, to check whether it really stops the run from converging!
The learning rate is still held fixed.

# hyper-parameters:
Momentum = 0.99
Start_learning_rate = 0.001
Augmentation = False
num_classes = 10

Training log:

epochs: 0
mean acc 0.3515525 mean loss 1.8056705
epochs: 1
mean acc 0.41921416 mean loss 1.650648
epochs: 2
mean acc 0.4538052 mean loss 1.5754703
epochs: 3
mean acc 0.46581906 mean loss 1.5587379
epochs: 4
mean acc 0.46643725 mean loss 1.571738
epochs: 5
mean acc 0.45592588 mean loss 1.6101527
----- Strange: right here the accuracy starts dropping. I used to think the accuracy was still rising and that momentum barely mattered; clearly it does.
Let's try lowering the learning rate at this point and see whether things improve.

epochs: 6
mean acc 0.4427131 mean loss 1.6552846
epochs: 7
mean acc 0.4306203 mean loss 1.6891361
epochs: 8
mean acc 0.42266193 mean loss 1.7147105
epochs: 9
mean acc 0.41392645 mean loss 1.740706
epochs: 10
mean acc 0.4038947 mean loss 1.7722086
epochs: 11
mean acc 0.39670026 mean loss 1.7953676
epochs: 12
mean acc 0.39213163 mean loss 1.8123565
epochs: 13
mean acc 0.38756546 mean loss 1.8287452
epochs: 14
mean acc 0.38213694 mean loss 1.847038
epochs: 15
mean acc 0.37815976 mean loss 1.8610609
epochs: 16
mean acc 0.37498823 mean loss 1.8734491
epochs: 17
mean acc 0.3728882 mean loss 1.8824495
epochs: 18
mean acc 0.37011 mean loss 1.8937194
------

epochs: 19
mean acc 0.36636224 mean loss 1.9065921
epochs: 39
mean acc 0.33855835 mean loss 2.0264928
epochs: 59
mean acc 0.32258022 mean loss 2.0870316
epochs: 199
mean acc 0.19710158 mean loss 6.4359646

Experiment 4: same as Experiment 3, repeat run, does not converge

Hyperparameters:

Identical to Experiment 3. It fails to converge just the same, though the exact accuracies differ a bit; there is randomness after all.

# hyper-parameters:
Momentum = 0.99
Start_learning_rate = 0.001
Augmentation = False
num_classes = 10

Training log:

------------------------------------------------learning_rate: 0.0001
epochs: 0
mean acc 0.3456506 mean loss 1.8079883
epochs: 1
mean acc 0.412432 mean loss 1.6535847
epochs: 2
mean acc 0.4464762 mean loss 1.5819843
epochs: 3
mean acc 0.4644136 mean loss 1.5538613
epochs: 4
mean acc 0.4665853 mean loss 1.5618719
epochs: 5
mean acc 0.46301484 mean loss 1.584357
epochs: 6
mean acc 0.45415103 mean loss 1.6190563
epochs: 7
mean acc 0.44566512 mean loss 1.6503143

So I had to look up how momentum is usually set.

How to set momentum:

Reference link:
深度学习——优化器算法Optimizer详解(BGD、SGD、MBGD、Momentum、NAG、Adagrad、Adadelta、RMSprop、Adam)

Update rule:

Momentum adds a γ·v_{t−1} term, which accelerates SGD and damps oscillation:

v_t = γ·v_{t−1} + η·∇_θ J(θ)
θ ← θ − v_t

Think of a ball rolling down a hill: with no resistance its momentum keeps growing, but resistance slows it down.
This extra term speeds up updates along dimensions whose gradient keeps the same direction and slows them along dimensions whose gradient keeps flipping, which accelerates convergence and reduces oscillation.

Hyperparameter setting: γ is usually taken to be about 0.9; in other words, momentum is normally just 0.9!

Drawback:

This is like the ball rolling blindly down the slope; if it had a bit of foresight, e.g. knowing to slow down just before the next uphill, it would adapt better.

Experiment 5: larger starting learning rate, does not converge!

Hyperparameters:
Roughly the same as Experiment 2, only the starting learning rate changed. The learning rate is held fixed.

# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10

Training log:

epochs: 0
mean acc 0.38324264 mean loss 1.728589
epochs: 1
mean acc 0.45358515 mean loss 1.5695941
epochs: 2
mean acc 0.48598218 mean loss 1.5029417
epochs: 3
mean acc 0.49505842 mean loss 1.4954492
epochs: 4
mean acc 0.48923656 mean loss 1.5249578
epochs: 5
mean acc 0.47353485 mean loss 1.5797603
epochs: 6

epochs: 17
mean acc 0.37059304 mean loss 1.9200767
epochs: 18
mean acc 0.36695215 mean loss 1.9329796
epochs: 19
mean acc 0.36473972 mean loss 1.9423943
epochs: 20
mean acc 0.36152902 mean loss 1.9542843
epochs: 21
mean acc 0.3577445 mean loss 1.967315
epochs: 22
mean acc 0.355011 mean loss 1.9770654
epochs: 23
mean acc 0.35247028 mean loss 1.9865345


Experiment 6: repeat of Experiment 5, does not converge

Same hyperparameters as Experiment 5, with a similar result.
Hyperparameter configuration:

# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10

Training log:

epochs: 0
mean acc 0.31486076 mean loss 1.9245098
epochs: 1
mean acc 0.39771727 mean loss 1.7234868
epochs: 2
mean acc 0.4367731 mean loss 1.6324943
epochs: 3
mean acc 0.45704126 mean loss 1.5905874
epochs: 4
-------------learning_rate: 0.005
mean acc 0.46341228 mean loss 1.5834486
epochs: 5
mean acc 0.46135098 mean loss 1.6008581
epochs: 6
mean acc 0.45252767 mean loss 1.6354115
epochs: 7
mean acc 0.4396557 mean loss 1.678641
epochs: 8
mean acc 0.4286127 mean loss 1.7160242
epochs: 9
--------------learning_rate: 0.0025
mean acc 0.41849792 mean loss 1.7481647

Learning-rate decay experiments

I reworked the network code so the updated learning rate is actually fed in. Using TensorFlow's built-in decay would probably be nicer, but I don't know how to use it yet; a sketch of it is below.
Starting from the Experiment 2 baseline, let's see whether decaying the learning rate improves accuracy.
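
For what it's worth, here is a minimal sketch of the built-in schedule I mean, tf.train.exponential_decay; this is only an assumption about how it could be wired in (the 390 steps/epoch figure assumes batch size 128), not what the repo currently does:

# built-in staircase decay instead of the hand-rolled frequency/divisor scheme:
# divide the learning rate by 2 every 20 epochs' worth of optimizer steps
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(learning_rate=0.001,
                                           global_step=global_step,
                                           decay_steps=20 * 390,  # ~20 epochs at batch size 128
                                           decay_rate=0.5,
                                           staircase=True)
train_op = tf.train.MomentumOptimizer(learning_rate, momentum=0.9).minimize(
    self.loss, global_step=global_step)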

Experiment 8: Experiment 2's configuration, with learning-rate decay

Hyperparameters:
# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.001
Augmentation = False
num_classes = 10
frequency = 20
divisor = 2
Training log
epochs: 0
mean acc 0.4331386 mean loss 1.6316696
epochs: 1
mean acc 0.5258183 mean loss 1.393325
epochs: 2
mean acc 0.58289987 mean loss 1.240909
epochs: 3
mean acc 0.6243998 mean loss 1.1280674
epochs: 4
mean acc 0.65602595 mean loss 1.0402917
epochs: 5
mean acc 0.682285 mean loss 0.9671161
epochs: 6
mean acc 0.7051285 mean loss 0.90227515
epochs: 7
mean acc 0.7254446 mean loss 0.8447789
epochs: 8
mean acc 0.74347794 mean loss 0.79400563
epochs: 9
mean acc 0.75947505 mean loss 0.7486313
epochs: 10
mean acc 0.77404404 mean loss 0.7077557
epochs: 11
mean acc 0.7871369 mean loss 0.6711558
epochs: 12
mean acc 0.7986971 mean loss 0.63891447
epochs: 13
mean acc 0.80878454 mean loss 0.61083585
epochs: 14
mean acc 0.81786036 mean loss 0.5855307
epochs: 15
mean acc 0.8264557 mean loss 0.5620217
epochs: 16
mean acc 0.83412576 mean loss 0.54077786
epochs: 17
mean acc 0.84097356 mean loss 0.5219092
epochs: 18
mean acc 0.8474923 mean loss 0.50388855
epochs: 19
----------------------------learning_rate: 0.0005
mean acc 0.8546655 mean loss 0.48397136
epochs: 20
mean acc 0.86152524 mean loss 0.46486455
epochs: 21
mean acc 0.8678159 mean loss 0.44729254
epochs: 22
mean acc 0.873563 mean loss 0.4312179
epochs: 23
mean acc 0.8788312 mean loss 0.4164749
epochs: 24
mean acc 0.88367796 mean loss 0.4029055
epochs: 25
mean acc 0.8881519 mean loss 0.39037478
epochs: 26
mean acc 0.8922944 mean loss 0.3787675
epochs: 27
mean acc 0.89614105 mean loss 0.36798483
epochs: 28
mean acc 0.8997224 mean loss 0.35794157
epochs: 29
mean acc 0.90306497 mean loss 0.34856382
epochs: 30
mean acc 0.90619195 mean loss 0.3397872
epochs: 31
mean acc 0.9091234 mean loss 0.3315554
epochs: 32
mean acc 0.9118773 mean loss 0.3238189
epochs: 33
mean acc 0.9144691 mean loss 0.316534
epochs: 34
mean acc 0.91691285 mean loss 0.309662
epochs: 35
mean acc 0.9192208 mean loss 0.30316857
epochs: 36
mean acc 0.92140406 mean loss 0.29702294
epochs: 37
mean acc 0.92347234 mean loss 0.29119772
epochs: 38
mean acc 0.9254346 mean loss 0.28566825
epochs: 39
----------------------------learning_rate: 0.00025
mean acc 0.9272987 mean loss 0.28041303
epochs: 40
mean acc 0.92907196 mean loss 0.27541283
epochs: 41
mean acc 0.9307607 mean loss 0.27064928
epochs: 42
mean acc 0.9323709 mean loss 0.266106
epochs: 197
mean acc 0.9853129 mean loss 0.11544871
epochs: 198
mean acc 0.98538667 mean loss 0.1152373
epochs: 199
----------------------------learning_rate: 9.765625e-07
mean acc 0.98545974 mean loss 0.11502801

Analysis

The final result is better than Experiment 2, at least on the training set, for the same number of epochs.
In short, being able to decay the learning rate seems to help.

Experiment 9: Experiment 5's configuration, high starting learning rate but with decay, converges!

Hyperparameters:
# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10
frequency = 5
divisor = 2
Training log
----------------------------learning_rate: 0.01
epochs: 0
mean acc 0.10243278 mean loss 2.432594
epochs: 1
mean acc 0.10110235 mean loss 2.4207637
epochs: 2
mean acc 0.10023207 mean loss 2.4156797
epochs: 3
mean acc 0.09986196 mean loss 2.412375
epochs: 4
----------------------------learning_rate: 0.005
mean acc 0.13130602 mean loss 2.3461585
epochs: 5
mean acc 0.17287199 mean loss 2.2540128
epochs: 6
mean acc 0.21309961 mean loss 2.1615977
epochs: 7
mean acc 0.24983494 mean loss 2.0760014
epochs: 8
mean acc 0.28333288 mean loss 1.9973909
epochs: 9
----------------------------learning_rate: 0.0025
mean acc 0.31810978 mean loss 1.9117811
epochs: 10
mean acc 0.3503794 mean loss 1.831724
epochs: 11
mean acc 0.37919468 mean loss 1.7593539
epochs: 12
mean acc 0.40491882 mean loss 1.6943661
epochs: 13
mean acc 0.42797267 mean loss 1.6355875
epochs: 14
----------------------------learning_rate: 0.00125
mean acc 0.45308498 mean loss 1.569931
epochs: 15
mean acc 0.4774353 mean loss 1.5059106
epochs: 16
mean acc 0.5002813 mean loss 1.4457506
epochs: 17
mean acc 0.5213213 mean loss 1.3904637
epochs: 89
----------------------------learning_rate: 3.814697265625e-08
mean acc 0.90185016 mean loss 0.37589005
epochs: 129
----------------------------learning_rate: 1.4901161193847657e-10
mean acc 0.9320501 mean loss 0.29506406
epochs: 194
----------------------------learning_rate: 1.8189894035458565e-14
mean acc 0.95470005 mean loss 0.23444453
epochs: 195
mean acc 0.9549312 mean loss 0.23382597
epochs: 196
mean acc 0.95515996 mean loss 0.23321366
epochs: 197
mean acc 0.95538646 mean loss 0.23260756
epochs: 198
mean acc 0.95561063 mean loss 0.23200753
epochs: 199
----------------------------learning_rate: 9.094947017729283e-15
mean acc 0.95583254 mean loss 0.23141353
----------- Even after the learning rate has become absurdly small, accuracy somehow keeps creeping up, which feels pretty strange...
Analysis:

It was on the verge of diverging at the start, but the moment the learning rate dropped it immediately started learning things again. Quite interesting!

Experiment 10: let the network start to diverge first, then lower the learning rate and see whether it can be pulled back

Hyperparameters
# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10
frequency = 30
divisor = 2
Training log
-------------learning_rate=0.01
epochs: 0
mean acc 0.3502721 mean loss 1.8152207
epochs: 1
mean acc 0.4243958 mean loss 1.6427108
epochs: 2
mean acc 0.45951372 mean loss 1.5667261
epochs: 3
mean acc 0.47259122 mean loss 1.5488328
epochs: 4
mean acc 0.47245118 mean loss 1.5624263
epochs: 5
mean acc 0.4589602 mean loss 1.6106796
epochs: 6
mean acc 0.4412155 mean loss 1.6678122
epochs: 7
mean acc 0.42863467 mean loss 1.709448
epochs: 8
mean acc 0.41760918 mean loss 1.7457528
epochs: 9
mean acc 0.4084667 mean loss 1.7756698
epochs: 10
mean acc 0.39932233 mean loss 1.8058679

epochs: 29
----------------------------learning_rate: 0.005
mean acc 0.2684319 mean loss 2.581868

epochs: 30
mean acc 0.26296416 mean loss 2.61394
epochs: 31
mean acc 0.25783375 mean loss 2.6433897
epochs: 32
mean acc 0.25303552 mean loss 2.6704667

epochs: 59
----------------------------learning_rate: 0.0025
mean acc 0.18362376 mean loss 2.9546802
epochs: 60
mean acc 0.18224192 mean loss 2.9572103
epochs: 61
mean acc 0.18089499 mean loss 2.9595563
epochs: 62
mean acc 0.17961049 mean loss 2.9617302

epochs: 89
----------------------------learning_rate: 0.00125
mean acc 0.15530547 mean loss 2.9779181
epochs: 90
mean acc 0.15467389 mean loss 2.9775486
epochs: 91
mean acc 0.15407452 mean loss 2.9771595

Experiment 10 summary: the initial learning rate was fairly large and wasn't lowered in time; once that happened, even lowering it later couldn't pull the run back.

Does anyone know why? Is it that once a run has gone bad it simply can't be rescued?
Or that once the parameters land somewhere awkward they become hard to move, i.e. stuck in a bad local optimum?

Experiment 11: compared with Experiment 10, the first decay is moved up to epoch 15 and the decay factor is set to 10. The run is pulled back and converges!

Hyperparameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10
frequency = 15
divisor = 10
Training log:
epochs: 0
mean acc 0.37171894 mean loss 1.7686458
epochs: 1
mean acc 0.43991077 mean loss 1.6058524
epochs: 2
mean acc 0.47173095 mean loss 1.5389296
epochs: 3
mean acc 0.48051876 mean loss 1.5311631
epochs: 4
mean acc 0.47678456 mean loss 1.5584286
epochs: 5
mean acc 0.46006387 mean loss 1.6142032
epochs: 6
mean acc 0.44140983 mean loss 1.6717417
epochs: 7
mean acc 0.42602384 mean loss 1.7190814
epochs: 8
mean acc 0.41471493 mean loss 1.755009
epochs: 9
mean acc 0.40475354 mean loss 1.7875776
epochs: 10
mean acc 0.39859664 mean loss 1.8099294
epochs: 11
mean acc 0.39351094 mean loss 1.8281552
epochs: 12
mean acc 0.3894754 mean loss 1.8437744
epochs: 13
mean acc 0.38574058 mean loss 1.8584819
epochs: 14
----------------------------learning_rate: 0.001
mean acc 0.38659438 mean loss 1.8604057
epochs: 15
mean acc 0.38977972 mean loss 1.8548477
epochs: 16
mean acc 0.3941626 mean loss 1.8455536
epochs: 17
mean acc 0.3991255 mean loss 1.833803
epochs: 18
mean acc 0.4049043 mean loss 1.8200147
epochs: 19
mean acc 0.41118458 mean loss 1.8044626
epochs: 20
mean acc 0.4182424 mean loss 1.7870674
epochs: 21
mean acc 0.4256453 mean loss 1.7684902
epochs: 22
mean acc 0.43346566 mean loss 1.7488524
epochs: 23
mean acc 0.4415513 mean loss 1.7284179
epochs: 24
mean acc 0.44989038 mean loss 1.7071755
epochs: 25
mean acc 0.45837668 mean loss 1.6855098
epochs: 26
mean acc 0.46666119 mean loss 1.6642932
epochs: 27
mean acc 0.4748348 mean loss 1.6432877
epochs: 28
mean acc 0.48273242 mean loss 1.6230751
epochs: 29
----------------------------learning_rate: 0.0001
mean acc 0.49325183 mean loss 1.5963138
epochs: 30
mean acc 0.5045892 mean loss 1.5675131
epochs: 31
mean acc 0.5159732 mean loss 1.5386506
epochs: 32
mean acc 0.52717596 mean loss 1.5102627
epochs: 33
mean acc 0.5381363 mean loss 1.4825264
epochs: 34
mean acc 0.5488019 mean loss 1.4555479
epochs: 35
mean acc 0.55917174 mean loss 1.4293284
epochs: 36
mean acc 0.5692454 mean loss 1.4039083
epochs: 37
mean acc 0.5789721 mean loss 1.3793111
epochs: 38
mean acc 0.58838266 mean loss 1.3554584
epochs: 39
mean acc 0.5974687 mean loss 1.3324233
epochs: 40
mean acc 0.6062428 mean loss 1.3101101
epochs: 41
mean acc 0.6147 mean loss 1.2885604
epochs: 42
mean acc 0.62286025 mean loss 1.2677239
epochs: 43
mean acc 0.63073045 mean loss 1.2475477
epochs: 44
----------------------------learning_rate: 1e-05
mean acc 0.63833624 mean loss 1.2279937
epochs: 45
mean acc 0.6456257 mean loss 1.2092435
epochs: 46
mean acc 0.6526131 mean loss 1.1912773

epochs: 73
mean acc 0.77084124 mean loss 0.88623136
epochs: 74
----------------------------learning_rate: 1.0000000000000002e-07
mean acc 0.7735998 mean loss 0.87909746

Experiment 12: compared with Experiment 11, the first decay comes later, back at epoch 30, but the decay factor is still 10.

Hyperparameters:
# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10
frequency = 30
divisor = 10

Training log
epochs: 0
mean acc 0.29947582 mean loss 1.9350481
epochs: 1
mean acc 0.37985155 mean loss 1.7416394
epochs: 2
mean acc 0.42397568 mean loss 1.6470906
epochs: 3
mean acc 0.44865358 mean loss 1.5950112
epochs: 4
mean acc 0.45703825 mean loss 1.5871133
epochs: 5
mean acc 0.45279488 mean loss 1.6135334
epochs: 6
mean acc 0.44145554 mean loss 1.6543571
epochs: 7
mean acc 0.42851213 mean loss 1.6979703
epochs: 8
mean acc 0.417029 mean loss 1.7351016
epochs: 9
mean acc 0.40617597 mean loss 1.7697875
epochs: 10
mean acc 0.39850023 mean loss 1.7956904
epochs: 11
mean acc 0.3909401 mean loss 1.8193597
epochs: 12
mean acc 0.38457075 mean loss 1.8403857
epochs: 13
mean acc 0.37909558 mean loss 1.8586966
epochs: 14
mean acc 0.37488395 mean loss 1.8737082
epochs: 15
mean acc 0.37059483 mean loss 1.88814
epochs: 16
mean acc 0.36720693 mean loss 1.9001142
epochs: 17
mean acc 0.36426213 mean loss 1.9102623
epochs: 18
mean acc 0.36120823 mean loss 1.9204879
epochs: 19
mean acc 0.35793954 mean loss 1.9318262
epochs: 20
mean acc 0.35550424 mean loss 1.9404215
epochs: 21
mean acc 0.35338035 mean loss 1.9477633
epochs: 22
mean acc 0.35148987 mean loss 1.9542899
epochs: 23
mean acc 0.34993115 mean loss 1.9595525
epochs: 24
mean acc 0.3480874 mean loss 1.9657788
epochs: 25
mean acc 0.34570217 mean loss 1.9735298
epochs: 26
mean acc 0.3440575 mean loss 1.9790225
epochs: 27
mean acc 0.34284472 mean loss 1.983516
epochs: 28
mean acc 0.34151134 mean loss 1.9883422
epochs: 29
----------------------------learning_rate: 0.001
mean acc 0.34157196 mean loss 1.9888836
epochs: 30
mean acc 0.34254122 mean loss 1.9868942
epochs: 31
mean acc 0.3442339 mean loss 1.9832424
epochs: 32
mean acc 0.3464066 mean loss 1.9784789
epochs: 33
mean acc 0.34889516 mean loss 1.9726986
epochs: 34
mean acc 0.35172683 mean loss 1.9660012
epochs: 35
mean acc 0.35482854 mean loss 1.9584854
epochs: 36
mean acc 0.35825625 mean loss 1.950122
epochs: 37
mean acc 0.3619295 mean loss 1.9410344
epochs: 38
mean acc 0.36595917 mean loss 1.9311523
epochs: 39
mean acc 0.37025097 mean loss 1.9205551
epochs: 40
mean acc 0.3747965 mean loss 1.9092942
epochs: 41
mean acc 0.37953097 mean loss 1.8974929
epochs: 42
mean acc 0.38439742 mean loss 1.8852509
epochs: 43
mean acc 0.38945326 mean loss 1.8725324
epochs: 44
mean acc 0.39455736 mean loss 1.8596927
epochs: 45
mean acc 0.39981315 mean loss 1.8467455
epochs: 46
mean acc 0.40506026 mean loss 1.8335495
epochs: 47
mean acc 0.4103784 mean loss 1.8201649
epochs: 48
mean acc 0.41562074 mean loss 1.8069721
epochs: 49
mean acc 0.42081225 mean loss 1.7940239
epochs: 50
mean acc 0.42590022 mean loss 1.7812409
epochs: 51
mean acc 0.43096715 mean loss 1.7685583
epochs: 52
mean acc 0.4361554 mean loss 1.7556452
epochs: 53
mean acc 0.4412949 mean loss 1.7428336
epochs: 54
mean acc 0.4463148 mean loss 1.7302086
epochs: 55
mean acc 0.45124012 mean loss 1.7179444
epochs: 56
mean acc 0.45611754 mean loss 1.7057418
epochs: 57
mean acc 0.4609789 mean loss 1.693602
epochs: 58
mean acc 0.46595046 mean loss 1.6813067
epochs: 59
----------------------------learning_rate: 0.0001
mean acc 0.47228447 mean loss 1.6652794
epochs: 60
mean acc 0.47919202 mean loss 1.6478146
epochs: 61
mean acc 0.48613912 mean loss 1.6301967
epochs: 62
mean acc 0.49305397 mean loss 1.6126517
epochs: 63
mean acc 0.49990433 mean loss 1.5952554
epochs: 64
mean acc 0.50664276 mean loss 1.5781473
epochs: 65
mean acc 0.5132485 mean loss 1.5613043
epochs: 66
mean acc 0.51973706 mean loss 1.5447683
epochs: 67
mean acc 0.52608246 mean loss 1.5285506
epochs: 68
mean acc 0.5322886 mean loss 1.5126675
epochs: 69
mean acc 0.53837085 mean loss 1.4970969
epochs: 70
mean acc 0.5443187 mean loss 1.4818456
epochs: 71
mean acc 0.5501308 mean loss 1.4669278
epochs: 72
mean acc 0.555814 mean loss 1.4523314
epochs: 73
mean acc 0.5613604 mean loss 1.4380492
epochs: 74
mean acc 0.5667931 mean loss 1.4240638
epochs: 75
mean acc 0.57209283 mean loss 1.4103936

Experiment 12 analysis:

Compared with Experiment 10, accuracy at epoch 30 is noticeably higher this time;
then the learning rate drops to 0.001 and the whole network gets pulled back.
So next, let's wait until around epoch 60 before decaying and see whether it can still be rescued!

Experiment 13: push the first decay back to around epoch 60, with the decay factor still 10.

Hyperparameters:
# hyper-parameters:
Momentum = 0.9
Start_learning_rate = 0.01
Augmentation = False
num_classes = 10
frequency = 60
divisor = 10
Training log: ha, this time it can't be dragged back!
epochs: 0
mean acc 0.35637403 mean loss 1.80535
epochs: 1
mean acc 0.4362296 mean loss 1.6199259
epochs: 2
mean acc 0.47559887 mean loss 1.5303277
epochs: 3
mean acc 0.48985675 mean loss 1.5043377
epochs: 4
mean acc 0.49210948 mean loss 1.51258
epochs: 5
mean acc 0.48447835 mean loss 1.5463331
epochs: 6
mean acc 0.46763536 mean loss 1.6042495
epochs: 7
mean acc 0.44995397 mean loss 1.6589243
epochs: 8
mean acc 0.43243617 mean loss 1.7100128
epochs: 9
mean acc 0.41886804 mean loss 1.7504764
epochs: 10
mean acc 0.40614086 mean loss 1.7890843
epochs: 11
mean acc 0.39699036 mean loss 1.8172948
epochs: 12
mean acc 0.38901064 mean loss 1.8429579
epochs: 13
mean acc 0.37475848 mean loss 1.8837632
epochs: 14
mean acc 0.3564514 mean loss 1.9322864
epochs: 15
mean acc 0.34035265 mean loss 1.9738648
epochs: 16
mean acc 0.3263091 mean loss 2.009866
epochs: 17
mean acc 0.31377152 mean loss 2.0413055
epochs: 18
mean acc 0.30245256 mean loss 2.068959
epochs: 19
mean acc 0.29231855 mean loss 2.0934002
epochs: 20
mean acc 0.28313154 mean loss 2.1151338
epochs: 21
mean acc 0.27470428 mean loss 2.1345317
epochs: 22
mean acc 0.26701763 mean loss 2.1519217
epochs: 23
mean acc 0.2600207 mean loss 2.1675591
epochs: 24
mean acc 0.25365716 mean loss 2.1816568
epochs: 25
mean acc 0.24768849 mean loss 2.1944206
epochs: 26
mean acc 0.24229456 mean loss 2.2059884
epochs: 27
mean acc 0.23716733 mean loss 2.2165177
epochs: 28
mean acc 0.2325551 mean loss 2.2261014
epochs: 29
mean acc 0.22802097 mean loss 2.2348537
epochs: 30
mean acc 0.22394522 mean loss 2.242853
epochs: 31
mean acc 0.22007543 mean loss 2.2501812
epochs: 32
mean acc 0.21640986 mean loss 2.256906
epochs: 33
mean acc 0.21295285 mean loss 2.2630796
epochs: 34
mean acc 0.20967966 mean loss 2.2687564
epochs: 35
mean acc 0.20663223 mean loss 2.2739804
epochs: 36
mean acc 0.20372249 mean loss 2.2787895
epochs: 37
mean acc 0.20100537 mean loss 2.2832294
epochs: 38
mean acc 0.19839323 mean loss 2.2873268
epochs: 39
mean acc 0.19589119 mean loss 2.2911127
epochs: 40
mean acc 0.19353364 mean loss 2.2946048
epochs: 41
mean acc 0.1912893 mean loss 2.2978373
epochs: 42
mean acc 0.18913075 mean loss 2.3008323
epochs: 43
mean acc 0.18703485 mean loss 2.3036053
epochs: 44
mean acc 0.18510678 mean loss 2.3061714
epochs: 45
mean acc 0.18326125 mean loss 2.3085475
epochs: 46
mean acc 0.18150744 mean loss 2.3107493
epochs: 47
mean acc 0.17976461 mean loss 2.3127925
epochs: 48
mean acc 0.17814761 mean loss 2.3146858
epochs: 49
mean acc 0.1765489 mean loss 2.3164413
epochs: 50
mean acc 0.17501718 mean loss 2.318066
epochs: 51
mean acc 0.173589 mean loss 2.319571
epochs: 52
mean acc 0.17212339 mean loss 2.3209693
epochs: 53
mean acc 0.17080466 mean loss 2.3222597
epochs: 54
mean acc 0.16952552 mean loss 2.3234553
epochs: 55
mean acc 0.16832314 mean loss 2.3245595
epochs: 56
mean acc 0.16710259 mean loss 2.3255844
epochs: 57
mean acc 0.1658755 mean loss 2.3265324
epochs: 58
mean acc 0.164769 mean loss 2.327406
epochs: 59
----------------------------learning_rate: 0.001
mean acc 0.16366738 mean loss 2.3282213
epochs: 60
mean acc 0.16259333 mean loss 2.3290036
epochs: 61
mean acc 0.16155331 mean loss 2.3297567
epochs: 62
mean acc 0.16053233 mean loss 2.3304825
epochs: 63
mean acc 0.15958075 mean loss 2.3311815
epochs: 64
mean acc 0.15864123 mean loss 2.331856
epochs: 65
mean acc 0.15774229 mean loss 2.3325062
epochs: 66
mean acc 0.15683705 mean loss 2.3331342
epochs: 67
mean acc 0.15597609 mean loss 2.3337402
epochs: 68
mean acc 0.15510152 mean loss 2.3343258
epochs: 69
mean acc 0.15427822 mean loss 2.334891
epochs: 70
mean acc 0.15348771 mean loss 2.3354373
epochs: 71
mean acc 0.1527197 mean loss 2.3359654
epochs: 72
mean acc 0.15197274 mean loss 2.3364756
epochs: 73
mean acc 0.15124461 mean loss 2.3369694
epochs: 74
mean acc 0.15051404 mean loss 2.337447

Overall summary:

Today I ran a dozen-odd training runs in a fairly haphazard way, including several controlled comparisons.
There's no systematic conclusion here; this is just a notebook-style record.
First, momentum:
Momentum basically stays fixed at 0.9. I tried 0.99 and it didn't converge; I haven't tried other values, so I don't know how they'd behave.

Question 1: when a run diverges, especially with too large a learning rate, why does the loss keep growing (and accuracy keep falling) the longer I train? Is the optimizer effectively doing gradient ascent at that point?

Question 2: once a run has started to diverge, can it still be pulled back, and is there a point past which it's too late?

Then there's the learning rate: it can start somewhat high, but if it is never lowered the run eventually diverges; if the rate is dropped soon after the divergence starts, the run can still be corrected, but if the correction comes too late the run just keeps diverging~

As for image augmentation: since I'm not using a test set, I gave up on exploring it.
