Overall strategy:
Start from something simple, and start experimenting early.
Step 1:
E.g., on MNIST: if you have no idea how to set the hyper-parameters at first, simplify the problem. Use only the two classes 0 and 1 (which discards roughly 80% of the images) and a two-layer network [784, 2] (much faster to train than [784, 30, 2]).
Get feedback faster: instead of measuring accuracy once per epoch, measure it after every 1,000 training images;
or shrink the validation set, using 100 images instead of 10,000 (see the sketch below).
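A minimal sketch of this setup, assuming the book's mnist_loader and network2 modules (the hyper-parameter values here are illustrative, and in the Python 3 port the loader's return values need to be wrapped in list(...)):

import numpy as np
import mnist_loader
import network2

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()

# Keep only the digits 0 and 1: roughly 20% of the images survive.
# Training labels are one-hot 10-vectors, so y[:2] re-encodes them for a
# 2-output net; validation labels are plain integers.
train_01 = [(x, y[:2]) for (x, y) in training_data if np.argmax(y) in (0, 1)]
valid_01 = [(x, y) for (x, y) in validation_data if y in (0, 1)]

# [784, 2] trains much faster than [784, 30, 2], and a 100-image
# validation slice gives feedback in well under a second per epoch.
net = network2.Network([784, 2])
net.SGD(train_01[:1000], 30, 10, 0.5,
        evaluation_data=valid_01[:100],
        monitor_evaluation_accuracy=True)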
Then iterate, repeating the experiment:
>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 10.0, lmbda = 1000.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Epoch 0 training complete
Accuracy on evaluation data: 10 / 100
Epoch 1 training complete
Accuracy on evaluation data: 10 / 100
Epoch 2 training complete
Accuracy on evaluation data: 10 / 100...
This gives much faster feedback: before, each epoch took around 10 seconds; now results arrive in under a second.
Step 2:
λ was set to 1000.0 above. Since the training set shrank from 50,000 to 1,000 images, λ must shrink by the same factor to keep the per-step weight decay unchanged, giving λ = 20.0.
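Written out, using the regularized update rule from this chapter:

\[
w \;\rightarrow\; \Bigl(1 - \frac{\eta\lambda}{n}\Bigr)\,w \;-\; \eta\,\frac{\partial C_0}{\partial w}
\]

Keeping the decay factor \(1 - \eta\lambda/n\) fixed while \(n\) falls from 50,000 to 1,000 scales λ down by the same factor of 50:

\[
\lambda' \;=\; \lambda \cdot \frac{n'}{n} \;=\; 1000.0 \times \frac{1000}{50000} \;=\; 20.0
\]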
>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 10.0, lmbda = 20.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Result:
Epoch 0 training complete
Accuracy on evaluation data: 12 / 100
Epoch 1 training complete
Accuracy on evaluation data: 14 / 100
Epoch 2 training complete
Accuracy on evaluation data: 25 / 100
Epoch 3 training complete
Accuracy on evaluation data: 18 / 100
Step 3:
Maybe the learning rate η = 10.0 is too low? Should it be higher?
Increase it to 100:
>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 100.0, lmbda = 20.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Result:
Epoch 0 training complete
Accuracy on evaluation data: 10 / 100
Epoch 1 training complete
Accuracy on evaluation data: 10 / 100
Epoch 2 training complete
Accuracy on evaluation data: 10 / 100
Epoch 3 training complete
Accuracy on evaluation data: 10 / 100
Step 4:
The results are terrible. Maybe the learning rate should be lower instead? Try η = 1.0:
>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 1.0, lmbda = 20.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Much better results:
Epoch 0 training complete
Accuracy on evaluation data: 62 / 100
Epoch 1 training complete
Accuracy on evaluation data: 42 / 100
Epoch 2 training complete
Accuracy on evaluation data: 43 / 100
Epoch 3 training complete
Accuracy on evaluation data: 61 / 100
Step 5:
Now suppose the other hyper-parameters are held fixed: 30 epochs, mini-batch size 10, λ = 5.0,
and we experiment with learning rates η = 0.025, 0.25, 2.5 (sketched below).
If the learning rate is too large, the steps overshoot: the cost can climb instead of descending, skipping past the minimum; if it is too small, learning is needlessly slow.
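A sketch of that experiment, again assuming the book's modules (the [784, 30, 10] architecture matches the chapter's full-size runs):

import mnist_loader
import network2

training_data, validation_data, test_data = mnist_loader.load_data_wrapper()

# Everything fixed except eta: 30 epochs, mini-batch size 10, lambda = 5.0.
for eta in (0.025, 0.25, 2.5):
    net = network2.Network([784, 30, 10])
    # monitor_training_cost shows whether the cost descends smoothly,
    # oscillates, or climbs at this learning rate.
    net.SGD(training_data, 30, 10, eta, lmbda=5.0,
            monitor_training_cost=True)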
For the learning rate:
Start by trying 0.001, 0.01, 0.1, 1, 10; as soon as the cost starts to increase, stop and fine-tune with smaller steps around the last working value (a coarse search is sketched below).
For MNIST, this procedure first finds 0.1 as the order of magnitude, then a threshold near 0.5, and finally a working value of η = 0.25.
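A coarse search might look like this (a sketch, with training_data and network2 as above; only a few epochs per trial, on the reduced data, to keep each probe cheap):

# Try eta in increasing powers of ten; stop scaling up as soon as the
# training cost ends higher than it started.
for eta in (0.001, 0.01, 0.1, 1.0, 10.0):
    net = network2.Network([784, 30, 10])
    _, _, training_cost, _ = net.SGD(training_data[:1000], 3, 10, eta,
                                     monitor_training_cost=True)
    if training_cost[-1] > training_cost[0]:
        print("cost rises at eta = %s; refine below this value" % eta)
        break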
For an early-stopping criterion: stop when the accuracy has changed very little over a sustained stretch of epochs (not just one or two), as in the check sketched below.
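The book's network2 has no built-in early stopping, so here is one hypothetical way to encode a no-improvement-in-n rule:

def no_improvement_in(accuracies, n=10):
    """True if the best validation accuracy is more than n epochs old."""
    if len(accuracies) <= n:
        return False
    best_epoch = accuracies.index(max(accuracies))
    return best_epoch < len(accuracies) - n

# Usage: append each epoch's validation accuracy to a list and stop
# training once no_improvement_in(history) returns True.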
So far the learning rate has been held constant. It can instead start larger and decrease gradually: e.g. keep it constant until the accuracy on the validation set stops improving, then divide it by 2 or 3, as in the schedule sketched below.
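A sketch of such a schedule, reusing no_improvement_in from above. Here train_one_epoch is a hypothetical helper that runs one epoch of SGD at the given eta and returns the validation accuracy, and the 128-fold cutoff is arbitrary:

eta = 0.5                      # starting value; illustrative
eta_min = eta / 128            # give up once eta has been halved 7 times
history = []
while eta > eta_min:
    history.append(train_one_epoch(eta))   # hypothetical helper
    if no_improvement_in(history, n=10):
        eta /= 2.0             # validation accuracy has stalled: decay eta
        history = []           # restart the patience window at the new rate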
For the regularization parameter λ:
Start with no regularization and get the learning rate right first; then experiment with λ = 1.0, scaling by factors of 10 (1.0, 10, 100, ...) to find the right order of magnitude, and fine-tune from there (sketched below).
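A sketch of the λ sweep, with η already fixed at the value found above (0.25 here; the accuracies network2 returns are raw counts of correct classifications):

best_acc, best_lmbda = 0, None
for lmbda in (0.0, 1.0, 10.0, 100.0):
    net = network2.Network([784, 30, 10])
    _, eval_acc, _, _ = net.SGD(training_data, 30, 10, 0.25, lmbda=lmbda,
                                evaluation_data=validation_data,
                                monitor_evaluation_accuracy=True)
    if max(eval_acc) > best_acc:
        best_acc, best_lmbda = max(eval_acc), lmbda
# Then fine-tune within the decade around best_lmbda.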
For the mini-batch size:
Too small: the fast matrix routines in optimized libraries and hardware go underused.
Too large: the weights and biases are updated too infrequently.
Fortunately, the best mini-batch size is relatively independent of the other hyper-parameters, so once it is chosen there is no need to re-tune it as the others change (a timing comparison is sketched below).
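One way to see the trade-off is to compare batch sizes by accuracy per unit of wall-clock time rather than per epoch (a sketch; for a fully fair comparison, η should also be scaled with the batch size):

import time

for batch_size in (1, 10, 100):
    net = network2.Network([784, 30, 10])
    start = time.time()
    _, eval_acc, _, _ = net.SGD(training_data, 3, batch_size, 0.25,
                                lmbda=5.0,
                                evaluation_data=validation_data[:100],
                                monitor_evaluation_accuracy=True)
    print("batch size %3d: best accuracy %s/100 in %.1f s"
          % (batch_size, max(eval_acc), time.time() - start))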
In short: tune the learning rate, then the regularization parameter λ, then the mini-batch size, in that order. (The walkthrough above mainly adjusted the regularization parameter λ first and then the learning rate.)