1. If a deep network is pre-trained with a 224 x 224 input but fine-tuned at 448 x 448, the pre-trained network and the actual training network see images at incompatible resolutions. YOLOv2 handles this by pre-training directly with a 448 x 448 input and then training on the detection task, which yields a 3.7% improvement.
2. If a binary classification network's accuracy stays stuck at 0.5, then besides problems with the labels, the cause may be that the network is too deep while the validation set is too small (e.g. only a few hundred images). There are two remedies: first, enlarge the validation set; second, remove layers to make the network shallower.
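Before blaming network depth, it is worth ruling out the label problem mentioned above by counting the classes in the training list. A minimal sketch, assuming a hypothetical image-list file with one `path label` pair per line (the file format and function name are my own, not from any particular framework):

```python
from collections import Counter

def check_labels(list_file):
    """Count how often each label appears in an image-list file whose
    lines look like: path/to/img.jpg 0  (hypothetical format)."""
    counts = Counter()
    with open(list_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                counts[parts[-1]] += 1
    return counts
```

A heavily skewed or single-class count would explain an accuracy pinned at the majority-class frequency.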
3. The "net output #k" result is the output of the net for that particular iteration/batch, while the "Iteration T, loss = X" output is smoothed across iterations according to the average_loss field.
The reported "iteration loss" is the weighted sum of all loss layers of your net, averaged over average_loss iterations. On the other hand, the reported "train net output ..." reports each net output from the current iteration only. In your example, you did not set average_loss in your 'solver', and thus average_loss=1 by default. Since you only have one loss output with loss_weight=1, the reported "train net output ..." and "iteration loss" are the same (up to display precision).
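The smoothing described above can be sketched in a few lines. This is not Caffe's actual code, but the reported "Iteration T, loss = X" value behaves like a running mean over the last average_loss raw per-iteration losses:

```python
from collections import deque

def smoothed_losses(raw_losses, average_loss=1):
    """Mimic how the displayed 'Iteration T, loss = X' value is a
    running average of the last `average_loss` per-iteration losses."""
    window = deque(maxlen=average_loss)
    out = []
    for loss in raw_losses:
        window.append(loss)
        out.append(sum(window) / len(window))
    return out

# With average_loss=1 (the default), the smoothed value equals the raw
# per-iteration loss, which is why the two printed numbers coincide.
```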
@user6726469 the default value is 1. I usually set it to the same value as the display parameter. It's up to you to decide what interval to average. – Shai, Oct 18 '16 at 10:53
4. When batch_size is set to the size of the training set, you get batch gradient descent, i.e. full-batch gradient descent. When batch_size is set to an intermediate value, you get mini-batch gradient descent.
In my own experiments I found that with full-batch gradient descent the loss curve descends smoothly, but convergence is slow.
With mini-batch gradient descent the loss curve oscillates as it descends; on one hand this can help jump out of local minima, on the other hand it makes the accuracy fluctuate sharply. The smaller batch_size is set, the stronger the oscillation tends to be. Also, the larger batch_size is (the closer to the full training set), the faster training runs, since there are fewer parameter updates per epoch.
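The effect of batch_size can be seen in a tiny gradient-descent loop. This sketch fits a one-parameter linear model on squared error; batch_size equal to the dataset size gives the full-batch behavior above, batch_size=1 the noisiest mini-batch updates (illustrative only, the function name and hyperparameters are my own):

```python
import random

def sgd_fit(xs, ys, batch_size, lr=0.01, epochs=200, seed=0):
    """Fit y = w*x by mini-batch gradient descent on squared error.
    batch_size == len(xs) gives full-batch (smooth) descent; smaller
    values give the noisier mini-batch updates described above."""
    rng = random.Random(seed)
    w = 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # gradient of mean squared error w.r.t. w over this batch
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]                      # true w = 2
w_full = sgd_fit(xs, ys, batch_size=len(xs))   # full-batch descent
w_mini = sgd_fit(xs, ys, batch_size=1)         # mini-batch descent
```

On this noiseless toy problem both settings converge to the same answer; the difference in smoothness only shows up when you log the per-update loss.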
In addition, there is an adaptive gradient descent method called Adam that speeds up convergence and works well; it is currently one of the most widely used optimizers.
You can also decay the learning rate gradually as training progresses, which helps the final result converge to a better value. In my experience, with e.g. 10000 iterations, if the validation loss has not decreased within 100 checks, halve the learning rate; if it has not decreased within 1000 checks, stop training early.
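The schedule just described (halve the learning rate after 100 stagnant validation checks, stop early after 1000) can be written as a small controller. A minimal sketch of that policy, not any library's API; the class and parameter names are my own:

```python
class PlateauSchedule:
    """Halve the learning rate when validation loss has not improved
    for `halve_after` consecutive checks; signal early stopping after
    `stop_after` consecutive checks without improvement."""

    def __init__(self, lr, halve_after=100, stop_after=1000):
        self.lr = lr
        self.halve_after = halve_after
        self.stop_after = stop_after
        self.best = float("inf")
        self.stagnant = 0

    def step(self, val_loss):
        """Call once per validation check; returns False to stop training."""
        if val_loss < self.best:
            self.best = val_loss
            self.stagnant = 0
        else:
            self.stagnant += 1
            if self.stagnant % self.halve_after == 0:
                self.lr /= 2  # plateau reached: halve the learning rate
        return self.stagnant < self.stop_after
```

The training loop would call `step(val_loss)` after each validation pass and break out when it returns False.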
average_loss in my solver too? I checked it in googlenet and it was like average_loss: 40. Is it an initial value for that? – user6726469, Oct 18 '16 at 10:51