[Caffe Source Code Study] Chapter 2: Usage (6): Tools for Analyzing the Training Process

Parsing the log files that Caffe generates lets you analyze the training process.
On Linux, logs are written to /tmp by default, with names like

caffe.a216-All-Series.a216.log.INFO.20160426-054718.9658

On Windows, search for files whose names contain log.INFO. and you will find them; on my machine they end up in the Temp directory by default. Note that if the system has been cleaned recently, they may already be gone.
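Since glog encodes the date-time and PID in the file name, sorting the names is enough to locate the most recent log. A quick sketch (the directory and the second file name below are made up for the demonstration):

```shell
# Create a throwaway directory with two hypothetical glog-style file names
# (real ones live in /tmp and follow program.host.user.log.SEVERITY.date-time.pid)
tmpdir=$(mktemp -d)
touch "$tmpdir/caffe.a216-All-Series.a216.log.INFO.20160426-054718.9658"
touch "$tmpdir/caffe.a216-All-Series.a216.log.INFO.20160427-102233.1234"

# The embedded date-time sorts lexicographically, so the newest log sorts last
newest=$(ls "$tmpdir"/caffe.*.log.INFO.* | sort | tail -n 1)
echo "$newest"

rm -rf "$tmpdir"
```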

If you would rather not have logs scattered all over the place, you can capture them with a shell script:

#!/bin/bash
# Timestamped log file under ./log (create the directory first, or tee will fail)
mkdir -p log
LOG=log/train-`date +%Y-%m-%d-%H-%M-%S`.log
CAFFE=~/caffe/build/tools/caffe
# Send both stdout and stderr to the console and to the log file
$CAFFE train --solver=solver.prototxt --gpu=0 2>&1 | tee $LOG

On Windows you can likewise redirect the output to another file.
Make sure you pick the correct log file; its size, content, and timestamp usually make that obvious.
An excerpt from a log:

I1221 10:53:48.110986 27384 sgd_solver.cpp:106] Iteration 9600, lr = 0.00603682
I1221 10:53:58.710325 27384 solver.cpp:228] Iteration 9700, loss = 0.000123597
I1221 10:53:58.710389 27384 solver.cpp:244]     Train net output #0: loss = 0.000123596 (* 1 = 0.000123596 loss)
I1221 10:53:58.710405 27384 sgd_solver.cpp:106] Iteration 9700, lr = 0.00601382
I1221 10:54:09.642729 27384 solver.cpp:228] Iteration 9800, loss = 7.59059e-05
I1221 10:54:09.642791 27384 solver.cpp:244]     Train net output #0: loss = 7.59052e-05 (* 1 = 7.59052e-05 loss)
I1221 10:54:09.642807 27384 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I1221 10:54:19.939504 27384 solver.cpp:228] Iteration 9900, loss = 0.000145651
I1221 10:54:19.939613 27384 solver.cpp:244]     Train net output #0: loss = 0.000145651 (* 1 = 0.000145651 loss)
I1221 10:54:19.939630 27384 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I1221 10:54:29.807180 27384 solver.cpp:454] Snapshotting to binary proto file lenet_iter_10000.caffemodel
I1221 10:54:29.827361 27384 sgd_solver.cpp:273] Snapshotting solver state to binary proto file lenet_iter_10000.solverstate
I1221 10:54:29.875092 27384 solver.cpp:317] Iteration 10000, loss = 0.000216058
I1221 10:54:29.875136 27384 solver.cpp:337] Iteration 10000, Testing net (#0)
I1221 10:54:36.925678 27384 solver.cpp:404]     Test net output #0: accuracy = 0.9776
I1221 10:54:36.925737 27384 solver.cpp:404]     Test net output #1: loss = 0.118371 (* 1 = 0.118371 loss)
I1221 10:54:36.925748 27384 solver.cpp:322] Optimization Done.
I1221 10:54:36.925755 27384 caffe.cpp:254] Optimization Done. 

This log file can be parsed with simple text processing to extract the accuracy, loss, and other information at each iteration. Here is an example in MATLAB. [Adapted from https://www.zhihu.com/question/36652304/answer/68438194]

% Author: 齐浩之
% Link: https://www.zhihu.com/question/36652304/answer/68438194
% Source: Zhihu
% Copyright belongs to the author. Contact the author for commercial reuse;
% for non-commercial reuse, credit the source.

% This script writes
%   iteration vs. accuracy
%   iteration vs. loss
% to text files.

clc;
clear;

% log file of caffe model
logName = 'caffe.exe.FRANK-PC.frank.log.INFO.20160429-092739.6584';

fid = fopen(logName, 'r');
fid_accuracy = fopen('output_accuracy_1.txt', 'w');
fid_loss = fopen('output_loss_1.txt', 'w');

tline = fgetl(fid);

while ischar(tline)
    % First look for the test accuracy lines
    k = strfind(tline, 'Test net output');
    if (k)
        k = strfind(tline, 'accuracy');
        if (k)
            % The line contains both 'Test net output' and 'accuracy'.
            % 'accuracy = ' is 11 characters, so the float starts at k + 11.
            str = tline(k + 11 : end);

            % Get the output index: the number between '#' and the first
            % ':' that follows it (the timestamp also contains colons,
            % so take the first colon after '#')
            k = strfind(tline, '#');
            colons = strfind(tline, ':');
            colonAfter = colons(find(colons > k, 1));
            str2 = tline(k + 1 : colonAfter - 1);

            % Write one 'index/accuracy' record per line
            res_str = strcat(str2, '/', str);
            fprintf(fid_accuracy, '%s\r\n', res_str);
        end
    end

    % Then look for the training loss lines
    k1 = strfind(tline, 'Iteration');
    if (k1)
       k2 = strfind(tline, 'loss');
       if (k2)
           % 'loss = ' is 7 characters, so the float starts at k2 + 7
           str1 = tline(k2 + 7 : end);
           % 'Iteration ' is 10 characters; the iteration number ends
           % just before the first comma
           commas = strfind(tline, ',');
           str2 = tline(k1 + 10 : commas(1) - 1);
           % Write one 'iteration/loss' record per line
           res_str1 = strcat(str2, '/', str1);
           fprintf(fid_loss, '%s\r\n', res_str1);
       end
    end
    tline = fgetl(fid);
end

fclose(fid);
fclose(fid_accuracy);
fclose(fid_loss);

Implementing the same thing in Python would be straightforward as well.
Then open Excel, choose Data -> From Text, select the generated text file, pick the delimiter, import the data, and plot the curves.
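A Python sketch of the same parsing, using regular expressions; the patterns below assume the GLOG lines look exactly like the excerpt above:

```python
import re

# Patterns for the two kinds of lines we care about (see the log excerpt):
# training loss lines and test accuracy lines
TRAIN_RE = re.compile(r'Iteration (\d+), loss = ([0-9.eE+-]+)')
TEST_RE = re.compile(r'Test net output #(\d+): accuracy = ([0-9.eE+-]+)')

def parse_log(lines):
    """Return (iteration, loss) pairs and (output_index, accuracy) pairs."""
    losses, accuracies = [], []
    for line in lines:
        m = TRAIN_RE.search(line)
        if m:
            losses.append((int(m.group(1)), float(m.group(2))))
        m = TEST_RE.search(line)
        if m:
            accuracies.append((int(m.group(1)), float(m.group(2))))
    return losses, accuracies

# Demonstration on two lines taken from the log excerpt above
sample = [
    "I1221 10:53:58.710325 27384 solver.cpp:228] Iteration 9700, loss = 0.000123597",
    "I1221 10:54:36.925678 27384 solver.cpp:404]     Test net output #0: accuracy = 0.9776",
]
losses, accuracies = parse_log(sample)
print(losses)       # [(9700, 0.000123597)]
print(accuracies)   # [(0, 0.9776)]
```

In practice you would pass an open log file to `parse_log` and either write the pairs out as `iteration/loss` lines like the MATLAB script, or feed them straight into a plotting library instead of Excel.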

[Figure: accuracy vs. iteration curve]

The accuracy climbs quickly.

[Figure: loss vs. iteration curve]

The loss drops quickly as well.
