darknet-yolov2 Curve Visualization

mAP Testing

-----------------------------------------------------------------------------------------------------------------------------------

1. Batch-test the test images to produce the detection result text files. The empty string after -out makes darknet name each output file "<classname>.txt" automatically; the results are saved under darknet/results.

./darknet detector valid cfg/voc.data cfg/yolov3-voc_test.cfg backup/yolov3-voc_final.weights -out "" -i gpu_id
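Each generated "<classname>.txt" contains one detection per line in the PASCAL VOC results format: image id (no path, no extension), confidence, then the box corners. The lines should look roughly like the following (the ids and numbers here are made-up placeholders):

000001 0.874632 23.4 45.1 210.7 316.9
000004 0.213311 102.0 88.5 180.2 140.3

This is the layout voc_eval.py expects, which is why the files can be handed to it unchanged in the next step.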

2. Use voc_eval.py from the py-faster-rcnn project to compute mAP

a. First copy py-faster-rcnn/lib/datasets/voc_eval.py to the darknet root directory.

Download link for voc_eval.py: voc_eval.py

b. Create compute_all_mAP.py in the darknet root directory:

import os

from voc_eval import voc_eval

# adjust these paths to your own directory layout
results_path = "darknet/results"
sub_files = os.listdir(results_path)

aps = []
for sub_file in sub_files:
    class_name = sub_file.split(".txt")[0]
    # voc_eval arguments: detection txt template, annotation xml template,
    # validation-set image list, class name, cache directory for the pkl file
    rec, prec, ap = voc_eval('darknet/results/{}.txt',
                             'Annotations/{}.xml',
                             'ImageSets/Main/test.txt',
                             class_name,
                             '/darknet/backup/')
    print("{} :\t {}".format(class_name, ap))
    aps.append(ap)

print("***************************")
print("mAP :\t {}".format(sum(aps) / len(aps)))

P.S. The five input arguments of voc_eval are, in order:

1. the path template of the txt files produced in step 1
2. the path template of the validation-set xml annotations
3. the path of the validation-set list file (image names with no path and no extension)
4. the name of the class to evaluate
5. the directory where the annotation-cache pkl file is saved
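For reference, this is how the five arguments line up in a single call, using the same placeholder paths as compute_all_mAP.py above ('your_class_name' is a hypothetical example; adjust everything to your own layout):

rec, prec, ap = voc_eval(
    'darknet/results/{}.txt',    # 1. txt template from step 1 ({} is filled with the class name)
    'Annotations/{}.xml',        # 2. xml annotation template of the validation set
    'ImageSets/Main/test.txt',   # 3. validation-set image list (names only, no path or extension)
    'your_class_name',           # 4. class name to evaluate
    '/darknet/backup/')          # 5. directory where the annotation-cache pkl is saved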

c. Run the command to get the results:

python compute_all_mAP.py

PR Curve Visualization

-----------------------------------------------------------------------------------------------------------------------------------

1. PR curve visualization uses the "<classname>.txt" files obtained in the previous step.

2. Use reval_voc_py3.py to compute mAP (the result should match the previous step) and generate the pkl files.

Download link for reval_voc_py3.py: reval_voc_py3.py

You will also need voc_eval_py3.py; download link: voc_eval_py3.py

reval_voc_py3.py has been slightly modified so it can run on your own dataset:

#!/usr/bin/env python

# Adapt from ->
# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------
# <- Written by Yaping Sun

"""Reval = re-eval. Re-evaluate saved detections."""

import os, sys, argparse
import numpy as np
#import _pickle as cPickle
#import cPickle
try:
  import cPickle
except ImportError:
  import pickle as cPickle

from voc_eval_py3 import voc_eval

def parse_args():
    """
    Parse input arguments
    """
    parser = argparse.ArgumentParser(description='Re-evaluate results')
    parser.add_argument('output_dir', nargs=1, help='results directory',
                        type=str)
    parser.add_argument('--devkit_dir', dest='devkit_dir', default='data/dilei', type=str)
    parser.add_argument('--task', dest='task', default='landmine', type=str)
    parser.add_argument('--image_set', dest='image_set', default='test', type=str)
    parser.add_argument('--classes', dest='class_file', default='data/dilei.names', type=str)

    if len(sys.argv) == 1:
        parser.print_help()
        sys.exit(1)

    args = parser.parse_args()
    return args

def get_voc_results_file_template(task, out_dir = 'results'):
    # darknet "valid" with -out "" writes one detection file per class named
    # "<classname>.txt", so the template carries a {} placeholder for the class name
    filename = '{}.txt'
    path = os.path.join(out_dir, filename)
    return path

def do_python_eval(devkit_dir, task, image_set, classes, output_dir = 'results'):
    annopath = os.path.join(devkit_dir,'Annotations','{}.xml')
    imagesetfile = os.path.join(
        devkit_dir,
        'ImageSets',
        'Main',
        image_set + '.txt')
    cachedir = os.path.join(devkit_dir, 'annotations_cache')
    aps = []
    # The PASCAL VOC metric changed in 2010
    use_07_metric = False 
    print('VOC07 metric? ' + ('Yes' if use_07_metric else 'No'))
    print('devkit_path=',devkit_dir)

    if not os.path.isdir(output_dir):
        os.mkdir(output_dir)
    for i, cls in enumerate(classes):
        if cls == '__background__':
            continue
        filename = get_voc_results_file_template(task).format(cls)
        print(filename)
        rec, prec, ap = voc_eval(
            filename, annopath, imagesetfile, cls, cachedir, ovthresh=0.5,
            use_07_metric=use_07_metric)
        aps += [ap]
        print('AP for {} = {:.4f}'.format(cls, ap))
        with open(os.path.join(output_dir, cls + '_pr.pkl'), 'wb') as f:
            cPickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)
    print('Mean AP = {:.4f}'.format(np.mean(aps)))
    print('~~~~~~~~')
    print('Results:')
    for ap in aps:
        print('{:.3f}'.format(ap))
    print('{:.3f}'.format(np.mean(aps)))
    print('~~~~~~~~')
    print('')
    print('--------------------------------------------------------------')
    print('Results computed with the **unofficial** Python eval code.')
    print('Results should be very close to the official MATLAB eval code.')
    print('-- Thanks, The Management')
    print('--------------------------------------------------------------')



if __name__ == '__main__':
    args = parse_args()

    output_dir = os.path.abspath(args.output_dir[0])
    with open(args.class_file, 'r') as f:
        lines = f.readlines()

    classes = [t.strip('\n') for t in lines]

    print('Evaluating detections')
    do_python_eval(args.devkit_dir, args.task, args.image_set, classes, output_dir)

Run the command to generate the "<classname>_pr.pkl" files; the positional argument is the directory where the pkl files are written (here results, which is also where the detection txt files from step 1 live):

python reval_voc_py3.py results
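If your paths differ from the defaults hard-coded in parse_args, pass the flags explicitly; the values below are simply the script's own defaults, spelled out:

python reval_voc_py3.py results --devkit_dir data/dilei --task landmine --image_set test --classes data/dilei.names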

3. Plot the PR curve with matplotlib

Create a Python file, for example PR_draw.py, directly in the directory that contains the pkl files, with the content below. Remember to change the pkl filename ('landmine_pr.pkl') to your own class name.

#import _pickle as cPickle
try:
  import cPickle
except ImportError:
  import pickle as cPickle
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt


fr = open('landmine_pr.pkl', 'rb')  
inf = cPickle.load(fr)
fr.close()

x = inf['rec']
y = inf['prec']
plt.figure()
plt.xlim([0.0,1.0])
plt.ylim([0.0,1.0])
plt.xlabel('recall')
plt.ylabel('precision')
plt.title('PR curve')
plt.plot(x, y, '-r')
plt.savefig('testblueline.jpg')
#plt.show()

print('AP:', inf['ap'])

Then run the command and the result will be produced:

python PR_draw.py
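If you trained more than one class, a small variation of PR_draw.py can overlay every "<classname>_pr.pkl" on a single figure. This is only a sketch, assuming the pkl files generated above sit in the same directory as the script:

# draw_all_pr.py -- overlay the PR curves of every <classname>_pr.pkl in this directory
import glob
try:
    import cPickle
except ImportError:
    import pickle as cPickle
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

plt.figure()
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('recall')
plt.ylabel('precision')
plt.title('PR curves')

for pkl in sorted(glob.glob('*_pr.pkl')):
    with open(pkl, 'rb') as fr:
        inf = cPickle.load(fr)
    cls = pkl[:-len('_pr.pkl')]
    plt.plot(inf['rec'], inf['prec'], label='{} (AP={:.3f})'.format(cls, inf['ap']))

plt.legend(loc='lower left')
plt.savefig('all_pr_curves.jpg')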

 

Loss Curve Visualization

-----------------------------------------------------------------------------------------------------------------------------------

1. Save a log file while training

./darknet detector train cfg/voc.data cfg/yolov3.cfg darknet53.conv.74 2>&1 | tee visualization/train_yolov3.log

2. Before plotting any curves, run the extract_log.py script to format the log: it strips the lines that cannot be parsed and produces new log files for the plotting scripts to read. The extract_log.py script is shown below (put it in the same directory as the generated log file):

# coding=utf-8
# Extract the training log: drop the lines that cannot be parsed and write a
# formatted log file for the visualization scripts to plot from.

def extract_log(log_file, new_log_file, key_word):
    with open(log_file, 'r') as f:
        with open(new_log_file, 'w') as train_log:
            for line in f:
                # skip the multi-GPU syncing lines
                if 'Syncing' in line:
                    continue
                # skip lines containing nan (division-by-zero errors)
                if 'nan' in line:
                    continue
                if key_word in line:
                    train_log.write(line)

extract_log('train_yolov3.log', 'train_log_loss.txt', 'images')
extract_log('train_yolov3.log', 'train_log_iou.txt', 'IOU')

After it runs, the loss lines and the IOU lines of the log have been extracted into two separate txt files.
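For reference, the two kinds of lines being extracted look roughly like this in a yolov2-style log (the numbers are placeholders; yolov3's yolo layers print a slightly different IOU line):

Region Avg IOU: 0.712345, Class: 0.912345, Obj: 0.634512, No Obj: 0.004512, Avg Recall: 0.812500,  count: 16
9798: 1.234567, 1.345678 avg, 0.001000 rate, 3.456789 seconds, 627072 images

The keyword 'images' selects the second kind of line (one per batch, used for the loss curve) and 'IOU' selects the first kind (used for the IOU curve).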

The train_loss_visualization.py script plots the loss curve.
The script is shown below (again, create the py file in the same directory):

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline
 
lines = 5124    # change this to the number of lines in your train_log_loss.txt
result = pd.read_csv('train_log_loss.txt', skiprows=[x for x in range(lines) if ((x%10!=9) |(x<1000))] ,error_bad_lines=False, names=['loss', 'avg', 'rate', 'seconds', 'images'])
result.head()
 
result['loss']=result['loss'].str.split(' ').str.get(1)
result['avg']=result['avg'].str.split(' ').str.get(1)
result['rate']=result['rate'].str.split(' ').str.get(1)
result['seconds']=result['seconds'].str.split(' ').str.get(1)
result['images']=result['images'].str.split(' ').str.get(1)
result.head()
result.tail()
 
# print(result.head())
# print(result.tail())
# print(result.dtypes)
 
print(result['loss'])
print(result['avg'])
print(result['rate'])
print(result['seconds'])
print(result['images'])
 
result['loss']=pd.to_numeric(result['loss'])
result['avg']=pd.to_numeric(result['avg'])
result['rate']=pd.to_numeric(result['rate'])
result['seconds']=pd.to_numeric(result['seconds'])
result['images']=pd.to_numeric(result['images'])
result.dtypes
 
 
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(result['avg'].values,label='avg_loss')
# ax.plot(result['loss'].values,label='loss')
ax.legend(loc='best')  # place the legend automatically
ax.set_title('The loss curves')
ax.set_xlabel('batches')
fig.savefig('avg_loss')
# fig.savefig('loss')

Set lines in train_loss_visualization.py to the number of lines in train_log_loss.txt, and adjust which rows are skipped as needed:

skiprows=[x for x in range(lines) if ((x%10!=9) |(x<1000))]
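A quick way to get the row counts (a small helper, not part of the original scripts):

# count_lines.py -- print the number of lines in the extracted log files
for name in ('train_log_loss.txt', 'train_log_iou.txt'):
    with open(name) as f:
        print(name, sum(1 for _ in f))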

Running train_loss_visualization.py generates avg_loss.png in the directory containing the script.

By analysing the loss curve you can decide how to adjust the learning-rate schedule in the cfg file.
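The relevant fields live in the [net] section of the cfg. With the usual steps policy they look roughly like this (values are illustrative, not a recommendation):

learning_rate=0.001
policy=steps
steps=40000,45000
scales=.1,.1

Lowering learning_rate, or moving the steps earlier, is a common response to a loss curve that plateaus early or oscillates.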

Besides the loss, you can also visualize Avg IOU, Avg Recall and other quantities.
To visualize 'Region Avg IOU', 'Class', 'Obj', 'No Obj', 'Avg Recall' and 'count', use the train_iou_visualization.py script. It works the same way as train_loss_visualization.py; the script is shown below (set lines according to the number of lines in train_log_iou.txt):
 

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline
 
lines = 122956    # change this to the number of lines in your train_log_iou.txt
result = pd.read_csv('train_log_iou.txt', skiprows=[x for x in range(lines) if (x%10==0 or x%10==9) ] ,error_bad_lines=False, names=['Region Avg IOU', 'Class', 'Obj', 'No Obj', 'Avg Recall','count'])
result.head()
 
result['Region Avg IOU']=result['Region Avg IOU'].str.split(': ').str.get(1)
result['Class']=result['Class'].str.split(': ').str.get(1)
result['Obj']=result['Obj'].str.split(': ').str.get(1)
result['No Obj']=result['No Obj'].str.split(': ').str.get(1)
result['Avg Recall']=result['Avg Recall'].str.split(': ').str.get(1)
result['count']=result['count'].str.split(': ').str.get(1)
result.head()
result.tail()
 
# print(result.head())
# print(result.tail())
# print(result.dtypes)
print(result['Region Avg IOU'])
 
result['Region Avg IOU']=pd.to_numeric(result['Region Avg IOU'])
result['Class']=pd.to_numeric(result['Class'])
result['Obj']=pd.to_numeric(result['Obj'])
result['No Obj']=pd.to_numeric(result['No Obj'])
result['Avg Recall']=pd.to_numeric(result['Avg Recall'])
result['count']=pd.to_numeric(result['count'])
result.dtypes
 
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(result['Region Avg IOU'].values,label='Region Avg IOU')
# ax.plot(result['Class'].values,label='Class')
# ax.plot(result['Obj'].values,label='Obj')
# ax.plot(result['No Obj'].values,label='No Obj')
# ax.plot(result['Avg Recall'].values,label='Avg Recall')
# ax.plot(result['count'].values,label='count')
ax.legend(loc='best')
# ax.set_title('The Region Avg IOU curves')
ax.set_title('The Region Avg IOU curves')
ax.set_xlabel('batches')
# fig.savefig('Avg IOU')
fig.savefig('Region Avg IOU')

Running train_iou_visualization.py generates the corresponding curve image in the directory containing the script.

 

P.S.

If setting -gpus gpu_id directly has no effect during testing, prepend CUDA_VISIBLE_DEVICES="gpu_id" to the command.
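For example, reusing the validation command from the mAP section (gpu_id is a placeholder for the GPU index, e.g. 0):

CUDA_VISIBLE_DEVICES="gpu_id" ./darknet detector valid cfg/voc.data cfg/yolov3-voc_test.cfg backup/yolov3-voc_final.weights -out ""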

 

References:

https://blog.csdn.net/qq_34806812/article/details/81459982?utm_source=blogxgwz3

https://blog.csdn.net/qq_33350808/article/details/83178002

 
