1. Introduction
While running some recent single-object detection experiments, I needed an extra baseline for comparison, so I applied the YOLOv5 model to my local single-object detection task. The experiment itself ran without a hitch, but YOLOv5 reports mAP at different IoU thresholds, while my original experiments report accuracy at different IoU thresholds, acc@IoU for short. This post therefore adds the acc@IoU metric to YOLOv5's evaluation code.
2. Steps
Official YOLOv5 repository: https://github.com/ultralytics/yolov5
1. In val.py (the validation script), insert the per-image IoU computation at line 254 (all line numbers refer to the official code; back up the file before editing):
iou = compute_acc_iou(correct, iouv)
for thres in thres_lst:
    if iou >= thres:
        hit_lst[thres] += 1
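The accumulation above can be checked in isolation. This is a minimal standalone sketch with made-up per-image results: given the best IoU threshold each image's detection passed, it counts how many images "hit" each threshold.

```python
# IoU thresholds 0.5:0.05:0.95, as in the patch above
thres_lst = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
hit_lst = {thres: 0 for thres in thres_lst}

# Hypothetical per-image results: the highest IoU threshold each
# detection still passed (0.0 means the detection missed entirely)
per_image_iou = [0.95, 0.7, 0.0, 0.85]

for iou in per_image_iou:
    for thres in thres_lst:
        if iou >= thres:
            hit_lst[thres] += 1

print(hit_lst[0.5])   # 3 images correct at IoU >= 0.5
print(hit_lst[0.9])   # 1 image correct at IoU >= 0.9
```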
2. Insert at line 197:
thres_lst = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]  # IoU thresholds for acc@IoU
hit_lst = {thres: 0 for thres in thres_lst}  # hit counter per threshold
3. The compute_acc_iou used in step 1 still needs a definition; add this new function (note that the original draft carried a flag across rows, which would stop later detections from updating the result; taking the running max fixes that):
def compute_acc_iou(correct, iouv):
    """
    correct: bool tensor of shape (N, 10), TP flags at each IoU threshold
    iouv: tensor of shape (10,), the IoU thresholds 0.5:0.05:0.95
    Returns the highest threshold at which a detection is still a TP
    (0.0 if no detection matches at all).
    """
    iou = 0.0
    for row in correct:
        for j in range(len(row)):
            if not row[j]:  # this detection is no longer a TP at threshold j
                break
            iou = max(iou, float(iouv[j]))
    return iou
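A quick way to sanity-check the function is to feed it plain Python lists instead of tensors (indexing works the same, so the logic carries over). The snippet below restates the function in condensed form so it runs on its own; the inputs are made up.

```python
def compute_acc_iou(correct, iouv):
    # Highest IoU threshold at which any detection is still a true positive
    iou = 0.0
    for row in correct:
        for j in range(len(row)):
            if not row[j]:
                break
            iou = max(iou, float(iouv[j]))
    return iou

iouv = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

# A detection that is a TP up to IoU 0.65 and fails beyond:
correct = [[True, True, True, True, False, False, False, False, False, False]]
print(compute_acc_iou(correct, iouv))  # 0.65

# No detection survives even the loosest threshold:
print(compute_acc_iou([[False] * 10], iouv))  # 0.0
```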
4. Insert at line 271 the code that turns the hit counts into accuracies, averages them over the 10 IoU thresholds, logs the acc@IoU metrics, and saves the results to a JSON file (make sure json and os are imported at the top of val.py):
num_images = len(os.listdir(data['val']))  # total number of validation images
acc_mean = 0
for thres in hit_lst.keys():
    hit_lst[thres] /= num_images           # hit count -> accuracy at this IoU
    acc_mean += hit_lst[thres] / len(hit_lst)
acc_info = ("<<<<<<<<<Validation>>>>>>>>>>\n"
            "ACC@iou0.5: {}\nACC@iou0.75: {}\nACC@iou0.95: {}\n"
            "ACC@iou0.5:0.95: {}\n").format(
            hit_lst[0.5], hit_lst[0.75], hit_lst[0.95], acc_mean)
LOGGER.info(acc_info)
acc_log = {
    '0.5': hit_lst[0.5],
    '0.75': hit_lst[0.75],
    '0.95': hit_lst[0.95],
    'acc_mean': acc_mean,
}
with open(os.path.join(save_dir, 'acc_log.json'), 'w') as f:
    f.write(json.dumps(acc_log))
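The saved file can later be read back for plotting or comparison. This sketch writes a dummy log in the same format as above and loads it again; the save_dir path and the metric values here are made up, not produced by a real run.

```python
import json
import os

save_dir = "runs/val/exp"  # hypothetical run directory
os.makedirs(save_dir, exist_ok=True)

# Write a dummy log with the same keys the patch above produces
acc_log = {"0.5": 0.91, "0.75": 0.74, "0.95": 0.12, "acc_mean": 0.58}
with open(os.path.join(save_dir, "acc_log.json"), "w") as f:
    f.write(json.dumps(acc_log))

# Read it back
with open(os.path.join(save_dir, "acc_log.json")) as f:
    loaded = json.load(f)
print(loaded["acc_mean"])  # 0.58
```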
5. Finally, run the validation and check the results:
python ./val.py --weights yolov5s.pt
The acc@IoU metrics now show up in the output and can be compared against the local experiments.