Evaluation for Object Detection

IoU (Intersection over Union) = area of the intersection of the predicted box and the ground-truth box / area of their union

A prediction is counted as correct (a true positive) when IoU > threshold (commonly 0.5).
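The IoU computation above can be sketched for axis-aligned boxes in (x1, y1, x2, y2) form; the example boxes are illustrative values, not from the text:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = area(A) + area(B) - intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

pred, gt = (0, 0, 10, 10), (5, 5, 15, 15)
score = iou(pred, gt)  # 25 / 175, about 0.143 -> below 0.5, so not a true positive
```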

For a single image, let N(TotalObjects)_C be the known number of ground-truth objects of class C in the image, and N(TruePositives)_C the number of correct detections of that class.

The precision of class C in that image is Precision_C = N(TruePositives)_C / N(TotalObjects)_C

Over a validation set (multiple images), the average precision of class C is AveragePrecision_C = sum(Precision_C) / N(TotalImages)_C

To score the model as a whole, the AP values of all classes are averaged: MeanAveragePrecision (mAP) = sum(AveragePrecision_C) / N(classes). (Note that this is a simplified per-image formulation; benchmarks such as PASCAL VOC and COCO instead compute AP as the area under the precision-recall curve.)
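The three formulas above chain together directly. A minimal sketch, using the simplified per-image definitions from this text (the per-image counts and class names are made-up illustration data):

```python
def mean_average_precision(per_image_counts, classes):
    """per_image_counts: list of dicts, one per image,
    mapping class name -> (true_positives, total_ground_truth_objects)."""
    ap = {}
    for cls in classes:
        precisions = []
        for counts in per_image_counts:
            if cls in counts:
                tp, total = counts[cls]
                precisions.append(tp / total)           # Precision_C for one image
        ap[cls] = sum(precisions) / len(precisions)     # AveragePrecision_C over images
    return sum(ap.values()) / len(ap), ap               # mAP = mean of per-class APs

images = [{"cat": (2, 3), "dog": (1, 2)},   # image 1: 2/3 cats found, 1/2 dogs found
          {"cat": (1, 1)}]                  # image 2: the only cat found
map_value, ap = mean_average_precision(images, ["cat", "dog"])
# ap["cat"] = (2/3 + 1) / 2, ap["dog"] = 1/2, map_value = their mean
```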


A test result falls into one of four cases:
TP: predicted positive, actually positive
FP: predicted positive, actually negative
TN: predicted negative, actually negative
FN: predicted negative, actually positive
Accuracy: Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision: P = TP / (TP + FP) — correctly predicted positives / everything predicted positive; the fraction of retrieved results that are relevant
Recall: R = TP / (TP + FN) — correctly predicted positives / all actual positives; the fraction of relevant documents that are retrieved
True positive rate (identical to recall): TPR = TP / (TP + FN) — correctly predicted positives / all actual positives
False positive rate: FPR = FP / (FP + TN) — negatives wrongly predicted as positive / all actual negatives
F1-score: 2*TP / (2*TP + FP + FN)
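All five formulas above can be computed in one pass from the four confusion-matrix counts; the counts below are illustrative numbers, not from the text:

```python
def classification_metrics(tp, fp, tn, fn):
    # Straight transcription of the formulas above.
    return {
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),   # also the true positive rate (TPR)
        "fpr":       fp / (fp + tn),   # false positive rate
        "f1":        2 * tp / (2 * tp + fp + fn),
    }

m = classification_metrics(tp=8, fp=2, tn=85, fn=5)
# accuracy = 93/100, precision = 8/10, recall = 8/13, f1 = 16/23
```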

Having both precision and recall high is ideal, but in practice they tend to trade off against each other. For example, if only one result is retrieved and it is relevant, precision is 100% while recall is very low. Which metric to favor depends on the application: for search, improve precision while guaranteeing recall; for disease screening or spam filtering, improve recall while guaranteeing precision.


To find the balance point between precision and recall, a precision-recall curve (PRC) is often plotted, with precision on the y-axis and recall on the x-axis; the further the curve bulges toward the upper right, the better (both metrics high). The F1 score combines precision and recall into a single summary: F1 = 2PR / (P + R).


The ROC curve is another common way to measure a classifier: the y-axis is the true positive rate and the x-axis is the false positive rate. A higher TPR at a lower FPR means a better classifier, so the further the ROC curve bulges toward the upper left, the better.
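Each point on the ROC curve is the (FPR, TPR) pair obtained at one decision threshold; sweeping the threshold from high to low traces the curve. A minimal sketch with made-up scores and labels:

```python
def roc_point(scores, labels, threshold):
    """(FPR, TPR) when samples with score >= threshold are predicted positive."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    p = sum(labels)              # actual positives
    n = len(labels) - p         # actual negatives
    return fp / n, tp / p       # (false positive rate, true positive rate)

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # classifier confidence per sample
labels = [1,   1,   0,   1,   0,   0]     # ground truth
curve = [roc_point(scores, labels, t) for t in (0.95, 0.75, 0.35, 0.0)]
# Runs from (0, 0) at a very strict threshold to (1, 1) when everything
# is predicted positive; points closer to the upper left are better.
```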

How to fix the following error?

Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.53s).
Accumulating evaluation results...
Traceback (most recent call last):
  File "tools/train.py", line 133, in <module>
    main()
  File "tools/train.py", line 129, in main
    runner.train()
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1721, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/mmengine/runner/loops.py", line 102, in run
    self.runner.val_loop.run()
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/mmengine/runner/loops.py", line 366, in run
    metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/mmengine/evaluator/evaluator.py", line 79, in evaluate
    _results = metric.evaluate(size)
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/mmengine/evaluator/metric.py", line 133, in evaluate
    _metrics = self.compute_metrics(results)  # type: ignore
  File "/home/wangbei/mmdetection(coco)/mmdet/evaluation/metrics/coco_metric.py", line 512, in compute_metrics
    coco_eval.accumulate()
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/pycocotools-2.0-py3.8-linux-x86_64.egg/pycocotools/cocoeval.py", line 378, in accumulate
    tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)
  File "/home/wangbei/anaconda3/envs/Object_mmdetection/lib/python3.8/site-packages/numpy/__init__.py", line 305, in __getattr__
    raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe.
If you specifically wanted the numpy scalar type, use `np.float64` here. The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 30235 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 30234) of binary: /home/wangbei/anaconda3/envs/Object_mmdetection/bin/python
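As the error message itself says, `np.float` was removed in NumPy 1.24, but the installed pycocotools still uses it. Possible fixes: edit `pycocotools/cocoeval.py` line 378 and replace `np.float` with plain `float`, pin `numpy<1.24` in the environment, or (as a quick, non-invasive workaround) restore the alias before pycocotools is imported. A sketch of the last option, e.g. placed near the top of `tools/train.py`:

```python
import numpy as np

# NumPy >= 1.24 removed the deprecated `np.float` alias; restore it so
# legacy code such as pycocotools' `dtype=np.float` keeps working.
# (Editing cocoeval.py to use `float`, or pinning numpy < 1.24, is cleaner.)
if not hasattr(np, "float"):
    np.float = float
```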