My fix for the YOLOX error FileNotFoundError: [Errno 2] No such file or directory: 'xxx.xml'

While training YOLOX, every 10 epochs, when the model evaluation runs, training aborts with an error. The full traceback:

2022-07-05 16:19:49 | ERROR    | yolox.core.launch:98 - An error has been caught in function 'launch', process 'MainProcess' (17800), thread 'MainThread' (26892):
Traceback (most recent call last):

  File "E:\Study\DeepLearn\DPlearn\YOLO\YOLOX\tools\train.py", line 134, in <module>
    launch(
    └ <function launch at 0x00000270311B3310>

> File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\core\launch.py", line 98, in launch
    main_func(*args)
    │          └ (╒═══════════════════╤═══════════════════════════════════════════════════════════════════════════════════════════════════════...
    └ <function main at 0x0000027031C8FE50>

  File "E:\Study\DeepLearn\DPlearn\YOLO\YOLOX\tools\train.py", line 118, in main
    trainer.train()
    │       └ <function Trainer.train at 0x0000027034191280>
    └ <yolox.core.trainer.Trainer object at 0x00000270341AEB20>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\core\trainer.py", line 76, in train
    self.train_in_epoch()
    │    └ <function Trainer.train_in_epoch at 0x0000027034191940>
    └ <yolox.core.trainer.Trainer object at 0x00000270341AEB20>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\core\trainer.py", line 86, in train_in_epoch
    self.after_epoch()
    │    └ <function Trainer.after_epoch at 0x0000027034191CA0>
    └ <yolox.core.trainer.Trainer object at 0x00000270341AEB20>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\core\trainer.py", line 222, in after_epoch
    self.evaluate_and_save_model()
    │    └ <function Trainer.evaluate_and_save_model at 0x0000027034191F70>
    └ <yolox.core.trainer.Trainer object at 0x00000270341AEB20>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\core\trainer.py", line 328, in evaluate_and_save_model
    (ap50_95, ap50, summary), predictions = self.exp.eval(
                                            │    │   └ <function Exp.eval at 0x00000270341918B0>
                                            │    └ ╒═══════════════════╤═══════════════════════════════════════════════════════════════════════════════════
═════════════════════...
                                            └ <yolox.core.trainer.Trainer object at 0x00000270341AEB20>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\exp\yolox_base.py", line 322, in eval
    return evaluator.evaluate(model, is_distributed, half, return_outputs=return_outputs)
           │         │        │      │               │                    └ True
           │         │        │      │               └ False
           │         │        │      └ False
           │         │        └ YOLOX(
           │         │            (backbone): YOLOPAFPN(
           │         │              (backbone): CSPDarknet(
           │         │                (stem): Focus(
           │         │                  (conv): BaseConv(
           │         │                    (conv): ...
           │         └ <function VOCEvaluator.evaluate at 0x0000027034178D30>
           └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x000002703A303550>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\evaluators\voc_evaluator.py", line 114, in evaluate
    eval_results = self.evaluate_prediction(data_list, statistics)
                   │    │                   │          └ tensor([0.7917, 0.3148, 1.0000], device='cuda:0')
                   │    │                   └ {0: (tensor([[ 5.5528e+01, -4.4439e+01,  1.1621e+03,  1.1649e+03],
                   │    │                             [ 4.6180e+02, -1.1025e+01,  1.1711e+03,  1.1398e+0...
                   │    └ <function VOCEvaluator.evaluate_prediction at 0x0000027034178E50>
                   └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x000002703A303550>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\evaluators\voc_evaluator.py", line 186, in evaluate_prediction
    mAP50, mAP70 = self.dataloader.dataset.evaluate_detections(all_boxes, tempdir)
                   │    │          │       │                   │          └ 'C:\\Users\\76493\\AppData\\Local\\Temp\\tmp3n4ozfu8'
                   │    │          │       │                   └ [[array([[ 5.55276413e+01, -4.44387054e+01,  1.16205750e+03,
                   │    │          │       │                              1.16488818e+03,  8.09440315e-01],
                   │    │          │       │                            [ 4.61802429e+...
                   │    │          │       └ <function VOCDetection.evaluate_detections at 0x000002703417B790>
                   │    │          └ <yolox.data.datasets.voc.VOCDetection object at 0x000002703A3037F0>
                   │    └ <torch.utils.data.dataloader.DataLoader object at 0x000002703A303970>
                   └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x000002703A303550>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\data\datasets\voc.py", line 271, in evaluate_detections
    mAP = self._do_python_eval(output_dir, iou)
          │    │               │           └ 0.5
          │    │               └ 'C:\\Users\\76493\\AppData\\Local\\Temp\\tmp3n4ozfu8'
          │    └ <function VOCDetection._do_python_eval at 0x000002703417B940>
          └ <yolox.data.datasets.voc.VOCDetection object at 0x000002703A3037F0>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\data\datasets\voc.py", line 335, in _do_python_eval
    rec, prec, ap = voc_eval(
                    └ <function voc_eval at 0x0000027034178F70>

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\evaluators\voc_eval.py", line 92, in voc_eval
    recs[imagename] = parse_rec(annopath.format(imagename))
    │    │            │         │        │      └ '112_3'
    │    │            │         │        └ <method 'format' of 'str' objects>
    │    │            │         └ '{:s}.xml'
    │    │            └ <function parse_rec at 0x000002703414C8B0>
    │    └ '112_3'
    └ {}

  File "C:\ProgramData\Anaconda3\lib\site-packages\yolox-0.3.0-py3.9.egg\yolox\evaluators\voc_eval.py", line 16, in parse_rec
    tree = ET.parse(filename)
           │  │     └ '112_3.xml'
           │  └ <function parse at 0x0000027031236820>
           └ <module 'xml.etree.ElementTree' from 'C:\\ProgramData\\Anaconda3\\lib\\xml\\etree\\ElementTree.py'>

  File "C:\ProgramData\Anaconda3\lib\xml\etree\ElementTree.py", line 1229, in parse
    tree.parse(source, parser)
    │    │     │       └ None
    │    │     └ '112_3.xml'
    │    └ <function ElementTree.parse at 0x00000270312358B0>
    └ <xml.etree.ElementTree.ElementTree object at 0x0000027085E47910>

  File "C:\ProgramData\Anaconda3\lib\xml\etree\ElementTree.py", line 569, in parse
    source = open(source, "rb")
                  └ '112_3.xml'

FileNotFoundError: [Errno 2] No such file or directory: '112_3.xml'

I first tried the solution from another blogger's article, "windows10搭建YOLOx环境 训练+测试+评估" (setting up a YOLOX environment on Windows 10: training + testing + evaluation).

Edit /yolox/evaluators/voc_eval.py and add one line before def parse_rec(filename):

import os
import xml.etree.ElementTree as ET
import numpy as np

# Write the absolute path of your dataset's Annotations folder here
root = r'E:/DeepLearn/dataset/yolox/VOCdevkit/VOC2007/Annotations/'

def parse_rec(filename):
    """Parse a PASCAL VOC xml file"""
    tree = ET.parse(filename)
    objects = []
    for obj in tree.findall("object"):

Then run python setup.py install in the terminal to reinstall yolox so the change takes effect.

PS: if you want the model evaluated after every epoch rather than every 10, edit /yolox/exp/yolox_base.py and change the interval from 10 to 1:

self.eval_interval = 1

Remember to run python setup.py install again after the change.
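A lighter-weight alternative is to override eval_interval in your own Exp file (the one passed to train.py with -f) instead of editing the installed yolox_base.py, which avoids reinstalling after every tweak. A minimal sketch, using a stand-in base class in place of yolox.exp.Exp:

```python
class BaseExp:
    """Stand-in for yolox.exp.Exp; the real one defines eval_interval = 10."""
    def __init__(self):
        self.eval_interval = 10  # default: evaluate every 10 epochs

class MyExp(BaseExp):
    """Custom experiment: override the attribute instead of patching the package."""
    def __init__(self):
        super().__init__()
        self.eval_interval = 1  # evaluate after every epoch
```

Because the override lives in your own experiment file, no python setup.py install step is needed when you change it.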

After doing all of the above and running again, the same error still occurred: the xml files still could not be found during evaluation.

I went back to digging through GitHub and finally found the answer in an issue:

On Windows, in /yolox/data/datasets/voc.py, replace

annopath = os.path.join(rootpath, "Annotations", "{:s}.xml")

with

annopath = os.path.join(rootpath, "Annotations", "{}.xml")

After reinstalling and running again, the error was gone.
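Whichever fix applies, it also helps to verify up front that every image ID listed in the evaluation ImageSet file has a matching annotation, so a single missing or misnamed xml does not abort training an epoch in. A small standalone checker (the commented paths are placeholders for your dataset):

```python
import os

def find_missing_annotations(imageset_file, anno_dir):
    """Return image IDs from imageset_file that have no .xml in anno_dir."""
    with open(imageset_file) as f:
        ids = [line.strip() for line in f if line.strip()]
    return [i for i in ids
            if not os.path.isfile(os.path.join(anno_dir, i + ".xml"))]

# Example usage (placeholder paths):
# missing = find_missing_annotations(
#     r"E:/DeepLearn/dataset/yolox/VOCdevkit/VOC2007/ImageSets/Main/test.txt",
#     r"E:/DeepLearn/dataset/yolox/VOCdevkit/VOC2007/Annotations",
# )
# if missing:
#     print("missing annotation files:", missing)
```

Running this once before training catches the '112_3.xml not found' class of problem in seconds instead of 10 epochs later.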
