Error when reproducing YOLOX training on a custom dataset

While reproducing YOLOX training, the following error occurred:

(py39) PS E:\project\MODEL\YOLO\YOLOX-main\YOLOX-main> python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 0 -b 2 -c yu/yolox_s.pth
e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py:47: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  self.scaler = torch.cuda.amp.GradScaler(enabled=args.fp16)
2024-08-06 10:44:57 | INFO     | yolox.core.trainer:132 - args: Namespace(experiment_name='E:\\project\\MODEL\\YOLO\\YOLOX-main\\YOLOX-main\\weights\\1first_0731', name=None, dist_backend='nccl', dist_url=None, batch_size=64, devices=0, exp_file='E:\\project\\MODEL\\YOLO\\YOLOX-main\\YOLOX-main\\exps\\exam
ple\\yolox_voc\\yolox_voc_s.py', resume=False, ckpt='E:\\project\\MODEL\\YOLO\\YOLOX-main\\YOLOX-main\\yu\\yolox_s.pth', start_epoch=None, num_machines=1, machine_rank=0, fp16=False, cache=None, occupy=False, logger='tensorboard', opts=['–f', 'exps/example/yolox_voc/yolox_voc_s.py', '–d', '0', '–b', '2', '–c', 'yu/yolox_s.pth'])
2024-08-06 10:44:57 | INFO     | yolox.core.trainer:133 - exp value:
╒═══════════════════╤════════════════════════════╕
│ keys              │ values                     │
╞═══════════════════╪════════════════════════════╡
│ seed              │ None                       │
├───────────────────┼────────────────────────────┤
│ output_dir        │ './YOLOX_outputs'          │
├───────────────────┼────────────────────────────┤
│ print_interval    │ 1                          │
├───────────────────┼────────────────────────────┤
│ eval_interval     │ 1                          │
├───────────────────┼────────────────────────────┤
│ dataset           │ None                       │
├───────────────────┼────────────────────────────┤
│ num_classes       │ 1                          │
├───────────────────┼────────────────────────────┤
│ depth             │ 0.33                       │
├───────────────────┼────────────────────────────┤
│ width             │ 0.5                        │
├───────────────────┼────────────────────────────┤
│ act               │ 'silu'                     │
├───────────────────┼────────────────────────────┤
│ data_num_workers  │ 4                          │
├───────────────────┼────────────────────────────┤
│ input_size        │ (640, 640)                 │
├───────────────────┼────────────────────────────┤
│ multiscale_range  │ 5                          │
├───────────────────┼────────────────────────────┤
│ data_dir          │ None                       │
├───────────────────┼────────────────────────────┤
│ train_ann         │ 'instances_train2017.json' │
├───────────────────┼────────────────────────────┤
│ val_ann           │ 'instances_val2017.json'   │
├───────────────────┼────────────────────────────┤
│ test_ann          │ 'instances_test2017.json'  │
├───────────────────┼────────────────────────────┤
│ mosaic_prob       │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ mixup_prob        │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ hsv_prob          │ 1.0                        │
├───────────────────┼────────────────────────────┤
│ flip_prob         │ 0.5                        │
├───────────────────┼────────────────────────────┤
│ degrees           │ 10.0                       │
├───────────────────┼────────────────────────────┤
│ translate         │ 0.1                        │
├───────────────────┼────────────────────────────┤
│ mosaic_scale      │ (0.1, 2)                   │
├───────────────────┼────────────────────────────┤
│ enable_mixup      │ True                       │
├───────────────────┼────────────────────────────┤
│ mixup_scale       │ (0.5, 1.5)                 │
├───────────────────┼────────────────────────────┤
│ shear             │ 2.0                        │
├───────────────────┼────────────────────────────┤
│ warmup_epochs     │ 1                          │
├───────────────────┼────────────────────────────┤
│ max_epoch         │ 300                        │
├───────────────────┼────────────────────────────┤
│ warmup_lr         │ 0                          │
├───────────────────┼────────────────────────────┤
│ min_lr_ratio      │ 0.05                       │
├───────────────────┼────────────────────────────┤
│ basic_lr_per_img  │ 0.00015625                 │
├───────────────────┼────────────────────────────┤
│ scheduler         │ 'yoloxwarmcos'             │
├───────────────────┼────────────────────────────┤
│ no_aug_epochs     │ 15                         │
├───────────────────┼────────────────────────────┤
│ ema               │ True                       │
├───────────────────┼────────────────────────────┤
│ weight_decay      │ 0.0005                     │
├───────────────────┼────────────────────────────┤
│ momentum          │ 0.9                        │
├───────────────────┼────────────────────────────┤
│ save_history_ckpt │ True                       │
├───────────────────┼────────────────────────────┤
│ exp_name          │ 'yolox_voc_s'              │
├───────────────────┼────────────────────────────┤
│ test_size         │ (640, 640)                 │
├───────────────────┼────────────────────────────┤
│ test_conf         │ 0.01                       │
├───────────────────┼────────────────────────────┤
│ nmsthre           │ 0.65                       │
╘═══════════════════╧════════════════════════════╛
2024-08-06 10:44:57 | INFO     | yolox.core.trainer:138 - Model Summary: Params: 8.94M, Gflops: 26.76
e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py:340: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during
 unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed 
to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  ckpt = torch.load(ckpt_file, map_location=self.device)["model"]
2024-08-06 10:44:57 | INFO     | yolox.core.trainer:338 - loading checkpoint for fine tuning
2024-08-06 10:44:58 | WARNING  | yolox.utils.checkpoint:24 - Shape of head.cls_preds.0.weight in checkpoint is torch.Size([80, 128, 1, 1]), while shape of head.cls_preds.0.weight in model is torch.Size([1, 128, 1, 1]).
2024-08-06 10:44:58 | WARNING  | yolox.utils.checkpoint:24 - Shape of head.cls_preds.0.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.0.bias in model is torch.Size([1]).
2024-08-06 10:44:58 | WARNING  | yolox.utils.checkpoint:24 - Shape of head.cls_preds.1.weight in checkpoint is torch.Size([80, 128, 1, 1]), while shape of head.cls_preds.1.weight in model is torch.Size([1, 128, 1, 1]).
2024-08-06 10:44:58 | WARNING  | yolox.utils.checkpoint:24 - Shape of head.cls_preds.1.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.1.bias in model is torch.Size([1]).
2024-08-06 10:44:58 | WARNING  | yolox.utils.checkpoint:24 - Shape of head.cls_preds.2.weight in checkpoint is torch.Size([80, 128, 1, 1]), while shape of head.cls_preds.2.weight in model is torch.Size([1, 128, 1, 1]).
2024-08-06 10:44:58 | WARNING  | yolox.utils.checkpoint:24 - Shape of head.cls_preds.2.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.2.bias in model is torch.Size([1]).
2024-08-06 10:44:58 | INFO     | yolox.core.trainer:157 - init prefetcher, this might take one minute or less...
2024-08-06 10:45:08 | INFO     | yolox.core.trainer:196 - Training start...
2024-08-06 10:45:08 | INFO     | yolox.core.trainer:197 -
YOLOX(
  (backbone): YOLOPAFPN(
    (backbone): CSPDarknet(
      (stem): Focus(
        (conv): BaseConv(
          (conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (dark2): Sequential(
        (0): BaseConv(
          (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark3): Sequential(
        (0): BaseConv(
          (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (1): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (2): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark4): Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (1): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
            (2): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
      (dark5): Sequential(
        (0): BaseConv(
          (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): SPPBottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): ModuleList(
            (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
            (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False)
            (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False)
          )
          (conv2): BaseConv(
            (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
        (2): CSPLayer(
          (conv1): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv3): BaseConv(
            (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (m): Sequential(
            (0): Bottleneck(
              (conv1): BaseConv(
                (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
                (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
              (conv2): BaseConv(
                (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
                (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
                (act): SiLU(inplace=True)
              )
            )
          )
        )
      )
    )
    (upsample): Upsample(scale_factor=2.0, mode='nearest')
    (lateral_conv0): BaseConv(
      (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_p4): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (reduce_conv1): BaseConv(
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_p3): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (bu_conv2): BaseConv(
      (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_n3): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
    (bu_conv1): BaseConv(
      (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (C3_n4): CSPLayer(
      (conv1): BaseConv(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv2): BaseConv(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (conv3): BaseConv(
        (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (m): Sequential(
        (0): Bottleneck(
          (conv1): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
          (conv2): BaseConv(
            (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
            (act): SiLU(inplace=True)
          )
        )
      )
    )
  )
  (head): YOLOXHead(
    (cls_convs): ModuleList(
      (0-2): 3 x Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
    (reg_convs): ModuleList(
      (0-2): 3 x Sequential(
        (0): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
    (cls_preds): ModuleList(
      (0-2): 3 x Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
    )
    (reg_preds): ModuleList(
      (0-2): 3 x Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1))
    )
    (obj_preds): ModuleList(
      (0-2): 3 x Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1))
    )
    (stems): ModuleList(
      (0): BaseConv(
        (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): BaseConv(
        (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (2): BaseConv(
        (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
    )
    (l1_loss): L1Loss()
    (bcewithlog_loss): BCEWithLogitsLoss()
    (iou_loss): IOUloss()
  )
)
2024-08-06 10:45:08 | INFO     | yolox.core.trainer:218 - ---> start train epoch1
e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py:106: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast(enabled=self.amp_training):
e:\project\model\yolo\yolox-main\yolox-main\yolox\models\yolo_head.py:474: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast(enabled=False):
2024-08-06 10:49:48 | INFO     | yolox.core.trainer:270 - epoch: 1/300, iter: 1/3, gpu mem: 23756Mb, mem: 22.8Gb, iter_time: 280.177s, data_time: 0.044s, total_loss: 9.4, iou_loss: 1.7, l1_loss: 0.0, conf_loss: 4.9, cls_loss: 2.8, lr: 1.111e-03, size: 640, ETA: 2 days, 21:57:59
2024-08-06 10:50:06 | INFO     | yolox.core.trainer:270 - epoch: 1/300, iter: 2/3, gpu mem: 23812Mb, mem: 41.8Gb, iter_time: 17.414s, data_time: 0.003s, total_loss: 9.7, iou_loss: 1.7, l1_loss: 0.0, conf_loss: 5.1, cls_loss: 2.9, lr: 4.444e-03, size: 640, ETA: 1 day, 13:06:58
2024-08-06 10:50:21 | INFO     | yolox.core.trainer:270 - epoch: 1/300, iter: 3/3, gpu mem: 23812Mb, mem: 41.9Gb, iter_time: 15.935s, data_time: 0.000s, total_loss: 6.5, iou_loss: 1.6, l1_loss: 0.0, conf_loss: 2.7, cls_loss: 2.2, lr: 1.000e-02, size: 640, ETA: 1 day, 2:02:24
2024-08-06 10:50:21 | INFO     | yolox.core.trainer:402 - Save weights to E:\project\MODEL\YOLO\YOLOX-main\YOLOX-main\weights\1first_0731
100%|###############################################################################################################################################################################################################################################################################| 1/1 [00:07<00:00,  7.13s/it]
e:\project\model\yolo\yolox-main\yolox-main\yolox\evaluators\voc_evaluator.py:108: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\tensor\python_tensor.cpp:80.)
  statistics = torch.cuda.FloatTensor([inference_time, nms_time, n_samples])
2024-08-06 10:50:29 | INFO     | yolox.evaluators.voc_evaluator:144 - Evaluate in main process...
Writing mineral_water VOC results file
2024-08-06 10:50:29 | ERROR    | yolox.core.trainer:79 - Exception in training: 
2024-08-06 10:50:29 | INFO     | yolox.core.trainer:200 - Training of experiment is done and the best AP is 0.00
2024-08-06 10:50:29 | ERROR    | yolox.core.launch:98 - An error has been caught in function 'launch', process 'MainProcess' (33516), thread 'MainThread' (35676):
Traceback (most recent call last):

  File "E:\project\MODEL\YOLO\YOLOX-main\YOLOX-main\tools\train.py", line 139, in <module>
    launch(
    └ <function launch at 0x0000019A376239D0>

> File "e:\project\model\yolo\yolox-main\yolox-main\yolox\core\launch.py", line 98, in launch
    main_func(*args)
    │          └ (╒═══════════════════╤═══════════════════════════════════════════════════════════════════════════════════════════════════════...
    └ <function main at 0x0000019A3BBB43A0>

  File "E:\project\MODEL\YOLO\YOLOX-main\YOLOX-main\tools\train.py", line 119, in main
    trainer.train()
    │       └ <function Trainer.train at 0x0000019A3C087160>
    └ <yolox.core.trainer.Trainer object at 0x0000019A3C08BE20>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py", line 77, in train
    self.train_in_epoch()
    │    └ <function Trainer.train_in_epoch at 0x0000019A3C087AF0>
    └ <yolox.core.trainer.Trainer object at 0x0000019A3C08BE20>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py", line 88, in train_in_epoch
    self.after_epoch()
    │    └ <function Trainer.after_epoch at 0x0000019A3C087E50>
    └ <yolox.core.trainer.Trainer object at 0x0000019A3C08BE20>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py", line 237, in after_epoch
    self.evaluate_and_save_model()
    │    └ <function Trainer.evaluate_and_save_model at 0x0000019A3C08C160>
    └ <yolox.core.trainer.Trainer object at 0x0000019A3C08BE20>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\core\trainer.py", line 355, in evaluate_and_save_model
    (ap50_95, ap50, summary), predictions = self.exp.eval(
                                            │    │   └ <function Exp.eval at 0x0000019A3C087A60>
                                            │    └ ╒═══════════════════╤════════════════════════════════════════════════════════════════════════════════════════════════════════...
                                            └ <yolox.core.trainer.Trainer object at 0x0000019A3C08BE20>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\exp\yolox_base.py", line 353, in eval
    return evaluator.evaluate(model, is_distributed, half, return_outputs=return_outputs)
           │         │        │      │               │                    └ True
           │         │        │      │               └ False
           │         │        │      └ False
           │         │        └ YOLOX(
           │         │            (backbone): YOLOPAFPN(
           │         │              (backbone): CSPDarknet(
           │         │                (stem): Focus(
           │         │                  (conv): BaseConv(
           │         │                    (conv): ...
           │         └ <function VOCEvaluator.evaluate at 0x0000019A3C07EB80>
           └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x0000019A430A35E0>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\evaluators\voc_evaluator.py", line 114, in evaluate
    eval_results = self.evaluate_prediction(data_list, statistics)
                   │    │                   │          └ tensor([0., 0., 1.], device='cuda:0')
                   │    │                   └ {0: (tensor([[ 3.0676e+02, -6.0223e-01,  3.5026e+02,  3.4164e+01],
                   │    │                             [ 3.0479e+02, -2.0055e+00,  3.9067e+02,  1.4434e+0...
                   │    └ <function VOCEvaluator.evaluate_prediction at 0x0000019A3C07ECA0>
                   └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x0000019A430A35E0>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\evaluators\voc_evaluator.py", line 186, in evaluate_prediction
    mAP50, mAP70 = self.dataloader.dataset.evaluate_detections(all_boxes, tempdir)
                   │    │          │       │                   │          └ 'C:\\Users\\ZH\\AppData\\Local\\Temp\\tmpi3v_j1ad'
                   │    │          │       │                   └ [[array([[ 3.06761780e+02, -6.02231979e-01,  3.50264893e+02,
                   │    │          │       │                              3.41639709e+01,  1.45626113e-01],
                   │    │          │       │                            [ 3.04794586e+...
                   │    │          │       └ <function VOCDetection.evaluate_detections at 0x0000019A3C084550>
                   │    │          └ <yolox.data.datasets.voc.VOCDetection object at 0x0000019A4301F670>
                   │    └ <torch.utils.data.dataloader.DataLoader object at 0x0000019A430A3B50>
                   └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x0000019A430A35E0>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\data\datasets\voc.py", line 239, in evaluate_detections
    self._write_voc_results_file(all_boxes)
    │    │                       └ [[array([[ 3.06761780e+02, -6.02231979e-01,  3.50264893e+02,
    │    │                                  3.41639709e+01,  1.45626113e-01],
    │    │                                [ 3.04794586e+...
    │    └ <function VOCDetection._write_voc_results_file at 0x0000019A3C084670>
    └ <yolox.data.datasets.voc.VOCDetection object at 0x0000019A4301F670>

  File "e:\project\model\yolo\yolox-main\yolox-main\yolox\data\datasets\voc.py", line 274, in _write_voc_results_file
    if dets == []:
       └ array([[ 3.06761780e+02, -6.02231979e-01,  3.50264893e+02,
                  3.41639709e+01,  1.45626113e-01],
                [ 3.04794586e+02...

ValueError: operands could not be broadcast together with shapes (20,5) (0,) 

Solution: in YOLOX-main\yolox\data\datasets\voc.py, find the _write_voc_results_file method of the VOCDetection class and change the check if dets == []: to if 0 in dets.shape:. With that change, rerunning the command trains without error.
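For a quick sanity check of the patched guard outside of YOLOX, here is a minimal, self-contained sketch; the sample boxes and the loop are only illustrative (the values are copied from the traceback above), but the guard is exactly the line used in the fix:

import numpy as np

# Stand-ins for what VOCDetection stores per class and per image:
# one row per detected box, columns x1, y1, x2, y2, score.
dets_found = np.array([[306.76, -0.60, 350.26, 34.16, 0.146]])
dets_empty = np.empty((0, 5))

for dets in (dets_found, dets_empty):
    # Patched guard: a pure shape test, no array-vs-list comparison.
    if 0 in dets.shape:
        print("no detections for this class/image, skip writing")
        continue
    print(f"write {dets.shape[0]} detection(s) to the VOC results file")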

Here is an explanation of the error:

The error "ValueError: operands could not be broadcast together with shapes (20,5) (0,)" is raised from _write_voc_results_file(), which yolox.data.datasets.voc.VOCDetection.evaluate_detections() calls while writing the VOC results files. It occurs when a non-empty NumPy array of detections, here of shape (20, 5), is compared against an empty list, which NumPy converts to an array of shape (0,) and then fails to broadcast.

The relevant part of the evaluate_detections() method is:

def evaluate_detections(self, all_boxes, tempdir):
    # ...
    self._write_voc_results_file(all_boxes)
    # ...

And inside _write_voc_results_file():

def _write_voc_results_file(self, all_boxes):
    # ...
    if dets == []:
        continue
    # ...

The problem is this emptiness check on dets. For each class and each image, all_boxes holds the detections as a NumPy array with one row per box and five columns (x1, y1, x2, y2, score), as the traceback shows. Comparing such a non-empty array to the empty list with dets == [] triggers an element-wise comparison, and NumPy cannot broadcast the (20, 5) array against the empty shape (0,), so it raises the ValueError. A shape-based test such as 0 in dets.shape checks for emptiness without any broadcasting, which is why the fix works.

An alternative is to make sure that all_boxes always contains valid detection data whenever evaluate_detections() is called, but patching the check in voc.py is the simpler fix.
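To see the failure in isolation, here is a small standalone repro in plain NumPy, independent of YOLOX; note that depending on the installed NumPy version the comparison may only emit a warning instead of raising, but the version used in this run raises:

import numpy as np

dets = np.zeros((20, 5))  # a non-empty detections array, same shape as in the traceback

# Comparing an ndarray to a Python list is element-wise: NumPy converts []
# to an array of shape (0,) and then cannot broadcast it against (20, 5).
try:
    if dets == []:
        pass
except ValueError as err:
    print(err)  # operands could not be broadcast together with shapes (20,5) (0,)

# The shape-based check never compares array contents, so it works in both cases:
print(0 in dets.shape)               # False -> detections present
print(0 in np.empty((0, 5)).shape)   # True  -> nothing to write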
