Error scenario
While debugging, the following code raised an error:
train_loader = DataLoader(dataset=train_dataset, batch_size=args.batch_size,
                          shuffle=True, drop_last=True,
                          num_workers=args.loader_workers, pin_memory=True)
Error output:
Traceback (most recent call last):
File "end2end.py", line 75, in <module>
main(config, mydevice)
File "end2end.py", line 33, in main
shuffle=True, drop_last=True, num_workers=args.loader_workers, pin_memory=True)
File "/data/yiheng_huang/miniconda3/envs/StyleGesture/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 270, in __init__
sampler = RandomSampler(dataset, generator=generator) # type: ignore[arg-type]
File "/data/yiheng_huang/miniconda3/envs/StyleGesture/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 103, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
Root cause
This was puzzling at first: isn't this just an ordinary call to the official DataLoader? The answer turned up in the comment section of another blog post: this error can occur when len(dataset) == 0.
Stepping in with pdb confirmed that len(train_dataset) was indeed 0. An earlier quit during a debugging session had left the data preprocessing incomplete, so the dataset loaded empty. Regenerating the data fixed the problem.
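The chain of events above can be reproduced in a few lines. With shuffle=True, DataLoader constructs a RandomSampler internally, and RandomSampler rejects a dataset of length 0 in its __init__, which is exactly the ValueError in the traceback. A minimal sketch (using a stand-in TensorDataset, since the original dataset class is not shown), plus the kind of guard that would have surfaced the real problem earlier:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Minimal reproduction: an empty dataset makes DataLoader(shuffle=True)
# build a RandomSampler with num_samples=0, which raises ValueError
# at construction time.
empty_dataset = TensorDataset(torch.empty(0, 3))
try:
    DataLoader(empty_dataset, batch_size=4, shuffle=True)
except ValueError as e:
    print(e)  # "num_samples should be a positive integer value, ..."

# A guard before building the real loader turns the confusing sampler
# error into a clear message about the actual problem (bad/missing
# preprocessed data). `train_dataset` here is a stand-in for the real one.
train_dataset = TensorDataset(torch.randn(10, 3))
assert len(train_dataset) > 0, "train_dataset is empty - re-run preprocessing"
train_loader = DataLoader(train_dataset, batch_size=4,
                          shuffle=True, drop_last=True)
```

The assert fails fast at the point where the dataset is known to be wrong, rather than deep inside DataLoader's sampler setup.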