Goal: directly train the model fine-tuned in step 2.
Reference:
https://blog.csdn.net/weixin_53610475/article/details/126306184
After configuring everything as described, training fails with several errors:
1. num_samples should be a positive integer value, but got num_samples=0
Fix: edit
mmediting/mmedit/datasets/builder.py
if dist:
    sampler = DistributedSampler(
        dataset,
        world_size,
        rank,
        shuffle=shuffle,
        samples_per_gpu=samples_per_gpu,
        seed=seed)
    shuffle = False
    batch_size = samples_per_gpu
    num_workers = workers_per_gpu
else:
    sampler = None
    shuffle = False  #### add this line
    batch_size = num_gpus * samples_per_gpu
    num_workers = num_gpus * workers_per_gpu
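For context on why this works: the num_samples=0 message comes from PyTorch's RandomSampler, which DataLoader constructs when shuffle=True and no sampler is given; on an empty dataset it raises immediately, whereas shuffle=False (SequentialSampler) defers the problem until iteration (see error 2 below). So this edit silences the symptom rather than the root cause. A minimal stdlib mimic of that behavior, with no PyTorch dependency:

```python
# Mimic (not the real PyTorch code) of how DataLoader picks a sampler
# and why shuffle=True fails fast when the dataset is empty.

class SequentialSampler:
    """Yields indices 0..len-1; never checks the dataset length up front."""
    def __init__(self, data_source):
        self.data_source = data_source

    def __iter__(self):
        return iter(range(len(self.data_source)))

class RandomSampler:
    """Checks the dataset length eagerly, like torch's RandomSampler."""
    def __init__(self, data_source):
        self.num_samples = len(data_source)
        if self.num_samples <= 0:
            # This is the check that produces the reported error message.
            raise ValueError(
                "num_samples should be a positive integer value, "
                f"but got num_samples={self.num_samples}")

def pick_sampler(dataset, shuffle):
    # shuffle=True -> eager length check (error 1);
    # shuffle=False -> silently yields nothing on an empty dataset.
    return RandomSampler(dataset) if shuffle else SequentialSampler(dataset)
```

With an empty dataset, pick_sampler([], True) raises the num_samples=0 error, while pick_sampler([], False) returns a sampler that simply yields no indices.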
2. Error: return next(self._sampler_iter) StopIteration
Fix: this was a data path problem; correcting the path resolves it.
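The StopIteration here is the downstream symptom of error 1: with shuffle=False, an empty dataset (caused by a wrong data path) produces a loader that yields nothing. A hypothetical sanity-check helper (check_data_dir is not part of mmediting, just an illustration) to catch a bad path before training starts:

```python
import glob
import os

def check_data_dir(path, exts=(".png", ".jpg")):
    """Hypothetical helper: verify the dataset directory exists and
    actually contains images. An empty scan here is what later
    surfaces as StopIteration from the DataLoader iterator.
    Returns the number of images found."""
    if not os.path.isdir(path):
        raise FileNotFoundError(f"data root not found: {path}")
    files = [f for ext in exts
             for f in glob.glob(os.path.join(path, f"*{ext}"))]
    if not files:
        raise ValueError(f"no images with extensions {exts} under {path}")
    return len(files)
```

Calling this on each data root from the config (e.g. the LQ and GT folders) before building the dataset turns a cryptic StopIteration into an explicit path error.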
3. Setting the parameter cfg.gpus = 2 raises:
AssertionError: MMDataParallel only supports single GPU training, if you need to train with multiple GPUs, please use MMDistributedDataParallel instead.
Does this mean training is limited to a single GPU?
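Only through this entry point, yes: the plain train script wraps the model in MMDataParallel, which by design drives a single GPU, so raising cfg.gpus alone just trips this assertion. Multi-GPU training in OpenMMLab repos goes through the distributed launcher, which wraps the model in MMDistributedDataParallel instead. Assuming the standard mmediting repo layout (the config path is a placeholder):

```shell
# Launch distributed training on 2 GPUs (standard OpenMMLab convention);
# replace configs/your_config.py with your actual config file.
./tools/dist_train.sh configs/your_config.py 2
```

Run from the repository root; the script sets up torch.distributed with one process per GPU, so cfg.gpus does not need to be edited by hand.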