use D-Adaptation AdamPreprint optimizer | {}
running training / 学習開始
num train images * repeats / 学習画像の数×繰り返し回数: 1850
num reg images / 正則化画像の数: 0
num batches per epoch / 1epochのバッチ数: 370
num epochs / epoch数: 5
batch size per device / バッチサイズ: 5
gradient accumulation steps / 勾配を合計するステップ数 = 1
total optimization steps / 学習ステップ数: 1850
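The counts in the log are consistent with each other: 1850 training images × repeats, at a per-device batch size of 5, give 370 batches per epoch, and 370 batches over 5 epochs with no gradient accumulation give 1850 optimization steps. A minimal sketch of that arithmetic (plain Python with made-up variable names, not kohya_ss internals):

import math

# Values copied from the log above.
train_images_times_repeats = 1850
batch_size = 5
num_epochs = 5
gradient_accumulation_steps = 1

# Batches per epoch: dataset size divided by the per-device batch size.
batches_per_epoch = math.ceil(train_images_times_repeats / batch_size)        # 370

# Optimizer steps per epoch shrink if gradients are accumulated over several batches.
steps_per_epoch = math.ceil(batches_per_epoch / gradient_accumulation_steps)  # 370

total_optimization_steps = steps_per_epoch * num_epochs                       # 1850
print(batches_per_epoch, total_optimization_steps)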
fatal: not a git repository (or any of the parent directories): .git
steps: 0%| | 0/1850 [00:00<?, ?it/s]
epoch 1/5
Traceback (most recent call last):
File "F:\stable diffusion\moudle train\kohya_ss-master\train_network.py", line 1009, in <module>
trainer.train(args)
File "F:\stable diffusion\moudle train\kohya_ss-master\train_network.py", line 822, in train
optimizer.step()
File "F:\stable diffusion\moudle train\kohya_ss-master\venv\lib\site-packages\accelerate\optimizer.py", lin
Kohya_ss LoRA training error (RuntimeError, raise subprocess.CalledProcessError)