Checkpoint Recovery
I couldn't find any documentation on how to modify the finetune_lora.sh provided by the Llama2 Chinese community so that training can resume from a checkpoint, so here is a quick note.
1. Modify finetune_lora.sh
Add this argument to the training command:
--resume_from_checkpoint ${output_model}/checkpoint-1800 \
Replace checkpoint-1800 with the directory name of your own checkpoint folder.
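Instead of hard-coding checkpoint-1800, you can also pick the newest checkpoint automatically. A small sketch, assuming the Trainer's usual checkpoint-&lt;step&gt; naming under your output directory (the `out` directory and its contents here are a mocked-up example layout, created only for illustration):

```shell
# Mock output directory with several checkpoints, standing in for ${output_model}
mkdir -p out/checkpoint-200 out/checkpoint-900 out/checkpoint-1800

# Sort by the numeric step suffix and take the largest, i.e. the newest checkpoint
latest=$(ls -d out/checkpoint-* | sort -t - -k 2 -n | tail -n 1)
echo "latest checkpoint: ${latest}"
```

The resulting path can then be passed as `--resume_from_checkpoint ${latest}` in the script.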
2. Modify finetune_clm_lora.py (the same applies to full-parameter finetuning)
# Training
# (assumes os, sys, torch, and peft's set_peft_model_state_dict are already imported at the top of the file)
if training_args.do_train:
    checkpoint = None
    if training_args.resume_from_checkpoint is not None:
        resume_from_checkpoint = training_args.resume_from_checkpoint
        checkpoint_name = os.path.join(resume_from_checkpoint, "pytorch_model.bin")
        if not os.path.exists(checkpoint_name):
            checkpoint_name = os.path.join(
                resume_from_checkpoint, "adapter_model.bin"
            )  # only LoRA model - LoRA config above has to fit
            resume_from_checkpoint = (
                False  # So the trainer won't try loading its state
            )
        # The two files above have a different name depending on how they were saved, but are actually the same.
        if os.path.exists(checkpoint_name):
            checkpoint = True
            print(f"Restarting from {checkpoint_name}")
            adapters_weights = torch.load(checkpoint_name)
            set_peft_model_state_dict(model, adapters_weights)
        else:
            print(f"Checkpoint {checkpoint_name} not found")
            # checkpoint = Fa
    elif last_checkpoint is not None:
        checkpoint = last_checkpoint
    if torch.__version__ >= "2" and sys.platform != "win32":
        model = torch.compile(model)
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
    trainer.save_model()  # Saves the tokenizer too for easy upload
That's it — training will now resume from the specified checkpoint.
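The fallback logic in the snippet above (prefer a full pytorch_model.bin, otherwise fall back to the LoRA-only adapter_model.bin) can be isolated into a small helper. A minimal sketch; the function name `resolve_checkpoint` is my own, not part of the script:

```python
import os


def resolve_checkpoint(resume_dir: str):
    """Probe a checkpoint directory the same way the snippet above does.

    Returns (weights_path, can_resume_trainer_state):
    - a full pytorch_model.bin also lets the Trainer restore optimizer/scheduler state;
    - an adapter_model.bin holds only the LoRA weights, so trainer state is skipped;
    - (None, False) means no usable weights were found.
    """
    full = os.path.join(resume_dir, "pytorch_model.bin")
    if os.path.exists(full):
        return full, True
    adapter = os.path.join(resume_dir, "adapter_model.bin")
    if os.path.exists(adapter):
        return adapter, False
    return None, False
```

This makes the two cases explicit: only when the full weights file is present does it make sense to let the Trainer also reload its own state (optimizer, scheduler, RNG); an adapter-only checkpoint just restores the LoRA weights into the model.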