一、Definitions
1. Reinforcement learning: training a reward model
2. Reloading a saved reward model and continuing training
二、Implementation
https://www.kaggle.com/code/neuqsnail/open-llama-finetune-sequenceclassification/notebook#Save-and-reload-Model
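The notebook linked above covers saving and reloading a sequence-classification model. A minimal sketch of the same round trip, using a tiny randomly initialized Llama config so it runs without downloading weights (the tiny config is an illustration only; in practice you would pass your fine-tuned checkpoint path):

```python
import tempfile

import torch
from transformers import (
    AutoModelForSequenceClassification,
    LlamaConfig,
    LlamaForSequenceClassification,
)

# Tiny stand-in for a real reward model (num_labels=1 -> one scalar reward).
config = LlamaConfig(
    vocab_size=100, hidden_size=16, intermediate_size=32,
    num_hidden_layers=2, num_attention_heads=2, num_key_value_heads=2,
    num_labels=1, pad_token_id=0,
)
model = LlamaForSequenceClassification(config)

save_dir = tempfile.mkdtemp()
model.save_pretrained(save_dir)  # writes config.json + weights to save_dir

# Reload with the Auto class, exactly as you would to resume training later.
reloaded = AutoModelForSequenceClassification.from_pretrained(save_dir)

# The scalar score head survives the round trip unchanged.
ids = torch.tensor([[1, 2, 3, 4]])
with torch.no_grad():
    r1 = model(ids).logits
    r2 = reloaded(ids).logits
print(torch.allclose(r1, r2))
```

The same `save_pretrained` / `from_pretrained` pair works on the full-size reward model; only the checkpoint path changes.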
1. TRL reward-training example
# Note: LoRA training requires the task type to be SEQ_CLS
1. Download the TRL training scripts
2. Launch training with the following command
python examples/scripts/reward_modeling.py \
--model_name_or_path Qwen/Qwen2-0.5B-Instruct \
--dataset_name trl-lib/ultrafeedback_binarized \
--output_dir Qwen2-0.5B-Reward-LoRA \
--per_device_train_batch_size 8 \
--num_train_epochs 1 \
--gradient_checkpointing True \
--learning_rate 1.0e-4 \
--logging_steps 25 \
--eval_strategy steps \
--eval_steps 50 \
--max_length 2048 \
--lora_task_type SEQ_CLS \
--use_peft \
--lora_r 32 \
--lora_alpha 16
The corresponding script code:
import warnings
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, HfArgumentParser
from trl import (
    ModelConfig,
    RewardConfig,
    RewardTrainer,
    ScriptArguments,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
    setup_chat_format,
)

if __name__ == "__main__":
    parser = HfArgumentParser((ScriptArguments, RewardConfig, ModelConfig))
    script_args, training_args, model_args = parser.parse_args_into_dataclasses()
    # The remainder was truncated; reconstructed following TRL's reward_modeling.py example.
    tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path)
    # A reward model is a sequence classifier with a single scalar output head.
    model = AutoModelForSequenceClassification.from_pretrained(model_args.model_name_or_path, num_labels=1)
    dataset = load_dataset(script_args.dataset_name)
    trainer = RewardTrainer(
        model=model,
        processing_class=tokenizer,
        args=training_args,
        train_dataset=dataset[script_args.dataset_train_split],
        eval_dataset=dataset[script_args.dataset_test_split],
        peft_config=get_peft_config(model_args),
    )
    trainer.train()
    trainer.save_model(training_args.output_dir)
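Under the hood, `RewardTrainer` trains on (chosen, rejected) response pairs with the Bradley–Terry pairwise loss, `-log σ(r_chosen − r_rejected)`. A self-contained sketch of that loss on toy scores:

```python
import torch
import torch.nn.functional as F

# Toy scalar rewards the model assigned to a chosen and a rejected response.
rewards_chosen = torch.tensor([2.0, 0.5])
rewards_rejected = torch.tensor([0.0, 1.5])

# Pairwise Bradley-Terry loss: pushes chosen scores above rejected ones.
loss = -F.logsigmoid(rewards_chosen - rewards_rejected).mean()

# Per-pair terms: -log(sigmoid(2.0)) for the first pair (small, already
# correctly ordered) and -log(sigmoid(-1.0)) for the second (large penalty).
print(round(loss.item(), 4))
```

The loss is near zero when every chosen response already outscores its rejected counterpart, and grows as pairs are mis-ordered.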