Reproducing the Alpaca Large Language Model
Reference: https://github.com/tatsu-lab/stanford_alpaca
The process breaks down into the following steps:
- Download the pretrained LLaMA weights. These are publicly mirrored on Hugging Face; pick a size to match your hardware, e.g. llama-7b from https://huggingface.co/decapoda-research/llama-7b-hf (clone with git-lfs; see the sketch after the command below).
- Configure the training arguments following the template in the official README, for example:
```bash
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
    --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir <your_output_dir> \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True
```
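Note that these defaults imply an effective batch size of 4 GPUs × 4 per-device × 8 accumulation steps = 128 sequences per optimizer update.

For the download step referenced above, a minimal sketch follows. It assumes git-lfs is installed, that the decapoda-research/llama-7b-hf mirror is still hosted on Hugging Face, and that alpaca_data.json sits at the root of the stanford_alpaca repo's main branch (hypothetical paths; verify before running):

```bash
# Enable Git LFS so the large weight shards are actually fetched,
# not just their pointer files.
git lfs install

# Clone the HF-converted LLaMA-7B checkpoint and tokenizer
# (repo availability is an assumption; mirrors come and go).
git clone https://huggingface.co/decapoda-research/llama-7b-hf

# Fetch the 52K instruction-following dataset from the Alpaca repo
# (raw-file path assumed from the repo's main branch).
curl -L -O https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
```

With those files in place, pass the clone directory as --model_name_or_path and the JSON file as --data_path in the torchrun command above.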