QLoRA Code Walkthrough

For the theory behind QLoRA, see:
BiliBili: 4-bit quantization and QLoRA model training
Zhihu: QLoRA (Quantized LoRA) explained

Download the llama3-8b model

from modelscope import snapshot_download
model_dir = snapshot_download('LLM-Research/Meta-Llama-3-8B-Instruct')

Set up quantization_config

import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the data type introduced in the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for computation
)

Load the model

Loading the quantized llama3-8b model takes roughly 6 GB of GPU memory.

from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer, DataCollatorForSeq2Seq

model = AutoModelForCausalLM.from_pretrained(model_dir, quantization_config=quantization_config, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_dir)
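
As a quick check on the memory claim, the model's parameter footprint can be printed (get_memory_footprint reports parameter and buffer memory, not total GPU usage):

print(f"{model.get_memory_footprint() / 1024**3:.1f} GB")  # roughly 6 GB for the 4-bit weights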

The parameter dtypes within one decoder layer: every linear layer has been quantized, while the layernorms have not.
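
The listing below can be reproduced by iterating over the parameters of the first decoder layer (a minimal sketch):

for name, param in model.named_parameters():
    if name.startswith("model.layers.0."):
        print(name, param.dtype)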

model.layers.0.self_attn.q_proj.weight torch.uint8
model.layers.0.self_attn.k_proj.weight torch.uint8
model.layers.0.self_attn.v_proj.weight torch.uint8
model.layers.0.self_attn.o_proj.weight torch.uint8
model.layers.0.mlp.gate_proj.weight torch.uint8
model.layers.0.mlp.up_proj.weight torch.uint8
model.layers.0.mlp.down_proj.weight torch.uint8
model.layers.0.input_layernorm.weight torch.float16
model.layers.0.post_attention_layernorm.weight torch.float16

Prepare the model for k-bit training

prepare_model_for_kbit_training upcasts the non-quantized parameters (such as the layernorms) to float32 for training stability and enables input gradients so that gradient checkpointing works with the frozen quantized backbone.

from peft import prepare_model_for_kbit_training
model = prepare_model_for_kbit_training(model)
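
After preparation, the layernorm weights should have been upcast (a quick check, assuming the default peft upcasting behavior):

print(model.model.layers[0].input_layernorm.weight.dtype)  # expected: torch.float32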

Set up the LoRA parameters

The defaults are used here; the target_modules and modules_to_save parameters control exactly which modules are trained.
peft/utils/constants.py defines default LoRA target modules for each model family; for llama, LoRA is applied to the Q and V projections:

"llama": ["q_proj", "v_proj"],

from peft import LoraConfig, TaskType, get_peft_model

config = LoraConfig(task_type=TaskType.CAUSAL_LM)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# trainable params: 3,407,872 || all params: 8,033,669,120 || trainable%: 0.0424
print(model)  # the model structure with the LoRA adapters attached
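
To adapt more modules than the default Q and V projections, target_modules can be set explicitly. A minimal sketch (the module list matches the linear layers shown earlier; r and lora_alpha are the peft defaults, shown here for illustration):

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,            # LoRA rank (peft default)
    lora_alpha=8,   # scaling factor (peft default)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

Adapting all seven linear modules increases trainable parameters and memory use, but often improves quality over Q/V-only LoRA.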

Load and process the data

Dataset download: AI-ModelScope/alpaca-gpt4-data-zh
In the downloaded data, dataset_infos.json must be renamed to datasets_info.json for the dataset to load correctly.

from datasets import load_dataset

dataset = load_dataset("alpaca-data-zh")
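
Each record should contain the instruction, input, and output fields that process_func below relies on; the schema can be confirmed with:

print(dataset["train"].features)  # expect instruction / input / output columns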

def process_func(example):
    MAX_LENGTH = 256
    input_ids, attention_mask, labels = [], [], []
    # Tokenize the prompt. No padding or truncation here: we truncate
    # manually below, and padding is done by the DataLoader's collate_fn.
    input = example["input"] if example["input"] is not None else ''
    instruction = tokenizer("\n".join(["Human: " + example["instruction"], input]).strip() + "\n\nAssistant: ")
    # Tokenize the output, remembering to append the eos_token.
    response = tokenizer(example["output"] + tokenizer.eos_token)
    # Concatenate instruction + output into a single input sequence.
    input_ids = instruction["input_ids"] + response["input_ids"]
    attention_mask = instruction["attention_mask"] + response["attention_mask"]
    # Set the prompt positions to -100 so they are excluded from the loss.
    labels = [-100] * len(instruction["input_ids"]) + response["input_ids"]
    # Enforce the maximum length by truncation.
    if len(input_ids) > MAX_LENGTH:
        input_ids = input_ids[:MAX_LENGTH]
        attention_mask = attention_mask[:MAX_LENGTH]
        labels = labels[:MAX_LENGTH]
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "labels": labels
    }

tokenized_ds = dataset['train'].map(process_func, remove_columns=dataset['train'].column_names)
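
A quick sanity check on one processed example: decode the full sequence and the unmasked label tokens to confirm the Human/Assistant layout and that only the response carries loss.

sample = tokenized_ds[0]
print(tokenizer.decode(sample["input_ids"]))
print(tokenizer.decode([t for t in sample["labels"] if t != -100]))  # response tokens only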

Set up TrainingArguments

With per_device_train_batch_size=1, training takes roughly 9 GB of GPU memory; the configuration below uses a per-device batch size of 4 with gradient accumulation.

args = TrainingArguments(
    output_dir="./llama3_4bit",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,  # effective batch size: 4 * 32 = 128
    logging_steps=10,
    num_train_epochs=1,
    save_strategy='epoch',
    learning_rate=1e-4,
    # gradient_checkpointing=True,
    # optim="paged_adamw_32bit"
)
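
If GPU memory is tight, the two commented-out options can be enabled: gradient_checkpointing recomputes activations during the backward pass, and paged_adamw_32bit is the paged optimizer from the QLoRA paper, which pages optimizer states to CPU RAM to survive memory spikes. A sketch (the batch size and accumulation values are illustrative, chosen to keep the same effective batch size):

args = TrainingArguments(
    output_dir="./llama3_4bit",
    per_device_train_batch_size=1,    # smaller batch, roughly 9 GB as noted above
    gradient_accumulation_steps=128,  # 1 * 128 = same effective batch size
    logging_steps=10,
    num_train_epochs=1,
    save_strategy='epoch',
    learning_rate=1e-4,
    gradient_checkpointing=True,      # trade compute for activation memory
    optim="paged_adamw_32bit",        # paged optimizer from the QLoRA paper
)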

Train

trainer = Trainer(
    model=model,
    args=args,
    tokenizer=tokenizer,
    train_dataset=tokenized_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),
)
trainer.train(resume_from_checkpoint=False)

Load the QLoRA adapter

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_path = model_dir  # path to the llama3-8b base model
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=quantization_config, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model_qlora = PeftModel.from_pretrained(model=model, model_id="llama3_4bit/checkpoint-7")  # path to the QLoRA checkpoint

# Inference
ipt = tokenizer("Human: {}\n{}".format("怎么学习llm", "").strip() + "\n\nAssistant: ", return_tensors="pt").to(model.device)
tokenizer.decode(model_qlora.generate(**ipt, max_length=128, do_sample=True)[0], skip_special_tokens=True)

Merge the LoRA weights

The merged model takes up about 5.4 GB.

merge_model = model_qlora.merge_and_unload()
merge_model.save_pretrained("llama3")
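
To make the merged checkpoint self-contained, save the tokenizer alongside it; the directory can then be loaded like any regular model (a usage sketch; the 5.4 GB size suggests the merged weights are stored 4-bit, so bitsandbytes would still be needed at load time):

tokenizer.save_pretrained("llama3")

from transformers import AutoModelForCausalLM
merged_model = AutoModelForCausalLM.from_pretrained("llama3", device_map="auto")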