- When loading the first adapter via `PeftModel.from_pretrained`, you can name it with the `adapter_name` argument; otherwise the default adapter name `default` is used.
- To load another adapter, use `PeftModel`'s `load_adapter()` method, e.g. `model.load_adapter(peft_model_path, adapter_name)`.
- To switch between adapters, use `PeftModel`'s `set_adapter()` method, e.g. `model.set_adapter(adapter_name)`.
- To disable adapters, use the `disable_adapter()` context manager, e.g. `with model.disable_adapter():`.
A note specific to the LoRA method: to merge and unload the currently active adapter, so that the LoRA weights are added into the base model's weights and the injected PEFT modules are removed, restoring the base transformers model (with the added LoRA weights kept), use the `merge_and_unload()` method, e.g. `model = model.merge_and_unload()`.
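Numerically, what `merge_and_unload()` folds into each targeted linear layer is the LoRA update `W' = W + (alpha / r) * B @ A`. A minimal sketch of that arithmetic in plain Python (toy shapes and values chosen for illustration, not the PEFT implementation):

```python
# Sketch of the LoRA merge arithmetic that merge_and_unload() performs
# per target linear layer: W' = W + (alpha / r) * B @ A.
# Toy 2x2 example; real LoRA matrices are much larger.

def matmul(B, A):
    # Naive matrix multiply: (d x r) @ (r x k) -> (d x k).
    return [[sum(B[i][t] * A[t][j] for t in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def lora_merge(W, A, B, alpha, r):
    # Fold the scaled low-rank update into the frozen base weight.
    scaling = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scaling * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (d=2, k=2)
A = [[1.0, 2.0]]              # LoRA A matrix (r=1, k=2)
B = [[0.5], [1.0]]            # LoRA B matrix (d=2, r=1)

merged = lora_merge(W, A, B, alpha=2, r=1)
print(merged)  # -> [[2.0, 2.0], [2.0, 5.0]]
```

After the merge, the adapter modules carry no extra information and can be removed, which is why the returned model is a plain transformers model again.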
Example:

```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

model_name = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
    use_auth_token=True,
)

# Load the first adapter and give it a name.
model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b", adapter_name="eng_alpaca")
# Load a second adapter onto the same model under a different name.
model.load_adapter("22h/cabrita-lora-v0-1", adapter_name="portuguese_alpaca")

# evaluate() is assumed to be defined elsewhere in the original example
# (it formats the instruction into a prompt and calls model.generate()).
model.set_adapter("eng_alpaca")
instruction = "Tell me about alpacas."
print(evaluate(instruction))

model.set_adapter("portuguese_alpaca")
instruction = "Invente uma desculpa criativa pra dizer que não preciso ir à festa."
print(evaluate(instruction))

# Run with all adapters disabled, i.e. the plain base model.
with model.disable_adapter():
    instruction = "Invente uma desculpa criativa pra dizer que não preciso ir à festa."
    print(evaluate(instruction))
```