PEFT
Technical approaches to fine-tuning large language models
张博208 · knowledge curator
Understanding Low-Rank Adaptation (LoRA) for Efficient Fine-Tuning of Large Language Models
This blog post goes into detail about how LoRA works to fine-tune LLMs, following the methodology set out in the paper "LoRA: Low-Rank Adaptation of Large Language Models". Original post, 2024-07-26.
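The LoRA method the post covers keeps the pretrained weight frozen and learns a low-rank update: the adapted layer computes `W x + (alpha / r) * B A x`, where only the small matrices `A` and `B` are trained. A minimal sketch (all sizes, names, and the scaling choice here are illustrative assumptions, not taken from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4  # hypothetical dimensions and rank

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable, small random init
B = np.zeros((d_out, r))                 # trainable, initialized to zero

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B receive gradient updates
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted model initially matches the frozen one
assert np.allclose(lora_forward(x), W @ x)
```

Zero-initializing `B` is what makes the adapted model start out identical to the base model, so fine-tuning begins from the pretrained behavior.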
A Summary of Mainstream Fine-Tuning Methods: LoRA, Adapter, Prefix-tuning, P-tuning, Prompt-tuning
One article to get a clear picture of LoRA, Prompt Tuning, P-Tuning, Adapter, Prefix and other LLM fine-tuning methods. Original post, 2024-07-16.
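Among the methods this summary covers, Prompt Tuning is the simplest to illustrate: a small set of trainable "soft prompt" vectors is prepended to the input embeddings, while every weight of the base model stays frozen. A minimal sketch (all dimensions and variable names are hypothetical, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, n_prompt, seq_len = 16, 4, 6  # hypothetical model dim, prompt length, input length

soft_prompt = rng.normal(size=(n_prompt, d_model)) * 0.5  # trainable virtual-token embeddings
token_embeds = rng.normal(size=(seq_len, d_model))        # frozen embedding-layer output

# Prompt tuning: concatenate the learned vectors in front of the real token
# embeddings; only soft_prompt is updated during fine-tuning
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)
assert model_input.shape == (n_prompt + seq_len, d_model)
```

Prefix-tuning extends the same idea by injecting trainable vectors into the keys and values of every attention layer rather than only at the input.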