LLM Fine-Tuning with LoRA
- PEFT installation
- Using LoRA
PEFT installation
Since LoRA and AdaLoRA are both integrated into PEFT, installing PEFT is a prerequisite for using them.
Method 1: PyPI
To install 🤗 PEFT from PyPI:
pip install peft
Method 2: Source
New features that haven’t been released yet are added every day, which also means there may be some bugs. To try them out, install from the GitHub repository:
pip install git+https://github.com/huggingface/peft
If you’re working on contributing to the library or wish to play with the source code and see live results as you run the code, an editable version can be installed from a locally-cloned version of the repository:
git clone https://github.com/huggingface/peft
cd peft
pip install -e .
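After installing by either method, a quick check confirms that PEFT is importable and that the LoRA configuration class is available. This is only a minimal sanity-check sketch; the config values are illustrative, not recommended settings.

import peft
from peft import LoraConfig

print(peft.__version__)                # prints the installed PEFT version
print(LoraConfig(r=8, lora_alpha=32))  # constructs a default LoRA config to confirm the API loads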
Using LoRA
Recommended articles (a minimal usage sketch follows the list):
- Hugging Face official documentation (docs)
- Hugging Face officially endorsed article (Zhihu)
- Hugging Face official blog: PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware
- Step-by-step video tutorial: Boost Fine-Tuning Performance of LLM: Optimal Architecture w/ PEFT LoRA Adapter-Tuning on Your GPU
- Newly published at the end of August: Efficient Fine-Tuning with LoRA: A Guide to Choosing Optimal Parameters for Large Language Models
- Efficient fine-tuning of large models: an introduction to the PEFT framework
- A summary of large-model fine-tuning methods: LoRA, Adapter, Prefix-tuning, P-tuning, Prompt-tuning
- PEFT code walkthrough: Prefix Tuning / LoRA / P-Tuning / Prompt Tuning
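The core workflow those resources walk through looks roughly like the sketch below: wrap a Hugging Face causal LM with a LoRA adapter via PEFT, train only the adapter weights, and save the adapter separately. The checkpoint name facebook/opt-350m, the rank r=8, and the target_modules choice are illustrative assumptions, not recommendations taken from the articles above.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative checkpoint; swap in the model you actually want to fine-tune.
model_name = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA hyperparameters: r is the rank of the low-rank update, lora_alpha scales it,
# and target_modules selects which weight matrices receive adapters (here the
# attention query/value projections, a common choice for OPT-style models).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)

# Wrap the base model; only the small LoRA matrices remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Train with the usual transformers Trainer or your own loop, then save just the adapter:
model.save_pretrained("opt-350m-lora-adapter")  # writes only the adapter weights, a few MB

To reuse the result, load the base model again and attach the saved adapter with PeftModel.from_pretrained(base_model, "opt-350m-lora-adapter"); the base weights themselves are never modified.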
After several days of study, I found these articles to be the most valuable, so I have collected them here to save you from hunting for resources yourself; once you have read through them, you will have a solid grasp of the topic. If you find this useful, please like and bookmark.