docs: add LLaMA-Efficient-Tuning to awesome-transformers (#25441)

Co-authored-by: statelesshz <jihuazhong1@huawei.com>
Alan Ji 2023-08-10 23:13:39 +08:00 committed by GitHub
parent a7da2996a0
commit 347001237a


@@ -601,3 +601,9 @@ All Hugging Face models and pipelines can be seamlessly integrated into BentoML
Keywords: BentoML, Framework, Deployment, AI Applications
## [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning)
[LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning) offers a user-friendly fine-tuning framework that incorporates PEFT. The repository includes training (fine-tuning) and inference examples for LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, and other LLMs. A ChatGLM version is also available in [ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning).
Keywords: PEFT, fine-tuning, LLaMA-2, ChatGLM, Qwen