Docs / PEFT: Add PEFT API documentation (#31078)

* add peft references

* Update docs/source/en/peft.md

Younes Belkada 2024-05-28 15:04:43 +02:00 committed by GitHub
parent 779bc360ff
commit 4f98b14465


@@ -81,6 +81,8 @@ model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
Check out the [API documentation](#transformers.integrations.PeftAdapterMixin) section below for more details.
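For reference, a self-contained sketch of the same flow, assuming placeholder repositories (`facebook/opt-350m` as the base model and `ybelkada/opt-350m-lora` as the adapter; substitute your own):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder identifiers; replace with your own base model and PEFT adapter repos.
model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

# Load the base model, then attach the trained PEFT adapter weights on top of it.
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)

# Generate with the adapter active.
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```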
## Load in 8bit or 4bit
The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:
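A minimal sketch of that call, using a placeholder checkpoint and assuming `bitsandbytes` is installed (swap `load_in_8bit=True` for `load_in_4bit=True` to load in 4bit precision):
```py
from transformers import AutoModelForCausalLM

# Placeholder checkpoint; requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    load_in_8bit=True,   # or load_in_4bit=True for 4bit precision
    device_map="auto",   # distribute the quantized weights across available devices
)
```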
@@ -227,6 +229,19 @@ lora_config = LoraConfig(
model.add_adapter(lora_config)
```
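For context, a fuller sketch of attaching a freshly initialized LoRA adapter to a base model; the checkpoint and LoRA hyperparameters below are illustrative assumptions, not recommendations:
```py
from peft import LoraConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder checkpoint

# Illustrative LoRA hyperparameters; tune them for your own model and task.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Attach the new, untrained adapter so it can be fine-tuned.
model.add_adapter(lora_config)
```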
## API docs
[[autodoc]] integrations.PeftAdapterMixin
- load_adapter
- add_adapter
- set_adapter
- disable_adapters
- enable_adapters
- active_adapters
- get_adapter_state_dict
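As a hedged illustration of how these methods compose (the model and adapter repositories below are placeholders, and the adapter names are arbitrary):
```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder base model

# Load two adapters under explicit names so they can be switched at runtime.
model.load_adapter("ybelkada/opt-350m-lora", adapter_name="lora_a")  # placeholder adapter repo
model.load_adapter("ybelkada/opt-350m-lora", adapter_name="lora_b")  # placeholder adapter repo

model.set_adapter("lora_a")        # make "lora_a" the active adapter
print(model.active_adapters())     # -> ["lora_a"]

model.disable_adapters()           # temporarily run the bare base model
model.enable_adapters()            # re-enable the active adapter

# Extract only the adapter weights, e.g. to save or inspect them.
state_dict = model.get_adapter_state_dict("lora_a")
```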
<!--
TODO: (@younesbelkada @stevhliu)