Docs / PEFT: Add PEFT API documentation (#31078)
* add peft references
* add peft references
* Update docs/source/en/peft.md
* Update docs/source/en/peft.md
This commit is contained in:
parent 779bc360ff
commit 4f98b14465
@@ -81,6 +81,8 @@ model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
```
Check out the [API documentation](#transformers.integrations.PeftAdapterMixin) section below for more details.
## Load in 8bit or 4bit
The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because they save memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:
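A minimal sketch of such a call, assuming `bitsandbytes` is installed and using the `ybelkada/opt-350m-lora` checkpoint purely as a placeholder:

```py
from transformers import AutoModelForCausalLM

# placeholder adapter checkpoint, chosen only for illustration
peft_model_id = "ybelkada/opt-350m-lora"

# load_in_8bit quantizes the base weights with bitsandbytes,
# and device_map="auto" spreads layers across the available devices
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True)
```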
@@ -227,6 +229,19 @@ lora_config = LoraConfig(
model.add_adapter(lora_config)
```
## API docs
[[autodoc]] integrations.PeftAdapterMixin
- load_adapter
- add_adapter
- set_adapter
- disable_adapters
- enable_adapters
- active_adapters
- get_adapter_state_dict
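As a rough illustration of how these mixin methods chain together, the following sketch loads an adapter, switches it on and off, and reads back its weights; the model and adapter names are placeholders rather than values taken from this diff, and `peft` must be installed:

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# attach a PEFT adapter under an explicit name and make it active
model.load_adapter("ybelkada/opt-350m-lora", adapter_name="lora_1")
model.set_adapter("lora_1")

print(model.active_adapters())  # names of the currently active adapter(s)

model.disable_adapters()   # temporarily run the bare base model
model.enable_adapters()    # re-enable the attached adapter

# adapter-only weights, e.g. for saving or inspection
adapter_state = model.get_adapter_state_dict("lora_1")
```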
<!--
TODO: (@younesbelkada @stevhliu)