Add link to existing documentation (#17931)
This commit is contained in:
parent a045cbd6c9
commit 7b18702ca7
@@ -114,15 +114,6 @@ If you want to directly load such a sharded checkpoint inside a model without us
## Low memory loading
Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but when loading a pretrained model, why keep the random weights in memory at all? The option `low_cpu_mem_usage` discards the weights of the randomly initialized model, then progressively loads the pretrained weights in their place, and finally performs a random initialization for any weights that are still missing (for instance, if you are loading a model with a newly initialized head for a fine-tuning task).
It's very easy to use: just add `low_cpu_mem_usage=True` to your call to [`~PreTrainedModel.from_pretrained`]:
```py
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
```
This can be used in conjunction with a sharded checkpoint.
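
For example, here is a minimal sketch combining the two features; the local directory name and the `"200MB"` shard size below are illustrative choices, not values from the guide:

```py
from transformers import AutoModel

# Save a copy of the model as a sharded checkpoint (shard size chosen for illustration)
model = AutoModel.from_pretrained("bert-base-cased")
model.save_pretrained("sharded-bert", max_shard_size="200MB")

# Reload the sharded checkpoint without keeping a full set of random weights in memory
model = AutoModel.from_pretrained("sharded-bert", low_cpu_mem_usage=True)
```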
Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but in order to use that model in a low memory setting, we recommend leveraging our tools based on the Accelerate library.
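
As a rough sketch of what that looks like in practice (assuming the `accelerate` package is installed), passing `device_map="auto"` to [`~PreTrainedModel.from_pretrained`] enables the low-memory loading path and dispatches the weights across the available devices:

```py
from transformers import AutoModel

# Requires the accelerate package; device_map="auto" implies low_cpu_mem_usage=True
# and spreads the weights across the available devices (GPUs first, then CPU).
model = AutoModel.from_pretrained("bert-base-cased", device_map="auto")
```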
Please read the following guide for more information: [Large model loading using Accelerate](./main_classes/model#large-model-loading)