mirror of https://github.com/huggingface/transformers.git
Added a --reduce_memory option to the training script to keep training data on disc as a memmap rather than in memory
This commit is contained in:
parent
7d1ae644ef
commit
06a30cfdf3
@@ -58,7 +58,9 @@ recent GPUs. `--max_seq_len` defaults to 128 but can be set as high as 512.
 Higher values may yield stronger language models at the cost of slower and more memory-intensive training
 
 In addition, if memory usage is an issue, especially when training on a single GPU, reducing `--train_batch_size` from
-the default 32 to a lower number (4-16) can be helpful.
+the default 32 to a lower number (4-16) can be helpful. There is also a `--reduce_memory` option for both the
+`pregenerate_training_data.py` and `finetune_on_pregenerated.py` scripts that spills data to disc in shelf objects
+or numpy memmaps rather than retaining it in memory, which hugely reduces memory usage with little performance impact.
 
 ###Examples
 #####Simple fine-tuning
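For illustration only, here is a minimal Python sketch of the disc-spilling idea the new `--reduce_memory` flag describes: pregenerated examples are written to a numpy memmap (the scripts can also use `shelve` shelf objects) instead of being accumulated in an in-memory list, so resident memory stays roughly constant however large the corpus is. The file name, array shape, and token values below are hypothetical and not taken from the scripts themselves.

```python
# Illustrative sketch only (not the transformers example code): the
# --reduce_memory idea of keeping pregenerated training data in an on-disc
# numpy memmap instead of an in-memory Python list.
import tempfile
from pathlib import Path

import numpy as np

max_seq_len = 128        # matches the --max_seq_len default mentioned above
num_examples = 100_000   # hypothetical corpus size

workdir = Path(tempfile.mkdtemp())
memmap_path = str(workdir / "input_ids.memmap")   # hypothetical file name

# Writer side (roughly what pregeneration would do): the array's pages live
# on disc, so RAM usage stays small no matter how large num_examples gets.
input_ids = np.memmap(memmap_path, dtype=np.int32, mode="w+",
                      shape=(num_examples, max_seq_len))
input_ids[0, :5] = [101, 2023, 2003, 1037, 102]   # e.g. BERT token IDs
input_ids.flush()
del input_ids  # close the writable map

# Reader side (roughly what fine-tuning would do): reopen the same file
# read-only and index it like a normal array.
reloaded = np.memmap(memmap_path, dtype=np.int32, mode="r",
                     shape=(num_examples, max_seq_len))
print(reloaded[0, :5])   # [ 101 2023 2003 1037  102]
```

In the actual scripts the flag is simply passed on the command line of either `pregenerate_training_data.py` or `finetune_on_pregenerated.py`, alongside the existing options such as `--train_batch_size` and `--max_seq_len`.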