From 06a30cfdf36df5370ac3fb18e4258688e7a00a13 Mon Sep 17 00:00:00 2001
From: Matthew Carrigan
Date: Thu, 21 Mar 2019 17:04:12 +0000
Subject: [PATCH] Added a --reduce_memory option to the training script to
 keep training data on disc as a memmap rather than in memory

---
 examples/lm_finetuning/README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/examples/lm_finetuning/README.md b/examples/lm_finetuning/README.md
index 1440a78a3bc..7c41870b7df 100644
--- a/examples/lm_finetuning/README.md
+++ b/examples/lm_finetuning/README.md
@@ -58,7 +58,9 @@
 recent GPUs. `--max_seq_len` defaults to 128 but can be set as high as 512. Higher values may yield stronger
 language models at the cost of slower and more memory-intensive training. In addition, if memory usage is an issue,
 especially when training on a single GPU, reducing `--train_batch_size` from
-the default 32 to a lower number (4-16) can be helpful.
+the default 32 to a lower number (4-16) can be helpful. There is also a `--reduce_memory` option for both the
+`pregenerate_training_data.py` and `finetune_on_pregenerated.py` scripts that spills data to disc in shelf objects
+or numpy memmaps rather than retaining it in memory, which greatly reduces memory usage with little performance impact.
 
 ###Examples
 #####Simple fine-tuning
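
For context, the pattern this flag enables looks roughly like the sketch below. This is a minimal illustration of the disc-backed storage idea, not the scripts' actual code: the helper name `make_example_store` and its parameters are assumptions made for the example. The real scripts spill to shelf objects or numpy memmaps as the diff describes; only the memmap variant is sketched here.

```python
# Minimal sketch of the --reduce_memory idea: keep tokenized training
# examples in a disc-backed numpy memmap instead of an in-memory array.
# The helper name and parameters are illustrative, not taken from the
# actual scripts.
import tempfile
from pathlib import Path

import numpy as np


def make_example_store(num_examples, seq_len, reduce_memory=False):
    """Return a (num_examples, seq_len) int32 store for token IDs."""
    if reduce_memory:
        # File-backed array: the OS pages data in and out on demand, so
        # resident memory stays small even for very large datasets.
        path = Path(tempfile.mkdtemp()) / "input_ids.memmap"
        return np.memmap(path, dtype=np.int32, mode="w+",
                         shape=(num_examples, seq_len))
    # Plain in-memory array: faster, but holds everything in RAM.
    return np.zeros((num_examples, seq_len), dtype=np.int32)


# Usage: indexing works like a normal array, but writes go straight
# through to the disc-backed file.
store = make_example_store(num_examples=1_000_000, seq_len=128,
                           reduce_memory=True)
store[0] = np.arange(128, dtype=np.int32)
```

The trade-off is RAM for disc I/O: the operating system pages the file in on demand, which is why the memory savings come with only a small performance cost.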