commit 8d1d1ffde2
Author: Matthew Carrigan
Date:   2019-03-25 12:15:19 +00:00

    Corrected the displayed loss when gradient_accumulation_steps > 1
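When gradients are accumulated over several batches, the loss is divided by
gradient_accumulation_steps before backward(), so logging that scaled value
under-reports the true per-batch loss by the accumulation factor. A minimal
sketch of this kind of fix, assuming a loop in the style of the finetuning
script; the toy model, data, and variable names are illustrative stand-ins,
not the script's actual code:

```python
import torch
from torch import nn

# Toy stand-ins so the loop runs end to end; the real script trains BERT.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(16)]

gradient_accumulation_steps = 4  # hypothetical setting
tr_loss, nb_tr_steps = 0.0, 0

for step, (x, y) in enumerate(data):
    loss = nn.functional.mse_loss(model(x), y)
    if gradient_accumulation_steps > 1:
        # Scale down so the accumulated gradient matches a full-batch update.
        loss = loss / gradient_accumulation_steps
    loss.backward()

    # The fix: scale the *displayed* loss back up before logging; otherwise
    # the reported value is too small by the accumulation factor.
    tr_loss += loss.item() * gradient_accumulation_steps
    nb_tr_steps += 1
    print(f"step {step}: mean loss {tr_loss / nb_tr_steps:.4f}")

    if (step + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```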
commit 7d1ae644ef
Author: Matthew Carrigan
Date:   2019-03-21 17:02:18 +00:00

    Added a --reduce_memory option to the training script to keep training
    data on disk as a memmap rather than in memory
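The idea behind --reduce_memory: back the pregenerated feature arrays with
numpy memmaps, so they live on disk and are paged in on demand instead of
being held in RAM. A sketch of the mechanism; the shapes and file name here
are illustrative, not the script's actual values:

```python
import numpy as np

num_samples, seq_len = 100_000, 128  # hypothetical epoch size

# mode="w+" creates a disk-backed array; writes go through to the file.
input_ids = np.memmap("input_ids.memmap", mode="w+",
                      dtype=np.int32, shape=(num_samples, seq_len))
input_ids[0] = np.ones(seq_len, dtype=np.int32)
input_ids.flush()

# Reopening read-only pages rows in lazily, so peak RAM stays far below
# the num_samples * seq_len * 4 bytes an in-memory array would need.
readback = np.memmap("input_ids.memmap", mode="r",
                     dtype=np.int32, shape=(num_samples, seq_len))
print(readback[0][:5])  # -> [1 1 1 1 1]
```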
commit 8a861048dd
Author: Matthew Carrigan
Date:   2019-03-21 14:08:39 +00:00

    Fixed up the notes on a possible future low-memory path
commit a8a577ba93
Author: Matthew Carrigan
Date:   2019-03-21 14:05:52 +00:00

    Greatly reduced memory usage when pregenerating the data by writing it
    out on the fly without shuffling; the Sampler in the finetuning script
    will shuffle for us.
commit 0ae59e662d
Author: Matthew Carrigan
Date:   2019-03-21 14:04:17 +00:00

    Greatly reduced memory usage when pregenerating the data by writing it
    out on the fly without shuffling; the Sampler in the finetuning script
    will shuffle for us.
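The memory win in the two commits above comes from streaming each example to
disk as it is generated, in document order, and deferring shuffling to the
DataLoader's sampler at finetuning time. A sketch of the two halves, assuming
JSON-lines output; the file name and example fields are hypothetical:

```python
import json
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Pregeneration side: write each example as soon as it is built instead of
# accumulating the whole epoch in a list and shuffling it in memory.
with open("epoch_0.json", "w") as f:
    for doc_idx in range(1000):
        example = {"doc": doc_idx, "tokens": ["[CLS]", "hello", "[SEP]"]}
        f.write(json.dumps(example) + "\n")

# Finetuning side: RandomSampler visits indices in random order, so the
# sequential on-disk ordering never reaches the model.
dataset = TensorDataset(torch.arange(1000))
loader = DataLoader(dataset, sampler=RandomSampler(dataset), batch_size=32)
print(next(iter(loader)))  # indices arrive shuffled
```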
commit 7de5c6aa5e
Author: Matthew Carrigan
Date:   2019-03-20 16:44:04 +00:00

    PEP8 and formatting cleanups
commit 1798e98e5a
Author: Matthew Carrigan
Date:   2019-03-20 16:42:37 +00:00

    Added final TODOs
commit c64c2fc4c2
Author: Matthew Carrigan
Date:   2019-03-20 15:42:57 +00:00

    Fixed embarrassing indentation problem
commit 0540d360f2
Author: Matthew Carrigan
Date:   2019-03-20 15:36:51 +00:00

    Fixed logging
commit 976554a472
Author: Matthew Carrigan
Date:   2019-03-20 14:23:51 +00:00

    First commit of the new LM finetuning