Thomas Wolf
3b56427a1e
Merge pull request #1040 from FeiWang96/multi_gpu
Fix bug of multi-gpu training in lm finetuning
2019-08-20 17:13:44 +02:00
Duzeyao
d86b49ac86
swap optimizer.step and scheduler.step
2019-08-20 16:46:34 +08:00
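Since PyTorch 1.1 the learning-rate scheduler must be stepped after the optimizer, which is what the swap above implements. A minimal torch-free sketch of the required call ordering, using stub classes (StubOptimizer/StubScheduler are hypothetical stand-ins, not the real torch API):

```python
call_order = []

class StubOptimizer:
    """Stand-in for a torch optimizer; records when step() is called."""
    def step(self):
        call_order.append("optimizer.step")

class StubScheduler:
    """Stand-in for a torch LR scheduler; records when step() is called."""
    def step(self):
        call_order.append("scheduler.step")

optimizer = StubOptimizer()
scheduler = StubScheduler()

for _ in range(2):          # training loop (forward/backward elided)
    optimizer.step()        # update weights first (PyTorch >= 1.1 ordering)
    scheduler.step()        # then advance the learning-rate schedule
```

Calling them in the reverse order skips the first learning-rate value, which is why PyTorch warns about it since 1.1.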
Duzeyao
45ab8bf60e
Revert "Update finetune_on_pregenerated.py"
This reverts commit a1359b970c.
2019-08-20 16:40:39 +08:00
Zeyao Du
a1359b970c
Update finetune_on_pregenerated.py
2019-08-20 16:00:07 +08:00
Zeyao Du
28f7ca1f80
swap optimizer.step and scheduler.step
2019-08-20 15:58:42 +08:00
Thomas Wolf
5a49b793d9
Merge pull request #1023 from tuvuumass/patch-1
fix issue #824
2019-08-19 15:31:46 +02:00
Chi-Liang Liu
40acf6b52a
don't save model without training
2019-08-18 05:02:25 -04:00
wangfei
856a63da4d
Fix: save model/model.module
2019-08-18 11:03:47 +08:00
wangfei
1ef41b8337
Revert "Fix: save model/model.module"
This reverts commit 00e9c4cc96.
2019-08-18 11:03:12 +08:00
wangfei
00e9c4cc96
Fix: save model/model.module
2019-08-18 11:02:02 +08:00
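The "save model/model.module" fixes address checkpointing a model wrapped in torch.nn.DataParallel: the real weights live on the wrapper's .module attribute, and saving the wrapper prefixes every parameter name with "module.". A sketch of the unwrap-before-save pattern, with a dummy wrapper standing in for DataParallel (DummyParallelWrapper and model_to_save are illustrative names, not from the repo):

```python
class DummyParallelWrapper:
    """Stand-in for torch.nn.DataParallel: exposes the inner model as .module."""
    def __init__(self, module):
        self.module = module

def model_to_save(model):
    # Unwrap DataParallel/DistributedDataParallel before saving, so the
    # checkpoint's parameter names carry no "module." prefix.
    return model.module if hasattr(model, "module") else model

bare = object()
wrapped = DummyParallelWrapper(bare)
assert model_to_save(wrapped) is bare   # unwraps a parallel wrapper
assert model_to_save(bare) is bare      # leaves a plain model untouched
```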
Jason Phang
d8923270e6
Correct truncation for RoBERTa in 2-input GLUE
2019-08-16 16:30:38 -04:00
LysandreJik
7e7fc53da5
Fixing run_glue example with RoBERTa
2019-08-16 11:53:10 -04:00
wangfei
b8ff56896c
Fix bug of multi-gpu training in lm finetuning
2019-08-16 12:11:05 +08:00
LysandreJik
39f426be65
Added special tokens <pad> and <mask> to RoBERTa.
2019-08-13 15:19:50 -04:00
Julien Chaumond
baf08ca1d4
[RoBERTa] run_glue: correct pad_token + reorder labels
2019-08-13 12:51:15 -04:00
tuvuumass
ba4bce2581
fix issue #824
2019-08-13 11:26:27 -04:00
Julien Chaumond
912fdff899
[RoBERTa] Update run_glue for RoBERTa
2019-08-12 13:49:50 -04:00
Thomas Wolf
b4f9464f90
Merge pull request #960 from ethanjperez/patch-1
Fixing unused weight_decay argument
2019-08-07 10:09:55 +02:00
Thomas Wolf
d43dc48b34
Merge branch 'master' into auto_models
2019-08-05 19:17:35 +02:00
thomwolf
70c10caa06
add option mentioned in #940
2019-08-05 17:09:37 +02:00
thomwolf
b90e29d52c
working on automodels
2019-08-05 16:06:34 +02:00
Ethan Perez
28ba345ecc
Fixing unused weight_decay argument
Currently the L2 regularization is hard-coded to 0.01, even though a --weight_decay flag is implemented (but unused). I'm making this flag control the weight decay used for fine-tuning in this script.
2019-08-04 12:31:46 -04:00
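The fix makes the --weight_decay flag drive the optimizer's L2 term instead of a hard-coded 0.01. The usual pattern in these example scripts splits parameters into a decayed group and a non-decayed group (biases and LayerNorm weights excluded). A torch-free sketch, assuming named parameters arrive as (name, param) pairs; the function name is illustrative:

```python
def grouped_parameters(named_params, weight_decay):
    """Build optimizer parameter groups: apply weight decay to every
    parameter except biases and LayerNorm weights (the usual BERT recipe)."""
    no_decay = ("bias", "LayerNorm.weight")
    return [
        {"params": [p for n, p in named_params
                    if not any(nd in n for nd in no_decay)],
         "weight_decay": weight_decay},
        {"params": [p for n, p in named_params
                    if any(nd in n for nd in no_decay)],
         "weight_decay": 0.0},
    ]

# Usage with dummy (name, tensor) pairs; the decay value would come
# from the --weight_decay command-line flag.
params = [("encoder.weight", "w"), ("encoder.bias", "b"), ("LayerNorm.weight", "g")]
groups = grouped_parameters(params, weight_decay=0.01)
```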
Thomas Wolf
c054b5ee64
Merge pull request #896 from zijunsun/master
fix multi-gpu training bug when using fp16
2019-07-26 19:31:02 +02:00
zijunsun
f0aeb7a814
multi-gpu training also should be after apex fp16 (squad)
2019-07-26 15:23:29 +08:00
zijunsun
adb3ef6368
multi-gpu training also should be after apex fp16
2019-07-25 13:09:10 +08:00
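The two commits above reorder the training setup so that apex's fp16 initialization runs before the model is wrapped for multi-GPU: wrapping first would hand DataParallel a model whose parameters apex later replaces. A torch-free ordering sketch; amp_initialize and wrap_data_parallel are hypothetical stand-ins for apex.amp.initialize and torch.nn.DataParallel:

```python
setup_steps = []

def amp_initialize(model):
    """Stand-in for apex.amp.initialize: must see the bare model."""
    setup_steps.append("amp.initialize")
    return model

def wrap_data_parallel(model):
    """Stand-in for torch.nn.DataParallel wrapping."""
    setup_steps.append("DataParallel")
    return model

model = object()
model = amp_initialize(model)       # fp16 setup first
model = wrap_data_parallel(model)   # then multi-GPU wrapping
```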
Chi-Liang Liu
a7fce6d917
fix squad v1 error (na_prob_file should be None)
2019-07-24 16:11:36 +08:00
thomwolf
6070b55443
fix #868
2019-07-23 17:46:01 +02:00
thomwolf
2c9a3115b7
fix #858
2019-07-23 16:45:55 +02:00
Thomas Wolf
268c6cc160
Merge pull request #845 from rabeehk/master
fixed version issues in run_openai_gpt
2019-07-23 15:29:31 +02:00
Peiqin Lin
76be189b08
typos
2019-07-21 20:39:42 +08:00
Rabeeh KARIMI
f63ff536ad
fixed version issues in run_openai_gpt
2019-07-20 12:43:07 +02:00
Thomas Wolf
a615499076
Merge pull request #797 from yzy5630/fix-examples
fix some errors for distributed lm_finetuning
2019-07-18 23:32:33 +02:00
yzy5630
a1fe4ba9c9
use new API for save and load
2019-07-18 15:45:23 +08:00
yzy5630
a7ba27b1b4
add parser for adam
2019-07-18 08:52:51 +08:00
yzy5630
d6522e2873
change loss and optimizer to new API
2019-07-17 21:22:34 +08:00
thomwolf
71d597dad0
fix #800
2019-07-17 13:51:09 +02:00
yzy5630
123da5a2fa
fix errors for lm_finetuning examples
2019-07-17 09:56:07 +08:00
yzy5630
60a1bdcdac
fix some errors for distributed lm_finetuning
2019-07-17 09:16:20 +08:00
thomwolf
e848b54730
fix #792
2019-07-16 21:22:19 +02:00
thomwolf
1849aa7d39
update readme and pretrained model weight files
2019-07-16 15:11:29 +02:00
thomwolf
f31154cb9d
Merge branch 'xlnet'
2019-07-16 11:51:13 +02:00
thomwolf
76da9765b6
fix run_generation test
2019-07-15 17:52:35 +02:00
thomwolf
e691fc0963
update QA models tests + run_generation
2019-07-15 17:45:24 +02:00
thomwolf
15d8b1266c
update tokenizer - update squad example for xlnet
2019-07-15 17:30:42 +02:00
thomwolf
3b469cb422
updating squad for compatibility with XLNet
2019-07-15 15:28:37 +02:00
thomwolf
0e9825e252
small fix to run_glue
2019-07-14 23:43:28 +02:00
thomwolf
2397f958f9
updating examples and doc
2019-07-14 23:20:10 +02:00
thomwolf
c490f5ce87
added generation examples in tests
2019-07-13 15:26:58 +02:00
thomwolf
7d4b200e40
good quality generation example for GPT, GPT-2, Transfo-XL, XLNet
2019-07-13 15:25:03 +02:00
thomwolf
7322c314a6
remove python2 testing for examples
2019-07-12 14:24:08 +02:00