Commit Graph

20 Commits

Author SHA1 Message Date
Thomas Wolf
3b56427a1e Merge pull request #1040 from FeiWang96/multi_gpu 2019-08-20 17:13:44 +02:00
  Fix bug of multi-gpu training in lm finetuning
Zeyao Du
28f7ca1f80 swap optimizer.step and scheduler.step 2019-08-20 15:58:42 +08:00
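
This swap presumably follows the PyTorch >= 1.1 convention: optimizer.step() must run before scheduler.step(), otherwise the first value of the learning-rate schedule is skipped and PyTorch emits a warning. A minimal sketch of the corrected ordering; the model, optimizer, and schedule here are illustrative stand-ins, not the script's real ones:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 0.95 ** step)

for batch in [torch.randn(4, 10) for _ in range(3)]:  # stand-in for a DataLoader
    loss = model(batch).pow(2).mean()
    loss.backward()
    optimizer.step()       # update the weights first...
    scheduler.step()       # ...then advance the LR schedule (PyTorch >= 1.1 order)
    optimizer.zero_grad()
```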
wangfei
856a63da4d Fix: save model/model.module 2019-08-18 11:03:47 +08:00
wangfei
1ef41b8337 Revert "Fix: save model/model.module" 2019-08-18 11:03:12 +08:00
  This reverts commit 00e9c4cc96.
wangfei
00e9c4cc96 Fix: save model/model.module 2019-08-18 11:02:02 +08:00
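
The three commits above all circle the same save bug: under torch.nn.DataParallel the real model is wrapped at model.module, so saving model.state_dict() directly stores keys with a "module." prefix that an unwrapped model cannot load. A minimal sketch of the unwrap-before-save pattern, with an illustrative function name:

```python
import torch

def save_model(model: torch.nn.Module, path: str) -> None:
    # DataParallel / DistributedDataParallel wrap the real model at .module
    model_to_save = model.module if hasattr(model, "module") else model
    torch.save(model_to_save.state_dict(), path)
```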
wangfei
b8ff56896c Fix bug of multi-gpu training in lm finetuning 2019-08-16 12:11:05 +08:00
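
The exact diff is not shown here, but the usual multi-GPU pitfall in these finetuning scripts is that DataParallel returns one loss per GPU, and the vector must be averaged before backward(). A hedged sketch of that standard pattern; TinyLM and all names are illustrative:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Illustrative stand-in for the language model being finetuned."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(8, 8)

    def forward(self, x):
        return self.proj(x).pow(2).mean()  # dummy scalar loss per replica

model = TinyLM()
n_gpu = torch.cuda.device_count()
if n_gpu > 1:
    model = nn.DataParallel(model.cuda())

x = torch.randn(16, 8)
loss = model(x.cuda() if n_gpu > 1 else x)
if n_gpu > 1:
    loss = loss.mean()  # DataParallel gathers one loss per GPU into a vector
loss.backward()
```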
yzy5630
a1fe4ba9c9 use new API for save and load 2019-07-18 15:45:23 +08:00
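
The "new API" here is presumably save_pretrained() / from_pretrained(), which replaced hand-rolled torch.save calls when the library became pytorch-transformers (today's transformers package keeps the same methods). A sketch with an illustrative output path:

```python
from transformers import BertForMaskedLM, BertTokenizer

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

out_dir = "./finetuned_lm"           # illustrative path
model.save_pretrained(out_dir)       # writes the weights plus config.json
tokenizer.save_pretrained(out_dir)   # writes the vocabulary files
reloaded = BertForMaskedLM.from_pretrained(out_dir)
```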
yzy5630
a7ba27b1b4 add parser for adam 2019-07-18 08:52:51 +08:00
yzy5630
d6522e2873 change loss and optimizer to new API 2019-07-17 21:22:34 +08:00
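
At this point in the library's history the optimizer "new API" was AdamW plus a linear warmup schedule. A hedged sketch using torch's AdamW and the schedule helper under its current transformers name (the 2019 release spelled it WarmupLinearSchedule); the model and step counts are illustrative:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for the finetuning model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)
```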
yzy5630
60a1bdcdac fix some errors for distributed lm_finetuning 2019-07-17 09:16:20 +08:00
thomwolf
0bab55d5d5 [BIG] name change 2019-07-05 11:55:36 +02:00
thomwolf
c41f2bad69 WIP XLM + refactoring 2019-07-03 22:54:39 +02:00
Oliver Guhr
5c08c8c273 adds the tokenizer + model config to the output 2019-06-11 13:46:33 +02:00
burcturkoglu
00c7fd2b79 Division to num_train_optimizer of global_step in lr_this_step is removed. 2019-05-09 10:57:03 +03:00
burcturkoglu
fa37b4da77 Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT 2019-05-09 10:55:24 +03:00
burcturkoglu
5289b4b9e0 Division to num_train_optimizer of global_step in lr_this_step is removed. 2019-05-09 10:51:38 +03:00
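
Both burcturkoglu commits touch the manual lr_this_step warmup computation used in the pytorch-pretrained-BERT example scripts. A hedged reconstruction of that pattern for context; warmup_linear follows the library's definition of the time, and the helper set_lr plus its parameter names are hypothetical:

```python
def warmup_linear(x: float, warmup: float = 0.002) -> float:
    """Linear warmup to 1.0, then linear decay (pytorch-pretrained-BERT's rule)."""
    if x < warmup:
        return x / warmup
    return 1.0 - x

def set_lr(optimizer, base_lr, global_step, total_steps, warmup_proportion):
    # the lr_this_step expression these commits modify
    lr_this_step = base_lr * warmup_linear(global_step / total_steps, warmup_proportion)
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr_this_step
```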
MottoX
74dbba64bc Prepare optimizer only when args.do_train is True 2019-05-02 19:09:29 +08:00
thomwolf
d94c6b0144 fix training schedules in examples to match new API 2019-04-23 11:17:06 +02:00
Matthew Carrigan
934d3f4d2f Syncing up argument names between the scripts 2019-03-20 17:23:23 +00:00
Matthew Carrigan
f19ba35b2b Move old finetuning script into the new folder 2019-03-20 16:47:06 +00:00