Commit Graph

12196 Commits

Author  SHA1  Message  Date
Yaser Martinez Palenzuela  0ce2f496dc  Port tokenization for the multilingual model  2018-11-05 22:34:12 +01:00
Thomas Wolf  d983eecdd3  more readme typo fixes  2018-11-05 21:29:04 +01:00
Thomas Wolf  8f91b4de91  more typo fixes  2018-11-05 21:24:14 +01:00
Thomas Wolf  7316b0d6d0  fix typo  2018-11-05 21:22:45 +01:00
Clement  d130cb5139  typos  2018-11-05 15:09:24 -05:00
Clement  2a8fee495b  typos  2018-11-05 15:04:06 -05:00
Clement  f968b11657  typo  2018-11-05 14:59:44 -05:00
thomwolf  88e793f31a  fix typos  2018-11-05 16:14:19 +01:00
thomwolf  3914eed505  update readme  2018-11-05 16:09:27 +01:00
thomwolf  bab5d13077  update optimizer documentation  2018-11-05 16:09:21 +01:00
thomwolf  7394eb47a5  update readme  2018-11-05 15:35:44 +01:00
thomwolf  e6646751ac  update notebooks  2018-11-05 15:02:50 +01:00
thomwolf  b705c9eff5  remove small script, moved notebooks to notebook folder  2018-11-05 14:55:08 +01:00
thomwolf  3a301d443b  update gitignore  2018-11-05 14:53:43 +01:00
thomwolf  711d3f9f2b  remove tensorflow_code  2018-11-05 14:53:03 +01:00
thomwolf  7875b1a8e0  notebook update  2018-11-05 14:50:44 +01:00
thomwolf  c3527cfbc4  ignore SQuAD targets outside of seq_length  2018-11-05 14:18:48 +01:00
thomwolf  1b99cdf71b  script that use a small portion of squad only  2018-11-05 13:54:54 +01:00
thomwolf  2f4765d3ed  fix multi-gpu squad loss  2018-11-05 13:46:14 +01:00
thomwolf  955cee33a5  updating SQuAD comparison  2018-11-05 13:21:53 +01:00
thomwolf  5622d8320f  allowing to load small number of examples  2018-11-05 13:21:24 +01:00
thomwolf  a725db4f6c  fixing BertForQuestionAnswering loss computation  2018-11-05 13:21:11 +01:00
thomwolf  bb5ce67a14  adding back tf code + adding models comparison on SQuAD  2018-11-05 12:11:32 +01:00
VictorSanh  290633b882  Fix args.gradient_accumulation_steps used before assigment.  2018-11-04 17:31:50 -05:00
VictorSanh  649e9774cd  Fix bug train_batch_size not an int. Division makes args.train_batch_size becoming a float. cc @thomwolf  2018-11-04 17:19:40 -05:00
VictorSanh  d55c3ae83f  Small logger bug (multi-gpu, distribution) in training  2018-11-04 16:28:10 -05:00
thomwolf  3d291dea4a  clean up tests  2018-11-04 21:27:19 +01:00
thomwolf  87da161c2a  finishing model test  2018-11-04 21:27:10 +01:00
thomwolf  d69b0b0e90  fixes + clean up + mask is long  2018-11-04 21:26:54 +01:00
thomwolf  3ddff783c1  clean up + mask is long  2018-11-04 21:26:44 +01:00
thomwolf  88c1037991  update requirements  2018-11-04 21:26:18 +01:00
thomwolf  d0cb9fa2a7  clean up model  2018-11-04 21:26:11 +01:00
thomwolf  6cc651778a  update readme  2018-11-04 21:26:03 +01:00
thomwolf  efb44a8310  distributed in extract features  2018-11-04 21:25:55 +01:00
thomwolf  d9d7d1a462  update float()  2018-11-04 21:25:36 +01:00
thomwolf  c6207d85b6  remove old methods  2018-11-04 15:34:00 +01:00
thomwolf  965b2565a0  add distributed training  2018-11-04 15:32:04 +01:00
thomwolf  1ceac85e23  add gradient accumulation  2018-11-04 15:26:14 +01:00
thomwolf  6b0da96b4b  clean up  2018-11-04 15:17:55 +01:00
thomwolf  834b485b2e  logging + update copyright  2018-11-04 12:07:38 +01:00
thomwolf  1701291ef9  multi-gpu cleanup  2018-11-04 11:54:57 +01:00
thomwolf  5ee171689c  what's in loss again  2018-11-04 11:45:44 +01:00
thomwolf  0b7a20c651  add tqdm, clean up logging  2018-11-04 11:07:34 +01:00
thomwolf  d4e3cf3520  add numpy import  2018-11-04 10:54:16 +01:00
thomwolf  cf366417d5  remove run_squad_pytorch  2018-11-04 09:56:00 +01:00
thomwolf  26bdef4321  fixing verbose_argument  2018-11-04 09:53:29 +01:00
thomwolf  d6418c5ef3  tweaking the readme  2018-11-03 23:52:35 +01:00
thomwolf  3b70b270e0  update readme  2018-11-03 23:39:55 +01:00
thomwolf  eaa6db92f1  Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT  2018-11-03 23:35:16 +01:00
thomwolf  f8276008df  update readme, file names, removing TF code, moving tests  2018-11-03 23:35:14 +01:00