Commit Graph

15053 Commits

Author SHA1 Message Date
Thomas Wolf
1eb89ddf73
Merge pull request #2044 from huggingface/cli_upload
CLI for authenticated file sharing
2019-12-05 09:44:07 +01:00
Guillaume B
7f998b1b83 special_tokens_mask value was unused and calculated twice 2019-12-05 09:01:39 +01:00
VictorSanh
fb0d2f1da1 preparing release distil-mBERT 2019-12-05 03:00:16 -05:00
Julien Chaumond
3ba417e1a8 [cli] ls: Tabular formatting 2019-12-04 18:40:52 -05:00
LysandreJik
ce158a076f Return dataset (pytorch) 2019-12-04 17:55:52 -05:00
LysandreJik
7a03519975 Documentation 2019-12-04 17:24:35 -05:00
Julien Chaumond
96fa9a8a70 Python 2 + Post mime-type to S3 2019-12-04 17:22:50 -05:00
LysandreJik
33508ae310 Remove only_first 2019-12-04 16:26:45 -05:00
LysandreJik
f7e4a7cdfa Cleanup 2019-12-04 16:24:15 -05:00
LysandreJik
a7ca6d738b Padding side is tokenizer-dependent 2019-12-04 15:43:34 -05:00
LysandreJik
cca75e7884 Kill the demon spawn 2019-12-04 15:42:29 -05:00
LysandreJik
bf119c0568 TFDS dataset can now be evaluated 2019-12-04 11:34:59 -05:00
Julien Plu
ff98b041da Fix whitespace issue 2019-12-04 16:53:06 +01:00
LysandreJik
9ddc3f1a12 Naming update + XLNet/XLM evaluation 2019-12-04 10:37:00 -05:00
thomwolf
5bfcd0485e fix #1991 2019-12-04 14:53:11 +01:00
Thomas Wolf
cae641ff26
Merge pull request #1846 from tamuhey/patch/iss1845
fix summary_type value of SequenceSummary
2019-12-04 13:28:39 +01:00
Julien Plu
254ebb979c Bugfix in init file: missing comma. 2019-12-04 10:00:25 +01:00
Julien Plu
ecb923da9c Create a NER example similar to the PyTorch one. It takes the same options and can be run the same way. 2019-12-04 09:43:15 +01:00
Aymeric Augustin
40255ab002 Remove dead code in tests. 2019-12-04 08:21:02 +01:00
Julien Chaumond
e4fbf3e2cc CLI for authenticated file sharing 2019-12-04 00:52:23 -05:00
LysandreJik
de276de1c1 Working evaluation 2019-12-03 17:15:51 -05:00
Julien Chaumond
7edb51f3a5 [pplm] split classif head into its own file 2019-12-03 22:07:25 +00:00
LysandreJik
c835bc85c2 Compute predictions 2019-12-03 15:28:16 -05:00
LysandreJik
285b1241e3 Added SquadResult 2019-12-03 15:00:49 -05:00
LysandreJik
8101924a68 Patch: v2.2.1 2019-12-03 11:20:26 -05:00
VictorSanh
48cbf267c9 Use full dataset for eval (SequentialSampler in Distributed setting) 2019-12-03 11:01:37 -05:00
Julien Chaumond
f434bfc623 [pplm] Update S3 links
Co-Authored-By: Piero Molino <w4nderlust@gmail.com>
2019-12-03 10:53:02 -05:00
Ethan Perez
96e83506d1 Always use SequentialSampler during evaluation
When evaluating, shouldn't we always use the SequentialSampler instead of DistributedSampler? Evaluation only runs on 1 GPU no matter what, so if you use the DistributedSampler with N GPUs, I think you'll only evaluate on 1/N of the evaluation set. That's at least what I'm finding when I run an older/modified version of this repo.
2019-12-03 10:15:39 -05:00
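
A minimal sketch of the sampler choice this commit argues for (illustrative names, not the exact example-script code): DistributedSampler shards the dataset across ranks, so a single evaluating process would only see roughly 1/N of the examples, while SequentialSampler walks the full evaluation set in order.

```python
# Hedged sketch: `build_eval_dataloader` and the toy dataset are illustrative,
# not functions from the repository.
import torch
from torch.utils.data import DataLoader, SequentialSampler, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def build_eval_dataloader(eval_dataset, batch_size, use_distributed_sampler=False):
    if use_distributed_sampler:
        # Each rank gets a disjoint ~1/world_size shard; wrong when evaluation
        # runs in a single process, since most examples are never seen.
        sampler = DistributedSampler(eval_dataset)
    else:
        # Full dataset in deterministic order; what the commit switches to.
        sampler = SequentialSampler(eval_dataset)
    return DataLoader(eval_dataset, sampler=sampler, batch_size=batch_size)


# Usage with a toy dataset: the sequential loader covers every example exactly once.
dataset = TensorDataset(torch.arange(10))
loader = build_eval_dataloader(dataset, batch_size=4)
assert sum(len(batch[0]) for batch in loader) == len(dataset)
```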
Julien Chaumond
3b48806f75 [pplm] README: add setup + tweaks 2019-12-03 10:14:02 -05:00
Julien Chaumond
0cb2c90890 readme
Co-Authored-By: Rosanne Liu <mimosavvy@gmail.com>
2019-12-03 10:14:02 -05:00
Julien Chaumond
1efb2ae7fc [pplm] move scripts under examples/pplm/ 2019-12-03 10:14:02 -05:00
Piero Molino
a59fdd1627 generate_text_pplm now works with batch_size > 1 2019-12-03 10:14:02 -05:00
w4nderlust
893d0d64fe Changed order of some parameters to be more consistent. Identical results. 2019-12-03 10:14:02 -05:00
w4nderlust
f42816e7fc Added additional check for url and path in discriminator model params 2019-12-03 10:14:02 -05:00
w4nderlust
f10b925015 Improvements: model_path renamed to pretrained_model, tokenizer loaded from pretrained_model, pretrained_model set to the discriminator's when discrim is specified, sample=False by default but a CLI parameter introduced. To obtain identical samples, call the CLI with --sample 2019-12-03 10:14:02 -05:00
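
A hedged argparse sketch of the behaviour this commit describes, not the actual run_pplm.py interface: --sample is off by default, and when a discriminator is chosen the pretrained model falls back to the one that discriminator was trained against (DISCRIMINATOR_MODELS here is a hypothetical registry).

```python
# Illustrative only; option names follow the commit message, the registry is made up.
import argparse

DISCRIMINATOR_MODELS = {"sentiment": "gpt2-medium"}  # hypothetical name -> base model map

parser = argparse.ArgumentParser()
parser.add_argument("--pretrained_model", default="gpt2-medium")
parser.add_argument("--discrim", default=None, choices=list(DISCRIMINATOR_MODELS))
parser.add_argument("--sample", action="store_true",
                    help="Sample instead of greedy decoding (off by default).")
args = parser.parse_args([])  # empty list: demonstrate the defaults

if args.discrim is not None:
    # When a discriminator is specified, use the model it was trained with.
    args.pretrained_model = DISCRIMINATOR_MODELS[args.discrim]

print(args.pretrained_model, args.sample)
```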
w4nderlust
75904dae66 Removed global variable device 2019-12-03 10:14:02 -05:00
piero
7fd54b55a3 Added support for generic discriminators 2019-12-03 10:14:02 -05:00
piero
b0eaff36e6 Added a +1 to epoch when saving weights 2019-12-03 10:14:02 -05:00
piero
611961ade7 Added tqdm to preprocessing 2019-12-03 10:14:02 -05:00
piero
afc7dcd94d run_pplm now works on CPU. Identical output as before (when using GPU). 2019-12-03 10:14:02 -05:00
piero
61399e5afe Cleaned perturb_past. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
ffc2935405 Fix for making unconditioned generation work. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
9f693a0c48 Cleaned generate_text_pplm. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
61a12f790d Renamed SmallConst to SMALL_CONST and introduced BIG_CONST. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
ef47b2c03a Removed commented code. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
7ea12db3f5 Removed commented code. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
08c6e456a3 Cleaned full_text_generation. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
6c9c131780 More cleanup for run_model. Identical output as before. 2019-12-03 10:14:02 -05:00
piero
7ffe47c888 Improved device specification 2019-12-03 10:14:02 -05:00
piero
4f2164e40e First cleanup step, changing function names and passing parameters all the way through without using args. Identical output as before. 2019-12-03 10:14:02 -05:00