Thomas Wolf
5482822a2b
Merge pull request #2046 from jplu/tf2-ner-example
...
Add NER TF2 example.
2019-12-06 12:12:22 +01:00
Thomas Wolf
fc1bb1f867
Merge pull request #2068 from huggingface/fix-2042
...
Nicer error message when Bert's input is missing batch size
2019-12-06 12:06:42 +01:00
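The fix behind this merge can be illustrated with a minimal sketch. This is a hypothetical stand-in (plain Python, not the actual transformers code): the idea is to validate that input ids are 2-D `[batch_size, seq_len]` and raise a readable error when the batch dimension is missing, instead of failing deep inside the model. The helper name `check_input_shape` is invented for illustration.

```python
def check_input_shape(input_ids):
    """Raise a readable error if the batch dimension is missing.

    Expects a 2-D nested list shaped [batch_size, seq_len] and
    returns (batch_size, seq_len).
    """
    if len(input_ids) == 0 or not isinstance(input_ids[0], (list, tuple)):
        raise ValueError(
            "Input ids must be 2-D [batch_size, seq_len]; "
            "wrap a single sequence in a list to add the batch dimension."
        )
    return len(input_ids), len(input_ids[0])

# A single sequence wrapped in a batch of size 1 passes the check.
batch_size, seq_len = check_input_shape([[101, 2054, 102]])
```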
VictorSanh
35ff345fc9
update requirements
2019-12-05 12:07:04 -05:00
VictorSanh
552c44a9b1
release distil-mBERT
2019-12-05 10:14:58 -05:00
Rosanne Liu
ee53de7aac
PR for PPLM (#2060)
...
* license
* changes
* ok
* Update paper link and commands to run
* pointer to uber repo
2019-12-05 09:20:07 -05:00
Thomas Wolf
bebaa14039
Merge pull request #2045 from aaugustin/remove-dead-code
...
Remove dead code in tests.
2019-12-05 14:41:56 +01:00
thomwolf
18fb93530b
fixing #2042 - Nicer error message
2019-12-05 14:36:34 +01:00
thomwolf
2d5d86e037
fix #2031
2019-12-05 14:06:29 +01:00
Thomas Wolf
af077b15e2
Merge pull request #2065 from huggingface/fixing-camembert
...
Fixing camembert tokenization
2019-12-05 13:45:44 +01:00
thomwolf
3268ebd229
fix xlnet test
2019-12-05 13:35:29 +01:00
thomwolf
6c5297a423
Fixing camembert tokenization
2019-12-05 13:27:58 +01:00
Julien Plu
9200a759d7
Add a few tests for the TF optimization file, with some info in the documentation. Complete the README.
2019-12-05 12:56:43 +01:00
Thomas Wolf
1f179f095f
Merge pull request #2011 from AdityaSoni19031997/patch-1
...
Typo fix in the docs as per PyTorch v1.1+
2019-12-05 12:39:04 +01:00
Thomas Wolf
1eaf44e713
Merge pull request #2007 from roskoN/xlnet_attention_fix
...
fixed XLNet attention output for both attention streams whenever target_mapping is provided
2019-12-05 12:32:39 +01:00
thomwolf
71e4693f08
fix #1968
2019-12-05 12:14:24 +01:00
Thomas Wolf
f9f395b21c
Merge pull request #1735 from ondewo/tf-do-not-use-gpu-on-import
...
Do not use GPU when importing transformers
2019-12-05 11:56:48 +01:00
thomwolf
75a97af6bc
fix #1450 - add doc
2019-12-05 11:26:55 +01:00
thomwolf
8b388827b5
fix #1920
2019-12-05 11:18:43 +01:00
Thomas Wolf
d425a4d60b
Merge pull request #1870 from alexzubiaga/xlnet-for-token-classification
...
XLNet for token classification
2019-12-05 09:54:09 +01:00
Thomas Wolf
1eb89ddf73
Merge pull request #2044 from huggingface/cli_upload
...
CLI for authenticated file sharing
2019-12-05 09:44:07 +01:00
VictorSanh
fb0d2f1da1
preparing release distil-mBERT
2019-12-05 03:00:16 -05:00
Julien Chaumond
3ba417e1a8
[cli] ls: Tabular formatting
2019-12-04 18:40:52 -05:00
Julien Chaumond
96fa9a8a70
Python 2 + Post mime-type to S3
2019-12-04 17:22:50 -05:00
Julien Plu
ff98b041da
Fix whitespace issue
2019-12-04 16:53:06 +01:00
thomwolf
5bfcd0485e
fix #1991
2019-12-04 14:53:11 +01:00
Thomas Wolf
cae641ff26
Merge pull request #1846 from tamuhey/patch/iss1845
...
fix summary_type value of SequenceSummary
2019-12-04 13:28:39 +01:00
Julien Plu
254ebb979c
Bugfix in init file: missing comma.
2019-12-04 10:00:25 +01:00
Julien Plu
ecb923da9c
Create a NER example similar to the PyTorch one. It takes the same options and can be run the same way.
2019-12-04 09:43:15 +01:00
Aymeric Augustin
40255ab002
Remove dead code in tests.
2019-12-04 08:21:02 +01:00
Julien Chaumond
e4fbf3e2cc
CLI for authenticated file sharing
2019-12-04 00:52:23 -05:00
Julien Chaumond
7edb51f3a5
[pplm] split classif head into its own file
2019-12-03 22:07:25 +00:00
LysandreJik
8101924a68
Patch: v2.2.1
2019-12-03 11:20:26 -05:00
VictorSanh
48cbf267c9
Use full dataset for eval (SequentialSampler in Distributed setting)
2019-12-03 11:01:37 -05:00
Julien Chaumond
f434bfc623
[pplm] Update S3 links
...
Co-Authored-By: Piero Molino <w4nderlust@gmail.com>
2019-12-03 10:53:02 -05:00
Ethan Perez
96e83506d1
Always use SequentialSampler during evaluation
...
When evaluating, shouldn't we always use the SequentialSampler instead of DistributedSampler? Evaluation only runs on 1 GPU no matter what, so if you use the DistributedSampler with N GPUs, I think you'll only evaluate on 1/N of the evaluation set. That's at least what I'm finding when I run an older/modified version of this repo.
2019-12-03 10:15:39 -05:00
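The reasoning in the commit above can be illustrated with a small sketch. This is plain Python, not transformers or torch code; `sequential_indices` and `distributed_indices` are hypothetical helpers that mirror what `SequentialSampler` and `DistributedSampler` yield, showing why a distributed sampler covers only 1/N of the dataset per rank during evaluation.

```python
def sequential_indices(dataset_size):
    """Like SequentialSampler: every rank sees the full dataset, in order."""
    return list(range(dataset_size))

def distributed_indices(dataset_size, rank, world_size):
    """Like DistributedSampler: each rank sees only its 1/world_size shard."""
    return list(range(rank, dataset_size, world_size))

# With 4 GPUs, rank 0 evaluates on only 2 of 8 examples if the
# distributed sampler is kept for evaluation.
full = sequential_indices(8)
shard = distributed_indices(8, rank=0, world_size=4)
```

Since evaluation runs on a single GPU regardless of the training setup, switching to the sequential sampler guarantees the whole evaluation set is scored.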
Julien Chaumond
3b48806f75
[pplm] README: add setup + tweaks
2019-12-03 10:14:02 -05:00
Julien Chaumond
0cb2c90890
readme
...
Co-Authored-By: Rosanne Liu <mimosavvy@gmail.com>
2019-12-03 10:14:02 -05:00
Julien Chaumond
1efb2ae7fc
[pplm] move scripts under examples/pplm/
2019-12-03 10:14:02 -05:00
Piero Molino
a59fdd1627
generate_text_pplm now works with batch_size > 1
2019-12-03 10:14:02 -05:00
w4nderlust
893d0d64fe
Changed order of some parameters to be more consistent. Identical results.
2019-12-03 10:14:02 -05:00
w4nderlust
f42816e7fc
Added additional check for URL and path in discriminator model params
2019-12-03 10:14:02 -05:00
w4nderlust
f10b925015
Improvements: model_path renamed to pretrained_model; tokenizer loaded from pretrained_model; pretrained_model set to the discriminator's when discrim is specified; sample is False by default, with a CLI parameter introduced. To obtain identical samples, call the CLI with --sample.
2019-12-03 10:14:02 -05:00
w4nderlust
75904dae66
Removed global variable device
2019-12-03 10:14:02 -05:00
piero
7fd54b55a3
Added support for generic discriminators
2019-12-03 10:14:02 -05:00
piero
b0eaff36e6
Added a +1 to epoch when saving weights
2019-12-03 10:14:02 -05:00
piero
611961ade7
Added tqdm to preprocessing
2019-12-03 10:14:02 -05:00
piero
afc7dcd94d
run_pplm now works on CPU. Identical output as before (when using GPU).
2019-12-03 10:14:02 -05:00
piero
61399e5afe
Cleaned perturb_past. Identical output as before.
2019-12-03 10:14:02 -05:00
piero
ffc2935405
Fix for making unconditioned generation work. Identical output as before.
2019-12-03 10:14:02 -05:00
piero
9f693a0c48
Cleaned generate_text_pplm. Identical output as before.
2019-12-03 10:14:02 -05:00