Commit Graph

76 Commits

Author SHA1 Message Date
alberduris
dd4df80f0b Moved the encoded_prompts to the correct device 2020-01-06 15:11:12 +01:00
Aymeric Augustin
d6eaf4e6d2 Update comments mentioning Python 2. 2019-12-22 18:38:56 +01:00
Aymeric Augustin
c824d15aa1 Remove __future__ imports. 2019-12-22 17:47:54 +01:00
Aymeric Augustin
fa2ccbc081 Fix E266 flake8 warning (x90). 2019-12-22 10:59:08 +01:00
Aymeric Augustin
631be27078 Fix E722 flake8 warnings (x26). 2019-12-22 10:59:07 +01:00
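For context, a minimal sketch of what these two classes of flake8 fixes look like (illustrative code, not the actual diff):

    raw = "not-a-number"

    # E266: block comments must start with a single '#'.
    ## Bad: flake8 flags this doubled hash.
    # Good: a single hash.

    # E722: never use a bare `except:`; name the exception instead.
    try:
        value = int(raw)
    except ValueError:  # was `except:`, which also catches KeyboardInterrupt
        value = 0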
Aymeric Augustin
158e82e061 Sort imports with isort.
This is the result of:

    $ isort --recursive examples templates transformers utils hubconf.py setup.py
2019-12-22 10:57:46 +01:00
Aymeric Augustin
fa84ae26d6 Reformat source code with black.
This is the result of:

    $ black --line-length 119 examples templates transformers utils hubconf.py setup.py

There are a lot of fairly long lines in the project. As a consequence, I'm
picking the longest widely accepted line length, 119 characters.

This is also Thomas's preference: it allows for explicit variable
names, which make the code easier to understand.
2019-12-21 17:52:29 +01:00
Thomas Wolf
eeb70cdd77 Merge branch 'master' into saving-and-resuming 2019-12-21 14:29:59 +01:00
Stefan Schweter
a26ce4dee1 examples: add XLM-RoBERTa to glue script 2019-12-19 02:23:01 +01:00
Alan deLevie
fbf5455a86 Fix typo in examples/run_glue.py args declaration.
deay -> decay
2019-12-12 11:16:19 -05:00
Bilal Khan
854ec5784e Update run_glue to save optimizer and scheduler states, then resume training from a checkpoint 2019-12-10 19:30:36 -06:00
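The gist of this change, as a hedged sketch (hypothetical helper names; the script inlines this logic):

    import os
    import torch

    def save_training_state(output_dir, optimizer, scheduler):
        # Persist optimizer and scheduler state next to the model weights.
        torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
        torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))

    def load_training_state(checkpoint_dir, optimizer, scheduler):
        # Restore both states before re-entering the training loop.
        optimizer.load_state_dict(torch.load(os.path.join(checkpoint_dir, "optimizer.pt")))
        scheduler.load_state_dict(torch.load(os.path.join(checkpoint_dir, "scheduler.pt")))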
VictorSanh
48cbf267c9 Use full dataset for eval (SequentialSampler in Distributed setting) 2019-12-03 11:01:37 -05:00
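The fix swaps the distributed sampler for a sequential one at evaluation time, so every process scores the full evaluation set rather than a shard; a minimal sketch:

    import torch
    from torch.utils.data import DataLoader, SequentialSampler, TensorDataset

    eval_dataset = TensorDataset(torch.arange(10))  # stand-in for the real features

    # Use a SequentialSampler even under torch.distributed so that every
    # process evaluates the full dataset, not just its DistributedSampler shard.
    eval_sampler = SequentialSampler(eval_dataset)
    eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=8)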
Thomas Wolf
f19a78a634 Merge pull request #1903 from valohai/master
Valohai integration
2019-12-03 16:13:01 +01:00
Juha Kiili
41aa0e8003 Refactor logs and fix loss bug 2019-11-29 15:33:25 +02:00
Lysandre
c110c41fdb Run GLUE and remove LAMB 2019-11-26 13:08:12 -05:00
Juha Kiili
2cf3447e0a Glue: log in Valohai-compatible JSON format too 2019-11-21 12:35:25 +02:00
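Valohai collects metrics by parsing JSON objects printed to stdout, so the change boils down to also emitting each logging step as one JSON line; a sketch with made-up values:

    import json

    # Valohai parses JSON printed to stdout as run metadata, so emit each
    # evaluation step as a single JSON line (values here are illustrative).
    print(json.dumps({"step": 500, "loss": 0.314, "acc": 0.87}))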
Jin Young Sohn
e70cdf083d Cleanup TPU bits from run_glue.py
TPU runner is currently implemented in:
https://github.com/pytorch-tpu/transformers/blob/tpu/examples/run_glue_tpu.py.

We plan to upstream this directly into `huggingface/transformers`
(either `master` or `tpu`) branch once it's been more thoroughly tested.
2019-11-20 17:54:34 -05:00
Thomas Wolf
9629e2c676 Merge pull request #1804 from ronakice/master
fix multi-gpu eval in torch examples
2019-11-14 22:24:05 +01:00
Rémi Louf
2276bf69b7 update the examples, docs and template 2019-11-14 20:38:02 +01:00
ronakice
2e31176557 fix multi-gpu eval 2019-11-12 05:55:11 -05:00
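The underlying bug: evaluation did not wrap the model for multi-GPU the way training did. A hedged sketch of the fix:

    import torch

    model = torch.nn.Linear(4, 2)           # stand-in for the real model
    n_gpu = torch.cuda.device_count()

    # Wrap for multi-GPU at evaluation time too, but avoid double-wrapping a
    # model that the training loop already put inside DataParallel.
    if n_gpu > 1 and not isinstance(model, torch.nn.DataParallel):
        model = torch.nn.DataParallel(model)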
thomwolf
89d6272898 Fix #1623 2019-11-04 16:21:12 +01:00
Pasquale Minervini
abd7110e21 gradient norm clipping should be done right before calling the optimiser - fixing run_glue and run_ner as well 2019-10-21 19:56:52 +01:00
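The point of this commit: clip gradients after the backward pass and immediately before the optimizer step, not before the gradients exist. A minimal sketch:

    import torch

    model = torch.nn.Linear(4, 1)            # stand-in model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    loss = model(torch.randn(2, 4)).sum()
    loss.backward()                           # gradients now exist
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip them
    optimizer.step()                          # step right after clipping
    optimizer.zero_grad()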
Lysandre
639f4b7190 Don't save/load when on TPU 2019-10-10 19:17:25 +00:00
Lysandre
d4e7934ac3 GLUE on TPU 2019-10-10 19:03:06 +00:00
Thomas Wolf
6596e3d566 Merge pull request #1454 from bkkaggle/pytorch-built-in-tensorboard
Change tensorboard imports to use built-in tensorboard if available
2019-10-10 11:56:55 +02:00
Lysandre Debut
e84470ef81 Merge pull request #1384 from huggingface/encoding-qol
Quality of life enhancements in encoding + patch MLM masking
2019-10-09 11:18:24 -04:00
Thomas Wolf
439fac723a Merge pull request #1409 from brian41005/master
Evaluation result.txt path changing #1286
2019-10-09 03:14:34 +02:00
Bilal Khan
5ce8d29abe Change tensorboard imports to use built-in tensorboard if available 2019-10-08 16:29:43 -05:00
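The import-fallback pattern this commit introduces: prefer PyTorch's built-in SummaryWriter, and fall back to tensorboardX when it is not available:

    try:
        from torch.utils.tensorboard import SummaryWriter  # built into PyTorch >= 1.1
    except ImportError:
        from tensorboardX import SummaryWriter             # fallback for older setups

    tb_writer = SummaryWriter()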
thomwolf
92c0f2fb90 Merge remote-tracking branch 'origin/julien_multiple-choice' into encoding-qol 2019-10-04 15:48:06 -04:00
Julien Chaumond
9e136ff57c Honor args.overwrite_cache (h/t @erenup) 2019-10-04 15:00:56 -04:00
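The pattern behind honoring `args.overwrite_cache`: reuse the cached feature file unless the flag forces a rebuild. A sketch with a hypothetical helper:

    import os
    import torch

    def load_or_build_features(cached_features_file, build_fn, overwrite_cache=False):
        # Reuse the on-disk cache unless --overwrite_cache forces a rebuild,
        # then refresh the cache with the newly built features.
        if os.path.exists(cached_features_file) and not overwrite_cache:
            return torch.load(cached_features_file)
        features = build_fn()
        torch.save(features, cached_features_file)
        return features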
Brian Ma
7af0777910 Update run_glue.py
add DistilBert model shortcut into ALL_MODELS
2019-10-03 15:31:11 +00:00
Brian Ma
2195c0d5f9 Evaluation result.txt path changing #1286 2019-10-03 12:49:12 +08:00
VictorSanh
702f589848 fix input in run_glue for distilbert 2019-09-27 00:20:14 -04:00
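DistilBERT's forward pass does not use `token_type_ids`, so the fix builds the inputs dict conditionally; a sketch with a hypothetical helper:

    def build_inputs(batch, model_type):
        # Illustrative, not the script's exact code: only pass token_type_ids
        # to model types that actually use segment embeddings.
        input_ids, attention_mask, token_type_ids, labels = batch
        inputs = {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
        if model_type != "distilbert":
            inputs["token_type_ids"] = token_type_ids
        return inputs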
thomwolf
31c23bd5ee [BIG] pytorch-transformers => transformers 2019-09-26 10:15:53 +02:00
thomwolf
a049c8043b push fix to training 2019-09-25 17:33:16 +02:00
thomwolf
5def3302f4 update run_glue 2019-09-25 12:38:08 +02:00
thomwolf
f71758f7a4 update internal glue processors 2019-09-25 12:00:50 +02:00
thomwolf
b5ec526f85 updated data processor and metrics 2019-09-24 17:10:50 +02:00
LysandreJik
f09e5ecef0 [Proposal] GLUE processors included in library 2019-09-24 09:47:34 -04:00
LysandreJik
75635072e1 Updated GLUE script to add DistilBERT. Cleaned up unused args in the utils file. 2019-09-19 10:55:06 +02:00
Thomas Wolf
0a2fecdf90 Merge branch 'master' into master 2019-08-30 16:30:08 +02:00
Rabeeh KARIMI
39eb31e11e remove reloading the tokenizer during training, adding it to the evaluation part 2019-08-30 15:44:41 +02:00
Rabeeh KARIMI
350bb6bffa updated tokenizer loading to address reproducibility issues 2019-08-30 15:34:28 +02:00
Thomas Wolf
01ad55f8cf Merge pull request #1026 from rabeehk/master
loads the tokenizer for each checkpoint, to solve the reproducibility issue
2019-08-30 14:15:36 +02:00
Luis
fe8fb10b44 Small modification of comment in the run_glue.py example
Add RoBERTa to the comment, as it was not explicit that RoBERTa doesn't use token_type_ids.
2019-08-29 14:43:30 +02:00
VictorSanh
57272d5ddf fix for glue 2019-08-22 00:25:49 -04:00
Peng Qi
3bffd2e8e5 more fixes 2019-08-20 10:59:28 -07:00
LysandreJik
7e7fc53da5 Fixing run_glue example with RoBERTa 2019-08-16 11:53:10 -04:00
Rabeeh KARIMI
3d47a7f8ab loads the tokenizer for each checkpoint, to solve the reproducibility issue 2019-08-14 10:58:26 +02:00
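The reproducibility fix in a nutshell: reload the tokenizer from each checkpoint directory when evaluating it, instead of reusing the training-time instance. A sketch using AutoTokenizer for brevity, with illustrative paths:

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    for checkpoint in ["output/checkpoint-500", "output/checkpoint-1000"]:
        # Load the tokenizer saved alongside each checkpoint so evaluation
        # reproduces exactly the preprocessing that produced it.
        tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        model = AutoModelForSequenceClassification.from_pretrained(checkpoint)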
LysandreJik
39f426be65 Added special tokens <pad> and <mask> to RoBERTa. 2019-08-13 15:19:50 -04:00