Author | Commit | Message | Date
Lysandre | c110c41fdb | Run GLUE and remove LAMB | 2019-11-26 13:08:12 -05:00
manansanghi | 5d3b8daad2 | Minor bug fixes on run_ner.py | 2019-11-25 16:48:03 -05:00
İbrahim Ethem Demirci | aa92a184d2 | resize model when special tokenizer present | 2019-11-25 15:06:32 -05:00
Lysandre | 7485caefb0 | fix #1894 | 2019-11-25 09:33:39 -05:00
Julien Chaumond | 176cd1ce1b | [doc] homogenize instructions slightly | 2019-11-23 11:18:54 -05:00
Rémi Louf | 26db31e0c0 | update the documentation | 2019-11-21 14:41:19 -05:00
Thomas Wolf | 0cdfcca24b | Merge pull request #1860 from stefan-it/camembert-for-token-classification: [WIP] Add support for CamembertForTokenClassification | 2019-11-21 10:56:07 +01:00
Jin Young Sohn | e70cdf083d | Cleanup TPU bits from run_glue.py (the TPU runner is currently implemented in https://github.com/pytorch-tpu/transformers/blob/tpu/examples/run_glue_tpu.py; we plan to upstream this directly into `huggingface/transformers`, on either the `master` or `tpu` branch, once it has been more thoroughly tested) | 2019-11-20 17:54:34 -05:00
Lysandre | 454455c695 | fix #1879 | 2019-11-20 09:42:48 -05:00
Kazutoshi Shinoda | f3386d9383 | typo "deay" -> "decay" | 2019-11-18 11:50:06 -05:00
Stefan Schweter | 56c84863a1 | camembert: add support for CamemBERT in run_ner example | 2019-11-18 17:06:57 +01:00
Julien Chaumond | 26858f27cb | [camembert] Upload to s3 + rename script | 2019-11-16 00:11:07 -05:00
Louis MARTIN | 3e20c2e871 | Update demo_camembert.py with new classes | 2019-11-16 00:11:07 -05:00
Louis MARTIN | f12e4d8da7 | Move demo_camembert.py to examples/contrib | 2019-11-16 00:11:07 -05:00
Louis MARTIN | 6e72fd094c | Add demo_camembert.py | 2019-11-16 00:11:07 -05:00
Thomas Wolf | 74ce8de7d8 | Merge pull request #1792 from stefan-it/distilbert-for-token-classification: DistilBERT for token classification | 2019-11-14 22:47:53 +01:00
Thomas Wolf | 05db5bc1af | added small comparison between BERT, RoBERTa and DistilBERT | 2019-11-14 22:40:22 +01:00
Thomas Wolf | 9629e2c676 | Merge pull request #1804 from ronakice/master: fix multi-gpu eval in torch examples | 2019-11-14 22:24:05 +01:00
Thomas Wolf | df99f8c5a1 | Merge pull request #1832 from huggingface/memory-leak-schedulers: replace LambdaLR scheduler wrappers by function | 2019-11-14 22:10:31 +01:00
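The memory-leak-schedulers merge above (#1832) swaps custom LambdaLR wrapper classes for plain functions that build and return a stock LambdaLR. A minimal sketch of that pattern, assuming PyTorch's torch.optim.lr_scheduler.LambdaLR; the function name and the linear warmup/decay shape are illustrative, not the exact code from the PR:

```python
# Sketch: a factory function that returns a plain LambdaLR instead of a scheduler subclass.
import torch
from torch.optim.lr_scheduler import LambdaLR


def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps):
    def lr_lambda(current_step):
        # Multiplicative factor applied to the base learning rate at each step.
        if current_step < num_warmup_steps:
            return float(current_step) / float(max(1, num_warmup_steps))
        return max(
            0.0,
            float(num_training_steps - current_step)
            / float(max(1, num_training_steps - num_warmup_steps)),
        )

    return LambdaLR(optimizer, lr_lambda)


# Usage sketch with a throwaway parameter and optimizer.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=5e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=10, num_training_steps=100)
```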
Rémi Louf | 2276bf69b7 | update the examples, docs and template | 2019-11-14 20:38:02 +01:00
Lysandre | d7929899da | Specify checkpoint in saved file for run_lm_finetuning.py | 2019-11-14 10:49:00 -05:00
ronakice | 2e31176557 | fix multi-gpu eval | 2019-11-12 05:55:11 -05:00
Stefan Schweter | 2b07b9e5ee | examples: add DistilBert support for NER fine-tuning | 2019-11-11 16:19:34 +01:00
Adrian Bauer | 7a9aae1044 | Fix run_bertology.py: make imports and args.overwrite_cache match run_glue.py | 2019-11-08 16:28:40 -05:00
Julien Chaumond | f88c104d8f | [run_tf_glue] Add comment for context | 2019-11-05 19:56:43 -05:00
Julien Chaumond | 30968d70af | misc doc | 2019-11-05 19:06:12 -05:00
Thomas Wolf | e99071f105 | Merge pull request #1734 from orena1/patch-1: add progress bar to convert_examples_to_features | 2019-11-05 11:34:20 +01:00
Thomas Wolf | ba973342e3 | Merge pull request #1553 from WilliamTambellini/timeSquadInference: Add speed log to examples/run_squad.py | 2019-11-05 11:13:12 +01:00
Thomas Wolf | 237fad339c | Merge pull request #1709 from oneraghavan/master: Fixing mode in evaluate during training | 2019-11-05 10:55:33 +01:00
Oren Amsalem | d7906165a3 | add progress bar for convert_examples_to_features (parsing the examples into features takes a considerable amount of time, ~10 min, so it is good to have a progress bar to track it) | 2019-11-05 10:34:27 +02:00
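For the progress-bar change just above (#1734 / d7906165a3), a minimal sketch of wrapping the per-example conversion loop in tqdm; convert_examples_to_features and convert_fn here are illustrative stand-ins rather than the exact code in the example scripts:

```python
# Sketch: report progress while converting examples to features.
from tqdm import tqdm


def convert_examples_to_features(examples, convert_fn):
    features = []
    # tqdm wraps the iterable so the long-running conversion shows a live progress bar.
    for example in tqdm(examples, desc="Converting examples to features"):
        features.append(convert_fn(example))
    return features


# Usage sketch: convert_fn stands in for the real tokenization/feature logic.
features = convert_examples_to_features(
    ["first example sentence", "second example sentence"],
    convert_fn=lambda text: text.lower().split(),
)
```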
thomwolf | 89d6272898 | Fix #1623 | 2019-11-04 16:21:12 +01:00
Thomas Wolf | 9a3b173cd3 | Merge branch 'master' into master | 2019-11-04 11:41:26 +01:00
thomwolf | ad90868627 | Update example readme | 2019-11-04 11:27:22 +01:00
Raghavan | e5b1048bae | Fixing mode in evaluate during training | 2019-11-03 16:14:46 +05:30
Lysandre | 1a2b40cb53 | run_tf_glue MRPC evaluation only for MRPC | 2019-10-31 18:00:51 -04:00
Timothy Liu | be36cf92fb | Added mixed precision support to benchmarks.py | 2019-10-31 17:24:37 -04:00
Julien Chaumond | f96ce1c241 | [run_generation] Fix generation with batch_size>1 | 2019-10-31 18:27:11 +00:00
Julien Chaumond | 3c1b6f594e | Merge branch 'master' into fix_top_k_top_p_filtering | 2019-10-31 13:53:51 -04:00
Victor SANH | fa735208c9 | update readme - fix example command distil* | 2019-10-30 14:27:28 -04:00
Thomas Wolf | c7058d8224 | Merge pull request #1608 from focox/master: Error raised by "tmp_eval_loss += tmp_eval_loss.item()" when using multi-gpu | 2019-10-30 17:14:07 +01:00
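The #1608 entry above, together with the earlier multi-gpu eval fix (#1804), points at the same pitfall: under torch.nn.DataParallel the evaluation loss comes back with one element per GPU, so calling .item() on it raises an error. A minimal sketch of the failure mode and the usual reduction, with a made-up two-element tensor standing in for the per-GPU loss:

```python
import torch

# Stand-in for a loss returned by a DataParallel model on two GPUs: one value per device.
tmp_eval_loss = torch.tensor([0.71, 0.93])
eval_loss = 0.0

# eval_loss += tmp_eval_loss.item()  # raises: .item() only works on single-element tensors
eval_loss += tmp_eval_loss.mean().item()  # reduce across devices first, then accumulate
```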
Thomas Wolf | 04c69db399 | Merge pull request #1628 from huggingface/tfglue: run_tf_glue works with all tasks | 2019-10-30 17:04:03 +01:00
Thomas Wolf | 3df4367244 | Merge pull request #1601 from huggingface/clean-roberta: Clean roberta model & all tokenizers now add special tokens by default (breaking change) | 2019-10-30 17:00:40 +01:00
Thomas Wolf | 36174696cc | Merge branch 'master' into clean-roberta | 2019-10-30 16:51:06 +01:00
Thomas Wolf | 228cdd6a6e | Merge branch 'master' into conditional-generation | 2019-10-30 16:40:35 +01:00
Rémi Louf | 070507df1f | format utils for summarization | 2019-10-30 11:24:12 +01:00
Rémi Louf | da10de8466 | fix bug with padding mask + add corresponding test | 2019-10-30 11:19:58 +01:00
Rémi Louf | 3b0d2fa30e | rename seq2seq to encoder_decoder | 2019-10-30 10:54:46 +01:00
Rémi Louf | 9c1bdb5b61 | revert renaming of lm_labels to ltr_lm_labels | 2019-10-30 10:43:13 +01:00
Rémi Louf | 098a89f312 | update docstrings; rename lm_labels to more explicit ltr_lm_labels | 2019-10-29 20:08:03 +01:00
Rémi Louf | dfce409691 | resolve PR comments | 2019-10-29 17:10:20 +01:00