Commit Graph

5759 Commits

Stas Bekman
2c0da7803a
minor doc fixes (#5831)
* minor doc fixes

correct superclass name and small grammar fixes

* correct the instance name in the error message

It appears to be `BaseTokenizer` from looking at:

`from tokenizers.implementations import BaseTokenizer as BaseTokenizerFast`

and not `Tokenizer` as it currently says.
2020-07-22 13:22:34 -04:00
Sam Shleifer
feeb956a19
[docs] Add integration test example to copy pasta template (#5961)
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-07-22 12:48:38 -04:00
Sam Shleifer
01116d3c5b
T5 Model Cards (#5759)
* T5 Model Cards

* Fix paths

* Fix tags

* lang-en
2020-07-22 11:38:37 -04:00
Funtowicz Morgan
896300177b
Expose padding_strategy on squad processor to fix QA pipeline performance regression (#5932)
* Attempt to fix the way squad_convert_examples_to_features pads the elements for the QA pipeline.

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>

* Quality

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>

* Make the code easier to read and avoid multiple tests testing the same thing.

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>

* Add missing enum value on truncation_strategy.

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>

* Rethinking for the easiest fix: expose the padding strategy on squad_convert_examples_to_features.

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>

* Remove unused imports.

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
2020-07-22 16:11:57 +02:00
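
A hedged usage sketch of the argument this PR exposes (illustrative only; the data directory and the "do_not_pad" choice are assumptions, with "max_length" presumably remaining the default):

```python
from transformers import AutoTokenizer, SquadV2Processor, squad_convert_examples_to_features

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
examples = SquadV2Processor().get_dev_examples("path/to/squad")  # hypothetical data directory

features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=False,
    padding_strategy="do_not_pad",  # exposed by this PR; avoids padding every example to max_seq_length
)
```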
Sam Shleifer
ae67b2439f
[CI] Install examples/requirements.txt (#5956) 2020-07-21 21:07:48 -04:00
Sylvain Gugger
e714412fe6
Update doc to new model outputs (#5946)
* Update doc to new model outputs

* Fix outputs in quicktour
2020-07-21 18:13:55 -04:00
Sam Shleifer
ddd40b3211
[CI] self-scheduled runner tests examples/ (#5927) 2020-07-21 17:01:07 -04:00
Sam Shleifer
9dab39feea
seq2seq/run_eval.py can take decoder_start_token_id (#5949) 2020-07-21 16:58:45 -04:00
Sam Shleifer
5b193b39b0
[examples/seq2seq]: add --label_smoothing option (#5919) 2020-07-21 16:51:39 -04:00
Sam Shleifer
95d1962b9c
[Doc] explaining romanian postprocessing for MBART BLEU hacking (#5943) 2020-07-21 14:12:48 -04:00
Jannes
604a2355dc
Create README.md (#5876) 2020-07-21 13:28:22 -04:00
Jannes
77c718edef
Create README.md (#5873) 2020-07-21 13:28:06 -04:00
Jannes
325b277db9
Create README.md (#5874) 2020-07-21 13:27:30 -04:00
Jannes
d15be2216c
Create README.md (#5879) 2020-07-21 13:27:13 -04:00
Jannes
f3e23dd90a
Create README.md (#5878) 2020-07-21 13:20:47 -04:00
Jannes
8b01d15c05
Create README.md (#5877) 2020-07-21 13:20:43 -04:00
Jannes
05bddf304e
Create README.md (#5875) 2020-07-21 13:20:32 -04:00
Jannes
783a0c7ee9
Create README.md (#5872) 2020-07-21 13:20:21 -04:00
Jannes
e7844d60c2
Create README.md (#5871) 2020-07-21 13:19:48 -04:00
tuner007
b1ee69763c
Create README.md (#5864) 2020-07-21 13:15:07 -04:00
Manuel Romero
5f809e4976
Update README.md (#5857)
Add the nlp dataset used
2020-07-21 13:14:27 -04:00
Manuel Romero
4215f59c99
Update README.md (#5856)
Add the dataset used, as it is now part of the nlp package
2020-07-21 13:11:08 -04:00
Ali Hamdi Ali Fadel
1d72460d55
Add ComVE model cards (#5884)
* Add ComVE model cards

* Apply suggestions from code review

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-07-21 12:54:29 -04:00
Aditya Soni
ccbf74a685
typos in seq2seq/readme (#5937) 2020-07-21 09:44:59 -04:00
BatJedi
d32279438a
Created model card for my extreme summarization model (#5839)
* Created model card for my extreme summarization model

* Update model_cards/yuvraj/xSumm/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-07-21 03:54:57 -04:00
BatJedi
abf5c56e9d
Created model card for my summarization model (#5838)
* Created model card for my summarization model

* Update model_cards/yuvraj/summarizer-cnndm/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-07-21 03:54:14 -04:00
Manuel Romero
d73baeebc5
Create README.md (#5921)
- Maybe the result of this query answers the question you asked some days ago @julien-c ;-)
2020-07-21 03:52:52 -04:00
Manuel Romero
50acfc8717
Create README.md (#5924) 2020-07-21 03:41:37 -04:00
Manuel Romero
7249533404
Create README.md (#5920) 2020-07-21 03:31:42 -04:00
Sylvain Gugger
4781afd045
Clarify arg class (#5916) 2020-07-20 19:47:06 -04:00
Qingqing Cao
8e0bcb56ec
DataParallel fix: multi gpu evaluation (#5926)
DataParallel training was fixed in https://github.com/huggingface/transformers/pull/5733; this commit also fixes the evaluation, which is more convenient when the user enables both `do_train` and `do_eval`.
2020-07-20 17:54:08 -04:00
Sylvain Gugger
a20969170b
Add AlbertForPretraining to doc (#5914) 2020-07-20 17:53:21 -04:00
Sam Shleifer
f1a4e06f1f
[Fix] seq2seq pack_dataset.py actually packs (#5913)
Huge MT speedup!
2020-07-20 15:18:26 -04:00
Sylvain Gugger
32883b310b
Improve doc of use_cache (#5912)
* Improve doc of use_cache

* Update src/transformers/configuration_xlnet.py

Co-authored-by: Teven <teven.lescao@gmail.com>

Co-authored-by: Teven <teven.lescao@gmail.com>
2020-07-20 11:50:41 -04:00
Clement
9ccb45a263
Update gpt2-README.md 2020-07-20 11:40:33 -04:00
Clement
f19751117d
Create gpt2-medium-README.md 2020-07-20 10:47:42 -04:00
Clement
511523672b
Create gpt2-large-README.md 2020-07-20 10:47:27 -04:00
Clement
182c611934
Update gpt2-README.md 2020-07-20 10:47:11 -04:00
Clement
a9ae27cd0f
add link to write with transformers to model card 2020-07-20 10:46:10 -04:00
Sam Shleifer
01c40db4f8
[cleanup] squad processor (#5868) 2020-07-20 10:44:10 -04:00
Stas Bekman
35cb101eae
DataParallel fixes (#5733)
* DataParallel fixes:

1. switched to a more precise check
-        if self.args.n_gpu > 1:
+        if isinstance(model, nn.DataParallel):

2. fix tests: require the same fixup under DataParallel as in the training module

* another fix
2020-07-20 09:29:12 -04:00
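
A minimal sketch of the check this diff describes, in an assumed Trainer-style loss step (names are illustrative, not the PR's exact code):

```python
import torch.nn as nn

def reduce_loss(model, loss):
    # Checking the wrapper type directly is more precise than `self.args.n_gpu > 1`:
    # multiple visible GPUs do not guarantee the model was actually wrapped.
    if isinstance(model, nn.DataParallel):
        loss = loss.mean()  # average the per-replica losses gathered by DataParallel
    return loss
```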
Pradhy729
290b6e18ac
Trainer support for iterabledataset (#5834)
* Don't pass sampler for iterable dataset

* Added check for test and eval dataloaders.

* Formatting

* Don't pass sampler for iterable dataset

* Added check for test and eval dataloaders.

* Formatting

* Cleaner if nesting.

* Added test for trainer and iterable dataset

* Formatting for test

* Fixed import when torch is available only.

* Added require torch decorator to helper class

* Moved dataset class inside unittest

* Removed nested if and changed model in test

* Checking torch availability for IterableDataset
2020-07-20 09:07:37 -04:00
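
A hedged sketch of the rule these commits describe, assuming a Trainer-like dataloader builder (hypothetical names): PyTorch's DataLoader rejects an explicit sampler when the dataset is an IterableDataset, so the sampler is only passed for map-style datasets.

```python
from torch.utils.data import DataLoader, IterableDataset, RandomSampler

def build_train_dataloader(train_dataset, batch_size):
    if isinstance(train_dataset, IterableDataset):
        # No sampler here: DataLoader raises a ValueError if one is combined with an iterable dataset.
        return DataLoader(train_dataset, batch_size=batch_size)
    return DataLoader(train_dataset, batch_size=batch_size, sampler=RandomSampler(train_dataset))
```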
Julien Chaumond
82dd96cae7
[model_cards] Dataset ids are case-sensitive
cc @lhoestq @thomwolf

Also cc'ing model author @nreimers => Model pages now properly link to the dataset pages (and in the future, eval results, etc.)
2020-07-20 12:47:28 +02:00
Manuel Romero
b01a8844a9
Create README.md (#5813) 2020-07-20 04:06:42 -04:00
Alan deLevie
223bad242d
fix typo in (#5893) 2020-07-20 03:53:03 -04:00
Alan deLevie
d441f8d29d
fix typo in training_args_tf.py (#5894) 2020-07-20 03:48:22 -04:00
Sam Shleifer
09a2f40684
Seq2SeqDataset uses linecache to save memory by @Pradhy729 (#5792)
Co-authored-by: Pradhy729 <49659913+Pradhy729@users.noreply.github.com>
2020-07-18 13:57:33 -04:00
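
A minimal sketch of the memory-saving idea the title names (a hypothetical dataset class, not the actual Seq2SeqDataset): linecache fetches individual lines on demand instead of loading the whole file up front.

```python
import linecache
from torch.utils.data import Dataset

class LazyLineDataset(Dataset):  # hypothetical name
    def __init__(self, path, num_lines):
        self.path = str(path)
        self.num_lines = num_lines

    def __len__(self):
        return self.num_lines

    def __getitem__(self, idx):
        # linecache is 1-indexed; only the requested line is read into memory.
        return linecache.getline(self.path, idx + 1).rstrip("\n")
```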
Teven
4b506a37e3
Xlnet outputs (#5883)
Slightly breaking change: changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time `use_cache` is overridden and always True.
2020-07-18 17:33:13 +02:00
Teven
a55809241f
Revert "Xlnet outputs (#5881)" (#5882)
This reverts commit 13be487212.
2020-07-18 17:15:40 +02:00
Teven
13be487212
Xlnet outputs (#5881)
Slightly breaking change: changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time `use_cache` is overridden and always True.
2020-07-18 16:53:29 +02:00