Commit Graph

5759 Commits

Stas Bekman
03e363f9ae
[generation] consistently add eos tokens (#6982)
Currently beam search returns inconsistent outputs: if hypotheses have different lengths we get an eos token; if they are all the same length, we don't.

This PR makes the output consistent.

Also, why not replace:

```
            if sent_lengths[i] < max_length:
                decoded[i, sent_lengths[i]] = eos_token_id
```
with:
```
            decoded[i, sent_lengths[i]] = eos_token_id
```
Shouldn't eos always be there? If the data gets truncated, the caller needs to use a larger `max_length`.

Please correct me if my logic is flawed.
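A minimal pure-Python sketch of the behaviour this PR aims for (a hypothetical stand-in for the tensor code above; `finalize_hypotheses` and the token id values are illustrative):

```
# After beam search, finished hypotheses may have different lengths.
# Pad every sequence to max_length and write eos at the end of each
# unfinished-length row, so outputs are consistent either way.
def finalize_hypotheses(hypotheses, max_length, eos_token_id=2, pad_token_id=0):
    decoded = []
    for hypo in hypotheses:
        sent_length = min(len(hypo), max_length)
        row = list(hypo[:sent_length]) + [pad_token_id] * (max_length - sent_length)
        if sent_length < max_length:
            # mirrors `decoded[i, sent_lengths[i]] = eos_token_id` above
            row[sent_length] = eos_token_id
        decoded.append(row)
    return decoded
```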
2020-09-09 04:08:36 -04:00
Stas Bekman
d0963486c1
adding TRANSFORMERS_VERBOSITY env var (#6961)
* introduce TRANSFORMERS_VERBOSITY env var + test + test helpers

* cleanup

* remove helper function
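A hedged sketch of how the new env var would be used (assuming the accepted values mirror the library's logging levels: debug, info, warning, error, critical):

```
import os

# Set the verbosity before `transformers` is imported, so the library can
# pick it up at import time.
os.environ["TRANSFORMERS_VERBOSITY"] = "error"

# import transformers  # would now emit only errors and above
```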
2020-09-09 04:08:01 -04:00
Sam Shleifer
f0fc0aea6b
pegasus.rst: fix expected output (#7017) 2020-09-08 13:29:16 -04:00
Patrick von Platen
120176ea29
[Longformer] Fix longformer documentation (#7016)
* fix longformer

* allow position ids to not be initialized
2020-09-08 18:51:28 +02:00
Lysandre Debut
5c4eb4b1ac
Fixing FLOPS merge by checking if torch is available (#7013)
* Should check if `torch` is available

* fixed samples_count error, distributed_concat arguments

* style

* Import torch at beginning of file

Co-authored-by: TevenLeScao <teven.lescao@gmail.com>
2020-09-08 10:51:58 -04:00
Teven
01d340adfa
Floating-point operations logging in trainer (#6768)
* neFLOs calculation, logging, and reloading (#1)

* testing distributed consecutive batches

* fixed AttributeError from DataParallel

* removed verbosity

* rotate with use_mtime=True

* removed print

* fixed interaction with gradient accumulation

* indent formatting

* distributed neflo counting

* fixed typo

* fixed typo

* mean distributed losses

* exporting log history

* moved a few functions

* floating_point_ops clarification for transformers with parameter-reuse

* code quality

* double import

* made flo estimation more task-agnostic

* only logging flos if computed

* code quality

* unused import

* Update src/transformers/trainer.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/modeling_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Sylvain review

* Update src/transformers/modeling_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* black

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
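For context, a hedged sketch of the 6·N·D heuristic that this kind of floating-point-operation counting is commonly based on (`estimate_flos` is an illustrative name, not the Trainer's actual API):

```
def estimate_flos(num_parameters: int, num_tokens: int) -> int:
    # A forward + backward pass of a transformer costs roughly
    # 6 floating-point operations per parameter per token.
    return 6 * num_parameters * num_tokens
```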
2020-09-08 10:00:56 -04:00
Sylvain Gugger
d155b38d6e
Funnel transformer (#6908)
* Initial model

* Fix upsampling

* Add special cls token id and test

* Formatting

* Test and first FunnelTokenizerFast

* Common tests

* Fix the check_repo script and document Funnel

* Doc fixes

* Add all models

* Write doc

* Fix test

* Fix copyright

* Forgot some layers can be repeated

* Apply suggestions from code review

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/transformers/modeling_funnel.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Address review comments

* Update src/transformers/modeling_funnel.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Address review comments

* Update src/transformers/modeling_funnel.py

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* Slow integration test

* Make small integration test

* Formatting

* Add checkpoint and separate classification head

* Formatting

* Expand list, fix link and add in pretrained models

* Styling

* Add the model in all summaries

* Typo fixes

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
2020-09-08 08:08:08 -04:00
Stuart Mesham
25afb4ea50
fixed trainer tr_loss memory leak (#6999)
* fixed trainer tr_loss memory leak

* detached returned training loss from computation graph in the Trainer class' training_step() method

* Revert "fixed trainer tr_loss memory leak"

This reverts commit 47226e4e
2020-09-08 08:07:33 -04:00
Manuel Romero
1b76936d1a
Fix typo (#6994) 2020-09-08 04:22:57 -04:00
Philipp Schmid
8235426ee8
New Community NB "Fine tune GPT-2 with Trainer class" (#7005) 2020-09-08 03:42:20 -04:00
Stas Bekman
c18f5916a0
typo (#7001)
apologies for the tiny PRs, just sending those as I find them.
2020-09-08 01:22:20 -04:00
Mehrdad Farahani
60fc03290b
README for HooshvareLab/bert-fa-base-uncased (#6990)
ParsBERT v2.0 is a fine-tuned and vocab-reconstructed version of ParsBERT, and it can be used in a wider range of domains!

It includes these features:
- We added some unused vocabulary entries for use in summarization and other tasks.
- We fine-tuned the model on a wide variety of writing styles in the Persian language.
2020-09-07 16:43:50 -04:00
Jangwon Park
90ec78b514
Add missing arguments for BertWordPieceTokenizer (#5810) 2020-09-07 08:35:41 -04:00
Lysandre Debut
77cd0e13d2
Conversion scripts shouldn't have relative imports (#6991) 2020-09-07 08:31:06 -04:00
Lysandre
1650130b0f Remove misleading docstring 2020-09-07 14:16:59 +02:00
Stas Bekman
159ef07e4c
match CI's version of flake8 (#6941)
my flake8 wasn't up to date, so `make quality` wasn't reporting the same things CI did; this PR adds the actual required version.

Thinking more about some of these minimal versions: CI will always install afresh and thus will always run the latest version. Is there a way to tell pip to always install the latest versions of certain dependencies on `pip install -e ".[dev]"`, rather than hardcoding the minimums, which quickly become outdated?
2020-09-07 08:12:25 -04:00
Abed khooli
e9d0d4c75c
Create README.md (#6974) 2020-09-07 07:31:22 -04:00
Stas Bekman
848fbe1e35
[gen utils] missing else case (#6980)
* [gen utils] missing else case

1. The `else` case is missing; I hit it while porting a model. It probably needs to assert there.
2. Also, the comment on top seems to be outdated (only vocab_size is being set there).

* typo
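A hedged, illustrative sketch of the pattern being requested (the names and attributes here are hypothetical, not the actual generation code): an if/elif chain that silently falls through is given an explicit else that fails loudly.

```
def vocab_size_for(config):
    # Prefer the top-level vocab_size; fall back to the decoder's.
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    elif hasattr(config, "decoder") and hasattr(config.decoder, "vocab_size"):
        return config.decoder.vocab_size
    else:
        # The missing `else`: assert instead of silently doing nothing.
        raise AssertionError("config has no vocab_size usable for generation")
```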
2020-09-07 07:28:06 -04:00
tznurmin
f7e80721eb
Fixed the default number of attention heads in Reformer Configuration (#6973) 2020-09-07 12:12:22 +02:00
Richard Bownes
e20d8895bd
Create README.md model card (#6964)
* Create README.md

* Add some custom prompts

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-07 06:01:40 -04:00
Stas Bekman
b4a9c95f1b
[testing] add dependency: parametrize (#6958)
unittest doesn't support pytest's super-handy `@pytest.mark.parametrize`. I researched, and there are many proposed workarounds, most tedious at best. If we include https://pypi.org/project/parameterized/ in the dev dependencies, it will provide a very easy way to write parameterized tests, in the same style as pytest's decorator, plus quite a few other forms. 

Example:
```
import math

from parameterized import parameterized

@parameterized([
    (2, 2, 4),
    (2, 3, 8),
    (1, 9, 1),
    (0, 9, 0),
])
def test_pow(base, exponent, expected):
    assert math.pow(base, exponent) == expected
```
(use `parameterized.expand` and add an extra `self` argument if inside a test class)

As a reminder, the pytest style is slightly different:
```
    @pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected
```
More examples here: https://pypi.org/project/parameterized

May I suggest that it will make it much easier to write some types of tests?
2020-09-07 05:50:18 -04:00
Stas Bekman
acfaad74ab
[docstring] missing arg (#6933)
* [docstring] missing arg

add the missing `tie_word_embeddings` entry

* cleanup

* Update src/transformers/configuration_reformer.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-07 05:36:16 -04:00
Stas Bekman
c3317e1f80
typo (#6959)
there is no var `decoder_input_ids`, but there is `input_ids` for decoder :)
2020-09-07 05:16:24 -04:00
Julien Chaumond
10c6f94adc
[model_card] register jplu/tf-xlm-r-ner-40-lang as multilingual 2020-09-07 05:03:40 -04:00
Lysandre Debut
9ef9c39728
Cannot index None (#6984) 2020-09-07 04:56:08 -04:00
Sylvain Gugger
08de989a0a
Trainer with grad accum (#6930)
* Add warning for gradient accumulation

* Formatting
2020-09-07 04:54:00 -04:00
Julien Chaumond
d4aa7284c8
[model_card] jplu/tf-xlm-r-ner-40-lang: Fix link
cc @jplu
2020-09-07 04:33:15 -04:00
Boris Dayma
995a958dd1
feat: allow prefix for any generative model (#5885)
* feat: allow padding_text for any generative model

* docs(pipelines.py): correct typo

* Update src/transformers/pipelines.py

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* feat: rename padding_text to prefix

* fix: cannot tokenize empty text

* fix: pass prefix arg to pipeline

* test: add prefix to text-generation pipeline

* style: fix style

* style: clean code and variable name more explicit

* set arg docstring to optional

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-07 03:03:45 -04:00
Sam Shleifer
ce37be9d94
[s2s] warn if --fp16 for torch 1.6 (#6977) 2020-09-06 20:41:29 -04:00
Patrick von Platen
f72fe1f31a
Correct wrong spacing in README 2020-09-06 13:26:56 +02:00
Steven Liu
d31031f603
create model card for astroGPT (#6960)
* create model card for astroGPT

* Hotlink to actual image file

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-05 12:50:19 -04:00
Naveenkhasyap
56742e9f61
Create Readme.MD for KanBERTo (#6942)
* Create Readme.MD for KanBERTo

README for KanBERTo, a language model for Kannada.

* Update model_cards/Naveen-k/KanBERTo/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-04 18:24:32 -04:00
Stas Bekman
48ff6d5109
[doc] remove the implied defaults to :obj:`None`, s/True/:obj:`True`/, etc. (#6956)
* remove the implied defaults to :obj:`None`

* fix bug in the original

* replace to :obj:`True`, :obj:`False`
2020-09-04 18:22:25 -04:00
Stas Bekman
eff274d629
typo (#6952) 2020-09-04 16:14:37 -04:00
Sam Shleifer
a4fc0c80b1
[s2s] run_eval.py parses generate_kwargs (#6948) 2020-09-04 14:19:31 -04:00
Sam Shleifer
6078b12098
[s2s] distill: --normalize_hidden --supervise_forward (#6834) 2020-09-04 14:05:56 -04:00
Stas Bekman
c5d43a872f
[docstring] misc arg doc corrections (#6932)
* correct bool types

fix docstring s/int/bool/

* fix description

* fix num_labels to match reality
2020-09-04 10:09:42 -04:00
Patrick von Platen
e3990d137a
fix (#6946) 2020-09-04 16:08:54 +02:00
Yih-Dar
a75e319819
Fix mixed precision issue in TF DistilBert (#6915)
* Remove hard-coded uses of float32 to fix mixed precision use in TF Distilbert

* fix style

* fix gelu dtype issue in TF Distilbert

* fix numeric overflow while using half precision
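A hedged NumPy sketch of the general class of fix involved (illustrative only; the actual PR changes TF code): a hard-coded masking constant like -1e30 overflows to -inf in float16, while a dtype-aware minimum stays finite under mixed precision.

```
import numpy as np

def masked_scores(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Pick the mask fill value from the tensor's own dtype
    # instead of a hard-coded float32-sized constant.
    big_neg = np.finfo(scores.dtype).min
    return np.where(mask, scores, big_neg)
```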
2020-09-04 14:29:57 +02:00
Sam Shleifer
e95d262f25
[s2s] support early stopping based on loss, rather than rouge (#6927) 2020-09-03 17:31:35 -04:00
Sam Shleifer
207ed8cb78
[s2s] use --eval_beams command line arg (#6926) 2020-09-03 12:42:09 -04:00
krfricke
0f360d3d1c
move wandb/comet logger init to train() to allow parallel logging (#6850)
* move wandb/comet logger init to train() to allow parallel logging

* Setup wandb/comet loggers on first call to log()
2020-09-03 11:49:14 -04:00
Sam Shleifer
39ed68d597
[s2s] allow task_specific_params=summarization_xsum (#6923) 2020-09-03 11:11:40 -04:00
Sam Shleifer
5a318f075a
[s2s]: script to convert pl checkpoints to hf checkpoints (#6911)
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-03 09:47:00 -04:00
brett koonce
b8e4906c97
tweak tar command in readme (#6919) 2020-09-03 09:29:01 -04:00
Stefan Engl
a66db7d828
Corrected link to paper (#6905) 2020-09-03 09:23:42 -04:00
David Mark Nemeskey
55d61ce8d6
Added a link to the thesis. (#6906) 2020-09-03 09:20:03 -04:00
abdullaholuk-loodos
653a79ccad
Fixed errors in the "Usage" section of the Loodos model cards. Also, the "electra-base-turkish-uncased" model was removed from S3 and re-uploaded as "electra-base-turkish-uncased-discriminator"; its README was added. (#6921)
Co-authored-by: Abdullah Oluk <abdullaholuk123@gmail.com>
2020-09-03 09:13:43 -04:00
Julien Chaumond
5a3aec90a9
[model_card] link to correctly cased piaf dataset
cc @psorianom @rachelker
2020-09-03 08:57:32 -04:00
Sylvain Gugger
722b5807d8
Template updates (#6914) 2020-09-03 04:14:58 -04:00