Stas Bekman
848fbe1e35
[gen utils] missing else case ( #6980 )
...
* [gen utils] missing else case
1. The `else` case is missing - I hit that case while porting a model. It probably needs to assert there?
2. Also, the comment on top seems to be outdated (only `vocab_size` is being set there)
* typo
2020-09-07 07:28:06 -04:00
tznurmin
f7e80721eb
Fixed the default number of attention heads in Reformer Configuration ( #6973 )
2020-09-07 12:12:22 +02:00
Richard Bownes
e20d8895bd
Create README.md model card ( #6964 )
...
* Create README.md
* Add some custom prompts
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-07 06:01:40 -04:00
Stas Bekman
b4a9c95f1b
[testing] add dependency: parametrize ( #6958 )
...
`unittest` doesn't support pytest's super-handy `@pytest.mark.parametrize`. I researched this, and there are many proposed workarounds, most tedious at best. If we include https://pypi.org/project/parameterized/ in the dev dependencies, it will provide a very easy way to write parameterized tests - same as pytest's fixtures, plus quite a few other styles.
Example:
```
import math

from nose.tools import assert_equal
from parameterized import parameterized


@parameterized([
    (2, 2, 4),
    (2, 3, 8),
    (1, 9, 1),
    (0, 9, 0),
])
def test_pow(base, exponent, expected):
    assert_equal(math.pow(base, exponent), expected)
```
(add an extra `self` argument if the test lives inside a test class)
As a reminder, the pytest style is slightly different:
```
import pytest


@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```
More examples here: https://pypi.org/project/parameterized
May I suggest adding it? It would make it much easier to write some types of tests.
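For the curious, the mechanism behind this style of parameterization can be sketched with nothing but the stdlib: a marker decorator plus a metaclass that stamps out one `unittest` method per parameter tuple. This is a simplified illustration of the idea, not the library's actual implementation (`expand` and `ParametrizedMeta` are hypothetical names):
```
import math
import unittest


def expand(params):
    """Mark a test function with its parameter tuples
    (simplified stand-in for the library's decorator)."""
    def decorator(func):
        func._params = params
        return func
    return decorator


class ParametrizedMeta(type):
    """Replace each marked test with one method per parameter tuple."""
    def __new__(mcs, name, bases, namespace):
        for attr, func in list(namespace.items()):
            params = getattr(func, "_params", None)
            if params is None:
                continue
            del namespace[attr]
            for i, args in enumerate(params):
                # bind func/args via defaults so each method keeps its own tuple
                namespace[f"{attr}_{i}"] = lambda self, _f=func, _a=args: _f(self, *_a)
        return super().__new__(mcs, name, bases, namespace)


class TestPow(unittest.TestCase, metaclass=ParametrizedMeta):
    @expand([(2, 2, 4), (2, 3, 8), (1, 9, 1), (0, 9, 0)])
    def test_pow(self, base, exponent, expected):
        self.assertEqual(math.pow(base, exponent), expected)
```
Running this through any test runner discovers four independent tests (`test_pow_0` … `test_pow_3`), which is the key benefit: each tuple fails or passes on its own.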
2020-09-07 05:50:18 -04:00
Stas Bekman
acfaad74ab
[docstring] missing arg ( #6933 )
...
* [docstring] missing arg
add the missing `tie_word_embeddings` entry
* cleanup
* Update src/transformers/configuration_reformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-07 05:36:16 -04:00
Stas Bekman
c3317e1f80
typo ( #6959 )
...
there is no var `decoder_input_ids`, but there is `input_ids` for the decoder :)
2020-09-07 05:16:24 -04:00
Julien Chaumond
10c6f94adc
[model_card] register jplu/tf-xlm-r-ner-40-lang as multilingual
2020-09-07 05:03:40 -04:00
Lysandre Debut
9ef9c39728
Cannot index None ( #6984 )
2020-09-07 04:56:08 -04:00
Sylvain Gugger
08de989a0a
Trainer with grad accum ( #6930 )
...
* Add warning for gradient accumulation
* Formatting
2020-09-07 04:54:00 -04:00
Julien Chaumond
d4aa7284c8
[model_card] jplu/tf-xlm-r-ner-40-lang: Fix link
...
cc @jplu
2020-09-07 04:33:15 -04:00
Boris Dayma
995a958dd1
feat: allow prefix for any generative model ( #5885 )
...
* feat: allow padding_text for any generative model
* docs(pipelines.py): correct typo
* Update src/transformers/pipelines.py
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
* feat: rename padding_text to prefix
* fix: cannot tokenize empty text
* fix: pass prefix arg to pipeline
* test: add prefix to text-generation pipeline
* style: fix style
* style: clean code and variable name more explicit
* set arg docstring to optional
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-07 03:03:45 -04:00
Sam Shleifer
ce37be9d94
[s2s] warn if --fp16 for torch 1.6 ( #6977 )
2020-09-06 20:41:29 -04:00
Patrick von Platen
f72fe1f31a
Correct wrong spacing in README
2020-09-06 13:26:56 +02:00
Steven Liu
d31031f603
create model card for astroGPT ( #6960 )
...
* create model card for astroGPT
* Hotlink to actual image file
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-05 12:50:19 -04:00
Naveenkhasyap
56742e9f61
Create Readme.MD for KanBERTo ( #6942 )
...
* Create Readme.MD for KanBERTo
KanBERTo language model readme for Kannada language.
* Update model_cards/Naveen-k/KanBERTo/README.md
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-04 18:24:32 -04:00
Stas Bekman
48ff6d5109
[doc] remove the implied defaults to :obj:`None`, s/True/:obj:`True`/, etc. ( #6956 )
...
* remove the implied defaults to :obj:`None`
* fix bug in the original
* replace to :obj:`True`, :obj:`False`
2020-09-04 18:22:25 -04:00
Stas Bekman
eff274d629
typo ( #6952 )
2020-09-04 16:14:37 -04:00
Sam Shleifer
a4fc0c80b1
[s2s] run_eval.py parses generate_kwargs ( #6948 )
2020-09-04 14:19:31 -04:00
Sam Shleifer
6078b12098
[s2s] distill: --normalize_hidden --supervise_forward ( #6834 )
2020-09-04 14:05:56 -04:00
Stas Bekman
c5d43a872f
[docstring] misc arg doc corrections ( #6932 )
...
* correct bool types
fix docstring s/int/bool/
* fix description
* fix num_labels to match reality
2020-09-04 10:09:42 -04:00
Patrick von Platen
e3990d137a
fix ( #6946 )
2020-09-04 16:08:54 +02:00
Yih-Dar
a75e319819
Fix mixed precision issue in TF DistilBert ( #6915 )
...
* Remove hard-coded uses of float32 to fix mixed precision use in TF Distilbert
* fix style
* fix gelu dtype issue in TF Distilbert
* fix numeric overflow while using half precision
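On the half-precision overflow above: a common cause of this class of bug is a hard-coded large constant (e.g. a `-1e9` attention-mask fill value) that fits in float32 but exceeds the float16 range (roughly ±65504). Whether that is the exact constant fixed here is an assumption; the stdlib `struct` module's binary16 `"e"` format is enough to demonstrate the limit:
```
import struct

# binary16 ("e" format) holds magnitudes only up to ~65504, so a
# hard-coded float32-style constant like -1e9 cannot be stored in fp16
try:
    struct.pack("e", -1e9)
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed

# the largest finite fp16 magnitude packs without error
struct.pack("e", -65504.0)
```
The usual remedy is to derive the fill value from the tensor's own dtype (its finite minimum) rather than hard-coding a float32-sized constant.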
2020-09-04 14:29:57 +02:00
Sam Shleifer
e95d262f25
[s2s] support early stopping based on loss, rather than rouge ( #6927 )
2020-09-03 17:31:35 -04:00
Sam Shleifer
207ed8cb78
[s2s] use --eval_beams command line arg ( #6926 )
2020-09-03 12:42:09 -04:00
krfricke
0f360d3d1c
move wandb/comet logger init to train() to allow parallel logging ( #6850 )
...
* move wandb/comet logger init to train() to allow parallel logging
* Setup wandb/comet loggers on first call to log()
2020-09-03 11:49:14 -04:00
Sam Shleifer
39ed68d597
[s2s] allow task_specific_params=summarization_xsum ( #6923 )
2020-09-03 11:11:40 -04:00
Sam Shleifer
5a318f075a
[s2s]: script to convert pl checkpoints to hf checkpoints ( #6911 )
...
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-03 09:47:00 -04:00
brett koonce
b8e4906c97
tweak tar command in readme ( #6919 )
2020-09-03 09:29:01 -04:00
Stefan Engl
a66db7d828
Corrected link to paper ( #6905 )
2020-09-03 09:23:42 -04:00
David Mark Nemeskey
55d61ce8d6
Added a link to the thesis. ( #6906 )
2020-09-03 09:20:03 -04:00
abdullaholuk-loodos
653a79ccad
Loodos model cards had errors in the "Usage" section; these are fixed. Also, the "electra-base-turkish-uncased" model was removed from s3 and re-uploaded as "electra-base-turkish-uncased-discriminator", and its README was added. ( #6921 )
...
Co-authored-by: Abdullah Oluk <abdullaholuk123@gmail.com>
2020-09-03 09:13:43 -04:00
Julien Chaumond
5a3aec90a9
[model_card] link to correctly cased piaf dataset
...
cc @psorianom @rachelker
2020-09-03 08:57:32 -04:00
Sylvain Gugger
722b5807d8
Template updates ( #6914 )
2020-09-03 04:14:58 -04:00
Antonio V Mendoza
ea2c6f1afc
Adding the LXMERT pretraining model (MultiModal languageXvision) to HuggingFace's suite of models ( #5793 )
...
* added template files for LXMERT and completed the configuration_lxmert.py
* added modeling, tokenization, testing, and finishing touches for lxmert [yet to be tested]
* added model card for lxmert
* cleaning up lxmert code
* Update src/transformers/modeling_lxmert.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/modeling_tf_lxmert.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/modeling_tf_lxmert.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/modeling_lxmert.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* tested torch lxmert, changed documentation, updated outputs, and other small fixes
* Update src/transformers/convert_pytorch_checkpoint_to_tf2.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/convert_pytorch_checkpoint_to_tf2.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/convert_pytorch_checkpoint_to_tf2.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* renaming, other small issues, did not change TF code in this commit
* added lxmert question answering model in pytorch
* added capability to edit number of qa labels for lxmert
* made answer optional for lxmert question answering
* add option to return hidden_states for lxmert
* changed default qa labels for lxmert
* changed config archive path
* squashing 3 commits: merged UI + testing improvements + more UI and testing
* changed some variable names for lxmert
* TF LXMERT
* Various fixes to LXMERT
* Final touches to LXMERT
* AutoTokenizer order
* Add LXMERT to index.rst and README.md
* Merge commit test fixes + Style update
* TensorFlow 2.3.0 sequential model changes variable names
Remove inherited test
* Update src/transformers/modeling_tf_pytorch_utils.py
* Update docs/source/model_doc/lxmert.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update docs/source/model_doc/lxmert.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/modeling_tf_lxmert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* added suggestions
* Fixes
* Final fixes for TF model
* Fix docs
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-09-03 04:02:25 -04:00
Puneetha Pai
4ebb52afdb
test_tf_common: remove unused mixin class parameters ( #6866 )
2020-09-02 10:54:40 -04:00
Stas Bekman
e71f32c0ef
[testing] fix ambiguous test ( #6898 )
...
Since `generate()` does:
```
num_beams = num_beams if num_beams is not None else self.config.num_beams
```
This test fails if `model.config.num_beams > 1` (which is the case in the model I'm porting).
This fix makes the test setup unambiguous by passing an explicit `num_beams=1` to `generate()`.
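The fallback pattern in question can be sketched in isolation (`Config` and `Model` below are hypothetical stand-ins, not the transformers classes): an argument left as `None` silently inherits the config value, which is why the test needed an explicit `num_beams=1`.
```
class Config:
    """Hypothetical stand-in for a model config."""
    def __init__(self, num_beams=3):
        self.num_beams = num_beams


class Model:
    """Hypothetical stand-in illustrating the generate() fallback."""
    def __init__(self, config):
        self.config = config

    def generate(self, num_beams=None):
        # the fallback: an explicit argument wins, otherwise the
        # config value silently takes over
        num_beams = num_beams if num_beams is not None else self.config.num_beams
        return num_beams


model = Model(Config(num_beams=3))
assert model.generate() == 3             # ambiguous: inherited from config
assert model.generate(num_beams=1) == 1  # unambiguous: greedy search
```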
Thanks.
2020-09-02 16:18:17 +02:00
Sylvain Gugger
8f2723caf0
Output attention takes an s ( #6903 )
...
* Fix output_attention -> output_attentions
* Formatting
* One unsaved file
2020-09-02 08:11:45 -04:00
Yohei Tamura
485da7222f
fix error class instantiation ( #6634 )
2020-09-02 07:36:32 -04:00
Suraj Patil
4230d30f77
[pipelines] Text2TextGenerationPipeline ( #6744 )
...
* add Text2TextGenerationPipeline
* remove max length warning
* remove comments
* remove input_length
* fix typo
* add tests
* use TFAutoModelForSeq2SeqLM
* doc
* typo
* add the doc below TextGenerationPipeline
* doc nit
* style
* delete comment
2020-09-02 07:34:35 -04:00
Prajjwal Bhargava
6b24281229
fix typo in comments ( #6838 )
2020-09-02 06:55:37 -04:00
Stas Bekman
7351ef83c1
[doc] typos ( #6867 )
...
* [doc] typos
fixed typos
* Update README.md
2020-09-02 06:51:51 -04:00
Harry Wang
ee1bff06f8
minor docs grammar fixes ( #6889 )
2020-09-02 06:45:19 -04:00
Patrick von Platen
8abd7f69fc
fix warning for position ids ( #6884 )
2020-09-02 06:44:51 -04:00
Parthe Pandit
7cb0572c64
Update modeling_bert.py ( #6897 )
...
outptus -> outputs in example of BertForPreTraining
2020-09-02 06:39:01 -04:00
David Mark Nemeskey
e3c55ceb8d
Model card for huBERT ( #6893 )
...
* Create README.md
Model card for huBERT.
* Update README.md
lowercase h
* Update model_cards/SZTAKI-HLT/hubert-base-cc/README.md
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
2020-09-02 04:50:10 -04:00
Patrick von Platen
1889e96c8c
fix QA example for PT ( #6890 )
2020-09-02 09:53:09 +02:00
Julien Chaumond
d822ab636b
[model_cards] Fix file path for flexudy/t5-base-multi-sentence-doctor
2020-09-02 00:02:40 +02:00
Rohan Rajpal
ad5fb33c9a
Create README.md ( #6598 )
2020-09-01 17:59:15 -04:00
Rohan Rajpal
f9dadcd85b
Create README.md ( #6602 )
2020-09-01 17:58:43 -04:00
Igli Manaj
f5d69c75f7
Update multilingual passage reranking model card ( #6788 )
...
Fix the range of possible scores, add inference.
2020-09-01 17:56:19 -04:00