Patrick von Platen
d5b40d6693
[Setup.py] update jaxlib ( #9831 )
...
* update jaxlib
* Update setup.py
* update table
2021-01-27 11:34:21 +03:00
abhishek thakur
f617490e71
ConvBERT Model ( #9717 )
...
* finalize convbert
* finalize convbert
* fix
* fix
* fix
* push
* fix
* tf image patches
* fix torch model
* tf tests
* conversion
* everything aligned
* remove print
* tf tests
* fix tf
* make tf tests pass
* everything works
* fix init
* fix
* special treatment for sepconv1d
* style
* 🙏🏽
* add doc and cleanup
* add electra test again
* fix doc
* fix doc again
* fix doc again
* Update src/transformers/modeling_tf_pytorch_utils.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/models/conv_bert/configuration_conv_bert.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update docs/source/model_doc/conv_bert.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/auto/configuration_auto.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/conv_bert/configuration_conv_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* conv_bert -> convbert
* more fixes from review
* add conversion script
* don't use pretrained embed
* unused config
* suggestions from julien
* some more fixes
* p -> param
* fix copyright
* fix doc
* Update src/transformers/models/convbert/configuration_convbert.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* comments from reviews
* fix-copies
* fix style
* revert shape_list
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2021-01-27 03:20:09 -05:00
Patrick von Platen
e575e06287
fix led not defined ( #9828 )
2021-01-27 10:43:14 +03:00
Yusuke Mori
059bb25817
Fix a bug in run_glue.py ( #9812 ) ( #9815 )
2021-01-26 14:32:19 -05:00
Tristan Deleu
eba418ac5d
Commit the last step on world_process_zero in WandbCallback ( #9805 )
...
* Commit the last step on world_process_zero in WandbCallback
* Use the environment variable WANDB_LOG_MODEL as a default value in WandbCallback
2021-01-26 13:21:26 -05:00
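A minimal sketch (not the code from the PR) of how the `WANDB_LOG_MODEL` environment variable can be set before building the Trainer so the W&B integration picks it up as a default; the project name is a hypothetical placeholder:

```python
import os

# Assumption: when set before the Trainer is created, the wandb integration
# reads WANDB_LOG_MODEL as the default for uploading model checkpoints.
os.environ["WANDB_LOG_MODEL"] = "true"
os.environ["WANDB_PROJECT"] = "my-project"  # hypothetical project name

# ... build TrainingArguments / Trainer as usual; the WandbCallback is added
# automatically when wandb is installed.
```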
Derrick Blakely
8edc98bb70
Allow RAG to output decoder cross-attentions ( #9789 )
...
* get cross attns
* add cross-attns doc strings
* fix typo
* line length
* Apply suggestions from code review
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
2021-01-26 20:32:46 +03:00
Magdalena Biesialska
8f6c12d306
Fix fine-tuning translation scripts ( #9809 )
2021-01-26 11:30:31 -05:00
Michael Glass
c37dcff764
Fixed parameter name for logits_processor ( #9790 )
2021-01-26 18:44:02 +03:00
Sylvain Gugger
0d0efd3a0e
Smdistributed trainer ( #9798 )
...
* Add a debug print
* Adapt Trainer to use smdistributed if available
* Forgotten parenthesis
* Real check for sagemaker
* Don't forget to define device...
* Whoops, local_rank is defined differently
* Update since local_rank has the proper value
* Remove debug statement
* More robust check for smdistributed
* Quality
* Deal with key not present error
2021-01-26 10:28:21 -05:00
Lysandre
897a24c869
Fix head_mask for model templates
2021-01-26 11:02:48 +01:00
Andrea Cappelli
10e5f28212
Improve pytorch examples for fp16 ( #9796 )
...
* Pad to 8x for fp16 multiple choice example (#9752 )
* Pad to 8x for fp16 squad trainer example (#9752 )
* Pad to 8x for fp16 ner example (#9752 )
* Pad to 8x for fp16 swag example (#9752 )
* Pad to 8x for fp16 qa beam search example (#9752 )
* Pad to 8x for fp16 qa example (#9752 )
* Pad to 8x for fp16 seq2seq example (#9752 )
* Pad to 8x for fp16 glue example (#9752 )
* Pad to 8x for fp16 new ner example (#9752 )
* update script template #9752
* Update examples/multiple-choice/run_swag.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update examples/question-answering/run_qa.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update examples/question-answering/run_qa_beam_search.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* improve code quality #9752
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-01-26 04:47:07 -05:00
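The examples above all apply the same trick: pad sequences to a multiple of 8 so fp16 matmuls can use tensor cores. A minimal sketch with `DataCollatorWithPadding` (checkpoint name chosen for illustration):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Pad dynamically to a multiple of 8 so fp16 kernels can hit tensor cores.
data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)

batch = data_collator([
    tokenizer("a short sentence"),
    tokenizer("another, slightly longer sentence"),
])
# The sequence dimension is rounded up to the next multiple of 8.
print(batch["input_ids"].shape)
```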
Nicolas Patry
781e4b1384
Adding skip_special_tokens=True to FillMaskPipeline ( #9783 )
...
* We most likely don't want special tokens in this output.
* Adding `skip_special_tokens=True` to FillMaskPipeline
- It's backward incompatible.
* It makes more sense for pipelines to remove references to
special_tokens (all of the other pipelines do that).
- Keeping special tokens makes it hard for users to actually remove them
because all models have different tokens (<s>, <cls>, [CLS], ....)
* Fixing `token_str` in the same vein, and actually fixing the tests too!
2021-01-26 10:06:28 +01:00
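With this change the decoded `sequence` is produced with `skip_special_tokens=True`, so fill-mask predictions no longer carry model-specific markers such as `<s>` or `[CLS]`. A small usage sketch (checkpoint chosen for illustration):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# `token_str` is now the plain token text and `sequence` contains no special tokens.
for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], prediction["score"])
```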
Daniel Stancl
1867d9a8d7
Add head_mask/decoder_head_mask for TF BART models ( #9639 )
...
* Add head_mask/decoder_head_mask for TF BART models
* Add head_mask and decoder_head_mask input arguments for TF BART-based
models as a TF counterpart to the PR #9569
* Add test_headmasking functionality to tests/test_modeling_tf_common.py
* TODO: Add a test to verify that we can get a gradient back for
importance score computation
* Remove redundant #TODO note
Remove redundant #TODO note from tests/test_modeling_tf_common.py
* Fix assertions
* Make style
* Fix ...Model input args and adjust one new test
* Add back head_mask and decoder_head_mask to BART-based ...Model
after the last commit
* Remove head_mask and decoder_head_mask from input_dict
in TF test_train_pipeline_custom_model as these two have different
shape than other input args (Necessary for passing this test)
* Revert adding global_rng in test_modeling_tf_common.py
2021-01-26 03:50:00 -05:00
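A hedged sketch of what the new arguments look like from the user side, assuming the TF BART models accept `head_mask`/`decoder_head_mask` the same way the PyTorch ones do after #9569; each mask has shape `(num_layers, num_heads)` with 1.0 keeping a head and 0.0 masking it:

```python
import tensorflow as tf
from transformers import BartTokenizer, TFBartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = TFBartModel.from_pretrained("facebook/bart-base")

inputs = tokenizer("Masking a couple of attention heads.", return_tensors="tf")

# All-ones masks keep every head; zero out entries to prune specific heads.
head_mask = tf.ones((model.config.encoder_layers, model.config.encoder_attention_heads))
decoder_head_mask = tf.ones((model.config.decoder_layers, model.config.decoder_attention_heads))

outputs = model(**inputs, head_mask=head_mask, decoder_head_mask=decoder_head_mask)
print(outputs.last_hidden_state.shape)
```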
Yusuke Mori
cb73ab5a38
Fix broken links in the converting tf ckpt document ( #9791 )
...
* Fix broken links in the converting tf ckpt document
* Update docs/source/converting_tensorflow_models.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Reflect the review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-01-26 03:37:57 -05:00
Patrick von Platen
d94cc2f904
[Flaky Generation Tests] Make sure that no early stopping is happening for beam search ( #9794 )
...
* fix ci
* fix ci
* renaming
* fix dup line
2021-01-26 03:21:44 -05:00
Stas Bekman
0fdbf0850a
[PR/Issue templates] normalize, group, sort + add myself for deepspeed ( #9706 )
...
* normalize, group, sort + add myself for deepspeed
* new structure
* add ray
* typo
* more suggestions
* more suggestions
* white space
* Update .github/ISSUE_TEMPLATE/bug-report.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* add bullets
* sync
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Apply suggestions from code review
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* sync
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2021-01-25 21:09:01 -08:00
Sylvain Gugger
af41da5097
Fix style
2021-01-25 12:40:58 -05:00
Sylvain Gugger
caf4abf768
Auto-resume training from checkpoint ( #9776 )
...
* Auto-resume training from checkpoint
* Update examples/text-classification/run_glue.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Roll out to other examples
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2021-01-25 12:03:51 -05:00
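Roughly the pattern the examples use for auto-resuming, sketched under the assumption that `training_args` and `trainer` are already built as in `run_glue.py`:

```python
import os
from transformers.trainer_utils import get_last_checkpoint

# If the output directory already holds a checkpoint, continue from it
# instead of starting training from scratch.
last_checkpoint = None
if os.path.isdir(training_args.output_dir):
    last_checkpoint = get_last_checkpoint(training_args.output_dir)

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
```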
Lysandre Debut
0f443436fb
Actual fix ( #9787 )
2021-01-25 11:12:07 -05:00
Stas Bekman
fac7cfb16a
[fsmt] onnx triu workaround ( #9738 )
...
* onnx triu workaround
* style
* working this time
* add test
* more efficient version
2021-01-25 08:57:37 -05:00
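The idea behind the workaround: ONNX export at the opsets used then could not handle `torch.triu`, so the upper-triangular mask is rebuilt from index comparisons. A generic sketch of that technique, not the exact code merged in the PR:

```python
import torch

def triu_onnx_friendly(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    """Upper-triangular selection for a 2D tensor without torch.triu."""
    rows = torch.arange(x.size(0), device=x.device).unsqueeze(1)
    cols = torch.arange(x.size(1), device=x.device).unsqueeze(0)
    mask = cols - rows >= diagonal  # True on and above the chosen diagonal
    return x * mask.to(x.dtype)

x = torch.ones(4, 4)
assert torch.equal(triu_onnx_friendly(x), torch.triu(x))
```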
Sorami Hisamoto
626116b7d7
Fix a typo in Trainer.hyperparameter_search docstring ( #9762 )
...
`compute_objectie` => `compute_objective`
2021-01-25 06:40:03 -05:00
Kai Fricke
d63ab61525
Use object store to pass trainer object to Ray Tune ( #9749 )
2021-01-25 05:01:55 -05:00
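The object-store pattern in general terms, sketched with a stand-in payload rather than an actual `Trainer`: put the heavy object into Ray's object store once and let each trial fetch it by reference instead of re-pickling it into every trial.

```python
import ray

ray.init(ignore_reinit_error=True)

# Stand-in for a heavy object such as a Trainer.
heavy_object = {"payload": list(range(100_000))}
heavy_ref = ray.put(heavy_object)

@ray.remote
def run_trial(lr):
    obj = ray.get(heavy_ref)  # the captured ObjectRef is resolved explicitly here
    return lr * len(obj["payload"])

print(ray.get(run_trial.remote(1e-4)))
```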
Maria Janina Sarol
6312fed47d
Fix TFTrainer prediction output ( #9662 )
...
* Fix TFTrainer prediction output
* Update trainer_tf.py
* Fix TFTrainer prediction output
* Fix evaluation_loss update in TFTrainer
* Fix TFTrainer prediction output
2021-01-25 10:27:12 +01:00
Wilfried L. Bounsi
9152f16023
Fix broken [Open in Colab] links ( #9761 )
2021-01-23 15:11:46 +05:30
Stas Bekman
b7b7e5d049
token_type_ids isn't used ( #9736 )
2021-01-22 20:38:53 -08:00
Julien Plu
a449ffcbd2
Fix test ( #9755 )
2021-01-22 17:40:16 +01:00
Sylvain Gugger
82d46febeb
Add report_to training arguments to control the reporting integrations used ( #9735 )
2021-01-22 10:34:34 -05:00
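A minimal sketch of the new argument: instead of logging to every installed integration, `report_to` selects them explicitly (output directory name is illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    report_to=["wandb"],  # e.g. ["tensorboard"], or an empty list to disable reporting
)
```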
Sylvain Gugger
411c582109
Fixes to run_seq2seq and instructions ( #9734 )
...
* Fixes to run_seq2seq and instructions
* Add more defaults for summarization
2021-01-22 10:03:57 -05:00
Julien Plu
d7c31abf38
Fix some TF slow tests ( #9728 )
...
* Fix saved model tests + fix a graph issue in longformer
* Apply style
2021-01-22 14:50:46 +01:00
Stefan Schweter
08b22722c7
examples: fix XNLI url ( #9741 )
2021-01-22 18:13:52 +05:30
Sylvain Gugger
5f80c15ef5
Fix memory regression in Seq2Seq example ( #9713 )
...
* Fix memory regression in Seq2Seq example
* Fix test and properly deal with -100
* Easier condition with device safety
* Patch for MBartTokenizerFast
2021-01-21 12:05:46 -05:00
Julien Plu
a7dabfb3d1
Fix TF s2s models ( #9478 )
...
* Fix Seq2Seq models for serving
* Apply style
* Fix longformer
* Fix mBart/Pegasus/Blenderbot
* Apply style
* Add a main intermediate layer
* Apply style
* Remove import
* Apply tf.function to Longformer
* Fix utils check_copy
* Update S2S template
* Fix BART + Blenderbot
* Fix BlenderbotSmall
* Fix BlenderbotSmall
* Fix BlenderbotSmall
* Fix MBart
* Fix Marian
* Fix Pegasus + template
* Apply style
* Fix common attributes test
* Forgot to fix the LED test
* Apply Patrick's comment on LED Decoder
2021-01-21 17:03:29 +01:00
Nicolas Patry
23e5a36ee6
Changing model default for TableQuestionAnsweringPipeline. ( #9729 )
...
* Changing model default for TableQuestionAnsweringPipeline.
- Discussion: https://discuss.huggingface.co/t/table-question-answering-is-not-an-available-task-under-pipeline/3284/6
* Updating slow tests that were out of sync.
2021-01-21 14:31:51 +01:00
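For context, a hedged usage sketch of the pipeline whose default checkpoint changed; with no `model=` argument it falls back to the pipeline's default TAPAS checkpoint (sample table values are made up):

```python
import pandas as pd
from transformers import pipeline

table = pd.DataFrame({
    "Repository": ["transformers", "datasets", "tokenizers"],
    "Stars": ["36542", "4512", "3934"],  # cell values must be strings
})

table_qa = pipeline("table-question-answering")  # uses the default model; pass model=... to override
print(table_qa(table=table, query="How many stars does the transformers repository have?"))
```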
Julien Plu
3f290e6c84
Fix mixed precision in TF models ( #9163 )
...
* Fix Gelu precision
* Fix gelu_fast
* Naming
* Fix usage and apply style
* add TF gelu approximate version
* add TF gelu approximate version
* add TF gelu approximate version
* Apply style
* Fix albert
* Remove the usage of the Activation layer
2021-01-21 07:00:11 -05:00
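The "gelu approximate" mentioned above is the tanh approximation of GELU; a self-contained sketch of that formula in TensorFlow, written to stay in the input dtype so it also works under mixed precision (not necessarily the exact activation code merged here):

```python
import math
import tensorflow as tf

def approximate_gelu(x: tf.Tensor) -> tf.Tensor:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    x = tf.convert_to_tensor(x)
    coeff = tf.cast(0.044715, x.dtype)
    inner = tf.cast(math.sqrt(2.0 / math.pi), x.dtype) * (x + coeff * tf.pow(x, 3))
    return 0.5 * x * (1.0 + tf.tanh(inner))

print(approximate_gelu(tf.constant([0.0, 1.0, -1.0], dtype=tf.float16)))
```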
Suraj Patil
248fa1ae72
fix T5 head mask in model_parallel ( #9726 )
...
* fix head mask in model_parallel
* pass correct head mask
2021-01-21 12:16:14 +01:00
Patrick von Platen
ca422e3d7d
finish ( #9721 )
2021-01-21 05:17:13 -05:00
Patrick von Platen
c8ea582ed6
reduce led memory ( #9723 )
2021-01-21 05:16:15 -05:00
guillaume-be
fb36c273a2
Allow text generation for ProphetNetForCausalLM ( #9707 )
...
* Moved ProphetNetForCausalLM's parent initialization after config update
* Added unit tests for generation for ProphetNetForCausalLM
2021-01-21 11:13:38 +01:00
Lysandre Debut
910aa89671
Temporarily deactivate TPU tests while we work on fixing them ( #9720 )
2021-01-21 04:17:39 -05:00
Muennighoff
6a346f0358
fix typo ( #9708 )
...
* fix typo
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2021-01-21 13:51:01 +05:30
Stas Bekman
4a20b7c450
[trainer] no --deepspeed and --sharded_ddp together ( #9712 )
...
* no --deepspeed and --sharded_ddp together
* Update src/transformers/trainer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* style
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-01-20 16:50:21 -08:00
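The check itself is a simple mutual-exclusion guard; a hypothetical sketch of the pattern (function and parameter names are illustrative, not the Trainer's actual attributes):

```python
from typing import Optional

def check_distributed_backends(deepspeed: Optional[str], sharded_ddp: bool) -> None:
    """Reject configurations that enable both backends at once."""
    if deepspeed and sharded_ddp:
        raise ValueError("--deepspeed and --sharded_ddp cannot be used together.")

check_distributed_backends(deepspeed=None, sharded_ddp=True)    # fine
# check_distributed_backends(deepspeed="ds_config.json", sharded_ddp=True)  # raises ValueError
```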
Sylvain Gugger
7acfa95afb
Add missing new line
2021-01-20 14:13:16 -05:00
Darigov Research
5a307ece82
Adds flashcards to Glossary & makes small corrections ( #8949 )
...
* fix: Makes small typo corrections & standardises glossary
* feat: Adds introduction & links to transformer flashcards
* feat: Adds attribution & adjustments requested in #8949
* feat: Adds flashcards to community.md
* refactor: Removes flashcards from glossary
2021-01-20 13:28:40 -05:00
Sylvain Gugger
3cd91e8162
Fix WAND_DISABLED test ( #9703 )
...
* Fix WAND_DISABLED test
* Remove duplicate import
* Make a test that actually works...
* Fix style
2021-01-20 12:30:24 -05:00
Sylvain Gugger
2a703773aa
Fix style
2021-01-20 12:17:40 -05:00
Stas Bekman
cd5565bed3
fix the backward for deepspeed ( #9705 )
2021-01-20 09:07:07 -08:00
Gunjan Chhablani
538245b0c2
Fix Trainer and Args to mention AdamW, not Adam. ( #9685 )
...
* Fix Trainer and Args to mention AdamW, not Adam.
* Update the docs for Training Arguments.
* Change arguments adamw_* to adam_*
* Fixed links to AdamW in TrainerArguments docs
* Fix line length in Training Args docs.
2021-01-20 11:59:31 -05:00
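The point of the doc fix: the Trainer's default optimizer is AdamW (decoupled weight decay), not plain Adam. A hedged sketch of supplying the optimizer explicitly via the `optimizers` argument, using an illustrative checkpoint and a constant learning-rate schedule:

```python
import torch
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Equivalent in spirit to the default: AdamW with weight decay; swap in another
# optimizer here if plain Adam behaviour is really wanted.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 1.0)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    optimizers=(optimizer, scheduler),
)
```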
NielsRogge
88583d4958
Add notebook ( #9696 )
2021-01-20 10:19:26 -05:00
NielsRogge
d1370d29b1
Add DeBERTa head models ( #9691 )
...
* Add DebertaForMaskedLM, DebertaForTokenClassification, DebertaForQuestionAnswering
* Add docs and fix quality
* Fix Deberta not having pooler
2021-01-20 10:18:50 -05:00
Sylvain Gugger
a7b62fece5
Fix Funnel Transformer conversion script ( #9683 )
2021-01-20 09:50:20 -05:00