Stas Bekman
d14e0af274
sync LayerDrop for Wav2Vec2Encoder + tests ( #12076 )
2021-06-09 13:21:03 +01:00
Koichi Yasuoka
82a2b76c95
Update run_ner.py with id2label config ( #12001 )
2021-06-09 07:27:05 -04:00
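A minimal sketch of the pattern the change above points at: run_ner.py now writes the label mappings into the model config so saved checkpoints carry readable label names. The checkpoint name and label set below are placeholders, not taken from the PR.

```python
from transformers import AutoConfig, AutoModelForTokenClassification

# Hypothetical label set; run_ner.py derives it from the training dataset.
label_list = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
id2label = {i: label for i, label in enumerate(label_list)}
label2id = {label: i for i, label in enumerate(label_list)}

config = AutoConfig.from_pretrained(
    "bert-base-cased",  # placeholder checkpoint
    num_labels=len(label_list),
    id2label=id2label,
    label2id=label2id,
)
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", config=config)
```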
Stas Bekman
0e82f0cbc2
typo
2021-06-08 12:55:17 -07:00
Stas Bekman
11d86d3de4
[Deepspeed Wav2vec2] integration ( #11638 )
...
* wip
* wip - but working with https://github.com/microsoft/DeepSpeed/pull/1044
* cleanup
* workaround
* working 5/8 modes
* solve fp32 distributed zero3
* style
* sync
* sync
* rework
* deprecation
* cleanup
* https://github.com/microsoft/DeepSpeed/pull/1044 PR was merged
* clean up
* add a guide
* more prose
* more prose
* fix
* more prose
* sub_group_size was too big
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* refactor
* bug fix
* make the true check explicit
* new deepspeed release
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-06-08 12:32:03 -07:00
Stas Bekman
32290d87f6
[Deepspeed] various fixes ( #12058 )
...
* replace deprecated config
* sub_group_size was too big
* complete deprecation removal
2021-06-08 08:36:15 -07:00
Sylvain Gugger
fd6902838a
Properly indent block_size ( #12070 )
2021-06-08 10:27:02 -04:00
cdleong
49bee0aea4
Add torch to requirements.txt in language-modeling ( #12040 )
...
* Add torch to requirements.txt in language-modeling
* Update examples/pytorch/language-modeling/requirements.txt
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-06-08 09:02:35 -04:00
Mario Šaško
f5eec0d8e9
Replace legacy tensor.Tensor with torch.tensor/torch.empty ( #12027 )
...
* Replace legacy torch.Tensor constructor with torch.{tensor, empty}
* Remove torch.Tensor in examples
2021-06-08 13:58:38 +01:00
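For context, a small sketch of the replacement described above: the legacy `torch.Tensor(...)` constructor is ambiguous (shape vs. data) and returns uninitialized memory, while `torch.empty` and `torch.tensor` make the intent explicit.

```python
import torch

# Legacy constructor: integer arguments are read as a shape, and the returned
# float32 storage is uninitialized.
legacy = torch.Tensor(2, 3)

# Explicit replacements:
uninitialized = torch.empty(2, 3)            # same behavior, clearer intent
from_data = torch.tensor([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])  # builds a tensor from data, infers dtype
```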
Shamane Siri
e33085d648
updated the original RAG implementation to be compatible with the latest PyTorch Lightning ( #11806 )
...
* updated the original RAG implementation to be compatible with the latest PL version
* updated the requirements.txt file
* execute make style
* code quality test
* code quality
* conflict resolved in requirements.txt
* code quality
* changed the MyDDP class name to CustomDDP
2021-06-08 13:42:49 +01:00
NielsRogge
70f88eeccc
Fix tapas issue ( #12063 )
...
* Fix scatter function to be compatible with torch-scatter 2.7.0
* Allow test again
2021-06-08 05:22:31 -04:00
NielsRogge
e56e3140dd
Fix integration tests ( #12066 )
2021-06-08 05:21:38 -04:00
Stas Bekman
4abc6dd690
skip failing test ( #12059 )
2021-06-07 20:48:41 -07:00
Russell Klopfer
e363e1d936
adds metric prefix. ( #12057 )
...
* adds metric prefix.
* update tests to include prefix
2021-06-07 22:34:10 -04:00
Peter Izsak
8994c1e472
Add optional grouped parsers description to HfArgumentParser ( #12042 )
...
* Adding optional argument group to HfArgumentParser
* Minor
* remove whitespace
* Minor styling
2021-06-07 11:47:12 -04:00
Nicolas Patry
2056f26e85
Extend pipelines for automodel tuples ( #12025 )
...
* fix_torch_device_generate_test
* remove @
* finish
* refactor
* add test
* fix test
* Attempt at simplification.
* Small fix.
* Fixing non-existent AutoModel for TF.
* Naming.
* Remove extra condition.
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
2021-06-07 17:41:27 +02:00
François Lagunas
f8bd8c6c7e
Fixes bug that appears when using QA BERT and distillation. ( #12026 )
...
* Fixing bug that appears when using distillation (and potentially other uses).
During the backward pass PyTorch complains with:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
This happens because the QA model code modifies the start_positions and end_positions input tensors using the clamp_ function: as a consequence the teacher and the student both modify the inputs, and the backward pass fails.
* Fixing the clamp_ bug in all QA models.
2021-06-07 11:21:59 -04:00
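A hedged sketch of the kind of fix described in the entry above (not the literal patch): the in-place `clamp_` on the caller's `start_positions`/`end_positions` tensors is replaced with the out-of-place `clamp`, so a teacher and a student sharing the same input tensors no longer corrupt each other's backward pass.

```python
import torch

def qa_loss(start_logits, end_logits, start_positions, end_positions):
    # Positions outside the model's output range are mapped to ignored_index.
    ignored_index = start_logits.size(1)

    # Before: start_positions.clamp_(0, ignored_index) mutated the caller's tensor.
    # After: clamp() returns a new tensor and leaves the shared inputs untouched.
    start_positions = start_positions.clamp(0, ignored_index)
    end_positions = end_positions.clamp(0, ignored_index)

    loss_fct = torch.nn.CrossEntropyLoss(ignore_index=ignored_index)
    start_loss = loss_fct(start_logits, start_positions)
    end_loss = loss_fct(end_logits, end_positions)
    return (start_loss + end_loss) / 2
```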
Patrick von Platen
59f75d538b
[JAX] Bump jax lib ( #12053 )
...
* fix_torch_device_generate_test
* remove @
* bump up jax lib
2021-06-07 13:04:18 +01:00
Suraj Patil
185122ef22
fix docs of past_key_values ( #12049 )
2021-06-07 15:24:03 +05:30
Philip May
3857f2b4e3
fix deberta 2 tokenizer integration test ( #12017 )
2021-06-07 04:55:55 -04:00
Shiva Pundir
20b6f3b80c
Fixed Typo in modeling_bart.py ( #12035 )
...
* Fixed Typo in modeling_bart.py - Issue #11895
* Fixed Typo in modeling_bart.py
2021-06-07 11:44:25 +05:30
Stas Bekman
1f335aef3b
[TrainerArguments] format and sort __repr__, add __str__ ( #12018 )
...
* format and sort __repr__, add __str__
* typo
* use __str__ directly
* alias __repr__ = __str__
2021-06-04 09:39:38 -07:00
Stas Bekman
2c73b93099
[Deepspeed] Assert on mismatches between ds and hf args ( #12021 )
...
* wip
* add mismatch validation + test
* renames
* Update docs/source/main_classes/deepspeed.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* renames
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-06-04 08:58:23 -07:00
Patrick von Platen
242ec31aa5
[Flax] Refactor MLM ( #12013 )
...
* fix_torch_device_generate_test
* remove @
* finish refactor
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
2021-06-03 16:31:32 +01:00
Nicholas Vadivelu
4674061b2a
Fix weight decay masking in run_flax_glue.py ( #11964 )
...
* Fix weight decay masking in `run_flax_glue.py`
Issues with the previous implementation:
- The `dict` from `traverse_util.flatten_dict` has keys which are tuples of strings, not one long string with the path separated by periods.
- `optax.masked` applies the transformation wherever the mask is True, so the masks are flipped.
- Flax's LayerNorm calls the scale parameter `scale`, not `weight`
* Fix formatting with black
* adapt results
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
2021-06-03 11:35:26 +01:00
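A minimal sketch of a decay mask consistent with the three points above: keys come from `flax.traverse_util.flatten_dict` (tuples, not dot-joined strings), `True` marks parameters that should receive weight decay, and Flax's LayerNorm parameter is named `scale`. Function names and hyperparameters are illustrative, not the exact run_flax_glue.py code.

```python
import optax
from flax import traverse_util

def decay_mask_fn(params):
    # flatten_dict keys are tuples of path components, e.g. ("layer_0", "bias").
    flat_params = traverse_util.flatten_dict(params)
    # True = apply weight decay; skip biases and LayerNorm scales.
    flat_mask = {path: path[-1] not in ("bias", "scale") for path in flat_params}
    return traverse_util.unflatten_dict(flat_mask)

# The mask (like optax.masked) applies the decay wherever it is True.
tx = optax.adamw(learning_rate=2e-5, weight_decay=0.01, mask=decay_mask_fn)
```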
Stas Bekman
61c5063491
[deepspeed] add nvme test skip rule ( #11997 )
...
* add nvme skip rule
* fix
2021-06-02 12:06:37 -07:00
Stas Bekman
640318befa
[deepspeed] Move code and doc into standalone files ( #11984 )
...
* move code and docs
* style
* moved
* restore
2021-06-02 09:56:00 -07:00
Kou Yong Kang
d6d747cb28
Update return introduction ( #11976 )
...
Make it clear that the `forward` method now returns a dict instead of a tuple.
Fix style
2021-06-02 12:53:09 -04:00
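A short sketch of the behavior being documented above: with `return_dict=True` (the default in recent versions), `forward` returns a `ModelOutput` that can be read like a dict or by attribute, with index access kept for backward compatibility. The checkpoint name is a placeholder.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))

loss = outputs.loss          # attribute access
logits = outputs["logits"]   # dict-style access
first = outputs[0]           # index access still works (here, the loss)
```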
Stas Bekman
d406a2729a
[docs] fix xref to PreTrainedModel.generate ( #11049 )
...
* fix xref to generate
* do the same for search methods
* style
* style
2021-06-02 09:21:05 -07:00
Gunjan Chhablani
123b597f5d
Fix examples ( #11990 )
2021-06-02 10:12:52 -04:00
Gunjan Chhablani
88ca6a231d
VisualBERT ( #10534 )
...
* Init VisualBERT
* Add cookie-cutter, Config, and Embeddings
* Add preliminary Model
* Add Bert analogous classes
* Add basic code for NLVR, VQA, Flickr
* Update Init
* Fix VisualBert Downstream Models
* Rename classifier to cls
* Comment position_ids buffer
* Remove sentence image predictor output
* Update output dicts
* Remove unnecessary files
* Fix Auto Modeling
* Fix transformers init
* Add conversion script
* Add conversion script
* Fix docs
* Update visualbert modelling
* Update configuration
* Style fixes
* Add model and integration tests
* Add all tests
* Update model mapping
* Add simple detector from original repository
* Update docs and configs
* Fix style
* Fix style
* Update docs
* Fix style
* Fix import issues in style
* Fix style
* Add changes from review
* Fix style
* Fix style
* Update docs
* Fix style
* Fix style
* Update docs/source/model_doc/visual_bert.rst
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update tests/test_modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Add changes from review
* Remove convert run script
* Add changes from review
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/visual_bert/modeling_visual_bert.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Add changes from review
* Add changes from review
* Add visual embedding example in docs
* Fix "copied from" comments
* Add changes from review
* Fix error, style, checkpoints
* Update docs
* Fix integration tests
* Fix style
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-06-02 18:13:08 +05:30
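A hedged usage sketch of the new model; the checkpoint name is assumed to be one of the released uclanlp/visualbert-* weights, and the random tensor stands in for detector features that would normally come from an external object detector.

```python
import torch
from transformers import BertTokenizer, VisualBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")  # assumed checkpoint

inputs = tokenizer("Who is eating the apple?", return_tensors="pt")

# Stand-in for detector features of shape (batch, num_visual_tokens, visual_embedding_dim).
visual_embeds = torch.randn(1, 36, 2048)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

outputs = model(
    **inputs,
    visual_embeds=visual_embeds,
    visual_token_type_ids=visual_token_type_ids,
    visual_attention_mask=visual_attention_mask,
)
last_hidden_state = outputs.last_hidden_state
```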
Patrick von Platen
43f46aa7fd
[RAG] Fix rag from pretrained question encoder generator behavior ( #11962 )
...
* fix_torch_device_generate_test
* remove @
* fix rag from pretrained loading
* add test
* upload
* finish
2021-06-02 09:17:14 +01:00
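For context, a hedged sketch of the composition path this fix touches: assembling a RAG model from a separate question encoder and generator checkpoint. The checkpoint names are illustrative.

```python
from transformers import RagSequenceForGeneration

model = RagSequenceForGeneration.from_pretrained_question_encoder_generator(
    "facebook/dpr-question_encoder-single-nq-base",  # question encoder
    "facebook/bart-large",                           # generator
)
```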
dependabot[bot]
6db3a87de2
Bump urllib3 from 1.25.8 to 1.26.5 in /examples/research_projects/lxmert ( #11983 )
...
Bumps [urllib3](https://github.com/urllib3/urllib3 ) from 1.25.8 to 1.26.5.
- [Release notes](https://github.com/urllib3/urllib3/releases )
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst )
- [Commits](https://github.com/urllib3/urllib3/compare/1.25.8...1.26.5 )
---
updated-dependencies:
- dependency-name: urllib3
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-06-02 03:40:20 -04:00
Stas Bekman
4ba203d9d3
[Trainer] add train loss and flops metrics reports ( #11980 )
...
* add train loss and flops metrics reports
* consistency
* add train_loss to skip keys
* restore on_train_end call timing
2021-06-01 15:58:31 -07:00
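A short sketch of how the new reports surface, assuming `trainer` is an already-configured `transformers.Trainer`; the training loss and FLOPs accounting now show up in the metrics returned by `train()`.

```python
# `trainer` is assumed to be an already-configured transformers.Trainer instance.
train_result = trainer.train()
metrics = train_result.metrics  # now includes the training loss and FLOPs accounting

trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
```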
Stas Bekman
7ec596ecda
[DeepSpeed] decouple DeepSpeedConfigHF from Trainer ( #11966 )
...
* decouple DeepSpeedConfigHF from Trainer
* add LoggingLevel ctx manager; add new test
* cleanup
* add docs
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* implemented suggested renames
* formatter workaround
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-06-01 13:24:52 -07:00
Alberto Villa
1c3ab3e5d6
Typo in usage example, changed to device instead of torch_device ( #11979 )
2021-06-01 14:58:49 -04:00
Patrick von Platen
47a98fc4cb
ByT5 model ( #11971 )
...
* allow tf to use uneven num of layers
* add tokenizer
* finish docs
* finish docs
* Apply suggestions from code review
* include in index
* finish
* Update docs/source/model_doc/byt5.rst
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* apply Sylvain's suggestions
* make style
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
2021-06-01 19:07:37 +01:00
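A hedged usage sketch of the new byte-level model; the checkpoint name is assumed to be one of the released google/byt5-* sizes, and the French label sentence is just example data.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")  # assumed checkpoint
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# ByT5 works directly on UTF-8 bytes, so there is no learned subword vocabulary.
inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
print(inputs["input_ids"])  # roughly one id per byte, plus the end-of-sequence token

labels = tokenizer("La vie est comme une boîte de chocolat.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
```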
Jeoung-Minju
1eb58b4560
typo correction ( #11973 )
...
* typo correction
* type corrections
2021-06-01 12:24:59 -04:00
Stas Bekman
79712e7e7a
[deepspeed] docs ( #11940 )
...
* deepspeed docs
* cleanup
* cleanup
2021-06-01 09:21:21 -07:00
Lysandre
985d708842
Run the integration tests on scheduled tests instead of master tests
2021-06-01 15:58:31 +02:00
Volodymyr Byno
9996558bff
Neptune.ai integration ( #11937 )
...
An option that turns on neptune.ai logging
--report_to 'neptune'
Additional ENV variables:
NEPTUNE_PROJECT
NEPTUNE_API_TOKEN
NEPTUNE_RUN_NAME (optional)
NEPTUNE_STOP_TIMEOUT (optional)
2021-06-01 09:40:52 -04:00
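A minimal sketch of wiring this up from Python rather than the command line, using the environment variables listed above; all values are placeholders.

```python
import os
from transformers import TrainingArguments

# Read by the Neptune integration; values are placeholders.
os.environ["NEPTUNE_PROJECT"] = "my-workspace/my-project"
os.environ["NEPTUNE_API_TOKEN"] = "<token>"
os.environ["NEPTUNE_RUN_NAME"] = "baseline"   # optional
os.environ["NEPTUNE_STOP_TIMEOUT"] = "30"     # optional

args = TrainingArguments(
    output_dir="out",
    report_to=["neptune"],  # equivalent of --report_to 'neptune'
)
```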
Lysandre Debut
ae6ce28f31
Authorize args when instantiating an AutoModel ( #11956 )
2021-06-01 09:27:54 -04:00
Philip May
fcad801825
Add regression tests for slow sentencepiece tokenizers. ( #11737 )
...
* add test_vocab_size for sentencepiece tok.
* add test_get_vocab for sentencepiece tok.
* add test_convert_token_and_id for sentencepiece tok.
* add test_tokenize_and_convert_tokens_to_string for all tok.
* improve test_tokenize_and_convert_tokens_to_string for sp. tok.
* add common tokenizer integration tests
- for albert
- for barthez
* add tokenizer integration tests to bert gen.
* add most tokenizer integration tests
* fix camembert tokenizer integration test
* add tokenizer integration test to marian
* add tokenizer integration test to reformer
* add typing and doc to tokenizer_integration_test_util
* fix tokenizer integration test of reformer
* improve test_sentencepiece_tokenize_and_convert_tokens_to_string
* empty commit to trigger CI
* fix tokenizer integration test of reformer
* remove code not needed anymore
* empty commit to trigger CI
* empty commit to trigger CI
2021-06-01 09:24:39 -04:00
Josh Tanner
c3d958b2c0
reinitialize wandb config for each hyperparameter search run ( #11945 )
2021-06-01 09:18:33 -04:00
Riccardo Bassani
99dbbdb91e
bugfixes training_args.py ( #11922 )
...
modified according to:
https://pytorch.org/xla/release/1.8.1/_modules/torch_xla/core/xla_model.html
2021-06-01 09:04:51 -04:00
Fan Zhang
7e73601f32
modify qa-trainer ( #11872 )
...
* modify qa-trainer
* fix flax model
2021-06-01 08:28:41 -04:00
Shamane Siri
9ec0f01b6c
RAG-2nd2end-revamp ( #11893 )
...
* initial
* code quality test
* code quality
* added test functions in test_modeling_rag.py and test_retrieval_rag.py to test the end2end retriever
* minor change in test_modeling_rag
* fixed tests
* Update examples/research_projects/rag-end2end-retriever/README.md
typo corrected as suggested by lhoestq
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Update examples/research_projects/rag-end2end-retriever/finetune_rag.py
type change suggested by lhoestq
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Update src/transformers/models/rag/retrieval_rag.py
Adding this change as mentioned by lhoestq.
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* completed the minor changes suggested by the reviewers
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
2021-06-01 07:32:26 +01:00
Suraj Patil
ad25fd62bd
Add FlaxCLIP ( #11883 )
...
* add flax CLIP
* default input_shape
* add tests
* fix test
* fix name
* fix docs
* fix shapes
* attend at least 1 token
* flax conv to torch conv
* return floats
* fix equivalence tests
* fix import
* return attention_weights and update tests
* fix docstrings
* address Patrick's comments
* input_shape arg
* add tests for get_image_features and get_text_features methods
* fix tests
2021-06-01 09:44:31 +05:30
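A hedged sketch of the feature-extraction methods mentioned in the bullets above (checkpoint name assumed; the random image stands in for a real picture):

```python
import numpy as np
from PIL import Image
from transformers import CLIPProcessor, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # assumed checkpoint
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8))

text_inputs = processor(text=["a photo of a cat"], return_tensors="np", padding=True)
image_inputs = processor(images=image, return_tensors="np")

text_features = model.get_text_features(**text_inputs)
image_features = model.get_image_features(**image_inputs)
```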
Philip May
cfca638acb
Add MT5ForConditionalGeneration as supported arch. to summarization README ( #11961 )
...
* Add MT5ForConditionalGeneration as supported arch.
* Update README.md
2021-05-31 21:24:33 +05:30
Nicholas Vadivelu
1ab147d648
Remove redundant nn.log_softmax in run_flax_glue.py ( #11920 )
...
* Remove redundant `nn.log_softmax` in `run_flax_glue.py`
`optax.softmax_cross_entropy` expects unnormalized logits and already applies `nn.log_softmax` internally, so it is not needed here. `nn.log_softmax` is idempotent, so mathematically it shouldn't have made a difference.
* Remove unused 'flax.linen' import
2021-05-31 15:29:04 +01:00
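A small sketch of why the extra call was redundant: `optax.softmax_cross_entropy` takes raw logits and normalizes internally, so the loss can be computed straight from the model output. Names and shapes are illustrative.

```python
import jax
import optax

def loss_fn(logits, labels, num_classes):
    # logits: unnormalized scores straight from the model, shape (batch, num_classes).
    # optax.softmax_cross_entropy applies log-softmax internally, so an extra
    # nn.log_softmax call is redundant (and, being idempotent, would not change the value).
    one_hot_labels = jax.nn.one_hot(labels, num_classes)
    return optax.softmax_cross_entropy(logits, one_hot_labels).mean()
```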
Philip May
fb60c309c6
fix assert ( #11935 )
2021-05-31 04:02:10 -04:00