Commit Graph

8919 Commits

Yih-Dar
6a5472a8e1
Force use_cache to be False in PyTorch (#15385)
* use_cache = False for PT models if labels is passed

* Fix for BigBirdPegasusForConditionalGeneration

* add warning if users specify use_cache=True

* Use logger.warning instead of warnings.warn

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-08 16:20:53 +01:00
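A sketch of the behavior the commit above describes, assuming an illustrative helper `resolve_use_cache` (in the library the logic lives inline in each model's forward):

```python
import logging

logger = logging.getLogger(__name__)

def resolve_use_cache(use_cache, labels):
    # Caching past key/values only helps autoregressive decoding; when
    # `labels` are passed we are in a training forward pass, so the cache
    # is wasted memory and is forced off, with a warning if it was requested.
    if labels is not None:
        if use_cache:
            logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.")
        use_cache = False
    return use_cache
```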
Suraj Patil
0acd84f7cb
[GPTJ] fix docs (#15558) 2022-02-08 15:54:19 +01:00
aaron
87d08afb16
electra is added to onnx supported models (#15084)
* electra is added to onnx supported models

* add google/electra-base-generator for test onnx module

Co-authored-by: Lewis Tunstall <lewis.c.tunstall@gmail.com>
2022-02-08 15:47:49 +01:00
Michael Benayoun
0fe17f375a
FX tracing improvement (#14321)
* Change the way tracing happens, enabling dynamic axes out of the box

* Update the tests and modeling xlnet

* Add the non-recording of leaf modules to avoid recording more values for the methods to record than will be seen at tracing time (which would otherwise desynchronize the recorded values from the values that need to be given to the proxies during tracing, causing errors)

* Comments and making tracing work for gpt-j and xlnet

* Refactor things related to num_choices (and batch_size, sequence_length)

* Update fx to work on PyTorch 1.10

* Postpone autowrap_function feature usage for later

* Add copyrights

* Remove unnecessary file

* Fix issue with add_new_model_like

* Apply suggestions
2022-02-07 22:25:33 +01:00
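A hedged usage sketch for the tracing utility this PR reworks; the exact `symbolic_trace` signature has varied across transformers versions, so treat this as illustrative:

```python
from transformers import BertModel
from transformers.utils.fx import symbolic_trace

model = BertModel.from_pretrained("bert-base-uncased")
# With dynamic axes enabled out of the box, no fixed batch_size or
# sequence_length has to be baked into the traced graph.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
```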
Steven Liu
552f8d3091
Create a custom model guide (#15489)
* 📝 add config section

* 📝 finish first draft

* 📝 add feature extractor and processor

* 🖍 apply feedback from review

* 📝 minor edits

* last review
2022-02-07 12:34:56 -06:00
Yih-Dar
ad1d3c4d4b
Make TF Wav2Vec2 outputs the same as PT's version (#15530)
* fix outputs

* fix for CTC

* fix doc

* make style

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-07 18:09:57 +01:00
Yih-Dar
131e258411
Fix TF T5/LED missing cross attn in return values (#15511)
* add cross attn to outputs

* add cross attn to outputs for TFLED

* add undo padding

* remove unused import

* fix style

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-07 17:41:48 +01:00
lewtun
6775b211b6
Remove Longformers from ONNX-supported models (#15273) 2022-02-07 17:32:13 +01:00
François REMY
7a1412e12b
Wav2Vec2 models must either throw or deal with add_adapter (#15409)
* Wav2Vec2 models must either throw or deal with add_adapter

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add pre-add_adapter backwards compatibility

* Add pre-add_adapter backwards compatibility

* Fix issue in tests/test_modeling_wav2vec2.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-02-07 17:03:12 +01:00
Anton Lozhkov
a459f7f97d
Add ASR CTC streaming example (#15309)
* Single-epoch run

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Infinite dataset

* Trainer fix + distributed benchmark

* Benchmark fix

* unused import

* interleaved splits

* interleaved splits

* has_length util

* Move to research projects

* Leftover Sized checks

* Bump min version

* Unused import

* Revert trainer changes

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-02-07 18:35:37 +03:00
Anton Lozhkov
75b13f82e9
[Trainer] Deeper length checks for IterableDatasetShard (#15539)
* Unused import

* Make `has_length()` torch-independent to use in callbacks

* Update src/transformers/trainer_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-02-07 18:34:56 +03:00
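A torch-independent length check along the lines this PR describes might look like the following sketch:

```python
def has_length(dataset):
    """Checks if the dataset implements __len__() without raising an error."""
    try:
        return len(dataset) is not None
    except TypeError:
        # e.g. "TypeError: len() of unsized object" for iterable datasets
        return False
```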
NielsRogge
84eec9e6ba
Add ConvNeXT (#15277)
* First draft

* Add conversion script

* Improve conversion script

* Improve docs and implement tests

* Define model output class

* Fix tests

* Fix more tests

* Add model to README

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Apply more suggestions from code review

* Apply suggestions from code review

* Rename dims to hidden_sizes

* Fix equivalence test

* Rename gamma to gamma_parameter

* Clean up conversion script

* Add ConvNextFeatureExtractor

* Add corresponding tests

* Implement feature extractor correctly

* Make implementation cleaner

* Add ConvNextStem class

* Improve design

* Update design to also include encoder

* Fix gamma parameter

* Use sample docstrings

* Finish conversion, add center cropping

* Replace nielsr by facebook, make feature extractor tests smaller

* Fix integration test

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-02-07 16:11:37 +01:00
Patrick von Platen
c47d259241
[torch_int_div] Correct true division in generation (#15498)
* [torch_int_div] Correct true division in generation

* up

* up
2022-02-07 16:04:18 +01:00
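For context, the distinction the commit above corrects: on integer tensors, `/` is true division and produces floats, while generation code needs integer results. An illustration:

```python
import torch

a, b = torch.tensor([7]), torch.tensor([2])
print(a / b)                                    # tensor([3.5000]) -- true division yields floats
print(torch.div(a, b, rounding_mode="floor"))   # tensor([3])      -- explicit integer division
```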
Patrick von Platen
5f1918a4a8
[ASR pipeline] correct asr pipeline for seq2seq models (#15541) 2022-02-07 15:35:44 +01:00
Patrick von Platen
e02bdce791
Revert "Handle PyTorch to Flax conversion of 1D convolutions (#15519)" (#15540)
This reverts commit 854a0d526c.
2022-02-07 12:33:49 +01:00
Stas Bekman
8ce1330631
[deepspeed docs] DeepSpeed ZeRO Inference (#15486)
* [deepspeed docs] DeepSpeed ZeRO Inference

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* tweak

* deal with black

* extra cleanup, better comments

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-02-04 13:51:02 -08:00
Sylvain Gugger
ac6aa10f23
Standardize semantic segmentation model outputs (#15469)
* Standardize instance segmentation model outputs

* Rename output

* Update src/transformers/modeling_outputs.py

Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>

* Add legacy argument to the config and model forward

* Update src/transformers/models/beit/modeling_beit.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Copy fix in Segformer

Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2022-02-04 14:52:07 -05:00
Stas Bekman
31be2f45a9
[deepspeed docs] Megatron-Deepspeed info (#15488) 2022-02-04 11:15:13 -08:00
Yih-Dar
bbe9c6981b
Fix TFRemBertEncoder all_hidden_states (#15510)
* fix

* fix test

* remove expected_num_hidden_layers

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-04 16:32:14 +00:00
Sanchit Gandhi
854a0d526c
Handle PyTorch to Flax conversion of 1D convolutions (#15519) 2022-02-04 17:08:03 +01:00
Yih-Dar
486260c68e
use kwargs (#15509)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-04 15:25:37 +00:00
Yih-Dar
525dbbf84a
Remove loss from some flax models docs & examples (#15492)
* Remove return_loss from Flax models

* fix more

* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-03 21:39:46 +01:00
Stas Bekman
21dcaec5d5
[deepspeed docs] memory requirements (#15506) 2022-02-03 10:55:14 -08:00
davidleonfdez
f1a4c4ead5
[WIP] Add preprocess_logits_for_metrics Trainer param (#15473)
* Add preprocess_logits_for_metrics Trainer param

* Compute accuracy in LM examples

* Improve comments
2022-02-03 12:07:20 -05:00
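A minimal sketch of how the new parameter can be used to compute accuracy in the LM examples without accumulating full vocabulary-sized logits (the metric shown is illustrative and ignores the padding labels a real script would mask):

```python
import numpy as np

def preprocess_logits_for_metrics(logits, labels):
    # Reduce the (batch, seq_len, vocab_size) logits to predicted token ids
    # before they are gathered for metric computation.
    return logits.argmax(dim=-1)

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Shift so each prediction is compared to the next-token label.
    return {"accuracy": float(np.mean(preds[:, :-1] == labels[:, 1:]))}

# trainer = Trainer(
#     ...,
#     compute_metrics=compute_metrics,
#     preprocess_logits_for_metrics=preprocess_logits_for_metrics,
# )
```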
Stas Bekman
4f5faaf044
[deepspeed] fix a bug in a test (#15493)
* [deepspeed] fix a bug in a test

* consistency
2022-02-03 08:55:45 -08:00
NielsRogge
90166121ee
Add general vision docstrings (#15501)
* Add general docstrings

* Remove legacy docstrings

* Add BEiT

* Add DEiT

* Add SegFormer

* Fix beit output class

* Fix missing return_dict
2022-02-03 17:47:22 +01:00
Patrick von Platen
e2b6e73fa2
[Flax tests] Disable scheduled GPU tests (#15503) 2022-02-03 17:12:14 +01:00
Yih-Dar
f5d98da29e
fix load_weight_prefix (#15101)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-03 15:11:53 +00:00
Yih-Dar
71dccd0774
fix (#15494)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-03 12:57:28 +01:00
CHI LIU
5ec368d79e
Correct eos_token_id settings in generate (#15403)
* Correct eos_token_id setting in generate

* Set eos_token_id in test

* Correct eos_token_id setting in generate

* Set eos_token_id in test
2022-02-03 00:24:40 +01:00
SaulLu
39b5d1a63a
fix setting truncation attribute in __init__ of PreTrainedTokenizerBase (#15456)
* change truncation_side in init of `PreTrainedTokenizerBase`

Co-authored-by: LSinev <LSinev@users.noreply.github.com>

* add test

* Revert "replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__`"

This reverts commit 7a98b87962.

* fix kwargs

* Revert "fix kwargs"

This reverts commit 67b0a5270e8cf1dbf70e6b0232e94c0452b6946f.

* Update tests/test_tokenization_common.py

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>

* delete truncation_side variable

* reorganize test

* format

* complete doc

* Revert "Revert "replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__`""

This reverts commit d5a10a7e2680539e5d9e98ae5d896c893d224b80.

* fix typo

* fix typos to render documentation

* Revert "Revert "Revert "replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__`"""

This reverts commit 16cf58811943a08f43409a7c83eaa330686591d0.

* format

Co-authored-by: LSinev <LSinev@users.noreply.github.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2022-02-02 23:18:09 +01:00
Sylvain Gugger
45cac3fade
Fix labels stored in model config for token classification examples (#15482)
* Playing

* Properly set labels in model config for token classification example

* Port to run_ner_no_trainer

* Quality
2022-02-02 14:23:43 -05:00
Ayush Chaurasia
c74f3d4c48
Add W&B backend for hyperparameter sweep (#14582)
# Add support for W&B hyperparameter sweep
This PR:
* allows using wandb for running hyperparameter search.
* The runs are visualized on the W&B sweeps dashboard.
* This supports running sweeps on parallel devices, all reporting to the same central dashboard.

### Usage
**To run a new hyperparameter search:**
```python
trainer.hyperparameter_search(
    backend="wandb", 
    project="transformers_sweep", # name of the project
    n_trials=5,
    metric="eval/loss", # metric to be optimized, default 'eval/loss'. A warning is raised if the passed metric is not found
)
```
This outputs a sweep id, e.g. `my_project/sweep_id`.

**To run sweeps on parallel devices:**
Just pass the sweep id you want to run in parallel:
```python
trainer.hyperparameter_search(
    backend="wandb", 
    sweep_id = "my_project/sweep_id"
)
```
2022-02-02 14:06:14 -05:00
Sylvain Gugger
13297ac71c
Fix docstring of ASR pipeline (#15481) 2022-02-02 12:12:22 -05:00
bugface
dd360d58d9
fix error posted in issue #15448 (#15480)
* fix error posted in issue #15448

Signed-off-by: bugface <alexgre@ufl.edu>

* clean up - remove commented line

Signed-off-by: bugface <alexgre@ufl.edu>
2022-02-02 10:45:51 -05:00
Sylvain Gugger
44b21f117b
Save code of registered custom models (#15379)
* Allow dynamic modules to use relative imports

* Work for configs

* Fix last merge conflict

* Save code of registered custom objects

* Map strings to strings

* Fix test

* Add tokenizer

* Rework tests

* Tests

* Ignore fixtures py files for tests

* Tokenizer test + fix collection

* With full path

* Rework integration

* Fix typo

* Remove changes in conftest

* Test for tokenizers

* Add documentation

* Update docs/source/custom_models.mdx

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Add file structure and file content

* Add more doc

* Style

* Update docs/source/custom_models.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Address review comments

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-02-02 10:44:37 -05:00
Nicolas Patry
623d8cb475
Adding support for microphone streaming within pipeline. (#15046)
* Adding support for `microphone` streaming within pipeline.

- Uses `ffmpeg` to get microphone data.
- Makes sure alignment is made to `size_of_sample`.
- Works by sending `{"raw": ..data.., "stride": (n, left, right), "partial": bool}` directly to the pipeline, enabling partial results to be streamed while still getting inference.
- Lets `partial` information flow through the pipeline so the caller can get it back and choose whether to display the text or not.

- The striding reconstitution is bound to have errors since CTC does not keep previous state. Currently most of the errors come from not knowing whether there's a space between two chunks. Since we have some left-striding info, we could use it during decoding to decide what to do with those spaces, and maybe even with extra letters (if the stride is long enough, it's bound to cover at least a few symbols).

Fixing tests.

Protecting with `require_torch`.

`raw_ctc` support for nicer demo.

Post rebase fixes.

Revamp to split raw_mic_data from its live chunking.

- Requires a refactor to make everything a bit cleaner.

Automatic resampling.

Small fix.

Small fix.

* Post rebase fix (need to let super handle more logic, reorder args.)

* Update docstrings

* Docstring format.

* Remove print.

* Prevent flow of `input_values`.

* Fixing `stride` too.

* Fixing the PR by removing `raw_ctc`.

* Better docstrings.

* Fixing init.

* Update src/transformers/pipelines/audio_utils.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update tests/test_pipelines_automatic_speech_recognition.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Quality.

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-02-02 15:12:12 +01:00
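A sketch of the chunk contract described above, as it might be fed to the ASR pipeline; only the `{"raw", "stride", "partial"}` shape is taken from the PR text, the numbers are made up:

```python
import numpy as np

chunk = {
    "raw": np.zeros(16000, dtype=np.float32),  # one second of 16 kHz mono audio
    "stride": (16000, 4000, 4000),             # (chunk length, left stride, right stride) in samples
    "partial": True,                           # intermediate result that may still be revised
}
```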
Patrick von Platen
d718c0c3a8
[Wav2Vec2ProcessorWithLM] add alpha & beta to batch decode & decode (#15465) 2022-02-02 12:59:40 +01:00
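A hedged usage sketch for the new decode arguments, where `alpha` is the language-model weight and `beta` the word-insertion score (pyctcdecode's conventions); the checkpoint name is only an example:

```python
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
# `logits` is assumed to be the acoustic model's output logits as a numpy array.
transcription = processor.batch_decode(logits, alpha=0.5, beta=1.5).text
```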
NielsRogge
1d94d57546
Add option to resize like torchvision's Resize (#15419)
* Add torchvision's resize

* Rename torch_resize to default_to_square

* Apply suggestions from code review

* Add support for default_to_square and tuple of length 1
2022-02-02 09:44:22 +01:00
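Roughly, torchvision's `Resize` semantics with an integer size: scale the shorter edge to `size` and preserve the aspect ratio, unless squares are forced. A sketch under those assumptions (the helper name is illustrative):

```python
def get_resize_output_size(height, width, size, default_to_square=True):
    # A tuple of length 1 behaves like a plain integer, per the PR.
    if isinstance(size, (list, tuple)) and len(size) == 1:
        size = size[0]
    if not isinstance(size, int):
        return tuple(size)  # an explicit (height, width) pair is used as-is
    if default_to_square:
        return (size, size)
    short, long = (height, width) if height <= width else (width, height)
    new_short, new_long = size, int(size * long / short)
    return (new_short, new_long) if height <= width else (new_long, new_short)
```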
Steven Liu
b9418a1d97
Update tutorial docs (#15165)
* first draft of pipeline, autoclass, preprocess tutorials

* apply review feedback

* 🖍 apply feedback from patrick/niels

* 📝add output image to preprocessed image

* 🖍 apply feedback from patrick
2022-02-01 18:31:35 -06:00
Steven Liu
c157c7e3fd
Update fine-tune docs (#15259)
* add fine-tune tutorial

* make edits, fix style

* 📝 make edits

* 🖍 fix code format links to external libraries

* 🔄 revert code formatting

* 🖍 use DefaultDataCollator instead of DataCollatorWithPadding
2022-02-01 18:28:12 -06:00
Sylvain Gugger
d0b5ed110a
Harder check for IndexErrors in QA scripts (#15438)
* Harder check for IndexErrors in QA scripts

* Make test stronger
2022-02-01 15:49:13 -05:00
Sylvain Gugger
8e5d4e4906
Trainer.push_to_hub always tries to push to the Hub (#15463) 2022-02-01 15:49:04 -05:00
Suraj Patil
37800f1365
[BartTokenizer] remove inheritance on RobertaTokenizer (#15461)
* refactor bart tokenizers

* doc

* replace assert with ValueError
2022-02-01 20:59:24 +01:00
Yih-Dar
f427e75049
use mean instead of elementwise_mean in XLMPredLayer (#15436)
* use mean instead of elementwise_mean

* make style

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 19:08:17 +01:00
SaulLu
7b8bdd8601
fix the tokenizer_config.json file for the slow tokenizer when a fast version is available (#15319)
* add new test

* update test

* remove `tokenizer_file` from `additional_files_names` in `tokenization_utils_base.py`

* add `tokenizer_file` for the fast only tokenizer

* change layoutxlm global variables

* remove `"tokenizer_file"` from DPR tokenizer's Global variables

* remove `tokenizer_file` from herbert slow tokenizer init

* `"tokenizer_file"` from LED tokenizer's Global variables

* remove `tokenizer_file` from mbart slow tokenizer init

* remove `tokenizer_file` from slow tokenizer template

* adapt to versioning

* adapt the `test_tokenizer_mismatch_warning` test

* clean test

* clarify `VOCAB_FILES_NAMES` in tokenization_utils_fast.py

* Revert "remove `tokenizer_file` from mbart slow tokenizer init"

This reverts commit 0dbb723fa9.

* Revert "`"tokenizer_file"` from LED tokenizer's Global variables"

This reverts commit 5a3f879bdd.

* Revert "remove `tokenizer_file` from herbert slow tokenizer init"

This reverts commit f5e10007b7.

* Revert "remove `"tokenizer_file"` from DPR tokenizer's Global variables"

This reverts commit da0895330b.

* set `tokenizer_file` in super `__init__` of mbart
2022-02-01 16:48:25 +01:00
SaulLu
6d585fe0f0
replace assert with exception for padding_side arg in PreTrainedTokenizerBase __init__ (#15454)
* replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__`

* add test

* fix kwargs

* reformat test

* format

* format

* fix typo to render the documentation
2022-02-01 16:13:58 +01:00
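In sketch form, the pattern this PR applies (the exact message text in the library may differ):

```python
def validate_padding_side(padding_side):
    # Before: assert padding_side in ["right", "left"]
    # After: a catchable, descriptive exception.
    if padding_side not in ["right", "left"]:
        raise ValueError(
            f"Padding side should be selected between 'right' and 'left', current value: {padding_side}"
        )
```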
Kamal Raj
d2749cf72e
Update README.md (#15462)
fix typo
2022-02-01 10:04:30 -05:00
Suraj Patil
1c9648c457
[M2M100, XGLM] fix positional emb resize (#15444) 2022-02-01 14:32:55 +01:00
Yih-Dar
2ca6268394
fix from_vision_text_pretrained doc example (#15453)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 12:20:22 +01:00