Commit Graph

8888 Commits

Author SHA1 Message Date
Sylvain Gugger
45cac3fade
Fix labels stored in model config for token classification examples (#15482)
* Playing

* Properly set labels in model config for token classification example

* Port to run_ner_no_trainer

* Quality
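For illustration, a hedged sketch of what storing the real label names in the model config looks like (checkpoint and `label_list` below are hypothetical, not taken from the diff):

```
# Sketch: put real label names into the config instead of LABEL_0-style defaults.
from transformers import AutoConfig

label_list = ["O", "B-PER", "I-PER"]  # hypothetical NER label set
config = AutoConfig.from_pretrained(
    "bert-base-cased",
    num_labels=len(label_list),
    id2label={i: label for i, label in enumerate(label_list)},
    label2id={label: i for i, label in enumerate(label_list)},
)
```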
2022-02-02 14:23:43 -05:00
Ayush Chaurasia
c74f3d4c48
Add W&B backend for hyperparameter sweep (#14582)
# Add support for W&B hyperparameter sweep
This PR:
* allows using wandb for running hyperparameter searches.
* The runs are visualized on the W&B sweeps dashboard.
* This supports running sweeps on parallel devices, all reporting to the same central dashboard.

### Usage
**To run a new hyperparameter search:**
```
trainer.hyperparameter_search(
    backend="wandb", 
    project="transformers_sweep", # name of the project
    n_trials=5,
    metric="eval/loss", # metric to be optimized, default 'eval/loss'. A warning is raised if the passed metric is not found
)
```
This outputs a sweep id, e.g. `my_project/sweep_id`.

**To run sweeps on parallel devices:**
Just pass the sweep id that you want to run in parallel:
```
trainer.hyperparameter_search(
    backend="wandb", 
    sweep_id = "my_project/sweep_id"
)
```
2022-02-02 14:06:14 -05:00
Sylvain Gugger
13297ac71c
Fix docstring of ASR pipeline (#15481) 2022-02-02 12:12:22 -05:00
bugface
dd360d58d9
fix error posted in issue #15448 (#15480)
* fix error posted in issue #15448

Signed-off-by: bugface <alexgre@ufl.edu>

* clean up - remove commented line

Signed-off-by: bugface <alexgre@ufl.edu>
2022-02-02 10:45:51 -05:00
Sylvain Gugger
44b21f117b
Save code of registered custom models (#15379)
* Allow dynamic modules to use relative imports

* Work for configs

* Fix last merge conflict

* Save code of registered custom objects

* Map strings to strings

* Fix test

* Add tokenizer

* Rework tests

* Tests

* Ignore fixtures py files for tests

* Tokenizer test + fix collection

* With full path

* Rework integration

* Fix typo

* Remove changes in conftest

* Test for tokenizers

* Add documentation

* Update docs/source/custom_models.mdx

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Add file structure and file content

* Add more doc

* Style

* Update docs/source/custom_models.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Address review comments

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
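For context, a hedged sketch of the workflow this PR affects (class names are illustrative; run as a script so the defining file exists on disk):

```
# Sketch: once a custom class is registered for an auto class, save_pretrained
# also copies the .py file defining it next to the saved weights, so the
# checkpoint can later be reloaded with trust_remote_code=True.
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class MyConfig(PretrainedConfig):
    model_type = "my-model"

class MyModel(PreTrainedModel):
    config_class = MyConfig

    def __init__(self, config):
        super().__init__(config)
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

MyConfig.register_for_auto_class()
MyModel.register_for_auto_class("AutoModel")

MyModel(MyConfig()).save_pretrained("my-model")  # also saves the modeling code
```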
2022-02-02 10:44:37 -05:00
Nicolas Patry
623d8cb475
Adding support for microphone streaming within pipeline. (#15046)
* Adding support for `microphone` streaming within pipeline.

- Uses `ffmpeg` to get microphone data.
- Makes sure alignment is made to `size_of_sample`.
- Works by sending `{"raw": ..data.., "stride": (n, left, right),
"partial": bool}`
directly to the pipeline enabling to stream partial results and still
get inference.
- Lets `partial` information flow through the pipeline to enable the caller
  to get it back and choose whether to display the text or not.

- The striding reconstitution is bound to have errors since CTC does not
keep previous state. Currently most of the errors stem from not knowing
whether there's a space between two chunks.
Since we have some left-striding info, we could use it during decoding
to decide what to do with those spaces and maybe even extra letters (if
the stride is long enough, it's bound to cover at least a few symbols).
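A hedged usage sketch (assuming the `ffmpeg_microphone_live` helper from this PR's `audio_utils` and a system `ffmpeg` install; chunk lengths are illustrative):

```
# Sketch: stream microphone audio into an ASR pipeline and print results,
# partial ones included, as they arrive.
from transformers import pipeline
from transformers.pipelines.audio_utils import ffmpeg_microphone_live

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# Yields dicts like {"raw": ..., "stride": (n, left, right), "partial": bool}.
mic = ffmpeg_microphone_live(
    sampling_rate=asr.feature_extractor.sampling_rate,
    chunk_length_s=5.0,
    stream_chunk_s=1.0,
)
for output in asr(mic):
    print(output["text"])
```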

Fixing tests.

Protecting with `require_torch`.

`raw_ctc` support for nicer demo.

Post rebase fixes.

Revamp to split raw_mic_data from its live chunking.

- Requires a refactor to make everything a bit cleaner.

Automatic resampling.

Small fix.

Small fix.

* Post rebase fix (need to let super handle more logic, reorder args.)

* Update docstrings

* Docstring format.

* Remove print.

* Prevent flow of `input_values`.

* Fixing `stride` too.

* Fixing the PR by removing `raw_ctc`.

* Better docstrings.

* Fixing init.

* Update src/transformers/pipelines/audio_utils.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Update tests/test_pipelines_automatic_speech_recognition.py

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Quality.

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
2022-02-02 15:12:12 +01:00
Patrick von Platen
d718c0c3a8
[Wav2Vec2ProcessorWithLM] add alpha & beta to batch decode & decode (#15465) 2022-02-02 12:59:40 +01:00
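A hedged sketch of the new arguments on the entry above (requires `pyctcdecode`; the checkpoint is real but the dummy logits below just stand in for a `Wav2Vec2ForCTC` forward pass):

```
# Sketch: forward pyctcdecode's LM weight (alpha) and word-insertion bonus (beta).
import numpy as np
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained(
    "patrickvonplaten/wav2vec2-base-100h-with-lm"
)
vocab_size = len(processor.tokenizer)
# Dummy log-probabilities of shape (batch, time, vocab).
logits = np.log(np.random.dirichlet(np.ones(vocab_size), size=(1, 100)))
transcription = processor.batch_decode(logits, alpha=0.5, beta=1.5).text
```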
NielsRogge
1d94d57546
Add option to resize like torchvision's Resize (#15419)
* Add torchvision's resize

* Rename torch_resize to default_to_square

* Apply suggestions from code review

* Add support for default_to_square and tuple of length 1
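As a hedged sketch of the resulting behavior (the direct `ImageFeatureExtractionMixin` use below is illustrative):

```
# Sketch: with default_to_square=False, an int size resizes the shorter edge
# while keeping the aspect ratio, matching torchvision's Resize; the default
# (True) still returns a square image.
from PIL import Image
from transformers.image_utils import ImageFeatureExtractionMixin

image = Image.new("RGB", (640, 480))
mixin = ImageFeatureExtractionMixin()
resized = mixin.resize(image, size=256, default_to_square=False)
print(resized.size)  # (341, 256): shorter edge matched to 256
```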
2022-02-02 09:44:22 +01:00
Steven Liu
b9418a1d97
Update tutorial docs (#15165)
* first draft of pipeline, autoclass, preprocess tutorials

* apply review feedback

* 🖍 apply feedback from patrick/niels

* 📝add output image to preprocessed image

* 🖍 apply feedback from patrick
2022-02-01 18:31:35 -06:00
Steven Liu
c157c7e3fd
Update fine-tune docs (#15259)
* add fine-tune tutorial

* make edits, fix style

* 📝 make edits

* 🖍 fix code format links to external libraries

* 🔄revert code formatting

* 🖍 use DefaultDataCollator instead of DataCollatorWithPadding
2022-02-01 18:28:12 -06:00
Sylvain Gugger
d0b5ed110a
Harder check for IndexErrors in QA scripts (#15438)
* Harder check for IndexErrors in QA scripts

* Make test stronger
2022-02-01 15:49:13 -05:00
Sylvain Gugger
8e5d4e4906
Trainer.push_to_hub always tries to push to the Hub (#15463) 2022-02-01 15:49:04 -05:00
Suraj Patil
37800f1365
[BartTokenizer] remove inheritance on RobertaTokenizer (#15461)
* refactor bart tokenizers

* doc

* replace assert with ValueError
2022-02-01 20:59:24 +01:00
Yih-Dar
f427e75049
use mean instead of elementwise_mean in XLMPredLayer (#15436)
* use mean instead of elementwise_mean

* make style

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 19:08:17 +01:00
SaulLu
7b8bdd8601
fix the tokenizer_config.json file for the slow tokenizer when a fast version is available (#15319)
* add new test

* update test

* remove `tokenizer_file` from `additional_files_names` in `tokenization_utils_base.py`

* add `tokenizer_file` for the fast only tokenizer

* change global variables layoutxlm

* remove `"tokenizer_file"` from DPR tokenizer's Global variables

* remove `tokenizer_file` from herbert slow tokenizer init

* `"tokenizer_file"` from LED tokenizer's Global variables

* remove `tokenizer_file` from mbart slow tokenizer init

* remove `tokenizer_file` from slow tokenizer template

* adapt to versioning

* adapt the `test_tokenizer_mismatch_warning` test

* clean test

* clarify `VOCAB_FILES_NAMES` in tokenization_utils_fast.py

* Revert "remove `tokenizer_file` from mbart slow tokenizer init"

This reverts commit 0dbb723fa9.

* Revert "`"tokenizer_file"` from LED tokenizer's Global variables"

This reverts commit 5a3f879bdd.

* Revert "remove `tokenizer_file` from herbert slow tokenizer init"

This reverts commit f5e10007b7.

* Revert "remove `"tokenizer_file"` from DPR tokenizer's Global variables"

This reverts commit da0895330b.

* set `tokenizer_file` in super `__init__` of mbart
2022-02-01 16:48:25 +01:00
SaulLu
6d585fe0f0
replace assert with exception for padding_side arg in PreTrainedTokenizerBase __init__ (#15454)
* replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__`

* add test

* fix kwargs

* reformat test

* format

* format

* fix typo to render the documentation
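A hedged example of the new behavior (checkpoint and exact message are illustrative):

```
# Sketch: an invalid padding_side now raises ValueError instead of AssertionError.
from transformers import AutoTokenizer

try:
    AutoTokenizer.from_pretrained("bert-base-cased", padding_side="bottom")
except ValueError as err:
    print(err)  # padding_side must be either 'right' or 'left'
```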
2022-02-01 16:13:58 +01:00
Kamal Raj
d2749cf72e
Update README.md (#15462)
fix typo
2022-02-01 10:04:30 -05:00
Suraj Patil
1c9648c457
[M2M100, XGLM] fix positional emb resize (#15444) 2022-02-01 14:32:55 +01:00
Yih-Dar
2ca6268394
fix from_vision_text_pretrained doc example (#15453)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 12:20:22 +01:00
Yih-Dar
dc05dd539f
Fix TF Causal LM models' returned logits (#15256)
* Fix TF Causal LM models' returned logits

* Fix expected shape in the tests

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 11:04:07 +00:00
Yih-Dar
af5c3329d7
remove "inputs" in tf common test script (no longer required) (#15262)
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-02-01 10:09:49 +00:00
Stas Bekman
d12ae81664
[generate] fix synced_gpus default (#15446) 2022-01-31 13:58:27 -08:00
Suraj Patil
d4f201b860
skip test for XGLM (#15445) 2022-01-31 16:53:16 -05:00
Sylvain Gugger
0c17e766cb
Error when group_by_length is used with an IterableDataset (#15437) 2022-01-31 15:33:16 -05:00
peregilk
125a2882b4
Update modeling_wav2vec2.py (#15423)
* Update modeling_wav2vec2.py

With very tiny sound files (less than 0.1 seconds), the num_masked_span can be too long. The issue is described in issue #15366 and was discussed with @patrickvonplaten.

* correct errors with mask time indices

* remove bogus file

* make fix-copies

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-01-31 21:22:11 +01:00
Tavin Turner
d984b10335
Add 'with torch.no_grad()' to BEiT integration test forward passes (#14961)
* Add 'with torch.no_grad()' to BEiT integration test forward pass

* Fix inconsistent use of tabs and spaces in indentation
2022-01-31 15:12:10 -05:00
Matt
09f9d07271
Misfiring tf warnings (#15442)
* Fix spurious warning in TF TokenClassification models

* Fixing one last spurious warning

* Removing outdated warning altogether
2022-01-31 19:17:59 +00:00
Suraj Patil
6915174e68
[RobertaTokenizer] remove inheritance on GPT2Tokenizer (#15429)
* refactor roberta tokenizer

* refactor fast tokenizer

* remove old comment
2022-01-31 19:50:25 +01:00
Suraj Patil
a5ecbf7348
correct positional emb size (#15441) 2022-01-31 19:47:49 +01:00
Yih-Dar
5a70987301
Fix TFLEDModel (#15356)
* fix tf led

* fix

* fix

* Add test_pt_tf_model_equivalence_extra for TFLED

* add a (temporary) test

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-31 19:35:54 +01:00
Suraj Patil
87918d3221
[examples/Flax] add a section about GPUs (#15198)
* add a section about GPUs

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-01-31 19:20:53 +01:00
Patrick von Platen
b8810847d0
[Trainer] suppress warning for length-related columns (#15421)
* [Trainer] suppress warning for length-related columns

* improve message

* Update src/transformers/trainer.py
2022-01-31 18:51:29 +01:00
Sylvain Gugger
3385ca2582
Change REALM checkpoint to new ones (#15439)
* Change REALM checkpoint to new ones

* Last checkpoint missing
2022-01-31 12:50:20 -05:00
Matt
7e56ba2864
Fix spurious warning in TF TokenClassification models (#15435) 2022-01-31 17:09:16 +00:00
Yih-Dar
554d333ece
Fix loss calculation in TFXXXForTokenClassification models (#15294)
* Fix loss calculation in TFFunnelForTokenClassification

* revert the change in TFFunnelForTokenClassification

* fix FunnelForTokenClassification loss

* fix other TokenClassification loss

* fix more

* fix more

* add num_labels to ElectraForTokenClassification

* revert the change to research projects

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-31 11:43:08 -05:00
Stas Bekman
44c7857b87
[deepspeed doc] fix import, extra notes (#15400)
* [deepspeed doc] fix import, extra notes

* typo
2022-01-31 08:28:10 -08:00
NielsRogge
47df0f2234
Add header (#15434) 2022-01-31 11:15:54 -05:00
Sylvain Gugger
7fc6f41d91
Add doc for add-new-model-like command (#15433) 2022-01-31 11:10:45 -05:00
Ogundepo Odunayo
282ae123e2
add t5 ner finetuning (#15432) 2022-01-31 17:03:06 +01:00
NielsRogge
d4b3e56d64
[Hotfix] Fix Swin model outputs (#15414)
* Fix Swin model outputs

* Rename pooler
2022-01-31 16:32:14 +01:00
Suraj Patil
38dfb40ae3
import torch.utils.checkpoint (#15427) 2022-01-31 15:51:50 +01:00
Jonatas Grosman
f624249d8b
[Robust Speech Challenge] Add missing LR parameter (#15428) 2022-01-31 15:50:56 +01:00
Kamal Raj
3254080d45
Update README.md (#15430)
fix typo
2022-01-31 09:48:20 -05:00
Julien Plu
aa19f478ac
Add (M)Luke model training for Token Classification in the examples (#14880)
* Add Luke training

* Fix true label tags

* Fix true label tags

* Fix true label tags

* Update the data collator for Luke

* Some training refactor for Luke

* Improve data collator for Luke

* Fix import

* Fix datasets concatenation

* Add the --max_entity_length argument for Luke models

* Remove unused code

* Fix style issues

* Fix style issues

* Move the Luke training into a separate folder

* Fix style

* Fix naming

* Fix filtering

* Fix filtering

* Fix filter

* Update some preprocessing

* Move luke to research_projects

* Checkstyle

* Address comments

* Fix style
2022-01-31 07:58:18 -05:00
François REMY
0094eba363
Fix additional DataTrainingArguments documentation (#15408)
(This is an editorial change only)
2022-01-31 07:45:11 -05:00
NielsRogge
ee5de66349
Add SegformerFeatureExtractor to Auto API (#15410) 2022-01-31 11:38:08 +01:00
Suraj Patil
0f69b924fb
[XGLMTokenizer] fix init and add in AutoTokenizer (#15406) 2022-01-30 15:35:53 +01:00
Yih-Dar
f380bf2b61
Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel (#15298)
* Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel

* overwrite test_loss_computation

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-01-29 15:08:35 +00:00
Soonhwan-Kwon
e09473a817
Add support for XLM-R XL and XXL models via modeling_xlm_roberta_xl.py (#13727)
* add xlm roberta xl

* add convert xlm xl fairseq checkpoint to pytorch

* fix init and documents for xlm-roberta-xl

* fix indentation

* add test for XLM-R XL, XXL

* fix model hub name

* fix some stuff

* up

* correct init

* fix more

* fix as suggestions

* add torch_device

* fix default values of doc strings

* fix leftovers

* merge to master

* up

* correct hub names

* fix docs

* fix model

* up

* finalize

* last fix

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add copied from

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
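A hedged usage sketch for the new checkpoints (hub names per the "correct hub names" step above):

```
# Sketch: the XL/XXL checkpoints load like any other XLM-R model.
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")
inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
```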
2022-01-29 13:42:37 +01:00
Steven Liu
16d4acbfdb
Get started docs (#15098)
* clean commit of changes

* apply review feedback, make edits

* fix backticks, minor formatting

* 🖍 make fixup and minor edits

* 🖍 fix # in header

* 📝 update code sample without from_pt

* 📝 final review
2022-01-28 19:01:37 -06:00