Commit Graph

8590 Commits

Author SHA1 Message Date
Patrick von Platen
1e847b40c0
[WavLM] give model more precision (#14958) 2021-12-28 11:07:05 +01:00
Patrick von Platen
1c121916f3
Add Speech Seq2Seq Training script (#14792)
* start

* add gradient checkpointing and feature extractor freezing

* Apply suggestions from code review

* up

* up

* up

* correct

* up

* more changes

* up

* up

* up

* remove rst
2021-12-28 10:20:51 +01:00
Stas Bekman
10fd4fa1a6
[doc] :class: hunt (#14955)
* [doc] :class: hunt

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* fix the fix + style

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-12-27 17:17:38 -08:00
Sylvain Gugger
2c5597f6c7 Style 2021-12-27 19:18:08 -05:00
Sylvain Gugger
b5e2b183af
Doc styler examples (#14953)
* Fix bad examples

* Add black formatting to style_doc

* Use first nonempty line

* Put it at the right place

* Don't add spaces to empty lines

* Better templates

* Deal with triple quotes in docstrings

* Result of style_doc

* Enable mdx treatment and fix code examples in MDXs

* Result of doc styler on doc source files

* Last fixes

* Break copy from
2021-12-27 19:07:46 -05:00
Stas Bekman
e13f72fbff
[doc] :obj: hunt (#14954)
* redo sans examples

* style
2021-12-27 15:49:48 -08:00
Stas Bekman
133c5e40c4
[doc] consistent True/False/None default format (#14951)
* [doc] consistent True/False/None default format

* Update src/transformers/models/xlnet/modeling_xlnet.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-12-27 14:31:40 -08:00
Sylvain Gugger
b2f500256e
Convert last rst file (#14952) 2021-12-27 17:09:37 -05:00
Sylvain Gugger
87e6e4fe5c
Doc styler v2 (#14950)
* New doc styler

* Fix issue with args at the start

* Code sample fixes

* Style code examples in MDX

* Fix more patterns

* Typo

* Typo

* More patterns

* Do without black for now

* Get more info in error

* Docstring style

* Re-enable check

* Quality

* Fix add_end_docstring decorator

* Fix docstring
2021-12-27 16:31:21 -05:00
Mihai Balint
c1138273d4
Fix duplicate call to save_checkpoint when using deepspeed (#14946)
* Fix duplicate call to save_checkpoint when using deepspeed / stage3_gather_fp16_weights_on_model_save

* Revert "Fix duplicate call to save_checkpoint when using deepspeed / stage3_gather_fp16_weights_on_model_save"

This reverts commit 6a3dec0397.

* Delete correct duplicate invocation of deepspeed save_checkpoint
2021-12-27 11:25:26 -08:00
Ayal Klein
03885a3f50
fix to issue #14833 in data_collator - consider no labels (#14930) 2021-12-27 11:48:48 -05:00
Daniel Stancl
501307b58b
Add ElectraForCausalLM -> Enable Electra encoder-decoder model (#14729)
* Add ElectraForCausalLM and cover some basic tests & need to fix a few tests

* Fix bugs

* make style

* make fix-copies

* Update doc

* Change docstring to markdown format

* Remove redundant update_keys_to_ignore
2021-12-27 12:37:52 +01:00
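
The change above makes it possible to plug an Electra checkpoint into the encoder-decoder framework. A minimal illustrative sketch, assuming the standard `EncoderDecoderModel` API; the public Electra checkpoint is a placeholder, not something prescribed by the PR:

```python
# Minimal sketch (not from the PR): Electra as both encoder and decoder of an
# EncoderDecoderModel, which relies on the newly added ElectraForCausalLM.
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/electra-base-discriminator",  # encoder
    "google/electra-base-discriminator",  # decoder, loaded as a causal LM with cross-attention
)

inputs = tokenizer("An example sentence.", return_tensors="pt")
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=inputs.input_ids)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```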
Nicolas Patry
b058490ceb
ChunkPipeline (batch_size enabled on zero-cls and qa pipelines) (#14225)
* Pipeline chunks.

* Batching for chunking pipelines?

* Batching for `question-answering` and `zero-shot-cls`.

* Fixing for FNet.

* Making ASR a chunk pipeline.

* Chunking ASR API.

* doc style.

* Fixing ASR test.

* Fixing QA error (p_mask, padding is 1, not 0).

* Enable both vad and simple chunking.

* Max length for vad.

* remove inference mode, crashing on s2t.

* Revert ChunkPipeline for ASR pipeline.

Too many knobs for simple integration within the pipeline; better to stick to external convenience functions instead. That gives more control, a simpler pipeline, and makes it easier to replace with other approaches later.

* Drop necessity for PT for these.

* Enabling generators.

* Add mic + cleanup.

* Typo.

* Typo2.

* Remove ASR work, it does not belong in this PR anymore.

* Update src/transformers/pipelines/pt_utils.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Update src/transformers/pipelines/zero_shot_classification.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Adding many comments.

* Doc quality.

* `hidden_states` handling.

* Adding doc.

* Bad rebase.

* Autofixing docs.

* Fixing CRITICAL bug in the new Zerocls pipeline.

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2021-12-27 11:26:20 +01:00
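
To illustrate what the ChunkPipeline work above enables for users, here is a minimal hedged sketch of batched inference on the zero-shot-classification pipeline; the checkpoint, texts, and candidate labels are placeholders:

```python
# Illustrative only: batch_size on a zero-shot-classification pipeline after the
# ChunkPipeline refactor. Model and labels are arbitrary examples.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
texts = ["The team won the championship.", "Stocks fell sharply today."]
results = classifier(
    texts,
    candidate_labels=["sports", "finance", "politics"],
    batch_size=2,  # (sequence, candidate label) pairs are batched through the model
)
for result in results:
    print(result["labels"][0], round(result["scores"][0], 3))
```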
Qing
705ca7f21b
Fix Perceiver docs (#14917) 2021-12-24 11:28:47 +01:00
Patrick von Platen
116829900a
[WavLM] fix wavlm docs (#14910) 2021-12-23 23:17:20 +01:00
Stas Bekman
415810664b
[doc] install - add jax (#14912)
As `jax` with CUDA requires special instructions to be installed correctly, add a link to the `jax` installation instructions.

Note: the Flax install page only covers CPU-only `jax` installation.
2021-12-23 13:12:59 -08:00
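
As a quick illustrative follow-up (not part of the commit), one way to check which backend `jax` actually picked up after following the linked instructions:

```python
# Illustrative check after installing jax: confirm the version and visible devices.
import jax

print(jax.__version__)
print(jax.devices())  # should list GPU devices when the CUDA build is installed correctly
```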
Sylvain Gugger
676643c6d6
Better logic for getting tokenizer config in AutoTokenizer (#14906)
* Better logic for getting tokenizer config in AutoTokenizer

* Remove needless import

* Remove debug statement

* Address review comments
2021-12-23 14:18:07 -05:00
Sylvain Gugger
f566c6e3b7
Fix failing GPU trainer tests (#14903)
* Fix failing GPU trainer tests

* Remove print statements
2021-12-23 13:59:33 -05:00
Patrick von Platen
fe4197ab11
[Generate] Remove attention_mask and integrate model_main_input_name (#14856)
* up

* save

* correct

* up

* correct more

* up

* up

* up

* up

* up

* correct

* fix tf

* fix

* remove tokenizer
2021-12-23 19:43:37 +01:00
Stas Bekman
86b40073e9
[doc] post-porting (#14890)
Found a few oddities:

1. https://huggingface.co/docs/transformers/main_classes/logging#transformers.utils.logging.enable_explicit_format
has a stray `::` - this PR fixes it.

2. This one looks broken too:
https://huggingface.co/docs/transformers/main_classes/logging#transformers.utils.logging.set_verbosity
has a stray `<`,

but I'm not sure where it is coming from.
2021-12-23 10:19:34 -08:00
Anton Lozhkov
ee55ea692b
Update diarization and WavLM tolerances (#14902) 2021-12-23 19:53:56 +03:00
Patrick von Platen
ef47d4f848
[AutoTokenizer] Fix incorrect from pretrained (#14900) 2021-12-23 17:22:33 +01:00
Yih-Dar
8f2cc1c3ab
Add TFCLIPModel (#13967)
* Start the work for TFCLIPModel

* Convert to TF code (TODO: loss + doc)

* Clean up

* Fix pooled_output for TFCLIPTextTransformer - using tf.gather_nd

* assert -> raise error

* Expose TFCLIPModel

* Deal with dummy_inputs

* Add tests

* Fix all tests. TODO: manual check weight loading + add more comments

* Fix pt tf equivalence test

* fixes

* update TFCLIPVisionEmbeddings's Conv2D

* Fix loss + overwrite test_pt_tf_model_equivalence from common

* Add a comment about the change about MainLayer in test_keras_save_load

* Set return_loss=True in TFCLIPModelTester + make tests pass

* overwrite test_pt_tf_model_equivalence from tf common

* fix base_model_prefix

* Fix examples

* remove unused

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* apply review suggestions

* change self.pre_layrnorm to self.pre_layernorm

* apply more review suggestions

* return attention probs before dropout (to align with PT)

* fix weight init

* fix

* build doc

* fix missing doc

* fix for test

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-12-23 11:19:44 -05:00
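
For orientation, a minimal sketch of how the new TFCLIPModel can be used for image-text similarity. The checkpoint and image URL are placeholders, and `from_pt=True` may be needed if only PyTorch weights exist on the Hub:

```python
# Illustrative sketch of TFCLIPModel usage; checkpoint and image are examples only.
import requests
import tensorflow as tf
from PIL import Image
from transformers import CLIPProcessor, TFCLIPModel

model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # from_pt=True if no TF weights
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
)
outputs = model(**inputs)
probs = tf.nn.softmax(outputs.logits_per_image, axis=-1)  # image-text similarity as probabilities
print(probs)
```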
Yang Dong
2d30443cd3
Set run_name in MLflowCallback (#14894)
* Set run_name in MLflowCallback

* Update the docs for `run_name` argument
2021-12-23 10:53:33 -05:00
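
A small illustrative sketch of what this change means for users, assuming the run name is forwarded from `TrainingArguments.run_name` as the PR title suggests; all values are placeholders:

```python
# Illustrative only: with MLflow reporting enabled, the MLflow run should now be
# named after TrainingArguments.run_name instead of an auto-generated name.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    run_name="my-experiment",  # picked up by MLflowCallback as the MLflow run name
    report_to=["mlflow"],
)
```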
Leandro von Werra
1d651868d6
add custom stopping criteria to human eval script (#14897) 2021-12-23 14:59:11 +01:00
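
For context, a hedged sketch of a custom stopping criterion of the kind added to the human-eval script: stop once every sequence in the batch contains one of a set of stop strings. The class name and stop strings are illustrative, not the script's actual implementation:

```python
# Illustrative sketch of a custom StoppingCriteria; names and details are placeholders.
from transformers import StoppingCriteria, StoppingCriteriaList


class StopOnStrings(StoppingCriteria):
    def __init__(self, stop_strings, tokenizer, start_length=0):
        self.stop_strings = stop_strings
        self.tokenizer = tokenizer
        self.start_length = start_length  # only look at newly generated tokens

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        decoded = self.tokenizer.batch_decode(input_ids[:, self.start_length:])
        return all(any(s in text for s in self.stop_strings) for text in decoded)


# Usage (model/tokenizer assumed to be defined elsewhere):
# criteria = StoppingCriteriaList([StopOnStrings(["\nclass", "\ndef"], tokenizer)])
# model.generate(input_ids, stopping_criteria=criteria, max_new_tokens=128)
```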
lewtun
6b655cc63f
Add ONNX support for MarianMT models (#14586)
* First commit to add MarianMT to ONNX

* Now MarianModel.forward() automatically generates decoder_input_ids, like BartModel.forward()

* Adjusted MarianOnnxConfig.inputs and outputs to work with seq2seq-lm feature

* Style fix

* Added support for other features for already supported models

* Partial support for causal and seq2seq models

* Partial support for causal and seq2seq models

* Add default task for MarianMT ONNX

* Remove automatic creation of decoder_input_ids

* Extend inputs and outputs for MarianMT ONNX config

* Add MarianMT to ONNX unit tests

* Refactor

* OnnxSeq2SeqConfigWithPast to support seq2seq models

* Parameterized the onnx tests

* Restored run_mlm.py

* Restored run_mlm.py

* [WIP] BART update

* BART and MBART

* Add past_key_values and fix dummy decoder inputs

Using a sequence length of 1 in generate_dummy_inputs() produces large discrepancies, presumably due to some hidden optimisations.

* Refactor MarianOnnxConfig to remove custom past_key_values logic

* Fix quality

* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"

This reverts commit 0f4e39c559.

* is_torch_available test to avoid failing imports

* sorting parameterize parameters to solve ERROR gw0 gw1

* tests fix

* tests fix

* GPT2 with past fix

* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially

* Removed onnx file

* Refactor Marian export to account for base changes

* Fix copies

* Implemented suggestions

* Extend support for causal LM

* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"

This reverts commit 0f4e39c559.

* is_torch_available test to avoid failing imports

* sorting parameterize parameters to solve ERROR gw0 gw1

* tests fix

* tests fix

* GPT2 with past fix

* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially

* Removed onnx file

* Implemented suggestions

* Fixed __init__ to resolve conflict with master

* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"

This reverts commit 0f4e39c559.

* is_torch_available test to avoid failing imports

* sorting parameterize parameters to solve ERROR gw0 gw1

* tests fix

* tests fix

* GPT2 with past fix

* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially

* Removed onnx file

* Implemented suggestions

* Fixed __init__ to resolve conflict with master

* Remove commented import

* Remove ONNX model

* Remove redundant class method

* Tidy up imports

* Fix quality

* Refactor dummy input function

* Add copied from statements to Marian config functions

* Remove false copied from comments

* Fix copy from comment

Co-authored-by: Massimiliano Bruni <massimiliano.bruni@hcl.com>
Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
2021-12-23 13:35:56 +01:00
Henrik Holm
6a7b9da2ae
Add 'with torch.no_grad()' to integration test forward pass (#14808) 2021-12-23 04:23:39 -05:00
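
The pattern this commit applies is simply to wrap the test's forward pass in `torch.no_grad()`. A self-contained illustrative sketch (the checkpoint here is a placeholder, not the model the PR touched):

```python
# Illustrative: run an integration-test forward pass without tracking gradients.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Integration test input.", return_tensors="pt")

with torch.no_grad():  # no autograd graph is built, saving memory in tests
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)
```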
Alex Hedges
d8c09c6541
Fix AttributeError from PreTrainedTokenizerFast.decoder (#14691) 2021-12-23 04:19:25 -05:00
Yih-Dar
4210579522
Fix doc examples: ... takes no keyword arguments (#14701)
* Fix doc examples: ... takes no keyword arguments

* fix copies

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2021-12-23 04:07:21 -05:00
lewtun
355dc0ce67
Fix installation instructions for BART ONNX example (#14885) 2021-12-23 04:05:32 -05:00
Sylvain Gugger
207594be81
Convert rst files (#14888)
* Convert all tutorials and guides

* Convert all remaining rst to mdx

* Track and fix bad links
2021-12-22 16:14:35 -05:00
Matt
b0c7d2ec58
Keras metric callback (#14867)
* Working on splitting out labels

* First working version

* Fixed concatenation of outputs and labels

* val_dataset -> eval_dataset

* Only pass input arrays in tokenizer.model_input_names

* Only pass input arrays in tokenizer.model_input_names

* Only remove unexpected keys when predict_with_generate is True

* Adding proper docstring

* Adding example to docstring

* Add a proper ROUGE metric example

* Add a proper ROUGE metric example

* Add version checking

* Update src/transformers/keras_callbacks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/keras_callbacks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/keras_callbacks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/keras_callbacks.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Remove requirement for tokenizer with predict_with_generate

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-12-22 20:35:39 +00:00
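
To show roughly how the new callback is meant to be wired up, a hedged sketch follows; the metric function is a toy accuracy, the datasets and model are assumed to exist, and argument names follow the PR as far as can be told:

```python
# Illustrative sketch of KerasMetricCallback; metric, datasets and model are placeholders.
import numpy as np
from transformers.keras_callbacks import KerasMetricCallback


def metric_fn(eval_predictions):
    predictions, labels = eval_predictions
    return {"accuracy": float(np.mean(np.argmax(predictions, axis=-1) == labels))}


# Assuming tf_train_dataset / tf_eval_dataset and a compiled Keras model exist:
# metric_callback = KerasMetricCallback(metric_fn=metric_fn, eval_dataset=tf_eval_dataset)
# model.fit(tf_train_dataset, validation_data=tf_eval_dataset,
#           callbacks=[metric_callback], epochs=3)
```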
Patrick von Platen
fa39ff9fc4 Docs for v4.16.0dev0 2021-12-22 20:39:44 +01:00
Patrick von Platen
05fa1a7ac1 Release: v4.15.0 2021-12-22 18:43:15 +01:00
Sylvain Gugger
87a033d9fa
Properly indent return block (#14887) 2021-12-22 12:28:45 -05:00
Michael Benayoun
13504dcbea
Onnx enable tasks for supported models (part 2) (#14700)
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"

This reverts commit 0f4e39c559.

* is_torch_available test to avoid failing imports

* sorting parameterize parameters to solve ERROR gw0 gw1

* tests fix

* tests fix

* GPT2 with past fix

* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially

* Removed onnx file

* Implemented suggestions

* Fixed __init__ to resolve conflict with master

* Remove commented import
2021-12-22 14:43:11 +01:00
Mario Šaško
1045a36c1f
Fix pytorch image classification example (#14883)
* Update example

* Remove skip in tests
2021-12-22 14:42:19 +01:00
NielsRogge
7df4b90c76
Fix Perceiver docs (#14879) 2021-12-22 14:18:03 +01:00
Sylvain Gugger
e37bc579fc Fix typo in error message 2021-12-22 08:19:36 -05:00
charon____
17efc806b4
IterableDatasetShard should use per device batch size instead of real batch size (#14714) 2021-12-22 07:52:07 -05:00
guillaume-be
2a56edb321
Updated deberta attention (#14625)
* Removed unused p2p attention handling

* Updated DeBERTa configuration

* Updated TF DeBERTa attention

* Rolled back accidental comment deletion

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2021-12-22 07:36:08 -05:00
Ryokan RI
824fd44fc3
Feature/fix slow test in mluke (#14749)
* make MLukeTokenizerTest fast

* make LukeTokenizerTest fast

* add entry to _toctree.yaml
2021-12-22 06:35:59 -05:00
SaulLu
c94c1b8967
update the arguments add_prefix_space and trim_offsets in backend_tokenizer.post_processor of RobertaTokenizerFast (#14752)
* add tests

* change post-processor, pre-tokenizer and decoder (can't update decoder)

* update test (remove decoder which doesn't depend on trim and add_prefix)

* just update the post_processor

* fix change

* `trim_offsets` has no influence on `pre_tokenizer`

* remove a test that needs some input from the `tokenizers` lib maintainers

* format

* add new test offsets roberta

* polish comments
2021-12-22 10:51:55 +01:00
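
A minimal sketch of the options this fix is about, assuming the public `RobertaTokenizerFast` interface; the sentence and settings are arbitrary examples:

```python
# Illustrative: add_prefix_space / trim_offsets now propagate to the backend
# post-processor, so offset mappings reflect the chosen options.
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained(
    "roberta-base", add_prefix_space=True, trim_offsets=False
)
encoding = tokenizer("Hello world", return_offsets_mapping=True)
print(encoding.tokens())
print(encoding["offset_mapping"])  # offsets keep leading spaces when trim_offsets=False
```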
Lysandre Debut
ec3567fe20
Convert model files from rst to mdx (#14865)
* First pass

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2021-12-22 03:27:30 -05:00
Sylvain Gugger
d0422de563
Fix doc mistakes (#14874)
* Remove double returns

* Last fixes

* Quality

* Last fix for Lxmert
2021-12-21 18:54:41 -05:00
Sylvain Gugger
e846a56ca4
Fix FlaxMarianMTModel return block. (#14873)
* Fixes in marian doc

* Another time

* Add return block in FlaxMarianMTModel
2021-12-21 17:57:37 -05:00
Sylvain Gugger
a6b7b47a39
Fixes in marian doc (#14872)
* Fixes in marian doc

* Another time
2021-12-21 17:17:02 -05:00
Mishig Davaadorj
eec9c8bbd7
Fix FLAX_MULTIPLE_CHOICE_SAMPLE typo (#14871) 2021-12-21 16:54:10 -05:00
Sylvain Gugger
e51c7b5872 Skip failing test 2021-12-21 15:15:17 -05:00
Sylvain Gugger
27b3031de2
Mass conversion of documentation from rst to Markdown (#14866)
* Convert docstrings of all configurations and tokenizers

* Processors and fixes

* Last modeling files and fixes to models

* Pipeline modules

* Utils files

* Data submodule

* All the other files

* Style

* Missing examples

* Style again

* Fix copies

* Say bye bye to rst docstrings forever
2021-12-21 15:06:33 -05:00