* First commit to add MarianMT to ONNX
* Now MarianModel.forward() automatically generates decoder_input_ids, like BartModel.forward()
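As background, a minimal sketch of the BART-style behaviour this mirrors, assuming the same `shift_tokens_right` pattern BART uses (simplified here, not the exact library code); note that a later commit in this list removes the automatic creation again:

```python
import torch

# Simplified sketch of the BART-style helper that derives decoder_input_ids
# from input_ids when the caller does not pass them explicitly.
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # Any -100 label padding is mapped back to the real pad token.
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```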
* Adjusted MarianOnnxConfig.inputs and outputs to work with seq2seq-lm feature
* Style fix
* Added support for other features for already supported models
* Partial support for causal and seq2seq models
* Partial support for causal and seq2seq models
* Add default task for MarianMT ONNX
* Remove automatic creation of decoder_input_ids
* Extend inputs and outputs for MarianMT ONNX config
* Add MarianMT to ONNX unit tests
* Refactor
* Add OnnxSeq2SeqConfigWithPast to support seq2seq models
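As a rough sketch of how a model-specific config can build on this (illustrative only; the axis names and the `fill_with_past_key_values_` helper reflect my reading of the `transformers.onnx` API at the time):

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxSeq2SeqConfigWithPast


class MarianOnnxConfig(OnnxSeq2SeqConfigWithPast):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Encoder and decoder inputs with named dynamic axes, so the exported
        # graph accepts variable batch sizes and sequence lengths.
        common_inputs = OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "encoder_sequence"}),
                ("attention_mask", {0: "batch", 1: "encoder_sequence"}),
                ("decoder_input_ids", {0: "batch", 1: "decoder_sequence"}),
                ("decoder_attention_mask", {0: "batch", 1: "decoder_sequence"}),
            ]
        )
        if self.use_past:
            # With past key values, the base class appends the extra inputs.
            self.fill_with_past_key_values_(common_inputs, direction="inputs")
        return common_inputs
```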
* Parameterized the onnx tests
* Restored run_mlm.py
* Restored run_mlm.py
* [WIP] BART update
* BART and MBART
* Add past_key_values and fix dummy decoder inputs
Using a sequence length of 1 in generate_dummy_outputs() produces large discrepancies, presumably due to some hidden optimisations.
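For illustration, a hedged sketch of validating with dummy inputs longer than one token (import paths and argument names follow my reading of the `transformers.onnx` API and may differ slightly):

```python
from transformers import AutoConfig, AutoTokenizer
from transformers.file_utils import TensorType
from transformers.models.marian.configuration_marian import MarianOnnxConfig

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
onnx_config = MarianOnnxConfig(AutoConfig.from_pretrained(model_name))

# A sequence length > 1 keeps the PyTorch/ONNX comparison away from the
# shape-dependent fast paths that caused the large discrepancies above.
dummy_inputs = onnx_config.generate_dummy_inputs(
    tokenizer, batch_size=2, seq_length=8, framework=TensorType.PYTORCH
)
```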
* Refactor MarianOnnxConfig to remove custom past_key_values logic
* Fix quality
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
This reverts commit 0f4e39c559.
* is_torch_available test to avoid failing imports
* Sort parameterized test parameters to fix errors on pytest-xdist workers gw0/gw1
* tests fix
* tests fix
* GPT2 with past fix
* Fixed a stateful class attribute change that broke converting multiple models sequentially
* Removed onnx file
* Refactor Marian export to account for base changes
* Fix copies
* Implemented suggestions
* Extend support for causal LM
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
This reverts commit 0f4e39c559.
* is_torch_available test to avoid failing imports
* Sort parameterized test parameters to fix errors on pytest-xdist workers gw0/gw1
* tests fix
* tests fix
* GPT2 with past fix
* Fixed a stateful class attribute change that broke converting multiple models sequentially
* Removed onnx file
* Implemented suggestions
* Fixed __init__ to resolve conflict with master
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
This reverts commit 0f4e39c559.
* is_torch_available test to avoid failing imports
* sorting parameterize parameters to solve ERROR gw0 gw1
* tests fix
* tests fix
* GPT2 with past fix
* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
* Removed onnx file
* Implemented suggestions
* Fixed __init__ to resolve conflict with master
* Remove commented import
* Remove ONNX model
* Remove redundant class method
* Tidy up imports
* Fix quality
* Refactor dummy input function
* Add copied from statements to Marian config functions
* Remove false copied from comments
* Fix copy from comment
Co-authored-by: Massimiliano Bruni <massimiliano.bruni@hcl.com>
Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
* Working on splitting out labels
* First working version
* Fixed concatenation of outputs and labels
* val_dataset -> eval_dataset
* Only pass input arrays in tokenizer.model_input_names
* Only pass input arrays in tokenizer.model_input_names
* Only remove unexpected keys when predict_with_generate is True
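A small sketch of the `model_input_names` filtering described in the commits above, assuming `batch` is a dict of arrays coming out of the data pipeline (variable names are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

batch = {
    "input_ids": [[0, 1, 2]],
    "attention_mask": [[1, 1, 1]],
    "labels": [[5, 6, 7]],
}

# Keep only the arrays the model consumes as inputs; anything else
# (e.g. labels) is handled separately by the callback.
model_inputs = {k: v for k, v in batch.items() if k in tokenizer.model_input_names}
```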
* Adding proper docstring
* Adding example to docstring
* Add a proper ROUGE metric example
* Add a proper ROUGE metric example
* Add version checking
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Remove requirement for tokenizer with predict_with_generate
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
This reverts commit 0f4e39c559.
* is_torch_available test to avoid failing imports
* sorting parameterize parameters to solve ERROR gw0 gw1
* tests fix
* tests fix
* GPT2 with past fix
* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
* Removed onnx file
* Implemented suggestions
* Fixed __init__ to resolve conflict with master
* Remove commented import
* add tests
* change post-processor, pre-tokenizer and decoder (the decoder can't be updated)
* update test (remove the decoder check, which doesn't depend on trim_offsets and add_prefix_space)
* just update the post_processor
* fix change
* `trim_offsets` has no influence on `pre_tokenizer`
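To illustrate with the `tokenizers` library why `trim_offsets` lives on the post-processor (the processor arguments follow `tokenizers.processors.RobertaProcessing` as I understand it; treat this as a sketch, not the exact change):

```python
from tokenizers import processors
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# trim_offsets only shrinks the reported character offsets around the leading
# space during post-processing; it does not change pre-tokenization at all.
tokenizer.backend_tokenizer.post_processor = processors.RobertaProcessing(
    sep=(tokenizer.sep_token, tokenizer.sep_token_id),
    cls=(tokenizer.cls_token, tokenizer.cls_token_id),
    trim_offsets=False,
    add_prefix_space=True,
)

encoding = tokenizer(" Hello world", return_offsets_mapping=True)
print(encoding["offset_mapping"])
```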
* remove a test that needs some input from the `tokenizers` lib maintainers
* format
* add a new test for RoBERTa offsets
* polish comments
* Convert docstrings of all configurations and tokenizers
* Processors and fixes
* Last modeling files and fixes to models
* Pipeline modules
* Utils files
* Data submodule
* All the other files
* Style
* Missing examples
* Style again
* Fix copies
* Say bye bye to rst docstrings forever
* add custom `stopping_criteria` and `logits_processor` to `generate`
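A hedged usage sketch of what passing custom objects to `generate` looks like; the toy processor and criterion below are illustrative examples, not part of the library:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
    StoppingCriteria,
    StoppingCriteriaList,
)


class BanTokenLogitsProcessor(LogitsProcessor):
    """Illustrative custom processor: never generate the given token id."""

    def __init__(self, token_id: int):
        self.token_id = token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.token_id] = -float("inf")
        return scores


class StopOnToken(StoppingCriteria):
    """Illustrative custom criterion: stop once the given token id is produced."""

    def __init__(self, token_id: int):
        self.token_id = token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return bool(input_ids[0, -1].item() == self.token_id)


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids

# The custom lists are merged with the defaults that generate() builds itself.
outputs = model.generate(
    input_ids,
    max_length=30,
    logits_processor=LogitsProcessorList([BanTokenLogitsProcessor(tokenizer.encode("\n")[0])]),
    stopping_criteria=StoppingCriteriaList([StopOnToken(tokenizer.encode(".")[0])]),
)
print(tokenizer.decode(outputs[0]))
```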
* add tests for custom `stopping_criteria` and `logits_processor`
* fix typo in RAG
* address reviewer comments
* improve custom logits processor/stopping criteria error message
* fix types in merge function signature
* change default for custom list from `None` to empty list
* fix rag generate
* add string split suggestion
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Convert file_utils docstrings to Markdown
* Test on BERT
* Return block indent
* Temporarily disable doc styler
* Remove from quality checks as well
* Remove doc styler mess
* Remove check from circleCI
* Fix typo
* Let's go on all other model files
* Add templates too
* Styling and quality
* Add a main_input_name attribute to all models
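A brief sketch of reading the new attribute (the default value shown is my understanding for text models):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Generic code can look up the primary input name instead of hard-coding
# "input_ids"; vision or speech models expose e.g. "pixel_values" / "input_values".
print(model.main_input_name)
features = {model.main_input_name: None}  # build kwargs dynamically
```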
* Fix tests
* Wtf Vs Code?
* Update src/transformers/models/imagegpt/modeling_imagegpt.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Style
* Fix copies
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>