* Working on splitting out labels
* First working version
* Fixed concatenation of outputs and labels
* val_dataset -> eval_dataset
* Only pass input arrays in tokenizer.model_input_names
* Only remove unexpected keys when predict_with_generate is True
* Adding proper docstring
* Adding example to docstring
* Add a proper ROUGE metric example
* Add version checking
* Update src/transformers/keras_callbacks.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Remove requirement for tokenizer with predict_with_generate
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
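A minimal usage sketch of the `KerasMetricCallback` these commits build up, assuming the `metric_fn` / `eval_dataset` / `predict_with_generate` arguments named above; the `evaluate` ROUGE backend, the `t5-small` checkpoint and the toy dataset are illustrative choices, not taken from the commits:

```python
import tensorflow as tf
import evaluate  # assumption: ROUGE loaded via the `evaluate` library
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM
from transformers.keras_callbacks import KerasMetricCallback

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
rouge = evaluate.load("rouge")

def rouge_fn(predictions, labels):
    # With predict_with_generate=True the callback hands over generated token
    # ids and the label ids; decode both before scoring.
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decoded_preds, references=decoded_labels)

# Tiny illustrative eval set of (inputs, labels) pairs.
features = dict(tokenizer(["summarize: a short document"], return_tensors="np"))
label_ids = tokenizer(["a summary"], return_tensors="np")["input_ids"]
eval_dataset = tf.data.Dataset.from_tensor_slices((features, label_ids)).batch(1)

metric_callback = KerasMetricCallback(
    metric_fn=rouge_fn,
    eval_dataset=eval_dataset,
    predict_with_generate=True,
)
# model.fit(train_dataset, callbacks=[metric_callback])  # training data not shown here
```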
* Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
This reverts commit 0f4e39c559.
* Add an is_torch_available check to avoid failing imports
* Sort parameterized test parameters to fix errors on pytest-xdist workers (gw0, gw1)
* Fix tests
* Fix GPT2 with past
* Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
* Removed onnx file
* Implemented suggestions
* Fixed __init__ to resolve conflict with master
* Remove commented import
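For context on the restored feature support, a hedged sketch of exporting an already-supported architecture with a non-default feature through `transformers.onnx`; the `FeaturesManager`/`export` calls follow the package's documented API, and the checkpoint and feature name are illustrative:

```python
from pathlib import Path

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers.onnx import export
from transformers.onnx.features import FeaturesManager

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
feature = "sequence-classification"  # one of the additionally supported features

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Look up the ONNX config registered for this model type and feature.
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(
    model, feature=feature
)
onnx_config = model_onnx_config(model.config)

onnx_inputs, onnx_outputs = export(
    tokenizer, model, onnx_config, onnx_config.default_onnx_opset, Path("model.onnx")
)
```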
* add tests
* Change the post-processor, pre-tokenizer and decoder (the decoder can't be updated)
* Update the test (remove the decoder check, which doesn't depend on trim_offsets or add_prefix_space)
* just update the post_processor
* fix change
* `trim_offsets` has no influence on `pre_tokenizer`
* remove a test that needs some input from the `tokenizers` lib maintainers
* format
* add a new test for RoBERTa offsets
* polish comments
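A small sketch of what `trim_offsets` changes in practice on the fast RoBERTa tokenizer; the checkpoint and sample text are illustrative:

```python
from transformers import RobertaTokenizerFast

# trim_offsets controls whether the post-processor strips the leading space of a
# ByteLevel token out of the reported character offsets; per the commit above it
# has no influence on the pre_tokenizer.
tok_trim = RobertaTokenizerFast.from_pretrained("roberta-base", trim_offsets=True)
tok_raw = RobertaTokenizerFast.from_pretrained("roberta-base", trim_offsets=False)

text = "hello world"
print(tok_trim(text, return_offsets_mapping=True)["offset_mapping"])
print(tok_raw(text, return_offsets_mapping=True)["offset_mapping"])
# With trim_offsets=False the offsets of a non-initial word include its leading
# space; with trim_offsets=True they do not.
```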
* Convert docstrings of all configurations and tokenizers
* Processors and fixes
* Last modeling files and fixes to models
* Pipeline modules
* Utils files
* Data submodule
* All the other files
* Style
* Missing examples
* Style again
* Fix copies
* Say bye bye to rst docstrings forever
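An illustrative before/after of the conversion (not a docstring lifted from these commits): Sphinx/rst roles are replaced by Markdown syntax understood by the new doc builder.

```python
# Before (rst):
#     loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`):
#         Classification loss. See :class:`~transformers.BertForSequenceClassification`.
#
# After (Markdown):
#     loss (`torch.FloatTensor` of shape `(1,)`, *optional*):
#         Classification loss. See [`BertForSequenceClassification`].
```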
* add custom `stopping_criteria` and `logits_processor` to `generate`
* add tests for custom `stopping_criteria` and `logits_processor`
* fix typo in RAG
* address reviewer comments
* improve custom logits processor/stopping criteria error message
* fix types in merge function signature
* change default for custom list from `None` to empty list
* fix rag generate
* add string split suggestion
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
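A minimal sketch of the new `generate` arguments; the particular processor and criterion (`MinLengthLogitsProcessor`, `MaxTimeCriteria`) and the `gpt2` checkpoint are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import LogitsProcessorList, MinLengthLogitsProcessor
from transformers import StoppingCriteriaList, MaxTimeCriteria

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The quick brown fox", return_tensors="pt")

# The custom lists are merged with the ones generate() builds from its own
# arguments; passing an object whose type duplicates a default (e.g. a
# MaxLengthCriteria, which generate derives from max_length) raises an error
# instead of silently overriding it.
outputs = model.generate(
    **inputs,
    max_length=30,
    logits_processor=LogitsProcessorList(
        [MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id)]
    ),
    stopping_criteria=StoppingCriteriaList([MaxTimeCriteria(max_time=5.0)]),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```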
* Convert file_utils docstrings to Markdown
* Test on BERT
* Return block indent
* Temporarily disable doc styler
* Remove from quality checks as well
* Remove doc styler mess
* Remove check from circleCI
* Fix typo
* Let's go on all other model files
* Add templates too
* Styling and quality
* Add a main_input_name attribute to all models
* Fix tests
* Wtf Vs Code?
* Update src/transformers/models/imagegpt/modeling_imagegpt.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Style
* Fix copies
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
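A quick illustration of the new attribute; the values follow the library's conventions for text, vision and speech models:

```python
from transformers import BertModel, ViTModel, Wav2Vec2Model

# main_input_name is a class attribute naming the model's primary input tensor.
print(BertModel.main_input_name)      # "input_ids"
print(ViTModel.main_input_name)       # "pixel_values"
print(Wav2Vec2Model.main_input_name)  # "input_values"
```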