* Fill mask pipelines test updates.
* Model eval !!
* Adding slow test with actual values.
* Making all tests pass (skipping quite a bit.)
* Doc styling.
* Better doc cleanup.
* Making an explicit test with no pad token tokenizer.
* Typo.
* Fix inconsistency of the last element in hidden_states between PyTorch/Flax GPT2(Neo) (#13102)
* Fix missing elements in outputs tuple
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Fix local variable 'all_hidden_states' referenced before assignment
* Fix by returning tuple containing None values
* Fix quality
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Create py.typed
This creates a [`py.typed` file as per PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) that should be distributed to mark that the package includes (inline) type annotations.
* Update setup.py
Include py.typed as package data
* Update setup.py
Call `setup(...)` with `zip_safe=False`.
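The steps above can be sketched as a `setup.py` fragment (illustrative only; the package layout and name are assumptions, not the project's actual configuration): ship the empty `py.typed` marker as package data and set `zip_safe=False` so type checkers can read the inline annotations from the installed files, per PEP 561.

```python
# Hypothetical setup.py fragment illustrating PEP 561 packaging.
from setuptools import find_packages, setup

setup(
    name="transformers",
    packages=find_packages("src"),
    package_dir={"": "src"},
    # Distribute the (empty) py.typed marker so checkers know the
    # package carries inline type annotations.
    package_data={"transformers": ["py.typed"]},
    # Annotations must be readable from the filesystem, not a zip.
    zip_safe=False,
)
```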
* Conditionally declare `TOKENIZER_MAPPING_NAMES` within an `if TYPE_CHECKING` block so that type checkers don't need to evaluate the RHS of the assignment.
This improves the performance of the Pylance/Pyright type checkers.
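A minimal sketch of the pattern (the annotation and mapping entries here are illustrative, not the library's actual definitions): the declaration under `if TYPE_CHECKING` is seen only by static analyzers, so they never have to infer the name's type from the large literal built at runtime.

```python
from collections import OrderedDict
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers (pyright/pylance, mypy);
    # at runtime this block is skipped entirely.
    TOKENIZER_MAPPING_NAMES: OrderedDict

# At runtime the mapping is built normally; the checker already has
# the declared type above and need not analyze this literal eagerly.
TOKENIZER_MAPPING_NAMES = OrderedDict(
    [
        ("bert", ("BertTokenizer", "BertTokenizerFast")),
        ("gpt2", ("GPT2Tokenizer", "GPT2TokenizerFast")),
    ]
)
```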
* Update src/transformers/models/auto/tokenization_auto.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* adding missing import
* format
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Change FlaxBartForConditionalGeneration.decode() argument: deterministic -> train
* Also change the parameter name to train for flax marian and mbart
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* Doctests
* Limit to 4 decimals
* Try with separate PT/TF tests
* Remove test for TF
* Ellipsize the predictions
* Doctest continue on failure
Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
The classification head of `AlbertForMultipleChoice` uses `hidden_dropout_prob` instead of `classifier_dropout_prob`. This
is not desirable, as we cannot change the classifier head's dropout probability without changing the dropout probabilities of
the whole model.
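The intended resolution can be sketched as follows (a framework-free illustration, not the library's actual code; the config class here is hypothetical, though the field names mirror the ALBERT config): the head prefers the dedicated `classifier_dropout_prob` and only falls back to the model-wide `hidden_dropout_prob` when the dedicated field is unset.

```python
class AlbertLikeConfig:
    """Toy stand-in for the model config; field names mirror ALBERT's."""

    def __init__(self, hidden_dropout_prob=0.0, classifier_dropout_prob=None):
        self.hidden_dropout_prob = hidden_dropout_prob
        self.classifier_dropout_prob = classifier_dropout_prob


def head_dropout_prob(config):
    # Prefer the dedicated classifier dropout so the head can be tuned
    # independently; fall back to the model-wide value only when unset.
    if config.classifier_dropout_prob is not None:
        return config.classifier_dropout_prob
    return config.hidden_dropout_prob
```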
* Fix doctests for quicktour
* Adapt causal LM example
* Remove space
* Fix until summarization
* End of task summary
* Style
* With last changes in quicktour
* Use original key for label in DataCollatorForTokenClassification
DataCollatorForTokenClassification accepts either `label` or `labels` as the key for labels in its input. However, after padding, it assigns the padded labels to the key `labels`. If `label` was originally used as the key, the original unpadded labels still remain in the batch. Then, at line 192, when we try to convert the batch elements to torch tensors, these original unpadded labels cannot be converted, as the labels for different samples have different lengths.
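The fix can be sketched with a toy collator step (a simplified stand-in for the library's collator, with a hypothetical helper name and plain lists instead of tensors): the padded labels are written back under whichever key the input originally used, so no unpadded copy survives in the batch.

```python
def pad_labels(features, pad_token_label_id=-100):
    """Toy collator step: pad per-example label lists to equal length.

    Accepts either "label" or "labels" as the key; the padded result is
    stored back under the *original* key so the unpadded labels are not
    left behind in the batch.
    """
    label_name = "label" if "label" in features[0] else "labels"
    labels = [f[label_name] for f in features]
    max_len = max(len(lab) for lab in labels)
    # Collect all non-label columns as-is.
    batch = {
        key: [f[key] for f in features]
        for key in features[0]
        if key != label_name
    }
    # Right-pad every label sequence and reuse the original key.
    batch[label_name] = [
        lab + [pad_token_label_id] * (max_len - len(lab)) for lab in labels
    ]
    return batch
```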
* Fixed style.
* Adding HuggingArtists to Community Notebooks
* docs: add HuggingArtists to community notebooks
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* fix_torch_device_generate_test
* remove @
* improve docs for clm
* speed-ups
* correct t5 example as well
* push final touches
* Update examples/flax/language-modeling/README.md
* correct docs for mlm
* Update examples/flax/language-modeling/README.md
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
* Fix tied weights on TPU
* Manually tie weights in no trainer examples
* Fix for test
* One last missing
* Getting owned by my scripts
* Address review comments
* Fix test
* Fix tests
* Fix reformer tests
T5 with past ONNX export, and more explicit `past_key_values` input and output names for the ONNX model
Authored-by: Michael Benayoun <michael@huggingface.co>
* Initial work
* All auto models
* All tf auto models
* All flax auto models
* Tokenizers
* Add feature extractors
* Fix typos
* Fix other typo
* Use the right config
* Remove old mapping names and update logic in AutoTokenizer
* Update check_table
* Fix copies and check_repo script
* Fix last test
* Add back name
* clean up
* Update template
* Update template
* Forgot a )
* Use alternative to fixup
* Fix TF model template
* Address review comments
* Address review comments
* Style