* fix AutoModel.from_pretrained(..., torch_dtype=...)
* fix to_diff_dict
* add better test
* torch is not always available when a model has self.torch_dtype
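For context, a minimal sketch of the behaviour these fixes target (the checkpoint name is just a placeholder, and the exact serialized value may differ):

```python
import torch
from transformers import AutoModel

# Load the weights directly in half precision via the torch_dtype argument
# (the checkpoint name here is a placeholder).
model = AutoModel.from_pretrained("bert-base-uncased", torch_dtype=torch.float16)

# to_diff_dict() is expected to serialize the dtype as a plain string ("float16"),
# so the config stays JSON-serializable even in environments without torch.
print(model.config.to_diff_dict().get("torch_dtype"))
```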
* make flax gpt2 work with cross attention
* Remove encoder->decoder projection layer
* A draft (incomplete) for FlaxEncoderDecoderModel
* Add the method from_encoder_decoder_pretrained + the docstrings
* Fix the mistakes of using EncoderDecoderModel
* Fix style
* Add FlaxEncoderDecoderModel to the library
* Fix cyclic imports
* Add FlaxEncoderDecoderModel to modeling_flax_auto.py
* Remove question comments
* add tests for FlaxEncoderDecoderModel
* add flax_encoder_decoder to the lists of ignored entries in check_repo.py
* fix missing required positional arguments
* Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained()
Also fix generation eos/pad tokens issue
* Fix: Use sequences from the generated_output
* Change a check from assert to raise ValueError
* Fix examples and token ids issues
* Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
* Remove the changes in configuration docstrings.
* allow for bert 2 gpt2
* make fix-copies
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Change remaining examples to bert2gpt2
* Change the test to Bert2GPT2
* Fix examples
* Fix import
* Fix unpack bug
* Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix: NotImplentedError -> NotImplementedError
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* up
* finalize
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
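For context, a minimal bert2gpt2 usage sketch of the new class (checkpoint names are placeholders, and the exact generation/token-id handling may differ from the shipped examples):

```python
from transformers import FlaxEncoderDecoderModel, BertTokenizerFast, GPT2TokenizerFast

# Warm-start a bert2gpt2 model from two independently pretrained checkpoints.
model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")

encoder_tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
decoder_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

inputs = encoder_tokenizer("A long article to summarize.", return_tensors="np")

# GPT2 has no pad token, so reuse eos for padding during generation
# (this is the eos/pad token issue mentioned above).
generated = model.generate(
    inputs.input_ids,
    decoder_start_token_id=model.config.decoder.bos_token_id,
    pad_token_id=decoder_tokenizer.eos_token_id,
)
# Note: use `.sequences` from the generation output, as fixed above.
print(decoder_tokenizer.batch_decode(generated.sequences, skip_special_tokens=True))
```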
* add test
* add change in PreTrainedTokenizerBase
* change Luke
* deactivate
* add the possibility to add additional special tokens for M2M100
* format
* add special test for canine
* proposed changes for mbart
* proposed changes for mbart50
* proposed changes for byt5
* proposed changes for canine
* proposed changes for t5
* test fast and slow
* remove comment
* remove comment
* add fast version for all tests
* replace break by continue
* add more comments
* add check to avoid duplicates
* remove comment
* format
* proposed change for wav2vec2
* reverse changes mbart
* uncomment
* format
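A rough sketch of the feature being exercised by these tokenizer tests (the checkpoint name and token are placeholders):

```python
from transformers import M2M100Tokenizer

# Register an extra special token at load time; the checkpoint name is a placeholder.
tokenizer = M2M100Tokenizer.from_pretrained(
    "facebook/m2m100_418M",
    additional_special_tokens=["<new_token>"],
)

# The extra token is tracked as a special token and kept as a single piece.
print("<new_token>" in tokenizer.additional_special_tokens)
print(tokenizer.convert_tokens_to_ids("<new_token>"))

# Adding the same token again should not duplicate it, thanks to the check above.
tokenizer.add_special_tokens({"additional_special_tokens": ["<new_token>"]})
```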
* Barrier -> barrier
* added logger for metrics
* removed stream handler in trainer
* moved handler
* removed streamhandler from trainer
* updated test image and instance type; added datasets version to test
* Update tests/sagemaker/scripts/pytorch/requirements.txt
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Fill mask pipelines test updates.
* Model eval !!
* Adding slow test with actual values.
* Making all tests pass (skipping quite a bit).
* Doc styling.
* Better doc cleanup.
* Making an explicit test with no pad token tokenizer.
* Typo.
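For reference, the kind of call these pipeline tests exercise (the model name is a placeholder and the printed scores are checkpoint-specific):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")

# Each prediction carries the filled sequence, the token string and a score;
# the slow test above pins the actual values for a fixed checkpoint.
for prediction in fill_mask("Paris is the <mask> of France.", top_k=2):
    print(prediction["token_str"], round(prediction["score"], 4))
```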
* Fix inconsistency of the last element in hidden_states between PyTorch/Flax GPT2(Neo) (#13102)
* Fix missing elements in outputs tuple
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* Fix local variable 'all_hidden_states' referenced before assignment
* Fix by returning tuple containing None values
* Fix quality
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
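The pattern behind these fixes, very roughly (a simplified toy loop, not the actual GPT2/GPT-Neo modeling code):

```python
def run_layers(hidden_states, layers, final_layer_norm, output_hidden_states=False):
    # Initialize to () only when requested; otherwise keep None but still return
    # the slot, so the output tuple always has the same shape.
    all_hidden_states = () if output_hidden_states else None

    for layer in layers:
        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)
        hidden_states = layer(hidden_states)

    # Record the post-layer-norm state as the *last* element so the PyTorch and
    # Flax implementations report the same final hidden state.
    hidden_states = final_layer_norm(hidden_states)
    if output_hidden_states:
        all_hidden_states = all_hidden_states + (hidden_states,)

    return hidden_states, all_hidden_states


# Toy usage: layers and the final norm are plain callables here.
print(run_layers(1.0, [lambda x: x + 1], lambda x: x * 2, output_hidden_states=True))
```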
* Create py.typed
This creates a [py.typed as per PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) that should be distributed to mark that the package includes (inline) type annotations.
* Update setup.py
Include py.typed as package data
* Update setup.py
Call `setup(...)` with `zip_safe=False`.
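In effect, the two setup.py changes above boil down to something like this (excerpt only; the real `setup()` call has many more arguments):

```python
# setup.py (excerpt): ship the py.typed marker and disable zipped installs so
# type checkers can discover the inline annotations, as described in PEP 561.
from setuptools import find_packages, setup

setup(
    name="transformers",
    packages=find_packages("src"),
    package_dir={"": "src"},
    package_data={"transformers": ["py.typed"]},
    zip_safe=False,
)
```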
* Conditionally declare `TOKENIZER_MAPPING_NAMES` within an `if TYPE_CHECKING` block so that type checkers don't need to evaluate the RHS of the assignment.
This improves the performance of the pylance/pyright type checkers.
* Update src/transformers/models/auto/tokenization_auto.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* adding missing import
* format
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
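Roughly, the pattern applied in tokenization_auto.py looks like this (heavily abridged; the real mapping has many more entries):

```python
from collections import OrderedDict
from typing import TYPE_CHECKING, Optional, Tuple

if TYPE_CHECKING:
    # Type checkers take this cheap branch instead of evaluating the large
    # literal below, which is what speeds up pylance/pyright.
    TOKENIZER_MAPPING_NAMES: OrderedDict[str, Tuple[Optional[str], Optional[str]]] = OrderedDict()
else:
    TOKENIZER_MAPPING_NAMES = OrderedDict(
        [
            ("bert", ("BertTokenizer", "BertTokenizerFast")),
            ("gpt2", ("GPT2Tokenizer", "GPT2TokenizerFast")),
            # ... many more entries in the real file ...
        ]
    )
```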
* Change FlaxBartForConditionalGeneration.decode() argument: deterministic -> train
* Also change the parameter name to train for flax marian and mbart
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
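A toy stand-in for the renamed argument (not the actual FlaxBart code): the public method takes `train` and maps it onto Flax's internal `deterministic` flag.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn


class TinyDecoderBlock(nn.Module):
    """Minimal stand-in for a decoder layer that uses dropout."""

    @nn.compact
    def __call__(self, x, deterministic: bool = True):
        x = nn.Dense(8)(x)
        return nn.Dropout(rate=0.1)(x, deterministic=deterministic)


def decode(module, params, x, train: bool = False, dropout_rng=None):
    # Public API exposes `train`; internally Flax still expects `deterministic`.
    rngs = {"dropout": dropout_rng} if train else {}
    return module.apply({"params": params}, x, deterministic=not train, rngs=rngs)


block = TinyDecoderBlock()
params = block.init(jax.random.PRNGKey(0), jnp.ones((1, 4)))["params"]
print(decode(block, params, jnp.ones((1, 4)), train=False).shape)
```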
* Doctests
* Limit to 4 decimals
* Try with separate PT/TF tests
* Remove test for TF
* Use an ellipsis for the predictions
* Doctest continue on failure
Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
The classification head of AlbertForMultipleChoice uses `hidden_dropout_prob` instead of `classifier_dropout_prob`. This is not desirable, as we cannot change the classifier head dropout probability without changing the dropout probabilities of the whole model.
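A hedged sketch of the intended wiring (only the head's dropout is shown; the surrounding model code is elided):

```python
import torch.nn as nn


class AlbertMultipleChoiceHead(nn.Module):
    """Toy stand-in showing the dropout wiring of the classification head."""

    def __init__(self, config):
        super().__init__()
        # Use the head-specific probability so it can be tuned without touching
        # the dropout applied inside the rest of the model.
        self.dropout = nn.Dropout(config.classifier_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, 1)

    def forward(self, pooled_output):
        return self.classifier(self.dropout(pooled_output))
```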
* Fix doctests for quicktour
* Adapt causal LM example
* Remove space
* Fix until summarization
* End of task summary
* Style
* With last changes in quicktour
* Use original key for label in DataCollatorForTokenClassification
DataCollatorForTokenClassification accepts either `label` or `labels` as the key for labels in its input. However, after padding the labels, it assigns the padded labels to the key `labels`. If `label` was originally used as the key, the original unpadded labels still remain in the batch. Then, at line 192, when we try to convert the batch elements to torch tensors, these original unpadded labels cannot be converted, as the labels for different samples have different lengths.
* Fixed style.
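In essence, the fix means the collator writes the padded labels back under whichever key the features arrived with, along these lines (a simplified sketch, not the full collator):

```python
import torch


def collate_token_classification(features, label_pad_token_id=-100):
    """Simplified sketch of the label handling in the collator."""
    # Incoming features may use either "label" or "labels" as the key.
    label_name = "label" if "label" in features[0] else "labels"
    labels = [feature[label_name] for feature in features]

    sequence_length = max(len(feature["input_ids"]) for feature in features)
    batch = {
        "input_ids": torch.tensor(
            [f["input_ids"] + [0] * (sequence_length - len(f["input_ids"])) for f in features]
        )
    }

    # Store the padded labels under the *original* key. Writing them to "labels"
    # while the features used "label" would leave the ragged, unpadded column in
    # the batch, and the torch.tensor() conversion of that column would fail.
    batch[label_name] = torch.tensor(
        [label + [label_pad_token_id] * (sequence_length - len(label)) for label in labels]
    )
    return batch
```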