* fix bug in trainer_seq2seq.py, Line 172: generation_inputs must be a dict before being fed into self.model.generate()
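A minimal sketch of the idea behind this fix (the helper name is hypothetical, not the actual trainer code): `generate()` is called with unpacked keyword arguments, so a bare input tensor has to be wrapped in a dict first.

```python
def as_generation_inputs(generation_inputs):
    # Hypothetical helper illustrating the fix: generate() is invoked as
    # model.generate(**generation_inputs), which fails on a bare tensor.
    if not isinstance(generation_inputs, dict):
        generation_inputs = {"input_ids": generation_inputs}
    return generation_inputs

# usage sketch: model.generate(**as_generation_inputs(inputs["input_ids"]))
```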
* quick fix SummarizationPipeline error messages
Fix error messages to avoid spamming users with errors of the type:
`Your max_length is set to 50, but your input_length is only 46. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)`
* correct SummarizationPipeline error message fixes
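A hedged sketch of the corrected check (names here are assumptions, not the pipeline's internals): warn only when `max_length` actually exceeds the input length, so short inputs no longer trigger spurious messages.

```python
import warnings

def check_summary_length(input_length: int, max_length: int) -> None:
    # Warn only in the genuinely problematic case for summarization:
    # a requested max_length longer than the input itself.
    if max_length > input_length:
        warnings.warn(
            f"Your max_length is set to {max_length}, but your input_length is "
            f"only {input_length}. You might consider decreasing max_length "
            f"manually, e.g. summarizer('...', max_length={input_length // 2})."
        )
```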
* implement MLukeTokenizer and LukeForMaskedLM
* update tests
* update docs
* add LukeForMaskedLM to check_repo.py
* update README
* fix test and specify the entity pad id in tokenization_(m)luke
* fix EntityPredictionHeadTransform
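Usage sketch for the new classes; the checkpoint name follows the mLUKE release docs but should be treated as an assumption here.

```python
from transformers import MLukeTokenizer, LukeForMaskedLM

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeForMaskedLM.from_pretrained("studio-ousia/mluke-base")

# MLukeTokenizer also accepts entity_spans for entity-aware inputs.
inputs = tokenizer("Tokyo is the capital of <mask>.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```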
* add cross_attention_hidden_size to text-2-text encoder-decoder models (PT/Flax)
* for TFEncoderDecoderModel
* add equivalence test for TFEncoderDecoderModel
* fix
* fix failed equivalence tests
* remove unused import
* add detailed comment
* Fix check_equivalence_tf_to_pt by using encoder/decoder
* cleaning
* Use cross_attention_hidden_size in speech-to-text
* clean fast-init logging message in encoder-decoder models
* increase tol from 1e-5 to 1e-3 for tf test
* style
* make sure projection layer can run
* remove type conversion + add check
* fix conflict (config.output_hidden_size)
* Remove TF -> PT in check_pt_tf_equivalence for TFEncoderDecoderModel
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
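A hedged sketch of what `cross_attention_hidden_size` enables (the class name below is illustrative): when the decoder declares a cross-attention width different from the encoder's hidden size, the composite model projects encoder states before cross-attention, which is what the "make sure projection layer can run" commit exercises.

```python
import torch
from torch import nn

class EncToDecProjection(nn.Module):
    """Illustrative projection from the encoder hidden size to the decoder's
    cross_attention_hidden_size, mirroring the check added in these commits."""

    def __init__(self, encoder_hidden_size: int, cross_attention_hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(encoder_hidden_size, cross_attention_hidden_size)

    def forward(self, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(encoder_hidden_states)

# e.g. project 512-dim speech encoder states to a 768-dim text decoder
project = EncToDecProjection(512, 768)
states = project(torch.randn(1, 10, 512))  # -> (1, 10, 768)
```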
* Add AutoProcessor class
Init and tests
Add doc
Fix init
Update src/transformers/models/auto/processing_auto.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Reverts to tokenizer or feature extractor when available
Adapt test
* Revert "Adapt test"
This reverts commit bbdde5fab0.
* Revert "Reverts to tokenizer or feature extractor when available"
This reverts commit 77659ff5d2.
* Don't revert everything Lysandre!
Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
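Usage sketch for the new class (the checkpoint name is only an example): `AutoProcessor.from_pretrained` resolves to the checkpoint's processor class based on its config.

```python
from transformers import AutoProcessor

# Resolves to e.g. Wav2Vec2Processor for this (example) checkpoint.
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
print(type(processor).__name__)
```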
* Update code to resolve comments left in previous PR.
* Add README.md file for this example.
* Update examples/onnx/pytorch/translation/README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update examples/onnx/pytorch/translation/README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update examples/onnx/pytorch/translation/README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update README.md file to resolve comments.
* Add a section name.
* Update examples/onnx/pytorch/translation/README.md
Co-authored-by: Gary Miguel <garymm@garymm.org>
* Add more comments for _convert_past_list_to_tuple().
* Change the default file name to a consistent one.
* Fix a format issue.
* Update examples/onnx/pytorch/translation/README.md
Co-authored-by: Gary Miguel <garymm@garymm.org>
* Update examples/onnx/pytorch/translation/run_onnx_exporter.py
Co-authored-by: Gary Miguel <garymm@garymm.org>
* Update examples/onnx/pytorch/translation/README.md
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
* Change the folder to summarization and address some other comments.
* Update the torch version.
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Gary Miguel <garymm@garymm.org>
Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
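A hedged sketch of the idea behind `_convert_past_list_to_tuple()` (the real helper lives in the example script; the four-tensors-per-layer layout assumed here is an assumption): ONNX Runtime returns past key/values as a flat list, while the PyTorch model expects a nested tuple.

```python
def convert_past_list_to_tuple(past_list, tensors_per_layer=4):
    # Regroup a flat list [k0, v0, ck0, cv0, k1, v1, ...] into
    # ((k0, v0, ck0, cv0), (k1, v1, ck1, cv1), ...), one tuple per layer.
    return tuple(
        tuple(past_list[i : i + tensors_per_layer])
        for i in range(0, len(past_list), tensors_per_layer)
    )
```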
* add test for glue
* add tests for clm
* fix clm test
* add summarization tests
* more tests
* fix few tests
* add test for t5 mlm
* fix t5 mlm test
* fix tests for multi device
* cleanup
* add CI job
* fix metric file name
* make t5 more robust
* Make DefaultDataCollator importable from root
* Add documentation for DefaultDataCollator and add return_tensors argument to all class docstrings
* make style
* Add DefaultDataCollator to data_collator.rst
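Usage sketch for the now root-importable collator; `return_tensors` selects the framework of the returned batch ("pt", "tf", or "np").

```python
from transformers import DefaultDataCollator

collator = DefaultDataCollator(return_tensors="pt")
batch = collator(
    [{"input_ids": [0, 1, 2], "label": 0},
     {"input_ids": [3, 4, 5], "label": 1}]
)
print(batch["input_ids"].shape)  # torch.Size([2, 3])
```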
* fix #14524 (IndexError when mask prob is too low)
* fix formatting
* correct documentation, add option for setting min_num_masks
* change the semantic meaning of `mask_prob` in _compute_mask_indices
With this commit the meaning of `mask_prob` actually adheres to the probability for each
vector to be the start of a masked span of fixed length.
* fix check_copies test
* fix documentation to reflect the semantic meaning of `upper bound of overall masking percentage`; revert changes to _compute_mask_indices
* fix typo
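A hedged sketch of the semantics settled on above (heavily simplified from the real `_compute_mask_indices`): `mask_prob` acts as an upper bound on the overall masked fraction, from which the number of span starts is derived, with `min_masks` as a floor.

```python
import numpy as np

def compute_mask_indices(seq_len, mask_prob, mask_length, min_masks=0):
    # Choose num_spans so that num_spans * mask_length / seq_len <= mask_prob,
    # with a floor of min_masks (cf. the min_num_masks option above).
    num_spans = max(min_masks, int(mask_prob * seq_len / mask_length))
    num_spans = min(num_spans, max(seq_len - mask_length, 0))
    mask = np.zeros(seq_len, dtype=bool)
    if num_spans == 0:
        return mask
    starts = np.random.choice(seq_len - mask_length + 1, num_spans, replace=False)
    for start in starts:
        mask[start : start + mask_length] = True
    return mask
```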
* started bf16 integration
* minor changes
* code now runs
* style
* lay foundation for bf16 testing
* start the tests
* better bf16 check
* style
* 2 separate checkers - one for bf16 support, another for bf16+autocast
* Update src/transformers/training_args.py
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* a couple of comment resolutions
* more comment resolutions
* resolved a small bug
* just some print statements
* added todo marking
* added a todo
* adjust for API change s/fast_dtype/dtype/
* fix style
* merge 2 bf16 util functions
* bf16 now does scaling too
* Add support for bfloat16
* Revert T5 layernorm to float32
This is based on the comment at https://github.com/huggingface/transformers/pull/14448/files#r752660929 and the PyTorch PR https://github.com/pytorch/pytorch/pull/66920 .
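A hedged sketch of the float32 layer norm pattern referenced here (close in spirit to, but not verbatim, the T5 code): the variance is accumulated in float32 for numerical stability under fp16/bf16, then the result is cast back to the parameter dtype.

```python
import torch
from torch import nn

class T5StyleLayerNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # RMS-style norm: no mean subtraction, no bias; compute in float32.
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states.to(torch.float32) * torch.rsqrt(
            variance + self.variance_epsilon
        )
        # Cast back to the parameter dtype (e.g. bfloat16) before scaling.
        return self.weight * hidden_states.to(self.weight.dtype)
```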
* Add comment about conversion to float32 before returning the numpy data
* Add comment about AMP-bfloat16 incompatibility
* Fix formatting
* typo
* reformer / bf16
* cleanup
* require at least pt-1.10
* fix
* will deal with deepspeed separately
* cleanup
* revert
* cleanup
* fp16_full_eval and bf16_full_eval are separate modes
* proper deprecation
* cleanup
* test and fixes
* spelling
* cleanup
* add a note that this API is experimental
Co-authored-by: jamie <jamie@cortx.com>
Co-authored-by: Stas Bekman <stas@stason.org>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: suriya <suriya@cortx.com>
Co-authored-by: Manuel R. Ciosici <manuelrciosici@gmail.com>
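Usage sketch of the new flags (requires torch>=1.10 and hardware with bfloat16 support; flagged experimental above):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    bf16=True,            # mixed bfloat16 training (mirrors the fp16 flag)
    bf16_full_eval=True,  # run evaluation fully in bfloat16
)
```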
* Init Flax implementation for Blenderbot
* Add the majority of the implementation, except for tests
* make style quality
* Add tests and fix some bugs
* Add tests
* Clean source code and fix some bugs
* Fix copies and docs
* Fix jax device condition for tests
* Fix layer norm in the encoder
* Fix a few typos in the test file
* make fix-copies
* fix layer norm
* Fix Flax params dtype (#13090)
* Fix PR reference (#13098)
* make fix-copies
* Update tests/test_modeling_flax_blenderbot.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
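Usage sketch for the Flax port; the checkpoint name follows the Blenderbot docs and is an assumption here (add `from_pt=True` if the checkpoint ships only PyTorch weights).

```python
from transformers import BlenderbotTokenizer, FlaxBlenderbotForConditionalGeneration

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = FlaxBlenderbotForConditionalGeneration.from_pretrained(
    "facebook/blenderbot-400M-distill"
)

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="np")
reply_ids = model.generate(**inputs).sequences
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```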
* TF Tapas first commit
* updated docs
* updated logger message
* updated pytorch weight conversion script to support scalar array
* added use_cache to tapas model config to work properly with tf input_processing
* 1. rm embeddings_sum
2. added # Copied comments
3. added TFTapasMLMHead
4. and a lot of other small fixes
* updated docs
* + test for tapas
* updated testing_utils to check is_tensorflow_probability_available
* converted model logits post-processing using numpy to work with both PT and TF models
* + TFAutoModelForTableQuestionAnswering
* added TF support
* added test for TFAutoModelForTableQuestionAnswering
* added test for the TFAutoModelForTableQuestionAnswering pipeline
* updated auto model docs
* fixed typo in import
* added tensorflow_probability to run tests
* updated MLM head
* updated tapas.rst with TF model docs
* fixed optimizer import in docs
* updated convert-to-np step: data from the PT model is not `transformers.tokenization_utils_base.BatchEncoding` after the pipeline upgrade
* updated pipeline:
1. with torch.no_grad removed, the pipeline forward handles it
2. token_type_ids converted to numpy
* updated docs.
* removed `use_cache` from config
* removed floats_tensor
* updated code comment
* updated Copyright year and made logits_aggregation Optional
* updated docs and comments
* updated docstring
* fixed model weight loading
* make fixup
* fix indentation
* added tf slow pipeline test
* pip upgrade
* upgrade python to 3.7
* removed from_pt from tests
* revert commit f18cfa9
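Putting the TF Tapas work together, a hedged usage sketch of the new TF pipeline path (checkpoint name assumed; requires tensorflow_probability as noted above, and `from_pt=True` may be needed if the checkpoint ships only PyTorch weights):

```python
from transformers import pipeline

tqa = pipeline(
    "table-question-answering",
    model="google/tapas-base-finetuned-wtq",
    framework="tf",
)
table = {
    "Repository": ["transformers", "datasets", "tokenizers"],
    "Stars": ["36542", "4512", "3934"],
}
print(tqa(table=table, query="How many stars does the transformers repository have?"))
```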