* correct order of overflowing_tokens for slow tokenizer (fixes issue #13148)
* python 3.9 requires sentencepiece version 0.1.94 or above
* slicing of ids fixed in truncate_sequences()
* Update setup.py
* Correct order of overflowing tokens for pair of sentences
* code reformatted
* Update tokenization_utils_base.py
* reformatting file
* test to check single_input added
* missing function restored
* test to check pair_input overflowing tokens order
* test to check pair_input overflowing tokens order
* test to check pair_input overflowing tokens order
* added an error message for pair of seq and longest_first strategy
* test for pair_input modified
* variable name corrected
* fixed a typo in error message
* requested changes implemented
* required test added
* Corrected the message to match test message
* added error message for Luke Tokenizer
* lost test recovered
* docstring for truncate_sequences and prepare_for_model updated
* docstring for luke tokenizer updated
* updated ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING
* aligned text and fixed punctuation
* improved style and quality of code
* fixed error_msg in truncate_sequences
* replaced encode_plus method with regular call method
* clean up
* rephrased the docstring
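
The first group of commits above fixes the left-to-right order of the `overflowing_tokens` returned by slow tokenizers and adds an explicit error when a sentence pair is truncated with the `longest_first` strategy while overflowing tokens are requested. A minimal sketch of the affected call path; the checkpoint name is only an assumption:

```python
from transformers import BertTokenizer

# Slow (pure-Python) tokenizer; "bert-base-uncased" is just an example checkpoint.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

encoded = tokenizer(
    "a fairly long example sentence that will not fit into the maximum length",
    truncation="only_first",   # pairs + "longest_first" now raise a clear error instead
    max_length=8,
    stride=2,
    return_overflowing_tokens=True,
)
# After the fix, the overflowing tokens keep their original left-to-right order.
print(encoded["overflowing_tokens"])
```
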
* add test in trainer and test tokenizer saving with trainer
* quality
* reverse trainer changes
* replace test in test_trainer by a test for all the tokenizers
* format
* add can_save_slow_tokenizer attribute to all tokenizers
* fix Herbert
* format
* Change comment in error
* add comments and a new assert
* Update src/transformers/models/albert/tokenization_albert_fast.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* change ValueError barthez
* change ValueError BigBird
* change ValueError Camembert
* change ValueError Mbart50
* change ValueError Pegasus
* change ValueError ReFormer
* change ValueError T5
* change ValueError RoBERTa
* XLNET fast
* Update tests/test_tokenization_common.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* change `assert` into `self.assertIn`
* format
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
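
The `can_save_slow_tokenizer` attribute introduced above lets a fast tokenizer report whether it can write the files its slow counterpart needs (typically the original sentencepiece model); saving now raises a `ValueError` instead of silently producing an unusable vocabulary. A rough usage sketch, with an assumed checkpoint:

```python
from transformers import AutoTokenizer

# "albert-base-v2" is only an example of a sentencepiece-based checkpoint.
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2", use_fast=True)

# Fast tokenizers now expose this flag; it is False when the original
# sentencepiece file is not available to copy.
if getattr(tokenizer, "can_save_slow_tokenizer", True):
    tokenizer.save_pretrained("./albert-tokenizer")
```
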
* fix_torch_device_generate_test
* remove @
* up
* correct some bugs
* correct model
* finish speech2text extension
* up
* up
* up
* up
* Update utils/custom_init_isort.py
* up
* up
* update with tokenizer
* correct old tok
* correct old tok
* fix bug
* up
* up
* add more tests
* up
* fix docs
* up
* fix some more tests
* add better config
* correct some more things
* fix tests
* improve docs
* Apply suggestions from code review
* Apply suggestions from code review
* final fixes
* finalize
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* apply suggestions Lysandre and Sylvain
* apply nicos suggestions
* upload everything
* finish
Co-authored-by: Patrick von Platen <patrick@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
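
The commit trail above is terse; it finishes an extension of the library's speech-to-text support. Purely as an illustration of the surrounding API (the classes and checkpoint below are the pre-existing Speech2Text ones, not necessarily the model added here), a transcription call looks like this:

```python
import numpy as np
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

# Example checkpoint, assumed for illustration.
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")

# Placeholder: one second of silence at 16 kHz stands in for a real waveform.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```
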
* Add the audio classification pipeline
* Remove autoconfig exception
* Mark ffmpeg test as slow
* Rearrange pipeline tests
* Add small test
* Replace asserts with ValueError
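
For reference, the new audio classification pipeline can be exercised like this (the checkpoint and file name are assumptions):

```python
from transformers import pipeline

# "superb/hubert-base-superb-ks" is just an example keyword-spotting checkpoint.
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-ks")

# Accepts a path to an audio file (decoded via ffmpeg) or a raw waveform array.
predictions = classifier("sample.wav")
print(predictions)  # e.g. [{"score": 0.98, "label": "yes"}, ...]
```
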
* Adding a TF variant of the DataCollatorForTokenClassification to get feedback
* Added a Numpy variant and a post_init check to fail early if a missing import is found
* Fixed call to Numpy variant
* Added a couple more of the collators
* Update src/transformers/data/data_collator.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Fixes, style pass, finished DataCollatorForSeqToSeq
* Added all the LanguageModeling DataCollators, except SOP and PermutationLanguageModeling
* Adding DataCollatorForPermutationLanguageModeling
* Style pass
* Add missing `__call__` for PLM
* Remove `post_init` checks for frameworks because the imports inside them were making us fail code quality checks
* Remove unused imports
* First attempt at some TF tests
* A second attempt to make any of those tests actually work
* TF tests, round three
* TF tests, round four
* TF tests, round five
* TF tests, all enabled!
* Style pass
* Merging tests into `test_data_collator.py`
* Merging tests into `test_data_collator.py`
* Fixing up test imports
* Fixing up test imports
* Trying shuffling the conditionals around
* Commenting out non-functional old tests
* Completed all tests for all three frameworks
* Style pass
* Fixed test typo
* Style pass
* Move standard `__call__` method to mixin
* Rearranged imports for `test_data_collator`
* Fix data collator typo "torch" -> "pt"
* Fixed the most embarrassingly obvious bug
* Update src/transformers/data/data_collator.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Renaming mixin
* Updating docs
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Dalton Walker <dalton_walker@icloud.com>
Co-authored-by: Andrew Romans <andrew.romans@hotmail.com>
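
With the TF and NumPy variants above, the collators pick the framework of the returned batch from a `return_tensors` argument ("pt", "tf" or "np"). A brief sketch, with an assumed tokenizer checkpoint:

```python
from transformers import AutoTokenizer, DataCollatorForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # example checkpoint

features = [
    {"input_ids": [101, 2023, 102], "labels": [0, 1, 0]},
    {"input_ids": [101, 2178, 2742, 102], "labels": [0, 1, 1, 0]},
]

# The same class now serves PyTorch, TensorFlow and NumPy callers.
collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="np")
batch = collator(features)
print({name: array.shape for name, array in batch.items()})
```
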
* Deberta_v2 tf
* added new line at the end of file, make style
* +V2, typo
* remove never executed branch of code
* removed comment and fixed typo in url filter
* cleanup according to review comments
* added #Copied from
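
The TensorFlow port of DeBERTa-v2 follows the usual TF model API; a minimal sketch (checkpoint assumed, and `from_pt=True` may be needed if no TF weights are hosted for it):

```python
from transformers import DebertaV2Tokenizer, TFDebertaV2Model

tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = TFDebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")  # add from_pt=True if needed

inputs = tokenizer("Hello world", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)
```
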
* added missing __spec__ to _LazyModule
* test __spec__ is not None after module import
* changed module_spec arg to be optional in _LazyModule
* fix style issue
* added module spec test to test_file_utils
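
A missing `__spec__` on `_LazyModule` can confuse `importlib`-based tooling; the new test boils down to a check along these lines:

```python
import importlib.util

import transformers

# After the fix, the lazily-initialized top-level module keeps a valid module spec.
assert transformers.__spec__ is not None
assert importlib.util.find_spec("transformers") is not None
```
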
* First commit
* Make style
* Fix dummy objects
* Add Detectron2 config
* Add LayoutLMv2 pooler
* More improvements, add documentation
* More improvements
* Add model tests
* Add clarification regarding image input
* Improve integration test
* Fix bug
* Fix another bug
* Fix another bug
* Fix another bug
* More improvements
* Make more tests pass
* Make more tests pass
* Improve integration test
* Remove gradient checkpointing and add head masking
* Add integration test
* Add LayoutLMv2ForSequenceClassification to the tests
* Add LayoutLMv2ForQuestionAnswering
* More improvements
* More improvements
* Small improvements
* Fix _LazyModule
* Fix fast tokenizer
* Move sync_batch_norm to a separate method
* Replace dummies by requires_backends
* Move calculation of visual bounding boxes to separate method + update README
* Add models to main init
* First draft
* More improvements
* More improvements
* More improvements
* More improvements
* More improvements
* Remove is_split_into_words
* More improvements
* Simplify tesseract - no use of pandas anymore
* Add LayoutLMv2Processor
* Update is_pytesseract_available
* Fix bugs
* Improve feature extractor
* Fix bug
* Add print statement
* Add truncation of bounding boxes
* Add tests for LayoutLMv2FeatureExtractor and LayoutLMv2Tokenizer
* Improve tokenizer tests
* Make more tokenizer tests pass
* Make more tests pass, add integration tests
* Finish integration tests
* More improvements
* More improvements - update API of the tokenizer
* More improvements
* Remove support for VQA training
* Remove some files
* Improve feature extractor
* Improve documentation and one more tokenizer test
* Make quality and small docs improvements
* Add batched tests for LayoutLMv2Processor, remove fast tokenizer
* Add truncation of labels
* Apply suggestions from code review
* Improve processor tests
* Fix failing tests and add suggestion from code review
* Fix tokenizer test
* Add detectron2 CI job
* Simplify CI job
* Comment out non-detectron2 jobs and specify number of processes
* Add pip install torchvision
* Add durations to see which tests are slow
* Fix tokenizer test and make model tests smaller
* First draft
* Use setattr
* Possible fix
* Proposal with configuration
* First draft of fast tokenizer
* More improvements
* Enable fast tokenizer tests
* Make more tests pass
* Make more tests pass
* More improvements
* Add padding to fast tokenizer
* Make more tests pass
* Make more tests pass
* Make all tests pass for fast tokenizer
* Make fast tokenizer support overflowing boxes and labels
* Add support for overflowing_labels to slow tokenizer
* Add support for fast tokenizer to the processor
* Update processor tests for both slow and fast tokenizers
* Add head models to model mappings
* Make style & quality
* Remove Detectron2 config file
* Add configurable option to label all subwords
* Fix test
* Skip visual segment embeddings in test
* Use ResNet-18 backbone in tests instead of ResNet-101
* Proposal
* Re-enable all jobs on CI
* Fix installation of tesseract
* Fix failing test
* Fix index table
* Add LayoutXLM doc page, first draft of code examples
* Improve documentation a lot
* Update expected boxes for Tesseract 4.0.0 beta
* Use offsets to create labels instead of checking if they start with ##
* Update expected boxes for Tesseract 4.1.1
* Fix conflict
* Make variable names cleaner, add docstring, add link to notebooks
* Revert "Fix conflict"
This reverts commit a9b46ce9afe47ebfcfe7b45e6a121d49e74ef2c5.
* Revert to make integration test pass
* Apply suggestions from @LysandreJik's review
* Address @patrickvonplaten's comments
* Remove fixtures DocVQA in favor of dataset on the hub
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
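
The `LayoutLMv2Processor` built up in the long series of commits above chains the feature extractor (image resizing plus optional Tesseract OCR) with the tokenizer. A small sketch; the checkpoint and image path are assumptions, and `pytesseract` must be installed for the OCR step:

```python
from PIL import Image
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")

image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

# OCR'd words are tokenized and aligned with their bounding boxes.
print(list(encoding.keys()))  # input_ids, token_type_ids, attention_mask, bbox, image
```
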
* Add hubert classifier + tests
* Add hubert classifier + tests
* Dummies for all classification tests
* Wav2Vec2 classifier + ER test
* Fix hubert integration tests
* Add hubert IC
* Pass tests for all classification tasks on Hubert
* Pass all tests + copies
* Move models to the SUPERB org
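
The new classification heads can be driven like any other sequence classifier; a rough sketch using one of the SUPERB checkpoints mentioned above (names and the silent placeholder waveform are assumptions):

```python
import numpy as np
import torch
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-ks")
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ks")

waveform = np.zeros(16000, dtype=np.float32)  # one second of 16 kHz audio, placeholder
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```
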
* Moving `zero-shot-classification` pipeline to new testing.
* Cleaning up old mixins.
* Fixing tests
`sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english` is
corrupted in PT.
* Adding warning.
- Enforce `test_small_models_{tf,pt}` methods to exist (enforce checking
actual values in small tests)
- Add support for non RGB image for the pipeline.
* New test format for conversational.
* Putting back old mixin.
* Re-enabling auto tests with LazyLoading.
* Feature extraction tests.
* Remove feature-extraction.
* Feature extraction with feature_extractor (No pun intended).
* Update check_model_type for fill-mask.
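
The testing refactor above does not change the public behaviour of the pipeline being migrated; for context, the `zero-shot-classification` pipeline is used like this (model name assumed):

```python
from transformers import pipeline

# Any NLI-style checkpoint works; this one is only an example.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The new LayoutLMv2 model reads scanned documents.",
    candidate_labels=["computer vision", "cooking", "sports"],
)
print(result["labels"][0], result["scores"][0])
```
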
* fix AutoModel.from_pretrained(..., torch_dtype=...)
* fix to_diff_dict
* add better test
* torch is not always available when a model has self.torch_dtype
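
The `torch_dtype` fix above targets loading through the auto classes; the intended usage is roughly (checkpoint assumed):

```python
import torch
from transformers import AutoModel

# Load the weights directly in half precision instead of the default float32.
model = AutoModel.from_pretrained("gpt2", torch_dtype=torch.float16)
print(model.dtype)  # torch.float16

# "auto" picks the dtype stored in the checkpoint's config, when available.
model = AutoModel.from_pretrained("gpt2", torch_dtype="auto")
```
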
* make flax gpt2 working with cross attention
* Remove encoder->decoder projection layer
* A draft (incomplete) for FlaxEncoderDecoderModel
* Add the method from_encoder_decoder_pretrained + the docstrings
* Fix the mistakes of using EncoderDecoderModel
* Fix style
* Add FlaxEncoderDecoderModel to the library
* Fix cyclic imports
* Add FlaxEncoderDecoderModel to modeling_flax_auto.py
* Remove question comments
* add tests for FlaxEncoderDecoderModel
* add flax_encoder_decoder to the lists of ignored entries in check_repo.py
* fix missing required positional arguments
* Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained()
Also fix generation eos/pad tokens issue
* Fix: Use sequences from the generated_output
* Change a check from assert to raise ValueError
* Fix examples and token ids issues
* Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
* Remove the changes in configuration docstrings.
* allow for bert 2 gpt2
* make fix-copies
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Change remaining examples to bert2gpt2
* Change the test to Bert2GPT2
* Fix examples
* Fix import
* Fix unpack bug
* Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix: NotImplentedError -> NotImplementedError
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* up
* finalize
Co-authored-by: ydshieh <ydshieh@user.noreply>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
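
The new `FlaxEncoderDecoderModel` mirrors the PyTorch encoder-decoder API; the bert2gpt2 combination used in the tests above can be assembled like this (checkpoints and token-id choices are illustrative assumptions):

```python
from transformers import AutoTokenizer, FlaxEncoderDecoderModel

# Warm-start a BERT encoder with a GPT-2 decoder; cross-attention is added to the decoder.
model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer("A short article to summarize.", return_tensors="np")

# GPT-2 has no dedicated pad/start tokens for this setup, so reuse its bos/eos ids.
model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.pad_token_id = model.config.decoder.eos_token_id

outputs = model.generate(inputs["input_ids"])
print(outputs.sequences.shape)  # the generated ids live in `sequences`
```
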
* add test
* add change in PretrainedTokenizerBase
* change Luke
* deactivate
* add the possibility to add additional special tokens for M2M100
* format
* add special test for canine
* proposed changes for mbart
* proposed changes for mbart50
* proposed changes for byt5
* proposed changes for canine
* proposed changes for t5
* test fast and slow
* remove comment
* remove comment
* add fast version for all tests
* replace break by continue
* add more comments
* add check to avoid duplicates
* remove comment
* format
* proposed change for wav2vec2
* reverse changes mbart
* uncomment
* format
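
The M2M100 change above allows extra special tokens (for example a new language code) to be registered at load time; a minimal sketch, with a hypothetical token value:

```python
from transformers import M2M100Tokenizer

# "__new_lang__" is a made-up token used purely for illustration.
tokenizer = M2M100Tokenizer.from_pretrained(
    "facebook/m2m100_418M",
    additional_special_tokens=["__new_lang__"],
)

print("__new_lang__" in tokenizer.additional_special_tokens)  # True
print(tokenizer.convert_tokens_to_ids("__new_lang__"))
```
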
* Barrier -> barrier
* added logger for metrics
* removed stream handler in trainer
* moved handler
* removed streamhandler from trainer
* updated test image and instance type, added datasets version to test
* Update tests/sagemaker/scripts/pytorch/requirements.txt
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>