* add new test
* update test
* remove `tokenizer_file` from `additional_files_names` in `tokenization_utils_base.py`
* add `tokenizer_file` for the fast-only tokenizer (see the sketch after this block)
* change global variables for LayoutXLM
* remove `"tokenizer_file"` from DPR tokenizer's Global variables
* remove `tokenizer_file` from herbert slow tokenizer init
* `"tokenizer_file"` from LED tokenizer's Global variables
* remove `tokenizer_file` from mbart slow tokenizer init
* remove `tokenizer_file` from slow tokenizer template
* adapt to versioning
* adapt the `test_tokenizer_mismatch_warning` test
* clean test
* clarify `VOCAB_FILES_NAMES` in tokenization_utils_fast.py
* Revert "remove `tokenizer_file` from mbart slow tokenizer init"
This reverts commit 0dbb723fa9.
* Revert "`"tokenizer_file"` from LED tokenizer's Global variables"
This reverts commit 5a3f879bdd.
* Revert "remove `tokenizer_file` from herbert slow tokenizer init"
This reverts commit f5e10007b7.
* Revert "remove `"tokenizer_file"` from DPR tokenizer's Global variables"
This reverts commit da0895330b.
* set `tokenizer_file` in super `__init__` of mbart
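
Taken together, the `tokenizer_file` commits above split the file declarations between slow and fast tokenizers. A minimal, hypothetical sketch of the resulting layout (illustrative names, not the actual transformers source):

```python
# Only fast tokenizers advertise the serialized `tokenizers` file; slow
# tokenizers keep just their plain vocabulary files. The class and variable
# names below are illustrative, not the real transformers classes.
SLOW_VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"}
FAST_VOCAB_FILES_NAMES = {**SLOW_VOCAB_FILES_NAMES, "tokenizer_file": "tokenizer.json"}


class MySlowTokenizer:
    vocab_files_names = SLOW_VOCAB_FILES_NAMES


class MyFastTokenizer:
    vocab_files_names = FAST_VOCAB_FILES_NAMES
```
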
* replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__` (see the sketch after this block)
* add test
* fix kwargs
* reformat test
* format
* format
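
A paraphrased sketch of the `padding_side` change above (not the exact `PreTrainedTokenizerBase.__init__` source): the assert becomes an explicit `ValueError`, so the check survives `python -O` and produces a readable message.

```python
def _validate_padding_side(padding_side: str) -> str:
    # Before: `assert padding_side in ("right", "left")`, which disappears under
    # `python -O` and only raises a bare AssertionError.
    # After: an always-on exception with an actionable message.
    if padding_side not in ("right", "left"):
        raise ValueError(
            f"Padding side should be selected between 'right' and 'left', "
            f"current value: {padding_side}"
        )
    return padding_side
```
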
* fix a typo so the documentation renders correctly
* Update modeling_wav2vec2.py
With very short sound files (less than 0.1 seconds), `num_masked_span` can be too large. The issue is described in #15366 and was discussed with @patrickvonplaten (see the sketch after this block).
* correct errors with mask time indices
* remove bogus file
* make fix-copies
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
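
A simplified sketch of the clamping idea behind the wav2vec 2.0 fix above (not the exact `_compute_mask_indices` implementation): the number of masked spans is capped so it always fits into very short inputs.

```python
def compute_num_masked_span(
    input_length: int, mask_prob: float = 0.05, mask_length: int = 10, min_masks: int = 0
) -> int:
    # Naive count of SpecAugment-style spans to mask.
    num_masked_span = max(int(mask_prob * input_length / mask_length), min_masks)
    # The fix, in spirit: never request more spans than actually fit in the
    # (possibly sub-0.1 s) input.
    return min(num_masked_span, input_length // mask_length)


print(compute_num_masked_span(input_length=8, min_masks=2))  # 0 instead of 2
```
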
* add a section about GPUs
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix loss calculation in TFFunnelForTokenClassification
* revert the change in TFFunnelForTokenClassification
* fix FunnelForTokenClassification loss
* fix other TokenClassification losses (see the generic sketch after this block)
* fix more
* fix more
* add num_labels to ElectraForTokenClassification
* revert the change to research projects
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
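
The token-classification loss commits above all revolve around which positions contribute to the loss; a generic PyTorch illustration (not the exact Funnel/Electra fix) using the conventional `-100` ignore label:

```python
import torch
from torch.nn import CrossEntropyLoss

num_labels = 5
logits = torch.randn(2, 6, num_labels)                # (batch, seq_len, num_labels)
labels = torch.tensor([[0, 1, 2, -100, -100, -100],   # -100 marks padding/special tokens
                       [3, 4, -100, -100, -100, -100]])

loss_fct = CrossEntropyLoss()                          # ignore_index defaults to -100
loss = loss_fct(logits.view(-1, num_labels), labels.view(-1))
print(loss)
```
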
* Add Luke training
* Fix true label tags
* Fix true label tags
* Fix true label tags
* Update the data collator for Luke
* Some training refactor for Luke
* Improve data collator for Luke
* Fix import
* Fix datasets concatenation
* Add the `--max_entity_length` argument for Luke models (see the sketch after this block)
* Remove unused code
* Fix style issues
* Fix style issues
* Move the Luke training into a separate folder
* Fix style
* Fix naming
* Fix filtering
* Fix filtering
* Fix filter
* Update some preprocessing
* Move luke to research_projects
* Checkstyle
* Address comments
* Fix style
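
For the `--max_entity_length` flag mentioned above, a hypothetical argparse wiring (the actual script under `examples/research_projects/luke` may use a different default and help text):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--max_entity_length",
    type=int,
    default=32,
    help="Maximum number of entity tokens kept per example.",
)
args = parser.parse_args([])  # empty list keeps the sketch runnable anywhere

# The value would then be forwarded to the LUKE tokenizer / data collator,
# e.g. tokenizer(text, entity_spans=spans, max_entity_length=args.max_entity_length).
print(args.max_entity_length)
```
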
* Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel
* overwrite test_loss_computation
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
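
A rough parity check for the PT/TF XLNet loss fix above, as a sketch only (the real `test_loss_computation` override in the test suite is more thorough):

```python
import tensorflow as tf
from transformers import TFXLNetLMHeadModel, XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
pt_model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")
tf_model = TFXLNetLMHeadModel.from_pretrained("xlnet-base-cased")

text = "Hello, my dog is cute"
pt_inputs = tokenizer(text, return_tensors="pt")
tf_inputs = tokenizer(text, return_tensors="tf")

pt_loss = pt_model(**pt_inputs, labels=pt_inputs["input_ids"]).loss
tf_loss = tf_model(**tf_inputs, labels=tf_inputs["input_ids"]).loss

# After the fix, both frameworks should report (numerically) the same loss.
print(float(pt_loss), float(tf.reduce_mean(tf_loss)))
```
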
* add XLM-RoBERTa-XL (see the usage sketch after this block)
* add conversion of the XLM-RoBERTa-XL fairseq checkpoint to PyTorch
* fix init and documentation for XLM-RoBERTa-XL
* fix indentation
* add tests for XLM-R XL and XXL
* fix model hub name
* fix some stuff
* up
* correct init
* fix more
* fix as suggested
* add torch_device
* fix default values in docstrings
* fix leftovers
* merge to master
* up
* correct hub names
* fix docs
* fix model
* up
* finalize
* last fix
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add copied from
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
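
A usage sketch for the XLM-RoBERTa-XL model added above; the checkpoint names are assumed to be the ones these commits settled on (`facebook/xlm-roberta-xl`, with an `-xxl` variant as well):

```python
from transformers import AutoTokenizer, XLMRobertaXLModel

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLModel.from_pretrained("facebook/xlm-roberta-xl")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```
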
* clean commit of changes
* apply review feedback, make edits
* fix backticks, minor formatting
* 🖍 make fixup and minor edits
* 🖍 fix # in header
* 📝 update code sample without from_pt
* 📝 final review
* [deepspeed] saving checkpoint fallback when fp16 weights aren't saved
* Bump required deepspeed version to match usage when saving checkpoints
* update version
Co-authored-by: Mihai Balint <balint.mihai@gmail.com>
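
A sketch of the checkpoint fallback described above, assuming a ZeRO-3 run where `stage3_gather_16bit_weights_on_model_save` is disabled so no consolidated fp16 weights are written at save time; full fp32 weights can then be rebuilt from the partitioned checkpoint with DeepSpeed's zero_to_fp32 utility.

```python
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
from transformers import AutoModelForCausalLM

# Hypothetical model and path; use whatever architecture and output_dir were
# actually trained and saved by the Trainer run.
model = AutoModelForCausalLM.from_pretrained("gpt2")
model = load_state_dict_from_zero_checkpoint(model, "output_dir/checkpoint-100")
```
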