* Add support for exporting PyTorch LayoutLM to ONNX
* Added tests for converting LayoutLM to ONNX
* cleanup
* Removed regression/ folder
* Fixed import error
* Remove unnecessary import statements
* Changed max_2d_positions from class variable to instance variable of the config class
* Use super class generate_dummy_inputs method
Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
* Add support for Masked LM, sequence classification and token classification
Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
* Removed unnecessary import and method
* Fixed code styling
* Raise error if PyTorch is not installed
* Remove unnecessary import statement
Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
* beit-flax
* updated FLAX_BEIT_MLM_DOCSTRING
* removed bool_masked_pos from classification
* updated Copyright
* code refactoring: x -> embeddings
* updated test: rm from_pt
* Update docs/source/model_doc/beit.rst
* model code dtype updates and other changes according to review
* relative_position_bias: revert back to PyTorch design
* Init FNet
* Update config
* Fix config
* Update model classes
* Update tokenizers to use sentencepiece
* Fix errors in model
* Fix defaults in config
* Remove position embedding type completely
* Fix typo and take only real numbers
* Fix type vocab size in configuration
* Add projection layer to embeddings
* Fix position ids bug in embeddings
* Add minor changes
* Add conversion script and remove CausalLM vestiges
* Fix conversion script
* Fix conversion script
* Remove CausalLM Test
* Update checkpoint names to dummy checkpoints
* Add tokenizer mapping
* Fix modeling file and corresponding tests
* Add tokenization test file
* Add PreTraining model test
* Make style and quality
* Make tokenization base tests work
* Update docs
* Add FastTokenizer tests
* Fix fast tokenizer special tokens
* Fix style and quality
* Remove load_tf_weights vestiges
* Add FNet to main README
* Fix configuration example indentation
* Comment tokenization slow test
* Fix style
* Add changes from review
* Fix style
* Remove bos and eos tokens from tokenizers
* Add tokenizer slow test, TPU transforms, NSP
* Add scipy check
* Add scipy availability check to test
* Fix tokenizer and use correct inputs
* Remove remaining TODOs
* Fix tests
* Fix tests
* Comment Fourier Test
* Uncomment Fourier Test
* Change to google checkpoint
* Add changes from review
* Fix activation function
* Fix model integration test
* Add more integration tests
* Add comparison steps to MLM integration test
* Fix style
* Add masked tokenization fix
* Improve mask tokenization fix
* Fix index docs
* Add changes from review
* Fix issue
* Fix failing import in test
* some more fixes
* correct fast tokenizer
* finalize
* make style
* Remove additional tokenization logic
* Set do_lower_case to False
* Allow keeping accents
* Fix tokenization test
* Fix FNet Tokenizer Fast
* fix tests
* make style
* Add tips to FNet docs
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
* Removed misfiring warnings
* Revert "Removed misfiring warnings"
This reverts commit cea90de325056b9c1cbcda2bd2613a785c1639ce.
* Retain the warning, but only when the user actually overrides things
* Fix accidentally breaking just about every model on the hub simultaneously
* Style pass
* Fix special tokens not correctly tokenized
* Add testing
* Fix
* Fix
* Use user workflows instead of directly assigning variables
* Enable test of fast tokenizers
* Update test of canine tokenizer
* Optimize Token Classification models for TPU
As per the XLA documentation, XLA cannot handle masked indexing well, so token classification models for BERT and others use an implementation based on `torch.where`, which works well on TPU.
The ALBERT token classification model uses masked indexing, which causes performance issues on TPU. This PR fixes the issue by following the BERT implementation.
* Same fix for ELECTRA
* Same fix for LayoutLM
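The `torch.where`-based pattern described above can be sketched as follows (a minimal standalone sketch of the technique, not the exact transformers code; the function name is hypothetical). Instead of gathering active positions with a boolean mask, which produces dynamic shapes that XLA handles poorly, inactive positions are replaced by the loss's `ignore_index` so tensor shapes stay static:

```python
import torch
import torch.nn as nn

def token_classification_loss(logits, labels, attention_mask, num_labels, ignore_index=-100):
    # Hypothetical helper illustrating the TPU-friendly loss computation.
    loss_fct = nn.CrossEntropyLoss(ignore_index=ignore_index)
    if attention_mask is not None:
        active_loss = attention_mask.view(-1) == 1
        # torch.where keeps shapes static: padded positions get ignore_index
        # instead of being dropped via masked indexing.
        active_labels = torch.where(
            active_loss,
            labels.view(-1),
            torch.tensor(ignore_index).type_as(labels),
        )
        return loss_fct(logits.view(-1, num_labels), active_labels)
    return loss_fct(logits.view(-1, num_labels), labels.view(-1))
```

On CPU/GPU this is numerically equivalent to masked indexing, since `CrossEntropyLoss` averages only over non-ignored positions; the benefit is purely that XLA can compile it without dynamic shapes.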
* Properly use test_fetcher for examples
* Fake example modification
* Fake modeling file modification
* Clean fake modifications
* Run example tests for any modification.
* Fix issue when labels are supplied as Numpy array instead of list
* Fix same issue in the `TokenClassification` data collator
* Style pass
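The kind of normalization such a fix involves can be sketched as below (function name hypothetical; this is not the actual data collator code, just the idea of accepting labels as either a Python list or a NumPy array):

```python
import numpy as np

def normalize_labels(labels):
    # Accept labels supplied as a NumPy array or as a plain list,
    # returning a plain list in both cases.
    if isinstance(labels, np.ndarray):
        return labels.tolist()
    return list(labels)
```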
* Update GPT Neo ONNX config to match the changes implied by the simplification of the local attention
Co-authored-by: Michael Benayoun <michael@huggingface.co>