* Add check-ops script
* Finish implementing check_tf_ops and start the test
* Make the test mandatory only for BERT
* Update tf_ops folder
* Remove useless classes
* Add the ONNX test for GPT2 and BART
* Add an onnxruntime slow test + better opset flexibility
* Fix test + apply style
* fix tests
* Switch min opset from 12 to 10
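For context, a minimal sketch of an export at the new opset floor, using the convert helper from transformers.convert_graph_to_onnx (the checkpoint name and output path are illustrative placeholders, not part of the PR):

```python
from pathlib import Path

from transformers.convert_graph_to_onnx import convert

# Export a TF checkpoint to ONNX with the lowered minimum opset.
# "bert-base-cased" and the output path are placeholders.
convert(
    framework="tf",
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased.onnx"),
    opset=10,
)
```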
* Update src/transformers/file_utils.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Fix GPT2
* Remove extra shape_list usage
* Fix GPT2
* Address Morgan's comments
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Add head_mask/decoder_head_mask for TF BART models
* Add head_mask and decoder_head_mask input arguments for TF BART-based
models as a TF counterpart to PR #9569
* Add test_headmasking functionality to tests/test_modeling_tf_common.py
* TODO: Add a test to verify that we can get a gradient back for
importance score computation
* Remove redundant #TODO note
Remove redundant #TODO note from tests/test_modeling_tf_common.py
* Fix assertions
* Make style
* Fix ...Model input args and adjust one new test
* Add back head_mask and decoder_head_mask to BART-based ...Model
after the last commit
* Remove head_mask and decoder_head_mask from input_dict
in TF test_train_pipeline_custom_model as these two have different
shape than other input args (Necessary for passing this test)
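A minimal sketch of the new arguments on a TF BART model (the checkpoint is a placeholder; all-ones masks are a no-op, 0.0 prunes a head):

```python
import tensorflow as tf

from transformers import TFBartForConditionalGeneration

model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-base")
cfg = model.config
# head_mask/decoder_head_mask have shape (num_layers, num_heads), unlike
# input_ids with shape (batch_size, sequence_length) -- hence the exclusion
# from input_dict above.
head_mask = tf.ones((cfg.encoder_layers, cfg.encoder_attention_heads))
decoder_head_mask = tf.ones((cfg.decoder_layers, cfg.decoder_attention_heads))
outputs = model(
    input_ids=tf.constant([[0, 31414, 232, 2]]),
    head_mask=head_mask,
    decoder_head_mask=decoder_head_mask,
)
```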
* Revert adding global_rng in test_modeling_tf_common.py
* Resize the biases at the same time as the embeddings
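Roughly what this means for users, sketched with a placeholder checkpoint:

```python
from transformers import TFBertForMaskedLM

model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
# Resizing the token embeddings now also resizes the LM head bias in the
# same call, so the output logits stay consistent with the new vocab size.
model.resize_token_embeddings(model.config.vocab_size + 2)
```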
* Trigger CI
* Biases are not reset anymore
* Remove get_output_embeddings + better LM model detection in generation utils
* Apply style
* First test on BERT
* Update docstring + new name
* Apply the new resizing logic to all the models
* fix tests
* Apply style
* Update the template
* Fix naming
* Apply style
* Remove unused import
* Revert get_output_embeddings
* Trigger CI
* Update num parameters
* Restore get_output_embeddings in TFPreTrainedModel and add comments
* Style
* Add decoder resizing
* Style
* Fix tests
* Separate bias and decoder resize
* Fix tests
* Apply style
* Add bias resizing in MPNet
* Trigger CI
* Apply style
* Use the CI to identify failing tests
* Remove from all examples and tests
* More default switch
* Fixes
* More test fixes
* More fixes
* Last fixes hopefully
* Run on the real suite
* Fix slow tests
* cleanup torch unittests: part 2
* remove trailing comma added by isort, which breaks flake8
* one more comma
* revert odd balls
* part 3: odd cases
* more ["key"] -> .key refactoring
* .numpy() is not needed
* more unnecessary .numpy() calls removed
* more simplification
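The `["key"] -> .key` refactoring in miniature (the checkpoint name is a placeholder):

```python
import torch

from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    outputs = model(**tokenizer("hello", return_tensors="pt"))
# before: dict-style indexing, sometimes followed by a needless .numpy()
#     hidden = outputs["last_hidden_state"]
# after: plain attribute access on the output object
hidden = outputs.last_hidden_state
```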
* TF outputs and test on BERT
* Albert to DistilBert
* All remaining TF models except T5
* Documentation
* One file forgotten
* Add new models and fix issues
* Quality improvements
* Add T5
* A bit of cleanup
* Fix for slow tests
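A sketch of the new output objects on the TF side (placeholder checkpoint):

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world", return_tensors="tf")
# return_dict=True returns a structured output object instead of a tuple,
# so fields can be read by name.
outputs = model(inputs, return_dict=True)
print(outputs.last_hidden_state.shape)
```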
* Style
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
There's an inconsistency right now where:
- we load some models into CACHE_DIR
- and some models in the default cache
- and often in both, for the same models
When running the RUN_SLOW tests, this takes a lot of disk space, time, and bandwidth.
I'd rather always use the default cache
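In other words, always rely on the default cache rather than a per-test cache_dir (sketch; the model name is a placeholder):

```python
from transformers import AutoModel

# Default cache: every slow test shares one downloaded copy of the weights.
model = AutoModel.from_pretrained("bert-base-uncased")

# Explicit CACHE_DIR: the same checkpoint gets downloaded and stored again.
model = AutoModel.from_pretrained("bert-base-uncased", cache_dir="/tmp/cache")
```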
* first commit
* work in progress
* make language generation task pass
* update to working version for LM
* delete print
* remove dead code
* make style
* add first copy-paste test to TF 2 generate
* add tf top_k_top_p_filter fn
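For reference, the filtering scheme this helper implements, sketched in PyTorch (the classic top-k/nucleus recipe; not a verbatim copy of the repo's function):

```python
import torch

def top_k_top_p_filtering(logits, top_k=0, top_p=1.0, filter_value=-float("inf")):
    """Filter a (batch, vocab_size) tensor of logits for sampling."""
    if top_k > 0:
        # Drop every logit strictly below the k-th largest one.
        kth_value = torch.topk(logits, top_k)[0][..., -1, None]
        logits = logits.masked_fill(logits < kth_value, filter_value)
    if top_p < 1.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        # Mask tokens once cumulative probability exceeds top_p, shifted
        # right so the first token crossing the threshold is still kept.
        sorted_mask = cumulative_probs > top_p
        sorted_mask[..., 1:] = sorted_mask[..., :-1].clone()
        sorted_mask[..., 0] = False
        mask = sorted_mask.scatter(-1, sorted_indices, sorted_mask)
        logits = logits.masked_fill(mask, filter_value)
    return logits
```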
* add generate function for TF
* implemented generate for all models except TransfoXL
* make style
* change test file permissions to the correct ones
* delete ipdb
* fix bug and finish simple gpt2 integration test
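Roughly what the integration test exercises end to end (the sampling parameters are illustrative):

```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer.encode("The weather is", return_tensors="tf")
output_ids = model.generate(
    input_ids, max_length=20, do_sample=True, top_k=50, top_p=0.95
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```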
* clean test file
* make style
* change import style
* make style
* add decorators
* fix TF CTRL bug: dim => axis in TF
* make style
* refactored test file
* take out test_torch_tf_conversion if nothing is defined
* remove useless files
* fix conflicts
* solve conflicts
* fix conflicts
* merge conflicts
* delete ipdb
* exposed top_k_top_p_filtering fns
* delete weirdly created w! file
* add comment to test tf common modeling
* fix conflicts
* make style
* merge conflicts
* make style
* change tf.tensor.shape to shape_list(tensor)
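For context, shape_list exists because tf.Tensor.shape can contain None dimensions in graph mode; the helper in modeling_tf_utils is roughly:

```python
import tensorflow as tf

def shape_list(tensor):
    """Return the tensor's shape as a list, substituting dynamic tf.shape
    entries wherever a static dimension is unknown (None)."""
    static = tensor.shape.as_list()
    dynamic = tf.shape(tensor)
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]
```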
I suspect the wrapper classes were created in order to prevent the
abstract base class (TF)CommonModelTester from being included in test
discovery and running, because that would fail.
I solved this by replacing the abstract base class with a mixin.
Code changes are just de-indenting and automatic reformattings
performed by black to use the extra line space.
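The mixin pattern, in miniature (class and method names here are illustrative, not the actual test suite):

```python
import unittest

class CommonModelTesterMixin:
    # Shared checks live on a plain mixin. Because it does not inherit from
    # unittest.TestCase, test discovery never collects it on its own, so no
    # wrapper class is needed to hide the "abstract" base.
    def test_model_builds(self):
        self.assertIsNotNone(self.build_model())

class BertModelTest(CommonModelTesterMixin, unittest.TestCase):
    def build_model(self):
        return object()  # stand-in for constructing a real model
```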
This construct isn't used these days.
Running python tests/test_foo.py puts the tests/ directory on
PYTHONPATH, which isn't representative of how we run tests.
Use python -m unittest tests/test_foo.py instead.