* Add option to add flax
* Add flax template for __init__.py
* Add flax template for .rst
* Copy TF modeling template
* Add a missing line in modeling_tf_... template
* Update first half of modeling_flax_..
* Update encoder flax template
* Copy test_modeling_tf... as test_modeling_flax...
* Replace some TF with Flax in test_modeling_flax_...
* Replace tf with np
some functions might not work, like _assert_tensors_equal
* Replace remaining tf with np (might not work)
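The `_assert_tensors_equal` helper mentioned above compared TF tensors; after the tf-to-np swap, a NumPy-based replacement could look like the following sketch (the name and tolerance here are illustrative, not the repository's actual helper):

```python
import numpy as np

# Illustrative stand-in for _assert_tensors_equal after the tf -> np swap:
# coerce both inputs to NumPy arrays and compare element-wise within a tolerance.
def assert_tensors_close(a, b, atol=1e-5):
    a, b = np.asarray(a), np.asarray(b)
    assert a.shape == b.shape, f"shape mismatch: {a.shape} vs {b.shape}"
    assert np.allclose(a, b, atol=atol), "tensors differ beyond tolerance"

assert_tensors_close([1.0, 2.0], [1.0, 2.0 + 1e-7])
```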
* Fix cookiecutter
* Add Flax in to_replace_... template
* Update transformers-cli add-new-model
* Save generate_flax in configuration.json
This will be read by transformers-cli
* Fix to_replace_... and cli
* Fix replace cli
* Fix cookiecutter name
* Move docstring earlier to avoid not defined error
* Fix a missing Module
* Add encoder-decoder flax template from bart
* Fix flax test
* Make style
* Fix endif
* Fix replace-all that turned "utf-8" into "unp-8"
* Update comment
* Fix flax template (add missing ..._DOCSTRING)
* Use flax_bart imports in template (was t5)
* Fix unp
* Update templates/adding_a_new_model/tests
* Revert "Fix unp"
This reverts commit dc9002a41d.
* Remove one line of copied from to suppress CI error
* Use generate_tensorflow_pytorch_and_flax
* Add a missing part
* fix typo
* fix flax config
* add examples for flax
* small rename
* correct modeling imports
* correct auto loading
* corrects some flax tests
* correct small typo
* correct as type
* finish modifications
* correct more templates
* final fixes
* add file testers
* up
* make sure tests match template regex
* correct pytorch
* correct tf
* correct more tf
* correct imports
* minor error
* minor error
* correct init
* more fixes
* correct more flax tests
* correct flax test
* more fixes
* correct docs
* update
* fix
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* cleanup torch unittests: part 2
* remove trailing comma added by isort, and which breaks flake
* one more comma
* revert odd balls
* part 3: odd cases
* more ["key"] -> .key refactoring
* .numpy() is not needed
* more unnecessary .numpy() removed
* more simplification
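The `["key"] -> .key` refactoring above targets output objects that expose attribute access; a minimal sketch of such an object (the class and field names here are hypothetical, not the library's actual `ModelOutput`):

```python
from dataclasses import dataclass

# Hypothetical sketch: a dataclass-style output lets tests write out.loss
# instead of out["loss"], which is what the refactoring above migrates to.
@dataclass
class SketchModelOutput:
    loss: float
    logits: list

out = SketchModelOutput(loss=0.5, logits=[1.0, 2.0])
value = out.loss  # previously: out["loss"]
```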
* TF outputs and test on BERT
* Albert to DistilBert
* All remaining TF models except T5
* Documentation
* One file forgotten
* Add new models and fix issues
* Quality improvements
* Add T5
* A bit of cleanup
* Fix for slow tests
* Style
* improve unit tests
This is a sample of one test, per the request in https://github.com/huggingface/transformers/issues/5973,
before applying it to the rest.
* batch 1
* batch 2
* batch 3
* batch 4
* batch 5
* style
* non-tf template
* last deletion of check_loss_output
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
I suspect the wrapper classes were created in order to prevent the
abstract base class (TF)CommonModelTester from being included in test
discovery and running, because that would fail.
I solved this by replacing the abstract base class with a mixin.
Code changes are just de-indenting and the automatic reformatting black
performs to use the freed-up line width.
This construct isn't used anymore.
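The mixin approach described above can be sketched as follows (class names hypothetical); because the mixin does not itself inherit from unittest.TestCase, test discovery only collects its checks through the concrete subclasses:

```python
import unittest

# Hypothetical mixin holding shared checks. It is NOT a TestCase, so
# unittest discovery never collects or runs it on its own.
class CommonModelTesterMixin:
    def test_has_config(self):
        self.assertTrue(hasattr(self, "config"))

# Concrete test class: the mixin's checks run only through classes like this.
class FooModelTest(CommonModelTesterMixin, unittest.TestCase):
    config = {"hidden_size": 4}

suite = unittest.TestLoader().loadTestsFromTestCase(FooModelTest)
result = unittest.TestResult()
suite.run(result)
```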
Running python tests/test_foo.py puts the tests/ directory on
PYTHONPATH, which isn't representative of how we run tests.
Use python -m unittest tests/test_foo.py instead.
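The sys.path difference described above can be demonstrated with a throwaway script (paths here are arbitrary temp locations, not repository files):

```python
import os
import subprocess
import sys
import tempfile

# Demonstration of the note above: running a file directly puts that
# file's directory at the front of sys.path, which `python -m unittest`
# does not do.
scratch = tempfile.mkdtemp()
script = os.path.join(scratch, "show_path.py")
with open(script, "w") as f:
    f.write("import sys; print(sys.path[0])\n")

direct = subprocess.run(
    [sys.executable, script], capture_output=True, text=True
)
first_entry = direct.stdout.strip()  # the script's own directory
```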
This change is mostly autogenerated with:
$ python -m autoflake --in-place --recursive --remove-all-unused-imports --ignore-init-module-imports examples templates transformers utils hubconf.py setup.py
I made minor changes in the generated diff.
This change is mostly autogenerated with:
$ python -m autoflake --in-place --recursive examples templates transformers utils hubconf.py setup.py
I made minor changes in the generated diff.
This is the result of:
$ black --line-length 119 examples templates transformers utils hubconf.py setup.py
There's a lot of fairly long lines in the project. As a consequence, I'm
picking the longest widely accepted line length, 119 characters.
This is also Thomas' preference, because it allows for explicit variable
names, to make the code easier to understand.
Caching models across test cases and across runs of the test suite makes
slow tests somewhat more bearable.
Use gettempdir() instead of /tmp in tests. This makes it easier to
change the location of the cache with semi-standard TMPDIR/TEMP/TMP
environment variables.
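The change above can be sketched as follows (the cache directory name is hypothetical):

```python
import os
import tempfile

# Sketch of the cache-location change: tempfile.gettempdir() honours the
# TMPDIR/TEMP/TMP environment variables, unlike a hard-coded "/tmp".
def cached_models_dir():
    return os.path.join(tempfile.gettempdir(), "model_test_cache")

cache_dir = cached_models_dir()
```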
Fixes #2222.
* Switch to plain unittest for skipping slow tests.
Add a RUN_SLOW environment variable for running them.
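A minimal sketch of such an opt-in decorator (an illustration of the mechanism, not the repository's exact implementation):

```python
import os
import unittest

# Sketch of a @slow decorator: the test is skipped unless the RUN_SLOW
# environment variable is set to a non-empty value.
def slow(test_case):
    return unittest.skipUnless(
        os.environ.get("RUN_SLOW"), "slow test; set RUN_SLOW=1 to run"
    )(test_case)

class ExampleTest(unittest.TestCase):
    @slow
    def test_big_model(self):
        self.assertTrue(True)

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(ExampleTest).run(result)
```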
* Switch to plain unittest for PyTorch dependency.
* Switch to plain unittest for TensorFlow dependency.
* Avoid leaking open files in the test suite.
This prevents spurious warnings when running tests.
* Fix unicode warning on Python 2 when running tests.
The warning was:
UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
* Support running PyTorch tests on a GPU.
Reverts 27e015bd.
* Tests no longer require pytest.
* Make tests pass on CUDA