* Add QA trainer example for TF
* Make data_dir optional
* Fix parameter logic
* Fix feature conversion
* Update the READMEs to add the question-answering task
* Apply style
* Change 'sequence-classification' to 'text-classification' and prefix all the metric names with 'eval'
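A minimal sketch of the renaming this commit describes, assuming a plain metrics dict; the keys and values are illustrative, not the trainer's actual internals:

```python
# Illustrative metrics dict; the real keys come from the task's compute_metrics
metrics = {"accuracy": 0.91, "f1": 0.88}

# Prefix every metric name with "eval_" so evaluation metrics are unambiguous
metrics = {f"eval_{k}": v for k, v in metrics.items()}
print(metrics)  # {'eval_accuracy': 0.91, 'eval_f1': 0.88}
```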
* Apply style
* Apply style
* Improvements to the wandb integration
* Small reorg; no global variable necessary
* feat(trainer): log epoch and final metrics
* Simplify logging a bit
* Fixup
* Fix crash when just running eval
Co-authored-by: Chris Van Pelt <vanpelt@gmail.com>
Co-authored-by: Boris Dayma <boris.dayma@gmail.com>
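A hedged sketch of what the wandb commits above amount to: per-epoch logging plus a final metrics log. The project name and metric values are placeholders, not the Trainer's actual code:

```python
import wandb

wandb.init(project="transformers-demo")  # placeholder project name

for epoch in range(3):
    train_loss = 1.0 / (epoch + 1)  # placeholder value
    wandb.log({"epoch": epoch, "loss": train_loss})

# final evaluation metrics, using the "eval" prefix introduced earlier
wandb.log({"eval_accuracy": 0.87})
```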
* Allow BatchEncoding to be initialized empty.
This is required by recent changes introduced in TF 2.2.
* Attempt to unpin TensorFlow and allow 2.2, building on the previous commit.
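A small sketch of the BatchEncoding change: construction with no arguments now succeeds, which the TF 2.2 code paths rely on (populating it afterwards is just for illustration):

```python
from transformers import BatchEncoding

enc = BatchEncoding()            # previously required a data dict; may now start empty
enc["input_ids"] = [[101, 102]]  # and can be filled in later
```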
* Catch the single-GPU case (GPU list of length 1) and set the device to gpu:0
* Add MPC to trainer
* Add MPC for TF
* Fix TF AutoModel for MPC and add Albert
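A hedged sketch of the TF multiple-choice AutoModel these commits add, using ALBERT since the commit above mentions it; the checkpoint, prompt, and choices are illustrative:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = TFAutoModelForMultipleChoice.from_pretrained("albert-base-v2")

prompt = "The capital of France is"
choices = ["Paris.", "Berlin."]
enc = tokenizer([prompt] * len(choices), choices, return_tensors="tf", padding=True)

# multiple-choice models expect inputs shaped (batch, num_choices, seq_len)
inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}
outputs = model(inputs)  # logits of shape (1, num_choices)
```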
* Apply style
* Fix import
* Note to self: double check
* Make shapes (None, None) for the dataset generator output shapes
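A sketch of the (None, None) output shapes mentioned above: leaving both dimensions unspecified lets tf.data accept a variable batch size and variable sequence length (the toy generator is illustrative):

```python
import tensorflow as tf

def gen():
    # toy generator yielding variable-length batches of token ids and labels
    yield [[1, 2, 3], [4, 5, 0]], [0, 1]
    yield [[6, 7], [8, 9]], [1, 0]

dataset = tf.data.Dataset.from_generator(
    gen,
    output_types=(tf.int32, tf.int32),
    # (None, None): unknown batch size and unknown sequence length
    output_shapes=(tf.TensorShape([None, None]), tf.TensorShape([None])),
)
```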
* Add from_pt bool, which doesn't seem to work
* Original checkpoint dir
* Fix docstrings for automodel
* Update README and apply style
* Colab should probably not be from a user account
* Colabs should probably not be from user accounts
* Add colab
* Update README.md
* Update README.md
* Clean up `__init__`
* Clean up flake8 trailing comma
* Update src/transformers/training_args_tf.py
* Update src/transformers/modeling_tf_auto.py
Co-authored-by: Viktor Alm <viktoralm@pop-os.localdomain>
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
* Simplify cache vars and allow for TRANSFORMERS_CACHE env
As it currently stands, "TRANSFORMERS_CACHE" is not an accepted variable. It seems these variables were not updated when moving from pytorch_transformers to transformers. In addition, the fallback procedure could be improved and simplified; pathlib seems redundant here.
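A sketch of the simplified fallback chain the commit describes; the default path mirrors the library's convention at the time, but treat the exact value as an assumption:

```python
import os

# assumed default location; the library may compute this differently
default_cache = os.path.join(os.path.expanduser("~"), ".cache", "torch", "transformers")

# the newest env var wins; the older names remain as backwards-compatible fallbacks
cache_dir = os.getenv(
    "TRANSFORMERS_CACHE",
    os.getenv(
        "PYTORCH_TRANSFORMERS_CACHE",
        os.getenv("PYTORCH_PRETRAINED_BERT_CACHE", default_cache),
    ),
)
```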
* Update file_utils.py
"Migrating from pytorch-transformers to transformers" is missing in the main document. It is available in the main `readme` thought. Just move it to the document.
* Fix the accumulator so it runs properly with TF 2.2
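A hedged sketch of the gradient accumulator in use; the top-level import reflects the library layout of that era and should be treated as an assumption:

```python
import tensorflow as tf
from transformers import GradientAccumulator  # defined in optimization_tf

accumulator = GradientAccumulator()
var = tf.Variable(2.0)

for _ in range(4):  # accumulate over several micro-batches
    with tf.GradientTape() as tape:
        loss = var * var
    accumulator(tape.gradient(loss, [var]))

grads = accumulator.gradients  # the accumulated gradients
accumulator.reset()            # clear before the next optimizer step
```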
* Apply style
* Fix training_args_tf for TF 2.2
* Fix the TF training args when only one GPU is available
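A sketch of the single-GPU handling this fix addresses (illustrative; the real logic lives in TFTrainingArguments): with exactly one visible GPU, pin a OneDeviceStrategy to gpu:0 instead of taking the multi-GPU path:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if len(gpus) == 0:
    strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
elif len(gpus) == 1:
    # the case the fix above catches: one GPU -> pin to gpu:0
    strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
    strategy = tf.distribute.MirroredStrategy()
```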
* Remove the fixed version of TF in setup.py
* Update the example to use the generation pipeline
* Update model_cards/LorenzoDeMattei/GePpeTto/README.md
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
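A hedged sketch of the pipeline usage the updated example moves to, with the GePpeTto checkpoint from the model card above; the prompt and length are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="LorenzoDeMattei/GePpeTto")
print(generator("Ciao, oggi vi parlerò di", max_length=30))
```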