* Update HANS data to be able to use the Trainer
* Fixes
* Deal with tokenizers that don't have token_ids
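A minimal sketch of the usual pattern here, reading "token_ids" as the segment/`token_type_ids` inputs that some tokenizers (e.g. RoBERTa's) do not produce; this is an illustration, not the exact fix:
```python
from transformers import AutoTokenizer

# Sketch: RoBERTa-style tokenizers do not emit token_type_ids, so feature
# conversion should not assume that key is present in the encoding.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoding = tokenizer("The doctor saw the lawyer.", "The lawyer saw the doctor.")

# Fall back to all-zero segment ids when the tokenizer does not provide them.
token_type_ids = encoding.get("token_type_ids", [0] * len(encoding["input_ids"]))
```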
* Clean up things
* Simplify data use
* Fix the input dict
* Formatting + proper path in README
* ner: add preprocessing script for examples that splits longer sentences
* ner: example shell scripts use local preprocessing now
* ner: add new example section for WNUT’17 NER task. Remove old English CoNLL-03 results
* ner: satisfy black and isort
* GLUE task cleanup
* Enable writing the cache to cache_dir in case the dataset lives on a read-only filesystem.
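A minimal sketch of the resulting usage, assuming the `GlueDataset`/`GlueDataTrainingArguments` helpers from the examples of that era (paths are placeholders): the features cache is redirected to a writable `cache_dir` instead of being written next to the data.
```python
from transformers import AutoTokenizer, GlueDataset, GlueDataTrainingArguments

# Sketch only: the dataset lives on a read-only mount, so cached features are
# written to a separate, writable cache_dir instead of into data_dir.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data_args = GlueDataTrainingArguments(task_name="mnli", data_dir="/readonly/glue_data/MNLI")
train_dataset = GlueDataset(data_args, tokenizer=tokenizer, cache_dir="/tmp/glue_cache")
```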
* Differentiate match vs mismatch for MNLI metrics.
* Style
* Fix pytype
* Fix type
* Use cache_dir in mnli mismatch eval dataset
* Small Tweaks
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
The option `--do_lower_case` is currently required by the uncased models (e.g., bert-base-uncased, bert-large-uncased).
Results:
BERT-base without `--do_lower_case`: exact = 73.83, f1 = 82.22
BERT-base with `--do_lower_case`: exact = 81.02, f1 = 88.34
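As a rough illustration of why the flag matters (hedged example, not taken from the scripts): the uncased checkpoints were trained on lowercased text, so the tokenizer must lowercase as well, otherwise cased words no longer match the vocabulary.
```python
from transformers import BertTokenizer

tok_lower = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
tok_no_lower = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=False)

print(tok_lower.tokenize("Paris"))     # ['paris'] -- matches the uncased vocab
print(tok_no_lower.tokenize("Paris"))  # likely '[UNK]' or odd word pieces, since
                                       # the uncased vocab has no cased entries
```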
* Add a predict stage for GLUE tasks and generate result files that can be submitted to the gluebenchmark.com website.
* Use Split enum + always output the label name
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
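For reference, a hypothetical sketch of the output format this predict stage targets (an index column plus the label *name*, not the class id, in a TSV as gluebenchmark.com expects); file name and labels are illustrative:
```python
import numpy as np

label_list = ["entailment", "neutral", "contradiction"]        # e.g. MNLI
predictions = np.array([[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]])   # dummy logits

with open("MNLI-m.tsv", "w") as writer:
    writer.write("index\tprediction\n")
    for index, pred in enumerate(np.argmax(predictions, axis=1)):
        # Write the label name so the file can be submitted as-is.
        writer.write(f"{index}\t{label_list[pred]}\n")
```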
* Distributed eval: SequentialDistributedSampler + gather all results
* For consistency only write to disk from world_master
Close https://github.com/huggingface/transformers/issues/4272
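A sketch of the gather step this implements (the pattern only, not the exact library code): each process evaluates a contiguous shard, results are all-gathered in rank order, the padding added to equalize shard sizes is trimmed, and only rank 0 writes anything to disk.
```python
import torch
import torch.distributed as dist

def distributed_concat(tensor: torch.Tensor, num_total_examples: int) -> torch.Tensor:
    # Every rank holds predictions for its own contiguous shard; gather them all,
    # concatenate in rank order, and drop the trailing padding examples.
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor)
    return torch.cat(output_tensors, dim=0)[:num_total_examples]

# For consistency, only the world master touches the filesystem:
# if dist.get_rank() == 0:
#     torch.save(all_preds, "eval_predictions.pt")
```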
* Working distributed eval
* Hook into scripts
* Fix #3721 again
* TPU.mesh_reduce: stay in tensor space
Thanks @jysohn23
* Just a small comment
* Whitespace
* torch.hub: pip install packaging
* Add test scenarios
* Add QA trainer example for TF
* Make data_dir optional
* Fix parameter logic
* Fix feature conversion
* Update the READMEs to add the question-answering task
* Apply style
* Change 'sequence-classification' to 'text-classification' and prefix all the metric names with 'eval'
* Apply style
* Improvements to the wandb integration
* small reorg + no global necessary
* feat(trainer): log epoch and final metrics
* Simplify logging a bit
* Fixup
* Fix crash when just running eval
Co-authored-by: Chris Van Pelt <vanpelt@gmail.com>
Co-authored-by: Boris Dayma <boris.dayma@gmail.com>
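Roughly, the integration reports hyperparameters to the run config and logs metrics (including the epoch) per step; a minimal hedged sketch of that kind of logging, independent of the Trainer internals and with hypothetical values:
```python
import wandb

# Illustrative values only.
wandb.init(project="transformers-trainer", config={"learning_rate": 5e-5, "num_train_epochs": 3})
wandb.log({"loss": 0.42, "epoch": 1.0}, step=500)
wandb.log({"eval_loss": 0.38, "eval_acc": 0.87, "epoch": 1.0}, step=500)
```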
* Catch the case where the GPU count is 1 and set the device to GPU 0
* Add MPC to the Trainer
* Add MPC for TF
* Fix TF AutoModel for MPC and add ALBERT
* Apply style
* Fix import
* Note to self: double check
* Set the dataset generator output shapes to (None, None)
* Add from_pt bool, which doesn't seem to work
* Original checkpoint dir
* Fix docstrings for automodel
* Update readme and apply style
* Colabs should probably not be from users
* Add colab
* Update README.md
* Clean up __init__
* Clean up flake8 trailing comma
* Update src/transformers/training_args_tf.py
* Update src/transformers/modeling_tf_auto.py
Co-authored-by: Viktor Alm <viktoralm@pop-os.localdomain>
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
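Reading "MPC" as multiple choice, a self-contained hedged sketch of the input convention these commits assume (`bert-base-uncased` is just a placeholder checkpoint, whose multiple-choice head is randomly initialized): inputs are shaped `(batch_size, num_choices, seq_len)` and the model scores each choice.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
# from_pt=True would instead load a PyTorch checkpoint into the TF model.

prompt = "The weather today is"
choices = ["sunny and warm.", "a kind of cheese."]
enc = tokenizer([prompt, prompt], choices, return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}  # (1, num_choices, seq_len)
logits = model(inputs)[0]  # one score per choice
```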