## GLUE Benchmark

Based on the script `run_glue.py`.
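For reference, a typical fine-tuning run with the base `run_glue.py` script looks roughly like the sketch below. The flag names and values (`--model_name_or_path`, `--task_name`, `--data_dir`, batch size, learning rate, etc.) are assumptions based on the Trainer-based script of this era, not a verbatim copy; check `python run_glue.py --help` for the exact arguments in your checkout.

```bash
# Hypothetical MRPC fine-tuning run; flag names and values are illustrative only.
export GLUE_DIR=/path/to/glue_data

python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name MRPC \
  --do_train \
  --do_eval \
  --data_dir $GLUE_DIR/MRPC \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/mrpc_output
```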
### Run PyTorch version using PyTorch-Lightning

Run `bash run_pl.sh` from the `glue` directory. This will also install `pytorch-lightning` and the requirements in `examples/requirements.txt`. It is a shell pipeline that will automatically download the data, pre-process it, and run the specified models. Logs are saved in the `lightning_logs` directory.
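As a concrete sketch (the `examples/glue` path is an assumption about the standard repository layout at this point):

```bash
# Assumes the repository's examples/glue directory; run_pl.sh installs
# pytorch-lightning and the example requirements before launching training.
cd examples/glue
bash run_pl.sh
```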
Pass the `--n_gpu` flag to change the number of GPUs (the default is 1). At the end, the expected results are:

```
TEST RESULTS {'val_loss': tensor(0.0707), 'precision': 0.852427800698191, 'recall': 0.869537067011978, 'f1': 0.8608974358974358}
```