
## GLUE Benchmark

Based on the script `run_glue.py`.

#### Run PyTorch version using PyTorch-Lightning

Run `bash run_pl.sh` from the `glue` directory. This will also install `pytorch-lightning` and the requirements in `examples/requirements.txt`. It is a shell pipeline that automatically downloads and pre-processes the data, then runs the specified models. Logs are saved in the `lightning_logs` directory.
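As a minimal sketch, assuming the example lives at `examples/glue` inside a `transformers` checkout:

```bash
# From the root of the transformers repository.
cd examples/glue

# Installs pytorch-lightning plus examples/requirements.txt, then
# downloads the data, pre-processes it, and trains the model.
bash run_pl.sh
```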

Pass the `--n_gpu` flag to change the number of GPUs; the default uses 1. At the end, the expected results are:

```
TEST RESULTS {'val_loss': tensor(0.0707), 'precision': 0.852427800698191, 'recall': 0.869537067011978, 'f1': 0.8608974358974358}
```
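As a rough sketch of a multi-GPU run, assuming `run_pl.sh` forwards extra command-line arguments to `run_pl_glue.py` (if it does not, set the flag on the `python run_pl_glue.py` line inside the script instead):

```bash
# Request two GPUs instead of the default single GPU; this relies on
# run_pl.sh passing "$@" through to run_pl_glue.py (an assumption).
bash run_pl.sh --n_gpu 2
```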