transformers/examples
Joel Hanson 4db2fa77d7
Allow tests in examples to use cuda or fp16, if they are available (#5512)
* Allow tests in examples to use cuda or fp16, if they are available

The tests in examples didn't use cuda or fp16 even when they were available.
- The text classification example (`run_glue.py`) didn't use fp16 even when it was available; only
  the device was chosen based on availability (cuda/cpu).
- The language-modeling example (`run_language_modeling.py`) had a `--no_cuda` argument,
  which made the test run without cuda. This example has an issue when running with fp16,
  so fp16 is not enabled (an assertion error on perplexity, due to its higher value).
- cuda and fp16 are not enabled for the question-answering example (`run_squad.py`), as enabling
  them causes a difference in the f1 score.
- The text-generation example (`run_generation.py`) uses cuda or fp16 whenever they are available.

Resolves some of: #5057

* Unwanted import of is_apex_available was removed

* Made changes to the test examples file to pass --fp16 only if cuda and apex are available
  (a minimal sketch of this gating follows the changelog below)
- run_glue.py: Removed the check for cuda and fp16.
- run_generation.py: Removed the check for cuda and fp16, and removed unwanted flag creation.

* Incorrectly sorted imports fixed

* The model needs to be converted to half precision

* Formatted single line if condition statement to multiline

* The torch_device also needed to be checked before running the tests on examples
- The tests in examples which use cuda should also depend on the USE_CUDA flag,
  similarly to the rest of the test suite. Even if we decide to set USE_CUDA to
  True by default, setting USE_CUDA to False should result in the examples not using CUDA.

* Format some of the code in test_examples file

* The improperly sorted import of is_apex_available was fixed

* Formatted the code to keep the style standards

* The trailing comma at the end of a list that caused a flake8 issue was fixed

* Import sort was fixed

* Removed the clean_test_dir function as it's not used right now
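
A minimal sketch of the gating described above; the helper name and test arguments here are illustrative, not the exact code from the PR:

import torch

try:
    from apex import amp  # noqa: F401
    _apex_available = True
except ImportError:
    _apex_available = False

def is_cuda_and_apex_available():
    # --fp16 relies on apex here, so gate the flag on both cuda and apex.
    return torch.cuda.is_available() and _apex_available

testargs = ["run_glue.py", "--do_train", "--output_dir", "./tmp"]  # illustrative args
if is_cuda_and_apex_available():
    testargs.append("--fp16")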
2020-08-25 06:02:07 -04:00
adversarial Update repo to isort v5 (#6686) 2020-08-24 11:03:01 -04:00
benchmarking readme for benchmark (#5363) 2020-07-07 23:21:23 +02:00
bert-loses-patience [testing] a new TestCasePlus subclass + get_auto_remove_tmp_dir() (#6494) 2020-08-17 08:12:19 -04:00
bertology Make DataCollator a callable (#5015) 2020-06-15 11:58:33 -04:00
contrib save_pretrained: mkdir(exist_ok=True) (#5258) 2020-06-28 14:53:47 -04:00
deebert Fix deebert tests (#6102) 2020-07-28 18:30:16 -04:00
distillation [examples] consistently use --gpus, instead of --n_gpu (#6315) 2020-08-07 10:36:32 -04:00
language-modeling XLNet PLM Readme (#6121) 2020-07-29 11:38:15 -04:00
longform-qa Add mbart-large-cc25, support translation finetuning (#5129) 2020-07-07 13:23:01 -04:00
movement-pruning save_pretrained: mkdir(exist_ok=True) (#5258) 2020-06-28 14:53:47 -04:00
multiple-choice Update repo to isort v5 (#6686) 2020-08-24 11:03:01 -04:00
question-answering Switch from return_tuple to return_dict (#6138) 2020-07-30 09:17:00 -04:00
seq2seq [s2s] round bleu, rouge to 4 digits (#6704) 2020-08-25 00:33:11 -04:00
text-classification update xnli-mt url (#6580) 2020-08-18 13:10:47 -04:00
text-generation Allow tests in examples to use cuda or fp16, if they are available (#5512) 2020-08-25 06:02:07 -04:00
token-classification Fix PL token classification examples (#6682) 2020-08-24 11:30:06 -04:00
conftest.py enable easy checkout switch (#5645) 2020-07-31 04:34:46 -04:00
lightning_base.py [lightning_base] fix s2s logging, only make train_loader once (#6404) 2020-08-16 22:49:41 -04:00
README.md correct pl link in readme (#6364) 2020-08-10 03:08:46 -04:00
requirements.txt Add POS tagging and Phrase chunking token classification examples (#6457) 2020-08-13 12:09:51 -04:00
test_examples.py Allow tests in examples to use cuda or fp16, if they are available (#5512) 2020-08-25 06:02:07 -04:00
test_xla_examples.py Add setup for TPU CI to run every hour. (#6219) 2020-08-07 11:17:07 -04:00
xla_spawn.py [TPU] Doc, fix xla_spawn.py, only preprocess dataset once (#4223) 2020-05-08 14:10:05 -04:00

Examples

Version 2.9 of 🤗 Transformers introduces a new Trainer class for PyTorch, and its equivalent TFTrainer for TF 2. Running the examples requires PyTorch 1.3.1+ or TensorFlow 2.2+.

Here is the list of all our examples:

  • grouped by task (all official examples work for multiple models)
  • with information on whether they are built on top of Trainer/TFTrainer (if not, they still work, they might just lack some features),
  • whether they also include examples for pytorch-lightning, which is a great fully-featured, general-purpose training library for PyTorch,
  • links to Colab notebooks to walk through the scripts and run them easily,
  • links to Cloud deployments to be able to deploy large-scale trainings in the Cloud with little to no setup.

This is still a work in progress; in particular, documentation is still sparse, so please contribute improvements/pull requests.

The Big Table of Tasks

Task                 | Example datasets | Trainer support | TFTrainer support | pytorch-lightning | Colab
---------------------|------------------|-----------------|-------------------|-------------------|---------------
language-modeling    | Raw text         | ✅              | -                 | -                 | Open In Colab
text-classification  | GLUE, XNLI       | ✅              | ✅                | ✅                | Open In Colab
token-classification | CoNLL NER        | ✅              | ✅                | ✅                | -
multiple-choice      | SWAG, RACE, ARC  | ✅              | ✅                | -                 | Open In Colab
question-answering   | SQuAD            | ✅              | ✅                | -                 | -
text-generation      | -                | n/a             | n/a               | n/a               | Open In Colab
distillation         | All              | -               | -                 | -                 | -
summarization        | CNN/Daily Mail   | -               | -                 | ✅                | -
translation          | WMT              | -               | -                 | ✅                | -
bertology            | -                | -               | -                 | -                 | -
adversarial          | HANS             | ✅              | -                 | -                 | -

Important note

Important: To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. Execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt

One-click Deploy to Cloud (wip)

Azure

Deploy to Azure

Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a tf.distribute.Strategy.

When using PyTorch, we support TPUs thanks to pytorch/xla. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed pytorch/xla README.

In this repo, we provide a very simple launcher script named xla_spawn.py that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a --num_cores flag to this script, then your regular training script with its arguments (this is similar to the torch.distributed.launch helper for torch.distributed).

For example, for run_glue:

python examples/xla_spawn.py --num_cores 8 \
	examples/text-classification/run_glue.py \
	--model_name_or_path bert-base-cased \
	--task_name mnli \
	--data_dir ./data/glue_data/MNLI \
	--output_dir ./models/tpu \
	--overwrite_output_dir \
	--do_train \
	--do_eval \
	--num_train_epochs 1 \
	--save_steps 20000
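
Under the hood, the launcher simply imports the training script and fans its main() out to one process per TPU core. A condensed sketch of that approach (the real xla_spawn.py adds argument parsing; this assumes torch_xla is installed and run_glue.py is importable):

import sys
import torch_xla.distributed.xla_multiprocessing as xmp
import run_glue  # the training script to replicate across TPU cores

def _mp_fn(index):
    # Each TPU core runs the script's main() in its own process.
    run_glue.main()

if __name__ == "__main__":
    sys.argv = ["run_glue.py"] + sys.argv[1:]  # forward the script's own CLI args
    xmp.spawn(_mp_fn, args=(), nprocs=8)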

Feedback, more use cases, and benchmarks involving TPUs are welcome; please share with the community.

Logging & Experiment tracking

You can easily log and monitor your runs. The following integrations are currently supported:

Weights & Biases

To use Weights & Biases, install the wandb package with:

pip install wandb

Then log in from the command line:

wandb login

If you are in Jupyter or Colab, you should log in with:

import wandb
wandb.login()

Whenever you use Trainer or TFTrainer classes, your losses, evaluation metrics, model topology and gradients (for Trainer only) will automatically be logged.
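
For instance, in this minimal sketch a plain Trainer run logs on its own once wandb is installed (the project name and toy dataset are illustrative, not from this README):

import os
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

os.environ["WANDB_PROJECT"] = "wandb-demo"  # hypothetical project name

class ToyDataset(Dataset):
    # Two encoded sentences, just enough to make the Trainer log something.
    def __init__(self, tokenizer):
        self.enc = tokenizer(["great movie", "terrible movie"], truncation=True, padding=True)
        self.labels = [1, 0]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./out", logging_steps=1, num_train_epochs=1),
    train_dataset=ToyDataset(tokenizer),
)
trainer.train()  # losses are streamed to W&B automatically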

When using 🤗 Transformers with PyTorch Lightning, runs can be tracked through WandbLogger. Refer to related documentation & examples.

Comet.ml

To use comet_ml, install the Python package with:

pip install comet_ml

or if in a Conda environment:

conda install -c comet_ml -c anaconda -c conda-forge comet_ml