Examples

Version 2.9 of 🤗 Transformers introduced a new Trainer class for PyTorch, and its equivalent TFTrainer for TF 2. Running the examples requires PyTorch 1.3.1+ or TensorFlow 2.2+.
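For reference, here is a minimal sketch of what a Trainer-based fine-tuning run looks like. This is not one of the example scripts; it assumes the bert-base-cased checkpoint and a small GLUE/SST-2 setup purely for illustration:

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative setup: fine-tune bert-base-cased on GLUE/SST-2.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# Tokenize the raw sentences; fixed-length padding keeps batching simple.
raw_datasets = load_dataset("glue", "sst2")
encoded = raw_datasets.map(
    lambda examples: tokenizer(
        examples["sentence"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

training_args = TrainingArguments(
    output_dir="./models/sst2-demo",  # where checkpoints and logs are written
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)

trainer.train()
trainer.evaluate()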

Here is the list of all our examples:

  • grouped by task (all official examples work for multiple models),
  • with information on whether they are built on top of Trainer/TFTrainer (if not, they still work, but they might lack some features),
  • with information on whether or not they leverage the 🤗 Datasets library,
  • with links to Colab notebooks to walk through the scripts and run them easily,
  • with links to Cloud deployments so you can deploy large-scale trainings in the Cloud with little to no setup.

Important note

To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. Execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt

Alternatively, you can check out and run the version of the examples that matches your currently installed version of Transformers (for instance, for v3.4.0):

git checkout tags/v3.4.0
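If you are unsure which tag matches your installation, you can print the installed version of the library:

import transformers

# Check out the tag "v" + this version string (e.g. "v3.4.0").
print(transformers.__version__)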

The Big Table of Tasks

Task                 | Example datasets | Trainer support | TFTrainer support | 🤗 Datasets | Colab
language-modeling    | Raw text         | ✅              | -                 | ✅          | Open In Colab
text-classification  | GLUE, XNLI       | ✅              | ✅                | ✅          | Open In Colab
token-classification | CoNLL NER        | ✅              | ✅                | ✅          | -
multiple-choice      | SWAG, RACE, ARC  | ✅              | ✅                | -           | Open In Colab
question-answering   | SQuAD            | ✅              | ✅                | ✅          | -
text-generation      | -                | n/a             | n/a               | -           | Open In Colab
distillation         | All              | -               | -                 | -           | -
summarization        | CNN/Daily Mail   | ✅              | -                 | -           | -
translation          | WMT              | ✅              | -                 | -           | -
bertology            | -                | -               | -                 | -           | -
adversarial          | HANS             | ✅              | -                 | -           | -

One-click Deploy to Cloud (wip)

Coming soon!

Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a tf.distribute.Strategy.
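For reference, here is a rough sketch of how such a strategy is typically created in TensorFlow; the example scripts set this up for you, and the snippet assumes you are running in an environment with a TPU attached (for instance a Colab TPU runtime):

import tensorflow as tf

# Locate and initialize the TPU attached to this environment.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TF 2.2 exposes this as tf.distribute.experimental.TPUStrategy;
# newer releases also provide tf.distribute.TPUStrategy.
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Build and compile your model inside the strategy scope so its
    # variables are placed on the TPU.
    ...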

When using PyTorch, we support TPUs thanks to pytorch/xla. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed pytorch/xla README.

In this repo, we provide a very simple launcher script named xla_spawn.py that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a --num_cores flag to this script, then your regular training script with its arguments (this is similar to the torch.distributed.launch helper for torch.distributed). Note that this approach does not work for examples that use pytorch-lightning.

For example, for run_glue:

python examples/xla_spawn.py --num_cores 8 \
	examples/text-classification/run_glue.py \
	--model_name_or_path bert-base-cased \
	--task_name mnli \
	--data_dir ./data/glue_data/MNLI \
	--output_dir ./models/tpu \
	--overwrite_output_dir \
	--do_train \
	--do_eval \
	--num_train_epochs 1 \
	--save_steps 20000

Feedback and more use cases and benchmarks involving TPUs are welcome; please share with the community.

Logging & Experiment tracking

You can easily log and monitor your training runs. The following integrations are currently supported:

Weights & Biases

To use Weights & Biases, install the wandb package with:

pip install wandb

Then log in from the command line:

wandb login

If you are in Jupyter or Colab, you should log in with:

import wandb
wandb.login()

Whenever you use the Trainer or TFTrainer classes, your losses, evaluation metrics, model topology, and gradients (for Trainer only) will automatically be logged.
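The Weights & Biases integration can also be configured through environment variables set before training starts; for example (the project name below is just a placeholder):

import os

# Report runs to a specific W&B project (placeholder name).
os.environ["WANDB_PROJECT"] = "my-transformers-experiments"

# Uncomment to turn the integration off entirely.
# os.environ["WANDB_DISABLED"] = "true"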

When using 🤗 Transformers with PyTorch Lightning, runs can be tracked through WandbLogger. Refer to related documentation & examples.

Comet.ml

To use comet_ml, install the Python package with:

pip install comet_ml

or if in a Conda environment:

conda install -c comet_ml -c anaconda -c conda-forge comet_ml
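Once comet_ml is installed, runs made with the Trainer classes should be reported to Comet.ml in the same automatic way as with Weights & Biases. A typical setup (a sketch using Comet's standard configuration variables and a placeholder project name) is to set your API key and project before launching a script:

import os

# Standard Comet.ml configuration: your API key plus an optional project name (placeholder).
os.environ["COMET_API_KEY"] = "<your-comet-api-key>"
os.environ["COMET_PROJECT_NAME"] = "my-transformers-experiments"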