
# Examples

This section puts together a few examples. All of them work with several models, taking advantage of the very similar API shared by the different models.

**Important:** To run the latest versions of the examples, you have to install from source and install some specific requirements for the examples. Execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt
```
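
A quick sanity check that the source install succeeded (assuming `python` points at the virtual environment you just created): importing the library and printing its version should work.

```bash
# Print the installed transformers version to confirm the install worked
python -c "import transformers; print(transformers.__version__)"
```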
| Section | Description |
|---------|-------------|
| TensorFlow 2.0 models on GLUE | Examples running the BERT TensorFlow 2.0 model on the GLUE tasks. |
| Running on TPUs | Examples of running fine-tuning tasks on Google TPUs to accelerate workloads. |
| Language Model training | Fine-tuning (or training from scratch) the library models for language modeling on a text dataset. Causal language modeling for GPT/GPT-2, masked language modeling for BERT/RoBERTa. |
| Language Generation | Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL and XLNet. |
| GLUE | Examples running BERT/XLM/XLNet/RoBERTa on the 9 GLUE tasks. Examples feature distributed training as well as half-precision (see the launch sketch below this table). |
| SQuAD | Using BERT/RoBERTa/XLNet/XLM for question answering, examples with distributed training. |
| Multiple Choice | Examples running BERT/XLNet/RoBERTa on the SWAG/RACE/ARC tasks. |
| Named Entity Recognition | Using BERT for Named Entity Recognition (NER) on the CoNLL 2003 dataset, examples with distributed training. |
| XNLI | Examples running BERT/XLM on the XNLI benchmark. |
| Adversarial evaluation of model performances | Testing a model with adversarial evaluation of natural language inference on the Heuristic Analysis for NLI Systems (HANS) dataset (McCoy et al., 2019). |
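
To give a concrete idea of how these example scripts are launched, below is a minimal sketch of fine-tuning BERT on a single GLUE task (MRPC), followed by the distributed half-precision variant mentioned in the GLUE row. The script path and flag names are assumptions based on the layout around this reorganization and may differ between releases; check them against the script's `--help` in your checkout.

```bash
# Assumed location of the GLUE data; adjust to where you downloaded it.
export GLUE_DIR=/path/to/glue_data

# Single-GPU fine-tuning on MRPC. Script path and flags are assumptions
# to verify against your local checkout.
python ./examples/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name MRPC \
  --do_train \
  --do_eval \
  --data_dir $GLUE_DIR/MRPC \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/mrpc_output

# Distributed, half-precision variant: one process per GPU via
# torch.distributed.launch. --fp16 requires NVIDIA apex at this point
# in the library's history.
python -m torch.distributed.launch --nproc_per_node 8 \
  ./examples/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name MRPC \
  --do_train \
  --do_eval \
  --data_dir $GLUE_DIR/MRPC \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 8 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --fp16 \
  --output_dir /tmp/mrpc_output_dist
```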