[examples] Streamline doc
commit af2e6bf87c (parent 7defc6670f)

# Examples

Version 2.9 of `transformers` introduces a new [`Trainer`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) class for PyTorch, and its equivalent [`TFTrainer`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py) for TF 2.
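
In a nutshell, the `Trainer` API looks like the sketch below. This is a minimal illustration rather than one of the example scripts: it is written against a recent version of the library (argument names may differ slightly in 2.9), and the toy dataset merely stands in for the task-specific datasets that each script builds. `TFTrainer` follows the same pattern on the TF 2 side.

```python
import torch
from torch.utils.data import Dataset

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)


class ToyDataset(Dataset):
    """Stand-in for the task datasets that the example scripts build."""

    def __init__(self, texts, labels, tokenizer):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

train_dataset = ToyDataset(["great movie", "terrible movie"], [1, 0], tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toy-output"),
    train_dataset=train_dataset,
)
trainer.train()
```
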
Here is the list of all our examples:
- **grouped by task** (all official examples work for multiple models)
- with information on whether they are **built on top of `Trainer`/`TFTrainer`** (if not, they still work, they might just lack some features)
- whether they include examples for **pytorch-lightning**, a fully-featured, production-ready alternative
- links to **Colab notebooks** to walk through the scripts and run them easily

This is still a work-in-progress – in particular documentation is still sparse – so please **contribute improvements/pull requests.**

## The Big Table of Tasks

| Task | Example datasets | Trainer support | TFTrainer support | pytorch-lightning | Colab |
|---|---|:---:|:---:|:---:|:---:|
| [**`language-modeling`**](./language-modeling) | Raw text | ✅ | - | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) |
| [**`text-classification`**](./text-classification) | GLUE, XNLI | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb) |
| [**`token-classification`**](./token-classification) | CoNLL NER | ✅ | ✅ | ✅ | - |
| [**`multiple-choice`**](./multiple-choice) | SWAG, RACE, ARC | ✅ | ✅ | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ViktorAlm/notebooks/blob/master/MPC_GPU_Demo_for_TF_and_PT.ipynb) |
| [**`question-answering`**](./question-answering) | SQuAD | - | ✅ | - | - |
| [**`text-generation`**](./text-generation) | - | - | - | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb) |
| [**`distillation`**](./distillation) | All | - | - | - | - |
| [**`summarization`**](./summarization) | CNN/Daily Mail | - | - | - | - |
| [**`translation`**](./translation) | WMT | - | - | - | - |
| [**`bertology`**](./bertology) | - | - | - | - | - |
| [**`adversarial`**](./adversarial) | HANS | - | - | - | - |

## Other examples and how-to's

| Section | Description |
|---|---|
| [TensorFlow 2.0 models on GLUE](./text-classification) | Examples of running the BERT TensorFlow 2.0 model on the GLUE tasks. |
| [Running on TPUs](#running-on-tpus) | Examples of running fine-tuning tasks on Google TPUs to accelerate workloads. |
| [Language Model training](./language-modeling) | Fine-tuning (or training from scratch) the library models for language modeling on a text dataset. Causal language modeling for GPT/GPT-2, masked language modeling for BERT/RoBERTa. |
| [Language Generation](./text-generation) | Conditional text generation using the auto-regressive models of the library: GPT, GPT-2, Transformer-XL and XLNet. |
| [GLUE](./text-classification) | Examples running BERT/XLM/XLNet/RoBERTa on the 9 GLUE tasks, featuring distributed training as well as half-precision. |
| [SQuAD](./question-answering) | Using BERT/RoBERTa/XLNet/XLM for question answering, with examples of distributed training. |
| [Multiple Choice](./multiple-choice) | Examples running BERT/XLNet/RoBERTa on the SWAG/RACE/ARC tasks. |
| [Named Entity Recognition](./token-classification) | Using BERT for Named Entity Recognition (NER) on the CoNLL 2003 dataset, with examples of distributed training. |
| [XNLI](./text-classification) | Examples running BERT/XLM on the XNLI benchmark. |
| [Adversarial evaluation of model performances](./adversarial) | Testing a model with adversarial evaluation of natural language inference on the Heuristic Analysis for NLI Systems (HANS) dataset (McCoy et al., 2019). |

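As a taste of the Language Generation example above, here is a minimal sampling snippet for GPT-2. It is a sketch, not the example script itself; the prompt and decoding parameters are only illustrative.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode a prompt, then sample a continuation with nucleus (top-p) sampling,
# one of the decoding strategies covered by the text-generation example.
input_ids = tokenizer.encode("In a distant galaxy,", return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,
    top_k=0,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
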
<br>

## Important note

To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. Execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt
```

## One-click Deploy to Cloud (wip)

### Azure

[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-storage-account-create%2Fazuredeploy.json)

## Running on TPUs
When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`.
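
For instance, setting up a TPU strategy looks roughly like this. It is a minimal sketch assuming a Colab or Cloud TPU reachable at the default address; it uses the TF 2.1-era `tf.distribute.experimental.TPUStrategy`, which newer TensorFlow versions expose directly as `tf.distribute.TPUStrategy`.

```python
import tensorflow as tf

# Resolve the TPU cluster; `tpu=""` assumes a Colab/Cloud TPU reachable at
# the default address. Pass a TPU name or gRPC address otherwise.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Variables created under the scope are replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```
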