Convert rst files (#14888)
* Convert all tutorials and guides
* Convert all remaining rst to mdx
* Track and fix bad links
This commit is contained in:
parent
b0c7d2ec58
commit
207594be81
138
docs/source/add_new_pipeline.mdx
Normal file
@@ -0,0 +1,138 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# How to add a pipeline to 🤗 Transformers?

First and foremost, you need to decide the raw entries the pipeline will be able to take. They can be strings, raw bytes,
dictionaries or whatever seems to be the most likely desired input. Try to keep these inputs as pure Python as possible,
as that makes compatibility easier (even with other languages, via JSON). Those will be the `inputs` of the
pipeline (`preprocess`).

Then define the `outputs`. Same policy as for the `inputs`: the simpler, the better. Those will be the outputs of the
`postprocess` method.

Start by inheriting the base class `Pipeline` and implementing the 4 methods needed: `preprocess`,
`_forward`, `postprocess` and `_sanitize_parameters`.

```python
from transformers import Pipeline


class MyPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        preprocess_kwargs = {}
        if "maybe_arg" in kwargs:
            preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]
        return preprocess_kwargs, {}, {}

    def preprocess(self, inputs, maybe_arg=2):
        model_input = Tensor(....)
        return {"model_input": model_input}

    def _forward(self, model_inputs):
        # model_inputs == {"model_input": model_input}
        outputs = self.model(**model_inputs)
        # Maybe {"logits": Tensor(...)}
        return outputs

    def postprocess(self, model_outputs):
        best_class = model_outputs["logits"].softmax(-1)
        return best_class
```

The structure of this breakdown is meant to support relatively seamless CPU/GPU inference, while allowing the
pre/postprocessing to be done on the CPU on different threads.

`preprocess` will take the originally defined inputs and turn them into something that can be fed to the model. It might
contain more information and is usually a `Dict`.

`_forward` is the implementation detail and is not meant to be called directly. `forward` is the preferred
method to call as it contains safeguards to make sure everything is working on the expected device. If anything is
linked to a real model it belongs in the `_forward` method, anything else belongs in preprocess/postprocess.

The `postprocess` method will take the output of `_forward` and turn it into the final output that was decided
earlier.

`_sanitize_parameters` exists to allow users to pass any parameters whenever they wish, be it at initialization
time `pipeline(...., maybe_arg=4)` or at call time `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`.

The returns of `_sanitize_parameters` are the 3 dicts of kwargs that will be passed directly to `preprocess`,
`_forward` and `postprocess`. Don't fill anything if the caller didn't call with any extra parameter. That
allows keeping the default arguments in the function definition, which is always more "natural".

A classic example would be a `top_k` argument in the post processing of classification tasks.

```python
>>> pipe = pipeline("my-new-task")
>>> pipe("This is a test")
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05},
{"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}]

>>> pipe("This is a test", top_k=2)
[{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}]
```

In order to achieve that, we'll update our `postprocess` method with a default parameter set to `5`, and edit
`_sanitize_parameters` to allow this new parameter.

```python
def postprocess(self, model_outputs, top_k=5):
    best_class = model_outputs["logits"].softmax(-1)
    # Add logic to handle top_k
    return best_class


def _sanitize_parameters(self, **kwargs):
    preprocess_kwargs = {}
    if "maybe_arg" in kwargs:
        preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"]

    postprocess_kwargs = {}
    if "top_k" in kwargs:
        postprocess_kwargs["top_k"] = kwargs["top_k"]
    return preprocess_kwargs, {}, postprocess_kwargs
```

Try to keep the inputs/outputs very simple and ideally JSON-serializable, as it makes the pipeline very easy to use
without requiring users to understand new kinds of objects. It's also relatively common to support many different types
of arguments for ease of use (audio files, for example, can be filenames, URLs or raw bytes).

## Adding it to the list of supported tasks

Go to `src/transformers/pipelines/__init__.py` and fill in `SUPPORTED_TASKS` with your newly created pipeline.
If possible it should provide a default model.
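As a rough sketch only (the exact keys in `SUPPORTED_TASKS` can change between versions, and `MyPipeline`,
`"my-new-task"` and the default model name below are hypothetical placeholders), an entry could look like this:

```python
# Hypothetical entry inside SUPPORTED_TASKS in src/transformers/pipelines/__init__.py.
# Mirror the structure of the neighboring entries in your version of the file.
"my-new-task": {
    "impl": MyPipeline,
    "tf": (),  # tuple of TF auto-model classes, if the task supports TensorFlow
    "pt": (AutoModelForSequenceClassification,) if is_torch_available() else (),
    "default": {"model": {"pt": "username/my-default-model"}},
    "type": "text",
},
```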

## Adding tests

Create a new file `tests/test_pipelines_MY_PIPELINE.py`, modeled on the other pipeline tests; a minimal sketch follows
the list below.

The `run_pipeline_test` function will be very generic and run on small random models on every possible
architecture as defined by `model_mapping` and `tf_model_mapping`.

This is very important to test future compatibility, meaning that if someone adds a new model for
`XXXForQuestionAnswering` then the pipeline test will attempt to run on it. Because the models are random it's
impossible to check for actual values, which is why there is a helper `ANY` that will simply attempt to match the
TYPE of the pipeline output.

You also *need* to implement 2 (ideally 4) tests.

- `test_small_model_pt`: Define 1 small model for this pipeline (it doesn't matter if the results don't make sense)
  and test the pipeline outputs. The results should be the same as `test_small_model_tf`.
- `test_small_model_tf`: Define 1 small model for this pipeline (it doesn't matter if the results don't make sense)
  and test the pipeline outputs. The results should be the same as `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to
  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
  sure there is no drift in future releases.
- `test_large_model_tf` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to
  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
  sure there is no drift in future releases.
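As a loose sketch under the assumptions above (`my-new-task` and the model identifiers are hypothetical, and the real
test files use shared mixins that you should copy from an existing `tests/test_pipelines_*.py`), such a file could
start out like this:

```python
# Hypothetical tests/test_pipelines_my_new_task.py; check an existing pipeline test
# for the exact base classes and fixtures used in your version of the repository.
import unittest

from transformers import pipeline
from transformers.testing_utils import require_torch, slow


class MyNewTaskPipelineTests(unittest.TestCase):
    @require_torch
    def test_small_model_pt(self):
        # Tiny random checkpoint: outputs only need to be stable, not meaningful.
        pipe = pipeline("my-new-task", model="hf-internal-testing/tiny-random-model")
        outputs = pipe("This is a test")
        self.assertIsInstance(outputs, list)

    @slow
    @require_torch
    def test_large_model_pt(self):
        # Real checkpoint: outputs should make sense and not drift across releases.
        pipe = pipeline("my-new-task", model="username/my-real-checkpoint")
        outputs = pipe("This is a test")
        self.assertEqual(outputs[0]["label"], "1-star")
```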
36
docs/source/bertology.mdx
Normal file
@@ -0,0 +1,36 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# BERTology

There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT
(that some call "BERTology"). Some good examples of this field are:

- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick:
  https://arxiv.org/abs/1905.05950
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
- What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D.
  Manning: https://arxiv.org/abs/1906.04341

In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to
help people access the inner representations, mainly adapted from the great work of Paul Michel
(https://arxiv.org/abs/1905.10650):

- accessing all the hidden-states of BERT/GPT/GPT-2,
- accessing all the attention weights for each head of BERT/GPT/GPT-2,
- retrieving the heads' output values and gradients to be able to compute a head importance score and prune heads as
  explained in https://arxiv.org/abs/1905.10650.

To help you understand and use these features, we have added a specific example script,
[bertology.py](https://github.com/huggingface/transformers/tree/master/examples/research_projects/bertology/run_bertology.py),
which extracts information from and prunes a model pre-trained on GLUE.
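As a minimal sketch of the first two features (a standard BERT checkpoint is assumed; the flags used here are the
regular `output_hidden_states`/`output_attentions` arguments):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERTology studies the inner workings of BERT", return_tensors="pt")
with torch.no_grad():
    # Ask the model to also return every layer's hidden states and attention maps.
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

print(len(outputs.hidden_states))   # embedding output + one tensor per layer
print(outputs.attentions[0].shape)  # (batch_size, num_heads, seq_len, seq_len)
```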
162
docs/source/converting_tensorflow_models.mdx
Normal file
@@ -0,0 +1,162 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Converting TensorFlow Checkpoints

A command-line interface is provided to convert original BERT/GPT/GPT-2/Transformer-XL/XLNet/XLM checkpoints to models
that can be loaded using the `from_pretrained` methods of the library.

<Tip>

Since 2.3.0 the conversion script is part of the transformers CLI (**transformers-cli**), available in any
transformers >= 2.3.0 installation.

The documentation below reflects the **transformers-cli convert** command format.

</Tip>

## BERT

You can convert any TensorFlow checkpoint for BERT (in particular [the pre-trained models released by Google](https://github.com/google-research/bert#pre-trained-models)) into a PyTorch save file by using the
[convert_bert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py) script.

This CLI takes as input a TensorFlow checkpoint (three files starting with `bert_model.ckpt`) and the associated
configuration file (`bert_config.json`), creates a PyTorch model for this configuration, loads the weights from
the TensorFlow checkpoint into the PyTorch model and saves the resulting model in a standard PyTorch save file that can
be imported using `from_pretrained()` (see example in [quicktour](quicktour), [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification/run_glue.py)).

You only need to run this conversion script **once** to get a PyTorch model. You can then disregard the TensorFlow
checkpoint (the three files starting with `bert_model.ckpt`) but be sure to keep the configuration file
(`bert_config.json`) and the vocabulary file (`vocab.txt`) as these are needed for the PyTorch model too.

To run this specific conversion script you will need to have TensorFlow and PyTorch installed (`pip install tensorflow`). The rest of the repository only requires PyTorch.

Here is an example of the conversion process for a pre-trained `BERT-Base Uncased` model:

```bash
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12

transformers-cli convert --model_type bert \
  --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \
  --config $BERT_BASE_DIR/bert_config.json \
  --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin
```

You can download Google's pre-trained models for the conversion [here](https://github.com/google-research/bert#pre-trained-models).
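Once converted, the weights can be loaded back in Python. A minimal sketch, assuming the `$BERT_BASE_DIR` layout
produced by the commands above (the exact loading options can vary slightly between versions, so treat this as an
illustration rather than the canonical recipe):

```python
from transformers import BertConfig, BertForPreTraining

# Hypothetical paths matching the conversion example above; adjust to your own layout.
config = BertConfig.from_json_file("/path/to/bert/uncased_L-12_H-768_A-12/bert_config.json")
model = BertForPreTraining.from_pretrained(
    "/path/to/bert/uncased_L-12_H-768_A-12/pytorch_model.bin", config=config
)
```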

## ALBERT

Convert TensorFlow model checkpoints of ALBERT to PyTorch using the
[convert_albert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/tree/master/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py) script.

The CLI takes as input a TensorFlow checkpoint (three files starting with `model.ckpt-best`) and the accompanying
configuration file (`albert_config.json`), then creates and saves a PyTorch model. To run this conversion you will
need to have TensorFlow and PyTorch installed.

Here is an example of the conversion process for the pre-trained `ALBERT Base` model:

```bash
export ALBERT_BASE_DIR=/path/to/albert/albert_base

transformers-cli convert --model_type albert \
  --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-best \
  --config $ALBERT_BASE_DIR/albert_config.json \
  --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin
```

You can download Google's pre-trained models for the conversion [here](https://github.com/google-research/albert#pre-trained-models).

## OpenAI GPT

Here is an example of the conversion process for a pre-trained OpenAI GPT model, assuming that your NumPy checkpoint
is saved in the same format as the OpenAI pretrained model (see [here](https://github.com/openai/finetune-transformer-lm)):

```bash
export OPENAI_GPT_CHECKPOINT_FOLDER_PATH=/path/to/openai/pretrained/numpy/weights

transformers-cli convert --model_type gpt \
  --tf_checkpoint $OPENAI_GPT_CHECKPOINT_FOLDER_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config OPENAI_GPT_CONFIG] \
  [--finetuning_task_name OPENAI_GPT_FINETUNED_TASK]
```

## OpenAI GPT-2

Here is an example of the conversion process for a pre-trained OpenAI GPT-2 model (see [here](https://github.com/openai/gpt-2)):

```bash
export OPENAI_GPT2_CHECKPOINT_PATH=/path/to/gpt2/pretrained/weights

transformers-cli convert --model_type gpt2 \
  --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config OPENAI_GPT2_CONFIG] \
  [--finetuning_task_name OPENAI_GPT2_FINETUNED_TASK]
```

## Transformer-XL

Here is an example of the conversion process for a pre-trained Transformer-XL model (see [here](https://github.com/kimiyoung/transformer-xl/tree/master/tf#obtain-and-evaluate-pretrained-sota-models)):

```bash
export TRANSFO_XL_CHECKPOINT_FOLDER_PATH=/path/to/transfo/xl/checkpoint

transformers-cli convert --model_type transfo_xl \
  --tf_checkpoint $TRANSFO_XL_CHECKPOINT_FOLDER_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config TRANSFO_XL_CONFIG] \
  [--finetuning_task_name TRANSFO_XL_FINETUNED_TASK]
```

## XLNet

Here is an example of the conversion process for a pre-trained XLNet model:

```bash
export XLNET_CHECKPOINT_PATH=/path/to/xlnet/checkpoint
export XLNET_CONFIG_PATH=/path/to/xlnet/config

transformers-cli convert --model_type xlnet \
  --tf_checkpoint $XLNET_CHECKPOINT_PATH \
  --config $XLNET_CONFIG_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--finetuning_task_name XLNET_FINETUNED_TASK]
```

## XLM

Here is an example of the conversion process for a pre-trained XLM model:

```bash
export XLM_CHECKPOINT_PATH=/path/to/xlm/checkpoint

transformers-cli convert --model_type xlm \
  --tf_checkpoint $XLM_CHECKPOINT_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config XLM_CONFIG] \
  [--finetuning_task_name XLM_FINETUNED_TASK]
```

## T5

Here is an example of the conversion process for a pre-trained T5 model:

```bash
export T5=/path/to/t5/uncased_L-12_H-768_A-12

transformers-cli convert --model_type t5 \
  --tf_checkpoint $T5/t5_model.ckpt \
  --config $T5/t5_config.json \
  --pytorch_dump_output $T5/pytorch_model.bin
```
70
docs/source/fast_tokenizers.mdx
Normal file
@@ -0,0 +1,70 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Using tokenizers from 🤗 Tokenizers

The [`PreTrainedTokenizerFast`] depends on the [🤗 Tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 Tokenizers library can be
loaded very simply into 🤗 Transformers.

Before getting into the specifics, let's first start by creating a dummy tokenizer in a few lines:

```python
>>> from tokenizers import Tokenizer
>>> from tokenizers.models import BPE
>>> from tokenizers.trainers import BpeTrainer
>>> from tokenizers.pre_tokenizers import Whitespace

>>> tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
>>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

>>> tokenizer.pre_tokenizer = Whitespace()
>>> files = [...]
>>> tokenizer.train(files, trainer)
```

We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to
a JSON file for future re-use.

## Loading directly from the tokenizer object

Let's see how to leverage this tokenizer object in the 🤗 Transformers library. The
[`PreTrainedTokenizerFast`] class allows for easy instantiation, by accepting the instantiated
*tokenizer* object as an argument:

```python
>>> from transformers import PreTrainedTokenizerFast

>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
```

This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer
page](main_classes/tokenizer) for more information.
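For example, a quick sanity check of the wrapped tokenizer (a sketch only: the exact ids and keys depend on the files
the tokenizer was trained on and on the default `model_input_names`):

```python
>>> encoding = fast_tokenizer("Hello world!")
>>> sorted(encoding.keys())  # the usual transformers-style BatchEncoding dictionary
['attention_mask', 'input_ids', 'token_type_ids']
```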

## Loading from a JSON file

In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:

```python
>>> tokenizer.save("tokenizer.json")
```

The path to which we saved this file can be passed to the [`PreTrainedTokenizerFast`] initialization
method using the `tokenizer_file` parameter:

```python
>>> from transformers import PreTrainedTokenizerFast

>>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
```

This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer
page](main_classes/tokenizer) for more information.
297
docs/source/glossary.mdx
Normal file
@@ -0,0 +1,297 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Glossary

## General terms

- autoencoding models: see MLM
- autoregressive models: see CLM
- CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
  next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future
  tokens at a certain timestep.
- deep learning: machine learning algorithms which use neural networks with several layers.
- MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
  by masking some tokens randomly, and has to predict the original text.
- multimodal: a task that combines texts with another kind of input (for instance images).
- NLG: natural language generation, all tasks related to generating text (for instance talking with transformers,
  translation).
- NLP: natural language processing, a generic way to say "deal with texts".
- NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying
  the whole text, individual words).
- pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
  involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or
  masking some words and trying to predict them (see MLM).
- RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
- self-attention: each element of the input finds out which other elements of the input it should attend to.
- seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or
  summarization models (such as [Bart](model_doc/bart) or [T5](model_doc/t5)).
- token: a part of a sentence, usually a word, but can also be a subword (non-common words are often split into
  subwords) or a punctuation symbol.
- transformer: self-attention based deep learning model architecture.

## Model inputs

Every model is different yet bears similarities with the others. Therefore most models use the same inputs, which are
detailed here alongside usage examples.

<a id='input-ids'></a>

### Input IDs

The input ids are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

<Youtube id="VFp38yj8h3A"/>

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer:

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence = "A Titan RTX has 24GB of VRAM"
```

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash
prefix is added for "RA" and "M":

```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of [🤗 Tokenizers](https://github.com/huggingface/tokenizers) for peak performance.

```python
>>> inputs = tokenizer(sequence)
```

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

```python
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special
IDs the model sometimes uses.

If we decode the previous sequence of ids,

```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```

we will see

```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```

because this is the way a [`BertModel`] is going to expect its inputs.

<a id='attention-mask'></a>

### Attention mask

The attention mask is an optional argument used when batching sequences together.

<Youtube id="M6adb1j2jPI"/>

This argument indicates to the model which tokens should be attended to, and which should not.

For example, consider these two sequences:

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```

The encoded versions have different lengths:

```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`],
`1` indicates a value that should be attended to, while `0` indicates a padded value. This attention mask is
in the dictionary returned by the tokenizer under the key "attention_mask":

```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```

<a id='token-type-ids'></a>

### Token Type IDs

Some models' purpose is to do classification on pairs of sentences or question answering.

<Youtube id="0u3ioSwev3s"/>

These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the
help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT
model builds its two sequence input as such:

```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two
arguments (and not a list, like before) like this:

```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?"

>>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
```

which will return:

```python
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying
the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

```python
>>> encoded_dict['token_type_ids']
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the
second sequence, corresponding to the "question", has all its tokens represented by a `1`.

Some models, like [`XLNetModel`], use an additional token represented by a `2`.

<a id='position-ids'></a>

### Position IDs

Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of
each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in
the list of tokens.

They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as
absolute positional embeddings.

Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
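As a small sketch (the checkpoint name is just an example), explicitly passing the default absolute positions looks
like this:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

inputs = tokenizer("A short sentence.", return_tensors="pt")
seq_length = inputs["input_ids"].shape[1]

# Positions 0 .. seq_length-1: this is exactly what the model would create by default.
position_ids = torch.arange(seq_length).unsqueeze(0)
outputs = model(**inputs, position_ids=position_ids)
```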
<a id='labels'></a>

### Labels

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will use its standard loss to compute the loss between its
predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models (e.g., [`BertForSequenceClassification`]), the model expects a
  tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of the
  entire sequence.
- For token classification models (e.g., [`BertForTokenClassification`]), the model expects a tensor
  of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual
  token.
- For masked language modeling (e.g., [`BertForMaskedLM`]), the model expects a tensor of dimension
  `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the
  labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).
- For sequence to sequence tasks (e.g., [`BartForConditionalGeneration`],
  [`MBartForConditionalGeneration`]), the model expects a tensor of dimension `(batch_size, tgt_seq_length)` with each
  value corresponding to the target sequences associated with each input sequence. During training, both *BART* and
  *T5* will make the appropriate `decoder_input_ids` and decoder attention masks internally. They usually do not need
  to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. See the documentation of
  each model for more information on each specific model's labels.

The base models (e.g., [`BertModel`]) do not accept labels, as these are the base transformer
models, simply outputting features.
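
As a concrete illustration, here is a minimal sketch of passing labels to a sequence classification head (the
checkpoint and the label value are arbitrary examples):

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
labels = torch.tensor([1])  # one label per sequence in the batch

outputs = model(**inputs, labels=labels)
loss = outputs.loss  # computed by the model from the labels
```
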
<a id='decoder-input-ids'></a>

### Decoder input IDs

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.

Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`. In
such models, passing the `labels` is the preferred way to handle training.

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.
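
For instance, here is a minimal sketch of this pattern with T5: only `labels` is passed, and the model derives the
`decoder_input_ids` from them internally (the checkpoint and sentences are just examples):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to French: HuggingFace is based in NYC", return_tensors="pt").input_ids
labels = tokenizer("HuggingFace est basé à NYC", return_tensors="pt").input_ids

# No decoder_input_ids are supplied: the model builds them by shifting the labels internally
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
```
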
<a id='feed-forward-chunking'></a>

### Feed Forward Chunking

In each residual attention block in transformers, the self-attention layer is usually followed by two feed forward
layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model
(e.g., for `bert-base-uncased`).

For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward
embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory
use. The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that since the
computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n`
individually and concatenate them afterwards to `[batch_size, sequence_length, config.hidden_size]` with
`n = sequence_length`, which trades increased computation time against reduced memory use, but yields a mathematically
**equivalent** result.

For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity. If `chunk_size` is set to 0, no feed forward chunking is done.
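
As a rough sketch of how a feed forward block can use this helper (the layer sizes and chunk size below are made up
for illustration):

```python
import torch
from torch import nn
from transformers import apply_chunking_to_forward

hidden_size, intermediate_size = 256, 1024  # illustrative sizes
dense_in = nn.Linear(hidden_size, intermediate_size)
dense_out = nn.Linear(intermediate_size, hidden_size)

def feed_forward(hidden_states):
    # The two position-wise feed forward layers
    return dense_out(torch.relu(dense_in(hidden_states)))

hidden_states = torch.randn(2, 512, hidden_size)  # [batch_size, sequence_length, hidden_size]

# chunk_size=64 processes 64 positions at a time along dimension 1 (the sequence dimension)
output = apply_chunking_to_forward(feed_forward, 64, 1, hidden_states)
assert output.shape == hidden_states.shape
```
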
46
docs/source/internal/file_utils.mdx
Normal file
@ -0,0 +1,46 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# General Utilities

This page lists all of Transformers' general utility functions that are found in the file `file_utils.py`.

Most of those are only useful if you are studying the general code in the library.


## Enums and namedtuples

[[autodoc]] file_utils.ExplicitEnum

[[autodoc]] file_utils.PaddingStrategy

[[autodoc]] file_utils.TensorType

## Special Decorators

[[autodoc]] file_utils.add_start_docstrings

[[autodoc]] file_utils.add_start_docstrings_to_model_forward

[[autodoc]] file_utils.add_end_docstrings

[[autodoc]] file_utils.add_code_sample_docstrings

[[autodoc]] file_utils.replace_return_docstrings

## Special Properties

[[autodoc]] file_utils.cached_property

## Other Utilities

[[autodoc]] file_utils._LazyModule
207
docs/source/internal/generation_utils.mdx
Normal file
@ -0,0 +1,207 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Utilities for Generation

This page lists all the utility functions used by [`~generation_utils.GenerationMixin.generate`],
[`~generation_utils.GenerationMixin.greedy_search`],
[`~generation_utils.GenerationMixin.sample`],
[`~generation_utils.GenerationMixin.beam_search`],
[`~generation_utils.GenerationMixin.beam_sample`], and
[`~generation_utils.GenerationMixin.group_beam_search`].

Most of those are only useful if you are studying the code of the generate methods in the library.

## Generate Outputs

The output of [`~generation_utils.GenerationMixin.generate`] is an instance of a subclass of
[`~file_utils.ModelOutput`]. This output is a data structure containing all the information returned
by [`~generation_utils.GenerationMixin.generate`], but that can also be used as a tuple or a dictionary.

Here's an example:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```

The `generation_output` object is a [`~generation_utils.GreedySearchDecoderOnlyOutput`]. As we can
see in the documentation of that class below, it has the following attributes:

- `sequences`: the generated sequences of tokens
- `scores` (optional): the prediction scores of the language modelling head, for each generation step
- `hidden_states` (optional): the hidden states of the model, for each generation step
- `attentions` (optional): the attention weights of the model, for each generation step

Here we have the `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and
`attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
will get `None`. Here for instance `generation_output.scores` are all the generated prediction scores of the
language modeling head, and `generation_output.attentions` is `None`.

When using our `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values.
Here, for instance, it has two elements, `sequences` then `scores`, so

```python
generation_output[:2]
```

will return the tuple `(generation_output.sequences, generation_output.scores)`.

When using our `generation_output` object as a dictionary, it only keeps the attributes that don't have `None`
values. Here, for instance, it has two keys that are `sequences` and `scores`.
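
As a small illustration of the three access patterns on the same object (this just restates the behaviour described
above):

```python
sequences = generation_output.sequences  # attribute access
scores = generation_output["scores"]  # dictionary-style access
sequences, scores = generation_output[:2]  # tuple-style access, None entries are skipped
```
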
We document here all output types.


### GreedySearchOutput

[[autodoc]] generation_utils.GreedySearchDecoderOnlyOutput

[[autodoc]] generation_utils.GreedySearchEncoderDecoderOutput

[[autodoc]] generation_flax_utils.FlaxGreedySearchOutput

### SampleOutput

[[autodoc]] generation_utils.SampleDecoderOnlyOutput

[[autodoc]] generation_utils.SampleEncoderDecoderOutput

[[autodoc]] generation_flax_utils.FlaxSampleOutput

### BeamSearchOutput

[[autodoc]] generation_utils.BeamSearchDecoderOnlyOutput

[[autodoc]] generation_utils.BeamSearchEncoderDecoderOutput

### BeamSampleOutput

[[autodoc]] generation_utils.BeamSampleDecoderOnlyOutput

[[autodoc]] generation_utils.BeamSampleEncoderDecoderOutput

## LogitsProcessor

A [`LogitsProcessor`] can be used to modify the prediction scores of a language model head for
generation.
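
For instance, here is a minimal sketch that builds a [`LogitsProcessorList`] with a single
[`MinLengthLogitsProcessor`] and applies it to a batch of scores by hand (the token IDs and scores are random
placeholders; during generation these processors are normally assembled and called for you):

```python
import torch
from transformers import LogitsProcessorList, MinLengthLogitsProcessor

# Forbid the EOS token (id 50256 here, as an example) until at least 10 tokens have been generated
processors = LogitsProcessorList([MinLengthLogitsProcessor(min_length=10, eos_token_id=50256)])

input_ids = torch.randint(0, 50257, (1, 5))  # 5 tokens generated so far
scores = torch.randn(1, 50257)  # next-token scores from the model

processed_scores = processors(input_ids, scores)
print(processed_scores[0, 50256])  # -inf: EOS cannot be selected yet
```
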
[[autodoc]] LogitsProcessor
    - __call__

[[autodoc]] LogitsProcessorList
    - __call__

[[autodoc]] LogitsWarper
    - __call__

[[autodoc]] MinLengthLogitsProcessor
    - __call__

[[autodoc]] TemperatureLogitsWarper
    - __call__

[[autodoc]] RepetitionPenaltyLogitsProcessor
    - __call__

[[autodoc]] TopPLogitsWarper
    - __call__

[[autodoc]] TopKLogitsWarper
    - __call__

[[autodoc]] NoRepeatNGramLogitsProcessor
    - __call__

[[autodoc]] NoBadWordsLogitsProcessor
    - __call__

[[autodoc]] PrefixConstrainedLogitsProcessor
    - __call__

[[autodoc]] HammingDiversityLogitsProcessor
    - __call__

[[autodoc]] ForcedBOSTokenLogitsProcessor
    - __call__

[[autodoc]] ForcedEOSTokenLogitsProcessor
    - __call__

[[autodoc]] InfNanRemoveLogitsProcessor
    - __call__

[[autodoc]] FlaxLogitsProcessor
    - __call__

[[autodoc]] FlaxLogitsProcessorList
    - __call__

[[autodoc]] FlaxLogitsWarper
    - __call__

[[autodoc]] FlaxTemperatureLogitsWarper
    - __call__

[[autodoc]] FlaxTopPLogitsWarper
    - __call__

[[autodoc]] FlaxTopKLogitsWarper
    - __call__

[[autodoc]] FlaxForcedBOSTokenLogitsProcessor
    - __call__

[[autodoc]] FlaxForcedEOSTokenLogitsProcessor
    - __call__

[[autodoc]] FlaxMinLengthLogitsProcessor
    - __call__

## StoppingCriteria

A [`StoppingCriteria`] can be used to change when to stop generation (other than EOS token).
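
As a small sketch, a [`StoppingCriteriaList`] holding a [`MaxLengthCriteria`] can be built and queried by hand like
this (during generation, such a list is normally built from the arguments you pass to the generation methods):

```python
import torch
from transformers import MaxLengthCriteria, StoppingCriteriaList

stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])

input_ids = torch.randint(0, 50257, (1, 20))  # 20 tokens generated so far
scores = torch.randn(1, 50257)

print(stopping_criteria(input_ids, scores))  # True: the maximum length has been reached
```
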
[[autodoc]] StoppingCriteria
    - __call__

[[autodoc]] StoppingCriteriaList
    - __call__

[[autodoc]] MaxLengthCriteria
    - __call__

[[autodoc]] MaxTimeCriteria
    - __call__

## BeamSearch

[[autodoc]] BeamScorer
    - process
    - finalize

[[autodoc]] BeamSearchScorer
    - process
    - finalize

## Utilities

[[autodoc]] top_k_top_p_filtering

[[autodoc]] tf_top_k_top_p_filtering
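
For illustration, here is a minimal sketch of the PyTorch filtering helper applied to random logits (the vocabulary
size is arbitrary):

```python
import torch
from torch import nn
from transformers import top_k_top_p_filtering

logits = torch.randn(1, 50257)  # next-token logits for a batch of one

# Keep the 50 highest-scoring tokens, then the smallest set whose probability mass exceeds 0.95
filtered_logits = top_k_top_p_filtering(logits, top_k=50, top_p=0.95)
probs = nn.functional.softmax(filtered_logits, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```
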
82
docs/source/internal/modeling_utils.mdx
Normal file
@ -0,0 +1,82 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Custom Layers and Utilities

This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.

Most of those are only useful if you are studying the code of the models in the library.


## PyTorch custom modules

[[autodoc]] modeling_utils.Conv1D

[[autodoc]] modeling_utils.PoolerStartLogits
    - forward

[[autodoc]] modeling_utils.PoolerEndLogits
    - forward

[[autodoc]] modeling_utils.PoolerAnswerClass
    - forward

[[autodoc]] modeling_utils.SquadHeadOutput

[[autodoc]] modeling_utils.SQuADHead
    - forward

[[autodoc]] modeling_utils.SequenceSummary
    - forward

## PyTorch Helper Functions

[[autodoc]] apply_chunking_to_forward

[[autodoc]] modeling_utils.find_pruneable_heads_and_indices

[[autodoc]] modeling_utils.prune_layer

[[autodoc]] modeling_utils.prune_conv1d_layer

[[autodoc]] modeling_utils.prune_linear_layer

## TensorFlow custom layers

[[autodoc]] modeling_tf_utils.TFConv1D

[[autodoc]] modeling_tf_utils.TFSharedEmbeddings
    - call

[[autodoc]] modeling_tf_utils.TFSequenceSummary

## TensorFlow loss functions

[[autodoc]] modeling_tf_utils.TFCausalLanguageModelingLoss

[[autodoc]] modeling_tf_utils.TFMaskedLanguageModelingLoss

[[autodoc]] modeling_tf_utils.TFMultipleChoiceLoss

[[autodoc]] modeling_tf_utils.TFQuestionAnsweringLoss

[[autodoc]] modeling_tf_utils.TFSequenceClassificationLoss

[[autodoc]] modeling_tf_utils.TFTokenClassificationLoss

## TensorFlow Helper Functions

[[autodoc]] modeling_tf_utils.get_initializer

[[autodoc]] modeling_tf_utils.keras_serializable

[[autodoc]] modeling_tf_utils.shape_list
40
docs/source/internal/pipelines_utils.mdx
Normal file
@ -0,0 +1,40 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Utilities for pipelines

This page lists all the utility functions the library provides for pipelines.

Most of those are only useful if you are studying the code of the pipelines in the library.


## Argument handling

[[autodoc]] pipelines.ArgumentHandler

[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler

[[autodoc]] pipelines.QuestionAnsweringArgumentHandler

## Data format

[[autodoc]] pipelines.PipelineDataFormat

[[autodoc]] pipelines.CsvPipelineDataFormat

[[autodoc]] pipelines.JsonPipelineDataFormat

[[autodoc]] pipelines.PipedPipelineDataFormat

## Utilities

[[autodoc]] pipelines.PipelineException
38
docs/source/internal/tokenization_utils.mdx
Normal file
@ -0,0 +1,38 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Utilities for Tokenizers

This page lists all the utility functions used by the tokenizers, mainly the class
[`~tokenization_utils_base.PreTrainedTokenizerBase`], which implements the methods common to
[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`], and the mixin
[`~tokenization_utils_base.SpecialTokensMixin`].

Most of those are only useful if you are studying the code of the tokenizers in the library.

## PreTrainedTokenizerBase

[[autodoc]] tokenization_utils_base.PreTrainedTokenizerBase
    - __call__
    - all

## SpecialTokensMixin

[[autodoc]] tokenization_utils_base.SpecialTokensMixin

## Enums and namedtuples

[[autodoc]] tokenization_utils_base.TruncationStrategy

[[autodoc]] tokenization_utils_base.CharSpan

[[autodoc]] tokenization_utils_base.TokenSpan
43
docs/source/internal/trainer_utils.mdx
Normal file
@ -0,0 +1,43 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Utilities for Trainer
|
||||
|
||||
This page lists all the utility functions used by [`Trainer`].
|
||||
|
||||
Most of those are only useful if you are studying the code of the Trainer in the library.
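For orientation, here is a minimal sketch of how two of these utilities are typically used together in a training script (the accuracy computation is only an illustrative metric):

```python
import numpy as np

from transformers import EvalPrediction, set_seed

# Make the python, numpy and torch RNGs reproducible before building the model
set_seed(42)


def compute_metrics(p: EvalPrediction):
    # EvalPrediction bundles the model predictions and the reference labels
    preds = np.argmax(p.predictions, axis=-1)
    return {"accuracy": float((preds == p.label_ids).mean())}
```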
|
||||
|
||||
## Utilities
|
||||
|
||||
[[autodoc]] EvalPrediction
|
||||
|
||||
[[autodoc]] IntervalStrategy
|
||||
|
||||
[[autodoc]] set_seed
|
||||
|
||||
[[autodoc]] torch_distributed_zero_first
|
||||
|
||||
## Callbacks internals
|
||||
|
||||
[[autodoc]] trainer_callback.CallbackHandler
|
||||
|
||||
## Distributed Evaluation
|
||||
|
||||
[[autodoc]] trainer_pt_utils.DistributedTensorGatherer
|
||||
|
||||
## HfArgumentParser
|
||||
|
||||
[[autodoc]] HfArgumentParser
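As a minimal sketch (the dataclass fields below are made up for illustration), [`HfArgumentParser`] turns dataclass fields into command-line arguments:

```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser


@dataclass
class ScriptArguments:
    model_name: str = field(default="bert-base-uncased")
    learning_rate: float = field(default=5e-5)


parser = HfArgumentParser(ScriptArguments)
# Normally parsed from sys.argv; an explicit list is used here for clarity
(script_args,) = parser.parse_args_into_dataclasses(args=["--learning_rate", "3e-5"])
print(script_args.model_name, script_args.learning_rate)
```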
|
||||
|
||||
## Debug Utilities
|
||||
|
||||
[[autodoc]] debug_utils.DebugUnderflowOverflow
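A short hedged sketch, assuming an existing PyTorch `model`, of how the debug helper is attached:

```python
from transformers.debug_utils import DebugUnderflowOverflow

# Registers forward hooks on `model`; when an activation or weight contains
# inf/nan values, a report of the last recorded frames is printed.
debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=21)
```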
|
@ -1,54 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Utilities for Trainer
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
This page lists all the utility functions used by :class:`~transformers.Trainer`.
|
||||
|
||||
Most of those are only useful if you are studying the code of the Trainer in the library.
|
||||
|
||||
Utilities
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.EvalPrediction
|
||||
|
||||
.. autoclass:: transformers.IntervalStrategy
|
||||
|
||||
.. autofunction:: transformers.set_seed
|
||||
|
||||
.. autofunction:: transformers.torch_distributed_zero_first
|
||||
|
||||
|
||||
Callbacks internals
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.trainer_callback.CallbackHandler
|
||||
|
||||
|
||||
Distributed Evaluation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.trainer_pt_utils.DistributedTensorGatherer
|
||||
:members:
|
||||
|
||||
|
||||
Distributed Evaluation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.HfArgumentParser
|
||||
|
||||
|
||||
Debug Utilities
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.debug_utils.DebugUnderflowOverflow
|
106
docs/source/main_classes/callback.mdx
Normal file
@ -0,0 +1,106 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Callbacks
|
||||
|
||||
Callbacks are objects that can customize the behavior of the training loop in the PyTorch
[`Trainer`] (this feature is not yet implemented in TensorFlow). They can inspect the training loop
state (for progress reporting, logging on TensorBoard or other ML platforms...) and take decisions (like early
stopping).

Callbacks are "read only" pieces of code: apart from the [`TrainerControl`] object they return, they
cannot change anything in the training loop. For customizations that require changes in the training loop, you should
subclass [`Trainer`] and override the methods you need (see [trainer](trainer) for examples).
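To make the `TrainerControl` part of that contract concrete, here is a hedged sketch of a callback that asks the [`Trainer`] to stop once the logged training loss falls below a threshold (the threshold value is arbitrary):

```python
from transformers import TrainerCallback


class StopBelowLossCallback(TrainerCallback):
    """Requests early stopping once the logged training loss drops below a threshold."""

    def __init__(self, threshold: float = 0.1):
        self.threshold = threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        # `logs` holds what the Trainer just logged, e.g. {"loss": 0.42, ...}
        if logs is not None and logs.get("loss", float("inf")) < self.threshold:
            control.should_training_stop = True
        return control
```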
|
||||
|
||||
By default a [`Trainer`] will use the following callbacks:
|
||||
|
||||
- [`DefaultFlowCallback`] which handles the default behavior for logging, saving and evaluation.
|
||||
- [`PrinterCallback`] or [`ProgressCallback`] to display progress and print the
|
||||
logs (the first one is used if you deactivate tqdm through the [`TrainingArguments`], otherwise
|
||||
it's the second one).
|
||||
- [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4
|
||||
or tensorboardX).
|
||||
- [`~integrations.WandbCallback`] if [wandb](https://www.wandb.com/) is installed.
|
||||
- [`~integrations.CometCallback`] if [comet_ml](https://www.comet.ml/site/) is installed.
|
||||
- [`~integrations.MLflowCallback`] if [mlflow](https://www.mlflow.org/) is installed.
|
||||
- [`~integrations.AzureMLCallback`] if [azureml-sdk](https://pypi.org/project/azureml-sdk/) is
|
||||
installed.
|
||||
|
||||
The main class that implements callbacks is [`TrainerCallback`]. It gets the
|
||||
[`TrainingArguments`] used to instantiate the [`Trainer`], can access that
|
||||
Trainer's internal state via [`TrainerState`], and can take some actions on the training loop via
|
||||
[`TrainerControl`].
|
||||
|
||||
|
||||
## Available Callbacks
|
||||
|
||||
Here is the list of the available [`TrainerCallback`] in the library:
|
||||
|
||||
[[autodoc]] integrations.CometCallback
|
||||
- setup
|
||||
|
||||
[[autodoc]] DefaultFlowCallback
|
||||
|
||||
[[autodoc]] PrinterCallback
|
||||
|
||||
[[autodoc]] ProgressCallback
|
||||
|
||||
[[autodoc]] EarlyStoppingCallback
|
||||
|
||||
[[autodoc]] integrations.TensorBoardCallback
|
||||
|
||||
[[autodoc]] integrations.WandbCallback
|
||||
- setup
|
||||
|
||||
[[autodoc]] integrations.MLflowCallback
|
||||
- setup
|
||||
|
||||
[[autodoc]] integrations.AzureMLCallback
|
||||
|
||||
## TrainerCallback
|
||||
|
||||
[[autodoc]] TrainerCallback
|
||||
|
||||
Here is an example of how to register a custom callback with the PyTorch [`Trainer`]:
|
||||
|
||||
```python
|
||||
class MyCallback(TrainerCallback):
|
||||
"A callback that prints a message at the beginning of training"
|
||||
|
||||
def on_train_begin(self, args, state, control, **kwargs):
|
||||
print("Starting training")
|
||||
|
||||
trainer = Trainer(
|
||||
model,
|
||||
args,
|
||||
train_dataset=train_dataset,
|
||||
eval_dataset=eval_dataset,
|
||||
callbacks=[MyCallback] # We can either pass the callback class this way or an instance of it (MyCallback())
|
||||
)
|
||||
```
|
||||
|
||||
Another way to register a callback is to call `trainer.add_callback()` as follows:
|
||||
|
||||
```python
|
||||
trainer = Trainer(...)
|
||||
trainer.add_callback(MyCallback)
|
||||
# Alternatively, we can pass an instance of the callback class
|
||||
trainer.add_callback(MyCallback())
|
||||
```
|
||||
|
||||
## TrainerState
|
||||
|
||||
[[autodoc]] TrainerState
|
||||
|
||||
## TrainerControl
|
||||
|
||||
[[autodoc]] TrainerControl
|
@ -1,115 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Callbacks
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Callbacks are objects that can customize the behavior of the training loop in the PyTorch
|
||||
:class:`~transformers.Trainer` (this feature is not yet implemented in TensorFlow) that can inspect the training loop
|
||||
state (for progress reporting, logging on TensorBoard or other ML platforms...) and take decisions (like early
|
||||
stopping).
|
||||
|
||||
Callbacks are "read only" pieces of code, apart from the :class:`~transformers.TrainerControl` object they return, they
|
||||
cannot change anything in the training loop. For customizations that require changes in the training loop, you should
|
||||
subclass :class:`~transformers.Trainer` and override the methods you need (see :doc:`trainer` for examples).
|
||||
|
||||
By default a :class:`~transformers.Trainer` will use the following callbacks:
|
||||
|
||||
- :class:`~transformers.DefaultFlowCallback` which handles the default behavior for logging, saving and evaluation.
|
||||
- :class:`~transformers.PrinterCallback` or :class:`~transformers.ProgressCallback` to display progress and print the
|
||||
logs (the first one is used if you deactivate tqdm through the :class:`~transformers.TrainingArguments`, otherwise
|
||||
it's the second one).
|
||||
- :class:`~transformers.integrations.TensorBoardCallback` if tensorboard is accessible (either through PyTorch >= 1.4
|
||||
or tensorboardX).
|
||||
- :class:`~transformers.integrations.WandbCallback` if `wandb <https://www.wandb.com/>`__ is installed.
|
||||
- :class:`~transformers.integrations.CometCallback` if `comet_ml <https://www.comet.ml/site/>`__ is installed.
|
||||
- :class:`~transformers.integrations.MLflowCallback` if `mlflow <https://www.mlflow.org/>`__ is installed.
|
||||
- :class:`~transformers.integrations.AzureMLCallback` if `azureml-sdk <https://pypi.org/project/azureml-sdk/>`__ is
|
||||
installed.
|
||||
|
||||
The main class that implements callbacks is :class:`~transformers.TrainerCallback`. It gets the
|
||||
:class:`~transformers.TrainingArguments` used to instantiate the :class:`~transformers.Trainer`, can access that
|
||||
Trainer's internal state via :class:`~transformers.TrainerState`, and can take some actions on the training loop via
|
||||
:class:`~transformers.TrainerControl`.
|
||||
|
||||
|
||||
Available Callbacks
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Here is the list of the available :class:`~transformers.TrainerCallback` in the library:
|
||||
|
||||
.. autoclass:: transformers.integrations.CometCallback
|
||||
:members: setup
|
||||
|
||||
.. autoclass:: transformers.DefaultFlowCallback
|
||||
|
||||
.. autoclass:: transformers.PrinterCallback
|
||||
|
||||
.. autoclass:: transformers.ProgressCallback
|
||||
|
||||
.. autoclass:: transformers.EarlyStoppingCallback
|
||||
|
||||
.. autoclass:: transformers.integrations.TensorBoardCallback
|
||||
|
||||
.. autoclass:: transformers.integrations.WandbCallback
|
||||
:members: setup
|
||||
|
||||
.. autoclass:: transformers.integrations.MLflowCallback
|
||||
:members: setup
|
||||
|
||||
.. autoclass:: transformers.integrations.AzureMLCallback
|
||||
|
||||
TrainerCallback
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TrainerCallback
|
||||
:members:
|
||||
|
||||
Here is an example of how to register a custom callback with the PyTorch :class:`~transformers.Trainer`:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
class MyCallback(TrainerCallback):
|
||||
"A callback that prints a message at the beginning of training"
|
||||
|
||||
def on_train_begin(self, args, state, control, **kwargs):
|
||||
print("Starting training")
|
||||
|
||||
trainer = Trainer(
|
||||
model,
|
||||
args,
|
||||
train_dataset=train_dataset,
|
||||
eval_dataset=eval_dataset,
|
||||
callbacks=[MyCallback] # We can either pass the callback class this way or an instance of it (MyCallback())
|
||||
)
|
||||
|
||||
Another way to register a callback is to call ``trainer.add_callback()`` as follows:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
trainer = Trainer(...)
|
||||
trainer.add_callback(MyCallback)
|
||||
# Alternatively, we can pass an instance of the callback class
|
||||
trainer.add_callback(MyCallback())
|
||||
|
||||
TrainerState
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TrainerState
|
||||
:members:
|
||||
|
||||
|
||||
TrainerControl
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TrainerControl
|
||||
:members:
|
28
docs/source/main_classes/configuration.mdx
Normal file
@ -0,0 +1,28 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Configuration
|
||||
|
||||
The base class [`PretrainedConfig`] implements the common methods for loading/saving a configuration
|
||||
either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded
|
||||
from HuggingFace's AWS S3 repository).
|
||||
|
||||
Each derived config class implements model specific attributes. Common attributes present in all config classes are:
|
||||
`hidden_size`, `num_attention_heads`, and `num_hidden_layers`. Text models further implement:
|
||||
`vocab_size`.
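As a short sketch (the checkpoint name and output directory are only examples), a configuration can be loaded, tweaked and saved like this:

```python
from transformers import BertConfig

# Load the configuration of a pretrained checkpoint and override one attribute
config = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=6)
print(config.hidden_size, config.num_attention_heads, config.num_hidden_layers)

# Save it locally; `from_pretrained` can later reload it from this directory
config.save_pretrained("./my-bert-config")
```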
|
||||
|
||||
|
||||
## PretrainedConfig
|
||||
|
||||
[[autodoc]] PretrainedConfig
|
||||
- push_to_hub
|
||||
- all
|
@ -1,31 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Configuration
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
The base class :class:`~transformers.PretrainedConfig` implements the common methods for loading/saving a configuration
|
||||
either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded
|
||||
from HuggingFace's AWS S3 repository).
|
||||
|
||||
Each derived config class implements model specific attributes. Common attributes present in all config classes are:
|
||||
:obj:`hidden_size`, :obj:`num_attention_heads`, and :obj:`num_hidden_layers`. Text models further implement:
|
||||
:obj:`vocab_size`.
|
||||
|
||||
|
||||
|
||||
PretrainedConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.PretrainedConfig
|
||||
:special-members: push_to_hub
|
||||
:members:
|
64
docs/source/main_classes/data_collator.mdx
Normal file
@ -0,0 +1,64 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Data Collator
|
||||
|
||||
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of
|
||||
the same type as the elements of `train_dataset` or `eval_dataset`.
|
||||
|
||||
To be able to build batches, data collators may apply some processing (like padding). Some of them (like
|
||||
[`DataCollatorForLanguageModeling`]) also apply some random data augmentation (like random masking)
|
||||
on the formed batch.
|
||||
|
||||
Examples of use can be found in the [example scripts](../examples) or [example notebooks](../notebooks).
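As a hedged sketch (the checkpoint name is only an example), [`DataCollatorWithPadding`] pads a list of tokenized examples into a single batch of tensors:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Two tokenized examples of different lengths...
features = [tokenizer("Hello world"), tokenizer("A slightly longer sentence than the first one")]

# ...are padded to the same length and stacked into PyTorch tensors
batch = collator(features)
print(batch["input_ids"].shape, batch["attention_mask"].shape)
```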
|
||||
|
||||
|
||||
## Default data collator
|
||||
|
||||
[[autodoc]] data.data_collator.default_data_collator
|
||||
|
||||
## DefaultDataCollator
|
||||
|
||||
[[autodoc]] data.data_collator.DefaultDataCollator
|
||||
|
||||
## DataCollatorWithPadding
|
||||
|
||||
[[autodoc]] data.data_collator.DataCollatorWithPadding
|
||||
|
||||
## DataCollatorForTokenClassification
|
||||
|
||||
[[autodoc]] data.data_collator.DataCollatorForTokenClassification
|
||||
|
||||
## DataCollatorForSeq2Seq
|
||||
|
||||
[[autodoc]] data.data_collator.DataCollatorForSeq2Seq
|
||||
|
||||
## DataCollatorForLanguageModeling
|
||||
|
||||
[[autodoc]] data.data_collator.DataCollatorForLanguageModeling
|
||||
- numpy_mask_tokens
|
||||
- tf_mask_tokens
|
||||
- torch_mask_tokens
|
||||
|
||||
## DataCollatorForWholeWordMask
|
||||
|
||||
[[autodoc]] data.data_collator.DataCollatorForWholeWordMask
|
||||
- numpy_mask_tokens
|
||||
- tf_mask_tokens
|
||||
- torch_mask_tokens
|
||||
|
||||
## DataCollatorForPermutationLanguageModeling
|
||||
|
||||
[[autodoc]] data.data_collator.DataCollatorForPermutationLanguageModeling
|
||||
- numpy_mask_tokens
|
||||
- tf_mask_tokens
|
||||
- torch_mask_tokens
|
@ -1,78 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Data Collator
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of
|
||||
the same type as the elements of :obj:`train_dataset` or :obj:`eval_dataset`.
|
||||
|
||||
To be able to build batches, data collators may apply some processing (like padding). Some of them (like
|
||||
:class:`~transformers.DataCollatorForLanguageModeling`) also apply some random data augmentation (like random masking)
|
||||
on the formed batch.
|
||||
|
||||
Examples of use can be found in the :doc:`example scripts <../examples>` or :doc:`example notebooks <../notebooks>`.
|
||||
|
||||
|
||||
Default data collator
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autofunction:: transformers.data.data_collator.default_data_collator
|
||||
|
||||
|
||||
DefaultDataCollator
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DefaultDataCollator
|
||||
:members:
|
||||
|
||||
|
||||
DataCollatorWithPadding
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DataCollatorWithPadding
|
||||
:members:
|
||||
|
||||
|
||||
DataCollatorForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DataCollatorForTokenClassification
|
||||
:members:
|
||||
|
||||
|
||||
DataCollatorForSeq2Seq
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DataCollatorForSeq2Seq
|
||||
:members:
|
||||
|
||||
|
||||
DataCollatorForLanguageModeling
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DataCollatorForLanguageModeling
|
||||
:members: numpy_mask_tokens, tf_mask_tokens, torch_mask_tokens
|
||||
|
||||
|
||||
DataCollatorForWholeWordMask
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DataCollatorForWholeWordMask
|
||||
:members: numpy_mask_tokens, tf_mask_tokens, torch_mask_tokens
|
||||
|
||||
|
||||
DataCollatorForPermutationLanguageModeling
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.data.data_collator.DataCollatorForPermutationLanguageModeling
|
||||
:members: numpy_mask_tokens, tf_mask_tokens, torch_mask_tokens
|
38
docs/source/main_classes/feature_extractor.mdx
Normal file
@ -0,0 +1,38 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Feature Extractor
|
||||
|
||||
A feature extractor is in charge of preparing input features for a multi-modal model. This includes feature extraction
from sequences, *e.g.*, pre-processing audio files to Log-Mel Spectrogram features, feature extraction from images,
*e.g.*, cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow
tensors.
||||
|
||||
|
||||
## FeatureExtractionMixin
|
||||
|
||||
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
|
||||
- from_pretrained
|
||||
- save_pretrained
|
||||
|
||||
## SequenceFeatureExtractor
|
||||
|
||||
[[autodoc]] SequenceFeatureExtractor
|
||||
- pad
|
||||
|
||||
## BatchFeature
|
||||
|
||||
[[autodoc]] BatchFeature
|
||||
|
||||
## ImageFeatureExtractionMixin
|
||||
|
||||
[[autodoc]] image_utils.ImageFeatureExtractionMixin
|
@ -1,48 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
|
||||
Feature Extractor
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
A feature extractor is in charge of preparing input features for a multi-modal model. This includes feature extraction
|
||||
from sequences, *e.g.*, pre-processing audio files to Log-Mel Spectrogram features, feature extraction from images
|
||||
*e.g.* cropping image image files, but also padding, normalization, and conversion to Numpy, PyTorch, and TensorFlow
|
||||
tensors.
|
||||
|
||||
|
||||
FeatureExtractionMixin
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.feature_extraction_utils.FeatureExtractionMixin
|
||||
:members: from_pretrained, save_pretrained
|
||||
|
||||
|
||||
SequenceFeatureExtractor
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SequenceFeatureExtractor
|
||||
:members: pad
|
||||
|
||||
|
||||
BatchFeature
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.BatchFeature
|
||||
:members:
|
||||
|
||||
|
||||
ImageFeatureExtractionMixin
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.image_utils.ImageFeatureExtractionMixin
|
||||
:members:
|
20
docs/source/main_classes/keras_callbacks.mdx
Normal file
@ -0,0 +1,20 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Keras callbacks
|
||||
|
||||
When training a Transformers model with Keras, there are some library-specific callbacks available to automate common
|
||||
tasks:
|
||||
|
||||
## PushToHubCallback
|
||||
|
||||
[[autodoc]] keras_callbacks.PushToHubCallback
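A hedged usage sketch, assuming a compiled Keras `model`, a `tokenizer`, and `tf.data` datasets already exist (the directory and repository names are placeholders):

```python
from transformers.keras_callbacks import PushToHubCallback

push_to_hub_callback = PushToHubCallback(
    output_dir="./model_checkpoints",  # where checkpoints are written locally
    tokenizer=tokenizer,  # pushed alongside the model weights
    hub_model_id="my-username/my-finetuned-model",
)

model.fit(train_dataset, validation_data=eval_dataset, epochs=3, callbacks=[push_to_hub_callback])
```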
|
@ -1,22 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Keras callbacks
|
||||
=======================================================================================================================
|
||||
|
||||
When training a Transformers model with Keras, there are some library-specific callbacks available to automate common
|
||||
tasks:
|
||||
|
||||
PushToHubCallback
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
.. autoclass:: transformers.keras_callbacks.PushToHubCallback
|
80
docs/source/main_classes/logging.mdx
Normal file
@ -0,0 +1,80 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Logging
|
||||
|
||||
🤗 Transformers has a centralized logging system, so that you can set up the verbosity of the library easily.
|
||||
|
||||
Currently the default verbosity of the library is `WARNING`.
|
||||
|
||||
To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
|
||||
to the INFO level.
|
||||
|
||||
```python
|
||||
import transformers
|
||||
transformers.logging.set_verbosity_info()
|
||||
```
|
||||
|
||||
You can also use the environment variable `TRANSFORMERS_VERBOSITY` to override the default verbosity. You can set it
|
||||
to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:
|
||||
|
||||
```bash
|
||||
TRANSFORMERS_VERBOSITY=error ./myprogram.py
|
||||
```
|
||||
|
||||
Additionally, some `warnings` can be disabled by setting the environment variable
|
||||
`TRANSFORMERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using
|
||||
[`logger.warning_advice`]. For example:
|
||||
|
||||
|
||||
```bash
|
||||
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
|
||||
```
|
||||
|
||||
All the methods of this logging module are documented below; the main ones are
[`logging.get_verbosity`] to get the current level of verbosity in the logger and
[`logging.set_verbosity`] to set the verbosity to the level of your choice. In order (from the least
verbose to the most verbose), those levels (with their corresponding int values in parentheses) are:
|
||||
|
||||
- `transformers.logging.CRITICAL` or `transformers.logging.FATAL` (int value, 50): only report the most
  critical errors.
- `transformers.logging.ERROR` (int value, 40): only report errors.
- `transformers.logging.WARNING` or `transformers.logging.WARN` (int value, 30): only report errors and
  warnings. This is the default level used by the library.
- `transformers.logging.INFO` (int value, 20): report errors, warnings and basic information.
- `transformers.logging.DEBUG` (int value, 10): report all information.
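For example, the module-level constants above can be used directly with the getter and setter:

```python
import transformers

# Query the current level, then switch to the most verbose one
current_level = transformers.logging.get_verbosity()
transformers.logging.set_verbosity(transformers.logging.DEBUG)

# Loggers obtained through the library follow that verbosity
logger = transformers.logging.get_logger("transformers")
logger.info("This message is now visible")
```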
|
||||
|
||||
## Base setters
|
||||
|
||||
[[autodoc]] logging.set_verbosity_error
|
||||
|
||||
[[autodoc]] logging.set_verbosity_warning
|
||||
|
||||
[[autodoc]] logging.set_verbosity_info
|
||||
|
||||
[[autodoc]] logging.set_verbosity_debug
|
||||
|
||||
## Other functions
|
||||
|
||||
[[autodoc]] logging.get_verbosity
|
||||
|
||||
[[autodoc]] logging.set_verbosity
|
||||
|
||||
[[autodoc]] logging.get_logger
|
||||
|
||||
[[autodoc]] logging.enable_default_handler
|
||||
|
||||
[[autodoc]] logging.disable_default_handler
|
||||
|
||||
[[autodoc]] logging.enable_explicit_format
|
||||
|
||||
[[autodoc]] logging.reset_format
|
@ -1,83 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Logging
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
🤗 Transformers has a centralized logging system, so that you can setup the verbosity of the library easily.
|
||||
|
||||
Currently the default verbosity of the library is ``WARNING``.
|
||||
|
||||
To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
|
||||
to the INFO level.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import transformers
|
||||
transformers.logging.set_verbosity_info()
|
||||
|
||||
You can also use the environment variable ``TRANSFORMERS_VERBOSITY`` to override the default verbosity. You can set it
|
||||
to one of the following: ``debug``, ``info``, ``warning``, ``error``, ``critical``. For example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
TRANSFORMERS_VERBOSITY=error ./myprogram.py
|
||||
|
||||
Additionally, some ``warnings`` can be disabled by setting the environment variable
|
||||
``TRANSFORMERS_NO_ADVISORY_WARNINGS`` to a true value, like `1`. This will disable any warning that is logged using
|
||||
:meth:`logger.warning_advice`. For example:
|
||||
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
|
||||
|
||||
All the methods of this logging module are documented below, the main ones are
|
||||
:func:`transformers.logging.get_verbosity` to get the current level of verbosity in the logger and
|
||||
:func:`transformers.logging.set_verbosity` to set the verbosity to the level of your choice. In order (from the least
|
||||
verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are:
|
||||
|
||||
- :obj:`transformers.logging.CRITICAL` or :obj:`transformers.logging.FATAL` (int value, 50): only report the most
|
||||
critical errors.
|
||||
- :obj:`transformers.logging.ERROR` (int value, 40): only report errors.
|
||||
- :obj:`transformers.logging.WARNING` or :obj:`transformers.logging.WARN` (int value, 30): only reports error and
|
||||
warnings. This the default level used by the library.
|
||||
- :obj:`transformers.logging.INFO` (int value, 20): reports error, warnings and basic information.
|
||||
- :obj:`transformers.logging.DEBUG` (int value, 10): report all information.
|
||||
|
||||
Base setters
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autofunction:: transformers.logging.set_verbosity_error
|
||||
|
||||
.. autofunction:: transformers.logging.set_verbosity_warning
|
||||
|
||||
.. autofunction:: transformers.logging.set_verbosity_info
|
||||
|
||||
.. autofunction:: transformers.logging.set_verbosity_debug
|
||||
|
||||
Other functions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autofunction:: transformers.logging.get_verbosity
|
||||
|
||||
.. autofunction:: transformers.logging.set_verbosity
|
||||
|
||||
.. autofunction:: transformers.logging.get_logger
|
||||
|
||||
.. autofunction:: transformers.logging.enable_default_handler
|
||||
|
||||
.. autofunction:: transformers.logging.disable_default_handler
|
||||
|
||||
.. autofunction:: transformers.logging.enable_explicit_format
|
||||
|
||||
.. autofunction:: transformers.logging.reset_format
|
99
docs/source/main_classes/model.mdx
Normal file
@ -0,0 +1,99 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Models
|
||||
|
||||
The base classes [`PreTrainedModel`], [`TFPreTrainedModel`], and
|
||||
[`FlaxPreTrainedModel`] implement the common methods for loading/saving a model either from a local
|
||||
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
|
||||
S3 repository).
|
||||
|
||||
[`PreTrainedModel`] and [`TFPreTrainedModel`] also implement a few methods which
|
||||
are common among all the models to:
|
||||
|
||||
- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model (a short sketch of both follows this list).
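A hedged sketch of both operations (the checkpoint name, added tokens and head indices are only examples):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens to the tokenizer, then resize the embedding matrix to match
tokenizer.add_tokens(["<custom_token_1>", "<custom_token_2>"])
model.resize_token_embeddings(len(tokenizer))

# Prune attention heads 0 and 2 of the first layer
model.prune_heads({0: [0, 2]})
```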
|
||||
|
||||
The other methods that are common to each model are defined in [`~modeling_utils.ModuleUtilsMixin`]
|
||||
(for the PyTorch models) and [`~modeling_tf_utils.TFModuleUtilsMixin`] (for the TensorFlow models) or
|
||||
for text generation, [`~generation_utils.GenerationMixin`] (for the PyTorch models),
|
||||
[`~generation_tf_utils.TFGenerationMixin`] (for the TensorFlow models) and
|
||||
[`~generation_flax_utils.FlaxGenerationMixin`] (for the Flax/JAX models).
|
||||
|
||||
|
||||
## PreTrainedModel
|
||||
|
||||
[[autodoc]] PreTrainedModel
|
||||
- push_to_hub
|
||||
- all
|
||||
|
||||
<a id='from_pretrained-torch-dtype'></a>
|
||||
|
||||
### Model Instantiation dtype
|
||||
|
||||
Under PyTorch a model normally gets instantiated with `torch.float32` format. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired `dtype` using the `torch_dtype` argument:
|
||||
|
||||
```python
|
||||
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
|
||||
```
|
||||
|
||||
or, if you want the model to always load in the most optimal memory pattern, you can use the special value `"auto"`,
|
||||
and then `dtype` will be automatically derived from the model's weights:
|
||||
|
||||
```python
|
||||
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
|
||||
```
|
||||
|
||||
Models instantiated from scratch can also be told which `dtype` to use with:
|
||||
|
||||
```python
|
||||
config = T5Config.from_pretrained("t5")
|
||||
model = AutoModel.from_config(config)
|
||||
```
|
||||
|
||||
Due to PyTorch's design, this functionality is only available for floating dtypes.
|
||||
|
||||
|
||||
|
||||
## ModuleUtilsMixin
|
||||
|
||||
[[autodoc]] modeling_utils.ModuleUtilsMixin
|
||||
|
||||
## TFPreTrainedModel
|
||||
|
||||
[[autodoc]] TFPreTrainedModel
|
||||
- push_to_hub
|
||||
- all
|
||||
|
||||
## TFModelUtilsMixin
|
||||
|
||||
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
|
||||
|
||||
## FlaxPreTrainedModel
|
||||
|
||||
[[autodoc]] FlaxPreTrainedModel
|
||||
- push_to_hub
|
||||
- all
|
||||
|
||||
## Generation
|
||||
|
||||
[[autodoc]] generation_utils.GenerationMixin
|
||||
|
||||
[[autodoc]] generation_tf_utils.TFGenerationMixin
|
||||
|
||||
[[autodoc]] generation_flax_utils.FlaxGenerationMixin
|
||||
|
||||
## Pushing to the Hub
|
||||
|
||||
[[autodoc]] file_utils.PushToHubMixin
|
@ -1,120 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Models
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
The base classes :class:`~transformers.PreTrainedModel`, :class:`~transformers.TFPreTrainedModel`, and
|
||||
:class:`~transformers.FlaxPreTrainedModel` implement the common methods for loading/saving a model either from a local
|
||||
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
|
||||
S3 repository).
|
||||
|
||||
:class:`~transformers.PreTrainedModel` and :class:`~transformers.TFPreTrainedModel` also implement a few methods which
|
||||
are common among all the models to:
|
||||
|
||||
- resize the input token embeddings when new tokens are added to the vocabulary
|
||||
- prune the attention heads of the model.
|
||||
|
||||
The other methods that are common to each model are defined in :class:`~transformers.modeling_utils.ModuleUtilsMixin`
|
||||
(for the PyTorch models) and :class:`~transformers.modeling_tf_utils.TFModuleUtilsMixin` (for the TensorFlow models) or
|
||||
for text generation, :class:`~transformers.generation_utils.GenerationMixin` (for the PyTorch models),
|
||||
:class:`~transformers.generation_tf_utils.TFGenerationMixin` (for the TensorFlow models) and
|
||||
:class:`~transformers.generation_flax_utils.FlaxGenerationMixin` (for the Flax/JAX models).
|
||||
|
||||
|
||||
PreTrainedModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.PreTrainedModel
|
||||
:special-members: push_to_hub
|
||||
:members:
|
||||
|
||||
|
||||
.. _from_pretrained-torch-dtype:
|
||||
|
||||
Model Instantiation dtype
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Under Pytorch a model normally gets instantiated with ``torch.float32`` format. This can be an issue if one tries to
|
||||
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
|
||||
either explicitly pass the desired ``dtype`` using ``torch_dtype`` argument:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
|
||||
|
||||
or, if you want the model to always load in the most optimal memory pattern, you can use the special value ``"auto"``,
|
||||
and then ``dtype`` will be automatically derived from the model's weights:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
|
||||
|
||||
Models instantiated from scratch can also be told which ``dtype`` to use with:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
config = T5Config.from_pretrained("t5")
|
||||
model = AutoModel.from_config(config)
|
||||
|
||||
Due to Pytorch design, this functionality is only available for floating dtypes.
|
||||
|
||||
|
||||
|
||||
ModuleUtilsMixin
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_utils.ModuleUtilsMixin
|
||||
:members:
|
||||
|
||||
|
||||
TFPreTrainedModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFPreTrainedModel
|
||||
:special-members: push_to_hub
|
||||
:members:
|
||||
|
||||
|
||||
TFModelUtilsMixin
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_utils.TFModelUtilsMixin
|
||||
:members:
|
||||
|
||||
|
||||
FlaxPreTrainedModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxPreTrainedModel
|
||||
:special-members: push_to_hub
|
||||
:members:
|
||||
|
||||
|
||||
Generation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.generation_utils.GenerationMixin
|
||||
:members:
|
||||
|
||||
.. autoclass:: transformers.generation_tf_utils.TFGenerationMixin
|
||||
:members:
|
||||
|
||||
.. autoclass:: transformers.generation_flax_utils.FlaxGenerationMixin
|
||||
:members:
|
||||
|
||||
|
||||
Pushing to the Hub
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.file_utils.PushToHubMixin
|
||||
:members:
|
71
docs/source/main_classes/optimizer_schedules.mdx
Normal file
@ -0,0 +1,71 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Optimization
|
||||
|
||||
The `.optimization` module provides:
|
||||
|
||||
- an optimizer with weight decay fixed that can be used to fine-tune models,
- several schedules in the form of schedule objects that inherit from `_LRSchedule`, and
- a gradient accumulation class to accumulate the gradients of multiple batches (a usage sketch follows this list).
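Here is a hedged sketch of how the optimizer and a schedule are typically wired together (the tiny linear layer stands in for a real model, and the step counts are arbitrary):

```python
import torch

from transformers import AdamW, get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for any PyTorch model
optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)

for step in range(1000):
    # loss.backward() would be called here in a real training loop
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```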
|
||||
|
||||
## AdamW (PyTorch)
|
||||
|
||||
[[autodoc]] AdamW
|
||||
|
||||
## AdaFactor (PyTorch)
|
||||
|
||||
[[autodoc]] Adafactor
|
||||
|
||||
## AdamWeightDecay (TensorFlow)
|
||||
|
||||
[[autodoc]] AdamWeightDecay
|
||||
|
||||
[[autodoc]] create_optimizer
|
||||
|
||||
## Schedules
|
||||
|
||||
### Learning Rate Schedules (Pytorch)
|
||||
|
||||
[[autodoc]] SchedulerType
|
||||
|
||||
[[autodoc]] get_scheduler
|
||||
|
||||
[[autodoc]] get_constant_schedule
|
||||
|
||||
[[autodoc]] get_constant_schedule_with_warmup
|
||||
|
||||
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png"/>
|
||||
|
||||
[[autodoc]] get_cosine_schedule_with_warmup
|
||||
|
||||
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png"/>
|
||||
|
||||
[[autodoc]] get_cosine_with_hard_restarts_schedule_with_warmup
|
||||
|
||||
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png"/>
|
||||
|
||||
[[autodoc]] get_linear_schedule_with_warmup
|
||||
|
||||
<img alt="" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png"/>
|
||||
|
||||
[[autodoc]] get_polynomial_decay_schedule_with_warmup
|
||||
|
||||
### Warmup (TensorFlow)
|
||||
|
||||
[[autodoc]] WarmUp
|
||||
|
||||
## Gradient Strategies
|
||||
|
||||
### GradientAccumulator (TensorFlow)
|
||||
|
||||
[[autodoc]] GradientAccumulator
|
@ -1,97 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Optimization
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
The ``.optimization`` module provides:
|
||||
|
||||
- an optimizer with weight decay fixed that can be used to fine-tuned models, and
|
||||
- several schedules in the form of schedule objects that inherit from ``_LRSchedule``:
|
||||
- a gradient accumulation class to accumulate the gradients of multiple batches
|
||||
|
||||
AdamW (PyTorch)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.AdamW
|
||||
:members:
|
||||
|
||||
AdaFactor (PyTorch)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Adafactor
|
||||
|
||||
AdamWeightDecay (TensorFlow)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.AdamWeightDecay
|
||||
|
||||
.. autofunction:: transformers.create_optimizer
|
||||
|
||||
Schedules
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Learning Rate Schedules (Pytorch)
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. autoclass:: transformers.SchedulerType
|
||||
|
||||
.. autofunction:: transformers.get_scheduler
|
||||
|
||||
.. autofunction:: transformers.get_constant_schedule
|
||||
|
||||
|
||||
.. autofunction:: transformers.get_constant_schedule_with_warmup
|
||||
|
||||
.. image:: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png
|
||||
:target: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_constant_schedule.png
|
||||
:alt:
|
||||
|
||||
|
||||
.. autofunction:: transformers.get_cosine_schedule_with_warmup
|
||||
|
||||
.. image:: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png
|
||||
:target: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_schedule.png
|
||||
:alt:
|
||||
|
||||
|
||||
.. autofunction:: transformers.get_cosine_with_hard_restarts_schedule_with_warmup
|
||||
|
||||
.. image:: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png
|
||||
:target: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_cosine_hard_restarts_schedule.png
|
||||
:alt:
|
||||
|
||||
|
||||
|
||||
.. autofunction:: transformers.get_linear_schedule_with_warmup
|
||||
|
||||
.. image:: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png
|
||||
:target: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/warmup_linear_schedule.png
|
||||
:alt:
|
||||
|
||||
|
||||
.. autofunction:: transformers.get_polynomial_decay_schedule_with_warmup
|
||||
|
||||
|
||||
Warmup (TensorFlow)
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. autoclass:: transformers.WarmUp
|
||||
:members:
|
||||
|
||||
Gradient Strategies
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
GradientAccumulator (TensorFlow)
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. autoclass:: transformers.GradientAccumulator
|
269
docs/source/main_classes/output.mdx
Normal file
@ -0,0 +1,269 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Model outputs
|
||||
|
||||
All models have outputs that are instances of subclasses of [`~file_utils.ModelOutput`]. Those are
|
||||
data structures containing all the information returned by the model, but that can also be used as tuples or
|
||||
dictionaries.
|
||||
|
||||
Let's see how this looks in an example:
|
||||
|
||||
```python
|
||||
from transformers import BertTokenizer, BertForSequenceClassification
|
||||
import torch
|
||||
|
||||
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
|
||||
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
|
||||
|
||||
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
|
||||
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
|
||||
outputs = model(**inputs, labels=labels)
|
||||
```
|
||||
|
||||
The `outputs` object is a [`~modeling_outputs.SequenceClassifierOutput`]. As we can see in the
documentation of that class below, it has an optional `loss`, a `logits`, an optional `hidden_states` and
an optional `attentions` attribute. Here we have the `loss` since we passed along `labels`, but we don't have
`hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or
`output_attentions=True`.
|
||||
|
||||
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
|
||||
will get `None`. Here for instance `outputs.loss` is the loss computed by the model, and `outputs.attentions` is
|
||||
`None`.
|
||||
|
||||
When considering our `outputs` object as a tuple, it only contains the attributes that don't have `None` values.
Here, for instance, it has two elements, `loss` then `logits`, so
|
||||
|
||||
```python
|
||||
outputs[:2]
|
||||
```
|
||||
|
||||
will return the tuple `(outputs.loss, outputs.logits)` for instance.
|
||||
|
||||
When considering our `outputs` object as a dictionary, it only contains the attributes that don't have `None`
values. Here, for instance, it has two keys that are `loss` and `logits`.
|
||||
|
||||
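As a quick illustration (a minimal sketch reusing the `outputs` object from the first example), the same values can
be read by attribute, by key, or by index:

```python
loss, logits = outputs.to_tuple()  # to_tuple() keeps only the attributes that are not None

print(outputs.loss)  # attribute access
print(outputs["logits"])  # dictionary-style access
print(outputs[0])  # tuple-style access (here, the loss)
```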
We document here the generic model outputs that are used by more than one model type. Specific output types are
|
||||
documented on their corresponding model page.
|
||||
|
||||
## ModelOutput
|
||||
|
||||
[[autodoc]] file_utils.ModelOutput
|
||||
- to_tuple
|
||||
|
||||
## BaseModelOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.BaseModelOutput
|
||||
|
||||
## BaseModelOutputWithPooling
|
||||
|
||||
[[autodoc]] modeling_outputs.BaseModelOutputWithPooling
|
||||
|
||||
## BaseModelOutputWithCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_outputs.BaseModelOutputWithCrossAttentions
|
||||
|
||||
## BaseModelOutputWithPoolingAndCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
|
||||
|
||||
## BaseModelOutputWithPast
|
||||
|
||||
[[autodoc]] modeling_outputs.BaseModelOutputWithPast
|
||||
|
||||
## BaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
## Seq2SeqModelOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.Seq2SeqModelOutput
|
||||
|
||||
## CausalLMOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.CausalLMOutput
|
||||
|
||||
## CausalLMOutputWithCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_outputs.CausalLMOutputWithCrossAttentions
|
||||
|
||||
## CausalLMOutputWithPast
|
||||
|
||||
[[autodoc]] modeling_outputs.CausalLMOutputWithPast
|
||||
|
||||
## MaskedLMOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.MaskedLMOutput
|
||||
|
||||
## Seq2SeqLMOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.Seq2SeqLMOutput
|
||||
|
||||
## NextSentencePredictorOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.NextSentencePredictorOutput
|
||||
|
||||
## SequenceClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.SequenceClassifierOutput
|
||||
|
||||
## Seq2SeqSequenceClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.Seq2SeqSequenceClassifierOutput
|
||||
|
||||
## MultipleChoiceModelOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.MultipleChoiceModelOutput
|
||||
|
||||
## TokenClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.TokenClassifierOutput
|
||||
|
||||
## QuestionAnsweringModelOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.QuestionAnsweringModelOutput
|
||||
|
||||
## Seq2SeqQuestionAnsweringModelOutput
|
||||
|
||||
[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
|
||||
|
||||
## TFBaseModelOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFBaseModelOutput
|
||||
|
||||
## TFBaseModelOutputWithPooling
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPooling
|
||||
|
||||
## TFBaseModelOutputWithPoolingAndCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
|
||||
|
||||
## TFBaseModelOutputWithPast
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPast
|
||||
|
||||
## TFBaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
## TFSeq2SeqModelOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFSeq2SeqModelOutput
|
||||
|
||||
## TFCausalLMOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFCausalLMOutput
|
||||
|
||||
## TFCausalLMOutputWithCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
|
||||
|
||||
## TFCausalLMOutputWithPast
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFCausalLMOutputWithPast
|
||||
|
||||
## TFMaskedLMOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFMaskedLMOutput
|
||||
|
||||
## TFSeq2SeqLMOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFSeq2SeqLMOutput
|
||||
|
||||
## TFNextSentencePredictorOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFNextSentencePredictorOutput
|
||||
|
||||
## TFSequenceClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutput
|
||||
|
||||
## TFSeq2SeqSequenceClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
|
||||
|
||||
## TFMultipleChoiceModelOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFMultipleChoiceModelOutput
|
||||
|
||||
## TFTokenClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFTokenClassifierOutput
|
||||
|
||||
## TFQuestionAnsweringModelOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFQuestionAnsweringModelOutput
|
||||
|
||||
## TFSeq2SeqQuestionAnsweringModelOutput
|
||||
|
||||
[[autodoc]] modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
|
||||
|
||||
## FlaxBaseModelOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutput
|
||||
|
||||
## FlaxBaseModelOutputWithPast
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPast
|
||||
|
||||
## FlaxBaseModelOutputWithPooling
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPooling
|
||||
|
||||
## FlaxBaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
## FlaxSeq2SeqModelOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqModelOutput
|
||||
|
||||
## FlaxCausalLMOutputWithCrossAttentions
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
|
||||
|
||||
## FlaxMaskedLMOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxMaskedLMOutput
|
||||
|
||||
## FlaxSeq2SeqLMOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqLMOutput
|
||||
|
||||
## FlaxNextSentencePredictorOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxNextSentencePredictorOutput
|
||||
|
||||
## FlaxSequenceClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxSequenceClassifierOutput
|
||||
|
||||
## FlaxSeq2SeqSequenceClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
|
||||
|
||||
## FlaxMultipleChoiceModelOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxMultipleChoiceModelOutput
|
||||
|
||||
## FlaxTokenClassifierOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxTokenClassifierOutput
|
||||
|
||||
## FlaxQuestionAnsweringModelOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
|
||||
|
||||
## FlaxSeq2SeqQuestionAnsweringModelOutput
|
||||
|
||||
[[autodoc]] modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
|
@ -1,412 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Model outputs
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
All models have outputs that are instances of subclasses of :class:`~transformers.file_utils.ModelOutput`. Those are
|
||||
data structures containing all the information returned by the model, but that can also be used as tuples or
|
||||
dictionaries.
|
||||
|
||||
Let's see of this looks on an example:
|
||||
|
||||
.. code-block::
|
||||
|
||||
from transformers import BertTokenizer, BertForSequenceClassification
|
||||
import torch
|
||||
|
||||
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
|
||||
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
|
||||
|
||||
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
|
||||
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
|
||||
outputs = model(**inputs, labels=labels)
|
||||
|
||||
The ``outputs`` object is a :class:`~transformers.modeling_outputs.SequenceClassifierOutput`, as we can see in the
|
||||
documentation of that class below, it means it has an optional ``loss``, a ``logits`` an optional ``hidden_states`` and
|
||||
an optional ``attentions`` attribute. Here we have the ``loss`` since we passed along ``labels``, but we don't have
|
||||
``hidden_states`` and ``attentions`` because we didn't pass ``output_hidden_states=True`` or
|
||||
``output_attentions=True``.
|
||||
|
||||
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you
|
||||
will get ``None``. Here for instance ``outputs.loss`` is the loss computed by the model, and ``outputs.attentions`` is
|
||||
``None``.
|
||||
|
||||
When considering our ``outputs`` object as tuple, it only considers the attributes that don't have ``None`` values.
|
||||
Here for instance, it has two elements, ``loss`` then ``logits``, so
|
||||
|
||||
.. code-block::
|
||||
|
||||
outputs[:2]
|
||||
|
||||
will return the tuple ``(outputs.loss, outputs.logits)`` for instance.
|
||||
|
||||
When considering our ``outputs`` object as dictionary, it only considers the attributes that don't have ``None``
|
||||
values. Here for instance, it has two keys that are ``loss`` and ``logits``.
|
||||
|
||||
We document here the generic model outputs that are used by more than one model type. Specific output types are
|
||||
documented on their corresponding model page.
|
||||
|
||||
ModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.file_utils.ModelOutput
|
||||
:members: to_tuple
|
||||
|
||||
|
||||
BaseModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.BaseModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
BaseModelOutputWithPooling
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.BaseModelOutputWithPooling
|
||||
:members:
|
||||
|
||||
|
||||
BaseModelOutputWithCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.BaseModelOutputWithCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
BaseModelOutputWithPoolingAndCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
BaseModelOutputWithPast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.BaseModelOutputWithPast
|
||||
:members:
|
||||
|
||||
|
||||
BaseModelOutputWithPastAndCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
Seq2SeqModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.Seq2SeqModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
CausalLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.CausalLMOutput
|
||||
:members:
|
||||
|
||||
|
||||
CausalLMOutputWithCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.CausalLMOutputWithCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
CausalLMOutputWithPast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.CausalLMOutputWithPast
|
||||
:members:
|
||||
|
||||
|
||||
MaskedLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.MaskedLMOutput
|
||||
:members:
|
||||
|
||||
|
||||
Seq2SeqLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.Seq2SeqLMOutput
|
||||
:members:
|
||||
|
||||
|
||||
NextSentencePredictorOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.NextSentencePredictorOutput
|
||||
:members:
|
||||
|
||||
|
||||
SequenceClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.SequenceClassifierOutput
|
||||
:members:
|
||||
|
||||
|
||||
Seq2SeqSequenceClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput
|
||||
:members:
|
||||
|
||||
|
||||
MultipleChoiceModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.MultipleChoiceModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
TokenClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.TokenClassifierOutput
|
||||
:members:
|
||||
|
||||
|
||||
QuestionAnsweringModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.QuestionAnsweringModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
Seq2SeqQuestionAnsweringModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFBaseModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFBaseModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFBaseModelOutputWithPooling
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling
|
||||
:members:
|
||||
|
||||
|
||||
TFBaseModelOutputWithPoolingAndCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
TFBaseModelOutputWithPast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFBaseModelOutputWithPast
|
||||
:members:
|
||||
|
||||
|
||||
TFBaseModelOutputWithPastAndCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
TFSeq2SeqModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFSeq2SeqModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFCausalLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFCausalLMOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFCausalLMOutputWithCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions
|
||||
:members:
|
||||
|
||||
|
||||
TFCausalLMOutputWithPast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFCausalLMOutputWithPast
|
||||
:members:
|
||||
|
||||
|
||||
TFMaskedLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFMaskedLMOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFSeq2SeqLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFSeq2SeqLMOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFNextSentencePredictorOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFNextSentencePredictorOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFSequenceClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFSequenceClassifierOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFSeq2SeqSequenceClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFMultipleChoiceModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFTokenClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFTokenClassifierOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFQuestionAnsweringModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
TFSeq2SeqQuestionAnsweringModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput
|
||||
:members:
|
||||
|
||||
|
||||
FlaxBaseModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxBaseModelOutput
|
||||
|
||||
|
||||
FlaxBaseModelOutputWithPast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPast
|
||||
|
||||
|
||||
FlaxBaseModelOutputWithPooling
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling
|
||||
|
||||
|
||||
FlaxBaseModelOutputWithPastAndCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions
|
||||
|
||||
|
||||
FlaxSeq2SeqModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput
|
||||
|
||||
|
||||
FlaxCausalLMOutputWithCrossAttentions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions
|
||||
|
||||
|
||||
FlaxMaskedLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxMaskedLMOutput
|
||||
|
||||
|
||||
FlaxSeq2SeqLMOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput
|
||||
|
||||
|
||||
FlaxNextSentencePredictorOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput
|
||||
|
||||
|
||||
FlaxSequenceClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput
|
||||
|
||||
|
||||
FlaxSeq2SeqSequenceClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput
|
||||
|
||||
|
||||
FlaxMultipleChoiceModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput
|
||||
|
||||
|
||||
FlaxTokenClassifierOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxTokenClassifierOutput
|
||||
|
||||
|
||||
FlaxQuestionAnsweringModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput
|
||||
|
||||
|
||||
FlaxSeq2SeqQuestionAnsweringModelOutput
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput
|
375
docs/source/main_classes/pipelines.mdx
Normal file
@ -0,0 +1,375 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Pipelines
|
||||
|
||||
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of
|
||||
the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity
|
||||
Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the
|
||||
[task summary](../task_summary) for examples of use.
|
||||
|
||||
There are two categories of pipeline abstractions to be aware of:
|
||||
|
||||
- The [`pipeline`] which is the most powerful object encapsulating all other pipelines.
|
||||
- The other task-specific pipelines:
|
||||
|
||||
- [`AudioClassificationPipeline`]
|
||||
- [`AutomaticSpeechRecognitionPipeline`]
|
||||
- [`ConversationalPipeline`]
|
||||
- [`FeatureExtractionPipeline`]
|
||||
- [`FillMaskPipeline`]
|
||||
- [`ImageClassificationPipeline`]
|
||||
- [`ImageSegmentationPipeline`]
|
||||
- [`ObjectDetectionPipeline`]
|
||||
- [`QuestionAnsweringPipeline`]
|
||||
- [`SummarizationPipeline`]
|
||||
- [`TableQuestionAnsweringPipeline`]
|
||||
- [`TextClassificationPipeline`]
|
||||
- [`TextGenerationPipeline`]
|
||||
- [`Text2TextGenerationPipeline`]
|
||||
- [`TokenClassificationPipeline`]
|
||||
- [`TranslationPipeline`]
|
||||
- [`ZeroShotClassificationPipeline`]
|
||||
|
||||
## The pipeline abstraction
|
||||
|
||||
The *pipeline* abstraction is a wrapper around all the other available pipelines. It is instantiated like any other
pipeline but provides additional quality-of-life features.
|
||||
|
||||
Simple call on one item:
|
||||
|
||||
```python
|
||||
>>> pipe = pipeline("text-classification")
|
||||
>>> pipe("This restaurant is awesome")
|
||||
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
|
||||
```
|
||||
|
||||
If you want to use a specific model from the [hub](https://huggingface.co) you can ignore the task if the model on
|
||||
the hub already defines it:
|
||||
|
||||
```python
|
||||
>>> pipe = pipeline(model="roberta-large-mnli")
|
||||
>>> pipe("This restaurant is awesome")
|
||||
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
|
||||
```
|
||||
|
||||
To call a pipeline on many items, you can call it with a *list*.
|
||||
|
||||
```python
|
||||
>>> pipe = pipeline("text-classification")
|
||||
>>> pipe(["This restaurant is awesome", "This restaurant is aweful"])
|
||||
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
|
||||
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
|
||||
```
|
||||
|
||||
To iterate over full datasets it is recommended to use a `dataset` directly. This means you don't need to allocate
the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on
GPU. If it doesn't, don't hesitate to create an issue.
|
||||
|
||||
```python
|
||||
import datasets
|
||||
from transformers import pipeline
|
||||
from transformers.pipelines.base import KeyDataset
|
||||
import tqdm
|
||||
|
||||
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
|
||||
dataset = datasets.load_dataset("superb", name="asr", split="test")
|
||||
|
||||
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
|
||||
# as we're not interested in the *target* part of the dataset.
|
||||
for out in tqdm.tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....
|
||||
```
|
||||
|
||||
[[autodoc]] pipeline
|
||||
|
||||
## Pipeline batching
|
||||
|
||||
All pipelines (except *zero-shot-classification* and *question-answering* currently) can use batching. This will work
|
||||
whenever the pipeline uses its streaming ability (so when passing lists or `Dataset`).
|
||||
|
||||
```python
|
||||
from transformers import pipeline
|
||||
from transformers.pipelines.base import KeyDataset
|
||||
import datasets
|
||||
import tqdm
|
||||
|
||||
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
|
||||
pipe = pipeline("text-classification", device=0)
|
||||
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
    print(out)
    # [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
    # Exactly the same output as before, but the contents are passed
    # as batches to the model
|
||||
```
|
||||
|
||||
<Tip warning={true}>
|
||||
|
||||
However, this is not automatically a win for performance. It can be either a 10x speedup or a 5x slowdown depending
on the hardware, the data and the actual model being used.

Example where it's mostly a speedup:
|
||||
|
||||
</Tip>
|
||||
|
||||
```python
|
||||
from transformers import pipeline
|
||||
from torch.utils.data import Dataset
|
||||
import tqdm
|
||||
|
||||
|
||||
pipe = pipeline("text-classification", device=0)
|
||||
|
||||
|
||||
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        return "This is a test"


dataset = MyDataset()

for batch_size in [1, 8, 64, 256]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm.tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass
|
||||
```
|
||||
|
||||
```
|
||||
# On GTX 970
|
||||
------------------------------
|
||||
Streaming no batching
|
||||
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=8
|
||||
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=64
|
||||
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=256
|
||||
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
|
||||
(diminishing returns, saturated the GPU)
|
||||
```
|
||||
|
||||
Example where it's mostly a slowdown:
|
||||
|
||||
```python
|
||||
class MyDataset(Dataset):
    def __len__(self):
        return 5000

    def __getitem__(self, i):
        if i % 64 == 0:
            n = 100
        else:
            n = 1
        return "This is a test" * n
|
||||
```
|
||||
|
||||
This dataset yields an occasional very long sentence compared to the others. In that case, the **whole** batch will
need to be 400 tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to a high slowdown. Even
worse, on bigger batches, the program simply crashes.
|
||||
|
||||
|
||||
```
|
||||
------------------------------
|
||||
Streaming no batching
|
||||
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=8
|
||||
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=64
|
||||
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=256
|
||||
0%| | 0/1000 [00:00<?, ?it/s]
|
||||
Traceback (most recent call last):
|
||||
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
|
||||
for out in tqdm.tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
|
||||
....
|
||||
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
|
||||
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
|
||||
```
|
||||
|
||||
There are no good (general) solutions for this problem, and your mileage may vary depending on your use case.

For users, a rule of thumb is:
|
||||
|
||||
- **Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
|
||||
only way to go.**
|
||||
- If you are latency constrained (live product doing inference), don't batch.
- If you are using CPU, don't batch.
- If you are optimizing for throughput (you want to run your model on a bunch of static data) on GPU, then:

  - If you have no clue about the size of the sequence_length ("natural" data), don't batch by default; measure and
    tentatively add it, with OOM checks to recover when it fails (and it will fail at some point if you don't
    control the sequence_length).
  - If your sequence_length is very regular, then batching is more likely to be VERY interesting; measure and push
    it until you get OOMs.
  - The larger the GPU, the more likely batching is to be interesting.

- As soon as you enable batching, make sure you can handle OOMs nicely (see the sketch after this list).
|
||||
|
||||
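One possible way to handle OOMs (a minimal sketch, not an API from `transformers`; the helper name and the retry
strategy are only illustrative assumptions) is to catch the CUDA out-of-memory error and retry with a smaller batch
size:

```python
import torch


def run_with_backoff(pipe, data, batch_size=64):
    # Retry the whole run with a smaller batch size whenever CUDA runs out of memory.
    while batch_size >= 1:
        try:
            return list(pipe(data, batch_size=batch_size))
        except RuntimeError as e:
            if "out of memory" not in str(e) or batch_size == 1:
                raise
            batch_size //= 2  # back off and try again
            torch.cuda.empty_cache()
```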
## Pipeline custom code
|
||||
|
||||
If you want to override a specific pipeline, don't hesitate to create an issue for your task at hand: the goal of the
pipelines is to be easy to use and to support most cases, so `transformers` may be able to support your use case
directly.
|
||||
|
||||
|
||||
If you simply want to try it out, you can:
|
||||
|
||||
- Subclass your pipeline of choice
|
||||
|
||||
```python
|
||||
class MyPipeline(TextClassificationPipeline):
    def postprocess(...):
        ...
        scores = scores * 100
        ...


my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
# or if you use *pipeline* function, then:
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
|
||||
```
|
||||
|
||||
That should enable you to do all the custom code you want.
|
||||
|
||||
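As a slightly more complete sketch of the same idea (the checkpoint name and the percentage postprocessing are only
illustrative assumptions, not part of the library's documented example):

```python
from transformers import TextClassificationPipeline, pipeline


class PercentageScoresPipeline(TextClassificationPipeline):
    # Hypothetical subclass: report scores as percentages instead of probabilities.
    def postprocess(self, model_outputs, **kwargs):
        result = super().postprocess(model_outputs, **kwargs)
        # Depending on the options, `result` can be a dict or a list of dicts.
        items = [result] if isinstance(result, dict) else result
        for item in items:
            item["score"] = item["score"] * 100
        return result


# The checkpoint name below is only an example.
my_pipeline = pipeline(model="distilbert-base-uncased-finetuned-sst-2-english", pipeline_class=PercentageScoresPipeline)
print(my_pipeline("This restaurant is awesome"))
```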
|
||||
## Implementing a pipeline
|
||||
|
||||
[Implementing a new pipeline](../add_new_pipeline)
|
||||
|
||||
## The task specific pipelines
|
||||
|
||||
|
||||
### AudioClassificationPipeline
|
||||
|
||||
[[autodoc]] AudioClassificationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### AutomaticSpeechRecognitionPipeline
|
||||
|
||||
[[autodoc]] AutomaticSpeechRecognitionPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### ConversationalPipeline
|
||||
|
||||
[[autodoc]] Conversation
|
||||
|
||||
[[autodoc]] ConversationalPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### FeatureExtractionPipeline
|
||||
|
||||
[[autodoc]] FeatureExtractionPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### FillMaskPipeline
|
||||
|
||||
[[autodoc]] FillMaskPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### ImageClassificationPipeline
|
||||
|
||||
[[autodoc]] ImageClassificationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### ImageSegmentationPipeline
|
||||
|
||||
[[autodoc]] ImageSegmentationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### NerPipeline
|
||||
|
||||
[[autodoc]] NerPipeline
|
||||
|
||||
See [`TokenClassificationPipeline`] for all details.
|
||||
|
||||
### ObjectDetectionPipeline
|
||||
|
||||
[[autodoc]] ObjectDetectionPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### QuestionAnsweringPipeline
|
||||
|
||||
[[autodoc]] QuestionAnsweringPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### SummarizationPipeline
|
||||
|
||||
[[autodoc]] SummarizationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### TableQuestionAnsweringPipeline
|
||||
|
||||
[[autodoc]] TableQuestionAnsweringPipeline
|
||||
- __call__
|
||||
|
||||
### TextClassificationPipeline
|
||||
|
||||
[[autodoc]] TextClassificationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### TextGenerationPipeline
|
||||
|
||||
[[autodoc]] TextGenerationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### Text2TextGenerationPipeline
|
||||
|
||||
[[autodoc]] Text2TextGenerationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### TokenClassificationPipeline
|
||||
|
||||
[[autodoc]] TokenClassificationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### TranslationPipeline
|
||||
|
||||
[[autodoc]] TranslationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
### ZeroShotClassificationPipeline
|
||||
|
||||
[[autodoc]] ZeroShotClassificationPipeline
|
||||
- __call__
|
||||
- all
|
||||
|
||||
## Parent class: `Pipeline`
|
||||
|
||||
[[autodoc]] Pipeline
|
@ -1,407 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Pipelines
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of
|
||||
the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity
|
||||
Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the
|
||||
:doc:`task summary <../task_summary>` for examples of use.
|
||||
|
||||
There are two categories of pipeline abstractions to be aware about:
|
||||
|
||||
- The :func:`~transformers.pipeline` which is the most powerful object encapsulating all other pipelines.
|
||||
- The other task-specific pipelines:
|
||||
|
||||
- :class:`~transformers.AudioClassificationPipeline`
|
||||
- :class:`~transformers.AutomaticSpeechRecognitionPipeline`
|
||||
- :class:`~transformers.ConversationalPipeline`
|
||||
- :class:`~transformers.FeatureExtractionPipeline`
|
||||
- :class:`~transformers.FillMaskPipeline`
|
||||
- :class:`~transformers.ImageClassificationPipeline`
|
||||
- :class:`~transformers.ImageSegmentationPipeline`
|
||||
- :class:`~transformers.ObjectDetectionPipeline`
|
||||
- :class:`~transformers.QuestionAnsweringPipeline`
|
||||
- :class:`~transformers.SummarizationPipeline`
|
||||
- :class:`~transformers.TableQuestionAnsweringPipeline`
|
||||
- :class:`~transformers.TextClassificationPipeline`
|
||||
- :class:`~transformers.TextGenerationPipeline`
|
||||
- :class:`~transformers.Text2TextGenerationPipeline`
|
||||
- :class:`~transformers.TokenClassificationPipeline`
|
||||
- :class:`~transformers.TranslationPipeline`
|
||||
- :class:`~transformers.ZeroShotClassificationPipeline`
|
||||
|
||||
The pipeline abstraction
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The `pipeline` abstraction is a wrapper around all the other available pipelines. It is instantiated as any other
|
||||
pipeline but can provide additional quality of life.
|
||||
|
||||
Simple call on one item:
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> pipe = pipeline("text-classification")
|
||||
>>> pipe("This restaurant is awesome")
|
||||
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
|
||||
|
||||
If you want to use a specific model from the `hub <https://huggingface.co>`__ you can ignore the task if the model on
|
||||
the hub already defines it:
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> pipe = pipeline(model="roberta-large-mnli")
|
||||
>>> pipe("This restaurant is awesome")
|
||||
[{'label': 'POSITIVE', 'score': 0.9998743534088135}]
|
||||
|
||||
To call a pipeline on many items, you can either call with a `list`.
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> pipe = pipeline("text-classification")
|
||||
>>> pipe(["This restaurant is awesome", "This restaurant is aweful"])
|
||||
[{'label': 'POSITIVE', 'score': 0.9998743534088135},
|
||||
{'label': 'NEGATIVE', 'score': 0.9996669292449951}]
|
||||
|
||||
|
||||
To iterate of full datasets it is recommended to use a :obj:`dataset` directly. This means you don't need to allocate
|
||||
the whole dataset at once, nor do you need to do batching yourself. This should work just as fast as custom loops on
|
||||
GPU. If it doesn't don't hesitate to create an issue.
|
||||
|
||||
.. code-block::
|
||||
|
||||
import datasets
|
||||
from transformers import pipeline
|
||||
from transformers.pipelines.base import KeyDataset
|
||||
import tqdm
|
||||
|
||||
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
|
||||
dataset = datasets.load_dataset("superb", name="asr", split="test")
|
||||
|
||||
# KeyDataset (only `pt`) will simply return the item in the dict returned by the dataset item
|
||||
# as we're not interested in the `target` part of the dataset.
|
||||
for out in tqdm.tqdm(pipe(KeyDataset(dataset, "file"))):
|
||||
print(out)
|
||||
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
|
||||
# {"text": ....}
|
||||
# ....
|
||||
|
||||
|
||||
.. autofunction:: transformers.pipeline
|
||||
|
||||
Pipeline batching
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
All pipelines (except `zero-shot-classification` and `question-answering` currently) can use batching. This will work
|
||||
whenever the pipeline uses its streaming ability (so when passing lists or :obj:`Dataset`).
|
||||
|
||||
.. code-block::
|
||||
|
||||
from transformers import pipeline
|
||||
from transformers.pipelines.base import KeyDataset
|
||||
import datasets
|
||||
import tqdm
|
||||
|
||||
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
|
||||
pipe = pipeline("text-classification", device=0)
|
||||
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
|
||||
print(out)
|
||||
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
|
||||
# Exactly the same output as before, but the content are passed
|
||||
# as batches to the model
|
||||
|
||||
|
||||
.. warning::
|
||||
|
||||
However, this is not automatically a win for performance. It can be either a 10x speedup or 5x slowdown depending
|
||||
on hardware, data and the actual model being used.
|
||||
|
||||
Example where it's most a speedup:
|
||||
|
||||
|
||||
.. code-block::
|
||||
|
||||
from transformers import pipeline
|
||||
from torch.utils.data import Dataset
|
||||
import tqdm
|
||||
|
||||
|
||||
pipe = pipeline("text-classification", device=0)
|
||||
|
||||
|
||||
class MyDataset(Dataset):
|
||||
def __len__(self):
|
||||
return 5000
|
||||
|
||||
def __getitem__(self, i):
|
||||
return "This is a test"
|
||||
|
||||
|
||||
dataset = MyDataset()
|
||||
|
||||
for batch_size in [1, 8, 64, 256]:
|
||||
print("-" * 30)
|
||||
print(f"Streaming batch_size={batch_size}")
|
||||
for out in tqdm.tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
|
||||
pass
|
||||
|
||||
|
||||
.. code-block::
|
||||
|
||||
# On GTX 970
|
||||
------------------------------
|
||||
Streaming no batching
|
||||
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=8
|
||||
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=64
|
||||
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=256
|
||||
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
|
||||
(diminishing returns, saturated the GPU)
|
||||
|
||||
|
||||
Example where it's most a slowdown:
|
||||
|
||||
.. code-block::
|
||||
|
||||
class MyDataset(Dataset):
|
||||
def __len__(self):
|
||||
return 5000
|
||||
|
||||
def __getitem__(self, i):
|
||||
if i % 64 == 0:
|
||||
n = 100
|
||||
else:
|
||||
n = 1
|
||||
return "This is a test" * n
|
||||
|
||||
This is a occasional very long sentence compared to the other. In that case, the **whole** batch will need to be 400
|
||||
tokens long, so the whole batch will be [64, 400] instead of [64, 4], leading to the high slowdown. Even worse, on
|
||||
bigger batches, the program simply crashes.
|
||||
|
||||
|
||||
.. code-block::
|
||||
|
||||
------------------------------
|
||||
Streaming no batching
|
||||
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=8
|
||||
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=64
|
||||
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
|
||||
------------------------------
|
||||
Streaming batch_size=256
|
||||
0%| | 0/1000 [00:00<?, ?it/s]
|
||||
Traceback (most recent call last):
|
||||
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
|
||||
for out in tqdm.tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
|
||||
....
|
||||
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
|
||||
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
|
||||
|
||||
|
||||
There are no good (general) solutions for this problem, and your mileage may vary depending on your use cases. Rule of
|
||||
thumb:
|
||||
|
||||
For users, a rule of thumb is:
|
||||
|
||||
- **Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the
|
||||
only way to go.**
|
||||
- If you are latency constrained (live product doing inference), don't batch
|
||||
- If you are using CPU, don't batch.
|
||||
- If you are using throughput (you want to run your model on a bunch of static data), on GPU, then:
|
||||
|
||||
- If you have no clue about the size of the sequence_length ("natural" data), by default don't batch, measure and
|
||||
try tentatively to add it, add OOM checks to recover when it will fail (and it will at some point if you don't
|
||||
control the sequence_length.)
|
||||
- If your sequence_length is super regular, then batching is more likely to be VERY interesting, measure and push
|
||||
it until you get OOMs.
|
||||
- The larger the GPU the more likely batching is going to be more interesting
|
||||
- As soon as you enable batching, make sure you can handle OOMs nicely.
|
||||
|
||||
Pipeline custom code
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you want to override a specific pipeline.
|
||||
|
||||
Don't hesitate to create an issue for your task at hand, the goal of the pipeline is to be easy to use and support most
|
||||
cases, so :obj:`transformers` could maybe support your use case.
|
||||
|
||||
|
||||
If you want to try simply you can:
|
||||
|
||||
- Subclass your pipeline of choice
|
||||
|
||||
.. code-block::
|
||||
|
||||
class MyPipeline(TextClassificationPipeline):
|
||||
def postprocess(...):
|
||||
...
|
||||
scores = scores * 100
|
||||
...
|
||||
|
||||
my_pipeline = MyPipeline(model=model, tokenizer=tokenizer, ...)
|
||||
# or if you use `pipeline` function, then:
|
||||
my_pipeline = pipeline(model="xxxx", pipeline_class=MyPipeline)
|
||||
|
||||
That should enable you to do all the custom code you want.
|
||||
|
||||
|
||||
Implementing a pipeline
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:doc:`Implementing a new pipeline <../add_new_pipeline>`
|
||||
|
||||
The task specific pipelines
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
|
||||
AudioClassificationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.AudioClassificationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
AutomaticSpeechRecognitionPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.AutomaticSpeechRecognitionPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
ConversationalPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.Conversation
|
||||
|
||||
.. autoclass:: transformers.ConversationalPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
FeatureExtractionPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.FeatureExtractionPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
FillMaskPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.FillMaskPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
ImageClassificationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.ImageClassificationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
ImageSegmentationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.ImageSegmentationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
NerPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.NerPipeline
|
||||
|
||||
See :class:`~transformers.TokenClassificationPipeline` for all details.
|
||||
|
||||
ObjectDetectionPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.ObjectDetectionPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
QuestionAnsweringPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.QuestionAnsweringPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
SummarizationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.SummarizationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
TableQuestionAnsweringPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.TableQuestionAnsweringPipeline
|
||||
:special-members: __call__
|
||||
|
||||
|
||||
TextClassificationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.TextClassificationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
TextGenerationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.TextGenerationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
Text2TextGenerationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.Text2TextGenerationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
TokenClassificationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.TokenClassificationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
TranslationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.TranslationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
ZeroShotClassificationPipeline
|
||||
=======================================================================================================================
|
||||
|
||||
.. autoclass:: transformers.ZeroShotClassificationPipeline
|
||||
:special-members: __call__
|
||||
:members:
|
||||
|
||||
Parent class: :obj:`Pipeline`
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Pipeline
|
||||
:members:
|
152
docs/source/main_classes/processors.mdx
Normal file
@ -0,0 +1,152 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Processors
|
||||
|
||||
This library includes processors for several traditional tasks. These processors can be used to process a dataset into
|
||||
examples that can be fed to a model.
|
||||
|
||||
## Processors
|
||||
|
||||
All processors follow the same architecture which is that of the
|
||||
[`~data.processors.utils.DataProcessor`]. The processor returns a list of
|
||||
[`~data.processors.utils.InputExample`]. These
|
||||
[`~data.processors.utils.InputExample`] can be converted to
|
||||
[`~data.processors.utils.InputFeatures`] in order to be fed to the model.
|
||||
|
||||
[[autodoc]] data.processors.utils.DataProcessor
|
||||
|
||||
[[autodoc]] data.processors.utils.InputExample
|
||||
|
||||
[[autodoc]] data.processors.utils.InputFeatures
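
To make the `InputExample` / `InputFeatures` round trip concrete, here is a small illustrative sketch (not part of the processors themselves); the guid, texts and label below are made-up values:

```python
# Illustrative only: the guid, texts and label are invented values.
from transformers.data.processors.utils import InputExample

example = InputExample(
    guid="train-1",
    text_a="Transformers provides thousands of pretrained models.",
    text_b="The library ships pretrained checkpoints.",
    label="1",
)
print(example.to_json_string())
```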
|
||||
|
||||
## GLUE
|
||||
|
||||
[General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com/) is a benchmark that evaluates the
|
||||
performance of models across a diverse set of existing NLU tasks. It was released together with the paper [GLUE: A
|
||||
multi-task benchmark and analysis platform for natural language understanding](https://openreview.net/pdf?id=rJ4km2R5t7)
|
||||
|
||||
This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB,
|
||||
QQP, QNLI, RTE and WNLI.
|
||||
|
||||
Those processors are:
|
||||
|
||||
- [`~data.processors.utils.MrpcProcessor`]
|
||||
- [`~data.processors.utils.MnliProcessor`]
|
||||
- [`~data.processors.utils.MnliMismatchedProcessor`]
|
||||
- [`~data.processors.utils.Sst2Processor`]
|
||||
- [`~data.processors.utils.StsbProcessor`]
|
||||
- [`~data.processors.utils.QqpProcessor`]
|
||||
- [`~data.processors.utils.QnliProcessor`]
|
||||
- [`~data.processors.utils.RteProcessor`]
|
||||
- [`~data.processors.utils.WnliProcessor`]
|
||||
|
||||
Additionally, the following method can be used to load values from a data file and convert them to a list of
|
||||
[`~data.processors.utils.InputExample`].
|
||||
|
||||
[[autodoc]] data.processors.glue.glue_convert_examples_to_features
|
||||
|
||||
|
||||
### Example usage
|
||||
|
||||
An example using these processors is given in the [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/legacy/text-classification/run_glue.py) script.
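
For reference, a minimal sketch of what such a script does with these processors is shown below; the tokenizer checkpoint and data directory are placeholders rather than values taken from `run_glue.py`:

```python
# A minimal sketch, assuming the GLUE MRPC data has been downloaded to `glue_data_dir`.
from transformers import AutoTokenizer
from transformers.data.processors.glue import MrpcProcessor, glue_convert_examples_to_features

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
processor = MrpcProcessor()

glue_data_dir = "path/to/glue/MRPC"  # placeholder path
examples = processor.get_train_examples(glue_data_dir)

# Convert the InputExample objects into InputFeatures the model can consume
features = glue_convert_examples_to_features(
    examples,
    tokenizer,
    max_length=128,
    task="mrpc",
)
```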
|
||||
|
||||
|
||||
## XNLI
|
||||
|
||||
[The Cross-Lingual NLI Corpus (XNLI)](https://www.nyu.edu/projects/bowman/xnli/) is a benchmark that evaluates the
|
||||
quality of cross-lingual text representations. XNLI is a crowd-sourced dataset based on [*MultiNLI*](http://www.nyu.edu/projects/bowman/multinli/): pairs of text are labeled with textual entailment annotations for 15
different languages (including both high-resource languages such as English and low-resource languages such as Swahili).
|
||||
|
||||
It was released together with the paper [XNLI: Evaluating Cross-lingual Sentence Representations](https://arxiv.org/abs/1809.05053)
|
||||
|
||||
This library hosts the processor to load the XNLI data:
|
||||
|
||||
- [`~data.processors.utils.XnliProcessor`]
|
||||
|
||||
Please note that since the gold labels are available on the test set, evaluation is performed on the test set.
|
||||
|
||||
An example using these processors is given in the [run_xnli.py](https://github.com/huggingface/transformers/tree/master/examples/legacy/text-classification/run_xnli.py) script.
|
||||
|
||||
|
||||
## SQuAD
|
||||
|
||||
[The Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer//) is a benchmark that
|
||||
evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version
|
||||
(v1.1) was released together with the paper [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250). The second version (v2.0) was released alongside the paper [Know What You Don't
|
||||
Know: Unanswerable Questions for SQuAD](https://arxiv.org/abs/1806.03822).
|
||||
|
||||
This library hosts a processor for each of the two versions:
|
||||
|
||||
### Processors
|
||||
|
||||
Those processors are:
|
||||
|
||||
- [`~data.processors.utils.SquadV1Processor`]
|
||||
- [`~data.processors.utils.SquadV2Processor`]
|
||||
|
||||
They both inherit from the abstract class [`~data.processors.utils.SquadProcessor`]
|
||||
|
||||
[[autodoc]] data.processors.squad.SquadProcessor
|
||||
- all
|
||||
|
||||
Additionally, the following method can be used to convert SQuAD examples into
|
||||
[`~data.processors.utils.SquadFeatures`] that can be used as model inputs.
|
||||
|
||||
[[autodoc]] data.processors.squad.squad_convert_examples_to_features
|
||||
|
||||
|
||||
These processors as well as the aforementioned method can be used with files containing the data as well as with the
|
||||
*tensorflow_datasets* package. Examples are given below.
|
||||
|
||||
|
||||
### Example usage
|
||||
|
||||
Here is an example using the processors as well as the conversion method using data files:
|
||||
|
||||
```python
|
||||
# Loading a V2 processor
|
||||
processor = SquadV2Processor()
|
||||
examples = processor.get_dev_examples(squad_v2_data_dir)
|
||||
|
||||
# Loading a V1 processor
|
||||
processor = SquadV1Processor()
|
||||
examples = processor.get_dev_examples(squad_v1_data_dir)
|
||||
|
||||
features = squad_convert_examples_to_features(
|
||||
examples=examples,
|
||||
tokenizer=tokenizer,
|
||||
max_seq_length=max_seq_length,
|
||||
doc_stride=args.doc_stride,
|
||||
max_query_length=max_query_length,
|
||||
is_training=not evaluate,
|
||||
)
|
||||
```
|
||||
|
||||
Using *tensorflow_datasets* is as easy as using a data file:
|
||||
|
||||
```python
|
||||
# tensorflow_datasets only handles SQuAD V1.
|
||||
tfds_examples = tfds.load("squad")
|
||||
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
|
||||
|
||||
features = squad_convert_examples_to_features(
|
||||
examples=examples,
|
||||
tokenizer=tokenizer,
|
||||
max_seq_length=max_seq_length,
|
||||
doc_stride=args.doc_stride,
|
||||
max_query_length=max_query_length,
|
||||
is_training=not evaluate,
|
||||
)
|
||||
```
|
||||
|
||||
Another example using these processors is given in the [run_squad.py](https://github.com/huggingface/transformers/tree/master/examples/legacy/question-answering/run_squad.py) script.
|
@ -1,172 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Processors
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
This library includes processors for several traditional tasks. These processors can be used to process a dataset into
|
||||
examples that can be fed to a model.
|
||||
|
||||
Processors
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
All processors follow the same architecture which is that of the
|
||||
:class:`~transformers.data.processors.utils.DataProcessor`. The processor returns a list of
|
||||
:class:`~transformers.data.processors.utils.InputExample`. These
|
||||
:class:`~transformers.data.processors.utils.InputExample` can be converted to
|
||||
:class:`~transformers.data.processors.utils.InputFeatures` in order to be fed to the model.
|
||||
|
||||
.. autoclass:: transformers.data.processors.utils.DataProcessor
|
||||
:members:
|
||||
|
||||
|
||||
.. autoclass:: transformers.data.processors.utils.InputExample
|
||||
:members:
|
||||
|
||||
|
||||
.. autoclass:: transformers.data.processors.utils.InputFeatures
|
||||
:members:
|
||||
|
||||
|
||||
GLUE
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
`General Language Understanding Evaluation (GLUE) <https://gluebenchmark.com/>`__ is a benchmark that evaluates the
|
||||
performance of models across a diverse set of existing NLU tasks. It was released together with the paper `GLUE: A
|
||||
multi-task benchmark and analysis platform for natural language understanding
|
||||
<https://openreview.net/pdf?id=rJ4km2R5t7>`__
|
||||
|
||||
This library hosts a total of 10 processors for the following tasks: MRPC, MNLI, MNLI (mismatched), CoLA, SST2, STSB,
|
||||
QQP, QNLI, RTE and WNLI.
|
||||
|
||||
Those processors are:
|
||||
|
||||
- :class:`~transformers.data.processors.utils.MrpcProcessor`
|
||||
- :class:`~transformers.data.processors.utils.MnliProcessor`
|
||||
- :class:`~transformers.data.processors.utils.MnliMismatchedProcessor`
|
||||
- :class:`~transformers.data.processors.utils.Sst2Processor`
|
||||
- :class:`~transformers.data.processors.utils.StsbProcessor`
|
||||
- :class:`~transformers.data.processors.utils.QqpProcessor`
|
||||
- :class:`~transformers.data.processors.utils.QnliProcessor`
|
||||
- :class:`~transformers.data.processors.utils.RteProcessor`
|
||||
- :class:`~transformers.data.processors.utils.WnliProcessor`
|
||||
|
||||
Additionally, the following method can be used to load values from a data file and convert them to a list of
|
||||
:class:`~transformers.data.processors.utils.InputExample`.
|
||||
|
||||
.. automethod:: transformers.data.processors.glue.glue_convert_examples_to_features
|
||||
|
||||
Example usage
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
An example using these processors is given in the :prefix_link:`run_glue.py
|
||||
<examples/legacy/text-classification/run_glue.py>` script.
|
||||
|
||||
|
||||
XNLI
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
`The Cross-Lingual NLI Corpus (XNLI) <https://www.nyu.edu/projects/bowman/xnli/>`__ is a benchmark that evaluates the
|
||||
quality of cross-lingual text representations. XNLI is crowd-sourced dataset based on `MultiNLI
|
||||
<http://www.nyu.edu/projects/bowman/multinli/>`: pairs of text are labeled with textual entailment annotations for 15
|
||||
different languages (including both high-resource language such as English and low-resource languages such as Swahili).
|
||||
|
||||
It was released together with the paper `XNLI: Evaluating Cross-lingual Sentence Representations
|
||||
<https://arxiv.org/abs/1809.05053>`__
|
||||
|
||||
This library hosts the processor to load the XNLI data:
|
||||
|
||||
- :class:`~transformers.data.processors.utils.XnliProcessor`
|
||||
|
||||
Please note that since the gold labels are available on the test set, evaluation is performed on the test set.
|
||||
|
||||
An example using these processors is given in the :prefix_link:`run_xnli.py
|
||||
<examples/legacy/text-classification/run_xnli.py>` script.
|
||||
|
||||
|
||||
SQuAD
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
`The Stanford Question Answering Dataset (SQuAD) <https://rajpurkar.github.io/SQuAD-explorer//>`__ is a benchmark that
|
||||
evaluates the performance of models on question answering. Two versions are available, v1.1 and v2.0. The first version
|
||||
(v1.1) was released together with the paper `SQuAD: 100,000+ Questions for Machine Comprehension of Text
|
||||
<https://arxiv.org/abs/1606.05250>`__. The second version (v2.0) was released alongside the paper `Know What You Don't
|
||||
Know: Unanswerable Questions for SQuAD <https://arxiv.org/abs/1806.03822>`__.
|
||||
|
||||
This library hosts a processor for each of the two versions:
|
||||
|
||||
Processors
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Those processors are:
|
||||
|
||||
- :class:`~transformers.data.processors.utils.SquadV1Processor`
|
||||
- :class:`~transformers.data.processors.utils.SquadV2Processor`
|
||||
|
||||
They both inherit from the abstract class :class:`~transformers.data.processors.utils.SquadProcessor`
|
||||
|
||||
.. autoclass:: transformers.data.processors.squad.SquadProcessor
|
||||
:members:
|
||||
|
||||
Additionally, the following method can be used to convert SQuAD examples into
|
||||
:class:`~transformers.data.processors.utils.SquadFeatures` that can be used as model inputs.
|
||||
|
||||
.. automethod:: transformers.data.processors.squad.squad_convert_examples_to_features
|
||||
|
||||
These processors as well as the aforementionned method can be used with files containing the data as well as with the
|
||||
`tensorflow_datasets` package. Examples are given below.
|
||||
|
||||
|
||||
Example usage
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Here is an example using the processors as well as the conversion method using data files:
|
||||
|
||||
.. code-block::
|
||||
|
||||
# Loading a V2 processor
|
||||
processor = SquadV2Processor()
|
||||
examples = processor.get_dev_examples(squad_v2_data_dir)
|
||||
|
||||
# Loading a V1 processor
|
||||
processor = SquadV1Processor()
|
||||
examples = processor.get_dev_examples(squad_v1_data_dir)
|
||||
|
||||
features = squad_convert_examples_to_features(
|
||||
examples=examples,
|
||||
tokenizer=tokenizer,
|
||||
max_seq_length=max_seq_length,
|
||||
doc_stride=args.doc_stride,
|
||||
max_query_length=max_query_length,
|
||||
is_training=not evaluate,
|
||||
)
|
||||
|
||||
Using `tensorflow_datasets` is as easy as using a data file:
|
||||
|
||||
.. code-block::
|
||||
|
||||
# tensorflow_datasets only handle Squad V1.
|
||||
tfds_examples = tfds.load("squad")
|
||||
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=evaluate)
|
||||
|
||||
features = squad_convert_examples_to_features(
|
||||
examples=examples,
|
||||
tokenizer=tokenizer,
|
||||
max_seq_length=max_seq_length,
|
||||
doc_stride=args.doc_stride,
|
||||
max_query_length=max_query_length,
|
||||
is_training=not evaluate,
|
||||
)
|
||||
|
||||
|
||||
Another example using these processors is given in the :prefix_link:`run_squad.py
|
||||
<examples/legacy/question-answering/run_squad.py>` script.
|
77
docs/source/main_classes/tokenizer.mdx
Normal file
@ -0,0 +1,77 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Tokenizer
|
||||
|
||||
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
|
||||
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
|
||||
Rust library [🤗 Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allow:
|
||||
|
||||
1. a significant speed-up in particular when doing batched tokenization and
|
||||
2. additional methods to map between the original string (character and words) and the token space (e.g. getting the
|
||||
index of the token comprising a given character or the span of characters corresponding to a given token). Currently
|
||||
no "Fast" implementation is available for the SentencePiece-based tokenizers (for T5, ALBERT, CamemBERT, XLM-RoBERTa
|
||||
and XLNet models).
|
||||
|
||||
The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`]
|
||||
implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and
|
||||
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
|
||||
(downloaded from HuggingFace's AWS S3 repository). They both rely on
|
||||
[`~tokenization_utils_base.PreTrainedTokenizerBase`] that contains the common methods, and
|
||||
[`~tokenization_utils_base.SpecialTokensMixin`].
|
||||
|
||||
[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main
|
||||
methods for using all the tokenizers:
|
||||
|
||||
- Tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and
|
||||
encoding/decoding (i.e., tokenizing and converting to integers).
|
||||
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
|
||||
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
|
||||
tokenizer for easy access and making sure they are not split during tokenization.
|
||||
|
||||
[`BatchEncoding`] holds the output of the
|
||||
[`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`,
|
||||
`encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure python
|
||||
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
|
||||
these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed by
|
||||
HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition
|
||||
several advanced alignment methods which can be used to map between the original string (character and words) and the
|
||||
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
|
||||
to a given token).
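
As a short illustration of these alignment helpers (the checkpoint name below is only an example):

```python
# A sketch of using a "Fast" tokenizer and the alignment helpers on the returned BatchEncoding.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
encoding = tokenizer("Hello world!")

print(encoding["input_ids"])      # token ids, accessed like a plain dict
print(encoding.tokens())          # token strings (fast tokenizers only)
print(encoding.word_ids())        # word index of each token
print(encoding.char_to_token(6))  # index of the token covering character 6 ("w")
```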
|
||||
|
||||
|
||||
## PreTrainedTokenizer
|
||||
|
||||
[[autodoc]] PreTrainedTokenizer
|
||||
- __call__
|
||||
- batch_decode
|
||||
- decode
|
||||
- encode
|
||||
- push_to_hub
|
||||
- all
|
||||
|
||||
## PreTrainedTokenizerFast
|
||||
|
||||
The [`PreTrainedTokenizerFast`] depends on the [tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 tokenizers library can be
|
||||
loaded very simply into 🤗 transformers. Take a look at the [Using tokenizers from 🤗 tokenizers](../fast_tokenizers) page to understand how this is done.
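
For instance, a tokenizer serialized with 🤗 Tokenizers can be wrapped directly. This is only a sketch and assumes a local `tokenizer.json` file is available:

```python
from transformers import PreTrainedTokenizerFast

# Assumes a tokenizer.json produced by the 🤗 Tokenizers library exists locally
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
print(fast_tokenizer("Hello world!")["input_ids"])
```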
|
||||
|
||||
[[autodoc]] PreTrainedTokenizerFast
|
||||
- __call__
|
||||
- batch_decode
|
||||
- decode
|
||||
- encode
|
||||
- push_to_hub
|
||||
- all
|
||||
|
||||
## BatchEncoding
|
||||
|
||||
[[autodoc]] BatchEncoding
|
@ -1,78 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Tokenizer
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
|
||||
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
|
||||
Rust library `tokenizers <https://github.com/huggingface/tokenizers>`__. The "Fast" implementations allows:
|
||||
|
||||
1. a significant speed-up in particular when doing batched tokenization and
|
||||
2. additional methods to map between the original string (character and words) and the token space (e.g. getting the
|
||||
index of the token comprising a given character or the span of characters corresponding to a given token). Currently
|
||||
no "Fast" implementation is available for the SentencePiece-based tokenizers (for T5, ALBERT, CamemBERT, XLM-RoBERTa
|
||||
and XLNet models).
|
||||
|
||||
The base classes :class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast`
|
||||
implement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and
|
||||
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
|
||||
(downloaded from HuggingFace's AWS S3 repository). They both rely on
|
||||
:class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase` that contains the common methods, and
|
||||
:class:`~transformers.tokenization_utils_base.SpecialTokensMixin`.
|
||||
|
||||
:class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast` thus implement the main
|
||||
methods for using all the tokenizers:
|
||||
|
||||
- Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and
|
||||
encoding/decoding (i.e., tokenizing and converting to integers).
|
||||
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
|
||||
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
|
||||
tokenizer for easy access and making sure they are not split during tokenization.
|
||||
|
||||
:class:`~transformers.BatchEncoding` holds the output of the
|
||||
:class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase`'s encoding methods (``__call__``,
|
||||
``encode_plus`` and ``batch_encode_plus``) and is derived from a Python dictionary. When the tokenizer is a pure python
|
||||
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
|
||||
these methods (``input_ids``, ``attention_mask``...). When the tokenizer is a "Fast" tokenizer (i.e., backed by
|
||||
HuggingFace `tokenizers library <https://github.com/huggingface/tokenizers>`__), this class provides in addition
|
||||
several advanced alignment methods which can be used to map between the original string (character and words) and the
|
||||
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
|
||||
to a given token).
|
||||
|
||||
|
||||
PreTrainedTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.PreTrainedTokenizer
|
||||
:special-members: __call__, batch_decode, decode, encode, push_to_hub
|
||||
:members:
|
||||
|
||||
|
||||
PreTrainedTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The :class:`~transformers.PreTrainedTokenizerFast` depend on the `tokenizers
|
||||
<https://huggingface.co/docs/tokenizers>`__ library. The tokenizers obtained from the 🤗 tokenizers library can be
|
||||
loaded very simply into 🤗 transformers. Take a look at the :doc:`Using tokenizers from 🤗 tokenizers
|
||||
<../fast_tokenizers>` page to understand how this is done.
|
||||
|
||||
.. autoclass:: transformers.PreTrainedTokenizerFast
|
||||
:special-members: __call__, batch_decode, decode, encode, push_to_hub
|
||||
:members:
|
||||
|
||||
|
||||
BatchEncoding
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.BatchEncoding
|
||||
:members:
|
@ -73,8 +73,7 @@ inside the context manager [`~MBartTokenizer.as_target_tokenizer`] to encode tar
|
||||
|
||||
## Overview of MBart-50
|
||||
|
||||
MBart-50 was introduced in the *Multilingual Translation with Extensible Multilingual Pretraining and Finetuning
|
||||
<https://arxiv.org/abs/2008.00401>* paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
|
||||
MBart-50 was introduced in the [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
|
||||
Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original *mbart-large-cc25* checkpoint by extendeding
|
||||
its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretrained on 50
|
||||
languages.
|
||||
|
128
docs/source/model_doc/megatron_bert.mdx
Normal file
@ -0,0 +1,128 @@
|
||||
<!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# MegatronBERT
|
||||
|
||||
## Overview
|
||||
|
||||
The MegatronBERT model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
|
||||
Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
|
||||
Jared Casper and Bryan Catanzaro.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
|
||||
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
|
||||
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
|
||||
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
|
||||
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
|
||||
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
|
||||
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
|
||||
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
|
||||
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
|
||||
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
|
||||
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
|
||||
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
|
||||
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
|
||||
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
|
||||
of 89.4%).*
|
||||
|
||||
Tips:
|
||||
|
||||
We have provided pretrained [BERT-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m) checkpoints
|
||||
for use in evaluating or finetuning downstream tasks.
|
||||
|
||||
To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU Cloud (NGC)
|
||||
Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
|
||||
|
||||
Alternatively, you can directly download the checkpoints using:
|
||||
|
||||
BERT-345M-uncased:
|
||||
|
||||
```bash
|
||||
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip
|
||||
-O megatron_bert_345m_v0_1_uncased.zip
|
||||
```
|
||||
|
||||
BERT-345M-cased:
|
||||
|
||||
```bash
|
||||
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O
|
||||
megatron_bert_345m_v0_1_cased.zip
|
||||
```
|
||||
|
||||
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will
|
||||
easily be loaded by Hugging Face Transformers and our port of the BERT code.
|
||||
|
||||
The following commands allow you to do the conversion. We assume that the folder `models/megatron_bert` contains
|
||||
`megatron_bert_345m_v0_1_{cased, uncased}.zip` and that the commands are run from inside that folder:
|
||||
|
||||
```bash
|
||||
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
|
||||
```
|
||||
|
||||
```bash
|
||||
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
|
||||
```
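
After conversion, the resulting checkpoint should be loadable with the standard `from_pretrained` API. The snippet below is a hypothetical sketch; the local directory name is an assumption based on the zip file name above:

```python
from transformers import MegatronBertForMaskedLM

# Hypothetical local path produced by the conversion step above
model = MegatronBertForMaskedLM.from_pretrained("./megatron_bert_345m_v0_1_uncased")
```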
|
||||
|
||||
This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The original code can be found [here](https://github.com/NVIDIA/Megatron-LM). That repository contains a multi-GPU and multi-node implementation of the
|
||||
Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and
|
||||
"pipeline parallel" techniques.
|
||||
|
||||
## MegatronBertConfig
|
||||
|
||||
[[autodoc]] MegatronBertConfig
|
||||
|
||||
## MegatronBertModel
|
||||
|
||||
[[autodoc]] MegatronBertModel
|
||||
- forward
|
||||
|
||||
## MegatronBertForMaskedLM
|
||||
|
||||
[[autodoc]] MegatronBertForMaskedLM
|
||||
- forward
|
||||
|
||||
## MegatronBertForCausalLM
|
||||
|
||||
[[autodoc]] MegatronBertForCausalLM
|
||||
- forward
|
||||
|
||||
## MegatronBertForNextSentencePrediction
|
||||
|
||||
[[autodoc]] MegatronBertForNextSentencePrediction
|
||||
- forward
|
||||
|
||||
## MegatronBertForPreTraining
|
||||
|
||||
[[autodoc]] MegatronBertForPreTraining
|
||||
- forward
|
||||
|
||||
## MegatronBertForSequenceClassification
|
||||
|
||||
[[autodoc]] MegatronBertForSequenceClassification
|
||||
- forward
|
||||
|
||||
## MegatronBertForMultipleChoice
|
||||
|
||||
[[autodoc]] MegatronBertForMultipleChoice
|
||||
- forward
|
||||
|
||||
## MegatronBertForTokenClassification
|
||||
|
||||
[[autodoc]] MegatronBertForTokenClassification
|
||||
- forward
|
||||
|
||||
## MegatronBertForQuestionAnswering
|
||||
|
||||
[[autodoc]] MegatronBertForQuestionAnswering
|
||||
- forward
|
@ -1,154 +0,0 @@
|
||||
..
|
||||
Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
MegatronBERT
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The MegatronBERT model was proposed in `Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
|
||||
Parallelism <https://arxiv.org/abs/1909.08053>`__ by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
|
||||
Jared Casper and Bryan Catanzaro.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
|
||||
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
|
||||
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
|
||||
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
|
||||
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
|
||||
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
|
||||
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
|
||||
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
|
||||
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
|
||||
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
|
||||
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
|
||||
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
|
||||
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
|
||||
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
|
||||
of 89.4%).*
|
||||
|
||||
Tips:
|
||||
|
||||
We have provided pretrained `BERT-345M <https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m>`__ checkpoints
|
||||
for use to evaluate or finetuning downstream tasks.
|
||||
|
||||
To access these checkpoints, first `sign up <https://ngc.nvidia.com/signup>`__ for and setup the NVIDIA GPU Cloud (NGC)
|
||||
Registry CLI. Further documentation for downloading models can be found in the `NGC documentation
|
||||
<https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1>`__.
|
||||
|
||||
Alternatively, you can directly download the checkpoints using:
|
||||
|
||||
BERT-345M-uncased::
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip
|
||||
-O megatron_bert_345m_v0_1_uncased.zip
|
||||
|
||||
BERT-345M-cased::
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O
|
||||
megatron_bert_345m_v0_1_cased.zip
|
||||
|
||||
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will
|
||||
easily be loaded by Hugging Face Transformers and our port of the BERT code.
|
||||
|
||||
The following commands allow you to do the conversion. We assume that the folder ``models/megatron_bert`` contains
|
||||
``megatron_bert_345m_v0_1_{cased, uncased}.zip`` and that the commands are run from inside that folder::
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
|
||||
|
||||
This model was contributed by `jdemouth <https://huggingface.co/jdemouth>`__. The original code can be found `here
|
||||
<https://github.com/NVIDIA/Megatron-LM>`__. That repository contains a multi-GPU and multi-node implementation of the
|
||||
Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and
|
||||
"pipeline parallel" techniques.
|
||||
|
||||
MegatronBertConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertConfig
|
||||
:members:
|
||||
|
||||
|
||||
MegatronBertModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertModel
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForMaskedLM
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForCausalLM
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForNextSentencePrediction
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForNextSentencePrediction
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForPreTraining
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForPreTraining
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForSequenceClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForMultipleChoice
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForTokenClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
MegatronBertForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MegatronBertForQuestionAnswering
|
||||
:members: forward
|
||||
|
||||
|
@ -1,23 +1,21 @@
|
||||
..
|
||||
Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
|
||||
<!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
MegatronGPT2
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
# MegatronGPT2
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
## Overview
|
||||
|
||||
The MegatronGPT2 model was proposed in `Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
|
||||
Parallelism <https://arxiv.org/abs/1909.08053>`__ by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
|
||||
The MegatronGPT2 model was proposed in [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
|
||||
Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
|
||||
Jared Casper and Bryan Catanzaro.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
@ -40,32 +38,30 @@ of 89.4%).*
|
||||
|
||||
Tips:
|
||||
|
||||
We have provided pretrained `GPT2-345M <https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m>`__ checkpoints
|
||||
We have provided pretrained [GPT2-345M](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m) checkpoints
|
||||
for use in evaluating or finetuning downstream tasks.
|
||||
|
||||
To access these checkpoints, first `sign up <https://ngc.nvidia.com/signup>`__ for and setup the NVIDIA GPU Cloud (NGC)
|
||||
Registry CLI. Further documentation for downloading models can be found in the `NGC documentation
|
||||
<https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1>`__.
|
||||
To access these checkpoints, first [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU Cloud (NGC)
|
||||
Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
|
||||
|
||||
Alternatively, you can directly download the checkpoints using::
|
||||
Alternatively, you can directly download the checkpoints using:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O
|
||||
megatron_gpt2_345m_v0_0.zip
|
||||
```bash
|
||||
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O
|
||||
megatron_gpt2_345m_v0_0.zip
|
||||
```
|
||||
|
||||
Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that will easily
|
||||
be loaded by Hugging Face Transformers GPT2 implementation.
|
||||
|
||||
The following command allows you to do the conversion. We assume that the folder ``models/megatron_gpt2`` contains
|
||||
``megatron_gpt2_345m_v0_0.zip`` and that the command is run from that folder::
|
||||
The following command allows you to do the conversion. We assume that the folder `models/megatron_gpt2` contains
|
||||
`megatron_gpt2_345m_v0_0.zip` and that the command is run from that folder:
|
||||
|
||||
.. code-block:: bash
|
||||
```bash
|
||||
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
|
||||
```
|
||||
|
||||
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
|
||||
|
||||
This model was contributed by `jdemouth <https://huggingface.co/jdemouth>`__. The original code can be found `here
|
||||
<https://github.com/NVIDIA/Megatron-LM>`__. That repository contains a multi-GPU and multi-node implementation of the
|
||||
This model was contributed by [jdemouth](https://huggingface.co/jdemouth). The original code can be found [here](https://github.com/NVIDIA/Megatron-LM). That repository contains a multi-GPU and multi-node implementation of the
|
||||
Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and
|
||||
"pipeline parallel" techniques.
|
||||
|
61
docs/source/model_doc/mluke.mdx
Normal file
@ -0,0 +1,61 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# mLUKE
|
||||
|
||||
## Overview
|
||||
|
||||
The mLUKE model was proposed in [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension
|
||||
of the [LUKE model](https://arxiv.org/abs/2010.01057) trained on the basis of XLM-RoBERTa.
|
||||
|
||||
It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks
|
||||
involving reasoning about entities such as named entity recognition, extractive question answering, relation
|
||||
classification, cloze-style knowledge completion.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual
|
||||
alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining
|
||||
and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging
|
||||
entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages
|
||||
with entity representations and show the model consistently outperforms word-based pretrained models in various
|
||||
cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity
|
||||
representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a
|
||||
multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual
|
||||
knowledge more likely than using only word representations.*
|
||||
|
||||
One can directly plug in the weights of mLUKE into a LUKE model, like so:
|
||||
|
||||
```python
|
||||
from transformers import LukeModel
|
||||
|
||||
model = LukeModel.from_pretrained('studio-ousia/mluke-base')
|
||||
```
|
||||
|
||||
Note that mLUKE has its own tokenizer, [`MLukeTokenizer`]. You can initialize it as follows:
|
||||
|
||||
```python
|
||||
from transformers import MLukeTokenizer
|
||||
|
||||
tokenizer = MLukeTokenizer.from_pretrained('studio-ousia/mluke-base')
|
||||
```
|
||||
|
||||
As mLUKE's architecture is equivalent to that of LUKE, one can refer to [LUKE's documentation page](luke) for all
|
||||
tips, code examples and notebooks.
|
||||
|
||||
This model was contributed by [ryo0634](https://huggingface.co/ryo0634). The original code can be found [here](https://github.com/studio-ousia/luke).
|
||||
|
||||
## MLukeTokenizer
|
||||
|
||||
[[autodoc]] MLukeTokenizer
|
||||
- __call__
|
||||
- save_vocabulary
|
@ -1,66 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
mLUKE
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The mLUKE model was proposed in `mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models
|
||||
<https://arxiv.org/abs/2110.08151>`__ by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension
|
||||
of the `LUKE model <https://arxiv.org/abs/2010.01057>`__ trained on the basis of XLM-RoBERTa.
|
||||
|
||||
It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks
|
||||
involving reasoning about entities such as named entity recognition, extractive question answering, relation
|
||||
classification, cloze-style knowledge completion.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual
|
||||
alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining
|
||||
and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging
|
||||
entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages
|
||||
with entity representations and show the model consistently outperforms word-based pretrained models in various
|
||||
cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity
|
||||
representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a
|
||||
multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual
|
||||
knowledge more likely than using only word representations.*
|
||||
|
||||
One can directly plug in the weights of mLUKE into a LUKE model, like so:
|
||||
|
||||
.. code-block::
|
||||
|
||||
from transformers import LukeModel
|
||||
|
||||
model = LukeModel.from_pretrained('studio-ousia/mluke-base')
|
||||
|
||||
Note that mLUKE has its own tokenizer, :class:`~transformers.MLukeTokenizer`. You can initialize it as follows:
|
||||
|
||||
.. code-block::
|
||||
|
||||
from transformers import MLukeTokenizer
|
||||
|
||||
tokenizer = MLukeTokenizer.from_pretrained('studio-ousia/mluke-base')
|
||||
|
||||
|
||||
As mLUKE's architecture is equivalent to that of LUKE, one can refer to :doc:`LUKE's documentation page <luke>` for all
|
||||
tips, code examples and notebooks.
|
||||
|
||||
This model was contributed by `ryo0634 <https://huggingface.co/ryo0634>`__. The original code can be found `here
|
||||
<https://github.com/studio-ousia/luke>`__.
|
||||
|
||||
MLukeTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.MLukeTokenizer
|
||||
:members: __call__, save_vocabulary
|
142
docs/source/model_doc/mobilebert.mdx
Normal file
@ -0,0 +1,142 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# MobileBERT
|
||||
|
||||
## Overview
|
||||
|
||||
The MobileBERT model was proposed in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny
|
||||
Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several
|
||||
approaches.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds
|
||||
of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot
|
||||
be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating
|
||||
the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to
|
||||
various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while
|
||||
equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
|
||||
To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE
|
||||
model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is
|
||||
4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the
|
||||
natural language inference tasks of GLUE, MobileBERT achieves a GLUEscore of 77.7 (0.6 lower than BERT_BASE), and 62 ms
|
||||
latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of
|
||||
90.0/79.2 (1.5/2.1 higher than BERT_BASE).*
|
||||
|
||||
Tips:
|
||||
|
||||
- MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
|
||||
than the left.
|
||||
- MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
|
||||
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
|
||||
with a causal language modeling (CLM) objective are better in that regard.
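
To illustrate the masked language modeling tip above, here is a small sketch; it assumes the publicly released *google/mobilebert-uncased* checkpoint:

```python
import torch
from transformers import AutoTokenizer, MobileBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")  # assumed checkpoint
model = MobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")

# Inputs are padded on the right, as recommended for absolute position embeddings
inputs = tokenizer(
    "The capital of France is [MASK].", return_tensors="pt", padding="max_length", max_length=16
)
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and decode the most likely token
mask_position = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_position].argmax(-1)
print(tokenizer.decode(predicted_id))
```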
|
||||
|
||||
This model was contributed by [vshampor](https://huggingface.co/vshampor). The original code can be found [here](https://github.com/google-research/mobilebert).

## MobileBertConfig

[[autodoc]] MobileBertConfig

## MobileBertTokenizer

[[autodoc]] MobileBertTokenizer

## MobileBertTokenizerFast

[[autodoc]] MobileBertTokenizerFast

## MobileBert specific outputs

[[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput

[[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput

## MobileBertModel

[[autodoc]] MobileBertModel
    - forward

## MobileBertForPreTraining

[[autodoc]] MobileBertForPreTraining
    - forward

## MobileBertForMaskedLM

[[autodoc]] MobileBertForMaskedLM
    - forward

## MobileBertForNextSentencePrediction

[[autodoc]] MobileBertForNextSentencePrediction
    - forward

## MobileBertForSequenceClassification

[[autodoc]] MobileBertForSequenceClassification
    - forward

## MobileBertForMultipleChoice

[[autodoc]] MobileBertForMultipleChoice
    - forward

## MobileBertForTokenClassification

[[autodoc]] MobileBertForTokenClassification
    - forward

## MobileBertForQuestionAnswering

[[autodoc]] MobileBertForQuestionAnswering
    - forward

## TFMobileBertModel

[[autodoc]] TFMobileBertModel
    - call

## TFMobileBertForPreTraining

[[autodoc]] TFMobileBertForPreTraining
    - call

## TFMobileBertForMaskedLM

[[autodoc]] TFMobileBertForMaskedLM
    - call

## TFMobileBertForNextSentencePrediction

[[autodoc]] TFMobileBertForNextSentencePrediction
    - call

## TFMobileBertForSequenceClassification

[[autodoc]] TFMobileBertForSequenceClassification
    - call

## TFMobileBertForMultipleChoice

[[autodoc]] TFMobileBertForMultipleChoice
    - call

## TFMobileBertForTokenClassification

[[autodoc]] TFMobileBertForTokenClassification
    - call

## TFMobileBertForQuestionAnswering

[[autodoc]] TFMobileBertForQuestionAnswering
    - call
117
docs/source/model_doc/mpnet.mdx
Normal file
@ -0,0 +1,117 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# MPNet

## Overview

The MPNet model was proposed in [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.

MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of
masked language modeling and permuted language modeling for natural language understanding.

The abstract from the paper is the following:

*BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models.
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the
dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of
down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large
margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g.,
BERT, XLNet, RoBERTa) under the same model setting.*

Tips:

- MPNet doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment; just
  separate your segments with the separation token `tokenizer.sep_token` (or `[sep]`), as in the example below.
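For example, a minimal sketch of encoding a segment pair (assuming the [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) checkpoint) could look like this:

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")

>>> # The two segments are joined with the separator token rather than being
>>> # distinguished by token_type_ids
>>> inputs = tokenizer("First segment.", "Second segment.", return_tensors="pt")
>>> print(tokenizer.decode(inputs["input_ids"][0]))
```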
The original code can be found [here](https://github.com/microsoft/MPNet).

## MPNetConfig

[[autodoc]] MPNetConfig

## MPNetTokenizer

[[autodoc]] MPNetTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## MPNetTokenizerFast

[[autodoc]] MPNetTokenizerFast

## MPNetModel

[[autodoc]] MPNetModel
    - forward

## MPNetForMaskedLM

[[autodoc]] MPNetForMaskedLM
    - forward

## MPNetForSequenceClassification

[[autodoc]] MPNetForSequenceClassification
    - forward

## MPNetForMultipleChoice

[[autodoc]] MPNetForMultipleChoice
    - forward

## MPNetForTokenClassification

[[autodoc]] MPNetForTokenClassification
    - forward

## MPNetForQuestionAnswering

[[autodoc]] MPNetForQuestionAnswering
    - forward

## TFMPNetModel

[[autodoc]] TFMPNetModel
    - call

## TFMPNetForMaskedLM

[[autodoc]] TFMPNetForMaskedLM
    - call

## TFMPNetForSequenceClassification

[[autodoc]] TFMPNetForSequenceClassification
    - call

## TFMPNetForMultipleChoice

[[autodoc]] TFMPNetForMultipleChoice
    - call

## TFMPNetForTokenClassification

[[autodoc]] TFMPNetForTokenClassification
    - call

## TFMPNetForQuestionAnswering

[[autodoc]] TFMPNetForQuestionAnswering
    - call
98
docs/source/model_doc/mt5.mdx
Normal file
@ -0,0 +1,98 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# mT5

## Overview

The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya
Siddhant, Aditya Barua, Colin Raffel.

The abstract from the paper is the following:

*The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain
state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a
multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail
the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual
benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a
generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model
checkpoints used in this work are publicly available.*

Note: mT5 was only pre-trained on [mC4](https://huggingface.co/datasets/mc4), excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model.
Since mT5 was pre-trained in an unsupervised fashion, there's no real advantage to using a task prefix during
single-task fine-tuning (see the example below). If you are doing multi-task fine-tuning, you should use a prefix.

Google has released the following variants:

- [google/mt5-small](https://huggingface.co/google/mt5-small)

- [google/mt5-base](https://huggingface.co/google/mt5-base)

- [google/mt5-large](https://huggingface.co/google/mt5-large)

- [google/mt5-xl](https://huggingface.co/google/mt5-xl)

- [google/mt5-xxl](https://huggingface.co/google/mt5-xxl).
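As an illustration, a minimal single-task fine-tuning step without a task prefix might look like the following sketch; it assumes the *google/mt5-small* checkpoint listed above and is not a full training loop:

```python
>>> from transformers import MT5ForConditionalGeneration, T5Tokenizer

>>> tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
>>> model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

>>> # No task prefix is prepended to the input, unlike single-task T5 fine-tuning
>>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
>>> summary = "Weiter Verhandlung in Syrien."
>>> inputs = tokenizer(article, return_tensors="pt")
>>> labels = tokenizer(summary, return_tensors="pt").input_ids

>>> loss = model(**inputs, labels=labels).loss  # backpropagate this inside a training loop
```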
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be
found [here](https://github.com/google-research/multilingual-t5).

## MT5Config

[[autodoc]] MT5Config

## MT5Tokenizer

[[autodoc]] MT5Tokenizer

See [`T5Tokenizer`] for all details.

## MT5TokenizerFast

[[autodoc]] MT5TokenizerFast

See [`T5TokenizerFast`] for all details.

## MT5Model

[[autodoc]] MT5Model

## MT5ForConditionalGeneration

[[autodoc]] MT5ForConditionalGeneration

## MT5EncoderModel

[[autodoc]] MT5EncoderModel

## TFMT5Model

[[autodoc]] TFMT5Model

## TFMT5ForConditionalGeneration

[[autodoc]] TFMT5ForConditionalGeneration

## TFMT5EncoderModel

[[autodoc]] TFMT5EncoderModel

## FlaxMT5Model

[[autodoc]] FlaxMT5Model

## FlaxMT5ForConditionalGeneration

[[autodoc]] FlaxMT5ForConditionalGeneration
137
docs/source/model_doc/pegasus.mdx
Normal file
@ -0,0 +1,137 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Pegasus

**DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=sshleifer&labels=&template=bug-report.md&title)
and assign @patrickvonplaten.


## Overview

The Pegasus model was proposed in [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.

According to the abstract,

- Pegasus' pretraining task is intentionally similar to summarization: important sentences are removed/masked from an
  input document and are generated together as one output sequence from the remaining sentences, similar to an
  extractive summary.
- Pegasus achieves SOTA summarization performance on all 12 downstream tasks, as measured by ROUGE and human eval.

This model was contributed by [sshleifer](https://huggingface.co/sshleifer). The Authors' code can be found [here](https://github.com/google-research/pegasus).


## Checkpoints

All the [checkpoints](https://huggingface.co/models?search=pegasus) are fine-tuned for summarization, except
*pegasus-large*, from which the other checkpoints are fine-tuned:

- Each checkpoint is 2.2 GB on disk and 568M parameters.
- FP16 is not supported (help/ideas on this appreciated!).
- Summarizing xsum in fp32 takes about 400ms/sample, with default parameters on a v100 GPU.
- Full replication results and correctly pre-processed data can be found in this [Issue](https://github.com/huggingface/transformers/issues/6844#issue-689259666).
- [Distilled checkpoints](https://huggingface.co/models?search=distill-pegasus) are described in this [paper](https://arxiv.org/abs/2010.13002).

### Examples

- [Script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation/finetune_pegasus_xsum.sh) to fine-tune pegasus
  on the XSUM dataset. Data download instructions at [examples/pytorch/summarization/](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization/README.md).
- FP16 is not supported (help/ideas on this appreciated!).
- The adafactor optimizer is recommended for pegasus fine-tuning (see the sketch after this list).
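For instance, a minimal sketch of setting up Adafactor for Pegasus fine-tuning could look like this; the constant learning-rate settings are illustrative assumptions rather than tuned values:

```python
>>> from transformers import Adafactor, PegasusForConditionalGeneration

>>> model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")

>>> # Adafactor with a fixed learning rate instead of the default relative-step schedule
>>> optimizer = Adafactor(
...     model.parameters(), lr=1e-4, scale_parameter=False, relative_step=False, warmup_init=False
... )
```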
## Implementation Notes

- All models are transformer encoder-decoders with 16 layers in each component.
- The implementation is completely inherited from [`BartForConditionalGeneration`].
- Some key configuration differences:

  - static, sinusoidal position embeddings
  - the model starts generating with `pad_token_id` (which has 0 token_embedding) as the prefix.
  - more beams are used (`num_beams=8`)
- All pretrained pegasus checkpoints are the same besides three attributes: `tokenizer.model_max_length` (maximum
  input size), `max_length` (the maximum number of tokens to generate) and `length_penalty`, as shown in the sketch
  below.
- The code to convert checkpoints trained in the author's [repo](https://github.com/google-research/pegasus) can be
  found in `convert_pegasus_tf_to_pytorch.py`.
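For example, those attributes can be inspected on a given checkpoint; the sketch below assumes *google/pegasus-xsum* and that its generation defaults are stored in the model config:

```python
>>> from transformers import PegasusConfig, PegasusTokenizer

>>> tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
>>> config = PegasusConfig.from_pretrained("google/pegasus-xsum")

>>> # The only attributes that differ between pretrained checkpoints
>>> print(tokenizer.model_max_length, config.max_length, config.length_penalty)
```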
## Usage Example

```python
>>> from transformers import PegasusForConditionalGeneration, PegasusTokenizer
>>> import torch

>>> src_text = [
...     """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
... ]

>>> model_name = 'google/pegasus-xsum'
>>> device = 'cuda' if torch.cuda.is_available() else 'cpu'
>>> tokenizer = PegasusTokenizer.from_pretrained(model_name)
>>> model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
>>> batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)
>>> translated = model.generate(**batch)
>>> tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
>>> assert tgt_text[0] == "California's largest electricity provider has turned off power to hundreds of thousands of customers."
```

## PegasusConfig

[[autodoc]] PegasusConfig

## PegasusTokenizer

Warning: `add_tokens` does not work at the moment.

[[autodoc]] PegasusTokenizer

## PegasusTokenizerFast

[[autodoc]] PegasusTokenizerFast

## PegasusModel

[[autodoc]] PegasusModel
    - forward

## PegasusForConditionalGeneration

[[autodoc]] PegasusForConditionalGeneration
    - forward

## PegasusForCausalLM

[[autodoc]] PegasusForCausalLM
    - forward

## TFPegasusModel

[[autodoc]] TFPegasusModel
    - call

## TFPegasusForConditionalGeneration

[[autodoc]] TFPegasusForConditionalGeneration
    - call

## FlaxPegasusModel

[[autodoc]] FlaxPegasusModel
    - __call__
    - encode
    - decode

## FlaxPegasusForConditionalGeneration

[[autodoc]] FlaxPegasusForConditionalGeneration
    - __call__
    - encode
    - decode
53
docs/source/model_doc/phobert.mdx
Normal file
@ -0,0 +1,53 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# PhoBERT

## Overview

The PhoBERT model was proposed in [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92.pdf) by Dat Quoc Nguyen, Anh Tuan Nguyen.

The abstract from the paper is the following:

*We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual
language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent
best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple
Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and
Natural language inference.*

Example of use:

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> phobert = AutoModel.from_pretrained("vinai/phobert-base")
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

>>> # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
>>> line = "Tôi là sinh_viên trường đại_học Công_nghệ ."

>>> input_ids = torch.tensor([tokenizer.encode(line)])

>>> with torch.no_grad():
...     features = phobert(input_ids)  # Model outputs are now tuples

>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```

This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/PhoBERT).

## PhobertTokenizer

[[autodoc]] PhobertTokenizer
82
docs/source/model_doc/prophetnet.mdx
Normal file
@ -0,0 +1,82 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# ProphetNet

**DISCLAIMER:** If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title) and assign
@patrickvonplaten.

## Overview

The ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.

ProphetNet is an encoder-decoder model and can predict n future tokens for "ngram" language modeling instead of just
the next token.

The abstract from the paper is the following:

*In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.*

The Authors' code can be found [here](https://github.com/microsoft/ProphetNet).
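To get a feel for the model, here is a minimal generation sketch; it assumes the *microsoft/prophetnet-large-uncased* checkpoint, and the generation settings are illustrative choices rather than recommendations from the paper:

```python
>>> from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

>>> tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
>>> model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

>>> text = "ProphetNet predicts the next n tokens simultaneously instead of only the next one."
>>> input_ids = tokenizer(text, return_tensors="pt").input_ids
>>> generated = model.generate(input_ids, num_beams=4, max_length=30)
>>> print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```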
## ProphetNetConfig

[[autodoc]] ProphetNetConfig

## ProphetNetTokenizer

[[autodoc]] ProphetNetTokenizer

## ProphetNet specific outputs

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput

## ProphetNetModel

[[autodoc]] ProphetNetModel
    - forward

## ProphetNetEncoder

[[autodoc]] ProphetNetEncoder
    - forward

## ProphetNetDecoder

[[autodoc]] ProphetNetDecoder
    - forward

## ProphetNetForConditionalGeneration

[[autodoc]] ProphetNetForConditionalGeneration
    - forward

## ProphetNetForCausalLM

[[autodoc]] ProphetNetForCausalLM
    - forward
|
||||
:members:
|
||||
|
||||
ProphetNetModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ProphetNetModel
|
||||
:members: forward
|
||||
|
||||
|
||||
ProphetNetEncoder
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ProphetNetEncoder
|
||||
:members: forward
|
||||
|
||||
|
||||
ProphetNetDecoder
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ProphetNetDecoder
|
||||
:members: forward
|
||||
|
||||
|
||||
ProphetNetForConditionalGeneration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ProphetNetForConditionalGeneration
|
||||
:members: forward
|
||||
|
||||
|
||||
ProphetNetForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ProphetNetForCausalLM
|
||||
:members: forward
|
158
docs/source/model_doc/qdqbert.mdx
Normal file
@ -0,0 +1,158 @@
|
||||
<!--Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# QDQBERT
|
||||
|
||||
## Overview
|
||||
|
||||
The QDQBERT model can be referenced in [Integer Quantization for Deep Learning Inference: Principles and Empirical
|
||||
Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius
|
||||
Micikevicius.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by
|
||||
taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of
|
||||
quantization parameters and evaluate their choices on a wide range of neural network models for different application
|
||||
domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration
|
||||
by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is
|
||||
able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are
|
||||
more difficult to quantize, such as MobileNets and BERT-large.*
|
||||
|
||||
Tips:
|
||||
|
||||
- QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to (i) linear layer
|
||||
inputs and weights, (ii) matmul inputs, (iii) residual add inputs, in the BERT model.
|
||||
|
||||
- QDQBERT requires the [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization) as a dependency. To install it: `pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com`
|
||||
|
||||
- The QDQBERT model can be loaded from any checkpoint of a HuggingFace BERT model (for example *bert-base-uncased*), and can
|
||||
perform Quantization Aware Training/Post Training Quantization.
|
||||
|
||||
- A complete example of using the QDQBERT model to perform Quantization Aware Training and Post Training Quantization for
|
||||
the SQuAD task can be found at [transformers/examples/research_projects/quantization-qdqbert/](examples/research_projects/quantization-qdqbert/).
|
||||
|
||||
This model was contributed by [shangz](https://huggingface.co/shangz).
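
As noted in the tips above, a BERT checkpoint can be loaded directly into the quantization-aware architecture. A minimal sketch (assuming the Pytorch Quantization Toolkit is installed and that the default quantizers described in the next section have already been set):

```python
from transformers import QDQBertModel

# Reuses the *bert-base-uncased* weights inside the QDQBERT architecture;
# the default quantizers (see "Set default quantizers" below) must be configured first.
model = QDQBertModel.from_pretrained("bert-base-uncased")
```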
|
||||
|
||||
|
||||
### Set default quantizers
|
||||
|
||||
The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to BERT via
|
||||
`TensorQuantizer` in [Pytorch Quantization Toolkit](https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization). `TensorQuantizer` is the module
|
||||
for quantizing tensors, with `QuantDescriptor` defining how the tensor should be quantized. Refer to [Pytorch
|
||||
Quantization Toolkit userguide](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html) for more details.
|
||||
|
||||
Before creating the QDQBERT model, one has to set the default `QuantDescriptor` defining the default tensor quantizers.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> import pytorch_quantization.nn as quant_nn
|
||||
>>> from pytorch_quantization.tensor_quant import QuantDescriptor
|
||||
|
||||
>>> # The default tensor quantizer is set to use Max calibration method
|
||||
>>> input_desc = QuantDescriptor(num_bits=8, calib_method="max")
|
||||
>>> # The default tensor quantizer is set to be per-channel quantization for weights
|
||||
>>> weight_desc = QuantDescriptor(num_bits=8, axis=((0,)))
|
||||
>>> quant_nn.QuantLinear.set_default_quant_desc_input(input_desc)
|
||||
>>> quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc)
|
||||
```
|
||||
|
||||
### Calibration
|
||||
|
||||
Calibration is the process of passing data samples to the quantizer and deciding the best scaling factors for
|
||||
tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:
|
||||
|
||||
```python
|
||||
>>> # Find the TensorQuantizer and enable calibration
|
||||
>>> for name, module in model.named_modules():
|
||||
>>> if name.endswith('_input_quantizer'):
|
||||
>>> module.enable_calib()
|
||||
>>> module.disable_quant() # Use full precision data to calibrate
|
||||
|
||||
>>> # Feeding data samples
|
||||
>>> model(x)
|
||||
>>> # ...
|
||||
|
||||
>>> # Finalize calibration
|
||||
>>> for name, module in model.named_modules():
|
||||
>>> if name.endswith('_input_quantizer'):
|
||||
>>> module.load_calib_amax()
|
||||
>>> module.enable_quant()
|
||||
|
||||
>>> # If running on GPU, call .cuda() again because the calibration process creates new tensors
|
||||
>>> model.cuda()
|
||||
|
||||
>>> # Keep running the quantized model
|
||||
>>> # ...
|
||||
```
|
||||
|
||||
### Export to ONNX
|
||||
|
||||
The goal of exporting to ONNX is to deploy inference with [TensorRT](https://developer.nvidia.com/tensorrt). Fake
|
||||
quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member of
|
||||
TensorQuantizer to use PyTorch's own fake quantization functions, the fake-quantized model can be exported to ONNX following
|
||||
the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). Example:
|
||||
|
||||
```python
|
||||
>>> from pytorch_quantization.nn import TensorQuantizer
|
||||
>>> TensorQuantizer.use_fb_fake_quant = True
|
||||
|
||||
>>> # Load the calibrated model
|
||||
>>> ...
|
||||
>>> # ONNX export
|
||||
>>> torch.onnx.export(...)
|
||||
```
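
For illustration, a complete export call could look roughly as follows (the input shape, file name, and opset version are assumptions to adapt to the actual model and task):

```python
import torch

dummy_input = torch.ones(1, 128, dtype=torch.long)  # (batch_size, sequence_length), illustrative only
torch.onnx.export(
    model,                      # the calibrated, fake-quantized model
    (dummy_input,),             # example inputs used to trace the graph
    "qdqbert.onnx",
    input_names=["input_ids"],
    opset_version=13,           # QuantizeLinear/DequantizeLinear need opset >= 10
)
```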
|
||||
|
||||
## QDQBertConfig
|
||||
|
||||
[[autodoc]] QDQBertConfig
|
||||
|
||||
## QDQBertModel
|
||||
|
||||
[[autodoc]] QDQBertModel
|
||||
- forward
|
||||
|
||||
## QDQBertLMHeadModel
|
||||
|
||||
[[autodoc]] QDQBertLMHeadModel
|
||||
- forward
|
||||
|
||||
## QDQBertForMaskedLM
|
||||
|
||||
[[autodoc]] QDQBertForMaskedLM
|
||||
- forward
|
||||
|
||||
## QDQBertForSequenceClassification
|
||||
|
||||
[[autodoc]] QDQBertForSequenceClassification
|
||||
- forward
|
||||
|
||||
## QDQBertForNextSentencePrediction
|
||||
|
||||
[[autodoc]] QDQBertForNextSentencePrediction
|
||||
- forward
|
||||
|
||||
## QDQBertForMultipleChoice
|
||||
|
||||
[[autodoc]] QDQBertForMultipleChoice
|
||||
- forward
|
||||
|
||||
## QDQBertForTokenClassification
|
||||
|
||||
[[autodoc]] QDQBertForTokenClassification
|
||||
- forward
|
||||
|
||||
## QDQBertForQuestionAnswering
|
||||
|
||||
[[autodoc]] QDQBertForQuestionAnswering
|
||||
- forward
|
@ -1,189 +0,0 @@
|
||||
..
|
||||
Copyright 2021 NVIDIA Corporation and The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
QDQBERT
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The QDQBERT model can be referenced in `Integer Quantization for Deep Learning Inference: Principles and Empirical
|
||||
Evaluation <https://arxiv.org/abs/2004.09602>`__ by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius
|
||||
Micikevicius.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by
|
||||
taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of
|
||||
quantization parameters and evaluate their choices on a wide range of neural network models for different application
|
||||
domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration
|
||||
by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is
|
||||
able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are
|
||||
more difficult to quantize, such as MobileNets and BERT-large.*
|
||||
|
||||
Tips:
|
||||
|
||||
- QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to (i) linear layer
|
||||
inputs and weights, (ii) matmul inputs, (iii) residual add inputs, in BERT model.
|
||||
|
||||
- QDQBERT requires the dependency of `Pytorch Quantization Toolkit
|
||||
<https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization>`__. To install ``pip install
|
||||
pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com``
|
||||
|
||||
- QDQBERT model can be loaded from any checkpoint of HuggingFace BERT model (for example *bert-base-uncased*), and
|
||||
perform Quantization Aware Training/Post Training Quantization.
|
||||
|
||||
- A complete example of using QDQBERT model to perform Quatization Aware Training and Post Training Quantization for
|
||||
SQUAD task can be found at `transformers/examples/research_projects/quantization-qdqbert/
|
||||
</examples/research_projects/quantization-qdqbert/>`_.
|
||||
|
||||
This model was contributed by `shangz <https://huggingface.co/shangz>`__.
|
||||
|
||||
|
||||
Set default quantizers
|
||||
_______________________________________________________________________________________________________________________
|
||||
|
||||
QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to BERT by
|
||||
:obj:`TensorQuantizer` in `Pytorch Quantization Toolkit
|
||||
<https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization>`__. :obj:`TensorQuantizer` is the module
|
||||
for quantizing tensors, with :obj:`QuantDescriptor` defining how the tensor should be quantized. Refer to `Pytorch
|
||||
Quantization Toolkit userguide
|
||||
<https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/userguide.html>`__ for more details.
|
||||
|
||||
Before creating QDQBERT model, one has to set the default :obj:`QuantDescriptor` defining default tensor quantizers.
|
||||
Example:
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> import pytorch_quantization.nn as quant_nn
|
||||
>>> from pytorch_quantization.tensor_quant import QuantDescriptor
|
||||
|
||||
>>> # The default tensor quantizer is set to use Max calibration method
|
||||
>>> input_desc = QuantDescriptor(num_bits=8, calib_method="max")
|
||||
>>> # The default tensor quantizer is set to be per-channel quantization for weights
|
||||
>>> weight_desc = QuantDescriptor(num_bits=8, axis=((0,)))
|
||||
>>> quant_nn.QuantLinear.set_default_quant_desc_input(input_desc)
|
||||
>>> quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc)
|
||||
|
||||
|
||||
Calibration
|
||||
_______________________________________________________________________________________________________________________
|
||||
|
||||
Calibration is the terminology of passing data samples to the quantizer and deciding the best scaling factors for
|
||||
tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> # Find the TensorQuantizer and enable calibration
|
||||
>>> for name, module in model.named_modules():
|
||||
>>> if name.endswith('_input_quantizer'):
|
||||
>>> module.enable_calib()
|
||||
>>> module.disable_quant() # Use full precision data to calibrate
|
||||
|
||||
>>> # Feeding data samples
|
||||
>>> model(x)
|
||||
>>> # ...
|
||||
|
||||
>>> # Finalize calibration
|
||||
>>> for name, module in model.named_modules():
|
||||
>>> if name.endswith('_input_quantizer'):
|
||||
>>> module.load_calib_amax()
|
||||
>>> module.enable_quant()
|
||||
|
||||
>>> # If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process
|
||||
>>> model.cuda()
|
||||
|
||||
>>> # Keep running the quantized model
|
||||
>>> # ...
|
||||
|
||||
|
||||
Export to ONNX
|
||||
_______________________________________________________________________________________________________________________
|
||||
|
||||
The goal of exporting to ONNX is to deploy inference by `TensorRT <https://developer.nvidia.com/tensorrt>`__. Fake
|
||||
quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting static member of
|
||||
TensorQuantizer to use Pytorch’s own fake quantization functions, fake quantized model can be exported to ONNX, follow
|
||||
the instructions in `torch.onnx <https://pytorch.org/docs/stable/onnx.html>`__. Example:
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> from pytorch_quantization.nn import TensorQuantizer
|
||||
>>> TensorQuantizer.use_fb_fake_quant = True
|
||||
|
||||
>>> # Load the calibrated model
|
||||
>>> ...
|
||||
>>> # ONNX export
|
||||
>>> torch.onnx.export(...)
|
||||
|
||||
|
||||
QDQBertConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertConfig
|
||||
:members:
|
||||
|
||||
|
||||
QDQBertModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertModel
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertLMHeadModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertLMHeadModel
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertForMaskedLM
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertForSequenceClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertForNextSentencePrediction
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertForNextSentencePrediction
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertForMultipleChoice
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertForTokenClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
QDQBertForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.QDQBertForQuestionAnswering
|
||||
:members: forward
|
||||
|
96
docs/source/model_doc/rag.mdx
Normal file
@ -0,0 +1,96 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# RAG
|
||||
|
||||
## Overview
|
||||
|
||||
Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and
|
||||
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate
|
||||
outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing
|
||||
both retrieval and generation to adapt to downstream tasks.
|
||||
|
||||
It is based on the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
|
||||
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve
|
||||
state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely
|
||||
manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind
|
||||
task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge
|
||||
remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric
|
||||
memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a
|
||||
general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained
|
||||
parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a
|
||||
pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a
|
||||
pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages
|
||||
across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our
|
||||
models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks,
|
||||
outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation
|
||||
tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art
|
||||
parametric-only seq2seq baseline.*
|
||||
|
||||
This model was contributed by [ola13](https://huggingface.co/ola13).
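
As a minimal sketch of the retrieve-then-generate flow (the *facebook/rag-sequence-nq* checkpoint and the dummy index are used only for illustration; a real index is needed for useful answers):

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
# The retriever fetches supporting documents; the seq2seq generator marginalizes over them
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```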
|
||||
|
||||
|
||||
## RagConfig
|
||||
|
||||
[[autodoc]] RagConfig
|
||||
|
||||
## RagTokenizer
|
||||
|
||||
[[autodoc]] RagTokenizer
|
||||
|
||||
## Rag specific outputs
|
||||
|
||||
[[autodoc]] models.rag.modeling_rag.RetrievAugLMMarginOutput
|
||||
|
||||
[[autodoc]] models.rag.modeling_rag.RetrievAugLMOutput
|
||||
|
||||
## RagRetriever
|
||||
|
||||
[[autodoc]] RagRetriever
|
||||
|
||||
## RagModel
|
||||
|
||||
[[autodoc]] RagModel
|
||||
- forward
|
||||
|
||||
## RagSequenceForGeneration
|
||||
|
||||
[[autodoc]] RagSequenceForGeneration
|
||||
- forward
|
||||
- generate
|
||||
|
||||
## RagTokenForGeneration
|
||||
|
||||
[[autodoc]] RagTokenForGeneration
|
||||
- forward
|
||||
- generate
|
||||
|
||||
## TFRagModel
|
||||
|
||||
[[autodoc]] TFRagModel
|
||||
- call
|
||||
|
||||
## TFRagSequenceForGeneration
|
||||
|
||||
[[autodoc]] TFRagSequenceForGeneration
|
||||
- call
|
||||
- generate
|
||||
|
||||
## TFRagTokenForGeneration
|
||||
|
||||
[[autodoc]] TFRagTokenForGeneration
|
||||
- call
|
||||
- generate
|
@ -1,118 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
RAG
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and
|
||||
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate
|
||||
outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing
|
||||
both retrieval and generation to adapt to downstream tasks.
|
||||
|
||||
It is based on the paper `Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
|
||||
<https://arxiv.org/abs/2005.11401>`__ by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir
|
||||
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve
|
||||
state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely
|
||||
manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind
|
||||
task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge
|
||||
remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric
|
||||
memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a
|
||||
general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained
|
||||
parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a
|
||||
pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a
|
||||
pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages
|
||||
across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our
|
||||
models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks,
|
||||
outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation
|
||||
tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art
|
||||
parametric-only seq2seq baseline.*
|
||||
|
||||
This model was contributed by `ola13 <https://huggingface.co/ola13>`__.
|
||||
|
||||
|
||||
RagConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RagConfig
|
||||
:members:
|
||||
|
||||
|
||||
RagTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RagTokenizer
|
||||
:members:
|
||||
|
||||
|
||||
Rag specific outputs
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput
|
||||
:members:
|
||||
|
||||
.. autoclass:: transformers.models.rag.modeling_rag.RetrievAugLMOutput
|
||||
:members:
|
||||
|
||||
RagRetriever
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RagRetriever
|
||||
:members:
|
||||
|
||||
|
||||
RagModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RagModel
|
||||
:members: forward
|
||||
|
||||
|
||||
RagSequenceForGeneration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RagSequenceForGeneration
|
||||
:members: forward, generate
|
||||
|
||||
|
||||
RagTokenForGeneration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RagTokenForGeneration
|
||||
:members: forward, generate
|
||||
|
||||
|
||||
TFRagModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRagModel
|
||||
:members: call
|
||||
|
||||
|
||||
TFRagSequenceForGeneration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRagSequenceForGeneration
|
||||
:members: call, generate
|
||||
|
||||
|
||||
TFRagTokenForGeneration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRagTokenForGeneration
|
||||
:members: call, generate
|
177
docs/source/model_doc/reformer.mdx
Normal file
@ -0,0 +1,177 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Reformer
|
||||
|
||||
**DISCLAIMER:** This model is still a work in progress. If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title).
|
||||
|
||||
## Overview
|
||||
|
||||
The Reformer model was proposed in the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451.pdf) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can
|
||||
be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of
|
||||
Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its
|
||||
complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual
|
||||
layers instead of the standard residuals, which allows storing activations only once in the training process instead of
|
||||
N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models
|
||||
while being much more memory-efficient and much faster on long sequences.*
|
||||
|
||||
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be
|
||||
found [here](https://github.com/google/trax/tree/master/trax/models/reformer).
|
||||
|
||||
**Note**:
|
||||
|
||||
- Reformer does **not** work with *torch.nn.DataParallel* due to a bug in PyTorch, see [issue #36035](https://github.com/pytorch/pytorch/issues/36035)
|
||||
|
||||
## Axial Positional Encodings
|
||||
|
||||
Axial Positional Encodings were first implemented in Google's [trax library](https://github.com/google/trax/blob/4d99ad4965bab1deba227539758d59f0df0fef48/trax/layers/research/position_encodings.py#L29)
|
||||
and developed by the authors of this model's paper. In models that process very long input sequences, the
|
||||
conventional position id encodings store an embedding vector of size \\(d\\), the `config.hidden_size`, for
|
||||
every position \\(i, \ldots, n_s\\), with \\(n_s\\) being `config.max_embedding_size`. This means that having
|
||||
a sequence length of \\(n_s = 2^{19} \approx 0.5M\\) and a `config.hidden_size` of \\(d = 2^{10} \approx 1000\\)
|
||||
would result in a position encoding matrix:
|
||||
|
||||
$$X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]$$
|
||||
|
||||
which alone has over 500M parameters to store. Axial positional encodings factorize \\(X_{i,j}\\) into two matrices:
|
||||
|
||||
$$X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]$$
|
||||
|
||||
and
|
||||
|
||||
$$X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]$$
|
||||
|
||||
with:
|
||||
|
||||
$$d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .$$
|
||||
|
||||
Therefore the following holds:
|
||||
|
||||
$$X_{i,j} = \begin{cases}
|
||||
X^{1}_{i, k}, & \text{if }\ i < d^1 \text{ with } k = j \mod n_s^1 \\
|
||||
X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor
|
||||
\end{cases}$$
|
||||
|
||||
Intuitively, this means that a position embedding vector \\(x_j \in \mathbb{R}^{d}\\) is now the composition of two
|
||||
factorized embedding vectors: \\(x^1_{k, l} + x^2_{l, k}\\), where the `config.max_embedding_size` dimension
|
||||
\\(j\\) is factorized into \\(k \text{ and } l\\). This design ensures that each position embedding vector
|
||||
\\(x_j\\) is unique.
|
||||
|
||||
Using the above example again, axial position encoding with \\(d^1 = 2^5, d^2 = 2^5, n_s^1 = 2^9, n_s^2 = 2^{10}\\)
|
||||
can drastically reduce the number of parameters to \\(2^{14} + 2^{15} \approx 49000\\).
|
||||
|
||||
In practice, the parameter `config.axial_pos_embds_dim` is set to a tuple \\((d^1, d^2)\\) whose sum has to be
|
||||
equal to `config.hidden_size` and `config.axial_pos_shape` is set to a tuple \\((n_s^1, n_s^2)\\) whose
|
||||
product has to be equal to `config.max_embedding_size`, which during training has to be equal to the *sequence
|
||||
length* of the `input_ids`.
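
A small configuration sketch tying these constraints together (the values are illustrative; the maximum sequence length is assumed to be exposed as `max_position_embeddings` in recent versions of the library):

```python
from transformers import ReformerConfig

# d^1 + d^2 must equal hidden_size, and n_s^1 * n_s^2 must equal the (padded) training sequence length
config = ReformerConfig(
    hidden_size=256,
    axial_pos_embds_dim=(64, 192),      # (d^1, d^2), sums to 256
    axial_pos_shape=(128, 512),         # (n_s^1, n_s^2), product is 65536
    max_position_embeddings=128 * 512,  # must match the product above
)
```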
|
||||
|
||||
|
||||
## LSH Self Attention
|
||||
|
||||
In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key
|
||||
query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in
|
||||
[Practical and Optimal LSH for Angular Distance](https://arxiv.org/abs/1509.02897) to assign each of the tied key
|
||||
query embedding vectors to one of `config.num_buckets` possible buckets. The premise is that the more "similar"
|
||||
key query embedding vectors (in terms of *cosine similarity*) are to each other, the more likely they are assigned to
|
||||
the same bucket.
|
||||
|
||||
The accuracy of the LSH mechanism can be improved by increasing `config.num_hashes` or directly the argument
|
||||
`num_hashes` of the forward function so that the output of the LSH self attention better approximates the output
|
||||
of the "normal" full self attention. The buckets are then sorted and chunked into query key embedding vector chunks
|
||||
each of length `config.lsh_chunk_length`. For each chunk, the query embedding vectors attend to its key vectors
|
||||
(which are tied to themselves) and to the key embedding vectors of `config.lsh_num_chunks_before` previous
|
||||
neighboring chunks and `config.lsh_num_chunks_after` following neighboring chunks.
|
||||
|
||||
For more information, see the [original Paper](https://arxiv.org/abs/2001.04451) or this great [blog post](https://www.pragmatic.ml/reformer-deep-dive/).
|
||||
|
||||
Note that `config.num_buckets` can also be factorized into a list \\((n_{\text{buckets}}^1,
|
||||
n_{\text{buckets}}^2)\\). This way instead of assigning the query key embedding vectors to one of \\((1,\ldots,
|
||||
n_{\text{buckets}})\\) they are assigned to one of \\((1-1,\ldots, n_{\text{buckets}}^1-1, \ldots,
|
||||
1-n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1-n_{\text{buckets}}^2)\\). This is crucial for very long sequences to
|
||||
save memory.
|
||||
|
||||
When training a model from scratch, it is recommended to leave `config.num_buckets=None`, so that depending on the
|
||||
sequence length a good value for `num_buckets` is calculated on the fly. This value will then automatically be
|
||||
saved in the config and should be reused for inference.
|
||||
|
||||
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from
|
||||
\\(\mathcal{O}(n_s \times n_s)\\) to \\(\mathcal{O}(n_s \times \log(n_s))\\), which usually represents the memory
|
||||
and time bottleneck in a transformer model, with \\(n_s\\) being the sequence length.
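
For example, the accuracy/compute trade-off can be adjusted at call time without touching the config (a sketch; `model` and `input_ids` are assumed to already exist, and more hashes mean more compute):

```python
# Approximate full attention more closely for this forward pass only
outputs = model(input_ids, num_hashes=8)

# Or rely on the value stored in the config
outputs = model(input_ids)
```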
|
||||
|
||||
|
||||
## Local Self Attention
|
||||
|
||||
Local self attention is essentially a "normal" self attention layer with key, query and value projections, but is
|
||||
chunked so that in each chunk of length `config.local_chunk_length` the query embedding vectors only attend to
|
||||
the key embedding vectors in its chunk and to the key embedding vectors of `config.local_num_chunks_before`
|
||||
previous neighboring chunks and `config.local_num_chunks_after` following neighboring chunks.
|
||||
|
||||
Using Local self attention, the memory and time complexity of the query-key matmul operation can be reduced from
|
||||
\\(\mathcal{O}(n_s \times n_s)\\) to \\(\mathcal{O}(n_s \times \log(n_s))\\), which usually represents the memory
|
||||
and time bottleneck in a transformer model, with \\(n_s\\) being the sequence length.
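
Both attention flavors can be mixed per layer through the configuration. A sketch with illustrative values (in recent versions of the library the chunk lengths are assumed to be exposed as `lsh_attn_chunk_length` and `local_attn_chunk_length`):

```python
from transformers import ReformerConfig, ReformerModel

# Alternate local and LSH self attention layers; all values are illustrative
config = ReformerConfig(
    attn_layers=["local", "lsh", "local", "lsh"],
    lsh_attn_chunk_length=64,
    lsh_num_chunks_before=1,
    lsh_num_chunks_after=0,
    local_attn_chunk_length=64,
    local_num_chunks_before=1,
    local_num_chunks_after=0,
    num_hashes=2,
    num_buckets=None,  # let a suitable value be computed from the sequence length
)
model = ReformerModel(config)
```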
|
||||
|
||||
|
||||
## Training
|
||||
|
||||
During training, we must ensure that the sequence length is set to a value that can be divided by the least common
|
||||
multiple of `config.lsh_chunk_length` and `config.local_chunk_length` and that the parameters of the Axial
|
||||
Positional Encodings are correctly set as described above. Reformer is very memory efficient, so the model can
|
||||
easily be trained on sequences as long as 64000 tokens.
|
||||
|
||||
For training, the [`ReformerModelWithLMHead`] should be used as follows:
|
||||
|
||||
```python
|
||||
# `tokenizer` and `model` are assumed to be an already loaded Reformer tokenizer and ReformerModelWithLMHead
input_ids = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
|
||||
loss = model(input_ids, labels=input_ids)[0]
|
||||
```
|
||||
|
||||
## ReformerConfig
|
||||
|
||||
[[autodoc]] ReformerConfig
|
||||
|
||||
## ReformerTokenizer
|
||||
|
||||
[[autodoc]] ReformerTokenizer
|
||||
- save_vocabulary
|
||||
|
||||
## ReformerTokenizerFast
|
||||
|
||||
[[autodoc]] ReformerTokenizerFast
|
||||
|
||||
## ReformerModel
|
||||
|
||||
[[autodoc]] ReformerModel
|
||||
- forward
|
||||
|
||||
## ReformerModelWithLMHead
|
||||
|
||||
[[autodoc]] ReformerModelWithLMHead
|
||||
- forward
|
||||
|
||||
## ReformerForMaskedLM
|
||||
|
||||
[[autodoc]] ReformerForMaskedLM
|
||||
- forward
|
||||
|
||||
## ReformerForSequenceClassification
|
||||
|
||||
[[autodoc]] ReformerForSequenceClassification
|
||||
- forward
|
||||
|
||||
## ReformerForQuestionAnswering
|
||||
|
||||
[[autodoc]] ReformerForQuestionAnswering
|
||||
- forward
|
@ -1,211 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Reformer
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
**DISCLAIMER:** This model is still a work in progress, if you see something strange, file a `Github Issue
|
||||
<https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title>`__.
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Reformer model was proposed in the paper `Reformer: The Efficient Transformer
|
||||
<https://arxiv.org/abs/2001.04451.pdf>`__ by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can
|
||||
be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of
|
||||
Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its
|
||||
complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual
|
||||
layers instead of the standard residuals, which allows storing activations only once in the training process instead of
|
||||
N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models
|
||||
while being much more memory-efficient and much faster on long sequences.*
|
||||
|
||||
This model was contributed by `patrickvonplaten <https://huggingface.co/patrickvonplaten>`__. The Authors' code can be
|
||||
found `here <https://github.com/google/trax/tree/master/trax/models/reformer>`__.
|
||||
|
||||
**Note**:
|
||||
|
||||
- Reformer does **not** work with `torch.nn.DataParallel` due to a bug in PyTorch, see `issue #36035
|
||||
<https://github.com/pytorch/pytorch/issues/36035>`__
|
||||
|
||||
Axial Positional Encodings
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Axial Positional Encodings were first implemented in Google's `trax library
|
||||
<https://github.com/google/trax/blob/4d99ad4965bab1deba227539758d59f0df0fef48/trax/layers/research/position_encodings.py#L29>`__
|
||||
and developed by the authors of this model's paper. In models that are treating very long input sequences, the
|
||||
conventional position id encodings store an embedings vector of size :math:`d` being the :obj:`config.hidden_size` for
|
||||
every position :math:`i, \ldots, n_s`, with :math:`n_s` being :obj:`config.max_embedding_size`. This means that having
|
||||
a sequence length of :math:`n_s = 2^{19} \approx 0.5M` and a ``config.hidden_size`` of :math:`d = 2^{10} \approx 1000`
|
||||
would result in a position encoding matrix:
|
||||
|
||||
.. math::
|
||||
X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]
|
||||
|
||||
which alone has over 500M parameters to store. Axial positional encodings factorize :math:`X_{i,j}` into two matrices:
|
||||
|
||||
.. math::
|
||||
X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]
|
||||
|
||||
and
|
||||
|
||||
.. math::
|
||||
X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]
|
||||
|
||||
with:
|
||||
|
||||
.. math::
|
||||
d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .
|
||||
|
||||
Therefore the following holds:
|
||||
|
||||
.. math::
|
||||
X_{i,j} = \begin{cases}
|
||||
X^{1}_{i, k}, & \text{if }\ i < d^1 \text{ with } k = j \mod n_s^1 \\
|
||||
X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor
|
||||
\end{cases}
|
||||
|
||||
Intuitively, this means that a position embedding vector :math:`x_j \in \mathbb{R}^{d}` is now the composition of two
|
||||
factorized embedding vectors: :math:`x^1_{k, l} + x^2_{l, k}`, where as the :obj:`config.max_embedding_size` dimension
|
||||
:math:`j` is factorized into :math:`k \text{ and } l`. This design ensures that each position embedding vector
|
||||
:math:`x_j` is unique.
|
||||
|
||||
Using the above example again, axial position encoding with :math:`d^1 = 2^5, d^2 = 2^5, n_s^1 = 2^9, n_s^2 = 2^{10}`
|
||||
can drastically reduced the number of parameters to :math:`2^{14} + 2^{15} \approx 49000` parameters.
|
||||
|
||||
In practice, the parameter :obj:`config.axial_pos_embds_dim` is set to a tuple :math:`(d^1, d^2)` which sum has to be
|
||||
equal to :obj:`config.hidden_size` and :obj:`config.axial_pos_shape` is set to a tuple :math:`(n_s^1, n_s^2)` which
|
||||
product has to be equal to :obj:`config.max_embedding_size`, which during training has to be equal to the `sequence
|
||||
length` of the :obj:`input_ids`.
|
||||
|
||||
|
||||
LSH Self Attention
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key
|
||||
query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in
|
||||
`Practical and Optimal LSH for Angular Distance <https://arxiv.org/abs/1509.02897>`__ to assign each of the tied key
|
||||
query embedding vectors to one of :obj:`config.num_buckets` possible buckets. The premise is that the more "similar"
|
||||
key query embedding vectors (in terms of *cosine similarity*) are to each other, the more likely they are assigned to
|
||||
the same bucket.
|
||||
|
||||
The accuracy of the LSH mechanism can be improved by increasing :obj:`config.num_hashes` or directly the argument
|
||||
:obj:`num_hashes` of the forward function so that the output of the LSH self attention better approximates the output
|
||||
of the "normal" full self attention. The buckets are then sorted and chunked into query key embedding vector chunks
|
||||
each of length :obj:`config.lsh_chunk_length`. For each chunk, the query embedding vectors attend to its key vectors
|
||||
(which are tied to themselves) and to the key embedding vectors of :obj:`config.lsh_num_chunks_before` previous
|
||||
neighboring chunks and :obj:`config.lsh_num_chunks_after` following neighboring chunks.
|
||||
|
||||
For more information, see the `original Paper <https://arxiv.org/abs/2001.04451>`__ or this great `blog post
|
||||
<https://www.pragmatic.ml/reformer-deep-dive/>`__.
|
||||
|
||||
Note that :obj:`config.num_buckets` can also be factorized into a list :math:`(n_{\text{buckets}}^1,
|
||||
n_{\text{buckets}}^2)`. This way instead of assigning the query key embedding vectors to one of :math:`(1,\ldots,
|
||||
n_{\text{buckets}})` they are assigned to one of :math:`(1-1,\ldots, n_{\text{buckets}}^1-1, \ldots,
|
||||
1-n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1-n_{\text{buckets}}^2)`. This is crucial for very long sequences to
|
||||
save memory.
|
||||
|
||||
When training a model from scratch, it is recommended to leave :obj:`config.num_buckets=None`, so that depending on the
|
||||
sequence length a good value for :obj:`num_buckets` is calculated on the fly. This value will then automatically be
|
||||
saved in the config and should be reused for inference.
|
||||
|
||||
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from
|
||||
:math:`\mathcal{O}(n_s \times n_s)` to :math:`\mathcal{O}(n_s \times \log(n_s))`, which usually represents the memory
|
||||
and time bottleneck in a transformer model, with :math:`n_s` being the sequence length.
|
||||
|
||||
|
||||
Local Self Attention
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Local self attention is essentially a "normal" self attention layer with key, query and value projections, but is
|
||||
chunked so that in each chunk of length :obj:`config.local_chunk_length` the query embedding vectors only attends to
|
||||
the key embedding vectors in its chunk and to the key embedding vectors of :obj:`config.local_num_chunks_before`
|
||||
previous neighboring chunks and :obj:`config.local_num_chunks_after` following neighboring chunks.
|
||||
|
||||
Using Local self attention, the memory and time complexity of the query-key matmul operation can be reduced from
|
||||
:math:`\mathcal{O}(n_s \times n_s)` to :math:`\mathcal{O}(n_s \times \log(n_s))`, which usually represents the memory
|
||||
and time bottleneck in a transformer model, with :math:`n_s` being the sequence length.
|
||||
|
||||
|
||||
Training
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
During training, we must ensure that the sequence length is set to a value that can be divided by the least common
|
||||
multiple of :obj:`config.lsh_chunk_length` and :obj:`config.local_chunk_length` and that the parameters of the Axial
|
||||
Positional Encodings are correctly set as described above. Reformer is very memory efficient so that the model can
|
||||
easily be trained on sequences as long as 64000 tokens.
|
||||
|
||||
For training, the :class:`~transformers.ReformerModelWithLMHead` should be used as follows:
|
||||
|
||||
.. code-block::
|
||||
|
||||
input_ids = tokenizer.encode('This is a sentence from the training data', return_tensors='pt')
|
||||
loss = model(input_ids, labels=input_ids)[0]
|
||||
|
||||
|
||||
ReformerConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerConfig
|
||||
:members:
|
||||
|
||||
|
||||
ReformerTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerTokenizer
|
||||
:members: save_vocabulary
|
||||
|
||||
|
||||
ReformerTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerTokenizerFast
|
||||
:members:
|
||||
|
||||
|
||||
ReformerModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerModel
|
||||
:members: forward
|
||||
|
||||
|
||||
ReformerModelWithLMHead
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerModelWithLMHead
|
||||
:members: forward
|
||||
|
||||
|
||||
ReformerForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerForMaskedLM
|
||||
:members: forward
|
||||
|
||||
|
||||
ReformerForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerForSequenceClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
ReformerForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.ReformerForQuestionAnswering
|
||||
:members: forward
|
128
docs/source/model_doc/rembert.mdx
Normal file
@ -0,0 +1,128 @@
|
||||
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# RemBERT

## Overview

The RemBERT model was proposed in [Rethinking Embedding Coupling in Pre-trained Language Models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.

The abstract from the paper is the following:

*We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By
reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
allocating additional capacity to the output embedding provides benefits to the model that persist through the
fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger
output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage
Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these
findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the
number of parameters at the fine-tuning stage.*

Tips:

For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
also similar to the Albert one rather than the BERT one.
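
As a quick illustration of the decoupled embeddings (a sketch only; it assumes the `google/rembert` checkpoint and the
`input_embedding_size`/`output_embedding_size` attributes exposed by [`RemBertConfig`]):

```python
from transformers import RemBertConfig, RemBertModel

# Sketch: inspect the factorized embedding sizes described in the tips above.
# The checkpoint name and the two embedding-size attributes are assumptions of this example.
config = RemBertConfig.from_pretrained("google/rembert")

# The input embedding is much smaller than the hidden size, while the larger output
# embedding is only used during pre-training and is discarded for fine-tuning.
print(config.input_embedding_size, config.hidden_size, config.output_embedding_size)

# For fine-tuning, the model is then used like any other BERT-style encoder.
model = RemBertModel.from_pretrained("google/rembert")
```
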
## RemBertConfig

[[autodoc]] RemBertConfig

## RemBertTokenizer

[[autodoc]] RemBertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RemBertTokenizerFast

[[autodoc]] RemBertTokenizerFast
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RemBertModel

[[autodoc]] RemBertModel
    - forward

## RemBertForCausalLM

[[autodoc]] RemBertForCausalLM
    - forward

## RemBertForMaskedLM

[[autodoc]] RemBertForMaskedLM
    - forward

## RemBertForSequenceClassification

[[autodoc]] RemBertForSequenceClassification
    - forward

## RemBertForMultipleChoice

[[autodoc]] RemBertForMultipleChoice
    - forward

## RemBertForTokenClassification

[[autodoc]] RemBertForTokenClassification
    - forward

## RemBertForQuestionAnswering

[[autodoc]] RemBertForQuestionAnswering
    - forward

## TFRemBertModel

[[autodoc]] TFRemBertModel
    - call

## TFRemBertForMaskedLM

[[autodoc]] TFRemBertForMaskedLM
    - call

## TFRemBertForCausalLM

[[autodoc]] TFRemBertForCausalLM
    - call

## TFRemBertForSequenceClassification

[[autodoc]] TFRemBertForSequenceClassification
    - call

## TFRemBertForMultipleChoice

[[autodoc]] TFRemBertForMultipleChoice
    - call

## TFRemBertForTokenClassification

[[autodoc]] TFRemBertForTokenClassification
    - call

## TFRemBertForQuestionAnswering

[[autodoc]] TFRemBertForQuestionAnswering
    - call
@ -1,161 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
RemBERT
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The RemBERT model was proposed in `Rethinking Embedding Coupling in Pre-trained Language Models
|
||||
<https://arxiv.org/abs/2010.12821>`__ by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art
|
||||
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
|
||||
significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By
|
||||
reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on
|
||||
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
|
||||
allocating additional capacity to the output embedding provides benefits to the model that persist through the
|
||||
fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger
|
||||
output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage
|
||||
Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these
|
||||
findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the
|
||||
number of parameters at the fine-tuning stage.*
|
||||
|
||||
Tips:
|
||||
|
||||
For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
|
||||
embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
|
||||
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
|
||||
also similar to the Albert one rather than the BERT one.
|
||||
|
||||
RemBertConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertConfig
|
||||
:members:
|
||||
|
||||
|
||||
RemBertTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertTokenizer
|
||||
:members: build_inputs_with_special_tokens, get_special_tokens_mask,
|
||||
create_token_type_ids_from_sequences, save_vocabulary
|
||||
|
||||
|
||||
RemBertTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertTokenizerFast
|
||||
:members: build_inputs_with_special_tokens, get_special_tokens_mask,
|
||||
create_token_type_ids_from_sequences, save_vocabulary
|
||||
|
||||
|
||||
RemBertModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertModel
|
||||
:members: forward
|
||||
|
||||
|
||||
RemBertForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertForCausalLM
|
||||
:members: forward
|
||||
|
||||
|
||||
RemBertForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertForMaskedLM
|
||||
:members: forward
|
||||
|
||||
|
||||
RemBertForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertForSequenceClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
RemBertForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertForMultipleChoice
|
||||
:members: forward
|
||||
|
||||
|
||||
RemBertForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertForTokenClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
RemBertForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RemBertForQuestionAnswering
|
||||
:members: forward
|
||||
|
||||
|
||||
TFRemBertModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertModel
|
||||
:members: call
|
||||
|
||||
|
||||
TFRemBertForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertForMaskedLM
|
||||
:members: call
|
||||
|
||||
|
||||
TFRemBertForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertForCausalLM
|
||||
:members: call
|
||||
|
||||
|
||||
TFRemBertForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertForSequenceClassification
|
||||
:members: call
|
||||
|
||||
|
||||
TFRemBertForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertForMultipleChoice
|
||||
:members: call
|
||||
|
||||
|
||||
TFRemBertForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertForTokenClassification
|
||||
:members: call
|
||||
|
||||
|
||||
TFRemBertForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRemBertForQuestionAnswering
|
||||
:members: call
|
40  docs/source/model_doc/retribert.mdx  Normal file
@@ -0,0 +1,40 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# RetriBERT

## Overview

The RetriBERT model was proposed in the blog post [Explain Anything Like I'm Five: A Model for Open Domain Long Form
Question Answering](https://yjernite.github.io/lfqa.html). RetriBERT is a small model that uses either a single or
pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.

This model was contributed by [yjernite](https://huggingface.co/yjernite). Code to train and use the model can be
found [here](https://github.com/huggingface/transformers/tree/master/examples/research-projects/distillation).

## RetriBertConfig

[[autodoc]] RetriBertConfig

## RetriBertTokenizer

[[autodoc]] RetriBertTokenizer

## RetriBertTokenizerFast

[[autodoc]] RetriBertTokenizerFast

## RetriBertModel

[[autodoc]] RetriBertModel
    - forward
@ -1,52 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
RetriBERT
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The RetriBERT model was proposed in the blog post `Explain Anything Like I'm Five: A Model for Open Domain Long Form
|
||||
Question Answering <https://yjernite.github.io/lfqa.html>`__. RetriBERT is a small model that uses either a single or
|
||||
pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.
|
||||
|
||||
This model was contributed by `yjernite <https://huggingface.co/yjernite>`__. Code to train and use the model can be
|
||||
found :prefix_link:`here <examples/research-projects/distillation>`.
|
||||
|
||||
|
||||
RetriBertConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RetriBertConfig
|
||||
:members:
|
||||
|
||||
|
||||
RetriBertTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RetriBertTokenizer
|
||||
:members:
|
||||
|
||||
|
||||
RetriBertTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RetriBertTokenizerFast
|
||||
:members:
|
||||
|
||||
|
||||
RetriBertModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RetriBertModel
|
||||
:members: forward
|
162  docs/source/model_doc/roberta.mdx  Normal file
@@ -0,0 +1,162 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# RoBERTa

## Overview

The RoBERTa model was proposed in [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer
Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018.

It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with
much larger mini-batches and learning rates.

The abstract from the paper is the following:

*Language model pretraining has led to significant performance gains but careful comparison between different
approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication
study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every
model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results
highlight the importance of previously overlooked design choices, and raise questions about the source of recently
reported improvements. We release our models and code.*

Tips:

- This implementation is the same as [`BertModel`] with a tiny embeddings tweak as well as a setup
  for Roberta pretrained models.
- RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a
  different pretraining scheme.
- RoBERTa doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Just
  separate your segments with the separation token `tokenizer.sep_token` (or `</s>`), as shown in the sketch below.
- [CamemBERT](camembert) is a wrapper around RoBERTa. Refer to this page for usage examples.

This model was contributed by [julien-c](https://huggingface.co/julien-c). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/roberta).
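
A short sketch of the tokenizer behavior described above (it assumes the `roberta-base` checkpoint is available):

```python
from transformers import RobertaTokenizer

# Sketch: RoBERTa separates segments with `</s></s>` instead of relying on `token_type_ids`.
# The `roberta-base` checkpoint name is an assumption of this example.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

encoding = tokenizer("A first segment.", "A second segment.")
print(tokenizer.decode(encoding["input_ids"]))
# e.g. "<s>A first segment.</s></s>A second segment.</s>"

# The byte-level BPE (shared with GPT-2) works on bytes, so any string can be encoded.
print(tokenizer.tokenize("Hello world!"))
```
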
## RobertaConfig

[[autodoc]] RobertaConfig

## RobertaTokenizer

[[autodoc]] RobertaTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RobertaTokenizerFast

[[autodoc]] RobertaTokenizerFast
    - build_inputs_with_special_tokens

## RobertaModel

[[autodoc]] RobertaModel
    - forward

## RobertaForCausalLM

[[autodoc]] RobertaForCausalLM
    - forward

## RobertaForMaskedLM

[[autodoc]] RobertaForMaskedLM
    - forward

## RobertaForSequenceClassification

[[autodoc]] RobertaForSequenceClassification
    - forward

## RobertaForMultipleChoice

[[autodoc]] RobertaForMultipleChoice
    - forward

## RobertaForTokenClassification

[[autodoc]] RobertaForTokenClassification
    - forward

## RobertaForQuestionAnswering

[[autodoc]] RobertaForQuestionAnswering
    - forward

## TFRobertaModel

[[autodoc]] TFRobertaModel
    - call

## TFRobertaForCausalLM

[[autodoc]] TFRobertaForCausalLM
    - call

## TFRobertaForMaskedLM

[[autodoc]] TFRobertaForMaskedLM
    - call

## TFRobertaForSequenceClassification

[[autodoc]] TFRobertaForSequenceClassification
    - call

## TFRobertaForMultipleChoice

[[autodoc]] TFRobertaForMultipleChoice
    - call

## TFRobertaForTokenClassification

[[autodoc]] TFRobertaForTokenClassification
    - call

## TFRobertaForQuestionAnswering

[[autodoc]] TFRobertaForQuestionAnswering
    - call

## FlaxRobertaModel

[[autodoc]] FlaxRobertaModel
    - __call__

## FlaxRobertaForMaskedLM

[[autodoc]] FlaxRobertaForMaskedLM
    - __call__

## FlaxRobertaForSequenceClassification

[[autodoc]] FlaxRobertaForSequenceClassification
    - __call__

## FlaxRobertaForMultipleChoice

[[autodoc]] FlaxRobertaForMultipleChoice
    - __call__

## FlaxRobertaForTokenClassification

[[autodoc]] FlaxRobertaForTokenClassification
    - __call__

## FlaxRobertaForQuestionAnswering

[[autodoc]] FlaxRobertaForQuestionAnswering
    - __call__
@ -1,210 +0,0 @@
|
||||
..
|
||||
Copyright 2020 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
RoBERTa
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The RoBERTa model was proposed in `RoBERTa: A Robustly Optimized BERT Pretraining Approach
|
||||
<https://arxiv.org/abs/1907.11692>`_ by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer
|
||||
Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018.
|
||||
|
||||
It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with
|
||||
much larger mini-batches and learning rates.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Language model pretraining has led to significant performance gains but careful comparison between different
|
||||
approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes,
|
||||
and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication
|
||||
study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and
|
||||
training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every
|
||||
model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results
|
||||
highlight the importance of previously overlooked design choices, and raise questions about the source of recently
|
||||
reported improvements. We release our models and code.*
|
||||
|
||||
Tips:
|
||||
|
||||
- This implementation is the same as :class:`~transformers.BertModel` with a tiny embeddings tweak as well as a setup
|
||||
for Roberta pretrained models.
|
||||
- RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a
|
||||
different pretraining scheme.
|
||||
- RoBERTa doesn't have :obj:`token_type_ids`, you don't need to indicate which token belongs to which segment. Just
|
||||
separate your segments with the separation token :obj:`tokenizer.sep_token` (or :obj:`</s>`)
|
||||
- :doc:`CamemBERT <camembert>` is a wrapper around RoBERTa. Refer to this page for usage examples.
|
||||
|
||||
This model was contributed by `julien-c <https://huggingface.co/julien-c>`__. The original code can be found `here
|
||||
<https://github.com/pytorch/fairseq/tree/master/examples/roberta>`_.
|
||||
|
||||
|
||||
RobertaConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaConfig
|
||||
:members:
|
||||
|
||||
|
||||
RobertaTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaTokenizer
|
||||
:members: build_inputs_with_special_tokens, get_special_tokens_mask,
|
||||
create_token_type_ids_from_sequences, save_vocabulary
|
||||
|
||||
|
||||
RobertaTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaTokenizerFast
|
||||
:members: build_inputs_with_special_tokens
|
||||
|
||||
|
||||
RobertaModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaModel
|
||||
:members: forward
|
||||
|
||||
|
||||
RobertaForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaForCausalLM
|
||||
:members: forward
|
||||
|
||||
|
||||
RobertaForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaForMaskedLM
|
||||
:members: forward
|
||||
|
||||
|
||||
RobertaForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaForSequenceClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
RobertaForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaForMultipleChoice
|
||||
:members: forward
|
||||
|
||||
|
||||
RobertaForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaForTokenClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
RobertaForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RobertaForQuestionAnswering
|
||||
:members: forward
|
||||
|
||||
|
||||
TFRobertaModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaModel
|
||||
:members: call
|
||||
|
||||
|
||||
TFRobertaForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaForCausalLM
|
||||
:members: call
|
||||
|
||||
|
||||
TFRobertaForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaForMaskedLM
|
||||
:members: call
|
||||
|
||||
|
||||
TFRobertaForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaForSequenceClassification
|
||||
:members: call
|
||||
|
||||
|
||||
TFRobertaForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaForMultipleChoice
|
||||
:members: call
|
||||
|
||||
|
||||
TFRobertaForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaForTokenClassification
|
||||
:members: call
|
||||
|
||||
|
||||
TFRobertaForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRobertaForQuestionAnswering
|
||||
:members: call
|
||||
|
||||
|
||||
FlaxRobertaModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxRobertaModel
|
||||
:members: __call__
|
||||
|
||||
|
||||
FlaxRobertaForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxRobertaForMaskedLM
|
||||
:members: __call__
|
||||
|
||||
|
||||
FlaxRobertaForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxRobertaForSequenceClassification
|
||||
:members: __call__
|
||||
|
||||
|
||||
FlaxRobertaForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxRobertaForMultipleChoice
|
||||
:members: __call__
|
||||
|
||||
|
||||
FlaxRobertaForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxRobertaForTokenClassification
|
||||
:members: __call__
|
||||
|
||||
|
||||
FlaxRobertaForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.FlaxRobertaForQuestionAnswering
|
||||
:members: __call__
|
125  docs/source/model_doc/roformer.mdx  Normal file
@@ -0,0 +1,125 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# RoFormer

## Overview

The RoFormer model was proposed in [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.

The abstract from the paper is the following:

*Position encoding in transformer architecture provides supervision for dependency modeling between elements at
different positions in the sequence. We investigate various methods to encode positional information in
transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The
proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative
position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of
being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and
capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced
transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We
release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing
experiment for English benchmark will soon be updated.*
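
The core idea can be summarized with a small amount of math (a sketch of the formulation in the paper, written here for
a single two-dimensional slice of a query or key vector):

```latex
% Rotary Position Embedding (RoPE), sketched for one 2-dimensional slice (x^{(2i)}, x^{(2i+1)})
% of a query or key vector at absolute position m, with \theta_i = 10000^{-2i/d}.
f(x, m) =
  \begin{pmatrix} \cos m\theta_i & -\sin m\theta_i \\ \sin m\theta_i & \cos m\theta_i \end{pmatrix}
  \begin{pmatrix} x^{(2i)} \\ x^{(2i+1)} \end{pmatrix}

% Because rotations compose, the attention score between a query at position m and a key at
% position n depends only on the relative offset n - m:
\langle f(q, m), f(k, n) \rangle = q^{\top} R_{\Theta}(n - m)\, k
```
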
Tips:

- RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown
  improved performance on classification tasks with long texts.

This model was contributed by [junnyu](https://huggingface.co/junnyu). The original code can be found [here](https://github.com/ZhuiyiTechnology/roformer).
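
As a quick usage sketch (the `junnyu/roformer_chinese_base` checkpoint below is assumed to be the contributor's Chinese
base model; its tokenizer additionally needs the `rjieba` package):

```python
import torch
from transformers import RoFormerTokenizer, RoFormerForMaskedLM

# Sketch: fill in a masked token with a Chinese RoFormer checkpoint (assumed name).
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")

inputs = tokenizer("今天[MASK]很好。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the most likely token at the masked position.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_positions].argmax(-1)))
```
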
## RoFormerConfig

[[autodoc]] RoFormerConfig

## RoFormerTokenizer

[[autodoc]] RoFormerTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## RoFormerTokenizerFast

[[autodoc]] RoFormerTokenizerFast
    - build_inputs_with_special_tokens

## RoFormerModel

[[autodoc]] RoFormerModel
    - forward

## RoFormerForCausalLM

[[autodoc]] RoFormerForCausalLM
    - forward

## RoFormerForMaskedLM

[[autodoc]] RoFormerForMaskedLM
    - forward

## RoFormerForSequenceClassification

[[autodoc]] RoFormerForSequenceClassification
    - forward

## RoFormerForMultipleChoice

[[autodoc]] RoFormerForMultipleChoice
    - forward

## RoFormerForTokenClassification

[[autodoc]] RoFormerForTokenClassification
    - forward

## RoFormerForQuestionAnswering

[[autodoc]] RoFormerForQuestionAnswering
    - forward

## TFRoFormerModel

[[autodoc]] TFRoFormerModel
    - call

## TFRoFormerForMaskedLM

[[autodoc]] TFRoFormerForMaskedLM
    - call

## TFRoFormerForCausalLM

[[autodoc]] TFRoFormerForCausalLM
    - call

## TFRoFormerForSequenceClassification

[[autodoc]] TFRoFormerForSequenceClassification
    - call

## TFRoFormerForMultipleChoice

[[autodoc]] TFRoFormerForMultipleChoice
    - call

## TFRoFormerForTokenClassification

[[autodoc]] TFRoFormerForTokenClassification
    - call

## TFRoFormerForQuestionAnswering

[[autodoc]] TFRoFormerForQuestionAnswering
    - call
@ -1,161 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
RoFormer
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The RoFormer model was proposed in `RoFormer: Enhanced Transformer with Rotary Position Embedding
|
||||
<https://arxiv.org/pdf/2104.09864v1.pdf>`__ by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*Position encoding in transformer architecture provides supervision for dependency modeling between elements at
|
||||
different positions in the sequence. We investigate various methods to encode positional information in
|
||||
transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The
|
||||
proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative
|
||||
position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of
|
||||
being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and
|
||||
capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced
|
||||
transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We
|
||||
release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing
|
||||
experiment for English benchmark will soon be updated.*
|
||||
|
||||
Tips:
|
||||
|
||||
- RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown
|
||||
improved performance on classification tasks with long texts.
|
||||
|
||||
|
||||
This model was contributed by `junnyu <https://huggingface.co/junnyu>`__. The original code can be found `here
|
||||
<https://github.com/ZhuiyiTechnology/roformer>`__.
|
||||
|
||||
RoFormerConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerConfig
|
||||
:members:
|
||||
|
||||
|
||||
RoFormerTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerTokenizer
|
||||
:members: build_inputs_with_special_tokens, get_special_tokens_mask,
|
||||
create_token_type_ids_from_sequences, save_vocabulary
|
||||
|
||||
|
||||
RoFormerTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerTokenizerFast
|
||||
:members: build_inputs_with_special_tokens
|
||||
|
||||
|
||||
RoFormerModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerModel
|
||||
:members: forward
|
||||
|
||||
|
||||
RoFormerForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerForCausalLM
|
||||
:members: forward
|
||||
|
||||
|
||||
RoFormerForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerForMaskedLM
|
||||
:members: forward
|
||||
|
||||
|
||||
RoFormerForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerForSequenceClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
RoFormerForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerForMultipleChoice
|
||||
:members: forward
|
||||
|
||||
|
||||
RoFormerForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerForTokenClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
RoFormerForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.RoFormerForQuestionAnswering
|
||||
:members: forward
|
||||
|
||||
|
||||
TFRoFormerModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerModel
|
||||
:members: call
|
||||
|
||||
|
||||
TFRoFormerForMaskedLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerForMaskedLM
|
||||
:members: call
|
||||
|
||||
|
||||
TFRoFormerForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerForCausalLM
|
||||
:members: call
|
||||
|
||||
|
||||
TFRoFormerForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerForSequenceClassification
|
||||
:members: call
|
||||
|
||||
|
||||
TFRoFormerForMultipleChoice
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerForMultipleChoice
|
||||
:members: call
|
||||
|
||||
|
||||
TFRoFormerForTokenClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerForTokenClassification
|
||||
:members: call
|
||||
|
||||
|
||||
TFRoFormerForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.TFRoFormerForQuestionAnswering
|
||||
:members: call
|
104  docs/source/model_doc/segformer.mdx  Normal file
@@ -0,0 +1,104 @@
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# SegFormer

## Overview

The SegFormer model was proposed in [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping
Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great
results on image segmentation benchmarks such as ADE20K and Cityscapes.

The abstract from the paper is the following:

*We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with
lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel
hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding,
thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution
differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from
different layers, and thus combining both local attention and global attention to render powerful representations. We
show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our
approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance
and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters,
being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on
Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.*

The figure below illustrates the architecture of SegFormer. Taken from the [original paper](https://arxiv.org/abs/2105.15203).

<img width="600" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/segformer_architecture.png"/>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/NVlabs/SegFormer).

Tips:

- SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decode head.
  [`SegformerModel`] is the hierarchical Transformer encoder (which in the paper is also referred to
  as Mix Transformer or MiT). [`SegformerForSemanticSegmentation`] adds the all-MLP decode head on
  top to perform semantic segmentation of images. In addition, there's
  [`SegformerForImageClassification`] which can be used to - you guessed it - classify images. The
  authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw
  away the classification head, and replace it by the all-MLP decode head. Next, they fine-tune the model altogether on
  ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be
  found on the [hub](https://huggingface.co/models?other=segformer).
- The quickest way to get started with SegFormer is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer) (which showcase both inference and
  fine-tuning on custom data).
- One can use [`SegformerFeatureExtractor`] to prepare images and corresponding segmentation maps
  for the model. Note that this feature extractor is fairly basic and does not include all data augmentations used in
  the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found [here](https://github.com/NVlabs/SegFormer/blob/master/local_configs/_base_/datasets/ade20k_repeat.py). The most
  important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size,
  such as 512x512 or 640x640, after which they are normalized. A short usage sketch follows the table below.
- One additional thing to keep in mind is that one can initialize [`SegformerFeatureExtractor`] with
  `reduce_labels` set to *True* or *False*. In some datasets (like ADE20k), the 0 index is used in the annotated
  segmentation maps for background. However, ADE20k doesn't include the "background" class in its 150 labels.
  Therefore, `reduce_labels` is used to reduce all labels by 1, and to make sure no loss is computed for the
  background class (i.e. it replaces 0 in the annotated maps by 255, which is the *ignore_index* of the loss function
  used by [`SegformerForSemanticSegmentation`]). However, other datasets use the 0 index as
  background class and include this class as part of all labels. In that case, `reduce_labels` should be set to
  *False*, as loss should also be computed for the background class.
- As with most models, SegFormer comes in different sizes, the details of which can be found in the table below.

| **Model variant** | **Depths**    | **Hidden sizes**    | **Decoder hidden size** | **Params (M)** | **ImageNet-1k Top 1** |
| :---------------- | :------------ | :------------------ | :---------------------- | :------------- | :-------------------- |
| MiT-b0            | [2, 2, 2, 2]  | [32, 64, 160, 256]  | 256                     | 3.7            | 70.5                  |
| MiT-b1            | [2, 2, 2, 2]  | [64, 128, 320, 512] | 256                     | 14.0           | 78.7                  |
| MiT-b2            | [3, 4, 6, 3]  | [64, 128, 320, 512] | 768                     | 25.4           | 81.6                  |
| MiT-b3            | [3, 4, 18, 3] | [64, 128, 320, 512] | 768                     | 45.2           | 83.1                  |
| MiT-b4            | [3, 8, 27, 3] | [64, 128, 320, 512] | 768                     | 62.6           | 83.6                  |
| MiT-b5            | [3, 6, 40, 3] | [64, 128, 320, 512] | 768                     | 82.0           | 83.8                  |
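
The feature-extractor points above can be sketched as follows (a minimal sketch; the
`nvidia/segformer-b0-finetuned-ade-512-512` checkpoint fine-tuned on ADE20k is an assumption of this example):

```python
import requests
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# `reduce_labels=True` is the ADE20k-style setting discussed above; it only matters when
# `segmentation_maps` are passed along during fine-tuning.
feature_extractor = SegformerFeatureExtractor.from_pretrained(
    "nvidia/segformer-b0-finetuned-ade-512-512", reduce_labels=True
)
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, num_labels, height / 4, width / 4)

# Upsample to the original resolution and take the argmax to get the predicted segmentation map.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_map = upsampled.argmax(dim=1)[0]
```
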
## SegformerConfig

[[autodoc]] SegformerConfig

## SegformerFeatureExtractor

[[autodoc]] SegformerFeatureExtractor
    - __call__

## SegformerModel

[[autodoc]] SegformerModel
    - forward

## SegformerDecodeHead

[[autodoc]] SegformerDecodeHead
    - forward

## SegformerForImageClassification

[[autodoc]] SegformerForImageClassification
    - forward

## SegformerForSemanticSegmentation

[[autodoc]] SegformerForSemanticSegmentation
    - forward
@ -1,132 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
SegFormer
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The SegFormer model was proposed in `SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
|
||||
<https://arxiv.org/abs/2105.15203>`__ by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping
|
||||
Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great
|
||||
results on image segmentation benchmarks such as ADE20K and Cityscapes.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with
|
||||
lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel
|
||||
hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding,
|
||||
thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution
|
||||
differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from
|
||||
different layers, and thus combining both local attention and global attention to render powerful representations. We
|
||||
show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our
|
||||
approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance
|
||||
and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters,
|
||||
being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on
|
||||
Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.*
|
||||
|
||||
The figure below illustrates the architecture of SegFormer. Taken from the `original paper
|
||||
<https://arxiv.org/abs/2105.15203>`__.
|
||||
|
||||
.. image:: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/segformer_architecture.png
|
||||
:width: 600
|
||||
|
||||
This model was contributed by `nielsr <https://huggingface.co/nielsr>`__. The original code can be found `here
|
||||
<https://github.com/NVlabs/SegFormer>`__.
|
||||
|
||||
Tips:
|
||||
|
||||
- SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decode head.
|
||||
:class:`~transformers.SegformerModel` is the hierarchical Transformer encoder (which in the paper is also referred to
|
||||
as Mix Transformer or MiT). :class:`~transformers.SegformerForSemanticSegmentation` adds the all-MLP decode head on
|
||||
top to perform semantic segmentation of images. In addition, there's
|
||||
:class:`~transformers.SegformerForImageClassification` which can be used to - you guessed it - classify images. The
|
||||
authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw
|
||||
away the classification head, and replace it by the all-MLP decode head. Next, they fine-tune the model altogether on
|
||||
ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be
|
||||
found on the `hub <https://huggingface.co/models?other=segformer>`__.
|
||||
- The quickest way to get started with SegFormer is by checking the `example notebooks
|
||||
<https://github.com/NielsRogge/Transformers-Tutorials/tree/master/SegFormer>`__ (which showcase both inference and
|
||||
fine-tuning on custom data).
|
||||
- One can use :class:`~transformers.SegformerFeatureExtractor` to prepare images and corresponding segmentation maps
|
||||
for the model. Note that this feature extractor is fairly basic and does not include all data augmentations used in
|
||||
the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found `here
|
||||
<https://github.com/NVlabs/SegFormer/blob/master/local_configs/_base_/datasets/ade20k_repeat.py>`__. The most
|
||||
important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size,
|
||||
such as 512x512 or 640x640, after which they are normalized.
|
||||
- One additional thing to keep in mind is that one can initialize :class:`~transformers.SegformerFeatureExtractor` with
|
||||
:obj:`reduce_labels` set to `True` or `False`. In some datasets (like ADE20k), the 0 index is used in the annotated
|
||||
segmentation maps for background. However, ADE20k doesn't include the "background" class in its 150 labels.
|
||||
Therefore, :obj:`reduce_labels` is used to reduce all labels by 1, and to make sure no loss is computed for the
|
||||
background class (i.e. it replaces 0 in the annotated maps by 255, which is the `ignore_index` of the loss function
|
||||
used by :class:`~transformers.SegformerForSemanticSegmentation`). However, other datasets use the 0 index as
|
||||
background class and include this class as part of all labels. In that case, :obj:`reduce_labels` should be set to
|
||||
`False`, as loss should also be computed for the background class.
|
||||
- As most models, SegFormer comes in different sizes, the details of which can be found in the table below.
|
||||
|
||||
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| **Model variant** | **Depths**    | **Hidden sizes**    | **Decoder hidden size** | **Params (M)** | **ImageNet-1k Top 1** |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| MiT-b0            | [2, 2, 2, 2]  | [32, 64, 160, 256]  | 256                     | 3.7            | 70.5                  |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| MiT-b1            | [2, 2, 2, 2]  | [64, 128, 320, 512] | 256                     | 14.0           | 78.7                  |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| MiT-b2            | [3, 4, 6, 3]  | [64, 128, 320, 512] | 768                     | 25.4           | 81.6                  |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| MiT-b3            | [3, 4, 18, 3] | [64, 128, 320, 512] | 768                     | 45.2           | 83.1                  |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| MiT-b4            | [3, 8, 27, 3] | [64, 128, 320, 512] | 768                     | 62.6           | 83.6                  |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
| MiT-b5            | [3, 6, 40, 3] | [64, 128, 320, 512] | 768                     | 82.0           | 83.8                  |
+-------------------+---------------+---------------------+-------------------------+----------------+-----------------------+
|
||||
|
||||
SegformerConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SegformerConfig
|
||||
:members:
|
||||
|
||||
|
||||
SegformerFeatureExtractor
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SegformerFeatureExtractor
|
||||
:members: __call__
|
||||
|
||||
|
||||
SegformerModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SegformerModel
|
||||
:members: forward
|
||||
|
||||
|
||||
SegformerDecodeHead
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SegformerDecodeHead
|
||||
:members: forward
|
||||
|
||||
|
||||
SegformerForImageClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SegformerForImageClassification
|
||||
:members: forward
|
||||
|
||||
|
||||
SegformerForSemanticSegmentation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SegformerForSemanticSegmentation
|
||||
:members: forward
|
57
docs/source/model_doc/sew.mdx
Normal file
57
docs/source/model_doc/sew.mdx
Normal file
@ -0,0 +1,57 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# SEW
|
||||
|
||||
## Overview
|
||||
|
||||
SEW (Squeezed and Efficient Wav2Vec) was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training
|
||||
for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q.
|
||||
Weinberger, Yoav Artzi.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
|
||||
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
|
||||
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
|
||||
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
|
||||
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
|
||||
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
|
||||
time, SEW reduces word error rate by 25-50% across different model sizes.*
|
||||
|
||||
Tips:
|
||||
|
||||
- SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
|
||||
- SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using
|
||||
  [`Wav2Vec2CTCTokenizer`], as shown in the sketch below.
|
||||
|
||||
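The sketch below shows what that decoding step looks like in practice. It is only a minimal example: the checkpoint name `asapp/sew-tiny-100k-ft-ls100h` and the use of [`Wav2Vec2Processor`] (which bundles the feature extractor with [`Wav2Vec2CTCTokenizer`]) are assumptions, and any SEW checkpoint fine-tuned with CTC can be substituted.

```python
>>> import torch
>>> import soundfile as sf
>>> from datasets import load_dataset
>>> from transformers import SEWForCTC, Wav2Vec2Processor

>>> # assumed checkpoint: a SEW model fine-tuned for CTC that ships a matching processor
>>> model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
>>> processor = Wav2Vec2Processor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # greedy CTC decoding: most likely token per frame, then collapse repeats and blank tokens
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
```
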
This model was contributed by [anton-l](https://huggingface.co/anton-l).
|
||||
|
||||
|
||||
## SEWConfig
|
||||
|
||||
[[autodoc]] SEWConfig
|
||||
|
||||
## SEWModel
|
||||
|
||||
[[autodoc]] SEWModel
|
||||
- forward
|
||||
|
||||
## SEWForCTC
|
||||
|
||||
[[autodoc]] SEWForCTC
|
||||
- forward
|
||||
|
||||
## SEWForSequenceClassification
|
||||
|
||||
[[autodoc]] SEWForSequenceClassification
|
||||
- forward
|
@ -1,67 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
SEW
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
SEW (Squeezed and Efficient Wav2Vec) was proposed in `Performance-Efficiency Trade-offs in Unsupervised Pre-training
|
||||
for Speech Recognition <https://arxiv.org/abs/2109.06870>`__ by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q.
|
||||
Weinberger, Yoav Artzi.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
|
||||
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
|
||||
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
|
||||
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
|
||||
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
|
||||
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
|
||||
time, SEW reduces word error rate by 25-50% across different model sizes.*
|
||||
|
||||
Tips:
|
||||
|
||||
- SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
|
||||
- SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using
|
||||
:class:`~transformers.Wav2Vec2CTCTokenizer`.
|
||||
|
||||
This model was contributed by `anton-l <https://huggingface.co/anton-l>`__.
|
||||
|
||||
|
||||
SEWConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWConfig
|
||||
:members:
|
||||
|
||||
|
||||
SEWModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWModel
|
||||
:members: forward
|
||||
|
||||
|
||||
SEWForCTC
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWForCTC
|
||||
:members: forward
|
||||
|
||||
|
||||
SEWForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWForSequenceClassification
|
||||
:members: forward
|
57
docs/source/model_doc/sew_d.mdx
Normal file
57
docs/source/model_doc/sew_d.mdx
Normal file
@ -0,0 +1,57 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# SEW-D
|
||||
|
||||
## Overview
|
||||
|
||||
SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in [Performance-Efficiency Trade-offs
|
||||
in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim,
|
||||
Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
|
||||
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
|
||||
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
|
||||
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
|
||||
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
|
||||
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
|
||||
time, SEW reduces word error rate by 25-50% across different model sizes.*
|
||||
|
||||
Tips:
|
||||
|
||||
- SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
|
||||
- SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded
|
||||
  using [`Wav2Vec2CTCTokenizer`]; a short pipeline example follows below.
|
||||
|
||||
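For a quick end-to-end check, the `automatic-speech-recognition` pipeline wraps the feature extraction, the SEW-D forward pass and the CTC decoding described above. This is only a sketch: the checkpoint name `asapp/sew-d-tiny-100k-ft-ls100h` is an assumption, and any SEW-D model fine-tuned with CTC works the same way.

```python
>>> from datasets import load_dataset
>>> from transformers import pipeline

>>> librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> # assumed checkpoint: a SEW-D model fine-tuned with CTC for English ASR
>>> asr = pipeline(
...     "automatic-speech-recognition",
...     model="asapp/sew-d-tiny-100k-ft-ls100h",
...     feature_extractor="asapp/sew-d-tiny-100k-ft-ls100h",
... )

>>> transcription = asr(librispeech_en[0]["file"])
```
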
This model was contributed by [anton-l](https://huggingface.co/anton-l).
|
||||
|
||||
|
||||
## SEWDConfig
|
||||
|
||||
[[autodoc]] SEWDConfig
|
||||
|
||||
## SEWDModel
|
||||
|
||||
[[autodoc]] SEWDModel
|
||||
- forward
|
||||
|
||||
## SEWDForCTC
|
||||
|
||||
[[autodoc]] SEWDForCTC
|
||||
- forward
|
||||
|
||||
## SEWDForSequenceClassification
|
||||
|
||||
[[autodoc]] SEWDForSequenceClassification
|
||||
- forward
|
@ -1,66 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
SEW-D
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in `Performance-Efficiency Trade-offs
|
||||
in Unsupervised Pre-training for Speech Recognition <https://arxiv.org/abs/2109.06870>`__ by Felix Wu, Kwangyoun Kim,
|
||||
Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
*This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
|
||||
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
|
||||
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
|
||||
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
|
||||
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
|
||||
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
|
||||
time, SEW reduces word error rate by 25-50% across different model sizes.*
|
||||
|
||||
Tips:
|
||||
|
||||
- SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
|
||||
- SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded
|
||||
using :class:`~transformers.Wav2Vec2CTCTokenizer`.
|
||||
|
||||
This model was contributed by `anton-l <https://huggingface.co/anton-l>`__.
|
||||
|
||||
|
||||
SEWDConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWDConfig
|
||||
:members:
|
||||
|
||||
|
||||
SEWDModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWDModel
|
||||
:members: forward
|
||||
|
||||
|
||||
SEWDForCTC
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWDForCTC
|
||||
:members: forward
|
||||
|
||||
SEWDForSequenceClassification
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SEWDForSequenceClassification
|
||||
:members: forward
|
138
docs/source/model_doc/speech_to_text.mdx
Normal file
138
docs/source/model_doc/speech_to_text.mdx
Normal file
@ -0,0 +1,138 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Speech2Text
|
||||
|
||||
## Overview
|
||||
|
||||
The Speech2Text model was proposed in [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a
|
||||
transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
|
||||
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
|
||||
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
|
||||
transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST:
|
||||
[LibriSpeech](http://www.openslr.org/12), [CoVoST 2](https://github.com/facebookresearch/covost), [MuST-C](https://ict.fbk.eu/must-c/).
|
||||
|
||||
This model was contributed by [valhalla](https://huggingface.co/valhalla). The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).
|
||||
|
||||
|
||||
## Inference
|
||||
|
||||
Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech
|
||||
signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The
|
||||
`generate()` method can be used for inference.
|
||||
|
||||
The [`Speech2TextFeatureExtractor`] class is responsible for extracting the log-mel filter-bank
|
||||
features. The [`Speech2TextProcessor`] wraps [`Speech2TextFeatureExtractor`] and
|
||||
[`Speech2TextTokenizer`] into a single instance to both extract the input features and decode the
|
||||
predicted token ids.
|
||||
|
||||
The feature extractor depends on `torchaudio` and the tokenizer depends on `sentencepiece` so be sure to
|
||||
install those packages before running the examples. You could either install those as extra speech dependencies with
|
||||
`pip install transformers"[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. Also `torchaudio` requires the development version of the [libsndfile](http://www.mega-nerd.com/libsndfile/) package which can be installed via a system package manager. On Ubuntu it can
|
||||
be installed as follows: `apt install libsndfile1-dev`
|
||||
|
||||
|
||||
- ASR and Speech Translation
|
||||
|
||||
```python
>>> import torch
>>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
>>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
>>> generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])

>>> transcription = processor.batch_decode(generated_ids)
```
|
||||
|
||||
- Multilingual speech translation
|
||||
|
||||
For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and
|
||||
the target language id is forced as the first generated token. To force the target language id as the first
|
||||
generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following
|
||||
example shows how to translate English speech to French text using the *facebook/s2t-medium-mustc-multilingual-st*
|
||||
checkpoint.
|
||||
|
||||
```python
>>> import torch
>>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
>>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
>>> generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"], forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"])

>>> translation = processor.batch_decode(generated_ids)
```
|
||||
|
||||
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for Speech2Text checkpoints.
|
||||
|
||||
|
||||
## Speech2TextConfig
|
||||
|
||||
[[autodoc]] Speech2TextConfig
|
||||
|
||||
## Speech2TextTokenizer
|
||||
|
||||
[[autodoc]] Speech2TextTokenizer
|
||||
- build_inputs_with_special_tokens
|
||||
- get_special_tokens_mask
|
||||
- create_token_type_ids_from_sequences
|
||||
- save_vocabulary
|
||||
|
||||
## Speech2TextFeatureExtractor
|
||||
|
||||
[[autodoc]] Speech2TextFeatureExtractor
|
||||
- __call__
|
||||
|
||||
## Speech2TextProcessor
|
||||
|
||||
[[autodoc]] Speech2TextProcessor
|
||||
- __call__
|
||||
- from_pretrained
|
||||
- save_pretrained
|
||||
- batch_decode
|
||||
- decode
|
||||
- as_target_processor
|
||||
|
||||
## Speech2TextModel
|
||||
|
||||
[[autodoc]] Speech2TextModel
|
||||
- forward
|
||||
|
||||
## Speech2TextForConditionalGeneration
|
||||
|
||||
[[autodoc]] Speech2TextForConditionalGeneration
|
||||
- forward
|
@ -1,153 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Speech2Text
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Speech2Text model was proposed in `fairseq S2T: Fast Speech-to-Text Modeling with fairseq
|
||||
<https://arxiv.org/abs/2010.05171>`__ by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a
|
||||
transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
|
||||
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
|
||||
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
|
||||
transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST:
|
||||
`LibriSpeech <http://www.openslr.org/12>`__, `CoVoST 2 <https://github.com/facebookresearch/covost>`__, `MuST-C
|
||||
<https://ict.fbk.eu/must-c/>`__.
|
||||
|
||||
This model was contributed by `valhalla <https://huggingface.co/valhalla>`__. The original code can be found `here
|
||||
<https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text>`__.
|
||||
|
||||
|
||||
Inference
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech
|
||||
signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The
|
||||
:obj:`generate()` method can be used for inference.
|
||||
|
||||
The :class:`~transformers.Speech2TextFeatureExtractor` class is responsible for extracting the log-mel filter-bank
|
||||
features. The :class:`~transformers.Speech2TextProcessor` wraps :class:`~transformers.Speech2TextFeatureExtractor` and
|
||||
:class:`~transformers.Speech2TextTokenizer` into a single instance to both extract the input features and decode the
|
||||
predicted token ids.
|
||||
|
||||
The feature extractor depends on :obj:`torchaudio` and the tokenizer depends on :obj:`sentencepiece` so be sure to
|
||||
install those packages before running the examples. You could either install those as extra speech dependencies with
|
||||
``pip install transformers"[speech, sentencepiece]"`` or install the packages seperately with ``pip install torchaudio
|
||||
sentencepiece``. Also ``torchaudio`` requires the development version of the `libsndfile
|
||||
<http://www.mega-nerd.com/libsndfile/>`__ package which can be installed via a system package manager. On Ubuntu it can
|
||||
be installed as follows: ``apt install libsndfile1-dev``
|
||||
|
||||
|
||||
- ASR and Speech Translation
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> import torch
|
||||
>>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
|
||||
>>> from datasets import load_dataset
|
||||
>>> import soundfile as sf
|
||||
|
||||
>>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
|
||||
>>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
|
||||
|
||||
>>> def map_to_array(batch):
|
||||
... speech, _ = sf.read(batch["file"])
|
||||
... batch["speech"] = speech
|
||||
... return batch
|
||||
|
||||
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
||||
>>> ds = ds.map(map_to_array)
|
||||
|
||||
>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
|
||||
>>> generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
|
||||
|
||||
>>> transcription = processor.batch_decode(generated_ids)
|
||||
|
||||
|
||||
- Multilingual speech translation
|
||||
|
||||
For multilingual speech translation models, :obj:`eos_token_id` is used as the :obj:`decoder_start_token_id` and
|
||||
the target language id is forced as the first generated token. To force the target language id as the first
|
||||
generated token, pass the :obj:`forced_bos_token_id` parameter to the :obj:`generate()` method. The following
|
||||
example shows how to transate English speech to French text using the `facebook/s2t-medium-mustc-multilingual-st`
|
||||
checkpoint.
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> import torch
|
||||
>>> from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
|
||||
>>> from datasets import load_dataset
|
||||
>>> import soundfile as sf
|
||||
|
||||
>>> model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
|
||||
>>> processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
|
||||
|
||||
>>> def map_to_array(batch):
|
||||
... speech, _ = sf.read(batch["file"])
|
||||
... batch["speech"] = speech
|
||||
... return batch
|
||||
|
||||
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
||||
>>> ds = ds.map(map_to_array)
|
||||
|
||||
>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
|
||||
>>> generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask], forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"])
|
||||
|
||||
>>> translation = processor.batch_decode(generated_ids)
|
||||
|
||||
|
||||
See the `model hub <https://huggingface.co/models?filter=speech_to_text>`__ to look for Speech2Text checkpoints.
|
||||
|
||||
|
||||
Speech2TextConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2TextConfig
|
||||
:members:
|
||||
|
||||
|
||||
Speech2TextTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2TextTokenizer
|
||||
:members: build_inputs_with_special_tokens, get_special_tokens_mask,
|
||||
create_token_type_ids_from_sequences, save_vocabulary
|
||||
|
||||
|
||||
Speech2TextFeatureExtractor
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2TextFeatureExtractor
|
||||
:members: __call__
|
||||
|
||||
|
||||
Speech2TextProcessor
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2TextProcessor
|
||||
:members: __call__, from_pretrained, save_pretrained, batch_decode, decode, as_target_processor
|
||||
|
||||
|
||||
Speech2TextModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2TextModel
|
||||
:members: forward
|
||||
|
||||
|
||||
Speech2TextForConditionalGeneration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2TextForConditionalGeneration
|
||||
:members: forward
|
116
docs/source/model_doc/speech_to_text_2.mdx
Normal file
116
docs/source/model_doc/speech_to_text_2.mdx
Normal file
@ -0,0 +1,116 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Speech2Text2
|
||||
|
||||
## Overview
|
||||
|
||||
The Speech2Text2 model is used together with [Wav2Vec2](wav2vec2) for Speech Translation models proposed in
|
||||
[Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by
|
||||
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
|
||||
|
||||
Speech2Text2 is a *decoder-only* transformer model that can be used with any speech *encoder-only*, such as
|
||||
[Wav2Vec2](wav2vec2) or [HuBERT](hubert) for Speech-to-Text tasks. Please refer to the
|
||||
[SpeechEncoderDecoder](speechencoderdecoder) class on how to combine Speech2Text2 with any speech *encoder-only*
|
||||
model.
|
||||
|
||||
This model was contributed by [Patrick von Platen](https://huggingface.co/patrickvonplaten).
|
||||
|
||||
The original code can be found [here](https://github.com/pytorch/fairseq/blob/1f7ef9ed1e1061f8c7f88f8b94c7186834398690/fairseq/models/wav2vec/wav2vec2_asr.py#L266).
|
||||
|
||||
|
||||
Tips:
|
||||
|
||||
- Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
|
||||
  the [official models](https://huggingface.co/models?other=speech2text2).
|
||||
- Speech2Text2 is always used within the [SpeechEncoderDecoder](speechencoderdecoder) framework.
|
||||
- Speech2Text2's tokenizer is based on [fastBPE](https://github.com/glample/fastBPE).
|
||||
|
||||
## Inference
|
||||
|
||||
Speech2Text2's [`SpeechEncoderDecoderModel`] model accepts raw waveform input values from speech and
|
||||
makes use of [`~generation_utils.GenerationMixin.generate`] to translate the input speech
|
||||
autoregressively to the target language.
|
||||
|
||||
The [`Wav2Vec2FeatureExtractor`] class is responsible for preprocessing the input speech and
|
||||
[`Speech2Text2Tokenizer`] decodes the generated target tokens to the target string. The
|
||||
[`Speech2Text2Processor`] wraps [`Wav2Vec2FeatureExtractor`] and
|
||||
[`Speech2Text2Tokenizer`] into a single instance to both extract the input features and decode the
|
||||
predicted token ids.
|
||||
|
||||
- Step-by-step Speech Translation
|
||||
|
||||
```python
>>> import torch
>>> from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
>>> processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
>>> generated_ids = model.generate(input_ids=inputs["input_values"], attention_mask=inputs["attention_mask"])

>>> transcription = processor.batch_decode(generated_ids)
```
|
||||
|
||||
- Speech Translation via Pipelines
|
||||
|
||||
The automatic speech recognition pipeline can also be used to translate speech in just a couple lines of code
|
||||
|
||||
```python
>>> from datasets import load_dataset
>>> from transformers import pipeline

>>> librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-de", feature_extractor="facebook/s2t-wav2vec2-large-en-de")

>>> translation_de = asr(librispeech_en[0]["file"])
```
|
||||
|
||||
See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for Speech2Text2 checkpoints.
|
||||
|
||||
|
||||
## Speech2Text2Config
|
||||
|
||||
[[autodoc]] Speech2Text2Config
|
||||
|
||||
## Speech2Text2Tokenizer
|
||||
|
||||
[[autodoc]] Speech2Text2Tokenizer
|
||||
- batch_decode
|
||||
- decode
|
||||
- save_vocabulary
|
||||
|
||||
## Speech2Text2Processor
|
||||
|
||||
[[autodoc]] Speech2Text2Processor
|
||||
- __call__
|
||||
- from_pretrained
|
||||
- save_pretrained
|
||||
- batch_decode
|
||||
- decode
|
||||
- as_target_processor
|
||||
|
||||
## Speech2Text2ForCausalLM
|
||||
|
||||
[[autodoc]] Speech2Text2ForCausalLM
|
||||
- forward
|
@ -1,123 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Speech2Text2
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Speech2Text2 model is used together with :doc:`Wav2Vec2 <wav2vec2>` for Speech Translation models proposed in
|
||||
`Large-Scale Self- and Semi-Supervised Learning for Speech Translation <https://arxiv.org/abs/2104.06678>`__ by
|
||||
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
|
||||
|
||||
Speech2Text2 is a *decoder-only* transformer model that can be used with any speech *encoder-only*, such as
|
||||
:doc:`Wav2Vec2 <wav2vec2>` or :doc:`HuBERT <hubert>` for Speech-to-Text tasks. Please refer to the
|
||||
:doc:`SpeechEncoderDecoder <speechencoderdecoder>` class on how to combine Speech2Text2 with any speech *encoder-only*
|
||||
model.
|
||||
|
||||
This model was contributed by `Patrick von Platen <https://huggingface.co/patrickvonplaten>`__.
|
||||
|
||||
The original code can be found `here
|
||||
<https://github.com/pytorch/fairseq/blob/1f7ef9ed1e1061f8c7f88f8b94c7186834398690/fairseq/models/wav2vec/wav2vec2_asr.py#L266>`__.
|
||||
|
||||
|
||||
Tips:
|
||||
|
||||
- Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
|
||||
the `official models <https://huggingface.co/models?other=speech2text2>`__ .
|
||||
- Speech2Text2 is always used within the :doc:`SpeechEncoderDecoder <speechencoderdecoder>` framework.
|
||||
- Speech2Text2's tokenizer is based on `fastBPE <https://github.com/glample/fastBPE>`.
|
||||
|
||||
Inference
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Speech2Text2's :class:`~transformers.SpeechEncoderDecoderModel` model accepts raw waveform input values from speech and
|
||||
makes use of :func:`~transformers.generation_utils.GenerationMixin.generate` to translate the input speech
|
||||
autoregressively to the target language.
|
||||
|
||||
The :class:`~transformers.Wav2Vec2FeatureExtractor` class is responsible for preprocessing the input speech and
|
||||
:class:`~transformers.Speech2Text2Tokenizer` decodes the generated target tokens to the target string. The
|
||||
:class:`~transformers.Speech2Text2Processor` wraps :class:`~transformers.Wav2Vec2FeatureExtractor` and
|
||||
:class:`~transformers.Speech2Text2Tokenizer` into a single instance to both extract the input features and decode the
|
||||
predicted token ids.
|
||||
|
||||
- Step-by-step Speech Translation
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> import torch
|
||||
>>> from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
|
||||
>>> from datasets import load_dataset
|
||||
>>> import soundfile as sf
|
||||
|
||||
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
|
||||
>>> processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
|
||||
|
||||
>>> def map_to_array(batch):
|
||||
... speech, _ = sf.read(batch["file"])
|
||||
... batch["speech"] = speech
|
||||
... return batch
|
||||
|
||||
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
||||
>>> ds = ds.map(map_to_array)
|
||||
|
||||
>>> inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
|
||||
>>> generated_ids = model.generate(input_ids=inputs["input_values"], attention_mask=inputs["attention_mask"])
|
||||
|
||||
>>> transcription = processor.batch_decode(generated_ids)
|
||||
|
||||
|
||||
- Speech Translation via Pipelines
|
||||
|
||||
The automatic speech recognition pipeline can also be used to translate speech in just a couple lines of code
|
||||
|
||||
.. code-block::
|
||||
|
||||
>>> from datasets import load_dataset
|
||||
>>> from transformers import pipeline
|
||||
|
||||
>>> librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
||||
>>> asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-de", feature_extractor="facebook/s2t-wav2vec2-large-en-de")
|
||||
|
||||
>>> translation_de = asr(librispeech_en[0]["file"])
|
||||
|
||||
|
||||
See `model hub <https://huggingface.co/models?filter=speech2text2>`__ to look for Speech2Text2 checkpoints.
|
||||
|
||||
|
||||
Speech2Text2Config
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2Text2Config
|
||||
:members:
|
||||
|
||||
|
||||
Speech2TextTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2Text2Tokenizer
|
||||
:members: batch_decode, decode, save_vocabulary
|
||||
|
||||
|
||||
Speech2Text2Processor
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2Text2Processor
|
||||
:members: __call__, from_pretrained, save_pretrained, batch_decode, decode, as_target_processor
|
||||
|
||||
|
||||
Speech2Text2ForCausalLM
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.Speech2Text2ForCausalLM
|
||||
:members: forward
|
35
docs/source/model_doc/speechencoderdecoder.mdx
Normal file
35
docs/source/model_doc/speechencoderdecoder.mdx
Normal file
@ -0,0 +1,35 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Speech Encoder Decoder Models
|
||||
|
||||
The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-sequence-to-text-sequence model
|
||||
with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder.
|
||||
|
||||
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
|
||||
recognition and speech translation has *e.g.* been shown in [Large-Scale Self- and Semi-Supervised Learning for Speech
|
||||
Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli,
|
||||
Alexis Conneau.
|
||||
|
||||
An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in
|
||||
[Speech2Text2](speech_to_text_2).
|
||||
|
||||
|
||||
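Below is a minimal sketch of such a warm start, pairing a pretrained Wav2Vec2 encoder with a pretrained BERT decoder via [`~SpeechEncoderDecoderModel.from_encoder_decoder_pretrained`]. The checkpoint pair is only an illustration, and because the cross-attention weights are newly initialized, the resulting model still has to be fine-tuned before it produces useful transcriptions.

```python
>>> from transformers import BertTokenizer, SpeechEncoderDecoderModel

>>> # warm start: speech encoder + autoregressive text decoder (cross-attention is randomly initialized)
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "facebook/wav2vec2-base-960h", "bert-base-uncased"
... )

>>> # generation-specific tokens are not set automatically and must be configured before training
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
```
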
## SpeechEncoderDecoderConfig
|
||||
|
||||
[[autodoc]] SpeechEncoderDecoderConfig
|
||||
|
||||
## SpeechEncoderDecoderModel
|
||||
|
||||
[[autodoc]] SpeechEncoderDecoderModel
|
||||
- forward
|
||||
- from_encoder_decoder_pretrained
|
@ -1,40 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Speech Encoder Decoder Models
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
The :class:`~transformers.SpeechEncoderDecoderModel` can be used to initialize a speech-sequence-to-text-sequence model
|
||||
with any pretrained speech autoencoding model as the encoder (*e.g.* :doc:`Wav2Vec2 <wav2vec2>`, :doc:`Hubert
|
||||
<hubert>`) and any pretrained autoregressive model as the decoder.
|
||||
|
||||
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
|
||||
recognition and speech translation has *e.g.* been shown in `Large-Scale Self- and Semi-Supervised Learning for Speech
|
||||
Translation <https://arxiv.org/abs/2104.06678>`__ by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli,
|
||||
Alexis Conneau.
|
||||
|
||||
An example of how to use a :class:`~transformers.SpeechEncoderDecoderModel` for inference can be seen in
|
||||
:doc:`Speech2Text2 <speech_to_text_2>`.
|
||||
|
||||
|
||||
SpeechEncoderDecoderConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SpeechEncoderDecoderConfig
|
||||
:members:
|
||||
|
||||
|
||||
SpeechEncoderDecoderModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SpeechEncoderDecoderModel
|
||||
:members: forward, from_encoder_decoder_pretrained
|
74
docs/source/model_doc/splinter.mdx
Normal file
74
docs/source/model_doc/splinter.mdx
Normal file
@ -0,0 +1,74 @@
|
||||
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
-->
|
||||
|
||||
# Splinter
|
||||
|
||||
## Overview
|
||||
|
||||
The Splinter model was proposed in [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter
|
||||
is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus
|
||||
comprising Wikipedia and the Toronto Book Corpus.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order
|
||||
of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred
|
||||
training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between
|
||||
current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question
|
||||
answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all
|
||||
recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans
|
||||
are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select
|
||||
the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD
|
||||
with only 128 training examples), while maintaining competitive performance in the high-resource setting.
|
||||
|
||||
Tips:
|
||||
|
||||
- Splinter was trained to predict answer spans conditioned on a special [QUESTION] token. These tokens contextualize
|
||||
to question representations which are used to predict the answers. This layer is called QASS, and is the default
|
||||
behaviour in the [`SplinterForQuestionAnswering`] class. Therefore:
|
||||
- Use [`SplinterTokenizer`] (rather than [`BertTokenizer`]), as it already
|
||||
contains this special token. Also, its default behavior is to use this token when two sequences are given (for
|
||||
example, in the *run_qa.py* script).
|
||||
- If you plan on using Splinter outside *run_qa.py*, please keep in mind the question token - it might be important for
|
||||
the success of your model, especially in a few-shot setting.
|
||||
- Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that
|
||||
  one also has the pretrained weights of the QASS layer (*tau/splinter-base-qass* and *tau/splinter-large-qass*) and one
|
||||
doesn't (*tau/splinter-base* and *tau/splinter-large*). This is done to support randomly initializing this layer at
|
||||
  fine-tuning, as it is shown to yield better results for some cases in the paper. A minimal usage sketch follows these tips.
|
||||
|
||||
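The snippet below is a rough illustration of these tips: it loads the *tau/splinter-base-qass* checkpoint (so the pretrained QASS head is used) and lets [`SplinterTokenizer`] insert the [QUESTION] token for a question/context pair. Treat it as a minimal sketch rather than the full *run_qa.py* recipe; the example question and context are made up.

```python
>>> import torch
>>> from transformers import SplinterForQuestionAnswering, SplinterTokenizer

>>> # checkpoint that ships the pretrained QASS head
>>> tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
>>> model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")

>>> question = "Who wrote On the Origin of Species?"
>>> context = "On the Origin of Species was written by Charles Darwin and published in 1859."

>>> # the tokenizer adds the special [QUESTION] token when two sequences are given
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # pick the most likely start/end positions and decode the corresponding span
>>> start = int(outputs.start_logits.argmax())
>>> end = int(outputs.end_logits.argmax())
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
```
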
This model was contributed by [yuvalkirstain](https://huggingface.co/yuvalkirstain) and [oriram](https://huggingface.co/oriram). The original code can be found [here](https://github.com/oriram/splinter).
|
||||
|
||||
## SplinterConfig
|
||||
|
||||
[[autodoc]] SplinterConfig
|
||||
|
||||
## SplinterTokenizer
|
||||
|
||||
[[autodoc]] SplinterTokenizer
|
||||
- build_inputs_with_special_tokens
|
||||
- get_special_tokens_mask
|
||||
- create_token_type_ids_from_sequences
|
||||
- save_vocabulary
|
||||
|
||||
## SplinterTokenizerFast
|
||||
|
||||
[[autodoc]] SplinterTokenizerFast
|
||||
|
||||
## SplinterModel
|
||||
|
||||
[[autodoc]] SplinterModel
|
||||
- forward
|
||||
|
||||
## SplinterForQuestionAnswering
|
||||
|
||||
[[autodoc]] SplinterForQuestionAnswering
|
||||
- forward
|
@ -1,87 +0,0 @@
|
||||
..
|
||||
Copyright 2021 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
Splinter
|
||||
-----------------------------------------------------------------------------------------------------------------------
|
||||
|
||||
Overview
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Splinter model was proposed in `Few-Shot Question Answering by Pretraining Span Selection
|
||||
<https://arxiv.org/abs/2101.00438>`__ by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter
|
||||
is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus
|
||||
comprising Wikipedia and the Toronto Book Corpus.
|
||||
|
||||
The abstract from the paper is the following:
|
||||
|
||||
In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order
|
||||
of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred
|
||||
training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between
|
||||
current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question
|
||||
answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all
|
||||
recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans
|
||||
are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select
|
||||
the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD
|
||||
with only 128 training examples), while maintaining competitive performance in the high-resource setting.
|
||||
|
||||
Tips:
|
||||
|
||||
- Splinter was trained to predict answers spans conditioned on a special [QUESTION] token. These tokens contextualize
|
||||
to question representations which are used to predict the answers. This layer is called QASS, and is the default
|
||||
behaviour in the :class:`~transformers.SplinterForQuestionAnswering` class. Therefore:
|
||||
- Use :class:`~transformers.SplinterTokenizer` (rather than :class:`~transformers.BertTokenizer`), as it already
|
||||
contains this special token. Also, its default behavior is to use this token when two sequences are given (for
|
||||
example, in the `run_qa.py` script).
|
||||
- If you plan on using Splinter outside `run_qa.py`, please keep in mind the question token - it might be important for
|
||||
the success of your model, especially in a few-shot setting.
|
||||
- Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that
|
||||
one also has the pretrained wights of the QASS layer (`tau/splinter-base-qass` and `tau/splinter-large-qass`) and one
|
||||
doesn't (`tau/splinter-base` and `tau/splinter-large`). This is done to support randomly initializing this layer at
|
||||
fine-tuning, as it is shown to yield better results for some cases in the paper.
|
||||
|
||||
This model was contributed by `yuvalkirstain <https://huggingface.co/yuvalkirstain>`__ and `oriram
|
||||
<https://huggingface.co/oriram>`__. The original code can be found `here <https://github.com/oriram/splinter>`__.
|
||||
|
||||
SplinterConfig
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SplinterConfig
|
||||
:members:
|
||||
|
||||
|
||||
SplinterTokenizer
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SplinterTokenizer
|
||||
:members: build_inputs_with_special_tokens, get_special_tokens_mask,
|
||||
create_token_type_ids_from_sequences, save_vocabulary
|
||||
|
||||
|
||||
SplinterTokenizerFast
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SplinterTokenizerFast
|
||||
:members:
|
||||
|
||||
|
||||
SplinterModel
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SplinterModel
|
||||
:members: forward
|
||||
|
||||
|
||||
SplinterForQuestionAnswering
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. autoclass:: transformers.SplinterForQuestionAnswering
|
||||
:members: forward
|
88
docs/source/model_doc/squeezebert.mdx
Normal file
88
docs/source/model_doc/squeezebert.mdx
Normal file
@ -0,0 +1,88 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# SqueezeBERT

## Overview

The SqueezeBERT model was proposed in [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses [grouped convolutions](https://blog.yani.io/filter-group-tutorial)
instead of fully-connected layers for the Q, K, V and FFN layers.

The abstract from the paper is the following:

*Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.*

Tips:

- SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
  rather than the left.
- SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
  efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
  with a causal language modeling (CLM) objective are better in that regard.
- For best results when finetuning on sequence classification tasks, it is recommended to start with the
  *squeezebert/squeezebert-mnli-headless* checkpoint.
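
As a concrete illustration of the last tip, here is a minimal sketch (not taken from the original documentation) of
loading that checkpoint for a hypothetical two-class sentence classification task. The label count and the example
sentence are assumptions, and the sketch assumes the "headless" checkpoint ships without classification-head weights,
so a fresh head is initialized for the new task:

```python
from transformers import AutoTokenizer, SqueezeBertForSequenceClassification

checkpoint = "squeezebert/squeezebert-mnli-headless"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=2 is an assumption for the downstream task; the classification head is freshly initialized
model = SqueezeBertForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # expected: torch.Size([1, 2])
```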

This model was contributed by [forresti](https://huggingface.co/forresti).

## SqueezeBertConfig

[[autodoc]] SqueezeBertConfig

## SqueezeBertTokenizer

[[autodoc]] SqueezeBertTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## SqueezeBertTokenizerFast

[[autodoc]] SqueezeBertTokenizerFast

## SqueezeBertModel

[[autodoc]] SqueezeBertModel

## SqueezeBertForMaskedLM

[[autodoc]] SqueezeBertForMaskedLM

## SqueezeBertForSequenceClassification

[[autodoc]] SqueezeBertForSequenceClassification

## SqueezeBertForMultipleChoice

[[autodoc]] SqueezeBertForMultipleChoice

## SqueezeBertForTokenClassification

[[autodoc]] SqueezeBertForTokenClassification

## SqueezeBertForQuestionAnswering

[[autodoc]] SqueezeBertForQuestionAnswering
342
docs/source/model_doc/t5.mdx
Normal file
@ -0,0 +1,342 @@
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# T5

## Overview

The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.

The abstract from the paper is the following:

*Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream
task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning
has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of
transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a
text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer
approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration
with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer learning for
NLP, we release our dataset, pre-trained models, and code.*

Tips:

- T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which
  each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a
  different prefix to the input corresponding to each task, e.g., for translation: *translate English to German: ...*,
  for summarization: *summarize: ...*.

- T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right.

- See the [training](#training), [inference](#inference) and [scripts](#scripts) sections below for all details regarding usage.

T5 comes in different sizes:

- [t5-small](https://huggingface.co/t5-small)

- [t5-base](https://huggingface.co/t5-base)

- [t5-large](https://huggingface.co/t5-large)

- [t5-3b](https://huggingface.co/t5-3b)

- [t5-11b](https://huggingface.co/t5-11b).

Based on the original T5 model, Google has released some follow-up works:

- **T5v1.1**: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without
  mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found [here](t5v1.1).

- **mT5**: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to
  the documentation of mT5 which can be found [here](mt5).

- **byT5**: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer
  to the documentation of byT5 which can be found [here](byt5).

All checkpoints can be found on the [hub](https://huggingface.co/models?search=t5).

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/text-to-text-transfer-transformer).
<a id='training'></a>

## Training

T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher
forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input
sequence is fed to the model using `input_ids`. The target sequence is shifted to the right, i.e., prepended by a
start-sequence token and fed to the decoder using the `decoder_input_ids`. In teacher-forcing style, the target
sequence is then appended by the EOS token and corresponds to the `labels`. The PAD token is hereby used as the
start-sequence token. T5 can be trained / fine-tuned both in a supervised and unsupervised fashion.

One can use [`T5ForConditionalGeneration`] (or the Tensorflow/Flax variant), which includes the
language modeling head on top of the decoder.

- Unsupervised denoising training

In this setup, spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens) and
the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens. Each
sentinel token represents a unique mask token for this sentence and should start with `<extra_id_0>`,
`<extra_id_1>`, ... up to `<extra_id_99>`. As a default, 100 sentinel tokens are available in
[`T5Tokenizer`].

For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be
processed as follows:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
```

If you're interested in pre-training T5 on a new corpus, check out the [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) script in the Examples
directory.

- Supervised training

In this setup, the input sequence and output sequence are a standard sequence-to-sequence input-output mapping.
Suppose that we want to fine-tune the model for translation for example, and we have a training example: the input
sequence "The house is wonderful." and output sequence "Das Haus ist wunderbar.", then they should be prepared for
the model as follows:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
```

As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the
`input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded
target sequence). The model will automatically create the `decoder_input_ids` based on the `labels`, by
shifting them one position to the right and prepending the `config.decoder_start_token_id`, which for T5 is
equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with 'translate
English to German: ' before encoding it. This will help in improving the performance, as this task prefix was used
during T5's pre-training.

However, the example above only shows a single training example. In practice, one trains deep learning models in
batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one
typically defines a `max_source_length` and `max_target_length`, which determine the maximum length of the
input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on
the task.

In addition, we must make sure that padding token ids of the `labels` are not taken into account by the loss
function. In PyTorch and Tensorflow, this can be done by replacing them with -100, which is the `ignore_index`
of the `CrossEntropyLoss`. In Flax, one can use the `decoder_attention_mask` to ignore padded tokens from
the loss (see the [Flax summarization script](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) for details). We also pass
`attention_mask` as additional input to the model, which makes sure that padding tokens of the inputs are
ignored. The code example below illustrates all of this.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# the following 2 hyperparameters are task-specific
max_source_length = 512
max_target_length = 128

# Suppose we have the following 2 training examples:
input_sequence_1 = "Welcome to NYC"
output_sequence_1 = "Bienvenue à NYC"

input_sequence_2 = "HuggingFace is a company"
output_sequence_2 = "HuggingFace est une entreprise"

# encode the inputs
task_prefix = "translate English to French: "
input_sequences = [input_sequence_1, input_sequence_2]
encoding = tokenizer([task_prefix + sequence for sequence in input_sequences],
                     padding='longest',
                     max_length=max_source_length,
                     truncation=True,
                     return_tensors="pt")
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask

# encode the targets
target_encoding = tokenizer([output_sequence_1, output_sequence_2],
                            padding='longest',
                            max_length=max_target_length,
                            truncation=True)
labels = target_encoding.input_ids

# replace padding token ids of the labels by -100
labels = torch.tensor(labels)
labels[labels == tokenizer.pad_token_id] = -100

# forward pass
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
```

Additional training tips:

- T5 models need a slightly higher learning rate than the default one set in the `Trainer` when using the AdamW
  optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question
  answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer (see the sketch after
  these tips).

- According to [this forum post](https://discuss.huggingface.co/t/t5-finetuning-tips/684), task prefixes matter when
  (1) doing multi-task training (2) your task is similar or related to one of the supervised tasks used in T5's
  pre-training mixture (see Appendix D of the [paper](https://arxiv.org/pdf/1910.10683.pdf) for the task prefixes
  used).

- If training on TPU, it is recommended to pad all examples of the dataset to the same length or make use of
  *pad_to_multiple_of* to have a small number of predefined bucket sizes to fit all examples in. Dynamically padding
  batches to the longest example is not recommended on TPU, as it triggers a recompilation for every batch shape that
  is encountered during training, thus significantly slowing down training (padding only up to the longest example in
  a batch leads to very slow training on TPU).
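
As an illustration of the first tip above, here is a minimal sketch (not part of the original documentation) of raising
the learning rate, either through the training arguments consumed by the `Trainer` or by constructing an optimizer by
hand. The concrete values and the output directory are assumptions, not recommendations from the paper:

```python
import torch
from transformers import Adafactor, T5ForConditionalGeneration, TrainingArguments

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Option 1: let the Trainer build the optimizer, but override its default learning rate
training_args = TrainingArguments(output_dir="t5-finetuned", learning_rate=3e-4)

# Option 2: plain AdamW with a manually chosen learning rate
adamw_optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Option 3: Adafactor with a fixed learning rate (its internal schedule disabled)
adafactor_optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```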

<a id='inference'></a>

## Inference

At inference time, it is recommended to use [`~generation_utils.GenerationMixin.generate`]. This
method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder
and auto-regressively generates the decoder output. Check out [this blog post](https://huggingface.co/blog/how-to-generate) to know all the details about generating text with Transformers.
There's also [this blog post](https://huggingface.co/blog/encoder-decoder#encoder-decoder) which explains how
generation works in general in encoder-decoder models.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Das Haus ist wunderbar.
```

Note that T5 uses the `pad_token_id` as the `decoder_start_token_id`, so when doing generation without using
[`~generation_utils.GenerationMixin.generate`], make sure you start it with the `pad_token_id`.
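
If you do drive the decoder yourself, a minimal sketch (not part of the original documentation) of a single greedy
decoding step, with the decoder seeded by the pad token, could look like this:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids

# for T5, config.decoder_start_token_id is the id of the pad token
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits

# greedily pick the first generated token; a full loop would keep appending tokens until EOS
next_token_id = logits[:, -1, :].argmax(-1)
print(tokenizer.decode(next_token_id))
```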

The examples above only show a single input sequence. You can also do batched inference, like so:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# when generating, we will use the logits of right-most token to predict the next token
# so the padding should be on the left
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token  # to avoid an error

task_prefix = 'translate English to German: '
sentences = ['The house is wonderful.', 'I like to work in NYC.']  # use different length sentences to test batching
inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True)

output_sequences = model.generate(
    input_ids=inputs['input_ids'],
    attention_mask=inputs['attention_mask'],
    do_sample=False,  # disable sampling to test if batching affects output
)

print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))

# ['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.']
```

<a id='scripts'></a>

## Example scripts

T5 is supported by several example scripts, both for pre-training and fine-tuning.

- pre-training: the [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py)
  script allows you to further pre-train T5 or pre-train T5 from scratch on your own data. The [t5_tokenizer_model.py](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/t5_tokenizer_model.py)
  script allows you to further train a T5 tokenizer or train a T5 tokenizer from scratch on your own data. Note that
  Flax (a neural network library on top of JAX) is particularly useful to train on TPU hardware.

- fine-tuning: T5 is supported by the official summarization scripts ([PyTorch](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization), [Tensorflow](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization), and [Flax](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)) and translation scripts
  ([PyTorch](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) and [Tensorflow](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/translation)). These scripts allow
  you to easily fine-tune T5 on custom data for summarization/translation.

## T5Config

[[autodoc]] T5Config

## T5Tokenizer

[[autodoc]] T5Tokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## T5TokenizerFast

[[autodoc]] T5TokenizerFast

## T5Model

[[autodoc]] T5Model
    - forward
    - parallelize
    - deparallelize

## T5ForConditionalGeneration

[[autodoc]] T5ForConditionalGeneration
    - forward
    - parallelize
    - deparallelize

## T5EncoderModel

[[autodoc]] T5EncoderModel
    - forward
    - parallelize
    - deparallelize

## TFT5Model

[[autodoc]] TFT5Model
    - call

## TFT5ForConditionalGeneration

[[autodoc]] TFT5ForConditionalGeneration
    - call

## TFT5EncoderModel

[[autodoc]] TFT5EncoderModel
    - call

## FlaxT5Model

[[autodoc]] FlaxT5Model
    - __call__
    - encode
    - decode

## FlaxT5ForConditionalGeneration

[[autodoc]] FlaxT5ForConditionalGeneration
    - __call__
    - encode
    - decode