T5
-----------------------------------------------------------------------------------------------------------------------

**DISCLAIMER:** This model is still a work in progress. If you see something strange, file a `GitHub Issue
<https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title>`__.

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The T5 model was presented in `Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
<https://arxiv.org/pdf/1910.10683.pdf>`_ by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.

The abstract from the paper is the following:

*Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream
task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning
has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of
transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a
text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer
approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration
with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer learning for
NLP, we release our dataset, pre-trained models, and code.*

Tips:

- T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which
  each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a
  different prefix to the input corresponding to each task, e.g., for translation: *translate English to German: ...*,
  for summarization: *summarize: ...*.

  For more information about which prefix to use, it is easiest to look into Appendix D of the `paper
  <https://arxiv.org/pdf/1910.10683.pdf>`__.
- For sequence-to-sequence generation, it is recommended to use :obj:`T5ForConditionalGeneration.generate()`; see the
  sketch after this list. This method takes care of feeding the encoded input via cross-attention layers to the
  decoder and auto-regressively generates the decoder output.
- T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right.

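As a quick illustration of the tips above, here is a minimal sketch that prepends a task prefix to the input and lets
:obj:`T5ForConditionalGeneration.generate()` produce the output. The ``t5-small`` checkpoint is used purely as an
example; any T5 checkpoint works the same way:

.. code-block::

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained('t5-small')  # example checkpoint
    model = T5ForConditionalGeneration.from_pretrained('t5-small')

    # the task prefix tells the model which text-to-text task to perform
    input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids

    # generate() feeds the encoded input to the decoder and decodes auto-regressively
    outputs = model.generate(input_ids)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
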
The original code can be found `here <https://github.com/google-research/text-to-text-transfer-transformer>`__.

Training
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

T5 is an encoder-decoder model that converts all NLP problems into a text-to-text format. It is trained using teacher
forcing, which means that for training we always need an input sequence and a target sequence. The input sequence is
fed to the model using :obj:`input_ids`. The target sequence is shifted to the right, i.e., prepended by a
start-sequence token, and fed to the decoder using :obj:`decoder_input_ids`. In teacher-forcing style, the target
sequence is then appended with the EOS token and corresponds to the :obj:`labels`. The PAD token is hereby used as the
start-sequence token. T5 can be trained / fine-tuned in both a supervised and an unsupervised fashion.

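To make the shift-right relationship concrete, here is a small sketch with hypothetical token ids (the ids are made up
for illustration; only the special ids are real: the pretrained T5 checkpoints use 0 as the PAD token id and 1 as the
EOS token id). The model builds :obj:`decoder_input_ids` like this internally whenever only :obj:`labels` is passed:

.. code-block::

    # hypothetical target sequence, already ending in the EOS token (id 1)
    labels = [8774, 296, 55, 1]

    # decoder inputs: the labels shifted one position to the right,
    # with the PAD token (id 0) prepended as the start-sequence token
    decoder_input_ids = [0, 8774, 296, 55]
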
- Unsupervised denoising training

  In this setup, spans of the input sequence are masked by so-called sentinel tokens (*a.k.a.* unique mask tokens) and
  the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens. Each
  sentinel token represents a unique mask token for this sentence and should start with :obj:`<extra_id_0>`,
  :obj:`<extra_id_1>`, ... up to :obj:`<extra_id_99>`. By default, 100 sentinel tokens are available in
  :class:`~transformers.T5Tokenizer`.

  For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be
  processed as follows (the ``t5-small`` checkpoint is only an example; any T5 checkpoint works):

  .. code-block::

      from transformers import T5Tokenizer, T5ForConditionalGeneration

      tokenizer = T5Tokenizer.from_pretrained('t5-small')  # example checkpoint
      model = T5ForConditionalGeneration.from_pretrained('t5-small')

      input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
      labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
      # the forward function automatically creates the correct decoder_input_ids
      loss = model(input_ids=input_ids, labels=labels, return_dict=True).loss

- Supervised training

  In this setup, the input sequence and output sequence form a standard sequence-to-sequence input-output mapping. For
  translation, for instance, with the input sequence "The house is wonderful." and the output sequence "Das Haus ist
  wunderbar.", the sentences should be processed as follows:

  .. code-block::

      # tokenizer and model as instantiated in the previous example
      input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
      labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids
      # the forward function automatically creates the correct decoder_input_ids
      loss = model(input_ids=input_ids, labels=labels, return_dict=True).loss

T5Config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5Config
    :members:


T5Tokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5Tokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, prepare_seq2seq_batch, save_vocabulary


T5Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5Model
    :members: forward


T5ForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5ForConditionalGeneration
    :members: forward


TFT5Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFT5Model
    :members: call


TFT5ForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFT5ForConditionalGeneration
    :members: call