Transformer XL
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Transformer-XL model was proposed in `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
<https://arxiv.org/abs/1901.02860>`__ by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan
Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoidal) embeddings which can
reuse previously computed hidden states to attend to longer context (memory). This model also uses adaptive softmax
inputs and outputs (tied).

The abstract from the paper is the following:

*Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and
a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves
the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and
450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up
to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results
of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on
Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.*

Tips:

- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right.
  The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to
  left (see the sketch after this list).
- Transformer-XL is one of the few models that has no sequence length limit.

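A minimal sketch of these defaults, assuming the pretrained ``transfo-xl-wt103`` checkpoint; the printed values
simply surface the defaults described in the tips above rather than anything specific to this snippet:

.. code-block:: python

    from transformers import TransfoXLTokenizer

    # Word-level tokenizer shipped with the pretrained Transformer-XL checkpoint.
    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")

    # Padding defaults to the left side, as noted in the tips above.
    print(tokenizer.padding_side)

    # There is no hard sequence length limit for this model, so the tokenizer does
    # not report a small fixed cap here.
    print(tokenizer.model_max_length)
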
The original code can be found `here <https://github.com/kimiyoung/transformer-xl>`__.

TransfoXLConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLConfig
    :members:

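As a short usage sketch (not part of the generated reference above), a configuration can be created with its default
values or with selected attributes overridden; ``mem_len`` and ``clamp_len`` below are standard ``TransfoXLConfig``
attributes:

.. code-block:: python

    from transformers import TransfoXLConfig, TransfoXLModel

    # Default values give a configuration similar to the transfo-xl-wt103 architecture.
    configuration = TransfoXLConfig()

    # Individual attributes can be overridden through keyword arguments of the same name.
    configuration = TransfoXLConfig(mem_len=800, clamp_len=400)

    # Instantiating a model from a configuration creates randomly initialized weights.
    model = TransfoXLModel(configuration)
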
TransfoXLTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLTokenizer
    :members: save_vocabulary

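A brief usage sketch, assuming the ``transfo-xl-wt103`` checkpoint; this tokenizer uses a word-level vocabulary, so
punctuation is kept separated by spaces in the example text:

.. code-block:: python

    import os

    from transformers import TransfoXLTokenizer

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")

    # Tokens are whole words; anything outside the vocabulary maps to the <unk> token.
    tokens = tokenizer.tokenize("The cat sat on the mat .")
    ids = tokenizer.convert_tokens_to_ids(tokens)
    print(tokenizer.decode(ids))

    # save_vocabulary (documented above) writes the vocabulary file into a directory.
    os.makedirs("./transfo-xl-vocab", exist_ok=True)
    tokenizer.save_vocabulary("./transfo-xl-vocab")
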
TransfoXL specific outputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.modeling_transfo_xl.TransfoXLModelOutput
    :members:

.. autoclass:: transformers.modeling_transfo_xl.TransfoXLLMHeadModelOutput
    :members:

.. autoclass:: transformers.modeling_tf_transfo_xl.TFTransfoXLModelOutput
    :members:

.. autoclass:: transformers.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput
    :members:

TransfoXLModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLModel
    :members: forward

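A minimal sketch of the memory mechanism, assuming the ``transfo-xl-wt103`` checkpoint: the ``mems`` returned by one
forward pass are fed into the next one so the second segment can attend to the hidden states of the first
(``return_dict=True`` is passed explicitly so the output fields documented above can be accessed by name):

.. code-block:: python

    from transformers import TransfoXLModel, TransfoXLTokenizer

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TransfoXLModel.from_pretrained("transfo-xl-wt103")

    # First segment: no memory is available yet.
    first = tokenizer("Transformer-XL reuses hidden states from previous segments .", return_tensors="pt")
    outputs = model(input_ids=first["input_ids"], return_dict=True)
    print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, d_model)

    # Second segment: pass the returned memory so the model can attend beyond the
    # current segment.
    second = tokenizer("It can therefore model very long contexts .", return_tensors="pt")
    outputs = model(input_ids=second["input_ids"], mems=outputs.mems, return_dict=True)
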
TransfoXLLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLLMHeadModel
    :members: forward

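A short sketch of next-token prediction with the language modeling head, again assuming the ``transfo-xl-wt103``
checkpoint; ``prediction_scores`` is the output field documented in the output classes above:

.. code-block:: python

    import torch

    from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

    inputs = tokenizer("The quick brown fox jumps over the lazy", return_tensors="pt")
    outputs = model(input_ids=inputs["input_ids"], return_dict=True)

    # Take the scores of the last position and pick the most likely next token.
    next_token_id = torch.argmax(outputs.prediction_scores[:, -1, :], dim=-1)
    print(tokenizer.decode(next_token_id.tolist()))
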
TFTransfoXLModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFTransfoXLModel
    :members: call

TFTransfoXLLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFTransfoXLLMHeadModel
    :members: call

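The TensorFlow classes mirror the PyTorch API; a brief sketch, under the same ``transfo-xl-wt103`` checkpoint
assumption:

.. code-block:: python

    from transformers import TFTransfoXLLMHeadModel, TransfoXLTokenizer

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TFTransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

    inputs = tokenizer("The quick brown fox jumps over the lazy", return_tensors="tf")
    outputs = model(inputs["input_ids"], return_dict=True)

    # As in the PyTorch version, ``mems`` can be fed back into the next call to
    # extend the attention context across segments.
    print(outputs.prediction_scores.shape)
    print(len(outputs.mems))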