
XLNet
Overview
The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model, pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order.
The abstract from the paper is the following:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.
This model was contributed by thomwolf. The original code can be found here.
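As a quick orientation before the tips below, here is a minimal sketch of a forward pass through the base model. The checkpoint name `xlnet/xlnet-base-cased` is one of the publicly released XLNet checkpoints on the Hub; the input sentence is only illustrative.

```python
import torch
from transformers import AutoTokenizer, XLNetModel

# "xlnet/xlnet-base-cased" is a released checkpoint; any XLNet checkpoint works here.
tokenizer = AutoTokenizer.from_pretrained("xlnet/xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet/xlnet-base-cased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Final-layer hidden states, shape (batch_size, sequence_length, hidden_size)
last_hidden_state = outputs.last_hidden_state
```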
Usage tips
- The specific attention pattern can be controlled at training and test time using the `perm_mask` input.
- Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained using only a sub-set of the output tokens as targets, which are selected with the `target_mapping` input.
- To use XLNet for sequential decoding (i.e. not in a fully bi-directional setting), use the `perm_mask` and `target_mapping` inputs to control the attention span and outputs (see examples in examples/pytorch/text-generation/run_generation.py, and the sketch after this list).
- XLNet is one of the few models that has no sequence length limit.
- XLNet is not a traditional autoregressive model but uses a training strategy that builds on that idea. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict token n+1. Since this is all done with a mask, the sentence is actually fed to the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1, …, sequence length.
- XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
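To make the `perm_mask` and `target_mapping` mechanics concrete, here is a sketch that predicts a single token with XLNetLMHeadModel, following the pattern used in the library's own docstrings. The checkpoint name and example sentence are assumptions for illustration only.

```python
import torch
from transformers import AutoTokenizer, XLNetLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("xlnet/xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet/xlnet-base-cased")

# Encode a sentence whose last token we want the model to predict.
input_ids = torch.tensor(
    tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)
).unsqueeze(0)  # shape (1, seq_len)
seq_len = input_ids.shape[1]

# perm_mask[b, i, j] = 1.0 means token i cannot attend to token j.
# Here no token may see the last token, so it must be predicted from context.
perm_mask = torch.zeros((1, seq_len, seq_len), dtype=torch.float)
perm_mask[:, :, -1] = 1.0

# target_mapping selects which positions are predicted: here only the last one.
target_mapping = torch.zeros((1, 1, seq_len), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

with torch.no_grad():
    outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)

# Logits for the single target position, shape (1, 1, vocab_size)
next_token_logits = outputs.logits
```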
Resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
- Causal language modeling task guide
- Multiple choice task guide
XLNetConfig
autodoc XLNetConfig
XLNetTokenizer
autodoc XLNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
XLNetTokenizerFast
autodoc XLNetTokenizerFast
XLNet specific outputs
autodoc models.xlnet.modeling_xlnet.XLNetModelOutput
autodoc models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput
autodoc models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput
autodoc models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput
autodoc models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput
autodoc models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput
autodoc models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput
autodoc models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput
autodoc models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput
autodoc models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput
autodoc models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput
autodoc models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput
autodoc models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput
XLNetModel
autodoc XLNetModel - forward
XLNetLMHeadModel
autodoc XLNetLMHeadModel - forward
XLNetForSequenceClassification
autodoc XLNetForSequenceClassification - forward
XLNetForMultipleChoice
autodoc XLNetForMultipleChoice - forward
XLNetForTokenClassification
autodoc XLNetForTokenClassification - forward
XLNetForQuestionAnsweringSimple
autodoc XLNetForQuestionAnsweringSimple - forward
XLNetForQuestionAnswering
autodoc XLNetForQuestionAnswering - forward
TFXLNetModel
autodoc TFXLNetModel - call
TFXLNetLMHeadModel
autodoc TFXLNetLMHeadModel - call
TFXLNetForSequenceClassification
autodoc TFXLNetForSequenceClassification - call
TFXLNetForMultipleChoice
autodoc TFXLNetForMultipleChoice - call
TFXLNetForTokenClassification
autodoc TFXLNetForTokenClassification - call
TFXLNetForQuestionAnsweringSimple
autodoc TFXLNetForQuestionAnsweringSimple - call