
# ProphetNet

**DISCLAIMER:** If you see something strange, file a [GitHub issue](https://github.com/huggingface/transformers/issues) and assign @patrickvonplaten.

## Overview
The ProphetNet model was proposed in [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020.
ProphetNet is an encoder-decoder model that can predict n future tokens ("n-gram" language modeling) instead of just the next token.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
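To make the encoder-decoder interface concrete, here is a minimal sketch of a forward pass. The checkpoint name `microsoft/prophetnet-large-uncased` is assumed from the authors' release on the Hub, and the example sentences are purely illustrative:

```python
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

# Checkpoint name assumed from the authors' release on the Hub.
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

encoder_inputs = tokenizer("ProphetNet predicts future n-grams.", return_tensors="pt")
decoder_inputs = tokenizer("It is an encoder-decoder model.", return_tensors="pt")

outputs = model(
    input_ids=encoder_inputs.input_ids,
    attention_mask=encoder_inputs.attention_mask,
    decoder_input_ids=decoder_inputs.input_ids,
)

logits = outputs.logits  # standard next-token prediction logits
# The additional future n-gram predictions are returned separately
# (see ProphetNetSeq2SeqLMOutput below), e.g. outputs.logits_ngram.
```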
Tips:
- ProphetNet is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left (see the padding example after this list).
- The model architecture is based on the original Transformer, but replaces the "standard" self-attention mechanism in the decoder with a main self-attention mechanism and a self- and n-stream (predict) self-attention mechanism.
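For example, right padding a batch looks like this (a minimal sketch; the `microsoft/prophetnet-large-uncased` checkpoint name is again assumed from the authors' release):

```python
from transformers import ProphetNetTokenizer

# Checkpoint name assumed; any ProphetNet checkpoint works the same way.
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
tokenizer.padding_side = "right"  # pad on the right, as advised for absolute position embeddings

batch = tokenizer(
    ["a short input", "a somewhat longer input sequence"],
    padding=True,  # pads the shorter sequence up to the longest one in the batch
    return_tensors="pt",
)
print(batch.input_ids.shape)  # (2, longest_sequence_length_in_batch)
```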
The authors' code can be found [here](https://github.com/microsoft/ProphetNet).
## Documentation resources
## ProphetNetConfig

[[autodoc]] ProphetNetConfig

## ProphetNetTokenizer

[[autodoc]] ProphetNetTokenizer

## ProphetNet specific outputs

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput

[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput

## ProphetNetModel

[[autodoc]] ProphetNetModel
    - forward

## ProphetNetEncoder

[[autodoc]] ProphetNetEncoder
    - forward

## ProphetNetDecoder

[[autodoc]] ProphetNetDecoder
    - forward

## ProphetNetForConditionalGeneration

[[autodoc]] ProphetNetForConditionalGeneration
    - forward

## ProphetNetForCausalLM

[[autodoc]] ProphetNetForCausalLM
    - forward