mirror of
https://github.com/huggingface/transformers.git
fix link to paper (#7116)
parent bb3106f741
commit 15d18e0307
@@ -7,7 +7,7 @@ The effectiveness of initializing sequence-to-sequence models with pre-trained c
 
 After such an :class:`~transformers.EncoderDecoderModel` has been trained / fine-tuned, it can be saved / loaded just like any other models (see Examples for more information).
 
-An application of this architecture could be to leverage two pre-trained :obj:`transformers.BertModel` models as the encoder and decoder for a summarization model as was shown in: `Text Summarization with Pretrained Encoders <https://arxiv.org/abs/1910.13461>`_ by Yang Liu and Mirella Lapata.
+An application of this architecture could be to leverage two pre-trained :obj:`transformers.BertModel` models as the encoder and decoder for a summarization model as was shown in: `Text Summarization with Pretrained Encoders <https://arxiv.org/abs/1908.08345>`_ by Yang Liu and Mirella Lapata.
 
 ``EncoderDecoderConfig``
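For context, a minimal sketch of the workflow the patched paragraph describes: warm-starting an EncoderDecoderModel from two pre-trained BERT checkpoints (the BERT2BERT setup used in Liu & Lapata's summarization work), then saving / loading it like any other model. The checkpoint name `bert-base-uncased` and the output directory are illustrative choices, not taken from the commit.

```python
from transformers import EncoderDecoderModel

# Warm-start encoder and decoder from the same pre-trained BERT checkpoint
# (a BERT2BERT configuration, as in the summarization application the docs mention).
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# After training / fine-tuning, the model saves / loads like any other model.
model.save_pretrained("./bert2bert")
reloaded = EncoderDecoderModel.from_pretrained("./bert2bert")
```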