Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-06 22:30:09 +06:00)
Add BigBirdPegasus (#10991)
* init bigbird pegasus
* add debugging nb ; update config
* init conversion
* update conversion script
* complete conversion script
* init forward()
* complete forward()
* add tokenizer
* add some slow tests
* commit current
* fix copies
* add docs
* add conversion script for bigbird-roberta-summarization
* remove TODO
* small fixups
* correct tokenizer
* add bigbird core for now
* fix config
* fix more
* revert pegasus-tokenizer back
* make style
* everything working for pubmed; yayygit status
* complete tests finally
* remove bigbird pegasus tok
* correct tokenizer
* correct tests
* add tokenizer files
* finish make style
* fix test
* update
* make style
* fix tok utils base file
* make fix-copies
* clean a bit
* small update
* fix some suggestions
* add to readme
* fix a bit, clean tests
* fix more tests
* Update src/transformers/__init__.py
* Update src/transformers/__init__.py
* make fix-copies
* complete attn switching, auto-padding left
* make style
* fix auto-padding test
* make style
* fix batched attention tests
* put tolerance at 1e-1 for stand-alone decoder test
* fix docs
* fix tests
* correct slow tokenizer conversion
* Apply suggestions from code review
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* complete remaining suggestions
* fix test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
This commit is contained in:
parent 6f40e31766
commit dc3f6758cf
README.md
@@ -195,6 +195,7 @@ Current number of checkpoints:
1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](https://huggingface.co/transformers/model_doc/bertgeneration.html)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BigBird-RoBERTa](https://huggingface.co/transformers/model_doc/bigbird.html)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
+1. **[BigBird-Pegasus](https://huggingface.co/transformers/model_doc/bigbird_pegasus.html)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[Blenderbot](https://huggingface.co/transformers/model_doc/blenderbot.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](https://huggingface.co/transformers/model_doc/blenderbot_small.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BORT](https://huggingface.co/transformers/model_doc/bort.html)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
docs/source/index.rst
@@ -100,153 +100,156 @@ conversion utilities for the following models:
6. :doc:`BigBird-RoBERTa <model_doc/bigbird>` (from Google Research) released with the paper `Big Bird: Transformers
   for Longer Sequences <https://arxiv.org/abs/2007.14062>`__ by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua
   Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
+7. :doc:`BigBird-Pegasus <model_doc/bigbird_pegasus>` (from Google Research) released with the paper `Big Bird:
+   Transformers for Longer Sequences <https://arxiv.org/abs/2007.14062>`__ by Manzil Zaheer, Guru Guruganesh, Avinava
+   Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
8. :doc:`Blenderbot <model_doc/blenderbot>` (from Facebook) released with the paper `Recipes for building an
   open-domain chatbot <https://arxiv.org/abs/2004.13637>`__ by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary
   Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
9. :doc:`BlenderbotSmall <model_doc/blenderbot_small>` (from Facebook) released with the paper `Recipes for building an
   open-domain chatbot <https://arxiv.org/abs/2004.13637>`__ by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary
   Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
10. :doc:`BORT <model_doc/bort>` (from Alexa) released with the paper `Optimal Subarchitecture Extraction For BERT
    <https://arxiv.org/abs/2010.10499>`__ by Adrian de Wynter and Daniel J. Perry.
11. :doc:`CamemBERT <model_doc/camembert>` (from Inria/Facebook/Sorbonne) released with the paper `CamemBERT: a Tasty
    French Language Model <https://arxiv.org/abs/1911.03894>`__ by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz
    Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
12. :doc:`ConvBERT <model_doc/convbert>` (from YituTech) released with the paper `ConvBERT: Improving BERT with
    Span-based Dynamic Convolution <https://arxiv.org/abs/2008.02496>`__ by Zihang Jiang, Weihao Yu, Daquan Zhou,
    Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
13. :doc:`CPM <model_doc/cpm>` (from Tsinghua University) released with the paper `CPM: A Large-scale Generative
    Chinese Pre-trained Language Model <https://arxiv.org/abs/2012.00413>`__ by Zhengyan Zhang, Xu Han, Hao Zhou, Pei
    Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng,
    Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang,
    Juanzi Li, Xiaoyan Zhu, Maosong Sun.
14. :doc:`CTRL <model_doc/ctrl>` (from Salesforce) released with the paper `CTRL: A Conditional Transformer Language
    Model for Controllable Generation <https://arxiv.org/abs/1909.05858>`__ by Nitish Shirish Keskar*, Bryan McCann*,
    Lav R. Varshney, Caiming Xiong and Richard Socher.
15. :doc:`DeBERTa <model_doc/deberta>` (from Microsoft) released with the paper `DeBERTa: Decoding-enhanced BERT with
    Disentangled Attention <https://arxiv.org/abs/2006.03654>`__ by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu
    Chen.
16. :doc:`DeBERTa-v2 <model_doc/deberta_v2>` (from Microsoft) released with the paper `DeBERTa: Decoding-enhanced BERT
    with Disentangled Attention <https://arxiv.org/abs/2006.03654>`__ by Pengcheng He, Xiaodong Liu, Jianfeng Gao,
    Weizhu Chen.
17. :doc:`DeiT <model_doc/deit>` (from Facebook) released with the paper `Training data-efficient image transformers &
    distillation through attention <https://arxiv.org/abs/2012.12877>`__ by Hugo Touvron, Matthieu Cord, Matthijs
    Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
18. :doc:`DialoGPT <model_doc/dialogpt>` (from Microsoft Research) released with the paper `DialoGPT: Large-Scale
    Generative Pre-training for Conversational Response Generation <https://arxiv.org/abs/1911.00536>`__ by Yizhe
    Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
19. :doc:`DistilBERT <model_doc/distilbert>` (from HuggingFace), released together with the paper `DistilBERT, a
    distilled version of BERT: smaller, faster, cheaper and lighter <https://arxiv.org/abs/1910.01108>`__ by Victor
    Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into `DistilGPT2
    <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__, RoBERTa into `DistilRoBERTa
    <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__, Multilingual BERT into
    `DistilmBERT <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__ and a German
    version of DistilBERT.
20. :doc:`DPR <model_doc/dpr>` (from Facebook) released with the paper `Dense Passage Retrieval for Open-Domain
    Question Answering <https://arxiv.org/abs/2004.04906>`__ by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick
    Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
21. :doc:`ELECTRA <model_doc/electra>` (from Google Research/Stanford University) released with the paper `ELECTRA:
    Pre-training text encoders as discriminators rather than generators <https://arxiv.org/abs/2003.10555>`__ by Kevin
    Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
22. :doc:`FlauBERT <model_doc/flaubert>` (from CNRS) released with the paper `FlauBERT: Unsupervised Language Model
    Pre-training for French <https://arxiv.org/abs/1912.05372>`__ by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne,
    Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
23. :doc:`Funnel Transformer <model_doc/funnel>` (from CMU/Google Brain) released with the paper `Funnel-Transformer:
    Filtering out Sequential Redundancy for Efficient Language Processing <https://arxiv.org/abs/2006.03236>`__ by
    Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
24. :doc:`GPT <model_doc/gpt>` (from OpenAI) released with the paper `Improving Language Understanding by Generative
    Pre-Training <https://blog.openai.com/language-unsupervised/>`__ by Alec Radford, Karthik Narasimhan, Tim Salimans
    and Ilya Sutskever.
25. :doc:`GPT-2 <model_doc/gpt2>` (from OpenAI) released with the paper `Language Models are Unsupervised Multitask
    Learners <https://blog.openai.com/better-language-models/>`__ by Alec Radford*, Jeffrey Wu*, Rewon Child, David
    Luan, Dario Amodei** and Ilya Sutskever**.
26. :doc:`GPT Neo <model_doc/gpt_neo>` (from EleutherAI) released in the repository `EleutherAI/gpt-neo
    <https://github.com/EleutherAI/gpt-neo>`__ by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
27. :doc:`I-BERT <model_doc/ibert>` (from Berkeley) released with the paper `I-BERT: Integer-only BERT Quantization
    <https://arxiv.org/abs/2101.01321>`__ by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
28. :doc:`LayoutLM <model_doc/layoutlm>` (from Microsoft Research Asia) released with the paper `LayoutLM: Pre-training
    of Text and Layout for Document Image Understanding <https://arxiv.org/abs/1912.13318>`__ by Yiheng Xu, Minghao Li,
    Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
29. :doc:`LED <model_doc/led>` (from AllenAI) released with the paper `Longformer: The Long-Document Transformer
    <https://arxiv.org/abs/2004.05150>`__ by Iz Beltagy, Matthew E. Peters, Arman Cohan.
30. :doc:`Longformer <model_doc/longformer>` (from AllenAI) released with the paper `Longformer: The Long-Document
    Transformer <https://arxiv.org/abs/2004.05150>`__ by Iz Beltagy, Matthew E. Peters, Arman Cohan.
31. :doc:`LUKE <model_doc/luke>` (from Studio Ousia) released with the paper `LUKE: Deep Contextualized Entity
    Representations with Entity-aware Self-attention <https://arxiv.org/abs/2010.01057>`__ by Ikuya Yamada, Akari Asai,
    Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
32. :doc:`LXMERT <model_doc/lxmert>` (from UNC Chapel Hill) released with the paper `LXMERT: Learning Cross-Modality
    Encoder Representations from Transformers for Open-Domain Question Answering <https://arxiv.org/abs/1908.07490>`__
    by Hao Tan and Mohit Bansal.
33. :doc:`M2M100 <model_doc/m2m_100>` (from Facebook) released with the paper `Beyond English-Centric Multilingual
    Machine Translation <https://arxiv.org/abs/2010.11125>`__ by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi
    Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman
    Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
34. :doc:`MarianMT <model_doc/marian>` Machine translation models trained using `OPUS <http://opus.nlpl.eu/>`__ data by
    Jörg Tiedemann. The `Marian Framework <https://marian-nmt.github.io/>`__ is being developed by the Microsoft
    Translator Team.
35. :doc:`MBart <model_doc/mbart>` (from Facebook) released with the paper `Multilingual Denoising Pre-training for
    Neural Machine Translation <https://arxiv.org/abs/2001.08210>`__ by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li,
    Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
36. :doc:`MBart-50 <model_doc/mbart>` (from Facebook) released with the paper `Multilingual Translation with Extensible
    Multilingual Pretraining and Finetuning <https://arxiv.org/abs/2008.00401>`__ by Yuqing Tang, Chau Tran, Xian Li,
    Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
37. :doc:`Megatron-BERT <model_doc/megatron_bert>` (from NVIDIA) released with the paper `Megatron-LM: Training
    Multi-Billion Parameter Language Models Using Model Parallelism <https://arxiv.org/abs/1909.08053>`__ by Mohammad
    Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
38. :doc:`Megatron-GPT2 <model_doc/megatron_gpt2>` (from NVIDIA) released with the paper `Megatron-LM: Training
    Multi-Billion Parameter Language Models Using Model Parallelism <https://arxiv.org/abs/1909.08053>`__ by Mohammad
    Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
39. :doc:`MPNet <model_doc/mpnet>` (from Microsoft Research) released with the paper `MPNet: Masked and Permuted
    Pre-training for Language Understanding <https://arxiv.org/abs/2004.09297>`__ by Kaitao Song, Xu Tan, Tao Qin,
    Jianfeng Lu, Tie-Yan Liu.
40. :doc:`MT5 <model_doc/mt5>` (from Google AI) released with the paper `mT5: A massively multilingual pre-trained
    text-to-text transformer <https://arxiv.org/abs/2010.11934>`__ by Linting Xue, Noah Constant, Adam Roberts, Mihir
    Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
41. :doc:`Pegasus <model_doc/pegasus>` (from Google) released with the paper `PEGASUS: Pre-training with Extracted
    Gap-sentences for Abstractive Summarization <https://arxiv.org/abs/1912.08777>`__ by Jingqing Zhang, Yao Zhao,
    Mohammad Saleh and Peter J. Liu.
42. :doc:`ProphetNet <model_doc/prophetnet>` (from Microsoft Research) released with the paper `ProphetNet: Predicting
    Future N-gram for Sequence-to-Sequence Pre-training <https://arxiv.org/abs/2001.04063>`__ by Yu Yan, Weizhen Qi,
    Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
43. :doc:`Reformer <model_doc/reformer>` (from Google Research) released with the paper `Reformer: The Efficient
    Transformer <https://arxiv.org/abs/2001.04451>`__ by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
44. :doc:`RoBERTa <model_doc/roberta>` (from Facebook), released together with the paper a `Robustly Optimized BERT
    Pretraining Approach <https://arxiv.org/abs/1907.11692>`__ by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar
    Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
45. :doc:`SpeechToTextTransformer <model_doc/speech_to_text>` (from Facebook), released together with the paper
    `fairseq S2T: Fast Speech-to-Text Modeling with fairseq <https://arxiv.org/abs/2010.05171>`__ by Changhan Wang, Yun
    Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
46. :doc:`SqueezeBert <model_doc/squeezebert>` released with the paper `SqueezeBERT: What can computer vision teach NLP
    about efficient neural networks? <https://arxiv.org/abs/2006.11316>`__ by Forrest N. Iandola, Albert E. Shaw, Ravi
    Krishna, and Kurt W. Keutzer.
47. :doc:`T5 <model_doc/t5>` (from Google AI) released with the paper `Exploring the Limits of Transfer Learning with a
    Unified Text-to-Text Transformer <https://arxiv.org/abs/1910.10683>`__ by Colin Raffel and Noam Shazeer and Adam
    Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
48. :doc:`TAPAS <model_doc/tapas>` (from Google AI) released with the paper `TAPAS: Weakly Supervised Table Parsing via
    Pre-training <https://arxiv.org/abs/2004.02349>`__ by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller,
    Francesco Piccinno and Julian Martin Eisenschlos.
49. :doc:`Transformer-XL <model_doc/transformerxl>` (from Google/CMU) released with the paper `Transformer-XL:
    Attentive Language Models Beyond a Fixed-Length Context <https://arxiv.org/abs/1901.02860>`__ by Zihang Dai*,
    Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
50. :doc:`Vision Transformer (ViT) <model_doc/vit>` (from Google AI) released with the paper `An Image is Worth 16x16
    Words: Transformers for Image Recognition at Scale <https://arxiv.org/abs/2010.11929>`__ by Alexey Dosovitskiy,
    Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias
    Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
51. :doc:`Wav2Vec2 <model_doc/wav2vec2>` (from Facebook AI) released with the paper `wav2vec 2.0: A Framework for
    Self-Supervised Learning of Speech Representations <https://arxiv.org/abs/2006.11477>`__ by Alexei Baevski, Henry
    Zhou, Abdelrahman Mohamed, Michael Auli.
52. :doc:`XLM <model_doc/xlm>` (from Facebook) released together with the paper `Cross-lingual Language Model
    Pretraining <https://arxiv.org/abs/1901.07291>`__ by Guillaume Lample and Alexis Conneau.
53. :doc:`XLM-ProphetNet <model_doc/xlmprophetnet>` (from Microsoft Research) released with the paper `ProphetNet:
    Predicting Future N-gram for Sequence-to-Sequence Pre-training <https://arxiv.org/abs/2001.04063>`__ by Yu Yan,
    Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
54. :doc:`XLM-RoBERTa <model_doc/xlmroberta>` (from Facebook AI), released together with the paper `Unsupervised
    Cross-lingual Representation Learning at Scale <https://arxiv.org/abs/1911.02116>`__ by Alexis Conneau*, Kartikay
    Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke
    Zettlemoyer and Veselin Stoyanov.
55. :doc:`XLNet <model_doc/xlnet>` (from Google/CMU) released with the paper `XLNet: Generalized Autoregressive
    Pretraining for Language Understanding <https://arxiv.org/abs/1906.08237>`__ by Zhilin Yang*, Zihang Dai*, Yiming
    Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
56. :doc:`XLSR-Wav2Vec2 <model_doc/xlsr_wav2vec2>` (from Facebook AI) released with the paper `Unsupervised
    Cross-Lingual Representation Learning For Speech Recognition <https://arxiv.org/abs/2006.13979>`__ by Alexis
    Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
@@ -275,6 +278,8 @@ Flax), PyTorch, and/or TensorFlow.
+-----------------------------+----------------+----------------+-----------------+--------------------+--------------+
| BigBird | ✅ | ❌ | ✅ | ❌ | ❌ |
+-----------------------------+----------------+----------------+-----------------+--------------------+--------------+
+| BigBirdPegasus | ❌ | ❌ | ✅ | ❌ | ❌ |
++-----------------------------+----------------+----------------+-----------------+--------------------+--------------+
| Blenderbot | ✅ | ❌ | ✅ | ✅ | ❌ |
+-----------------------------+----------------+----------------+-----------------+--------------------+--------------+
| BlenderbotSmall | ✅ | ❌ | ✅ | ✅ | ❌ |
@@ -451,6 +456,7 @@ Flax), PyTorch, and/or TensorFlow.
   model_doc/bertgeneration
   model_doc/bert_japanese
   model_doc/bigbird
+   model_doc/bigbird_pegasus
   model_doc/blenderbot
   model_doc/blenderbot_small
   model_doc/bort
docs/source/model_doc/bigbird_pegasus.rst (new file, 98 lines)
@@ -0,0 +1,98 @@
..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

BigBirdPegasus
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The BigBird model was proposed in `Big Bird: Transformers for Longer Sequences <https://arxiv.org/abs/2007.14062>`__ by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.

The abstract from the paper is the following:

*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.*

Tips:

- For an in-detail explanation of how BigBird's attention works, see `this blog post
  <https://huggingface.co/blog/big-bird>`__.
- BigBird comes with 2 implementations: **original_full** & **block_sparse**. For sequence lengths < 1024, using
  **original_full** is advised as there is no benefit in using **block_sparse** attention.
- The code currently uses a window size of 3 blocks and 2 global blocks.
- Sequence length must be divisible by the block size.
- The current implementation supports only **ITC**.
- The current implementation doesn't support **num_random_blocks = 0**.
- BigBirdPegasus uses the `PegasusTokenizer
  <https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/tokenization_pegasus.py>`__.

The original code can be found `here <https://github.com/google-research/bigbird>`__.
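
A short usage sketch makes the new docs page concrete (this snippet is not part of the committed file; the checkpoint
id, input length and generation settings are assumptions — the PR itself only mentions a PubMed summarization
conversion):

    # Sketch: long-document summarization with BigBirdPegasus.
    # "google/bigbird-pegasus-large-pubmed" is an assumed checkpoint id; substitute
    # whichever converted BigBirdPegasus checkpoint you actually have.
    from transformers import BigBirdPegasusForConditionalGeneration, PegasusTokenizer

    ckpt = "google/bigbird-pegasus-large-pubmed"  # assumption, not named in this PR
    tokenizer = PegasusTokenizer.from_pretrained(ckpt)
    model = BigBirdPegasusForConditionalGeneration.from_pretrained(
        ckpt,
        attention_type="block_sparse",  # use "original_full" for inputs shorter than ~1024 tokens
    )

    article = "Replace this with a long scientific article ..."
    inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=4096)
    summary_ids = model.generate(**inputs, num_beams=5, max_length=256)
    print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
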
BigBirdPegasusConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusConfig
    :members:


BigBirdPegasusModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusModel
    :members: forward


BigBirdPegasusForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForConditionalGeneration
    :members: forward


BigBirdPegasusForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForSequenceClassification
    :members: forward


BigBirdPegasusForQuestionAnswering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForQuestionAnswering
    :members: forward


BigBirdPegasusForCausalLM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForCausalLM
    :members: forward
src/transformers/__init__.py
@@ -155,6 +155,10 @@ _import_structure = {
    "models.bert_japanese": ["BertJapaneseTokenizer", "CharacterTokenizer", "MecabTokenizer"],
    "models.bertweet": ["BertweetTokenizer"],
    "models.big_bird": ["BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP", "BigBirdConfig", "BigBirdTokenizer"],
+    "models.bigbird_pegasus": [
+        "BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP",
+        "BigBirdPegasusConfig",
+    ],
    "models.blenderbot": ["BLENDERBOT_PRETRAINED_CONFIG_ARCHIVE_MAP", "BlenderbotConfig", "BlenderbotTokenizer"],
    "models.blenderbot_small": [
        "BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP",
@@ -543,6 +547,16 @@ if is_torch_available():
            "load_tf_weights_in_big_bird",
        ]
    )
+    _import_structure["models.bigbird_pegasus"].extend(
+        [
+            "BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST",
+            "BigBirdPegasusForCausalLM",
+            "BigBirdPegasusForConditionalGeneration",
+            "BigBirdPegasusForQuestionAnswering",
+            "BigBirdPegasusForSequenceClassification",
+            "BigBirdPegasusModel",
+        ]
+    )
    _import_structure["models.blenderbot"].extend(
        [
            "BLENDERBOT_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -1541,6 +1555,7 @@ if TYPE_CHECKING:
    from .models.bert_japanese import BertJapaneseTokenizer, CharacterTokenizer, MecabTokenizer
    from .models.bertweet import BertweetTokenizer
    from .models.big_bird import BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdConfig, BigBirdTokenizer
+    from .models.bigbird_pegasus import BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdPegasusConfig
    from .models.blenderbot import BLENDERBOT_PRETRAINED_CONFIG_ARCHIVE_MAP, BlenderbotConfig, BlenderbotTokenizer
    from .models.blenderbot_small import (
        BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -1885,6 +1900,14 @@ if TYPE_CHECKING:
            BigBirdPreTrainedModel,
            load_tf_weights_in_big_bird,
        )
+        from .models.bigbird_pegasus import (
+            BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST,
+            BigBirdPegasusForCausalLM,
+            BigBirdPegasusForConditionalGeneration,
+            BigBirdPegasusForQuestionAnswering,
+            BigBirdPegasusForSequenceClassification,
+            BigBirdPegasusModel,
+        )
        from .models.blenderbot import (
            BLENDERBOT_PRETRAINED_MODEL_ARCHIVE_LIST,
            BlenderbotForCausalLM,
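
Taken together, the `_import_structure` and `TYPE_CHECKING` hunks above make the new names importable from the
top-level package. A small illustration (not from the diff; the config values are toy assumptions chosen only to keep
the model tiny):

    # Illustration: the new classes are now top-level exports of `transformers`.
    from transformers import BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration

    # A deliberately tiny, randomly initialized model; these hyperparameters are made up.
    config = BigBirdPegasusConfig(
        d_model=256,
        encoder_layers=2,
        decoder_layers=2,
        encoder_attention_heads=4,
        decoder_attention_heads=4,
        encoder_ffn_dim=512,
        decoder_ffn_dim=512,
        attention_type="original_full",  # sidestep block-sparse length constraints for a toy model
    )
    model = BigBirdPegasusForConditionalGeneration(config)
    print(model.config.model_type)  # bigbird_pegasus
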
src/transformers/convert_slow_tokenizer.py
@@ -635,9 +635,17 @@ class PegasusConverter(SpmConverter):
        vocab = [
            (self.original_tokenizer.pad_token, 0.0),
            (self.original_tokenizer.eos_token, 0.0),
-            (self.original_tokenizer.mask_token_sent, 0.0),
-            (self.original_tokenizer.mask_token, 0.0),
        ]
+
+        if self.original_tokenizer.mask_token_sent is not None:
+            vocab += [(self.original_tokenizer.mask_token_sent, 0.0)]
+
+        if (
+            self.original_tokenizer.mask_token is not None
+            and self.original_tokenizer.mask_token_id < self.original_tokenizer.offset
+        ):
+            vocab += [(self.original_tokenizer.mask_token, 0.0)]
+
        vocab += [(f"<unk_{i}>", -100.0) for i in range(2, self.original_tokenizer.offset)]
        vocab += [(piece.piece, piece.score) for piece in proto.pieces[2:]]
        return vocab
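
The hunk above stops unconditionally reserving vocabulary slots for `mask_token_sent` and `mask_token` and instead adds
them only when they exist (and, for `mask_token`, only when its id falls below the tokenizer's `offset`), which is what
lets the Pegasus converter also serve BigBirdPegasus checkpoints whose tokenizers lack a sentence-mask token. A
standalone paraphrase of the resulting logic (the helper name is made up; `tok` stands in for
`self.original_tokenizer` and `pieces` for `proto.pieces`):

    # Hypothetical helper paraphrasing PegasusConverter.vocab() after this change;
    # not the actual library code.
    def build_pegasus_fast_vocab(tok, pieces):
        vocab = [
            (tok.pad_token, 0.0),
            (tok.eos_token, 0.0),
        ]

        # Only Pegasus-style tokenizers define a sentence-mask token.
        if tok.mask_token_sent is not None:
            vocab += [(tok.mask_token_sent, 0.0)]

        # Keep the regular mask token only if it lives inside the reserved offset range.
        if tok.mask_token is not None and tok.mask_token_id < tok.offset:
            vocab += [(tok.mask_token, 0.0)]

        # Remaining reserved ids become <unk_i> placeholders, then the real pieces follow.
        vocab += [(f"<unk_{i}>", -100.0) for i in range(2, tok.offset)]
        vocab += [(piece.piece, piece.score) for piece in pieces[2:]]
        return vocab
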
src/transformers/models/__init__.py
@@ -26,6 +26,7 @@ from . import (
    bert_japanese,
    bertweet,
    big_bird,
+    bigbird_pegasus,
    blenderbot,
    blenderbot_small,
    camembert,
src/transformers/models/auto/configuration_auto.py
@@ -23,6 +23,10 @@ from ..bart.configuration_bart import BART_PRETRAINED_CONFIG_ARCHIVE_MAP, BartConfig
from ..bert.configuration_bert import BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, BertConfig
from ..bert_generation.configuration_bert_generation import BertGenerationConfig
from ..big_bird.configuration_big_bird import BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdConfig
+from ..bigbird_pegasus.configuration_bigbird_pegasus import (
+    BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP,
+    BigBirdPegasusConfig,
+)
from ..blenderbot.configuration_blenderbot import BLENDERBOT_PRETRAINED_CONFIG_ARCHIVE_MAP, BlenderbotConfig
from ..blenderbot_small.configuration_blenderbot_small import (
    BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP,

@@ -86,6 +90,7 @@ ALL_PRETRAINED_CONFIG_ARCHIVE_MAP = dict(
    (key, value)
    for pretrained_map in [
        # Add archive maps here
+        BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP,
        DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP,
        LUKE_PRETRAINED_CONFIG_ARCHIVE_MAP,
        GPT_NEO_PRETRAINED_CONFIG_ARCHIVE_MAP,

@@ -139,6 +144,7 @@ ALL_PRETRAINED_CONFIG_ARCHIVE_MAP = dict(
CONFIG_MAPPING = OrderedDict(
    [
        # Add configs here
+        ("bigbird_pegasus", BigBirdPegasusConfig),
        ("deit", DeiTConfig),
        ("luke", LukeConfig),
        ("gpt_neo", GPTNeoConfig),

@@ -198,6 +204,7 @@ CONFIG_MAPPING = OrderedDict(
MODEL_NAMES_MAPPING = OrderedDict(
    [
        # Add full (and cased) model names here
+        ("bigbird_pegasus", "BigBirdPegasus"),
        ("deit", "DeiT"),
        ("luke", "LUKE"),
        ("gpt_neo", "GPT Neo"),
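
These `configuration_auto` hunks register the `bigbird_pegasus` model type with the auto-config machinery; roughly, the
effect is the following (illustrative sketch, not part of the diff):

    # Sketch: "bigbird_pegasus" now resolves through AutoConfig.
    from transformers import AutoConfig, BigBirdPegasusConfig

    config = AutoConfig.for_model("bigbird_pegasus")  # builds a default BigBirdPegasusConfig
    assert isinstance(config, BigBirdPegasusConfig)
    print(config.model_type)  # bigbird_pegasus
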
@@ -59,6 +59,13 @@ from ..big_bird.modeling_big_bird import (
     BigBirdForTokenClassification,
     BigBirdModel,
 )
+from ..bigbird_pegasus.modeling_bigbird_pegasus import (
+    BigBirdPegasusForCausalLM,
+    BigBirdPegasusForConditionalGeneration,
+    BigBirdPegasusForQuestionAnswering,
+    BigBirdPegasusForSequenceClassification,
+    BigBirdPegasusModel,
+)
 from ..blenderbot.modeling_blenderbot import BlenderbotForCausalLM, BlenderbotForConditionalGeneration, BlenderbotModel
 from ..blenderbot_small.modeling_blenderbot_small import (
     BlenderbotSmallForCausalLM,
@@ -288,6 +295,7 @@ from .configuration_auto import (
     BertConfig,
     BertGenerationConfig,
     BigBirdConfig,
+    BigBirdPegasusConfig,
     BlenderbotConfig,
     BlenderbotSmallConfig,
     CamembertConfig,
@@ -344,6 +352,7 @@ logger = logging.get_logger(__name__)
 MODEL_MAPPING = OrderedDict(
     [
         # Base model mapping
+        (BigBirdPegasusConfig, BigBirdPegasusModel),
         (DeiTConfig, DeiTModel),
         (LukeConfig, LukeModel),
         (GPTNeoConfig, GPTNeoModel),
@@ -439,6 +448,7 @@ MODEL_FOR_PRETRAINING_MAPPING = OrderedDict(
 MODEL_WITH_LM_HEAD_MAPPING = OrderedDict(
     [
         # Model with LM heads mapping
+        (BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration),
         (GPTNeoConfig, GPTNeoForCausalLM),
         (BigBirdConfig, BigBirdForMaskedLM),
         (Speech2TextConfig, Speech2TextForConditionalGeneration),
@@ -485,6 +495,7 @@ MODEL_WITH_LM_HEAD_MAPPING = OrderedDict(
 MODEL_FOR_CAUSAL_LM_MAPPING = OrderedDict(
     [
         # Model for Causal LM mapping
+        (BigBirdPegasusConfig, BigBirdPegasusForCausalLM),
         (GPTNeoConfig, GPTNeoForCausalLM),
         (BigBirdConfig, BigBirdForCausalLM),
         (CamembertConfig, CamembertForCausalLM),
@@ -557,6 +568,7 @@ MODEL_FOR_MASKED_LM_MAPPING = OrderedDict(
 MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING = OrderedDict(
     [
         # Model for Seq2Seq Causal LM mapping
+        (BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration),
         (M2M100Config, M2M100ForConditionalGeneration),
         (LEDConfig, LEDForConditionalGeneration),
         (BlenderbotSmallConfig, BlenderbotSmallForConditionalGeneration),
@@ -577,6 +589,7 @@ MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING = OrderedDict(
 MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING = OrderedDict(
     [
         # Model for Sequence Classification mapping
+        (BigBirdPegasusConfig, BigBirdPegasusForSequenceClassification),
         (BigBirdConfig, BigBirdForSequenceClassification),
         (ConvBertConfig, ConvBertForSequenceClassification),
         (LEDConfig, LEDForSequenceClassification),
@@ -614,6 +627,7 @@ MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING = OrderedDict(
 MODEL_FOR_QUESTION_ANSWERING_MAPPING = OrderedDict(
     [
         # Model for Question Answering mapping
+        (BigBirdPegasusConfig, BigBirdPegasusForQuestionAnswering),
         (BigBirdConfig, BigBirdForQuestionAnswering),
         (ConvBertConfig, ConvBertForQuestionAnswering),
         (LEDConfig, LEDForQuestionAnswering),
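Note: with the mappings above in place, the auto classes should resolve BigBirdPegasus checkpoints directly. A minimal sketch (the checkpoint name comes from the config archive map added below; everything else is standard auto-class usage, not part of the diff):

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    # BigBirdPegasusConfig -> BigBirdPegasusForConditionalGeneration via MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING
    tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/bigbird-pegasus-large-arxiv")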
@@ -549,6 +549,7 @@ class BigBirdBlockSparseAttention(nn.Module):
 
         rsqrt_d = 1 / math.sqrt(attention_head_size)
         bsz = batch_size
+        attn_mask_penalty = -10000.0
 
         # generate random attention and corresponding masks
         np.random.seed(seed)
@@ -606,7 +607,7 @@ class BigBirdBlockSparseAttention(nn.Module):
         first_product = self.torch_bmm_nd_transpose(blocked_query_matrix[:, :, 0], key_layer, ndim=4)
 
         first_product = first_product * rsqrt_d
-        first_product += (1.0 - to_mask) * -10000.0
+        first_product += (1.0 - to_mask) * attn_mask_penalty
         first_attn_weights = F.softmax(first_product, dim=-1)  # [bsz, n_heads, from_block_size, to_seq_len]
 
         # [bsz, n_heads, from_block_size, to_seq_len] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, -1]
@@ -658,7 +659,7 @@ class BigBirdBlockSparseAttention(nn.Module):
             dim=3,
         )
         second_product = second_product * rsqrt_d
-        second_product += (1.0 - torch.minimum(second_seq_pad, second_rand_pad)) * -10000.0
+        second_product += (1.0 - torch.minimum(second_seq_pad, second_rand_pad)) * attn_mask_penalty
         second_attn_weights = F.softmax(
             second_product, dim=-1
         )  # [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size]
@@ -709,10 +710,10 @@ class BigBirdBlockSparseAttention(nn.Module):
         last_band_product = last_band_product * rsqrt_d
 
         # masking padded tokens
-        inner_band_product += (1.0 - band_mask) * -10000.0
-        first_band_product += (1.0 - to_mask[:, :, :, :to_block_size].unsqueeze(3)) * -10000.0
-        last_band_product += (1.0 - to_mask[:, :, :, -to_block_size:].unsqueeze(3)) * -10000.0
-        rand_band_product += (1.0 - rand_mask[:, :, 1:-1]) * -10000.0
+        inner_band_product += (1.0 - band_mask) * attn_mask_penalty
+        first_band_product += (1.0 - to_mask[:, :, :, :to_block_size].unsqueeze(3)) * attn_mask_penalty
+        last_band_product += (1.0 - to_mask[:, :, :, -to_block_size:].unsqueeze(3)) * attn_mask_penalty
+        rand_band_product += (1.0 - rand_mask[:, :, 1:-1]) * attn_mask_penalty
 
         # completing attention scores matrix for all q[-2:2]
         band_product = torch.cat(
@@ -792,7 +793,7 @@ class BigBirdBlockSparseAttention(nn.Module):
             dim=3,
         )
         second_last_product = second_last_product * rsqrt_d
-        second_last_product += (1.0 - torch.minimum(second_last_seq_pad, second_last_rand_pad)) * -10000.0
+        second_last_product += (1.0 - torch.minimum(second_last_seq_pad, second_last_rand_pad)) * attn_mask_penalty
         second_last_attn_weights = F.softmax(
             second_last_product, dim=-1
         )  # [bsz, n_heads, from_block_size, (4+n_rand_blocks)*to_block_size]
@@ -808,7 +809,7 @@ class BigBirdBlockSparseAttention(nn.Module):
         # [bsz, n_heads, from_block_size, -1] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, to_seq_len]
         last_product = self.torch_bmm_nd_transpose(blocked_query_matrix[:, :, -1], key_layer, ndim=4)
         last_product = last_product * rsqrt_d
-        last_product += (1.0 - to_mask) * -10000.0
+        last_product += (1.0 - to_mask) * attn_mask_penalty
         last_attn_weights = F.softmax(last_product, dim=-1)  # [bsz, n_heads, from_block_size, n]
 
         # [bsz, n_heads, from_block_size, to_seq_len] x [bsz, n_heads, to_seq_len, -1] ==> [bsz, n_heads, from_block_size, -1]
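Note: the hunks above only give the additive mask constant a name (attn_mask_penalty) instead of repeating the literal -10000.0. The masking pattern itself is the usual one, roughly this schematic sketch (illustrative shapes, not the model code):

    import torch
    import torch.nn.functional as F

    attn_mask_penalty = -10000.0
    scores = torch.randn(1, 2, 4, 4)            # [bsz, n_heads, from_len, to_len]
    mask = torch.tensor([1.0, 1.0, 1.0, 0.0])   # 1 = attend, 0 = padding
    scores = scores + (1.0 - mask) * attn_mask_penalty  # padded positions get a large negative score
    probs = F.softmax(scores, dim=-1)           # masked positions end up with ~0 probability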
src/transformers/models/bigbird_pegasus/__init__.py (new file, 70 lines)

# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all.

# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...file_utils import _BaseLazyModule, is_torch_available


_import_structure = {
    "configuration_bigbird_pegasus": ["BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP", "BigBirdPegasusConfig"],
}

if is_torch_available():
    _import_structure["modeling_bigbird_pegasus"] = [
        "BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST",
        "BigBirdPegasusForCausalLM",
        "BigBirdPegasusForConditionalGeneration",
        "BigBirdPegasusForQuestionAnswering",
        "BigBirdPegasusForSequenceClassification",
        "BigBirdPegasusModel",
        "BigBirdPegasusPreTrainedModel",
    ]


if TYPE_CHECKING:
    from .configuration_bigbird_pegasus import BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdPegasusConfig

    if is_torch_available():
        from .modeling_bigbird_pegasus import (
            BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST,
            BigBirdPegasusForCausalLM,
            BigBirdPegasusForConditionalGeneration,
            BigBirdPegasusForQuestionAnswering,
            BigBirdPegasusForSequenceClassification,
            BigBirdPegasusModel,
            BigBirdPegasusPreTrainedModel,
        )


else:
    import importlib
    import os
    import sys

    class _LazyModule(_BaseLazyModule):
        """
        Module class that surfaces all objects but only performs associated imports when the objects are requested.
        """

        __file__ = globals()["__file__"]
        __path__ = [os.path.dirname(__file__)]

        def _get_module(self, module_name: str):
            return importlib.import_module("." + module_name, self.__name__)

    sys.modules[__name__] = _LazyModule(__name__, _import_structure)
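Note: this is the standard lazy-module boilerplate, so the torch-heavy modeling file is only imported when one of its symbols is actually requested. Roughly how it behaves in practice (a sketch of the intended usage, not part of the diff):

    # Importing the config only touches configuration_bigbird_pegasus.
    from transformers import BigBirdPegasusConfig

    # The modeling module is imported lazily, the first time a modeling class is requested.
    from transformers import BigBirdPegasusForConditionalGeneration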
src/transformers/models/bigbird_pegasus/configuration_bigbird_pegasus.py (new file, 196 lines)

# coding=utf-8
# Copyright Google Research and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" BigBirdPegasus model configuration """

from ...configuration_utils import PretrainedConfig
from ...utils import logging


logger = logging.get_logger(__name__)

BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP = {
    "google/bigbird-pegasus-large-arxiv": "https://huggingface.co/google/bigbird-pegasus-large-arxiv/resolve/main/config.json",
    "google/bigbird-pegasus-large-pubmed": "https://huggingface.co/google/bigbird-pegasus-large-pubmed/resolve/main/config.json",
    "google/bigbird-pegasus-large-bigpatent": "https://huggingface.co/google/bigbird-pegasus-large-bigpatent/resolve/main/config.json",
    # See all BigBirdPegasus models at https://huggingface.co/models?filter=bigbird_pegasus
}


class BigBirdPegasusConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a :class:`~transformers.BigBirdPegasusModel`. It is
    used to instantiate a BigBirdPegasus model according to the specified arguments, defining the model architecture.
    Instantiating a configuration with the defaults will yield a similar configuration to that of the BigBirdPegasus
    `google/bigbird-pegasus-large-arxiv <https://huggingface.co/google/bigbird-pegasus-large-arxiv>`__ architecture.

    Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model
    outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information.


    Args:
        vocab_size (:obj:`int`, `optional`, defaults to 96103):
            Vocabulary size of the BigBirdPegasus model. Defines the number of different tokens that can be
            represented by the :obj:`inputs_ids` passed when calling :class:`~transformers.BigBirdPegasusModel`.
        d_model (:obj:`int`, `optional`, defaults to 1024):
            Dimension of the layers and the pooler layer.
        encoder_layers (:obj:`int`, `optional`, defaults to 16):
            Number of encoder layers.
        decoder_layers (:obj:`int`, `optional`, defaults to 16):
            Number of decoder layers.
        encoder_attention_heads (:obj:`int`, `optional`, defaults to 16):
            Number of attention heads for each attention layer in the Transformer encoder.
        decoder_attention_heads (:obj:`int`, `optional`, defaults to 16):
            Number of attention heads for each attention layer in the Transformer decoder.
        decoder_ffn_dim (:obj:`int`, `optional`, defaults to 4096):
            Dimension of the "intermediate" (often named feed-forward) layer in the decoder.
        encoder_ffn_dim (:obj:`int`, `optional`, defaults to 4096):
            Dimension of the "intermediate" (often named feed-forward) layer in the encoder.
        activation_function (:obj:`str` or :obj:`function`, `optional`, defaults to :obj:`"gelu_fast"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string,
            :obj:`"gelu"`, :obj:`"relu"`, :obj:`"silu"`, :obj:`"gelu_fast"` and :obj:`"gelu_new"` are supported.
        dropout (:obj:`float`, `optional`, defaults to 0.1):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_dropout (:obj:`float`, `optional`, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        activation_dropout (:obj:`float`, `optional`, defaults to 0.0):
            The dropout ratio for activations inside the fully connected layer.
        classifier_dropout (:obj:`float`, `optional`, defaults to 0.0):
            The dropout ratio for the classifier.
        max_position_embeddings (:obj:`int`, `optional`, defaults to 4096):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 1024 or 2048 or 4096).
        init_std (:obj:`float`, `optional`, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        encoder_layerdrop (:obj:`float`, `optional`, defaults to 0.0):
            The LayerDrop probability for the encoder. See the `LayerDrop paper
            <https://arxiv.org/abs/1909.11556>`__ for more details.
        decoder_layerdrop (:obj:`float`, `optional`, defaults to 0.0):
            The LayerDrop probability for the decoder. See the `LayerDrop paper
            <https://arxiv.org/abs/1909.11556>`__ for more details.
        use_cache (:obj:`bool`, `optional`, defaults to :obj:`True`):
            Whether or not the model should return the last key/values attentions (not used by all models).
        attention_type (:obj:`str`, `optional`, defaults to :obj:`"block_sparse"`):
            Whether to use the block sparse attention (with linear complexity in the sequence length) introduced in
            the paper, or the original full attention layer (with quadratic complexity) in the encoder. Possible
            values are :obj:`"original_full"` and :obj:`"block_sparse"`.
        use_bias (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether to use bias in the query, key, value projections.
        block_size (:obj:`int`, `optional`, defaults to 64):
            Size of each block. Useful only when :obj:`attention_type == "block_sparse"`.
        num_random_blocks (:obj:`int`, `optional`, defaults to 3):
            Each query is going to attend these many random blocks. Useful only when :obj:`attention_type ==
            "block_sparse"`.
        scale_embedding (:obj:`bool`, `optional`, defaults to :obj:`True`):
            Whether to rescale embeddings with (hidden_size ** 0.5).
        gradient_checkpointing (:obj:`bool`, `optional`, defaults to :obj:`False`):
            If True, use gradient checkpointing to save memory at the expense of a slower backward pass.

    Example::

        >>> from transformers import BigBirdPegasusModel, BigBirdPegasusConfig

        >>> # Initializing a BigBirdPegasus bigbird-pegasus-base style configuration
        >>> configuration = BigBirdPegasusConfig()

        >>> # Initializing a model from the bigbird-pegasus-base style configuration
        >>> model = BigBirdPegasusModel(configuration)

        >>> # Accessing the model configuration
        >>> configuration = model.config
    """
    model_type = "bigbird_pegasus"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=96103,
        max_position_embeddings=4096,
        encoder_layers=16,
        encoder_ffn_dim=4096,
        encoder_attention_heads=16,
        decoder_layers=16,
        decoder_ffn_dim=4096,
        decoder_attention_heads=16,
        encoder_layerdrop=0.0,
        decoder_layerdrop=0.0,
        use_cache=True,
        is_encoder_decoder=True,
        activation_function="gelu_fast",
        d_model=1024,
        dropout=0.1,
        attention_dropout=0.0,
        activation_dropout=0.0,
        init_std=0.02,
        decoder_start_token_id=2,
        classifier_dropout=0.0,
        scale_embedding=True,
        gradient_checkpointing=False,
        pad_token_id=0,
        bos_token_id=2,
        eos_token_id=1,
        attention_type="block_sparse",  # only for encoder
        block_size=64,
        num_random_blocks=3,
        use_bias=False,
        **kwargs
    ):
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            is_encoder_decoder=is_encoder_decoder,
            decoder_start_token_id=decoder_start_token_id,
            **kwargs,
        )

        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.d_model = d_model
        self.encoder_ffn_dim = encoder_ffn_dim
        self.encoder_layers = encoder_layers
        self.encoder_attention_heads = encoder_attention_heads
        self.decoder_ffn_dim = decoder_ffn_dim
        self.decoder_layers = decoder_layers
        self.decoder_attention_heads = decoder_attention_heads
        self.dropout = dropout
        self.attention_dropout = attention_dropout
        self.activation_dropout = activation_dropout
        self.activation_function = activation_function
        self.init_std = init_std
        self.encoder_layerdrop = encoder_layerdrop
        self.decoder_layerdrop = decoder_layerdrop
        self.classifier_dropout = classifier_dropout
        self.use_cache = use_cache
        self.num_hidden_layers = encoder_layers
        self.gradient_checkpointing = gradient_checkpointing
        self.scale_embedding = scale_embedding  # scale factor will be sqrt(d_model) if True

        # extra config
        self.attention_type = attention_type
        self.block_size = block_size
        self.num_random_blocks = num_random_blocks
        self.use_bias = use_bias

    @property
    def num_attention_heads(self) -> int:
        return self.encoder_attention_heads

    @property
    def hidden_size(self) -> int:
        return self.d_model

    @property
    def attention_probs_dropout_prob(self) -> float:
        return self.attention_dropout
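Note: a quick illustration of the encoder-attention knobs above. The hyper-parameter values here are arbitrary and only meant to keep the example tiny; the released checkpoints use the defaults documented in the class.

    from transformers import BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration

    config = BigBirdPegasusConfig(
        encoder_layers=2,
        decoder_layers=2,
        d_model=128,
        encoder_attention_heads=4,
        decoder_attention_heads=4,
        attention_type="block_sparse",  # switch to "original_full" for short inputs
        block_size=64,
        num_random_blocks=3,
    )
    model = BigBirdPegasusForConditionalGeneration(config)  # randomly initialized, for illustration only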
New file (171 lines): script for converting a BigBirdPegasus TensorFlow checkpoint to PyTorch.

# coding=utf-8
# Copyright 2021 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
from typing import Dict

import tensorflow as tf
import torch
from tqdm import tqdm

from transformers import BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration


INIT_COMMON = [
    # tf -> hf
    ("/", "."),
    ("layer_", "layers."),
    ("kernel", "weight"),
    ("beta", "bias"),
    ("gamma", "weight"),
    ("pegasus", "model"),
]
END_COMMON = [
    (".output.dense", ".fc2"),
    ("intermediate.LayerNorm", "final_layer_norm"),
    ("intermediate.dense", "fc1"),
]

DECODER_PATTERNS = (
    INIT_COMMON
    + [
        ("attention.self.LayerNorm", "self_attn_layer_norm"),
        ("attention.output.dense", "self_attn.out_proj"),
        ("attention.self", "self_attn"),
        ("attention.encdec.LayerNorm", "encoder_attn_layer_norm"),
        ("attention.encdec_output.dense", "encoder_attn.out_proj"),
        ("attention.encdec", "encoder_attn"),
        ("key", "k_proj"),
        ("value", "v_proj"),
        ("query", "q_proj"),
        ("decoder.LayerNorm", "decoder.layernorm_embedding"),
    ]
    + END_COMMON
)

REMAINING_PATTERNS = (
    INIT_COMMON
    + [
        ("embeddings.word_embeddings", "shared.weight"),
        ("embeddings.position_embeddings", "embed_positions.weight"),
        ("attention.self.LayerNorm", "self_attn_layer_norm"),
        ("attention.output.dense", "self_attn.output"),
        ("attention.self", "self_attn.self"),
        ("encoder.LayerNorm", "encoder.layernorm_embedding"),
    ]
    + END_COMMON
)

KEYS_TO_IGNORE = [
    "encdec/key/bias",
    "encdec/query/bias",
    "encdec/value/bias",
    "self/key/bias",
    "self/query/bias",
    "self/value/bias",
    "encdec_output/dense/bias",
    "attention/output/dense/bias",
]


def rename_state_dict_key(k, patterns):
    for tf_name, hf_name in patterns:
        k = k.replace(tf_name, hf_name)
    return k


def convert_bigbird_pegasus(tf_weights: dict, config_update: dict) -> BigBirdPegasusForConditionalGeneration:

    cfg = BigBirdPegasusConfig(**config_update)
    torch_model = BigBirdPegasusForConditionalGeneration(cfg)
    state_dict = torch_model.state_dict()
    mapping = {}

    # separating decoder weights
    decoder_weights = {k: tf_weights[k] for k in tf_weights if k.startswith("pegasus/decoder")}
    remaining_weights = {k: tf_weights[k] for k in tf_weights if not k.startswith("pegasus/decoder")}

    for k, v in tqdm(decoder_weights.items(), "tf -> hf conversion"):
        conditions = [k.endswith(ending) for ending in KEYS_TO_IGNORE]
        if any(conditions):
            continue
        patterns = DECODER_PATTERNS
        new_k = rename_state_dict_key(k, patterns)
        if new_k not in state_dict:
            raise ValueError(f"could not find new key {new_k} in state dict. (converted from {k})")
        if any(i in k for i in ["dense", "query", "key", "value"]):
            v = v.T
        mapping[new_k] = torch.from_numpy(v)
        assert v.shape == state_dict[new_k].shape, f"{new_k}, {k}, {v.shape}, {state_dict[new_k].shape}"

    for k, v in tqdm(remaining_weights.items(), "tf -> hf conversion"):
        conditions = [k.endswith(ending) for ending in KEYS_TO_IGNORE]
        if any(conditions):
            continue
        patterns = REMAINING_PATTERNS
        new_k = rename_state_dict_key(k, patterns)
        if new_k not in state_dict and k != "pegasus/embeddings/position_embeddings":
            raise ValueError(f"could not find new key {new_k} in state dict. (converted from {k})")
        if any(i in k for i in ["dense", "query", "key", "value"]):
            v = v.T
        mapping[new_k] = torch.from_numpy(v)
        if k != "pegasus/embeddings/position_embeddings":
            assert v.shape == state_dict[new_k].shape, f"{new_k}, {k}, {v.shape}, {state_dict[new_k].shape}"

    mapping["model.encoder.embed_positions.weight"] = mapping["model.embed_positions.weight"]
    mapping["model.decoder.embed_positions.weight"] = mapping.pop("model.embed_positions.weight")
    missing, extra = torch_model.load_state_dict(mapping, strict=False)
    unexpected_missing = [
        k
        for k in missing
        if k
        not in [
            "final_logits_bias",
            "model.encoder.embed_tokens.weight",
            "model.decoder.embed_tokens.weight",
            "lm_head.weight",
        ]
    ]
    assert unexpected_missing == [], f"no matches found for the following torch keys {unexpected_missing}"
    assert extra == [], f"no matches found for the following tf keys {extra}"
    return torch_model


def get_tf_weights_as_numpy(path) -> Dict:
    init_vars = tf.train.list_variables(path)
    tf_weights = {}
    ignore_name = ["global_step"]
    for name, shape in tqdm(init_vars, desc="converting tf checkpoint to dict"):
        skip_key = any(pat in name for pat in ignore_name)
        if skip_key:
            continue
        array = tf.train.load_variable(path, name)
        tf_weights[name] = array
    return tf_weights


def convert_bigbird_pegasus_ckpt_to_pytorch(ckpt_path: str, save_dir: str, config_update: dict):
    tf_weights = get_tf_weights_as_numpy(ckpt_path)
    torch_model = convert_bigbird_pegasus(tf_weights, config_update)
    torch_model.save_pretrained(save_dir)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--tf_ckpt_path", type=str, help="passed to tf.train.list_variables")
    parser.add_argument("--save_dir", default=None, type=str, help="Path to the output PyTorch model.")
    args = parser.parse_args()
    config_update = {}
    convert_bigbird_pegasus_ckpt_to_pytorch(args.tf_ckpt_path, args.save_dir, config_update=config_update)
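Note: the conversion is driven by the last function above; calling it directly looks roughly like this (paths are placeholders, and config_update entries are optional overrides of BigBirdPegasusConfig fields):

    convert_bigbird_pegasus_ckpt_to_pytorch(
        ckpt_path="/path/to/tf_checkpoint",   # passed through to tf.train.list_variables
        save_dir="./bigbird-pegasus-converted",
        config_update={},                     # e.g. {"attention_type": "block_sparse", "block_size": 64}
    )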
src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py (new executable file, 3009 lines); diff suppressed because it is too large.
@@ -80,7 +80,6 @@ class PegasusTokenizer(PreTrainedTokenizer):
     """
     vocab_files_names = VOCAB_FILES_NAMES
 
-    offset = 103  # entries 2 - 104 are only used for pretraining
     vocab_files_names = VOCAB_FILES_NAMES
     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
     max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
@@ -95,8 +94,11 @@ class PegasusTokenizer(PreTrainedTokenizer):
         mask_token="<mask_2>",
         mask_token_sent="<mask_1>",
         additional_special_tokens=None,
+        offset=103,  # entries 2 - 104 are only used for pretraining
         **kwargs
     ):
+        self.offset = offset
+
         if additional_special_tokens is not None:
             assert isinstance(
                 additional_special_tokens, list
@@ -104,7 +106,7 @@ class PegasusTokenizer(PreTrainedTokenizer):
 
             additional_special_tokens_extended = (
                 ([mask_token_sent] + additional_special_tokens)
-                if mask_token_sent not in additional_special_tokens
+                if mask_token_sent not in additional_special_tokens and mask_token_sent is not None
                 else additional_special_tokens
             )
             # fill additional tokens with ..., <unk_token_102> in case not all additional tokens are already taken
@@ -118,7 +120,7 @@ class PegasusTokenizer(PreTrainedTokenizer):
             )
             additional_special_tokens = additional_special_tokens_extended
         else:
-            additional_special_tokens = [mask_token_sent]
+            additional_special_tokens = [mask_token_sent] if mask_token_sent is not None else []
             additional_special_tokens += [f"<unk_{i}>" for i in range(2, self.offset)]
 
         super().__init__(
@@ -127,24 +129,34 @@ class PegasusTokenizer(PreTrainedTokenizer):
             mask_token=mask_token,
             pad_token=pad_token,
             mask_token_sent=mask_token_sent,
+            offset=offset,
             additional_special_tokens=additional_special_tokens,
             **kwargs,
         )
+        self.mask_token_sent = mask_token_sent
         self.vocab_file = vocab_file
         self.sp_model = spm.SentencePieceProcessor()
         self.sp_model.Load(vocab_file)
-        self.mask_token_sent = mask_token_sent
 
         # add special tokens to encoder dict
         self.encoder: Dict[int, str] = {
             0: self.pad_token,
             1: self.eos_token,
+        }
+
+        if self.mask_token_sent is not None:
+            self.encoder.update(
+                {
                     2: self.mask_token_sent,
                     3: self.mask_token,
                 }
+            )
+
+        if self.offset > 0:
             # entries 2-104 are only used for pretraining and called <mask_1>, <mask_2>, unk_2, ...unk_102
             # mask_token_sent is already added to list -> so start at 1
             self.encoder.update({i + 3: additional_special_tokens[i] for i in range(1, self.offset - 1)})
 
         self.decoder: Dict[str, int] = {v: k for k, v in self.encoder.items()}
 
     @property
@@ -206,10 +218,6 @@ class PegasusTokenizer(PreTrainedTokenizer):
         all_special_ids = set(self.all_special_ids)  # call it once instead of inside list comp
         all_special_ids.remove(self.unk_token_id)  # <unk> is only sometimes special
 
-        assert all_special_ids == set(
-            range(len(self.additional_special_tokens) + 3)
-        ), f"There should be 3 special tokens: mask_token, pad_token, and eos_token + {len(self.additional_special_tokens)} additional_special_tokens, but got {all_special_ids}"
-
         return [1 if x in all_special_ids else 0 for x in seq]
 
     def get_special_tokens_mask(
@@ -90,7 +90,6 @@ class PegasusTokenizerFast(PreTrainedTokenizerFast):
    <https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/ops/pretrain_parsing_ops.cc#L66>`__
    that uses the tokens 2 - 104 only for pretraining
    """
-    offset = 103  # entries 2-104 are only used for pretraining
     vocab_files_names = VOCAB_FILES_NAMES
     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
     max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
@@ -107,8 +106,11 @@ class PegasusTokenizerFast(PreTrainedTokenizerFast):
         mask_token="<mask_2>",
         mask_token_sent="<mask_1>",
         additional_special_tokens=None,
+        offset=103,  # entries 2 - 104 are only used for pretraining
         **kwargs
     ):
+        self.offset = offset
+
         if additional_special_tokens is not None:
             assert isinstance(
                 additional_special_tokens, list
@@ -116,7 +118,7 @@ class PegasusTokenizerFast(PreTrainedTokenizerFast):
 
             additional_special_tokens_extended = (
                 ([mask_token_sent] + additional_special_tokens)
-                if mask_token_sent not in additional_special_tokens
+                if mask_token_sent not in additional_special_tokens and mask_token_sent is not None
                 else additional_special_tokens
             )
             # fill additional tokens with ..., <unk_token_102> in case not all additional tokens are already taken
@@ -130,7 +132,7 @@ class PegasusTokenizerFast(PreTrainedTokenizerFast):
             )
             additional_special_tokens = additional_special_tokens_extended
         else:
-            additional_special_tokens = [mask_token_sent]
+            additional_special_tokens = [mask_token_sent] if mask_token_sent is not None else []
             additional_special_tokens += [f"<unk_{i}>" for i in range(2, self.offset)]
 
         super().__init__(
@@ -141,10 +143,10 @@ class PegasusTokenizerFast(PreTrainedTokenizerFast):
             unk_token=unk_token,
             mask_token=mask_token,
             mask_token_sent=mask_token_sent,
+            offset=offset,
             additional_special_tokens=additional_special_tokens,
             **kwargs,
         )
 
         self.vocab_file = vocab_file
 
     def _special_token_mask(self, seq):
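Note: the net effect of the tokenizer changes is that `offset` and `mask_token_sent` are now constructor arguments instead of hard-coded values, which is what the BigBird-Pegasus checkpoints rely on. A minimal sketch, mirroring the new test fixture below (the vocab path is a placeholder):

    from transformers import PegasusTokenizer

    # BigBirdPegasus-style setup: no sentence-mask token and no reserved pretraining ids.
    tokenizer = PegasusTokenizer(
        "spiece.model",        # placeholder path to a SentencePiece model file
        offset=0,
        mask_token_sent=None,
        mask_token="[MASK]",
    )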
@@ -721,6 +721,50 @@ def load_tf_weights_in_big_bird(*args, **kwargs):
     requires_backends(load_tf_weights_in_big_bird, ["torch"])
 
 
+BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST = None
+
+
+class BigBirdPegasusForCausalLM:
+    def __init__(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+
+class BigBirdPegasusForConditionalGeneration:
+    def __init__(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+    @classmethod
+    def from_pretrained(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+
+class BigBirdPegasusForQuestionAnswering:
+    def __init__(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+    @classmethod
+    def from_pretrained(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+
+class BigBirdPegasusForSequenceClassification:
+    def __init__(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+    @classmethod
+    def from_pretrained(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+
+class BigBirdPegasusModel:
+    def __init__(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+    @classmethod
+    def from_pretrained(self, *args, **kwargs):
+        requires_backends(self, ["torch"])
+
+
 BLENDERBOT_PRETRAINED_MODEL_ARCHIVE_LIST = None
 
 
@@ -6,6 +6,7 @@ from collections import OrderedDict
 
 MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict(
     [
+        ("BigBirdPegasusConfig", "BigBirdPegasusForQuestionAnswering"),
         ("BigBirdConfig", "BigBirdForQuestionAnswering"),
         ("ConvBertConfig", "ConvBertForQuestionAnswering"),
         ("LEDConfig", "LEDForQuestionAnswering"),
@@ -309,7 +309,6 @@ class GenerationTesterMixin:
             # prevent flaky generation test failures
             logits_processor.append(InfNanRemoveLogitsProcessor())
 
-            with torch.no_grad():
             with torch.no_grad():
                 output_sample = model.sample(
                     input_ids_clone,

tests/test_modeling_bigbird_pegasus.py (new file, 762 lines); diff suppressed because one or more lines are too long.
@@ -1043,7 +1043,6 @@ class ModelTesterMixin:
         with tempfile.TemporaryDirectory() as temp_dir_name:
             model.base_model.save_pretrained(temp_dir_name)
             model, loading_info = model_class.from_pretrained(temp_dir_name, output_loading_info=True)
-
             with self.subTest(msg=f"Missing keys for {model.__class__.__name__}"):
                 self.assertGreater(len(loading_info["missing_keys"]), 0)
 
@@ -55,7 +55,6 @@ class PegasusTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
         raw_input_str = "Let's see which <unk> is the better <unk_token_11> one <mask_1> It seems like this <mask_2> was important </s> <pad> <pad> <pad>"
         rust_ids = rust_tokenizer([raw_input_str], return_tensors=None, add_special_tokens=False).input_ids[0]
         py_ids = py_tokenizer([raw_input_str], return_tensors=None, add_special_tokens=False).input_ids[0]
-        # TODO: (Thom, Patrick) - this fails because the rust tokenizer does not know about the <mask_1>, <mask_2>, and those <unk_token_x> yet
         self.assertListEqual(py_ids, rust_ids)
 
     def test_large_mask_tokens(self):
@@ -96,3 +95,81 @@ class PegasusTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
         assert batch.attention_mask.shape == (2, 1024)
         assert targets["input_ids"].shape == (2, 5)
         assert len(batch) == 2  # input_ids, attention_mask.
+
+
+@require_sentencepiece
+@require_tokenizers
+class BigBirdPegasusTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
+
+    tokenizer_class = PegasusTokenizer
+    rust_tokenizer_class = PegasusTokenizerFast
+    test_rust_tokenizer = True
+
+    def setUp(self):
+        super().setUp()
+
+        # We have a SentencePiece fixture for testing
+        tokenizer = PegasusTokenizer(SAMPLE_VOCAB, offset=0, mask_token_sent=None, mask_token="[MASK]")
+        tokenizer.save_pretrained(self.tmpdirname)
+
+    @cached_property
+    def _large_tokenizer(self):
+        return PegasusTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
+
+    def get_tokenizer(self, **kwargs) -> PegasusTokenizer:
+        return PegasusTokenizer.from_pretrained(self.tmpdirname, **kwargs)
+
+    def get_input_output_texts(self, tokenizer):
+        return ("This is a test", "This is a test")
+
+    def test_mask_tokens_rust_pegasus(self):
+        rust_tokenizer = self.rust_tokenizer_class.from_pretrained(self.tmpdirname)
+        py_tokenizer = self.tokenizer_class.from_pretrained(self.tmpdirname)
+        raw_input_str = "Let's see which <unk> is the better <unk_token> one [MASK] It seems like this [MASK] was important </s> <pad> <pad> <pad>"
+        rust_ids = rust_tokenizer([raw_input_str], return_tensors=None, add_special_tokens=False).input_ids[0]
+        py_ids = py_tokenizer([raw_input_str], return_tensors=None, add_special_tokens=False).input_ids[0]
+        self.assertListEqual(py_ids, rust_ids)
+
+    @require_torch
+    def test_large_seq2seq_truncation(self):
+        src_texts = ["This is going to be way too long." * 1000, "short example"]
+        tgt_texts = ["not super long but more than 5 tokens", "tiny"]
+        batch = self._large_tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt")
+        with self._large_tokenizer.as_target_tokenizer():
+            targets = self._large_tokenizer(
+                tgt_texts, max_length=5, padding=True, truncation=True, return_tensors="pt"
+            )
+
+        assert batch.input_ids.shape == (2, 4096)
+        assert batch.attention_mask.shape == (2, 4096)
+        assert targets["input_ids"].shape == (2, 5)
+        assert len(batch) == 2  # input_ids, attention_mask.
+
+    def test_equivalence_to_orig_tokenizer(self):
+        """
+        To run with original TF tokenizer:
+
+        !wget https://github.com/google-research/bigbird/raw/master/bigbird/vocab/pegasus.model
+        !pip install tensorflow-text
+
+        import tensorflow.compat.v2 as tf
+        import tensorflow_text as tft
+
+        VOCAB_FILE = "./pegasus.model"
+
+        tf.enable_v2_behavior()
+
+        test_str = "This is an example string that is used to test the original TF implementation against the HF implementation"
+        tokenizer = tft.SentencepieceTokenizer(model=tf.io.gfile.GFile(VOCAB_FILE, "rb").read())
+
+        tokenizer.tokenize(test_str)
+        """
+
+        test_str = "This is an example string that is used to test the original TF implementation against the HF implementation"
+
+        token_ids = self._large_tokenizer(test_str).input_ids
+
+        self.assertListEqual(
+            token_ids,
+            [182, 117, 142, 587, 4211, 120, 117, 263, 112, 804, 109, 856, 25016, 3137, 464, 109, 26955, 3137, 1],
+        )
@@ -35,6 +35,9 @@ PATH_TO_DOC = "docs/source"
 # Being in this list is an exception and should **not** be the rule.
 IGNORE_NON_TESTED = [
     # models to ignore for not tested
+    "BigBirdPegasusEncoder",  # Building part of bigger (tested) model.
+    "BigBirdPegasusDecoder",  # Building part of bigger (tested) model.
+    "BigBirdPegasusDecoderWrapper",  # Building part of bigger (tested) model.
     "M2M100Encoder",  # Building part of bigger (tested) model.
     "M2M100Decoder",  # Building part of bigger (tested) model.
     "Speech2TextEncoder",  # Building part of bigger (tested) model.