Check and update model list in index.rst automatically (#7527)
* Check and update model list in index.rst automatically
* Adapt template
parent ca05c2a47d
commit b2b7fc7814
@@ -54,105 +54,117 @@ The documentation is organized in five parts:
 The library currently contains PyTorch and Tensorflow implementations, pre-trained model weights, usage scripts and
 conversion utilities for the following models:

-1. `ALBERT <https://github.com/google-research/ALBERT>`_ (from Google Research), released together with the paper
-   `ALBERT: A Lite BERT for Self-supervised Learning of Language Representations <https://arxiv.org/abs/1909.11942>`_
-   by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
-2. `BART <https://github.com/pytorch/fairseq/tree/master/examples/bart>`_ (from Facebook) released with the paper
-   `BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
-   <https://arxiv.org/pdf/1910.13461.pdf>`_ by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman
-   Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer.
-3. `BERT <https://github.com/google-research/bert>`_ (from Google) released with the paper `BERT: Pre-training of Deep
-   Bidirectional Transformers for Language Understanding <https://arxiv.org/abs/1810.04805>`_ by Jacob Devlin, Ming-Wei
-   Chang, Kenton Lee, and Kristina Toutanova.
-4. `BERT For Sequence Generation <https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder>`_
-   (from Google) released with the paper `Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
-   <https://arxiv.org/abs/1907.12461>`_ by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
-5. `CamemBERT <https://huggingface.co/transformers/model_doc/camembert.html>`_ (from FAIR, Inria, Sorbonne Université)
-   released together with the paper `CamemBERT: a Tasty French Language Model <https://arxiv.org/abs/1911.03894>`_ by
-   Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suarez, Yoann Dupont, Laurent Romary, Eric Villemonte de la
-   Clergerie, Djame Seddah, and Benoît Sagot.
-6. `CTRL <https://github.com/pytorch/fairseq/tree/master/examples/ctrl>`_ (from Salesforce), released together with the
-   paper `CTRL: A Conditional Transformer Language Model for Controllable Generation
-   <https://www.github.com/salesforce/ctrl>`_ by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong,
-   and Richard Socher.
-7. `DeBERTa <https://huggingface.co/transformers/model_doc/deberta.html>`_ (from Microsoft Research) released with the
-   paper `DeBERTa: Decoding-enhanced BERT with Disentangled Attention <https://arxiv.org/abs/2006.03654>`_ by Pengcheng
-   He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
-8. `DialoGPT <https://github.com/microsoft/DialoGPT>`_ (from Microsoft Research) released with the paper `DialoGPT:
-   Large-Scale Generative Pre-training for Conversational Response Generation <https://arxiv.org/abs/1911.00536>`_ by
-   Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu,
-   and Bill Dolan.
-9. `DistilBERT <https://huggingface.co/transformers/model_doc/distilbert.html>`_ (from HuggingFace) released together
+..
+    This list is updated automatically from the README with `make fix-copies`. Do not update manually!
+
+1. `ALBERT <https://huggingface.co/transformers/model_doc/albert.html>`__ (from Google Research and the Toyota
+   Technological Institute at Chicago) released with the paper `ALBERT: A Lite BERT for Self-supervised Learning of
+   Language Representations <https://arxiv.org/abs/1909.11942>`__, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman,
+   Kevin Gimpel, Piyush Sharma, Radu Soricut.
+2. `BART <https://huggingface.co/transformers/model_doc/bart.html>`__ (from Facebook) released with the paper `BART:
+   Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
+   <https://arxiv.org/pdf/1910.13461.pdf>`__ by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman
+   Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
+3. `BERT <https://huggingface.co/transformers/model_doc/bert.html>`__ (from Google) released with the paper `BERT:
+   Pre-training of Deep Bidirectional Transformers for Language Understanding <https://arxiv.org/abs/1810.04805>`__ by
+   Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
+4. `BERT For Sequence Generation <https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder>`__ (from
+   Google) released with the paper `Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
+   <https://arxiv.org/abs/1907.12461>`__ by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
+5. `CamemBERT <https://huggingface.co/transformers/model_doc/camembert.html>`__ (from Inria/Facebook/Sorbonne) released
+   with the paper `CamemBERT: a Tasty French Language Model <https://arxiv.org/abs/1911.03894>`__ by Louis Martin*,
+   Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé
+   Seddah and Benoît Sagot.
+6. `CTRL <https://huggingface.co/transformers/model_doc/ctrl.html>`__ (from Salesforce) released with the paper `CTRL:
+   A Conditional Transformer Language Model for Controllable Generation <https://arxiv.org/abs/1909.05858>`__ by Nitish
+   Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
+7. `DeBERTa <https://huggingface.co/transformers/model_doc/deberta.html>`__ (from Microsoft Research) released with the
+   paper `DeBERTa: Decoding-enhanced BERT with Disentangled Attention <https://arxiv.org/abs/2006.03654>`__ by
+   Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
+8. `DialoGPT <https://huggingface.co/transformers/model_doc/dialogpt.html>`__ (from Microsoft Research) released with
+   the paper `DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
+   <https://arxiv.org/abs/1911.00536>`__ by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang
+   Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
+9. `DistilBERT <https://huggingface.co/transformers/model_doc/distilbert.html>`__ (from HuggingFace), released together
    with the paper `DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
-   <https://arxiv.org/abs/1910.01108>`_ by Victor Sanh, Lysandre Debut, and Thomas Wolf. The same method has been
-   applied to compress GPT2 into
-   `DistilGPT2 <https://github.com/huggingface/transformers/tree/master/examples/distillation>`_.
-10. `DPR <https://github.com/facebookresearch/DPR>`_ (from Facebook) released with the paper `Dense Passage Retrieval
-    for Open-Domain Question Answering <https://arxiv.org/abs/2004.04906>`_ by Vladimir Karpukhin, Barlas Oğuz, Sewon
+   <https://arxiv.org/abs/1910.01108>`__ by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been
+   applied to compress GPT2 into `DistilGPT2
+   <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__, RoBERTa into `DistilRoBERTa
+   <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__, Multilingual BERT into
+   `DistilmBERT <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__ and a German version
+   of DistilBERT.
+10. `DPR <https://github.com/facebookresearch/DPR>`__ (from Facebook) released with the paper `Dense Passage Retrieval
+    for Open-Domain Question Answering <https://arxiv.org/abs/2004.04906>`__ by Vladimir Karpukhin, Barlas Oğuz, Sewon
     Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
-11. `ELECTRA <https://github.com/google-research/electra>`_ (from Google Research/Stanford University) released with
-    the paper `ELECTRA: Pre-training text encoders as discriminators rather than generators
-    <https://arxiv.org/abs/2003.10555>`_ by Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.
-12. `FlauBERT <https://github.com/getalp/Flaubert>`_ (from CNRS) released with the paper `FlauBERT: Unsupervised
-    Language Model Pre-training for French <https://arxiv.org/abs/1912.05372>`_ by Hang Le, Loïc Vial, Jibril Frej,
-    Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and
-    Didier Schwab.
-13. `Funnel Transformer <https://github.com/laiguokun/Funnel-Transformer>`_ (from CMU/Google Brain) released with the paper
-    `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing
-    <https://arxiv.org/abs/2006.03236>`_ by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
-14. `GPT <https://github.com/openai/finetune-transformer-lm>`_ (from OpenAI) released with the paper `Improving Language
-    Understanding by Generative Pre-Training <https://blog.openai.com/language-unsupervised>`_ by Alec Radford, Karthik
-    Narasimhan, Tim Salimans, and Ilya Sutskever.
-15. `GPT-2 <https://blog.openai.com/better-language-models>`_ (from OpenAI) released with the paper `Language Models are
-    Unsupervised Multitask Learners <https://blog.openai.com/better-language-models>`_ by Alec Radford, Jeffrey Wu,
-    Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.
-16. `LayoutLM <https://github.com/microsoft/unilm/tree/master/layoutlm>`_ (from Microsoft Research Asia) released with
+11. `ELECTRA <https://huggingface.co/transformers/model_doc/electra.html>`__ (from Google Research/Stanford University)
+    released with the paper `ELECTRA: Pre-training text encoders as discriminators rather than generators
+    <https://arxiv.org/abs/2003.10555>`__ by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
+12. `FlauBERT <https://huggingface.co/transformers/model_doc/flaubert.html>`__ (from CNRS) released with the paper
+    `FlauBERT: Unsupervised Language Model Pre-training for French <https://arxiv.org/abs/1912.05372>`__ by Hang Le,
+    Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé,
+    Laurent Besacier, Didier Schwab.
+13. `Funnel Transformer <https://github.com/laiguokun/Funnel-Transformer>`__ (from CMU/Google Brain) released with the
+    paper `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing
+    <https://arxiv.org/abs/2006.03236>`__ by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
+14. `GPT <https://huggingface.co/transformers/model_doc/gpt.html>`__ (from OpenAI) released with the paper `Improving
+    Language Understanding by Generative Pre-Training <https://blog.openai.com/language-unsupervised/>`__ by Alec
+    Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
+15. `GPT-2 <https://huggingface.co/transformers/model_doc/gpt2.html>`__ (from OpenAI) released with the paper `Language
+    Models are Unsupervised Multitask Learners <https://blog.openai.com/better-language-models/>`__ by Alec Radford*,
+    Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
+16. `LayoutLM <https://github.com/microsoft/unilm/tree/master/layoutlm>`__ (from Microsoft Research Asia) released with
     the paper `LayoutLM: Pre-training of Text and Layout for Document Image Understanding
-    <https://arxiv.org/abs/1912.13318>`_ by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
-17. `Longformer <https://github.com/allenai/longformer>`_ (from AllenAI) released with the paper `Longformer: The
-    Long-Document Transformer <https://arxiv.org/abs/2004.05150>`_ by Iz Beltagy, Matthew E. Peters, and Arman Cohan.
-18. `LXMERT <https://github.com/airsplay/lxmert>`_ (from UNC Chapel Hill) released with the paper `LXMERT: Learning
-    Cross-Modality Encoder Representations from Transformers for Open-Domain Question
-    Answering <https://arxiv.org/abs/1908.07490>`_ by Hao Tan and Mohit Bansal.
-19. `MarianMT <https://marian-nmt.github.io/>`_ (developed by the Microsoft Translator Team) machine translation models
-    trained using `OPUS <http://opus.nlpl.eu/>`_ pretrained_models data by Jörg Tiedemann.
-20. `MBart <https://github.com/pytorch/fairseq/tree/master/examples/mbart>`_ (from Facebook) released with the paper
-    `Multilingual Denoising Pre-training for Neural Machine Translation <https://arxiv.org/abs/2001.08210>`_ by Yinhan
+    <https://arxiv.org/abs/1912.13318>`__ by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
+17. `Longformer <https://huggingface.co/transformers/model_doc/longformer.html>`__ (from AllenAI) released with the
+    paper `Longformer: The Long-Document Transformer <https://arxiv.org/abs/2004.05150>`__ by Iz Beltagy, Matthew E.
+    Peters, Arman Cohan.
+18. `LXMERT <https://github.com/airsplay/lxmert>`__ (from UNC Chapel Hill) released with the paper `LXMERT: Learning
+    Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering
+    <https://arxiv.org/abs/1908.07490>`__ by Hao Tan and Mohit Bansal.
+19. `MarianMT <https://huggingface.co/transformers/model_doc/marian.html>`__ Machine translation models trained using
+    `OPUS <http://opus.nlpl.eu/>`__ data by Jörg Tiedemann. The `Marian Framework <https://marian-nmt.github.io/>`__ is
+    being developed by the Microsoft Translator Team.
+20. `MBart <https://github.com/pytorch/fairseq/tree/master/examples/mbart>`__ (from Facebook) released with the paper
+    `Multilingual Denoising Pre-training for Neural Machine Translation <https://arxiv.org/abs/2001.08210>`__ by Yinhan
     Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
-21. `MMBT <https://github.com/facebookresearch/mmbt/>`_ (from Facebook), released together with the paper a `Supervised
-    Multimodal Bitransformers for Classifying Images and Text <https://arxiv.org/pdf/1909.02950.pdf>`_ by Douwe Kiela,
-    Suvrat Bhooshan, Hamed Firooz, and Davide Testuggine.
-22. `Pegasus <https://github.com/google-research/pegasus>`_ (from Google) released with the paper `PEGASUS:
-    Pre-training with Extracted Gap-sentences for Abstractive Summarization <https://arxiv.org/abs/1912.08777>`_ by
+21. `MMBT <https://github.com/facebookresearch/mmbt/>`__ (from Facebook), released together with the paper a
+    `Supervised Multimodal Bitransformers for Classifying Images and Text <https://arxiv.org/pdf/1909.02950.pdf>`__ by
+    Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine.
+22. `Pegasus <https://github.com/google-research/pegasus>`__ (from Google) released with the paper `PEGASUS:
+    Pre-training with Extracted Gap-sentences for Abstractive Summarization <https://arxiv.org/abs/1912.08777>`__ by
     Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
-23. `Reformer <https://github.com/google/trax/tree/master/trax/models/reformer>`_ (from Google Research) released with
-    the paper `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ by Nikita Kitaev, Łukasz
-    Kaiser, and Anselm Levskaya.
-24. `RoBERTa <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`_ (from Facebook), released together with
-    the paper a `Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`_ by Yinhan Liu, Myle
-    Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin
-    Stoyanov.
-25. `T5 <https://github.com/google-research/text-to-text-transfer-transformer>`_ (from Google) released with the paper
-    `Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
-    <https://arxiv.org/abs/1910.10683>`_ by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
-    Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.
-26. `Transformer-XL <https://github.com/kimiyoung/transformer-xl>`_ (from Google/CMU) released with the paper
-    `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context <https://arxiv.org/abs/1901.02860>`_ by
-    Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov.
-27. `XLM <https://github.com/facebookresearch/XLM>`_ (from Facebook) released together with the paper `Cross-lingual
-    Language Model Pretraining <https://arxiv.org/abs/1901.07291>`_ by Guillaume Lample and Alexis Conneau.
-28. `XLM-RoBERTa <https://github.com/pytorch/fairseq/tree/master/examples/xlmr>`_ (from Facebook AI), released together
-    with the paper `Unsupervised Cross-lingual Representation Learning at Scale <https://arxiv.org/abs/1911.02116>`_ by
-    Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard
-    Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov.
-29. `XLNet <https://github.com/zihangdai/xlnet>`_ (from Google/CMU) released with the paper `XLNet: Generalized
-    Autoregressive Pretraining for Language Understanding <https://arxiv.org/abs/1906.08237>`_ by Zhilin Yang, Zihang
-    Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le.
-30. SqueezeBERT (from UC Berkeley) released with the paper
-    `SqueezeBERT: What can computer vision teach NLP about efficient neural networks? <https://arxiv.org/abs/2006.11316>`_
-    by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
-31. `Other community models <https://huggingface.co/models>`_, contributed by the `community
-    <https://huggingface.co/users>`_.
+23. `Reformer <https://huggingface.co/transformers/model_doc/reformer.html>`__ (from Google Research) released with the
+    paper `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`__ by Nikita Kitaev, Łukasz Kaiser,
+    Anselm Levskaya.
+24. `RoBERTa <https://huggingface.co/transformers/model_doc/roberta.html>`__ (from Facebook), released together with
+    the paper a `Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`__ by Yinhan Liu, Myle
+    Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
+25. `SqueezeBert <https://huggingface.co/transformers/model_doc/squeezebert.html>`__ released with the paper
+    `SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
+    <https://arxiv.org/abs/2006.11316>`__ by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
+26. `T5 <https://huggingface.co/transformers/model_doc/t5.html>`__ (from Google AI) released with the paper `Exploring
+    the Limits of Transfer Learning with a Unified Text-to-Text Transformer <https://arxiv.org/abs/1910.10683>`__ by
+    Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi
+    Zhou and Wei Li and Peter J. Liu.
+27. `Transformer-XL <https://huggingface.co/transformers/model_doc/transformerxl.html>`__ (from Google/CMU) released
+    with the paper `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
+    <https://arxiv.org/abs/1901.02860>`__ by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le,
+    Ruslan Salakhutdinov.
+28. `XLM <https://huggingface.co/transformers/model_doc/xlm.html>`__ (from Facebook) released together with the paper
+    `Cross-lingual Language Model Pretraining <https://arxiv.org/abs/1901.07291>`__ by Guillaume Lample and Alexis
+    Conneau.
+29. `XLM-RoBERTa <https://huggingface.co/transformers/model_doc/xlmroberta.html>`__ (from Facebook AI), released
+    together with the paper `Unsupervised Cross-lingual Representation Learning at Scale
+    <https://arxiv.org/abs/1911.02116>`__ by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary,
+    Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
+30. `XLNet <https://huggingface.co/transformers/model_doc/xlnet.html>`__ (from Google/CMU) released with the paper
+    `XLNet: Generalized Autoregressive Pretraining for Language Understanding <https://arxiv.org/abs/1906.08237>`__ by
+    Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
+31. `Other community models <https://huggingface.co/models>`__, contributed by the `community
+    <https://huggingface.co/users>`__.

 .. toctree::
    :maxdepth: 2
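The reST entries added above are generated from the Markdown entries in the README rather than written by hand. As an illustration of that conversion, here is a minimal standalone Python sketch; the real logic lives in `utils/check_copies.py` in the last hunk below, and the sample README line here is only illustrative:

import re

# A README model entry in Markdown form (illustrative sample, not the exact README text).
md = "1. **[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research) released with the paper ..."

# **[description](link)** -> `description <link>`__ , then [description](link) -> `description <link>`__
rst = re.sub(r"\*\*\[([^\]]*)\]\(([^\)]*)\)\*\*", r"`\1 <\2>`__", md)
rst = re.sub(r"\[([^\]]*)\]\(([^\)]*)\)", r"`\1 <\2>`__", rst)
print(rst)
# 1. `ALBERT <https://huggingface.co/transformers/model_doc/albert.html>`__ (from Google Research) released with the paper ...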
@@ -83,7 +83,7 @@ You can then finish the addition step by adding imports for your classes in the
 - [ ] Edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py`
   file.
 - [ ] Add a mention of your model in the doc: `README.md` and the documentation itself
-  in `docs/source/index.rst` and `docs/source/pretrained_models.rst`.
+  in `docs/source/pretrained_models.rst`. Run `make fix-copies` to update `docs/source/index.rst` with your changes.
 - [ ] Upload the pretrained weights, configurations and vocabulary files.
 - [ ] Create model card(s) for your models on huggingface.co. For those last two steps, check the
   [model sharing documentation](https://huggingface.co/transformers/model_sharing.html).
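For contributors, the index.rst entry is no longer edited by hand: `make fix-copies` regenerates it. A rough sketch of invoking the underlying checker directly, based on the error message in `utils/check_copies.py` below (the exact Makefile wiring is an assumption):

import subprocess

# Run the copy/model-list checker in overwrite mode from the repository root;
# roughly what `make fix-copies` is expected to do (sketch, not the actual Makefile target).
subprocess.run(["python", "utils/check_copies.py", "--fix_and_overwrite"], check=True)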
@@ -23,6 +23,7 @@ import tempfile
 # All paths are set with the intent you should run this script from the root of the repo with the command
 # python utils/check_copies.py
 TRANSFORMERS_PATH = "src/transformers"
+PATH_TO_DOCS = "docs/source"


 def find_code_in_transformers(object_name):
@@ -166,6 +167,113 @@ def check_copies(overwrite: bool = False):
             + diff
             + "\nRun `make fix-copies` or `python utils/check_copies --fix_and_overwrite` to fix them."
         )
+    check_model_list_copy(overwrite=overwrite)
+
+
+def get_model_list():
+    """ Extracts the model list from the README. """
+    # If the introduction or the conclusion of the list change, the prompts may need to be updated.
+    _start_prompt = "🤗 Transformers currently provides the following architectures"
+    _end_prompt = "1. Want to contribute a new model?"
+    with open(os.path.join("README.md"), "r", encoding="utf-8") as f:
+        lines = f.readlines()
+    # Find the start of the list.
+    start_index = 0
+    while not lines[start_index].startswith(_start_prompt):
+        start_index += 1
+    start_index += 1
+
+    result = []
+    current_line = ""
+    end_index = start_index
+
+    while not lines[end_index].startswith(_end_prompt):
+        if lines[end_index].startswith("1."):
+            if len(current_line) > 1:
+                result.append(current_line)
+            current_line = lines[end_index]
+        elif len(lines[end_index]) > 1:
+            current_line = f"{current_line[:-1]} {lines[end_index].lstrip()}"
+        end_index += 1
+    if len(current_line) > 1:
+        result.append(current_line)
+
+    return "".join(result)
+
+
+def split_long_line_with_indent(line, max_per_line, indent):
+    """ Split the `line` so that it doesn't go over `max_per_line` and adds `indent` to new lines. """
+    words = line.split(" ")
+    lines = []
+    current_line = words[0]
+    for word in words[1:]:
+        if len(f"{current_line} {word}") > max_per_line:
+            lines.append(current_line)
+            current_line = " " * indent + word
+        else:
+            current_line = f"{current_line} {word}"
+    lines.append(current_line)
+    return "\n".join(lines)
+
+
+def convert_to_rst(model_list, max_per_line=None):
+    """ Convert `model_list` to rst format. """
+    # Convert **[description](link)** to `description <link>`__
+    model_list = re.sub(r"\*\*\[([^\]]*)\]\(([^\)]*)\)\*\*", r"`\1 <\2>`__", model_list)
+
+    # Convert [description](link) to `description <link>`__
+    model_list = re.sub(r"\[([^\]]*)\]\(([^\)]*)\)", r"`\1 <\2>`__", model_list)
+
+    # Enumerate the lines properly
+    lines = model_list.split("\n")
+    result = []
+    for i, line in enumerate(lines):
+        line = re.sub(r"^\s*(\d+)\.", f"{i+1}.", line)
+        # Split the lines that are too long
+        if max_per_line is not None and len(line) > max_per_line:
+            prompt = re.search(r"^(\s*\d+\.\s+)\S", line)
+            indent = len(prompt.groups()[0]) if prompt is not None else 0
+            line = split_long_line_with_indent(line, max_per_line, indent)
+
+        result.append(line)
+    return "\n".join(result)
+
+
+def check_model_list_copy(overwrite=False, max_per_line=119):
+    """ Check the model lists in the README and index.rst are consistent and maybe `overwrite`. """
+    _start_prompt = "    This list is updated automatically from the README"
+    _end_prompt = ".. toctree::"
+    with open(os.path.join(PATH_TO_DOCS, "index.rst"), "r", encoding="utf-8") as f:
+        lines = f.readlines()
+    # Find the start of the list.
+    start_index = 0
+    while not lines[start_index].startswith(_start_prompt):
+        start_index += 1
+    start_index += 1
+
+    end_index = start_index
+    while not lines[end_index].startswith(_end_prompt):
+        end_index += 1
+    end_index -= 1
+
+    while len(lines[start_index]) <= 1:
+        start_index += 1
+    while len(lines[end_index]) <= 1:
+        end_index -= 1
+    end_index += 1
+
+    rst_list = "".join(lines[start_index:end_index])
+    md_list = get_model_list()
+    converted_list = convert_to_rst(md_list, max_per_line=max_per_line)
+
+    if converted_list != rst_list:
+        if overwrite:
+            with open(os.path.join(PATH_TO_DOCS, "index.rst"), "w", encoding="utf-8") as f:
+                f.writelines(lines[:start_index] + [converted_list] + lines[end_index:])
+        else:
+            raise ValueError(
+                "The model list in the README changed and the list in `index.rst` has not been updated. Run `make fix-copies` to fix this."
+            )


 if __name__ == "__main__":
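Taken together, the helpers above implement the README-to-index.rst sync. A usage sketch, assuming the functions above are in scope (for example in a Python session started from the repository root after loading utils/check_copies.py):

md_list = get_model_list()                             # numbered model list scraped from README.md
rst_list = convert_to_rst(md_list, max_per_line=119)   # same list with reST links, renumbered and re-wrapped
print(rst_list.splitlines()[0])                        # first entry, e.g. "1. `ALBERT <...>`__ (from Google Research ..."
check_model_list_copy(overwrite=False)                 # raises ValueError if docs/source/index.rst is out of date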