DeBERTa
----------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~

The DeBERTa model was proposed in `DeBERTa: Decoding-enhanced BERT with Disentangled Attention <https://arxiv.org/abs/2006.03654>`__
by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's BERT model released in 2018 and
Facebook's RoBERTa model released in 2019.

It builds on RoBERTa with disentangled attention and an enhanced mask decoder, and is trained with only half of the
data used for RoBERTa.

The abstract from the paper is the following:

*Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT
with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the
efficiency of model pre-training and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model
trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements
on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The
DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.*
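
Concretely, for tokens at positions :math:`i` and :math:`j` the disentangled attention score is the sum of three terms
computed from separate content and relative-position projections. The following is a sketch in the paper's notation,
where :math:`Q^c, K^c` project the content vectors, :math:`Q^r, K^r` project the relative position embeddings, and
:math:`\delta(i, j)` is the bucketed relative distance between the two positions:

.. math::

    A_{i,j} = \underbrace{Q^c_i {K^c_j}^{\top}}_{\text{content-to-content}}
            + \underbrace{Q^c_i {K^r_{\delta(i,j)}}^{\top}}_{\text{content-to-position}}
            + \underbrace{K^c_j {Q^r_{\delta(j,i)}}^{\top}}_{\text{position-to-content}}

The scores are scaled by :math:`1/\sqrt{3d}` before the softmax, reflecting the three additive terms.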

The original code can be found `here <https://github.com/microsoft/DeBERTa>`__.

DebertaConfig
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaConfig
    :members:
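
A minimal sketch of building a model from scratch; note that instantiating :class:`~transformers.DebertaModel` from a
bare configuration yields randomly initialized weights, not pre-trained ones:

.. code-block:: python

    from transformers import DebertaConfig, DebertaModel

    # Build a configuration with default DeBERTa hyperparameters
    configuration = DebertaConfig()

    # Instantiate a model from the configuration (weights are randomly initialized)
    model = DebertaModel(configuration)

    # The configuration remains accessible from the model afterwards
    configuration = model.config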

DebertaTokenizer
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary
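
A short tokenization sketch, assuming the ``microsoft/deberta-base`` checkpoint is available on the model hub:

.. code-block:: python

    from transformers import DebertaTokenizer

    tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")

    # Encode a sentence pair into input IDs, token type IDs and an attention mask
    inputs = tokenizer("DeBERTa improves BERT.", "It uses disentangled attention.", return_tensors="pt")
    print(inputs["input_ids"].shape)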

DebertaModel
~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaModel
    :members:
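
A minimal forward pass, assuming a recent version of ``transformers`` whose model outputs expose
``last_hidden_state``:

.. code-block:: python

    import torch
    from transformers import DebertaTokenizer, DebertaModel

    tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
    model = DebertaModel.from_pretrained("microsoft/deberta-base")

    inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Hidden states of the last layer, shape (batch_size, sequence_length, hidden_size)
    last_hidden_state = outputs.last_hidden_state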

DebertaPreTrainedModel
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaPreTrainedModel
    :members:

DebertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.DebertaForSequenceClassification
    :members:
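
A fine-tuning-style sketch; the classification head on top of the pre-trained encoder is randomly initialized, the
label is purely illustrative, and, as above, this assumes outputs that expose ``loss`` and ``logits``:

.. code-block:: python

    import torch
    from transformers import DebertaTokenizer, DebertaForSequenceClassification

    tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
    model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=2)

    inputs = tokenizer("This movie was great!", return_tensors="pt")
    labels = torch.tensor([1])  # hypothetical "positive" label

    outputs = model(**inputs, labels=labels)
    loss, logits = outputs.loss, outputs.logits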