<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# RoBERTa-PreLayerNorm

## Overview
The RoBERTa-PreLayerNorm model was proposed in [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli.
It is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).

The abstract from the paper is the following:

*fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.*

Tips:

- The implementation is the same as [RoBERTa](roberta), except that instead of using _Add and Norm_ it uses _Norm and Add_ (see the sketch below). _Add_ and _Norm_ refer to the addition and layer normalization described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762).
- This is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).
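
As a minimal sketch of the difference (illustrative PyTorch, not the library's actual modeling code): a post-LayerNorm sublayer normalizes after the residual addition, while a pre-LayerNorm sublayer normalizes its input first and adds the residual afterwards.

```py
import torch
from torch import nn


class PostLayerNormBlock(nn.Module):
    """RoBERTa-style sublayer: Add, then Norm."""

    def __init__(self, hidden_size: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x + self.sublayer(x))


class PreLayerNormBlock(nn.Module):
    """RoBERTa-PreLayerNorm-style sublayer: Norm, then Add."""

    def __init__(self, hidden_size: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.sublayer(self.norm(x))
```

Here `sublayer` stands in for either the self-attention or the feed-forward block of a Transformer layer; only the order of the normalization and the residual addition changes.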
This model was contributed by [andreasmaden](https://huggingface.co/andreasmaden).
The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain).
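
A quick usage sketch for masked language modeling (the checkpoint name below is an assumption based on the converted DinkyTrain weights on the Hub; substitute any RoBERTa-PreLayerNorm checkpoint):

```py
import torch
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM

# Assumed checkpoint name; any converted RoBERTa-PreLayerNorm checkpoint works.
checkpoint = "andreasmadsen/efficient_mlm_m0.40"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RobertaPreLayerNormForMaskedLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token at the masked position.
mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```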
## RobertaPreLayerNormConfig

[[autodoc]] RobertaPreLayerNormConfig
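
As with other configurations in the library, the config can be instantiated with custom hyperparameters and used to initialize a randomly weighted model. A minimal sketch (argument values are illustrative):

```py
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel

# Build a configuration; unspecified arguments keep their RoBERTa-style defaults.
config = RobertaPreLayerNormConfig(hidden_size=768, num_hidden_layers=12)

# Initialize a model (with random weights) from that configuration.
model = RobertaPreLayerNormModel(config)
print(model.config.hidden_size)
```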
## RobertaPreLayerNormModel

[[autodoc]] RobertaPreLayerNormModel
    - forward

## RobertaPreLayerNormForCausalLM

[[autodoc]] RobertaPreLayerNormForCausalLM
    - forward

## RobertaPreLayerNormForMaskedLM

[[autodoc]] RobertaPreLayerNormForMaskedLM
    - forward

## RobertaPreLayerNormForSequenceClassification

[[autodoc]] RobertaPreLayerNormForSequenceClassification
    - forward

## RobertaPreLayerNormForMultipleChoice

[[autodoc]] RobertaPreLayerNormForMultipleChoice
    - forward

## RobertaPreLayerNormForTokenClassification

[[autodoc]] RobertaPreLayerNormForTokenClassification
    - forward

## RobertaPreLayerNormForQuestionAnswering

[[autodoc]] RobertaPreLayerNormForQuestionAnswering
    - forward
## TFRobertaPreLayerNormModel

[[autodoc]] TFRobertaPreLayerNormModel
    - call

## TFRobertaPreLayerNormForCausalLM

[[autodoc]] TFRobertaPreLayerNormForCausalLM
    - call

## TFRobertaPreLayerNormForMaskedLM

[[autodoc]] TFRobertaPreLayerNormForMaskedLM
    - call

## TFRobertaPreLayerNormForSequenceClassification

[[autodoc]] TFRobertaPreLayerNormForSequenceClassification
    - call

## TFRobertaPreLayerNormForMultipleChoice

[[autodoc]] TFRobertaPreLayerNormForMultipleChoice
    - call

## TFRobertaPreLayerNormForTokenClassification

[[autodoc]] TFRobertaPreLayerNormForTokenClassification
    - call

## TFRobertaPreLayerNormForQuestionAnswering

[[autodoc]] TFRobertaPreLayerNormForQuestionAnswering
    - call
## FlaxRobertaPreLayerNormModel

[[autodoc]] FlaxRobertaPreLayerNormModel
    - __call__

## FlaxRobertaPreLayerNormForCausalLM

[[autodoc]] FlaxRobertaPreLayerNormForCausalLM
    - __call__

## FlaxRobertaPreLayerNormForMaskedLM

[[autodoc]] FlaxRobertaPreLayerNormForMaskedLM
    - __call__

## FlaxRobertaPreLayerNormForSequenceClassification

[[autodoc]] FlaxRobertaPreLayerNormForSequenceClassification
    - __call__

## FlaxRobertaPreLayerNormForMultipleChoice

[[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice
    - __call__

## FlaxRobertaPreLayerNormForTokenClassification

[[autodoc]] FlaxRobertaPreLayerNormForTokenClassification
    - __call__

## FlaxRobertaPreLayerNormForQuestionAnswering

[[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering
    - __call__