<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# MarianMT

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<img alt="TensorFlow" src="https://img.shields.io/badge/TensorFlow-FF6F00?style=flat&logo=tensorflow&logoColor=white">
<img alt="Flax" src="https://img.shields.io/badge/Flax-29a79b.svg?style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAC0AAAAtCAMAAAANxBKoAAAC7lBMVEUAAADg5vYHPVgAoJH+/v76+v39/f9JbLP///9+AIgAnY3///+mcqzt8fXy9fgkXa3Ax9709fr+///9/f8qXq49qp5AaLGMwrv8/P0eW60VWawxYq8yqJzG2dytt9Wyu9elzci519Lf3O3S2efY3OrY0+Xp7PT///////+dqNCexMc6Z7AGpJeGvbenstPZ5ejQ1OfJzOLa7ejh4+/r8fT29vpccbklWK8PVa0AS6ghW63O498vYa+lsdKz1NDRt9Kw1c672tbD3tnAxt7R6OHp5vDe7OrDyuDn6vLl6/EAQKak0MgATakkppo3ZK/Bz9y8w9yzu9jey97axdvHzeG21NHH4trTwthKZrVGZLSUSpuPQJiGAI+GAI8SWKydycLL4d7f2OTi1+S9xNzL0ePT6OLGzeEAo5U0qJw/aLEAo5JFa7JBabEAp5Y4qZ2QxLyKmsm3kL2xoMOehrRNb7RIbbOZgrGre68AUqwAqZqNN5aKJ5N/lMq+qsd8kMa4pcWzh7muhLMEV69juq2kbKqgUaOTR5uMMZWLLZSGAI5VAIdEAH+ovNDHuNCnxcy3qcaYx8K8msGplrx+wLahjbYdXrV6vbMvYK9DrZ8QrZ8tqJuFms+Sos6sw8ecy8RffsNVeMCvmb43aLltv7Q4Y7EZWK4QWa1gt6meZKUdr6GOAZVeA4xPAISyveLUwtivxtKTpNJ2jcqfvcltiMiwwcfAoMVxhL+Kx7xjdrqTe60tsaNQs6KaRKACrJ6UTZwkqpqTL5pkHY4AloSgsd2ptNXPvNOOncuxxsqFl8lmg8apt8FJcr9EbryGxLqlkrkrY7dRa7ZGZLQ5t6iXUZ6PPpgVpZeJCJFKAIGareTa0+KJod3H0deY2M+esM25usmYu8d2zsJOdcBVvrCLbqcAOaaHaKQAMaScWqKBXqCXMJ2RHpiLF5NmJZAdAHN2kta11dKu1M+DkcZLdb+Mcql3TppyRJdzQ5ZtNZNlIY+DF4+voCOQAAAAZ3RSTlMABAT+MEEJ/RH+/TP+Zlv+pUo6Ifz8+fco/fz6+evr39S9nJmOilQaF/7+/f38+smmoYp6b1T+/v7++vj189zU0tDJxsGzsrKSfv34+Pf27dDOysG9t6+n/vv6+vr59uzr1tG+tZ6Qg9Ym3QAABR5JREFUSMeNlVVUG1EQhpcuxEspXqS0SKEtxQp1d3d332STTRpIQhIISQgJhODu7lAoDoUCpe7u7u7+1puGpqnCPOyZvffbOXPm/PsP9JfQgyCC+tmTABTOcbxDz/heENS7/1F+9nhvkHePG0wNDLbGWwdXL+rbLWvpmZHXD8+gMfBjTh+aSe6Gnn7lwQIOTR0c8wfX3PWgv7avbdKwf/ZoBp1Gp/PvuvXW3vw5ib7emnTW4OR+3D4jB9vjNJ/7gNvfWWeH/TO/JyYrsiKCRjVEZA3UB+96kON+DxOQ/NLE8PE5iUYgIXjFnCOlxEQMaSGVxjg4gxOnEycGz8bptuNjVx08LscIgrzH3umcn+KKtiBIyvzOO2O99aAdR8cF19oZalnCtvREUw79tCd5sow1g1UKM6kXqUx4T8wsi3sTjJ3yzDmmhenLXLpo8u45eG5y4Vvbk6kkC4LLtJMowkSQxmk4ggVJEG+7c6QpHT8vvW9X7/o7+3ELmiJi2mEzZJiz8cT6TBlanBk70cB5GGIGC1gRDdZ00yADLW1FL6gqhtvNXNG5S9gdSrk4M1qu7JAsmYshzDS4peoMrU/gT7qQdqYGZaYhxZmVbGJAm/CS/HloWyhRUlknQ9KYcExTwS80d3VNOxUZJpITYyspl0LbhArhpZCD9cRWEQuhYkNGMHToQ/2Cs6swJlb39CsllxdXX6IUKh/H5jbnSsPKjgmoaFQ1f8wRLR0UnGE/RcDEjj2jXG1WVTwUs8+zxfcrVO+vSsuOpVKxCfYZiQ0/aPKuxQbQ8lIz+DClxC8u+snlcJ7Yr1z1JPqUH0V+GDXbOwAib931Y4Imaq0NTIXPXY+N5L18GJ37SVWu+hwXff8l72Ds9XuwYIBaXPq6Shm4l+Vl/5QiOlV+uTk6YR9PxKsI9xNJny31ygK1e+nIRC1N97EGkFPI+jCpiHe5PCEy7oWqWSwRrpOvhFzcbTWMbm3ZJAOn1rUKpYIt/lDhW/5RHHteeWFN60qo98YJuoq1nK3uW5AabyspC1BcIEpOhft+SZAShYoLSvnmSfnYADUERP5jJn2h5XtsgCRuhYQqAvwTwn33+YWEKUI72HX5AtfSAZDe8F2DtPPm77afhl0EkthzuCQU0BWApgQIH9+KB0JhopMM7bJrdTRoleM2JAVNMyPF+wdoaz+XJpGoVAQ7WXUkcV7gT3oUZyi/ISIJAVKhgNp+4b4veCFhYVJw4locdSjZCp9cPUhLF9EZ3KKzURepMEtCDPP3VcWFx4UIiZIklIpFNfHpdEafIF2aRmOcrUmjohbT2WUllbmRvgfbythbQO3222fpDJoufaQPncYYuqoGtUEsCJZL6/3PR5b4syeSjZMQG/T2maGANlXT2v8S4AULWaUkCxfLyW8iW4kdka+nEMjxpL2NCwsYNBp+Q61PF43zyDg9Bm9+3NNySn78jMZUUkumqE4Gp7JmFOdP1vc8PpRrzj9+wPinCy8K1PiJ4aYbnTYpCCbDkBSbzhu2QJ1Gd82t8jI8TH51+OzvXoWbnXUOBkNW+0mWFwGcGOUVpU81/n3TOHb5oMt2FgYGjzau0Nif0Ss7Q3XB33hjjQHjHA5E5aOyIQc8CBrLdQSs3j92VG+3nNEjbkbdbBr9zm04ruvw37vh0QKOdeGIkckc80fX3KH/h7PT4BOjgCty8VZ5ux1MoO5Cf5naca2LAsEgehI+drX8o/0Nu+W0m6K/I9gGPd/dfx/EN/wN62AhsBWuAAAAAElFTkSuQmCC
">
<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

A framework for translation models, using the same architecture as BART. Translations should be similar to, but not identical to, the output in the test set linked in each model card.

This model was contributed by [sshleifer](https://huggingface.co/sshleifer).

## Implementation Notes

- Each model is about 298 MB on disk; there are more than 1,000 models.
- The list of supported language pairs can be found [here](https://huggingface.co/Helsinki-NLP).
- Models were originally trained by [Jörg Tiedemann](https://researchportal.helsinki.fi/en/persons/j%C3%B6rg-tiedemann) using the [Marian](https://marian-nmt.github.io/) C++ library, which supports fast training and translation.
- All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented
  in a model card.
- The 80 OPUS models that require BPE preprocessing are not supported.
- The modeling code is the same as [`BartForConditionalGeneration`] with a few minor modifications (see the sketch after this list):

  - static (sinusoidal) positional embeddings (`MarianConfig.static_position_embeddings=True`)
  - no layernorm_embedding (`MarianConfig.normalize_embedding=False`)
  - the model starts generating with `pad_token_id` (which has 0 as a token_embedding) as the prefix (Bart uses
    `</s>`).
- Code to bulk convert models can be found in `convert_marian_to_pytorch.py`.
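
A minimal sketch of the generation-prefix point above, assuming `Helsinki-NLP/opus-mt-en-de` is representative of the OPUS checkpoints (only the config is downloaded, no weights):

```python
from transformers import MarianConfig

config = MarianConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# Unlike Bart, generation is seeded with the pad token.
assert config.decoder_start_token_id == config.pad_token_id

# 6 encoder and 6 decoder layers, per the note above.
print(config.encoder_layers, config.decoder_layers)
```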

## Naming

- All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}`.
- The language codes used to name models are inconsistent. Two-digit codes can usually be found [here](https://developers.google.com/admin-sdk/directory/v1/languages); three-digit codes require googling "language
  code {code}".
- Codes formatted like `es_AR` are usually `code_{region}`. That one is Spanish from Argentina.
- The models were converted in two stages. The first 1,000 models use ISO-639-2 codes to identify languages; the second
  group uses a combination of ISO-639-5 and ISO-639-2 codes.
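
To make the format concrete, here is a small, hypothetical helper (the function name is ours, and not every `(src, tgt)` pair exists on the Hub):

```python
def marian_checkpoint(src: str, tgt: str) -> str:
    """Hypothetical helper that just applies the naming scheme above."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

print(marian_checkpoint("en", "de"))  # Helsinki-NLP/opus-mt-en-de
print(marian_checkpoint("fr", "es"))  # Helsinki-NLP/opus-mt-fr-es
```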

## Examples

- Since Marian models are smaller than many other translation models available in the library, they can be useful for
  fine-tuning experiments and integration tests.
- [Fine-tune on GPU](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/train_distil_marian_enro.sh)

## Multilingual Models

- All model names use the following format: `Helsinki-NLP/opus-mt-{src}-{tgt}`.
- If a model can output multiple languages, you should specify a language code by prepending the desired output
  language to the `src_text`.
- You can see a model's supported language codes in its model card, under target constituents, like in [opus-mt-en-roa](https://huggingface.co/Helsinki-NLP/opus-mt-en-roa).
- Note that if a model is only multilingual on the source side, like `Helsinki-NLP/opus-mt-roa-en`, no language
  codes are required.

New multilingual models from the [Tatoeba-Challenge repo](https://github.com/Helsinki-NLP/Tatoeba-Challenge)
require 3-character language codes:

```python
>>> from transformers import MarianMTModel, MarianTokenizer

>>> src_text = [
...     ">>fra<< this is a sentence in english that we want to translate to french",
...     ">>por<< This should go to portuguese",
...     ">>esp<< And this to Spanish",
... ]

>>> model_name = "Helsinki-NLP/opus-mt-en-roa"
>>> tokenizer = MarianTokenizer.from_pretrained(model_name)
>>> print(tokenizer.supported_language_codes)
['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<']

>>> model = MarianMTModel.from_pretrained(model_name)
>>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
>>> [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
["c'est une phrase en anglais que nous voulons traduire en français",
 'Isto deve ir para o português.',
 'Y esto al español']
```
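
The same checkpoints also work with the `translation` pipeline, which wraps the tokenize/generate/decode steps above. A minimal sketch, assuming the `Helsinki-NLP/opus-mt-en-fr` checkpoint:

```python
from transformers import pipeline

# pipeline() handles tokenization, generation, and decoding in one call.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("this is a sentence in english that we want to translate to french"))
```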

Here is the code to see all available pretrained models on the Hub:

```python
from huggingface_hub import list_models

# Listing the entire Hub is slow, so filter to the organization server-side.
org = "Helsinki-NLP"
model_ids = [x.id for x in list_models(author=org)]
suffix = [x.split("/")[1] for x in model_ids]
old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()]
```
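
The `s != s.lower()` test works because old-style multilingual checkpoints embed uppercase group codes such as `ROMANCE` or `NORTH_EU` in their names, while ordinary pairwise models use only lowercase language codes.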

## Old Style Multi-Lingual Models

These are the old-style multilingual models ported from the OPUS-MT-Train repo, along with the members of each language
group:

```python no-style
['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU',
 'Helsinki-NLP/opus-mt-ROMANCE-en',
 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA',
 'Helsinki-NLP/opus-mt-de-ZH',
 'Helsinki-NLP/opus-mt-en-CELTIC',
 'Helsinki-NLP/opus-mt-en-ROMANCE',
 'Helsinki-NLP/opus-mt-es-NORWAY',
 'Helsinki-NLP/opus-mt-fi-NORWAY',
 'Helsinki-NLP/opus-mt-fi-ZH',
 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI',
 'Helsinki-NLP/opus-mt-sv-NORWAY',
 'Helsinki-NLP/opus-mt-sv-ZH']
GROUP_MEMBERS = {
    'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
    'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
    'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
    'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
    'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
    'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
    'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
}
```
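
`GROUP_MEMBERS` maps each uppercase group code to the 2-character codes it covers; to translate into a specific member language, prepend its code (e.g. `>>fr<<`) to the source text, exactly as in the example below.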

Example of translating English to many Romance languages, using old-style 2-character language codes:

```python
>>> from transformers import MarianMTModel, MarianTokenizer

>>> src_text = [
...     ">>fr<< this is a sentence in english that we want to translate to french",
...     ">>pt<< This should go to portuguese",
...     ">>es<< And this to Spanish",
... ]

>>> model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
>>> tokenizer = MarianTokenizer.from_pretrained(model_name)

>>> model = MarianMTModel.from_pretrained(model_name)
>>> translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
>>> [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
["c'est une phrase en anglais que nous voulons traduire en français",
 'Isto deve ir para o português.',
 'Y esto al español']
```

## Resources

- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
- [Causal language modeling task guide](../tasks/language_modeling)

## MarianConfig

[[autodoc]] MarianConfig

## MarianTokenizer

[[autodoc]] MarianTokenizer
    - build_inputs_with_special_tokens

<frameworkcontent>
<pt>

## MarianModel

[[autodoc]] MarianModel
    - forward

## MarianMTModel

[[autodoc]] MarianMTModel
    - forward

## MarianForCausalLM

[[autodoc]] MarianForCausalLM
    - forward

</pt>
<tf>

## TFMarianModel

[[autodoc]] TFMarianModel
    - call

## TFMarianMTModel

[[autodoc]] TFMarianMTModel
    - call

</tf>
<jax>

## FlaxMarianModel

[[autodoc]] FlaxMarianModel
    - __call__

## FlaxMarianMTModel

[[autodoc]] FlaxMarianMTModel
    - __call__

</jax>
</frameworkcontent>