# MarianMT

## Overview

[MarianMT](https://huggingface.co/papers/1804.00344) is a machine translation model trained with the Marian framework, which is written in pure C++. The framework includes its own custom auto-differentiation engine and efficient meta-algorithms to train encoder-decoder models like BART.

All MarianMT models are transformer encoder-decoders with 6 layers in each component. They use static sinusoidal positional embeddings, have no `layernorm_embedding`, and start generating with `pad_token_id` as the prefix instead of `<s/>`.

You can find all the original MarianMT checkpoints under the [Language Technology Research Group at the University of Helsinki](https://huggingface.co/Helsinki-NLP/models?search=opus-mt) organization.

> [!TIP]
> This model was contributed by [sshleifer](https://huggingface.co/sshleifer).
>
> Click on the MarianMT models in the right sidebar for more examples of how to apply MarianMT to translation tasks.

The examples below demonstrate how to translate text using [`Pipeline`] or the [`AutoModel`] class.

```python
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Hello, how are you?")
print(result)
```

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Quantization

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for the available quantization backends.

The example below uses [dynamic quantization](https://docs.pytorch.org/docs/stable/quantization.html#dynamic-quantization) to quantize only the weights to INT8.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Quantize the weights of all linear layers to INT8
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = quantized_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Use the [AttentionMaskVisualizer](https://github.com/huggingface/transformers/blob/beb9b5b02246b9b7ee81ddf938f93f44cfeaad19/src/transformers/utils/attention_visualizer.py#L139) to better understand what tokens the model can and cannot attend to.

```python
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("Helsinki-NLP/opus-mt-en-de")
visualizer("Hello, how are you?")
```

## Supported Languages

All models follow the naming convention `Helsinki-NLP/opus-mt-{src}-{tgt}`, where `src` is the source language code and `tgt` is the target language code. The list of supported languages and codes is available in each model card. Some models are multilingual; for example, `opus-mt-en-ROMANCE` translates English to multiple Romance languages (French, Spanish, Portuguese, etc.). Newer models use 3-character language codes, such as `>>fra<<` for French and `>>por<<` for Portuguese, while older models use 2-character or region-specific codes like `es_AR` (Spanish from Argentina).
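A multilingual checkpoint exposes the target-language prefix tokens it accepts through its tokenizer. A minimal sketch that lists them, using the same `Helsinki-NLP/opus-mt-en-roa` checkpoint as the example below:

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-roa")

# Prefix tokens the checkpoint accepts, e.g. ['>>fra<<', '>>por<<', '>>spa<<', ...]
print(tokenizer.supported_language_codes)
```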
Example of translating English to multiple Romance languages:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>fra<< This is a sentence in English to translate to French.",
    ">>por<< This should go to Portuguese.",
    ">>spa<< And this to Spanish."
]

model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(src_text, return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
result = [tokenizer.decode(t, skip_special_tokens=True) for t in outputs]
print(result)
```

## Notes

- MarianMT models are ~298MB on disk and there are more than 1000 models. Check this [list](https://huggingface.co/Helsinki-NLP) for supported language pairs. The language codes may be inconsistent: 2-character codes can be found [here](https://developers.google.com/admin-sdk/directory/v1/languages), while 3-character codes may require further searching.
- Models that require BPE preprocessing are not supported.
- All model names use the format `Helsinki-NLP/opus-mt-{src}-{tgt}`. Language codes formatted like `es_AR` usually follow the `code_{region}` pattern; for example, `es_AR` refers to Spanish from Argentina.
- If a model can output multiple languages, prepend the desired output language code to `src_text` as shown below. New multilingual models from the [Tatoeba-Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge) require 3-character language codes.

  ```py
  from transformers import MarianMTModel, MarianTokenizer

  # 3-character codes, e.g. >>fra<< for French, >>por<< for Portuguese
  src_text = [
      ">>fra<< This is a sentence in English to translate to French.",
      ">>por<< This should go to Portuguese.",
  ]

  model_name = "Helsinki-NLP/opus-mt-en-roa"
  tokenizer = MarianTokenizer.from_pretrained(model_name)
  model = MarianMTModel.from_pretrained(model_name)

  outputs = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
  print([tokenizer.decode(t, skip_special_tokens=True) for t in outputs])
  ```

- Older multilingual models use 2-character language codes, for example `Helsinki-NLP/opus-mt-en-ROMANCE` with prefixes like `>>fr<<`.

  ```py
  from transformers import MarianMTModel, MarianTokenizer

  # 2-character codes, e.g. >>fr<< for French, >>pt<< for Portuguese
  src_text = [
      ">>fr<< This is a sentence in English to translate to French.",
      ">>pt<< This should go to Portuguese.",
  ]

  model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
  tokenizer = MarianTokenizer.from_pretrained(model_name)
  model = MarianMTModel.from_pretrained(model_name)

  outputs = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
  print([tokenizer.decode(t, skip_special_tokens=True) for t in outputs])
  ```

## Resources

- **Marian Research Paper:** [Marian: Fast Neural Machine Translation in C++](https://arxiv.org/abs/1804.00344)
- **MarianMT Model Collection:** [Helsinki-NLP on Hugging Face](https://huggingface.co/Helsinki-NLP)
- **Marian Official Framework:** [Marian-NMT GitHub](https://github.com/marian-nmt/marian)
- **Language Codes Reference:** [ISO 639-1 Language Codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
- **Translation Task Guide:** [Hugging Face Translation Guide](https://huggingface.co/tasks/translation)
- **Quantization Overview:** [Transformers Quantization Docs](https://huggingface.co/docs/transformers/main/en/quantization/overview)
- **Tokenizer Guide:** [Hugging Face Tokenizer Documentation](https://huggingface.co/docs/transformers/main/en/main_classes/tokenizer)
- **Model Conversion Tool:** [convert_marian_to_pytorch.py (GitHub)](https://github.com/huggingface/transformers/blob/main/src/transformers/models/marian/convert_marian_to_pytorch.py)
- **Supported Language Pairs:** Refer to individual model cards under [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) for supported languages.