
# MarianMT

## Overview

MarianMT is a machine translation model trained with the Marian framework, which is written in pure C++. The framework includes its own custom auto-differentiation engine and efficient meta-algorithms for training encoder-decoder models like BART.

All MarianMT models are transformer encoder-decoders with 6 layers in each component. They use static sinusoidal positional embeddings, don't have a layernorm embedding, and start generating with `pad_token_id` as the prefix instead of `</s>`.

There are over 1,000 MarianMT models, covering a wide variety of language pairs. Each model is around 298 MB on disk.
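The on-disk figure follows from the parameter count. As a back-of-the-envelope check (assuming roughly 74.4M fp32 parameters, which is approximately what `opus-mt-en-de` has; exact counts vary per model):

```python
# Rough size estimate for a MarianMT checkpoint.
# Assumption: ~74.4M parameters stored as fp32 (4 bytes each).
num_params = 74_400_000
bytes_per_param = 4  # fp32
size_mb = num_params * bytes_per_param / 1_000_000
print(f"{size_mb:.0f} MB")  # close to the ~298 MB on-disk figure
```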

You can find all the original MarianMT checkpoints under the [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) organization.

The Marian code and framework are open source and available in the [marian-nmt GitHub repository](https://github.com/marian-nmt/marian).

> [!TIP]
> Click on the MarianMT models in the right sidebar for more examples of applying MarianMT to different translation tasks.


The example below demonstrates how to translate text using [Pipeline] or the [AutoModelForSeq2SeqLM] class.


```python
from transformers import pipeline

# The task name encodes the language pair; the checkpoint must match it
translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Hello, how are you?")
print(result)
```


```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```


## Quantization

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses dynamic quantization to quantize only the weights to INT8.


```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Replace every nn.Linear module with a dynamically quantized INT8 version
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = quantized_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
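Dynamic INT8 quantization stores each `torch.nn.Linear` weight in 1 byte instead of 4, while embeddings, layer norms, and activation scales stay in fp32, so the expected saving can be estimated before running anything. A minimal sketch with an illustrative (not measured) parameter split:

```python
def dynamic_quant_size_bytes(linear_params: int, other_params: int) -> int:
    """Estimate model size after dynamic INT8 quantization.

    Linear weights drop to 1 byte each; all other parameters stay fp32.
    Per-tensor scales and zero-points are ignored as negligible.
    """
    return linear_params * 1 + other_params * 4

# Hypothetical split: 50M params in Linear layers, 24M elsewhere
fp32_size = (50_000_000 + 24_000_000) * 4
int8_size = dynamic_quant_size_bytes(50_000_000, 24_000_000)
print(f"{fp32_size / int8_size:.2f}x smaller")  # roughly 2x overall
```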

## Attention Mask Visualizer Support

Use the [AttentionMaskVisualizer] to better understand which tokens the model can and cannot attend to.

```python
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("Helsinki-NLP/opus-mt-en-de")
visualizer("Hello, how are you?")
```

## Supported Languages

All models follow the naming convention `Helsinki-NLP/opus-mt-{src}-{tgt}`, where `src` is the source language code and `tgt` is the target language code.

The list of supported languages and codes is available in each model card.
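Because the convention is mechanical, checkpoint ids can be built programmatically. A small sketch (the helper name is ours, not part of the library):

```python
def opus_mt_checkpoint(src: str, tgt: str) -> str:
    """Build a Hub id following the Helsinki-NLP/opus-mt-{src}-{tgt} convention."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

print(opus_mt_checkpoint("en", "de"))  # Helsinki-NLP/opus-mt-en-de
print(opus_mt_checkpoint("fr", "en"))  # Helsinki-NLP/opus-mt-fr-en
```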

Some models are multilingual; for example, `opus-mt-en-ROMANCE` translates English into multiple Romance languages (French, Spanish, Portuguese, etc.).

Newer models use 3-character language codes, e.g. `>>fra<<` for French and `>>por<<` for Portuguese.

Older models use 2-character or region-specific codes such as `es_AR` (Spanish from Argentina).
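For multilingual checkpoints, the target language is selected by prepending its token to the source text itself; tokenization is otherwise unchanged. A sketch with a hypothetical helper:

```python
def with_target_language(lang_code: str, text: str) -> str:
    """Prepend a Marian target-language token such as >>fra<< to the input."""
    return f">>{lang_code}<< {text}"

print(with_target_language("fra", "This is a sentence in English."))
# >>fra<< This is a sentence in English.
```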

Example of translating English to multiple Romance languages:

```python
from transformers import MarianMTModel, MarianTokenizer

# Target language is selected with a >>code<< token at the start of each input
src_text = [
    ">>fra<< This is a sentence in English to translate to French.",
    ">>por<< This should go to Portuguese.",
    ">>spa<< And this to Spanish.",
]

model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(src_text, return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
result = [tokenizer.decode(t, skip_special_tokens=True) for t in outputs]
print(result)
```

## Notes

- MarianMT models are smaller than many other translation models, enabling faster inference, low memory usage, and suitability for CPU environments.
- Based on the Transformer encoder-decoder architecture with 6 layers in each component.
- Originally trained with the efficiency-focused Marian C++ framework.
- Transformers does not support the roughly 80 older OPUS models that require BPE preprocessing.
- With quantization, expect a small accuracy trade-off in exchange for significant speed and memory gains.
- The modeling code is based on [BartForConditionalGeneration] with adjustments for translation.

## Resources