MarianMT
Overview
MarianMT is a machine translation model developed by the Microsoft Translator team and originally trained by Jörg Tiedemann using the Marian C++ library. MarianMT models are designed to be fast, efficient, and lightweight. Unlike very large general-purpose models, MarianMT provides compact, language-specific models small enough to run on CPUs or in low-resource environments, making them well suited to production and offline use.
All MarianMT models are Transformer encoder-decoder architectures with 6 layers in both the encoder and decoder, similar in design to BART but with important modifications for translation tasks (verified in the quick config check after this list):
- Static (sinusoidal) positional embeddings (MarianConfig.static_position_embeddings=True)
- No layer normalization on embeddings (MarianConfig.normalize_embedding=False)
- Decoding starts with pad_token_id instead of the <s> token used by BART
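As a quick sanity check, you can load a checkpoint's configuration and inspect these fields directly. This is a minimal sketch assuming the Helsinki-NLP/opus-mt-en-de config on the Hub still carries the static_position_embeddings and normalize_embedding keys; getattr guards against config versions that omit them.

from transformers import MarianConfig

# Load the configuration for a MarianMT checkpoint from the Hub.
config = MarianConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# These keys may be absent in some config versions, hence the getattr fallback.
print(getattr(config, "static_position_embeddings", None))  # expected: True
print(getattr(config, "normalize_embedding", None))         # expected: False

# Decoding starts from pad_token_id rather than a dedicated <s>/BOS token.
print(config.decoder_start_token_id == config.pad_token_id)  # expected: True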
There are over 1,000 MarianMT models, covering a wide variety of language pairs. Each model is around 298 MB on disk.
You can find all the original MarianMT checkpoints under the Helsinki-NLP collection.
The MarianMT code and framework are open source and available on Marian GitHub.
Tip
Click on the MarianMT models in the right sidebar to see more examples of how to apply MarianMT to different translation tasks.
The example below demonstrates how to translate text using [Pipeline] or the [AutoModelForSeq2SeqLM] class.
from transformers import pipeline

# Load a translation pipeline with an English-to-German MarianMT checkpoint.
translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Hello, how are you?")
print(result)
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"

# Load the tokenizer and model from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the source text and generate the translation.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Quantization
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses dynamic quantization to quantize only the weights to INT8.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Dynamically quantize all nn.Linear weights to INT8; activations stay in float.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = quantized_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
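To see what dynamic quantization actually saves, you can compare the serialized size of the original and quantized models. This is a rough sketch that reuses the model and quantized_model objects from the snippet above; exact numbers vary with the checkpoint and PyTorch version.

import io
import torch

def serialized_size_mb(module: torch.nn.Module) -> float:
    # Serialize the state dict to an in-memory buffer and measure its byte count.
    buffer = io.BytesIO()
    torch.save(module.state_dict(), buffer)
    return buffer.getbuffer().nbytes / 1e6

print(f"fp32 model: {serialized_size_mb(model):.1f} MB")
print(f"int8 model: {serialized_size_mb(quantized_model):.1f} MB")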
Attention Mask Visualizer Support
Use the AttentionMaskVisualizer to better understand what tokens the model can and cannot attend to.
from transformers.utils.attention_visualizer import AttentionMaskVisualizer
visualizer = AttentionMaskVisualizer("Helsinki-NLP/opus-mt-en-de")
visualizer("Hello, how are you?")
Supported Languages
All models follow the naming convention: Helsinki-NLP/opus-mt-{src}-{tgt}, where src is the source language code and tgt is the target language code.
The list of supported languages and codes is available in each model card.
Some models are multilingual; for example, opus-mt-en-ROMANCE translates English to multiple Romance languages (French, Spanish, Portuguese, etc.).
Newer models use 3-character language codes, e.g., >>fra<< for French, >>por<< for Portuguese.
Older models use 2-character or region-specific codes like es_AR (Spanish from Argentina).
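For multilingual checkpoints, you can list the accepted target-language tokens directly from the tokenizer via MarianTokenizer.supported_language_codes, as sketched below for the English-to-Romance model.

from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-roa")

# Language-code tokens (e.g. >>fra<<) that must prefix each source sentence.
print(tokenizer.supported_language_codes)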
Example of translating English to multiple Romance languages:
from transformers import MarianMTModel, MarianTokenizer

# Prefix each sentence with the target-language token.
src_text = [
    ">>fra<< This is a sentence in English to translate to French.",
    ">>por<< This should go to Portuguese.",
    ">>spa<< And this to Spanish."
]

model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Pad to the longest sentence so the batch can be translated in one call.
inputs = tokenizer(src_text, return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
result = [tokenizer.decode(t, skip_special_tokens=True) for t in outputs]
print(result)
Notes
- MarianMT models are smaller than many other translation models, enabling faster inference, low memory usage, and suitability for CPU environments.
- Based on the Transformer encoder-decoder architecture with 6 layers each in the encoder and decoder.
- Originally trained with the Marian C++ framework for efficiency.
- The 80 older OPUS models that require BPE preprocessing are not supported.
- When using quantization, expect a small trade-off in accuracy for a significant gain in speed and memory.
- The modeling code is based on BartForConditionalGeneration with adjustments for translation.
Resources
- Marian Research Paper: Marian: Fast Neural Machine Translation in C++
- MarianMT Model Collection: Helsinki-NLP on Hugging Face
- Marian Official Framework: Marian-NMT GitHub
- Language Codes Reference: ISO 639-1 Language Codes
- Translation Task Guide: Hugging Face Translation Guide
- Quantization Overview: Transformers Quantization Docs
- Tokenizer Guide: Hugging Face Tokenizer Documentation
- Model Conversion Tool: convert_marian_to_pytorch.py (GitHub)
- Supported Language Pairs: Refer to individual model cards under Helsinki-NLP for supported languages.