
# MarianMT

## Overview

MarianMT is a machine translation model trained with the Marian framework, which is written in pure C++. The framework includes its own custom auto-differentiation engine and efficient meta-algorithms for training encoder-decoder models like BART.

All MarianMT models are transformer encoder-decoders with 6 layers in each component. They use static sinusoidal positional embeddings, don't have a layernorm embedding, and start generating with `pad_token_id` as the prefix instead of `<s/>`.
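You can verify some of these details from a checkpoint's configuration. The snippet below is a minimal sketch assuming the `Helsinki-NLP/opus-mt-en-de` checkpoint:

```python
from transformers import MarianConfig

config = MarianConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# 6 layers in both the encoder and the decoder
print(config.encoder_layers, config.decoder_layers)

# generation starts from pad_token_id rather than a dedicated <s> token
print(config.decoder_start_token_id == config.pad_token_id)
```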

You can find all the original MarianMT checkpoints under the [Language Technology Research Group at the University of Helsinki](https://huggingface.co/Helsinki-NLP) organization.

> [!TIP]
> This model was contributed by [sshleifer](https://huggingface.co/sshleifer).
>
> Click on the MarianMT models in the right sidebar for more examples of how to apply MarianMT to translation tasks.

The example below demonstrates how to translate text using [Pipeline] or the [AutoModel] class.


```python
import torch
from transformers import pipeline

pipeline = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de", torch_dtype=torch.float16, device=0)
pipeline("Hello, how are you?")
```


```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de", torch_dtype=torch.float16, attn_implementation="sdpa", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Use the AttentionMaskVisualizer to better understand what tokens the model can and cannot attend to.

```python
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("Helsinki-NLP/opus-mt-en-de")
visualizer("Hello, how are you?")
```

## Notes

- MarianMT models are ~298MB on disk and there are more than 1000 models. Check the [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) organization for supported language pairs. The language codes may be inconsistent; two-digit codes are usually ISO 639-1 codes, while three-digit codes may require further searching. You can also enumerate the available checkpoints programmatically, as sketched below.
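
  A minimal sketch for listing checkpoints with the `huggingface_hub` API (assuming `huggingface_hub` is installed):

  ```python
  from huggingface_hub import list_models

  # list translation checkpoints from the Helsinki-NLP organization
  models = list_models(author="Helsinki-NLP", search="opus-mt")
  print(sorted(m.id for m in models)[:5])
  ```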

- Models that require BPE preprocessing are not supported.

- All model names use the format `Helsinki-NLP/opus-mt-{src}-{tgt}`. Language codes formatted like `es_AR` usually refer to `{code}_{region}`. For example, `es_AR` refers to Spanish from Argentina. See the example below this item.
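
  For example, building a checkpoint name from this convention (`opus-mt-es-fr` is shown purely as an illustration; check the Hub to confirm a given pair exists):

  ```python
  src, tgt = "es", "fr"
  model_name = f"Helsinki-NLP/opus-mt-{src}-{tgt}"  # "Helsinki-NLP/opus-mt-es-fr"
  ```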

- If a model can output multiple languages, prepend the desired output language to `src_txt` as shown in the snippet below. New multilingual models from the [Tatoeba-Challenge repo](https://github.com/Helsinki-NLP/Tatoeba-Challenge) require 3 character language codes.

- Older multilingual models use 2 character language codes, as in the sketch below.

## MarianConfig

[[autodoc]] MarianConfig

## MarianTokenizer

[[autodoc]] MarianTokenizer
    - build_inputs_with_special_tokens

## MarianModel

[[autodoc]] MarianModel
    - forward

## MarianMTModel

[[autodoc]] MarianMTModel
    - forward

## MarianForCausalLM

[[autodoc]] MarianForCausalLM
    - forward

## TFMarianModel

[[autodoc]] TFMarianModel
    - call

## TFMarianMTModel

[[autodoc]] TFMarianMTModel
    - call

## FlaxMarianModel

[[autodoc]] FlaxMarianModel
    - __call__

## FlaxMarianMTModel

[[autodoc]] FlaxMarianMTModel
    - __call__