MarianMTModel
----------------------------------------------------

**DISCLAIMER:** If you see something strange, file a `Github Issue `__ and assign
@sshleifer.

These models are for machine translation. The list of supported language pairs can be found `here `__.

Opus Project
~~~~~~~~~~~~

The 1,000+ models were originally trained by `Jörg Tiedemann `__ using the `Marian `_ C++ library, which
supports fast training and translation. All models are transformer encoder-decoders with 6 layers in each component.
Each model's performance is documented in a model card.

Implementation Notes
~~~~~~~~~~~~~~~~~~~~

- Each model is about 298 MB on disk; there are 1,000+ models.
- Models are named with the following pattern: ``Helsinki-NLP/opus-mt-{src_langs}-{targ_langs}``. If there are
  multiple source or target languages, they are joined by a ``+`` symbol.
- The 80 opus models that require BPE preprocessing are not supported.
- There is an outstanding issue w.r.t. multilingual models and language codes.
- The modeling code is the same as ``BartModel`` with a few minor modifications:

  - static (sinusoid) positional embeddings (``MarianConfig.static_position_embeddings=True``)
  - a new final_logits_bias (``MarianConfig.add_bias_logits=True``)
  - no layernorm_embedding (``MarianConfig.normalize_embedding=False``)
  - the model starts generating with ``pad_token_id`` (which has 0 token_embedding) as the prefix. (Bart uses )

- Code to bulk convert models can be found in ``convert_marian_to_pytorch.py``.

MarianMTModel
~~~~~~~~~~~~~

PyTorch version of marian-nmt's transformer.h (c++). Designed for the OPUS-NMT translation checkpoints. The model API
is identical to ``BartForConditionalGeneration``. Available models are listed at `Model List `__.

This class inherits all functionality from ``BartForConditionalGeneration``; see that page for method signatures.

.. autoclass:: transformers.MarianMTModel
    :members:

MarianTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.MarianTokenizer
    :members: prepare_translation_batch
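
Usage Example
~~~~~~~~~~~~~

The snippet below is a minimal sketch of how a translation checkpoint is typically used with ``MarianMTModel`` and
``MarianTokenizer.prepare_translation_batch``. The ``Helsinki-NLP/opus-mt-en-de`` checkpoint and the example sentences
are illustrative choices, not requirements; any checkpoint following the naming pattern above should work the same way.

.. code-block:: python

    from transformers import MarianMTModel, MarianTokenizer

    # Checkpoint name follows the Helsinki-NLP/opus-mt-{src_langs}-{targ_langs} pattern
    # described in the implementation notes; en-de (English -> German) is one example.
    model_name = "Helsinki-NLP/opus-mt-en-de"
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    src_texts = ["I am a small frog.", "Tom asked his teacher for advice."]

    # Tokenize and pad the source sentences into model-ready tensors.
    batch = tokenizer.prepare_translation_batch(src_texts)

    # Generation starts from pad_token_id, as noted above.
    translated = model.generate(**batch)
    translated_texts = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
    print(translated_texts)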