..
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

M2M100
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The M2M100 model was proposed in `Beyond English-Centric Multilingual Machine Translation
<https://arxiv.org/abs/2010.11125>`__ by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky,
Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy
Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.

The abstract from the paper is the following:

*Existing work in translation demonstrated the potential of massively multilingual machine translation by training a
single model able to translate between any pair of languages. However, much of this work is English-Centric by training
only on data which was translated from or to English. While this is supported by large sources of training data, it
does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation
model that can translate directly between any pair of 100 languages. We build and open source a training dataset that
covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how
to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters
to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly
translating between non-English directions while performing competitively to the best single systems of WMT. We
open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.*

Training and Generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is
multilingual, it expects the sequences in a certain format: a special language id token is used as a prefix in both the
source and target text. The text format is :obj:`[lang_code] X [eos]`, where :obj:`lang_code` is the source language id
for the source text and the target language id for the target text, and :obj:`X` is the source or target text.
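
As a quick illustration of this format (a minimal sketch; the concrete ids depend on the checkpoint's vocabulary), you
can inspect the ids the tokenizer produces for a short source sentence:

.. code-block::

    from transformers import M2M100Tokenizer

    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
    ids = tokenizer("Hello").input_ids

    # the encoded sequence starts with the source language id token and ends with eos
    assert ids[0] == tokenizer.get_lang_id("en")
    assert ids[-1] == tokenizer.eos_token_id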

- Supervised Training

.. code-block::

    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    model = M2M100ForConditionalGeneration.from_pretrained('facebook/m2m100_418M')
    tokenizer = M2M100Tokenizer.from_pretrained('facebook/m2m100_418M', src_lang="en", tgt_lang="fr")

    src_text = "Life is like a box of chocolates."
    tgt_text = "La vie est comme une boîte de chocolat."

    model_inputs = tokenizer(src_text, return_tensors="pt")
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(tgt_text, return_tensors="pt").input_ids

    loss = model(**model_inputs, labels=labels).loss  # forward pass
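
A complete training step then only needs an optimizer on top of the forward pass above; a minimal sketch (the
optimizer choice and learning rate are illustrative, not a recommendation):

.. code-block::

    import torch

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # illustrative learning rate

    loss = model(**model_inputs, labels=labels).loss
    loss.backward()        # backpropagate
    optimizer.step()       # update the weights
    optimizer.zero_grad()  # reset the gradients for the next step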

- Generation

M2M100 uses the :obj:`eos_token_id` as the :obj:`decoder_start_token_id` for generation, with the target language id
forced as the first generated token. To force the target language id, pass the :obj:`forced_bos_token_id` parameter to
the :obj:`generate` method. The following example shows how to translate from Hindi to French and from Chinese to
English using the :obj:`facebook/m2m100_418M` checkpoint.

.. code-block::

    >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    >>> hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
    >>> chinese_text = "生活就像一盒巧克力。"

    >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

    >>> # translate Hindi to French
    >>> tokenizer.src_lang = "hi"
    >>> encoded_hi = tokenizer(hi_text, return_tensors="pt")
    >>> generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
    >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
    ['La vie est comme une boîte de chocolat.']

    >>> # translate Chinese to English
    >>> tokenizer.src_lang = "zh"
    >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt")
    >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
    >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
    ['Life is like a box of chocolate.']
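
The same pattern extends to a batch of sentences; a minimal sketch (the example sentences are illustrative), using
padding so the encoded inputs can be stacked into a single tensor:

.. code-block::

    >>> tokenizer.src_lang = "en"
    >>> batch = ["Life is like a box of chocolates.", "Hello world!"]
    >>> encoded = tokenizer(batch, padding=True, return_tensors="pt")
    >>> generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("fr"))
    >>> translations = tokenizer.batch_decode(generated, skip_special_tokens=True)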

M2M100Config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.M2M100Config
    :members:


M2M100Tokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.M2M100Tokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary


M2M100Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.M2M100Model
    :members: forward


M2M100ForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.M2M100ForConditionalGeneration
    :members: forward