FSMT
-----------------------------------------------------------------------------------------------------------------------

**DISCLAIMER:** If you see something strange, file a `Github Issue
<https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title>`__ and assign
@stas00.

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FSMT (FairSeq MachineTranslation) models were introduced in `Facebook FAIR's WMT19 News Translation Task Submission
<https://arxiv.org/abs/1907.06616>`__ by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.

The abstract of the paper is the following:

*This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two
language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from
last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling
toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes,
as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific
data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the
human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations.
This system improves upon our WMT'18 submission by 4.5 BLEU points.*

The original code can be found `here <https://github.com/pytorch/fairseq/tree/master/examples/wmt19>`__.

Implementation Notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- FSMT uses source and target vocabulary pairs that aren't combined into one, and it doesn't share embedding tokens
  either. Its tokenizer is very similar to :class:`~transformers.XLMTokenizer` and the main model is derived from
  :class:`~transformers.BartModel`. A typical translation workflow is sketched below.

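The following is a minimal sketch of how the pieces fit together for translation. The ``facebook/wmt19-en-de``
checkpoint name and the sample German output in the comment are assumptions based on the released WMT19 models, not
something this document defines:

.. code-block:: python

    from transformers import FSMTForConditionalGeneration, FSMTTokenizer

    mname = "facebook/wmt19-en-de"  # assumed checkpoint name
    tokenizer = FSMTTokenizer.from_pretrained(mname)
    model = FSMTForConditionalGeneration.from_pretrained(mname)

    # The tokenizer encodes with the source vocabulary and decodes with the target one.
    input_ids = tokenizer.encode("Machine learning is great, isn't it?", return_tensors="pt")
    outputs = model.generate(input_ids)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    # e.g. "Maschinelles Lernen ist großartig, oder?"
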
FSMTConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.FSMTConfig
    :members:

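The separate source and target vocabularies show up directly in the configuration. A minimal sketch of building a
small randomly initialized model; the sizes below are illustrative, not the values used by the released checkpoints:

.. code-block:: python

    from transformers import FSMTConfig, FSMTForConditionalGeneration

    config = FSMTConfig(
        src_vocab_size=1000,  # source-side (encoder) vocabulary
        tgt_vocab_size=1200,  # target-side (decoder) vocabulary
        d_model=256,
        encoder_layers=2,
        decoder_layers=2,
        encoder_attention_heads=4,
        decoder_attention_heads=4,
    )
    model = FSMTForConditionalGeneration(config)  # randomly initialized, for illustration
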
FSMTTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.FSMTTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, prepare_seq2seq_batch, save_vocabulary

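A minimal sketch of preparing a batch for training with ``prepare_seq2seq_batch``; the ``facebook/wmt19-en-ru``
checkpoint name and the Russian target sentence are illustrative assumptions:

.. code-block:: python

    from transformers import FSMTTokenizer

    tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru")  # assumed checkpoint

    # Source texts are encoded with the source vocabulary, target texts with the target one.
    batch = tokenizer.prepare_seq2seq_batch(
        src_texts=["Machine learning is great, isn't it?"],
        tgt_texts=["Машинное обучение - это здорово, не так ли?"],
        return_tensors="pt",
    )
    # batch holds the encoded source (input_ids, attention_mask) plus the encoded
    # target ids; the exact field name for the targets varies across library versions.
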
FSMTModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.FSMTModel
    :members: forward

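A minimal sketch of a bare forward pass that returns decoder hidden states rather than translations. The checkpoint
name is an assumption, and the decoder is seeded with a single ``</s>`` token purely for illustration:

.. code-block:: python

    import torch

    from transformers import FSMTModel, FSMTTokenizer

    tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")  # assumed checkpoint
    model = FSMTModel.from_pretrained("facebook/wmt19-en-de")

    inputs = tokenizer("Machine learning is great.", return_tensors="pt")
    # Seed the decoder explicitly rather than relying on version-specific defaults.
    decoder_input_ids = torch.tensor([[tokenizer.eos_token_id]])

    outputs = model(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        decoder_input_ids=decoder_input_ids,
    )
    last_hidden_state = outputs[0]  # (batch size, decoder sequence length, hidden size)
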
FSMTForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.FSMTForConditionalGeneration
    :members: forward

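Generation goes through the standard ``generate()`` API, so the usual decoding options apply. A sketch with beam
search; the checkpoint name and decoding settings are illustrative:

.. code-block:: python

    from transformers import FSMTForConditionalGeneration, FSMTTokenizer

    tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")  # assumed checkpoint
    model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")

    input_ids = tokenizer.encode("Machine learning is great, isn't it?", return_tensors="pt")

    # Beam search usually improves translation quality over greedy decoding.
    outputs = model.generate(input_ids, num_beams=5, early_stopping=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))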