BARThez
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The BARThez model was proposed in `BARThez: a Skilled Pretrained French Sequence-to-Sequence Model
<https://arxiv.org/abs/2010.12321>`__ by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct,
2020.

The abstract of the paper:

*Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing
(NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language
understanding tasks. While there are some notable exceptions, most of the available models and research have been
conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language
(to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research
that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as
CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also
its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel
summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already
pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez,
provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.*

The Authors' code can be found `here <https://github.com/moussaKam/BARThez>`__.

Examples
_______________________________________________________________________________________________________________________

- BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check: `examples/seq2seq/
  <https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md>`__. A minimal generation sketch
  is shown below.
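
To give a quick feel for this, the snippet below loads a BARThez checkpoint through the generic auto classes and fills
in a masked French sentence, the same way one would use BART. This is a minimal sketch, not part of this page's API
reference: the ``moussaKam/barthez`` checkpoint name, the sample sentence, and the generation settings are illustrative
assumptions.

.. code-block:: python

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    # Assumed checkpoint name from the authors' release; substitute your own fine-tuned checkpoint.
    tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez")
    model = AutoModelForSeq2SeqLM.from_pretrained("moussaKam/barthez")

    # Encode a French sentence with a masked span and let the model reconstruct it,
    # following the denoising objective BARThez was pretrained with.
    inputs = tokenizer("Paris est la capitale de la <mask>.", return_tensors="pt")
    generated_ids = model.generate(**inputs, max_length=32, num_beams=4)
    print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))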

BarthezTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BarthezTokenizer
    :members:
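
As a usage note, :class:`~transformers.BarthezTokenizer` follows the standard ``PreTrainedTokenizer`` API on top of a
SentencePiece model. A minimal sketch, assuming the ``moussaKam/barthez`` checkpoint and an illustrative French
sentence:

.. code-block:: python

    from transformers import BarthezTokenizer

    # Assumed checkpoint name from the authors' release.
    tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")

    # Encode a sentence; special tokens are added around the SentencePiece subwords.
    encoding = tokenizer("Le camembert est délicieux !")
    print(encoding["input_ids"])
    print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))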