Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-14 10:08:29 +06:00
* First commit: adding all files from tapas_v3
* Fix multiple bugs including soft dependency and new structure of the library
* Improve testing by adding torch_device to inputs and adding dependency on scatter
* Use Python 3 inheritance rather than Python 2
* First draft model cards of base sized models
* Remove model cards as they are already on the hub
* Fix multiple bugs with integration tests
* All model integration tests pass
* Remove print statement
* Add test for convert_logits_to_predictions method of TapasTokenizer
* Incorporate suggestions by Google authors
* Fix remaining tests
* Change position embeddings sizes to 512 instead of 1024
* Comment out positional embedding sizes
* Update PRETRAINED_VOCAB_FILES_MAP and PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
* Added more model names
* Fix truncation when no max length is specified
* Disable torchscript test
* Make style & make quality
* Quality
* Address CI needs
* Test the Masked LM model
* Fix the masked LM model
* Truncate when overflowing
* More much needed docs improvements
* Fix some URLs
* Some more docs improvements
* Test PyTorch scatter
* Set to slow + minify
* Calm flake8 down
* Add add_pooling_layer argument to TapasModel (fix comments by @sgugger and @patrickvonplaten)
* Fix issue in docs + fix style and quality
* Clean up conversion script and add task parameter to TapasConfig
* Revert the task parameter of TapasConfig (some minor fixes)
* Improve conversion script and add test for absolute position embeddings
* Improve conversion script and add test for absolute position embeddings
* Fix bug with reset_position_index_per_cell arg of the conversion cli
* Add notebooks to the examples directory and fix style and quality
* Apply suggestions from code review
* Move from `nielsr/` to `google/` namespace
* Apply Sylvain's comments

Co-authored-by: sgugger <sylvain.gugger@gmail.com>
Co-authored-by: Rogge Niels <niels.rogge@howest.be>
Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: sgugger <sylvain.gugger@gmail.com>
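The commits above exercise the TAPAS question-answering API: `TapasTokenizer` (including its `convert_logits_to_predictions` method), the TAPAS model classes, and checkpoints published under the `google/` namespace. As a rough illustration of how those pieces fit together, here is a minimal sketch of querying a table with a fine-tuned checkpoint; the checkpoint name, the example table, and the query are assumptions chosen for illustration and do not come from the commit log.

```python
# Minimal sketch of the TAPAS table question-answering flow referenced in the
# commit log above. The checkpoint, table, and query below are illustrative
# assumptions, not taken from the commits themselves.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

# Checkpoints live under the google/ namespace after the
# "Move from `nielsr/` to `google/` namespace" commit.
model_name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects the table as a pandas DataFrame whose cells are all strings.
data = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# convert_logits_to_predictions (tested in the commits above) maps the cell
# logits and aggregation logits back to answer coordinates in the table.
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(predicted_answer_coordinates, predicted_aggregation_indices)
```

Note that early TAPAS releases also relied on the separate torch-scatter package, per the "adding dependency on scatter" and "Test PyTorch scatter" commits above.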
albert.rst
auto.rst
bart.rst
barthez.rst
bert.rst
bertgeneration.rst
blenderbot.rst
camembert.rst
ctrl.rst
deberta.rst
dialogpt.rst
distilbert.rst
dpr.rst
electra.rst
encoderdecoder.rst
flaubert.rst
fsmt.rst
funnel.rst
gpt.rst
gpt2.rst
layoutlm.rst
longformer.rst
lxmert.rst
marian.rst
mbart.rst
mobilebert.rst
mpnet.rst
mt5.rst
pegasus.rst
prophetnet.rst
rag.rst
reformer.rst
retribert.rst
roberta.rst
squeezebert.rst
t5.rst
tapas.rst
transformerxl.rst
xlm.rst
xlmprophetnet.rst
xlmroberta.rst
xlnet.rst