transformers/templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}
Latest commit 2ee9f9b69e by Funtowicz Morgan, 2021-01-28 06:11:52 -05:00:

Fix computation of attention_probs when head_mask is provided. (#9853)

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>

* Apply changes to the template

Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
__init__.py Fast imports part 3 (#9474) 2021-01-08 07:40:59 -05:00
{{cookiecutter.lowercase_modelname}}.rst Model Templates for Seq2Seq (#9251) 2020-12-22 23:41:20 +01:00
configuration_{{cookiecutter.lowercase_modelname}}.py [PyTorch Bart] Split Bart into different models (#9343) 2021-01-05 22:00:05 +01:00
configuration.json Model Templates for Seq2Seq (#9251) 2020-12-22 23:41:20 +01:00
modeling_{{cookiecutter.lowercase_modelname}}.py Fix model templates and use less than 119 chars (#9684) 2021-01-19 17:11:22 -05:00
modeling_tf_{{cookiecutter.lowercase_modelname}}.py Fix computation of attention_probs when head_mask is provided. (#9853) 2021-01-28 06:11:52 -05:00
test_modeling_{{cookiecutter.lowercase_modelname}}.py [TFBart] Split TF-Bart (#9497) 2021-01-12 02:06:32 +01:00
test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py Fix head_mask for model templates 2021-01-26 11:02:48 +01:00
to_replace_{{cookiecutter.lowercase_modelname}}.py Transformers fast import part 2 (#9446) 2021-01-07 09:36:14 -05:00
tokenization_{{cookiecutter.lowercase_modelname}}.py Model Templates for Seq2Seq (#9251) 2020-12-22 23:41:20 +01:00
tokenization_fast_{{cookiecutter.lowercase_modelname}}.py Model Templates for Seq2Seq (#9251) 2020-12-22 23:41:20 +01:00