transformers/tests
mrbean eb16be415a
add onnx support for deberta and debertav2 (#17617)

* add onnx support for debertav2
* debertav2 -> deberta-v2 in onnx features file
* remove causal lm
* add deberta-v2-xlarge to onnx tests
* use self.type().dtype() in xsoftmax
* remove hack for deberta
* remove unused imports
* Update src/transformers/models/deberta_v2/configuration_deberta_v2.py
* use generate dummy inputs
* linter
* add imports
* add support for deberta v1 as well
* deberta does not support multiple choice
* Update src/transformers/models/deberta/configuration_deberta.py
* Update src/transformers/models/deberta_v2/configuration_deberta_v2.py
* one line ordered dict (ONNX config sketched below)
* fire build

Co-authored-by: Jingya HUANG <44135271+JingyaHuang@users.noreply.github.com>
2022-06-21 11:04:15 +02:00
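The bullets above ("use generate dummy inputs", "one line ordered dict", "deberta does not support multiple choice") describe the shape of the ONNX configs this commit adds. Below is a minimal sketch of what such a config can look like, assuming the `transformers.onnx.OnnxConfig` API of the v4.20 era; the class name `DebertaOnnxConfigSketch` and the exact axis handling are illustrative, not the verbatim code merged in #17617.

```python
# A rough illustration (not the verbatim code from #17617) of an OnnxConfig for
# DeBERTa, based on the transformers.onnx API available around v4.20. DeBERTa
# has no multiple-choice head, so only batch/sequence dynamic axes are declared,
# and token_type_ids are dropped when the config sets type_vocab_size to 0.
from collections import OrderedDict
from typing import Any, Mapping

from transformers.onnx import OnnxConfig


class DebertaOnnxConfigSketch(OnnxConfig):  # hypothetical name, for illustration only
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # "one line ordered dict": the whole input spec is a single OrderedDict.
        dynamic_axis = {0: "batch", 1: "sequence"}
        if self._config.type_vocab_size > 0:
            return OrderedDict(
                [
                    ("input_ids", dynamic_axis),
                    ("attention_mask", dynamic_axis),
                    ("token_type_ids", dynamic_axis),
                ]
            )
        return OrderedDict([("input_ids", dynamic_axis), ("attention_mask", dynamic_axis)])

    def generate_dummy_inputs(self, preprocessor, **kwargs) -> Mapping[str, Any]:
        # "use generate dummy inputs": delegate to the base implementation and
        # strip token_type_ids when the model does not use them.
        dummy_inputs = super().generate_dummy_inputs(preprocessor=preprocessor, **kwargs)
        if self._config.type_vocab_size == 0 and "token_type_ids" in dummy_inputs:
            del dummy_inputs["token_type_ids"]
        return dummy_inputs
```

Assuming the standard `transformers.onnx` export CLI of that era, a checkpoint wired up this way could then be exported with something like `python -m transformers.onnx --model=microsoft/deberta-v2-xlarge --feature=sequence-classification onnx_out/`, which is what the "add deberta-v2-xlarge to onnx tests" bullet exercises.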
benchmark [Test refactor 1/5] Per-folder tests reorganization (#15725) 2022-02-23 15:46:28 -05:00
deepspeed deprecate is_torch_bf16_available (#17738) 2022-06-20 08:40:11 -04:00
extended Update self-push workflow (#17177) 2022-05-13 16:28:00 +02:00
fixtures add a warning in SpmConverter for sentencepiece's model using the byte fallback feature (#16629) 2022-04-11 11:06:10 +02:00
generation [Generation Test] Make fast test actually fast (#17661) 2022-06-10 18:49:03 +02:00
models Not use -1e4 as attn mask (#17306) 2022-06-20 16:16:16 +02:00
onnx add onnx support for deberta and debertav2 (#17617) 2022-06-21 11:04:15 +02:00
optimization [Test refactor 1/5] Per-folder tests reorganization (#15725) 2022-02-23 15:46:28 -05:00
pipelines Add LongT5 model (#16792) 2022-06-13 22:36:58 +02:00
sagemaker Black preview (#17217) 2022-05-12 16:25:55 -04:00
tokenization fix train_new_from_iterator in the case of byte-level tokenizers (#17549) 2022-06-08 15:30:41 +02:00
trainer deprecate is_torch_bf16_available (#17738) 2022-06-20 08:40:11 -04:00
utils 🐛 Properly raise RepoNotFoundError when not authenticated (#17651) 2022-06-10 15:41:53 +02:00
__init__.py GPU text generation: Moved the encoded_prompt to the correct device 2020-01-06 15:11:12 +01:00
test_configuration_common.py Black preview (#17217) 2022-05-12 16:25:55 -04:00
test_feature_extraction_common.py [Json configs] Make json prettier for all saved tokenizer files & ensure same json format for all processors (tok + feat_extract) (#17457) 2022-05-31 17:07:30 +02:00
test_modeling_common.py [modeling_utils] torch_dtype/auto floating dtype fixes (#17614) 2022-06-09 10:18:26 -07:00
test_modeling_flax_common.py [Flax] improve large model init and loading (#16148) 2022-04-19 14:19:55 +02:00
test_modeling_tf_common.py TF: BART compatible with XLA generation (#17479) 2022-06-20 11:07:46 +01:00
test_sequence_feature_extraction_common.py Some tests misusing assertTrue for comparisons fix (#16771) 2022-04-19 14:44:08 +02:00
test_tokenization_common.py [Json configs] Make json prettier for all saved tokenizer files & ensure same json format for all processors (tok + feat_extract) (#17457) 2022-05-31 17:07:30 +02:00