transformers/tests (latest commit: 2020-02-27 11:52:46 +01:00)
Name | Last commit message | Last commit date
fixtures | AutoConfig + other Auto classes honor model_type | 2020-01-11 02:46:17 +00:00
__init__.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_activations.py | get_activation('relu') provides a simple mapping from strings i… (#2807) | 2020-02-13 08:28:33 -05:00
test_configuration_auto.py | Map configs to models and tokenizers | 2020-01-13 23:11:44 +00:00
test_configuration_common.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_doc_samples.py | Rename test_examples to test_doc_samples | 2020-01-30 10:07:22 -05:00
test_hf_api.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_model_card.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_albert.py | Added test for AlbertForTokenClassification | 2020-02-27 11:52:46 +01:00
test_modeling_auto.py | Flaubert auto tokenizer + tests | 2020-01-31 14:16:52 -05:00
test_modeling_bart.py | Bart: fix layerdrop and cached decoder_input_ids for generation (#2969) | 2020-02-22 16:25:04 -05:00
test_modeling_bert.py | BERT decoder: Fix causal mask dtype. | 2020-02-11 15:19:22 -05:00
test_modeling_common.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_modeling_ctrl.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_modeling_distilbert.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_encoder_decoder.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_flaubert.py | Correct slow test | 2020-02-04 18:05:35 -05:00
test_modeling_gpt2.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_modeling_openai.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_modeling_roberta.py | New BartModel (#2745) | 2020-02-20 18:11:13 -05:00
test_modeling_t5.py | New BartModel (#2745) | 2020-02-20 18:11:13 -05:00
test_modeling_tf_albert.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_auto.py | Add AutoModelForPreTraining | 2020-01-27 14:27:07 -05:00
test_modeling_tf_bert.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_common.py | Absolute definitive HeisenDistilBug solve | 2020-01-27 21:58:36 -05:00
test_modeling_tf_ctrl.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_distilbert.py | Definitive HeisenDistilBug fix | 2020-01-27 12:09:58 -05:00
test_modeling_tf_gpt2.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_openai_gpt.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_roberta.py | RoBERTa TensorFlow Tests | 2020-02-04 18:05:35 -05:00
test_modeling_tf_t5.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_transfo_xl.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_xlm.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_tf_xlnet.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_modeling_transfo_xl.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_modeling_xlm.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_modeling_xlnet.py | Improve special_token_id logic in run_generation.py and add tests (#2885) | 2020-02-21 12:09:59 -05:00
test_optimization_tf.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_optimization.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_pipelines.py | Integrate fast tokenizers library inside transformers (#2674) | 2020-02-19 11:35:40 -05:00
test_tokenization_albert.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_auto.py | Integrate fast tokenizers library inside transformers (#2674) | 2020-02-19 11:35:40 -05:00
test_tokenization_bert_japanese.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_bert.py | Fix BasicTokenizer to respect never_split parameters (#2557) | 2020-01-17 14:57:56 -05:00
test_tokenization_common.py | Add get_vocab method to PretrainedTokenizer | 2020-02-20 15:26:49 -05:00
test_tokenization_ctrl.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_distilbert.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_tokenization_fast.py | Fix max_length not taken into account when using pad_to_max_length on fast tokenizers (#2961) | 2020-02-22 09:27:47 -05:00
test_tokenization_gpt2.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_openai.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_roberta.py | Preserve spaces in GPT-2 tokenizers (#2778) | 2020-02-13 13:29:43 -05:00
test_tokenization_t5.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_transfo_xl.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_utils.py | GPU text generation: mMoved the encoded_prompt to correct device | 2020-01-06 15:11:12 +01:00
test_tokenization_xlm.py | 💄 super | 2020-01-15 18:33:50 -05:00
test_tokenization_xlnet.py | 💄 super | 2020-01-15 18:33:50 -05:00
utils.py | More AutoConfig tests | 2020-01-11 03:43:57 +00:00
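The modules listed above follow the repository's usual test layout: plain unittest.TestCase classes (the modeling tests commonly build on shared mixins from test_modeling_common.py and test_modeling_tf_common.py) that are discovered and run with pytest. The sketch below is illustrative only, showing the general shape of such a module; the class, method, and values are invented and do not come from any file in this listing.

```python
# Illustrative sketch only: a minimal test module in the style used in this
# directory, i.e. a unittest.TestCase that pytest collects and runs.
import unittest


class ExampleConfigTest(unittest.TestCase):
    def test_roundtrip_dict(self):
        # Real tests here exercise transformers configs, models, and tokenizers;
        # this stand-in only demonstrates the assertion style.
        config = {"hidden_size": 32, "num_attention_heads": 4}
        restored = dict(config)
        self.assertEqual(restored, config)


if __name__ == "__main__":
    # Modules like these are typically run via pytest (e.g. `python -m pytest`
    # on a single test file), but they also work as standalone unittest scripts.
    unittest.main()
```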