transformers/tests/models/qwen2_vl
Latest commit: 6a1ab634b6 by gewenbin0992, 2025-02-13 11:35:28 +01:00
qwen2.5vl: fix bugs when using flash2+bf16 or num_return_sequences>1 (#36083)

* qwen2.5vl: fix bugs when using flash2+bf16 or num_return_sequences>1
* fix
* fix
* fix
* fix
* add tests
* fix test bugs
* fix
* fix failed tests
* fix
__init__.py
    support qwen2-vl (#32318), 2024-08-26 15:16:44 +02:00
test_image_processing_qwen2_vl.py
    Use torch.testing.assert_close instead to get more details about error in CIs (#35659), 2025-01-24 16:55:28 +01:00
test_modeling_qwen2_vl.py
    qwen2.5vl: fix bugs when using flash2+bf16 or num_return_sequences>1 (#36083), 2025-02-13 11:35:28 +01:00
test_processor_qwen2_vl.py
    Chat template: update for processor (#35953), 2025-02-10 09:52:19 +01:00