transformers/tests/models/kosmos2
Raushan Turganbay d23aae2b8c
[VLMs] support attention backends (#37576)
* update models

* why rename

* return attn weights when sdpa

* fixes

* fix attn implementation composite

* fix moshi

* add message

* add typings

* use explicitly all flags for each attn type

* fix some tests

* import what is needed

* kosmos on main has new attention already, yay

* new models in main, run fixup

* won't fix kosmos yet

* fix-copies

* clean up after rebasing

* fix tests

* style

* don't cast attns to fp32

* did we update ruff? okay, let's just do what it asks

* fix pixtral after rebase
2025-05-08 18:18:54 +02:00
__init__.py Add Kosmos-2 model (#24709) 2023-10-30 13:32:17 +01:00
test_modeling_kosmos2.py [VLMs] support attention backends (#37576) 2025-05-08 18:18:54 +02:00
test_processor_kosmos2.py 🚨 🚨 Setup -> setupclass conversion (#37282) 2025-04-08 17:15:37 +01:00