transformers/tests/models/llava_next
Latest commit: 62c7ea0201 by Joao Gante
CI: avoid human error, automatically infer generative models (#33212)
* tmp commit
* move tests to the right class
* remove ALL all_generative_model_classes = ...
* skip tf roberta
* skip InstructBlipForConditionalGenerationDecoderOnlyTest
* videollava
* reduce diff
* reduce diff
* remove all_generative_model_classes on vlms
* fix a few more
* manual rebase bits
* more manual rebase
* remove all manual generative model class test entries
* fix up to ernie
* a few more removals
* handle remaining cases
* recurrent gemma
* it's better here
* make fixup
* tf idefics is broken
* tf bert + generate is broken
* don't touch tf :()
* don't touch tf :(
* make fixup
* better comments for test skips
* revert tf changes
* remove empty line removal
* one more
* missing one
Committed 2025-02-13 16:27:11 +01:00
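The bullets above are the squashed history of PR #33212, whose point is that the generative subset of a model tester's classes can be derived rather than maintained by hand in every test file (including test_modeling_llava_next.py below). A minimal sketch of the idea follows, assuming a mixin-style tester; the class name GenerationTesterMixinSketch and the exact filtering are illustrative rather than the PR's actual diff, though PreTrainedModel.can_generate() is a real transformers classmethod.

```python
# Sketch only (assumption): derive all_generative_model_classes from
# all_model_classes instead of hand-writing a second, error-prone list.
from transformers import LlavaNextForConditionalGeneration


class GenerationTesterMixinSketch:
    # Every model test file already declares the classes it covers.
    all_model_classes = (LlavaNextForConditionalGeneration,)

    @property
    def all_generative_model_classes(self):
        # can_generate() reports whether a class supports .generate(),
        # so the generative subset is computed instead of listed by hand.
        return tuple(cls for cls in self.all_model_classes if cls.can_generate())
```

With a property like this on the shared tester, adding a model class to all_model_classes automatically enrolls it in the generation tests, which is what the commit title means by avoiding human error.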
File | Last commit | Date
__init__.py | Add LLaVa-1.6, bis (#29586) | 2024-03-20 15:51:12 +00:00
test_image_processing_llava_next.py | Refactoring of ImageProcessorFast (#35069) | 2025-02-04 17:52:31 -05:00
test_modeling_llava_next.py | CI: avoid human error, automatically infer generative models (#33212) | 2025-02-13 16:27:11 +01:00
test_processor_llava_next.py | Chat template: update for processor (#35953) | 2025-02-10 09:52:19 +01:00