transformers/tests/models/llama
Latest commit: 2fa876d2d8 by Fanli Lin — [tests] make cuda-only tests device-agnostic (#35607) — 2025-01-13 14:48:39 +01:00

* initial commit
* remove unrelated files
* further remove
* Update test_trainer.py
* fix style
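The commit above renames the CUDA-only test gating in this directory to the device-agnostic helpers from `transformers.testing_utils`. As a rough illustration (not the PR's actual diff), the sketch below shows the usual pattern: gating on any available accelerator with `require_torch_accelerator` instead of a CUDA-specific decorator, and resolving the device via `torch_device` instead of hard-coding `"cuda"`. The test class name and the `hf-internal-testing/tiny-random-LlamaForCausalLM` checkpoint are illustrative assumptions.

```python
# A minimal sketch of the device-agnostic test pattern, assuming the
# require_torch_accelerator decorator and torch_device constant from
# transformers.testing_utils; the checkpoint name is an assumption.
import unittest

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.testing_utils import require_torch_accelerator, torch_device


@require_torch_accelerator  # skips unless some accelerator (CUDA, XPU, MPS, ...) is present
class LlamaDeviceAgnosticTest(unittest.TestCase):
    def test_generate_on_accelerator(self):
        # torch_device resolves to "cuda", "xpu", "mps", etc. on the host,
        # so the same test runs on any supported accelerator backend.
        repo = "hf-internal-testing/tiny-random-LlamaForCausalLM"
        tokenizer = AutoTokenizer.from_pretrained(repo)
        model = AutoModelForCausalLM.from_pretrained(repo).to(torch_device)

        inputs = tokenizer("Hello", return_tensors="pt").to(torch_device)
        output = model.generate(**inputs, max_new_tokens=5)
        # one input sequence in, one generated sequence out
        self.assertEqual(output.shape[0], 1)
```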
File                         Last change                                               Date
__init__.py                  LLaMA Implementation (#21955)                             2023-03-16 09:00:53 -04:00
test_modeling_flax_llama.py  Add Llama Flax Implementation (#24587)                    2023-12-07 07:05:00 +01:00
test_modeling_llama.py       [tests] make cuda-only tests device-agnostic (#35607)    2025-01-13 14:48:39 +01:00
test_tokenization_llama.py   VLM: special multimodal Tokenizer (#34461)                2024-11-04 16:37:51 +01:00