transformers/tests/models/llama
Latest commit: ab98f0b0a1 "avoid calling gc.collect and cuda.empty_cache" (#34514) by Yih-Dar, co-authored by ydshieh <ydshieh@users.noreply.github.com>, 2024-10-31 16:36:13 +01:00
File                          Last commit                                              Date
__init__.py                   LLaMA Implementation (#21955)                            2023-03-16 09:00:53 -04:00
test_modeling_flax_llama.py   Add Llama Flax Implementation (#24587)                   2023-12-07 07:05:00 +01:00
test_modeling_llama.py        avoid calling gc.collect and cuda.empty_cache (#34514)   2024-10-31 16:36:13 +01:00
test_tokenization_llama.py    use diff internal model in tests (#33387)                2024-09-11 11:27:00 +02:00
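
The latest commit message above describes removing direct gc.collect() and torch.cuda.empty_cache() calls from test_modeling_llama.py. As a rough illustration of that kind of change, here is a minimal sketch in which per-test memory cleanup is routed through one shared, device-aware helper instead of inline calls; the `cleanup` name and its signature are assumptions for illustration, not necessarily the exact helper introduced in #34514.

```python
# Sketch only: the `cleanup` helper below is a hypothetical stand-in for the
# shared utility the commit message implies, not the actual transformers API.
import gc
import unittest

import torch


def cleanup(device: str, gc_collect: bool = False) -> None:
    """Free cached accelerator memory, optionally forcing a GC pass first."""
    if gc_collect:
        gc.collect()
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()


class LlamaModelTest(unittest.TestCase):
    def tearDown(self):
        # One call replaces the inline gc.collect() / torch.cuda.empty_cache()
        # pairs that were previously repeated in each test's teardown.
        cleanup("cuda", gc_collect=True)
```

Centralizing cleanup like this keeps the teardown logic in one place, so device-specific behavior (CUDA vs. other accelerators) can change without touching every test file.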