transformers/tests/models/llama

Latest commit: 115ac94d06 by Arthur — [Core generation] Adds support for static KV cache (#27931), 2024-02-08 11:50:34 +01:00
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
__init__.py                  LLaMA Implementation (#21955)                                2023-03-16 09:00:53 -04:00
test_modeling_flax_llama.py  Add Llama Flax Implementation (#24587)                       2023-12-07 07:05:00 +01:00
test_modeling_llama.py       [Core generation] Adds support for static KV cache (#27931)  2024-02-08 11:50:34 +01:00
test_tokenization_llama.py   [Docs] Fix spelling and grammar mistakes (#28825)            2024-02-02 08:45:00 +01:00