transformers/tests/models/llama
Latest commit cf32ee1753 by Joao Gante:
Cache: use batch_size instead of max_batch_size (#32657)
* more precise name

* better docstrings

* Update src/transformers/cache_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

2024-08-16 11:48:45 +01:00
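
The commit above is a rename at the cache API level: code that previously passed max_batch_size to a cache class now passes batch_size. Below is a minimal sketch of what that looks like at the call site, assuming the StaticCache constructor from transformers around this release; the config and sizes are illustrative and not taken from these tests.

from transformers import LlamaConfig, StaticCache

# Illustrative config; any model config with the usual attention fields works.
config = LlamaConfig()

# New, more precise argument name introduced by this commit.
cache = StaticCache(config=config, batch_size=2, max_cache_len=256)

# Before this change the same cache was built with max_batch_size=2; the old
# name is expected to remain only as a deprecated alias.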
__init__.py
test_modeling_flax_llama.py    Add Llama Flax Implementation (#24587)                     2023-12-07 07:05:00 +01:00
test_modeling_llama.py         Cache: use batch_size instead of max_batch_size (#32657)   2024-08-16 11:48:45 +01:00
test_tokenization_llama.py     Skip tests properly (#31308)                                2024-06-26 21:59:08 +01:00