transformers/tests/models/gemma
Joseph Enguehard 07bf2dff78
Add TokenClassification for Mistral, Mixtral and Qwen2 (#29878)
* Add MistralForTokenClassification

* Add tests and docs

* Add token classification for Mixtral and Qwen2

* Save Llama for token classification draft

* Add token classification support for Llama, Gemma, Persimmon, StableLm and StarCoder2

* Formatting

* Add token classification support for Qwen2Moe model

* Add dropout layer to each ForTokenClassification model

* Add copied from in tests

* Update src/transformers/models/llama/modeling_llama.py

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>

* Propagate suggested changes

* Style

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2024-05-20 10:06:57 +02:00
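The PR above adds a `ForTokenClassification` head with a dropout layer to each of these decoder models. As a rough illustration of that pattern (not the library's actual API — class and parameter names here are hypothetical), the head is essentially dropout followed by a per-token linear classifier over the base model's hidden states:

```python
import torch
import torch.nn as nn

class TokenClassificationHead(nn.Module):
    """Hypothetical sketch of the head pattern the PR describes:
    dropout, then a linear layer scoring every token position."""

    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.score = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) from the base model
        return self.score(self.dropout(hidden_states))

head = TokenClassificationHead(hidden_size=16, num_labels=5)
logits = head(torch.randn(2, 7, 16))
print(tuple(logits.shape))  # one logit vector per token: (2, 7, 5)
```

In the actual models, users would load e.g. `MistralForTokenClassification` (or use `AutoModelForTokenClassification`) with a `num_labels` config value rather than building the head by hand.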
__init__.py [gemma] Adds support for Gemma 💎 (#29167) 2024-02-21 14:21:28 +01:00
test_modeling_flax_gemma.py FIX [Gemma / CI] Make sure our runners have access to the model (#29242) 2024-02-28 06:25:23 +01:00
test_modeling_gemma.py Add TokenClassification for Mistral, Mixtral and Qwen2 (#29878) 2024-05-20 10:06:57 +02:00
test_tokenization_gemma.py [LlamaTokenizerFast] Refactor default llama (#28881) 2024-04-23 23:12:59 +02:00