
# Gemma2

## Overview
The Gemma2 model was proposed in [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by the Gemma2 Team at Google. Two Gemma2 models are released, with parameter sizes of 9 billion (9B) and 27 billion (27B).
The abstract from the blog post is the following:
*Now we’re officially releasing Gemma 2 to researchers and developers globally. Available in both 9 billion (9B) and 27 billion (27B) parameter sizes, Gemma 2 is higher-performing and more efficient at inference than the first generation, with significant safety advancements built in. In fact, at 27B, it offers competitive alternatives to models more than twice its size, delivering the kind of performance that was only possible with proprietary models as recently as December.*
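The released checkpoints can be tried out with the high-level `pipeline` API. Below is a minimal sketch, assuming the 9B checkpoint is published on the Hub as `google/gemma-2-9b` (the checkpoint name is an assumption, not confirmed by this document):

```python
from transformers import pipeline

# "google/gemma-2-9b" is an assumed Hub checkpoint name; swap in the
# checkpoint you actually have access to.
pipe = pipeline("text-generation", model="google/gemma-2-9b", device_map="auto")

out = pipe("The Gemma 2 models come in", max_new_tokens=20)
print(out[0]["generated_text"])
```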
Tips:

- The original checkpoints can be converted using the conversion script `src/transformers/models/gemma2/convert_gemma2_weights_to_hf.py` (a hypothetical invocation is sketched below).
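An invocation of that script might look like the following. The flag names here are illustrative assumptions, not the script's documented interface; run the script with `--help` to see its actual arguments:

```bash
# All flag names below are assumptions; consult the script's --help output.
python src/transformers/models/gemma2/convert_gemma2_weights_to_hf.py \
    --input_checkpoint /path/to/original/gemma2/checkpoint \
    --model_size 9B \
    --output_dir /path/to/gemma2-9b-hf
```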
This model was contributed by Arthur Zucker, Pedro Cuenca and Tom Aarsen.
## Gemma2Config

[[autodoc]] Gemma2Config
## Gemma2Model

[[autodoc]] Gemma2Model
    - forward
## Gemma2ForCausalLM

[[autodoc]] Gemma2ForCausalLM
    - forward
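For reference, a short generation sketch using the class directly (again assuming the hypothetical `google/gemma-2-9b` checkpoint name):

```python
from transformers import AutoTokenizer, Gemma2ForCausalLM

# Checkpoint name is an assumption; replace it with a real Gemma2 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-9b")

inputs = tokenizer("The secret to a good bake is", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```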
## Gemma2ForSequenceClassification

[[autodoc]] Gemma2ForSequenceClassification
    - forward
## Gemma2ForTokenClassification

[[autodoc]] Gemma2ForTokenClassification
    - forward