
# Gemma2

## Overview
The Gemma2 model was proposed in *Gemma2: Open Models Based on Gemini Technology and Research* by the Gemma2 Team at Google. Gemma2 models are trained on 6T tokens and released in two versions, 2B and 7B.
The abstract from the paper is the following:
*This work introduces Gemma2, a new family of open language models demonstrating strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma2 outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of our model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.*
Tips:

- The original checkpoints can be converted using the conversion script `src/transformers/models/gemma2/convert_gemma2_weights_to_hf.py`.
This model was contributed by Arthur Zucker, Pedro Cuenca, and Tom Aarsen.
## Gemma2Config

[[autodoc]] Gemma2Config
## Gemma2Model

[[autodoc]] Gemma2Model
    - forward
## Gemma2ForCausalLM

[[autodoc]] Gemma2ForCausalLM
    - forward
## Gemma2ForSequenceClassification

[[autodoc]] Gemma2ForSequenceClassification
    - forward
## Gemma2ForTokenClassification

[[autodoc]] Gemma2ForTokenClassification
    - forward