
# Zamba2

Zamba2 is a large language model (LLM) trained by Zyphra, and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for model weights.

This model was contributed by pglo.

## Model details

Zamba2-1.2B, Zamba2-2.7B and Zamba2-7B are hybrid models that combine state-space model blocks (specifically Mamba) with transformer blocks, and were trained using next-token prediction. Zamba2 uses shared transformer layers after every 6 Mamba blocks and uses the Mistral v0.1 tokenizer. We came to this architecture after a series of ablations at small scales. Zamba2-1.2B, Zamba2-2.7B and Zamba2-7B were pre-trained on 2T and 3T tokens, respectively.
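To see how this hybrid layout is exposed in `transformers`, here is a minimal sketch that inspects the configuration and tokenizer. It assumes the `Zyphra/Zamba2-1.2B` checkpoint; any of the Zamba2 checkpoints should work the same way.

```python
from transformers import AutoConfig, AutoTokenizer

# Fetch only the config and tokenizer files, not the model weights.
config = AutoConfig.from_pretrained("Zyphra/Zamba2-1.2B")
print(type(config).__name__)  # Zamba2Config
print(config)                 # hidden sizes, layer counts, hybrid layer layout, etc.

# Zamba2 reuses the Mistral v0.1 tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-1.2B")
print(tokenizer.vocab_size)
```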

## Quick start

### Prerequisites

Zamba2 requires `transformers` version 4.48.0 or higher:

```bash
pip install "transformers>=4.48.0"
```
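To double-check the environment, a quick version check (a minimal sketch; any way of reading the installed version works):

```python
import transformers

# Zamba2 support requires transformers >= 4.48.0.
print(transformers.__version__)
```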

### Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-7B")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-7B", device_map="cuda", torch_dtype=torch.bfloat16)

input_text = "What factors contributed to the fall of the Roman Empire?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
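The example above uses greedy decoding. Standard `generate` sampling arguments also apply; for instance, a sketch reusing the `model`, `tokenizer`, and `input_ids` from above (the temperature and top-p values are illustrative, not recommendations):

```python
# Sample instead of decoding greedily; adjust the values to taste.
outputs = model.generate(
    **input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```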

## Model card

The model cards can be found at:

* [Zamba2-1.2B](https://huggingface.co/Zyphra/Zamba2-1.2B)
* [Zamba2-2.7B](https://huggingface.co/Zyphra/Zamba2-2.7B)
* [Zamba2-7B](https://huggingface.co/Zyphra/Zamba2-7B)

## Issues

For issues with model output, or for community discussion, please use the [Hugging Face community forum](https://discuss.huggingface.co/).

## License

The model weights are open-sourced via an Apache 2.0 license.

## Zamba2Config

[[autodoc]] Zamba2Config

## Zamba2Model

[[autodoc]] Zamba2Model
    - forward

## Zamba2ForCausalLM

[[autodoc]] Zamba2ForCausalLM
    - forward

## Zamba2ForSequenceClassification

[[autodoc]] transformers.Zamba2ForSequenceClassification
    - forward