PyTorch FlashAttention SDPA Tensor parallelism

Granite

Granite is a 3B parameter language model trained with the Power scheduler. Discovering a good learning rate for pretraining large language models is difficult because it depends on so many variables (batch size, number of training tokens, etc.) and it is expensive to perform a hyperparameter search. The Power scheduler is based on a power-law relationship between the variables and their transferability to larger models. Combining the Power scheduler with Maximum Update Parameterization (MUP) allows a model to be pretrained with one set of hyperparameters regardless of all the variables.
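
To make the power-law idea concrete, the sketch below shows a generic power-law decay of the learning rate as a function of the number of training tokens. The constants and exact functional form are illustrative only and are not taken from the Power scheduler paper.

# Illustrative only: a generic power-law learning-rate curve of the form
# lr(t) = a * t^(-b), capped at lr_max. The Power scheduler fits its own
# constants and formulation; these numbers are made up for demonstration.
def power_law_lr(tokens_seen, a=0.02, b=0.51, lr_max=3e-4):
    return min(lr_max, a * max(tokens_seen, 1.0) ** -b)

for tokens in (1e8, 1e9, 1e10, 1e11):
    print(f"{tokens:.0e} tokens -> lr = {power_law_lr(tokens):.2e}")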

You can find all the original Granite checkpoints under the IBM-Granite organization.

Tip

Click on the Granite models in the right sidebar for more examples of how to apply Granite to different language tasks.

The example below demonstrates how to generate text with [Pipeline], [AutoModel], and from the command line.

import torch
from transformers import pipeline

# Load the model in bfloat16 on the first GPU
pipe = pipeline(
    task="text-generation",
    model="ibm-granite/granite-3.3-2b-base",
    torch_dtype=torch.bfloat16,
    device=0
)
pipe("Explain quantum computing in simple terms", max_new_tokens=50)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.3-2b-base")
model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3.3-2b-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)

inputs = tokenizer("Explain quantum computing in simple terms", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=50, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

echo -e "Explain quantum computing simply." | transformers-cli run --task text-generation --model ibm-granite/granite-3.3-8b-instruct --device 0

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to int4.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.3-8b-base")
model = AutoModelForCausalLM.from_pretrained("ibm-granite/granite-3.3-8b-base", torch_dtype=torch.bfloat16, device_map="auto", attn_implementation="sdpa", quantization_config=quantization_config)

inputs = tokenizer("Explain quantum computing in simple terms", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=50, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same approach works with the smaller 2B checkpoint.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-3.3-2b-base")
model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3.3-2b-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",
    quantization_config=quantization_config,
)

inputs = tokenizer("Explain artificial intelligence to a 10 year old", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=50, cache_implementation="static")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

GraniteConfig

autodoc GraniteConfig
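
The config class follows the standard Transformers config/model pattern. A minimal sketch, using only the default configuration values, looks like this:

from transformers import GraniteConfig, GraniteModel

# Create a configuration with default values and build a randomly initialized model from it
configuration = GraniteConfig()
model = GraniteModel(configuration)

# The configuration is stored on the model and can be read back
configuration = model.config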

GraniteModel

autodoc GraniteModel - forward

GraniteForCausalLM

autodoc GraniteForCausalLM - forward