
# Llama

*PyTorch, Flax, FlashAttention, SDPA*

Llama is a family of large language models ranging from 7B to 65B parameters. These models are focused on efficient inference (important for serving language models) by training a smaller model on more tokens rather than training a larger model on fewer tokens. The Llama model is based on the GPT architecture, but it uses pre-normalization to improve training stability, replaces ReLU with SwiGLU to improve performance, and replaces absolute positional embeddings with rotary positional embeddings (RoPE) to better handle longer sequence lengths.
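
These design choices are visible directly on the model configuration. The minimal sketch below, assuming the `huggyllama/llama-7b` checkpoint, loads only the configuration and prints the fields that correspond to the SwiGLU activation, the RMSNorm pre-normalization, and the RoPE settings (exact values depend on the checkpoint).

```py
from transformers import LlamaConfig

# Only the configuration file is downloaded here, not the model weights.
config = LlamaConfig.from_pretrained("huggyllama/llama-7b")

print(config.hidden_act)    # "silu" -- the gated SwiGLU-style MLP activation
print(config.rms_norm_eps)  # epsilon used by the RMSNorm pre-normalization layers
print(config.rope_theta)    # base frequency of the rotary positional embeddings (RoPE)
```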

You can find all the original Llama checkpoints under the Huggy Llama organization.

> [!TIP]
> Click on the Llama models in the right sidebar for more examples of how to apply Llama to different language tasks.

The examples below demonstrate how to generate text with [`Pipeline`], with [`AutoModel`], and from the command line.

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-generation",
    model="huggyllama/llama-7b",
    torch_dtype=torch.float16,
    device=0
)
pipeline("Plants create energy through a process known as")
```

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "huggyllama/llama-7b",
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to("cuda")

output = model.generate(**input_ids, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

```bash
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model huggyllama/llama-7b --device 0
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses torchao to quantize only the weights to int4.

```py
# pip install torchao
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer

quantization_config = TorchAoConfig("int4_weight_only", group_size=128)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-30b")
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to("cuda")

output = model.generate(**input_ids, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Use the `AttentionMaskVisualizer` to better understand what tokens the model can and cannot attend to.

```py
from transformers.utils.attention_visualizer import AttentionMaskVisualizer

visualizer = AttentionMaskVisualizer("huggyllama/llama-7b")
visualizer("Plants create energy through a process known as")
```

## Notes

- The tokenizer is a byte-pair encoding model based on SentencePiece. During decoding, if the first token is the start of a word (for example, "Banana"), the tokenizer doesn't prepend the prefix space to the string, as shown in the sketch below.
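
A minimal sketch of that decoding behavior, assuming the `huggyllama/llama-7b` tokenizer:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# "Banana" starts a word, so its first token carries the SentencePiece word-boundary marker.
ids = tokenizer("Banana", add_special_tokens=False)["input_ids"]

# Decoding does not prepend a prefix space before the first token.
print(tokenizer.decode(ids))  # "Banana"
```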

## LlamaConfig

[[autodoc]] LlamaConfig

## LlamaTokenizer

[[autodoc]] LlamaTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## LlamaTokenizerFast

[[autodoc]] LlamaTokenizerFast
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - update_post_processor
    - save_vocabulary

## LlamaModel

[[autodoc]] LlamaModel
    - forward

## LlamaForCausalLM

[[autodoc]] LlamaForCausalLM
    - forward

## LlamaForSequenceClassification

[[autodoc]] LlamaForSequenceClassification
    - forward

## LlamaForQuestionAnswering

[[autodoc]] LlamaForQuestionAnswering
    - forward

## LlamaForTokenClassification

[[autodoc]] LlamaForTokenClassification
    - forward

## FlaxLlamaModel

[[autodoc]] FlaxLlamaModel
    - __call__

## FlaxLlamaForCausalLM

[[autodoc]] FlaxLlamaForCausalLM
    - __call__