
Supported: PyTorch, TensorFlow, Flax, FlashAttention, SDPA

Pegasus

Pegasus is an encoder-decoder (sequence-to-sequence) transformer model pretrained on unlabeled text for abstractive summarization. It is pretrained jointly on two self-supervised objectives, masked language modeling (MLM) and gap sentence generation (GSG). In GSG, whole sentences are masked and the model has to reconstruct them from the rest of the document, which makes the pretraining objective resemble summarization. Pegasus can be fine-tuned with good performance even on small datasets with only 1000 examples.
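
For intuition, a GSG training pair looks roughly like the following; `<mask_1>` is the sentence-level mask token used by the Pegasus tokenizer. This is an illustrative sketch only, not the actual pretraining pipeline.

```py
# Illustrative GSG pair (sketch only): one sentence is removed from the
# document and becomes the generation target.
document = (
    "Plants produce their own food through photosynthesis. <mask_1> "
    "They also generate the oxygen that most ecosystems depend on."
)
target = "Sunlight, carbon dioxide, and water are converted into glucose."
```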

You can find all the original Pegasus checkpoints under the Google organization.

Tip: Click on the Pegasus models in the right sidebar for more examples of how to apply Pegasus to different language tasks.

The examples below demonstrate how to summarize text with [Pipeline], [AutoModel], and from the command line.

Pipeline:

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="summarization",
    model="google/pegasus-xsum",
    torch_dtype=torch.float16,
    device=0
)
pipeline("""Plants are remarkable organisms that produce their own food using a method called photosynthesis.
This process involves converting sunlight, carbon dioxide, and water into glucose, which provides energy for growth.
Plants play a crucial role in sustaining life on Earth by generating oxygen and serving as the foundation of most ecosystems.""")
```

AutoModel:

```py
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "google/pegasus-xsum"
)
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/pegasus-xsum",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

input_text = """Plants are remarkable organisms that produce their own food using a method called photosynthesis.
This process involves converting sunlight, carbon dioxide, and water into glucose, which provides energy for growth.
Plants play a crucial role in sustaining life on Earth by generating oxygen and serving as the foundation of most ecosystems."""
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = model.generate(**input_ids, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

transformers-cli:

```bash
echo -e "Plants are remarkable organisms that produce their own food using a method called photosynthesis. This process involves converting sunlight, carbon dioxide, and water into glucose, which provides energy for growth. Plants play a crucial role in sustaining life on Earth by generating oxygen and serving as the foundation of most ecosystems." | transformers-cli run --task summarization --model google/pegasus-xsum --device 0
```
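
The AutoModel example requests the SDPA attention backend. If the flash-attn package is installed and the GPU supports it, FlashAttention-2 can be requested instead; the rest of the setup is unchanged. This is a sketch assuming a CUDA device and half-precision weights.

```py
import torch
from transformers import AutoModelForSeq2SeqLM

# Same checkpoint, but with the FlashAttention-2 kernels
# (requires the flash-attn package and a supported GPU).
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/pegasus-xsum",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2"
)
```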

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to int4.

```py
import torch
from transformers import BitsAndBytesConfig, AutoModelForSeq2SeqLM, AutoTokenizer

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4"
)
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/pegasus-xsum",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained(
    "google/pegasus-xsum"
)
input_text = """Plants are remarkable organisms that produce their own food using a method called photosynthesis.
This process involves converting sunlight, carbon dioxide, and water into glucose, which provides energy for growth.
Plants play a crucial role in sustaining life on Earth by generating oxygen and serving as the foundation of most ecosystems."""
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = model.generate(**input_ids, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
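
To verify the savings, you can inspect the quantized model's memory footprint with get_memory_footprint(), a standard [PreTrainedModel] helper; exact numbers depend on your setup.

```py
# Reports the memory used by the model's parameters and buffers, in bytes.
print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.0f} MB")
```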

Notes

  • [Adafactor] is the recommended optimizer for fine-tuning Pegasus (see the sketch after this list).
  • This implementation of Pegasus inherits from [BartForConditionalGeneration] but uses static (sinusoidal) positional embeddings instead of BART's learned positional embeddings. Pegasus also starts generating with `pad_token_id` as the prefix and uses `num_beams=8` by default.
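
A minimal fine-tuning setup with Adafactor might look like the sketch below. The hyperparameters are the commonly recommended Adafactor settings from the transformers documentation rather than Pegasus-specific values, and the last two lines simply inspect the generation defaults mentioned above.

```py
from transformers import Adafactor, AdafactorSchedule, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-xsum")

# Adafactor with relative step sizes and no externally supplied learning rate.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None
)
lr_scheduler = AdafactorSchedule(optimizer)

# Generation defaults for this checkpoint.
print(model.generation_config.decoder_start_token_id)  # same id as the pad token
print(model.generation_config.num_beams)               # 8
```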

PegasusConfig

autodoc PegasusConfig

PegasusTokenizer

Warning: `add_tokens` does not work at the moment.

autodoc PegasusTokenizer

PegasusTokenizerFast

autodoc PegasusTokenizerFast

PegasusModel

autodoc PegasusModel - forward

PegasusForConditionalGeneration

autodoc PegasusForConditionalGeneration - forward

PegasusForCausalLM

autodoc PegasusForCausalLM - forward

TFPegasusModel

autodoc TFPegasusModel - call

TFPegasusForConditionalGeneration

autodoc TFPegasusForConditionalGeneration - call

FlaxPegasusModel

autodoc FlaxPegasusModel - call - encode - decode

FlaxPegasusForConditionalGeneration

autodoc FlaxPegasusForConditionalGeneration - call - encode - decode