
# ViTMAE

*PyTorch | TensorFlow | FlashAttention | SDPA*

ViTMAE is a self-supervised vision model pretrained by masking a large portion (~75%) of an image's patches. An encoder processes the visible patches, and a decoder reconstructs the missing pixels from the encoded patches and mask tokens. After pretraining, the decoder is discarded and the encoder is reused for downstream tasks like image classification and object detection, often outperforming models trained with supervised learning.

*MAE architecture. Taken from the [original paper](https://huggingface.co/papers/2111.06377).*

You can find all the original ViTMAE checkpoints under the [AI at Meta](https://huggingface.co/facebook) organization.
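
The masking fraction is exposed through the `mask_ratio` field on [`ViTMAEConfig`]. A minimal sketch of building a randomly initialized model with the default ~75% masking:

```py
from transformers import ViTMAEConfig, ViTMAEModel

# ~75% of image patches are hidden from the encoder during pretraining;
# the fraction is controlled by `mask_ratio` (0.75 is the default).
config = ViTMAEConfig(mask_ratio=0.75)
model = ViTMAEModel(config)
```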

> [!TIP]
> Click on the ViTMAE models in the right sidebar for more examples of how to apply ViTMAE to vision tasks.

The example below demonstrates how to reconstruct the missing pixels with the [`ViTMAEForPreTraining`] class.

```py
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTMAEForPreTraining

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download an example image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image (resize, normalize, patchify) into model inputs
processor = ViTImageProcessor.from_pretrained("facebook/vit-mae-base")
inputs = processor(image, return_tensors="pt").to(device)

model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base", attn_implementation="sdpa").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Per-patch pixel predictions with shape (batch_size, num_patches, patch_size**2 * num_channels)
reconstruction = outputs.logits
```
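
To view the result as an image, the per-patch predictions can be folded back into pixel space. A minimal sketch using the `unpatchify` helper on [`ViTMAEForPreTraining`]:

```py
# Fold (batch_size, num_patches, patch_size**2 * channels) predictions
# back into a (batch_size, channels, height, width) image tensor.
pixels = model.unpatchify(reconstruction)
print(pixels.shape)  # torch.Size([1, 3, 224, 224])

# outputs.mask marks each patch as masked (1) or visible (0), which is
# useful for blending reconstructed patches into the original image.
print(outputs.mask.shape)  # torch.Size([1, 196])
```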

## Notes

- ViTMAE is typically used in two stages: self-supervised pretraining with [`ViTMAEForPreTraining`], then discarding the decoder and fine-tuning the encoder. After fine-tuning, the encoder weights can be plugged into a model like [`ViTForImageClassification`], as shown in the sketch after this list.
- Use [`ViTImageProcessor`] for input preparation.
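
A minimal sketch of the second stage, assuming the encoder weights load directly from the `facebook/vit-mae-base` checkpoint (the decoder weights are skipped and the new classification head is randomly initialized, so `from_pretrained` warns about both):

```py
from transformers import ViTForImageClassification

# Reuse the MAE-pretrained encoder for classification.
# `num_labels=10` is a hypothetical value for a downstream dataset.
model = ViTForImageClassification.from_pretrained("facebook/vit-mae-base", num_labels=10)
# Fine-tune `model` as usual, e.g. with Trainer.
```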

## Resources

- Refer to this [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViTMAE/ViT_MAE_visualization_demo.ipynb) to learn how to visualize the reconstructed pixels from [`ViTMAEForPreTraining`].

## ViTMAEConfig

[[autodoc]] ViTMAEConfig

## ViTMAEModel

[[autodoc]] ViTMAEModel
    - forward

## ViTMAEForPreTraining

[[autodoc]] transformers.ViTMAEForPreTraining
    - forward

## TFViTMAEModel

[[autodoc]] TFViTMAEModel
    - call

## TFViTMAEForPreTraining

[[autodoc]] transformers.TFViTMAEForPreTraining
    - call