# Convolutional Vision Transformer (CvT)

*Available in PyTorch and TensorFlow.*

Convolutional Vision Transformer (CvT) is a model that combines the strengths of convolutional neural networks (CNNs) and Vision Transformers for computer vision tasks. It introduces convolutional layers into the Vision Transformer architecture, allowing it to capture local patterns in images while preserving the global context provided by self-attention.
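The convolution enters the model through per-stage token embedding and projection settings exposed on [`CvtConfig`]. The minimal sketch below prints how each of the three stages downsamples the image and widens the embedding; the per-stage attribute names (`patch_sizes`, `patch_stride`, `embed_dim`, `num_heads`, `depth`) are assumptions based on the library's `CvtConfig` defaults, so verify them against the config reference below.

```py
from transformers import CvtConfig

# The default configuration mirrors the CvT-13 architecture.
# Attribute names are assumptions based on CvtConfig's per-stage defaults.
config = CvtConfig()
for stage in range(len(config.depth)):
    print(
        f"stage {stage}: "
        f"{config.patch_sizes[stage]}x{config.patch_sizes[stage]} conv embedding "
        f"with stride {config.patch_stride[stage]}, "
        f"embed dim {config.embed_dim[stage]}, "
        f"{config.num_heads[stage]} heads, {config.depth[stage]} blocks"
    )
```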

You can find all the CvT checkpoints under the [Microsoft](https://huggingface.co/microsoft) organization.

> [!TIP]
> This model was contributed by anujunj.
>
> Click on the CvT models in the right sidebar for more examples of how to apply CvT to different computer vision tasks.

The example below demonstrates how to classify an image with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="microsoft/cvt-13",
    torch_dtype=torch.float16,
    device=0
)
pipeline(images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

</hfoption>
<hfoption id="AutoModel">
```py
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/cvt-13",
    torch_dtype=torch.float16,
    device_map="auto"
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# move the inputs to the model's device and cast them to float16 to match the model weights
inputs = image_processor(image, return_tensors="pt").to(model.device, dtype=model.dtype)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()

class_labels = model.config.id2label
predicted_class_label = class_labels[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```

</hfoption>
</hfoptions>

## Resources

Refer to this set of ViT notebooks for examples of inference and fine-tuning on custom datasets. Replace [`ViTFeatureExtractor`] and [`ViTForImageClassification`] in these notebooks with [`AutoImageProcessor`] and [`CvtForImageClassification`], as sketched below.
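As a starting point for adapting those notebooks, the sketch below shows one way to load CvT for fine-tuning on a new label set. The label names are hypothetical placeholders, and `ignore_mismatched_sizes=True` replaces the pretrained 1000-class head with a freshly initialized one sized to your dataset.

```py
from transformers import AutoImageProcessor, CvtForImageClassification

# Hypothetical labels; replace with the classes of your own dataset.
labels = ["cat", "dog"]

image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained(
    "microsoft/cvt-13",
    num_labels=len(labels),
    id2label={i: label for i, label in enumerate(labels)},
    label2id={label: i for i, label in enumerate(labels)},
    ignore_mismatched_sizes=True,  # drop the pretrained ImageNet classification head
)
```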

## CvtConfig

[[autodoc]] CvtConfig

## CvtModel

[[autodoc]] CvtModel
    - forward

## CvtForImageClassification

[[autodoc]] CvtForImageClassification
    - forward

## TFCvtModel

[[autodoc]] TFCvtModel
    - call

## TFCvtForImageClassification

[[autodoc]] TFCvtForImageClassification
    - call