
PyTorch

MobileNet V2

MobileNet V2 improves performance on mobile devices with a more efficient architecture. Its inverted residual blocks and linear bottlenecks start from a small representation of the data, expand it for processing, and shrink it back down to reduce the number of computations. The model also removes the non-linearities from these narrow bottleneck layers to maintain accuracy despite the simplified design. Like MobileNet V1, it uses depthwise separable convolutions for efficiency.
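
The expand-process-project pattern is easy to see in code. The block below is a minimal, illustrative sketch of an inverted residual block written in plain PyTorch, not the module Transformers uses internally; the expansion factor of 6 and the batch normalization placement are assumptions based on the original paper.

```py
import torch
from torch import nn

class InvertedResidual(nn.Module):
    """Illustrative MobileNetV2-style inverted residual block (not the Transformers implementation)."""

    def __init__(self, in_channels, out_channels, stride=1, expand_ratio=6):
        super().__init__()
        hidden_dim = in_channels * expand_ratio
        # the residual connection is only used when the block keeps the spatial size and channel count
        self.use_residual = stride == 1 and in_channels == out_channels
        self.block = nn.Sequential(
            # 1x1 convolution expands to a wider representation
            nn.Conv2d(in_channels, hidden_dim, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden_dim),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution processes each channel separately
            nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, stride=stride, padding=1, groups=hidden_dim, bias=False),
            nn.BatchNorm2d(hidden_dim),
            nn.ReLU6(inplace=True),
            # 1x1 linear bottleneck projects back down, with no non-linearity afterwards
            nn.Conv2d(hidden_dim, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

block = InvertedResidual(in_channels=32, out_channels=32)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```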

You can find all the original MobileNet V2 checkpoints under the Google organization.

Tip

Click on the MobileNet V2 models in the right sidebar for more examples of how to apply MobileNet to different vision tasks.

The examples below demonstrate how to classify an image with [Pipeline] or the [AutoModel] class.

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="google/mobilenet_v2_1.4_224",
    torch_dtype=torch.float16,
    device=0
)
pipeline(images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```
```py
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

# load the image processor and classification model from the same checkpoint
image_processor = AutoImageProcessor.from_pretrained(
    "google/mobilenet_v2_1.4_224",
)
model = AutoModelForImageClassification.from_pretrained(
    "google/mobilenet_v2_1.4_224",
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()

class_labels = model.config.id2label
predicted_class_label = class_labels[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```

Notes

  • Classification checkpoint names follow the pattern mobilenet_v2_{depth_multiplier}_{resolution}, like mobilenet_v2_1.4_224. 1.4 is the depth multiplier and 224 is the image resolution. Segmentation checkpoint names follow the pattern deeplabv3_mobilenet_v2_{depth_multiplier}_{resolution}.
  • Although the model is trained on images of a specific size, the architecture works with images of different sizes (minimum 32x32). The [MobileNetV2ImageProcessor] handles the necessary preprocessing.
  • MobileNet is pretrained on ImageNet-1k, a dataset with 1000 classes. However, the model actually predicts 1001 classes. The additional class is an extra "background" class (index 0), as shown in the first sketch after this list.
  • The segmentation models use a DeepLabV3+ head which is often pretrained on datasets like PASCAL VOC. See the segmentation sketch after this list.
  • The original TensorFlow checkpoints determine the padding amount at inference because it depends on the input image size. To use the native PyTorch padding behavior, set tf_padding=False in [MobileNetV2Config].

    ```py
    from transformers import MobileNetV2Config

    config = MobileNetV2Config.from_pretrained("google/mobilenet_v2_1.4_224", tf_padding=False)
    ```

  • The Transformers implementation differs from the original implementation in the following ways.
    • It uses global average pooling instead of the optional 7x7 average pooling with stride 2. For larger inputs, this gives a pooled output that is larger than a 1x1 pixel.
    • output_hidden_states=True returns all intermediate hidden states. It is not possible to extract the output from a specific layer only for other downstream purposes.
    • It does not include the quantized models from the original checkpoints because they contain "FakeQuantization" operations to unquantize the weights.
    • For segmentation models, the final convolution layer of the backbone is computed even though the DeepLabV3+ head doesn't use it.
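
The extra background class mentioned above is easy to verify by inspecting the label mapping of a classification checkpoint. This is a small sketch assuming the google/mobilenet_v2_1.4_224 checkpoint from the examples above.

```py
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.4_224")
# 1001 labels: the 1000 ImageNet-1k classes plus an extra "background" class at index 0
print(len(model.config.id2label))
print(model.config.id2label[0])
```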
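
For the segmentation checkpoints, [MobileNetV2ForSemanticSegmentation] returns per-pixel class logits that the image processor can resize back to the input resolution. This is a minimal sketch assuming the google/deeplabv3_mobilenet_v2_1.0_513 checkpoint, which follows the segmentation naming pattern described above.

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation

checkpoint = "google/deeplabv3_mobilenet_v2_1.0_513"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# upsample the per-pixel logits to the original image size and take the argmax per pixel
segmentation_map = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation_map.shape)  # (height, width) tensor of predicted class indices
```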

MobileNetV2Config

autodoc MobileNetV2Config

MobileNetV2FeatureExtractor

autodoc MobileNetV2FeatureExtractor - preprocess - post_process_semantic_segmentation

MobileNetV2ImageProcessor

autodoc MobileNetV2ImageProcessor - preprocess

MobileNetV2ImageProcessorFast

autodoc MobileNetV2ImageProcessorFast - preprocess - post_process_semantic_segmentation

MobileNetV2Model

autodoc MobileNetV2Model - forward

MobileNetV2ForImageClassification

autodoc MobileNetV2ForImageClassification - forward

MobileNetV2ForSemanticSegmentation

autodoc MobileNetV2ForSemanticSegmentation - forward