mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-31 10:12:23 +06:00
[docs] Backbone (#28739)
* backbones
* fix path
* fix paths
* fix code snippet
* fix links
This commit is contained in:
parent
23ea6743f2
commit
abbffc4525
@@ -65,6 +65,48 @@ For vision tasks, an image processor processes the image into the correct input
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```

## AutoBackbone

<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Swin%20Stages.png">
<figcaption class="mt-2 text-center text-sm text-gray-500">A Swin backbone with multiple stages for outputting a feature map.</figcaption>
</div>

The [`AutoBackbone`] lets you use pretrained models as backbones to get feature maps from different stages of the backbone. You should specify one of the following parameters in [`~PretrainedConfig.from_pretrained`]:

* `out_indices` is the index (or indices) of the layer you'd like to get the feature map from
* `out_features` is the name (or names) of the layer you'd like to get the feature map from

These parameters can be used interchangeably, but if you use both, make sure they're aligned with each other! If you don't pass any of these parameters, the backbone returns the feature map from the last layer.

<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Swin%20Stage%201.png">
<figcaption class="mt-2 text-center text-sm text-gray-500">A feature map from the first stage of the backbone. The patch partition refers to the model stem.</figcaption>
</div>

For example, in the above diagram, to return the feature map from the first stage of the Swin backbone, you can set `out_indices=(1,)`:

```py
>>> from transformers import AutoImageProcessor, AutoBackbone
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
>>> model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_indices=(1,))

>>> inputs = processor(image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> feature_maps = outputs.feature_maps
```

Now you can access the `feature_maps` object from the first stage of the backbone:

```py
>>> list(feature_maps[0].shape)
[1, 96, 56, 56]
```
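
Since `out_indices` and `out_features` can be used interchangeably, the same feature map can also be requested by stage name. The following is a minimal sketch, assuming the first Swin stage is registered under the name `"stage1"`:

```py
>>> model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_features=["stage1"])
>>> outputs = model(**inputs)
>>> list(outputs.feature_maps[0].shape)
[1, 96, 56, 56]
```
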
## AutoFeatureExtractor

@@ -142,24 +184,3 @@ Easily reuse the same checkpoint to load an architecture for a different task:
Generally, we recommend using the `AutoTokenizer` class and the `TFAutoModelFor` class to load pretrained instances of models. This will ensure you load the correct architecture every time. In the next [tutorial](preprocessing), learn how to use your newly loaded tokenizer, image processor, feature extractor and processor to preprocess a dataset for fine-tuning.
</tf>
</frameworkcontent>

## AutoBackbone

`AutoBackbone` lets you use pretrained models as backbones and get feature maps as outputs from different stages of the models. Below you can see how to get feature maps from a [Swin](model_doc/swin) checkpoint.

```py
>>> from transformers import AutoImageProcessor, AutoBackbone
>>> import torch
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
>>> model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_indices=(0,))

>>> inputs = processor(image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> feature_maps = outputs.feature_maps
>>> list(feature_maps[-1].shape)
[1, 96, 56, 56]
```

@@ -249,7 +249,7 @@ By default, [`AutoTokenizer`] will try to load a fast tokenizer. You can disable

</Tip>

## Image Processor
## Image processor

An image processor processes vision inputs. It inherits from the base [`~image_processing_utils.ImageProcessingMixin`] class.

@@ -311,7 +311,73 @@ ViTImageProcessor {
}
```

## Feature Extractor
## Backbone

<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Backbone.png">
</div>

Computer vision models consist of a backbone, neck, and head. The backbone extracts features from an input image, the neck combines and enhances the extracted features, and the head is used for the main task (e.g., object detection). Start by initializing a backbone in the model config and specify whether you want to load pretrained weights or load randomly initialized weights. Then you can pass the model config to the model head.

For example, to load a [ResNet](../model_doc/resnet) backbone into a [MaskFormer](../model_doc/maskformer) model with an instance segmentation head:

<hfoptions id="backbone">
<hfoption id="pretrained weights">

Set `use_pretrained_backbone=True` to load pretrained ResNet weights for the backbone.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig

config = MaskFormerConfig(backbone="microsoft/resnet-50", use_pretrained_backbone=True) # backbone and neck config
model = MaskFormerForInstanceSegmentation(config) # head
```

You could also load the backbone config separately and then pass it to the model config.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig

backbone_config = ResNetConfig.from_pretrained("microsoft/resnet-50")
config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```

</hfoption>
<hfoption id="random weights">

Set `use_pretrained_backbone=False` to randomly initialize a ResNet backbone.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig

config = MaskFormerConfig(backbone="microsoft/resnet-50", use_pretrained_backbone=False) # backbone and neck config
model = MaskFormerForInstanceSegmentation(config) # head
```

You could also load the backbone config separately and then pass it to the model config.

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig

backbone_config = ResNetConfig()
config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```

</hfoption>
</hfoptions>

[timm](https://hf.co/docs/timm/index) models are loaded with [`TimmBackbone`] and [`TimmBackboneConfig`].

```python
from transformers import TimmBackboneConfig, TimmBackbone

backbone_config = TimmBackboneConfig("resnet50")
model = TimmBackbone(config=backbone_config)
```
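
Like other backbones, a timm backbone returns feature maps when it is called on pixel values. The snippet below is a rough sketch of that usage; the dummy input and its shape are illustrative assumptions rather than values from the docs.

```python
import torch

# sketch: run a dummy batch of pixel values through the timm backbone
pixel_values = torch.randn(1, 3, 224, 224)
outputs = model(pixel_values)
feature_maps = outputs.feature_maps  # one feature map per returned stage
```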

## Feature extractor

A feature extractor processes audio inputs. It inherits from the base [`~feature_extraction_utils.FeatureExtractionMixin`] class, and may also inherit from the [`SequenceFeatureExtractor`] class for processing audio inputs.

@@ -357,7 +423,6 @@ Wav2Vec2FeatureExtractor {
}
```

## Processor

For models that support multimodal tasks, 🤗 Transformers offers a processor class that conveniently wraps processing classes such as a feature extractor and a tokenizer into a single object. For example, let's use the [`Wav2Vec2Processor`] for an automatic speech recognition task (ASR). ASR transcribes audio to text, so you will need a feature extractor and a tokenizer.
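
As a quick sketch, a ready-made processor can also be loaded from a checkpoint that ships both components; here we assume the `facebook/wav2vec2-base-960h` checkpoint, but any Wav2Vec2 checkpoint with a tokenizer and feature extractor would work:

```py
>>> from transformers import Wav2Vec2Processor

>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
```
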
@@ -14,86 +14,47 @@ rendered properly in your Markdown viewer.

-->

# Backbones
# Backbone

Backbones are models used for feature extraction for computer vision tasks. A model can be used as a backbone in two ways:
A backbone is a model used for feature extraction for higher-level computer vision tasks such as object detection and image classification. Transformers provides an [`AutoBackbone`] class for initializing a Transformers backbone from pretrained model weights, and two utility classes:

* initializing the `AutoBackbone` class with a pretrained model,
* initializing a supported backbone configuration and passing it to the model architecture.
* [`~utils.backbone_utils.BackboneMixin`] enables initializing a backbone from Transformers or [timm](https://hf.co/docs/timm/index) and includes functions for returning the output features and indices.
* [`~utils.backbone_utils.BackboneConfigMixin`] sets the output features and indices of the backbone configuration.
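
For instance, a backbone-compatible configuration accepts `out_features` (or `out_indices`) directly; the mixin keeps the two aligned. A minimal sketch, assuming a ResNet config whose stages are named `stage1` through `stage4`:

```py
from transformers import ResNetConfig

# request feature maps by stage name; out_indices is derived to match
config = ResNetConfig(out_features=["stage2", "stage4"])
```
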
## Using AutoBackbone
[timm](https://hf.co/docs/timm/index) models are loaded with the [`TimmBackbone`] and [`TimmBackboneConfig`] classes.

You can use the `AutoBackbone` class to initialize a model as a backbone and get the feature maps for any stage. You can define `out_indices` to indicate the indices of the layers you would like to get feature maps from. You can also use `out_features` if you know the names of the layers. You can use them interchangeably. If you are using both `out_indices` and `out_features`, ensure they are consistent. Not passing any of the feature map arguments will make the backbone yield the feature maps of the last layer.
To visualize what the stages look like, let's take the Swin model. Each stage is responsible for feature extraction, outputting feature maps.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Swin%20Stages.png">
</div>
Backbones are supported for the following models:

The feature maps of the first stage look like below.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Swin%20Stage%201.png">
</div>
* [BEiT](../model_doc/beit)
* [BiT](../model_doc/bit)
* [ConvNeXt](../model_doc/convnext)
* [ConvNextV2](../model_doc/convnextv2)
* [DiNAT](../model_doc/dinat)
* [DINOV2](../model_doc/dinov2)
* [FocalNet](../model_doc/focalnet)
* [MaskFormer](../model_doc/maskformer)
* [NAT](../model_doc/nat)
* [ResNet](../model_doc/resnet)
* [Swin Transformer](../model_doc/swin)
* [Swin Transformer v2](../model_doc/swinv2)
* [ViTDet](../model_doc/vitdet)

Let's see this with an example. Note that `out_indices=(0,)` yields the stem of the model. The stem refers to the stage before the first feature extraction stage; in the above diagram, it is the patch partition. We would like to get the feature maps from the stem, first, and second stages of the model.
```py
>>> from transformers import AutoImageProcessor, AutoBackbone
>>> import torch
>>> from PIL import Image
>>> import requests
## AutoBackbone

>>> processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
>>> model = AutoBackbone.from_pretrained("microsoft/swin-tiny-patch4-window7-224", out_indices=(0,1,2))
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
[[autodoc]] AutoBackbone

>>> inputs = processor(image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> feature_maps = outputs.feature_maps
```
The `feature_maps` object now has three feature maps, each of which can be accessed as shown below. Say we would like to get the feature map of the stem.
```python
>>> list(feature_maps[0].shape)
[1, 96, 56, 56]
```
## BackboneMixin

We can get the feature maps of the first and second stages as shown below.
```python
>>> list(feature_maps[1].shape)
[1, 96, 56, 56]
>>> list(feature_maps[2].shape)
[1, 192, 28, 28]
```
[[autodoc]] utils.backbone_utils.BackboneMixin

## Initializing Backbone Configuration
## BackboneConfigMixin

In computer vision, models consist of a backbone, a neck, and a head. The backbone extracts the features, the neck enhances the extracted features, and the head is used for the main task (e.g., object detection).
[[autodoc]] utils.backbone_utils.BackboneConfigMixin

<div style="text-align: center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/Backbone.png">
</div>
## TimmBackbone
## TimmBackbone

You can initialize such a multi-stage model with the Backbone API. First, initialize the config of the backbone of your choice. Initialize the neck config by passing the backbone config into it. Then, initialize the head with the neck's config. To illustrate this, below you can see how to initialize the [MaskFormer](../model_doc/maskformer) model with an instance segmentation head and a [ResNet](../model_doc/resnet) backbone.
[[autodoc]] models.timm_backbone.TimmBackbone

```py
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig
## TimmBackboneConfig

backbone_config = ResNetConfig.from_pretrained("microsoft/resnet-50")
config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```
You can also initialize a backbone with random weights and use it to initialize the model neck.

```py
backbone_config = ResNetConfig()
config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```

`timm` models are also supported in transformers through `TimmBackbone` and `TimmBackboneConfig`.

```python
from transformers import TimmBackboneConfig, TimmBackbone

backbone_config = TimmBackboneConfig("resnet50")
model = TimmBackbone(config=backbone_config)
```
[[autodoc]] models.timm_backbone.TimmBackboneConfig