# Video Vision Transformer (ViViT)

## Overview
The ViViT model was proposed in ViViT: A Video Vision Transformer by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. The paper proposes one of the first successful pure-transformer based models for video understanding.
The abstract from the paper is the following:
We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.
This model was contributed by jegormeister. The original code (written in JAX) can be found here.
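The sketch below shows a typical video-classification workflow with this checkpoint: frames are preprocessed with `VivitImageProcessor` and classified with `VivitForVideoClassification`. The random 32-frame clip is a stand-in for a real decoded video, and the 32-frame clip length is an assumption matching this checkpoint's default configuration.

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

# 32 dummy RGB frames of shape (height, width, channels); replace with frames decoded from a real video.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(32)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```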
## Using Scaled Dot Product Attention (SDPA)

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the official documentation or the GPU Inference page for more information.

SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
```python
import torch
from transformers import VivitModel

model = VivitModel.from_pretrained("google/vivit-b-16x2-kinetics400", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```
For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).
On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and the `google/vivit-b-16x2-kinetics400` model, we saw the following speedups during training and inference.
### Training

| num_training_steps | batch_size | is cuda | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) |
|---|---|---|---|---|---|---|
| 100 | 1 | True | 7.122 | 2575.28 | 5932.54 | 130.364 |
### Inference

| num_batches | batch_size | is cuda | is half | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) |
|---|---|---|---|---|---|---|---|
| 20 | 1 | True | False | 15.422 | 715.807 | 317.079 | 125.75 |
| 20 | 2 | True | False | 17.146 | 1234.75 | 447.175 | 176.122 |
| 20 | 4 | True | False | 18.093 | 2275.82 | 709.864 | 220.6 |
| 20 | 8 | True | False | 19.284 | 4358.19 | 1233.24 | 253.393 |
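The exact benchmark script is not reproduced here, but a rough sketch of how eager and SDPA inference latency could be compared is shown below; the timing loop, warm-up strategy, and input sizes are assumptions rather than the setup used for the numbers above.

```python
import time
import torch
from transformers import VivitModel

def time_model(attn_implementation, num_batches=20, batch_size=1):
    model = VivitModel.from_pretrained(
        "google/vivit-b-16x2-kinetics400",
        attn_implementation=attn_implementation,
    ).to("cuda").eval()
    pixel_values = torch.randn(batch_size, 32, 3, 224, 224, device="cuda")
    with torch.no_grad():
        model(pixel_values=pixel_values)  # warm-up pass
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(num_batches):
            model(pixel_values=pixel_values)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / num_batches

eager_latency = time_model("eager")
sdpa_latency = time_model("sdpa")
print(f"Speedup: {(eager_latency / sdpa_latency - 1) * 100:.1f}%")
```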
## VivitConfig

[[autodoc]] VivitConfig

## VivitImageProcessor

[[autodoc]] VivitImageProcessor
    - preprocess

## VivitModel

[[autodoc]] VivitModel
    - forward

## VivitForVideoClassification

[[autodoc]] transformers.VivitForVideoClassification
    - forward