<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# TimeSformer
## Overview
The TimeSformer model was proposed in [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Facebook Research.
This work is a milestone in the action-recognition field, being the first video transformer. It inspired many transformer-based video understanding and classification papers.
The abstract from the paper is the following:
*We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: [this https URL](https://github.com/facebookresearch/TimeSformer).*
Tips:
There are many pretrained variants. Select your pretrained model based on the dataset it was trained on. Moreover, the number of input frames per clip varies with model size, so keep this parameter in mind when selecting a pretrained model.
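As a minimal sketch of video classification, the snippet below uses the `facebook/timesformer-base-finetuned-k400` checkpoint (assumed here; it is a Kinetics-400 variant expecting 8 input frames) together with a dummy clip. Replace the checkpoint with the variant matching your dataset and frame count:

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

# assumed checkpoint; pick the variant trained on your target dataset
# and note how many input frames per clip it expects
model_name = "facebook/timesformer-base-finetuned-k400"

processor = AutoImageProcessor.from_pretrained(model_name)
model = TimesformerForVideoClassification.from_pretrained(model_name)

# a dummy clip of 8 RGB frames of 224x224; in practice, sample frames
# from a real video instead
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```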
This model was contributed by [fcakyon](https://huggingface.co/fcakyon).
The original code can be found [here](https://github.com/facebookresearch/TimeSformer).
## TimesformerConfig
[[autodoc]] TimesformerConfig
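A short sketch, assuming the library's default hyperparameters (8 frames at 224x224 resolution with 16x16 patches), of building a randomly initialized model from a configuration:

```python
from transformers import TimesformerConfig, TimesformerModel

# configuration with default hyperparameters; by default the model
# expects 8 frames of 224x224 pixels split into 16x16 patches
configuration = TimesformerConfig()

# randomly initialized model from that configuration
model = TimesformerModel(configuration)

# the configuration can be accessed back from the model
configuration = model.config
```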
## TimesformerModel
[[autodoc]] TimesformerModel
- forward
## TimesformerForVideoClassification
[[autodoc]] TimesformerForVideoClassification
- forward