<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Neighborhood Attention Transformer
## Overview

NAT was proposed in [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143)
by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self-attention pattern.

The abstract from the paper is the following:

*We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision.
NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a
linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA's
receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike
Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package
with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less
memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA
that boosts image classification and downstream vision performance. Experimental results on NAT are competitive;
NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9%
ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size.*

Tips:

- One can use the [`AutoImageProcessor`] API to prepare images for the model, as shown in the example below.
- NAT can be used as a *backbone*. When `output_hidden_states = True`,
it will output both `hidden_states` and `reshaped_hidden_states`.
The `reshaped_hidden_states` have a shape of `(batch_size, num_channels, height, width)` rather than
`(batch_size, height, width, num_channels)` (see the sketch after the notes below).
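
The snippet below is a minimal sketch of image classification with NAT. It assumes the `shi-labs/nat-mini-in1k-224` checkpoint is available on the Hub and that NATTEN is installed (see the notes below); any other NAT classification checkpoint should work the same way.

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, NatForImageClassification

# Load a sample image (a COCO validation image).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed checkpoint name; substitute your own NAT checkpoint if needed.
processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```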
Notes:

- NAT depends on [NATTEN](https://github.com/SHI-Labs/NATTEN/)'s implementation of Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to [shi-labs.com/natten](https://shi-labs.com/natten),
or build it on your system by running `pip install natten`.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
- Only a patch size of 4 is supported at the moment.
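
As referenced in the tips above, here is a minimal sketch of the backbone-style outputs. It assumes NATTEN is installed; the config values below are illustrative rather than the released defaults, so the model is small and randomly initialized.

```py
import torch
from transformers import NatConfig, NatModel

# Build a small randomly initialized NAT instead of downloading weights;
# embed_dim, depths, and num_heads are illustrative values.
config = NatConfig(embed_dim=32, depths=[1, 1], num_heads=[2, 4])
model = NatModel(config)
model.eval()

pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

# `hidden_states` are (batch_size, height, width, num_channels);
# `reshaped_hidden_states` are (batch_size, num_channels, height, width).
for hs, rhs in zip(outputs.hidden_states, outputs.reshaped_hidden_states):
    print(tuple(hs.shape), "->", tuple(rhs.shape))
```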

<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/neighborhood-attention-pattern.jpg"
alt="drawing" width="600"/>

<small> Neighborhood Attention compared to other attention patterns.
Taken from the <a href="https://arxiv.org/abs/2204.07143">original paper</a>.</small>

This model was contributed by [Ali Hassani](https://huggingface.co/alihassanijr).
The original code can be found [here](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).

## NatConfig

[[autodoc]] NatConfig

## NatModel

[[autodoc]] NatModel
    - forward

## NatForImageClassification

[[autodoc]] NatForImageClassification
    - forward