<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# ViLT

## Overview

The ViLT model was proposed in [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334)
by Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design
for Vision-and-Language Pre-training (VLP).

The abstract from the paper is the following:

*Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks.
Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision
(e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we
find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more
computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive
power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model,
Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically
simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of
times faster than previous VLP models, yet with competitive or better downstream task performance.*

Tips:

- ViLT is a model that takes both `pixel_values` and `input_ids` as input. One can use [`ViltProcessor`] to prepare data for the model
(a full usage sketch is shown at the end of this section). This processor wraps a feature extractor (for the image modality) and a tokenizer
(for the language modality) into one.
- ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to
under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a `pixel_mask` that indicates
which pixel values are real and which are padding. [`ViltProcessor`] automatically creates this for you, as the sketch below illustrates.
- The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes
additional embedding layers for the language modality.

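To make the `pixel_mask` behavior concrete, below is a minimal sketch of batching two differently sized images with [`ViltProcessor`]. The checkpoint name and the dummy image sizes are illustrative assumptions, not something prescribed by the model.

```python
# Illustrative sketch: the processor pads a batch of differently sized images
# and returns a matching `pixel_mask`. Checkpoint name and image sizes are assumptions.
from PIL import Image
from transformers import ViltProcessor

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")

# Two dummy RGB images with different resolutions
images = [Image.new("RGB", (400, 300)), Image.new("RGB", (640, 480))]
texts = ["a photo of a cat", "a photo of a dog"]

# The feature extractor resizes each image and pads the batch to a common size,
# while the tokenizer encodes the text; everything comes back in one BatchEncoding
encoding = processor(images=images, text=texts, padding=True, return_tensors="pt")

print(encoding["pixel_values"].shape)  # (2, 3, padded_height, padded_width)
print(encoding["pixel_mask"].shape)    # (2, padded_height, padded_width): 1 = real pixel, 0 = padding
```

Passing the mask to the model is handled automatically when you call `model(**encoding)`, since `pixel_mask` is one of the accepted forward arguments.
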
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vilt_architecture.jpg"
alt="drawing" width="600"/>

<small> ViLT architecture. Taken from the <a href="https://arxiv.org/abs/2102.03334">original paper</a>. </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/dandelin/ViLT).

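As an end-to-end illustration of the processor-plus-model workflow described in the tips above, here is a short visual question answering sketch. It assumes the `dandelin/vilt-b32-finetuned-vqa` checkpoint and a publicly hosted COCO image; swap in your own checkpoint and data as needed.

```python
# A short inference sketch, assuming the `dandelin/vilt-b32-finetuned-vqa` checkpoint
# and a publicly hosted COCO image; both are illustrative choices.
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# Prepare both modalities (pixel_values, pixel_mask, input_ids, ...) in a single call
encoding = processor(image, question, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

# The question answering head is a classifier over the answer vocabulary
predicted_idx = outputs.logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[predicted_idx])
```

The task-specific heads documented below generally follow the same pattern: prepare the inputs with [`ViltProcessor`], run a forward pass, and read the task-specific fields of the returned output.
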
## ViltConfig

[[autodoc]] ViltConfig

## ViltFeatureExtractor

[[autodoc]] ViltFeatureExtractor
    - __call__

## ViltProcessor

[[autodoc]] ViltProcessor
    - __call__

## ViltModel

[[autodoc]] ViltModel
    - forward

## ViltForMaskedLM

[[autodoc]] ViltForMaskedLM
    - forward

## ViltForQuestionAnswering

[[autodoc]] ViltForQuestionAnswering
    - forward

## ViltForImagesAndTextClassification

[[autodoc]] ViltForImagesAndTextClassification
    - forward

## ViltForImageAndTextRetrieval

[[autodoc]] ViltForImageAndTextRetrieval
    - forward