<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# InstructBLIP

<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

## Overview

The InstructBLIP model was proposed in [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://huggingface.co/papers/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the [BLIP-2](blip2) architecture for visual instruction tuning.

The abstract from the paper is the following:

*General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg" alt="drawing" width="600"/>

<small> InstructBLIP architecture. Taken from the <a href="https://huggingface.co/papers/2305.06500">original paper</a>. </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/salesforce/LAVIS/tree/main/projects/instructblip).

## Usage tips

InstructBLIP uses the same architecture as [BLIP-2](blip2) with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.

> [!NOTE]
> BLIP models after release v4.46 will raise warnings about adding `processor.num_query_tokens = {{num_query_tokens}}` and expanding the model's embedding layer to add the special `<image>` token. It is strongly recommended to add these attributes to the processor if you own the model checkpoint, or to open a PR if you do not own it. Adding these attributes means that BLIP will add the number of query tokens required per image and expand the text with as many `<image>` placeholders as there will be query tokens. This usually amounts to around 500 tokens per image, so make sure the text is not truncated, as otherwise merging the embeddings will fail.
> The attributes can be obtained from the model config as `model.config.num_query_tokens`, and the model embeddings can be expanded by following [this link](https://gist.github.com/zucchini-nlp/e9f20b054fa322f84ac9311d9ab67042); a sketch of both steps is shown below.

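
Below is a minimal sketch of that checkpoint update, assuming the `Salesforce/instructblip-vicuna-7b` checkpoint and that the `<image>` token is not yet in its tokenizer; it is not the exact script from the linked gist, and the `image_token_index` config attribute name is an assumption that may differ across versions.

```python
from transformers import AddedToken, InstructBlipForConditionalGeneration, InstructBlipProcessor

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")

# Tell the processor how many `<image>` placeholders to insert per image.
processor.num_query_tokens = model.config.num_query_tokens

# Add the special `<image>` token and resize the language model embeddings to match.
image_token = AddedToken("<image>", normalized=False, special=True)
processor.tokenizer.add_tokens([image_token], special_tokens=True)
model.resize_token_embeddings(len(processor.tokenizer), pad_to_multiple_of=64)

# Assumed config attribute: record the id of the newly added `<image>` token (the exact name may differ).
model.config.image_token_index = len(processor.tokenizer) - 1

# The updated processor and model can then be saved with `save_pretrained` or pushed to the Hub.
```
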
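
For inference, the model is used through [`InstructBlipProcessor`] and [`InstructBlipForConditionalGeneration`]. The snippet below is a minimal sketch that assumes the `Salesforce/instructblip-vicuna-7b` checkpoint and an example COCO image; any RGB image and instruction can be substituted.

```python
import requests
import torch
from PIL import Image

from transformers import InstructBlipForConditionalGeneration, InstructBlipProcessor

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16, device_map="auto"
)

# Example image: two cats lying on a couch (COCO validation set).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "Describe the image in detail."

# The processor tokenizes the instruction for both the Q-Former and the language model.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

generated_ids = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```
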
## InstructBlipConfig

[[autodoc]] InstructBlipConfig
    - from_vision_qformer_text_configs

## InstructBlipVisionConfig

[[autodoc]] InstructBlipVisionConfig

## InstructBlipQFormerConfig

[[autodoc]] InstructBlipQFormerConfig

## InstructBlipProcessor

[[autodoc]] InstructBlipProcessor

## InstructBlipVisionModel

[[autodoc]] InstructBlipVisionModel
    - forward

## InstructBlipQFormerModel

[[autodoc]] InstructBlipQFormerModel
    - forward

## InstructBlipModel

[[autodoc]] InstructBlipModel

## InstructBlipForConditionalGeneration

[[autodoc]] InstructBlipForConditionalGeneration
    - forward
    - generate