<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# BLIP

## Overview

The BLIP model was proposed in [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.

BLIP is a model that is able to perform various multi-modal tasks, including:
- Visual Question Answering
- Image-Text Retrieval (image-text matching)
- Image Captioning

The abstract from the paper is the following:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*

|
|
|
|
This model was contributed by [ybelkada](https://huggingface.co/ybelkada).
|
|
The original code can be found [here](https://github.com/salesforce/BLIP).
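
The snippet below is a minimal sketch of image captioning with [`BlipProcessor`] and [`BlipForConditionalGeneration`]. The `Salesforce/blip-image-captioning-base` checkpoint and the example image URL are illustrative choices, not requirements of the API.

```python
import requests
from PIL import Image

from transformers import BlipForConditionalGeneration, BlipProcessor

# Illustrative checkpoint; any BLIP captioning checkpoint should work here.
checkpoint = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(checkpoint)
model = BlipForConditionalGeneration.from_pretrained(checkpoint)

# Example image, downloaded on the fly; any RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Unconditional captioning: only pixel values are passed and `generate` decodes a caption.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# Conditional captioning: the generated caption continues the given text prefix.
inputs = processor(images=image, text="a photography of", return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```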

## BlipConfig

[[autodoc]] BlipConfig
    - from_text_vision_configs

## BlipTextConfig

[[autodoc]] BlipTextConfig

## BlipVisionConfig

[[autodoc]] BlipVisionConfig

## BlipProcessor

[[autodoc]] BlipProcessor

## BlipImageProcessor

[[autodoc]] BlipImageProcessor
    - preprocess

## BlipModel

[[autodoc]] BlipModel
    - forward
    - get_text_features
    - get_image_features

## BlipTextModel

[[autodoc]] BlipTextModel
    - forward

## BlipVisionModel

[[autodoc]] BlipVisionModel
    - forward

## BlipForConditionalGeneration

[[autodoc]] BlipForConditionalGeneration
    - forward

## BlipForImageTextRetrieval

[[autodoc]] BlipForImageTextRetrieval
    - forward

## BlipForQuestionAnswering

[[autodoc]] BlipForQuestionAnswering
    - forward
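
As a rough usage sketch for [`BlipForQuestionAnswering`] (the `Salesforce/blip-vqa-base` checkpoint below is an assumption, not the only option), visual question answering pairs an image with a question string and decodes a short free-form answer with `generate`:

```python
import requests
from PIL import Image

from transformers import BlipForQuestionAnswering, BlipProcessor

# Assumed VQA checkpoint; substitute the checkpoint you actually use.
checkpoint = "Salesforce/blip-vqa-base"
processor = BlipProcessor.from_pretrained(checkpoint)
model = BlipForQuestionAnswering.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# The processor tokenizes the question and preprocesses the image in one call.
inputs = processor(images=image, text="How many cats are in the picture?", return_tensors="pt")

# `generate` produces the answer tokens; decode them back to text.
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```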