transformers/docs/source/en/model_doc/blip.md

# BLIP

## Overview

The BLIP model was proposed in [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.

BLIP is a model that is able to perform various multi-modal tasks, including the following (a minimal captioning example is shown after the list):

- Visual Question Answering
- Image-text retrieval (image-text matching)
- Image Captioning
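
As a quick illustration of the captioning task, here is a minimal sketch using the publicly available `Salesforce/blip-image-captioning-base` checkpoint (any BLIP captioning checkpoint should work the same way):

```python
import requests
from PIL import Image

from transformers import BlipForConditionalGeneration, BlipProcessor

# Load a BLIP captioning checkpoint from the Hub
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Fetch an example image (two cats on a couch, from the COCO validation set)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and generate a caption
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```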

The abstract from the paper is the following:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*

*(Figure: BLIP.gif)*

This model was contributed by [ybelkada](https://huggingface.co/ybelkada). The original code can be found [here](https://github.com/salesforce/BLIP).

## Resources

- Jupyter notebook on how to fine-tune BLIP for image captioning on a custom dataset

## BlipConfig

[[autodoc]] BlipConfig
    - from_text_vision_configs

## BlipTextConfig

[[autodoc]] BlipTextConfig

## BlipVisionConfig

[[autodoc]] BlipVisionConfig

## BlipProcessor

[[autodoc]] BlipProcessor

## BlipImageProcessor

[[autodoc]] BlipImageProcessor
    - preprocess

## BlipImageProcessorFast

[[autodoc]] BlipImageProcessorFast
    - preprocess
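
`BlipImageProcessorFast` is the torchvision-backed counterpart of `BlipImageProcessor` and returns torch tensors directly. A minimal sketch, assuming the `Salesforce/blip-image-captioning-base` checkpoint; the `device` keyword for on-GPU preprocessing is an option of the fast class only:

```python
import requests
from PIL import Image

from transformers import BlipImageProcessorFast

# The fast processor can also be obtained via
# AutoImageProcessor.from_pretrained(..., use_fast=True)
processor = BlipImageProcessorFast.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Pass device="cuda" here to run the torchvision transforms on GPU
# (assumes a CUDA device is available)
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 384, 384])
```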

## BlipModel

`BlipModel` is going to be deprecated in future versions. Please use `BlipForConditionalGeneration`, `BlipForImageTextRetrieval` or `BlipForQuestionAnswering` depending on your use case.

[[autodoc]] BlipModel
    - forward
    - get_text_features
    - get_image_features

## BlipTextModel

[[autodoc]] BlipTextModel
    - forward

## BlipVisionModel

[[autodoc]] BlipVisionModel
    - forward

## BlipForConditionalGeneration

[[autodoc]] BlipForConditionalGeneration
    - forward

## BlipForImageTextRetrieval

[[autodoc]] BlipForImageTextRetrieval
    - forward
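
A minimal image-text matching sketch, assuming the `Salesforce/blip-itm-base-coco` checkpoint and the `itm_score` field of the model output:

```python
import requests
import torch
from PIL import Image

from transformers import BlipForImageTextRetrieval, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Score how well the caption matches the image
inputs = processor(images=image, text="two cats sleeping on a couch", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# itm_score holds [no-match, match] logits; softmax gives a match probability
probs = outputs.itm_score.softmax(dim=-1)
print(f"match probability: {probs[0, 1]:.3f}")
```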

## BlipForQuestionAnswering

[[autodoc]] BlipForQuestionAnswering
    - forward
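
For visual question answering, the answer is decoded with `generate`. A minimal sketch, assuming the `Salesforce/blip-vqa-base` checkpoint:

```python
import requests
from PIL import Image

from transformers import BlipForQuestionAnswering, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Encode the image together with the question, then generate the answer
inputs = processor(images=image, text="how many cats are in the picture?", return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))  # e.g. "2"
```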

## TFBlipModel

[[autodoc]] TFBlipModel
    - call
    - get_text_features
    - get_image_features

## TFBlipTextModel

[[autodoc]] TFBlipTextModel
    - call

## TFBlipVisionModel

[[autodoc]] TFBlipVisionModel
    - call

## TFBlipForConditionalGeneration

[[autodoc]] TFBlipForConditionalGeneration
    - call

## TFBlipForImageTextRetrieval

[[autodoc]] TFBlipForImageTextRetrieval
    - call

## TFBlipForQuestionAnswering

[[autodoc]] TFBlipForQuestionAnswering
    - call