Updated aya_vision.md (#38749)

* Update aya_vision.md
* Suggested changes made to aya_vision.md
* Quantization Example added - aya_vision.md
* Polished - aya_vision.md
* Update aya_vision.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

parent 5ab0f447ab · commit 64e9b049d9

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# Aya Vision

[Aya Vision](https://huggingface.co/papers/2505.08751) is a family of open-weight multimodal vision-language models from Cohere Labs. The models are trained with a synthetic annotation framework that generates high-quality multilingual image captions, improving the quality of Aya Vision's generated responses, and a cross-modal model merging technique is used to prevent the models from losing their text capabilities after vision capabilities are added. Aya Vision 8B pairs the Siglip2-so400-384-14 vision encoder with the CommandR-7B language model post-trained with the Aya Expanse recipe, while Aya Vision 32B uses Aya Expanse 32B as its language model, so both models can understand images and generate text in 23 languages.

You can find all the original Aya Vision checkpoints under the [Aya Vision](https://huggingface.co/collections/CohereLabs/cohere-labs-aya-vision-67c4ccd395ca064308ee1484) collection.
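
If you want to enumerate the checkpoints in that collection programmatically, the snippet below is a minimal sketch using `huggingface_hub` (assuming a recent version); the collection slug is taken from the URL above, and this is illustrative rather than part of the Aya Vision API.

```python
from huggingface_hub import get_collection

# Slug taken from the collection URL above.
collection = get_collection("CohereLabs/cohere-labs-aya-vision-67c4ccd395ca064308ee1484")

for item in collection.items:
    # item_id is the repo id, e.g. "CohereLabs/aya-vision-8b"
    print(item.item_type, item.item_id)
```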

> [!TIP]
> This model was contributed by [saurabhdash](https://huggingface.co/saurabhdash) and [yonigozlan](https://huggingface.co/yonigozlan).
>
> Click on the Aya Vision models in the right sidebar for more examples of how to apply Aya Vision to different image-to-text tasks.

The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline

pipe = pipeline(model="CohereLabs/aya-vision-8b", task="image-text-to-text", device_map="auto")

# Format message with the aya-vision chat template
messages = [
    # The message contents were collapsed in this diff view; the entry below is
    # illustrative and mirrors the AutoModel example further down.
    {"role": "user",
     "content": [
         {"type": "image", "url": "https://pbs.twimg.com/media/Fx7YvfQWYAIp6rZ?format=jpg&name=medium"},
         {"type": "text", "text": "चित्र में लिखा पाठ क्या कहता है?"},  # "What does the text in the image say?"
     ]},
]

outputs = pipe(text=messages, max_new_tokens=300, return_full_text=False)
print(outputs)
```

</hfoption>
<hfoption id="AutoModel">

```python
# pip install 'git+https://github.com/huggingface/transformers.git@v4.49.0-AyaVision'
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

model_id = "CohereLabs/aya-vision-8b"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

# Format message with the aya-vision chat template
messages = [
    {"role": "user",
     "content": [
         {"type": "image", "url": "https://pbs.twimg.com/media/Fx7YvfQWYAIp6rZ?format=jpg&name=medium"},
         {"type": "text", "text": "चित्र में लिखा पाठ क्या कहता है?"},  # "What does the text in the image say?"
     ]},
]

inputs = processor.apply_chat_template(
    messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

gen_tokens = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.3,
)

print(processor.tokenizer.decode(gen_tokens[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

</hfoption>
</hfoptions>

Quantization reduces the memory footprint of large models by representing weights at lower precision. Refer to the [Quantization](../quantization/overview) overview for supported backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize only the weights to 4-bits.

```python
import torch
from transformers import (
    AutoProcessor,
    AutoModelForImageTextToText,
    BitsAndBytesConfig
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

processor = AutoProcessor.from_pretrained("CohereLabs/aya-vision-32b", use_fast=True)
model = AutoModelForImageTextToText.from_pretrained(
    "CohereLabs/aya-vision-32b",
    quantization_config=bnb_config,
    device_map="auto"
)

inputs = processor.apply_chat_template(
    [
        {"role": "user", "content": [
            {"type": "image", "url": "https://huggingface.co/roschmid/dog-races/resolve/main/images/Border_Collie.jpg"},
            {"type": "text", "text": "Describe what you see."}
        ]}
    ],
    padding=True,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt"
).to("cuda")

generated = model.generate(**inputs, max_new_tokens=50)
print(processor.tokenizer.decode(generated[0], skip_special_tokens=True))
```
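
If 4-bit quantization is too aggressive for your quality needs, bitsandbytes also supports 8-bit loading. The snippet below is a minimal sketch rather than an official recipe; it assumes the same imports as above and the smaller `CohereLabs/aya-vision-8b` checkpoint used earlier in this guide.

```python
from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

# Illustrative 8-bit alternative to the 4-bit config above: larger memory
# footprint than 4-bit, but less aggressive quantization.
bnb_config_8bit = BitsAndBytesConfig(load_in_8bit=True)

model_8bit = AutoModelForImageTextToText.from_pretrained(
    "CohereLabs/aya-vision-8b",
    quantization_config=bnb_config_8bit,
    device_map="auto",
)
```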

## Notes

- Images are represented with the `<image>` tag in the chat template.

- Use the [`~ProcessorMixin.apply_chat_template`] method to correctly format inputs.
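
- To see how the chat template lays out the `<image>` placeholder and the user turn, render the prompt without tokenizing it. The snippet below is a minimal sketch; the checkpoint and image URL are reused from the examples in this guide, and the exact rendered string depends on the checkpoint's chat template.

    ```py
    from transformers import AutoProcessor

    processor = AutoProcessor.from_pretrained("CohereForAI/aya-vision-8b")

    messages = [
        {"role": "user", "content": [
            {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
            {"type": "text", "text": "Describe what you see."},
        ]},
    ]

    # Render the template to plain text (no tokenization) to inspect the prompt,
    # including where the image placeholder is inserted.
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
    print(prompt)
    ```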

- The example below demonstrates inference with multiple images.

    ```py
    from transformers import AutoProcessor, AutoModelForImageTextToText
    import torch

    processor = AutoProcessor.from_pretrained("CohereForAI/aya-vision-8b")
    model = AutoModelForImageTextToText.from_pretrained(
        "CohereForAI/aya-vision-8b", device_map="cuda", torch_dtype=torch.float16
    )

    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
                },
                {
                    "type": "image",
                    "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg",
                },
                {
                    "type": "text",
                    "text": "These images depict two different landmarks. Can you identify them?",
                },
            ],
        },
    ]

    inputs = processor.apply_chat_template(
        messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
    ).to("cuda")

    gen_tokens = model.generate(
        **inputs,
        max_new_tokens=300,
        do_sample=True,
        temperature=0.3,
    )

    gen_text = processor.tokenizer.decode(gen_tokens[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    print(gen_text)
    ```

- The example below demonstrates inference with batched inputs.

    ```py
    from transformers import AutoProcessor, AutoModelForImageTextToText
    import torch

    processor = AutoProcessor.from_pretrained("CohereForAI/aya-vision-8b")
    model = AutoModelForImageTextToText.from_pretrained(
        "CohereForAI/aya-vision-8b", device_map="cuda", torch_dtype=torch.float16
    )

    batch_messages = [
        # First conversation with a single image. The message bodies were collapsed
        # in this diff view, so the contents below are illustrative and reuse images
        # from the examples above.
        [
            {
                "role": "user",
                "content": [
                    {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                    {"type": "text", "text": "Describe what you see."},
                ],
            },
        ],
        # Second conversation with multiple images.
        [
            {
                "role": "user",
                "content": [
                    {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                    {"type": "image", "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg"},
                    {"type": "text", "text": "These images depict two different landmarks. Can you identify them?"},
                ],
            },
        ],
    ]

    # Process each conversation separately and combine into a batch
    batch_inputs = processor.apply_chat_template(
        batch_messages,
        padding=True,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt"
    ).to(model.device)

    # Generate responses for the batch
    batch_outputs = model.generate(
        **batch_inputs,
        max_new_tokens=300,
        do_sample=True,
        temperature=0.3,
    )

    # Decode the generated responses
    for i, output in enumerate(batch_outputs):
        response = processor.tokenizer.decode(
            output[batch_inputs.input_ids.shape[1]:],
            skip_special_tokens=True
        )
        print(f"Response {i+1}:\n{response}\n")
    ```
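
    Equivalently, the whole batch can be decoded in one call. This is a small sketch that continues from the snippet above and assumes `batch_outputs` and `batch_inputs` are still in scope.

    ```py
    # Strip the prompt tokens from every sequence and decode the batch at once.
    responses = processor.tokenizer.batch_decode(
        batch_outputs[:, batch_inputs.input_ids.shape[1]:],
        skip_special_tokens=True,
    )
    for i, response in enumerate(responses):
        print(f"Response {i+1}:\n{response}\n")
    ```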

## AyaVisionProcessor