Mirror of https://github.com/huggingface/transformers.git (synced 2025-08-01 02:31:11 +06:00)
Update docs/source/en/model_doc/llava_next.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
parent f28b1397e4
commit e088b9c7e7
@@ -121,7 +121,11 @@ viz("<image> What is shown in this image?")
## Notes
* Different checkpoints (Mistral, Vicuna, etc.) require a specific prompt format depending on the underlying LLM. Always use [`~ProcessorMixin.apply_chat_template`] to ensure correct formatting. Refer to the [Templates](../chat_templating) guide for more details.
* **Multi-image support**: You can pass multiple images in a single prompt. Make sure the order of `<image>` tokens in the text matches the order of the images you pass to the processor.

  - The example below demonstrates inference with multiple input images.
```py
import torch
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16, device_map="auto"
)

# Any two images work here; the order of this list must match the order of
# the <image> tokens in the prompt.
image1 = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What is the difference between these two images?"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(images=[image1, image2], text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```
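To see why `apply_chat_template` matters, here is a rough, hand-rolled sketch of what the per-checkpoint single-turn prompt formats look like. The template strings below are illustrative assumptions, not read from the actual model files; real code should always rely on the processor's chat template instead.

```python
def manual_prompt(llm_family: str, question: str) -> str:
    """Illustrative only: build a single-turn LLaVA-NeXT-style prompt by hand.

    The exact strings are assumptions about common Mistral/Vicuna chat markup;
    prefer processor.apply_chat_template in real code.
    """
    if llm_family == "mistral":
        # Mistral-style instruction wrapping
        return f"[INST] <image>\n{question} [/INST]"
    if llm_family == "vicuna":
        # Vicuna-style USER/ASSISTANT turns
        return f"USER: <image>\n{question} ASSISTANT:"
    raise ValueError(f"unknown LLM family: {llm_family}")

print(manual_prompt("mistral", "What is shown in this image?"))
```

Because these formats differ per backbone, hardcoding one of them breaks generation quality when you switch checkpoints, which is exactly the failure mode the bullet above warns about.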
## LlavaNextConfig