Update docs/source/en/model_doc/llava_next.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
This commit is contained in:
Priya aka Priyamvadha Balakrishnan 2025-06-29 21:21:25 -04:00 committed by GitHub
parent f28b1397e4
commit e088b9c7e7


@@ -121,7 +121,11 @@ viz("<image> What is shown in this image?")
## Notes
* Different checkpoints (Mistral, Vicuna, etc.) require a specific prompt format depending on the underlying LLM. Always use [`~ProcessorMixin.apply_chat_template`] to ensure correct formatting. Refer to the [Templates](../chat_templating) guide for more details.
* **Multi-image support**: You can pass multiple images in a single prompt. Make sure the `<image>` tokens in the prompt match the order of the input images. The example below demonstrates inference with multiple input images.
```py
# Sketch of multi-image inference with the llava-v1.6-mistral-7b-hf checkpoint;
# the image URLs are example inputs and can be swapped for your own.
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-mistral-7b-hf", torch_dtype=torch.float16, device_map="auto"
)

# Two input images; their order must match the order of the image
# placeholders in the chat template below.
image1 = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw)

# One {"type": "image"} entry per input image
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "What is the difference between these two images?"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(images=[image1, image2], text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```
## LlavaNextConfig