mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-31 10:12:23 +06:00
Update docs/source/en/model_doc/llava_next.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
This commit is contained in:
parent
f28b1397e4
commit
e088b9c7e7
@@ -121,7 +121,11 @@ viz("<image> What is shown in this image?")
## Notes
* Different checkpoints (Mistral, Vicuna, etc.) require a specific prompt format depending on the underlying LLM. Always use [`~ProcessorMixin.apply_chat_template`] to ensure correct formatting. Refer to the [Templates](../chat_templating) guide for more details.
* **Multi-image support**: You can pass multiple images in a single prompt. Make sure the number and order of `<image>` tokens match the order of the input images.
  The example below demonstrates inference with multiple input images.
  ```py
  import torch
  import requests
  from PIL import Image
  from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

  # Example checkpoint; any LLaVA-NeXT checkpoint works here.
  model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
  processor = LlavaNextProcessor.from_pretrained(model_id)
  model = LlavaNextForConditionalGeneration.from_pretrained(
      model_id, torch_dtype=torch.float16, device_map="auto"
  )

  url = "http://images.cocodataset.org/val2017/000000039769.jpg"
  image_cats = Image.open(requests.get(url, stream=True).raw)
  url = "https://www.ilankelman.org/stopsigns/australia.jpg"
  image_stop = Image.open(requests.get(url, stream=True).raw)

  # One {"type": "image"} entry per image, in the same order as the
  # images passed to the processor below.
  conversation = [
      {
          "role": "user",
          "content": [
              {"type": "image"},
              {"type": "image"},
              {"type": "text", "text": "What is the difference between these images?"},
          ],
      },
  ]
  prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
  inputs = processor(
      images=[image_cats, image_stop], text=prompt, return_tensors="pt"
  ).to(model.device)

  output = model.generate(**inputs, max_new_tokens=100)
  print(processor.decode(output[0], skip_special_tokens=True))
  ```
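  The checkpoint-specific prompt formats mentioned above all consume the same chat-message structure. A minimal, download-free sketch of that structure (the question text is illustrative, and the final rendering call is left as a comment because it needs a loaded processor):

  ```python
  # Chat-format message list consumed by `apply_chat_template`:
  # one {"type": "image"} placeholder per input image, in order.
  conversation = [
      {
          "role": "user",
          "content": [
              {"type": "image"},
              {"type": "image"},
              {"type": "text", "text": "What is the difference between these images?"},
          ],
      },
  ]

  # With a loaded LLaVA-NeXT processor, the checkpoint-specific prompt would be:
  #   prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
  # Each underlying LLM wraps the turn differently, which is why hand-writing
  # the prompt string is error-prone.

  # Sanity check: the number of image placeholders must match the number of
  # images later passed to the processor.
  num_image_tokens = sum(
      1
      for message in conversation
      for part in message["content"]
      if part["type"] == "image"
  )
  print(num_image_tokens)  # 2
  ```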
## LlavaNextConfig