Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-31 02:02:21 +06:00
Qwen2VL: skip base `input_ids` / `inputs_embeds` equivalence check (#34535)

Qwen2VL has a complex `inputs_embeds` computation, so the generic check comparing outputs from `input_ids` and `inputs_embeds` does not apply to it.
This commit is contained in:
parent
ab98f0b0a1
commit
4ca004eac6
```diff
@@ -1610,7 +1610,7 @@ class GenerationTesterMixin:
         inputs_dict.pop("pixel_values_images", None)
         # 2.C - No easy fix, let's skip the check that compares the outputs from `input_ids` and `inputs_embeds`
         has_complex_embeds_computation = any(
-            model_name in model_class.__name__.lower() for model_name in ["moshi"]
+            model_name in model_class.__name__.lower() for model_name in ["moshi", "qwen2vl"]
         )
         # 3 - `inputs_dict` doesn't contain `attention_mask`. When `attention_mask` is not passed to generate,
         # we infer it from `input_ids`. The last test case will fail if there is a pad token in the original input.
```
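The gating logic in the diff is a simple name-based opt-out: models whose `inputs_embeds` are not a plain embedding lookup of `input_ids` (e.g. Qwen2VL merges vision features into the text embeddings) cannot pass an exact equivalence check between the two input paths. A minimal standalone sketch of that check, with the helper name and structure assumed for illustration (the real code lives inline in `GenerationTesterMixin`):

```python
# Hypothetical standalone version of the inline check from the diff.
# Models listed here have a complex `inputs_embeds` computation, so the
# `input_ids` vs `inputs_embeds` output-equivalence test is skipped for them.
COMPLEX_EMBEDS_MODELS = ["moshi", "qwen2vl"]  # names taken from the diff


def has_complex_embeds_computation(model_class_name: str) -> bool:
    """Return True if the class name matches a known complex-embeds model."""
    lowered = model_class_name.lower()
    return any(name in lowered for name in COMPLEX_EMBEDS_MODELS)


# "Qwen2VLForConditionalGeneration".lower() contains "qwen2vl", so it matches;
# a plain text-only model such as "LlamaForCausalLM" does not.
```

The substring match on `model_class.__name__.lower()` keeps the skip list short: one entry covers every class variant of a model family (base model, conditional-generation head, etc.).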
|