transformers/tests/quantization
Latest commit: fix xpu tests (#36656)
Author: jiqing-feng (27361bd218), co-authored by Marc Sun
2025-03-17 15:57:49 +01:00

* fix awq xpu tests
* update
* fix llava next video bnb tests
| Directory | Latest commit | Date |
| --- | --- | --- |
| aqlm_integration | Skipping aqlm non working inference tests till fix merged (#34865) | 2024-11-26 11:09:30 +01:00 |
| autoawq | fix xpu tests (#36656) | 2025-03-17 15:57:49 +01:00 |
| bitnet_integration | Fix : BitNet tests (#34895) | 2024-11-25 16:47:14 +01:00 |
| bnb | enable/disable compile for quants methods (#36519) | 2025-03-17 11:38:21 +01:00 |
| compressed_tensors | Fix Expected output for compressed-tensors tests (#36425) | 2025-02-26 21:17:24 +01:00 |
| eetq_integration | Fix typo in EETQ Tests (#35160) | 2024-12-09 14:13:36 +01:00 |
| fbgemm_fp8 | Fix FbgemmFp8Linear not preserving tensor shape (#33239) | 2024-09-11 13:26:44 +02:00 |
| finegrained_fp8 | Add require_read_token to fp8 tests (#36189) | 2025-02-14 12:27:35 +01:00 |
| ggml | Guard against unset resolved_archive_file (#35628) | 2025-02-14 14:44:31 +01:00 |
| gptq | Fix Failing GPTQ tests (#36666) | 2025-03-12 20:03:02 +01:00 |
| higgs | New HIGGS quantization interfaces, JIT kernel compilation support. (#36148) | 2025-02-14 12:26:45 +01:00 |
| hqq | Fix : HQQ config when hqq not available (#35655) | 2025-01-14 11:37:37 +01:00 |
| quanto_integration | Changing the test model in Quanto kv cache (#36670) | 2025-03-13 12:23:34 +01:00 |
| spqr_integration | Efficient Inference Kernel for SpQR (#34976) | 2025-02-13 16:22:58 +01:00 |
| torchao_integration | enable torchao quantization on CPU (#36146) | 2025-02-25 11:06:52 +01:00 |
| vptq_integration | Fix : VPTQ test (#35394) | 2024-12-23 16:27:46 +01:00 |
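Each directory above holds the pytest suite for one quantization backend. As a rough illustration of the kind of code path these suites exercise, here is a minimal sketch (not taken from the test files themselves; the model id and settings are arbitrary choices for illustration) of loading a model with 4-bit bitsandbytes quantization, the feature covered by the bnb directory:

```python
# Minimal sketch of the feature exercised by the bnb suite: 4-bit
# bitsandbytes quantization at load time. Assumes the `bitsandbytes`
# package and a supported accelerator are available; the checkpoint is
# an arbitrary small model chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

# Run a short generation to confirm the quantized model produces output.
inputs = tokenizer("Quantized inference test:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0]))
```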