transformers/tests/quantization (latest commit: 2025-02-26 21:17:24 +01:00)

| Directory           | Last commit                                                        | Date                      |
|---------------------|--------------------------------------------------------------------|---------------------------|
| aqlm_integration    | Skipping aqlm non working inference tests till fix merged (#34865) | 2024-11-26 11:09:30 +01:00 |
| autoawq             | [tests] enable autoawq tests on XPU (#36327)                       | 2025-02-25 13:38:09 +01:00 |
| bitnet_integration  | Fix : BitNet tests (#34895)                                        | 2024-11-25 16:47:14 +01:00 |
| bnb                 | tests: revert change of torch_require_multi_gpu to be device agnostic (#35721) | 2025-02-25 13:36:10 +01:00 |
| compressed_tensors  | Fix Expected output for compressed-tensors tests (#36425)          | 2025-02-26 21:17:24 +01:00 |
| eetq_integration    | Fix typo in EETQ Tests (#35160)                                    | 2024-12-09 14:13:36 +01:00 |
| fbgemm_fp8          | Fix FbgemmFp8Linear not preserving tensor shape (#33239)           | 2024-09-11 13:26:44 +02:00 |
| finegrained_fp8     | Add require_read_token to fp8 tests (#36189)                       | 2025-02-14 12:27:35 +01:00 |
| ggml                | Guard against unset resolved_archive_file (#35628)                 | 2025-02-14 14:44:31 +01:00 |
| gptq                | Enable gptqmodel (#35012)                                          | 2025-01-15 14:22:49 +01:00 |
| higgs               | New HIGGS quantization interfaces, JIT kernel compilation support. (#36148) | 2025-02-14 12:26:45 +01:00 |
| hqq                 | Fix : HQQ config when hqq not available (#35655)                   | 2025-01-14 11:37:37 +01:00 |
| quanto_integration  | [tests] make quanto tests device-agnostic (#36328)                 | 2025-02-21 14:20:40 +01:00 |
| spqr_integration    | Efficient Inference Kernel for SpQR (#34976)                       | 2025-02-13 16:22:58 +01:00 |
| torchao_integration | enable torchao quantization on CPU (#36146)                        | 2025-02-25 11:06:52 +01:00 |
| vptq_integration    | Fix : VPTQ test (#35394)                                           | 2024-12-23 16:27:46 +01:00 |