transformers/tests/quantization

Latest commit: Fix : VPTQ test (#35394) — Mohamed Mekkouri (59178780a6), 2024-12-23 16:27:46 +01:00
| Directory           | Latest commit                                                               | Date                      |
|---------------------|-----------------------------------------------------------------------------|---------------------------|
| aqlm_integration    | Skipping aqlm non working inference tests till fix merged (#34865)           | 2024-11-26 11:09:30 +01:00 |
| autoawq             | Enables CPU AWQ model with IPEX version. (#33460)                            | 2024-10-04 16:25:10 +02:00 |
| bitnet_integration  | Fix : BitNet tests (#34895)                                                  | 2024-11-25 16:47:14 +01:00 |
| bnb                 | change bnb tests (#34713)                                                    | 2024-12-18 09:49:59 -05:00 |
| compressed_tensor   | Run model as compressed/uncompressed mode (#34719)                           | 2024-12-13 08:23:31 +01:00 |
| eetq_integration    | Fix typo in EETQ Tests (#35160)                                              | 2024-12-09 14:13:36 +01:00 |
| fbgemm_fp8          | Fix FbgemmFp8Linear not preserving tensor shape (#33239)                     | 2024-09-11 13:26:44 +02:00 |
| ggml                | Fix : model used to test ggml conversion of Falcon-7b is incorrect (#35083)  | 2024-12-16 13:21:44 +01:00 |
| gptq                | 🚨 Remove dataset with restrictive license (#31452)                          | 2024-06-17 17:56:51 +01:00 |
| hqq                 | Hqq serialization (#33141)                                                   | 2024-09-30 14:47:18 +02:00 |
| quanto_integration  | [Quantization] Switch to optimum-quanto (#31732)                             | 2024-10-02 15:14:34 +02:00 |
| torchao_integration | Fix CI by tweaking torchao tests (#34832)                                    | 2024-11-20 20:28:51 +01:00 |
| vptq_integration    | Fix : VPTQ test (#35394)                                                     | 2024-12-23 16:27:46 +01:00 |