transformers/tests/quantization

Latest commit: dbd8474125 by Mohamed Mekkouri, 2025-01-21 15:35:54 +01:00
Fix : BLOOM tie_word_embeddings in GGUF (#35812)
* fix bloom ggml
* fix falcon output
* make style
Directory           | Latest commit                                                      | Date
aqlm_integration    | Skipping aqlm non working inference tests till fix merged (#34865) | 2024-11-26 11:09:30 +01:00
autoawq             | Enables CPU AWQ model with IPEX version. (#33460)                  | 2024-10-04 16:25:10 +02:00
bitnet_integration  | Fix : BitNet tests (#34895)                                        | 2024-11-25 16:47:14 +01:00
bnb                 | Fix new BNB test failures (#35345)                                 | 2025-01-02 11:24:52 +01:00
compressed_tensor   | Run model as compressed/uncompressed mode (#34719)                 | 2024-12-13 08:23:31 +01:00
eetq_integration    | Fix typo in EETQ Tests (#35160)                                    | 2024-12-09 14:13:36 +01:00
fbgemm_fp8          | Fix FbgemmFp8Linear not preserving tensor shape (#33239)           | 2024-09-11 13:26:44 +02:00
ggml                | Fix : BLOOM tie_word_embeddings in GGUF (#35812)                   | 2025-01-21 15:35:54 +01:00
gptq                | Enable gptqmodel (#35012)                                          | 2025-01-15 14:22:49 +01:00
higgs               | HIGGS Quantization Support (#34997)                                | 2024-12-23 16:54:49 +01:00
hqq                 | Fix : HQQ config when hqq not available (#35655)                   | 2025-01-14 11:37:37 +01:00
quanto_integration  | [tests] make cuda-only tests device-agnostic (#35607)              | 2025-01-13 14:48:39 +01:00
torchao_integration | Fix CI by tweaking torchao tests (#34832)                          | 2024-11-20 20:28:51 +01:00
vptq_integration    | Fix : VPTQ test (#35394)                                           | 2024-12-23 16:27:46 +01:00