transformers/tests/quantization
jiqing-feng 69e31eb1bf
change bnb tests (#34713)
* fix training tests

* fix xpu check

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>

* rm pdb

* fix 4bit logits check

* add xpu check on int8 training

* fix training tests

* add llama test on bnb

* only cpu and xpu disable autocast training

* fix format

---------

Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Titus <9048635+Titus-von-Koeller@users.noreply.github.com>
2024-12-18 09:49:59 -05:00
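The bullet "only cpu and xpu disable autocast training" above describes a device-conditional pattern: wrap the training step in autocast only on devices where mixed precision is wanted. A minimal sketch of that idea follows; the helper name `maybe_autocast` and the `float16` dtype are assumptions for illustration, not code from #34713.

```python
# Hedged sketch of the device-conditional autocast pattern referenced in
# "only cpu and xpu disable autocast training". Helper name and dtype
# choice are assumptions, not the actual test code from #34713.
import contextlib

import torch


def maybe_autocast(device_type: str):
    # On cpu and xpu, run the training step in default precision
    # (autocast disabled); elsewhere (e.g. cuda), enable mixed precision.
    if device_type in ("cpu", "xpu"):
        return contextlib.nullcontext()
    return torch.autocast(device_type=device_type, dtype=torch.float16)


# Toy forward pass under the chosen context.
with maybe_autocast("cpu"):
    x = torch.ones(2, 2)
    y = x @ x  # stays float32 on cpu, since autocast is skipped
```

In the bnb tests this kind of switch keeps 8-bit/4-bit training numerically comparable across backends: CUDA trains under autocast, while CPU and XPU skip it.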
aqlm_integration Skipping aqlm non working inference tests till fix merged (#34865) 2024-11-26 11:09:30 +01:00
autoawq Enables CPU AWQ model with IPEX version. (#33460) 2024-10-04 16:25:10 +02:00
bitnet_integration Fix : BitNet tests (#34895) 2024-11-25 16:47:14 +01:00
bnb change bnb tests (#34713) 2024-12-18 09:49:59 -05:00
compressed_tensor Run model as compressed/uncompressed mode (#34719) 2024-12-13 08:23:31 +01:00
eetq_integration Fix typo in EETQ Tests (#35160) 2024-12-09 14:13:36 +01:00
fbgemm_fp8 Fix FbgemmFp8Linear not preserving tensor shape (#33239) 2024-09-11 13:26:44 +02:00
ggml Fix : model used to test ggml conversion of Falcon-7b is incorrect (#35083) 2024-12-16 13:21:44 +01:00
gptq 🚨 Remove dataset with restrictive license (#31452) 2024-06-17 17:56:51 +01:00
hqq Hqq serialization (#33141) 2024-09-30 14:47:18 +02:00
quanto_integration [Quantization] Switch to optimum-quanto (#31732) 2024-10-02 15:14:34 +02:00
torchao_integration Fix CI by tweaking torchao tests (#34832) 2024-11-20 20:28:51 +01:00