transformers/tests/quantization
Theia Vogel e719b65c31
Fix FbgemmFp8Linear not preserving tensor shape (#33239)
* add tests for linear shape behavior

* fix linear shape behavior

Ended up adding the reshape at the end, after f8f8bf16_rowwise, because adding
it directly after quantize_fp8_per_row caused f8f8bf16_rowwise to drop the
seq_len dimension (i.e., (17, 23, 1024) -> (17, 1024)).

* save shape up front + comment
2024-09-11 13:26:44 +02:00
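The pattern the commit describes, saving the input shape up front, running a kernel that only handles 2D inputs, then restoring the leading dimensions at the end, can be sketched as follows. This is a minimal NumPy illustration, not the actual fbgemm code path: `rowwise_matmul_2d` is a hypothetical stand-in for a 2D-only kernel like `f8f8bf16_rowwise`.

```python
import numpy as np

def rowwise_matmul_2d(x, w):
    # Hypothetical stand-in for a kernel (e.g. f8f8bf16_rowwise) that
    # only accepts 2D activations and would otherwise lose leading dims.
    assert x.ndim == 2, "kernel only supports 2D inputs"
    return x @ w.T

def shape_preserving_linear(x, w):
    # Save the output shape up front: all leading dims of x, plus
    # the kernel's out_features dimension.
    output_shape = (*x.shape[:-1], w.shape[0])
    # Flatten (batch, seq_len, hidden) -> (batch * seq_len, hidden)
    # so the 2D-only kernel can run.
    out = rowwise_matmul_2d(x.reshape(-1, x.shape[-1]), w)
    # Reshape at the end to restore the leading dimensions.
    return out.reshape(output_shape)

x = np.random.rand(17, 23, 1024)  # (batch, seq_len, hidden)
w = np.random.rand(4096, 1024)    # (out_features, in_features)
y = shape_preserving_linear(x, w)
print(y.shape)  # (17, 23, 4096) -- seq_len dimension preserved
```

Reshaping before the kernel (e.g. right after a per-row quantize step) is what the commit found problematic; deferring the reshape until after the matmul keeps the seq_len dimension intact.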
aqlm_integration Cache: use batch_size instead of max_batch_size (#32657) 2024-08-16 11:48:45 +01:00
autoawq Skip tests properly (#31308) 2024-06-26 21:59:08 +01:00
bnb remove to restriction for 4-bit model (#33122) 2024-09-02 16:28:50 +02:00
eetq_integration [FEAT]: EETQ quantizer support (#30262) 2024-04-22 20:38:58 +01:00
fbgemm_fp8 Fix FbgemmFp8Linear not preserving tensor shape (#33239) 2024-09-11 13:26:44 +02:00
ggml Add support for GGUF Phi-3 (#31844) 2024-09-10 13:32:38 +02:00
gptq 🚨 Remove dataset with restrictive license (#31452) 2024-06-17 17:56:51 +01:00
hqq Quantization / HQQ: Fix HQQ tests on our runner (#30668) 2024-05-06 11:33:52 +02:00
quanto_integration Skip tests properly (#31308) 2024-06-26 21:59:08 +01:00
torchao_integration Add TorchAOHfQuantizer (#32306) 2024-08-14 16:14:24 +02:00