Contents of `transformers/docs/source/en/quantization`:
| File | Last commit message | Date |
|---|---|---|
| `aqlm.md` | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| `awq.md` | Enables CPU AWQ model with IPEX version. (#33460) | 2024-10-04 16:25:10 +02:00 |
| `bitnet.md` | FEAT : Adding BitNet quantization method to HFQuantizer (#33410) | 2024-10-09 17:51:41 +02:00 |
| `bitsandbytes.md` | Enable BNB multi-backend support (#31098) | 2024-09-24 03:40:56 -06:00 |
| `compressed_tensors.md` | [Docs] Update compressed_tensors.md (#33961) | 2024-10-10 15:22:41 +02:00 |
| `contribute.md` | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00 |
| `eetq.md` | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| `fbgemm_fp8.md` | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| `gptq.md` | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00 |
| `hqq.md` | Hqq serialization (#33141) | 2024-09-30 14:47:18 +02:00 |
| `optimum.md` | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00 |
| `overview.md` | add xpu path for awq (#34712) | 2024-11-15 15:45:24 +01:00 |
| `quanto.md` | [docs] add XPU besides CUDA, MPS etc. (#34777) | 2024-11-18 09:58:50 -08:00 |
| `torchao.md` | Enable non-safetensor ser/deser for TorchAoConfig quantized model 🔴 (#33456) | 2024-09-30 11:30:29 +02:00 |
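Each of these pages documents one quantization backend for `transformers`, and most follow the same pattern: build a backend-specific config object and pass it to `from_pretrained` via `quantization_config`. A minimal sketch of that pattern using the bitsandbytes backend (the 4-bit settings and model ID below are illustrative choices, not taken from this listing):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Backend-specific config: 4-bit NF4 quantization via bitsandbytes
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

# The model ID is an arbitrary example; any causal LM checkpoint works
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,
    device_map="auto",
)
```

The other backends listed above (AQLM, AWQ, GPTQ, HQQ, TorchAo, etc.) swap in their own config classes but are wired through the same `quantization_config` argument.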