# transformers/docs/source/en/quantization
Latest commit: [docs] fix typo (#36080) by Fanli Lin (`14ca7f1452`), 2025-02-07 12:42:09 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| aqlm.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| awq.md | Enables CPU AWQ model with IPEX version. (#33460) | 2024-10-04 16:25:10 +02:00 |
| bitnet.md | FEAT : Adding BitNet quantization method to HFQuantizer (#33410) | 2024-10-09 17:51:41 +02:00 |
| bitsandbytes.md | [docs] fix bugs in the bitsandbytes documentation (#35868) | 2025-02-05 08:21:20 -08:00 |
| compressed_tensors.md | [docs] fix typo (#36080) | 2025-02-07 12:42:09 -08:00 |
| contribute.md | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00 |
| eetq.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| fbgemm_fp8.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| gptq.md | Enable gptqmodel (#35012) | 2025-01-15 14:22:49 +01:00 |
| higgs.md | HIGGS Quantization Support (#34997) | 2024-12-23 16:54:49 +01:00 |
| hqq.md | Hqq serialization (#33141) | 2024-09-30 14:47:18 +02:00 |
| optimum.md | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00 |
| overview.md | Enable gptqmodel (#35012) | 2025-01-15 14:22:49 +01:00 |
| quanto.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00 |
| torchao.md | Update torchao.md: use auto-compilation (#35490) | 2025-01-14 11:33:48 +01:00 |
| vptq.md | FEAT : Adding VPTQ quantization method to HFQuantizer (#34770) | 2024-12-20 09:45:53 +01:00 |