transformers/docs/source/en/quantization
wejoncy 4e27a4009d
FEAT : Adding VPTQ quantization method to HFQuantizer (#34770)
* init vptq

* add integration

* add vptq support

* fix readme

* add tests && format

* format

* address comments

* remove debug code

* Revert "remove debug code"

This reverts commit ed3b3eaaba.

* fix test

---------

Co-authored-by: Yang Wang <wyatuestc@gmail.com>
2024-12-20 09:45:53 +01:00
aqlm.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
awq.md Enables CPU AWQ model with IPEX version. (#33460) 2024-10-04 16:25:10 +02:00
bitnet.md FEAT : Adding BitNet quantization method to HFQuantizer (#33410) 2024-10-09 17:51:41 +02:00
bitsandbytes.md [docs] Increase visibility of torch_dtype="auto" (#35067) 2024-12-04 09:18:44 -08:00
compressed_tensors.md [Docs] Update compressed_tensors.md (#33961) 2024-10-10 15:22:41 +02:00
contribute.md Docs / Quantization: refactor quantization documentation (#30942) 2024-05-23 14:31:52 +02:00
eetq.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
fbgemm_fp8.md [docs] Increase visibility of torch_dtype="auto" (#35067) 2024-12-04 09:18:44 -08:00
gptq.md Docs / Quantization: refactor quantization documentation (#30942) 2024-05-23 14:31:52 +02:00
hqq.md Hqq serialization (#33141) 2024-09-30 14:47:18 +02:00
optimum.md Docs / Quantization: refactor quantization documentation (#30942) 2024-05-23 14:31:52 +02:00
overview.md FEAT : Adding VPTQ quantization method to HFQuantizer (#34770) 2024-12-20 09:45:53 +01:00
quanto.md [docs] Increase visibility of torch_dtype="auto" (#35067) 2024-12-04 09:18:44 -08:00
torchao.md [docs] Increase visibility of torch_dtype="auto" (#35067) 2024-12-04 09:18:44 -08:00
vptq.md FEAT : Adding VPTQ quantization method to HFQuantizer (#34770) 2024-12-20 09:45:53 +01:00
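The latest commit above adds VPTQ (Vector Post-Training Quantization) support to HFQuantizer, documented in the new vptq.md. A minimal usage sketch, assuming the `vptq` package is installed and using a pre-quantized checkpoint from the VPTQ-community Hub organization as an illustrative model id (the checkpoint name is an example, not prescribed by this listing); running it downloads multi-GB weights:

```python
# Sketch: loading a pre-quantized VPTQ checkpoint with Transformers.
# Requires: pip install vptq transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft"  # example id

# The quantization config is stored in the checkpoint, so no
# quantization_config argument is needed for already-quantized weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype recorded in the checkpoint
    device_map="auto",    # dispatch layers across available devices
)

inputs = tokenizer("Quantization reduces memory by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As with the other methods listed here (AQLM, AWQ, BitNet, HQQ, and so on), loading a pre-quantized model goes through the standard `from_pretrained` path once the backend package is present.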