# Overview

Quantization lowers the memory requirements of loading and using a model by storing the weights in a lower precision while trying to preserve as much accuracy as possible. Weights are typically stored in full-precision (fp32) floating point representations, but half-precision (fp16 or bf16) representations are increasingly popular data types given the large size of today's models. Some quantization methods reduce the precision even further, to integer representations like int8 or int4.
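
The savings follow directly from the bytes per parameter: 4 for fp32, 2 for fp16/bf16, 1 for int8, and 0.5 for int4. As a minimal sketch, you can compare footprints directly (facebook/opt-125m is just a small stand-in checkpoint; any model works):

```py
import torch
from transformers import AutoModelForCausalLM

# Load the same model in full precision and in half precision, then
# compare the reported memory footprints (roughly a 2x reduction).
model_fp32 = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", torch_dtype=torch.float32)
model_bf16 = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", torch_dtype=torch.bfloat16)

print(f"fp32: {model_fp32.get_memory_footprint() / 1e6:.0f} MB")
print(f"bf16: {model_bf16.get_memory_footprint() / 1e6:.0f} MB")
```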

Transformers supports many quantization methods, each with its own pros and cons, so you can pick the best one for your use case. Some methods require calibration for greater accuracy and extreme compression (1-2 bits), while others work out of the box with on-the-fly quantization.
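
For example, bitsandbytes quantizes on the fly at load time without any calibration data. A minimal sketch, assuming bitsandbytes is installed and a supported GPU is available (the checkpoint name is just a placeholder):

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the weights to 4-bit on the fly while loading the checkpoint;
# no calibration dataset is needed.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder: any causal LM checkpoint works
    quantization_config=quantization_config,
)
```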

Use the table below to help you pick a quantization method based on your hardware and the number of bits you want to quantize to.

| Quantization Method | On the fly quantization | CPU | CUDA GPU | ROCm GPU | Metal (Apple Silicon) | Intel GPU | Torch compile() | Bits | PEFT Fine Tuning | Serializable with 🤗Transformers | 🤗Transformers Support | Link to library |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AQLM | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 1/2 | 🟢 | 🟢 | 🟢 | https://github.com/Vahe1994/AQLM |
| AutoRound | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🟢 | 🔴 | 2/3/4/8 | 🔴 | 🟢 | 🟢 | https://github.com/intel/auto-round |
| AWQ | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | ? | 4 | 🟢 | 🟢 | 🟢 | https://github.com/casper-hansen/AutoAWQ |
| bitsandbytes | 🟢 | 🟡 | 🟢 | 🟡 | 🔴 | 🟡 | 🔴 | 4/8 | 🟢 | 🟢 | 🟢 | https://github.com/bitsandbytes-foundation/bitsandbytes |
| compressed-tensors | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 1/8 | 🟢 | 🟢 | 🟢 | https://github.com/neuralmagic/compressed-tensors |
| EETQ | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | ? | 8 | 🟢 | 🟢 | 🟢 | https://github.com/NetEase-FuXi/EETQ |
| GGUF / GGML (llama.cpp) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 1/8 | 🔴 | See Notes | See Notes | https://github.com/ggerganov/llama.cpp |
| GPTQModel | 🔴 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/ModelCloud/GPTQModel |
| AutoGPTQ | 🔴 | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/AutoGPTQ/AutoGPTQ |
| HIGGS | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 2/4 | 🔴 | 🟢 | 🟢 | https://github.com/HanGuo97/flute |
| HQQ | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 1/8 | 🟢 | 🔴 | 🟢 | https://github.com/mobiusml/hqq/ |
| optimum-quanto | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🟢 | 2/4/8 | 🔴 | 🔴 | 🟢 | https://github.com/huggingface/optimum-quanto |
| FBGEMM_FP8 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🔴 | 8 | 🔴 | 🟢 | 🟢 | https://github.com/pytorch/FBGEMM |
| torchao | 🟢 | 🟢 | 🟢 | 🔴 | 🟡 | 🔴 | | 4/8 | | 🟢🔴 | 🟢 | https://github.com/pytorch/ao |
| VPTQ | 🔴 | 🔴 | 🟢 | 🟡 | 🔴 | 🔴 | 🟢 | 1/8 | 🔴 | 🟢 | 🟢 | https://github.com/microsoft/VPTQ |
| FINEGRAINED_FP8 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🔴 | 8 | 🔴 | 🟢 | 🟢 | |
| SpQR | 🔴 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 3 | 🔴 | 🟢 | 🟢 | https://github.com/Vahe1994/SpQR/ |
| Quark | 🔴 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ? | 2/4/6/8/9/16 | 🔴 | 🔴 | 🟢 | https://quark.docs.amd.com/latest/ |
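
For methods marked 🟢 under 🤗Transformers Support, a checkpoint that was already quantized with that method can usually be loaded with a plain from_pretrained call, since the quantization_config stored in the checkpoint selects the right backend. A minimal sketch with a placeholder repository id:

```py
from transformers import AutoModelForCausalLM

# The checkpoint's config.json carries a quantization_config entry, so
# from_pretrained dispatches to the matching quantizer automatically.
model = AutoModelForCausalLM.from_pretrained(
    "<org>/<model>-GPTQ",  # placeholder: any pre-quantized checkpoint
    device_map="auto",
)
```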

## Resources

If you are new to quantization, we recommend checking out these beginner-friendly quantization courses, created in collaboration with DeepLearning.AI.

## User-Friendly Quantization Tools

If you are looking for a user-friendly quantization experience, you can use the following community spaces and notebooks: