# Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that wouldn't normally fit into memory and speeds up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization with bitsandbytes.
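
Passing a quantization config to [`~PreTrainedModel.from_pretrained`] is the common entry point for all of these backends. Below is a minimal sketch, assuming bitsandbytes is installed and a CUDA device is available; the checkpoint id is only an example.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the linear layers to 8-bit on the fly while loading.
# Assumes `pip install bitsandbytes accelerate` and a CUDA GPU.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```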

Quantization techniques that aren't supported in Transformers can be added with the [`HfQuantizer`] class.

Learn how to quantize models in the [Quantization](../quantization) guide.

## QuantoConfig

[[autodoc]] QuantoConfig
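
A minimal usage sketch, assuming the quanto backend (optimum-quanto) is installed; the checkpoint id is only an example.

```python
from transformers import AutoModelForCausalLM, QuantoConfig

# Quantize the weights to int8 while loading; requires the quanto backend.
quantization_config = QuantoConfig(weights="int8")

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```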

## AqlmConfig

[[autodoc]] AqlmConfig

## VptqConfig

[[autodoc]] VptqConfig

## AwqConfig

[[autodoc]] AwqConfig
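
AWQ checkpoints ship already quantized, so the config is typically read from the checkpoint rather than built by hand. A minimal loading sketch, assuming autoawq is installed and using a community AWQ repo purely as an example:

```python
from transformers import AutoModelForCausalLM

# The quantization config is picked up from the checkpoint itself,
# so no AwqConfig needs to be constructed; requires `pip install autoawq`.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/zephyr-7B-alpha-AWQ",  # example pre-quantized AWQ repo
    device_map="auto",
)
```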

## EetqConfig

[[autodoc]] EetqConfig
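
A minimal usage sketch, assuming the eetq kernels are installed; the checkpoint id is only an example.

```python
from transformers import AutoModelForCausalLM, EetqConfig

# Quantize the weights to int8 on the fly; requires the eetq package.
quantization_config = EetqConfig("int8")

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```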

## GPTQConfig

[[autodoc]] GPTQConfig
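
A minimal quantize-on-load sketch, assuming a GPTQ backend (optimum with auto-gptq or gptqmodel) is installed; the checkpoint id and calibration dataset are only examples.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# Calibrate on the "c4" dataset and quantize the weights to 4-bit
# while the model is being loaded.
quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```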

## BitsAndBytesConfig

[[autodoc]] BitsAndBytesConfig
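
A 4-bit NF4 sketch with bfloat16 compute, the combination popularized by QLoRA; assumes bitsandbytes is installed and uses an example checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 weights with bf16 compute and nested (double) quantization.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```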

## HfQuantizer

[[autodoc]] quantizers.base.HfQuantizer
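
A schematic sketch of what a custom quantizer subclass looks like; the hook names below match the base class at the time of writing, but treat them as assumptions and check `quantizers/base.py` for the current abstract interface.

```python
from transformers.quantizers.base import HfQuantizer


class MyQuantizer(HfQuantizer):
    """Hypothetical quantizer wiring a new backend into Transformers."""

    # Whether quantizing from scratch needs a calibration dataset.
    requires_calibration = False

    def validate_environment(self, *args, **kwargs):
        # Raise here if the backend package or a compatible device is missing.
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # Replace nn.Linear modules with their quantized counterparts
        # before the checkpoint weights are loaded.
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Post-process the model once weights are in place (e.g. pack weights).
        return model

    @property
    def is_trainable(self):
        return False
```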

## HiggsConfig

[[autodoc]] HiggsConfig

## HqqConfig

[[autodoc]] HqqConfig
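
A minimal usage sketch, assuming the hqq package is installed; quantization happens on the fly at load time.

```python
from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit HQQ quantization with 64-element groups; requires `pip install hqq`.
quantization_config = HqqConfig(nbits=4, group_size=64)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```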

## FbgemmFp8Config

[[autodoc]] FbgemmFp8Config

## CompressedTensorsConfig

[[autodoc]] CompressedTensorsConfig

## TorchAoConfig

[[autodoc]] TorchAoConfig
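
A minimal weight-only int4 sketch, assuming torchao is installed; the quant type string and `group_size` follow torchao's conventions, and the checkpoint id is only an example.

```python
import torch
from transformers import AutoModelForCausalLM, TorchAoConfig

# Weight-only int4 quantization through torchao; requires `pip install torchao`.
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```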

## BitNetConfig

[[autodoc]] BitNetConfig

## SpQRConfig

[[autodoc]] SpQRConfig

## FineGrainedFP8Config

[[autodoc]] FineGrainedFP8Config
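
A minimal usage sketch, under the assumption that the default block-wise settings are kept and the GPU natively supports FP8; the checkpoint id is only an example.

```python
from transformers import AutoModelForCausalLM, FineGrainedFP8Config

# Fine-grained FP8 weight quantization applied while loading;
# assumes a GPU with native FP8 support.
quantization_config = FineGrainedFP8Config()

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # example checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```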

## QuarkConfig

[[autodoc]] QuarkConfig
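
Quark checkpoints store their quantization settings alongside the weights, so the config is read from the checkpoint rather than constructed by hand. A minimal loading sketch, assuming `amd-quark` is installed; the repo id is a placeholder.

```python
from transformers import AutoModelForCausalLM

# Requires `pip install amd-quark`; the quantization config is read
# from the pre-quantized checkpoint, so no QuarkConfig is built here.
model = AutoModelForCausalLM.from_pretrained(
    "org/model-quantized-with-quark",  # placeholder repo id
    device_map="auto",
)
```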