# Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit in memory, and it speeds up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization with bitsandbytes.
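For example, bitsandbytes quantization is applied at load time through a [`BitsAndBytesConfig`]. The snippet below is a minimal sketch: it assumes the `bitsandbytes` package and a CUDA GPU are available, and the checkpoint name is only illustrative.

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the weights to 4-bit as they are loaded; computation runs in bfloat16.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The checkpoint name is illustrative; other causal LM checkpoints work the same way.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=quantization_config,
    device_map="auto",
)
```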

Quantization techniques that aren't supported in Transformers can be added with the [`HfQuantizer`] class, as sketched below.
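The following is a hedged, minimal sketch of such a subclass: the hook names follow `transformers.quantizers.base.HfQuantizer`, but the bodies here are placeholders, and a real method also needs its own quantization config class and registration with the auto-quantizer mappings.

```py
from transformers.quantizers.base import HfQuantizer


class MyQuantizer(HfQuantizer):
    # Set to True if the method needs a calibration step before it can be used.
    requires_calibration = False

    def validate_environment(self, *args, **kwargs):
        # Verify that the required backend packages and hardware are available.
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # Modify the model in place, e.g. replace nn.Linear layers with the
        # method's quantized modules, so checkpoint weights can be loaded into them.
        pass

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Run any post-loading steps, such as packing weights or attaching hooks.
        pass

    @property
    def is_trainable(self):
        # Whether the quantized model supports further training (e.g. PEFT).
        return False

    def is_serializable(self, safe_serialization=None):
        # Whether the quantized model can be saved with save_pretrained.
        return False
```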

Learn how to quantize models in the Quantization guide.

## QuantoConfig

[[autodoc]] QuantoConfig

## AqlmConfig

[[autodoc]] AqlmConfig

## VptqConfig

[[autodoc]] VptqConfig

## AwqConfig

[[autodoc]] AwqConfig

## EetqConfig

[[autodoc]] EetqConfig

## GPTQConfig

[[autodoc]] GPTQConfig

## BitsAndBytesConfig

[[autodoc]] BitsAndBytesConfig

## HfQuantizer

[[autodoc]] quantizers.base.HfQuantizer

## HiggsConfig

[[autodoc]] HiggsConfig

## HqqConfig

[[autodoc]] HqqConfig

## FbgemmFp8Config

[[autodoc]] FbgemmFp8Config

## CompressedTensorsConfig

[[autodoc]] CompressedTensorsConfig

## TorchAoConfig

[[autodoc]] TorchAoConfig

## BitNetConfig

[[autodoc]] BitNetConfig

## FineGrainedFP8Config

[[autodoc]] FineGrainedFP8Config