
# Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory, and to speed up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization with bitsandbytes.
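For example, 4-bit bitsandbytes quantization only requires passing a [`BitsAndBytesConfig`] to [`~PreTrainedModel.from_pretrained`]. A minimal sketch, assuming bitsandbytes is installed; the checkpoint id is a placeholder and any causal language model works:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```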
Quantization techniques that aren't supported in Transformers can be added with the [`HfQuantizer`] class.
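A minimal sketch of a custom quantizer. The hook names below follow the `HfQuantizer` base class, but the exact abstract members vary between Transformers versions, so treat this as an outline rather than a definitive interface:

```python
from transformers.quantizers.base import HfQuantizer

class MyQuantizer(HfQuantizer):
    """Hypothetical custom quantizer; override the hooks HfQuantizer exposes."""

    requires_calibration = False  # set True if the method needs calibration data

    def validate_environment(self, *args, **kwargs):
        # Verify that the required backend packages/kernels are installed.
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # Replace nn.Linear modules with quantized equivalents before
        # the checkpoint weights are loaded.
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Optional post-processing once all weights are in place.
        return model

    @property
    def is_trainable(self):
        return False

    @property
    def is_serializable(self):
        return False
```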
Learn how to quantize models in the Quantization guide.
## QuantoConfig

[[autodoc]] QuantoConfig

## AqlmConfig

[[autodoc]] AqlmConfig

## VptqConfig

[[autodoc]] VptqConfig

## AwqConfig

[[autodoc]] AwqConfig

## EetqConfig

[[autodoc]] EetqConfig

## GPTQConfig

[[autodoc]] GPTQConfig
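GPTQ calibrates the model on a small dataset while quantizing it during loading. A minimal sketch, assuming `optimum` and a GPTQ backend are installed; the checkpoint id is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# The tokenizer is needed to preprocess the calibration dataset.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")  # placeholder checkpoint
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization runs during loading, so this call can take a while.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    quantization_config=gptq_config,
    device_map="auto",
)
```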
## BitsAndBytesConfig

[[autodoc]] BitsAndBytesConfig

## HfQuantizer

[[autodoc]] quantizers.base.HfQuantizer

## HiggsConfig

[[autodoc]] HiggsConfig

## HqqConfig

[[autodoc]] HqqConfig
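HQQ quantizes weights on the fly at load time and needs no calibration data. A minimal sketch, assuming the `hqq` package is installed; the checkpoint id is a placeholder:

```python
from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit weights; group_size trades accuracy against memory savings
quant_config = HqqConfig(nbits=4, group_size=64)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
```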
## FbgemmFp8Config

[[autodoc]] FbgemmFp8Config

## CompressedTensorsConfig

[[autodoc]] CompressedTensorsConfig

## TorchAoConfig

[[autodoc]] TorchAoConfig
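torchao quantization is selected with a quantization type string plus its keyword arguments. A minimal sketch, assuming the `torchao` package is installed; the checkpoint id is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, TorchAoConfig

# int4 weight-only quantization from torchao
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder checkpoint
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
    device_map="auto",
)
```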
## BitNetQuantConfig

[[autodoc]] BitNetQuantConfig

## SpQRConfig

[[autodoc]] SpQRConfig

## FineGrainedFP8Config

[[autodoc]] FineGrainedFP8Config

## QuarkConfig

[[autodoc]] QuarkConfig

## AutoRoundConfig

[[autodoc]] AutoRoundConfig