<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Quantization
Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory and to speed up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization with bitsandbytes.
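For example, a model can be quantized to 8-bit on the fly by passing a quantization config to [`~PreTrainedModel.from_pretrained`]. A minimal sketch with bitsandbytes (the checkpoint name is just a placeholder; the bitsandbytes and accelerate packages must be installed):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the linear layers to 8-bit while the weights load.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint; any causal LM works
    device_map="auto",
    quantization_config=quantization_config,
)
```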
Quantization techniques that aren't supported in Transformers can be added with the [`HfQuantizer`] class.
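As a rough sketch of what such an extension looks like, a custom quantizer subclasses [`HfQuantizer`] and fills in the pre- and post-weight-loading hooks. The method names below follow `quantizers.base.HfQuantizer`, but the exact hooks can vary between versions, so check the base class in your installed release:

```python
from transformers.quantizers.base import HfQuantizer


class MyQuantizer(HfQuantizer):
    """Hypothetical quantizer wiring a custom method into from_pretrained."""

    requires_calibration = False  # quantize on the fly, no calibration data

    def validate_environment(self, *args, **kwargs):
        # Raise here if a required backend package or device is missing.
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # Swap nn.Linear modules for quantized ones before weights load.
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Finalize, e.g. pack weights, once the checkpoint is loaded.
        return model

    def is_serializable(self, safe_serialization=None):
        return False

    @property
    def is_trainable(self):
        return False
```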
<Tip>

Learn how to quantize models in the [Quantization](../quantization) guide.

</Tip>

## QuantoConfig

[[autodoc]] QuantoConfig
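A hedged usage sketch: Quanto quantizes weights on the fly at load time (the checkpoint name is a placeholder; the optimum-quanto package must be installed):

```python
from transformers import AutoModelForCausalLM, QuantoConfig

# Quantize weights to int8 while loading; "int4" and "int2" also work.
quantization_config = QuantoConfig(weights="int8")

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    device_map="auto",
    quantization_config=quantization_config,
)
```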
## AqlmConfig

[[autodoc]] AqlmConfig

## VptqConfig

[[autodoc]] VptqConfig

## AwqConfig

[[autodoc]] AwqConfig
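AWQ checkpoints are already quantized, so they load directly; passing an [`AwqConfig`] is mainly useful to override settings such as fused modules. A sketch, assuming the autoawq package is installed and using a community AWQ checkpoint as an example:

```python
from transformers import AutoModelForCausalLM, AwqConfig

# Enable fused AWQ modules for faster inference.
quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=512,
    do_fuse=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-AWQ",  # an already-quantized AWQ checkpoint
    quantization_config=quantization_config,
)
```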
## EetqConfig

[[autodoc]] EetqConfig

## GPTQConfig

[[autodoc]] GPTQConfig
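Unlike the AWQ example above, [`GPTQConfig`] can quantize a full-precision model at load time using a calibration dataset. A minimal sketch (the checkpoint name is a placeholder; optimum and a GPTQ backend must be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Calibrate on the "c4" dataset and quantize the weights to 4-bit.
quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```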
## BitsAndBytesConfig

[[autodoc]] BitsAndBytesConfig
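Beyond the plain 8-bit loading shown earlier, [`BitsAndBytesConfig`] exposes 4-bit options such as the NF4 data type, the compute dtype, and nested quantization. A sketch with a placeholder checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute and nested quantization.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    device_map="auto",
    quantization_config=quantization_config,
)
```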
## HfQuantizer

[[autodoc]] quantizers.base.HfQuantizer

## HiggsConfig

[[autodoc]] HiggsConfig

## HqqConfig

[[autodoc]] HqqConfig
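A hedged sketch of HQQ on-the-fly quantization, where `nbits` and `group_size` are the main knobs (placeholder checkpoint; requires the hqq package):

```python
from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit HQQ quantization applied while the model loads.
quantization_config = HqqConfig(nbits=4, group_size=64)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    device_map="auto",
    quantization_config=quantization_config,
)
```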
## FbgemmFp8Config

[[autodoc]] FbgemmFp8Config
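A sketch of FP8 quantization at load time via FBGEMM. It assumes the fbgemm-gpu package and a recent GPU (Hopper-class at the time of writing); the checkpoint name is a placeholder:

```python
from transformers import AutoModelForCausalLM, FbgemmFp8Config

# Quantize weights to FP8 while loading; activations are handled dynamically.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    device_map="auto",
    quantization_config=FbgemmFp8Config(),
)
```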
## CompressedTensorsConfig

[[autodoc]] CompressedTensorsConfig

## TorchAoConfig

[[autodoc]] TorchAoConfig
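A sketch of torchao int4 weight-only quantization. The string shorthand for the quantization type follows the torchao integration docs at the time of writing; the checkpoint name is a placeholder and the torchao package must be installed:

```python
import torch
from transformers import AutoModelForCausalLM, TorchAoConfig

# int4 weight-only quantization with a group size of 128.
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
)
```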
## BitNetConfig

[[autodoc]] BitNetConfig

## SpQRConfig

[[autodoc]] SpQRConfig

## FineGrainedFP8Config

[[autodoc]] FineGrainedFP8Config

## QuarkConfig

[[autodoc]] QuarkConfig

## AutoRoundConfig

[[autodoc]] AutoRoundConfig
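AutoRound checkpoints are pre-quantized and carry their own quantization config, so passing an [`AutoRoundConfig`] is mainly useful to override settings such as the inference backend. In the sketch below, both the checkpoint name and the `backend` value are assumptions; the auto-round package must be installed:

```python
from transformers import AutoModelForCausalLM, AutoRoundConfig

# Load an already-quantized AutoRound checkpoint.
# The "backend" override is an assumption; see the AutoRound docs for values.
model = AutoModelForCausalLM.from_pretrained(
    "Intel/Qwen2-7B-int4-inc",  # placeholder: an AutoRound-quantized checkpoint
    device_map="auto",
    quantization_config=AutoRoundConfig(backend="auto"),
)
```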