transformers/docs/source/en/quantization
Latest commit: HIGGS Quantization Support (#34997), authored by Andrei Panferov (64c05eecd6)
* higgs init

* working with crunches

* per-model workspaces

* style

* style 2

* tests and style

* higgs tests passing

* protecting torch import

* removed torch.Tensor type annotations

* torch.nn.Module inheritance fix maybe

* hide inputs inside quantizer calls

* style structure something

* Update src/transformers/quantizers/quantizer_higgs.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* reworked num_sms

* Update src/transformers/integrations/higgs.py

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* revamped device checks

* docstring upd

* Update src/transformers/quantizers/quantizer_higgs.py

Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>

* edited tests and device map assertions

* minor edits

* updated flute cuda version in docker

* Added p=1 and 2,3bit HIGGS

* flute version check update

* incorporated `modules_to_not_convert`

* less hardcoding

* Fixed comment

* Added docs

* Fixed gemma support

* example in docs

* fixed torch_dtype for HIGGS

* Update docs/source/en/quantization/higgs.md

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>

* Collection link

* dequantize interface

* newer flute version, torch.compile support

* unittest message fix

* docs update compile

* isort

* ValueError instead of assert

---------

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Merged: 2024-12-23 16:54:49 +01:00
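
The PR summarized above adds HIGGS as a quantization backend that is selected through `quantization_config` when loading a model. Below is a minimal usage sketch, not the PR's definitive example: it assumes the `HiggsConfig` class introduced by this PR, a CUDA GPU, and an installed FLUTE kernel package (the backend the "flute version" commits refer to). The checkpoint name and `bits=4` setting are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, HiggsConfig

model_id = "google/gemma-2-9b-it"  # illustrative checkpoint

# Quantize the weights with HIGGS on the fly while loading.
# HiggsConfig also accepts `modules_to_not_convert` (see the commit of the same
# name above) to keep selected submodules, e.g. ["lm_head"], in full precision.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=HiggsConfig(bits=4),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("HIGGS quantization keeps the model", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the "dequantize interface" and "torch.compile support" commits, the quantized model is also expected to support `model.dequantize()` and compiled generation, though both depend on the installed FLUTE version.
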
File | Last commit message | Last commit date
aqlm.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00
awq.md | Enables CPU AWQ model with IPEX version. (#33460) | 2024-10-04 16:25:10 +02:00
bitnet.md | FEAT : Adding BitNet quantization method to HFQuantizer (#33410) | 2024-10-09 17:51:41 +02:00
bitsandbytes.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
compressed_tensors.md | [Docs] Update compressed_tensors.md (#33961) | 2024-10-10 15:22:41 +02:00
contribute.md | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00
eetq.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00
fbgemm_fp8.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
gptq.md | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00
higgs.md | HIGGS Quantization Support (#34997) | 2024-12-23 16:54:49 +01:00
hqq.md | Hqq serialization (#33141) | 2024-09-30 14:47:18 +02:00
optimum.md | Docs / Quantization: refactor quantization documentation (#30942) | 2024-05-23 14:31:52 +02:00
overview.md | HIGGS Quantization Support (#34997) | 2024-12-23 16:54:49 +01:00
quanto.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
torchao.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
vptq.md | FEAT : Adding VPTQ quantization method to HFQuantizer (#34770) | 2024-12-20 09:45:53 +01:00