<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# GPTQ

The [GPTQModel](https://github.com/ModelCloud/GPTQModel) and [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) libraries implement the GPTQ algorithm, a post-training quantization technique in which each row of the weight matrix is quantized independently to find the version of the weights that minimizes error. The weights are quantized to int4 and restored to fp16 on the fly during inference. This can reduce memory usage by nearly 4x because the int4 weights are dequantized in a fused kernel rather than in a GPU's global memory. Inference is also faster because a lower bitwidth takes less time to communicate.

> [!WARNING]
> AutoGPTQ is likely to be deprecated in the future because it lacks continued support for new models and features. See the [GPTQModel](#gptqmodel) section for more details.

Install Accelerate, Transformers, and Optimum first.

```bash
pip install --upgrade accelerate optimum transformers
```

Then run the command below to install a GPTQ library.

<hfoptions id="install">
<hfoption id="GPTQModel">

```bash
pip install gptqmodel --no-build-isolation
```

</hfoption>
<hfoption id="AutoGPTQ">

```bash
pip install auto-gptq --no-build-isolation
```

</hfoption>
</hfoptions>

Create a [`GPTQConfig`] class and set the number of bits to quantize to, a dataset to calibrate the weights for quantization, and a tokenizer to prepare the dataset.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
```

You can pass your own dataset as a list of strings, but it is highly recommended to use the same dataset from the GPTQ paper.

```py
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```

Load a model to quantize and pass [`GPTQConfig`] to [`~AutoModelForCausalLM.from_pretrained`]. Set `device_map="auto"` to automatically offload the model to a CPU if needed to fit it in memory, and to allow the model modules to be moved between the CPU and GPU for quantization.

```py
quantized_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", device_map="auto", quantization_config=gptq_config)
```

If you're running out of memory because a dataset is too large (disk offloading is not supported), try passing the `max_memory` parameter to set how much memory to use on each device (GPU and CPU).

```py
quantized_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    device_map="auto",
    max_memory={0: "30GiB", 1: "46GiB", "cpu": "30GiB"},
    quantization_config=gptq_config
)
```

> [!WARNING]
> Depending on your hardware, it can take some time to quantize a model from scratch. It can take ~5 minutes to quantize the [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model on a free-tier Google Colab GPU, but it'll take ~4 hours to quantize a 175B parameter model on an NVIDIA A100. Before you quantize a model, it is a good idea to check the Hub to see if a GPTQ-quantized version of the model already exists.
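
If you'd rather check programmatically, the sketch below uses [huggingface_hub](https://huggingface.co/docs/huggingface_hub) to search the Hub; the search terms are only illustrative.

```py
# Minimal sketch: look for existing GPTQ checkpoints of a model on the Hub.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(search="opt-125m gptq", limit=5):
    print(model.id)
```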

Once a model is quantized, you can use [`~PreTrainedModel.push_to_hub`] to push the model and tokenizer to the Hub where they can be easily shared and accessed. This also saves the [`GPTQConfig`].

```py
quantized_model.push_to_hub("opt-125m-gptq")
tokenizer.push_to_hub("opt-125m-gptq")
```

[`~PreTrainedModel.save_pretrained`] saves a quantized model locally. If the model was quantized with the `device_map` parameter, make sure to move the entire model to a GPU or CPU before saving it. The example below saves the model on a CPU.

```py
quantized_model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")

# if quantized with device_map set
quantized_model.to("cpu")
quantized_model.save_pretrained("opt-125m-gptq")
```

Reload a quantized model with [`~PreTrainedModel.from_pretrained`], and set `device_map="auto"` to automatically distribute the model on all available GPUs to load the model faster without using more memory than needed.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto")
```
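
To sanity-check the reloaded model, run a quick generation. This is a minimal sketch; `{your_username}/opt-125m-gptq` is a placeholder for the repository you pushed above.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("{your_username}/opt-125m-gptq")
inputs = tokenizer("GPTQ quantization makes it possible to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```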

## Marlin

[Marlin](https://github.com/IST-DASLab/marlin) is a 4-bit only CUDA GPTQ kernel, highly optimized for the NVIDIA A100 GPU (Ampere) architecture. Loading, dequantization, and execution of post-dequantized weights are highly parallelized, offering a substantial inference improvement versus the original CUDA GPTQ kernel. Marlin is only available for quantized inference and does not support model quantization.

Marlin inference can be activated with the `backend` parameter in [`GPTQConfig`].

```py
from transformers import AutoModelForCausalLM, GPTQConfig

model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=GPTQConfig(bits=4, backend="marlin"))
```

## ExLlama

> [!WARNING]
> Only 4-bit models are supported, and we recommend deactivating the ExLlama kernels if you're finetuning a quantized model with PEFT.

[ExLlama](https://github.com/turboderp/exllama) is a Python/C++/CUDA implementation of the [Llama](model_doc/llama) model that is designed for faster inference with 4-bit GPTQ weights (check out these [benchmarks](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark)). The ExLlama kernel is activated by default when you create a [`GPTQConfig`] object.

To boost inference speed even further, use the [ExLlamaV2](https://github.com/turboderp/exllamav2) kernels by configuring the `exllama_config` parameter in [`GPTQConfig`].

```py
import torch
from transformers import AutoModelForCausalLM, GPTQConfig

gptq_config = GPTQConfig(bits=4, exllama_config={"version": 2})
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="auto",
    quantization_config=gptq_config
)
```

The ExLlama kernels are only supported when the entire model is on the GPU. If you're doing inference on a CPU with AutoGPTQ 0.4.2+, disable the ExLlama kernel in [`GPTQConfig`]. This overwrites the attributes related to the ExLlama kernels in the quantization config of the `config.json` file.

```py
import torch
from transformers import AutoModelForCausalLM, GPTQConfig

gptq_config = GPTQConfig(bits=4, use_exllama=False)
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="cpu",
    quantization_config=gptq_config
)
```
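
The same `use_exllama=False` setting applies when finetuning a quantized model with PEFT, as recommended in the warning above. Below is a minimal sketch, assuming the PEFT library is installed; the LoRA hyperparameters and target modules are only illustrative.

```py
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, GPTQConfig

# Disable the ExLlama kernels before attaching trainable adapters.
gptq_config = GPTQConfig(bits=4, use_exllama=False)
model = AutoModelForCausalLM.from_pretrained(
    "{your_username}/opt-125m-gptq",
    device_map="auto",
    quantization_config=gptq_config
)

# Attach LoRA adapters so only a small set of parameters is trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```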

## GPTQModel

It is recommended to use GPTQModel, which started as a maintained fork of AutoGPTQ, because it has since diverged from AutoGPTQ with several significant features. GPTQModel offers faster quantization, lower memory usage, and more accurate default quantization.

GPTQModel supports asymmetric quantization, which can potentially lower quantization error compared to symmetric quantization. Asymmetric quantization is not backward compatible with AutoGPTQ, and not all kernels (Marlin, for example) support it.
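
Asymmetric quantization is requested with the `sym` flag in [`GPTQConfig`], as in the minimal sketch below, which assumes the GPTQModel backend is installed.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
# sym=False requests asymmetric quantization, which requires GPTQModel.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer, sym=False)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m", device_map="auto", quantization_config=gptq_config
)
```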

GPTQModel also has broader support for the latest LLM models, multimodal models (Qwen2-VL and Ovis1.6-VL), platforms (Linux, macOS, Windows 11), and hardware (AMD ROCm, Apple Silicon, Intel/AMD CPUs, Intel Datacenter Max and Arc GPUs, and more).

The Marlin kernels are also updated for A100 GPUs, and other kernels are updated to include auto-padding for legacy models and models with non-uniform in/out-features.

## Resources

Run GPTQ quantization with PEFT in this [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) for a hands-on experience, and read [Making LLMs lighter with AutoGPTQ and transformers](https://huggingface.co/blog/gptq-integration) to learn more about the AutoGPTQ integration.