transformers/tests/bitsandbytes

Commit: 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) (#23479), Tim Dettmers, 9d73b92269, 2023-05-24 12:52:45 +02:00
Co-authored-by: Younes Belkada <younesbelkada@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Files: __init__.py, README.md, test_4bit.py, test_mixed_int8.py

Testing mixed int8 quantization


The following is a recipe for effectively debugging the bitsandbytes integration in Hugging Face transformers.

Library requirements

  • transformers>=4.22.0
  • accelerate>=0.12.0
  • bitsandbytes>=0.31.5
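As a quick sanity check, the minimum versions above can be verified programmatically. This is a minimal sketch, not part of the test suite; it uses a naive dotted-version comparison rather than a full PEP 440 parser:

```python
from importlib.metadata import version, PackageNotFoundError

# Minimum versions required by the integration (from the list above).
MIN_VERSIONS = {"transformers": "4.22.0", "accelerate": "0.12.0", "bitsandbytes": "0.31.5"}

def as_tuple(v):
    # Naive dotted-version parser; good enough for plain "X.Y.Z" strings.
    return tuple(int(part) for part in v.split(".")[:3])

def check_requirements(min_versions=MIN_VERSIONS):
    """Return a list of human-readable problems; empty means all requirements are met."""
    problems = []
    for name, minimum in min_versions.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name} is not installed")
            continue
        if as_tuple(installed) < as_tuple(minimum):
            problems.append(f"{name}=={installed} is older than {minimum}")
    return problems

if __name__ == "__main__":
    for problem in check_requirements():
        print(problem)
```

Running this before the tests makes version mismatches visible immediately instead of surfacing as opaque import or runtime errors.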

Hardware requirements

The following instructions were tested with 2 NVIDIA Tesla T4 GPUs. To run bitsandbytes successfully you need a GPU with 8-bit tensor core support. Turing, Ampere, or newer architectures (e.g. T4, RTX 20 series, RTX 30 series, A40-A100, A6000) should be supported.
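The "Turing or newer" requirement corresponds to a CUDA compute capability of at least 7.5 (Turing is sm_75, the first architecture with int8 tensor cores). A hedged sketch of the check; the helper name is made up, and in practice you would pass in torch.cuda.get_device_capability(0):

```python
# Turing (sm_75) is the first architecture with int8 tensor cores,
# matching the "Turing, Ampere or newer" requirement above.
MIN_CAPABILITY = (7, 5)

def supports_int8_tensor_cores(capability):
    """capability is a (major, minor) tuple, e.g. from torch.cuda.get_device_capability(0)."""
    return tuple(capability) >= MIN_CAPABILITY

# Examples: T4 is sm_75, A100 is sm_80, P100 (Pascal) is only sm_60.
```

This is a cheap pre-flight check to run before investing time in debugging kernel errors on unsupported hardware.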

Virtual envs

conda create --name int8-testing python=3.8
pip install "bitsandbytes>=0.31.5"
pip install "accelerate>=0.12.0"
pip install "transformers>=4.23.0"

If transformers>=4.23.0 is not yet released, install it from source instead:

pip install git+https://github.com/huggingface/transformers.git

Troubleshooting

A list of common errors:

Torch does not correctly do the operations on GPU

First check that the following snippet runs without any error:

import torch

vec = torch.randn(1, 2, 3).to(0)

If it fails, reinstall torch with conda:

conda create --name int8-testing python=3.8
conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge
pip install "bitsandbytes>=0.31.5"
pip install "accelerate>=0.12.0"
pip install "transformers>=4.23.0"

For the latest PyTorch installation instructions, please see the official PyTorch website. After reinstalling, the snippet above should work.

bitsandbytes operations are not supported under CPU!

This happens when accelerate places some Linear weights on the CPU. Carefully inspect model.hf_device_map and make sure that no Linear module is assigned to the CPU. It is fine to have the last module (usually the lm_head) on the CPU.
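A quick way to audit this is to scan model.hf_device_map for entries placed on the CPU. A minimal sketch; the device map below is made up for illustration, and the real map's keys are module names from your model:

```python
def cpu_offloaded_modules(device_map, allowed=("lm_head",)):
    """Return modules mapped to CPU, excluding names that are fine to offload."""
    return [
        name
        for name, device in device_map.items()
        if str(device) == "cpu" and not any(name.endswith(a) for a in allowed)
    ]

# Hypothetical hf_device_map for illustration:
device_map = {
    "transformer.wte": 0,
    "transformer.h.0": 0,
    "transformer.h.1": "cpu",  # problematic: a block with Linear layers on CPU
    "lm_head": "cpu",          # fine: the last module may live on CPU
}
print(cpu_offloaded_modules(device_map))  # -> ['transformer.h.1']
```

Any name this returns is a candidate cause of the "not supported under CPU" error; reload the model with more GPU memory or a different max_memory budget so those modules land on a GPU.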

To use the type as a Parameter, please correct the detach() semantics defined by __torch_dispatch__() implementation.

Update to the latest version of accelerate, e.g. pip install -U accelerate, and the problem should be solved.

Parameter has no attribute .CB

Same solution as above.

RuntimeError: CUDA error: an illegal memory access was encountered ... consider passing CUDA_LAUNCH_BLOCKING=1

Run your script with CUDA_LAUNCH_BLOCKING=1 prepended (e.g. CUDA_LAUNCH_BLOCKING=1 python myscript.py) and you should observe an error as described in the next section.
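If you prefer to set the variable from inside Python, it must be set before CUDA is initialized, i.e. before the first CUDA call and ideally before importing torch. A sketch:

```python
import os

# Must be set before torch initializes CUDA, ideally before "import torch",
# so that kernel launches become synchronous and errors surface at the
# offending line instead of at a later, unrelated call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# import torch  # import (and any CUDA work) only after the variable is set
```

Prepending the variable on the command line is equivalent and avoids any ordering pitfalls.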

CUDA illegal memory error: an illegal memory access at line...:

Check the CUDA versions with:

nvcc --version

and confirm it matches the version detected by bitsandbytes. If they differ, run:

ls -l $CONDA_PREFIX/lib/libcudart.so

or

ls -l $LD_LIBRARY_PATH

and check that the symlink for libcudart.so points to the correct CUDA runtime. Sometimes nvcc detects the correct CUDA version but bitsandbytes does not; in that case you have to make sure the libcudart.so symlink resolves to the file from the correct CUDA installation.
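The symlink resolution step can also be scripted. A sketch that walks each directory in a :-separated path list (such as LD_LIBRARY_PATH) and reports where its libcudart.so really points; the helper name is made up:

```python
import os

def resolve_cudart(ld_library_path):
    """Map each directory in a ':'-separated path list to the real file
    behind its libcudart.so, if one is present there."""
    resolved = {}
    for directory in filter(None, ld_library_path.split(":")):
        candidate = os.path.join(directory, "libcudart.so")
        if os.path.exists(candidate):
            # realpath follows the symlink chain to the actual runtime file,
            # e.g. .../libcudart.so -> .../libcudart.so.11.3
            resolved[directory] = os.path.realpath(candidate)
    return resolved

if __name__ == "__main__":
    for directory, target in resolve_cudart(os.environ.get("LD_LIBRARY_PATH", "")).items():
        print(f"{directory}/libcudart.so -> {target}")
```

If any printed target carries a different CUDA version suffix than nvcc reports, that directory is the misconfigured one.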

Here is an example of a badly configured CUDA installation:

nvcc --version gives:

(screenshot: nvcc reports CUDA version 11.3)

which means that the detected CUDA version is 11.3, but bitsandbytes outputs:

(screenshot: bitsandbytes reports a different CUDA runtime version)

First check:

echo $LD_LIBRARY_PATH

If it contains multiple paths separated by :, you have to make sure the correct CUDA version is set for each of them by running:

ls -l $path/libcudart.so

for each path $path in the list. If there is only a single path, simply run:

ls -l $LD_LIBRARY_PATH/libcudart.so

and you will see something like:

(screenshot: libcudart.so symlinked to a CUDA 10.2 runtime)

If you see that the file is linked to the wrong CUDA version (here 10.2), find the correct location of libcudart.so (e.g. with find / -name libcudart.so) and replace the environment variable LD_LIBRARY_PATH with the path containing the correct libcudart.so file.