
Dockers for transformers

In this folder you will find various Dockerfiles and some subfolders.

  • Dockerfiles (e.g. consistency.dockerfile) directly under ~/docker are used for our "fast" CIs. You should be able to use them for tasks that only need a CPU. For example, torch-light is a very lightweight container (703MiB).
  • Subfolders contain Dockerfiles used for our slow CIs. These can be used for GPU tasks, but they are BIG, as they were not designed for a single model or a single task. For example, ~/docker/transformers-pytorch-gpu includes the additional dependencies needed to run ALL model tests (say librosa or tesseract, which you do not need just to run LLMs).

Note that in both cases you need to run uv pip install -e ., which should take around 5 seconds. We do it outside the Dockerfile to suit the needs of our CI: we check out a new branch each time, so the transformers code stays up to date.

We are open to contributions, and invite the community to create Dockerfiles with build arguments that properly choose extras depending on a model's dependencies! 🤗
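
As a sketch of what such a contribution might look like, the hypothetical Dockerfile below uses a build argument to select which transformers extras are installed. The EXTRAS argument, the base image, and the layout are illustrative assumptions, not part of the existing CI images.

```dockerfile
# Hypothetical sketch: a build argument selects the extras to install.
FROM python:3.10-slim

# EXTRAS is illustrative; override it at build time, e.g.
#   docker build --build-arg EXTRAS="torch,vision" -t transformers-custom .
ARG EXTRAS="torch"

RUN pip install --no-cache-dir uv

# Copy a checkout of transformers into the image.
COPY . /transformers
WORKDIR /transformers

# Editable install of transformers with only the requested extras.
RUN uv pip install --system -e ".[${EXTRAS}]"
```

This keeps a single Dockerfile reusable across models: the build argument decides which optional dependency groups (as defined in transformers' setup.py extras) end up in the image.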