# Dockers for transformers

In this folder you will find various dockerfiles, and some subfolders.

- dockerfiles (e.g. `consistency.dockerfile`) present under `~/docker` are used for our "fast" CIs. You should be able to use them for tasks that only need a CPU. For example, `torch-light` is a very lightweight container (703MiB); see the example build command after this list.
- subfolders contain dockerfiles used for our `slow` CIs, which can be used for GPU tasks, but they are BIG as they were not specifically designed for a single model / single task. Thus `~/docker/transformers-pytorch-gpu` includes additional dependencies to allow us to run ALL model tests (say `librosa` or `tesseract`, which you do not need to run LLMs).
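As a concrete, purely illustrative example of using one of the fast-CI dockerfiles, the light CPU image could be built and entered like this; the tag name and mount layout are assumptions, not conventions defined in this folder:

```bash
# Build the lightweight CPU-only image from the repository root
# (torch-light.dockerfile lives in this folder; the tag is illustrative).
docker build -f docker/torch-light.dockerfile -t transformers-torch-light .

# Open a shell with a local transformers checkout mounted
# (assumes the image is Debian/Ubuntu-based and ships bash).
docker run -it --rm -v "$(pwd)":/workspace -w /workspace transformers-torch-light bash
```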
Note that in both cases, you need to run `uv pip install -e .`, which should take around 5 seconds. We do it outside the dockerfile for the needs of our CI: we check out a new branch each time, and the `transformers` code is thus updated.
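Putting the two points above together, a minimal sketch of the CI-style flow might look as follows; the published image name, the mount path, and the availability of `uv` inside the image are assumptions made for illustration:

```bash
# Run the big GPU image against a freshly checked-out branch: mount the checkout,
# install it in editable mode, then verify the container picks up the code under test.
docker run --rm --gpus all \
    -v "$(pwd)":/transformers -w /transformers \
    huggingface/transformers-pytorch-gpu \
    bash -c "uv pip install -e . && python -c 'import transformers; print(transformers.__version__)'"
```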
We are open to contributions, and invite the community to create dockerfiles with potential arguments that properly choose extras depending on the model's dependencies! 🤗
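As a rough illustration of the kind of contribution we mean (the file name `extras.dockerfile` and the `EXTRAS` build argument are hypothetical and do not exist in this folder), the idea is a dockerfile that accepts a build argument and installs only the matching extras:

```bash
# Hypothetical: a dockerfile exposing an EXTRAS build argument, e.g. containing
#   ARG EXTRAS=torch
#   RUN pip install "transformers[${EXTRAS}]"
# could then produce per-domain images like this:
docker build -f docker/extras.dockerfile --build-arg EXTRAS="torch,vision" -t transformers-vision .
docker build -f docker/extras.dockerfile --build-arg EXTRAS="torch,audio"  -t transformers-audio .
```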