# Dockers for `transformers`
In this folder you will find various dockerfiles, as well as some subfolders.
- Dockerfiles (e.g. `consistency.dockerfile`) present directly under `~/docker` are used for our "fast" CIs. You should be able to use them for tasks that only need a CPU. For example, `torch-light` is a very lightweight container (703MiB); a build sketch is shown right after this list.
- Subfolders contain the dockerfiles used for our `slow` CIs, which can be used for GPU tasks, but they are BIG, as they were not designed for a single model / single task. Thus `~/docker/transformers-pytorch-gpu` includes additional dependencies that allow us to run ALL model tests (say `librosa` or `tesseract`, which you do not need to run LLMs).
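As a rough sketch of how one of the fast images can be used (the dockerfile path and the image tag below are assumptions, adjust them to the file you actually want to build):

```bash
# Build one of the "fast" CI images from the repository root.
# The image tag is arbitrary; pick any dockerfile under docker/.
docker build -f docker/torch-light.dockerfile -t transformers-torch-light .

# Run a CPU-only task inside it, mounting your checkout of transformers.
docker run --rm -it -v "$(pwd)":/transformers -w /transformers transformers-torch-light bash
```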
Note that in both cases, you need to run `uv pip install -e .`, which should take around 5 seconds. We do it outside the dockerfile for the needs of our CI: we check out a new branch each time, and the `transformers` code is thus updated.
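A minimal sketch of that step, assuming the repository is mounted at `/transformers` as in the example above:

```bash
# Inside the container, install the checked-out code in editable mode so the image
# runs the current branch rather than whatever was baked in at build time.
# Depending on how Python is set up in the image, uv may need an active virtual
# environment or the --system flag.
cd /transformers
uv pip install -e .
```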
We are open to contributions, and we invite the community to create dockerfiles with build arguments that properly select extras depending on a model's dependencies! 🤗
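Purely as a hypothetical illustration of that idea (neither the dockerfile name nor the `EXTRAS` build argument exists in the repository today), such a dockerfile could be driven like this:

```bash
# Hypothetical: a contributed dockerfile declaring `ARG EXTRAS` and installing
# transformers with `uv pip install -e ".[${EXTRAS}]"` could be built with only
# the extras a given model family needs. File name and argument are illustrative.
docker build -f docker/model-specific.dockerfile --build-arg EXTRAS="audio,vision" -t transformers-model-specific .
```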