jiqing-feng
9d6abf9778
enable torchao quantization on CPU ( #36146 )
...
* enable torchao quantization on CPU
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix int4
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix format
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* enable CPU torchao tests
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix cuda tests
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix cpu tests
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update tests
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix style
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix cuda tests
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix torchao available
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix torchao available
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix torchao config cannot convert to json
* fix docs
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* rm to_dict to rebase
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* limited torchao version for CPU
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix format
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix skip
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix format
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* Update src/transformers/testing_utils.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* fix cpu test
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix format
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-02-25 11:06:52 +01:00
Cyril Vallez
401543a825
Fix is_causal fail with compile ( #36374 )
...
fix
2025-02-25 10:44:56 +01:00
Cyril Vallez
bc65f3fc1c
[modular] Do not track imports in functions ( #36279 )
...
* Add check
* just check for function
* Update examples
2025-02-25 10:29:47 +01:00
Cyril Vallez
4b5cf5496d
Load models much faster on accelerator devices!! ( #36380 )
...
* caching allocator warmup
* Update modeling_utils.py
* reuse expanded map
* style
2025-02-25 09:41:22 +01:00
Yin Song
931e5f4ac3
Update modeling_llava_onevision.py ( #36391 )
...
Fixed a potential bug in modeling_llava_onevision.py
2025-02-25 09:34:50 +01:00
Yih-Dar
2ab7bdc403
notify new model merged to main ( #36375 )
...
notify new model
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-02-24 17:53:18 +01:00
Kyle Sayers
05dfed06d7
[Modeling] Reduce runtime when loading missing keys ( #36312 )
...
* hoist keys
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
* remove hoist
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
---------
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
2025-02-24 16:10:28 +00:00
Mathew Shen
18276b03f7
fix(type): padding_side type should be Optional[str] ( #36326 )
2025-02-24 16:09:42 +00:00
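The type fix above covers the common pattern where a parameter defaults to None and is therefore `Optional[str]`, not `str`. A minimal sketch of that pattern (function and names are mine, not the tokenizer API):

```python
from typing import Optional

def pad_sequence(tokens: list, length: int, padding_side: Optional[str] = None) -> list:
    """Pad `tokens` to `length`. `padding_side` may legitimately be None,
    hence Optional[str] rather than str (the annotation this commit fixes)."""
    side = padding_side or "right"  # None falls back to the default side
    padding = [0] * (length - len(tokens))
    return padding + tokens if side == "left" else tokens + padding

print(pad_sequence([1, 2], 4))          # [1, 2, 0, 0]
print(pad_sequence([1, 2], 4, "left"))  # [0, 0, 1, 2]
```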
ivarflakstad
f4684a6eb2
Update amd pytorch index to match base image ( #36347 )
...
pip pytorch index should match docker base image
2025-02-24 16:17:20 +01:00
Jerry Zhang
2af272c101
Add autoquant support for torchao quantizer ( #35503 )
...
* Add autoquant support for torchao quantizer
Summary:
att, also verified that autoquantized model can be saved and loaded:
save: https://gist.github.com/jerryzh168/01d367aaf44dbbbfd4068a4a10a00061
load: https://gist.github.com/jerryzh168/d5c6c401b2abdf18e0b6771341f1525c
Test Plan:
tested locally with above script
model uploaded to https://huggingface.co/jerryzh168/llama3-8b-autoquant
Reviewers:
Subscribers:
Tasks:
Tags:
* add test
* ruff fix
* ruff reformat
* add docs and min_sqnr support
* format
* format
* fix test
* update doc
* format
* remove disable_compile
* format
2025-02-24 15:54:16 +01:00
ivarflakstad
977a61f743
Change slack channel for mi250 CI to amd-hf-ci ( #36346 )
2025-02-24 15:50:06 +01:00
Rahul Tuli
884a8ea1f0
Improve model loading for compressed tensor models ( #36152 )
...
* Disable warnings for stacked compressors
* Introduce two new hooks in HfQuantizer lifecycle
to allow updates to missing and unexpected keys
* Update missing and unexpected keys
for stacked compressors
* Add tests
* Fix: run_compressed cases
* Fix: uncompressed cases
* Rename compressed_tensor folder to compressed_tensors
Move RunCompressedTest to the same file
Update tests to unittest
2025-02-24 13:47:21 +01:00
Fanli Lin
4dbf17c17f
[tests] enable bnb tests on xpu ( #36233 )
...
* fix failed test
* fix device
* fix more device cases
* add more cases
* fix empty cache
* Update test_4bit.py
---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
2025-02-24 11:30:15 +01:00
Matt
92c5ca9dd7
Fix exploitable regexes in Nougat and GPTSan/GPTJNeoXJapanese ( #36121 )
...
* Fix potential regex catastrophic backtracking in NougatTokenizerFast
The original regex pattern in tokenization_nougat_fast.py was vulnerable to
catastrophic backtracking due to greedy quantifiers and nested alternations.
This commit replaces it with a more efficient pattern that:
1. Uses explicit character classes instead of dot (.)
2. Handles whitespace more precisely
3. Avoids unnecessary backtracking
4. Supports both lowercase and uppercase roman numerals
5. Maintains the same functionality while being more robust
* Try another regex
* Trying deepseek's answer
* Start with a simplification
* Another simplification
* Just rewrite the whole function myself
* Fix gptneox and gptsan
* Simplify the regex even further
* Tighten up the price regex a little
* Add possessive version of the regex
* Fix regex
* Much cleaner regexes
---------
Co-authored-by: openhands <openhands@all-hands.dev>
2025-02-21 19:49:51 +00:00
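The vulnerability class fixed here can be shown with a tiny self-contained example; the patterns below are the textbook illustration, not the actual ones from tokenization_nougat_fast.py:

```python
import re

# Classic catastrophic-backtracking shape: nested quantifiers give the
# engine exponentially many ways to split the same run of 'a's.
vulnerable = re.compile(r"^(a+)+b$")

# Equivalent safe rewrite: a single quantifier, one way to consume input.
safe = re.compile(r"^a+b$")

almost = "a" * 30  # no trailing 'b', so any match attempt must fail
# vulnerable.match(almost) would explore ~2**30 splits before giving up;
# safe.match(almost) fails after one linear scan.
print(safe.match(almost) is None)      # True
print(safe.match("aaab") is not None)  # True
```

The commit's fixes follow the same principle: replace nested/overlapping quantifiers with explicit character classes so each input character can only be consumed one way.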
CalOmnie
547911e727
Uses Collection in transformers.image_transforms.normalize ( #36301 )
...
* Uses Collection instead of Sequence in transformers.image_transforms.normalize
* Uses collections.abc.Collection in lieu of deprecated typing one
2025-02-21 18:38:41 +01:00
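Why widen the accepted type from Sequence to Collection? A Collection only requires size, iteration, and membership, so unordered containers like sets qualify too; a quick stdlib check (values here are just illustrative channel means):

```python
from collections.abc import Collection, Sequence

mean = {0.485, 0.456, 0.406}               # a set of channel means
print(isinstance(mean, Sequence))           # False: sets have no order/indexing
print(isinstance(mean, Collection))         # True: sized, iterable, supports `in`
print(isinstance([0.5, 0.5], Collection))   # True: lists still qualify
```

The second bullet matters because the `typing` aliases (`typing.Collection` etc.) are deprecated in favor of `collections.abc`, as used above.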
Fanli Lin
7c5bd24ffa
[tests] make quanto tests device-agnostic ( #36328 )
...
* make device-agnostic
* name change
2025-02-21 14:20:40 +01:00
Joao Gante
678885bbbd
[CI] Check test if the GenerationTesterMixin inheritance is correct 🐛 🔫 ( #36180 )
2025-02-21 10:18:20 +00:00
Pavel Iakubovskii
a957b7911a
Add SigLIP 2 ( #36323 )
...
* Docs
* Inits
* Auto classes
* Add siglip base
* Add base tests
* Fix Siglip V1 for fix res version
* Add image processor
* Update conversion
* Experimenting with vectorized embeddings
* Fixup
* Add modular Siglip2Processor
* Add modular configuration
* Rename num patches
* Correct image and text features merging
* Working conversion script
* Refactoring conversion script
* Remove unused code in conversion script
* Shorten dict a bit
* Refactoring conversion
* Done conversion refactoring
* Fixup
* Modular siglip2
* Make model exportable and compilable without graph breaks
* Remove position_ids from image_processor
* REmove position ids from modeling file
* Update modular
* Type hint
* Fixup
* Set defaults to processor
* Add integration test
* Revert spatial shapes back to tensor
* Change order
* Fix most of the tests
* Fix docstring
* Remove interpolate_pos_encoding arg (not needed)
* Update docs
* Standardize processing
* Fix attention_mask in vision head
* Siglip v1: remove double transpose in FA2
* Update modular file
* Update FA2 test
* Update expected logits
* Fix interpolation for siglip2 image processor
* Skip init test
* Skip dispatch on flash test
* Fix modeling tests
* Fixup
* Add dummy objects
* Fix some docstrings
* Add siglip2 in index.md
* Fix consistency
* Add docs
* Remove size and data format
* Add image processor tests
* Fix
* Add fast image processor
* Fix style
* Fix
* Docs
* Set lowercase for tokenizer
* Adjust head size for Siglip v1
* Update siglip2 for consistency with siglip1
* Update siglip2 conversion
* Update pipeline
* Update checkpoints in tests
* Update checkpoint name
* Fix pooling for image classification model
* Fix FA2 test
* Update processor
* Fix check repo
* Update docs
* Fix typos
* Fix docstring for fast image processor
* Add siglip2 to FA2 docs
* Fix fast ip tests
* Fix consistency
* Fix tokenizer class for siglip v1
* Fix missing header
* Refactor scaling for clip, siglip, siglip2
* Remove unused imports
* Make fast IP default for siglip2
* Update docs
* Update checkpoints
* Update modular
* Update paper link
* Fixup
* Fix name in toctree
* Fix test
2025-02-21 09:04:19 +00:00
Raushan Turganbay
14552cbd7c
VLMs: even more clean-up ( #36249 )
...
* squash
* style
2025-02-21 09:46:31 +01:00
Cyan
e18f233f6c
Fix default attention mask of generate in MoshiForConditionalGeneration ( #36171 )
2025-02-20 19:53:27 +00:00
Joao Gante
27d1707586
[smolvlm] make CI green ( #36306 )
...
* add smolvlm to toctree
* add requirements
* dev-ci
* no docker changes
* dev-ci
* update torch-light.dockerfile
* derp
* dev-ci
2025-02-20 18:56:11 +01:00
Nosimus
effaef334b
fix: prevent second save in the end of training if last step was saved already ( #36219 )
...
* fix: prevent second save in the end of training
* fix: prevent second save in the end of training
* test: added test for no duplicate save on epoch save strategy
* fix: removed TrainerControl
* chore: style formatting
---------
Co-authored-by: JaktensTid <jaktenstid1@gmail.com>
2025-02-20 17:38:52 +01:00
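The fix described above reduces to remembering which step was last checkpointed and skipping the end-of-training save when it matches; a toy sketch of that guard (class and method names are mine, not the Trainer API):

```python
class CheckpointTracker:
    """Toy version of the duplicate-save guard this commit adds."""

    def __init__(self):
        self.last_saved_step = None
        self.saves = []

    def save(self, step):
        self.saves.append(step)
        self.last_saved_step = step

    def save_at_end(self, final_step):
        # Skip the end-of-training save if that step is already checkpointed,
        # e.g. an epoch-strategy save that landed on the final step.
        if self.last_saved_step != final_step:
            self.save(final_step)

tracker = CheckpointTracker()
tracker.save(100)          # epoch save coincides with the last step
tracker.save_at_end(100)   # no duplicate checkpoint is written
print(tracker.saves)       # [100]
```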
12v
5412ff1a13
Fix typo in Pixtral example ( #36302 )
...
Fix typo
2025-02-20 14:13:48 +00:00
Orr Zohar
4397dfcb71
SmolVLM2 ( #36126 )
...
* smolvlm init
* updates
* fixing bugs
* minimal run, no checks
* minimal run, no checks
* passing first check + adding url support
* updating video dataloading logic
* fixing image logic
* trying modular, but fails
* modular is working, changing processor to match PR comments and general transformers logic
* fixing kwargs
* offloading video loading logic to image_util
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* fixing circleci code formatting errors
* update
* add idefics3-based tests
* add keyword to all
* add PreTrainedModel
* updating video loading logic
* working inference
* updates for PR comments
* updates for PR comments
* moving SmolVLMPretrainedModel higher to fix import error
* CI test pass
* CI test pass
* removing lambda
* CI test pass
* CI test pass
* CI test pass
* CI test pass
* CI test pass
* CI test pass
* processor tests
* add example in docs
* typo
* fix copies
* skip compile tests - sdpa for VisionTransformer
* fix init
* raise import error for num2words
* update doc for FA2
* more doc fix
* CI
* updates for PR comments
* Update docs/source/en/model_doc/smolvlm.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/model_doc/smolvlm.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/model_doc/smolvlm.md
Co-authored-by: Joshua Lochner <admin@xenova.com>
* Update docs/source/en/model_doc/smolvlm.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/model_doc/smolvlm.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fixing processor -- tokenizer not defined properly, (gpt2 tokenizer), and does not have the attributes of fake image token, etc
* adding smolvlm to VQA models
* removing vqa auto class
* Update src/transformers/models/smolvlm/processing_smolvlm.py
Co-authored-by: Joshua Lochner <admin@xenova.com>
* removing smolvlmvisiontransformer from index.md
* my bad, video processing had typos
* fixing docs
* renaming params in SmolVLMModel.inputs_merger
* removing un-needed dtype/device in model forward
* ruff for CI
* update docs
* Update docs/source/en/model_doc/smolvlm.md
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* return cache position
* return cache position
* return cache also in modular
* needed to run modular again
* fix training tests
* push vectorized inputs merger
* format
* format
* reduce number of mappings
* addressing PR comments
* happy CI, happy me :)
* skip non-nested images
* adjust integration test for smaller GPUs
* format
* fix kwargs in chat template apply
* skip this for now
---------
Co-authored-by: raushan <raushan@huggingface.co>
Co-authored-by: Pablo <pablo.montalvo.leroux@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Joshua Lochner <admin@xenova.com>
2025-02-20 15:00:26 +01:00
Yih-Dar
f2ab182dca
Ignore conversion files in test fetcher ( #36251 )
...
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-02-20 13:32:02 +01:00
Yih-Dar
e8531a0e33
Fix broken CI on release branch due to missing conversion files ( #36275 )
...
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-02-20 13:22:10 +01:00
Ilyas Moutawwakil
5e2183f344
Make cache traceable ( #35873 )
...
simply make cache traceable
2025-02-20 09:59:25 +01:00
Marc Sun
31bb662db1
Fix callback handler reference ( #36250 )
...
* fix reference
* style
2025-02-19 18:17:33 +01:00
hyjbrave
78d6484675
docs: Update README_zh-hans.md ( #36269 )
...
Update README_zh-hans.md
docs: Fix awkward sentence in README
2025-02-19 09:04:46 -08:00
Mohamed Mekkouri
e5cea20743
Add Example for Custom quantization ( #36286 )
...
* add example
* rename
2025-02-19 17:09:23 +01:00
Joao Gante
e3d99ec2f5
[tests] make test_from_pretrained_low_cpu_mem_usage_equal less flaky ( #36255 )
...
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-02-19 15:14:02 +00:00
Joao Gante
99adc74462
[tests] remove flax-pt equivalence and cross tests ( #36283 )
2025-02-19 15:13:27 +00:00
Joao Gante
fa8cdccd91
[tests] deflake dither test ( #36284 )
2025-02-19 15:13:10 +00:00
Cyril Vallez
60226c6ff3
TP initialization module-by-module ( #35996 )
...
* module-by-module loading!
* Update modeling_utils.py
* style and comments
* Update modeling_utils.py
* Update modeling_utils.py
* Update test
* Update modeling_utils.py
* Update modeling_utils.py
* Update test_tp.py
* Update test_tp.py
* Update modeling_utils.py
* re-trigger CIs
* re-trigger CIs
2025-02-19 14:04:57 +01:00
Joao Gante
0863eef248
[tests] remove pt_tf equivalence tests ( #36253 )
2025-02-19 11:55:11 +00:00
Karel Vesely
1a81d774b1
Add dithering to the Speech2TextFeatureExtractor API. ( #34638 )
...
* Add dithering to the `Speech2TextFeatureExtractor` API.
- in kaldi : 4a8b7f6732/src/feat/feature-window.cc (L145)
- with dithering and no seed, the features become non-deterministic, because
small Gaussian noise is added to the audio (i.e. two runs produce slightly
different outputs)
* update the PR
- add dithering also for WhisperFeatureExtractor
- not adding to Wav2Vec2FeatureExtractor (no FBANK computation)
* add unit-tests for dithering, fix docstrings
* ruff
* utils/check_copies.py --fix_and_overwrite
* update code, add seed to unit-test
* adding explanation of dithering
2025-02-19 11:50:02 +01:00
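Dithering as described here is just adding tiny Gaussian noise to the waveform before FBANK computation, and seeding the noise source restores determinism; a stdlib-only sketch (function and parameter names are mine, not the feature-extractor API):

```python
import random

def dither(waveform, strength=1e-5, seed=None):
    """Add small Gaussian noise to an audio waveform, kaldi-style;
    passing a fixed seed makes the result deterministic again."""
    rng = random.Random(seed)
    return [s + strength * rng.gauss(0.0, 1.0) for s in waveform]

wave = [0.0, 0.1, -0.2]
a = dither(wave, seed=0)
b = dither(wave, seed=0)
print(a == b)     # True: seeding restores determinism
print(a == wave)  # False: noise was added
```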
Yoni Gozlan
9f51dc2535
Add support for post-processing kwargs in image-text-to-text pipeline ( #35374 )
...
* fix error and improve pipeline
* add processing_kwargs to apply_chat_template
* change default post_process kwarg to args
* Fix slow tests
* fix copies
2025-02-18 17:43:36 -05:00
Yoni Gozlan
9b479a245b
Uniformize LlavaNextVideoProcessor kwargs ( #35613 )
...
* Uniformize processor kwargs and add tests
* add videos_kwargs tests
* fix copies
* fix llava_next_video chat template tests
* remove unnecessary default kwargs
2025-02-18 14:13:51 -05:00
Ardalan
8ee50537fe
Qwen2VL fix cos,sin dtypes to float when used with deepspeed ( #36188 )
...
* fix dtype of cos,sin when used with deepspeed
* move sin,cos casting within flash attention functions
* fix cos,sin float casting in modular
---------
Co-authored-by: ardalan.mehrani <ardalan.mehrani@ardalanmehranis-MacBook-Pro.local>
Co-authored-by: ardalan.mehrani <ardalan.mehrani@bytedance.com>
2025-02-18 19:18:29 +01:00
Parteek
8eaae6bee9
Added Support for Custom Quantization ( #35915 )
...
* Added Support for Custom Quantization
* Update code
* code reformatted
* Updated Changes
* Updated Changes
---------
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
2025-02-18 16:14:19 +01:00
ivarflakstad
07182b2e10
GitModelIntegrationTest - flatten the expected slice tensor ( #36260 )
...
Flatten the expected slice tensor
2025-02-18 16:04:19 +01:00
Damiano Amatruda
4d2de5f63c
Fix XGLM loss computation (PyTorch and TensorFlow) ( #35878 )
...
* Fix XGLM loss computation (PyTorch and TensorFlow)
* Update expected output string in XGLM sample test
This updates the expected output string of test_xglm_sample for torch
2.0 to the correct one and removes the one for torch 1.13.1 + cu116
(transformers moved to torch 2.0 with PR #35358 ).
* Update expected output IDs in XGLM generation test
2025-02-18 15:37:48 +01:00
Mehant Kammakomati
c3ba53303b
feat: add support for tensor parallel training workflow with accelerate ( #34194 )
...
* feat: add support for tensor parallel flow using accelerate
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* fix: add tp degree to env variable
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* fix: add version check for accelerate to allow TP
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* docs: tensor parallelism
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* nit: rename plugin name
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* fix: guard accelerate version before allow tp
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* docs: add more docs and updates related to TP
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
---------
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-02-18 14:05:46 +01:00
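The "guard accelerate version before allow tp" bullet amounts to gating the feature on the installed accelerate version; a minimal sketch with a made-up minimum version and a parser that assumes a plain `X.Y.Z` string (the real check should use the project's actual requirement):

```python
MIN_ACCELERATE_FOR_TP = (0, 33, 0)  # hypothetical minimum; check the real requirement

def tp_supported(accelerate_version: str) -> bool:
    """Return True if the given accelerate version is new enough for the
    tensor-parallel workflow (assumes a plain 'X.Y.Z' version string)."""
    parts = tuple(int(p) for p in accelerate_version.split(".")[:3])
    return parts >= MIN_ACCELERATE_FOR_TP

print(tp_supported("0.34.2"))  # True
print(tp_supported("0.26.0"))  # False
```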
Raushan Turganbay
e6cc410d5b
Remove flakiness in VLMs ( #36242 )
...
* fix
* nit
* no logits processor needed
* two more tests on assisted decoding
2025-02-18 11:41:07 +01:00
andrewor14
fdcfdbfd22
Fix TorchAoConfig not JSON serializable ( #36206 )
...
**Summary:** TorchAoConfig optionally contains a
`torchao.dtypes.Layout` object which is a dataclass and not
JSON serializable, and so the following fails:
```
import json
from torchao.dtypes import TensorCoreTiledLayout
from transformers import TorchAoConfig
config = TorchAoConfig("int4_weight_only", layout=TensorCoreTiledLayout())
config.to_json_string()
json.dumps(config.to_dict())
```
This also causes `quantized_model.save_pretrained(...)` to
fail because the first step of this call is to JSON serialize
the config. Fixes https://github.com/pytorch/ao/issues/1704 .
**Test Plan:**
python tests/quantization/torchao_integration/test_torchao.py -k test_json_serializable
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-02-18 11:05:42 +01:00
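The root cause above is generic: `json.dumps` has no encoder for dataclass instances, and the fix is to flatten them to plain dicts first. A self-contained illustration with a stand-in layout class (not the real torchao.dtypes.Layout):

```python
import json
from dataclasses import dataclass, asdict, is_dataclass

@dataclass
class FakeLayout:  # stand-in for torchao.dtypes.TensorCoreTiledLayout
    inner_k_tiles: int = 8

config = {"quant_type": "int4_weight_only", "layout": FakeLayout()}

try:
    json.dumps(config)  # fails: dataclass instances are not JSON serializable
except TypeError:
    print("raises TypeError")

# Fix: convert dataclass fields to plain dicts before serializing
serializable = {k: asdict(v) if is_dataclass(v) else v for k, v in config.items()}
print(json.dumps(serializable))
```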
Yih-Dar
626666c444
Au revoir flaky test_fast_is_faster_than_slow ( #36240 )
...
* fix
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-02-17 18:30:07 +01:00
Joao Gante
429f1a682d
[tests] remove test_export_to_onnx ( #36241 )
2025-02-17 16:52:44 +00:00
Marc Sun
dae8708c36
Add compressed tensor in quant dockerfile ( #36239 )
...
add compressed_tensors in the dockerfile
2025-02-17 17:48:57 +01:00
dependabot[bot]
3e970dbbf1
Bump transformers from 4.38.0 to 4.48.0 in /examples/research_projects/codeparrot/examples ( #36237 )
...
Bump transformers in /examples/research_projects/codeparrot/examples
Bumps [transformers](https://github.com/huggingface/transformers ) from 4.38.0 to 4.48.0.
- [Release notes](https://github.com/huggingface/transformers/releases )
- [Commits](https://github.com/huggingface/transformers/compare/v4.38.0...v4.48.0 )
---
updated-dependencies:
- dependency-name: transformers
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-17 16:28:43 +00:00
eustlb
77aa9fc076
[generate] Fix encoder decoder models attention mask ( #36018 )
2025-02-17 15:42:28 +00:00