Arthur
d04c2b1ab6
fix mistral now
2025-07-01 15:56:37 +02:00
Arthur
075bd0c2f3
fix csm and mistral
2025-07-01 15:53:44 +02:00
Arthur
5e5ae84a05
fix csm now
2025-07-01 15:41:52 +02:00
Arthur
aaae861fc8
fix another one
2025-07-01 15:37:33 +02:00
Arthur
9fa5f266a1
fix small lm3
2025-07-01 15:32:17 +02:00
Arthur
6a132a0799
finish fixing gemma3n
2025-07-01 15:22:52 +02:00
Arthur
f7a1f0da3d
some fixes, loss_kwargs should never have been
2025-07-01 15:19:32 +02:00
Arthur
0b119ffb1f
what a nightmare
2025-07-01 15:06:54 +02:00
Arthur
3ac6c52f34
move the fix a bit
2025-07-01 15:00:38 +02:00
Arthur
00afce9837
fix emu3
2025-07-01 14:58:12 +02:00
Arthur
10fb88ae84
fix emu3
2025-07-01 14:53:05 +02:00
Arthur
209d5022ac
update
2025-07-01 14:47:38 +02:00
Arthur
da50ccc549
fix conflicts
2025-07-01 14:42:33 +02:00
Arthur
2748b99388
update
2025-07-01 14:39:58 +02:00
Arthur
22423738c4
update
2025-07-01 14:27:21 +02:00
Arthur
15a8ff4fe9
update
2025-07-01 14:20:56 +02:00
Arthur
a13a98c6da
more fix
2025-07-01 14:19:38 +02:00
Arthur
7a0512a1f5
fixes
2025-07-01 14:16:22 +02:00
StevenBucaille
1283877571
[superglue] fix wrong concatenation which made batching results wrong ( #38850 )
2025-07-01 12:14:44 +00:00
Raushan Turganbay
f8b88866f5
[VLMs] support passing embeds along with pixels ( #38467 )
...
* VLMs can work with embeds now
* update more models
* fix tests
* fix copies
* fixup
* fix
* style
* unskip tests
* fix copies
* fix tests
* style
* omni modality models
* qwen models had extra indentation
* fix some other tests
* fix copies
* fix test last time
* unrelated changes revert
* we can't rely only on embeds
* delete file
* de-flake mistral3
* fix qwen models
* fix style
* fix tests
* fix copies
* deflake the test
* modular reverted by fixes, fix again
* flaky test, overwritten
* fix copies
* style
2025-07-01 11:33:20 +00:00
Ayush Singh
20901f1d68
[typing] LlamaAttention return typehint ( #38998 )
...
* helo llama
* helo llama
* helo llama
* apply modular
* fix dia
---------
Co-authored-by: qubvel <qubvel@gmail.com>
2025-07-01 11:29:52 +01:00
Raushan Turganbay
7a25f8dfdb
[qwen2-vl] fix FA2 inference ( #39121 )
...
* fix FA2
* update is causal flag and remove mask for FA2
* update for FA2 with varlen path
* how the tests were passing with different devices?
* add comment and ref to the PR
* move mask preparation to base pretrained model
* seq len is the first dim, not second
* fix copies to fix GLM4V
2025-07-01 10:18:37 +00:00
Mehant Kammakomati
def9663239
feat: support indivisible shards for TP model loading and TPlizing. ( #37220 )
...
* feat: support uneven loading and sharding
resolve merge conflicts
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* fix: allow for empty tensor computations
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* test: add llama1b test case
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* due to q_proj being colwise it has to be a multiple of 2
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
---------
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-07-01 10:03:22 +00:00
jiqing-feng
06c4a4d499
fix caching_allocator_warmup with tie weights ( #39070 )
...
* fix caching_allocator_warmup with tie weights
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix comment
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-07-01 11:32:20 +02:00
Raushan Turganbay
e435574721
🚨 Don't use cache in non-generative models ( #38751 )
...
* deprecate for 1 version
* style
* fix some tests
* fix esm
* skip for now, GC requires positional args but we have keyword args
* remove transpose for scores in modified models only
* skip fx trace tests
2025-07-01 09:08:21 +00:00
Arthur
3c0c56b84d
test this
2025-07-01 10:58:16 +02:00
Arthur
780141ca52
same
2025-07-01 10:56:29 +02:00
Arthur
01d4da8510
support cross attention edge case
2025-07-01 10:56:06 +02:00
Cyril Vallez
dbc98328da
Several fixes for Gemma3n ( #39135 )
...
* remove the skips
* fix the epsilon to a small value (does not make sense otherwise)
* safeguard
* overload test_eager_matches_sdpa
* Update test_modeling_common.py
* skip appropriate tests
* correct no_split_layer
* fix all devices issue
* fix backward
* fix
2025-07-01 10:34:53 +02:00
BUI Van Tuan
d53518c5f2
Fix key mapping for VLMs ( #39029 )
...
* fix key mapping for VLMs
* use __mro__ instead
* update key mapping in save_pretrained
2025-07-01 09:47:53 +02:00
Arthur
8c96926f60
Merge branch 'main' of github.com:huggingface/transformers into clean-llamas
2025-07-01 08:20:39 +02:00
eustlb
3457e8e73e
[Whisper] update token timestamps tests ( #39126 )
...
* fixes
* update comment
* update for A10
* all a10
* all a10
* all a10
* all a10
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-06-30 21:55:36 +02:00
Drew Ross
fe35eca7bd
Update BigBirdPegasus model card ( #39104 )
...
* Update bigbird_pegasus.md
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-30 10:42:56 -07:00
Arthur
063e510dc8
propagate
2025-06-30 18:02:53 +02:00
Arthur
1303470aa4
remove output attentions
2025-06-30 18:01:58 +02:00
Arthur
e63ef640ea
propagate gemma?
2025-06-30 18:01:02 +02:00
Arthur
c7d195feee
update
2025-06-30 17:58:35 +02:00
Yao Matrix
29a3f5ed8c
switch default xpu tp backend to pytorch built-in XCCL from pytorch 2.8 ( #39024 )
...
* switch default xpu tp backend to pytorch built-in XCCL from pytorch 2.8
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* Update docs/source/en/perf_infer_gpu_multi.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update perf_infer_gpu_multi.md
* Update perf_infer_gpu_multi.md
* Update perf_infer_gpu_multi.md
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-30 08:54:05 -07:00
Vladimir Gutuev
9e0c865b8b
docs: correct two typos in awesome-transformers.md ( #39102 )
...
* docs(awesome-projects): fix typo “Itt leverages” → “It leverages” (#39101 )
closes #39101
* docs(awesome-projects): fix grammar “We provides” → “We provide” (#39101 )
closes #39101
2025-06-30 08:53:43 -07:00
Arthur
7266aafab7
remove the **flash stuff in favor of normal kwargs
2025-06-30 17:10:56 +02:00
jiqing-feng
03db2700ab
Enable XPU doc ( #38929 )
...
* fix example with dataset
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix device type
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* revert torchao change
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* revert torchao change
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update xpu torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update chat_templating_multimodal.md
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* use full name for int8
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* revert int8 title
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
2025-06-30 07:56:55 -07:00
Arthur
3fb6b710f2
update
2025-06-30 16:49:11 +02:00
Joao Gante
ea0ea392e5
Fix chat ( #39128 )
2025-06-30 13:47:48 +00:00
Arthur
a74974d989
update
2025-06-30 15:44:17 +02:00
Lysandre Debut
ed36f8490e
Licenses ( #39127 )
...
* Licenses
* Licenses
2025-06-30 15:25:36 +02:00
Arthur
e7705c981a
update models based on qwen2
2025-06-30 15:25:03 +02:00
Arthur
113219becd
update modularqwen2
2025-06-30 15:22:39 +02:00
Lysandre Debut
e8f90b5397
Split transformers chat and transformers serve ( #38443 )
...
* Next token
* Split chat and serve
* Support both generation methods
* Style
* Generation Config
* temp
* temp
* Finalize serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Finalize chat.py
* Update src/transformers/commands/serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Lucain's comments
Co-authored-by: Lucain <lucain@huggingface.co>
* Update
* Last comments on PR
* Better error handling
* Better error handling
* CI errors
* CI errors
* Add tests
* Fix tests
* Fix tests
* [chat] Split chat/serve (built on top of lysandre's PR) (#39031 )
* Next token
* Split chat and serve
* Support both generation methods
* Style
* Generation Config
* temp
* temp
* Finalize serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Finalize chat.py
* Update src/transformers/commands/serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Lucain's comments
Co-authored-by: Lucain <lucain@huggingface.co>
* Update
* Last comments on PR
* Better error handling
* Better error handling
* CI errors
* CI errors
* Add tests
* Fix tests
* Fix tests
* streaming tool call
* abstract tool state; set tool start as eos
* todos
* server working on models without tools
* rm chat's deprecated flags
* chat defaults
* kv cache persists across calls
* add server docs
* link
* Update src/transformers/commands/serving.py
* Apply suggestions from code review
* i love merge conflicts
* solve multi turn with tiny-agents
* On the fly switching of the models
* Remove required positional arg
---------
Co-authored-by: Lysandre <hi@lysand.re>
Co-authored-by: célina <hanouticelina@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
* Protect names
* Fix tests
---------
Co-authored-by: célina <hanouticelina@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2025-06-30 15:10:53 +02:00
Arthur
3caf7d76a0
fix other models as well!
2025-06-30 14:55:01 +02:00
Arthur
8c66f4d0bb
this fixes more tests
2025-06-30 14:50:34 +02:00