Kingsley
454b4a39f4
update video token replacement
2025-07-02 11:47:06 +00:00
Kingsley
5df38281d3
change get_video_features
2025-07-02 10:55:02 +00:00
Kingsley
b729471763
update modular
2025-07-02 10:45:41 +00:00
Kingsley
807af61a1d
Merge branch 'huggingface:main' into glm4v
2025-07-02 18:25:54 +08:00
Raushan Turganbay
4d5822e65d
[smolvlm] fix video inference ( #39147 )
...
* fix smolvlm
* better do as before, set sampling params in overwritten `apply_chat_template`
* style
* update with `setdefault`
2025-07-02 12:05:10 +02:00
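The `setdefault` approach mentioned in the smolvlm fix above keeps any sampling parameters the caller passes while still supplying defaults. A minimal sketch of the pattern, with a hypothetical processor class and made-up default values (not the actual smolvlm implementation):

```python
class BaseProcessor:
    def apply_chat_template(self, conversation, **kwargs):
        # Stand-in for the real processor method; just echoes what it received.
        return conversation, kwargs


class VideoProcessor(BaseProcessor):
    def apply_chat_template(self, conversation, **kwargs):
        # Only fill in sampling defaults the caller did not set explicitly.
        kwargs.setdefault("num_frames", 8)   # assumed default, for illustration
        kwargs.setdefault("video_fps", 1)    # assumed default, for illustration
        return super().apply_chat_template(conversation, **kwargs)


conversation = [{"role": "user", "content": "Describe the video."}]
print(VideoProcessor().apply_chat_template(conversation, num_frames=16))
# num_frames stays 16; video_fps falls back to the default of 1.
```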
वेदांत
9b2f5b66d8
fix default value of config to match checkpoints in LLaVa-OV models ( #39163 )
2025-07-02 09:45:50 +00:00
Chong You
e8e0c76162
Add activation sparsity reference in gemma3n doc ( #39160 )
...
Add activation sparsity reference in the description of gemma3n
2025-07-02 04:11:03 +02:00
Yih-Dar
8e87adc45f
fix llama tests ( #39161 )
...
* fix
* fix
* fix
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-07-01 23:27:22 +02:00
Yih-Dar
4c1715b610
Update expected values (after switching to A10) ( #39157 )
...
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* empty
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-07-01 20:54:31 +02:00
Yih-Dar
ab59cc27fe
Suggest jobs to use in run-slow ( #39100 )
...
* pr
* pr
* pr
* pr
* pr
* pr
* pr
* pr
* pr
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-07-01 20:19:06 +02:00
jiqing-feng
db2f535443
update bnb ground truth ( #39117 )
...
* update bnb result
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* set seed to avoid sampling different results
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix int8 tests
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix typo
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* add comments
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-07-01 20:06:37 +02:00
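The "set seed to avoid sampling different results" step above is the standard way to make sampling-based generation reproducible in tests. A short sketch using `transformers.set_seed`; the model and prompt are placeholders, not the actual bnb test:

```python
from transformers import pipeline, set_seed

set_seed(42)  # fixes Python, NumPy and torch RNGs so sampled tokens are reproducible

generator = pipeline("text-generation", model="sshleifer/tiny-gpt2")  # tiny placeholder model
out = generator("Hello, my name is", do_sample=True, max_new_tokens=10)
print(out[0]["generated_text"])
```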
ybkurt
260846efad
fix: remove undefined variable ( #39146 )
2025-07-01 19:10:29 +02:00
rasmi
cdfe49a4d0
Change @lru_cache() to @lru_cache to match styles from #38883. ( #39093 )
...
Match styles in #38883
2025-07-01 18:29:16 +02:00
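The style change above is purely syntactic: since Python 3.8, `functools.lru_cache` can be applied directly as a decorator when no arguments such as `maxsize` are passed. A small sketch of the two equivalent spellings:

```python
from functools import lru_cache


@lru_cache()          # old spelling: explicit call with default arguments
def cube(x: int) -> int:
    return x ** 3


@lru_cache            # new spelling adopted across the codebase after #38883
def square(x: int) -> int:
    return x ** 2


print(square(4), cube(4))     # 16 64
print(square.cache_info())    # hit/miss statistics provided by the decorator
```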
DavidS2106
f46798193e
Fix: Ensure wandb logs config in offline mode ( #38992 )
...
* Fix: Ensure wandb logs config in offline mode
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
2025-07-01 16:17:58 +00:00
Yih-Dar
fe838d6631
Fix missing fsdp & trainer jobs in daily CI ( #39153 )
...
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-07-01 18:10:30 +02:00
StevenBucaille
1283877571
[superglue] fix wrong concatenation which made batching results wrong ( #38850 )
2025-07-01 12:14:44 +00:00
Raushan Turganbay
f8b88866f5
[VLMs] support passing embeds along with pixels ( #38467 )
...
* VLMs can work with embeds now
* update more models
* fix tests
* fix copies
* fixup
* fix
* style
* unskip tests
* fix copies
* fix tests
* style
* omni modality models
* qwen models had extra indentation
* fix some other tests
* fix copies
* fix test last time
* unrelated changes revert
* we can't rely only on embeds
* delete file
* de-flake mistral3
* fix qwen models
* fix style
* fix tests
* fix copies
* deflake the test
* modular reverted by fixes, fix again
* flaky test, overwritten
* fix copies
* style
2025-07-01 11:33:20 +00:00
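The point of the change above is that a VLM's `generate` can now be driven from precomputed text embeddings while the image still goes in as `pixel_values`. A hedged sketch of the call pattern; the checkpoint, auto class, and prompt format are illustrative and per-model details may differ:

```python
import requests
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder VLM checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt")

# Precompute the text-token embeddings, then hand embeds and pixels to generate together.
inputs_embeds = model.get_input_embeddings()(inputs["input_ids"])
output_ids = model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs["attention_mask"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=20,
)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```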
Ayush Singh
20901f1d68
[typing] LlamaAttention return typehint ( #38998 )
...
* helo llama
* helo llama
* helo llama
* apply modular
* fix dia
---------
Co-authored-by: qubvel <qubvel@gmail.com>
2025-07-01 11:29:52 +01:00
Raushan Turganbay
7a25f8dfdb
[qwen2-vl] fix FA2 inference ( #39121 )
...
* fix FA2
* update is causal flag and remove mask for FA2
* update for FA2 with varlen path
* how the tests were passing with different devices?
* add comment and ref to the PR
* move mask preparation to base pretrained model
* seq len is the first dim, not second
* fix copies to fix GLM4V
2025-07-01 10:18:37 +00:00
Mehant Kammakomati
def9663239
feat: support indivisible shards for TP model loading and TPlizing. ( #37220 )
...
* feat: support uneven loading and sharding
resolve merge conflicts
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* fix: allow for empty tensor computations
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* test: add llama1b test case
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* due to q_proj being colwise it has to be a multiple of 2
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
* refactor: use slice API
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
---------
Signed-off-by: Mehant Kammakomati <mehant.kammakomati2@ibm.com>
2025-07-01 10:03:22 +00:00
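The "uneven loading and sharding" support above amounts to letting the last rank take a smaller (possibly empty) slice when a dimension is not divisible by the tensor-parallel world size. A standalone sketch of that slicing arithmetic, illustrative only and not the actual loading code:

```python
import torch

def shard_rows(weight: torch.Tensor, rank: int, world_size: int) -> torch.Tensor:
    """Return this rank's row shard, allowing the dimension to be indivisible."""
    dim = weight.shape[0]
    chunk = -(-dim // world_size)        # ceil division: size of a "full" shard
    start = rank * chunk
    end = min(start + chunk, dim)        # last rank may get a smaller or empty slice
    return weight[start:end]

w = torch.arange(10 * 4).reshape(10, 4)  # 10 rows, world size 4 -> shards of 3, 3, 3, 1
for r in range(4):
    print(r, shard_rows(w, r, 4).shape)
```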
jiqing-feng
06c4a4d499
fix caching_allocator_warmup with tie weights ( #39070 )
...
* fix caching_allocator_warmup with tie weights
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix comment
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-07-01 11:32:20 +02:00
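The fix above concerns the memory-warmup pass counting tied weights (for example, input embeddings tied to the LM head) twice. The underlying idea can be illustrated by deduplicating tensors on their storage pointer before summing sizes; this is a sketch of the idea, not the actual `caching_allocator_warmup` code:

```python
import torch
from torch import nn

class TiedLM(nn.Module):
    def __init__(self, vocab=100, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lm_head = nn.Linear(hidden, vocab, bias=False)
        self.lm_head.weight = self.embed.weight  # tie weights: same underlying storage

def unique_tensor_bytes(model: nn.Module) -> int:
    seen, total = set(), 0
    for tensor in model.state_dict().values():
        key = tensor.data_ptr()            # tied entries share the same data pointer
        if key not in seen:
            seen.add(key)
            total += tensor.numel() * tensor.element_size()
    return total

model = TiedLM()
naive = sum(t.numel() * t.element_size() for t in model.state_dict().values())
print(naive, unique_tensor_bytes(model))   # the naive sum counts the tied weight twice
```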
Raushan Turganbay
e435574721
🚨 Don't use cache in non-generative models ( #38751 )
...
* deprecate for 1 version
* style
* fix some tests
* fix esm
* skip for now, GC requires positional args but we have keyword args
* remove transpose for scores in modified models only
* skip fx trace tests
2025-07-01 09:08:21 +00:00
Cyril Vallez
dbc98328da
Several fixes for Gemma3n ( #39135 )
...
* remove the skips
* fix the epsilon to a small value (does not make sense otherwise)
* safeguard
* overload test_eager_matches_sdpa
* Update test_modeling_common.py
* skip appropriate tests
* correct no_split_layer
* fix all devices issue
* fix backward
* fix
2025-07-01 10:34:53 +02:00
BUI Van Tuan
d53518c5f2
Fix key mapping for VLMs ( #39029 )
...
* fix key mapping for VLMs
* use __mro__ instead
* update key mapping in save_pretrained
2025-07-01 09:47:53 +02:00
eustlb
3457e8e73e
[Whisper] update token timestamps tests ( #39126 )
...
* fixes
* update comment
* update for A10
* all a10
* all a10
* all a10
* all a10
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-06-30 21:55:36 +02:00
Kingsley
adc82c85c2
changes for video
2025-06-30 17:47:41 +00:00
Drew Ross
fe35eca7bd
Update BigBirdPegasus model card ( #39104 )
...
* Update bigbird_pegasus.md
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-30 10:42:56 -07:00
Yao Matrix
29a3f5ed8c
switch default xpu tp backend to pytorch built-in XCCL from pytorch 2.8 ( #39024 )
...
* switch default xpu tp backend to pytorch built-in XCCL from pytorch 2.8
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* Update docs/source/en/perf_infer_gpu_multi.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update perf_infer_gpu_multi.md
* Update perf_infer_gpu_multi.md
* Update perf_infer_gpu_multi.md
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-30 08:54:05 -07:00
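The practical effect of the backend switch above is that, on Intel XPU with PyTorch 2.8 or newer, tensor parallelism can use the built-in XCCL collective backend. A minimal sketch of initializing such a process group; it assumes an XPU build of PyTorch 2.8+ and the usual torchrun-provided environment variables:

```python
import torch
import torch.distributed as dist

# Launched with e.g.: torchrun --nproc_per_node=2 this_script.py
if torch.xpu.is_available():
    backend = "xccl"   # built-in XPU collective backend from PyTorch 2.8 onwards
else:
    backend = "gloo"   # CPU fallback so the sketch still runs elsewhere

dist.init_process_group(backend=backend)
rank = dist.get_rank()
t = torch.ones(4, device=f"xpu:{rank}" if backend == "xccl" else "cpu")
dist.all_reduce(t)     # sums the tensor across all ranks
print(rank, t)
dist.destroy_process_group()
```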
Vladimir Gutuev
9e0c865b8b
docs: correct two typos in awesome-transformers.md ( #39102 )
...
* docs(awesome-projects): fix typo “Itt leverages” → “It leverages” (#39101 )
closes #39101
* docs(awesome-projects): fix grammar “We provides” → “We provide” (#39101 )
closes #39101
2025-06-30 08:53:43 -07:00
jiqing-feng
03db2700ab
Enable XPU doc ( #38929 )
...
* fix example with dataset
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix device type
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* revert torchao change
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* revert torchao change
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update xpu torchao doc
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* update chat_templating_multimodal.md
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* use full name for int8
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* revert int8 title
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
2025-06-30 07:56:55 -07:00
Joao Gante
ea0ea392e5
Fix chat ( #39128 )
2025-06-30 13:47:48 +00:00
Lysandre Debut
ed36f8490e
Licenses ( #39127 )
...
* Licenses
* Licenses
2025-06-30 15:25:36 +02:00
Lysandre Debut
e8f90b5397
Split transformers chat and transformers serve ( #38443 )
...
* Next token
* Split chat and serve
* Support both generation methods
* Style
* Generation Config
* temp
* temp
* Finalize serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Finalize chat.py
* Update src/transformers/commands/serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Lucain's comments
Co-authored-by: Lucain <lucain@huggingface.co>
* Update
* Last comments on PR
* Better error handling
* Better error handling
* CI errors
* CI errors
* Add tests
* Fix tests
* Fix tests
* [chat] Split chat/serve (built on top of lysandre's PR) (#39031 )
* Next token
* Split chat and serve
* Support both generation methods
* Style
* Generation Config
* temp
* temp
* Finalize serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Finalize chat.py
* Update src/transformers/commands/serving.py
Co-authored-by: célina <hanouticelina@gmail.com>
* Lucain's comments
Co-authored-by: Lucain <lucain@huggingface.co>
* Update
* Last comments on PR
* Better error handling
* Better error handling
* CI errors
* CI errors
* Add tests
* Fix tests
* Fix tests
* streaming tool call
* abstract tool state; set tool start as eos
* todos
* server working on models without tools
* rm chat's deprecated flags
* chat defaults
* kv cache persists across calls
* add server docs
* link
* Update src/transformers/commands/serving.py
* Apply suggestions from code review
* i love merge conflicts
* solve multi turn with tiny-agents
* On the fly switching of the models
* Remove required positional arg
---------
Co-authored-by: Lysandre <hi@lysand.re>
Co-authored-by: célina <hanouticelina@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
* Protect names
* Fix tests
---------
Co-authored-by: célina <hanouticelina@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2025-06-30 15:10:53 +02:00
Yih-Dar
539c6c2fa8
All CI jobs with A10 ( #39119 )
...
all a10
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-06-30 14:23:27 +02:00
Ryan Mullins
ed9f252608
docs: Gemma 3n audio encoder ( #39087 )
...
Updating Gemma 3n docs and docstrings to clarify the relationship
between the newly trained audio encoder used in Gemma 3n and the USM
model from the original paper.
2025-06-30 14:10:51 +02:00
Yuxuan Zhang
4a79bf947d
Fix some bugs for finetuning and batch inference for GLM-4.1V ( #39090 )
...
* update
* 1
2025-06-30 12:16:22 +02:00
Yao Matrix
2100ee6545
fix UT failures on XPU w/ stock PyTorch 2.7 & 2.8 ( #39116 )
...
* fix UT failures on XPU w/ stock PyTorch 2.7 & 2.8
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* zamba2
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* xx
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* internvl
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* tp cases
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-06-30 11:49:03 +02:00
Yih-Dar
ccf2ca162e
skip some test_sdpa_can_dispatch_on_flash ( #39092 )
...
* fix
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-06-27 23:08:14 +02:00
st81
a11f692895
Fixes the failing test test_is_split_into_words in test_pipelines_token_classification.py ( #39079 )
...
* Fix test pipelines token classification for is_split_into_words
* Fix incorrect import format
2025-06-27 19:25:32 +01:00
Sandeep Yadav
18143c76bf
Sandeepyadav1478/2025 06 19 deberta v2 model card update ( #38895 )
...
* [docs]: update deberta-v2.md model card
* chore: req updates
* chore: address code review feedback and update docs
* chore: review feedback and updates
* chore: model selection updates
* chores: quantizations review updates
2025-06-27 10:35:30 -07:00
Steven Liu
02a769b058
[fix] Add FastSpeech2ConformerWithHifiGan ( #38207 )
...
* add to mapping
* oops
* oops
* add to config_mapping_names
* revert
* fix?
* config-mapping-names
* fix?
* fix?
2025-06-27 09:38:21 -07:00
Benjamin Bossan
c2dc72bb5f
TST Fix PEFT integration test bitsandbytes config ( #39082 )
...
TST Fix PEFT integration test bitsandbytes config
The PEFT integration tests still used load_in_{4,8}_bit, which is
deprecated, moving to properly setting BitsAndBytesConfig. For 4bit,
also ensure that nf4 is being used to prevent
> RuntimeError: quant_type must be nf4 on CPU, got fp4
2025-06-27 18:33:11 +02:00
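The commit body above describes the now-preferred way to request quantized loading. A short sketch of the replacement, with a placeholder model id; the key detail is `bnb_4bit_quant_type="nf4"`, matching the error message quoted above:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Deprecated style (what the tests used before):
#   AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",   # avoids "quant_type must be nf4 on CPU, got fp4"
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",         # placeholder checkpoint for illustration
    quantization_config=bnb_config,
)
```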
Matej Sirovatka
c8064bea9a
Fix: unprotected import of tp plugin ( #39083 )
2025-06-27 17:28:05 +02:00
farrosalferro
dd7dc4a4a2
Add Fast Image Processor for Chameleon ( #37140 )
...
* Add Fast Image Processor for Chameleon
* add warning to resize and move blend_rgba to convert_to_rgb
* Remove unrelated files
* Update image_processing_chameleon_fast to use auto_docstring
* fix equivalence test
---------
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
2025-06-27 15:26:57 +00:00
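Fast image processors like the one added above are selected through the `use_fast` flag on the auto class. A quick sketch; the Chameleon checkpoint may require accepting the model licence on the Hub, and a dummy image stands in for real data:

```python
from PIL import Image
from transformers import AutoImageProcessor

# use_fast=True picks the torchvision-backed fast processor when one is available.
processor = AutoImageProcessor.from_pretrained("facebook/chameleon-7b", use_fast=True)

image = Image.new("RGB", (512, 512), color="white")   # dummy image for illustration
batch = processor(images=image, return_tensors="pt")
print(batch["pixel_values"].shape)
```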
Yih-Dar
6d773fc3bc
fix dots1 tests ( #39088 )
...
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-06-27 16:54:11 +02:00
Tijana Vukovic
c8764ab935
guard torch distributed check ( #39057 )
...
* guard torch distributed check
* Update src/transformers/pipelines/base.py
---------
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
2025-06-27 14:49:47 +00:00
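The guard referenced above is the usual pattern for code that must also work on PyTorch builds without distributed support and outside `torchrun`. A minimal sketch:

```python
import torch.distributed as dist

def is_main_process() -> bool:
    # Only touch distributed state when it is both compiled in and initialized.
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank() == 0
    return True  # single-process fallback

print(is_main_process())
```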
MinJu-Ha
49d9fd49bd
Add Fast Image Processor for mobileViT ( #37143 )
...
* Add image_processing_mobilevit_fast.py
* Fix copies
* update _preprocess for channel_flip
* Update for batched image processing
* Resolve merge conflicts with main
* Fix import order and remove trailing whitespace (ruff clean-up)
* Fix copy inconsistencies
* Add NotImplementedError for post_process_semantic_segmentation to satisfy repo checks
* Add auto_docstring
* Adjust style
* Update docs/source/en/model_doc/mobilevit.md
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Update src/transformers/models/mobilevit/image_processing_mobilevit_fast.py
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Update src/transformers/models/mobilevit/image_processing_mobilevit_fast.py
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Delete not used function
* test: add missing tests for and
* Add post_process_semantic_segmentation to mobilevit_fast.py
* Add preprocess function to image_processing_mobilevit_fast.py
* ruff check for formatting
* fix: modify preprocess method to handle BatchFeature correctly
* Remove logic for default value assignment
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Remove normalization and RGB conversion logic not used in slow processor
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Simplify return_tensors logic using one-liner conditional expression
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Remove unused normalization and format parameters
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* add **kwargs and remove default values in _preprocess
* add slow_fast equivalence tests for segmentation
* style: autoformat code with ruff
* Fix slow_fast equivalence test
* merge + remove skipped test
---------
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
2025-06-27 14:40:24 +00:00
Nahieli
4336ecd1ea
add fast image processor nougat ( #37661 )
...
* add fast image processor nougat
* test fixes
* docstring white space
* last fixes
* docstring_type
* tolerance unit test
* fix tolerance
* fix rtol
* remove traling white space
* remove white space
* note for tolerance unit test
* fix tests
* remove print
---------
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
2025-06-27 14:39:43 +00:00
Benjamin Bossan
0c35280e58
TST PEFT integration tests with pipeline generate ( #39086 )
...
Some PEFT integration tests involving text generation pipelines were
failing since #38129 because the base model is too small to generate
longer sequences. Setting max_new_tokens fixes this.
2025-06-27 15:58:10 +02:00
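The fix above simply pins the generation length so tiny test models are not expected to stop on their own. A hedged sketch of the pattern with a placeholder tiny checkpoint:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="hf-internal-testing/tiny-random-gpt2")  # placeholder tiny model
out = pipe("Hello world", max_new_tokens=20)   # explicit token budget instead of relying on EOS
print(out[0]["generated_text"])
```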
JINO ROHIT
993665a5ff
fixed typo for docstring in prepare_inputs method ( #39071 )
2025-06-27 13:57:56 +00:00