BUI Van Tuan
e355c0a11c
Fix missing initializations for models created in 2024 ( #38987 )
* fix GroundingDino
* fix SuperGlue
* fix GroundingDino
* fix MambaModel
* fix OmDetTurbo
* fix SegGpt
* fix Qwen2Audio
* fix Mamba2
* fix DabDetr
* fix Dac
* fix FalconMamba
* skip timm initialization
* fix Encodec and MusicgenMelody
* fix Musicgen
* skip timm initialization test
* fix OmDetTurbo
* clean the code
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
* add reviewed changes
* add back timm
* style
* better check for parametrizations
---------
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
2025-07-02 15:03:57 +02:00
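The per-model initialization fixes above follow the usual PyTorch pattern: a function that walks submodules and resets weights by layer type, applied with `Module.apply`. A minimal sketch of that pattern (the `TinyModel` and `std` value here are illustrative, not the actual `transformers` `_init_weights` code):

```python
import torch
from torch import nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)
        self.norm = nn.LayerNorm(4)

def init_weights(module, std=0.02):
    # Linear layers: normal-distributed weights, zeroed bias.
    # LayerNorm: weight reset to 1, bias to 0.
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=std)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.LayerNorm):
        module.weight.data.fill_(1.0)
        module.bias.data.zero_()

model = TinyModel()
model.apply(init_weights)  # recursively visits every submodule
print(torch.all(model.linear.bias == 0).item())  # True
print(torch.all(model.norm.weight == 1).item())  # True
```

A model whose `_init_weights` forgets one of its layer types silently keeps PyTorch's default initialization for that layer, which is the class of bug the commit above addresses.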
Yao Matrix
89542fb81c
enable more test cases on xpu ( #38572 )
* enable glm4 integration cases on XPU, set xpu expectation for blip2
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
* more
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix style
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* refine wording
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* refine test case names
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* run
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* add gemma2 and chameleon
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix review comments
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
---------
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
2025-06-06 09:29:51 +02:00
Arthur
f5d45d89c4
🚨 Early-error 🚨 config will error out if output_attentions=True
and the attn implementation is wrong ( #38288 )
* Protect ParallelInterface
* early error out on output attention setting for no warning in modeling
* modular update
* fixup
* update model tests
* update
* oups
* set model's config
* more cases
* ??
* properly fix
* fixup
* update
* last ones
* update
* fix?
* fix wrong merge commit
* fix hub test
* nits
* wow I am tired
* updates
* fix pipeline!
---------
Co-authored-by: Lysandre <hi@lysand.re>
2025-05-23 17:17:38 +02:00
cyyever
1e6b546ea6
Use Python 3.9 syntax in tests ( #37343 )
Signed-off-by: cyy <cyyever@outlook.com>
2025-04-08 14:12:08 +02:00
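The "Python 3.9 syntax" referenced above is mainly PEP 585's builtin generics, which let annotations use `list[int]` and `dict[str, int]` directly instead of `typing.List`/`typing.Dict`. A minimal illustration (the `flatten` helper is hypothetical, not from the test suite):

```python
# Python 3.9+: builtin types are subscriptable in annotations (PEP 585),
# so typing.List[...] is no longer needed.
def flatten(batches: list[list[int]]) -> list[int]:
    out: list[int] = []
    for batch in batches:
        out.extend(batch)
    return out

print(flatten([[1, 2], [3]]))  # [1, 2, 3]
```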
Arthur
b912f5ee43
use torch.testing.assert_close instead to get more details about errors in CIs ( #35659 )
* use torch.testing.assert_close instead to get more details about errors in CIs
* fix
* style
* test_all
* revert for IBert
* fixes and updates
* more image processing fixes
* more image processors
* fix mamba and co
* style
* less strict
* ok I won't be strict
* skip and be done
* up
2025-01-24 16:55:28 +01:00
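The motivation for the switch above: `torch.testing.assert_close` raises with per-element diagnostics (number of mismatched elements, greatest absolute and relative difference) where a bare `assert torch.allclose(...)` only reports pass/fail. A minimal sketch of the difference (tensor values and tolerances here are illustrative):

```python
import torch

actual = torch.tensor([1.0, 2.0, 3.0])
expected = torch.tensor([1.0, 2.0, 3.0001])

# Passes: the 1e-4 difference is within the requested absolute tolerance.
torch.testing.assert_close(actual, expected, rtol=0, atol=1e-3)

# Fails, but with a detailed message locating the mismatched element
# instead of a bare AssertionError from `assert torch.allclose(...)`.
try:
    torch.testing.assert_close(actual, expected, rtol=0, atol=1e-6)
except AssertionError:
    print("mismatch detected")
```

Note that `assert_close` requires `rtol` and `atol` to be given together when overriding the dtype-based defaults.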
Pavel Iakubovskii
42b2857b01
OmDet Turbo processor standardization ( #34937 )
* Fix docstring
* Fix docstring
* Add `classes_structure` to model output
* Update omdet postprocessing
* Adjust tests
* Update code example in docs
* Add deprecation to "classes" key in output
* Types, docs
* Fixing test
* Fix missed clip_boxes
* [run-slow] omdet_turbo
* Apply suggestions from code review
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* Make CamelCase class
---------
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
2025-01-17 14:10:19 +00:00
Fanli Lin
2fa876d2d8
[tests] make cuda-only tests device-agnostic ( #35607 )
* initial commit
* remove unrelated files
* further remove
* Update test_trainer.py
* fix style
2025-01-13 14:48:39 +01:00
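Device-agnostic tests pick up whatever accelerator is present instead of hard-coding `"cuda"`, which is also what lets the XPU-enablement work above run the same cases on Intel hardware. A minimal sketch of the selection logic (the `transformers` test helpers such as `torch_device` wrap this differently in detail):

```python
import torch

# Prefer CUDA, then XPU, then fall back to CPU; the hasattr guard keeps
# this working on torch builds without the XPU backend.
if torch.cuda.is_available():
    device = "cuda"
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    device = "xpu"
else:
    device = "cpu"

x = torch.ones(2, 2, device=device)
print(x.device.type)
```

Tests then allocate tensors with `device=device` rather than `.cuda()`, so the same test body is valid on any backend.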
Yoni Gozlan
94f18cf23c
Add OmDet-Turbo ( #31843 )
* Add template with add-new-model-like
* Add rough OmDetTurboEncoder and OmDetTurboDecoder
* Add working OmDetTurbo convert to hf
* Change OmDetTurbo encoder to RT-DETR encoder
* Add swin timm backbone as default, add always partition fix for swin timm
* Add labels and tasks caching
* Fix make fix-copies
* Format omdet_turbo
* fix Tokenizer tests
* Fix style and quality
* Reformat omdet_turbo
* Fix quality, style, copies
* Standardize processor kwargs
* Fix style
* Add output_hidden_states and output_attentions
* Add personalized multi-head attention, improve docstrings
* Add integrated test and fix copy, style, quality
* Fix unprotected import
* Cleanup comments and fix unprotected imports
* Add fix different prompts in batch (key_padding_mask)
* Add key_padding_mask to custom multi-head attention module
* Replace attention_mask by key_padding_mask
* Remove OmDetTurboModel and refactor
* Refactor processing of classes and abstract use of timm backbone
* Add testing, fix output attentions and hidden states, add cache for anchors generation
* Fix copies, style, quality
* Add documentation, convert key_padding_mask to attention_mask
* revert changes to backbone_utils
* Fix docstrings rst
* Fix unused argument in config
* Fix image link documentation
* Reorder config and cleanup
* Add tokenizer_init_kwargs in merge_kwargs of the processor
* Change AutoTokenizer to CLIPTokenizer in convert
* Fix init_weights
* Add ProcessorMixin tests, Fix convert while waiting on uniform kwargs
* change processor kwargs and make task input optional
* Fix omdet docs
* Remove unnecessary tests for processor kwargs
* Replace nested BatchEncoding output of the processor by a flattened BatchFeature
* Make modifications from Pavel review
* Add changes Amy review
* Remove unused param
* Remove normalize_before param, Modify processor call docstring
* Remove redundant decoder class, add gradient checkpointing for decoder
* Remove commented out code
* Fix inference in fp16 and add fp16 integrated test
* update omdet md doc
* Add OmdetTurboModel
* fix caching and nit
* add OmDetTurboModel to tests
* nit change repeated key test
* Improve inference speed in eager mode
* fix copies
* Fix nit
* remove OmdetTurboModel
* [run-slow] omdet_turbo
* [run-slow] omdet_turbo
* skip dataparallel test
* [run-slow] omdet_turbo
* update weights to new path
* remove unnecessary config in class
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-91-248.ec2.internal>
2024-09-25 13:26:28 -04:00