* standardize
* fix tests
* batch update some processors, not final yet
* okay, now I tested that everything indeed runs. Still needs prettification
* emu3
* fixup
* gemma3 but it doesn't generate anything
* fuyu
* update
* why?
* Update src/transformers/models/aya_vision/processing_aya_vision.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* address comments
* bc
* why do we need to guard this import every time?
* i hate guarded imports
* i am blind
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
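The guarded-import grumbling above refers to transformers' availability-check pattern. A minimal sketch of what such a guard looks like, using the real `is_vision_available` helper; the `load_image` function is an illustrative stand-in for the processor code these commits touch:

```python
from transformers.utils import is_vision_available

# Only import PIL when Pillow is actually installed; this is the guard
# pattern the commits above complain about having to repeat.
if is_vision_available():
    from PIL import Image


def load_image(path: str):
    # Illustrative helper, not from the PR itself.
    if not is_vision_available():
        raise ImportError("This feature requires Pillow: pip install Pillow")
    return Image.open(path)
```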
* Firstly: Better detection of when we're a custom class
* Trigger tests
* Let's break everything
* make fixup
* fix mistaken line doubling
* Let's try to get rid of it from config classes at least
* Fixup image processor
* no more circular import
* Let's go back to setting `_auto_class` again
* stash commit
* Revert the irrelevant changes until we figure out AutoConfig
* Change tests since we're breaking expectations
* make fixup
* do the same for all custom classes
* Cleanup for feature extractor tests
* Cleanup tokenization tests too
* typo
* Fix tokenizer tests
* make fixup
* fix image processor test
* make fixup
* Remove warning from register_for_auto_class
* Stop adding model info to auto map entirely
* Remove todo
* Remove the other todo
* Let's start slapping _auto_class on models why not
* Make sure the tests know what's up
* Completely remove add_model_info_to_*
* Start adding _auto_class to models
* Add a flaky decorator
* Add a flaky decorator and import
* stash commit
* More message cleanup
* make fixup
* fix indent
* Fix trust_remote_code prompts
* make fixup
* correct indentation
* Reincorporate changes into dynamic_module_utils
* Update call to trust_remote_code
* make fixup
* Fix video processors too
* Remove is_flaky additions
* make fixup
* let's try a non-regex solution
* make fixup
* Slight adjustment
* Let's just use the original code with a check
* slight tweak to conditional
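For context on the `_auto_class` commits above, a hedged sketch of how a custom class ends up registered for the Auto* API; `register_for_auto_class` is the real entry point, while the class names here are hypothetical:

```python
from transformers import PretrainedConfig, PreTrainedModel


class MyConfig(PretrainedConfig):
    model_type = "my-model"  # hypothetical custom config


class MyModel(PreTrainedModel):
    config_class = MyConfig  # hypothetical custom model


# Sets the `_auto_class` attribute these commits move around, so that
# save_pretrained() writes an auto_map entry and the class is loadable
# through the Auto* API with trust_remote_code=True.
MyConfig.register_for_auto_class("AutoConfig")
MyModel.register_for_auto_class("AutoModel")
```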
* use device agnostic APIs in test cases
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* fix style
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* add one more
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* xpu now supports integer device ids, aligning with CUDA behavior
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* update to use device_properties
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* fix style
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* update comment
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* fix comments
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* fix style
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
---------
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
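A hedged sketch of the device-agnostic pattern these commits move toward: the same integer device ids and `get_device_properties` call on CUDA and XPU alike (XPU support assumes a recent PyTorch build; the actual test helpers live in transformers' testing utilities):

```python
import torch

# Pick an accelerator backend, then use identical integer-device-id APIs.
if torch.cuda.is_available():
    backend = torch.cuda
    device = torch.device("cuda", 0)
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    backend = torch.xpu
    device = torch.device("xpu", 0)  # xpu accepts integer device ids like CUDA
else:
    backend = None
    device = torch.device("cpu")

if backend is not None:
    props = backend.get_device_properties(0)  # aligned API across backends
    print(device, props)
```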
* Update roformer model card
* fix example purpose description
* fix model description according to the comments
* revert changes for autodoc
* remove unneeded tags
* fix review issues
* fix hfoption
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
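For reference, the standardized model-card format these commits apply includes a pipeline quickstart; a sketch of what that looks like for RoFormer. The checkpoint name is an assumption (a commonly used RoFormer checkpoint), not taken from these commits:

```python
from transformers import pipeline

# RoFormer's Chinese tokenizer may additionally require `rjieba`.
fill_mask = pipeline("fill-mask", model="junnyu/roformer_chinese_base")
print(fill_mask("今天[MASK]很好。"))
```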
* docs(swinv2): Update SwinV2 model card to new standard format
* docs(swinv2): Apply review suggestions
Incorporates feedback from @stevhliu to:
- Enhance the introductory paragraph with more details about scaling and SimMIM.
- Generalize the tip from "image classification tasks" to "vision tasks".
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
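The SwinV2 card's vision-task tip translates to a quickstart along these lines; the checkpoint name is an assumption, not taken from these commits:

```python
from transformers import pipeline

# Image classification with a small SwinV2 checkpoint (assumed name).
classifier = pipeline(
    "image-classification", model="microsoft/swinv2-tiny-patch4-window8-256"
)
print(classifier("http://images.cocodataset.org/val2017/000000039769.jpg"))
```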
* Update BioGPT model card
* Update docs/source/en/model_doc/biogpt.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* correction for CPU fallback
* added quantization code and method
* fixed transformers-cli call
---------
Co-authored-by: Aguedo <aguedo@fakeemail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
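The quantization code mentioned above follows the usual bitsandbytes pattern; a hedged sketch, not the card's exact snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/biogpt",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # requires bitsandbytes
    device_map="auto",  # falls back to CPU for layers that don't fit on GPU
)
```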
* stash commit
* Experiment 1: Try just Gemma
* make fixup
* Trigger tests
* stash commit
* Try adding Gemma3 as well
* make fixup
* Correct attrib names
* Correct pipeline model mapping
* Add in all_model_classes for Gemma1 again
* Move the pipeline model mapping around again
* make fixup
* Revert Gemma3 changes since it's a VLM
* Let's try Falcon
* Correct attributes
* Let's try just overriding get_config() for now
* Do Nemotron too
* And Llama!
* Do llama/persimmon
* Correctly skip tests
* Fix Persimmon
* Include Phimoe
* Fix Gemma2
* Set model_tester_class correctly
* Add GLM
* More models!
* models models models
* make fixup
* Add Qwen3 + Qwen3MoE
* Correct import
* make fixup
* Add the QuestionAnswering classes
* Move pipeline mapping to the right place
* Jetmoe too
* Stop RoPE testing models with no RoPE
* Fix up JetMOE a bit
* Can we just force pad_token_id all the time?
* make fixup
* fix starcoder2
* Move pipeline mapping
* Fix RoPE skipping
* Fix RecurrentGemma tests
* Fix Falcon tests
* Add MoE attributes
* Fix values for RoPE testing
* Make sure we set bos_token_id and eos_token_id in an appropriate range
* make fixup
* Fix GLM4
* Add mamba attributes
* Revert bits of JetMOE
* Re-add the JetMOE skips
* Update tests/causal_lm_tester.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Add licence
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
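A hedged sketch of the shared causal-LM tester pattern this PR builds: one common tester that per-model tests subclass, overriding `get_config()` where a model needs non-default attributes. Names below are illustrative, not the exact contents of tests/causal_lm_tester.py:

```python
from transformers import LlamaConfig


class CausalLMModelTester:
    """Illustrative shared tester; not the real tests/causal_lm_tester.py."""

    config_class = None  # each model's tester sets this

    def get_config(self):
        # Keep pad/bos/eos token ids inside the tiny test vocab, as one
        # commit above insists ("in an appropriate range").
        return self.config_class(
            vocab_size=99,
            hidden_size=32,
            num_hidden_layers=2,
            num_attention_heads=4,
            pad_token_id=0,
            bos_token_id=1,
            eos_token_id=2,
        )


class LlamaModelTester(CausalLMModelTester):
    config_class = LlamaConfig  # models with extra attributes override get_config()
```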
* Get parallel loader working. Include tests.
* Update the tests for parallel loading
* Rename env variables.
* Add docs for parallel model weight loading.
* Touch up parallel model loading docs.
* Touch up parallel model loading docs again.
* Edit comment in test_modeling_utils_parallel_loading.py
* Make sure HF_PARALLEL_LOADING_WORKERS is spelled correctly in modeling_utils.py
* Correct times for parallelized loading; previous times were for a "hot" filesystem
* Update parallel model loading so the spawn method is encapsulated. DRY up the code by leveraging get_submodule.
* Update docs on model loading parallelism so that details on setting the multiprocessing start method are removed, now that the package handles this step internally.
* Fix style on model loading parallelism changes.
* Merge latest version of master's modeling_utils.
* Removed unused variable.
* Fix argument packing for the parallel loader.
* Fix state dict being undefined in the parallel model loader.
* Rename variables used in parallel model loading for clarity. Use get_module_from_name().
* Switch to the use of threads for parallel model loading.
* Update docs for parallel loading.
* Remove the use of json.loads when evaluating HF_ENABLE_PARALLEL_LOADING. Prefer simple casting.
* Move parallelized shard loading into its own function.
* Remove use of is_true(). Favor checking env var true values for HF_ENABLE_PARALLEL_LOADING.
* Update copyright to 2025 in readme for parallel model loading.
* Remove garbage collection line in load_shard_file, implicit garbage collection already occurs.
* Run formatter on modeling_utils.py
* Apply style fixes
* Delete tests/utils/test_modeling_utils_parallel_loading.py
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>
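The env var names `HF_ENABLE_PARALLEL_LOADING` and `HF_PARALLEL_LOADING_WORKERS` come straight from the commits above; the usage below is a hedged sketch, and the checkpoint is illustrative:

```python
import os

# Opt in with a truthy string (the commits replaced json.loads with simple casting).
os.environ["HF_ENABLE_PARALLEL_LOADING"] = "true"
os.environ["HF_PARALLEL_LOADING_WORKERS"] = "8"  # thread count; default is an assumption

from transformers import AutoModelForCausalLM

# The speedup applies to sharded checkpoints; this checkpoint is illustrative.
model = AutoModelForCausalLM.from_pretrained("gpt2")
```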
* refactor to remove the can_save_slow_tokenizer property; the check can be done within the if in save_vocab
* move property to fast
* revert if
* check if vocab_file is attr
* fix check for sp
* fix if condition
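A hedged sketch of the refactor described above: `save_vocabulary` checks the `vocab_file` attribute directly instead of going through a separate `can_save_slow_tokenizer` property. The class and file layout here are illustrative:

```python
import os
from shutil import copyfile
from typing import Optional


class FastTokenizerSketch:
    """Illustrative stand-in for a PreTrainedTokenizerFast subclass."""

    vocab_file: Optional[str] = None  # set when built from a slow (sentencepiece) file

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None):
        vocab_file = getattr(self, "vocab_file", None)  # "check if vocab_file is attr"
        if vocab_file is None or not os.path.isfile(vocab_file):
            # Nothing to copy; previously this was can_save_slow_tokenizer's job.
            return ()
        out_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + "vocab.model"
        )
        if os.path.abspath(vocab_file) != os.path.abspath(out_file):
            copyfile(vocab_file, out_file)
        return (out_file,)
```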