* Calculating box_bias at the start once, then reusing it at inference
* Updating the compute_box_bias function for backwards compatibility
* Caching compute_box_bias function
* Bug fix
* Update owlv2 accordingly to ensure repo consistency
* Co-authored-by: nvbinh15 <binh.pdc01@gmail.com>
* Fixup changes
* Made copied code consistent
* Co-authored-by: nvbinh15 <binh.pdc01@gmail.com>
---------
Co-authored-by: Nguyen Van Binh <>
Co-authored-by: Nguyen Van Binh <binh.pdc01@gmail.com>
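A minimal sketch of the caching pattern this series describes, assuming an `lru_cache` over a pure function of the patch count so the bias is computed once and reused at inference; the logit-space prior mirrors OWL-ViT's, but the exact body here is illustrative, not the merged diff:

```python
from functools import lru_cache

import torch


@lru_cache(maxsize=2)
def compute_box_bias(num_patches: int) -> torch.Tensor:
    # Normalized (x, y) centers of each patch on the feature grid.
    xy = torch.stack(
        torch.meshgrid(
            torch.arange(1, num_patches + 1),
            torch.arange(1, num_patches + 1),
            indexing="xy",
        ),
        dim=-1,
    ).float() / num_patches
    xy = xy.reshape(-1, 2).clamp(1e-4, 1 - 1e-4)
    # Bias is the logit of the patch centers and of the default box size,
    # so predicted offsets start from a sensible prior.
    coord_bias = torch.log(xy) - torch.log1p(-xy)
    size = torch.full_like(xy, 1.0 / num_patches)
    size_bias = torch.log(size) - torch.log1p(-size)
    return torch.cat([coord_bias, size_bias], dim=-1)  # (num_patches**2, 4)
```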
* attempt to fix
* the actual fix that works with compilation!
* this?
* temporary update
* nit?
* dispatch to memory-efficient attention?
* update both models that have static cache support
* fix copies, fix compile
* make sure fix
* fix cohere and gemma
* fix beams?
* nit
* slipped through the cracks
* nit
* nits
* update
* fix-copies
* skip failing tests
* nits
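The "dispatch to memory-efficient attention?" commit points at SDPA backend selection. A standalone sketch of that dispatch using the current `torch.nn.attention` API, assuming a CUDA device; this illustrates the mechanism, not the actual diff:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

# Pin scaled_dot_product_attention to the memory-efficient kernel instead of
# letting PyTorch pick a backend.
with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```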
* added safety checkers for load_in_4bit and load_in_8bit on init, as well as their setters
* Update src/transformers/utils/quantization_config.py
typo correction for load_in_8bit setter checks
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
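A simplified sketch of the safety checkers described above: the config rejects enabling 4-bit and 8-bit loading at the same time, both at init and through the setters. Class name and messages are illustrative, condensed from the real `quantization_config.py`:

```python
class BitsAndBytesConfigSketch:
    def __init__(self, load_in_4bit: bool = False, load_in_8bit: bool = False):
        self._load_in_4bit = False
        self._load_in_8bit = False
        # Route init values through the setters so the same checks apply.
        self.load_in_4bit = load_in_4bit
        self.load_in_8bit = load_in_8bit

    @property
    def load_in_4bit(self) -> bool:
        return self._load_in_4bit

    @load_in_4bit.setter
    def load_in_4bit(self, value: bool):
        if not isinstance(value, bool):
            raise TypeError("load_in_4bit must be a boolean")
        if self.load_in_8bit and value:
            raise ValueError("load_in_4bit and load_in_8bit are both True, but only one can be used at the same time")
        self._load_in_4bit = value

    @property
    def load_in_8bit(self) -> bool:
        return self._load_in_8bit

    @load_in_8bit.setter
    def load_in_8bit(self, value: bool):
        if not isinstance(value, bool):
            raise TypeError("load_in_8bit must be a boolean")
        if self.load_in_4bit and value:
            raise ValueError("load_in_4bit and load_in_8bit are both True, but only one can be used at the same time")
        self._load_in_8bit = value
```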
* Initial commit (still lots of unfinished bits)
* (Still untested) add safetensors sharding to save_pretrained
* Fix safetensors saving, update default shard size to match PT
* Add proper loading of TF-format safetensors
* Revert default size in case that changes things
* Fix incorrect index name
* Update loading priority
* Update tests
* Make the tests a little more stringent
* Expand tests
* Add sharded cross-test
* Fix argument name
* One more test fix
* Adding mlx to the list of allowed formats
* Remove irrelevant block for safetensors
* Refactor warning logging into a separate function
* Remove unused skip_logger_warnings arg
* Update src/transformers/modeling_tf_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Move function def
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
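A hedged usage sketch of the sharded safetensors saving this series adds on the TF side; the directory and shard size are explicit placeholders rather than claims about the final default:

```python
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("bert-base-uncased")
model.save_pretrained(
    "./tf-bert-sharded",
    safe_serialization=True,  # write .safetensors instead of .h5
    max_shard_size="5GB",     # split weights into shards no larger than this
)
# The directory now holds sharded *.safetensors files plus an index JSON, and
# loads back in TF as well as cross-loading into PyTorch, which is what the
# sharded cross-test above exercises.
```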
* Update docstring for RMSNorm
* Update cache_params object to correct MambaCache type
* Update docstrings and type info
* Pass through use_cache
* ruff
* Reformat with 119 char limit per line (thanks Arthur)
* Pass through use_cache specifically to the backbone rather than all keyword arguments
* Update src/transformers/models/mamba/modeling_mamba.py
* Update src/transformers/models/mamba/modeling_mamba.py
* Update src/transformers/models/mamba/modeling_mamba.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/mamba/modeling_mamba.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update tab
* Update src/transformers/models/mamba/modeling_mamba.py
* Update src/transformers/models/mamba/modeling_mamba.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
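A sketch of what "pass through use_cache" enables end to end: the typed `MambaCache` returned in `cache_params` can be fed back for incremental decoding. The checkpoint id is the public mamba-130m conversion, used here only as an example:

```python
from transformers import AutoTokenizer, MambaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

inputs = tokenizer("Hello there", return_tensors="pt")
out = model(**inputs, use_cache=True)  # use_cache is forwarded to the backbone
cache = out.cache_params               # a MambaCache, per the updated type info
# Feed only the last token plus the cache on the next step.
step = model(input_ids=inputs.input_ids[:, -1:], cache_params=cache, use_cache=True)
```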
* Fixed the DPO trainer issue when using one node and multiple GPUs
* Add the assert before the update
* run the ruff formatter
* Update src/transformers/trainer.py
Thank you.
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Remember to run `make style` and `make quality` before committing
* Update src/transformers/trainer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Added SuperPoint docs
* Added tests
* Removed commented part
* Created a new branch to fix the add_superpoint branch
* Fixed dummy_pt_objects
* Committed missing files
* Fixed README.md
* Apply suggestions from code review
Fixed small changes
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Moved ImagePointDescriptionOutput from modeling_outputs.py to modeling_superpoint.py
* Removed AutoModelForKeypointDetection and related stuff
* Fixed inconsistencies in image_processing_superpoint.py
* Moved infer_on_model logic directly into test_inference
* Fixed bugs; added labels to the forward method with checks that it is properly None, and added tests for this logic in test_modeling_superpoint.py
* Added tests to SuperPointImageProcessor to ensure that images are properly converted to grayscale
* Removed remaining mentions of MODEL_FOR_KEYPOINT_DETECTION_MAPPING
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fixed test inputs from (w, h) to (h, w)
* Removed unnecessary condition
* Moved last_hidden_state to be the first returned
* Moved last_hidden_state to be the first returned (bis)
* Moved last_hidden_state to be the first returned (ter)
* Switched image_width and image_height in tests to match recent changes
* Added config as first SuperPointConvBlock init argument
* Reordered README's after merge
* Added missing first config argument to SuperPointConvBlock instantiations
* Removed formatting error
* Added SuperPoint to README's de, pt-br, ru, te and vi
* Checked out README_fr.md
* Fixed README_fr.md
* Test fix README_fr.md
* Test fix README_fr.md
* Last `make fix-copies`!
* Updated checkpoint path
* Removed unused SuperPoint doc
* Added missing image
* Update src/transformers/models/superpoint/modeling_superpoint.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Removed unnecessary import
* Update src/transformers/models/superpoint/modeling_superpoint.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Added SuperPoint to _toctree.yml
---------
Co-authored-by: steven <steven.bucaillle@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Steven Bucaille <steven.bucaille@buawei.com>
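A hedged usage sketch for the SuperPoint addition, using the updated checkpoint path mentioned above; the image URL is illustrative and the class name assumes the keypoint-detection head this PR ships:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SuperPointForKeypointDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

inputs = processor(image, return_tensors="pt")  # grayscale conversion happens here
with torch.no_grad():
    outputs = model(**inputs)
# outputs carries keypoints, scores, and descriptors, with last_hidden_state
# ordered first, per the commits above.
```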
* use user_defined_symbols
* fixup
* nit
* add a very robust test
* make sure all models are tested with the `pretrained_tokenizer_to_test`
* should we make sure we test all of them?
* merge
* remove the id
* fix test
* update
* oopsies
* oops
* fixup
* fix copies check
* remove `pretrained_tokenizer_to_test`
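The switch to `user_defined_symbols` concerns how tokens get injected into a SentencePiece model so they always surface as single pieces. A sketch of that mechanism via the serialized proto, with file names as placeholders; this illustrates the idea, not the merged diff:

```python
from sentencepiece import sentencepiece_model_pb2 as model_pb2

m = model_pb2.ModelProto()
with open("tokenizer.model", "rb") as f:
    m.ParseFromString(f.read())

# Register the token as USER_DEFINED so the tokenizer never splits it.
piece = model_pb2.ModelProto.SentencePiece()
piece.piece = "<new_token>"
piece.score = 0.0
piece.type = model_pb2.ModelProto.SentencePiece.USER_DEFINED
m.pieces.append(piece)

with open("tokenizer_patched.model", "wb") as f:
    f.write(m.SerializeToString())
```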
* add galore v1
* add import
* add tests and doc
* fix doctest
* forward contrib credits from discussions
* forward contrib credits from discussions
* Apply suggestions from code review
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* fix failing tests
* switch to `optim_target_modules` and clarify docs
* more clarification
* enhance lookup logic
* update a test to add peak memory
* add regex, all-linear and single string support
* add layer-wise optimization through DummyOptimizers and LRSchedulers
* forward contrib credits from discussions and original idea
* add a section about DDP not supported in layerwise
* Update src/transformers/trainer.py
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* fix self
* check only if layer_wise
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* oops
* make use of intervals
* clarify comment
* add matching tests
* GaLoRe -> GaLore
* move to `get_scheduler`
* add note on docs
* add a warning
* adapt a bit the docs
* update docstring
* support original API
* Update docs/source/en/trainer.md
* slightly refactor
* Update docs/source/en/trainer.md
Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix args parsing and add tests
* remove warning for regex
* fix type hint
* add note about extra args
* make `is_regex` return optional
---------
Co-authored-by: Maxime <maximegmd@users.noreply.github.com>
Co-authored-by: Wing Lian <winglian@users.noreply.github.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: hiyouga <hiyouga@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
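A hedged sketch of the GaLore API this series lands: pick a galore optimizer and select target modules by regex (or "all-linear", or a plain string). The output path is a placeholder, and the layer-wise variant noted in the docs does not support DDP:

```python
from transformers import TrainingArguments

# Requires the galore-torch package to be installed.
args = TrainingArguments(
    output_dir="./galore-run",
    optim="galore_adamw",  # or "galore_adamw_layerwise" for layer-wise updates
    optim_target_modules=[r".*attn.*", r".*mlp.*"],  # regex list, "all-linear", or str
    per_device_train_batch_size=1,
)
```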
* Update pipeline_tutorial.md to include gradio
* Update pipeline_tutorial.md
* Update docs/source/en/pipeline_tutorial.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/pipeline_tutorial.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/pipeline_tutorial.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/pipeline_tutorial.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update pipeline_tutorial.md
* Update docs/source/en/pipeline_tutorial.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
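The gradio addition boils down to wrapping a pipeline in a web demo; a minimal sketch along the lines of the updated tutorial, with an illustrative model id:

```python
import gradio as gr
from transformers import pipeline

pipe = pipeline("image-classification", model="google/vit-base-patch16-224")
gr.Interface.from_pipeline(pipe).launch()  # serves a local demo for the pipeline
```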
* Cohere Model Release (#1)
Cohere Model Release
* Remove unnecessary files and code (#2)
Some cleanup
* Delete cohere-model directory (#3)
* Make Fix (#5)
* Pr fixes (#6)
* fixes for pr
* pr fixes for the format
* pr fixes for the format
* Update src/transformers/models/auto/tokenization_auto.py
* Tokenizer test (#8)
* tokenizer test
* format fix
* Adding Docs and other minor changes (#7)
* Add modeling tests (#9)
* Smol Fix (#11)
* tokenization tests are fixed
* format fixes
* fix pr doc tests
* fix pr doc tests
* fix pr doc tests
* fix pr style check
* small changes in cohere.md
* FIX: Address final comments for transformers integration (#13)
* fix modeling final nits and add proper test file
* for now leave empty tests
* add integration test
* push new test
* fix modeling cohere (#14)
* Update chat templates to use the new API (#15)
---------
Co-authored-by: ahmetustun <ahmetustun89@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
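A hedged usage sketch for the Cohere release, exercising the new chat template API from the final commit; the Command-R checkpoint id is an assumption about which weights shipped alongside this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-v01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
tokens = model.generate(input_ids, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(tokens[0]))
```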
* Allow apply_chat_template to pass kwargs to the template
* Fix priority for template_kwargs
* Fix docstring
* style fix
* Add the option for the model to have a dict of templates
* Error message cleanup
* Add test for chat template dicts
* Simplify the chat template dict test and apply it to all tokenizers in self.get_tokenizers()
* Save chat template dicts as lists with fixed key names
* Add test for serialization/reloading
* Add require_jinja just to be safe, even though I don't think we use it
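A sketch of the two features described above: a dict of named chat templates on the tokenizer, and extra kwargs forwarded into the Jinja context. Template strings, the `tagged` key, and the `tag` kwarg are all illustrative:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# A dict of templates; apply_chat_template can select one by name.
tok.chat_template = {
    "default": "{% for m in messages %}{{ m['content'] }}\n{% endfor %}",
    "tagged": "{% for m in messages %}[{{ tag }}] {{ m['content'] }}\n{% endfor %}",
}

messages = [{"role": "user", "content": "hello"}]
print(tok.apply_chat_template(messages, tokenize=False))  # uses "default"
# Named template plus an extra kwarg ("tag") rendered inside the template.
print(tok.apply_chat_template(messages, chat_template="tagged", tokenize=False, tag="USER"))
```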