fix typo:
from:
`model = TFAutoModelForQuestionAnswering("distilbert-base-uncased")`
to:
`model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")`
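For reference, a minimal sketch of the corrected usage (standard `transformers` API; weights are loaded through the `from_pretrained` classmethod, not the class constructor):

```python
from transformers import TFAutoModelForQuestionAnswering

# Pretrained weights are loaded with the `from_pretrained` classmethod;
# the typo called the class itself with the checkpoint name.
model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```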
* update doc
* revert
* typo fix
* refine
* add dtypes
* Update docs/source/en/perf_train_cpu.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/perf_train_cpu.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/perf_train_cpu.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* no comma
* use avx512-vnni
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Changed type hinting for all attention inputs to 'Optional[Tuple[torch.FloatTensor, ...]] = None'
* Fixed the ruff formatting issue
* fixed type hinting for all hidden_states to 'Optional[Tuple[torch.FloatTensor, ...]] = None'
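An illustrative sketch of the new annotation style (the output class here is hypothetical, not taken from any of the files listed below): `attentions` and `hidden_states` are variable-length tuples of tensors, one per layer, so `Tuple[torch.FloatTensor, ...]` is the accurate hint.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
from transformers.utils import ModelOutput


@dataclass
class ExampleModelOutput(ModelOutput):
    # Hypothetical output class showing the updated hints: the tuples hold
    # one tensor per layer, hence the `...` in the Tuple annotation.
    last_hidden_state: torch.FloatTensor = None
    hidden_states: Optional[Tuple[torch.FloatTensor, ...]] = None
    attentions: Optional[Tuple[torch.FloatTensor, ...]] = None
```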
* Changed type hinting in these 12 scripts: modeling_dpr.py, modeling_nat.py, idefics/vision.py, modeling_tf_dpr.py, modeling_luke.py, modeling_swin.py, modeling_tf_swin.py, modeling_blip.py, modeling_tf_blip.py, modeling_donut_swin.py, modeling_dinat.py, modeling_swinv2.py
* test fail update
* fixed type hinting for these 15 scripts: modeling_xlnet.py, modeling_tf_xlnet.py, modeling_led.py, modeling_tf_led.py, modeling_rwkv.py, modeling_dpt.py, modeling_tf_cvt.py, modeling_clip.py, modeling_flax_clip.py, modeling_tf_clip.py, modeling_longformer.py, modeling_tf_longformer.py, modeling_siglip.py, modeling_clap.py, modeling_git.py
* Removed the myvenv file
* Fixed type hinting for these 8 scripts: modeling_tvlt.py, modeling_sam.py, modeling_tf_sam.py, modeling_tvp.py, modeling_rag.py, modeling_tf_rag.py, modeling_tf_xlm.py, modeling_xlm.py
* fix the function load_balancing_loss_func in Mixtral_Moe to include attention_mask
* format code using black and ruff
* skip computing mask if attention_mask=None
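A simplified sketch of the idea (function name and shapes are illustrative, not the exact `load_balancing_loss_func` in modeling_mixtral.py): padding tokens are masked out of the per-expert statistics when an attention_mask is provided, and the cheap unmasked path is kept for attention_mask=None.

```python
import torch


def load_balancing_loss(router_logits, num_experts, top_k, attention_mask=None):
    # router_logits: (batch * seq_len, num_experts)
    routing_weights = torch.softmax(router_logits, dim=-1)
    _, selected_experts = torch.topk(routing_weights, top_k, dim=-1)
    expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts).float()

    if attention_mask is None:
        # No padding: plain means over all tokens.
        tokens_per_expert = expert_mask.mean(dim=0)
        router_prob_per_expert = routing_weights.mean(dim=0)
    else:
        # Weight each token by its mask so padding does not skew the stats.
        mask = attention_mask.reshape(-1).float()
        denom = mask.sum()
        tokens_per_expert = (expert_mask * mask[:, None, None]).sum(dim=0) / denom
        router_prob_per_expert = (routing_weights * mask[:, None]).sum(dim=0) / denom

    return (tokens_per_expert.mean(dim=0) * router_prob_per_expert).sum() * num_experts


loss = load_balancing_loss(torch.randn(8, 4), num_experts=4, top_k=2,
                           attention_mask=torch.ones(8))
```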
* add tests for load balancing loss Mixtral-Moe
* fix assertion that the loss differs in mixtral test
* fix padding length
* use assertNotAlmostEqual and print to debug
* remove print for debug
* minor updates
* reduce rtol and atol
* fix a hidden bug of GenerationConfig
* keep `sort_keys=True` to maintain visibility
* Update src/transformers/generation/configuration_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update configuration_utils.py
in case `obj` is a list, check the items in the list
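A hedged sketch of the serialization pattern (helper name is illustrative, not the exact code in configuration_utils.py): lists must be descended into, not just dicts, and `sort_keys=True` keeps the JSON output stable across runs.

```python
import json


def to_jsonable(obj):
    # Recursively convert values to JSON-serializable equivalents.
    if isinstance(obj, dict):
        return {str(k): to_jsonable(v) for k, v in obj.items()}
    if isinstance(obj, list):
        # The fix described above: also check the items inside a list.
        return [to_jsonable(item) for item in obj]
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        return obj
    return str(obj)  # fallback for non-serializable objects


config_dict = {"temperature": 0.7, "bad_words_ids": [[1, 2], [3]]}
print(json.dumps(to_jsonable(config_dict), indent=2, sort_keys=True))
```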
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add dataloader prefetch factor in training args and trainer
* remove trailing spaces
* prevent dataloader_num_workers == 0 when dataloader_prefetch_factor is not None
dataloader_prefetch_factor only works when data is loaded in a worker process separate from the main one. This commit adds the necessary checks to avoid setting prefetch_factor when there is no such process.
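A minimal usage sketch under that constraint (arguments as added by this PR):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    dataloader_num_workers=4,      # must be > 0 ...
    dataloader_prefetch_factor=2,  # ... for prefetching to take effect
)
```

Setting `dataloader_prefetch_factor` while `dataloader_num_workers == 0` is rejected, since prefetching only happens in worker processes.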
* Remove whitespace from empty line
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Enable instantiating model with pretrained backbone weights
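A hedged usage sketch with DETR (exact argument support varies by model; `use_pretrained_backbone=True` downloads backbone weights):

```python
from transformers import DetrConfig, DetrForObjectDetection

config = DetrConfig(
    use_timm_backbone=True,        # back the model with a timm backbone
    backbone="resnet50",           # backbone architecture name
    use_pretrained_backbone=True,  # load pretrained backbone weights
)
model = DetrForObjectDetection(config)  # requires `timm` to be installed
```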
* Update tests so backbone checkpoint isn't passed in
* Remove doc updates until changes made in modeling code
* Clarify pretrained import
* Update configs - docs and validation check
* Update src/transformers/utils/backbone_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Clarify exception message
* Update config init in tests
* Add test for when use_timm_backbone=True
* Small test updates
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update convert_llava_weights_to_hf.py script
* Remove the config update that added padding to `vocab_size` and `text_config.vocab_size`, which caused a `ValueError`.
* Remove keys that end with `inv_freq` from the state dict.
* Add examples and instructions for creating `model_state_dict.bin` that can be used by the script.
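A hedged sketch of that state-dict preprocessing (keys here are illustrative stand-ins; see the script's docstring for the authoritative instructions). `inv_freq` rotary buffers are recomputed at load time, so they can be dropped before saving:

```python
import torch

# Stand-in for the original model's state dict (illustrative keys).
state_dict = {
    "model.layers.0.self_attn.rotary_emb.inv_freq": torch.zeros(64),
    "model.layers.0.self_attn.q_proj.weight": torch.zeros(16, 16),
}
# Drop rotary inverse-frequency buffers before saving.
state_dict = {k: v for k, v in state_dict.items() if not k.endswith("inv_freq")}
torch.save(state_dict, "model_state_dict.bin")
```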
* Update convert_llava_weights_to_hf.py
* Update convert_vipllava_weights_to_hf.py
* [DETA] fix freeze/unfreeze function
* Update src/transformers/models/deta/modeling_deta.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/deta/modeling_deta.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* add freeze/unfreeze test case in DETA
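An illustrative sketch of the pattern being fixed and tested (not the exact DETA methods): freezing must set `requires_grad` to False on every backbone parameter, and unfreezing must set it back to True.

```python
import torch.nn as nn


def freeze_backbone(backbone: nn.Module):
    for _, param in backbone.named_parameters():
        param.requires_grad_(False)


def unfreeze_backbone(backbone: nn.Module):
    for _, param in backbone.named_parameters():
        param.requires_grad_(True)


backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
freeze_backbone(backbone)
assert all(not p.requires_grad for p in backbone.parameters())
unfreeze_backbone(backbone)
assert all(p.requires_grad for p in backbone.parameters())
```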
* fix typo
* fix typo 2
* fix: enable aux and enc loss in training pipeline
* Add unsynced variables from original DETA for training
* modifications to pass CI tests
* make style
* make fix
* manual make fix
* change 'two_stage' default to True in DETA modeling test configuration and minor change to distributed checking
* remove print
* divide configuration in DetaModel and DetaForObjectDetection
* images smaller than 224 will raise a top-k error
* pred_boxes and logits should match two_stage_num_proposals in length
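A hedged sketch of the guard this implies (illustrative shapes, not the exact DETA code): `torch.topk` errors when k exceeds the number of available proposals, so k is clamped first.

```python
import torch

enc_scores = torch.randn(2, 100)   # (batch, num_proposals) from a small image
two_stage_num_proposals = 300      # configured value, larger than available
topk = min(two_stage_num_proposals, enc_scores.shape[1])  # clamp k
topk_scores, topk_indices = torch.topk(enc_scores, topk, dim=1)
```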
* add missing part in DetaConfig
* Update src/transformers/models/deta/modeling_deta.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add docstring in configuration and prettify TODO section
* change distribute related code to accelerate
* Update src/transformers/models/deta/configuration_deta.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/deta/test_modeling_deta.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* protect importing accelerate
* change variable name to specific value
* wrong import
* fix aux_loss in conditional_detr
* add test aux_loss
* add aux_loss test in deta and table_transformer
* fix yolos since it doesn't have an auxiliary loss function
* fix maskformer auxiliary_loss related code
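For context, an illustrative sketch of DETR-style auxiliary loss (simplified; not the exact code in any of the models above): when enabled, the criterion is applied to every intermediate decoder layer's predictions, not just the last layer's.

```python
import torch


def criterion(logits, targets):
    return torch.nn.functional.cross_entropy(logits, targets)


def total_loss(intermediate_logits, targets, auxiliary_loss=True):
    # intermediate_logits: (num_decoder_layers, batch, num_classes)
    loss = criterion(intermediate_logits[-1], targets)
    if auxiliary_loss:
        # Add the same loss for each intermediate decoder layer.
        for aux_logits in intermediate_logits[:-1]:
            loss = loss + criterion(aux_logits, targets)
    return loss


logits = torch.randn(6, 4, 10)        # 6 decoder layers, batch 4, 10 classes
targets = torch.randint(0, 10, (4,))
print(total_loss(logits, targets))
```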
* make style
* change param 'auxiliary_loss' to 'use_auxiliary_loss'
* change param 'auxiliary_loss' to 'use_auxiliary_loss' in tests
* make style & fix-copies, also revert yolos related parameter
* revert variable name 'use_auxiliary_loss' to 'auxiliary_loss' due to DetrConfig
* revert variable name in yolos
* revert maskformer
* add aux_loss test in maskformer
* make style
* Update src/transformers/models/yolos/configuration_yolos.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Allow non-special tokens to be added
* Add test, fix token adding code
* Revert changes to id_to_token and token_to_id
* Update the ESM tokenizer to be a bit more standardized
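A hedged usage sketch (downloads the checkpoint; any ESM checkpoint works):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
# After this change, regular (non-special) tokens can be added too.
num_added = tokenizer.add_tokens(["<new_residue>"], special_tokens=False)
print(num_added)  # 1 if the token was not already in the vocabulary
```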
* Update src/transformers/models/esm/tokenization_esm.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update convert_llava_weights_to_hf.py
Fix call to `tokenizer.add_tokens`
* Add special_tokens to tokenizer.add_tokens in convert_vipllava_weights_to_hf.py
* finalize
* make fix-copies for whisper
* [Tests] Make sure that we don't run tests multiple times
* Update src/transformers/models/whisper/modeling_whisper.py
* [Tests] Make sure that we don't run tests multiple times
* fix more
* improve
* improve
* improve further
* improve more
* improve
* fix more
* git commit and git push
* fix more
* fix more
* fix more
* New try
* Fix more whisper stuff
* Improve
* correct more
* correct more
* correct more
* Fix some tests
* Add more tests
* correct more
* correct more
* correct more
* push
* correct more
* Fix more
* Better
* without decoder mask
* correct more
* clean
* save intermediate
* Fix more
* Fix VAD for large-v2
* Save new
* Correct more
* make cleaner
* correct tests
* correct src
* Finish
* Fix more
* Fix more
* finish
* Fix edge cases
* fix return_dict_in_generate
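A hedged usage sketch for the generate-side fix (checkpoint download required; silence is used as a stand-in for real audio):

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

audio = np.zeros(16000, dtype=np.float32)  # 1 second of silence
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
out = model.generate(inputs.input_features, return_dict_in_generate=True)
print(out.sequences.shape)
```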
* fix all tests
* make style
* add docstrings
* add docstrings
* Fix logit processor
* make style
* fix pipeline test
* fix more style
* Apply suggestions from code review
* apply feedback from Sanchit
* correct more
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* correct more
* correct more
* correct more
* Fix staticmethod
* correct more
* fix
* fix slow tests
* make style
* fix tokenizer test
* fix tokenizer test
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* finish
* finish
* revert kwargs change
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>