* Fix saved_model_creation_extended
* Skip the BLIP model creation test for now
* Fix TF SAM test
* Fix longformer tests
* Fix Wav2Vec2
* Add a skip for XLNet
* make fixup
* make fix-copies
* Add comments
* Fix resuming checkpoints for PeftModels
Fix an error that occurred when resuming a PeftModel from a training checkpoint. The error was caused because PeftModel.save_pretrained saves only adapter-related data, while _load_from_checkpoint expected a full torch saved model. This PR fixes the issue and allows the adapter checkpoint to be loaded.
Resolves: #24252
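The resume fix above boils down to detecting which weights file a checkpoint actually contains before loading it. A minimal sketch of that fallback logic (not the actual Trainer code; the helper name `resolve_checkpoint_weights` is hypothetical, while the file names match those used by Transformers and PEFT):

```python
import os

WEIGHTS_NAME = "pytorch_model.bin"          # full torch model, written by save_pretrained
ADAPTER_WEIGHTS_NAME = "adapter_model.bin"  # adapter-only weights, written by PeftModel.save_pretrained


def resolve_checkpoint_weights(checkpoint_dir):
    """Return the weights file present in a checkpoint and its kind.

    PeftModel.save_pretrained writes only the adapter weights, so a resume
    routine that looks exclusively for pytorch_model.bin would raise even
    though the checkpoint is valid. This sketch falls back to the adapter
    file when no full model is found.
    """
    full_path = os.path.join(checkpoint_dir, WEIGHTS_NAME)
    adapter_path = os.path.join(checkpoint_dir, ADAPTER_WEIGHTS_NAME)
    if os.path.isfile(full_path):
        return full_path, "full"
    if os.path.isfile(adapter_path):
        return adapter_path, "adapter"
    raise ValueError(f"No model weights found in {checkpoint_dir}")
```

With this check in place, the adapter checkpoint can be loaded back onto the wrapped PeftModel instead of being treated as a missing full model.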
* fix last comment
* fix nits
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
* docs: add BentoML to awesome-transformers
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
* chore: add the project to the bottom of the list
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
---------
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
Update __init__.py
Fix link to documentation to install Transformers from source
The title probably changed at some point from 'Installing' to 'Install'.
* Add test for proper input signatures
* No more signature pruning
* Test the dummy inputs are valid too
* fine-tine -> fine-tune
* Fix indent in test_dataset_conversion
* Use tied weight keys
* More
* Fix tied weight missing warning
* Only give info on unexpected keys with different classes
* Deal with empty archs
* Fix tests
* Refine test
* Fix one BLIP arg not being optional, remove misspelled arg
* Remove the lxmert test overrides and just use the base test_saved_model_creation
* saved_model_creation fixes and re-enabling tests across the board
* Remove unnecessary skip
* Stop caching sinusoidal embeddings in speech_to_text
* Fix transfo_xl compilation
* Fix transfo_xl compilation
* Fix the conditionals in xglm
* Set the save spec only when building
* Clarify comment
* Move comment correctly
* Correct embeddings generation for speech2text
* Mark RAG generation tests as @slow
* Remove redundant else:
* Add comment to clarify the save_spec line in build()
* Fix size tests for XGLM at last!
* make fixup
* Remove one band_part operation
* Mark test_keras_fit as @slow
* Revert whisper change and modify the test_compile_tf_model test
* make fixup
* Tweak test slightly
* Add functional model saving to test
* Ensure TF can infer shapes for data2vec
* Add override for efficientformer
* Mark test as slow
* Fix LLaMa beam search when using parallelize
Same issue as T5 (#11717).
* fix code format in modeling_llama.py
* fix format of _reorder_cache in modeling_llama.py
* Make conversion faster, fix None vs 0 bug
* Add second sort for consistency
* Update src/transformers/convert_slow_tokenizer.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Add mms ctc fine tuning
* make style
* More fixes that are needed
* make fix-copies
* make draft for README
* add new file
* move to new file
* make style
* make style
* add quick test
* make style
* make style