* starting attn refactor for encoder decoder models via bart (eager + sdpa)
* flash attention works, remove unnecessary code
* flex attention support for bart! gotta check that the renaming is not too aggressive
* some comments
* skip flex grad test for standalone as done with the other test
* revert flex attn rename (for now), sdpa simplify, and todos
* more todos
* refactor mask creation for reuse
* modular attempt at biogpt
* first batch of other models
* fix attn dropout
* fix autoformer copies
* hubert
* another batch of models
* copies/style + last round of bart models --> whisper next?
* remove unnecessary _reshape function and remove copy to whisper
* add skip for decoder-only models out of enc-dec (same as in bart)
* bring back licences
* remove comment, added to pr read instead
* mostly docs
* disable sew flex attn for now, as its attn mask handling is unclear
* oops
* test fixes for enc-dec
* torch fx fixes + try at flex attn
* skip on mbart
* some more fixes
* musicgen skip / delete old attn class logic + sdpa compose compile skip
* disable flex attn for musicgen, not worth the effort
* more fixes and style
* flex attention test for dropout and encoder-decoder models that don't have main input names
* informer fixes
* the weirdest thing I've encountered yet...
* style
* remove empty tensor attempt, found core root in previous commits
* disable time series due to tests being very text centric on inputs
* make speech to text ignore the other attns, also due to tests
* update docs
* remaining issues resolved?
* update docs for current state --> nllb moe and pegasus x sdpa is questionable :D
* some models have not set the is_causal flag...
* change dtype in softmax to old behaviour + some modular fixes
* I hate it but it is what it is
* fixes from main for bart
* forgot this one
* some model fixes
* style
* current status
* marian works now
* fixing some copies
* some copy fixes + time series x informer
* last models possibly and fixes on style/copies
* some post merge fixes
* more fixes
* make attention interface callable and move warnings there
* style lol
* add comment to "unsupported"
* remove callable interface and change interface warnings + some copies
* fix
* ternary is ugly af, make it simpler
* how did that happen
* fix flex attn test
* failing the test
* no more fallback! fixing copies next
* style + attn fixed
* fixing copies and mask creation
* wrong copy
* fixup tests and disable flex attn for now
* fixup last tests?
* rm already deprecated padding max length
* truncate_strategy AS AN ARG has already been deprecated for a few years (see the sketch below)
* fix
* rm test_padding_to_max_length
* rm pad_to_max_length=True in other tests
* rm from common
* missed fnet
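For context, a minimal sketch of how the removed legacy argument maps onto the current tokenizer API (checkpoint and lengths are illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# Legacy (removed): tok("hello world", pad_to_max_length=True, max_length=8)
# Current API: explicit padding / truncation strategies
enc = tok("hello world", padding="max_length", truncation=True, max_length=8)
print(enc["input_ids"])
```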
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* chore: fix typos in the tests
* fix: format codes
* chore: fix copy mismatch issue
* fix: format codes
* chore: fix copy mismatch issue
* chore: fix copy mismatch issue
* chore: fix copy mismatch issue
* chore: restore previous words
* chore: revert unexpected changes
* Correct the new defaults
* CIs
* add check
* Update utils.py
* Update utils.py
* Add max_length in the generate test that checks shape without passing a length
* style
* CIs
* fix fx CI issue
* first try
* codestyle
* idefics2 is happy
* [run-slow] llava, llava_next, video_llava, vipllava, llava_next_video, idefics, idefics2, kosmos2, fuyu, blip, blip_2, instructblip, instructblipvideo, paligemma
* fix-copies
* [run-slow] llava, llava_next, video_llava, vipllava, llava_next_video, idefics, idefics2, kosmos2, fuyu, blip, blip_2, instructblip, instructblipvideo
* blip-2 needs to init vision from config
* when was this removed O_o
* minor fix
* tests
* this way?
* tests
* model-agnostic code
* codestyle
* add tests for idefics
* modify general test for VLMs
* no generation test for vlm yet!
* no generation test here also
* warn in ViT-SDPA if output attn
* add more tests
* user can pass a dict as attn_implementation (see the sketch after this group)
* repo consistency
* update
* musicgen
* no prints
* forgot speech enc-dec and clip
* how many composite models do we have?
* musicgen melody is the same as musicgen
* +siglip
* fix tests + add some more
* remove idefics custom overridden code
* make idefics2 automappable
* nits
* skip tests
* doctests
* Update src/transformers/models/idefics2/configuration_idefics2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/clip/test_modeling_clip.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/idefics2/test_modeling_idefics2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/idefics2/test_modeling_idefics2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/configuration_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* major update, no need for automap
* clean up
* add FA2 test
* more tests
* style
* skip tests
* why did these start failing now?
* no attributes for FA2 needed
* one tiny test
* address comment about FA2 false warning
* style
* add new models and resolve conflicts
* fix copies
* let it be this way for now, come back tomorrow to review
* some more fixes
* update
* more updates
* update
* fix copies
* style and tests
* another big update
* fix tests
* fix tests
* update
* another update
* fix tests
* fix copies
* fix tests
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
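A rough sketch of the dict-style attn_implementation mentioned above for composite models; the checkpoint and the sub-config key names ("text_config", "vision_config") are illustrative and depend on the model and transformers version:

```python
from transformers import LlavaForConditionalGeneration

# Hypothetical usage: a different attention backend per sub-model,
# keyed by the sub-config names of the composite config.
model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    attn_implementation={"text_config": "sdpa", "vision_config": "eager"},
)
```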
This commit fixes the following errors:
* Fix "expected all tensors to be on the same device" error
* Fix "can't convert device type tensor to numpy"
According to the PyTorch documentation, torch.Tensor.numpy(force=False)
performs the conversion only if the tensor is on the CPU (plus a few other
restrictions), which is not the case here. For our case we need force=True
since we just need the data and don't care about tensor coherency (see the
sketch below).
Fixes: #33517
See: https://pytorch.org/docs/2.4/generated/torch.Tensor.numpy.html
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
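A minimal sketch of the behaviour described above (the device is illustrative; force=True detaches and copies to the CPU first, so the conversion always succeeds):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.ones(3, device=device, requires_grad=True)

# t.numpy()              # fails for non-CPU tensors and for tensors that require grad
arr = t.numpy(force=True)  # roughly t.detach().cpu().resolve_conj().resolve_neg().numpy()
print(arr)
```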
* Port core files + ESM (because ESM code is odd)
* Search-replace in modelling code
* Fix up transfo_xl as well
* Fix other core files + tests (still need to add correct import to tests)
* Fix cookiecutter
* make fixup, fix imports in some more core files
* Auto-add imports to tests
* Cleanup, add imports to sagemaker tests
* Use correct exception for importing tf_keras
* Fixes in modeling_tf_utils
* make fixup
* Correct version parsing code
* Ensure the pipeline tests correctly revert to float32 after each test
* Ensure the pipeline tests correctly revert to float32 after each test
* More tf.keras -> keras
* Add dtype cast
* Better imports of tf_keras (see the sketch below)
* Add a cast for tf.assign, just in case
* Fix callback imports
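For reference, a sketch of the guarded import pattern these tf_keras commits converge on (the exact exception handling, version check, and error message are illustrative):

```python
try:
    import tf_keras as keras  # preferred: the standalone Keras 2 package
except (ModuleNotFoundError, ImportError):
    import keras

    # Keras 3 does not support the TF code paths; ask for the compat package instead.
    if int(keras.__version__.split(".")[0]) > 2:
        raise ValueError(
            "Keras 3 is installed but the TF code paths need Keras 2; please `pip install tf-keras`."
        )
```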
* Add a convenience method for building in your own name scope
* Second attempt at auto layer building
* Revert "Second attempt at auto layer building"
This reverts commit e03a3aaecf9ec41a805582b83cbdfe3290a631be.
* Attempt #3
* Revert "Attempt #3"
This reverts commit b9df7a0857560d29b5abbed6127d9e9eca77cf47.
* Add missing attributes that we're going to need later
* Add some attributes we're going to need later
* A fourth attempt! Feel the power flow through you!
* Revert "A fourth attempt! Feel the power flow through you!"
This reverts commit 6bf4aaf3875d6f28485f50187617a4c616c8aff7.
* Add more values we'll need later
* TF refactor that we'll need later
* Revert "TF refactor that we'll need later"
This reverts commit ca07202fb5b7b7436b893baa8d688b4f348ea7b9.
* Revert "Revert "TF refactor that we'll need later""
This reverts commit 1beb0f39f293ed9c27594575e1c849aadeb15c13.
* make fixup
* Attempt five!
* Revert "Attempt five!"
This reverts commit 3302207958dfd0374b0447a51c06eea51a506044.
* Attempt six - this time don't add empty methods
* Revert "Attempt six - this time don't add empty methods"
This reverts commit 67d60129be75416b6beb8f47c7d38d77b18d79bb.
* Attempt seven - better base model class detection!
* Revert "Attempt seven - better base model class detection!"
This reverts commit 5f14845e92ea0e87c598da933bfbfee10f553bc9.
* Another attribute we'll need later
* Try again with the missing attribute!
* Revert "Try again with the missing attribute!"
This reverts commit 760c6f30c5dffb3e04b0e73c34a77d1882a0fef7.
* This is the attempt that will pierce the heavens!
* Revert "This is the attempt that will pierce the heavens!"
This reverts commit c868bb657de057aca7a5260350a3f831fc4dfee6.
* Attempt seven - snag list is steadily decreasing
* Revert "Attempt seven - snag list is steadily decreasing"
This reverts commit 46fbd975deda64429bfb3e5fac4fc0370c00d316.
* Attempt eight - will an empty snag list do it?
* Revert "Attempt eight - will an empty snag list do it?"
This reverts commit 7c8a3c2b083253649569e9877e02054ae5cec67b.
* Fixes to Hubert issues that cause problems later
* Trying again with Conv1D/SeparableConv fixes
* Revert "Trying again with Conv1D/SeparableConv fixes"
This reverts commit 55092bca952bc0f750aa1ffe246a640bf1e2036e.
* Apply the build shape fixes to Wav2Vec2 as well
* One more attempt!
* Revert "One more attempt!"
This reverts commit 5ac3e4cb01b9458cc93312873725f9444ae7261c.
* Another attempt!
* Revert "Another attempt!"
This reverts commit ea16d890e019d7de8792a3b8e72f3b1c02adae50.
* Let's see how many failures we get without the internal build method
* Fix OpenAI
* Fix MobileBERT
* (Mostly) fix GroupViT
* Fix BLIP
* One more BLIP fix
* One more BLIP fix!
* Fix Regnet
* Finally fully fix GroupViT
* Fix Data2Vec and add the new AdaptivePool
* Fix Segformer
* Fix Albert
* Fix Deberta/DebertaV2
* Fix XLM
* Actually fix XLM
* Fix Flaubert
* Fix lxmert
* Fix Resnet
* Fix ConvBERT
* Fix ESM
* Fix Convnext / ConvnextV2
* Fix SAM
* Fix Efficientformer
* Fix LayoutLMv3
* Fix speech_to_text
* Fix mpnet and mobilevit
* Fix Swin
* Fix CTRL
* Fix CVT
* Fix DPR
* Fix Wav2Vec2
* Fix T5
* Fix Hubert
* Fix GPT2
* Fix Whisper
* Fix DeiT
* Fix the encoder-decoder / dual-encoder classes
* make fix-copies
* build in name scope
* Fix summarization test
* Fix tied weight names for BART + Blenderbot
* Fix tied weight name building
* Fix to TFESM weight building
* Update TF SAM
* Expand all the shapes out into Big Boy Shapes
* try to stylify using ruff
* might need to remove these changes?
* use ruff format and ruff check
* use isinstance instead of type comparison
* use # fmt: skip
* use # fmt: skip
* nits
* some styling changes
* update ci job
* nits isinstance
* more files update
* nits
* more nits
* small nits
* check and format
* revert wrong changes
* actually use formatter instead of checker
* nits
* well docbuilder is overwriting this commit
* revert notebook changes
* try to nuke docbuilder
* style
* fix feature extraction test
* remove `indent-width = 4`
* fixup
* more nits
* update the ruff version that we use
* style
* nuke docbuilder styling
* leave the print for detected changes
* nits
* Remove file I/O
Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
* style
* nits
* revert notebook changes
* Add # fmt skip when possible
* Add # fmt skip when possible
* Fix
* More ` # fmt: skip` usage
* More ` # fmt: skip` usage
* More ` # fmt: skip` usage
* NIts
* more fixes
* fix tapas
* Another way to skip
* Recommended way
* Fix two more files
* Remove asynch
---------
Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
* Safetensors serialization by default
* First pass on the tests
* Second pass on the tests
* Third pass on the tests
* Fix TF weight loading from TF-format safetensors
* Specific encoder-decoder fixes for weight crossloading
* Add VisionEncoderDecoder fixes for TF too
* Change filename test for pt-to-tf
* One missing fix for TFVisionEncoderDecoder
* Fix the other crossload test
* Support for flax + updated tests
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Sanchit's comments
* Sanchit's comments 2
* Nico's comments
* Fix tests
* cleanup
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add a default decoder_attention_mask for EncoderDecoderModel during training
Since we are already creating the default decoder_input_ids from the labels, we should also
create a default decoder_attention_mask to go with it.
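A rough sketch of the idea under discussion, assuming pad_token_id marks padding in the shifted decoder_input_ids (the helper and the values are illustrative, not the exact implementation):

```python
import torch

def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    # hypothetical helper mirroring how decoder_input_ids are derived from labels
    input_ids = labels.new_full(labels.shape, pad_token_id)
    input_ids[:, 1:] = labels[:, :-1].clone()
    input_ids[:, 0] = decoder_start_token_id
    input_ids.masked_fill_(input_ids == -100, pad_token_id)
    return input_ids

labels = torch.tensor([[5, 6, 7, -100], [8, 9, -100, -100]])
decoder_input_ids = shift_tokens_right(labels, pad_token_id=0, decoder_start_token_id=2)

# Default decoder_attention_mask: attend to every non-pad position
decoder_attention_mask = (decoder_input_ids != 0).long()
```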
* Fix test constant that relied on manual_seed()
The test was changed to use a decoder_attention_mask that ignores padding instead (which is
the default one created by BERT when attention_mask is None).
* Create the decoder_attention_mask using decoder_input_ids instead of labels
* Fix formatting in test
* A fun new PR where I break the entire codebase again
* A fun new PR where I break the entire codebase again
* Handle cross-attention
* Move calls to model(model.dummy_inputs) to the new build() method
* Seeing what fails with the build context thing
* make fix-copies
* Let's see what fails with new build methods
* Fix the pytorch crossload build calls
* Fix the overridden build methods in vision_text_dual_encoder
* Make sure all our build methods set self.built or call super().build(), which also sets it (see the sketch after this group)
* make fix-copies
* Remove finished TODO
* Tentatively remove unneeded (?) line
* Transpose b in deberta correctly and remove unused threading local
* Get rid of build_with_dummies and all it stands for
* Rollback some changes to TF-PT crossloading
* Correctly call super().build()
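A minimal sketch of the build-method contract referenced above (layer names and shapes are illustrative): either call super().build(), which marks the layer as built, or set self.built yourself before building sublayers.

```python
import tensorflow as tf

class MyTFLayer(tf.keras.layers.Layer):
    def __init__(self, hidden_size, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size
        self.dense = tf.keras.layers.Dense(hidden_size, name="dense")

    def build(self, input_shape=None):
        if self.built:
            return
        self.built = True  # or: super().build(input_shape), which also sets it
        with tf.name_scope(self.dense.name):
            self.dense.build([None, None, self.hidden_size])
```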
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Don't forget the imports (see the sketch after this group)
* Add the imports to tests too
* make fixup
* Refactor tests that depended on get_type_hints
* Better test refactor
* Fix an old hidden bug in the test_keras_fit input creation code
* Fix for the Deit tests
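A small sketch of the convention change, assuming the imports mentioned above include `from __future__ import annotations`: with that import, annotations are stored as strings and never evaluated, so `X | None` is usable before Python 3.10.

```python
from __future__ import annotations  # annotations become strings, so `| None` works on older Pythons

import tensorflow as tf

def call(hidden_states: tf.Tensor, attention_mask: tf.Tensor | None = None) -> tf.Tensor:
    # previously annotated as Optional[tf.Tensor]
    if attention_mask is not None:
        hidden_states = hidden_states * attention_mask
    return hidden_states
```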
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies
* move generation_*.py src files into generation/*.py
* populate generation.__init__ with lazy loading
* move imports and references from generation.xxx.object to generation.object
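A sketch of the lazy-loading `__init__` pattern described above, following the `_LazyModule` convention used elsewhere in the repo (the import structure shown is an illustrative excerpt, not the full one):

```python
# generation/__init__.py (illustrative excerpt)
from typing import TYPE_CHECKING

from ..utils import _LazyModule

_import_structure = {
    "beam_search": ["BeamSearchScorer"],
    "utils": ["GenerationMixin"],
}

if TYPE_CHECKING:
    from .beam_search import BeamSearchScorer
    from .utils import GenerationMixin
else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
```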
* First step of PT->TF for composite models
* Update the tests
* For VisionEncoderDecoderModel
* Fix
* Fix
* Add comment
* Fix
* clean up import
* Save memory
* For (TF)EncoderDecoderModel
* For (TF)EncoderDecoderModel
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>