Arthur
673440d073
update ruff version ( #30932 )
...
* update ruff version
* fix research projects
* Empty
* Fix errors
---------
Co-authored-by: Lysandre <lysandre@huggingface.co>
2024-05-22 06:40:15 +02:00
Matt
71d47f0ad4
More TF fixes ( #28081 )
...
* More build_in_name_scope()
* Make sure we set the save spec now we don't do it with dummies anymore
* make fixup
2023-12-18 15:26:03 +00:00
Matt
050e0b44f6
Proper build() methods for TF ( #27794 )
...
* Add a convenience method for building in your own name scope
* Second attempt at auto layer building
* Revert "Second attempt at auto layer building"
This reverts commit e03a3aaecf9ec41a805582b83cbdfe3290a631be.
* Attempt #3
* Revert "Attempt #3 "
This reverts commit b9df7a0857560d29b5abbed6127d9e9eca77cf47.
* Add missing attributes that we're going to need later
* Add some attributes we're going to need later
* A fourth attempt! Feel the power flow through you!
* Revert "A fourth attempt! Feel the power flow through you!"
This reverts commit 6bf4aaf3875d6f28485f50187617a4c616c8aff7.
* Add more values we'll need later
* TF refactor that we'll need later
* Revert "TF refactor that we'll need later"
This reverts commit ca07202fb5b7b7436b893baa8d688b4f348ea7b9.
* Revert "Revert "TF refactor that we'll need later""
This reverts commit 1beb0f39f293ed9c27594575e1c849aadeb15c13.
* make fixup
* Attempt five!
* Revert "Attempt five!"
This reverts commit 3302207958dfd0374b0447a51c06eea51a506044.
* Attempt six - this time don't add empty methods
* Revert "Attempt six - this time don't add empty methods"
This reverts commit 67d60129be75416b6beb8f47c7d38d77b18d79bb.
* Attempt seven - better base model class detection!
* Revert "Attempt seven - better base model class detection!"
This reverts commit 5f14845e92ea0e87c598da933bfbfee10f553bc9.
* Another attribute we'll need later
* Try again with the missing attribute!
* Revert "Try again with the missing attribute!"
This reverts commit 760c6f30c5dffb3e04b0e73c34a77d1882a0fef7.
* This is the attempt that will pierce the heavens!
* Revert "This is the attempt that will pierce the heavens!"
This reverts commit c868bb657de057aca7a5260350a3f831fc4dfee6.
* Attempt seven - snag list is steadily decreasing
* Revert "Attempt seven - snag list is steadily decreasing"
This reverts commit 46fbd975deda64429bfb3e5fac4fc0370c00d316.
* Attempt eight - will an empty snag list do it?
* Revert "Attempt eight - will an empty snag list do it?"
This reverts commit 7c8a3c2b083253649569e9877e02054ae5cec67b.
* Fixes to Hubert issues that cause problems later
* Trying again with Conv1D/SeparableConv fixes
* Revert "Trying again with Conv1D/SeparableConv fixes"
This reverts commit 55092bca952bc0f750aa1ffe246a640bf1e2036e.
* Apply the build shape fixes to Wav2Vec2 as well
* One more attempt!
* Revert "One more attempt!"
This reverts commit 5ac3e4cb01b9458cc93312873725f9444ae7261c.
* Another attempt!
* Revert "Another attempt!"
This reverts commit ea16d890e019d7de8792a3b8e72f3b1c02adae50.
* Let's see how many failures we get without the internal build method
* Fix OpenAI
* Fix MobileBERT
* (Mostly) fix GroupVIT
* Fix BLIP
* One more BLIP fix
* One more BLIP fix!
* Fix Regnet
* Finally fully fix GroupViT
* Fix Data2Vec and add the new AdaptivePool
* Fix Segformer
* Fix Albert
* Fix Deberta/DebertaV2
* Fix XLM
* Actually fix XLM
* Fix Flaubert
* Fix lxmert
* Fix Resnet
* Fix ConvBERT
* Fix ESM
* Fix Convnext / ConvnextV2
* Fix SAM
* Fix Efficientformer
* Fix LayoutLMv3
* Fix speech_to_text
* Fix mpnet and mobilevit
* Fix Swin
* Fix CTRL
* Fix CVT
* Fix DPR
* Fix Wav2Vec2
* Fix T5
* Fix Hubert
* Fix GPT2
* Fix Whisper
* Fix DeiT
* Fix the encoder-decoder / dual-encoder classes
* make fix-copies
* build in name scope
* Fix summarization test
* Fix tied weight names for BART + Blenderbot
* Fix tied weight name building
* Fix to TFESM weight building
* Update TF SAM
* Expand all the shapes out into Big Boy Shapes
2023-12-14 15:17:30 +00:00
Thien Tran
1e3c9ddacc
Make Whisper Encoder's sinusoidal PE non-trainable by default ( #26032 )
...
* set encoder's PE as non-trainable
* freeze flax
* init sinusoids
* add test for non-trainable embed positions
* simplify TF encoder embed_pos
* revert tf
* clean up
* add sinusoidal init for jax
* make consistent sinusoidal function
* fix dtype
* add default dtype
* use numpy for sinusoids. fix jax
* add sinusoid init for TF
* fix
* use custom embedding
* use specialized init for each impl
* fix sinusoids init. add test for pytorch
* fix TF dtype
* simplify sinusoid init for flax and tf
* add tests for TF
* change default dtype to float32
* add sinusoid test for flax
* Update src/transformers/models/whisper/modeling_flax_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* move sinusoidal init to _init_weights
---------
Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2023-10-11 09:08:54 +01:00
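The commits above converge on one numpy sinusoid function shared across the PyTorch, TF, and Flax Whisper encoders. A minimal sketch of such an initializer, assuming Whisper's concatenated sin/cos layout (function name and exact layout here are illustrative, not the repo's code):

```python
import numpy as np

def sinusoidal_embeddings(num_positions: int, embedding_dim: int) -> np.ndarray:
    """Fixed (non-trainable) sinusoidal position table, sin and cos halves
    concatenated along the channel axis."""
    half_dim = embedding_dim // 2
    # Geometric progression of inverse frequencies, one per channel pair.
    log_timescale = np.log(10000) / (half_dim - 1)
    inv_freq = np.exp(-log_timescale * np.arange(half_dim))
    # Outer product: position index times frequency.
    angles = np.arange(num_positions)[:, None] * inv_freq[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
```

Computing the table in numpy (with a float32 cast at the framework boundary) is one way to keep all three implementations bit-for-bit consistent.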
Yih-Dar
d2871b2975
Skip test_beam_search_xla_generate_simple for T5 ( #25566 )
...
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-17 15:30:46 +02:00
Yih-Dar
2898fd3968
Fix some TFWhisperModelIntegrationTests ( #24428 )
...
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* fix
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-06-23 14:27:49 +02:00
Matt
4a55e47877
Move TF building to an actual build() method ( #23760 )
...
* A fun new PR where I break the entire codebase again
* A fun new PR where I break the entire codebase again
* Handle cross-attention
* Move calls to model(model.dummy_inputs) to the new build() method
* Seeing what fails with the build context thing
* make fix-copies
* Let's see what fails with new build methods
* Fix the pytorch crossload build calls
* Fix the overridden build methods in vision_text_dual_encoder
* Make sure all our build methods set self.built or call super().build(), which also sets it
* make fix-copies
* Remove finished TODO
* Tentatively remove unneeded (?) line
* Transpose b in deberta correctly and remove unused threading local
* Get rid of build_with_dummies and all it stands for
* Rollback some changes to TF-PT crossloading
* Correctly call super().build()
2023-06-06 18:30:51 +01:00
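The pattern these build() PRs move toward is Keras-style deferred building: weights are created the first time the layer sees an input shape, and every build() must set `self.built` (or call `super().build()`, which sets it). A framework-free toy sketch of that contract, not the actual transformers code:

```python
class LazyLayer:
    """Minimal sketch of deferred building: weights are created on the
    first call, once the input shape is known."""

    def __init__(self, units: int):
        self.units = units
        self.built = False
        self.kernel = None

    def build(self, input_shape):
        # Create weights now that the last input dimension is known.
        in_dim = input_shape[-1]
        self.kernel = [[0.0] * self.units for _ in range(in_dim)]
        self.built = True  # every build() must set this, or call super().build()

    def __call__(self, inputs):
        if not self.built:
            self.build((len(inputs), len(inputs[0])))
        # Plain matmul so the sketch stays dependency-free.
        return [
            [sum(x * k for x, k in zip(row, col)) for col in zip(*self.kernel)]
            for row in inputs
        ]
```

This is what replaces the old "call the model on dummy inputs" trick: no forward pass is needed just to materialize weights.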
Matt
f8b2574416
Better TF docstring types ( #23477 )
...
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Rework TF type hints to use | None instead of Optional[] for tf.Tensor
* Don't forget the imports
* Add the imports to tests too
* make fixup
* Refactor tests that depended on get_type_hints
* Better test refactor
* Fix an old hidden bug in the test_keras_fit input creation code
* Fix for the Deit tests
2023-05-24 13:52:52 +01:00
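The hint style this PR adopts is PEP 604 unions (`X | None`) with deferred annotation evaluation, in place of `Optional[X]`. A toy illustration of the pattern (the function itself is made up, not a real model signature):

```python
from __future__ import annotations  # defer evaluation so `X | None` parses on older Pythons

def encode(input_ids: list[int], attention_mask: list[int] | None = None) -> list[int]:
    """Sketch of the `X | None` annotation style replacing Optional[X]."""
    if attention_mask is None:
        # Default mask: attend to every position.
        attention_mask = [1] * len(input_ids)
    return [tok * m for tok, m in zip(input_ids, attention_mask)]
```

Because the `__future__` import turns annotations into strings, `tf.Tensor | None` never has to be evaluated at runtime, which is why the docstring types can use it freely.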
amyeroberts
f82ee109e6
Temporary tolerance fix for flaky whisper PT-TF equiv. test ( #23257 )
...
* Temp tol fix for flaky whisper test
* Add equivalent update to TF tests
2023-05-11 10:04:07 +01:00
Joao Gante
78cda46f17
Generate: Add assisted generation ( #22211 )
...
* working mvp
* remove breakpoint
* fix commit
* standardize outputs
* tmp commit
* tests almost ready
* tmp commit
* skip a few models
* Add streaming; Docs and examples
* document limitations
* PR commits
* Amy PR comments
2023-04-18 17:36:56 +01:00
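The core idea of assisted generation is a draft-and-verify loop: a cheap assistant model speculates several tokens, the main model checks them, and the longest agreeing prefix is kept. A toy sketch under hypothetical interfaces (`main_next`/`draft_next` are stand-ins; the real API works on full model forward passes and logits):

```python
def assisted_generate(main_next, draft_next, prompt, max_new_tokens=8, num_draft=3):
    """Toy draft-and-verify loop. main_next/draft_next map a token sequence
    to the next greedy token (hypothetical callables, not the HF API)."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new_tokens:
        # 1) Assistant speculates `num_draft` tokens autoregressively.
        draft = []
        for _ in range(num_draft):
            draft.append(draft_next(seq + draft))
        # 2) Main model checks each speculated position
        #    (a single parallel forward pass in the real implementation).
        accepted = []
        for i in range(len(draft)):
            expected = main_next(seq + draft[:i])
            if expected != draft[i]:
                accepted.append(expected)  # keep the main model's own token
                break
            accepted.append(draft[i])
        seq.extend(accepted)
    return seq[: len(prompt) + max_new_tokens]
```

When the draft agrees, several tokens are committed per main-model pass; when it disagrees, at least one correct token is still produced, so the output matches plain greedy decoding.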
Yih-Dar
871c31a6f1
🔥 Rework pipeline testing by removing PipelineTestCaseMeta 🚀 ( #21516 )
...
* Add PipelineTesterMixin
* remove class PipelineTestCaseMeta
* move validate_test_components
* Add for ViT
* Add to SPECIAL_MODULE_TO_TEST_MAP
* style and quality
* Add feature-extraction
* update
* raise instead of skip
* add tiny_model_summary.json
* more explicit
* skip tasks not in mapping
* add availability check
* Add Copyright
* A way to disable irrelevant tests
* update with main
* remove disable_irrelevant_tests
* skip tests
* better skip message
* better skip message
* Add all pipeline task tests
* revert
* Import PipelineTesterMixin
* subclass test classes with PipelineTesterMixin
* Add pipeline_model_mapping
* Fix import after adding pipeline_model_mapping
* Fix style and quality after adding pipeline_model_mapping
* Fix one more import after adding pipeline_model_mapping
* Fix style and quality after adding pipeline_model_mapping
* Fix test issues
* Fix import requirements
* Fix mapping for MobileViTModelTest
* Update
* Better skip message
* pipeline_model_mapping could not be None
* Remove some PipelineTesterMixin
* Fix typo
* revert tests_fetcher.py
* update
* rename
* revert
* Remove PipelineTestCaseMeta from ZeroShotAudioClassificationPipelineTests
* style and quality
* test fetcher for all pipeline/model tests
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-02-28 19:40:57 +01:00
Jonatan Kłosko
deafc24388
Add WhisperTokenizerFast ( #21222 )
...
* Add WhisperTokenizerFast
* Fixup
* Up
* Up
* Improve tests
* Update src/transformers/models/whisper/tokenization_whisper_fast.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Keep stride in whisper pipeline test
* Remove unknown token special case
* Reduce vocabulary size in tests
* Fix vocab size assertion
* Sync copied changes from WhisperTokenizer
* Skip pipeline tests
* Update assertion
* Remove Whisper tokenizer dependency on sentencepiece
* Format
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2023-02-21 06:58:54 +01:00
Yih-Dar
cbecf121cd
Fix env. variable type issue in testing ( #21609 )
...
* fix env issue
* fix env issue
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-02-13 20:53:26 +01:00
Sylvain Gugger
6f79d26442
Update quality tooling for formatting ( #21480 )
...
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies
2023-02-06 18:10:56 -05:00
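A switch like this (black 23.1, flake8 replaced by ruff, isort configured) typically lives in `pyproject.toml`. An illustrative fragment, with hypothetical rule selections rather than the repo's exact settings:

```toml
[tool.black]
line-length = 119
target-version = ["py37"]

[tool.ruff]
line-length = 119
# E/F cover most of what flake8 checked; "I" adds isort-style import sorting.
select = ["E", "F", "I"]
```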
Arthur
6f3faf3863
[WHISPER] Small patch ( #21307 )
...
* add small patch
* update tests; forced decoder ids do not take priority over the generation config
* fix two new tests
2023-01-25 22:49:23 +01:00
Arthur
761b3fad92
Expected output for the test changed ( #20493 )
2022-11-30 15:07:28 +01:00
Arthur
11b2e45ccc
[WHISPER] Update modeling tests ( #20162 )
...
* Update modeling tests
* update tokenization test
* typo
* nit
* fix expected attention outputs
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update tests from review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* remove problematic kwargs passed to the padding function
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-11-15 11:04:58 +01:00
Yih-Dar
3436842102
Run some TF Whisper tests in subprocesses to avoid GPU OOM ( #19772 )
...
* Run some TF Whisper tests in subprocesses to avoid GPU OOM
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-21 21:59:18 +02:00
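Running a test in its own process works because the child gets a fresh CUDA context, and all of its GPU memory is released when it exits. A simplified sketch of that isolation helper (not the repo's actual `run_test_in_subprocess` utilities):

```python
import multiprocessing

def _target(fn, queue):
    # The child owns its CUDA context; everything is freed when it exits.
    try:
        queue.put(("ok", fn()))
    except Exception as exc:  # surface failures to the parent
        queue.put(("error", repr(exc)))

def run_in_subprocess(fn, timeout=60):
    """Run `fn` in a fresh process and return its result, so accelerator
    memory cannot leak into later tests (simplified sketch)."""
    ctx = multiprocessing.get_context("fork")  # "fork" keeps locally defined fns usable
    queue = ctx.Queue()
    proc = ctx.Process(target=_target, args=(fn, queue))
    proc.start()
    status, payload = queue.get(timeout=timeout)
    proc.join()
    if status == "error":
        raise RuntimeError(payload)
    return payload
```

The trade-off is per-test process startup cost, which is why only the OOM-prone integration tests get this treatment.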
Arthur
d51ca32404
fix tests ( #19670 )
2022-10-18 06:45:48 +02:00
amyeroberts
e3f028f3af
Add TF whisper ( #19378 )
...
* simplify loop
* add feature extractor
* add model
* start conversion
* add dropout
* initial commit of test files
* conversion for all models
* update processor for correct padding
* update feature extraction
* update integration test logits match
* fmt: off for the logits
* on the fly mel bank
* small nit
* update test
* update tokenizer
* nit feature extraction
* update
* update tokenizer test
* adds logit processor and update tokenizer to get suppress tokens
* style
* clean convert
* revert to original modeling tf utils
* Update
* update
* nit
* clean convert file
* update tests and nits
* quality
* slow generation test
* ffn_dim to allow customization
* update readme
* add to toctree
* start fixing integration tests
* update tests and code
* fix feature extractor
* fix config tests common
* update code to fix tests
* fix feature extractor
* nit feature extraction
* update test for new feature extractor
* style
* add abstract
* large logits with custom decoder input ids
* wrap around is_torch_available
* fix feature extractor
* correct logits for whisper small.en
* nit
* fix encoder_attention_mask
* some fixes
* remove unnecessary inputs
* nits
* add normalizer file
* update test tokenization
* fix attention mask not defined
* fix generate
* remove useless encoder attention mask
* update test modeling whisper
* update config to add second non-suppress tokens
* nits on feature extractor
* nit for test tokenizers
* update tests
* update tests
* update tokenization test
* fixup
* invalidated hf token. Clean convert openai to whisper
* fix logit tests
* fixup
* Add model to README
* Fix doc tests
* clean merge
* revert toc_tree changes
* remove useless LogitProcessor
* Update whisper .mdx
* update config file doc
* update configuration docstring
* update test tokenization
* update test tokenization
* update tokenization whisper
Added copied from where needed
* update feature extraction
* nit test name
* style
* quality
* remove get suppress tokens and update non_speech tokens global variables
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* clean modeling whisper and test
Removed the attention mask arguments that are deprecated
* fix large test
* Add multilingual audio test, and translate test
* style
* fix large multilingual test
* nits
* add copied from for attention layer
* remove attention masks in doc
* add english normalizer
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update tokenization test
* remove copied from in whisper attention : no bias in k_proj only
* wrap around dependencies in english normalizer
* style
* correct import generation logits
* for now, wrap feature extractor with torch
* remove torch dependencies for feature extraction and style
* Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixup
* nit
* update logits
* style
* nit
* nits and fix final tests
* add `is_more_itertools_available` to utils
* quality
* add begin suppress tokens, suppress tokens to generate args and config
* clean SuppressTokensLogitsProcessor in generation logits
* Nit naming
* add SuppressTokensAtBegin
* update tests, suppress tokens to None or correct values
* nit and style
* update RAG to fit test and generate_logit
* add copy pasted statement on english normalizer
* add arguments to config_common_kwargs
* Update src/transformers/generation_utils.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/generation_logits_process.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* revert changes based on reviews
* update doc and nits
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* more nits
* last nits
* update test configuration common
* add BART name in decoder attention mask documentation
* Update src/transformers/models/whisper/modeling_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* style
* nit
* nit
* add english.json file to git
* nits on documentation
* nit
* nits
* last styling
* add main toctree file
* remove sentence piece dependency
* clean init file
* fix tokenizer that has no dependencies on sentencepiece
* update whisper init file, nit
* remove english.json file
* add get decoder prompt id
* All weights loading
* Remove hanging pdb
* Fixup and tidy up
* Use same copied from as PT model
* Remove whitespace changes
* Remove torch references
* Tie embeddings
* Remove logits processor input to generate
* Update logit values
* revert changes and add forced logit processor
* nit
* clean normalizer
* remove protected
* Add logit processors and update generation code & tests
* Some tidy up
* Update docstring
* update
* update based on review
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update to reflect changes on the PT model branch
* Tidy up
* Remove extra whitespace
* Fix test - make input ids small enough we can append
* Include upstream changes on main
* PR comments - add batch tests, remove comments & defaults
* Fix model output imports
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation_tf_logits_process.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/models/whisper/test_modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update docstring example
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Remove changes to adjust_logits_during_generation function
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Tidy up imports that don't require TF
* Update tests - skip and no more skip
* Update tests/generation/test_generation_tf_logits_process.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Add training flags
* Add (skipped) XLA generation tests
* Add embedding correctness test
* Add constant ids for generation tests
* Make logits finding a bit tidier
* Remove unused args
* xla generation enabled
* Don't skip XLA tests anymore
* Fix tests - add position ids to expected signature and update rag generation
* Undo method reorder
* Remove added whitespace
* Remove copy-paste gradient checkpoint ref
* Remove
* Trigger CI - (issue with refs when pulling)
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: Joao Gante <joao@huggingface.co>
2022-10-10 14:48:17 +01:00