* chore: fix typos in the tests
* fix: format code
* chore: fix copy mismatch issue
* chore: restore previous wording
* chore: revert unexpected changes
* use torch.testing.assert_close instead to get more details about errors in CIs
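For context, a minimal sketch of the swap this commit describes (the tensors and tolerances below are illustrative, not the exact diff):

```python
import torch

expected = torch.tensor([1.0, 2.0, 3.0])
actual = torch.tensor([1.0, 2.0, 3.001])

# Before: a failed assertion only reports "False", with no detail.
# assert torch.allclose(actual, expected, atol=1e-2)

# After: on mismatch, assert_close raises with the greatest absolute and
# relative differences and the number of mismatched elements, which is
# much easier to debug from CI logs.
torch.testing.assert_close(actual, expected, rtol=0.0, atol=1e-2)
```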
* fix
* style
* test_all
* revert for IBert
* fixes and updates
* more image processing fixes
* more image processors
* fix mamba and co
* style
* less strict
* ok I won't be strict
* skip and be done
* up
* Pass datasets trust_remote_code
* Pass trust_remote_code in more tests
* Add trust_remote_dataset_code arg to some tests
* Revert "Temporarily pin datasets upper version to fix CI"
This reverts commit b7672826ca.
* Pass trust_remote_code in librispeech_asr_dummy docstrings
* Revert "Pin datasets<2.20.0 for examples"
This reverts commit 833fc17a3e.
* Pass trust_remote_code to all examples
* Revert "Add trust_remote_dataset_code arg to some tests" in research_projects
* Pass trust_remote_code to tests
* Pass trust_remote_code to docstrings
* Fix flax examples tests requirements
* Pass trust_remote_dataset_code arg to tests
* Replace trust_remote_dataset_code with trust_remote_code in one example
* Fix duplicate trust_remote_code
* Replace args.trust_remote_dataset_code with args.trust_remote_code
* Replace trust_remote_dataset_code with trust_remote_code in parser
* Replace trust_remote_dataset_code with trust_remote_code in dataclasses
* Replace trust_remote_dataset_code with trust_remote_code arg
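Taken together, these commits converge on forwarding a single trust_remote_code flag to datasets. A minimal sketch of the resulting call pattern (the dummy dataset name comes from the docstring commit above; treat the exact arguments as an assumption):

```python
from datasets import load_dataset

# datasets now requires explicit opt-in before executing a dataset's
# loading script, so tests, examples, and docstrings forward the flag
# instead of pinning the datasets version.
ds = load_dataset(
    "hf-internal-testing/librispeech_asr_dummy",
    "clean",
    split="validation",
    trust_remote_code=True,
)
```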
* Rename to test_model_common_attributes
The method name is misleading: it tests getting and setting embeddings, not attributes common to all models
* Explicitly skip
* add tests for batching support
* Update src/transformers/models/fastspeech2_conformer/modeling_fastspeech2_conformer.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/fastspeech2_conformer/modeling_fastspeech2_conformer.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* fixes and comments
* use cosine distance for conv models
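A sketch of the comparison this commit points at: for convolutional models, elementwise tolerances proved too brittle, so outputs are compared by cosine distance instead (the helper and threshold here are illustrative, not the exact test code):

```python
import torch

def assert_outputs_close(out_1: torch.Tensor, out_2: torch.Tensor, threshold: float = 1e-4):
    # Elementwise allclose is brittle for conv models, whose activations
    # can drift slightly in magnitude across devices; cosine distance
    # instead checks that the two outputs point in the same direction.
    distance = 1.0 - torch.nn.functional.cosine_similarity(
        out_1.flatten(), out_2.flatten(), dim=0
    )
    assert distance.item() < threshold, f"cosine distance too large: {distance.item()}"
```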
* skip mra model testing
* Update tests/models/vilt/test_modeling_vilt.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* finalize and make style
* check model type by input names
* Update tests/models/vilt/test_modeling_vilt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixed batch size for all testers
* Revert "fixed batch size for all testers"
This reverts commit 525f3a0a05.
* add batch_size for all testers
* dict from model output
* do not skip layoutlm
* bring back some code from git revert
* Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* clean-up
* where did minus go in tolerance
* make whisper happy
* deal with consequences of losing minus
* maskformer needs its own test for happiness
* fix more models
* tag flaky CV models per Amy's approval
* make codestyle
---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* initial commit
* Add initial testing files and modify __init__ files to add UnivNet imports.
* Fix some bugs
* Add checkpoint conversion script and add references to transformers pre-trained model.
* Add UnivNet entries for auto.
* Add initial docs for UnivNet.
* Handle input and output shapes in UnivNetGan.forward and add initial docstrings.
* Write tests and make them pass.
* Write docs.
* Add UnivNet doc to _toctree.yml and improve docs.
* fix typo
* make fixup
* make fix-copies
* Add upsample_rates parameter to config and improve config documentation.
* make fixup
* make fix-copies
* Remove unused upsample_rates config parameter.
* apply suggestions from review
* make style
* Verify and add reason for skipped tests inherited from ModelTesterMixin.
* Add initial UnivNetGan integration tests
* make style
* Remove noise_length input from UnivNetGan and improve integration tests.
* Fix bug and make style
* Make UnivNet integration tests pass
* Add initial code for UnivNetFeatureExtractor.
* make style
* Add initial tests for UnivNetFeatureExtractor.
* make style
* Properly initialize weights for UnivNetGan
* Get feature extractor fast tests passing
* make style
* Get feature extractor integration tests passing
* Get UnivNet integration tests passing
* make style
* Add UnivNetGan usage example
* make style and use feature extractor from hub in integration tests
* Update tips in docs
* apply suggestions from review
* make style
* Calculate padding directly instead of using get_padding methods.
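For reference, the "same"-padding arithmetic that a get_padding helper typically wraps, and that this commit inlines, looks like the following (a sketch under the assumption of stride-1, odd-kernel 1D convolutions):

```python
def same_padding(kernel_size: int, dilation: int = 1) -> int:
    # For a stride-1 dilated 1D convolution with an odd kernel size,
    # this per-side padding keeps the output length equal to the input.
    return (kernel_size * dilation - dilation) // 2
```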
* Update UnivNetFeatureExtractor.to_dict to be UnivNet-specific.
* Update feature extractor to support using model(**inputs) and add the ability to generate noise and pad the end of the spectrogram in __call__.
* Perform padding before generating noise to ensure the shapes are correct.
* Rename UnivNetGan.forward's noise_waveform argument to noise_sequence.
* make style
* Add tests to test generating noise and padding the end for UnivNetFeatureExtractor.__call__.
* Add tests for checking batched vs unbatched inputs for UnivNet feature extractor and model.
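A rough sketch of the batched-vs-unbatched consistency check described here; the helper is hypothetical, and the .waveforms attribute and feature extractor call are assumptions based on the surrounding commits:

```python
import torch

def check_batched_vs_unbatched(model, feature_extractor, audio_samples):
    # Run each input on its own...
    single_outputs = [
        model(**feature_extractor(a, return_tensors="pt")).waveforms[0]
        for a in audio_samples
    ]
    # ...then all at once (the extractor pads to a common length)...
    batch = feature_extractor(audio_samples, return_tensors="pt")
    batched_outputs = model(**batch).waveforms
    # ...and require each batch row to match its unbatched counterpart up
    # to the unpadded length. Tolerances are loose because padding can
    # introduce small edge effects in the convolutions.
    for row, single in zip(batched_outputs, single_outputs):
        torch.testing.assert_close(row[: single.shape[-1]], single, rtol=1e-3, atol=1e-3)
```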
* Add expected mean and stddev checks to the integration tests and make them pass.
* make style
* Make it possible to use model(**inputs), where inputs is the output of the feature extractor.
* fix typo in UnivNetGanConfig example
* Calculate spectrogram_zero from other config values.
* apply suggestions from review
* make style
* Refactor UnivNet conversion script to use load_state_dict (following persimmon).
* Rename UnivNetFeatureExtractor to UnivNetGanFeatureExtractor.
* make style
* Switch to using torch.tensor and torch.testing.assert_close for testing expected values/slices.
* make style
* Use config in UnivNetGan modeling blocks.
* make style
* Rename the spectrogram argument of UnivNetGan.forward to input_features, following Whisper.
* make style
* Improve padding documentation.
* Add UnivNet usage example to the docs.
* apply suggestions from review
* Move dynamic_range_compression computation into the mel_spectrogram method of the feature extractor.
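For reference, dynamic range compression of a mel spectrogram is conventionally a clipped log, along these lines (a sketch; the clip value is an assumption, and numpy fits the later commit that removes the torch dependency from the feature extractor):

```python
import numpy as np

def dynamic_range_compression(mel_spectrogram: np.ndarray, clip_val: float = 1e-5) -> np.ndarray:
    # Clip near-zero bins first so the log does not blow up to -inf,
    # then compress the dynamic range with a natural log.
    return np.log(np.clip(mel_spectrogram, a_min=clip_val, a_max=None))
```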
* Improve UnivNetGan.forward return docstring.
* Update table in docs/source/en/index.md.
* make fix-copies
* Rename UnivNet components to have pattern UnivNet*.
* make style
* make fix-copies
* Update docs
* make style
* Increase tolerance on flaky unbatched integration test.
* Remove torch.no_grad decorators from UnivNet integration tests to try to avoid Flax/TensorFlow test errors.
* Add padding_mask argument to UnivNetModel.forward and add batch_decode feature extractor method to remove padding.
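A hypothetical usage sketch of the padding_mask / batch_decode pairing this commit introduces; the argument and attribute names are assumptions inferred from the surrounding commits, not a confirmed API:

```python
def decode_without_padding(model, feature_extractor, inputs):
    # `inputs` comes from the feature extractor and includes
    # input_features, noise_sequence, and the new padding_mask.
    outputs = model(**inputs)
    # batch_decode trims the padded tail off each generated waveform
    # using the lengths the model reports (names assumed, not confirmed).
    return feature_extractor.batch_decode(
        outputs.waveforms, waveform_lengths=outputs.waveform_lengths
    )
```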
* Update documentation and clean up padding code.
* make style
* Remove torch dependency from UnivNetFeatureExtractor.
* make style
* Fix UnivNetModel usage example
* Clean up feature extractor code/docstrings.
* apply suggestions from review
* make style
* Add comments for tests skipped via ModelTesterMixin flags.
* Add comment for model parallel tests skipped via the test_model_parallel ModelTesterMixin flag.
* Add # Copied from statements to copied UnivNetFeatureExtractionTest tests.
* Simplify UnivNetFeatureExtractorTest.test_batch_decode.
* Add support for unbatched padding_masks in UnivNetModel.forward.
* Refactor unbatched padding_mask support.
* make style