* add AutoModelForTextToSpeech class
* add TTS pipeline and tessting
* add docstrings to text_to_speech pipeline
* fix torch dependency
* corrector 'processor is None' case in Pipeline
* correct repo id
* modify text-to-speech -> text-to-audio
* remove processor
* rename text_to_speech pipelines files to text_audio
* add textToWaveform and textToSpectrogram instead of textToAudio classes
* update TTS pipeline to the bare minimum
* update tests TTS pipeline
* make style and erase useless import torch in TTS pipeline tests
* modify how to check if generate or forward in TTS pipeline
* remove unnecessary extra new lines
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* refactor input_texts -> text_inputs
* correct docstrings of TTS.__call__
* correct the shape of generated waveform
* take care of Bark tokenizer special case
* correct run_pipeline_test TTS
* make style
* update TTS docstrings
* address Sylvain nit refactors
* make style
* refactor into one liners
* correct squeeze
* correct way to test if forward or generate
* Update output audio waveform shape
* make style
* correct import
* modify how the TTS pipeline test if a model can generate
* align shape output of TTS pipeline with consistent shape
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* add util for ram efficient loading of model when using fsdp
* make fix-copies
* fixes 😅
* docs
* making it further easier to use
* rename the function
* refactor to handle fsdp ram efficiency in `from_pretrained`
* fixes
* fixes
* fixes
* update
* fixes
* revert `load_pretrained_model_only_on_rank0`
* resolve `load_from_checkpoint`
* Inconsistency in PreTrainedModel.resize_token_embeddings
This PR addresses https://github.com/huggingface/transformers/issues/25241.
In previous implementation when ZeRO stage 3 was enbaled, resize_token_embeddings would create independent PyTorch weights on each device. Here we ensure that new embeddings are created with DeepSpeed init, and are properly partitioned accros devices.
* formatting with black
* adding the removed comments back in
---------
Co-authored-by: Sina Moeini <smoeini@amazon.com>
* fix EVERYTHING
* more fixes
* ⚗️⚗️ Tokenizer magic ⚗️⚗️
* wrong value but test passes for the TODO
* update
* updat
* safe protobuf import?
* style
* non gated repo
* update
* fixup
* Update src/transformers/models/llama/tokenization_llama.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llama/tokenization_llama.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/t5/test_tokenization_t5.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* nits
* fix t5 too
* use assert equal
* fix llama decoding
* nits on t5
* fixup
* only remove the prefix space, not other spaces
* more deconding tests and more todos
* fix CI as well
* fixup
* skip failing test on CI (its tf its ok)
* skip test_subword_regularization_tokenizer that is also crashing on the CI for TF
* update llama
* revert good fixes
* fixup
* empty
* explain why we need to encode with an additional token
* better warning?
* nits
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix
* revert cahnges and update resizing of embedding layer
* use wraning
* fixup
* more styling nits
* fix all tests that overload the embedding tests
* 👀👀 remove breakpoint
* remove useless overload + overload correctly where needed
* resize lm head with new vocab size
* reverse not necessary changes
* style
* fix CIs!
* fix last CI tests, adapt bark and Marian
* fixup
* Adds `TRANSFORMERS_TEST_DEVICE`
Mirrors the same API in the diffusers library. Useful in transformers
too.
* replace backend checking with trying `torch.device`
* Adds better error message for unknown test devices
* `make style`
* adds documentation showing `TRANSFORMERS_TEST_DEVICE` usage.
* [ASR Pipeline] Fix init
* refactor test
* change default kwarg setting
* only perform checks if we have to
* override init
* move pre/forward/post checks to sanitize
* Add copied from statements for image processors
* Move out rescale and normalize to base image processor
* Remove rescale and normalize from vit (post rebase)
* Update docstrings and tidy up
* PR comments
* Add input_data_format as preprocess argument
* Resolve tests and tidy up
* Remove num_channels argument
* Update doc strings -> default ints not in code formatting
* Make training args fully immutable
* Working tests, PyTorch
* In test_trainer
* during testing
* Use proper dataclass way
* Fix test
* Another one
* Fix tf
* Lingering slow
* Exception
* Clean