[WIP] add SpeechT5 model (#18922)
* make SpeechT5 model by copying Wav2Vec2 * add paper to docs * whoops added docs in wrong file * remove SpeechT5Tokenizer + put CTC back in the name * remove deprecated class * remove unused docstring * delete SpeechT5FeatureExtractor, use Wav2Vec2FeatureExtractor instead * remove classes we don't need right now * initial stab at speech encoder prenet * add more speech encoder prenet stuff * improve SpeechEncoderPrenet * add encoder (not finished yet) * add relative position bias to self-attention * add encoder CTC layers * fix formatting * add decoder from BART, doesn't work yet * make it work with generate loop * wrap the encoder into a speech encoder class * wrap the decoder in a text decoder class * changed my mind * changed my mind again ;-) * load decoder weights, make it work * add weights for text decoder postnet * add SpeechT5ForCTC model that uses only the encoder * clean up EncoderLayer and DecoderLayer * implement _init_weights in SpeechT5PreTrainedModel * cleanup config + Encoder and Decoder * add head + cross attention masks * improve doc comments * fixup * more cleanup * more fixup * TextDecoderPrenet works now, thanks Kendall * add CTC loss * add placeholders for other pre/postnets * add type annotation * fix freeze_feature_encoder * set padding tokens to 0 in decoder attention mask * encoder attention mask downsampling * remove features_pen calculation * disable the padding tokens thing again * fixup * more fixup * code review fixes * rename encoder/decoder wrapper classes * allow checkpoints to be loaded into SpeechT5Model * put encoder into wrapper for CTC model * clean up conversion script * add encoder for TTS model * add speech decoder prenet * add speech decoder post-net * attempt to reconstruct the generation loop * add speech generation loop * clean up generate_speech * small tweaks * fix forward pass * enable always dropout on speech decoder prenet * sort declaration * rename models * fixup * fix copies * more fixup * make consistency checker happy * add Seq2SeqSpectrogramOutput class * doc comments * quick note about loss and labels * add HiFi-GAN implementation (from Speech2Speech PR) * rename file * add vocoder to TTS model * improve vocoder * working on tokenizer * more better tokenizer * add CTC tokenizer * fix decode and batch_code in CTC tokenizer * fix processor * two processors and feature extractors * use SpeechT5WaveformFeatureExtractor instead of Wav2Vec2 * cleanup * more cleanup * even more fixup * notebooks * fix log-mel spectrograms * support reduction factor * fixup * shift spectrograms to right to create decoder inputs * return correct labels * add labels for stop token prediction * fix doc comments * fixup * remove SpeechT5ForPreTraining * more fixup * update copyright headers * add usage examples * add SpeechT5ProcessorForCTC * fixup * push unofficial checkpoints to hub * initial version of tokenizer unit tests * add slow test * fix failing tests * tests for CTC tokenizer * finish CTC tokenizer tests * processor tests * initial test for feature extractors * tests for spectrogram feature extractor * fixup * more fixup * add decorators * require speech for tests * modeling tests * more tests for ASR model * fix imports * add fake tests for the other models * fixup * remove jupyter notebooks * add missing SpeechT5Model tests * add missing tests for SpeechT5ForCTC * add missing tests for SpeechT5ForTextToSpeech * sort tests by name * fix Hi-Fi GAN tests * fixup * add speech-to-speech model * refactor duplicate speech generation code * add processor 
for SpeechToSpeech model * add usage example * add tests for speech-to-speech model * fixup * enable gradient checkpointing for SpeechT5FeatureEncoder * code review * push_to_hub now takes repo_id * improve doc comments for HiFi-GAN config * add missing test * add integration tests * make number of layers in speech decoder prenet configurable * rename variable * rename variables * add auto classes for TTS and S2S * REMOVE CTC!!! * S2S processor does not support save/load_pretrained * fixup * these models are now in an auto mapping * fix doc links * rename HiFiGAN to HifiGan, remove separate config file * REMOVE auto classes * there can be only one * fixup * replace assert * reformat * feature extractor can process input and target at same time * update checkpoint names * fix commit hash
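Two items in the list above ("shift spectrograms to right to create decoder inputs", "support reduction factor") describe how teacher-forced decoder inputs are built for the TTS/S2S paths. A minimal sketch of that idea, written from first principles rather than copied from this PR (the function name, argument names, and the all-zero start frame are assumptions):

```python
import torch

def shift_spectrograms_right(labels: torch.Tensor, reduction_factor: int = 2) -> torch.Tensor:
    """Build decoder inputs from target spectrograms of shape (batch, num_frames, num_mel_bins).

    With a reduction factor r the decoder operates on every r-th frame, shifted
    right by one position so that frame t is predicted from frames < t.
    """
    reduced = labels[:, reduction_factor - 1 :: reduction_factor, :]  # keep every r-th frame
    start_frame = torch.zeros_like(reduced[:, :1, :])                 # all-zero "start of sequence" frame
    return torch.cat([start_frame, reduced[:, :-1, :]], dim=1)        # right shift by one frame
```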
This commit is contained in:
parent fb13a7df95
commit e4bacf6614
@ -400,6 +400,7 @@ Current number of checkpoints:
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
|
||||
|
@ -393,6 +393,7 @@ Número actual de puntos de control:
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
|
||||
|
@ -365,6 +365,7 @@ conda install -c huggingface transformers
|
||||
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP से) साथ देने वाला पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स](https://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योव आर्टज़ी द्वारा।
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP से) साथ में पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स](https://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योआव आर्टज़ी द्वारा पोस्ट किया गया।
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (फेसबुक से), साथ में पेपर [फेयरसेक S2T: फास्ट स्पीच-टू-टेक्स्ट मॉडलिंग विद फेयरसेक](https://arxiv.org/abs/2010.05171) चांगहान वांग, यूं तांग, जुताई मा, ऐनी वू, दिमित्रो ओखोनको, जुआन पिनो द्वारा पोस्ट किया गया।
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (फेसबुक से) साथ में पेपर [लार्ज-स्केल सेल्फ- एंड सेमी-सुपरवाइज्ड लर्निंग फॉर स्पीच ट्रांसलेशन](https://arxiv.org/abs/2104.06678) चांगहान वांग, ऐनी वू, जुआन पिनो, एलेक्सी बेवस्की, माइकल औली, एलेक्सिस द्वारा Conneau द्वारा पोस्ट किया गया।
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (तेल अवीव यूनिवर्सिटी से) साथ में पेपर [स्पैन सिलेक्शन को प्री-ट्रेनिंग करके कुछ-शॉट क्वेश्चन आंसरिंग](https://arxiv.org/abs/2101.00438) ओरि राम, युवल कर्स्टन, जोनाथन बेरेंट, अमीर ग्लोबर्सन, ओमर लेवी द्वारा।
|
||||
|
@ -427,6 +427,7 @@ Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それ
|
||||
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (NVIDIA から) Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo から公開された研究論文: [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203)
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP から) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi から公開された研究論文: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP から) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi から公開された研究論文: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (Microsoft Research から) Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. から公開された研究論文 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205)
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (Facebook から), Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino から公開された研究論文: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (Facebook から), Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau から公開された研究論文: [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678)
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (Tel Aviv University から), Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy から公開された研究論文: [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438)
|
||||
|
@ -342,6 +342,7 @@ Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는
|
||||
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (NVIDIA 에서) Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 의 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 논문과 함께 발표했습니다.
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP 에서) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 의 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 논문과 함께 발표했습니다.
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP 에서) Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 의 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 논문과 함께 발표했습니다.
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (Microsoft Research 에서 제공)은 Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.의 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205)논문과 함께 발표했습니다.
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (Facebook 에서) Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 의 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 논문과 함께 발표했습니다.
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (Facebook 에서) Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 의 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 논문과 함께 발표했습니다.
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (Tel Aviv University 에서) Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 의 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 논문과 함께 발표했습니다.
|
||||
|
@ -366,6 +366,7 @@ conda install -c huggingface transformers
|
||||
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (来自 Microsoft Research) 伴随论文 [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) 由 Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei 发布。
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。
|
||||
|
@ -378,6 +378,7 @@ conda install -c huggingface transformers
|
||||
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
|
||||
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SpeechT5](https://huggingface.co/docs/transformers/main/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
|
||||
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
|
||||
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
|
||||
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
|
||||
|
@ -491,6 +491,8 @@
      title: Speech2Text
    - local: model_doc/speech_to_text_2
      title: Speech2Text2
    - local: model_doc/speecht5
      title: SpeechT5
    - local: model_doc/unispeech
      title: UniSpeech
    - local: model_doc/unispeech-sat
@ -179,6 +179,7 @@ The documentation is organized into five sections:
|
||||
1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
|
||||
1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
|
||||
1. **[SpeechT5](model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
|
||||
1. **[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
|
||||
1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
|
||||
1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
|
||||
@ -360,6 +361,7 @@ Flax), PyTorch, and/or TensorFlow.
| Speech Encoder decoder | ❌ | ❌ | ✅ | ❌ | ✅ |
| Speech2Text | ✅ | ❌ | ✅ | ✅ | ❌ |
| Speech2Text2 | ✅ | ❌ | ❌ | ❌ | ❌ |
| SpeechT5 | ✅ | ❌ | ✅ | ❌ | ❌ |
| Splinter | ✅ | ✅ | ✅ | ❌ | ❌ |
| SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌ |
| Swin Transformer | ❌ | ❌ | ✅ | ✅ | ❌ |
@ -136,6 +136,10 @@ documented on their corresponding model page.

[[autodoc]] modeling_outputs.Seq2SeqQuestionAnsweringModelOutput

## Seq2SeqSpectrogramOutput

[[autodoc]] modeling_outputs.Seq2SeqSpectrogramOutput

## SemanticSegmenterOutput

[[autodoc]] modeling_outputs.SemanticSegmenterOutput
81  docs/source/en/model_doc/speecht5.mdx  (new file)
@ -0,0 +1,81 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# SpeechT5

## Overview

The SpeechT5 model was proposed in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.

The abstract from the paper is the following:

*Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.*

This model was contributed by [Matthijs](https://huggingface.co/Matthijs). The original code can be found [here](https://github.com/microsoft/SpeechT5).
|
||||
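For orientation, here is a minimal text-to-speech sketch using the checkpoints referenced in this PR (`microsoft/speecht5_tts` and `microsoft/speecht5_hifigan`). Treat it as illustrative rather than canonical: the all-zero speaker embedding is a placeholder assumption, and real usage should pass a 512-dimensional x-vector.

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")

# placeholder speaker embedding (assumption): the TTS checkpoint expects a 512-dim x-vector
speaker_embeddings = torch.zeros((1, 512))

# generate a log-mel spectrogram, then convert it to a waveform with the HiFi-GAN vocoder
spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
waveform = vocoder(spectrogram)
```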
## SpeechT5Config
|
||||
|
||||
[[autodoc]] SpeechT5Config
|
||||
|
||||
## SpeechT5HifiGanConfig
|
||||
|
||||
[[autodoc]] SpeechT5HifiGanConfig
|
||||
|
||||
## SpeechT5Tokenizer
|
||||
|
||||
[[autodoc]] SpeechT5Tokenizer
|
||||
- __call__
|
||||
- save_vocabulary
|
||||
- decode
|
||||
- batch_decode
|
||||
|
||||
## SpeechT5FeatureExtractor
|
||||
|
||||
[[autodoc]] SpeechT5FeatureExtractor
|
||||
- __call__
|
||||
|
||||
## SpeechT5Processor
|
||||
|
||||
[[autodoc]] SpeechT5Processor
|
||||
- __call__
|
||||
- pad
|
||||
- from_pretrained
|
||||
- save_pretrained
|
||||
- batch_decode
|
||||
- decode
|
||||
|
||||
## SpeechT5Model
|
||||
|
||||
[[autodoc]] SpeechT5Model
|
||||
- forward
|
||||
|
||||
## SpeechT5ForSpeechToText
|
||||
|
||||
[[autodoc]] SpeechT5ForSpeechToText
|
||||
- forward
|
||||
|
||||
## SpeechT5ForTextToSpeech
|
||||
|
||||
[[autodoc]] SpeechT5ForTextToSpeech
|
||||
- forward
|
||||
- generate_speech
|
||||
|
||||
## SpeechT5ForSpeechToSpeech
|
||||
|
||||
[[autodoc]] SpeechT5ForSpeechToSpeech
|
||||
- forward
|
||||
- generate_speech
|
||||
|
||||
## SpeechT5HifiGan
|
||||
|
||||
[[autodoc]] SpeechT5HifiGan
|
||||
- forward
|
@ -402,6 +402,12 @@ _import_structure = {
|
||||
"Speech2Text2Processor",
|
||||
"Speech2Text2Tokenizer",
|
||||
],
|
||||
"models.speecht5": [
|
||||
"SPEECHT5_PRETRAINED_CONFIG_ARCHIVE_MAP",
|
||||
"SPEECHT5_PRETRAINED_HIFIGAN_CONFIG_ARCHIVE_MAP",
|
||||
"SpeechT5Config",
|
||||
"SpeechT5HifiGanConfig",
|
||||
],
|
||||
"models.splinter": ["SPLINTER_PRETRAINED_CONFIG_ARCHIVE_MAP", "SplinterConfig", "SplinterTokenizer"],
|
||||
"models.squeezebert": ["SQUEEZEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "SqueezeBertConfig", "SqueezeBertTokenizer"],
|
||||
"models.swin": ["SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP", "SwinConfig"],
|
||||
@ -632,6 +638,7 @@ else:
|
||||
_import_structure["models.reformer"].append("ReformerTokenizer")
|
||||
_import_structure["models.rembert"].append("RemBertTokenizer")
|
||||
_import_structure["models.speech_to_text"].append("Speech2TextTokenizer")
|
||||
_import_structure["models.speecht5"].append("SpeechT5Tokenizer")
|
||||
_import_structure["models.t5"].append("T5Tokenizer")
|
||||
_import_structure["models.xglm"].append("XGLMTokenizer")
|
||||
_import_structure["models.xlm_prophetnet"].append("XLMProphetNetTokenizer")
|
||||
@ -734,6 +741,7 @@ else:
|
||||
_import_structure["models.audio_spectrogram_transformer"].append("ASTFeatureExtractor")
|
||||
_import_structure["models.mctct"].append("MCTCTFeatureExtractor")
|
||||
_import_structure["models.speech_to_text"].append("Speech2TextFeatureExtractor")
|
||||
_import_structure["models.speecht5"].append("SpeechT5FeatureExtractor")
|
||||
|
||||
# Tensorflow-text-specific objects
|
||||
try:
|
||||
@ -772,6 +780,7 @@ except OptionalDependencyNotAvailable:
|
||||
]
|
||||
else:
|
||||
_import_structure["models.speech_to_text"].append("Speech2TextProcessor")
|
||||
_import_structure["models.speecht5"].append("SpeechT5Processor")
|
||||
|
||||
# Vision-specific objects
|
||||
try:
|
||||
@ -2193,6 +2202,17 @@ else:
|
||||
]
|
||||
)
|
||||
_import_structure["models.speech_to_text_2"].extend(["Speech2Text2ForCausalLM", "Speech2Text2PreTrainedModel"])
|
||||
_import_structure["models.speecht5"].extend(
|
||||
[
|
||||
"SPEECHT5_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
"SpeechT5ForSpeechToSpeech",
|
||||
"SpeechT5ForSpeechToText",
|
||||
"SpeechT5ForTextToSpeech",
|
||||
"SpeechT5HifiGan",
|
||||
"SpeechT5Model",
|
||||
"SpeechT5PreTrainedModel",
|
||||
]
|
||||
)
|
||||
_import_structure["models.splinter"].extend(
|
||||
[
|
||||
"SPLINTER_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
@ -3842,6 +3862,12 @@ if TYPE_CHECKING:
|
||||
Speech2Text2Processor,
|
||||
Speech2Text2Tokenizer,
|
||||
)
|
||||
from .models.speecht5 import (
|
||||
SPEECHT5_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
SPEECHT5_PRETRAINED_HIFIGAN_CONFIG_ARCHIVE_MAP,
|
||||
SpeechT5Config,
|
||||
SpeechT5HifiGanConfig,
|
||||
)
|
||||
from .models.splinter import SPLINTER_PRETRAINED_CONFIG_ARCHIVE_MAP, SplinterConfig, SplinterTokenizer
|
||||
from .models.squeezebert import SQUEEZEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, SqueezeBertConfig, SqueezeBertTokenizer
|
||||
from .models.swin import SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP, SwinConfig
|
||||
@ -4054,6 +4080,7 @@ if TYPE_CHECKING:
|
||||
from .models.reformer import ReformerTokenizer
|
||||
from .models.rembert import RemBertTokenizer
|
||||
from .models.speech_to_text import Speech2TextTokenizer
|
||||
from .models.speecht5 import SpeechT5Tokenizer
|
||||
from .models.t5 import T5Tokenizer
|
||||
from .models.xglm import XGLMTokenizer
|
||||
from .models.xlm_prophetnet import XLMProphetNetTokenizer
|
||||
@ -4139,6 +4166,7 @@ if TYPE_CHECKING:
|
||||
from .models.audio_spectrogram_transformer import ASTFeatureExtractor
|
||||
from .models.mctct import MCTCTFeatureExtractor
|
||||
from .models.speech_to_text import Speech2TextFeatureExtractor
|
||||
from .models.speecht5 import SpeechT5FeatureExtractor
|
||||
|
||||
try:
|
||||
if not is_tensorflow_text_available():
|
||||
@ -4163,6 +4191,7 @@ if TYPE_CHECKING:
|
||||
from .utils.dummy_sentencepiece_and_speech_objects import *
|
||||
else:
|
||||
from .models.speech_to_text import Speech2TextProcessor
|
||||
from .models.speecht5 import SpeechT5Processor
|
||||
|
||||
try:
|
||||
if not is_vision_available():
|
||||
@ -5326,6 +5355,15 @@ if TYPE_CHECKING:
|
||||
Speech2TextPreTrainedModel,
|
||||
)
|
||||
from .models.speech_to_text_2 import Speech2Text2ForCausalLM, Speech2Text2PreTrainedModel
|
||||
from .models.speecht5 import (
|
||||
SPEECHT5_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
SpeechT5ForSpeechToSpeech,
|
||||
SpeechT5ForSpeechToText,
|
||||
SpeechT5ForTextToSpeech,
|
||||
SpeechT5HifiGan,
|
||||
SpeechT5Model,
|
||||
SpeechT5PreTrainedModel,
|
||||
)
|
||||
from .models.splinter import (
|
||||
SPLINTER_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
SplinterForPreTraining,
|
||||
|
@ -1355,3 +1355,63 @@ class BaseModelOutputWithPoolingAndProjection(ModelOutput):
|
||||
hidden_states: Optional[Tuple[torch.FloatTensor]] = None
|
||||
attentions: Optional[Tuple[torch.FloatTensor]] = None
|
||||
projection_state: Optional[Tuple[torch.FloatTensor]] = None
|
||||
|
||||
|
||||
@dataclass
|
||||
class Seq2SeqSpectrogramOutput(ModelOutput):
|
||||
"""
|
||||
Base class for sequence-to-sequence spectrogram outputs.
|
||||
|
||||
Args:
|
||||
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
|
||||
Spectrogram generation loss.
|
||||
spectrogram (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_bins)`):
|
||||
The predicted spectrogram.
|
||||
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
|
||||
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
|
||||
`(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape
|
||||
`(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
|
||||
|
||||
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
|
||||
blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
|
||||
decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
||||
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
|
||||
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
||||
|
||||
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
|
||||
decoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
||||
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
||||
sequence_length)`.
|
||||
|
||||
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
|
||||
self-attention heads.
|
||||
cross_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
||||
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
||||
sequence_length)`.
|
||||
|
||||
Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the
|
||||
weighted average in the cross-attention heads.
|
||||
encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
|
||||
Sequence of hidden-states at the output of the last layer of the encoder of the model.
|
||||
encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
||||
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
|
||||
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
||||
|
||||
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
|
||||
encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
||||
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
||||
sequence_length)`.
|
||||
|
||||
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
|
||||
self-attention heads.
|
||||
"""
|
||||
|
||||
loss: Optional[torch.FloatTensor] = None
|
||||
spectrogram: torch.FloatTensor = None
|
||||
past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
|
||||
decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None
|
||||
decoder_attentions: Optional[Tuple[torch.FloatTensor]] = None
|
||||
cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
|
||||
encoder_last_hidden_state: Optional[torch.FloatTensor] = None
|
||||
encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None
|
||||
encoder_attentions: Optional[Tuple[torch.FloatTensor]] = None
|
||||
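A hedged illustration of the new output class: the shapes below mirror the docstring above, and the concrete numbers are made up for the example.

```python
import torch
from transformers.modeling_outputs import Seq2SeqSpectrogramOutput

# populate the two fields a caller usually inspects; the remaining fields default to None
out = Seq2SeqSpectrogramOutput(
    loss=torch.tensor(0.42),
    spectrogram=torch.randn(1, 100, 80),  # (batch_size, sequence_length, num_bins)
)
print(out.loss)                # spectrogram generation loss
print(out.spectrogram.shape)   # torch.Size([1, 100, 80])
print(out.past_key_values)     # None, since no cache was provided
```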
|
@ -155,6 +155,7 @@ from . import (
|
||||
speech_encoder_decoder,
|
||||
speech_to_text,
|
||||
speech_to_text_2,
|
||||
speecht5,
|
||||
splinter,
|
||||
squeezebert,
|
||||
swin,
|
||||
|
@ -152,6 +152,7 @@ CONFIG_MAPPING_NAMES = OrderedDict(
|
||||
("speech-encoder-decoder", "SpeechEncoderDecoderConfig"),
|
||||
("speech_to_text", "Speech2TextConfig"),
|
||||
("speech_to_text_2", "Speech2Text2Config"),
|
||||
("speecht5", "SpeechT5Config"),
|
||||
("splinter", "SplinterConfig"),
|
||||
("squeezebert", "SqueezeBertConfig"),
|
||||
("swin", "SwinConfig"),
|
||||
@ -310,6 +311,7 @@ CONFIG_ARCHIVE_MAP_MAPPING_NAMES = OrderedDict(
|
||||
("sew-d", "SEW_D_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("speech_to_text", "SPEECH_TO_TEXT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("speech_to_text_2", "SPEECH_TO_TEXT_2_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("speecht5", "SPEECHT5_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("splinter", "SPLINTER_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("squeezebert", "SQUEEZEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("swin", "SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
@ -489,6 +491,7 @@ MODEL_NAMES_MAPPING = OrderedDict(
|
||||
("speech-encoder-decoder", "Speech Encoder decoder"),
|
||||
("speech_to_text", "Speech2Text"),
|
||||
("speech_to_text_2", "Speech2Text2"),
|
||||
("speecht5", "SpeechT5"),
|
||||
("splinter", "Splinter"),
|
||||
("squeezebert", "SqueezeBERT"),
|
||||
("swin", "Swin Transformer"),
|
||||
|
@ -76,6 +76,7 @@ FEATURE_EXTRACTOR_MAPPING_NAMES = OrderedDict(
|
||||
("sew", "Wav2Vec2FeatureExtractor"),
|
||||
("sew-d", "Wav2Vec2FeatureExtractor"),
|
||||
("speech_to_text", "Speech2TextFeatureExtractor"),
|
||||
("speecht5", "SpeechT5FeatureExtractor"),
|
||||
("swin", "ViTFeatureExtractor"),
|
||||
("swinv2", "ViTFeatureExtractor"),
|
||||
("table-transformer", "DetrFeatureExtractor"),
|
||||
|
@ -148,6 +148,7 @@ MODEL_MAPPING_NAMES = OrderedDict(
|
||||
("sew", "SEWModel"),
|
||||
("sew-d", "SEWDModel"),
|
||||
("speech_to_text", "Speech2TextModel"),
|
||||
("speecht5", "SpeechT5Model"),
|
||||
("splinter", "SplinterModel"),
|
||||
("squeezebert", "SqueezeBertModel"),
|
||||
("swin", "SwinModel"),
|
||||
@ -591,6 +592,7 @@ MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES = OrderedDict(
|
||||
[
|
||||
("speech-encoder-decoder", "SpeechEncoderDecoderModel"),
|
||||
("speech_to_text", "Speech2TextForConditionalGeneration"),
|
||||
("speecht5", "SpeechT5ForSpeechToText"),
|
||||
("whisper", "WhisperForConditionalGeneration"),
|
||||
]
|
||||
)
|
||||
|
@ -61,6 +61,7 @@ PROCESSOR_MAPPING_NAMES = OrderedDict(
|
||||
("sew-d", "Wav2Vec2Processor"),
|
||||
("speech_to_text", "Speech2TextProcessor"),
|
||||
("speech_to_text_2", "Speech2Text2Processor"),
|
||||
("speecht5", "SpeechT5Processor"),
|
||||
("trocr", "TrOCRProcessor"),
|
||||
("unispeech", "Wav2Vec2Processor"),
|
||||
("unispeech-sat", "Wav2Vec2Processor"),
|
||||
|
@ -264,6 +264,7 @@ else:
|
||||
("roformer", ("RoFormerTokenizer", "RoFormerTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("speech_to_text", ("Speech2TextTokenizer" if is_sentencepiece_available() else None, None)),
|
||||
("speech_to_text_2", ("Speech2Text2Tokenizer", None)),
|
||||
("speecht5", ("SpeechT5Tokenizer" if is_sentencepiece_available() else None, None)),
|
||||
("splinter", ("SplinterTokenizer", "SplinterTokenizerFast")),
|
||||
(
|
||||
"squeezebert",
|
||||
|
129  src/transformers/models/speecht5/__init__.py  (new file)
@ -0,0 +1,129 @@
|
||||
# flake8: noqa
|
||||
# There's no way to ignore "F401 '...' imported but unused" warnings in this
|
||||
# module, but to preserve other warnings. So, don't check this module at all.
|
||||
|
||||
# Copyright 2023 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from ...utils import (
|
||||
OptionalDependencyNotAvailable,
|
||||
_LazyModule,
|
||||
is_sentencepiece_available,
|
||||
is_speech_available,
|
||||
is_torch_available,
|
||||
)
|
||||
|
||||
|
||||
_import_structure = {
|
||||
"configuration_speecht5": [
|
||||
"SPEECHT5_PRETRAINED_CONFIG_ARCHIVE_MAP",
|
||||
"SPEECHT5_PRETRAINED_HIFIGAN_CONFIG_ARCHIVE_MAP",
|
||||
"SpeechT5Config",
|
||||
"SpeechT5HifiGanConfig",
|
||||
],
|
||||
}
|
||||
|
||||
try:
|
||||
if not is_sentencepiece_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["tokenization_speecht5"] = ["SpeechT5Tokenizer"]
|
||||
|
||||
try:
|
||||
if not is_speech_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["feature_extraction_speecht5"] = ["SpeechT5FeatureExtractor"]
|
||||
|
||||
try:
|
||||
if not (is_speech_available() and is_sentencepiece_available()):
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["processing_speecht5"] = ["SpeechT5Processor"]
|
||||
|
||||
try:
|
||||
if not is_torch_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["modeling_speecht5"] = [
|
||||
"SPEECHT5_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
"SpeechT5ForSpeechToText",
|
||||
"SpeechT5ForSpeechToSpeech",
|
||||
"SpeechT5ForTextToSpeech",
|
||||
"SpeechT5Model",
|
||||
"SpeechT5PreTrainedModel",
|
||||
"SpeechT5HifiGan",
|
||||
]
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .configuration_speecht5 import (
|
||||
SPEECHT5_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
SPEECHT5_PRETRAINED_HIFIGAN_CONFIG_ARCHIVE_MAP,
|
||||
SpeechT5Config,
|
||||
SpeechT5HifiGanConfig,
|
||||
)
|
||||
|
||||
try:
|
||||
if not is_sentencepiece_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .tokenization_speecht5 import SpeechT5Tokenizer
|
||||
|
||||
try:
|
||||
if not is_speech_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .feature_extraction_speecht5 import SpeechT5FeatureExtractor
|
||||
|
||||
try:
|
||||
if not (is_speech_available() and is_sentencepiece_available()):
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .processing_speecht5 import SpeechT5Processor
|
||||
|
||||
try:
|
||||
if not is_torch_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .modeling_speecht5 import (
|
||||
SPEECHT5_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
SpeechT5ForSpeechToSpeech,
|
||||
SpeechT5ForSpeechToText,
|
||||
SpeechT5ForTextToSpeech,
|
||||
SpeechT5HifiGan,
|
||||
SpeechT5Model,
|
||||
SpeechT5PreTrainedModel,
|
||||
)
|
||||
|
||||
else:
|
||||
import sys
|
||||
|
||||
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
|
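As a quick sanity check of the lazy import structure above, the following sketch (an illustration, not part of the PR) shows how the guarded symbols behave from a user's perspective:

```python
# SpeechT5Config is registered unconditionally, so this import always works
from transformers.models.speecht5 import SpeechT5Config

# the modeling classes are only registered when torch is installed
from transformers.utils import is_torch_available

if is_torch_available():
    from transformers.models.speecht5 import SpeechT5Model
```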
412  src/transformers/models/speecht5/configuration_speecht5.py  (new file)
@ -0,0 +1,412 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The Fairseq Authors, Microsoft Research, and the HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
""" SpeechT5 model configuration"""
|
||||
|
||||
import functools
|
||||
import operator
|
||||
|
||||
from ...configuration_utils import PretrainedConfig
|
||||
from ...utils import logging
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
SPEECHT5_PRETRAINED_CONFIG_ARCHIVE_MAP = {
|
||||
"microsoft/speecht5_asr": "https://huggingface.co/microsoft/speecht5_asr/resolve/main/config.json",
|
||||
"microsoft/speecht5_tts": "https://huggingface.co/microsoft/speecht5_tts/resolve/main/config.json",
|
||||
"microsoft/speecht5_vc": "https://huggingface.co/microsoft/speecht5_vc/resolve/main/config.json",
|
||||
}
|
||||
|
||||
SPEECHT5_PRETRAINED_HIFIGAN_CONFIG_ARCHIVE_MAP = {
|
||||
"microsoft/speecht5_hifigan": "https://huggingface.co/microsoft/speecht5_hifigan/resolve/main/config.json",
|
||||
}
|
||||
|
||||
|
||||
class SpeechT5Config(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of a [`SpeechT5Model`]. It is used to instantiate a
|
||||
SpeechT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration
|
||||
with the defaults will yield a similar configuration to that of the SpeechT5
|
||||
[microsoft/speecht5_asr](https://huggingface.co/microsoft/speecht5_asr) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Args:
|
||||
vocab_size (`int`, *optional*, defaults to 81):
|
||||
Vocabulary size of the SpeechT5 model. Defines the number of different tokens that can be represented by
|
||||
the `input_ids` passed to the forward method of [`SpeechT5Model`].
|
||||
hidden_size (`int`, *optional*, defaults to 768):
|
||||
Dimensionality of the encoder layers and the pooler layer.
|
||||
encoder_layers (`int`, *optional*, defaults to 12):
|
||||
Number of hidden layers in the Transformer encoder.
|
||||
encoder_attention_heads (`int`, *optional*, defaults to 12):
|
||||
Number of attention heads for each attention layer in the Transformer encoder.
|
||||
encoder_ffn_dim (`int`, *optional*, defaults to 3072):
|
||||
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
||||
encoder_layerdrop (`float`, *optional*, defaults to 0.1):
|
||||
The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556)
|
||||
for more details.
|
||||
decoder_layers (`int`, *optional*, defaults to 6):
|
||||
Number of hidden layers in the Transformer decoder.
|
||||
decoder_attention_heads (`int`, *optional*, defaults to 12):
|
||||
Number of attention heads for each attention layer in the Transformer decoder.
|
||||
decoder_ffn_dim (`int`, *optional*, defaults to 3072):
|
||||
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer decoder.
|
||||
decoder_layerdrop (`float`, *optional*, defaults to 0.1):
|
||||
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
|
||||
for more details.
|
||||
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
|
||||
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
||||
`"relu"`, `"selu"` and `"gelu_new"` are supported.
|
||||
positional_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout probability for the text position encoding layers.
|
||||
hidden_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
||||
attention_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio for the attention probabilities.
|
||||
activation_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio for activations inside the fully connected layer.
|
||||
initializer_range (`float`, *optional*, defaults to 0.02):
|
||||
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
||||
layer_norm_eps (`float`, *optional*, defaults to 1e-5):
|
||||
The epsilon used by the layer normalization layers.
|
||||
scale_embedding (`bool`, *optional*, defaults to `False`):
|
||||
Scale embeddings by dividing by sqrt(d_model).
|
||||
feat_extract_norm (`str`, *optional*, defaults to `"group"`):
|
||||
The norm to be applied to 1D convolutional layers in the speech encoder pre-net. One of `"group"` for group
|
||||
normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
|
||||
convolutional layers.
|
||||
feat_proj_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout probability for output of the speech encoder pre-net.
|
||||
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
|
||||
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
|
||||
extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
|
||||
conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
|
||||
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
|
||||
speech encoder pre-net. The length of *conv_dim* defines the number of 1D convolutional layers.
|
||||
conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
|
||||
A tuple of integers defining the stride of each 1D convolutional layer in the speech encoder pre-net. The
|
||||
length of *conv_stride* defines the number of convolutional layers and has to match the length of
|
||||
*conv_dim*.
|
||||
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`):
|
||||
A tuple of integers defining the kernel size of each 1D convolutional layer in the speech encoder pre-net.
|
||||
The length of *conv_kernel* defines the number of convolutional layers and has to match the length of
|
||||
*conv_dim*.
|
||||
conv_bias (`bool`, *optional*, defaults to `False`):
|
||||
Whether the 1D convolutional layers have a bias.
|
||||
num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
|
||||
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
|
||||
embeddings layer.
|
||||
num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
|
||||
Number of groups of 1D convolutional positional embeddings layer.
|
||||
apply_spec_augment (`bool`, *optional*, defaults to `True`):
|
||||
Whether to apply *SpecAugment* data augmentation to the outputs of the speech encoder pre-net. For
|
||||
reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech
|
||||
Recognition](https://arxiv.org/abs/1904.08779).
|
||||
mask_time_prob (`float`, *optional*, defaults to 0.05):
|
||||
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
|
||||
procedure generates ''mask_time_prob*len(time_axis)/mask_time_length'' independent masks over the axis. If
|
||||
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
|
||||
masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
|
||||
actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
|
||||
mask_time_length (`int`, *optional*, defaults to 10):
|
||||
Length of vector span along the time axis.
|
||||
mask_time_min_masks (`int`, *optional*, defaults to 2):
|
||||
The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
|
||||
irrespective of `mask_time_prob`. Only relevant if ''mask_time_prob*len(time_axis)/mask_time_length <
|
||||
mask_time_min_masks''
|
||||
mask_feature_prob (`float`, *optional*, defaults to 0.0):
|
||||
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
|
||||
masking procedure generates ''mask_feature_prob*len(feature_axis)/mask_feature_length'' independent masks over
|
||||
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
|
||||
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
|
||||
may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is
|
||||
True`.
|
||||
mask_feature_length (`int`, *optional*, defaults to 10):
|
||||
Length of vector span along the feature axis.
|
||||
mask_feature_min_masks (`int`, *optional*, defaults to 0):
|
||||
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
|
||||
step, irrespective of `mask_feature_prob`. Only relevant if
|
||||
''mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks''
|
||||
num_mel_bins (`int`, *optional*, defaults to 80):
|
||||
Number of mel features used per input frame. Used by the speech decoder pre-net. Should correspond to
|
||||
the value used in the [`SpeechT5Processor`] class.
|
||||
speech_decoder_prenet_layers (`int`, *optional*, defaults to 2):
|
||||
Number of layers in the speech decoder pre-net.
|
||||
speech_decoder_prenet_units (`int`, *optional*, defaults to 256):
|
||||
Dimensionality of the layers in the speech decoder pre-net.
|
||||
speech_decoder_prenet_dropout (`float`, *optional*, defaults to 0.5):
|
||||
The dropout probability for the speech decoder pre-net layers.
|
||||
speaker_embedding_dim (`int`, *optional*, defaults to 512):
|
||||
Dimensionality of the *XVector* embedding vectors.
|
||||
speech_decoder_postnet_layers (`int`, *optional*, defaults to 5):
|
||||
Number of layers in the speech decoder post-net.
|
||||
speech_decoder_postnet_units (`int`, *optional*, defaults to 256):
|
||||
Dimensionality of the layers in the speech decoder post-net.
|
||||
speech_decoder_postnet_kernel (`int`, *optional*, defaults to 5):
|
||||
Kernel size of the 1D convolutional filters in the speech decoder post-net.
|
||||
speech_decoder_postnet_dropout (`float`, *optional*, defaults to 0.5):
|
||||
The dropout probability for the speech decoder post-net layers.
|
||||
reduction_factor (`int`, *optional*, defaults to 2):
|
||||
Spectrogram length reduction factor for the speech decoder post-net.
|
||||
max_speech_positions (`int`, *optional*, defaults to 4000):
|
||||
The maximum sequence length of speech features that this model might ever be used with.
|
||||
max_text_positions (`int`, *optional*, defaults to 450):
|
||||
The maximum sequence length of text features that this model might ever be used with.
|
||||
encoder_max_relative_position (`int`, *optional*, defaults to 160):
|
||||
Maximum distance for relative position embedding in the encoder.
|
||||
decoder_max_relative_position (`int`, *optional*, defaults to 160):
|
||||
Maximum distance for relative position embedding in the decoder.
|
||||
use_cache (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not the model should return the last key/values attentions (not used by all models).
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> from transformers import SpeechT5Model, SpeechT5Config
|
||||
|
||||
>>> # Initializing a "microsoft/speecht5_asr" style configuration
|
||||
>>> configuration = SpeechT5Config()
|
||||
|
||||
>>> # Initializing a model (with random weights) from the "microsoft/speecht5_asr" style configuration
|
||||
>>> model = SpeechT5Model(configuration)
|
||||
|
||||
>>> # Accessing the model configuration
|
||||
>>> configuration = model.config
|
||||
```"""
|
||||
model_type = "speecht5"
|
||||
attribute_map = {"num_attention_heads": "encoder_attention_heads", "num_hidden_layers": "encoder_layers"}
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
vocab_size=81,
|
||||
hidden_size=768,
|
||||
encoder_layers=12,
|
||||
encoder_attention_heads=12,
|
||||
encoder_ffn_dim=3072,
|
||||
encoder_layerdrop=0.1,
|
||||
decoder_layers=6,
|
||||
decoder_ffn_dim=3072,
|
||||
decoder_attention_heads=12,
|
||||
decoder_layerdrop=0.1,
|
||||
hidden_act="gelu",
|
||||
positional_dropout=0.1,
|
||||
hidden_dropout=0.1,
|
||||
attention_dropout=0.1,
|
||||
activation_dropout=0.1,
|
||||
initializer_range=0.02,
|
||||
layer_norm_eps=1e-5,
|
||||
scale_embedding=False,
|
||||
feat_extract_norm="group",
|
||||
feat_proj_dropout=0.0,
|
||||
feat_extract_activation="gelu",
|
||||
conv_dim=(512, 512, 512, 512, 512, 512, 512),
|
||||
conv_stride=(5, 2, 2, 2, 2, 2, 2),
|
||||
conv_kernel=(10, 3, 3, 3, 3, 2, 2),
|
||||
conv_bias=False,
|
||||
num_conv_pos_embeddings=128,
|
||||
num_conv_pos_embedding_groups=16,
|
||||
apply_spec_augment=True,
|
||||
mask_time_prob=0.05,
|
||||
mask_time_length=10,
|
||||
mask_time_min_masks=2,
|
||||
mask_feature_prob=0.0,
|
||||
mask_feature_length=10,
|
||||
mask_feature_min_masks=0,
|
||||
pad_token_id=1,
|
||||
bos_token_id=0,
|
||||
eos_token_id=2,
|
||||
decoder_start_token_id=2,
|
||||
num_mel_bins=80,
|
||||
speech_decoder_prenet_layers=2,
|
||||
speech_decoder_prenet_units=256,
|
||||
speech_decoder_prenet_dropout=0.5,
|
||||
speaker_embedding_dim=512,
|
||||
speech_decoder_postnet_layers=5,
|
||||
speech_decoder_postnet_units=256,
|
||||
speech_decoder_postnet_kernel=5,
|
||||
speech_decoder_postnet_dropout=0.5,
|
||||
reduction_factor=2,
|
||||
max_speech_positions=4000,
|
||||
max_text_positions=450,
|
||||
encoder_max_relative_position=160,
|
||||
decoder_max_relative_position=160,
|
||||
use_cache=True,
|
||||
is_encoder_decoder=True,
|
||||
**kwargs
|
||||
):
|
||||
self.vocab_size = vocab_size
|
||||
self.hidden_size = hidden_size
|
||||
self.encoder_layers = encoder_layers
|
||||
self.encoder_ffn_dim = encoder_ffn_dim
|
||||
self.encoder_attention_heads = encoder_attention_heads
|
||||
self.encoder_layerdrop = encoder_layerdrop
|
||||
self.decoder_layers = decoder_layers
|
||||
self.decoder_ffn_dim = decoder_ffn_dim
|
||||
self.decoder_attention_heads = decoder_attention_heads
|
||||
self.decoder_layerdrop = decoder_layerdrop
|
||||
self.hidden_act = hidden_act
|
||||
self.positional_dropout = positional_dropout
|
||||
self.hidden_dropout = hidden_dropout
|
||||
self.attention_dropout = attention_dropout
|
||||
self.activation_dropout = activation_dropout
|
||||
self.initializer_range = initializer_range
|
||||
self.layer_norm_eps = layer_norm_eps
|
||||
self.scale_embedding = scale_embedding
|
||||
|
||||
self.feat_extract_norm = feat_extract_norm
|
||||
self.feat_proj_dropout = feat_proj_dropout
|
||||
self.feat_extract_activation = feat_extract_activation
|
||||
self.conv_dim = list(conv_dim)
|
||||
self.conv_stride = list(conv_stride)
|
||||
self.conv_kernel = list(conv_kernel)
|
||||
self.conv_bias = conv_bias
|
||||
self.num_conv_pos_embeddings = num_conv_pos_embeddings
|
||||
self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups
|
||||
self.num_feat_extract_layers = len(self.conv_dim)
|
||||
|
||||
if (
|
||||
(len(self.conv_stride) != self.num_feat_extract_layers)
|
||||
or (len(self.conv_kernel) != self.num_feat_extract_layers)
|
||||
or (len(self.conv_dim) != self.num_feat_extract_layers)
|
||||
):
|
||||
raise ValueError(
|
||||
"Configuration for convolutional layers is incorrect. It is required that `len(config.conv_dim)` =="
|
||||
" `len(config.conv_stride)` == `len(config.conv_kernel)`, but is `len(config.conv_dim) ="
|
||||
f" {len(self.conv_dim)}`, `len(config.conv_stride) = {len(self.conv_stride)}`,"
|
||||
f" `len(config.conv_kernel) = {len(self.conv_kernel)}`."
|
||||
)
|
||||
|
||||
# fine-tuning config parameters for SpecAugment: https://arxiv.org/abs/1904.08779
|
||||
self.apply_spec_augment = apply_spec_augment
|
||||
self.mask_time_prob = mask_time_prob
|
||||
self.mask_time_length = mask_time_length
|
||||
self.mask_time_min_masks = mask_time_min_masks
|
||||
self.mask_feature_prob = mask_feature_prob
|
||||
self.mask_feature_length = mask_feature_length
|
||||
self.mask_feature_min_masks = mask_feature_min_masks
|
||||
|
||||
self.num_mel_bins = num_mel_bins
|
||||
self.speech_decoder_prenet_layers = speech_decoder_prenet_layers
|
||||
self.speech_decoder_prenet_units = speech_decoder_prenet_units
|
||||
self.speech_decoder_prenet_dropout = speech_decoder_prenet_dropout
|
||||
self.speaker_embedding_dim = speaker_embedding_dim
|
||||
|
||||
self.speech_decoder_postnet_layers = speech_decoder_postnet_layers
|
||||
self.speech_decoder_postnet_units = speech_decoder_postnet_units
|
||||
self.speech_decoder_postnet_kernel = speech_decoder_postnet_kernel
|
||||
self.speech_decoder_postnet_dropout = speech_decoder_postnet_dropout
|
||||
self.reduction_factor = reduction_factor
|
||||
|
||||
self.max_speech_positions = max_speech_positions
|
||||
self.max_text_positions = max_text_positions
|
||||
self.encoder_max_relative_position = encoder_max_relative_position
|
||||
self.decoder_max_relative_position = decoder_max_relative_position
|
||||
self.use_cache = use_cache
|
||||
self.is_encoder_decoder = is_encoder_decoder
|
||||
|
||||
super().__init__(
|
||||
pad_token_id=pad_token_id,
|
||||
bos_token_id=bos_token_id,
|
||||
eos_token_id=eos_token_id,
|
||||
is_encoder_decoder=is_encoder_decoder,
|
||||
decoder_start_token_id=decoder_start_token_id,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
def inputs_to_logits_ratio(self):
|
||||
return functools.reduce(operator.mul, self.conv_stride, 1)
|
||||
|
||||
|
||||
class SpeechT5HifiGanConfig(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of a [`SpeechT5HifiGan`]. It is used to instantiate
|
||||
a SpeechT5 HiFi-GAN vocoder model according to the specified arguments, defining the model architecture.
|
||||
Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5
|
||||
[microsoft/speecht5_hifigan](https://huggingface.co/microsoft/speecht5_hifigan) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Args:
|
||||
model_in_dim (`int`, *optional*, defaults to 80):
|
||||
The number of frequency bins in the input log-mel spectrogram.
|
||||
sampling_rate (`int`, *optional*, defaults to 16000):
|
||||
The sampling rate at which the output audio will be generated, expressed in hertz (Hz).
|
||||
upsample_initial_channel (`int`, *optional*, defaults to 512):
|
||||
The number of input channels into the upsampling network.
|
||||
upsample_rates (`Tuple[int]` or `List[int]`, *optional*, defaults to `[4, 4, 4, 4]`):
|
||||
A tuple of integers defining the stride of each 1D convolutional layer in the upsampling network. The
|
||||
length of *upsample_rates* defines the number of convolutional layers and has to match the length of
|
||||
*upsample_kernel_sizes*.
|
||||
upsample_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[8, 8, 8, 8]`):
|
||||
A tuple of integers defining the kernel size of each 1D convolutional layer in the upsampling network. The
|
||||
length of *upsample_kernel_sizes* defines the number of convolutional layers and has to match the length of
|
||||
*upsample_rates*.
|
||||
resblock_kernel_sizes (`Tuple[int]` or `List[int]`, *optional*, defaults to `[3, 7, 11]`):
|
||||
A tuple of integers defining the kernel sizes of the 1D convolutional layers in the multi-receptive field
|
||||
fusion (MRF) module.
|
||||
resblock_dilation_sizes (`Tuple[Tuple[int]]` or `List[List[int]]`, *optional*, defaults to `[[1, 3, 5], [1, 3, 5], [1, 3, 5]]`):
|
||||
A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the
|
||||
multi-receptive field fusion (MRF) module.
|
||||
initializer_range (`float`, *optional*, defaults to 0.01):
|
||||
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
||||
leaky_relu_slope (`float`, *optional*, defaults to 0.1):
|
||||
The slope of the negative part of the leaky ReLU activation.
|
||||
normalize_before (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not to normalize the spectrogram before vocoding using the vocoder's learned mean and variance.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig
|
||||
|
||||
>>> # Initializing a "microsoft/speecht5_hifigan" style configuration
|
||||
>>> configuration = SpeechT5HifiGanConfig()
|
||||
|
||||
>>> # Initializing a model (with random weights) from the "microsoft/speecht5_hifigan" style configuration
|
||||
>>> model = SpeechT5HifiGan(configuration)
|
||||
|
||||
>>> # Accessing the model configuration
|
||||
>>> configuration = model.config
|
||||
```"""
|
||||
model_type = "hifigan"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
model_in_dim=80,
|
||||
sampling_rate=16000,
|
||||
upsample_initial_channel=512,
|
||||
upsample_rates=[4, 4, 4, 4],
|
||||
upsample_kernel_sizes=[8, 8, 8, 8],
|
||||
resblock_kernel_sizes=[3, 7, 11],
|
||||
resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
|
||||
initializer_range=0.01,
|
||||
leaky_relu_slope=0.1,
|
||||
normalize_before=True,
|
||||
**kwargs,
|
||||
):
|
||||
self.model_in_dim = model_in_dim
|
||||
self.sampling_rate = sampling_rate
|
||||
self.upsample_initial_channel = upsample_initial_channel
|
||||
self.upsample_rates = upsample_rates
|
||||
self.upsample_kernel_sizes = upsample_kernel_sizes
|
||||
self.resblock_kernel_sizes = resblock_kernel_sizes
|
||||
self.resblock_dilation_sizes = resblock_dilation_sizes
|
||||
self.initializer_range = initializer_range
|
||||
self.leaky_relu_slope = leaky_relu_slope
|
||||
self.normalize_before = normalize_before
|
||||
super().__init__(**kwargs)
|
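As a quick sanity check on the configuration above, the sketch below instantiates the defaults and derives the waveform-to-frame downsampling ratio of the speech encoder pre-net from `conv_stride`, mirroring what `inputs_to_logits_ratio` returns. The 16 kHz sampling rate is an assumption taken from the feature extractor defaults elsewhere in this PR, not from this config.

```python
import functools
import operator

from transformers import SpeechT5Config

config = SpeechT5Config()

# The convolutional feature encoder downsamples the raw waveform by the
# product of its strides: 5 * 2 * 2 * 2 * 2 * 2 * 2 = 320 samples per frame.
ratio = functools.reduce(operator.mul, config.conv_stride, 1)
print(ratio)  # 320

# Assuming 16 kHz input audio, that is one encoder frame every 20 ms.
print(1000 * ratio / 16000)  # 20.0
```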
src/transformers/models/speecht5/convert_hifigan.py (new file, 108 lines)
@@ -0,0 +1,108 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Convert SpeechT5 HiFi-GAN checkpoint."""
|
||||
|
||||
import argparse
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
|
||||
from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig, logging
|
||||
|
||||
|
||||
logging.set_verbosity_info()
|
||||
logger = logging.get_logger("transformers.models.speecht5")
|
||||
|
||||
|
||||
def load_weights(checkpoint, hf_model, config):
|
||||
hf_model.apply_weight_norm()
|
||||
|
||||
hf_model.conv_pre.weight_g.data = checkpoint["input_conv.weight_g"]
|
||||
hf_model.conv_pre.weight_v.data = checkpoint["input_conv.weight_v"]
|
||||
hf_model.conv_pre.bias.data = checkpoint["input_conv.bias"]
|
||||
|
||||
for i in range(len(config.upsample_rates)):
|
||||
hf_model.upsampler[i].weight_g.data = checkpoint[f"upsamples.{i}.1.weight_g"]
|
||||
hf_model.upsampler[i].weight_v.data = checkpoint[f"upsamples.{i}.1.weight_v"]
|
||||
hf_model.upsampler[i].bias.data = checkpoint[f"upsamples.{i}.1.bias"]
|
||||
|
||||
for i in range(len(config.upsample_rates) * len(config.resblock_kernel_sizes)):
|
||||
for j in range(len(config.resblock_dilation_sizes)):
|
||||
hf_model.resblocks[i].convs1[j].weight_g.data = checkpoint[f"blocks.{i}.convs1.{j}.1.weight_g"]
|
||||
hf_model.resblocks[i].convs1[j].weight_v.data = checkpoint[f"blocks.{i}.convs1.{j}.1.weight_v"]
|
||||
hf_model.resblocks[i].convs1[j].bias.data = checkpoint[f"blocks.{i}.convs1.{j}.1.bias"]
|
||||
|
||||
hf_model.resblocks[i].convs2[j].weight_g.data = checkpoint[f"blocks.{i}.convs2.{j}.1.weight_g"]
|
||||
hf_model.resblocks[i].convs2[j].weight_v.data = checkpoint[f"blocks.{i}.convs2.{j}.1.weight_v"]
|
||||
hf_model.resblocks[i].convs2[j].bias.data = checkpoint[f"blocks.{i}.convs2.{j}.1.bias"]
|
||||
|
||||
hf_model.conv_post.weight_g.data = checkpoint["output_conv.1.weight_g"]
|
||||
hf_model.conv_post.weight_v.data = checkpoint["output_conv.1.weight_v"]
|
||||
hf_model.conv_post.bias.data = checkpoint["output_conv.1.bias"]
|
||||
|
||||
hf_model.remove_weight_norm()
|
||||
|
||||
|
||||
@torch.no_grad()
|
||||
def convert_hifigan_checkpoint(
|
||||
checkpoint_path,
|
||||
stats_path,
|
||||
pytorch_dump_folder_path,
|
||||
config_path=None,
|
||||
repo_id=None,
|
||||
):
|
||||
if config_path is not None:
|
||||
config = SpeechT5HifiGanConfig.from_pretrained(config_path)
|
||||
else:
|
||||
config = SpeechT5HifiGanConfig()
|
||||
|
||||
model = SpeechT5HifiGan(config)
|
||||
|
||||
orig_checkpoint = torch.load(checkpoint_path)
|
||||
load_weights(orig_checkpoint["model"]["generator"], model, config)
|
||||
|
||||
stats = np.load(stats_path)
|
||||
mean = stats[0].reshape(-1)
|
||||
scale = stats[1].reshape(-1)
|
||||
model.mean = torch.from_numpy(mean).float()
|
||||
model.scale = torch.from_numpy(scale).float()
|
||||
|
||||
model.save_pretrained(pytorch_dump_folder_path)
|
||||
|
||||
if repo_id:
|
||||
print("Pushing to the hub...")
|
||||
model.push_to_hub(repo_id)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument("--checkpoint_path", required=True, default=None, type=str, help="Path to original checkpoint")
|
||||
parser.add_argument("--stats_path", required=True, default=None, type=str, help="Path to stats.npy file")
|
||||
parser.add_argument("--config_path", default=None, type=str, help="Path to hf config.json of model to convert")
|
||||
parser.add_argument(
|
||||
"--pytorch_dump_folder_path", required=True, default=None, type=str, help="Path to the output PyTorch model."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--push_to_hub", default=None, type=str, help="Where to upload the converted model on the 🤗 hub."
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
convert_hifigan_checkpoint(
|
||||
args.checkpoint_path,
|
||||
args.stats_path,
|
||||
args.pytorch_dump_folder_path,
|
||||
args.config_path,
|
||||
args.push_to_hub,
|
||||
)
|
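For reference, a minimal sketch of driving the HiFi-GAN converter from Python rather than the CLI above. The file paths are hypothetical placeholders, and `stats.npy` is assumed to hold the spectrogram mean in row 0 and the scale in row 1, which is what `convert_hifigan_checkpoint` reads.

```python
# Assumes the conversion script is importable from a source checkout of transformers.
from transformers.models.speecht5.convert_hifigan import convert_hifigan_checkpoint

convert_hifigan_checkpoint(
    checkpoint_path="./hifigan/checkpoint.pt",         # placeholder: original fairseq checkpoint
    stats_path="./hifigan/stats.npy",                  # placeholder: normalization statistics
    pytorch_dump_folder_path="./speecht5_hifigan_hf",  # placeholder: output folder
    config_path=None,  # fall back to the default SpeechT5HifiGanConfig
    repo_id=None,      # set to a hub repo name to also push the converted model
)
```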
@@ -0,0 +1,402 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Convert SpeechT5 checkpoint."""
|
||||
|
||||
import argparse
|
||||
|
||||
import torch
|
||||
|
||||
from transformers import (
|
||||
SpeechT5Config,
|
||||
SpeechT5FeatureExtractor,
|
||||
SpeechT5ForSpeechToSpeech,
|
||||
SpeechT5ForSpeechToText,
|
||||
SpeechT5ForTextToSpeech,
|
||||
SpeechT5Processor,
|
||||
SpeechT5Tokenizer,
|
||||
logging,
|
||||
)
|
||||
from transformers.tokenization_utils import AddedToken
|
||||
|
||||
|
||||
logging.set_verbosity_info()
|
||||
logger = logging.get_logger("transformers.models.speecht5")
|
||||
|
||||
MAPPING_SPEECH_ENCODER_PRENET = {
|
||||
"speech_encoder_prenet.layer_norm": "speecht5.encoder.prenet.feature_projection.layer_norm",
|
||||
"speech_encoder_prenet.post_extract_proj": "speecht5.encoder.prenet.feature_projection.projection",
|
||||
"speech_encoder_prenet.pos_conv.0": "speecht5.encoder.prenet.pos_conv_embed.conv",
|
||||
"speech_encoder_prenet.mask_emb": "speecht5.encoder.prenet.masked_spec_embed",
|
||||
}
|
||||
MAPPING_TEXT_ENCODER_PRENET = {
|
||||
"text_encoder_prenet.encoder_prenet.0": "speecht5.encoder.prenet.embed_tokens",
|
||||
"text_encoder_prenet.encoder_prenet.1.alpha": "speecht5.encoder.prenet.encode_positions.alpha",
|
||||
}
|
||||
MAPPING_SPEECH_DECODER_PRENET = {
|
||||
"speech_decoder_prenet.decoder_prenet.0.0.prenet.0.0": "speecht5.decoder.prenet.layers.0",
|
||||
"speech_decoder_prenet.decoder_prenet.0.0.prenet.1.0": "speecht5.decoder.prenet.layers.1",
|
||||
"speech_decoder_prenet.decoder_prenet.0.1": "speecht5.decoder.prenet.final_layer",
|
||||
"speech_decoder_prenet.decoder_prenet.1.alpha": "speecht5.decoder.prenet.encode_positions.alpha",
|
||||
"speech_decoder_prenet.spkembs_layer.0": "speecht5.decoder.prenet.speaker_embeds_layer",
|
||||
}
|
||||
MAPPING_SPEECH_DECODER_POSTNET = {
|
||||
"speech_decoder_postnet.feat_out": "speech_decoder_postnet.feat_out",
|
||||
"speech_decoder_postnet.prob_out": "speech_decoder_postnet.prob_out",
|
||||
"speech_decoder_postnet.postnet.postnet.0.0": "speech_decoder_postnet.layers.0.conv",
|
||||
"speech_decoder_postnet.postnet.postnet.0.1": "speech_decoder_postnet.layers.0.batch_norm",
|
||||
"speech_decoder_postnet.postnet.postnet.1.0": "speech_decoder_postnet.layers.1.conv",
|
||||
"speech_decoder_postnet.postnet.postnet.1.1": "speech_decoder_postnet.layers.1.batch_norm",
|
||||
"speech_decoder_postnet.postnet.postnet.2.0": "speech_decoder_postnet.layers.2.conv",
|
||||
"speech_decoder_postnet.postnet.postnet.2.1": "speech_decoder_postnet.layers.2.batch_norm",
|
||||
"speech_decoder_postnet.postnet.postnet.3.0": "speech_decoder_postnet.layers.3.conv",
|
||||
"speech_decoder_postnet.postnet.postnet.3.1": "speech_decoder_postnet.layers.3.batch_norm",
|
||||
"speech_decoder_postnet.postnet.postnet.4.0": "speech_decoder_postnet.layers.4.conv",
|
||||
"speech_decoder_postnet.postnet.postnet.4.1": "speech_decoder_postnet.layers.4.batch_norm",
|
||||
}
|
||||
MAPPING_TEXT_DECODER_PRENET = {
|
||||
"text_decoder_prenet.embed_tokens": "speecht5.decoder.prenet.embed_tokens",
|
||||
}
|
||||
MAPPING_TEXT_DECODER_POSTNET = {
|
||||
"text_decoder_postnet.output_projection": "text_decoder_postnet.lm_head",
|
||||
}
|
||||
MAPPING_ENCODER = {
|
||||
"encoder.layers.*.self_attn.k_proj": "speecht5.encoder.wrapped_encoder.layers.*.attention.k_proj",
|
||||
"encoder.layers.*.self_attn.v_proj": "speecht5.encoder.wrapped_encoder.layers.*.attention.v_proj",
|
||||
"encoder.layers.*.self_attn.q_proj": "speecht5.encoder.wrapped_encoder.layers.*.attention.q_proj",
|
||||
"encoder.layers.*.self_attn.out_proj": "speecht5.encoder.wrapped_encoder.layers.*.attention.out_proj",
|
||||
"encoder.layers.*.self_attn_layer_norm": "speecht5.encoder.wrapped_encoder.layers.*.layer_norm",
|
||||
"encoder.layers.*.fc1": "speecht5.encoder.wrapped_encoder.layers.*.feed_forward.intermediate_dense",
|
||||
"encoder.layers.*.fc2": "speecht5.encoder.wrapped_encoder.layers.*.feed_forward.output_dense",
|
||||
"encoder.layers.*.final_layer_norm": "speecht5.encoder.wrapped_encoder.layers.*.final_layer_norm",
|
||||
"encoder.layer_norm": "speecht5.encoder.wrapped_encoder.layer_norm",
|
||||
"encoder.pos_emb.pe_k": "speecht5.encoder.wrapped_encoder.embed_positions.pe_k",
|
||||
}
|
||||
MAPPING_DECODER = {
|
||||
"decoder.layers.*.self_attn.k_proj": "speecht5.decoder.wrapped_decoder.layers.*.self_attn.k_proj",
|
||||
"decoder.layers.*.self_attn.v_proj": "speecht5.decoder.wrapped_decoder.layers.*.self_attn.v_proj",
|
||||
"decoder.layers.*.self_attn.q_proj": "speecht5.decoder.wrapped_decoder.layers.*.self_attn.q_proj",
|
||||
"decoder.layers.*.self_attn.out_proj": "speecht5.decoder.wrapped_decoder.layers.*.self_attn.out_proj",
|
||||
"decoder.layers.*.self_attn_layer_norm": "speecht5.decoder.wrapped_decoder.layers.*.self_attn_layer_norm",
|
||||
"decoder.layers.*.encoder_attn.k_proj": "speecht5.decoder.wrapped_decoder.layers.*.encoder_attn.k_proj",
|
||||
"decoder.layers.*.encoder_attn.v_proj": "speecht5.decoder.wrapped_decoder.layers.*.encoder_attn.v_proj",
|
||||
"decoder.layers.*.encoder_attn.q_proj": "speecht5.decoder.wrapped_decoder.layers.*.encoder_attn.q_proj",
|
||||
"decoder.layers.*.encoder_attn.out_proj": "speecht5.decoder.wrapped_decoder.layers.*.encoder_attn.out_proj",
|
||||
"decoder.layers.*.encoder_attn_layer_norm": "speecht5.decoder.wrapped_decoder.layers.*.encoder_attn_layer_norm",
|
||||
"decoder.layers.*.fc1": "speecht5.decoder.wrapped_decoder.layers.*.feed_forward.intermediate_dense",
|
||||
"decoder.layers.*.fc2": "speecht5.decoder.wrapped_decoder.layers.*.feed_forward.output_dense",
|
||||
"decoder.layers.*.final_layer_norm": "speecht5.decoder.wrapped_decoder.layers.*.final_layer_norm",
|
||||
}
|
||||
MAPPING_S2T = {
|
||||
**MAPPING_SPEECH_ENCODER_PRENET,
|
||||
**MAPPING_ENCODER,
|
||||
**MAPPING_DECODER,
|
||||
**MAPPING_TEXT_DECODER_PRENET,
|
||||
**MAPPING_TEXT_DECODER_POSTNET,
|
||||
}
|
||||
MAPPING_T2S = {
|
||||
**MAPPING_TEXT_ENCODER_PRENET,
|
||||
**MAPPING_ENCODER,
|
||||
**MAPPING_DECODER,
|
||||
**MAPPING_SPEECH_DECODER_PRENET,
|
||||
**MAPPING_SPEECH_DECODER_POSTNET,
|
||||
}
|
||||
MAPPING_S2S = {
|
||||
**MAPPING_SPEECH_ENCODER_PRENET,
|
||||
**MAPPING_ENCODER,
|
||||
**MAPPING_DECODER,
|
||||
**MAPPING_SPEECH_DECODER_PRENET,
|
||||
**MAPPING_SPEECH_DECODER_POSTNET,
|
||||
}
|
||||
TOP_LEVEL_KEYS = []
|
||||
IGNORE_KEYS = [
|
||||
"encoder.version",
|
||||
"encoder.layers.*.norm_k.weight",
|
||||
"encoder.layers.*.norm_k.bias",
|
||||
"decoder.version",
|
||||
"decoder.layers.*.norm_k.weight",
|
||||
"decoder.layers.*.norm_k.bias",
|
||||
"decoder.pos_emb.pe_k",
|
||||
"speech_encoder_prenet.embed_positions._float_tensor",
|
||||
"text_decoder_prenet.embed_positions._float_tensor",
|
||||
]
|
||||
IGNORE_KEYS_S2T = IGNORE_KEYS + [
|
||||
"encoder.proj",
|
||||
"text_encoder_prenet.*",
|
||||
"speech_decoder_prenet.*",
|
||||
"speech_decoder_postnet.*",
|
||||
]
|
||||
IGNORE_KEYS_T2S = IGNORE_KEYS + [
|
||||
"encoder.proj",
|
||||
"speech_encoder_prenet.*",
|
||||
"text_decoder_prenet.*",
|
||||
"text_decoder_postnet.*",
|
||||
]
|
||||
IGNORE_KEYS_S2S = IGNORE_KEYS + [
|
||||
"encoder.proj",
|
||||
"text_encoder_prenet.*",
|
||||
"text_decoder_prenet.*",
|
||||
"text_decoder_postnet.*",
|
||||
]
|
||||
|
||||
|
||||
def set_recursively(hf_pointer, key, value, full_name, weight_type):
|
||||
for attribute in key.split("."):
|
||||
hf_pointer = getattr(hf_pointer, attribute)
|
||||
|
||||
if weight_type is not None:
|
||||
hf_shape = getattr(hf_pointer, weight_type).shape
|
||||
else:
|
||||
hf_shape = hf_pointer.shape
|
||||
|
||||
if hf_shape != value.shape:
|
||||
raise ValueError(
|
||||
f"Shape of hf {key + '.' + weight_type if weight_type is not None else ''} is {hf_shape}, but should be"
|
||||
f" {value.shape} for {full_name}"
|
||||
)
|
||||
|
||||
if weight_type == "weight":
|
||||
hf_pointer.weight.data = value
|
||||
elif weight_type == "weight_g":
|
||||
hf_pointer.weight_g.data = value
|
||||
elif weight_type == "weight_v":
|
||||
hf_pointer.weight_v.data = value
|
||||
elif weight_type == "bias":
|
||||
hf_pointer.bias.data = value
|
||||
elif weight_type == "running_mean":
|
||||
hf_pointer.running_mean.data = value
|
||||
elif weight_type == "running_var":
|
||||
hf_pointer.running_var.data = value
|
||||
elif weight_type == "num_batches_tracked":
|
||||
hf_pointer.num_batches_tracked.data = value
|
||||
else:
|
||||
hf_pointer.data = value
|
||||
|
||||
logger.info(f"{key + ('.' + weight_type if weight_type is not None else '')} was initialized from {full_name}.")
|
||||
|
||||
|
||||
def should_ignore(name, ignore_keys):
|
||||
for key in ignore_keys:
|
||||
if key.endswith(".*"):
|
||||
if name.startswith(key[:-1]):
|
||||
return True
|
||||
elif ".*." in key:
|
||||
prefix, suffix = key.split(".*.")
|
||||
if prefix in name and suffix in name:
|
||||
return True
|
||||
elif key in name:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def recursively_load_weights(fairseq_dict, hf_model, task):
|
||||
unused_weights = []
|
||||
|
||||
if task == "s2t":
|
||||
feature_encoder = hf_model.speecht5.encoder.prenet.feature_encoder
|
||||
MAPPING = MAPPING_S2T
|
||||
IGNORE_KEYS = IGNORE_KEYS_S2T
|
||||
elif task == "t2s":
|
||||
feature_encoder = None
|
||||
MAPPING = MAPPING_T2S
|
||||
IGNORE_KEYS = IGNORE_KEYS_T2S
|
||||
elif task == "s2s":
|
||||
feature_encoder = hf_model.speecht5.encoder.prenet.feature_encoder
|
||||
MAPPING = MAPPING_S2S
|
||||
IGNORE_KEYS = IGNORE_KEYS_S2S
|
||||
else:
|
||||
raise ValueError(f"Unsupported task: {task}")
|
||||
|
||||
for name, value in fairseq_dict.items():
|
||||
if should_ignore(name, IGNORE_KEYS):
|
||||
logger.info(f"{name} was ignored")
|
||||
continue
|
||||
|
||||
is_used = False
|
||||
if "conv_layers" in name:
|
||||
load_conv_layer(
|
||||
name,
|
||||
value,
|
||||
feature_encoder,
|
||||
unused_weights,
|
||||
hf_model.config.feat_extract_norm == "group",
|
||||
)
|
||||
is_used = True
|
||||
else:
|
||||
for key, mapped_key in MAPPING.items():
|
||||
# mapped_key = "speecht5." + mapped_key if mapped_key not in TOP_LEVEL_KEYS else mapped_key
|
||||
|
||||
if "*" in key:
|
||||
prefix, suffix = key.split(".*.")
|
||||
if prefix in name and suffix in name:
|
||||
key = suffix
|
||||
|
||||
# if key in name or key.split("w2v_model.")[-1] == name.split(".")[0]:
|
||||
if key in name:
|
||||
is_used = True
|
||||
if "*" in mapped_key:
|
||||
layer_index = name.split(key)[0].split(".")[-2]
|
||||
mapped_key = mapped_key.replace("*", layer_index)
|
||||
if "weight_g" in name:
|
||||
weight_type = "weight_g"
|
||||
elif "weight_v" in name:
|
||||
weight_type = "weight_v"
|
||||
elif "bias" in name:
|
||||
weight_type = "bias"
|
||||
elif "weight" in name:
|
||||
weight_type = "weight"
|
||||
elif "running_mean" in name:
|
||||
weight_type = "running_mean"
|
||||
elif "running_var" in name:
|
||||
weight_type = "running_var"
|
||||
elif "num_batches_tracked" in name:
|
||||
weight_type = "num_batches_tracked"
|
||||
else:
|
||||
weight_type = None
|
||||
set_recursively(hf_model, mapped_key, value, name, weight_type)
|
||||
continue
|
||||
if not is_used:
|
||||
unused_weights.append(name)
|
||||
|
||||
logger.warning(f"Unused weights: {unused_weights}")
|
||||
|
||||
|
||||
def load_conv_layer(full_name, value, feature_extractor, unused_weights, use_group_norm):
|
||||
name = full_name.split("conv_layers.")[-1]
|
||||
items = name.split(".")
|
||||
layer_id = int(items[0])
|
||||
type_id = int(items[1])
|
||||
|
||||
if type_id == 0:
|
||||
if "bias" in name:
|
||||
if value.shape != feature_extractor.conv_layers[layer_id].conv.bias.data.shape:
|
||||
raise ValueError(
|
||||
f"{full_name} has size {value.shape}, but"
|
||||
f" {feature_extractor.conv_layers[layer_id].conv.bias.data.shape} was found."
|
||||
)
|
||||
feature_extractor.conv_layers[layer_id].conv.bias.data = value
|
||||
logger.info(f"Feat extract conv layer {layer_id} was initialized from {full_name}.")
|
||||
elif "weight" in name:
|
||||
if value.shape != feature_extractor.conv_layers[layer_id].conv.weight.data.shape:
|
||||
raise ValueError(
|
||||
f"{full_name} has size {value.shape}, but"
|
||||
f" {feature_extractor.conv_layers[layer_id].conv.weight.data.shape} was found."
|
||||
)
|
||||
feature_extractor.conv_layers[layer_id].conv.weight.data = value
|
||||
logger.info(f"Feat extract conv layer {layer_id} was initialized from {full_name}.")
|
||||
elif (type_id == 2 and not use_group_norm) or (type_id == 2 and layer_id == 0 and use_group_norm):
|
||||
if "bias" in name:
|
||||
if value.shape != feature_extractor.conv_layers[layer_id].layer_norm.bias.data.shape:
|
||||
raise ValueError(
|
||||
f"{full_name} has size {value.shape}, but"
|
||||
f" {feature_extractor.conv_layers[layer_id].layer_norm.bias.data.shape} was found."
|
||||
)
|
||||
feature_extractor.conv_layers[layer_id].layer_norm.bias.data = value
|
||||
logger.info(f"Feat extract layer norm weight of layer {layer_id} was initialized from {full_name}.")
|
||||
elif "weight" in name:
|
||||
if value.shape != feature_extractor.conv_layers[layer_id].layer_norm.weight.data.shape:
|
||||
raise ValueError(
|
||||
f"{full_name} has size {value.shape}, but"
|
||||
f" {feature_extractor.conv_layers[layer_id].layer_norm.weight.data.shape} was found."
|
||||
)
|
||||
feature_extractor.conv_layers[layer_id].layer_norm.weight.data = value
|
||||
logger.info(f"Feat extract layer norm weight of layer {layer_id} was initialized from {full_name}.")
|
||||
else:
|
||||
unused_weights.append(full_name)
|
||||
|
||||
|
||||
@torch.no_grad()
|
||||
def convert_speecht5_checkpoint(
|
||||
task,
|
||||
checkpoint_path,
|
||||
pytorch_dump_folder_path,
|
||||
config_path=None,
|
||||
vocab_path=None,
|
||||
repo_id=None,
|
||||
):
|
||||
"""
|
||||
Copy/paste/tweak model's weights to transformers design.
|
||||
"""
|
||||
if config_path is not None:
|
||||
config = SpeechT5Config.from_pretrained(config_path)
|
||||
else:
|
||||
config = SpeechT5Config()
|
||||
|
||||
if task == "s2t":
|
||||
config.max_length = config.max_text_positions
|
||||
model = SpeechT5ForSpeechToText(config)
|
||||
elif task == "t2s":
|
||||
config.max_speech_positions = 1876
|
||||
config.max_text_positions = 600
|
||||
config.max_length = config.max_speech_positions
|
||||
model = SpeechT5ForTextToSpeech(config)
|
||||
elif task == "s2s":
|
||||
config.max_speech_positions = 1876
|
||||
config.max_length = config.max_speech_positions
|
||||
model = SpeechT5ForSpeechToSpeech(config)
|
||||
else:
|
||||
raise ValueError(f"Unknown task name: {task}")
|
||||
|
||||
if vocab_path:
|
||||
tokenizer = SpeechT5Tokenizer(vocab_path, model_max_length=config.max_text_positions)
|
||||
|
||||
if task == "pretrain":
|
||||
# Mask token behaves like a normal word, i.e. include the space before it
|
||||
mask_token = AddedToken("<mask>", lstrip=True, rstrip=False)
|
||||
tokenizer.mask_token = mask_token
|
||||
tokenizer.add_special_tokens({"mask_token": mask_token})
|
||||
tokenizer.add_tokens(["<ctc_blank>"])
|
||||
|
||||
feature_extractor = SpeechT5FeatureExtractor()
|
||||
processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)
|
||||
processor.save_pretrained(pytorch_dump_folder_path)
|
||||
|
||||
fairseq_checkpoint = torch.load(checkpoint_path)
|
||||
recursively_load_weights(fairseq_checkpoint["model"], model, task)
|
||||
|
||||
model.save_pretrained(pytorch_dump_folder_path)
|
||||
|
||||
if repo_id:
|
||||
print("Pushing to the hub...")
|
||||
processor.push_to_hub(repo_id)
|
||||
model.push_to_hub(repo_id)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument(
|
||||
"--task",
|
||||
default="s2t",
|
||||
type=str,
|
||||
help="Type of the SpeechT5 model you'd like to convert. Should be one of 's2t', 't2s', 's2s'.",
|
||||
)
|
||||
parser.add_argument("--checkpoint_path", required=True, default=None, type=str, help="Path to fairseq checkpoint")
|
||||
parser.add_argument("--vocab_path", default=None, type=str, help="Path to SentencePiece model")
|
||||
parser.add_argument("--config_path", default=None, type=str, help="Path to hf config.json of model to convert")
|
||||
parser.add_argument(
|
||||
"--pytorch_dump_folder_path", required=True, default=None, type=str, help="Path to the output PyTorch model."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--push_to_hub", default=None, type=str, help="Where to upload the converted model on the 🤗 hub."
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
convert_speecht5_checkpoint(
|
||||
args.task,
|
||||
args.checkpoint_path,
|
||||
args.pytorch_dump_folder_path,
|
||||
args.config_path,
|
||||
args.vocab_path,
|
||||
args.push_to_hub,
|
||||
)
|
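The SpeechT5 conversion can be sketched as a direct call as well, here for the text-to-speech checkpoint; the file paths are placeholders. Note that `vocab_path` has to point at the SentencePiece model, otherwise the `tokenizer` variable is never created and building the `SpeechT5Processor` above fails.

```python
# A hypothetical invocation, assumed to run from within the conversion script above.
convert_speecht5_checkpoint(
    task="t2s",
    checkpoint_path="./speecht5_tts.pt",           # placeholder: original fairseq checkpoint
    pytorch_dump_folder_path="./speecht5_tts_hf",  # placeholder: output folder
    config_path=None,   # default SpeechT5Config; the t2s branch then overrides
                        # max_speech_positions=1876 and max_text_positions=600
    vocab_path="./spm_char.model",                 # placeholder: SentencePiece model
    repo_id=None,
)
```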
src/transformers/models/speecht5/feature_extraction_speecht5.py (new file, 450 lines)
@@ -0,0 +1,450 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Feature extractor class for SpeechT5."""
|
||||
|
||||
from typing import List, Optional, Union
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
import torchaudio
|
||||
|
||||
from ...feature_extraction_sequence_utils import SequenceFeatureExtractor
|
||||
from ...feature_extraction_utils import BatchFeature
|
||||
from ...utils import PaddingStrategy, TensorType, logging
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
|
||||
class SpeechT5FeatureExtractor(SequenceFeatureExtractor):
|
||||
r"""
|
||||
Constructs a SpeechT5 feature extractor.
|
||||
|
||||
This class can pre-process a raw speech signal by (optionally) normalizing to zero-mean unit-variance, for use by
|
||||
the SpeechT5 speech encoder prenet.
|
||||
|
||||
This class can also extract log-mel filter bank features from raw speech, for use by the SpeechT5 speech decoder
|
||||
prenet.
|
||||
|
||||
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
|
||||
most of the main methods. Users should refer to this superclass for more information regarding those methods.
|
||||
|
||||
Args:
|
||||
feature_size (`int`, *optional*, defaults to 1):
|
||||
The feature dimension of the extracted features.
|
||||
sampling_rate (`int`, *optional*, defaults to 16000):
|
||||
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
|
||||
padding_value (`float`, *optional*, defaults to 0.0):
|
||||
The value that is used to fill the padding values.
|
||||
do_normalize (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
|
||||
improve the performance for some models.
|
||||
num_mel_bins (`int`, *optional*, defaults to 80):
|
||||
The number of mel-frequency bins in the extracted spectrogram features.
|
||||
hop_length (`int`, *optional*, defaults to 16):
|
||||
Number of ms between windows. Otherwise referred to as "shift" in many papers.
|
||||
win_length (`int`, *optional*, defaults to 64):
|
||||
Number of ms per window.
|
||||
win_function (`str`, *optional*, defaults to `"hann_window"`):
|
||||
Name for the window function used for windowing, must be accessible via `torch.{win_function}`
|
||||
frame_signal_scale (`float`, *optional*, defaults to 1.0):
|
||||
Constant multiplied in creating the frames before applying DFT.
|
||||
fmin (`float`, *optional*, defaults to 80):
|
||||
Minimum mel frequency in Hz.
|
||||
fmax (`float`, *optional*, defaults to 7600):
|
||||
Maximum mel frequency in Hz.
|
||||
mel_floor (`float`, *optional*, defaults to 1e-10):
|
||||
Minimum value of mel frequency banks.
|
||||
reduction_factor (`int`, *optional*, defaults to 2):
|
||||
Spectrogram length reduction factor.
|
||||
return_attention_mask (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not [`~SpeechT5FeatureExtractor.__call__`] should return `attention_mask`.
|
||||
"""
|
||||
|
||||
model_input_names = ["input_values", "attention_mask"]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
feature_size: int = 1,
|
||||
sampling_rate: int = 16000,
|
||||
padding_value: float = 0.0,
|
||||
do_normalize: bool = False,
|
||||
num_mel_bins: int = 80,
|
||||
hop_length: int = 16,
|
||||
win_length: int = 64,
|
||||
win_function: str = "hann_window",
|
||||
frame_signal_scale: float = 1.0,
|
||||
fmin: float = 80,
|
||||
fmax: float = 7600,
|
||||
mel_floor: float = 1e-10,
|
||||
reduction_factor: int = 2,
|
||||
return_attention_mask: bool = True,
|
||||
**kwargs
|
||||
):
|
||||
super().__init__(feature_size=feature_size, sampling_rate=sampling_rate, padding_value=padding_value, **kwargs)
|
||||
self.do_normalize = do_normalize
|
||||
self.return_attention_mask = return_attention_mask
|
||||
|
||||
self.num_mel_bins = num_mel_bins
|
||||
self.hop_length = hop_length
|
||||
self.win_length = win_length
|
||||
self.win_function = win_function
|
||||
self.frame_signal_scale = frame_signal_scale
|
||||
self.fmin = fmin
|
||||
self.fmax = fmax
|
||||
self.mel_floor = mel_floor
|
||||
self.reduction_factor = reduction_factor
|
||||
|
||||
self.sample_size = win_length * sampling_rate // 1000
|
||||
self.sample_stride = hop_length * sampling_rate // 1000
|
||||
|
||||
self.n_fft = 2 ** int(np.ceil(np.log2(self.sample_size)))
|
||||
self.n_freqs = (self.n_fft // 2) + 1
|
||||
|
||||
@staticmethod
|
||||
# Copied from transformers.models.wav2vec2.feature_extraction_wav2vec2.Wav2Vec2FeatureExtractor.zero_mean_unit_var_norm
|
||||
def zero_mean_unit_var_norm(
|
||||
input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
|
||||
) -> List[np.ndarray]:
|
||||
"""
|
||||
Every array in the list is normalized to have zero mean and unit variance
|
||||
"""
|
||||
if attention_mask is not None:
|
||||
attention_mask = np.array(attention_mask, np.int32)
|
||||
normed_input_values = []
|
||||
|
||||
for vector, length in zip(input_values, attention_mask.sum(-1)):
|
||||
normed_slice = (vector - vector[:length].mean()) / np.sqrt(vector[:length].var() + 1e-7)
|
||||
if length < normed_slice.shape[0]:
|
||||
normed_slice[length:] = padding_value
|
||||
|
||||
normed_input_values.append(normed_slice)
|
||||
else:
|
||||
normed_input_values = [(x - x.mean()) / np.sqrt(x.var() + 1e-7) for x in input_values]
|
||||
|
||||
return normed_input_values
|
||||
|
||||
@staticmethod
|
||||
def _center_pad(one_waveform, n_fft, pad_mode):
|
||||
padding = [(int(n_fft // 2), int(n_fft // 2))]
|
||||
return np.pad(one_waveform, padding, mode=pad_mode)
|
||||
|
||||
@staticmethod
|
||||
# Copied from transformers.models.mctct.feature_extraction_mctct.MCTCTFeatureExtractor._num_frames_calc
|
||||
def _num_frames_calc(in_size, frame_size, frame_stride):
|
||||
return int(1 + np.floor((in_size - frame_size) * 1 / frame_stride))
|
||||
|
||||
@staticmethod
|
||||
# Copied from transformers.models.mctct.feature_extraction_mctct.MCTCTFeatureExtractor._frame_signal
|
||||
def _frame_signal(one_waveform, n_frames, frame_signal_scale, window_length, sample_stride):
|
||||
scale = frame_signal_scale
|
||||
frames = np.zeros(n_frames * window_length)
|
||||
for frame_idx in range(n_frames):
|
||||
start = frame_idx * window_length
|
||||
end = (frame_idx + 1) * window_length
|
||||
wave_start = frame_idx * sample_stride
|
||||
wave_end = frame_idx * sample_stride + window_length
|
||||
frames[start:end] = scale * one_waveform[wave_start:wave_end]
|
||||
|
||||
return frames
|
||||
|
||||
@staticmethod
|
||||
# Copied from transformers.models.mctct.feature_extraction_mctct.MCTCTFeatureExtractor._windowing
|
||||
def _windowing(frames, window_length, window):
|
||||
if frames.size % window_length != 0:
|
||||
raise ValueError(
|
||||
f"`frames` is supposed to have length divisble by `window_length`, but is {frames.size} with"
|
||||
f" window_length={window_length}."
|
||||
)
|
||||
|
||||
shaped = frames.reshape(-1, window_length)
|
||||
shaped = window * shaped
|
||||
return shaped
|
||||
|
||||
@staticmethod
|
||||
# Copied from transformers.models.mctct.feature_extraction_mctct.MCTCTFeatureExtractor._dft
|
||||
def _dft(frames, K, n_frames, n_samples, n_fft):
|
||||
dft = np.zeros([n_frames, K])
|
||||
|
||||
for frame in range(n_frames):
|
||||
begin = frame * n_samples
|
||||
|
||||
inwards_buffer = frames[begin : begin + n_samples]
|
||||
inwards_buffer = np.pad(inwards_buffer, (0, n_fft - n_samples), "constant")
|
||||
out = np.fft.rfft(inwards_buffer)
|
||||
|
||||
dft[frame] = np.abs(out[:K])
|
||||
|
||||
return dft
|
||||
|
||||
def _extract_fbank_features(
|
||||
self,
|
||||
one_waveform: np.ndarray,
|
||||
) -> np.ndarray:
|
||||
"""
|
||||
Extracts log-mel filterbank features for one waveform vector (unbatched). Adapted from Flashlight's C++ MFSC
|
||||
code and librosa.
|
||||
"""
|
||||
one_waveform = self._center_pad(one_waveform, self.n_fft, "reflect")
|
||||
|
||||
n_frames = self._num_frames_calc(one_waveform.size, self.sample_size, self.sample_stride)
|
||||
|
||||
frames = self._frame_signal(
|
||||
one_waveform, n_frames, self.frame_signal_scale, self.sample_size, self.sample_stride
|
||||
)
|
||||
|
||||
window = getattr(torch, self.win_function)(window_length=self.sample_size, periodic=True)
|
||||
window = window.numpy()
|
||||
|
||||
frames = self._windowing(frames, self.sample_size, window)
|
||||
|
||||
dft_out = self._dft(frames.flatten(), self.n_freqs, n_frames, self.sample_size, self.n_fft)
|
||||
|
||||
fbanks = torchaudio.functional.melscale_fbanks(
|
||||
n_freqs=self.n_freqs,
|
||||
f_min=self.fmin,
|
||||
f_max=self.fmax,
|
||||
n_mels=self.num_mel_bins,
|
||||
sample_rate=self.sampling_rate,
|
||||
norm="slaney",
|
||||
mel_scale="slaney",
|
||||
)
|
||||
fbanks = fbanks.numpy()
|
||||
|
||||
return np.log10(np.maximum(self.mel_floor, np.dot(dft_out, fbanks)))
|
||||
|
||||
def _reduce(self, inputs):
|
||||
reduced = []
|
||||
for i in range(len(inputs)):
|
||||
reduced.append(inputs[i][self.reduction_factor - 1 :: self.reduction_factor])
|
||||
return reduced
|
||||
|
||||
def __call__(
|
||||
self,
|
||||
audio: Optional[Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]]] = None,
|
||||
audio_target: Optional[Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]]] = None,
|
||||
padding: Union[bool, str, PaddingStrategy] = False,
|
||||
max_length: Optional[int] = None,
|
||||
truncation: bool = False,
|
||||
pad_to_multiple_of: Optional[int] = None,
|
||||
return_attention_mask: Optional[bool] = None,
|
||||
return_tensors: Optional[Union[str, TensorType]] = None,
|
||||
sampling_rate: Optional[int] = None,
|
||||
**kwargs
|
||||
) -> BatchFeature:
|
||||
"""
|
||||
Main method to featurize and prepare for the model one or several sequence(s).
|
||||
|
||||
Pass in a value for `audio` to extract waveform features. Pass in a value for `audio_target` to extract log-mel
|
||||
spectrogram features.
|
||||
|
||||
Args:
|
||||
audio (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`, *optional*):
|
||||
The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float
|
||||
values, a list of numpy arrays or a list of list of float values. This outputs waveform features.
|
||||
audio_target (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`, *optional*):
|
||||
The sequence or batch of sequences to be processed as targets. Each sequence can be a numpy array, a
|
||||
list of float values, a list of numpy arrays or a list of list of float values. This outputs log-mel
|
||||
spectrogram features.
|
||||
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
|
||||
Select a strategy to pad the returned sequences (according to the model's padding side and padding
|
||||
index) among:
|
||||
|
||||
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
|
||||
sequence is provided).
|
||||
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
|
||||
acceptable input length for the model if that argument is not provided.
|
||||
- `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
|
||||
lengths).
|
||||
max_length (`int`, *optional*):
|
||||
Maximum length of the returned list and optionally padding length (see above).
|
||||
truncation (`bool`):
|
||||
Activates truncation to cut input sequences longer than *max_length* to *max_length*.
|
||||
pad_to_multiple_of (`int`, *optional*):
|
||||
If set will pad the sequence to a multiple of the provided value.
|
||||
|
||||
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
|
||||
`>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
|
||||
return_attention_mask (`bool`, *optional*):
|
||||
Whether to return the attention mask. If left to the default, will return the attention mask according
|
||||
to the specific feature_extractor's default.
|
||||
|
||||
[What are attention masks?](../glossary#attention-mask)
|
||||
|
||||
return_tensors (`str` or [`~utils.TensorType`], *optional*):
|
||||
If set, will return tensors instead of list of python integers. Acceptable values are:
|
||||
|
||||
- `'tf'`: Return TensorFlow `tf.constant` objects.
|
||||
- `'pt'`: Return PyTorch `torch.Tensor` objects.
|
||||
- `'np'`: Return Numpy `np.ndarray` objects.
|
||||
sampling_rate (`int`, *optional*):
|
||||
The sampling rate at which the `audio` or `audio_target` input was sampled. It is strongly recommended
|
||||
to pass `sampling_rate` at the forward call to prevent silent errors.
|
||||
"""
|
||||
if audio is None and audio_target is None:
|
||||
raise ValueError("You must provide either `audio` or `audio_target` values.")
|
||||
|
||||
if sampling_rate is not None:
|
||||
if sampling_rate != self.sampling_rate:
|
||||
raise ValueError(
|
||||
f"The model corresponding to this feature extractor: {self} was trained using a sampling rate of"
|
||||
f" {self.sampling_rate}. Please make sure that the provided audio input was sampled with"
|
||||
f" {self.sampling_rate} and not {sampling_rate}."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
"It is strongly recommended to pass the ``sampling_rate`` argument to this function. "
|
||||
"Failing to do so can result in silent errors that might be hard to debug."
|
||||
)
|
||||
|
||||
if audio is not None:
|
||||
inputs = self._process_audio(
|
||||
audio,
|
||||
False,
|
||||
padding,
|
||||
max_length,
|
||||
truncation,
|
||||
pad_to_multiple_of,
|
||||
return_attention_mask,
|
||||
return_tensors,
|
||||
**kwargs,
|
||||
)
|
||||
else:
|
||||
inputs = None
|
||||
|
||||
if audio_target is not None:
|
||||
inputs_target = self._process_audio(
|
||||
audio_target,
|
||||
True,
|
||||
padding,
|
||||
max_length,
|
||||
truncation,
|
||||
pad_to_multiple_of,
|
||||
return_attention_mask,
|
||||
return_tensors,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
if inputs is None:
|
||||
return inputs_target
|
||||
else:
|
||||
inputs["labels"] = inputs_target["input_values"]
|
||||
inputs["stop_labels"] = inputs_target["stop_labels"]
|
||||
decoder_attention_mask = inputs_target.get("attention_mask")
|
||||
if decoder_attention_mask is not None:
|
||||
inputs["decoder_attention_mask"] = decoder_attention_mask
|
||||
|
||||
return inputs
|
||||
|
||||
def _process_audio(
|
||||
self,
|
||||
speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
|
||||
is_target: bool = False,
|
||||
padding: Union[bool, str, PaddingStrategy] = False,
|
||||
max_length: Optional[int] = None,
|
||||
truncation: bool = False,
|
||||
pad_to_multiple_of: Optional[int] = None,
|
||||
return_attention_mask: Optional[bool] = None,
|
||||
return_tensors: Optional[Union[str, TensorType]] = None,
|
||||
**kwargs
|
||||
) -> BatchFeature:
|
||||
is_batched = bool(
|
||||
isinstance(speech, (list, tuple))
|
||||
and (isinstance(speech[0], np.ndarray) or isinstance(speech[0], (tuple, list)))
|
||||
)
|
||||
|
||||
if is_batched:
|
||||
speech = [np.asarray(speech, dtype=np.float32) for speech in speech]
|
||||
elif not is_batched and not isinstance(speech, np.ndarray):
|
||||
speech = np.asarray(speech, dtype=np.float32)
|
||||
elif isinstance(speech, np.ndarray) and speech.dtype is np.dtype(np.float64):
|
||||
speech = speech.astype(np.float32)
|
||||
|
||||
# always return batch
|
||||
if not is_batched:
|
||||
speech = [speech]
|
||||
|
||||
# needed to make pad() work on spectrogram inputs
|
||||
feature_size_hack = self.feature_size
|
||||
|
||||
# convert into correct format for padding
|
||||
if is_target:
|
||||
features = [self._extract_fbank_features(waveform) for waveform in speech]
|
||||
fbank_sizes = [len(x) for x in features]
|
||||
encoded_inputs = BatchFeature({"input_values": features})
|
||||
self.feature_size = self.num_mel_bins
|
||||
else:
|
||||
encoded_inputs = BatchFeature({"input_values": speech})
|
||||
|
||||
padded_inputs = self.pad(
|
||||
encoded_inputs,
|
||||
padding=padding,
|
||||
max_length=max_length,
|
||||
truncation=truncation,
|
||||
pad_to_multiple_of=pad_to_multiple_of,
|
||||
return_attention_mask=return_attention_mask,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
self.feature_size = feature_size_hack
|
||||
|
||||
# convert input values to correct format
|
||||
input_values = padded_inputs["input_values"]
|
||||
if not isinstance(input_values[0], np.ndarray):
|
||||
padded_inputs["input_values"] = [np.asarray(array, dtype=np.float32) for array in input_values]
|
||||
elif (
|
||||
not isinstance(input_values, np.ndarray)
|
||||
and isinstance(input_values[0], np.ndarray)
|
||||
and input_values[0].dtype is np.dtype(np.float64)
|
||||
):
|
||||
padded_inputs["input_values"] = [array.astype(np.float32) for array in input_values]
|
||||
elif isinstance(input_values, np.ndarray) and input_values.dtype is np.dtype(np.float64):
|
||||
padded_inputs["input_values"] = input_values.astype(np.float32)
|
||||
|
||||
# convert attention_mask to correct format
|
||||
attention_mask = padded_inputs.get("attention_mask")
|
||||
if attention_mask is not None:
|
||||
padded_inputs["attention_mask"] = [np.asarray(array, dtype=np.int32) for array in attention_mask]
|
||||
|
||||
# zero-mean and unit-variance normalization
|
||||
if not is_target and self.do_normalize:
|
||||
attention_mask = (
|
||||
attention_mask
|
||||
if self._get_padding_strategies(padding, max_length=max_length) is not PaddingStrategy.DO_NOT_PAD
|
||||
else None
|
||||
)
|
||||
padded_inputs["input_values"] = self.zero_mean_unit_var_norm(
|
||||
padded_inputs["input_values"], attention_mask=attention_mask, padding_value=self.padding_value
|
||||
)
|
||||
|
||||
if is_target:
|
||||
# make labels for stop prediction
|
||||
stop_labels = []
|
||||
for i, l in enumerate(fbank_sizes):
|
||||
labels = np.zeros(len(padded_inputs["input_values"][i]))
|
||||
labels[l - 1 :] = 1.0
|
||||
stop_labels.append(labels)
|
||||
padded_inputs["stop_labels"] = stop_labels
|
||||
|
||||
# thin out frames for reduction factor
|
||||
if self.reduction_factor > 1:
|
||||
padded_inputs["input_values"] = self._reduce(padded_inputs["input_values"])
|
||||
if attention_mask is not None:
|
||||
padded_inputs["attention_mask"] = self._reduce(padded_inputs["attention_mask"])
|
||||
|
||||
if return_tensors is not None:
|
||||
padded_inputs = padded_inputs.convert_to_tensors(return_tensors)
|
||||
|
||||
return padded_inputs
|
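A minimal usage sketch of the new feature extractor, assuming 16 kHz mono audio; the waveforms below are synthetic placeholders. Passing `audio` produces padded waveform `input_values`, while passing `audio_target` in the same call adds reduced log-mel `labels`, per-frame `stop_labels`, and a `decoder_attention_mask` (exact shapes depend on padding and the reduction factor).

```python
import numpy as np

from transformers import SpeechT5FeatureExtractor

extractor = SpeechT5FeatureExtractor()  # defaults: 16 kHz, 80 mel bins, reduction_factor=2

# Placeholder waveforms: one second of silence and half a second of noise.
speech = [np.zeros(16000, dtype=np.float32), np.random.randn(8000).astype(np.float32)]

features = extractor(
    audio=speech,
    audio_target=speech,  # here the same audio doubles as the spectrogram target
    sampling_rate=16000,
    padding=True,
    return_tensors="np",
)

print(features["input_values"].shape)  # padded waveforms, e.g. (2, 16000)
print(features["labels"].shape)        # reduced log-mel targets, (batch, frames, 80)
print(features["stop_labels"].shape)   # stop-token targets, one per unreduced frame
print(features["decoder_attention_mask"].shape)
```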
src/transformers/models/speecht5/modeling_speecht5.py (new file, 3060 lines)
(File diff suppressed because it is too large.)
src/transformers/models/speecht5/processing_speecht5.py (new file, 156 lines)
@@ -0,0 +1,156 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Speech processor class for SpeechT5."""

from ...processing_utils import ProcessorMixin


class SpeechT5Processor(ProcessorMixin):
    r"""
    Constructs a SpeechT5 processor which wraps a feature extractor and a tokenizer into a single processor.

    [`SpeechT5Processor`] offers all the functionalities of [`SpeechT5FeatureExtractor`] and [`SpeechT5Tokenizer`]. See
    the docstring of [`~SpeechT5Processor.__call__`] and [`~SpeechT5Processor.decode`] for more information.

    Args:
        feature_extractor (`SpeechT5FeatureExtractor`):
            An instance of [`SpeechT5FeatureExtractor`]. The feature extractor is a required input.
        tokenizer (`SpeechT5Tokenizer`):
            An instance of [`SpeechT5Tokenizer`]. The tokenizer is a required input.
    """
    feature_extractor_class = "SpeechT5FeatureExtractor"
    tokenizer_class = "SpeechT5Tokenizer"

    def __init__(self, feature_extractor, tokenizer):
        super().__init__(feature_extractor, tokenizer)

    def __call__(self, *args, **kwargs):
        """
        Processes audio and text inputs, as well as audio and text targets.

        You can process audio by using the argument `audio`, or process audio targets by using the argument
        `audio_target`. This forwards the arguments to SpeechT5FeatureExtractor's
        [`~SpeechT5FeatureExtractor.__call__`].

        You can process text by using the argument `text`, or process text labels by using the argument `text_target`.
        This forwards the arguments to SpeechT5Tokenizer's [`~SpeechT5Tokenizer.__call__`].

        Valid input combinations are:

        - `text` only
        - `audio` only
        - `text_target` only
        - `audio_target` only
        - `text` and `audio_target`
        - `audio` and `audio_target`
        - `text` and `text_target`
        - `audio` and `text_target`

        Please refer to the docstring of the above two methods for more information.
        """
        audio = kwargs.pop("audio", None)
        text = kwargs.pop("text", None)
        text_target = kwargs.pop("text_target", None)
        audio_target = kwargs.pop("audio_target", None)
        sampling_rate = kwargs.pop("sampling_rate", None)

        if audio is not None and text is not None:
            raise ValueError(
                "Cannot process both `audio` and `text` inputs. Did you mean `audio_target` or `text_target`?"
            )
        if audio_target is not None and text_target is not None:
            raise ValueError(
                "Cannot process both `audio_target` and `text_target` inputs. Did you mean `audio` or `text`?"
            )
        if audio is None and audio_target is None and text is None and text_target is None:
            raise ValueError(
                "You need to specify either an `audio`, `audio_target`, `text`, or `text_target` input to process."
            )

        if audio is not None:
            inputs = self.feature_extractor(audio, *args, sampling_rate=sampling_rate, **kwargs)
        elif text is not None:
            inputs = self.tokenizer(text, **kwargs)
        else:
            inputs = None

        if audio_target is not None:
            audio_target_features = self.feature_extractor(
                audio_target=audio_target, *args, sampling_rate=sampling_rate, **kwargs
            )
            if inputs is None:
                return audio_target_features
            else:
                inputs["labels"] = audio_target_features["input_values"]
                inputs["stop_labels"] = audio_target_features["stop_labels"]
                decoder_attention_mask = audio_target_features.get("attention_mask")
                if decoder_attention_mask is not None:
                    inputs["decoder_attention_mask"] = decoder_attention_mask

        if text_target is not None:
            encodings_target = self.tokenizer(text_target, **kwargs)
            if inputs is None:
                return encodings_target
            else:
                inputs["labels"] = encodings_target["input_ids"]
                decoder_attention_mask = encodings_target.get("attention_mask")
                if decoder_attention_mask is not None:
                    inputs["decoder_attention_mask"] = decoder_attention_mask

        return inputs

    def pad(self, *args, **kwargs):
        """
        This method forwards all its arguments to SpeechT5FeatureExtractor's [`~SpeechT5FeatureExtractor.pad`] and
        returns its output.

        You can process your labels by using the argument `labels` (either in the same call as your audio inputs, or
        in a separate call). This forwards its arguments to SpeechT5Tokenizer's [`~SpeechT5Tokenizer.pad`].

        Please refer to the docstring of the above two methods for more information.
        """
        input_features = kwargs.pop("input_features", None)
        labels = kwargs.pop("labels", None)

        if len(args) > 0:
            input_features = args[0]
            args = args[1:]

        if input_features is not None:
            input_features = self.feature_extractor.pad(input_features, *args, **kwargs)
        if labels is not None:
            labels = self.tokenizer.pad(labels, **kwargs)

        if labels is None:
            return input_features
        elif input_features is None:
            return labels
        else:
            input_features["labels"] = labels["input_ids"]
            return input_features

    def batch_decode(self, *args, **kwargs):
        """
        This method forwards all its arguments to SpeechT5Tokenizer's [`~SpeechT5Tokenizer.batch_decode`]. Please refer
        to the docstring of this method for more information.
        """
        return self.tokenizer.batch_decode(*args, **kwargs)

    def decode(self, *args, **kwargs):
        """
        This method forwards all its arguments to SpeechT5Tokenizer's [`~SpeechT5Tokenizer.decode`]. Please refer to
        the docstring of this method for more information.
        """
        return self.tokenizer.decode(*args, **kwargs)
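For orientation, a minimal usage sketch of the processor defined above (not part of the diff). The `microsoft/speecht5_tts` checkpoint name, the zero-filled waveform, and the printed key list are illustrative assumptions:

# Hypothetical example: prepare a single TTS training example with SpeechT5Processor.
import numpy as np
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")  # assumed checkpoint

text = "Hello world"
target_waveform = np.zeros(16000, dtype=np.float32)  # placeholder: 1 second of 16 kHz audio

# `text` is tokenized into `input_ids`; `audio_target` becomes log-mel `labels` (see `__call__` above).
batch = processor(text=text, audio_target=target_waveform, sampling_rate=16000, return_tensors="pt")
print(list(batch.keys()))  # typically input_ids, attention_mask, labels, stop_labels, decoder_attention_mask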
185  src/transformers/models/speecht5/tokenization_speecht5.py  (new file)
@@ -0,0 +1,185 @@
# coding=utf-8
# Copyright 2023 The Facebook Inc. and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization class for SpeechT5."""


import os
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple

import sentencepiece as spm

from ...tokenization_utils import PreTrainedTokenizer
from ...utils import logging


logger = logging.get_logger(__name__)

VOCAB_FILES_NAMES = {"vocab_file": "spm_char.model"}

PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "microsoft/speecht5_asr": "https://huggingface.co/microsoft/speecht5_asr/resolve/main/spm_char.model",
        "microsoft/speecht5_tts": "https://huggingface.co/microsoft/speecht5_tts/resolve/main/spm_char.model",
        "microsoft/speecht5_vc": "https://huggingface.co/microsoft/speecht5_vc/resolve/main/spm_char.model",
    }
}

PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
    "microsoft/speecht5_asr": 1024,
    "microsoft/speecht5_tts": 1024,
    "microsoft/speecht5_vc": 1024,
}


class SpeechT5Tokenizer(PreTrainedTokenizer):
    """
    Construct a SpeechT5 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer
    to this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            [SentencePiece](https://github.com/google/sentencepiece) file (generally has a *.spm* extension) that
            contains the vocabulary necessary to instantiate a tokenizer.
        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sequence token.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be
            this token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        sp_model_kwargs (`dict`, *optional*):
            Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
            SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other
            things, to set:

            - `enable_sampling`: Enable subword regularization.
            - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.

              - `nbest_size = {0,1}`: No sampling is performed.
              - `nbest_size > 1`: samples from the nbest_size results.
              - `nbest_size < 0`: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
                using forward-filtering-and-backward-sampling algorithm.

            - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
              BPE-dropout.

    Attributes:
        sp_model (`SentencePieceProcessor`):
            The *SentencePiece* processor that is used for every conversion (string, tokens and IDs).
    """

    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        vocab_file,
        bos_token="<s>",
        eos_token="</s>",
        unk_token="<unk>",
        pad_token="<pad>",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        **kwargs
    ) -> None:
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs

        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            sp_model_kwargs=self.sp_model_kwargs,
            **kwargs,
        )

        self.vocab_file = vocab_file

        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)

    @property
    def vocab_size(self):
        return self.sp_model.get_piece_size()

    def get_vocab(self):
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None
        return state

    def __setstate__(self, d):
        self.__dict__ = d

        # for backward compatibility
        if not hasattr(self, "sp_model_kwargs"):
            self.sp_model_kwargs = {}

        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(self.vocab_file)

    def _tokenize(self, text: str) -> List[str]:
        """Take as input a string and return a list of strings (tokens) for words/sub-words"""
        return self.sp_model.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        """Converts a token (str) into an id using the vocab."""
        return self.sp_model.piece_to_id(token)

    def _convert_id_to_token(self, index):
        """Converts an index (integer) into a token (str) using the vocab."""
        token = self.sp_model.IdToPiece(index)
        return token

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) into a single string."""
        current_sub_tokens = []
        out_string = ""
        for token in tokens:
            # make sure that special tokens are not decoded using sentencepiece model
            if token in self.all_special_tokens:
                out_string += self.sp_model.decode(current_sub_tokens) + token
                current_sub_tokens = []
            else:
                current_sub_tokens.append(token)
        out_string += self.sp_model.decode(current_sub_tokens)
        return out_string.strip()

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        out_vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
            copyfile(self.vocab_file, out_vocab_file)
        elif not os.path.isfile(self.vocab_file):
            with open(out_vocab_file, "wb") as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (out_vocab_file,)
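A small illustrative sketch of the tokenizer defined above (not part of the diff); the local `spm_char.model` path is a placeholder for any SentencePiece character-level model:

# Hypothetical example: round-trip a sentence through SpeechT5Tokenizer.
from transformers.models.speecht5 import SpeechT5Tokenizer

tokenizer = SpeechT5Tokenizer("spm_char.model")  # placeholder path to a SentencePiece character model

ids = tokenizer("speech t5 is character based").input_ids
print(ids)                                              # character-level token ids
print(tokenizer.decode(ids, skip_special_tokens=True))  # recovers the input string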
@@ -5505,6 +5505,51 @@ class Speech2Text2PreTrainedModel(metaclass=DummyObject):
        requires_backends(self, ["torch"])


SPEECHT5_PRETRAINED_MODEL_ARCHIVE_LIST = None


class SpeechT5ForSpeechToSpeech(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class SpeechT5ForSpeechToText(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class SpeechT5ForTextToSpeech(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class SpeechT5HifiGan(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class SpeechT5Model(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class SpeechT5PreTrainedModel(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


SPLINTER_PRETRAINED_MODEL_ARCHIVE_LIST = None
@@ -8,3 +8,10 @@ class Speech2TextProcessor(metaclass=DummyObject):

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["sentencepiece", "speech"])


class SpeechT5Processor(metaclass=DummyObject):
    _backends = ["sentencepiece", "speech"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["sentencepiece", "speech"])
@@ -164,6 +164,13 @@ class Speech2TextTokenizer(metaclass=DummyObject):
        requires_backends(self, ["sentencepiece"])


class SpeechT5Tokenizer(metaclass=DummyObject):
    _backends = ["sentencepiece"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["sentencepiece"])


class T5Tokenizer(metaclass=DummyObject):
    _backends = ["sentencepiece"]
@@ -22,3 +22,10 @@ class Speech2TextFeatureExtractor(metaclass=DummyObject):

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["speech"])


class SpeechT5FeatureExtractor(metaclass=DummyObject):
    _backends = ["speech"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["speech"])
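The dummy classes added above exist only to raise a helpful error when an optional backend is missing; roughly, the pattern behaves as in this illustrative sketch (not part of the diff):

# Hypothetical illustration of the dummy-object pattern used above.
from transformers.utils import is_torch_available

if not is_torch_available():
    from transformers import SpeechT5Model  # resolves to the DummyObject placeholder

    try:
        SpeechT5Model()  # requires_backends raises with an installation hint for torch
    except ImportError as err:
        print(err)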
BIN  tests/fixtures/test_sentencepiece_bpe_char.model  (vendored, new file; binary file not shown)
0    tests/models/speecht5/__init__.py  (new file)
421  tests/models/speecht5/test_feature_extraction_speecht5.py  (new file)
@@ -0,0 +1,421 @@
# coding=utf-8
# Copyright 2021-2023 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the SpeechT5 feature extractors."""

import itertools
import random
import unittest

import numpy as np

from transformers import BatchFeature, is_speech_available
from transformers.testing_utils import require_torch, require_torchaudio
from transformers.utils.import_utils import is_torch_available

from ...test_sequence_feature_extraction_common import SequenceFeatureExtractionTestMixin


if is_speech_available():
    from transformers import SpeechT5FeatureExtractor

if is_torch_available():
    import torch


global_rng = random.Random()


def floats_list(shape, scale=1.0, rng=None, name=None):
    """Creates a random float32 tensor"""
    if rng is None:
        rng = global_rng

    values = []
    for batch_idx in range(shape[0]):
        values.append([])
        for _ in range(shape[1]):
            values[-1].append(rng.random() * scale)

    return values


@require_torch
class SpeechT5FeatureExtractionTester(unittest.TestCase):
    def __init__(
        self,
        parent,
        batch_size=7,
        min_seq_length=400,
        max_seq_length=2000,
        feature_size=1,
        padding_value=0.0,
        sampling_rate=16000,
        do_normalize=True,
        num_mel_bins=80,
        hop_length=16,
        win_length=64,
        win_function="hann_window",
        frame_signal_scale=1.0,
        fmin=80,
        fmax=7600,
        mel_floor=1e-10,
        reduction_factor=2,
        return_attention_mask=True,
    ):
        self.parent = parent
        self.batch_size = batch_size
        self.min_seq_length = min_seq_length
        self.max_seq_length = max_seq_length
        self.seq_length_diff = (self.max_seq_length - self.min_seq_length) // (self.batch_size - 1)
        self.feature_size = feature_size
        self.padding_value = padding_value
        self.sampling_rate = sampling_rate
        self.do_normalize = do_normalize
        self.num_mel_bins = num_mel_bins
        self.hop_length = hop_length
        self.win_length = win_length
        self.win_function = win_function
        self.frame_signal_scale = frame_signal_scale
        self.fmin = fmin
        self.fmax = fmax
        self.mel_floor = mel_floor
        self.reduction_factor = reduction_factor
        self.return_attention_mask = return_attention_mask

    def prepare_feat_extract_dict(self):
        return {
            "feature_size": self.feature_size,
            "padding_value": self.padding_value,
            "sampling_rate": self.sampling_rate,
            "do_normalize": self.do_normalize,
            "num_mel_bins": self.num_mel_bins,
            "hop_length": self.hop_length,
            "win_length": self.win_length,
            "win_function": self.win_function,
            "frame_signal_scale": self.frame_signal_scale,
            "fmin": self.fmin,
            "fmax": self.fmax,
            "mel_floor": self.mel_floor,
            "reduction_factor": self.reduction_factor,
            "return_attention_mask": self.return_attention_mask,
        }

    def prepare_inputs_for_common(self, equal_length=False, numpify=False):
        def _flatten(list_of_lists):
            return list(itertools.chain(*list_of_lists))

        if equal_length:
            speech_inputs = floats_list((self.batch_size, self.max_seq_length))
        else:
            # make sure that inputs increase in size
            speech_inputs = [
                _flatten(floats_list((x, self.feature_size)))
                for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff)
            ]

        if numpify:
            speech_inputs = [np.asarray(x) for x in speech_inputs]

        return speech_inputs

    def prepare_inputs_for_target(self, equal_length=False, numpify=False):
        if equal_length:
            speech_inputs = [floats_list((self.max_seq_length, self.num_mel_bins)) for _ in range(self.batch_size)]
        else:
            # make sure that inputs increase in size
            speech_inputs = [
                floats_list((x, self.num_mel_bins))
                for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff)
            ]

        if numpify:
            speech_inputs = [np.asarray(x) for x in speech_inputs]

        return speech_inputs


@require_torch
@require_torchaudio
class SpeechT5FeatureExtractionTest(SequenceFeatureExtractionTestMixin, unittest.TestCase):

    feature_extraction_class = SpeechT5FeatureExtractor if is_speech_available() else None

    def setUp(self):
        self.feat_extract_tester = SpeechT5FeatureExtractionTester(self)

    def _check_zero_mean_unit_variance(self, input_vector):
        self.assertTrue(np.all(np.mean(input_vector, axis=0) < 1e-3))
        self.assertTrue(np.all(np.abs(np.var(input_vector, axis=0) - 1) < 1e-3))

    def test_call(self):
        # Tests that all calls wrap to encode_plus and batch_encode_plus
        feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        # create three inputs of length 800, 1000, and 1200
        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
        np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs]

        # Test not batched input
        encoded_sequences_1 = feat_extract(speech_inputs[0], return_tensors="np").input_values
        encoded_sequences_2 = feat_extract(np_speech_inputs[0], return_tensors="np").input_values
        self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3))

        # Test batched
        encoded_sequences_1 = feat_extract(speech_inputs, return_tensors="np").input_values
        encoded_sequences_2 = feat_extract(np_speech_inputs, return_tensors="np").input_values
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

    def test_zero_mean_unit_variance_normalization_np(self):
        feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]

        paddings = ["longest", "max_length", "do_not_pad"]
        max_lengths = [None, 1600, None]
        for max_length, padding in zip(max_lengths, paddings):
            processed = feat_extract(speech_inputs, padding=padding, max_length=max_length, return_tensors="np")
            input_values = processed.input_values

            self._check_zero_mean_unit_variance(input_values[0][:800])
            self.assertTrue(input_values[0][800:].sum() < 1e-6)
            self._check_zero_mean_unit_variance(input_values[1][:1000])
            self.assertTrue(input_values[0][1000:].sum() < 1e-6)
            self._check_zero_mean_unit_variance(input_values[2][:1200])

    def test_zero_mean_unit_variance_normalization(self):
        feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        lengths = range(800, 1400, 200)
        speech_inputs = [floats_list((1, x))[0] for x in lengths]

        paddings = ["longest", "max_length", "do_not_pad"]
        max_lengths = [None, 1600, None]

        for max_length, padding in zip(max_lengths, paddings):
            processed = feat_extract(speech_inputs, max_length=max_length, padding=padding)
            input_values = processed.input_values

            self._check_zero_mean_unit_variance(input_values[0][:800])
            self._check_zero_mean_unit_variance(input_values[1][:1000])
            self._check_zero_mean_unit_variance(input_values[2][:1200])

    def test_zero_mean_unit_variance_normalization_trunc_np_max_length(self):
        feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
        processed = feat_extract(
            speech_inputs, truncation=True, max_length=1000, padding="max_length", return_tensors="np"
        )
        input_values = processed.input_values

        self._check_zero_mean_unit_variance(input_values[0, :800])
        self._check_zero_mean_unit_variance(input_values[1])
        self._check_zero_mean_unit_variance(input_values[2])

    def test_zero_mean_unit_variance_normalization_trunc_np_longest(self):
        feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
        processed = feat_extract(
            speech_inputs, truncation=True, max_length=1000, padding="longest", return_tensors="np"
        )
        input_values = processed.input_values

        self._check_zero_mean_unit_variance(input_values[0, :800])
        self._check_zero_mean_unit_variance(input_values[1, :1000])
        self._check_zero_mean_unit_variance(input_values[2])

        # make sure that if max_length < longest -> then pad to max_length
        self.assertTrue(input_values.shape == (3, 1000))

        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
        processed = feat_extract(
            speech_inputs, truncation=True, max_length=2000, padding="longest", return_tensors="np"
        )
        input_values = processed.input_values

        self._check_zero_mean_unit_variance(input_values[0, :800])
        self._check_zero_mean_unit_variance(input_values[1, :1000])
        self._check_zero_mean_unit_variance(input_values[2])

        # make sure that if max_length > longest -> then pad to longest
        self.assertTrue(input_values.shape == (3, 1200))

    def test_double_precision_pad(self):
        feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        np_speech_inputs = np.random.rand(100).astype(np.float64)
        py_speech_inputs = np_speech_inputs.tolist()

        for inputs in [py_speech_inputs, np_speech_inputs]:
            np_processed = feature_extractor.pad([{"input_values": inputs}], return_tensors="np")
            self.assertTrue(np_processed.input_values.dtype == np.float32)
            pt_processed = feature_extractor.pad([{"input_values": inputs}], return_tensors="pt")
            self.assertTrue(pt_processed.input_values.dtype == torch.float32)

    def test_call_target(self):
        # Tests that all calls wrap to encode_plus and batch_encode_plus
        feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        # create three inputs of length 8000, 10000, and 12000
        speech_inputs = [floats_list((1, x))[0] for x in range(8000, 14000, 2000)]
        np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs]

        # Test feature size
        input_values = feature_extractor(audio_target=np_speech_inputs, padding=True, return_tensors="np").input_values
        self.assertTrue(input_values.ndim == 3)
        self.assertTrue(input_values.shape[-1] == feature_extractor.num_mel_bins)

        # Test not batched input
        encoded_sequences_1 = feature_extractor(speech_inputs[0], return_tensors="np").input_values
        encoded_sequences_2 = feature_extractor(np_speech_inputs[0], return_tensors="np").input_values
        self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3))

        # Test batched
        encoded_sequences_1 = feature_extractor(speech_inputs, return_tensors="np").input_values
        encoded_sequences_2 = feature_extractor(np_speech_inputs, return_tensors="np").input_values
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

    def test_batch_feature_target(self):
        speech_inputs = self.feat_extract_tester.prepare_inputs_for_target()
        feat_extract = self.feature_extraction_class(**self.feat_extract_dict)
        input_name = feat_extract.model_input_names[0]

        processed_features = BatchFeature({input_name: speech_inputs})

        self.assertTrue(all(len(x) == len(y) for x, y in zip(speech_inputs, processed_features[input_name])))

        speech_inputs = self.feat_extract_tester.prepare_inputs_for_target(equal_length=True)
        processed_features = BatchFeature({input_name: speech_inputs}, tensor_type="np")

        batch_features_input = processed_features[input_name]

        if len(batch_features_input.shape) < 3:
            batch_features_input = batch_features_input[:, :, None]

        self.assertTrue(
            batch_features_input.shape
            == (self.feat_extract_tester.batch_size, len(speech_inputs[0]), self.feat_extract_tester.num_mel_bins)
        )

    @require_torch
    def test_batch_feature_target_pt(self):
        speech_inputs = self.feat_extract_tester.prepare_inputs_for_target(equal_length=True)
        feat_extract = self.feature_extraction_class(**self.feat_extract_dict)
        input_name = feat_extract.model_input_names[0]

        processed_features = BatchFeature({input_name: speech_inputs}, tensor_type="pt")

        batch_features_input = processed_features[input_name]

        if len(batch_features_input.shape) < 3:
            batch_features_input = batch_features_input[:, :, None]

        self.assertTrue(
            batch_features_input.shape
            == (self.feat_extract_tester.batch_size, len(speech_inputs[0]), self.feat_extract_tester.num_mel_bins)
        )

    @require_torch
    def test_padding_accepts_tensors_target_pt(self):
        feat_extract = self.feature_extraction_class(**self.feat_extract_dict)
        speech_inputs = self.feat_extract_tester.prepare_inputs_for_target()
        input_name = feat_extract.model_input_names[0]

        processed_features = BatchFeature({input_name: speech_inputs})

        feat_extract.feature_size = feat_extract.num_mel_bins  # hack!

        input_np = feat_extract.pad(processed_features, padding="longest", return_tensors="np")[input_name]
        input_pt = feat_extract.pad(processed_features, padding="longest", return_tensors="pt")[input_name]

        self.assertTrue(abs(input_np.astype(np.float32).sum() - input_pt.numpy().astype(np.float32).sum()) < 1e-2)

    def test_attention_mask_target(self):
        feat_dict = self.feat_extract_dict
        feat_dict["return_attention_mask"] = True
        feat_extract = self.feature_extraction_class(**feat_dict)
        speech_inputs = self.feat_extract_tester.prepare_inputs_for_target()
        input_lengths = [len(x) for x in speech_inputs]
        input_name = feat_extract.model_input_names[0]

        processed = BatchFeature({input_name: speech_inputs})

        feat_extract.feature_size = feat_extract.num_mel_bins  # hack!

        processed = feat_extract.pad(processed, padding="longest", return_tensors="np")
        self.assertIn("attention_mask", processed)
        self.assertListEqual(list(processed.attention_mask.shape), list(processed[input_name].shape[:2]))
        self.assertListEqual(processed.attention_mask.sum(-1).tolist(), input_lengths)

    def test_attention_mask_with_truncation_target(self):
        feat_dict = self.feat_extract_dict
        feat_dict["return_attention_mask"] = True
        feat_extract = self.feature_extraction_class(**feat_dict)
        speech_inputs = self.feat_extract_tester.prepare_inputs_for_target()
        input_lengths = [len(x) for x in speech_inputs]
        input_name = feat_extract.model_input_names[0]

        processed = BatchFeature({input_name: speech_inputs})
        max_length = min(input_lengths)

        feat_extract.feature_size = feat_extract.num_mel_bins  # hack!

        processed_pad = feat_extract.pad(
            processed, padding="max_length", max_length=max_length, truncation=True, return_tensors="np"
        )
        self.assertIn("attention_mask", processed_pad)
        self.assertListEqual(
            list(processed_pad.attention_mask.shape), list((processed_pad[input_name].shape[0], max_length))
        )
        self.assertListEqual(
            processed_pad.attention_mask[:, :max_length].sum(-1).tolist(), [max_length for x in speech_inputs]
        )

    def _load_datasamples(self, num_samples):
        from datasets import load_dataset

        ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        # automatic decoding with librispeech
        speech_samples = ds.sort("id").select(range(num_samples))[:num_samples]["audio"]

        return [x["array"] for x in speech_samples]

    def test_integration(self):
        # fmt: off
        EXPECTED_INPUT_VALUES = torch.tensor(
            [2.3804e-03, 2.0752e-03, 1.9836e-03, 2.1057e-03, 1.6174e-03,
             3.0518e-04, 9.1553e-05, 3.3569e-04, 9.7656e-04, 1.8311e-03,
             2.0142e-03, 2.1057e-03, 1.7395e-03, 4.5776e-04, -3.9673e-04,
             4.5776e-04, 1.0071e-03, 9.1553e-05, 4.8828e-04, 1.1597e-03,
             7.3242e-04, 9.4604e-04, 1.8005e-03, 1.8311e-03, 8.8501e-04,
             4.2725e-04, 4.8828e-04, 7.3242e-04, 1.0986e-03, 2.1057e-03]
        )
        # fmt: on

        input_speech = self._load_datasamples(1)
        feature_extractor = SpeechT5FeatureExtractor()
        input_values = feature_extractor(input_speech, return_tensors="pt").input_values
        self.assertTrue(torch.allclose(input_values[0, :30], EXPECTED_INPUT_VALUES, atol=1e-4))

    def test_integration_target(self):
        # fmt: off
        EXPECTED_INPUT_VALUES = torch.tensor(
            [-2.7713, -2.8896, -3.2619, -3.0843, -2.9919, -3.0084, -3.2796, -3.3169,
             -3.2397, -3.2053, -2.9151, -2.7921, -2.9403, -2.7411, -3.0654, -2.8314,
             -3.0026, -2.9797, -3.1314, -2.9939, -2.6748, -2.7725, -2.8563, -2.9462,
             -3.2623, -3.3044, -3.1318, -3.2672, -3.4030, -3.1988]
        )
        # fmt: on

        input_speech = self._load_datasamples(1)
        feature_extractor = SpeechT5FeatureExtractor()
        input_values = feature_extractor(audio_target=input_speech, return_tensors="pt").input_values
        self.assertTrue(torch.allclose(input_values[0, 0, :30], EXPECTED_INPUT_VALUES, atol=1e-4))
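As exercised by the tests above, the same feature extractor produces normalized waveforms for `audio` inputs and log-mel spectrogram frames for `audio_target`; a short sketch using default settings (not part of the diff, shapes are indicative):

# Hypothetical example: waveform inputs vs. log-mel spectrogram targets.
import numpy as np
from transformers import SpeechT5FeatureExtractor

feature_extractor = SpeechT5FeatureExtractor()  # defaults: 16 kHz sampling rate, 80 mel bins
waveform = np.random.randn(16000).astype(np.float32)  # placeholder: 1 second of audio

inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="np")
targets = feature_extractor(audio_target=waveform, sampling_rate=16000, return_tensors="np")

print(inputs.input_values.shape)   # (1, num_samples): the (normalized) waveform
print(targets.input_values.shape)  # (1, num_frames, 80): log-mel spectrogram frames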
1547  tests/models/speecht5/test_modeling_speecht5.py  (new file; diff suppressed because it is too large)
187   tests/models/speecht5/test_processor_speecht5.py  (new file)
@@ -0,0 +1,187 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the SpeechT5 processors."""

import json
import os
import shutil
import tempfile
import unittest

from transformers import is_speech_available, is_torch_available
from transformers.models.speecht5 import SpeechT5Tokenizer
from transformers.testing_utils import get_tests_dir, require_torch, require_torchaudio
from transformers.utils import FEATURE_EXTRACTOR_NAME


if is_speech_available() and is_torch_available():
    from transformers import SpeechT5FeatureExtractor, SpeechT5Processor

    from .test_feature_extraction_speecht5 import floats_list


SAMPLE_VOCAB = get_tests_dir("fixtures/test_sentencepiece_bpe_char.model")


@require_torch
@require_torchaudio
class SpeechT5ProcessorTest(unittest.TestCase):
    def setUp(self):
        self.tmpdirname = tempfile.mkdtemp()

        tokenizer = SpeechT5Tokenizer(SAMPLE_VOCAB)
        tokenizer.save_pretrained(self.tmpdirname)

        feature_extractor_map = {
            "feature_size": 1,
            "padding_value": 0.0,
            "sampling_rate": 16000,
            "do_normalize": False,
            "num_mel_bins": 80,
            "hop_length": 16,
            "win_length": 64,
            "win_function": "hann_window",
            "frame_signal_scale": 1.0,
            "fmin": 80,
            "fmax": 7600,
            "mel_floor": 1e-10,
            "reduction_factor": 2,
            "return_attention_mask": True,
        }

        self.feature_extraction_file = os.path.join(self.tmpdirname, FEATURE_EXTRACTOR_NAME)
        with open(self.feature_extraction_file, "w", encoding="utf-8") as fp:
            fp.write(json.dumps(feature_extractor_map) + "\n")

    def get_tokenizer(self, **kwargs):
        return SpeechT5Tokenizer.from_pretrained(self.tmpdirname, **kwargs)

    def get_feature_extractor(self, **kwargs):
        return SpeechT5FeatureExtractor.from_pretrained(self.tmpdirname, **kwargs)

    def tearDown(self):
        shutil.rmtree(self.tmpdirname)

    def test_save_load_pretrained_default(self):
        tokenizer = self.get_tokenizer()
        feature_extractor = self.get_feature_extractor()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        processor.save_pretrained(self.tmpdirname)
        processor = SpeechT5Processor.from_pretrained(self.tmpdirname)

        self.assertEqual(processor.tokenizer.get_vocab(), tokenizer.get_vocab())
        self.assertIsInstance(processor.tokenizer, SpeechT5Tokenizer)

        self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor.to_json_string())
        self.assertIsInstance(processor.feature_extractor, SpeechT5FeatureExtractor)

    def test_save_load_pretrained_additional_features(self):
        processor = SpeechT5Processor(tokenizer=self.get_tokenizer(), feature_extractor=self.get_feature_extractor())
        processor.save_pretrained(self.tmpdirname)

        tokenizer_add_kwargs = self.get_tokenizer(bos_token="(BOS)", eos_token="(EOS)")
        feature_extractor_add_kwargs = self.get_feature_extractor(do_normalize=False, padding_value=1.0)

        processor = SpeechT5Processor.from_pretrained(
            self.tmpdirname, bos_token="(BOS)", eos_token="(EOS)", do_normalize=False, padding_value=1.0
        )

        self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab())
        self.assertIsInstance(processor.tokenizer, SpeechT5Tokenizer)

        self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor_add_kwargs.to_json_string())
        self.assertIsInstance(processor.feature_extractor, SpeechT5FeatureExtractor)

    def test_feature_extractor(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        raw_speech = floats_list((3, 1000))

        input_feat_extract = feature_extractor(audio=raw_speech, return_tensors="np")
        input_processor = processor(audio=raw_speech, return_tensors="np")

        for key in input_feat_extract.keys():
            self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2)

    def test_feature_extractor_target(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        raw_speech = floats_list((3, 1000))

        input_feat_extract = feature_extractor(audio_target=raw_speech, return_tensors="np")
        input_processor = processor(audio_target=raw_speech, return_tensors="np")

        for key in input_feat_extract.keys():
            self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2)

    def test_tokenizer(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        input_str = "This is a test string"

        encoded_processor = processor(text=input_str)
        encoded_tok = tokenizer(input_str)

        for key in encoded_tok.keys():
            self.assertListEqual(encoded_tok[key], encoded_processor[key])

    def test_tokenizer_target(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        input_str = "This is a test string"

        encoded_processor = processor(text_target=input_str)
        encoded_tok = tokenizer(input_str)

        for key in encoded_tok.keys():
            self.assertListEqual(encoded_tok[key], encoded_processor[key])

    def test_tokenizer_decode(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]]

        decoded_processor = processor.batch_decode(predicted_ids)
        decoded_tok = tokenizer.batch_decode(predicted_ids)

        self.assertListEqual(decoded_tok, decoded_processor)

    def test_model_input_names(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = SpeechT5Processor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        self.assertListEqual(
            processor.model_input_names,
            feature_extractor.model_input_names,
            msg="`processor` and `feature_extractor` model input names do not match",
        )
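The `pad` method tested indirectly above can be used to collate variable-length examples into one padded batch; everything in this sketch (checkpoint name, lengths, dictionary re-grouping) is an illustrative assumption, not part of the diff:

# Hypothetical collation example with SpeechT5Processor.pad (ASR-style: audio inputs, text labels).
import numpy as np
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")  # assumed checkpoint

waveforms = [np.random.randn(n).astype(np.float32) for n in (12000, 16000)]
transcripts = ["first utterance", "second longer utterance"]

features = [processor(audio=w, text_target=t, sampling_rate=16000) for w, t in zip(waveforms, transcripts)]

# Re-group into the `input_features` / `labels` form that `pad` forwards to the feature extractor and tokenizer.
batch = processor.pad(
    input_features=[{"input_values": f["input_values"][0]} for f in features],
    labels=[{"input_ids": f["labels"]} for f in features],
    padding=True,
    return_tensors="pt",
)
print(batch.input_values.shape, batch.labels.shape)  # padded waveforms and padded token ids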
186  tests/models/speecht5/test_tokenization_speecht5.py  (new file; diff suppressed because one or more lines are too long)
@@ -133,6 +133,18 @@ IGNORE_NON_TESTED = PRIVATE_MODELS.copy() + [
    "BlipTextLMHeadModel",  # No need to test it as it is tested by BlipTextVision models
    "BridgeTowerTextModel",  # No need to test it as it is tested by BridgeTowerModel model.
    "BridgeTowerVisionModel",  # No need to test it as it is tested by BridgeTowerModel model.
    "SpeechT5Decoder",  # Building part of bigger (tested) model.
    "SpeechT5DecoderWithoutPrenet",  # Building part of bigger (tested) model.
    "SpeechT5DecoderWithSpeechPrenet",  # Building part of bigger (tested) model.
    "SpeechT5DecoderWithTextPrenet",  # Building part of bigger (tested) model.
    "SpeechT5Encoder",  # Building part of bigger (tested) model.
    "SpeechT5EncoderWithoutPrenet",  # Building part of bigger (tested) model.
    "SpeechT5EncoderWithSpeechPrenet",  # Building part of bigger (tested) model.
    "SpeechT5EncoderWithTextPrenet",  # Building part of bigger (tested) model.
    "SpeechT5SpeechDecoder",  # Building part of bigger (tested) model.
    "SpeechT5SpeechEncoder",  # Building part of bigger (tested) model.
    "SpeechT5TextDecoder",  # Building part of bigger (tested) model.
    "SpeechT5TextEncoder",  # Building part of bigger (tested) model.
]

# Update this list with test files that don't have a tester with a `all_model_classes` variable and which don't

@@ -269,6 +281,9 @@ IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [
    "AltCLIPTextModel",
    "AltCLIPVisionModel",
    "AltRobertaModel",
    "SpeechT5ForSpeechToSpeech",
    "SpeechT5ForTextToSpeech",
    "SpeechT5HifiGan",
]

# Update this list for models that have multiple model types for the same

@@ -378,6 +393,8 @@ def is_a_private_model(model):
        return True
    if model.endswith("Decoder"):
        return True
    if model.endswith("Prenet"):
        return True
    return False
@@ -169,6 +169,8 @@ src/transformers/models/speech_to_text/configuration_speech_to_text.py
src/transformers/models/speech_to_text/modeling_speech_to_text.py
src/transformers/models/speech_to_text_2/configuration_speech_to_text_2.py
src/transformers/models/speech_to_text_2/modeling_speech_to_text_2.py
src/transformers/models/speecht5/modeling_speecht5.py
src/transformers/models/speecht5/tokenization_speecht5.py
src/transformers/models/segformer/modeling_tf_segformer.py
src/transformers/models/squeezebert/configuration_squeezebert.py
src/transformers/models/swin/configuration_swin.py