mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-31 02:02:21 +06:00
Add CLVP (#24745)
* init commit * attention arch done except rotary emb * rotary emb done * text encoder working * outputs matching * arch first pass done * make commands done, tests and docs remaining * all tests passed, only docs remaining * docs done * doc-builder fix * convert script removed(not relevant) * minor comments done * added ckpt conversion script * tokenizer done * very minor fix of index.md 2 * mostly make fixup related * all done except fe and rotary emb * very small change * removed unidecode dependency * style changes * tokenizer removed require_backends * added require_inflect to tokenizer tests * removed VOCAB_FILES in tokenizer test * inflect dependency removed * added rotary pos emb cache and simplified the apply method * style * little doc change * more comments * feature extractor added * added processor * auto-regressive config added * added CLVPConditioningEncoder * comments done except the test one * weights added successfull(NOT tested) * tokenizer fix with numbers * generate outputs matching * almost tests passing Integ tests not written * Integ tests added * major CUDA error fixed * docs done * rebase and multiple fixes * fixed rebase overwrites * generate code simplified and tests for AutoRegressive model added * minor changes * refectored gpt2 code in clvp file * weights done and all code refactored * mostly done except the fast_tokenizer * doc test fix * config file's doc fixes * more config fix * more comments * tokenizer comments mostly done * modeling file mostly refactored and can load modules * ClvpEncoder tested * ClvpDecoder, ClvpModel and ClvpForCausalLM tested * integration and all tests passed * more fixes * docs almost done * ckpt conversion refectored * style and some failing tests fix * comments * temporary output fix but test_assisted_decoding_matches_greedy_search test fails * majority changes done * use_cache outputs same now! Along with the asisted_greedy_decoding test fix * more comments * more comments * prepare_inputs_for_generation fixed and _prepare_model_inputs added * style fix * clvp.md change * moved clvpconditionalencoder norms * add model to new index * added tokenizer input_ids_with_special_tokens * small fix * config mostly done * added config-tester and changed conversion script * more comments * comments * style fix * some comments * tokenizer changed back to prev state * small commnets * added output hidden states for the main model * style fix * comments * small change * revert small change * . * Update clvp.md * Update test_modeling_clvp.py * :) * some minor change * new fixes * remove to_dict from FE
This commit is contained in:
parent 9dd58c53dd
commit 7e9f10ac94
@ -321,6 +321,7 @@ Current number of checkpoints:
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (from MetaAI) released with the paper [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
@ -296,6 +296,7 @@ Número actual de puntos de control:
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (from MetaAI) released with the paper [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
@ -270,6 +270,7 @@ conda install -c huggingface transformers
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI से) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. द्वाराअनुसंधान पत्र [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) के साथ जारी किया गया
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI से) साथ वाला पेपर [लर्निंग ट्रांसफरेबल विजुअल मॉडल फ्रॉम नेचुरल लैंग्वेज सुपरविजन](https://arxiv.org /abs/2103.00020) एलेक रैडफोर्ड, जोंग वूक किम, क्रिस हैलासी, आदित्य रमेश, गेब्रियल गोह, संध्या अग्रवाल, गिरीश शास्त्री, अमांडा एस्केल, पामेला मिश्किन, जैक क्लार्क, ग्रेचेन क्रुएगर, इल्या सुत्स्केवर द्वारा।
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (सेल्सफोर्स से) साथ में पेपर [प्रोग्राम सिंथेसिस के लिए एक संवादात्मक प्रतिमान](https://arxiv.org/abs/2203.13474) एरिक निजकैंप, बो पैंग, हिरोआकी हयाशी, लिफू तू, हुआन वांग, यिंगबो झोउ, सिल्वियो सावरेस, कैमिंग जिओंग रिलीज।
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (MetaAI से) Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. द्वाराअनुसंधान पत्र [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) के साथ जारी किया गया
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (माइक्रोसॉफ्ट रिसर्च एशिया से) कागज के साथ [फास्ट ट्रेनिंग कन्वर्जेंस के लिए सशर्त डीईटीआर](https://arxiv. org/abs/2108.06152) डेपू मेंग, ज़ियाओकांग चेन, ज़ेजिया फैन, गैंग ज़ेंग, होउकियांग ली, युहुई युआन, लेई सन, जिंगडोंग वांग द्वारा।
@ -330,6 +330,7 @@ Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それ
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI から) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. から公開された研究論文 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687)
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI から) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever から公開された研究論文: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (University of Göttingen から) Timo Lüddecke and Alexander Ecker から公開された研究論文: [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003)
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (Salesforce から) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong から公開された研究論文: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474)
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (MetaAI から) Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. から公開された研究論文 [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (Microsoft Research Asia から) Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang から公開された研究論文: [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152)
@ -245,6 +245,7 @@ Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI 에서 제공)은 Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.의 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687)논문과 함께 발표했습니다.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI 에서) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 의 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 논문과 함께 발표했습니다.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (University of Göttingen 에서) Timo Lüddecke and Alexander Ecker 의 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 논문과 함께 발표했습니다.
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (Salesforce 에서) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 의 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 논문과 함께 발표했습니다.
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (MetaAI 에서 제공)은 Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.의 [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)논문과 함께 발표했습니다.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (Microsoft Research Asia 에서) Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 의 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 논문과 함께 발표했습니다.
@ -269,6 +269,7 @@ conda install -c huggingface transformers
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (来自 LAION-AI) 伴随论文 [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) 由 Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov 发布。
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (来自 University of Göttingen) 伴随论文 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 由 Timo Lüddecke and Alexander Ecker 发布。
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (来自 MetaAI) 伴随论文 [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) 由 Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve 发布。
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (来自 Microsoft Research Asia) 伴随论文 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 由 Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 发布。
@ -281,6 +281,7 @@ conda install -c huggingface transformers
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CLVP](https://huggingface.co/docs/transformers/main/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (from MetaAI) released with the paper [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
@ -663,6 +663,8 @@
      title: CLIP
    - local: model_doc/clipseg
      title: CLIPSeg
    - local: model_doc/clvp
      title: CLVP
    - local: model_doc/data2vec
      title: Data2Vec
    - local: model_doc/deplot
@ -92,6 +92,7 @@ Flax), PyTorch, and/or TensorFlow.
| [CLAP](model_doc/clap) | ✅ | ❌ | ❌ |
| [CLIP](model_doc/clip) | ✅ | ✅ | ✅ |
| [CLIPSeg](model_doc/clipseg) | ✅ | ❌ | ❌ |
| [CLVP](model_doc/clvp) | ✅ | ❌ | ❌ |
| [CodeGen](model_doc/codegen) | ✅ | ❌ | ❌ |
| [CodeLlama](model_doc/code_llama) | ✅ | ❌ | ❌ |
| [Conditional DETR](model_doc/conditional_detr) | ✅ | ❌ | ❌ |
126 docs/source/en/model_doc/clvp.md Normal file
@ -0,0 +1,126 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# CLVP

## Overview

The CLVP (Contrastive Language-Voice Pretrained Transformer) model was proposed in [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker.

The abstract from the paper is the following:

*In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic processes and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise - an expressive, multi-voice text-to-speech system.*

This model was contributed by [Susnato Dhar](https://huggingface.co/susnato).
The original code can be found [here](https://github.com/neonbjb/tortoise-tts).

## Usage tips

1. CLVP is an integral part of the Tortoise TTS model.
2. CLVP can be used to compare different generated speech candidates with the provided text, and the best speech tokens are forwarded to the diffusion model.
3. The use of the [`ClvpModelForConditionalGeneration.generate()`] method is strongly recommended for Tortoise usage.
4. Note that the CLVP model expects the audio to be sampled at 22.05 kHz, unlike other audio models which expect 16 kHz (see the short check below).
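A quick way to confirm the expected rate is to read it off the feature extractor itself; this is a minimal check assuming the extractor exposes the standard `sampling_rate` attribute of Transformers audio feature extractors:

```python
>>> from transformers import ClvpFeatureExtractor

>>> feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")
>>> feature_extractor.sampling_rate
22050
```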
## Brief Explanation

- The [`ClvpTokenizer`] tokenizes the text input, and the [`ClvpFeatureExtractor`] extracts the log mel-spectrogram from the desired audio.
- [`ClvpConditioningEncoder`] takes those text tokens and audio representations and converts them into embeddings conditioned on the text and audio.
- The [`ClvpForCausalLM`] uses those embeddings to generate multiple speech candidates.
- Each speech candidate is passed through the speech encoder ([`ClvpEncoder`]) which converts them into a vector representation, and the text encoder ([`ClvpEncoder`]) converts the text tokens into the same latent space.
- At the end, we compare each speech vector with the text vector to see which speech vector is most similar to the text vector.
- [`ClvpModelForConditionalGeneration.generate()`] compresses all of the logic described above into a single method.

Example:

```python
>>> import datasets
>>> from transformers import ClvpProcessor, ClvpModelForConditionalGeneration

>>> # Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using `datasets` library).
>>> text = "This is an example text."

>>> ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
>>> sample = ds[0]["audio"]

>>> # Define processor and model.
>>> processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")
>>> model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev")

>>> # Generate processor output and model output.
>>> processor_output = processor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], text=text, return_tensors="pt")
>>> generated_output = model.generate(**processor_output)
```
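The output of `generate` also carries the contrastive side of the model, so the ranking described above can be inspected directly. This is a minimal sketch, assuming the returned object exposes `text_embeds` and `speech_embeds` fields as suggested by the description above (check the model output class of [`ClvpModelForConditionalGeneration`] for the exact field names):

```python
>>> import torch

>>> # Compare the text vector with each speech candidate vector (assumed field names).
>>> similarity = torch.nn.functional.cosine_similarity(
...     generated_output.text_embeds, generated_output.speech_embeds, dim=-1
... )
```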
## ClvpConfig

[[autodoc]] ClvpConfig
    - from_sub_model_configs

## ClvpEncoderConfig

[[autodoc]] ClvpEncoderConfig

## ClvpDecoderConfig

[[autodoc]] ClvpDecoderConfig

## ClvpTokenizer

[[autodoc]] ClvpTokenizer
    - save_vocabulary

## ClvpFeatureExtractor

[[autodoc]] ClvpFeatureExtractor
    - __call__

## ClvpProcessor

[[autodoc]] ClvpProcessor
    - __call__
    - decode
    - batch_decode

## ClvpModelForConditionalGeneration

[[autodoc]] ClvpModelForConditionalGeneration
    - forward
    - generate
    - get_text_features
    - get_speech_features

## ClvpForCausalLM

[[autodoc]] ClvpForCausalLM

## ClvpModel

[[autodoc]] ClvpModel

## ClvpEncoder

[[autodoc]] ClvpEncoder

## ClvpDecoder

[[autodoc]] ClvpDecoder
@ -256,6 +256,15 @@ _import_structure = {
        "CLIPSegTextConfig",
        "CLIPSegVisionConfig",
    ],
    "models.clvp": [
        "CLVP_PRETRAINED_CONFIG_ARCHIVE_MAP",
        "ClvpConfig",
        "ClvpDecoderConfig",
        "ClvpEncoderConfig",
        "ClvpFeatureExtractor",
        "ClvpProcessor",
        "ClvpTokenizer",
    ],
    "models.code_llama": [],
    "models.codegen": ["CODEGEN_PRETRAINED_CONFIG_ARCHIVE_MAP", "CodeGenConfig", "CodeGenTokenizer"],
    "models.conditional_detr": ["CONDITIONAL_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP", "ConditionalDetrConfig"],
@ -1458,6 +1467,17 @@ else:
            "CLIPSegVisionModel",
        ]
    )
    _import_structure["models.clvp"].extend(
        [
            "CLVP_PRETRAINED_MODEL_ARCHIVE_LIST",
            "ClvpDecoder",
            "ClvpEncoder",
            "ClvpForCausalLM",
            "ClvpModel",
            "ClvpModelForConditionalGeneration",
            "ClvpPreTrainedModel",
        ]
    )
    _import_structure["models.codegen"].extend(
        [
            "CODEGEN_PRETRAINED_MODEL_ARCHIVE_LIST",
@ -4446,6 +4466,15 @@ if TYPE_CHECKING:
        CLIPSegTextConfig,
        CLIPSegVisionConfig,
    )
    from .models.clvp import (
        CLVP_PRETRAINED_CONFIG_ARCHIVE_MAP,
        ClvpConfig,
        ClvpDecoderConfig,
        ClvpEncoderConfig,
        ClvpFeatureExtractor,
        ClvpProcessor,
        ClvpTokenizer,
    )
    from .models.codegen import CODEGEN_PRETRAINED_CONFIG_ARCHIVE_MAP, CodeGenConfig, CodeGenTokenizer
    from .models.conditional_detr import CONDITIONAL_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP, ConditionalDetrConfig
    from .models.convbert import CONVBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, ConvBertConfig, ConvBertTokenizer
@ -5516,6 +5545,15 @@ if TYPE_CHECKING:
        CLIPSegTextModel,
        CLIPSegVisionModel,
    )
    from .models.clvp import (
        CLVP_PRETRAINED_MODEL_ARCHIVE_LIST,
        ClvpDecoder,
        ClvpEncoder,
        ClvpForCausalLM,
        ClvpModel,
        ClvpModelForConditionalGeneration,
        ClvpPreTrainedModel,
    )
    from .models.codegen import (
        CODEGEN_PRETRAINED_MODEL_ARCHIVE_LIST,
        CodeGenForCausalLM,
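With the entries above in `_import_structure` and the matching `TYPE_CHECKING` imports, the new classes are exposed at the top level of the package. A quick sanity check:

```python
from transformers import ClvpConfig, ClvpProcessor, ClvpTokenizer

print(ClvpConfig())  # prints the default CLVP configuration
```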
@ -46,6 +46,7 @@ from . import (
    clap,
    clip,
    clipseg,
    clvp,
    code_llama,
    codegen,
    conditional_detr,
@ -57,6 +57,7 @@ CONFIG_MAPPING_NAMES = OrderedDict(
        ("clap", "ClapConfig"),
        ("clip", "CLIPConfig"),
        ("clipseg", "CLIPSegConfig"),
        ("clvp", "ClvpConfig"),
        ("code_llama", "LlamaConfig"),
        ("codegen", "CodeGenConfig"),
        ("conditional_detr", "ConditionalDetrConfig"),
@ -276,6 +277,7 @@ CONFIG_ARCHIVE_MAP_MAPPING_NAMES = OrderedDict(
        ("clap", "CLAP_PRETRAINED_MODEL_ARCHIVE_LIST"),
        ("clip", "CLIP_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("clipseg", "CLIPSEG_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("clvp", "CLVP_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("codegen", "CODEGEN_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("conditional_detr", "CONDITIONAL_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("convbert", "CONVBERT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
@ -481,6 +483,7 @@ MODEL_NAMES_MAPPING = OrderedDict(
        ("clap", "CLAP"),
        ("clip", "CLIP"),
        ("clipseg", "CLIPSeg"),
        ("clvp", "CLVP"),
        ("code_llama", "CodeLlama"),
        ("codegen", "CodeGen"),
        ("conditional_detr", "Conditional DETR"),
@ -44,6 +44,7 @@ FEATURE_EXTRACTOR_MAPPING_NAMES = OrderedDict(
        ("clap", "ClapFeatureExtractor"),
        ("clip", "CLIPFeatureExtractor"),
        ("clipseg", "ViTFeatureExtractor"),
        ("clvp", "ClvpFeatureExtractor"),
        ("conditional_detr", "ConditionalDetrFeatureExtractor"),
        ("convnext", "ConvNextFeatureExtractor"),
        ("cvt", "ConvNextFeatureExtractor"),
@ -55,6 +55,7 @@ MODEL_MAPPING_NAMES = OrderedDict(
        ("clap", "ClapModel"),
        ("clip", "CLIPModel"),
        ("clipseg", "CLIPSegModel"),
        ("clvp", "ClvpModelForConditionalGeneration"),
        ("code_llama", "LlamaModel"),
        ("codegen", "CodeGenModel"),
        ("conditional_detr", "ConditionalDetrModel"),
@ -53,6 +53,7 @@ PROCESSOR_MAPPING_NAMES = OrderedDict(
        ("clap", "ClapProcessor"),
        ("clip", "CLIPProcessor"),
        ("clipseg", "CLIPSegProcessor"),
        ("clvp", "ClvpProcessor"),
        ("flava", "FlavaProcessor"),
        ("fuyu", "FuyuProcessor"),
        ("git", "GitProcessor"),
@ -121,6 +121,7 @@ else:
            "CLIPTokenizerFast" if is_tokenizers_available() else None,
        ),
    ),
    ("clvp", ("ClvpTokenizer", None)),
    (
        "code_llama",
        (
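Because `clvp` is registered in each of these auto mappings, the generic Auto classes can resolve the CLVP-specific classes from the model type alone. A small sketch using the development checkpoint referenced throughout this PR:

```python
from transformers import AutoConfig, AutoModel, AutoProcessor

config = AutoConfig.from_pretrained("susnato/clvp_dev")        # resolves to ClvpConfig
processor = AutoProcessor.from_pretrained("susnato/clvp_dev")  # resolves to ClvpProcessor
model = AutoModel.from_pretrained("susnato/clvp_dev")          # resolves to ClvpModelForConditionalGeneration
```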
83 src/transformers/models/clvp/__init__.py Normal file
@ -0,0 +1,83 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import (
    OptionalDependencyNotAvailable,
    _LazyModule,
    is_torch_available,
)


_import_structure = {
    "configuration_clvp": [
        "CLVP_PRETRAINED_CONFIG_ARCHIVE_MAP",
        "ClvpConfig",
        "ClvpDecoderConfig",
        "ClvpEncoderConfig",
    ],
    "feature_extraction_clvp": ["ClvpFeatureExtractor"],
    "processing_clvp": ["ClvpProcessor"],
    "tokenization_clvp": ["ClvpTokenizer"],
}


try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_clvp"] = [
        "CLVP_PRETRAINED_MODEL_ARCHIVE_LIST",
        "ClvpModelForConditionalGeneration",
        "ClvpForCausalLM",
        "ClvpModel",
        "ClvpPreTrainedModel",
        "ClvpEncoder",
        "ClvpDecoder",
    ]


if TYPE_CHECKING:
    from .configuration_clvp import (
        CLVP_PRETRAINED_CONFIG_ARCHIVE_MAP,
        ClvpConfig,
        ClvpDecoderConfig,
        ClvpEncoderConfig,
    )
    from .feature_extraction_clvp import ClvpFeatureExtractor
    from .processing_clvp import ClvpProcessor
    from .tokenization_clvp import ClvpTokenizer

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_clvp import (
            CLVP_PRETRAINED_MODEL_ARCHIVE_LIST,
            ClvpDecoder,
            ClvpEncoder,
            ClvpForCausalLM,
            ClvpModel,
            ClvpModelForConditionalGeneration,
            ClvpPreTrainedModel,
        )

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
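Thanks to this lazy-module setup, the tokenizer, processor, feature extractor and configuration classes stay importable even when PyTorch is not installed, while the modeling classes are only registered when torch is available. A short sketch of the effect:

```python
from transformers.utils import is_torch_available
from transformers.models.clvp import ClvpTokenizer  # always importable

if is_torch_available():
    from transformers.models.clvp import ClvpModelForConditionalGeneration  # torch-only
```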
457 src/transformers/models/clvp/configuration_clvp.py Normal file
@ -0,0 +1,457 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
""" CLVP model configuration"""
|
||||
|
||||
|
||||
import os
|
||||
from typing import TYPE_CHECKING, Union
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
pass
|
||||
|
||||
from ...configuration_utils import PretrainedConfig
|
||||
from ...utils import logging
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
CLVP_PRETRAINED_CONFIG_ARCHIVE_MAP = {
|
||||
"susnato/clvp_dev": "https://huggingface.co/susnato/clvp_dev/resolve/main/config.json",
|
||||
}
|
||||
|
||||
|
||||
class ClvpEncoderConfig(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of a [`ClvpEncoder`]. It is used to instantiate a CLVP
|
||||
text or CLVP speech encoder according to the specified arguments. Instantiating a configuration with the defaults
|
||||
will yield a similar configuration to that of the encoder of the CLVP
|
||||
[susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Args:
|
||||
vocab_size (`int`, *optional*, defaults to 256):
|
||||
Vocabulary size of the CLVP Encoder model.
|
||||
hidden_size (`int`, *optional*, defaults to 768):
|
||||
Dimensionality of the encoder layers and the pooler layer.
|
||||
intermediate_size (`int`, *optional*, defaults to 1536):
|
||||
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
|
||||
projection_dim (`int`, *optional*, defaults to 768):
|
||||
Dimensionality of the projection vector.
|
||||
num_hidden_layers (`int`, *optional*, defaults to 20):
|
||||
Number of hidden layers in the Transformer encoder.
|
||||
num_attention_heads (`int`, *optional*, defaults to 12):
|
||||
Number of attention heads for each attention layer in the Transformer encoder.
|
||||
hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
|
||||
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
|
||||
`"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported.
|
||||
layer_norm_eps (`float`, *optional*, defaults to 1e-05):
|
||||
The epsilon used by the layer normalization layers.
|
||||
attention_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio for the attention probabilities.
|
||||
dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio for the feed-forward layers in [`ClvpEncoderMLP`].
|
||||
        use_rotary_embedding (`bool`, *optional*, defaults to `True`):
            Whether or not to use rotary position embeddings.
|
||||
use_attention_bias (`bool`, *optional*, defaults to `False`):
|
||||
Whether to use bias in Query, Key and Value layers during self attention.
|
||||
summary_type (`str`, *optional*, defaults to `"mean"`):
|
||||
What strategy to use to get pooler_output from the last_hidden_state. `"last"`, `"first"`, `"mean"` and
|
||||
`"cls_index"` are supported.
|
||||
initializer_factor (`float`, *optional*, defaults to 1.0):
|
||||
A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization
|
||||
testing).
|
||||
bos_token_id (`int`, *optional*, defaults to 255):
|
||||
Beginning of sequence token id.
|
||||
eos_token_id (`int`, *optional*, defaults to 0):
|
||||
End of sequence token id.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> from transformers import ClvpEncoderConfig, ClvpEncoder
|
||||
|
||||
>>> # Initializing a ClvpEncoderConfig with susnato/clvp_dev style configuration
|
||||
>>> encoder_configuration = ClvpEncoderConfig()
|
||||
|
||||
>>> # Initializing a ClvpEncoder (with random weights) from the susnato/clvp_dev style configuration
|
||||
>>> model = ClvpEncoder(encoder_configuration)
|
||||
|
||||
>>> # Accessing the model configuration
|
||||
>>> configuration = model.config
|
||||
```"""
|
||||
|
||||
model_type = "clvp_encoder"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
vocab_size=256,
|
||||
hidden_size=768,
|
||||
intermediate_size=1536,
|
||||
projection_dim=768,
|
||||
num_hidden_layers=20,
|
||||
num_attention_heads=12,
|
||||
hidden_act="gelu",
|
||||
layer_norm_eps=1e-5,
|
||||
attention_dropout=0.1,
|
||||
dropout=0.1,
|
||||
use_rotary_embedding=True,
|
||||
use_attention_bias=False,
|
||||
summary_type="mean",
|
||||
initializer_factor=1.0,
|
||||
bos_token_id=255,
|
||||
eos_token_id=0,
|
||||
**kwargs,
|
||||
):
|
||||
self.vocab_size = vocab_size
|
||||
self.hidden_size = hidden_size
|
||||
self.intermediate_size = intermediate_size
|
||||
self.projection_dim = projection_dim
|
||||
self.num_hidden_layers = num_hidden_layers
|
||||
self.num_attention_heads = num_attention_heads
|
||||
self.layer_norm_eps = layer_norm_eps
|
||||
self.hidden_act = hidden_act
|
||||
self.initializer_factor = initializer_factor
|
||||
self.attention_dropout = attention_dropout
|
||||
self.dropout = dropout
|
||||
self.use_rotary_embedding = use_rotary_embedding
|
||||
self.use_attention_bias = use_attention_bias
|
||||
self.summary_type = summary_type
|
||||
self.bos_token_id = bos_token_id
|
||||
self.eos_token_id = eos_token_id
|
||||
|
||||
super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def from_pretrained(
|
||||
cls, pretrained_model_name_or_path: Union[str, os.PathLike], config_type: str = "text_config", **kwargs
|
||||
) -> "PretrainedConfig":
|
||||
cls._set_token_in_kwargs(kwargs)
|
||||
|
||||
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
|
||||
|
||||
# make sure to have the config_type be either "text_config" or "speech_config"
|
||||
# this is to make sure that we can load only text or speech configs from the nested ClvpConfig.
|
||||
        if config_type not in ["text_config", "speech_config"]:
            raise ValueError(
                f"We can only load either 'text_config' or 'speech_config' but you are trying to load {config_type}"
            )
|
||||
|
||||
# get the text config dict if we are loading from ClvpConfig
|
||||
if config_dict.get("model_type") == "clvp":
|
||||
config_dict = config_dict[config_type]
|
||||
|
||||
if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
|
||||
logger.warning(
|
||||
f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
|
||||
f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
|
||||
)
|
||||
|
||||
return cls.from_dict(config_dict, **kwargs)
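    # Illustrative sketch (not part of the original file): thanks to the `config_type`
    # argument above, either nested sub-config can be loaded on its own from a full CLVP
    # checkpoint, e.g.:
    #
    #     speech_encoder_config = ClvpEncoderConfig.from_pretrained(
    #         "susnato/clvp_dev", config_type="speech_config"
    #     )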
|
||||
|
||||
|
||||
class ClvpDecoderConfig(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of a [`ClvpDecoder`]. It is used to instantiate a CLVP
|
||||
Decoder Model according to the specified arguments, defining the model architecture. Instantiating a configuration
|
||||
with the defaults will yield a similar configuration to that of the Decoder part of the CLVP
|
||||
[susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
The architecture is similar to GPT2.
|
||||
|
||||
Args:
|
||||
vocab_size (`int`, *optional*, defaults to 8194):
|
||||
Vocabulary size of the model.
|
||||
max_position_embeddings (`int`, *optional*, defaults to 608):
|
||||
The maximum sequence length of mel tokens that this model might ever be used with. Similar to `n_positions`
|
||||
in `GPT2Config`.
|
||||
max_text_tokens (`int`, *optional*, defaults to 404):
|
||||
The maximum sequence length of text tokens that this model might ever be used with. Similar to
|
||||
`n_positions` in `GPT2Config`.
|
||||
hidden_size (`int`, *optional*, defaults to 1024):
|
||||
Dimensionality of the embeddings and hidden states.
|
||||
num_hidden_layers (`int`, *optional*, defaults to 30):
|
||||
Number of hidden layers in the Transformer encoder.
|
||||
num_attention_heads (`int`, *optional*, defaults to 16):
|
||||
Number of attention heads for each attention layer in the Transformer encoder.
|
||||
n_inner (`int`, *optional*):
|
||||
Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `hidden_size`.
|
||||
num_mel_attn_blocks (`int`, *optional*, defaults to 6):
|
||||
Denotes the number of self attention layers in [`ClvpConditioningEncoder`].
|
||||
activation_function (`str`, *optional*, defaults to `"gelu_new"`):
|
||||
Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
|
||||
resid_pdrop (`float`, *optional*, defaults to 0.1):
|
||||
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
|
||||
embd_pdrop (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio for the embeddings.
|
||||
attention_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio for the attention.
|
||||
layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
|
||||
The epsilon to use in the layer normalization layers.
|
||||
initializer_range (`float`, *optional*, defaults to 0.02):
|
||||
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
|
||||
summary_type (`string`, *optional*, defaults to `"cls_index"`):
|
||||
Argument used when doing sequence summary.
|
||||
|
||||
Has to be one of the following options:
|
||||
|
||||
- `"last"`: Take the last token hidden state (like XLNet).
|
||||
- `"first"`: Take the first token hidden state (like BERT).
|
||||
- `"mean"`: Take the mean of all tokens hidden states.
|
||||
- `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2).
|
||||
- `"attn"`: Not implemented now, use multi-head attention.
|
||||
summary_use_proj (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not to add a projection after the vector extraction.
|
||||
summary_activation (`str`, *optional*):
|
||||
Pass `"tanh"` for a tanh activation to the output, any other value will result in no activation.
|
||||
summary_proj_to_labels (`bool`, *optional*, defaults to `True`):
|
||||
Whether the projection outputs should have `config.num_labels` or `config.hidden_size` classes.
|
||||
summary_first_dropout (`float`, *optional*, defaults to 0.1):
|
||||
The dropout ratio to be used after the projection and activation.
|
||||
use_cache (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not the model should return the last key/values attentions (not used by all models).
|
||||
bos_token_id (`int`, *optional*, defaults to 8192):
|
||||
Beginning of sequence token id, used at the start of the generation.
|
||||
eos_token_id (`int`, *optional*, defaults to 8193):
|
||||
End of sequence token id, used in the method
|
||||
[`ClvpModelForConditionalGeneration.fix_speech_decoder_output()`] to correct decoder outputs.
|
||||
feature_size (`int`, *optional*, defaults to 80):
|
||||
The feature dimension of the extracted mel features. This value is used in [`ClvpConditioningEncoder`].
|
||||
use_attention_bias (`bool`, *optional*, defaults to `True`):
|
||||
Whether to use bias in Query, Key and Value layers during self attention.
|
||||
initializer_factor (`float`, *optional*, defaults to 1.0):
|
||||
A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization
|
||||
testing).
|
||||
decoder_fixing_codes (`list`, *optional*, defaults to `[83, 45, 45, 248]`):
|
||||
These values are used in the method `fix_speech_decoder_output` to fix decoder generated outputs.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> from transformers import ClvpDecoderConfig, ClvpDecoder
|
||||
|
||||
>>> # Initializing a ClvpDecoderConfig with susnato/clvp_dev style configuration
|
||||
>>> decoder_configuration = ClvpDecoderConfig()
|
||||
|
||||
>>> # Initializing a ClvpDecoder (with random weights) from the susnato/clvp_dev style configuration
|
||||
>>> model = ClvpDecoder(decoder_configuration)
|
||||
|
||||
>>> # Accessing the model configuration
|
||||
>>> configuration = model.config
|
||||
```"""
|
||||
|
||||
model_type = "clvp_decoder"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
vocab_size=8194,
|
||||
max_position_embeddings=608,
|
||||
max_text_tokens=404,
|
||||
hidden_size=1024,
|
||||
num_hidden_layers=30,
|
||||
num_attention_heads=16,
|
||||
n_inner=None,
|
||||
num_mel_attn_blocks=6,
|
||||
activation_function="gelu_new",
|
||||
resid_pdrop=0.1,
|
||||
embd_pdrop=0.1,
|
||||
attention_dropout=0.1,
|
||||
layer_norm_epsilon=1e-5,
|
||||
initializer_range=0.02,
|
||||
summary_type="cls_index",
|
||||
summary_use_proj=True,
|
||||
summary_activation=None,
|
||||
summary_proj_to_labels=True,
|
||||
summary_first_dropout=0.1,
|
||||
use_cache=True,
|
||||
bos_token_id=8192,
|
||||
eos_token_id=8193,
|
||||
feature_size=80,
|
||||
use_attention_bias=True,
|
||||
initializer_factor=1.0,
|
||||
decoder_fixing_codes=[83, 45, 45, 248],
|
||||
**kwargs,
|
||||
):
|
||||
self.vocab_size = vocab_size
|
||||
self.max_position_embeddings = max_position_embeddings
|
||||
self.max_text_tokens = max_text_tokens
|
||||
self.hidden_size = hidden_size
|
||||
self.num_hidden_layers = num_hidden_layers
|
||||
self.num_attention_heads = num_attention_heads
|
||||
self.n_inner = n_inner
|
||||
self.num_mel_attn_blocks = num_mel_attn_blocks
|
||||
self.activation_function = activation_function
|
||||
self.resid_pdrop = resid_pdrop
|
||||
self.embd_pdrop = embd_pdrop
|
||||
self.attention_dropout = attention_dropout
|
||||
self.layer_norm_epsilon = layer_norm_epsilon
|
||||
self.initializer_range = initializer_range
|
||||
self.summary_type = summary_type
|
||||
self.summary_use_proj = summary_use_proj
|
||||
self.summary_activation = summary_activation
|
||||
self.summary_first_dropout = summary_first_dropout
|
||||
self.summary_proj_to_labels = summary_proj_to_labels
|
||||
self.use_cache = use_cache
|
||||
self.feature_size = feature_size
|
||||
self.use_attention_bias = use_attention_bias
|
||||
self.initializer_factor = initializer_factor
|
||||
self.decoder_fixing_codes = decoder_fixing_codes
|
||||
|
||||
self.bos_token_id = bos_token_id
|
||||
self.eos_token_id = eos_token_id
|
||||
|
||||
super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
|
||||
cls._set_token_in_kwargs(kwargs)
|
||||
|
||||
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
|
||||
|
||||
# get the speech config dict if we are loading from ClvpConfig
|
||||
if config_dict.get("model_type") == "clvp":
|
||||
config_dict = config_dict["decoder_config"]
|
||||
|
||||
if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
|
||||
logger.warning(
|
||||
f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
|
||||
f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
|
||||
)
|
||||
|
||||
return cls.from_dict(config_dict, **kwargs)
|
||||
|
||||
|
||||
class ClvpConfig(PretrainedConfig):
|
||||
r"""
|
||||
[`ClvpConfig`] is the configuration class to store the configuration of a [`ClvpModelForConditionalGeneration`]. It
|
||||
is used to instantiate a CLVP model according to the specified arguments, defining the text model, speech model and
|
||||
decoder model configs. Instantiating a configuration with the defaults will yield a similar configuration to that
|
||||
of the CLVP [susnato/clvp_dev](https://huggingface.co/susnato/clvp_dev) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Args:
|
||||
text_config (`dict`, *optional*):
|
||||
Dictionary of configuration options used to initialize the CLVP text encoder.
|
||||
speech_config (`dict`, *optional*):
|
||||
Dictionary of configuration options used to initialize CLVP speech encoder.
|
||||
decoder_config (`dict`, *optional*):
|
||||
Dictionary of configuration options used to initialize [`ClvpDecoderConfig`].
|
||||
        projection_dim (`int`, *optional*, defaults to 768):
            Dimensionality of the text and speech projection layers.
        logit_scale_init_value (`float`, *optional*, defaults to 2.6592):
            The initial value of the *logit_scale* parameter. Default is used as per the original CLVP implementation.
|
||||
initializer_factor (`float`, *optional*, defaults to 1.0):
|
||||
A factor for initializing all weight matrices (should be kept to 1.0, used internally for initialization
|
||||
testing).
|
||||
kwargs (*optional*):
|
||||
Dictionary of keyword arguments.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> from transformers import ClvpConfig, ClvpModelForConditionalGeneration
|
||||
|
||||
>>> # Initializing a ClvpConfig with susnato/clvp_dev style configuration
|
||||
>>> configuration = ClvpConfig()
|
||||
|
||||
>>> # Initializing a ClvpModelForConditionalGeneration (with random weights) from the susnato/clvp_dev style configuration
|
||||
>>> model = ClvpModelForConditionalGeneration(configuration)
|
||||
|
||||
>>> # Accessing the model configuration
|
||||
>>> configuration = model.config
|
||||
|
||||
>>> # We can also initialize a ClvpConfig from a text ClvpEncoderConfig, a speech ClvpEncoderConfig and a ClvpDecoderConfig
|
||||
>>> from transformers import ClvpEncoderConfig, ClvpDecoderConfig
|
||||
|
||||
>>> # Initializing a CLVP text, CLVP speech and CLVP decoder configuration
|
||||
>>> config_text = ClvpEncoderConfig()
|
||||
>>> config_speech = ClvpEncoderConfig()
|
||||
>>> decoder_config = ClvpDecoderConfig()
|
||||
|
||||
>>> config = ClvpConfig.from_sub_model_configs(config_text, config_speech, decoder_config)
|
||||
```"""
|
||||
|
||||
model_type = "clvp"
|
||||
is_composition = True
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
text_config=None,
|
||||
speech_config=None,
|
||||
decoder_config=None,
|
||||
projection_dim=768,
|
||||
logit_scale_init_value=2.6592,
|
||||
initializer_factor=1.0,
|
||||
**kwargs,
|
||||
):
|
||||
super().__init__(**kwargs)
|
||||
|
||||
if text_config is None:
|
||||
text_config = {}
|
||||
logger.info("`text_config` is `None`. Initializing the `ClvpEncoderConfig` with default values.")
|
||||
|
||||
if speech_config is None:
|
||||
speech_config = {}
|
||||
logger.info("`speech_config` is `None`. initializing the `ClvpEncoderConfig` with default values.")
|
||||
|
||||
if decoder_config is None:
|
||||
decoder_config = {}
|
||||
logger.info("`decoder_config` is `None`. initializing the `ClvpDecoderConfig` with default values.")
|
||||
|
||||
self.text_config = ClvpEncoderConfig(**text_config)
|
||||
self.speech_config = ClvpEncoderConfig(**speech_config)
|
||||
self.decoder_config = ClvpDecoderConfig(**decoder_config)
|
||||
|
||||
self.projection_dim = projection_dim
|
||||
self.logit_scale_init_value = logit_scale_init_value
|
||||
self.initializer_factor = initializer_factor
|
||||
|
||||
@classmethod
|
||||
def from_sub_model_configs(
|
||||
cls,
|
||||
text_config: ClvpEncoderConfig,
|
||||
speech_config: ClvpEncoderConfig,
|
||||
decoder_config: ClvpDecoderConfig,
|
||||
**kwargs,
|
||||
):
|
||||
r"""
|
||||
Instantiate a [`ClvpConfig`] (or a derived class) from CLVP text model configuration, CLVP speech model
|
||||
configuration and CLVP decoder model configuration.
|
||||
|
||||
Args:
|
||||
text_config (`ClvpEncoderConfig`):
|
||||
Text model configuration of type [`ClvpEncoderConfig`].
|
||||
speech_config (`ClvpEncoderConfig`):
|
||||
Speech model configuration of type [`ClvpEncoderConfig`].
|
||||
decoder_config (`ClvpDecoderConfig`):
|
||||
Decoder model configuration of type [`ClvpDecoderConfig`].
|
||||
|
||||
Returns:
|
||||
[`ClvpConfig`]: An instance of a configuration object
|
||||
"""
|
||||
|
||||
return cls(
|
||||
text_config=text_config.to_dict(),
|
||||
speech_config=speech_config.to_dict(),
|
||||
decoder_config=decoder_config.to_dict(),
|
||||
**kwargs,
|
||||
)
|
234 src/transformers/models/clvp/convert_clvp_to_hf.py Normal file
@ -0,0 +1,234 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
Weights conversion script for CLVP
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import os
|
||||
|
||||
import torch
|
||||
from huggingface_hub import hf_hub_download
|
||||
|
||||
from transformers import ClvpConfig, ClvpModelForConditionalGeneration
|
||||
|
||||
|
||||
_MODELS = {
|
||||
"clvp": "https://huggingface.co/jbetker/tortoise-tts-v2/blob/main/.models/clvp2.pth",
|
||||
"decoder": "https://huggingface.co/jbetker/tortoise-tts-v2/blob/main/.models/autoregressive.pth",
|
||||
}
|
||||
|
||||
dim = 1024
|
||||
sub_dim = dim // 16
|
||||
|
||||
CLVP_ENCODERS_MAPPING = {
|
||||
"text_transformer.transformer.attn_layers": "text_encoder_model",
|
||||
"speech_transformer.transformer.attn_layers": "speech_encoder_model",
|
||||
"text_transformer.transformer.norm": "text_encoder_model.final_layer_norm",
|
||||
"speech_transformer.transformer.norm": "speech_encoder_model.final_layer_norm",
|
||||
"to_text_latent": "text_encoder_model.projection",
|
||||
"to_speech_latent": "speech_encoder_model.projection",
|
||||
"text_emb": "text_encoder_model.token_embedding",
|
||||
"speech_emb": "speech_encoder_model.token_embedding",
|
||||
"1.wrap.net.0": "mlp.fc1",
|
||||
"1.wrap.net.3": "mlp.fc2",
|
||||
"1.wrap": "self_attn",
|
||||
"to_out": "out_proj",
|
||||
"to_q": "q_proj",
|
||||
"to_k": "k_proj",
|
||||
"to_v": "v_proj",
|
||||
"temperature": "logit_scale",
|
||||
}
|
||||
|
||||
CLVP_DECODER_MAPPING = {
|
||||
"conditioning_encoder.init": "conditioning_encoder.mel_conv",
|
||||
"conditioning_encoder.attn": "conditioning_encoder.mel_attn_blocks",
|
||||
"mel_attn_blocks": "group_norms",
|
||||
".norm.weight": ".weight",
|
||||
".norm.bias": ".bias",
|
||||
"text_embedding": "conditioning_encoder.text_token_embedding",
|
||||
"text_pos_embedding.emb": "conditioning_encoder.text_position_embedding",
|
||||
"final_norm": "speech_decoder_model.final_norm",
|
||||
"mel_head": "speech_decoder_model.lm_head",
|
||||
"gpt.ln_f": "speech_decoder_model.model.decoder.layer_norm",
|
||||
"mel_embedding": "speech_decoder_model.model.decoder.input_embeds_layer",
|
||||
"mel_pos_embedding.emb": "speech_decoder_model.model.decoder.position_embeds_layer",
|
||||
"gpt.h": "speech_decoder_model.model.decoder.layers",
|
||||
"ln_1": "input_layernorm",
|
||||
"ln_2": "post_attention_layernorm",
|
||||
}
|
||||
|
||||
|
||||
def update_index(present_index):
|
||||
if present_index % 2 == 0:
|
||||
return int(present_index / 2)
|
||||
else:
|
||||
return int((present_index - 1) / 2)
|
||||
|
||||
|
||||
def convert_encoder_weights(original_weights):
|
||||
converted_weights = {}
|
||||
original_weights_keys = sorted(original_weights.keys())
|
||||
for original_key in original_weights_keys:
|
||||
updated_key = original_key
|
||||
# for input_rmsnorm.weight and post_attention_rmsnorm.weight
|
||||
if "0.0.g" in updated_key:
|
||||
present_index = updated_key.split(".")[4]
|
||||
if int(present_index) % 2 == 0:
|
||||
updated_key = updated_key.replace("0.0.g", "input_rmsnorm.weight")
|
||||
else:
|
||||
updated_key = updated_key.replace("0.0.g", "post_attention_rmsnorm.weight")
|
||||
|
||||
if "transformer.attn_layers.layers" in updated_key:
|
||||
present_index = updated_key.split(".")[4]
|
||||
updated_index = update_index(int(present_index))
|
||||
updated_key = updated_key.replace(
|
||||
f"transformer.attn_layers.layers.{present_index}", f"transformer.attn_layers.layers.{updated_index}"
|
||||
)
|
||||
|
||||
for k, v in CLVP_ENCODERS_MAPPING.items():
|
||||
if k in updated_key:
|
||||
updated_key = updated_key.replace(k, v)
|
||||
|
||||
converted_weights[updated_key] = original_weights.pop(original_key)
|
||||
|
||||
return converted_weights
|
||||
|
||||
|
||||
def convert_decoder_weights(original_weights):
|
||||
converted_weights = {}
|
||||
original_weights_keys = sorted(original_weights.keys())
|
||||
for original_key in original_weights_keys:
|
||||
updated_key = original_key
|
||||
if len(updated_key.split(".")) > 3:
|
||||
index, attr = updated_key.split(".")[2], updated_key.split(".")[-1]
|
||||
|
||||
# for decoder attention
|
||||
if "attn.c_attn" in updated_key:
|
||||
if attr == "weight":
|
||||
slice1, slice2, slice3 = original_weights[updated_key].squeeze(-1).T.split(split_size=dim, dim=0)
|
||||
else:
|
||||
slice1, slice2, slice3 = original_weights[updated_key].split(split_size=dim, dim=0)
|
||||
converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.q_proj.{attr}"] = slice1
|
||||
converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.k_proj.{attr}"] = slice2
|
||||
converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.v_proj.{attr}"] = slice3
|
||||
continue
|
||||
|
||||
if "attn.c_proj" in updated_key:
|
||||
converted_weights[f"speech_decoder_model.model.decoder.layers.{index}.attn.out_proj.{attr}"] = (
|
||||
original_weights[updated_key].squeeze(-1).T
|
||||
)
|
||||
continue
|
||||
|
||||
if "attn.bias" in updated_key or "attn.masked_bias" in updated_key or "text_head" in updated_key:
|
||||
original_weights.pop(updated_key)
|
||||
continue
|
||||
|
||||
# conditional encoder attention
|
||||
if "qkv" in updated_key:
|
||||
if attr == "weight":
|
||||
slice1, slice2, slice3 = original_weights[updated_key].squeeze(-1).split(split_size=dim, dim=0)
|
||||
else:
|
||||
slice1, slice2, slice3 = original_weights[updated_key].split(split_size=dim, dim=0)
|
||||
|
||||
indices = torch.arange(dim)
|
||||
index1, index2, index3 = (
|
||||
indices.unfold(0, sub_dim, sub_dim * 3).flatten(),
|
||||
indices[sub_dim:].unfold(0, sub_dim, sub_dim * 3).flatten(),
|
||||
indices[2 * sub_dim :].unfold(0, sub_dim, sub_dim * 3).flatten(),
|
||||
)
|
||||
|
||||
converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.q_proj.{attr}"] = torch.concatenate(
|
||||
[slice1[index1], slice2[index3], slice3[index2]],
|
||||
axis=0,
|
||||
)
|
||||
converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.k_proj.{attr}"] = torch.concatenate(
|
||||
[slice1[index2], slice2[index1], slice3[index3]],
|
||||
axis=0,
|
||||
)
|
||||
converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.v_proj.{attr}"] = torch.concatenate(
|
||||
[slice1[index3], slice2[index2], slice3[index1]],
|
||||
axis=0,
|
||||
)
|
||||
continue
|
||||
|
||||
if "proj_out" in updated_key:
|
||||
converted_weights[f"conditioning_encoder.mel_attn_blocks.{index}.out_proj.{attr}"] = original_weights[
|
||||
updated_key
|
||||
].squeeze(-1)
|
||||
continue
|
||||
|
||||
for k, v in CLVP_DECODER_MAPPING.items():
|
||||
if k in updated_key:
|
||||
updated_key = updated_key.replace(k, v)
|
||||
|
||||
converted_weights[updated_key] = original_weights.pop(original_key)
|
||||
|
||||
return converted_weights
|
||||
|
||||
|
||||
def _download(url: str, root: str):
|
||||
repo_id = f"{url.split('/')[3]}/{url.split('/')[4]}"
|
||||
filename = f"{url.split('/')[-2]}/{url.split('/')[-1]}"
|
||||
hf_hub_download(
|
||||
repo_id=repo_id,
|
||||
filename=filename,
|
||||
force_filename=root,
|
||||
local_dir_use_symlinks=False,
|
||||
)
|
||||
|
||||
|
||||
def convert_clvp_weights(checkpoint_path, pytorch_dump_folder_path):
|
||||
converted_checkpoint = {}
|
||||
|
||||
for each_model_name, each_model_url in _MODELS.items():
|
||||
each_model_path = os.path.join(checkpoint_path, each_model_url.split("/")[-1])
|
||||
if not os.path.exists(each_model_path):
|
||||
print(f"\n{each_model_name} was not found! Downloading it to {each_model_path}")
|
||||
_download(url=each_model_url, root=each_model_path)
|
||||
|
||||
if each_model_name == "clvp":
|
||||
clvp_checkpoint = torch.load(each_model_path, map_location="cpu")
|
||||
else:
|
||||
decoder_checkpoint = torch.load(each_model_path, map_location="cpu")
|
||||
|
||||
# Converting the weights
|
||||
converted_checkpoint.update(**convert_encoder_weights(clvp_checkpoint))
|
||||
converted_checkpoint.update(**convert_decoder_weights(decoder_checkpoint))
|
||||
|
||||
config = ClvpConfig.from_pretrained("susnato/clvp_dev")
|
||||
model = ClvpModelForConditionalGeneration(config)
|
||||
|
||||
model.load_state_dict(converted_checkpoint, strict=True)
|
||||
model.save_pretrained(pytorch_dump_folder_path)
|
||||
print(f"Model saved at {pytorch_dump_folder_path}!")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
# Required parameters
|
||||
parser.add_argument(
|
||||
"--checkpoint_path", type=str, help="Path to the folder of downloaded checkpoints. (Please enter full path)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--pytorch_dump_folder_path",
|
||||
default=None,
|
||||
type=str,
|
||||
help="Path to the output PyTorch model. (Please enter full path)",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
convert_clvp_weights(args.checkpoint_path, args.pytorch_dump_folder_path)
|
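The script above is meant to be driven by the two path arguments it defines. A hedged sketch of calling the same function from Python, with made-up local paths (the original checkpoints are downloaded automatically if they are not already present):

```python
# Assumes this is run from the directory containing the conversion script.
from convert_clvp_to_hf import convert_clvp_weights

convert_clvp_weights(
    checkpoint_path="/tmp/clvp_original_checkpoints",  # made-up folder for clvp2.pth / autoregressive.pth
    pytorch_dump_folder_path="/tmp/clvp_hf_model",     # made-up output folder for the converted model
)
```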
src/transformers/models/clvp/feature_extraction_clvp.py (new file, 238 lines)
@@ -0,0 +1,238 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
Feature extractor class for CLVP
|
||||
"""
|
||||
|
||||
from typing import List, Optional, Union
|
||||
|
||||
import numpy as np
|
||||
|
||||
from ...audio_utils import mel_filter_bank, spectrogram, window_function
|
||||
from ...feature_extraction_sequence_utils import SequenceFeatureExtractor
|
||||
from ...feature_extraction_utils import BatchFeature
|
||||
from ...utils import TensorType, logging
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
|
||||
class ClvpFeatureExtractor(SequenceFeatureExtractor):
|
||||
r"""
|
||||
Constructs a CLVP feature extractor.
|
||||
|
||||
This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
|
||||
most of the main methods. Users should refer to this superclass for more information regarding those methods.
|
||||
|
||||
This class extracts log-mel-spectrogram features from raw speech using a custom numpy implementation of the
Short Time Fourier Transform (STFT), which should match PyTorch's `torch.stft`.
|
||||
|
||||
Args:
|
||||
feature_size (`int`, *optional*, defaults to 80):
|
||||
The feature dimension of the extracted features.
|
||||
sampling_rate (`int`, *optional*, defaults to 22050):
|
||||
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
|
||||
default_audio_length (`int`, *optional*, defaults to 6):
|
||||
The default length of raw audio in seconds. If `max_length` is not set during `__call__` then it will
|
||||
automatically be set to `default_audio_length * self.sampling_rate`.
|
||||
hop_length (`int`, *optional*, defaults to 256):
|
||||
Length of the overlapping windows for the STFT used to obtain the Mel Frequency coefficients.
|
||||
chunk_length (`int`, *optional*, defaults to 30):
|
||||
The maximum number of chunks of `sampling_rate` samples used to trim and pad longer or shorter audio
|
||||
sequences.
|
||||
n_fft (`int`, *optional*, defaults to 1024):
|
||||
Size of the Fourier transform.
|
||||
padding_value (`float`, *optional*, defaults to 0.0):
|
||||
Padding value used to pad the audio. Should correspond to silences.
|
||||
mel_norms (`list` of length `feature_size`, *optional*):
|
||||
If `mel_norms` is provided then it will be used to normalize the log-mel spectrograms along each
|
||||
mel-filter.
|
||||
return_attention_mask (`bool`, *optional*, defaults to `False`):
|
||||
Whether to return the attention mask. If left to the default (`False`), the attention mask will not be returned.
|
||||
|
||||
[What are attention masks?](../glossary#attention-mask)
|
||||
"""
|
||||
|
||||
model_input_names = ["input_features", "attention_mask"]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
feature_size=80,
|
||||
sampling_rate=22050,
|
||||
default_audio_length=6,
|
||||
hop_length=256,
|
||||
chunk_length=30,
|
||||
n_fft=1024,
|
||||
padding_value=0.0,
|
||||
mel_norms=None,
|
||||
return_attention_mask=False, # pad inputs to max length with silence token (zero) and no attention mask
|
||||
**kwargs,
|
||||
):
|
||||
super().__init__(
|
||||
feature_size=feature_size,
|
||||
sampling_rate=sampling_rate,
|
||||
padding_value=padding_value,
|
||||
return_attention_mask=return_attention_mask,
|
||||
**kwargs,
|
||||
)
|
||||
self.n_fft = n_fft
|
||||
self.hop_length = hop_length
|
||||
self.chunk_length = chunk_length
|
||||
self.n_samples = chunk_length * sampling_rate
|
||||
self.nb_max_frames = self.n_samples // hop_length
|
||||
self.sampling_rate = sampling_rate
|
||||
self.default_audio_length = default_audio_length
|
||||
self.mel_norms = mel_norms
|
||||
self.mel_filters = mel_filter_bank(
|
||||
num_frequency_bins=1 + (n_fft // 2),
|
||||
num_mel_filters=feature_size,
|
||||
min_frequency=0.0,
|
||||
max_frequency=8000.0,
|
||||
sampling_rate=sampling_rate,
|
||||
norm="slaney",
|
||||
mel_scale="htk",
|
||||
)
|
||||
|
||||
def _np_extract_fbank_features(self, waveform: np.array) -> np.ndarray:
|
||||
"""
|
||||
This method first computes the log-mel spectrogram of the provided audio and then, if `mel_norms` is provided,
applies normalization along each mel-filterbank.
|
||||
"""
|
||||
log_spec = spectrogram(
|
||||
waveform,
|
||||
window_function(self.n_fft, "hann"),
|
||||
frame_length=self.n_fft,
|
||||
hop_length=self.hop_length,
|
||||
power=2.0,
|
||||
mel_filters=self.mel_filters,
|
||||
log_mel=None,
|
||||
)
|
||||
|
||||
log_spec = np.log(np.clip(log_spec, a_min=1e-5, a_max=None))
|
||||
|
||||
if self.mel_norms is not None:
|
||||
log_spec = log_spec / np.array(self.mel_norms)[:, None]
|
||||
|
||||
return log_spec
|
||||
|
||||
def __call__(
|
||||
self,
|
||||
raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
|
||||
sampling_rate: Optional[int] = None,
|
||||
truncation: bool = True,
|
||||
pad_to_multiple_of: Optional[int] = None,
|
||||
return_tensors: Optional[Union[str, TensorType]] = None,
|
||||
return_attention_mask: Optional[bool] = True,
|
||||
padding: Optional[str] = "max_length",
|
||||
max_length: Optional[int] = None,
|
||||
**kwargs,
|
||||
) -> BatchFeature:
|
||||
"""
|
||||
`ClvpFeatureExtractor` is used to extract various voice-specific properties from a sample voice or `raw_speech`,
such as the pitch and tone of the voice, the speaking speed, and even speaking defects like a lisp or stuttering.
|
||||
|
||||
First the audio is padded or truncated so that it becomes a waveform of `self.default_audio_length` seconds,
and then the log-mel spectrogram is extracted from it.
|
||||
|
||||
Args:
|
||||
raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
|
||||
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
|
||||
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
|
||||
stereo, i.e. single float per timestep.
|
||||
sampling_rate (`int`, *optional*):
|
||||
The sampling rate at which the `raw_speech` input was sampled. It is strongly recommended to pass
`sampling_rate` at the forward call to prevent silent errors and to allow automatic speech recognition
pipelines to work correctly.
|
||||
truncation (`bool`, *optional*, defaults to `True`):
|
||||
Activates truncation to cut input sequences longer than *max_length* to *max_length*.
|
||||
pad_to_multiple_of (`int`, *optional*):
|
||||
If set will pad the sequence to a multiple of the provided value.
|
||||
|
||||
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
|
||||
`>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
|
||||
return_attention_mask (`bool`, *optional*, defaults to `True`):
|
||||
Whether to return the attention mask. If left to the default, it will return the attention mask.
|
||||
|
||||
[What are attention masks?](../glossary#attention-mask)
|
||||
return_tensors (`str` or [`~utils.TensorType`], *optional*):
|
||||
If set, will return tensors instead of list of python integers. Acceptable values are:
|
||||
|
||||
- `'tf'`: Return TensorFlow `tf.constant` objects.
|
||||
- `'pt'`: Return PyTorch `torch.Tensor` objects.
|
||||
- `'np'`: Return Numpy `np.ndarray` objects.
|
||||
padding_value (`float`, defaults to 0.0):
|
||||
The value that is used to fill the padding values / vectors.
|
||||
max_length (`int`, *optional*):
|
||||
The maximum input length of the inputs.
|
||||
"""
|
||||
|
||||
if sampling_rate is not None:
|
||||
if sampling_rate != self.sampling_rate:
|
||||
raise ValueError(
|
||||
f"The model corresponding to this feature extractor: {self.__class__.__name__} was trained using a"
|
||||
f" sampling rate of {self.sampling_rate}. Please make sure that the provided `raw_speech` input"
|
||||
f" was sampled with {self.sampling_rate} and not {sampling_rate}."
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
"It is strongly recommended to pass the `sampling_rate` argument to this function. "
|
||||
"Failing to do so can result in silent errors that might be hard to debug."
|
||||
)
|
||||
|
||||
is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
|
||||
if is_batched_numpy and len(raw_speech.shape) > 2:
|
||||
raise ValueError(f"Only mono-channel audio is supported for input to {self}")
|
||||
is_batched = is_batched_numpy or (
|
||||
isinstance(raw_speech, (list, tuple)) and (isinstance(raw_speech[0], (np.ndarray, tuple, list)))
|
||||
)
|
||||
|
||||
if is_batched:
|
||||
raw_speech = [np.asarray([speech], dtype=np.float32).T for speech in raw_speech]
|
||||
elif not is_batched and not isinstance(raw_speech, np.ndarray):
|
||||
raw_speech = np.asarray(raw_speech, dtype=np.float32)
|
||||
elif isinstance(raw_speech, np.ndarray) and raw_speech.dtype is np.dtype(np.float64):
|
||||
raw_speech = raw_speech.astype(np.float32)
|
||||
|
||||
# always return batch
|
||||
if not is_batched:
|
||||
raw_speech = [np.asarray([raw_speech]).T]
|
||||
|
||||
batched_speech = BatchFeature({"input_features": raw_speech})
|
||||
|
||||
max_length = self.default_audio_length * self.sampling_rate if max_length is None else max_length
|
||||
|
||||
padded_inputs = self.pad(
|
||||
batched_speech,
|
||||
padding=padding,
|
||||
max_length=max_length,
|
||||
truncation=truncation,
|
||||
pad_to_multiple_of=pad_to_multiple_of,
|
||||
return_attention_mask=return_attention_mask,
|
||||
)
|
||||
|
||||
# make sure list is in array format
|
||||
input_features = padded_inputs.get("input_features").transpose(2, 0, 1)
|
||||
|
||||
input_features = [
|
||||
self._np_extract_fbank_features(waveform).astype(np.float32) for waveform in input_features[0]
|
||||
]
|
||||
|
||||
if isinstance(input_features[0], List):
|
||||
padded_inputs["input_features"] = [np.asarray(feature) for feature in input_features]
|
||||
else:
|
||||
padded_inputs["input_features"] = input_features
|
||||
|
||||
return padded_inputs.convert_to_tensors(return_tensors)
|
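A short usage sketch for the feature extractor defined above. The random waveform is a stand-in for real audio, and `susnato/clvp_dev` is the development checkpoint already referenced elsewhere in this diff:

```python
import numpy as np
from transformers import ClvpFeatureExtractor

feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")

# One second of fake mono audio at the extractor's sampling rate (22050 Hz by default).
waveform = np.random.randn(feature_extractor.sampling_rate).astype(np.float32)

inputs = feature_extractor(waveform, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")

# The audio is padded/truncated to `default_audio_length` seconds before the
# log-mel spectrogram is computed, so the frame count is fixed for a given config.
print(inputs["input_features"].shape)  # (batch, feature_size, num_frames)
```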
src/transformers/models/clvp/modeling_clvp.py (new file, 1945 lines)
File diff suppressed because it is too large.
src/transformers/models/clvp/number_normalizer.py (new file, 238 lines)
@@ -0,0 +1,238 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""English Normalizer class for CLVP."""
|
||||
|
||||
|
||||
import re
|
||||
|
||||
|
||||
class EnglishNormalizer:
|
||||
def __init__(self):
|
||||
# List of (regular expression, replacement) pairs for abbreviations:
|
||||
self._abbreviations = [
|
||||
(re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
|
||||
for x in [
|
||||
("mrs", "misess"),
|
||||
("mr", "mister"),
|
||||
("dr", "doctor"),
|
||||
("st", "saint"),
|
||||
("co", "company"),
|
||||
("jr", "junior"),
|
||||
("maj", "major"),
|
||||
("gen", "general"),
|
||||
("drs", "doctors"),
|
||||
("rev", "reverend"),
|
||||
("lt", "lieutenant"),
|
||||
("hon", "honorable"),
|
||||
("sgt", "sergeant"),
|
||||
("capt", "captain"),
|
||||
("esq", "esquire"),
|
||||
("ltd", "limited"),
|
||||
("col", "colonel"),
|
||||
("ft", "fort"),
|
||||
]
|
||||
]
|
||||
|
||||
self.ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
|
||||
self.teens = [
|
||||
"ten",
|
||||
"eleven",
|
||||
"twelve",
|
||||
"thirteen",
|
||||
"fourteen",
|
||||
"fifteen",
|
||||
"sixteen",
|
||||
"seventeen",
|
||||
"eighteen",
|
||||
"nineteen",
|
||||
]
|
||||
self.tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]
|
||||
|
||||
def number_to_words(self, num: int) -> str:
|
||||
"""
|
||||
Converts numbers (`int`) to words (`str`).

Please note that it only supports numbers up to `999_999_999_999_999_999`, i.e. "nine hundred ninety-nine
quadrillion, nine hundred ninety-nine trillion, nine hundred ninety-nine billion, nine hundred ninety-nine
million, nine hundred ninety-nine thousand, nine hundred ninety-nine".
|
||||
"""
|
||||
if num == 0:
|
||||
return "zero"
|
||||
elif num < 0:
|
||||
return "minus " + self.number_to_words(abs(num))
|
||||
elif num < 10:
|
||||
return self.ones[num]
|
||||
elif num < 20:
|
||||
return self.teens[num - 10]
|
||||
elif num < 100:
|
||||
return self.tens[num // 10] + ("-" + self.number_to_words(num % 10) if num % 10 != 0 else "")
|
||||
elif num < 1000:
|
||||
return (
|
||||
self.ones[num // 100] + " hundred" + (" " + self.number_to_words(num % 100) if num % 100 != 0 else "")
|
||||
)
|
||||
elif num < 1_000_000:
|
||||
return (
|
||||
self.number_to_words(num // 1000)
|
||||
+ " thousand"
|
||||
+ (", " + self.number_to_words(num % 1000) if num % 1000 != 0 else "")
|
||||
)
|
||||
elif num < 1_000_000_000:
|
||||
return (
|
||||
self.number_to_words(num // 1_000_000)
|
||||
+ " million"
|
||||
+ (", " + self.number_to_words(num % 1_000_000) if num % 1_000_000 != 0 else "")
|
||||
)
|
||||
elif num < 1_000_000_000_000:
|
||||
return (
|
||||
self.number_to_words(num // 1_000_000_000)
|
||||
+ " billion"
|
||||
+ (", " + self.number_to_words(num % 1_000_000_000) if num % 1_000_000_000 != 0 else "")
|
||||
)
|
||||
elif num < 1_000_000_000_000_000:
|
||||
return (
|
||||
self.number_to_words(num // 1_000_000_000_000)
|
||||
+ " trillion"
|
||||
+ (", " + self.number_to_words(num % 1_000_000_000_000) if num % 1_000_000_000_000 != 0 else "")
|
||||
)
|
||||
elif num < 1_000_000_000_000_000_000:
|
||||
return (
|
||||
self.number_to_words(num // 1_000_000_000_000_000)
|
||||
+ " quadrillion"
|
||||
+ (
|
||||
", " + self.number_to_words(num % 1_000_000_000_000_000)
|
||||
if num % 1_000_000_000_000_000 != 0
|
||||
else ""
|
||||
)
|
||||
)
|
||||
else:
|
||||
return "number out of range"
|
||||
|
||||
def convert_to_ascii(self, text: str) -> str:
|
||||
"""
|
||||
Converts unicode to ascii
|
||||
"""
|
||||
return text.encode("ascii", "ignore").decode("utf-8")
|
||||
|
||||
def _expand_dollars(self, m: str) -> str:
|
||||
"""
|
||||
This method is used to expand numerical dollar values into spoken words.
|
||||
"""
|
||||
match = m.group(1)
|
||||
parts = match.split(".")
|
||||
if len(parts) > 2:
|
||||
return match + " dollars" # Unexpected format
|
||||
|
||||
dollars = int(parts[0]) if parts[0] else 0
|
||||
cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
|
||||
if dollars and cents:
|
||||
dollar_unit = "dollar" if dollars == 1 else "dollars"
|
||||
cent_unit = "cent" if cents == 1 else "cents"
|
||||
return "%s %s, %s %s" % (dollars, dollar_unit, cents, cent_unit)
|
||||
elif dollars:
|
||||
dollar_unit = "dollar" if dollars == 1 else "dollars"
|
||||
return "%s %s" % (dollars, dollar_unit)
|
||||
elif cents:
|
||||
cent_unit = "cent" if cents == 1 else "cents"
|
||||
return "%s %s" % (cents, cent_unit)
|
||||
else:
|
||||
return "zero dollars"
|
||||
|
||||
def _remove_commas(self, m: str) -> str:
|
||||
"""
|
||||
This method is used to remove commas from sentences.
|
||||
"""
|
||||
return m.group(1).replace(",", "")
|
||||
|
||||
def _expand_decimal_point(self, m: str) -> str:
|
||||
"""
|
||||
This method is used to expand '.' into spoken word ' point '.
|
||||
"""
|
||||
return m.group(1).replace(".", " point ")
|
||||
|
||||
def _expand_ordinal(self, num: str) -> str:
|
||||
"""
|
||||
This method is used to expand ordinals such as '1st', '2nd' into spoken words.
|
||||
"""
|
||||
ordinal_suffixes = {1: "st", 2: "nd", 3: "rd"}
|
||||
|
||||
num = int(num.group(0)[:-2])
|
||||
if 10 <= num % 100 and num % 100 <= 20:
|
||||
suffix = "th"
|
||||
else:
|
||||
suffix = ordinal_suffixes.get(num % 10, "th")
|
||||
return self.number_to_words(num) + suffix
|
||||
|
||||
def _expand_number(self, m: str) -> str:
|
||||
"""
|
||||
This method acts as a preprocessing step for numbers between 1000 and 3000, mirroring the original repository
(see
https://github.com/neonbjb/tortoise-tts/blob/4003544b6ff4b68c09856e04d3eff9da26d023c2/tortoise/utils/tokenizer.py#L86).
|
||||
"""
|
||||
num = int(m.group(0))
|
||||
|
||||
if num > 1000 and num < 3000:
|
||||
if num == 2000:
|
||||
return "two thousand"
|
||||
elif num > 2000 and num < 2010:
|
||||
return "two thousand " + self.number_to_words(num % 100)
|
||||
elif num % 100 == 0:
|
||||
return self.number_to_words(num // 100) + " hundred"
|
||||
else:
|
||||
return self.number_to_words(num)
|
||||
else:
|
||||
return self.number_to_words(num)
|
||||
|
||||
def normalize_numbers(self, text: str) -> str:
|
||||
"""
|
||||
This method is used to normalize numbers within a text such as converting the numbers to words, removing
|
||||
commas, etc.
|
||||
"""
|
||||
text = re.sub(re.compile(r"([0-9][0-9\,]+[0-9])"), self._remove_commas, text)
|
||||
text = re.sub(re.compile(r"£([0-9\,]*[0-9]+)"), r"\1 pounds", text)
|
||||
text = re.sub(re.compile(r"\$([0-9\.\,]*[0-9]+)"), self._expand_dollars, text)
|
||||
text = re.sub(re.compile(r"([0-9]+\.[0-9]+)"), self._expand_decimal_point, text)
|
||||
text = re.sub(re.compile(r"[0-9]+(st|nd|rd|th)"), self._expand_ordinal, text)
|
||||
text = re.sub(re.compile(r"[0-9]+"), self._expand_number, text)
|
||||
return text
|
||||
|
||||
def expand_abbreviations(self, text: str) -> str:
|
||||
"""
|
||||
Expands abbreviated words.
|
||||
"""
|
||||
for regex, replacement in self._abbreviations:
|
||||
text = re.sub(regex, replacement, text)
|
||||
return text
|
||||
|
||||
def collapse_whitespace(self, text: str) -> str:
|
||||
"""
|
||||
Collapses multiple consecutive whitespace characters into a single space.
|
||||
"""
|
||||
return re.sub(re.compile(r"\s+"), " ", text)
|
||||
|
||||
def __call__(self, text):
|
||||
"""
|
||||
Converts the text to ascii, spells out numbers and number-like quantities, expands abbreviations and collapses
whitespace.
|
||||
"""
|
||||
|
||||
text = self.convert_to_ascii(text)
|
||||
text = text.lower()
|
||||
text = self.normalize_numbers(text)
|
||||
text = self.expand_abbreviations(text)
|
||||
text = self.collapse_whitespace(text)
|
||||
text = text.replace('"', "")
|
||||
|
||||
return text
|
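A quick end-to-end sketch of the normalizer; the commented output is what the code above is expected to produce, not a captured log:

```python
from transformers.models.clvp.number_normalizer import EnglishNormalizer

normalizer = EnglishNormalizer()

text = "Dr. Smith paid $2.50 on March 3rd."
print(normalizer(text))
# expected: "doctor smith paid two dollars, fifty cents on march third."
```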
src/transformers/models/clvp/processing_clvp.py (new file, 90 lines)
@@ -0,0 +1,90 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""
|
||||
Processor class for CLVP
|
||||
"""
|
||||
|
||||
|
||||
from ...processing_utils import ProcessorMixin
|
||||
|
||||
|
||||
class ClvpProcessor(ProcessorMixin):
|
||||
r"""
|
||||
Constructs a CLVP processor which wraps a CLVP Feature Extractor and a CLVP Tokenizer into a single processor.
|
||||
|
||||
[`ClvpProcessor`] offers all the functionalities of [`ClvpFeatureExtractor`] and [`ClvpTokenizer`]. See the
|
||||
[`~ClvpProcessor.__call__`], [`~ClvpProcessor.decode`] and [`~ClvpProcessor.batch_decode`] for more information.
|
||||
|
||||
Args:
|
||||
feature_extractor (`ClvpFeatureExtractor`):
|
||||
An instance of [`ClvpFeatureExtractor`]. The feature extractor is a required input.
|
||||
tokenizer (`ClvpTokenizer`):
|
||||
An instance of [`ClvpTokenizer`]. The tokenizer is a required input.
|
||||
"""
|
||||
feature_extractor_class = "ClvpFeatureExtractor"
|
||||
tokenizer_class = "ClvpTokenizer"
|
||||
model_input_names = [
|
||||
"input_ids",
|
||||
"input_features",
|
||||
"attention_mask",
|
||||
]
|
||||
|
||||
def __init__(self, feature_extractor, tokenizer):
|
||||
super().__init__(feature_extractor, tokenizer)
|
||||
|
||||
def __call__(self, *args, **kwargs):
|
||||
"""
|
||||
Forwards the `raw_speech` and `sampling_rate` arguments to [`~ClvpFeatureExtractor.__call__`] and the `text`
argument to [`~ClvpTokenizer.__call__`]. Please refer to the docstring of the above two methods for more
information.
|
||||
"""
|
||||
|
||||
raw_speech = kwargs.pop("raw_speech", None)
|
||||
sampling_rate = kwargs.pop("sampling_rate", None)
|
||||
text = kwargs.pop("text", None)
|
||||
|
||||
if raw_speech is None and text is None:
|
||||
raise ValueError("You need to specify either an `raw_speech` or `text` input to process.")
|
||||
|
||||
if raw_speech is not None:
|
||||
inputs = self.feature_extractor(raw_speech, sampling_rate=sampling_rate, **kwargs)
|
||||
if text is not None:
|
||||
encodings = self.tokenizer(text, **kwargs)
|
||||
|
||||
if text is None:
|
||||
return inputs
|
||||
elif raw_speech is None:
|
||||
return encodings
|
||||
else:
|
||||
inputs["input_ids"] = encodings["input_ids"]
|
||||
inputs["attention_mask"] = encodings["attention_mask"]
|
||||
return inputs
|
||||
|
||||
# Copied from transformers.models.whisper.processing_whisper.WhisperProcessor.batch_decode with Whisper->Clvp
|
||||
def batch_decode(self, *args, **kwargs):
|
||||
"""
|
||||
This method forwards all its arguments to ClvpTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please refer
|
||||
to the docstring of this method for more information.
|
||||
"""
|
||||
return self.tokenizer.batch_decode(*args, **kwargs)
|
||||
|
||||
# Copied from transformers.models.whisper.processing_whisper.WhisperProcessor.decode with Whisper->Clvp
|
||||
def decode(self, *args, **kwargs):
|
||||
"""
|
||||
This method forwards all its arguments to ClvpTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to the
|
||||
docstring of this method for more information.
|
||||
"""
|
||||
return self.tokenizer.decode(*args, **kwargs)
|
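A minimal sketch of the processor gluing both components together; the dummy waveform and the `susnato/clvp_dev` checkpoint (referenced elsewhere in this diff) are for illustration only:

```python
import numpy as np
from transformers import ClvpProcessor

processor = ClvpProcessor.from_pretrained("susnato/clvp_dev")

raw_speech = np.random.randn(22050).astype(np.float32)  # one second of fake audio
inputs = processor(
    raw_speech=raw_speech,
    sampling_rate=22050,
    text="this is an example sentence",
    return_tensors="pt",
)

# Audio features come from ClvpFeatureExtractor, token ids and mask from ClvpTokenizer.
print(sorted(inputs.keys()))  # expected: ['attention_mask', 'input_features', 'input_ids']
```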
src/transformers/models/clvp/tokenization_clvp.py (new file, 379 lines)
@@ -0,0 +1,379 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Tokenization class for CLVP."""
|
||||
|
||||
import json
|
||||
import os
|
||||
from functools import lru_cache
|
||||
from typing import List, Optional, Tuple
|
||||
|
||||
import regex as re
|
||||
|
||||
from ...tokenization_utils import AddedToken, PreTrainedTokenizer
|
||||
from ...utils import logging
|
||||
from .number_normalizer import EnglishNormalizer
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
VOCAB_FILES_NAMES = {
|
||||
"vocab_file": "vocab.json",
|
||||
"merges_file": "merges.txt",
|
||||
}
|
||||
|
||||
PRETRAINED_VOCAB_FILES_MAP = {
|
||||
"vocab_file": {
|
||||
"clvp_dev": "https://huggingface.co/susnato/clvp_dev/blob/main/vocab.json",
|
||||
},
|
||||
"merges_file": {
|
||||
"clvp_dev": "https://huggingface.co/susnato/clvp_dev/blob/main/merges.txt",
|
||||
},
|
||||
}
|
||||
|
||||
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
|
||||
"clvp_dev": 1024,
|
||||
}
|
||||
|
||||
|
||||
@lru_cache()
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.bytes_to_unicode
|
||||
def bytes_to_unicode():
|
||||
"""
|
||||
Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control
|
||||
characters the bpe code barfs on.
|
||||
|
||||
The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab
|
||||
if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for
|
||||
decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
|
||||
tables between utf-8 bytes and unicode strings.
|
||||
"""
|
||||
bs = (
|
||||
list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
|
||||
)
|
||||
cs = bs[:]
|
||||
n = 0
|
||||
for b in range(2**8):
|
||||
if b not in bs:
|
||||
bs.append(b)
|
||||
cs.append(2**8 + n)
|
||||
n += 1
|
||||
cs = [chr(n) for n in cs]
|
||||
return dict(zip(bs, cs))
|
||||
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.get_pairs
|
||||
def get_pairs(word):
|
||||
"""
|
||||
Return set of symbol pairs in a word.
|
||||
|
||||
Word is represented as tuple of symbols (symbols being variable-length strings).
|
||||
"""
|
||||
pairs = set()
|
||||
prev_char = word[0]
|
||||
for char in word[1:]:
|
||||
pairs.add((prev_char, char))
|
||||
prev_char = char
|
||||
return pairs
|
||||
|
||||
|
||||
class ClvpTokenizer(PreTrainedTokenizer):
|
||||
"""
|
||||
Construct a CLVP tokenizer. Based on byte-level Byte-Pair-Encoding.
|
||||
|
||||
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
|
||||
be encoded differently whether it is at the beginning of the sentence (without space) or not:
|
||||
|
||||
```python
|
||||
>>> from transformers import ClvpTokenizer
|
||||
|
||||
>>> tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")
|
||||
>>> tokenizer("Hello world")["input_ids"]
|
||||
[62, 84, 28, 2, 179, 79]
|
||||
|
||||
>>> tokenizer(" Hello world")["input_ids"]
|
||||
[2, 62, 84, 28, 2, 179, 79]
|
||||
```
|
||||
|
||||
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
|
||||
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
|
||||
|
||||
<Tip>
|
||||
|
||||
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
|
||||
|
||||
</Tip>
|
||||
|
||||
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
|
||||
this superclass for more information regarding those methods.
|
||||
|
||||
Args:
|
||||
vocab_file (`str`):
|
||||
Path to the vocabulary file.
|
||||
merges_file (`str`):
|
||||
Path to the merges file.
|
||||
errors (`str`, *optional*, defaults to `"replace"`):
|
||||
Paradigm to follow when decoding bytes to UTF-8. See
|
||||
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
|
||||
unk_token (`str`, *optional*, defaults to `"[UNK]"`):
|
||||
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
|
||||
token instead.
|
||||
bos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
|
||||
The beginning of sequence token.
|
||||
eos_token (`str`, *optional*, defaults to `"[STOP]"`):
|
||||
The end of sequence token.
|
||||
pad_token (`str`, *optional*, defaults to `"[STOP]"`):
|
||||
The pad token of the sequence.
|
||||
add_prefix_space (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The CLVP tokenizer detects the beginning of words by the preceding space.)
|
||||
add_bos_token (`bool`, *optional*, defaults to `False`):
|
||||
Whether to add `bos_token` in front of the sequence when `add_special_tokens=True`.
|
||||
add_eos_token (`bool`, *optional*, defaults to `False`):
|
||||
Whether to add `eos_token` at the end of the sequence when `add_special_tokens=True`.
|
||||
"""
|
||||
|
||||
vocab_files_names = VOCAB_FILES_NAMES
|
||||
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
|
||||
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
|
||||
model_input_names = [
|
||||
"input_ids",
|
||||
"attention_mask",
|
||||
]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
vocab_file,
|
||||
merges_file,
|
||||
errors="replace",
|
||||
unk_token="[UNK]",
|
||||
bos_token="<|endoftext|>",
|
||||
eos_token="[STOP]",
|
||||
pad_token="[STOP]",
|
||||
add_prefix_space=False,
|
||||
add_bos_token=False,
|
||||
add_eos_token=False,
|
||||
**kwargs,
|
||||
):
|
||||
bos_token = AddedToken(bos_token, special=True) if isinstance(bos_token, str) else bos_token
|
||||
eos_token = AddedToken(eos_token, special=True) if isinstance(eos_token, str) else eos_token
|
||||
unk_token = AddedToken(unk_token, special=True) if isinstance(unk_token, str) else unk_token
|
||||
pad_token = AddedToken(pad_token, special=True) if isinstance(pad_token, str) else pad_token
|
||||
|
||||
self.add_bos_token = add_bos_token
|
||||
self.add_eos_token = add_eos_token
|
||||
self._normalizer = None
|
||||
|
||||
with open(vocab_file, encoding="utf-8") as vocab_handle:
|
||||
self.encoder = json.load(vocab_handle)
|
||||
self.decoder = {v: k for k, v in self.encoder.items()}
|
||||
self.errors = errors # how to handle errors in decoding
|
||||
self.byte_encoder = bytes_to_unicode()
|
||||
self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
|
||||
with open(merges_file, encoding="utf-8") as merges_handle:
|
||||
bpe_merges = merges_handle.read().split("\n")[1:-1]
|
||||
bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
|
||||
self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
|
||||
self.cache = {}
|
||||
self.add_prefix_space = add_prefix_space
|
||||
|
||||
# Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
|
||||
self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")
|
||||
|
||||
super().__init__(
|
||||
errors=errors,
|
||||
unk_token=unk_token,
|
||||
bos_token=bos_token,
|
||||
eos_token=eos_token,
|
||||
pad_token=pad_token,
|
||||
add_prefix_space=add_prefix_space,
|
||||
add_bos_token=add_bos_token,
|
||||
add_eos_token=add_eos_token,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
@property
|
||||
def vocab_size(self):
|
||||
return len(self.encoder)
|
||||
|
||||
@property
|
||||
def normalizer(self):
|
||||
if self._normalizer is None:
|
||||
self._normalizer = EnglishNormalizer()
|
||||
return self._normalizer
|
||||
|
||||
def get_vocab(self):
|
||||
return dict(self.encoder, **self.added_tokens_encoder)
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.bpe
|
||||
def bpe(self, token):
|
||||
if token in self.cache:
|
||||
return self.cache[token]
|
||||
word = tuple(token)
|
||||
pairs = get_pairs(word)
|
||||
|
||||
if not pairs:
|
||||
return token
|
||||
|
||||
while True:
|
||||
bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
|
||||
if bigram not in self.bpe_ranks:
|
||||
break
|
||||
first, second = bigram
|
||||
new_word = []
|
||||
i = 0
|
||||
while i < len(word):
|
||||
try:
|
||||
j = word.index(first, i)
|
||||
except ValueError:
|
||||
new_word.extend(word[i:])
|
||||
break
|
||||
else:
|
||||
new_word.extend(word[i:j])
|
||||
i = j
|
||||
|
||||
if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
|
||||
new_word.append(first + second)
|
||||
i += 2
|
||||
else:
|
||||
new_word.append(word[i])
|
||||
i += 1
|
||||
new_word = tuple(new_word)
|
||||
word = new_word
|
||||
if len(word) == 1:
|
||||
break
|
||||
else:
|
||||
pairs = get_pairs(word)
|
||||
word = " ".join(word)
|
||||
self.cache[token] = word
|
||||
return word
|
||||
|
||||
# Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.build_inputs_with_special_tokens
|
||||
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
|
||||
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
|
||||
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
|
||||
|
||||
output = bos_token_id + token_ids_0 + eos_token_id
|
||||
|
||||
if token_ids_1 is not None:
|
||||
output = output + bos_token_id + token_ids_1 + eos_token_id
|
||||
|
||||
return output
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.get_special_tokens_mask
|
||||
def get_special_tokens_mask(
|
||||
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
|
||||
) -> List[int]:
|
||||
"""
|
||||
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
|
||||
special tokens using the tokenizer `prepare_for_model` or `encode_plus` methods.
|
||||
|
||||
Args:
|
||||
token_ids_0 (`List[int]`):
|
||||
List of IDs.
|
||||
token_ids_1 (`List[int]`, *optional*):
|
||||
Optional second list of IDs for sequence pairs.
|
||||
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not the token list is already formatted with special tokens for the model.
|
||||
|
||||
Returns:
|
||||
`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
|
||||
"""
|
||||
if already_has_special_tokens:
|
||||
return super().get_special_tokens_mask(
|
||||
token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
|
||||
)
|
||||
|
||||
if not self.add_bos_token:
|
||||
return super().get_special_tokens_mask(
|
||||
token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=False
|
||||
)
|
||||
|
||||
if token_ids_1 is None:
|
||||
return [1] + ([0] * len(token_ids_0))
|
||||
return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1))
|
||||
|
||||
def _tokenize(self, text):
|
||||
"""Tokenize a string."""
|
||||
bpe_tokens = []
|
||||
text = self.normalizer(text)
|
||||
for token in re.findall(self.pat, text):
|
||||
token = "".join(
|
||||
self.byte_encoder[b] for b in token.encode("utf-8")
|
||||
) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
|
||||
|
||||
# if the token is "Ġ" we replace it with "[SPACE]" (if "[SPACE]" is present in the vocab), otherwise we keep the "Ġ".
|
||||
bpe_tokens.extend(
|
||||
"[SPACE]" if bpe_token == "\u0120" and "[SPACE]" in self.encoder.keys() else bpe_token
|
||||
for bpe_token in self.bpe(token).split(" ")
|
||||
)
|
||||
|
||||
return bpe_tokens
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer._convert_token_to_id
|
||||
def _convert_token_to_id(self, token):
|
||||
"""Converts a token (str) in an id using the vocab."""
|
||||
return self.encoder.get(token, self.encoder.get(self.unk_token))
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer._convert_id_to_token
|
||||
def _convert_id_to_token(self, index):
|
||||
"""Converts an index (integer) in a token (str) using the vocab."""
|
||||
return self.decoder.get(index)
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.convert_tokens_to_string
|
||||
def convert_tokens_to_string(self, tokens):
|
||||
"""Converts a sequence of tokens (string) in a single string."""
|
||||
text = "".join(tokens)
|
||||
text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
|
||||
return text
|
||||
|
||||
def clean_up_tokenization(self, text):
|
||||
text = "".join(text)
|
||||
vocab_tokens = list(self.encoder.keys()) + list(self.added_tokens_encoder.keys())
|
||||
|
||||
text = text.replace("[SPACE]", " ") if "[SPACE]" in vocab_tokens else text
|
||||
text = text.replace("[STOP]", " ") if "[STOP]" in vocab_tokens else text
|
||||
|
||||
text = text.replace(self.unk_token, "").replace(" ", " ").replace(" ", " ")
|
||||
return text
|
||||
|
||||
# Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.save_vocabulary
|
||||
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
|
||||
if not os.path.isdir(save_directory):
|
||||
logger.error(f"Vocabulary path ({save_directory}) should be a directory")
|
||||
return
|
||||
vocab_file = os.path.join(
|
||||
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
|
||||
)
|
||||
merge_file = os.path.join(
|
||||
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
|
||||
)
|
||||
|
||||
with open(vocab_file, "w", encoding="utf-8") as f:
|
||||
f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")
|
||||
|
||||
index = 0
|
||||
with open(merge_file, "w", encoding="utf-8") as writer:
|
||||
writer.write("#version: 0.2\n")
|
||||
for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
|
||||
if index != token_index:
|
||||
logger.warning(
|
||||
f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
|
||||
" Please check that the tokenizer is not corrupted!"
|
||||
)
|
||||
index = token_index
|
||||
writer.write(" ".join(bpe_tokens) + "\n")
|
||||
index += 1
|
||||
|
||||
return vocab_file, merge_file
|
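Because `_tokenize` runs the `EnglishNormalizer` before byte-level BPE, digits are spelled out and the text is lower-cased before encoding. A small sketch (the exact token ids depend on the released vocabulary, so only the decoded round-trip is shown, and the output comment is the expected behaviour rather than a captured log):

```python
from transformers import ClvpTokenizer

tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")

# "3" is normalized to "three" and the text is lower-cased before BPE runs.
ids = tokenizer("I have 3 cats")["input_ids"]

# `clean_up_tokenization` maps "[SPACE]" back to a real space while decoding.
print(tokenizer.decode(ids))  # expected to read: "i have three cats"
```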
@@ -1940,6 +1940,51 @@ class CLIPSegVisionModel(metaclass=DummyObject):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
CLVP_PRETRAINED_MODEL_ARCHIVE_LIST = None
|
||||
|
||||
|
||||
class ClvpDecoder(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class ClvpEncoder(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class ClvpForCausalLM(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class ClvpModel(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class ClvpModelForConditionalGeneration(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class ClvpPreTrainedModel(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
CODEGEN_PRETRAINED_MODEL_ARCHIVE_LIST = None
|
||||
|
||||
|
||||
|
tests/models/clvp/__init__.py (new file, 0 lines)
tests/models/clvp/test_feature_extraction_clvp.py (new file, 237 lines)
@@ -0,0 +1,237 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 HuggingFace Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import gc
|
||||
import itertools
|
||||
import os
|
||||
import random
|
||||
import tempfile
|
||||
import unittest
|
||||
|
||||
import numpy as np
|
||||
from datasets import Audio, load_dataset
|
||||
|
||||
from transformers import ClvpFeatureExtractor
|
||||
from transformers.testing_utils import check_json_file_has_correct_format, require_torch, slow
|
||||
from transformers.utils.import_utils import is_torch_available
|
||||
|
||||
from ...test_sequence_feature_extraction_common import SequenceFeatureExtractionTestMixin
|
||||
|
||||
|
||||
if is_torch_available():
|
||||
import torch
|
||||
|
||||
global_rng = random.Random()
|
||||
|
||||
|
||||
# Copied from transformers.tests.models.whisper.test_feature_extraction_whisper.floats_list
|
||||
def floats_list(shape, scale=1.0, rng=None, name=None):
|
||||
"""Creates a random float32 tensor"""
|
||||
if rng is None:
|
||||
rng = global_rng
|
||||
|
||||
values = []
|
||||
for batch_idx in range(shape[0]):
|
||||
values.append([])
|
||||
for _ in range(shape[1]):
|
||||
values[-1].append(rng.random() * scale)
|
||||
|
||||
return values
|
||||
|
||||
|
||||
@require_torch
|
||||
class ClvpFeatureExtractionTester(unittest.TestCase):
|
||||
def __init__(
|
||||
self,
|
||||
parent,
|
||||
batch_size=7,
|
||||
min_seq_length=400,
|
||||
max_seq_length=2000,
|
||||
feature_size=10,
|
||||
hop_length=160,
|
||||
chunk_length=8,
|
||||
padding_value=0.0,
|
||||
sampling_rate=4_000,
|
||||
return_attention_mask=False,
|
||||
):
|
||||
self.parent = parent
|
||||
self.batch_size = batch_size
|
||||
self.min_seq_length = min_seq_length
|
||||
self.max_seq_length = max_seq_length
|
||||
self.seq_length_diff = (self.max_seq_length - self.min_seq_length) // (self.batch_size - 1)
|
||||
self.padding_value = padding_value
|
||||
self.sampling_rate = sampling_rate
|
||||
self.return_attention_mask = return_attention_mask
|
||||
self.feature_size = feature_size
|
||||
self.chunk_length = chunk_length
|
||||
self.hop_length = hop_length
|
||||
|
||||
def prepare_feat_extract_dict(self):
|
||||
return {
|
||||
"feature_size": self.feature_size,
|
||||
"hop_length": self.hop_length,
|
||||
"chunk_length": self.chunk_length,
|
||||
"padding_value": self.padding_value,
|
||||
"sampling_rate": self.sampling_rate,
|
||||
"return_attention_mask": self.return_attention_mask,
|
||||
}
|
||||
|
||||
# Copied from transformers.tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTester.prepare_inputs_for_common
|
||||
def prepare_inputs_for_common(self, equal_length=False, numpify=False):
|
||||
def _flatten(list_of_lists):
|
||||
return list(itertools.chain(*list_of_lists))
|
||||
|
||||
if equal_length:
|
||||
speech_inputs = [floats_list((self.max_seq_length, self.feature_size)) for _ in range(self.batch_size)]
|
||||
else:
|
||||
# make sure that inputs increase in size
|
||||
speech_inputs = [
|
||||
floats_list((x, self.feature_size))
|
||||
for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff)
|
||||
]
|
||||
if numpify:
|
||||
speech_inputs = [np.asarray(x) for x in speech_inputs]
|
||||
return speech_inputs
|
||||
|
||||
|
||||
@require_torch
|
||||
class ClvpFeatureExtractionTest(SequenceFeatureExtractionTestMixin, unittest.TestCase):
|
||||
feature_extraction_class = ClvpFeatureExtractor
|
||||
|
||||
def setUp(self):
|
||||
self.feat_extract_tester = ClvpFeatureExtractionTester(self)
|
||||
|
||||
def tearDown(self):
|
||||
super().tearDown()
|
||||
# clean up as much of the GPU memory occupied by PyTorch as possible
|
||||
gc.collect()
|
||||
torch.cuda.empty_cache()
|
||||
|
    # Copied from transformers.tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTest.test_feat_extract_from_and_save_pretrained
    def test_feat_extract_from_and_save_pretrained(self):
        feat_extract_first = self.feature_extraction_class(**self.feat_extract_dict)

        with tempfile.TemporaryDirectory() as tmpdirname:
            saved_file = feat_extract_first.save_pretrained(tmpdirname)[0]
            check_json_file_has_correct_format(saved_file)
            feat_extract_second = self.feature_extraction_class.from_pretrained(tmpdirname)

        dict_first = feat_extract_first.to_dict()
        dict_second = feat_extract_second.to_dict()
        mel_1 = feat_extract_first.mel_filters
        mel_2 = feat_extract_second.mel_filters
        self.assertTrue(np.allclose(mel_1, mel_2))
        self.assertEqual(dict_first, dict_second)

    # Copied from transformers.tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTest.test_feat_extract_to_json_file
    def test_feat_extract_to_json_file(self):
        feat_extract_first = self.feature_extraction_class(**self.feat_extract_dict)

        with tempfile.TemporaryDirectory() as tmpdirname:
            json_file_path = os.path.join(tmpdirname, "feat_extract.json")
            feat_extract_first.to_json_file(json_file_path)
            feat_extract_second = self.feature_extraction_class.from_json_file(json_file_path)

        dict_first = feat_extract_first.to_dict()
        dict_second = feat_extract_second.to_dict()
        mel_1 = feat_extract_first.mel_filters
        mel_2 = feat_extract_second.mel_filters
        self.assertTrue(np.allclose(mel_1, mel_2))
        self.assertEqual(dict_first, dict_second)

    def test_call(self):
        # Tests that all call wrap to encode_plus and batch_encode_plus
        feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        # create three inputs of length 800, 1000, and 1200
        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
        np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs]

        # Test feature size
        input_features = feature_extractor(np_speech_inputs, padding="max_length", return_tensors="np").input_features
        self.assertTrue(input_features.ndim == 3)
        self.assertTrue(input_features.shape[-2] == feature_extractor.feature_size)

        # Test not batched input
        encoded_sequences_1 = feature_extractor(speech_inputs[0], return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs[0], return_tensors="np").input_features
        self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3))

        # Test batched
        encoded_sequences_1 = feature_extractor(speech_inputs, return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs, return_tensors="np").input_features
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

        # Test 2-D numpy arrays are batched.
        speech_inputs = [floats_list((1, x))[0] for x in (800, 800, 800)]
        np_speech_inputs = np.asarray(speech_inputs)
        encoded_sequences_1 = feature_extractor(speech_inputs, return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs, return_tensors="np").input_features
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

        # Test truncation required
        speech_inputs = [floats_list((1, x))[0] for x in range(200, (feature_extractor.n_samples + 500), 200)]
        np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs]

        speech_inputs_truncated = [x[: feature_extractor.n_samples] for x in speech_inputs]
        np_speech_inputs_truncated = [np.asarray(speech_input) for speech_input in speech_inputs_truncated]

        encoded_sequences_1 = feature_extractor(np_speech_inputs, return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs_truncated, return_tensors="np").input_features
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

    # Copied from transformers.tests.models.whisper.test_feature_extraction_whisper.WhisperFeatureExtractionTest.test_double_precision_pad
    def test_double_precision_pad(self):
        import torch

        feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        np_speech_inputs = np.random.rand(100, 32).astype(np.float64)
        py_speech_inputs = np_speech_inputs.tolist()

        for inputs in [py_speech_inputs, np_speech_inputs]:
            np_processed = feature_extractor.pad([{"input_features": inputs}], return_tensors="np")
            self.assertTrue(np_processed.input_features.dtype == np.float32)
            pt_processed = feature_extractor.pad([{"input_features": inputs}], return_tensors="pt")
            self.assertTrue(pt_processed.input_features.dtype == torch.float32)

    def _load_datasamples(self, num_samples):
        ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        ds = ds.cast_column("audio", Audio(sampling_rate=22050))
        # automatic decoding with librispeech
        speech_samples = ds.sort("id").select(range(num_samples))[:num_samples]["audio"]

        return [x["array"] for x in speech_samples], [x["sampling_rate"] for x in speech_samples]

    @slow
    def test_integration(self):
        # fmt: off
        EXPECTED_INPUT_FEATURES = torch.tensor(
            [
                0.9271, 1.1405, 1.4419, 1.2470, 1.2438, 1.1787, 1.0595, 1.0570, 1.1070,
                1.2205, 1.2376, 1.2997, 1.1131, 1.0843, 1.0459, 1.1858, 1.2323, 1.3582,
                1.3401, 1.3770, 1.4173, 1.3381, 1.2291, 1.0854, 1.2116, 1.1873, 1.2178,
                1.2137, 1.3001, 1.4274
            ]
        )
        # fmt: on

        input_speech, sr = self._load_datasamples(1)

        feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")
        input_features = feature_extractor(input_speech, sampling_rate=sr[0], return_tensors="pt").input_features
        self.assertEqual(input_features.shape, (1, 80, 517))
        self.assertTrue(torch.allclose(input_features[0, 0, :30], EXPECTED_INPUT_FEATURES, atol=1e-4))

tests/models/clvp/test_modeling_clvp.py (new file, 640 lines)
@@ -0,0 +1,640 @@
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
""" Testing suite for the PyTorch Clvp model. """
|
||||
|
||||
|
||||
import gc
|
||||
import tempfile
|
||||
import unittest
|
||||
|
||||
import datasets
|
||||
import numpy as np
|
||||
|
||||
from transformers import ClvpConfig, ClvpDecoderConfig, ClvpEncoderConfig
|
||||
from transformers.testing_utils import (
|
||||
require_torch,
|
||||
slow,
|
||||
torch_device,
|
||||
)
|
||||
from transformers.utils import is_torch_available
|
||||
|
||||
from ...generation.test_utils import GenerationTesterMixin
|
||||
from ...test_configuration_common import ConfigTester
|
||||
from ...test_modeling_common import (
|
||||
ModelTesterMixin,
|
||||
_config_zero_init,
|
||||
ids_tensor,
|
||||
random_attention_mask,
|
||||
)
|
||||
|
||||
|
||||
if is_torch_available():
|
||||
import torch
|
||||
|
||||
from transformers import ClvpEncoder, ClvpForCausalLM, ClvpModel, ClvpModelForConditionalGeneration
|
||||
from transformers.models.clvp.modeling_clvp import CLVP_PRETRAINED_MODEL_ARCHIVE_LIST
|
||||
|
||||
from transformers import ClvpFeatureExtractor, ClvpTokenizer
|
||||
|
||||
|
||||
class ClvpEncoderTester:
|
||||
def __init__(
|
||||
self,
|
||||
parent,
|
||||
batch_size=2,
|
||||
seq_length=7,
|
||||
is_training=False,
|
||||
use_input_mask=True,
|
||||
use_labels=True,
|
||||
vocab_size=50,
|
||||
hidden_size=128,
|
||||
projection_dim=16,
|
||||
num_hidden_layers=2,
|
||||
num_attention_heads=4,
|
||||
intermediate_size=32,
|
||||
dropout=0.1,
|
||||
attention_dropout=0.1,
|
||||
initializer_range=0.02,
|
||||
scope=None,
|
||||
):
|
||||
self.parent = parent
|
||||
self.batch_size = batch_size
|
||||
self.seq_length = seq_length
|
||||
self.is_training = is_training
|
||||
self.use_input_mask = use_input_mask
|
||||
self.use_labels = use_labels
|
||||
self.vocab_size = vocab_size
|
||||
self.hidden_size = hidden_size
|
||||
self.projection_dim = projection_dim
|
||||
self.num_hidden_layers = num_hidden_layers
|
||||
self.num_attention_heads = num_attention_heads
|
||||
self.intermediate_size = intermediate_size
|
||||
self.dropout = dropout
|
||||
self.attention_dropout = attention_dropout
|
||||
self.initializer_range = initializer_range
|
||||
self.scope = scope
|
||||
self.bos_token_id = vocab_size - 1
|
||||
self.eos_token_id = vocab_size - 1
|
||||
|
||||
def get_config(self):
|
||||
encoder_config = ClvpEncoderConfig(
|
||||
vocab_size=self.vocab_size,
|
||||
hidden_size=self.hidden_size,
|
||||
projection_dim=self.projection_dim,
|
||||
num_hidden_layers=self.num_hidden_layers,
|
||||
num_attention_heads=self.num_attention_heads,
|
||||
intermediate_size=self.intermediate_size,
|
||||
dropout=self.dropout,
|
||||
attention_dropout=self.attention_dropout,
|
||||
initializer_range=self.initializer_range,
|
||||
bos_token_id=self.bos_token_id,
|
||||
eos_token_id=self.eos_token_id,
|
||||
)
|
||||
|
||||
return encoder_config
|
||||
|
||||
def prepare_config_and_inputs(self):
|
||||
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
|
||||
|
||||
input_mask = None
|
||||
if self.use_input_mask:
|
||||
input_mask = random_attention_mask([self.batch_size, self.seq_length])
|
||||
|
||||
if input_mask is not None:
|
||||
batch_size, seq_length = input_mask.shape
|
||||
rnd_start_indices = np.random.randint(1, seq_length - 1, size=(batch_size,))
|
||||
for batch_idx, start_index in enumerate(rnd_start_indices):
|
||||
input_mask[batch_idx, :start_index] = 1
|
||||
input_mask[batch_idx, start_index:] = 0
|
||||
|
||||
encoder_config = self.get_config()
|
||||
|
||||
return encoder_config, input_ids, input_mask
|
||||
|
||||
def prepare_config_and_inputs_for_common(self):
|
||||
config_and_inputs = self.prepare_config_and_inputs()
|
||||
speech_config, input_ids, input_mask = config_and_inputs
|
||||
inputs_dict = {"input_ids": input_ids.to(torch_device), "attention_mask": input_mask.to(torch_device)}
|
||||
return speech_config, inputs_dict
|
||||
|
||||
def create_and_check_model(self, speech_config, input_ids, input_mask):
|
||||
text_config = ClvpEncoderConfig(
|
||||
vocab_size=self.vocab_size,
|
||||
hidden_size=self.hidden_size,
|
||||
projection_dim=self.projection_dim,
|
||||
num_hidden_layers=self.num_hidden_layers,
|
||||
num_attention_heads=self.num_attention_heads,
|
||||
intermediate_size=self.intermediate_size,
|
||||
dropout=self.dropout,
|
||||
attention_dropout=self.attention_dropout,
|
||||
initializer_range=self.initializer_range,
|
||||
)
|
||||
text_encoder_model = ClvpEncoder(config=text_config)
|
||||
text_encoder_model.to(torch_device)
|
||||
text_encoder_model.eval()
|
||||
with torch.no_grad():
|
||||
result = text_encoder_model(input_ids, attention_mask=input_mask)
|
||||
result = text_encoder_model(input_ids)
|
||||
self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size))
|
||||
self.parent.assertEqual(result[0].shape, (self.batch_size, self.projection_dim))
|
||||
|
||||
# now check with speech config
|
||||
speech_encoder_model = ClvpEncoder(config=speech_config)
|
||||
speech_encoder_model.to(torch_device)
|
||||
speech_encoder_model.eval()
|
||||
with torch.no_grad():
|
||||
result = speech_encoder_model(input_ids, attention_mask=input_mask)
|
||||
result = speech_encoder_model(input_ids)
|
||||
self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size))
|
||||
self.parent.assertEqual(result[0].shape, (self.batch_size, self.projection_dim))
|
||||
|
||||
|
||||
@require_torch
|
||||
class ClvpEncoderTest(ModelTesterMixin, unittest.TestCase):
|
||||
all_model_classes = (ClvpEncoder,) if is_torch_available() else ()
|
||||
test_pruning = False
|
||||
test_head_masking = False
|
||||
test_torchscript = False
|
||||
|
||||
def setUp(self):
|
||||
self.model_tester = ClvpEncoderTester(self)
|
||||
self.encoder_config_tester = ConfigTester(self, config_class=ClvpEncoderConfig, hidden_size=32)
|
||||
|
||||
def tearDown(self):
|
||||
super().tearDown()
|
||||
# clean up as much GPU memory occupied by PyTorch as possible
|
||||
gc.collect()
|
||||
torch.cuda.empty_cache()
|
||||
|
||||
def test_config(self):
|
||||
self.encoder_config_tester.run_common_tests()
|
||||
|
||||
def test_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_model(*config_and_inputs)
|
||||
|
||||
@unittest.skip(reason="ClvpEncoder does not output loss")
|
||||
def test_training(self):
|
||||
pass
|
||||
|
||||
@unittest.skip(reason="ClvpEncoder does not output loss")
|
||||
def test_training_gradient_checkpointing(self):
|
||||
pass
|
||||
|
||||
|
||||
class ClvpDecoderTester:
|
||||
def __init__(
|
||||
self,
|
||||
parent,
|
||||
batch_size=2,
|
||||
seq_length=3,
|
||||
is_training=False,
|
||||
vocab_size=300,
|
||||
max_position_embeddings=256,
|
||||
max_text_tokens=256,
|
||||
use_input_mask=True,
|
||||
hidden_size=128,
|
||||
num_hidden_layers=2,
|
||||
num_attention_heads=2,
|
||||
bos_token_id=97,
|
||||
eos_token_id=98,
|
||||
relative_attention_num_buckets=4,
|
||||
relative_attention_max_distance=16,
|
||||
):
|
||||
self.parent = parent
|
||||
self.batch_size = batch_size
|
||||
self.seq_length = seq_length
|
||||
self.is_training = is_training
|
||||
self.vocab_size = vocab_size
|
||||
self.max_position_embeddings = max_position_embeddings
|
||||
self.max_text_tokens = max_text_tokens
|
||||
self.use_input_mask = use_input_mask
|
||||
self.hidden_size = hidden_size
|
||||
self.num_attention_heads = num_attention_heads
|
||||
self.num_hidden_layers = num_hidden_layers
|
||||
self.bos_token_id = bos_token_id
|
||||
self.eos_token_id = eos_token_id
|
||||
self.relative_attention_num_buckets = relative_attention_num_buckets
|
||||
self.relative_attention_max_distance = relative_attention_max_distance
|
||||
|
||||
def get_config(self):
|
||||
decoder_config = ClvpDecoderConfig(
|
||||
vocab_size=self.vocab_size,
|
||||
max_position_embeddings=self.max_position_embeddings,
|
||||
max_text_tokens=self.max_text_tokens,
|
||||
hidden_size=self.hidden_size,
|
||||
num_hidden_layers=self.num_hidden_layers,
|
||||
num_attention_heads=self.num_attention_heads,
|
||||
bos_token_id=self.bos_token_id,
|
||||
eos_token_id=self.eos_token_id,
|
||||
relative_attention_num_buckets=self.relative_attention_num_buckets,
|
||||
relative_attention_max_distance=self.relative_attention_max_distance,
|
||||
)
|
||||
|
||||
return decoder_config
|
||||
|
||||
def prepare_config_and_inputs(self):
|
||||
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
|
||||
|
||||
input_mask = None
|
||||
if self.use_input_mask:
|
||||
input_mask = random_attention_mask([self.batch_size, self.seq_length])
|
||||
|
||||
if input_mask is not None:
|
||||
batch_size, seq_length = input_mask.shape
|
||||
rnd_start_indices = np.random.randint(1, seq_length - 1, size=(batch_size,))
|
||||
for batch_idx, start_index in enumerate(rnd_start_indices):
|
||||
input_mask[batch_idx, :start_index] = 1
|
||||
input_mask[batch_idx, start_index:] = 0
|
||||
|
||||
decoder_config = self.get_config()
|
||||
|
||||
return decoder_config, input_ids, input_mask
|
||||
|
||||
def create_and_check_model(self, config, input_ids, attention_mask):
|
||||
model = ClvpForCausalLM(config).to(torch_device).eval()
|
||||
with torch.no_grad():
|
||||
result = model(input_ids=input_ids, attention_mask=attention_mask)
|
||||
|
||||
self.parent.assertEqual(result[0].shape, (self.batch_size, self.seq_length, self.vocab_size))
|
||||
|
||||
def prepare_config_and_inputs_for_common(self):
|
||||
config_and_inputs = self.prepare_config_and_inputs()
|
||||
config, input_ids, attention_mask = config_and_inputs
|
||||
inputs_dict = {
|
||||
"input_ids": input_ids.to(torch_device),
|
||||
"attention_mask": attention_mask.to(torch_device),
|
||||
}
|
||||
return config, inputs_dict
|
||||
|
||||
|
||||
@require_torch
|
||||
class ClvpDecoderTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
|
||||
all_model_classes = (ClvpModel, ClvpForCausalLM) if is_torch_available() else ()
|
||||
all_generative_model_classes = (ClvpForCausalLM,) if is_torch_available() else ()
|
||||
|
||||
test_pruning = False
|
||||
|
||||
def setUp(self):
|
||||
self.model_tester = ClvpDecoderTester(self)
|
||||
self.decoder_config_tester = ConfigTester(self, config_class=ClvpDecoderConfig, hidden_size=32)
|
||||
|
||||
def tearDown(self):
|
||||
super().tearDown()
|
||||
# clean up as much GPU memory occupied by PyTorch as possible
|
||||
gc.collect()
|
||||
torch.cuda.empty_cache()
|
||||
|
||||
def test_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_model(*config_and_inputs)
|
||||
|
||||
def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
|
||||
if return_labels and model_class == ClvpForCausalLM:
|
||||
inputs_dict["labels"] = torch.zeros(
|
||||
[self.model_tester.batch_size, self.model_tester.seq_length], device=torch_device
|
||||
).long()
|
||||
|
||||
return inputs_dict
|
||||
|
||||
def test_training(self):
|
||||
# we will only test the ClvpForCausalLM since it outputs loss
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
config.return_dict = True
|
||||
|
||||
model = ClvpForCausalLM(config)
|
||||
model.to(torch_device)
|
||||
model.train()
|
||||
inputs = self._prepare_for_class(inputs_dict, ClvpForCausalLM, return_labels=True)
|
||||
loss = model(**inputs).loss
|
||||
loss.backward()
|
||||
|
||||
def test_training_gradient_checkpointing(self):
|
||||
# we will only test the ClvpForCausalLM since it outputs loss
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
config.use_cache = False
|
||||
config.return_dict = True
|
||||
|
||||
model = ClvpForCausalLM(config)
|
||||
model.to(torch_device)
|
||||
model.gradient_checkpointing_enable()
|
||||
model.train()
|
||||
inputs = self._prepare_for_class(inputs_dict, ClvpForCausalLM, return_labels=True)
|
||||
|
||||
loss = model(**inputs).loss
|
||||
loss.backward()
|
||||
|
||||
|
||||
class ClvpModelForConditionalGenerationTester:
|
||||
def __init__(self, parent, is_training=False):
|
||||
self.parent = parent
|
||||
self.clvp_encoder_tester = ClvpEncoderTester(parent)
|
||||
self.is_training = is_training
|
||||
|
||||
def get_config(self):
|
||||
decoder_config = ClvpDecoderConfig(
|
||||
vocab_size=50,
|
||||
max_position_embeddings=30,
|
||||
max_text_tokens=30,
|
||||
hidden_size=128,
|
||||
num_hidden_layers=1,
|
||||
num_attention_heads=2,
|
||||
bos_token_id=97,
|
||||
eos_token_id=98,
|
||||
relative_attention_num_buckets=4,
|
||||
relative_attention_max_distance=16,
|
||||
)
|
||||
text_config = self.clvp_encoder_tester.get_config()
|
||||
speech_config = self.clvp_encoder_tester.get_config()
|
||||
speech_config.vocab_size = 300
|
||||
|
||||
return ClvpConfig.from_sub_model_configs(
|
||||
text_config,
|
||||
speech_config,
|
||||
decoder_config,
|
||||
projection_dim=16,
|
||||
)
|
||||
|
||||
def prepare_config_and_inputs(self):
|
||||
_, input_ids, attention_mask = self.clvp_encoder_tester.prepare_config_and_inputs()
|
||||
|
||||
ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
||||
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
|
||||
_, audio, sr = ds.sort("id").select(range(1))[:1]["audio"][0].values()
|
||||
|
||||
feature_extractor = ClvpFeatureExtractor()
|
||||
input_features = feature_extractor(raw_speech=audio, sampling_rate=sr, return_tensors="pt")[
|
||||
"input_features"
|
||||
].to(torch_device)
|
||||
|
||||
config = self.get_config()
|
||||
|
||||
return config, input_ids, attention_mask, input_features
|
||||
|
||||
def create_and_check_model(self, config, input_ids, attention_mask, input_features):
|
||||
model = ClvpModelForConditionalGeneration(config).to(torch_device).eval()
|
||||
with torch.no_grad():
|
||||
result = model(input_ids=input_ids, input_features=input_features, attention_mask=attention_mask)
|
||||
|
||||
self.parent.assertEqual(result.logits_per_speech.shape, (2, self.clvp_encoder_tester.batch_size))
|
||||
self.parent.assertEqual(result.logits_per_text.shape, (self.clvp_encoder_tester.batch_size, 2))
|
||||
|
||||
def prepare_config_and_inputs_for_common(self):
|
||||
config_and_inputs = self.prepare_config_and_inputs()
|
||||
config, input_ids, attention_mask, input_features = config_and_inputs
|
||||
inputs_dict = {
|
||||
"input_ids": input_ids.to(torch_device),
|
||||
"attention_mask": attention_mask.to(torch_device),
|
||||
"input_features": input_features.to(torch_device),
|
||||
"return_loss": False,
|
||||
}
|
||||
return config, inputs_dict
|
||||
|
||||
|
||||
@require_torch
|
||||
class ClvpModelForConditionalGenerationTest(ModelTesterMixin, unittest.TestCase):
|
||||
all_model_classes = (ClvpModelForConditionalGeneration,) if is_torch_available() else ()
|
||||
|
||||
test_head_masking = False
|
||||
test_pruning = False
|
||||
test_resize_embeddings = False
|
||||
test_attention_outputs = False
|
||||
test_torchscript = False
|
||||
|
||||
def setUp(self):
|
||||
self.model_tester = ClvpModelForConditionalGenerationTester(self)
|
||||
self.clvp_config_tester = ConfigTester(self, config_class=ClvpConfig, hidden_size=32)
|
||||
|
||||
def tearDown(self):
|
||||
super().tearDown()
|
||||
# clean up as much GPU memory occupied by PyTorch as possible
|
||||
gc.collect()
|
||||
torch.cuda.empty_cache()
|
||||
|
||||
def test_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_model(*config_and_inputs)
|
||||
|
||||
def test_hidden_states_output(self):
|
||||
def check_hidden_states_output(inputs_dict, config, model_class):
|
||||
model = model_class(config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
|
||||
with torch.no_grad():
|
||||
outputs = model(**self._prepare_for_class(inputs_dict, model_class))
|
||||
|
||||
# check for decoder model, text encoder model and speech encoder model hidden states
|
||||
decoder_hidden_states = outputs.decoder_hidden_states
|
||||
text_encoder_hidden_states = outputs.text_encoder_hidden_states
|
||||
speech_encoder_hidden_states = outputs.speech_encoder_hidden_states
|
||||
|
||||
# check length of the hidden states
|
||||
expected_decoder_num_layers = config.decoder_config.num_hidden_layers + 1
|
||||
self.assertEqual(len(decoder_hidden_states), expected_decoder_num_layers)
|
||||
|
||||
expected_text_encoder_num_layers = config.text_config.num_hidden_layers + 1
self.assertEqual(len(text_encoder_hidden_states), expected_text_encoder_num_layers)
|
||||
|
||||
expected_speech_encoder_num_layers = config.speech_config.num_hidden_layers + 1
self.assertEqual(len(speech_encoder_hidden_states), expected_speech_encoder_num_layers)
|
||||
|
||||
# check shapes of each hidden state
|
||||
|
||||
# for the decoder model we will only test the dimension because the ClvpConditioningEncoder could increase
|
||||
# the sequence lengths.
|
||||
self.assertEqual(decoder_hidden_states[0].shape[-1], config.decoder_config.hidden_size)
|
||||
|
||||
# the testing for text encoder stays standard because we just pass the text tokens here.
|
||||
self.assertListEqual(
|
||||
list(text_encoder_hidden_states[0].shape[-2:]),
|
||||
[self.model_tester.clvp_encoder_tester.seq_length, config.text_config.hidden_size],
|
||||
)
|
||||
|
||||
# for the speech encoder model we will only test the dimension because the fix_decoder_outputs method could increase
# the sequence lengths by adding `decoder_fixing_codes` tokens at the end.
|
||||
self.assertEqual(speech_encoder_hidden_states[0].shape[-1], config.speech_config.hidden_size)
|
||||
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
for model_class in self.all_model_classes:
|
||||
inputs_dict["output_hidden_states"] = True
|
||||
check_hidden_states_output(inputs_dict, config, model_class)
|
||||
|
||||
# check that output_hidden_states also work using config
|
||||
del inputs_dict["output_hidden_states"]
|
||||
config.output_hidden_states = True
|
||||
|
||||
check_hidden_states_output(inputs_dict, config, model_class)
|
||||
|
||||
@unittest.skip(reason="Retain_grad is tested in individual model tests")
|
||||
def test_retain_grad_hidden_states_attentions(self):
|
||||
pass
|
||||
|
||||
@unittest.skip(reason="ClvpModelForConditionalGeneration does not have get_input_embeddings")
|
||||
def test_inputs_embeds(self):
|
||||
pass
|
||||
|
||||
@unittest.skip(reason="ClvpModelForConditionalGeneration does not have get_input_embeddings")
|
||||
def test_model_common_attributes(self):
|
||||
pass
|
||||
|
||||
# override as the `logit_scale` parameter initialization is different for Clvp
|
||||
def test_initialization(self):
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
configs_no_init = _config_zero_init(config)
|
||||
for model_class in self.all_model_classes:
|
||||
model = model_class(config=configs_no_init)
|
||||
for name, param in model.named_parameters():
|
||||
if param.requires_grad:
|
||||
# check if `logit_scale` is initialized as per the original implementation
|
||||
if name == "logit_scale":
|
||||
expected_value = np.log(1 / 0.07)
|
||||
returned_value = param.data.item()
|
||||
|
||||
self.assertAlmostEqual(
|
||||
returned_value,
|
||||
expected_value,
|
||||
delta=1e-3,
|
||||
msg=f"Parameter {name} of model {model_class} seems not properly initialized",
|
||||
)
|
||||
else:
|
||||
expected_range = [0.0, 1.0]
|
||||
returned_range = ((param.data.mean() * 1e9).round() / 1e9).item()
|
||||
|
||||
self.assertIn(
|
||||
returned_range,
|
||||
expected_range,
|
||||
msg=f"Parameter {name} of model {model_class} seems not properly initialized",
|
||||
)
|
||||
|
||||
def test_load_speech_text_decoder_config(self):
|
||||
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
|
||||
|
||||
# Save ClvpConfig and check if we can load ClvpEncoderConfig from it
|
||||
with tempfile.TemporaryDirectory() as tmp_dir_name:
|
||||
config.save_pretrained(tmp_dir_name)
|
||||
encoder_config = ClvpEncoderConfig.from_pretrained(tmp_dir_name)
|
||||
self.assertDictEqual(config.text_config.to_dict(), encoder_config.to_dict())
|
||||
|
||||
# Save ClvpConfig and check if we can load ClvpDecoderConfig from it
|
||||
with tempfile.TemporaryDirectory() as tmp_dir_name:
|
||||
config.save_pretrained(tmp_dir_name)
|
||||
decoder_config = ClvpDecoderConfig.from_pretrained(tmp_dir_name)
|
||||
self.assertDictEqual(config.decoder_config.to_dict(), decoder_config.to_dict())
|
||||
|
||||
@slow
|
||||
def test_model_from_pretrained(self):
|
||||
for model_name in CLVP_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
|
||||
model = ClvpModelForConditionalGeneration.from_pretrained(model_name)
|
||||
self.assertIsNotNone(model)
|
||||
|
||||
|
||||
# Since Clvp has a lot of different models connected with each other, it's better to test each of them individually along
# with a test_full_model_integration. If the model breaks in the future, this makes it easier to identify the broken part.
|
||||
|
||||
|
||||
@slow
|
||||
@require_torch
|
||||
class ClvpIntegrationTest(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.text = "This is an example text."
|
||||
ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
|
||||
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050))
|
||||
_, self.speech_samples, self.sr = ds.sort("id").select(range(1))[:1]["audio"][0].values()
|
||||
|
||||
self.model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev").to(torch_device)
|
||||
self.model.eval()
|
||||
tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")
|
||||
feature_extractor = ClvpFeatureExtractor.from_pretrained("susnato/clvp_dev")
|
||||
|
||||
tokenizer_output = tokenizer(self.text, return_tensors="pt")
|
||||
self.text_tokens = tokenizer_output["input_ids"].to(torch_device)
|
||||
self.input_features = feature_extractor(
|
||||
raw_speech=self.speech_samples, sampling_rate=self.sr, return_tensors="pt"
|
||||
)["input_features"].to(torch_device)
|
||||
|
||||
def tearDown(self):
|
||||
super().tearDown()
|
||||
# clean up as much GPU memory occupied by PyTorch as possible
|
||||
gc.collect()
|
||||
torch.cuda.empty_cache()
|
||||
|
||||
def test_conditional_encoder(self):
|
||||
with torch.no_grad():
|
||||
conditioning_encoder_outputs = self.model.conditioning_encoder(
|
||||
input_features=self.input_features, input_ids=self.text_tokens
|
||||
).to("cpu")
|
||||
|
||||
self.assertEqual(
|
||||
conditioning_encoder_outputs.shape,
|
||||
torch.Size((self.input_features.shape[0], 18, self.model.config.decoder_config.hidden_size)),
|
||||
)
|
||||
|
||||
EXPECTED_OUTPUTS = torch.tensor(
|
||||
[[-0.8582, 0.5228, 1.9944], [-0.0465, -1.1017, -0.0093], [-0.0466, -0.6030, -0.1280]]
|
||||
)
|
||||
|
||||
self.assertTrue(torch.allclose(conditioning_encoder_outputs[0, :3, :3], EXPECTED_OUTPUTS, atol=1e-4))
|
||||
|
||||
def test_decoder_model_generate(self):
|
||||
autoregressive_model_output = self.model.speech_decoder_model.generate(input_ids=self.text_tokens).cpu()
|
||||
|
||||
EXPECTED_OUTPUTS = torch.tensor([[147, 2, 54, 2, 43, 2, 169, 122, 29, 64, 2, 136, 37, 33, 9, 8193]])
|
||||
|
||||
self.assertTrue(torch.allclose(autoregressive_model_output, EXPECTED_OUTPUTS))
|
||||
|
||||
def test_text_and_speech_encoder_models(self):
|
||||
# check for text embeds
|
||||
text_embeds = self.model.text_encoder_model(input_ids=self.text_tokens, return_dict=True)[0].cpu()
|
||||
|
||||
# fmt: off
|
||||
EXPECTED_TEXT_EMBEDS = torch.tensor(
|
||||
[ 1.8060e+00, -2.7928e+00, 3.2021e+00, -1.5673e+00, 2.3284e+00, -3.2065e+00, -1.3368e+00, 2.2322e+00,
|
||||
-1.7667e+00, 4.1505e-01, 2.4119e+00, -5.8133e-03, -4.6367e+00, 1.6450e-01, 6.7459e+00, 6.6292e+00,
|
||||
1.1046e+00, 3.6196e+00, -1.0496e+01, 5.4924e+00
|
||||
]
|
||||
)
|
||||
# fmt: on
|
||||
|
||||
self.assertTrue(torch.allclose(text_embeds[0, :20], EXPECTED_TEXT_EMBEDS, atol=1e-4))
|
||||
|
||||
# check for speech embeds
|
||||
speech_embeds = self.model.speech_encoder_model(input_ids=self.text_tokens, return_dict=True)[0].cpu()
|
||||
|
||||
# fmt: off
|
||||
EXPECTED_SPEECH_EMBEDS = torch.tensor(
|
||||
[ 4.6143, -5.5784, 0.8983, -3.9665, -0.6714, -1.0665, -1.1277, 1.5619, 2.6322, -7.2008, -2.4932, 0.3265,
|
||||
-1.4738, 0.1425, 5.0825, 4.1760, -5.4708, 2.1935, -6.0044, 3.9540
|
||||
]
|
||||
)
|
||||
# fmt: on
|
||||
|
||||
self.assertTrue(torch.allclose(speech_embeds[0, :20], EXPECTED_SPEECH_EMBEDS, atol=1e-4))
|
||||
|
||||
def test_full_model_integration(self):
|
||||
full_model_output = self.model.generate(
|
||||
input_ids=self.text_tokens,
|
||||
input_features=self.input_features,
|
||||
do_sample=False,
|
||||
num_beams=4,
|
||||
num_return_sequences=4,
|
||||
max_new_tokens=10,
|
||||
).speech_ids.cpu()
|
||||
|
||||
EXPECTED_OUTPUTS = torch.tensor([[1953, 1080, 612], [1953, 1953, 612], [1953, 612, 716]])
|
||||
|
||||
self.assertTrue(torch.allclose(full_model_output[-3:, -3:], EXPECTED_OUTPUTS))

tests/models/clvp/test_processor_clvp.py (new file, 136 lines)
@@ -0,0 +1,136 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import gc
import shutil
import tempfile
import unittest

from transformers import ClvpFeatureExtractor, ClvpProcessor, ClvpTokenizer
from transformers.testing_utils import require_torch

from .test_feature_extraction_clvp import floats_list


@require_torch
class ClvpProcessorTest(unittest.TestCase):
    def setUp(self):
        self.checkpoint = "susnato/clvp_dev"
        self.tmpdirname = tempfile.mkdtemp()

    def tearDown(self):
        super().tearDown()
        shutil.rmtree(self.tmpdirname)
        gc.collect()

    # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.get_tokenizer with Whisper->Clvp
    def get_tokenizer(self, **kwargs):
        return ClvpTokenizer.from_pretrained(self.checkpoint, **kwargs)

    # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.get_feature_extractor with Whisper->Clvp
    def get_feature_extractor(self, **kwargs):
        return ClvpFeatureExtractor.from_pretrained(self.checkpoint, **kwargs)

    # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_save_load_pretrained_default with Whisper->Clvp
    def test_save_load_pretrained_default(self):
        tokenizer = self.get_tokenizer()
        feature_extractor = self.get_feature_extractor()

        processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        processor.save_pretrained(self.tmpdirname)
        processor = ClvpProcessor.from_pretrained(self.tmpdirname)

        self.assertEqual(processor.tokenizer.get_vocab(), tokenizer.get_vocab())
        self.assertIsInstance(processor.tokenizer, ClvpTokenizer)

        self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor.to_json_string())
        self.assertIsInstance(processor.feature_extractor, ClvpFeatureExtractor)

    # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_feature_extractor with Whisper->Clvp,processor(raw_speech->processor(raw_speech=raw_speech
    def test_feature_extractor(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        raw_speech = floats_list((3, 1000))

        input_feat_extract = feature_extractor(raw_speech, return_tensors="np")
        input_processor = processor(raw_speech=raw_speech, return_tensors="np")

        for key in input_feat_extract.keys():
            self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2)

    # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_tokenizer with Whisper->Clvp
    def test_tokenizer(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        input_str = "This is a test string"

        encoded_processor = processor(text=input_str)

        encoded_tok = tokenizer(input_str)

        for key in encoded_tok.keys():
            self.assertListEqual(encoded_tok[key], encoded_processor[key])

    # Copied from transformers.tests.models.whisper.test_processor_whisper.WhisperProcessorTest.test_tokenizer_decode with Whisper->Clvp
    def test_tokenizer_decode(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]]

        decoded_processor = processor.batch_decode(predicted_ids)
        decoded_tok = tokenizer.batch_decode(predicted_ids)

        self.assertListEqual(decoded_tok, decoded_processor)

    def test_save_load_pretrained_additional_features(self):
        processor = ClvpProcessor(tokenizer=self.get_tokenizer(), feature_extractor=self.get_feature_extractor())
        processor.save_pretrained(self.tmpdirname)

        tokenizer_add_kwargs = self.get_tokenizer(pad_token="(PAD)")
        feature_extractor_add_kwargs = self.get_feature_extractor(sampling_rate=16000)

        processor = ClvpProcessor.from_pretrained(
            self.tmpdirname,
            pad_token="(PAD)",
            sampling_rate=16000,
        )

        self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab())
        self.assertIsInstance(processor.tokenizer, ClvpTokenizer)

        self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor_add_kwargs.to_json_string())
        self.assertIsInstance(processor.feature_extractor, ClvpFeatureExtractor)

    def test_model_input_names(self):
        feature_extractor = self.get_feature_extractor()
        tokenizer = self.get_tokenizer()

        processor = ClvpProcessor(tokenizer=tokenizer, feature_extractor=feature_extractor)

        self.assertListEqual(
            sorted(processor.model_input_names),
            sorted(set(feature_extractor.model_input_names + tokenizer.model_input_names)),
            msg="`processor` and `feature_extractor` model input names do not match",
        )

tests/models/clvp/test_tokenization_clvp.py (new file, 312 lines)
@@ -0,0 +1,312 @@
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import json
|
||||
import os
|
||||
import unittest
|
||||
from typing import List
|
||||
|
||||
from transformers import ClvpTokenizer
|
||||
|
||||
from ...test_tokenization_common import TokenizerTesterMixin, slow
|
||||
|
||||
|
||||
class ClvpTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
|
||||
tokenizer_class = ClvpTokenizer
|
||||
test_rust_tokenizer = False
|
||||
from_pretrained_kwargs = {"add_prefix_space": True}
|
||||
test_seq2seq = False
|
||||
test_sentencepiece_ignore_case = True
|
||||
|
||||
def setUp(self):
|
||||
super().setUp()
|
||||
|
||||
# Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt
|
||||
vocab = [
|
||||
"l",
|
||||
"o",
|
||||
"w",
|
||||
"e",
|
||||
"r",
|
||||
"s",
|
||||
"t",
|
||||
"i",
|
||||
"d",
|
||||
"n",
|
||||
"\u0120",
|
||||
"\u0120l",
|
||||
"\u0120n",
|
||||
"\u0120lo",
|
||||
"\u0120low",
|
||||
"er",
|
||||
"\u0120lowest",
|
||||
"\u0120newer",
|
||||
"\u0120wider",
|
||||
"<unk>",
|
||||
"<|endoftext|>",
|
||||
"[SPACE]",
|
||||
]
|
||||
vocab_tokens = dict(zip(vocab, range(len(vocab))))
|
||||
merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""]
|
||||
self.special_tokens_map = {"unk_token": "<unk>"}
|
||||
|
||||
self.vocab_file = os.path.join(self.tmpdirname, "vocab.json")
|
||||
self.merges_file = os.path.join(self.tmpdirname, "merges.txt")
|
||||
with open(self.vocab_file, "w", encoding="utf-8") as fp:
|
||||
fp.write(json.dumps(vocab_tokens) + "\n")
|
||||
with open(self.merges_file, "w", encoding="utf-8") as fp:
|
||||
fp.write("\n".join(merges))
|
||||
|
||||
# Copied from transformers.tests.models.gpt2.test_tokenization_gpt2.GPT2TokenizationTest.get_tokenizer with GPT2->Clvp
|
||||
def get_tokenizer(self, **kwargs):
|
||||
kwargs.update(self.special_tokens_map)
|
||||
return ClvpTokenizer.from_pretrained(self.tmpdirname, **kwargs)
|
||||
|
||||
# Copied from transformers.tests.models.gpt2.test_tokenization_gpt2.GPT2TokenizationTest.get_input_output_texts
|
||||
def get_input_output_texts(self, tokenizer):
|
||||
input_text = "lower newer"
|
||||
output_text = "lower newer"
|
||||
return input_text, output_text
|
||||
|
||||
# Copied from transformers.tests.models.layoutxlm.test_tokenization_layoutxlm.LayoutXLMTokenizationTest.test_add_special_tokens
|
||||
def test_add_special_tokens(self):
|
||||
tokenizers: List[ClvpTokenizer] = self.get_tokenizers(do_lower_case=False)
|
||||
for tokenizer in tokenizers:
|
||||
with self.subTest(f"{tokenizer.__class__.__name__}"):
|
||||
special_token = "[SPECIAL_TOKEN]"
|
||||
special_token_box = [1000, 1000, 1000, 1000]
|
||||
|
||||
tokenizer.add_special_tokens({"cls_token": special_token})
|
||||
encoded_special_token = tokenizer.encode(
|
||||
[special_token], boxes=[special_token_box], add_special_tokens=False
|
||||
)
|
||||
self.assertEqual(len(encoded_special_token), 1)
|
||||
|
||||
decoded = tokenizer.decode(encoded_special_token, skip_special_tokens=True)
|
||||
self.assertTrue(special_token not in decoded)
|
||||
|
||||
# Copied from transformers.tests.models.gpt2.test_tokenization_gpt2.GPT2TokenizationTest.test_rust_and_python_full_tokenizers
|
||||
def test_rust_and_python_full_tokenizers(self):
|
||||
if not self.test_rust_tokenizer:
|
||||
return
|
||||
|
||||
tokenizer = self.get_tokenizer()
|
||||
rust_tokenizer = self.get_rust_tokenizer(add_prefix_space=True)
|
||||
|
||||
sequence = "lower newer"
|
||||
|
||||
# Testing tokenization
|
||||
tokens = tokenizer.tokenize(sequence, add_prefix_space=True)
|
||||
rust_tokens = rust_tokenizer.tokenize(sequence)
|
||||
self.assertListEqual(tokens, rust_tokens)
|
||||
|
||||
# Testing conversion to ids without special tokens
|
||||
ids = tokenizer.encode(sequence, add_special_tokens=False, add_prefix_space=True)
|
||||
rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=False)
|
||||
self.assertListEqual(ids, rust_ids)
|
||||
|
||||
# Testing conversion to ids with special tokens
|
||||
rust_tokenizer = self.get_rust_tokenizer(add_prefix_space=True)
|
||||
ids = tokenizer.encode(sequence, add_prefix_space=True)
|
||||
rust_ids = rust_tokenizer.encode(sequence)
|
||||
self.assertListEqual(ids, rust_ids)
|
||||
|
||||
# Testing the unknown token
|
||||
input_tokens = tokens + [rust_tokenizer.unk_token]
|
||||
input_bpe_tokens = [14, 15, 10, 9, 3, 2, 15, 19]
|
||||
self.assertListEqual(rust_tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens)
|
||||
|
||||
# Copied from transformers.tests.models.gpt2.test_tokenization_gpt2.GPT2TokenizationTest.test_padding
|
||||
def test_padding(self, max_length=15):
|
||||
for tokenizer, pretrained_name, kwargs in self.tokenizers_list:
|
||||
with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"):
|
||||
tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs)
|
||||
|
||||
# Simple input
|
||||
s = "This is a simple input"
|
||||
s2 = ["This is a simple input 1", "This is a simple input 2"]
|
||||
p = ("This is a simple input", "This is a pair")
|
||||
p2 = [
|
||||
("This is a simple input 1", "This is a simple input 2"),
|
||||
("This is a simple pair 1", "This is a simple pair 2"),
|
||||
]
|
||||
|
||||
# Simple input tests
|
||||
self.assertRaises(ValueError, tokenizer_r.encode, s, max_length=max_length, padding="max_length")
|
||||
|
||||
# Simple input
|
||||
self.assertRaises(ValueError, tokenizer_r.encode_plus, s, max_length=max_length, padding="max_length")
|
||||
|
||||
# Simple input
|
||||
self.assertRaises(
|
||||
ValueError,
|
||||
tokenizer_r.batch_encode_plus,
|
||||
s2,
|
||||
max_length=max_length,
|
||||
padding="max_length",
|
||||
)
|
||||
|
||||
# Pair input
|
||||
self.assertRaises(ValueError, tokenizer_r.encode, p, max_length=max_length, padding="max_length")
|
||||
|
||||
# Pair input
|
||||
self.assertRaises(ValueError, tokenizer_r.encode_plus, p, max_length=max_length, padding="max_length")
|
||||
|
||||
# Pair input
|
||||
self.assertRaises(
|
||||
ValueError,
|
||||
tokenizer_r.batch_encode_plus,
|
||||
p2,
|
||||
max_length=max_length,
|
||||
padding="max_length",
|
||||
)
|
||||
|
||||
# Copied from transformers.tests.models.gpt2.test_tokenization_gpt2.GPT2TokenizationTest.test_padding_if_pad_token_set_slow
|
||||
def test_padding_if_pad_token_set_slow(self):
|
||||
tokenizer = ClvpTokenizer.from_pretrained(self.tmpdirname, pad_token="<pad>")
|
||||
|
||||
# Simple input
|
||||
s = "This is a simple input"
|
||||
s2 = ["This is a simple input looooooooong", "This is a simple input"]
|
||||
p = ("This is a simple input", "This is a pair")
|
||||
p2 = [
|
||||
("This is a simple input loooooong", "This is a simple input"),
|
||||
("This is a simple pair loooooong", "This is a simple pair"),
|
||||
]
|
||||
|
||||
pad_token_id = tokenizer.pad_token_id
|
||||
|
||||
out_s = tokenizer(s, padding="max_length", max_length=30, return_tensors="np")
|
||||
out_s2 = tokenizer(s2, padding=True, truncate=True, return_tensors="np")
|
||||
out_p = tokenizer(*p, padding="max_length", max_length=60, return_tensors="np")
|
||||
out_p2 = tokenizer(p2, padding=True, truncate=True, return_tensors="np")
|
||||
|
||||
# s
|
||||
# test single string max_length padding
|
||||
self.assertEqual(out_s["input_ids"].shape[-1], 30)
|
||||
self.assertTrue(pad_token_id in out_s["input_ids"])
|
||||
self.assertTrue(0 in out_s["attention_mask"])
|
||||
|
||||
# s2
|
||||
# test automatic padding
|
||||
self.assertEqual(out_s2["input_ids"].shape[-1], 33)
|
||||
# long slice doesn't have padding
|
||||
self.assertFalse(pad_token_id in out_s2["input_ids"][0])
|
||||
self.assertFalse(0 in out_s2["attention_mask"][0])
|
||||
# short slice does have padding
|
||||
self.assertTrue(pad_token_id in out_s2["input_ids"][1])
|
||||
self.assertTrue(0 in out_s2["attention_mask"][1])
|
||||
|
||||
# p
|
||||
# test single pair max_length padding
|
||||
self.assertEqual(out_p["input_ids"].shape[-1], 60)
|
||||
self.assertTrue(pad_token_id in out_p["input_ids"])
|
||||
self.assertTrue(0 in out_p["attention_mask"])
|
||||
|
||||
# p2
|
||||
# test automatic padding pair
|
||||
self.assertEqual(out_p2["input_ids"].shape[-1], 52)
|
||||
# long slice pair doesn't have padding
|
||||
self.assertFalse(pad_token_id in out_p2["input_ids"][0])
|
||||
self.assertFalse(0 in out_p2["attention_mask"][0])
|
||||
# short slice pair does have padding
|
||||
self.assertTrue(pad_token_id in out_p2["input_ids"][1])
|
||||
self.assertTrue(0 in out_p2["attention_mask"][1])
|
||||
|
||||
# Copied from transformers.tests.models.gpt2.test_tokenization_gpt2.GPT2TokenizationTest.test_special_tokens_mask_input_pairs_and_bos_token
|
||||
def test_special_tokens_mask_input_pairs_and_bos_token(self):
|
||||
# TODO: change to self.get_tokenizers() when the fast version is implemented
|
||||
tokenizers = [self.get_tokenizer(do_lower_case=False, add_bos_token=True)]
|
||||
for tokenizer in tokenizers:
|
||||
with self.subTest(f"{tokenizer.__class__.__name__}"):
|
||||
sequence_0 = "Encode this."
|
||||
sequence_1 = "This one too please."
|
||||
encoded_sequence = tokenizer.encode(sequence_0, add_special_tokens=False)
|
||||
encoded_sequence += tokenizer.encode(sequence_1, add_special_tokens=False)
|
||||
encoded_sequence_dict = tokenizer.encode_plus(
|
||||
sequence_0,
|
||||
sequence_1,
|
||||
add_special_tokens=True,
|
||||
return_special_tokens_mask=True,
|
||||
)
|
||||
encoded_sequence_w_special = encoded_sequence_dict["input_ids"]
|
||||
special_tokens_mask = encoded_sequence_dict["special_tokens_mask"]
|
||||
self.assertEqual(len(special_tokens_mask), len(encoded_sequence_w_special))
|
||||
|
||||
filtered_sequence = [
|
||||
(x if not special_tokens_mask[i] else None) for i, x in enumerate(encoded_sequence_w_special)
|
||||
]
|
||||
filtered_sequence = [x for x in filtered_sequence if x is not None]
|
||||
self.assertEqual(encoded_sequence, filtered_sequence)
|
||||
|
||||
def test_token_type_ids(self):
|
||||
tokenizer = self.get_tokenizer()
|
||||
seq_0 = "Test this method."
|
||||
|
||||
# We want sequence 0 and sequence 1 to be tagged with 0 and 1 token_ids
# respectively (regardless of whether the model uses token type ids).
# We use this assumption in the QA pipeline among other places.
|
||||
output = tokenizer(seq_0, return_token_type_ids=True, add_special_tokens=True)
|
||||
self.assertIn(0, output["token_type_ids"])
|
||||
|
||||
def test_full_tokenizer(self):
|
||||
tokenizer = ClvpTokenizer(self.vocab_file, self.merges_file, **self.special_tokens_map)
|
||||
text = "lower newer"
|
||||
bpe_tokens = ["l", "o", "w", "er", "[SPACE]", "n", "e", "w", "er"]
|
||||
tokens = tokenizer.tokenize(text, add_prefix_space=False)
|
||||
self.assertListEqual(tokens, bpe_tokens)
|
||||
|
||||
input_tokens = tokens + [tokenizer.unk_token]
|
||||
input_bpe_tokens = [0, 1, 2, 15, 21, 9, 3, 2, 15, 19]
|
||||
self.assertListEqual(tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens)
|
||||
|
||||
@slow
|
||||
def test_outputs_with_numbers(self):
|
||||
text = "hello and this is an example text and I have $1000. my lucky number is 12345."
|
||||
tokenizer = ClvpTokenizer.from_pretrained("susnato/clvp_dev")
|
||||
|
||||
# fmt: off
|
||||
EXPECTED_OUTPUT = [62, 84, 28, 2, 53, 2,147, 2, 54, 2, 43, 2, 169, 122, 29, 64, 2, 136, 37, 33, 2, 53, 2, 22,
|
||||
2, 148, 2, 110, 2, 40, 206, 53, 2, 134, 84, 59, 32, 9, 2, 125, 2, 25, 34, 197, 38, 2, 27,
|
||||
231, 15, 44, 2, 54, 2, 33, 100, 25, 76, 2, 40, 206, 53, 7, 2, 40, 46, 18, 2, 21, 97, 17,
|
||||
219, 2, 87, 210, 8, 19, 22, 76, 9,
|
||||
]
|
||||
# fmt: on
|
||||
|
||||
self.assertListEqual(tokenizer.encode(text, add_special_tokens=False), EXPECTED_OUTPUT)
|
||||
|
||||
@slow
|
||||
def test_tokenizer_integration(self):
|
||||
sequences = [
|
||||
"Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides "
|
||||
"general-purpose architectures (BERT, RoBERTa, XLM, DistilBert, XLNet...) for Natural "
|
||||
"Language Understanding (NLU) and Natural Language Generation (NLG) with over multiple pretrained "
|
||||
"models and deep interoperability between Jax, PyTorch and TensorFlow.",
|
||||
"BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly "
|
||||
"conditioning on both left and right context in all layers.",
|
||||
"The quick brown fox jumps over the lazy dog.",
|
||||
]
|
||||
|
||||
# fmt: off
|
||||
expected_encoding = {'input_ids': [[144, 43, 32, 87, 26, 173, 2, 5, 87, 26, 44, 70, 2, 209, 27, 2, 55, 2, 29, 38, 51, 31, 71, 8, 144, 43, 32, 87, 26, 173, 2, 53, 2, 29, 38, 51, 31, 71, 8, 29, 46, 144, 137, 49, 8, 15, 44, 33, 6, 2, 187, 35, 83, 61, 2, 20, 50, 44, 56, 8, 29, 121, 139, 66, 2, 59, 71, 60, 18, 16, 33, 34, 175, 2, 5, 15, 44, 33, 7, 2, 89, 15, 44, 33, 14, 7, 2, 37, 25, 26, 7, 2, 17, 54, 78, 25, 15, 44, 33, 7, 2, 37, 25, 111, 33, 9, 9, 9, 6, 2, 87, 2, 27, 48, 121, 56, 2, 25, 43, 20, 34, 14, 112, 2, 97, 234, 63, 53, 52, 2, 5, 27, 25, 34, 6, 2, 53, 2, 27, 48, 121, 56, 2, 25, 43, 20, 34, 14, 112, 2, 20, 50, 44, 158, 2, 5, 27, 25, 20, 6, 2, 103, 2, 253, 2, 26, 167, 78, 29, 64, 2, 29, 46, 144, 137, 49, 2, 115, 126, 25, 32, 2, 53, 2, 126, 18, 29, 2, 41, 114, 161, 44, 109, 151, 240, 2, 67, 33, 100, 50, 2, 23, 14, 37, 7, 2, 29, 38, 51, 31, 71, 2, 53, 2, 33, 50, 32, 57, 19, 25, 69, 9], [ 15, 44, 33, 2, 54, 2, 17, 61, 22, 20, 27, 49, 2, 51, 2, 29, 46, 8, 144, 137, 2, 126, 18, 29, 2, 15, 83, 22, 46, 16, 181, 56, 2, 46, 29, 175, 86, 158, 32, 2, 154, 2, 97, 25, 14, 67, 25, 49, 2, 136, 37, 33, 2, 185, 2, 23, 28, 41, 33, 70, 2, 135, 17, 60, 107, 52, 2, 47, 2, 165, 40, 2, 64, 19, 33, 2, 53, 2, 101, 104, 2, 135, 136, 37, 33, 2, 41, 2, 108, 2, 25, 88, 173, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 42, 2, 194, 91, 24, 2, 243, 190, 2, 182, 37, 2, 23, 231, 29, 32, 2, 253, 2, 42, 2, 25, 14, 39, 38, 2, 134, 20, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], # noqa: E501
|
||||
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], # noqa: E501
|
||||
}
|
||||
# fmt: on
|
||||
|
||||
self.tokenizer_integration_test_util(
|
||||
sequences=sequences, expected_encoding=expected_encoding, model_name="susnato/clvp_dev", padding=True
|
||||
)
|
@@ -207,6 +207,8 @@ IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [
    "CLIPTextModelWithProjection",
    "CLIPVisionModel",
    "CLIPVisionModelWithProjection",
    "ClvpForCausalLM",
    "ClvpModel",
    "GroupViTTextModel",
    "GroupViTVisionModel",
    "TFCLIPTextModel",