Mirror of https://github.com/huggingface/transformers.git (synced 2025-07-31 02:02:21 +06:00)
add GPTSAN model (reopen) (#21291)
* add GPTSAN-Japanese
* add GPTSAN (update for review)
* fix typo in comment text
* fix document and comments
* fix class name GPTSAN->GPTSan
* fix import and test for tokenizer
This commit is contained in: parent c87bbe1ff0, commit f56174ac5b
@ -340,6 +340,7 @@ Current number of checkpoints:
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto (tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
@ -333,6 +333,7 @@ Número actual de puntos de control: ** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
|
||||
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
|
||||
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
|
||||
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
|
||||
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
|
@ -305,6 +305,7 @@ conda install -c huggingface transformers
|
||||
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (ओपनएआई से) साथ में पेपर [लैंग्वेज मॉडल्स अनसुपरवाइज्ड मल्टीटास्क लर्नर्स हैं](https://blog.openai.com/better-language-models/) एलेक रैडफोर्ड*, जेफरी वू*, रेवन चाइल्ड, डेविड लुआन, डारियो एमोडी* द्वारा * और इल्या सुत्सकेवर** ने पोस्ट किया।
|
||||
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (EleutherAI से) साथ वाला पेपर [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) बेन वांग और अरन कोमात्सुजाकी द्वारा।
|
||||
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
|
||||
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
|
||||
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA से) साथ में कागज [GroupViT: टेक्स्ट सुपरविजन से सिमेंटिक सेगमेंटेशन इमर्जेस](https://arxiv.org/abs/2202.11094) जियारुई जू, शालिनी डी मेलो, सिफ़ी लियू, वोनमिन बायन, थॉमस ब्रेउएल, जान कौट्ज़, ज़ियाओलोंग वांग द्वारा।
|
||||
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (फेसबुक से) साथ में पेपर [ह्यूबर्ट: सेल्फ सुपरवाइज्ड स्पीच रिप्रेजेंटेशन लर्निंग बाय मास्क्ड प्रेडिक्शन ऑफ हिडन यूनिट्स](https://arxiv.org/abs/2106.07447) वेई-निंग सू, बेंजामिन बोल्टे, याओ-हंग ह्यूबर्ट त्साई, कुशाल लखोटिया, रुस्लान सालाखुतदीनोव, अब्देलरहमान मोहम्मद द्वारा।
|
||||
|
@ -367,6 +367,7 @@ Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それ
|
||||
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (OpenAI から) Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** から公開された研究論文: [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/)
|
||||
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (EleutherAI から) Ben Wang and Aran Komatsuzaki から公開されたレポジトリー [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/)
|
||||
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (AI-Sweden から) Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren から公開された研究論文: [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf)
|
||||
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) 坂本俊之(tanreinama)からリリースされました.
|
||||
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (Microsoft から) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu から公開された研究論文: [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234).
|
||||
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA から) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang から公開された研究論文: [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094)
|
||||
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook から) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed から公開された研究論文: [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447)
|
||||
|
@ -282,6 +282,7 @@ Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는
|
||||
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (OpenAI 에서) Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 의 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 논문과 함께 발표했습니다.
|
||||
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (AI-Sweden 에서) Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. 의 [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) 논문과 함께 발표했습니다.
|
||||
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu 의 [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) 논문과 함께 발표했습니다.
|
||||
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA 에서) Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 의 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 논문과 함께 발표했습니다.
|
||||
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (Facebook 에서) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 의 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 논문과 함께 발표했습니다.
|
||||
|
@ -306,6 +306,7 @@ conda install -c huggingface transformers
|
||||
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。
|
||||
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随论文 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。
|
||||
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
|
||||
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama).
|
||||
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
|
||||
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。
|
||||
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。
|
||||
|
@ -318,6 +318,7 @@ conda install -c huggingface transformers
|
||||
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
|
||||
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released with the paper [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
|
||||
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/main/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by 坂本俊之(tanreinama).
|
||||
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
|
||||
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
|
||||
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
|
@ -98,6 +98,7 @@ Die Bibliothek enthält derzeit JAX-, PyTorch- und TensorFlow-Implementierungen,
|
||||
1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
|
||||
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
|
||||
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
|
||||
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
|
||||
|
@ -301,6 +301,8 @@
        title: GPT-J
      - local: model_doc/gpt2
        title: GPT2
      - local: model_doc/gptsan-japanese
        title: GPTSAN Japanese
      - local: model_doc/gpt-sw3
        title: GPTSw3
      - local: model_doc/herbert
@ -119,6 +119,7 @@ The documentation is organized into five sections:
|
||||
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
|
||||
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPT-Sw3](model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
|
||||
1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[Graphormer](model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
|
||||
1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
|
||||
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
@ -306,6 +307,7 @@ Flax), PyTorch, and/or TensorFlow.
| GPT NeoX Japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| GPT-J | ❌ | ❌ | ✅ | ✅ | ✅ |
| GPT-Sw3 | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPTSAN-japanese | ✅ | ❌ | ✅ | ❌ | ❌ |
| Graphormer | ❌ | ❌ | ✅ | ❌ | ❌ |
| GroupViT | ❌ | ❌ | ✅ | ✅ | ❌ |
| Hubert | ❌ | ❌ | ✅ | ✅ | ❌ |
docs/source/en/model_doc/gptsan-japanese.mdx (new file, 117 lines)
@ -0,0 +1,117 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# GPTSAN-japanese

## Overview

The GPTSAN-japanese model was released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN) by Toshiyuki Sakamoto (tanreinama).

GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM
in the T5 paper, and it supports both text generation and masked language modeling tasks. These basic tasks can similarly be
fine-tuned for translation or summarization.

### Generation

The `generate()` method can be used to generate text with the GPTSAN-japanese model.

```python
>>> from transformers import AutoModel, AutoTokenizer
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> torch.manual_seed(0)
>>> gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
>>> tokenizer.decode(gen_tok[0])
'織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉'
```

## GPTSAN Features

GPTSAN has some unique features. It has the model structure of a Prefix-LM: it works as a shifted masked language model for prefix input tokens, while un-prefixed inputs behave like a normal generative model.
The Spout vector is a GPTSAN-specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text.
GPTSAN has a sparse feed-forward layer based on Switch Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details.

### Prefix-LM Model

GPTSAN has the structure of the model named Prefix-LM in the `T5` paper (the original GPTSAN repository calls it `hybrid`).
In GPTSAN, the `Prefix` part of the Prefix-LM, that is, the input positions that can be referenced by tokens on both sides, can be specified with any length.
Arbitrary lengths can also be specified differently for each batch.
This length applies to the text entered in `prefix_text` for the tokenizer.
The tokenizer returns the mask of the `Prefix` part of the Prefix-LM as `token_type_ids`.
The model treats the positions where `token_type_ids` is 1 as the `Prefix` part, that is, positions whose input can refer to tokens both before and after them.

Tips:

Specifying the Prefix part is done with a mask passed to self-attention.
When token_type_ids=None or all zero, it is equivalent to a regular causal mask.

For example:
>>> x_token = tokenizer("アイウエ")
input_ids: | SOT | SEG | ア | イ | ウ | エ |
token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
ア | 1 1 1 0 0 0 |
イ | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |

>>> x_token = tokenizer("", prefix_text="アイウエ")
input_ids: | SOT | ア | イ | ウ | エ | SEG |
token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
ア | 1 1 1 1 1 0 |
イ | 1 1 1 1 1 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |

>>> x_token = tokenizer("ウエ", prefix_text="アイ")
input_ids: | SOT | ア | イ | SEG | ウ | エ |
token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
ア | 1 1 1 0 0 0 |
イ | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
ウ | 1 1 1 1 1 0 |
エ | 1 1 1 1 1 1 |
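As a quick check of the masks described above, the short sketch below tokenizes the last prefix/suffix split and inspects the returned `token_type_ids`; it assumes the `Tanrei/GPTSAN-japanese` checkpoint used in the generation example is available.

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> # "アイ" is passed as prefix_text and "ウエ" as the regular input, so the three
>>> # prefix positions (SOT, ア, イ) are marked with 1 in token_type_ids.
>>> x_token = tokenizer("ウエ", prefix_text="アイ", return_tensors="pt")
>>> x_token.token_type_ids  # expected, per the table above: tensor([[1, 1, 1, 0, 0, 0]])
```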
### Spout Vector

A Spout Vector is a special vector for controlling text generation.
This vector is treated as the first embedding in self-attention to bring external attention to the generated tokens.
In the pre-trained model published at `Tanrei/GPTSAN-japanese`, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention.
The Spout Vector projected by the fully connected layers is split and passed to all self-attention layers.
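A minimal sketch of conditioning generation on a Spout Vector follows. It assumes the model's `generate()` accepts a `spout` argument whose size matches the 128-dimensional `d_spout` of `Tanrei/GPTSAN-japanese`; the random vector here is only a placeholder for a learned or class-specific one.

```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()

>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> # Placeholder 128-dimensional spout vector (config.d_spout); during fine-tuning this
>>> # could encode a text class or any other conditioning signal.
>>> spout = torch.rand((1, 128)) * 2 - 1
>>> gen_tok = model.generate(
...     x_tok.input_ids.cuda(),
...     token_type_ids=x_tok.token_type_ids.cuda(),
...     spout=spout.cuda(),
...     max_new_tokens=20,
... )
>>> tokenizer.decode(gen_tok[0])
```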
## GPTSanJapaneseConfig

[[autodoc]] GPTSanJapaneseConfig

## GPTSanJapaneseTokenizer

[[autodoc]] GPTSanJapaneseTokenizer

## GPTSanJapaneseModel

[[autodoc]] GPTSanJapaneseModel

## GPTSanJapaneseForConditionalGeneration

[[autodoc]] GPTSanJapaneseForConditionalGeneration
    - forward
@ -29,7 +29,7 @@ The task illustrated in this tutorial is supported by the following model archit
|
||||
|
||||
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
|
||||
|
||||
[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)
|
||||
[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)
|
||||
|
||||
<!--End of the generated tip-->
|
||||
|
||||
|
@ -26,7 +26,7 @@ The task illustrated in this tutorial is supported by the following model archit
|
||||
|
||||
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
|
||||
|
||||
[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)
|
||||
[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)
|
||||
|
||||
<!--End of the generated tip-->
|
||||
|
||||
|
@ -87,6 +87,7 @@ La biblioteca actualmente contiene implementaciones de JAX, PyTorch y TensorFlow
|
||||
1. **[GPT-2](model_doc/gpt2)** (de OpenAI) publicado con el paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) por Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** y Ilya Sutskever**.
|
||||
1. **[GPT-J](model_doc/gptj)** (de EleutherAI) publicado con el repositorio [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) por Ben Wang y Aran Komatsuzaki.
|
||||
1. **[GPT Neo](model_doc/gpt_neo)** (de EleutherAI) publicado en el paper [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) por Sid Black, Stella Biderman, Leo Gao, Phil Wang y Connor Leahy.
|
||||
1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released with [GPTSAN](https://github.com/tanreinama/GPTSAN) by Toshiyuki Sakamoto (tanreinama).
|
||||
1. **[Hubert](model_doc/hubert)** (de Facebook) publicado con el paper [HuBERT: Self-Supervised Speech Representation Learning por Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) por Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
1. **[I-BERT](model_doc/ibert)** (de Berkeley) publicado con el paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) por Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
|
||||
1. **[ImageGPT](model_doc/imagegpt)** (de OpenAI) publicado con el paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) por Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
|
||||
|
@ -104,6 +104,7 @@ specific language governing permissions and limitations under the License.
|
||||
1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
|
||||
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
|
||||
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
|
||||
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
|
||||
|
@ -101,6 +101,7 @@ Atualmente a biblioteca contém implementações do PyTorch, TensorFlow e JAX, p
|
||||
1. **[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
|
||||
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
|
||||
1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
|
||||
1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
|
||||
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
|
||||
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
|
||||
1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
|
||||
|
@ -294,6 +294,11 @@ _import_structure = {
|
||||
"models.gpt_neox_japanese": ["GPT_NEOX_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP", "GPTNeoXJapaneseConfig"],
|
||||
"models.gpt_sw3": [],
|
||||
"models.gptj": ["GPTJ_PRETRAINED_CONFIG_ARCHIVE_MAP", "GPTJConfig"],
|
||||
"models.gptsan_japanese": [
|
||||
"GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP",
|
||||
"GPTSanJapaneseConfig",
|
||||
"GPTSanJapaneseTokenizer",
|
||||
],
|
||||
"models.graphormer": ["GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "GraphormerConfig"],
|
||||
"models.groupvit": [
|
||||
"GROUPVIT_PRETRAINED_CONFIG_ARCHIVE_MAP",
|
||||
@ -1636,6 +1641,14 @@ else:
|
||||
"GPTJPreTrainedModel",
|
||||
]
|
||||
)
|
||||
_import_structure["models.gptsan_japanese"].extend(
|
||||
[
|
||||
"GPTSAN_JAPANESE_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
"GPTSanJapaneseForConditionalGeneration",
|
||||
"GPTSanJapaneseModel",
|
||||
"GPTSanJapanesePreTrainedModel",
|
||||
]
|
||||
)
|
||||
_import_structure["models.graphormer"].extend(
|
||||
[
|
||||
"GRAPHORMER_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
@ -3844,6 +3857,11 @@ if TYPE_CHECKING:
|
||||
from .models.gpt_neox import GPT_NEOX_PRETRAINED_CONFIG_ARCHIVE_MAP, GPTNeoXConfig
|
||||
from .models.gpt_neox_japanese import GPT_NEOX_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP, GPTNeoXJapaneseConfig
|
||||
from .models.gptj import GPTJ_PRETRAINED_CONFIG_ARCHIVE_MAP, GPTJConfig
|
||||
from .models.gptsan_japanese import (
|
||||
GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
GPTSanJapaneseConfig,
|
||||
GPTSanJapaneseTokenizer,
|
||||
)
|
||||
from .models.graphormer import GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, GraphormerConfig
|
||||
from .models.groupvit import (
|
||||
GROUPVIT_PRETRAINED_CONFIG_ARCHIVE_MAP,
|
||||
@ -4986,6 +5004,12 @@ if TYPE_CHECKING:
|
||||
GPTJModel,
|
||||
GPTJPreTrainedModel,
|
||||
)
|
||||
from .models.gptsan_japanese import (
|
||||
GPTSAN_JAPANESE_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
GPTSanJapaneseForConditionalGeneration,
|
||||
GPTSanJapaneseModel,
|
||||
GPTSanJapanesePreTrainedModel,
|
||||
)
|
||||
from .models.graphormer import (
|
||||
GRAPHORMER_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
GraphormerForGraphClassification,
|
||||
|
@ -286,6 +286,55 @@ class BaseModelOutputWithPastAndCrossAttentions(ModelOutput):
|
||||
cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
|
||||
|
||||
|
||||
@dataclass
|
||||
class MoECausalLMOutputWithPast(ModelOutput):
|
||||
"""
|
||||
Base class for causal language model (or autoregressive) outputs as well as Mixture of Expert's router hidden
|
||||
states terms, to train a MoE model.
|
||||
|
||||
Args:
|
||||
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
|
||||
Language modeling loss (for next-token prediction).
|
||||
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
|
||||
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
|
||||
past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
|
||||
Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
|
||||
`(batch_size, num_heads, sequence_length, embed_size_per_head)`)
|
||||
|
||||
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
|
||||
`past_key_values` input) to speed up sequential decoding.
|
||||
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
|
||||
Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
|
||||
one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
|
||||
|
||||
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
|
||||
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
|
||||
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
|
||||
sequence_length)`.
|
||||
|
||||
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
|
||||
heads.
|
||||
z_loss (`torch.FloatTensor`, *optional*, returned when `labels` is provided):
|
||||
z_loss for the sparse modules.
|
||||
aux_loss (`torch.FloatTensor`, *optional*, returned when `labels` is provided):
|
||||
aux_loss for the sparse modules.
|
||||
router_logits (`tuple(torch.FloatTensor)`, *optional*, returned when `output_router_logits=True` is passed or when `config.add_router_probs=True`):
|
||||
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, sequence_length, num_experts)`.
|
||||
|
||||
Router logits of the encoder model, useful to compute the auxiliary loss and the z_loss for the sparse
|
||||
modules.
|
||||
"""
|
||||
|
||||
loss: Optional[torch.FloatTensor] = None
|
||||
logits: torch.FloatTensor = None
|
||||
past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
|
||||
hidden_states: Optional[Tuple[torch.FloatTensor]] = None
|
||||
attentions: Optional[Tuple[torch.FloatTensor]] = None
|
||||
z_loss: torch.FloatTensor = None
|
||||
aux_loss: torch.FloatTensor = None
|
||||
router_logits: Optional[Tuple[torch.FloatTensor]] = None
|
||||
|
||||
|
||||
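For orientation, here is a hedged sketch of how a training loop might combine the loss terms documented in the class above; the variable names and coefficients are illustrative only, not part of this file.

```python
# Illustrative training-step fragment: the total loss adds the sparse-module
# penalties (z_loss, aux_loss) to the language-modeling loss.
outputs = model(input_ids, labels=labels, output_router_logits=True)
z_loss_coef, aux_loss_coef = 1e-3, 1e-2  # example coefficients
total_loss = outputs.loss + z_loss_coef * outputs.z_loss + aux_loss_coef * outputs.aux_loss
total_loss.backward()
```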
@dataclass
|
||||
class MoEModelOutput(ModelOutput):
|
||||
"""
|
||||
|
@ -84,6 +84,7 @@ from . import (
|
||||
gpt_neox_japanese,
|
||||
gpt_sw3,
|
||||
gptj,
|
||||
gptsan_japanese,
|
||||
graphormer,
|
||||
groupvit,
|
||||
herbert,
|
||||
|
@ -92,6 +92,7 @@ CONFIG_MAPPING_NAMES = OrderedDict(
|
||||
("gpt_neox", "GPTNeoXConfig"),
|
||||
("gpt_neox_japanese", "GPTNeoXJapaneseConfig"),
|
||||
("gptj", "GPTJConfig"),
|
||||
("gptsan-japanese", "GPTSanJapaneseConfig"),
|
||||
("graphormer", "GraphormerConfig"),
|
||||
("groupvit", "GroupViTConfig"),
|
||||
("hubert", "HubertConfig"),
|
||||
@ -263,6 +264,7 @@ CONFIG_ARCHIVE_MAP_MAPPING_NAMES = OrderedDict(
|
||||
("gpt_neox", "GPT_NEOX_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("gpt_neox_japanese", "GPT_NEOX_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("gptj", "GPTJ_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("gptsan-japanese", "GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("graphormer", "GRAPHORMER_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("groupvit", "GROUPVIT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
("hubert", "HUBERT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
|
||||
@ -434,6 +436,7 @@ MODEL_NAMES_MAPPING = OrderedDict(
|
||||
("gpt_neox", "GPT NeoX"),
|
||||
("gpt_neox_japanese", "GPT NeoX Japanese"),
|
||||
("gptj", "GPT-J"),
|
||||
("gptsan-japanese", "GPTSAN-japanese"),
|
||||
("graphormer", "Graphormer"),
|
||||
("groupvit", "GroupViT"),
|
||||
("herbert", "HerBERT"),
|
||||
|
@ -90,6 +90,7 @@ MODEL_MAPPING_NAMES = OrderedDict(
|
||||
("gpt_neox", "GPTNeoXModel"),
|
||||
("gpt_neox_japanese", "GPTNeoXJapaneseModel"),
|
||||
("gptj", "GPTJModel"),
|
||||
("gptsan-japanese", "GPTSanJapaneseForConditionalGeneration"),
|
||||
("graphormer", "GraphormerModel"),
|
||||
("groupvit", "GroupViTModel"),
|
||||
("hubert", "HubertModel"),
|
||||
@ -216,6 +217,7 @@ MODEL_FOR_PRETRAINING_MAPPING_NAMES = OrderedDict(
|
||||
("funnel", "FunnelForPreTraining"),
|
||||
("gpt-sw3", "GPT2LMHeadModel"),
|
||||
("gpt2", "GPT2LMHeadModel"),
|
||||
("gptsan-japanese", "GPTSanJapaneseForConditionalGeneration"),
|
||||
("ibert", "IBertForMaskedLM"),
|
||||
("layoutlm", "LayoutLMForMaskedLM"),
|
||||
("longformer", "LongformerForMaskedLM"),
|
||||
@ -286,6 +288,7 @@ MODEL_WITH_LM_HEAD_MAPPING_NAMES = OrderedDict(
|
||||
("gpt_neox", "GPTNeoXForCausalLM"),
|
||||
("gpt_neox_japanese", "GPTNeoXJapaneseForCausalLM"),
|
||||
("gptj", "GPTJForCausalLM"),
|
||||
("gptsan-japanese", "GPTSanJapaneseForConditionalGeneration"),
|
||||
("ibert", "IBertForMaskedLM"),
|
||||
("layoutlm", "LayoutLMForMaskedLM"),
|
||||
("led", "LEDForConditionalGeneration"),
|
||||
@ -579,6 +582,7 @@ MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES = OrderedDict(
|
||||
("blenderbot-small", "BlenderbotSmallForConditionalGeneration"),
|
||||
("encoder-decoder", "EncoderDecoderModel"),
|
||||
("fsmt", "FSMTForConditionalGeneration"),
|
||||
("gptsan-japanese", "GPTSanJapaneseForConditionalGeneration"),
|
||||
("led", "LEDForConditionalGeneration"),
|
||||
("longt5", "LongT5ForConditionalGeneration"),
|
||||
("m2m_100", "M2M100ForConditionalGeneration"),
|
||||
|
@ -154,6 +154,7 @@ else:
|
||||
("gpt_neox", (None, "GPTNeoXTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("gpt_neox_japanese", ("GPTNeoXJapaneseTokenizer", None)),
|
||||
("gptj", ("GPT2Tokenizer", "GPT2TokenizerFast" if is_tokenizers_available() else None)),
|
||||
("gptsan-japanese", ("GPTSanJapaneseTokenizer", None)),
|
||||
("groupvit", ("CLIPTokenizer", "CLIPTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("herbert", ("HerbertTokenizer", "HerbertTokenizerFast" if is_tokenizers_available() else None)),
|
||||
("hubert", ("Wav2Vec2CTCTokenizer", None)),
|
||||
|
src/transformers/models/gptsan_japanese/__init__.py (new file, 70 lines)
@ -0,0 +1,70 @@
|
||||
# Copyright 2022 The HuggingFace Team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from ...utils import (
|
||||
OptionalDependencyNotAvailable,
|
||||
_LazyModule,
|
||||
is_flax_available,
|
||||
is_tf_available,
|
||||
is_torch_available,
|
||||
)
|
||||
|
||||
|
||||
_import_structure = {
|
||||
"configuration_gptsan_japanese": ["GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP", "GPTSanJapaneseConfig"],
|
||||
"tokenization_gptsan_japanese": ["GPTSanJapaneseTokenizer"],
|
||||
}
|
||||
|
||||
try:
|
||||
if not is_torch_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
_import_structure["modeling_gptsan_japanese"] = [
|
||||
"GPTSAN_JAPANESE_PRETRAINED_MODEL_ARCHIVE_LIST",
|
||||
"GPTSanJapaneseForConditionalGeneration",
|
||||
"GPTSanJapaneseModel",
|
||||
"GPTSanJapanesePreTrainedModel",
|
||||
]
|
||||
_import_structure["tokenization_gptsan_japanese"] = [
|
||||
"GPTSanJapaneseTokenizer",
|
||||
]
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from .configuration_gptsan_japanese import GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP, GPTSanJapaneseConfig
|
||||
from .tokenization_gptsan_japanese import GPTSanJapaneseTokenizer
|
||||
|
||||
try:
|
||||
if not is_torch_available():
|
||||
raise OptionalDependencyNotAvailable()
|
||||
except OptionalDependencyNotAvailable:
|
||||
pass
|
||||
else:
|
||||
from .modeling_gptsan_japanese import (
|
||||
GPTSAN_JAPANESE_PRETRAINED_MODEL_ARCHIVE_LIST,
|
||||
GPTSanJapaneseForConditionalGeneration,
|
||||
GPTSanJapaneseModel,
|
||||
GPTSanJapanesePreTrainedModel,
|
||||
)
|
||||
from .tokenization_gptsan_japanese import GPTSanJapaneseTokenizer
|
||||
|
||||
|
||||
else:
|
||||
import sys
|
||||
|
||||
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
|
@ -0,0 +1,159 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023, HuggingFace Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
""" GPTSAN-japanese model configuration"""
|
||||
from ...configuration_utils import PretrainedConfig
|
||||
from ...utils import logging
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
GPTSAN_JAPANESE_PRETRAINED_CONFIG_ARCHIVE_MAP = {
|
||||
"tanreinama/GPTSAN-2.8B-spout_is_uniform": (
|
||||
"https://huggingface.co/tanreinama/GPTSAN-2.8B-spout_is_uniform/resolve/main/config.json"
|
||||
),
|
||||
}
|
||||
|
||||
|
||||
class GPTSanJapaneseConfig(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of a [`GPTSanJapaneseModel`]. It is used to instantiate
|
||||
a GPTSANJapanese model according to the specified arguments, defining the model architecture. Instantiating a
|
||||
configuration with the defaults will yield a similar configuration to that of the GPTSANJapanese
|
||||
[tanreinama/GPTSAN-2.8B-spout_is_uniform](https://huggingface.co/tanreinama/GPTSAN-2.8B-spout_is_uniform)
|
||||
architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Arguments:
|
||||
vocab_size (`int`, *optional*, defaults to 36000):
|
||||
Vocabulary size of the GPTSANJapanese model. Defines the number of different tokens that can be represented
|
||||
by the `inputs_ids` passed when calling [`GPTSanJapaneseModel`].
|
||||
max_position_embeddings (`int`, *optional*, defaults to 1280):
|
||||
The maximum sequence length that this model might ever be used with. Defaults set this to 1280.
|
||||
d_model (`int`, *optional*, defaults to 1024):
|
||||
Size of the encoder layers and the pooler layer.
|
||||
d_ff (`int`, *optional*, defaults to 8192):
|
||||
Size of the intermediate feed forward layer in each `SwitchTransformersBlock`.
|
||||
d_ext (`int`, *optional*, defaults to 4096):
|
||||
Size of the intermediate feed forward layer in each Extra-layers.
|
||||
d_spout (`int`, *optional*, defaults to 128):
|
||||
Size of the `spout` vector.
|
||||
num_switch_layers (`int`, *optional*, defaults to 10):
|
||||
Number of layers in the Switch Transformer layer.
|
||||
num_ext_layers (`int`, *optional*, defaults to 0):
|
||||
Number of layers in the Extra-layers.
|
||||
num_heads (`int`, *optional*, defaults to 16):
|
||||
Number of attention heads for each attention layer in the Transformer encoder.
|
||||
num_experts (`int`, *optional*, defaults to 16):
|
||||
Number of experts for each SwitchTransformer layer.
|
||||
expert_capacity (`int`, *optional*, defaults to 128):
|
||||
Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular
|
||||
Transformer.
|
||||
dropout_rate (`float`, *optional*, defaults to 0.0):
|
||||
The ratio for all dropout layers.
|
||||
layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
|
||||
The epsilon used by the layer normalization layers.
|
||||
router_bias (`bool`, *optional*, defaults to `False`):
|
||||
Whether to add a bias to the router.
|
||||
router_jitter_noise (`float`, *optional*, defaults to 0.0):
|
||||
Amount of noise to add to the router. Set it to 0.0 during prediction or set small value (usually 1e-2)
|
||||
during training.
|
||||
router_dtype (`str`, *optional*, defaults to `"float32"`):
|
||||
The `dtype` used for the routers. It is preferable to keep the `dtype` to `"float32"` as specified in the
|
||||
*selective precision* discussion in [the paper](https://arxiv.org/abs/2101.03961).
|
||||
router_ignore_padding_tokens (`bool`, *optional*, defaults to `False`):
|
||||
Whether to ignore padding tokens when routing.
|
||||
output_hidden_states (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
|
||||
more detail.
|
||||
output_attentions (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not to return the attentions tensors of all attention layers.
|
||||
initializer_factor (`float`, *optional*, defaults to 0.002):
|
||||
A factor for initializing all weight matrices.
|
||||
output_router_logits (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not to return the router logits of all experts.
|
||||
use_cache (`bool`, *optional*, defaults to `True`):
|
||||
Whether or not the model should return the last key/values attentions (not used by all models).
|
||||
"""
|
||||
model_type = "gptsan-japanese"
|
||||
keys_to_ignore_at_inference = [
|
||||
"past_key_values",
|
||||
]
|
||||
attribute_map = {
|
||||
"hidden_size": "d_model",
|
||||
"num_attention_heads": "num_heads",
|
||||
"num_hidden_layers": "num_layers",
|
||||
}
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
vocab_size=36000,
|
||||
max_position_embeddings=1280,
|
||||
d_model=1024,
|
||||
d_ff=8192,
|
||||
d_ext=4096,
|
||||
d_spout=128,
|
||||
num_switch_layers=10,
|
||||
num_ext_layers=0,
|
||||
num_heads=16,
|
||||
num_experts=16,
|
||||
expert_capacity=128,
|
||||
dropout_rate=0.0,
|
||||
layer_norm_epsilon=1e-5,
|
||||
router_bias=False,
|
||||
router_jitter_noise=0.0,
|
||||
router_dtype="float32",
|
||||
router_ignore_padding_tokens=False,
|
||||
output_hidden_states=False,
|
||||
output_attentions=False,
|
||||
initializer_factor=0.002,
|
||||
output_router_logits=False,
|
||||
use_cache=True,
|
||||
separator_token_id=35998,
|
||||
pad_token_id=35995,
|
||||
eos_token_id=35999,
|
||||
**kwargs,
|
||||
):
|
||||
self.vocab_size = vocab_size
|
||||
self.max_position_embeddings = max_position_embeddings
|
||||
self.d_model = d_model
|
||||
self.d_ff = d_ff
|
||||
self.d_ext = d_ext
|
||||
self.d_spout = d_spout
|
||||
self.num_switch_layers = num_switch_layers
|
||||
self.num_ext_layers = num_ext_layers
|
||||
self.num_layers = num_switch_layers + num_ext_layers
|
||||
self.num_heads = num_heads
|
||||
self.num_experts = num_experts
|
||||
self.expert_capacity = expert_capacity
|
||||
self.dropout_rate = dropout_rate
|
||||
self.layer_norm_epsilon = layer_norm_epsilon
|
||||
self.router_bias = router_bias
|
||||
self.router_jitter_noise = router_jitter_noise
|
||||
self.router_dtype = router_dtype
|
||||
self.router_ignore_padding_tokens = router_ignore_padding_tokens
|
||||
self.output_hidden_states = output_hidden_states
|
||||
self.output_attentions = output_attentions
|
||||
self.initializer_factor = initializer_factor
|
||||
self.output_router_logits = output_router_logits
|
||||
self.use_cache = use_cache
|
||||
|
||||
super().__init__(
|
||||
separator_token_id=separator_token_id,
|
||||
pad_token_id=pad_token_id,
|
||||
eos_token_id=eos_token_id,
|
||||
**kwargs,
|
||||
)
|
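For reference, a minimal usage sketch of the configuration class defined above; it assumes a transformers build that includes this PR, and the reduced values are arbitrary examples.

```python
from transformers import GPTSanJapaneseConfig, GPTSanJapaneseForConditionalGeneration

# Defaults reproduce the Tanrei/GPTSAN-japanese architecture described in the docstring:
# d_model=1024, 10 switch layers with 16 experts each, and a 128-dimensional spout vector.
config = GPTSanJapaneseConfig()

# A reduced configuration for quick experiments; num_layers is derived inside __init__
# as num_switch_layers + num_ext_layers.
small_config = GPTSanJapaneseConfig(d_model=256, d_ff=1024, num_switch_layers=2, num_experts=4)
model = GPTSanJapaneseForConditionalGeneration(small_config)
print(model.config.num_layers)  # 2 here, since num_ext_layers defaults to 0
```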
@ -0,0 +1,181 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""Convert GPTSANJapanese checkpoints from the original repository to pytorch model."""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
from collections import OrderedDict
|
||||
|
||||
import numpy as np
|
||||
import tensorflow as tf
|
||||
import torch
|
||||
|
||||
|
||||
def convert_tf_gptsan_to_pt(args):
|
||||
parameter_file = os.path.join(args.tf_model_dir, "parameters.json")
|
||||
params = json.loads(open(parameter_file).read())
|
||||
if not params:
|
||||
raise ValueError(
|
||||
f"It seems that the json file at {parameter_file} is empty. Make sure you have a correct json file."
|
||||
)
|
||||
if not args.output.endswith(".pt"):
|
||||
args.output = args.output + ".pt"
|
||||
new_state = OrderedDict()
|
||||
with tf.device("/CPU:0"):
|
||||
reader = tf.train.load_checkpoint(args.tf_model_dir)
|
||||
shapes = reader.get_variable_to_shape_map()
|
||||
for key_name in shapes.keys():
|
||||
vnp = reader.get_tensor(key_name).astype(np.float16)
|
||||
if key_name.endswith("/adam_m") or key_name.endswith("/adam_v"):
|
||||
continue
|
||||
if key_name.startswith("pasts/"):
|
||||
if key_name.startswith("pasts/mlp"):
|
||||
player = int(key_name[9])
|
||||
elif key_name.startswith("pasts/out"):
|
||||
player = 8
|
||||
name = "model.sqout.%d.weight" % (player * 2) # enter to nn.Sequencial with Tanh, so 2 at a time
|
||||
state = vnp.transpose([1, 0]).copy() # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.startswith("model/moe"):
|
||||
player = int(key_name[9:].split("/")[0])
|
||||
if key_name.endswith("/switch_gating/kernel"):
|
||||
name = "model.blocks.%d.feed_forward.mlp.router.classifier.weight" % player
|
||||
state = vnp.transpose([1, 0]).copy() # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/softmlp/kernel"):
|
||||
name = "model.blocks.%d.feed_forward.soft_bypass_mlp.weight" % player
|
||||
state = vnp.transpose([1, 0]).copy() # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/wo/kernel") or key_name.endswith("/wi/kernel"):
|
||||
nlayer = key_name[-9:-7]
|
||||
for i in range(16):
|
||||
name = "model.blocks.%d.feed_forward.mlp.experts.expert_%d.%s.weight" % (player, i, nlayer)
|
||||
state = (
|
||||
vnp[i].transpose([1, 0]).copy()
|
||||
)  # in Mesh-TensorFlow all experts are stored in one array, so it is split per expert here
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.startswith("model/mlp"):
|
||||
player = int(key_name[9:].split("/")[0])
|
||||
if key_name.endswith("/p1/kernel"):
|
||||
name = "model.blocks.%d.feed_forward.mlp.wi.weight" % player
|
||||
state = vnp.transpose([1, 0]).copy() # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/p1/bias"):
|
||||
name = "model.blocks.%d.feed_forward.mlp.wi.bias" % player
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/p2/kernel"):
|
||||
name = "model.blocks.%d.feed_forward.mlp.wo.weight" % player
|
||||
state = vnp.transpose([1, 0]).copy() # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/p2/bias"):
|
||||
name = "model.blocks.%d.feed_forward.mlp.wo.bias" % player
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.startswith("model/ln"):
|
||||
player = int(key_name[8:].split("/")[0])
|
||||
if key_name.endswith("/b"):
|
||||
name = "model.blocks.%d.feed_forward.norm.bias" % player
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/g"):
|
||||
name = "model.blocks.%d.feed_forward.norm.weight" % player
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.startswith("model/att"):
|
||||
player = int(key_name[9:].split("/")[0])
|
||||
if key_name.endswith("/qkv/kernel"):
|
||||
state = vnp.copy()  # combined qkv kernel; split and reshape below to match the dimensions Mesh-TensorFlow computes with einsum
|
||||
state_q = state[:, 0, :, :]
|
||||
state_k = state[:, 1, :, :]
|
||||
state_v = state[:, 2, :, :]
|
||||
state_q = (
|
||||
state_q.reshape([state_q.shape[0], state_q.shape[1] * state_q.shape[2]])
|
||||
.transpose([1, 0])
|
||||
.copy()
|
||||
) # Mesh-Tensorflow is a diagonal matrix
|
||||
state_k = (
|
||||
state_k.reshape([state_k.shape[0], state_k.shape[1] * state_k.shape[2]])
|
||||
.transpose([1, 0])
|
||||
.copy()
|
||||
) # Mesh-Tensorflow is a diagonal matrix
|
||||
state_v = (
|
||||
state_v.reshape([state_v.shape[0], state_v.shape[1] * state_v.shape[2]])
|
||||
.transpose([1, 0])
|
||||
.copy()
|
||||
) # Mesh-Tensorflow is a diagonal matrix
|
||||
name = "model.blocks.%d.self_attn.self_attn.q_proj.weight" % player
|
||||
new_state[name] = torch.tensor(state_q)
|
||||
name = "model.blocks.%d.self_attn.self_attn.k_proj.weight" % player
|
||||
new_state[name] = torch.tensor(state_k)
|
||||
name = "model.blocks.%d.self_attn.self_attn.v_proj.weight" % player
|
||||
new_state[name] = torch.tensor(state_v)
|
||||
elif key_name.endswith("/o/kernel"):
|
||||
name = "model.blocks.%d.self_attn.self_attn.out_proj.weight" % player
|
||||
state = (
|
||||
vnp.reshape([vnp.shape[0] * vnp.shape[1], vnp.shape[2]]).transpose([1, 0]).copy()
|
||||
) # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.startswith("model/an"):
|
||||
player = int(key_name[8:].split("/")[0])
|
||||
if key_name.endswith("/b"):
|
||||
name = "model.blocks.%d.self_attn.norm.bias" % player
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.endswith("/g"):
|
||||
name = "model.blocks.%d.self_attn.norm.weight" % player
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif (
|
||||
key_name.startswith("model/wte")
|
||||
or key_name.startswith("model/wpe")
|
||||
or key_name.startswith("model/ete")
|
||||
):
|
||||
nlayer = {"wte": "embed_tokens", "wpe": "position_embeddings", "ete": "extra_position_embeddings"}[
|
||||
key_name[-3:]
|
||||
]
|
||||
name = "model.%s.weight" % nlayer
|
||||
state = vnp.copy() # same in embedded
|
||||
new_state[name] = torch.tensor(state)
|
||||
if key_name.startswith("model/wte"):
|
||||
name = "lm_head.weight"
|
||||
state = vnp.copy() # same in embedded
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name.startswith("model/wob"):
|
||||
name = "final_logits_bias"
|
||||
state = vnp.copy() # same in embedded
|
||||
state = state.reshape((1, -1))
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name == "model/dense/kernel":
|
||||
name = "model.last_project.weight"
|
||||
state = vnp.transpose([1, 0]).copy() # Mesh-Tensorflow is a diagonal matrix
|
||||
new_state[name] = torch.tensor(state)
|
||||
elif key_name == "model/dense_1/bias":
|
||||
name = "model.last_project.bias"
|
||||
state = vnp.copy() # same because it is one dimensional
|
||||
new_state[name] = torch.tensor(state)
|
||||
torch.save(new_state, args.output)
|
||||
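# Editor's note: a minimal, hypothetical sketch of driving the converter above
# programmatically instead of via the CLI. Paths are placeholders; the directory is
# assumed to contain parameters.json plus the Mesh-TensorFlow checkpoint files.
def _example_programmatic_conversion():
    from types import SimpleNamespace

    example_args = SimpleNamespace(
        tf_model_dir="/path/to/GPTSAN/tf_checkpoint",  # placeholder checkpoint directory
        output="gptsan_japanese.pt",  # placeholder output path for the PyTorch state dict
    )
    convert_tf_gptsan_to_pt(example_args)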
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser(
|
||||
description="model converter.", formatter_class=argparse.ArgumentDefaultsHelpFormatter
|
||||
)
|
||||
parser.add_argument("--tf_model_dir", metavar="PATH", type=str, required=True, help="import model")
|
||||
parser.add_argument("--output", metavar="PATH", type=str, required=True, help="output model")
|
||||
args = parser.parse_args()
|
||||
convert_tf_gptsan_to_pt(args)
|
src/transformers/models/gptsan_japanese/modeling_gptsan_japanese.py (new file, 1348 lines)
File diff suppressed because it is too large.
@@ -0,0 +1,534 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""Tokenization classes for GPTSANJapanese."""
|
||||
import collections
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
from typing import TYPE_CHECKING, List, Optional, Tuple, Union
|
||||
|
||||
import numpy as np
|
||||
|
||||
from ...tokenization_utils import PreTrainedTokenizer
|
||||
from ...tokenization_utils_base import (
|
||||
BatchEncoding,
|
||||
PreTokenizedInput,
|
||||
PreTokenizedInputPair,
|
||||
TextInput,
|
||||
TextInputPair,
|
||||
TruncationStrategy,
|
||||
)
|
||||
from ...utils import PaddingStrategy, logging
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from transformers.pipelines.conversational import Conversation
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt", "emoji_file": "emoji.json"}
|
||||
|
||||
PRETRAINED_VOCAB_FILES_MAP = {
|
||||
"vocab_file": {
|
||||
"Tanrei/GPTSAN-japanese": "https://huggingface.co/Tanrei/GPTSAN-japanese/blob/main/vocab.txt",
|
||||
},
|
||||
"emoji_file": {
|
||||
"Tanrei/GPTSAN-japanese": "https://huggingface.co/Tanrei/GPTSAN-japanese/blob/main/emoji.json",
|
||||
},
|
||||
}
|
||||
|
||||
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
|
||||
"Tanrei/GPTSAN-japanese": 1280,
|
||||
}
|
||||
|
||||
|
||||
# Copied from transformers.models.gpt_neox_japanese.tokenization_gpt_neox_japanese.load_vocab_and_emoji
|
||||
def load_vocab_and_emoji(vocab_file, emoji_file):
|
||||
"""Loads a vocabulary file and emoji file into a dictionary."""
|
||||
with open(emoji_file, "r", encoding="utf-8") as f:
|
||||
emoji = json.loads(f.read())
|
||||
|
||||
vocab = collections.OrderedDict()
|
||||
raw_vocab = collections.OrderedDict()
|
||||
ids_to_tokens = collections.OrderedDict()
|
||||
with open(vocab_file, "r", encoding="utf-8") as f:
|
||||
token = f.readlines()
|
||||
token = [[t.rstrip("\n")] if (t == "," or "," not in t) else t.rstrip("\n").split(",") for t in token]
|
||||
for idx, b in enumerate(token):
|
||||
ids_to_tokens[idx] = b
|
||||
raw_vocab[",".join(b)] = idx
|
||||
for wd in b:
|
||||
vocab[wd] = idx
|
||||
|
||||
return vocab, raw_vocab, ids_to_tokens, emoji
|
||||
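# Editor's note: a small, hypothetical illustration of the mapping built above. The
# file contents are assumptions, not the released vocabulary; it only shows that two
# comma-separated surface forms on one vocabulary line share a single token id.
def _example_load_vocab_and_emoji():
    import tempfile

    tmpdir = tempfile.mkdtemp()
    vocab_path = os.path.join(tmpdir, "vocab.txt")
    emoji_path = os.path.join(tmpdir, "emoji.json")
    with open(vocab_path, "w", encoding="utf-8") as f:
        f.write("こん\n世界,㔺界\n")  # second line: two spellings of the same word
    with open(emoji_path, "w", encoding="utf-8") as f:
        json.dump({"emoji": {}, "emoji_inv": {}}, f)

    vocab, raw_vocab, ids_to_tokens, _ = load_vocab_and_emoji(vocab_path, emoji_path)
    assert vocab["世界"] == vocab["㔺界"] == 1  # both surface forms share id 1
    assert raw_vocab["世界,㔺界"] == 1
    assert ids_to_tokens[1] == ["世界", "㔺界"]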
|
||||
|
||||
class GPTSanJapaneseTokenizer(PreTrainedTokenizer):
|
||||
"""
|
||||
This tokenizer is based on GPTNeoXJapaneseTokenizer and has the following modifications:
|
||||
- Decoding byte0~byte255 tokens correctly
|
||||
- Added bagofword token handling
|
||||
- Return token_type_ids for Prefix-LM model
|
||||
The bagofword token represents a repetition of the previous token and is converted to 3 consecutive tokens when
|
||||
decoding. In addition, the original Japanese special Sub-Word-Encoding has been released in this repository
|
||||
(https://github.com/tanreinama/Japanese-BPEEncoder_V2). The token_type_ids mask indicates the prefix input
|
||||
position of the Prefix-LM model. To specify a prefix position, pass the prefix as prefix_text, or pass the
|
||||
prefix sentence and the sentence that follows it as a text pair in a batch input.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
>>> from transformers import GPTSanJapaneseTokenizer
|
||||
|
||||
>>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
>>> # You can confirm both 慶応 and 慶應 are encoded to 17750
|
||||
>>> tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]
|
||||
[34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281]
|
||||
|
||||
>>> # Both 慶応 and 慶應 are decoded to 慶応
|
||||
>>> tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"])
|
||||
'吾輩は猫である🐯。実は慶応(慶応)大学出身'
|
||||
```
|
||||
|
||||
Example for Prefix-LM:
|
||||
|
||||
```python
|
||||
>>> from transformers import GPTSanJapaneseTokenizer
|
||||
|
||||
>>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
>>> tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["input_ids"]
|
||||
[35993, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 35998, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281]
|
||||
|
||||
>>> # Mask for Prefix-LM inputs
|
||||
>>> tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["token_type_ids"]
|
||||
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
|
||||
```
|
||||
|
||||
Example for batch encode:
|
||||
|
||||
```python
|
||||
>>> from transformers import GPTSanJapaneseTokenizer
|
||||
|
||||
>>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
>>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["input_ids"]
|
||||
[[35993, 8640, 25948, 35998, 30647, 35675, 35999, 35999], [35993, 10382, 9868, 35998, 30646, 9459, 30646, 35675]]
|
||||
|
||||
>>> # Mask for Prefix-LM inputs
|
||||
>>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["token_type_ids"]
|
||||
[[1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0]]
|
||||
|
||||
>>> # Mask for padding
|
||||
>>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["attention_mask"]
|
||||
[[1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1]]
|
||||
```
|
||||
|
||||
Args:
|
||||
vocab_file (`str`):
|
||||
File containing the vocabulary.
|
||||
emoji_file (`str`):
|
||||
File containing the emoji.
|
||||
unk_token (`str`, *optional*, defaults to `"<|nottoken|>"`):
|
||||
The token used for an unknown character.
|
||||
pad_token (`str`, *optional*, defaults to `"<|separator|>"`):
|
||||
The token used for padding.
|
||||
bos_token (`str`, *optional*, defaults to `"<|startoftext|>"`):
|
||||
The beginning of sequence token.
|
||||
eos_token (`str`, *optional*, defaults to `"<|endoftext|>"`):
|
||||
The end of sequence token.
|
||||
sep_token (`str`, *optional*, defaults to `"<|segmenter|>"`):
|
||||
A special token that separates the prefix part from the rest of the input.
|
||||
do_clean_text (`bool`, *optional*, defaults to `False`):
|
||||
Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE.
|
||||
"""
|
||||
|
||||
vocab_files_names = VOCAB_FILES_NAMES
|
||||
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
|
||||
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
|
||||
model_input_names = ["input_ids", "attention_mask", "token_type_ids"]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
vocab_file,
|
||||
emoji_file,
|
||||
unk_token="<|nottoken|>",
|
||||
pad_token="<|separator|>",
|
||||
bos_token="<|startoftext|>",
|
||||
eos_token="<|endoftext|>",
|
||||
sep_token="<|segmenter|>",
|
||||
do_clean_text=False,
|
||||
**kwargs,
|
||||
):
|
||||
super().__init__(
|
||||
unk_token=unk_token,
|
||||
pad_token=pad_token,
|
||||
bos_token=bos_token,
|
||||
eos_token=eos_token,
|
||||
sep_token=sep_token,
|
||||
do_clean_text=do_clean_text,
|
||||
**kwargs,
|
||||
)
|
||||
if not os.path.isfile(vocab_file):
|
||||
raise ValueError(
|
||||
f"Can't find a vocabulary file at path '{vocab_file}'. To load the vocabulary from a Google pretrained"
|
||||
" model use `tokenizer = GPTSanJapaneseTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`"
|
||||
)
|
||||
if not os.path.isfile(emoji_file):
|
||||
raise ValueError(
|
||||
f"Can't find a emoji file at path '{emoji_file}'. To load the emoji information from a Google"
|
||||
" pretrained model use `tokenizer = GPTSanJapaneseTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`"
|
||||
)
|
||||
self.do_clean_text = do_clean_text
|
||||
self.vocab, self.raw_vocab, self.ids_to_tokens, self.emoji = load_vocab_and_emoji(vocab_file, emoji_file)
|
||||
self.subword_tokenizer = SubWordJapaneseTokenizer(
|
||||
vocab=self.vocab, ids_to_tokens=self.ids_to_tokens, emoji=self.emoji
|
||||
)
|
||||
|
||||
@property
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer.vocab_size
|
||||
def vocab_size(self):
|
||||
# self.vocab handles character-variant fluctuation unique to Japanese, so it has more entries than raw_vocab
|
||||
return len(self.raw_vocab)
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer.get_vocab
|
||||
def get_vocab(self):
|
||||
return dict(self.raw_vocab, **self.added_tokens_encoder)
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer._tokenize
|
||||
def _tokenize(self, text):
|
||||
return self.subword_tokenizer.tokenize(text, clean=self.do_clean_text)
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer._convert_token_to_id
|
||||
def _convert_token_to_id(self, token):
|
||||
"""Converts a token (str) in an id using the vocab."""
|
||||
return self.vocab.get(token, self.vocab.get(self.unk_token))
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer._convert_id_to_token
|
||||
def _convert_id_to_token(self, index):
|
||||
"""Converts an index (integer) in a token (str) using the vocab."""
|
||||
return self.subword_tokenizer.convert_id_to_token(index)
|
||||
|
||||
def convert_tokens_to_string(self, tokens):
|
||||
"""Converts a sequence of tokens (string) in a single string."""
|
||||
words = []
|
||||
byte_tokens = []
|
||||
for word in tokens:
|
||||
if word[:6] == "<|byte" and word[-2:] == "|>":
|
||||
byte_tokens.append(int(word[6:-2]))
|
||||
else:
|
||||
if len(byte_tokens) > 0:
|
||||
words.append(bytearray(byte_tokens).decode("utf-8", errors="replace"))
|
||||
byte_tokens = []
|
||||
if word[:7] == "<|emoji" and word[-2:] == "|>":
|
||||
words.append(self.emoji["emoji_inv"][word])
|
||||
elif word == "<SP>":
|
||||
words.append(" ")
|
||||
elif word == "<BR>":
|
||||
words.append("\n")
|
||||
elif word == "<TAB>":
|
||||
words.append("\t")
|
||||
elif word == "<BLOCK>":
|
||||
words.append("▀")
|
||||
elif word == "<KIGOU>":
|
||||
words.append("ǀ")
|
||||
elif word == "<U2000U2BFF>":
|
||||
words.append("‖")
|
||||
elif word == "<|bagoftoken|>":
|
||||
if len(words) > 0:
|
||||
words.append(words[-1])
|
||||
words.append(words[-1])
|
||||
words.append(words[-1])
|
||||
elif word.startswith("<|") and word.endswith("|>"):
|
||||
words.append("")
|
||||
else:
|
||||
words.append(word)
|
||||
if len(byte_tokens) > 0:
|
||||
words.append(bytearray(byte_tokens).decode("utf-8", errors="replace"))
|
||||
text = "".join(words)
|
||||
return text
|
||||
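    # Editor's note (illustrative, values assumed): consecutive <|byteN|> tokens are
    # buffered and decoded as one UTF-8 sequence, e.g. ["<|byte227|>", "<|byte129|>",
    # "<|byte130|>"] becomes bytearray([227, 129, 130]).decode("utf-8") == "あ".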
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer._build_conversation_input_ids
|
||||
def _build_conversation_input_ids(self, conversation: "Conversation") -> List[int]:
|
||||
"""This corresponds to DialoGPT variants of models."""
|
||||
input_ids = []
|
||||
for is_user, text in conversation.iter_texts():
|
||||
input_ids.extend(self.encode(text, add_special_tokens=False) + [self.eos_token_id])
|
||||
|
||||
if len(input_ids) > self.model_max_length:
|
||||
input_ids = input_ids[-self.model_max_length :]
|
||||
return input_ids
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizer.save_vocabulary
|
||||
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
|
||||
index = 0
|
||||
if os.path.isdir(save_directory):
|
||||
vocab_file = os.path.join(
|
||||
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
|
||||
)
|
||||
emoji_file = os.path.join(
|
||||
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["emoji_file"]
|
||||
)
|
||||
else:
|
||||
vocab_file = (
|
||||
(filename_prefix + "-" if filename_prefix else "") + save_directory + VOCAB_FILES_NAMES["vocab_file"]
|
||||
)
|
||||
emoji_file = (
|
||||
(filename_prefix + "-" if filename_prefix else "") + save_directory + VOCAB_FILES_NAMES["emoji_file"]
|
||||
)
|
||||
with open(vocab_file, "w", encoding="utf-8") as writer:
|
||||
for token_index, token in self.ids_to_tokens.items():
|
||||
if index != token_index:
|
||||
logger.warning(
|
||||
f"Saving vocabulary to {vocab_file}: vocabulary indices are not consecutive."
|
||||
" Please check that the vocabulary is not corrupted!"
|
||||
)
|
||||
index = token_index
|
||||
writer.write(",".join(token) + "\n")
|
||||
index += 1
|
||||
with open(emoji_file, "w", encoding="utf-8") as writer:
|
||||
json.dump(self.emoji, writer)
|
||||
return vocab_file, emoji_file
|
||||
|
||||
def create_token_type_ids_from_sequences(
|
||||
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
|
||||
) -> List[int]:
|
||||
# docstyle-ignore
|
||||
"""
|
||||
The tokenizer returns token_type_ids as separators between the Prefix part and the rest.
|
||||
token_type_ids is 1 for the Prefix part and 0 for the rest of the token.
|
||||
|
||||
Example:
|
||||
```python
|
||||
>>> x_token = tokenizer("アイウエ")
|
||||
>>> # input_ids: | SOT | SEG | ア | イ | ウ | エ |
|
||||
>>> # token_type_ids: | 1 | 0 | 0 | 0 | 0 | 0 |
|
||||
|
||||
>>> x_token = tokenizer("", prefix_text="アイウエ")
|
||||
>>> # input_ids: | SOT | ア | イ | ウ | エ | SEG |
|
||||
>>> # token_type_ids: | 1 | 1 | 1 | 1 | 1 | 0 |
|
||||
|
||||
>>> x_token = tokenizer("ウエ", prefix_text="アイ")
|
||||
>>> # input_ids: | SOT | ア | イ | SEG | ウ | エ |
|
||||
>>> # token_type_ids: | 1 | 1 | 1 | 0 | 0 | 0 |
|
||||
```"""
|
||||
prefix_len = 0
|
||||
if self.sep_token in self.vocab:
|
||||
segid = self.vocab[self.sep_token]
|
||||
if segid in token_ids_0:
|
||||
prefix_len = token_ids_0.index(segid)
|
||||
if token_ids_1 is None:
|
||||
total_len = len(token_ids_0)
|
||||
else:
|
||||
total_len = len(token_ids_0 + token_ids_1)
|
||||
return prefix_len * [1] + (total_len - prefix_len) * [0]
|
||||
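    # Editor's note (hypothetical ids, illustration only): with a separator id of 99,
    # token_ids_0 = [7, 11, 12, 99, 21, 22] gives prefix_len = 3, so the returned
    # token_type_ids are [1, 1, 1, 0, 0, 0]: everything before the separator is prefix.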
|
||||
def prepare_for_tokenization(self, text, prefix_text=None, add_sep_token=None, **kwargs):
|
||||
# GPTSAN inserts extra SEP tokens in Prefix-LM in addition to SOT for text generation.
|
||||
# SOT at the beginning of the text, and SEP at the separator between the Prefix part and the rest.
|
||||
if add_sep_token is None:
|
||||
add_sep_token = self.sep_token not in text  # insert SEP only when the un-prefix position is not already marked explicitly
|
||||
prepared = self.bos_token if self.bos_token in self.vocab else ""
|
||||
prepared += prefix_text if prefix_text is not None else ""
|
||||
if add_sep_token:
|
||||
prepared += self.sep_token if self.sep_token in self.vocab else ""
|
||||
prepared += text
|
||||
return (prepared, kwargs)
|
||||
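    # Editor's note (illustrative sketch, special tokens shown literally): assuming both
    # the BOS and SEP tokens are present in the vocabulary,
    # prepare_for_tokenization("ウエ", prefix_text="アイ") assembles
    # "<|startoftext|>アイ<|segmenter|>ウエ", i.e. bos_token + prefix_text + sep_token + text.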
|
||||
def _batch_encode_plus(
|
||||
self,
|
||||
batch_text_or_text_pairs: Union[
|
||||
List[TextInput], List[TextInputPair], List[PreTokenizedInput], List[PreTokenizedInputPair]
|
||||
],
|
||||
add_special_tokens: bool = True,
|
||||
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
|
||||
truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,
|
||||
max_length: Optional[int] = None,
|
||||
stride: int = 0,
|
||||
is_split_into_words: bool = False,
|
||||
pad_to_multiple_of: Optional[int] = None,
|
||||
return_tensors: Optional[str] = None,
|
||||
return_token_type_ids: Optional[bool] = None,
|
||||
return_attention_mask: Optional[bool] = None,
|
||||
return_overflowing_tokens: bool = False,
|
||||
return_special_tokens_mask: bool = False,
|
||||
return_offsets_mapping: bool = False,
|
||||
return_length: bool = False,
|
||||
verbose: bool = True,
|
||||
) -> BatchEncoding:
|
||||
# This tokenizer converts input text pairs into Prefix input and subsequent input
|
||||
if type(batch_text_or_text_pairs[0]) is tuple or type(batch_text_or_text_pairs[0]) is list:
|
||||
# Join each pair into a single text with an explicit un-prefix (SEP) position
|
||||
batch_prefix_texts = []
|
||||
for pref, txt in batch_text_or_text_pairs:
|
||||
batch_prefix_texts.append(pref + self.sep_token + txt)
|
||||
batch_text_or_text_pairs = batch_prefix_texts
|
||||
|
||||
return super()._batch_encode_plus(
|
||||
batch_text_or_text_pairs,
|
||||
add_special_tokens,
|
||||
padding_strategy,
|
||||
truncation_strategy,
|
||||
max_length,
|
||||
stride,
|
||||
is_split_into_words,
|
||||
pad_to_multiple_of,
|
||||
return_tensors,
|
||||
return_token_type_ids,
|
||||
return_attention_mask,
|
||||
return_overflowing_tokens,
|
||||
return_special_tokens_mask,
|
||||
return_offsets_mapping,
|
||||
return_length,
|
||||
verbose,
|
||||
)
|
||||
|
||||
|
||||
class SubWordJapaneseTokenizer(object):
|
||||
"""
|
||||
This tokenizer is based on GPTNeoXJapaneseTokenizer and has the following modifications:
|
||||
- Decoding byte0~byte255 tokens correctly
|
||||
- Added bagofword token handling
|
||||
|
||||
https://github.com/tanreinama/Japanese-BPEEncoder_V2 This tokenizer class is under the MIT License according to the
|
||||
original repository.
|
||||
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2020 tanreinama
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
|
||||
documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
|
||||
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
|
||||
permit persons to whom the Software is furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all copies or substantial portions of
|
||||
the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
|
||||
THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
"""
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.SubWordJapaneseTokenizer.__init__
|
||||
def __init__(self, vocab, ids_to_tokens, emoji):
|
||||
self.vocab = vocab # same as swe
|
||||
self.ids_to_tokens = ids_to_tokens # same as bpe
|
||||
self.emoji = emoji
|
||||
self.maxlen = np.max([len(w) for w in self.vocab.keys()])
|
||||
self.content_repatter1 = re.compile(r"(https?|ftp)(:\/\/[-_\.!~*\'()a-zA-Z0-9;\/?:\@&=\+$,%#]+)")
|
||||
self.content_repatter2 = re.compile(r"[A-Za-z0-9\._+]*@[\-_0-9A-Za-z]+(\.[A-Za-z]+)*")
|
||||
self.content_repatter3 = re.compile(r"[\(]{0,1}[0-9]{2,4}[\)\-\(]{0,1}[0-9]{2,4}[\)\-]{0,1}[0-9]{3,4}")
|
||||
self.content_repatter4 = re.compile(
|
||||
r"([12]\d{3}[/\-年])*(0?[1-9]|1[0-2])[/\-月]((0?[1-9]|[12][0-9]|3[01])日?)*(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*"
|
||||
)
|
||||
self.content_repatter5 = re.compile(
|
||||
r"(明治|大正|昭和|平成|令和|㍾|㍽|㍼|㍻|\u32ff)\d{1,2}年(0?[1-9]|1[0-2])月(0?[1-9]|[12][0-9]|3[01])日(\d{1,2}|:|\d{1,2}時|\d{1,2}分|\(日\)|\(月\)|\(火\)|\(水\)|\(木\)|\(金\)|\(土\)|㈰|㈪|㈫|㈬|㈭|㈮|㈯)*"
|
||||
)
|
||||
self.content_repatter6 = re.compile(
|
||||
r"((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*億)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*万)*((0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*千)*(0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})+)*(千円|万円|千万円|円|千ドル|万ドル|千万ドル|ドル|千ユーロ|万ユーロ|千万ユーロ|ユーロ)+(\(税込\)|\(税抜\)|\+tax)*"
|
||||
)
|
||||
keisen = "─━│┃┄┅┆┇┈┉┊┋┌┍┎┏┐┑┒┓└┕┖┗┘┙┚┛├┝┞┟┠┡┢┣┤┥┦┧┨┩┪┫┬┭┮┯┰┱┲┳┴┵┶┷┸┹┺┻┼┽┾┿╀╁╂╃╄╅╆╇╈╉╊╋╌╍╎╏═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬╭╮╯╰╱╲╳╴╵╶╷╸╹╺╻╼╽╾╿"
|
||||
blocks = "▀▁▂▃▄▅▆▇█▉▊▋▌▍▎▏▐░▒▓▔▕▖▗▘▙▚▛▜▝▞▟"
|
||||
self.content_trans1 = str.maketrans({k: "<BLOCK>" for k in keisen + blocks})
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.SubWordJapaneseTokenizer.__len__
|
||||
def __len__(self):
|
||||
return len(self.ids_to_tokens)
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.SubWordJapaneseTokenizer.clean_text
|
||||
def clean_text(self, content):
|
||||
content = self.content_repatter1.sub("<URL>", content)
|
||||
content = self.content_repatter2.sub("<EMAIL>", content)
|
||||
content = self.content_repatter3.sub("<TEL>", content)
|
||||
content = self.content_repatter4.sub("<DATE>", content)
|
||||
content = self.content_repatter5.sub("<DATE>", content)
|
||||
content = self.content_repatter6.sub("<PRICE>", content)
|
||||
content = content.translate(self.content_trans1)
|
||||
while "<BLOCK><BLOCK>" in content:
|
||||
content = content.replace("<BLOCK><BLOCK>", "<BLOCK>")
|
||||
return content
|
||||
|
||||
# Copied from tokenization_gpt_neox_japanese.SubWordJapaneseTokenizer.tokenize
|
||||
def tokenize(self, text, clean=False):
|
||||
text = text.replace(" ", "<SP>")
|
||||
text = text.replace(" ", "<SP>")
|
||||
text = text.replace("\r\n", "<BR>")
|
||||
text = text.replace("\n", "<BR>")
|
||||
text = text.replace("\r", "<BR>")
|
||||
text = text.replace("\t", "<TAB>")
|
||||
text = text.replace("—", "ー")
|
||||
text = text.replace("−", "ー")
|
||||
for k, v in self.emoji["emoji"].items():
|
||||
if k in text:
|
||||
text = text.replace(k, v)
|
||||
if clean:
|
||||
text = self.clean_text(text)
|
||||
|
||||
def check_simbol(x):
|
||||
e = x.encode()
|
||||
if len(x) == 1 and len(e) == 2:
|
||||
c = (int(e[0]) << 8) + int(e[1])
|
||||
if (
|
||||
(c >= 0xC2A1 and c <= 0xC2BF)
|
||||
or (c >= 0xC780 and c <= 0xC783)
|
||||
or (c >= 0xCAB9 and c <= 0xCBBF)
|
||||
or (c >= 0xCC80 and c <= 0xCDA2)
|
||||
):
|
||||
return True
|
||||
return False
|
||||
|
||||
def checku2e(x):
|
||||
e = x.encode()
|
||||
if len(x) == 1 and len(e) == 3:
|
||||
c = (int(e[0]) << 16) + (int(e[1]) << 8) + int(e[2])
|
||||
if c >= 0xE28080 and c <= 0xE2B07F:
|
||||
return True
|
||||
return False
|
||||
|
||||
pos = 0
|
||||
result = []
|
||||
while pos < len(text):
|
||||
end = min(len(text), pos + self.maxlen + 1) if text[pos] == "<" else pos + 3
|
||||
candidates = [] # (token_id, token, pos)
|
||||
for e in range(end, pos, -1):
|
||||
wd = text[pos:e]
|
||||
if wd in self.vocab:
|
||||
if wd[0] == "<" and len(wd) > 2:
|
||||
candidates = [(self.vocab[wd], wd, e)]
|
||||
break
|
||||
else:
|
||||
candidates.append((self.vocab[wd], wd, e))
|
||||
if len(candidates) > 0:
|
||||
# the smallest token_id is adopted
|
||||
_, wd, e = sorted(candidates, key=lambda x: x[0])[0]
|
||||
result.append(wd)
|
||||
pos = e
|
||||
else:
|
||||
end = pos + 1
|
||||
wd = text[pos:end]
|
||||
if check_simbol(wd):
|
||||
result.append("<KIGOU>")
|
||||
elif checku2e(wd):
|
||||
result.append("<U2000U2BFF>")
|
||||
else:
|
||||
for i in wd.encode("utf-8"):
|
||||
result.append("<|byte%d|>" % i)
|
||||
pos = end
|
||||
return result
|
||||
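    # Editor's note (summary of the scan above): the longest in-vocabulary substring is
    # tried first and, among the matching candidates, the smallest token id is adopted;
    # characters not covered by the vocabulary fall back to <KIGOU>, <U2000U2BFF> or raw
    # <|byteN|> tokens.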
|
||||
def convert_id_to_token(self, index):
|
||||
return self.ids_to_tokens[index][0]
|
@@ -3085,6 +3085,30 @@ class GPTJPreTrainedModel(metaclass=DummyObject):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
GPTSAN_JAPANESE_PRETRAINED_MODEL_ARCHIVE_LIST = None
|
||||
|
||||
|
||||
class GPTSanJapaneseForConditionalGeneration(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class GPTSanJapaneseModel(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
class GPTSanJapanesePreTrainedModel(metaclass=DummyObject):
|
||||
_backends = ["torch"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
requires_backends(self, ["torch"])
|
||||
|
||||
|
||||
GRAPHORMER_PRETRAINED_MODEL_ARCHIVE_LIST = None
|
||||
|
||||
|
||||
|
tests/models/gptsan_japanese/__init__.py (new file, empty)
tests/models/gptsan_japanese/test_modeling_gptsan_japanese.py (new file, 428 lines)
@@ -0,0 +1,428 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 Toshiyuki Sakamoto(tanreinama) and HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import unittest
|
||||
|
||||
import numpy as np
|
||||
|
||||
from transformers import (
|
||||
GPTSanJapaneseConfig,
|
||||
GPTSanJapaneseForConditionalGeneration,
|
||||
GPTSanJapaneseModel,
|
||||
GPTSanJapaneseTokenizer,
|
||||
is_torch_available,
|
||||
)
|
||||
from transformers.generation import GenerationConfig
|
||||
from transformers.testing_utils import require_torch, slow, tooslow, torch_device
|
||||
|
||||
from ...generation.test_utils import GenerationTesterMixin
|
||||
from ...test_configuration_common import ConfigTester
|
||||
from ...test_modeling_common import ModelTesterMixin, ids_tensor
|
||||
|
||||
|
||||
class GPTSanJapaneseTester:
|
||||
def __init__(
|
||||
self,
|
||||
parent,
|
||||
vocab_size=36000,
|
||||
batch_size=13,
|
||||
num_contexts=7,
|
||||
# For common tests
|
||||
is_training=True,
|
||||
hidden_size=32,
|
||||
ext_size=42,
|
||||
num_hidden_layers=5,
|
||||
num_ext_layers=2,
|
||||
num_attention_heads=4,
|
||||
num_experts=2,
|
||||
d_ff=32,
|
||||
d_ext=80,
|
||||
d_spout=33,
|
||||
dropout_rate=0.0,
|
||||
layer_norm_epsilon=1e-6,
|
||||
expert_capacity=100,
|
||||
router_jitter_noise=0.0,
|
||||
):
|
||||
self.vocab_size = vocab_size
|
||||
self.parent = parent
|
||||
self.batch_size = batch_size
|
||||
self.num_contexts = num_contexts
|
||||
# For common tests
|
||||
self.seq_length = self.num_contexts
|
||||
self.is_training = is_training
|
||||
self.hidden_size = hidden_size
|
||||
self.num_ext_layers = num_ext_layers
|
||||
self.ext_size = ext_size
|
||||
self.num_hidden_layers = num_hidden_layers
|
||||
self.num_attention_heads = num_attention_heads
|
||||
self.num_experts = num_experts
|
||||
self.d_ff = d_ff
|
||||
self.d_ext = d_ext
|
||||
self.d_spout = d_spout
|
||||
self.dropout_rate = dropout_rate
|
||||
self.layer_norm_epsilon = layer_norm_epsilon
|
||||
self.expert_capacity = expert_capacity
|
||||
self.router_jitter_noise = router_jitter_noise
|
||||
|
||||
def get_large_model_config(self):
|
||||
return GPTSanJapaneseConfig.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
|
||||
def prepare_config_and_inputs(self):
|
||||
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
|
||||
|
||||
config = self.get_config()
|
||||
|
||||
return (config, input_ids)
|
||||
|
||||
def prepare_config_and_inputs_for_common(self):
|
||||
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
|
||||
|
||||
config = self.get_config()
|
||||
|
||||
return (config, {"input_ids": input_ids})
|
||||
|
||||
def get_config(self):
|
||||
return GPTSanJapaneseConfig(
|
||||
vocab_size=36000,
|
||||
num_contexts=self.seq_length,
|
||||
d_model=self.hidden_size,
|
||||
d_ff=self.d_ff,
|
||||
d_ext=self.d_ext,
|
||||
d_spout=self.d_spout,
|
||||
num_switch_layers=self.num_hidden_layers - self.num_ext_layers,
|
||||
num_ext_layers=self.num_ext_layers,
|
||||
num_heads=self.num_attention_heads,
|
||||
num_experts=self.num_experts,
|
||||
expert_capacity=self.expert_capacity,
|
||||
dropout_rate=self.dropout_rate,
|
||||
layer_norm_epsilon=self.layer_norm_epsilon,
|
||||
router_jitter_noise=self.router_jitter_noise,
|
||||
)
|
||||
|
||||
def create_and_check_model(
|
||||
self,
|
||||
config,
|
||||
input_ids,
|
||||
):
|
||||
model = GPTSanJapaneseForConditionalGeneration(config=config)
|
||||
model.to(torch_device)
|
||||
model.eval()
|
||||
result = model(
|
||||
input_ids=input_ids,
|
||||
)
|
||||
self.parent.assertIsNotNone(result)
|
||||
|
||||
|
||||
@require_torch
|
||||
class GPTSanJapaneseTest(ModelTesterMixin, unittest.TestCase):
|
||||
all_model_classes = (GPTSanJapaneseModel,) if is_torch_available() else ()
|
||||
fx_compatible = False
|
||||
is_encoder_decoder = False
|
||||
test_pruning = False
|
||||
test_headmasking = False
|
||||
test_cpu_offload = False
|
||||
test_disk_offload = False
|
||||
test_save_load_fast_init_to_base = False
|
||||
test_training = False
|
||||
# The small GPTSAN_JAPANESE model needs higher percentages for CPU/MP tests
|
||||
model_split_percents = [0.8, 0.9]
|
||||
|
||||
def setUp(self):
|
||||
self.model_tester = GPTSanJapaneseTester(self)
|
||||
self.config_tester = ConfigTester(self, config_class=GPTSanJapaneseConfig, d_model=37)
|
||||
|
||||
def test_config(self):
|
||||
GPTSanJapaneseConfig()
|
||||
|
||||
def test_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_model(*config_and_inputs)
|
||||
|
||||
|
||||
@require_torch
|
||||
class GPTSanJapaneseForConditionalGenerationTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase):
|
||||
all_model_classes = (GPTSanJapaneseForConditionalGeneration,) if is_torch_available() else ()
|
||||
fx_compatible = False
|
||||
is_encoder_decoder = False
|
||||
test_pruning = False
|
||||
test_headmasking = False
|
||||
test_cpu_offload = False
|
||||
test_disk_offload = False
|
||||
# The small GPTSAN_JAPANESE model needs higher percentages for CPU/MP tests
|
||||
model_split_percents = [0.8, 0.9]
|
||||
|
||||
def setUp(self):
|
||||
self.model_tester = GPTSanJapaneseTester(self)
|
||||
self.config_tester = ConfigTester(self, config_class=GPTSanJapaneseConfig, d_model=37)
|
||||
|
||||
def test_config(self):
|
||||
GPTSanJapaneseConfig()
|
||||
|
||||
def test_model(self):
|
||||
config_and_inputs = self.model_tester.prepare_config_and_inputs()
|
||||
self.model_tester.create_and_check_model(*config_and_inputs)
|
||||
|
||||
@slow
|
||||
def test_logits(self):
|
||||
model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
input_ids = tokenizer.encode("武田信玄は", return_tensors="pt")
|
||||
outputs = model(input_ids)
|
||||
output_logits = outputs.logits.detach().cpu().numpy()
|
||||
# Output of the original model created with mesh-tensorflow
|
||||
target = [
|
||||
# fmt: off
|
||||
[-12.037839889526367, -12.433061599731445, -14.333840370178223, -12.450345993041992, -11.1661376953125,
|
||||
-11.930137634277344, -10.659740447998047, -12.909574508666992, -13.241043090820312, -13.398579597473145,
|
||||
-11.107524871826172, -12.3685941696167, -22.97943115234375, -10.481067657470703, -12.484030723571777,
|
||||
-12.807360649108887, -14.769700050354004, -12.233579635620117, -13.428145408630371, -22.624177932739258],
|
||||
[-7.511149883270264, -8.281851768493652, -7.943127155303955, -7.55021333694458, -6.49869966506958,
|
||||
-7.586796283721924, -6.978085994720459, -7.839145183563232, -8.21964168548584, -8.695091247558594,
|
||||
-6.706910610198975, -6.6585798263549805, -19.565698623657227, -5.353842735290527, -8.350686073303223,
|
||||
-8.039388656616211, -10.856569290161133, -7.75154447555542, -8.819022178649902, -19.51532745361328],
|
||||
[-9.73066234588623, -10.223922729492188, -9.932981491088867, -11.857836723327637, -7.662626266479492,
|
||||
-11.13529109954834, -7.765097618103027, -11.472923278808594, -9.543149948120117, -11.905633926391602,
|
||||
-9.366164207458496, -11.5734281539917, -23.699003219604492, -9.429590225219727, -10.42839241027832,
|
||||
-10.585240364074707, -10.94771957397461, -11.095416069030762, -10.390240669250488, -23.769372940063477],
|
||||
[-9.728265762329102, -9.859712600708008, -10.09729290008545, -9.678522109985352, -6.879519939422607,
|
||||
-9.68487548828125, -4.2803425788879395, -10.018914222717285, -9.308445930480957, -10.63394546508789,
|
||||
-8.083646774291992, -9.06301498413086, -21.904266357421875, -8.90160846710205, -8.841876029968262,
|
||||
-11.856719970703125, -12.079398155212402, -11.233753204345703, -10.177338600158691, -21.87256622314453],
|
||||
[-9.669764518737793, -9.614198684692383, -9.814510345458984, -9.996501922607422, -11.375690460205078,
|
||||
-10.113405227661133, -10.546867370605469, -10.04369068145752, -10.907809257507324, -10.504216194152832,
|
||||
-11.129199028015137, -10.151124000549316, -21.96586799621582, -9.086349487304688, -11.730339050292969,
|
||||
-10.460667610168457, -10.298049926757812, -10.784148216247559, -10.840693473815918, -22.03152847290039],
|
||||
# fmt: on
|
||||
]
|
||||
target = np.array(target).flatten()
|
||||
predict = output_logits[0, :, :20].flatten()
|
||||
|
||||
def check(a, b, epsilon=5e-4):
|
||||
return abs(a - b) < epsilon * max(abs(a), abs(b))
|
||||
|
||||
self.assertTrue(np.all([check(target[i], predict[i]) for i in range(len(target))]))
|
||||
|
||||
@slow
|
||||
def test_batch_generation(self):
|
||||
model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
model.to(torch_device)
|
||||
|
||||
# set deterministically
|
||||
generation_config = GenerationConfig.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
generation_config.top_k = 1
|
||||
|
||||
# use different length sentences to test batching
|
||||
sentences = [
|
||||
"甲斐なら武田と言うほど",
|
||||
"織田信長は、",
|
||||
]
|
||||
|
||||
tokenizer.padding_side = "left"
|
||||
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
|
||||
input_ids = inputs["input_ids"].to(torch_device)
|
||||
|
||||
self.assertNotEqual(inputs["attention_mask"][0].numpy().tolist(), inputs["attention_mask"][1].numpy().tolist())
|
||||
|
||||
outputs = model.generate(
|
||||
input_ids=input_ids,
|
||||
attention_mask=inputs["attention_mask"].to(torch_device),
|
||||
max_new_tokens=3,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
inputs_non_padded = tokenizer(sentences[0], return_tensors="pt").input_ids.to(torch_device)
|
||||
output_non_padded = model.generate(
|
||||
input_ids=inputs_non_padded, max_new_tokens=3, generation_config=generation_config
|
||||
)
|
||||
|
||||
inputs_padded = tokenizer(sentences[1], return_tensors="pt").input_ids.to(torch_device)
|
||||
output_padded = model.generate(input_ids=inputs_padded, max_new_tokens=3, generation_config=generation_config)
|
||||
|
||||
self.assertNotEqual(inputs_non_padded.shape, inputs_padded.shape)
|
||||
|
||||
batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True)
|
||||
non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True)
|
||||
padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True)
|
||||
|
||||
expected_output_sentence = [
|
||||
"甲斐なら武田と言うほど甲斐の武田",
|
||||
"織田信長は、このような",
|
||||
]
|
||||
self.assertListEqual(expected_output_sentence, batch_out_sentence)
|
||||
self.assertListEqual(batch_out_sentence, [non_padded_sentence, padded_sentence])
|
||||
|
||||
@tooslow
|
||||
def test_sample(self):
|
||||
model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
# Output of the original model created with mesh-tensorflow
|
||||
target = [
|
||||
("武田信玄は", 35675),
|
||||
("武田信玄は、", 45),
|
||||
("武田信玄は、この", 29),
|
||||
("武田信玄は、このよう", 30642),
|
||||
("武田信玄は、このような", 35680),
|
||||
("武田信玄は、このような「", 8640),
|
||||
("武田信玄は、このような「武田", 31617),
|
||||
("武田信玄は、このような「武田家", 30646),
|
||||
("武田信玄は、このような「武田家の", 31617),
|
||||
("武田信玄は、このような「武田家の家", 31381),
|
||||
]
|
||||
for input, output in target:
|
||||
input_ids = tokenizer.encode(input, return_tensors="pt")
|
||||
outputs = model(input_ids)
|
||||
output_logits = outputs.logits.detach().cpu().numpy()[0]
|
||||
output_id = np.argmax(output_logits[-1])
|
||||
self.assertEqual(output_id, output)
|
||||
|
||||
@slow
|
||||
def test_spout_generation(self):
|
||||
model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
model.to(torch_device)
|
||||
|
||||
# set deterministically
|
||||
generation_config = GenerationConfig.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
generation_config.top_k = 1
|
||||
|
||||
input_text = "武田信玄は、"
|
||||
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(torch_device)
|
||||
input_ids_batch = tokenizer([input_text, input_text], return_tensors="pt").input_ids.to(torch_device)
|
||||
|
||||
# spout from uniform and one-hot
|
||||
spouts = [
|
||||
# fmt: off
|
||||
[0.87882208, 0.38426396, 0.33220248, 0.43890406, 0.16562252,
|
||||
0.04803985, 0.211572 , 0.23188473, 0.37153068, 0.7836377 ,
|
||||
0.02160172, 0.38761719, 0.75290772, 0.90198857, 0.34365777,
|
||||
0.64168169, 0.44318471, 0.14575746, 0.92562881, 0.40812148,
|
||||
0.29019122, 0.88861599, 0.65524846, 0.43563456, 0.38177187,
|
||||
0.70832965, 0.81527892, 0.68832812, 0.38833192, 0.4561522 ,
|
||||
0.14828817, 0.47248213, 0.54357335, 0.82009566, 0.1338884 ,
|
||||
0.02755417, 0.19764677, 0.2422084 , 0.04757674, 0.65409606,
|
||||
0.0824589 , 0.03304383, 0.94387689, 0.98764509, 0.82433901,
|
||||
0.27646741, 0.64907493, 0.76009406, 0.30087915, 0.17904689,
|
||||
0.41601714, 0.67046398, 0.10422822, 0.08447374, 0.07354344,
|
||||
0.61423565, 0.70284866, 0.7532333 , 0.1972038 , 0.29575659,
|
||||
0.90583886, 0.29265307, 0.50000175, 0.70407655, 0.889363 ,
|
||||
0.81904418, 0.66829128, 0.64468815, 0.56563723, 0.85601875,
|
||||
0.94924672, 0.00166762, 0.25220643, 0.74540219, 0.67993247,
|
||||
0.1549675 , 0.39385352, 0.92153607, 0.63745931, 0.27759043,
|
||||
0.84702295, 0.65904271, 0.58676614, 0.8666936 , 0.39607438,
|
||||
0.79954983, 0.42220697, 0.39650381, 0.7849864 , 0.56150201,
|
||||
0.15678925, 0.14746032, 0.34542114, 0.47026783, 0.11956489,
|
||||
0.25421435, 0.33788901, 0.68934842, 0.36424685, 0.71737898,
|
||||
0.38983449, 0.94393779, 0.39575588, 0.36616553, 0.87104665,
|
||||
0.64630203, 0.22516905, 0.88270804, 0.15031338, 0.75144345,
|
||||
0.46459025, 0.85396454, 0.86355643, 0.65139851, 0.70266061,
|
||||
0.30241389, 0.81056497, 0.88865969, 0.38773807, 0.70635849,
|
||||
0.90718459, 0.43245789, 0.28000654, 0.45935562, 0.08773519,
|
||||
0.9552151 , 0.93901511, 0.22489288], # uniform
|
||||
[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
|
||||
0., 0., 0., 0., 0., 0., 0., 0.],
|
||||
# fmt: on
|
||||
]
|
||||
|
||||
output1 = model.generate(
|
||||
input_ids=input_ids,
|
||||
spout=spouts[0],
|
||||
max_new_tokens=20,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
output2 = model.generate(
|
||||
input_ids=input_ids,
|
||||
spout=spouts[1],
|
||||
max_new_tokens=20,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
output3 = model.generate(
|
||||
input_ids=input_ids_batch,
|
||||
spout=spouts,
|
||||
max_new_tokens=20,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
out1_sentence = tokenizer.decode(output1[0])
|
||||
out2_sentence = tokenizer.decode(output2[0])
|
||||
batch_out_sentence = tokenizer.batch_decode(output3)
|
||||
|
||||
expected_output_sentence = [
|
||||
"武田信玄は、武田氏の滅亡後、武田氏の居城であった甲斐武田氏の居城である",
|
||||
"武田信玄は、武田家の滅亡を防ぐため、武田家の家臣である武田信虎を討",
|
||||
]
|
||||
self.assertListEqual(expected_output_sentence, batch_out_sentence)
|
||||
self.assertListEqual(batch_out_sentence, [out1_sentence, out2_sentence])
|
||||
|
||||
@slow
|
||||
def test_prefix_lm_generation(self):
|
||||
model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
model.to(torch_device)
|
||||
|
||||
# set deterministically
|
||||
generation_config = GenerationConfig.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
generation_config.top_k = 1
|
||||
|
||||
prefix_text_1 = "武田信玄"
|
||||
prefix_text_2 = "織田信長"
|
||||
input_text_1 = "は、"
|
||||
input_text_2 = "が、"
|
||||
input_tok_1 = tokenizer(input_text_1, prefix_text=prefix_text_1, return_tensors="pt")
|
||||
input_tok_2 = tokenizer(input_text_2, prefix_text=prefix_text_2, return_tensors="pt")
|
||||
input_tok_3 = tokenizer([[prefix_text_1, input_text_1], [prefix_text_2, input_text_2]], return_tensors="pt")
|
||||
|
||||
output1 = model.generate(
|
||||
input_ids=input_tok_1.input_ids.to(torch_device),
|
||||
token_type_ids=input_tok_1.token_type_ids.to(torch_device),
|
||||
max_new_tokens=20,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
output2 = model.generate(
|
||||
input_ids=input_tok_2.input_ids.to(torch_device),
|
||||
token_type_ids=input_tok_2.token_type_ids.to(torch_device),
|
||||
max_new_tokens=20,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
output3 = model.generate(
|
||||
input_ids=input_tok_3.input_ids.to(torch_device),
|
||||
token_type_ids=input_tok_3.token_type_ids.to(torch_device),
|
||||
attention_mask=input_tok_3.attention_mask.to(torch_device),
|
||||
max_new_tokens=20,
|
||||
generation_config=generation_config,
|
||||
)
|
||||
|
||||
out1_sentence = tokenizer.decode(output1[0])
|
||||
out2_sentence = tokenizer.decode(output2[0])
|
||||
batch_out_sentence = tokenizer.batch_decode(output3)
|
||||
|
||||
expected_output_sentence = [
|
||||
"武田信玄は、武田氏の祖である武田信虎を、その子・武田信友を擁して",
|
||||
"織田信長が、織田信長の妻・お市の方を妻として迎えたという逸話が残",
|
||||
]
|
||||
self.assertListEqual(expected_output_sentence, batch_out_sentence)
|
||||
self.assertListEqual(batch_out_sentence, [out1_sentence, out2_sentence])
|
@@ -0,0 +1,195 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 Toshiyuki Sakamoto(tanreinama) and HuggingFace Inc. team.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import json
|
||||
import os
|
||||
import unittest
|
||||
|
||||
from transformers.models.gptsan_japanese.tokenization_gptsan_japanese import (
|
||||
VOCAB_FILES_NAMES,
|
||||
GPTSanJapaneseTokenizer,
|
||||
)
|
||||
from transformers.testing_utils import require_tokenizers, slow
|
||||
|
||||
from ...test_tokenization_common import TokenizerTesterMixin
|
||||
|
||||
|
||||
@require_tokenizers
|
||||
class GPTSanJapaneseTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
|
||||
tokenizer_class = GPTSanJapaneseTokenizer
|
||||
test_rust_tokenizer = False
|
||||
from_pretrained_kwargs = {"do_clean_text": False, "add_prefix_space": False}
|
||||
|
||||
def setUp(self):
|
||||
super().setUp()
|
||||
|
||||
# fmt: off
|
||||
vocab_tokens = ["こん", "こんに", "にちは", "ばんは", "世界,㔺界", "、", "。", "<BR>", "<SP>", "<TAB>", "<URL>", "<EMAIL>", "<TEL>", "<DATE>", "<PRICE>", "<BLOCK>", "<KIGOU>", "<U2000U2BFF>", "<|emoji1|>", "<unk>", "<|bagoftoken|>", "<|endoftext|>"]
|
||||
# fmt: on
|
||||
emoji_tokens = {"emoji": {"\ud83d\ude00": "<|emoji1|>"}, "emoji_inv": {"<|emoji1|>": "\ud83d\ude00"}} # 😀
|
||||
self.special_tokens_map = {"unk_token": "<unk>"}
|
||||
|
||||
self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
|
||||
self.emoji_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["emoji_file"])
|
||||
with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
|
||||
vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))
|
||||
with open(self.emoji_file, "w") as emoji_writer:
|
||||
emoji_writer.write(json.dumps(emoji_tokens))
|
||||
|
||||
def get_tokenizer(self, **kwargs):
|
||||
kwargs.update(self.special_tokens_map)
|
||||
return GPTSanJapaneseTokenizer.from_pretrained(self.tmpdirname, **kwargs)
|
||||
|
||||
# Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.get_input_output_texts
|
||||
def get_input_output_texts(self, tokenizer):
|
||||
input_text = "こんにちは、世界。 \nこんばんは、㔺界。😀"
|
||||
output_text = "こんにちは、世界。 \nこんばんは、世界。😀"
|
||||
return input_text, output_text
|
||||
|
||||
# Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.get_clean_sequence
|
||||
def get_clean_sequence(self, tokenizer):
|
||||
input_text, output_text = self.get_input_output_texts(tokenizer)
|
||||
ids = tokenizer.encode(output_text, add_special_tokens=False)
|
||||
text = tokenizer.decode(ids, clean_up_tokenization_spaces=False)
|
||||
return text, ids
|
||||
|
||||
# Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.test_pretokenized_inputs
|
||||
def test_pretokenized_inputs(self):
|
||||
pass # TODO add if relevant
|
||||
|
||||
# Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.test_maximum_encoding_length_pair_input
|
||||
def test_maximum_encoding_length_pair_input(self):
|
||||
pass # TODO add if relevant
|
||||
|
||||
# Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.test_maximum_encoding_length_single_input
|
||||
def test_maximum_encoding_length_single_input(self):
|
||||
pass # TODO add if relevant
|
||||
|
||||
# Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.test_full_tokenizer
|
||||
def test_full_tokenizer(self):
|
||||
tokenizer = self.get_tokenizer()
|
||||
|
||||
# Testing tokenization
|
||||
input_text = "こんにちは、世界。 こんばんは、㔺界。"
|
||||
expected_token = ["こん", "にちは", "、", "世界", "。", "<SP>", "こん", "ばんは", "、", "㔺界", "。"]
|
||||
tokens = tokenizer.tokenize(input_text)
|
||||
self.assertListEqual(tokens, expected_token)
|
||||
|
||||
# Testing conversion to ids without special tokens
|
||||
expected_ids = [0, 2, 5, 4, 6, 8, 0, 3, 5, 4, 6]
|
||||
input_ids = tokenizer.convert_tokens_to_ids(tokens)
|
||||
self.assertListEqual(input_ids, expected_ids)
|
||||
|
||||
# Testing conversion to ids with special tokens
|
||||
input_tokens = tokens + [tokenizer.unk_token]
|
||||
expected_ids = [0, 2, 5, 4, 6, 8, 0, 3, 5, 4, 6, 19]
|
||||
input_ids = tokenizer.convert_tokens_to_ids(input_tokens)
|
||||
self.assertListEqual(input_ids, expected_ids)
|
||||
|
||||
def test_token_bagging(self):
|
||||
tokenizer = self.get_tokenizer()
|
||||
|
||||
# Testing tokenization
|
||||
input_text = "こんにちは、<|bagoftoken|>世界。こんばんは、<|bagoftoken|>㔺界。"
|
||||
expected_text = "こんにちは、、、、世界。こんばんは、、、、世界。"
|
||||
tokens = tokenizer.encode(input_text)
|
||||
output_text = tokenizer.decode(tokens)
|
||||
self.assertEqual(output_text, expected_text)
|
||||
|
||||
@slow
|
||||
def test_prefix_input(self):
|
||||
tokenizer = self.tokenizer_class.from_pretrained("Tanrei/GPTSAN-japanese")
|
||||
|
||||
# Testing tokenization
|
||||
prefix_text = "こんにちは、世界。"
|
||||
input_text = "こんばんは、㔺界。😀"
|
||||
expected_text = "こんにちは、世界。こんばんは、世界。😀"
|
||||
tokens_1 = tokenizer.encode(prefix_text + input_text)
|
||||
tokens_2 = tokenizer.encode("", prefix_text=prefix_text + input_text)
|
||||
tokens_3 = tokenizer.encode(input_text, prefix_text=prefix_text)
|
||||
output_text_1 = tokenizer.decode(tokens_1)
|
||||
output_text_2 = tokenizer.decode(tokens_2)
|
||||
output_text_3 = tokenizer.decode(tokens_3)
|
||||
self.assertEqual(output_text_1, expected_text)
|
||||
self.assertEqual(output_text_2, expected_text)
|
||||
self.assertEqual(output_text_3, expected_text)
|
||||
|
||||
    @slow
    def test_token_type_ids(self):
        tokenizer = self.tokenizer_class.from_pretrained("Tanrei/GPTSAN-japanese")

        # Testing tokenization
        prefix_text = "こんにちは、世界。"
        input_text = "こんばんは、㔺界。😀"

        len_prefix = len(tokenizer.encode(prefix_text)) - 2
        len_text = len(tokenizer.encode(input_text)) - 2

        expected_mask_1 = [1] + [0] * (len_prefix + len_text + 1)
        expected_mask_2 = [1] * (len_prefix + len_text + 1) + [0]
        expected_mask_3 = [1] + [1] * (len_prefix) + [0] * (len_text + 1)

        type_id_1 = tokenizer(prefix_text + input_text).token_type_ids
        type_id_2 = tokenizer("", prefix_text=prefix_text + input_text).token_type_ids
        type_id_3 = tokenizer(input_text, prefix_text=prefix_text).token_type_ids
        self.assertListEqual(type_id_1, expected_mask_1)
        self.assertListEqual(type_id_2, expected_mask_2)
        self.assertListEqual(type_id_3, expected_mask_3)

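To see what the masks above correspond to, a hedged sketch that prints each token next to its token_type_ids entry: the prefix positions (plus the leading special token) are marked 1 and the text positions 0, mirroring expected_mask_3. The GPTSanJapaneseTokenizer class name is an assumption.

from transformers import GPTSanJapaneseTokenizer  # assumed export name

tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")

enc = tokenizer("こんばんは、世界。", prefix_text="こんにちは、世界。")

# Pair each token with its segment id: 1 over the prefix part, 0 over the text part.
for token, type_id in zip(tokenizer.convert_ids_to_tokens(enc.input_ids), enc.token_type_ids):
    print(token, type_id)
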
    @slow
    def test_prefix_tokens(self):
        tokenizer = self.tokenizer_class.from_pretrained("Tanrei/GPTSAN-japanese")

        x_token_1 = tokenizer.encode("あンいワ")
        x_token_2 = tokenizer.encode("", prefix_text="あンいワ")
        x_token_3 = tokenizer.encode("いワ", prefix_text="あン")

        self.assertEqual(tokenizer.decode(x_token_1), tokenizer.decode(x_token_2))
        self.assertEqual(tokenizer.decode(x_token_1), tokenizer.decode(x_token_3))
        self.assertNotEqual(x_token_1, x_token_2)
        self.assertNotEqual(x_token_1, x_token_3)
        self.assertEqual(x_token_1[1], x_token_2[-1])  # SEG token
        self.assertEqual(x_token_1[1], x_token_3[3])  # SEG token

    @slow
    def test_batch_encode(self):
        tokenizer = self.tokenizer_class.from_pretrained("Tanrei/GPTSAN-japanese")

        input_pairs = [["武田信玄", "は、"], ["織田信長", "の配下の、"]]
        x_token = tokenizer(input_pairs, padding=True)
        x_token_2 = tokenizer.batch_encode_plus(input_pairs, padding=True)

        # fmt: off
        expected_outputs = [[35993, 8640, 25948, 35998, 30647, 35675, 35999, 35999], [35993, 10382, 9868, 35998, 30646, 9459, 30646, 35675]]
        expected_typeids = [[1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0]]
        expected_attmask = [[1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1]]
        # fmt: on
        self.assertListEqual(x_token.input_ids, expected_outputs)
        self.assertListEqual(x_token.token_type_ids, expected_typeids)
        self.assertListEqual(x_token.attention_mask, expected_attmask)
        self.assertListEqual(x_token_2.input_ids, expected_outputs)
        self.assertListEqual(x_token_2.token_type_ids, expected_typeids)
        self.assertListEqual(x_token_2.attention_mask, expected_attmask)

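A hedged sketch of the batch call above: judging by the expected token_type_ids, each inner pair is treated as [prefix_text, text], and padding=True right-pads the shorter sequence while zeroing its attention_mask, as expected_attmask shows. Again, the GPTSanJapaneseTokenizer export name is assumed.

from transformers import GPTSanJapaneseTokenizer  # assumed export name

tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")

# Each inner pair appears to be [prefix_text, text]; padding aligns the batch lengths.
batch = tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)

print(batch.input_ids)       # padded id lists
print(batch.token_type_ids)  # 1 over the prefix part, 0 elsewhere
print(batch.attention_mask)  # 0 over the padded positions
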
    # Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.test_conversion_reversible
    def test_conversion_reversible(self):
        # Intentionally convert some words to accommodate character fluctuations unique to Japanese
        pass

    # Copied from tests.models.gpt_neox_japanese.test_tokenization_gpt_neox_japanese.GPTNeoXJapaneseTokenizationTest.test_padding_different_model_input_name
    def test_padding_different_model_input_name(self):
        # tokenizer has no padding token
        pass

@@ -198,6 +198,7 @@ IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [
    "CLIPSegVisionModel",
    "CLIPSegTextModel",
    "EsmForProteinFolding",
    "GPTSanJapaneseModel",
    "TimeSeriesTransformerForPrediction",
    "JukeboxVQVAE",
    "JukeboxPrior",
@@ -92,6 +92,7 @@ src/transformers/models/glpn/modeling_glpn.py
src/transformers/models/gpt2/configuration_gpt2.py
src/transformers/models/gpt2/modeling_gpt2.py
src/transformers/models/gptj/modeling_gptj.py
src/transformers/models/gptsan_japanese/modeling_gptsan_japanese.py
src/transformers/models/gpt_neo/configuration_gpt_neo.py
src/transformers/models/gpt_neox/configuration_gpt_neox.py
src/transformers/models/gpt_neox_japanese/configuration_gpt_neox_japanese.py