mirror of https://github.com/huggingface/transformers.git (synced 2025-08-02 19:21:31 +06:00)

The model was merged before final review and approval.
This reverts commit 2ac5b9325e.

parent a4616c6767
commit 78f6ed6c70
@@ -439,7 +439,6 @@ Current number of checkpoints:
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -414,7 +414,6 @@ Current number of checkpoints:
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -388,7 +388,6 @@ conda install -c huggingface transformers
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -448,7 +448,6 @@ To install Flax, PyTorch, and TensorFlow with conda, see their…
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -363,7 +363,6 @@ see the Flax, PyTorch, and TensorFlow installation pages for installing them with conda…
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -387,7 +387,6 @@ conda install -c huggingface transformers
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -399,7 +399,6 @@ conda install -c huggingface transformers
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTST](https://huggingface.co/docs/transformers/main/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
@@ -747,8 +747,6 @@
        title: Autoformer
      - local: model_doc/informer
        title: Informer
      - local: model_doc/patchtst
        title: PatchTST
      - local: model_doc/time_series_transformer
        title: Time Series Transformer
      title: Time series models
@@ -213,7 +213,6 @@ Flax), PyTorch, and/or TensorFlow.
| [OPT](model_doc/opt) | ✅ | ✅ | ✅ |
| [OWL-ViT](model_doc/owlvit) | ✅ | ❌ | ❌ |
| [OWLv2](model_doc/owlv2) | ✅ | ❌ | ❌ |
| [PatchTST](model_doc/patchtst) | ✅ | ❌ | ❌ |
| [Pegasus](model_doc/pegasus) | ✅ | ✅ | ✅ |
| [PEGASUS-X](model_doc/pegasus_x) | ✅ | ❌ | ❌ |
| [Perceiver](model_doc/perceiver) | ✅ | ❌ | ❌ |
@@ -1,73 +0,0 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# PatchTST

## Overview

The PatchTST model was proposed in [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.

The abstract from the paper is the following:

*We propose an efficient design of Transformer-based models for multivariate time series forecasting and self-supervised representation learning. It is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. Patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. We also apply our model to self-supervised pre-training tasks and attain excellent fine-tuning performance, which outperforms supervised training on large datasets. Transferring of masked pre-trained representation on one dataset to others also produces SOTA forecasting accuracy.*

Tips:

The model can also be used for time series classification and time series regression. See the respective [`PatchTSTForClassification`] and [`PatchTSTForRegression`] classes.

At a high level, the model vectorizes time series into patches of a given size and encodes them with a Transformer, which then outputs forecasts over the prediction length:



This model was contributed by [namctin](https://huggingface.co/namctin), [gsinthong](https://huggingface.co/gsinthong), [diepi](https://huggingface.co/diepi), [vijaye12](https://huggingface.co/vijaye12), [wmgifford](https://huggingface.co/wmgifford), and [kashif](https://huggingface.co/kashif).

The original code can be found [here](https://github.com/yuqinie98/PatchTST).
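To make the API above concrete, a minimal forecasting sketch. It assumes the class names documented on this page (`PatchTSTConfig`, `PatchTSTForPrediction`) and a `(batch, context_length, num_input_channels)` input layout; the configuration values are arbitrary, and output attribute names may differ between revisions:

```python
import torch

from transformers import PatchTSTConfig, PatchTSTForPrediction

# Toy configuration: 2 channels, 32-step context, 24-step horizon
# (parameter names taken from PatchTSTConfig below; values are arbitrary).
config = PatchTSTConfig(
    num_input_channels=2,
    context_length=32,
    patch_length=8,
    patch_stride=8,
    prediction_length=24,
)
model = PatchTSTForPrediction(config)

# past_values: (batch_size, context_length, num_input_channels)
past_values = torch.randn(4, 32, 2)
with torch.no_grad():
    outputs = model(past_values=past_values)  # forecasts over the horizon
```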
## PatchTSTConfig

[[autodoc]] PatchTSTConfig

## PatchTSTModel

[[autodoc]] PatchTSTModel
    - forward

## PatchTSTForPrediction

[[autodoc]] PatchTSTForPrediction
    - forward

## PatchTSTForClassification

[[autodoc]] PatchTSTForClassification
    - forward

## PatchTSTForPretraining

[[autodoc]] PatchTSTForPretraining
    - forward

## PatchTSTForRegression

[[autodoc]] PatchTSTForRegression
    - forward
@@ -493,7 +493,6 @@ _import_structure = {
        "OwlViTTextConfig",
        "OwlViTVisionConfig",
    ],
    "models.patchtst": ["PATCHTST_PRETRAINED_CONFIG_ARCHIVE_MAP", "PatchTSTConfig"],
    "models.pegasus": ["PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP", "PegasusConfig", "PegasusTokenizer"],
    "models.pegasus_x": ["PEGASUS_X_PRETRAINED_CONFIG_ARCHIVE_MAP", "PegasusXConfig"],
    "models.perceiver": ["PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP", "PerceiverConfig", "PerceiverTokenizer"],
@@ -1168,8 +1167,6 @@ else:
            "MODEL_FOR_TEXT_ENCODING_MAPPING",
            "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING",
            "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING",
            "MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING",
            "MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING",
            "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING",
            "MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING",
            "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING",
@@ -2488,17 +2485,6 @@ else:
            "OwlViTVisionModel",
        ]
    )
    _import_structure["models.patchtst"].extend(
        [
            "PATCHTST_PRETRAINED_MODEL_ARCHIVE_LIST",
            "PatchTSTForClassification",
            "PatchTSTForPrediction",
            "PatchTSTForPretraining",
            "PatchTSTForRegression",
            "PatchTSTModel",
            "PatchTSTPreTrainedModel",
        ]
    )
    _import_structure["models.pegasus"].extend(
        ["PegasusForCausalLM", "PegasusForConditionalGeneration", "PegasusModel", "PegasusPreTrainedModel"]
    )
@@ -4711,7 +4697,6 @@ if TYPE_CHECKING:
        OwlViTTextConfig,
        OwlViTVisionConfig,
    )
    from .models.patchtst import PATCHTST_PRETRAINED_CONFIG_ARCHIVE_MAP, PatchTSTConfig
    from .models.pegasus import PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP, PegasusConfig, PegasusTokenizer
    from .models.pegasus_x import PEGASUS_X_PRETRAINED_CONFIG_ARCHIVE_MAP, PegasusXConfig
    from .models.perceiver import PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP, PerceiverConfig, PerceiverTokenizer
@@ -5318,8 +5303,6 @@ if TYPE_CHECKING:
        MODEL_FOR_TEXT_ENCODING_MAPPING,
        MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING,
        MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING,
        MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING,
        MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING,
        MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
        MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING,
        MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING,
@@ -6404,15 +6387,6 @@ if TYPE_CHECKING:
        OwlViTTextModel,
        OwlViTVisionModel,
    )
    from .models.patchtst import (
        PATCHTST_PRETRAINED_MODEL_ARCHIVE_LIST,
        PatchTSTForClassification,
        PatchTSTForPrediction,
        PatchTSTForPretraining,
        PatchTSTForRegression,
        PatchTSTModel,
        PatchTSTPreTrainedModel,
    )
    from .models.pegasus import (
        PegasusForCausalLM,
        PegasusForConditionalGeneration,
@@ -158,7 +158,6 @@ from . import (
    opt,
    owlv2,
    owlvit,
    patchtst,
    pegasus,
    pegasus_x,
    perceiver,
@@ -77,8 +77,6 @@ else:
        "MODEL_WITH_LM_HEAD_MAPPING",
        "MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING",
        "MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING",
        "MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING",
        "MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING",
        "AutoModel",
        "AutoBackbone",
        "AutoModelForAudioClassification",
@@ -252,8 +250,6 @@ if TYPE_CHECKING:
        MODEL_FOR_TEXT_ENCODING_MAPPING,
        MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING,
        MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING,
        MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING,
        MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING,
        MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING,
        MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING,
        MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING,
@@ -164,7 +164,6 @@ CONFIG_MAPPING_NAMES = OrderedDict(
        ("opt", "OPTConfig"),
        ("owlv2", "Owlv2Config"),
        ("owlvit", "OwlViTConfig"),
        ("patchtst", "PatchTSTConfig"),
        ("pegasus", "PegasusConfig"),
        ("pegasus_x", "PegasusXConfig"),
        ("perceiver", "PerceiverConfig"),
@@ -377,7 +376,6 @@ CONFIG_ARCHIVE_MAP_MAPPING_NAMES = OrderedDict(
        ("opt", "OPT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("owlv2", "OWLV2_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("owlvit", "OWLVIT_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("patchtst", "PATCHTST_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("pegasus", "PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("pegasus_x", "PEGASUS_X_PRETRAINED_CONFIG_ARCHIVE_MAP"),
        ("perceiver", "PERCEIVER_PRETRAINED_CONFIG_ARCHIVE_MAP"),
@@ -609,7 +607,6 @@ MODEL_NAMES_MAPPING = OrderedDict(
        ("opt", "OPT"),
        ("owlv2", "OWLv2"),
        ("owlvit", "OWL-ViT"),
        ("patchtst", "PatchTST"),
        ("pegasus", "Pegasus"),
        ("pegasus_x", "PEGASUS-X"),
        ("perceiver", "Perceiver"),
@@ -157,7 +157,6 @@ MODEL_MAPPING_NAMES = OrderedDict(
        ("opt", "OPTModel"),
        ("owlv2", "Owlv2Model"),
        ("owlvit", "OwlViTModel"),
        ("patchtst", "PatchTSTModel"),
        ("pegasus", "PegasusModel"),
        ("pegasus_x", "PegasusXModel"),
        ("perceiver", "PerceiverModel"),
@@ -1131,18 +1130,6 @@ MODEL_FOR_TEXT_ENCODING_MAPPING_NAMES = OrderedDict(
    ]
)

MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING_NAMES = OrderedDict(
    [
        ("patchtst", "PatchTSTForClassification"),
    ]
)

MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING_NAMES = OrderedDict(
    [
        ("patchtst", "PatchTSTForRegression"),
    ]
)

MODEL_FOR_IMAGE_TO_IMAGE_MAPPING_NAMES = OrderedDict(
    [
        ("swin2sr", "Swin2SRForImageSuperResolution"),
@@ -1234,14 +1221,6 @@ MODEL_FOR_MASK_GENERATION_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, MODEL

MODEL_FOR_TEXT_ENCODING_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, MODEL_FOR_TEXT_ENCODING_MAPPING_NAMES)

MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING = _LazyAutoMapping(
    CONFIG_MAPPING_NAMES, MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING_NAMES
)

MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING = _LazyAutoMapping(
    CONFIG_MAPPING_NAMES, MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING_NAMES
)

MODEL_FOR_IMAGE_TO_IMAGE_MAPPING = _LazyAutoMapping(CONFIG_MAPPING_NAMES, MODEL_FOR_IMAGE_TO_IMAGE_MAPPING_NAMES)
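For orientation, a simplified stand-in for what `_LazyAutoMapping` does with these paired `OrderedDict`s: it joins a `model_type` key in `CONFIG_MAPPING_NAMES` with the class name registered for a task, so the auto classes can resolve a model without importing it eagerly. This is a toy sketch, not the real implementation:

```python
from collections import OrderedDict

# Toy stand-in for _LazyAutoMapping: two OrderedDicts keyed by model type.
CONFIG_MAPPING_NAMES = OrderedDict([("patchtst", "PatchTSTConfig")])
TIME_SERIES_CLASSIFICATION_NAMES = OrderedDict([("patchtst", "PatchTSTForClassification")])


def resolve(model_type: str) -> tuple:
    # Returns (config class name, task-specific model class name); the real
    # mapping imports the actual classes lazily instead of returning strings.
    return CONFIG_MAPPING_NAMES[model_type], TIME_SERIES_CLASSIFICATION_NAMES[model_type]


print(resolve("patchtst"))  # ('PatchTSTConfig', 'PatchTSTForClassification')
```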
@@ -208,70 +208,71 @@ class AutoformerFeatureEmbedder(nn.Module):
        )


# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesStdScaler with TimeSeriesTransformer->Autoformer,TimeSeries->Autoformer
# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesStdScaler with TimeSeries->Autoformer
class AutoformerStdScaler(nn.Module):
    """
    Standardize features by calculating the mean and scaling along the first dimension, and then normalizes it by
    subtracting from the mean and dividing by the standard deviation.
    Standardize features by calculating the mean and scaling along some given dimension `dim`, and then normalizes it
    by subtracting from the mean and dividing by the standard deviation.

    Args:
        dim (`int`):
            Dimension along which to calculate the mean and standard deviation.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
        minimum_scale (`float`, *optional*, defaults to 1e-5):
            Default scale that is used for elements that are constantly zero along dimension `dim`.
    """

    def __init__(self, config: AutoformerConfig):
    def __init__(self, dim: int, keepdim: bool = False, minimum_scale: float = 1e-5):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.minimum_scale = config.minimum_scale if hasattr(config, "minimum_scale") else 1e-10
        if not dim > 0:
            raise ValueError("Cannot compute scale along dim = 0 (batch dimension), please provide dim > 0")
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale

    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
            observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                Calculating the scale on the observed indicator.
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        denominator = observed_indicator.sum(self.dim, keepdim=self.keepdim)
    @torch.no_grad()
    def forward(self, data: torch.Tensor, weights: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        denominator = weights.sum(self.dim, keepdim=self.keepdim)
        denominator = denominator.clamp_min(1.0)
        loc = (data * observed_indicator).sum(self.dim, keepdim=self.keepdim) / denominator
        loc = (data * weights).sum(self.dim, keepdim=self.keepdim) / denominator

        variance = (((data - loc) * observed_indicator) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        variance = (((data - loc) * weights) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        scale = torch.sqrt(variance + self.minimum_scale)
        return (data - loc) / scale, loc, scale
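A standalone numeric sketch of the same weighted standardization, in plain torch and mirroring the lines above with `dim=1`:

```python
import torch

# The weights tensor masks out unobserved steps; the denominator is clamped
# to avoid division by zero when a channel has no observed values.
data = torch.tensor([[[1.0], [2.0], [3.0], [100.0]]])   # (batch=1, time=4, channels=1)
weights = torch.tensor([[[1.0], [1.0], [1.0], [0.0]]])  # last step unobserved

denominator = weights.sum(1, keepdim=True).clamp_min(1.0)
loc = (data * weights).sum(1, keepdim=True) / denominator
variance = (((data - loc) * weights) ** 2).sum(1, keepdim=True) / denominator
scale = torch.sqrt(variance + 1e-5)
scaled = (data - loc) / scale
print(loc.squeeze(), scale.squeeze())  # mean/std over the 3 observed steps
```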

# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesMeanScaler with TimeSeriesTransformer->Autoformer,TimeSeries->Autoformer
# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesMeanScaler with TimeSeries->Autoformer
class AutoformerMeanScaler(nn.Module):
    """
    Computes a scaling factor as the weighted average absolute value along the first dimension, and scales the data
    Computes a scaling factor as the weighted average absolute value along dimension `dim`, and scales the data
    accordingly.

    Args:
        dim (`int`):
            Dimension along which to compute the scale.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
        default_scale (`float`, *optional*, defaults to `None`):
            Default scale that is used for elements that are constantly zero. If `None`, we use the scale of the batch.
        minimum_scale (`float`, *optional*, defaults to 1e-10):
            Default minimum possible scale that is used for any item.
    """

    def __init__(self, config: AutoformerConfig):
    def __init__(
        self, dim: int = -1, keepdim: bool = True, default_scale: Optional[float] = None, minimum_scale: float = 1e-10
    ):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.minimum_scale = config.minimum_scale if hasattr(config, "minimum_scale") else 1e-10
        self.default_scale = config.default_scale if hasattr(config, "default_scale") else None
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale
        self.default_scale = default_scale

    @torch.no_grad()
    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
            observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                Calculating the scale on the observed indicator.
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        # shape: (N, [C], T=1)
        ts_sum = (data * observed_indicator).abs().sum(self.dim, keepdim=True)
        num_observed = observed_indicator.sum(self.dim, keepdim=True)

@@ -299,29 +300,26 @@ class AutoformerMeanScaler(nn.Module):
        return scaled_data, torch.zeros_like(scale), scale
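A standalone sketch of the mean-scaling arithmetic, following the `ts_sum` / `num_observed` lines above; it assumes the elided middle of the hunk divides the two, applies the `minimum_scale` clamp, and divides the data by the result:

```python
import torch

# Masked mean-absolute scaling (assumed reconstruction of the elided middle).
data = torch.tensor([[[2.0], [-4.0], [6.0], [50.0]]])    # (batch=1, time=4, channels=1)
observed = torch.tensor([[[1.0], [1.0], [1.0], [0.0]]])  # last step not observed

ts_sum = (data * observed).abs().sum(1, keepdim=True)    # 2 + 4 + 6 = 12
num_observed = observed.sum(1, keepdim=True)             # 3
scale = ts_sum / num_observed.clamp_min(1)               # 4.0
scale = torch.clamp(scale, min=1e-10)                    # assumed minimum_scale clamp
scaled_data = data / scale
print(scale.squeeze().item(), scaled_data.squeeze().tolist())
```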

# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesNOPScaler with TimeSeriesTransformer->Autoformer,TimeSeries->Autoformer
# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesNOPScaler with TimeSeries->Autoformer
class AutoformerNOPScaler(nn.Module):
    """
    Assigns a scaling factor equal to 1 along the first dimension, and therefore applies no scaling to the input data.
    Assigns a scaling factor equal to 1 along dimension `dim`, and therefore applies no scaling to the input data.

    Args:
        dim (`int`):
            Dimension along which to compute the scale.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
    """

    def __init__(self, config: AutoformerConfig):
    def __init__(self, dim: int, keepdim: bool = False):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.dim = dim
        self.keepdim = keepdim

    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor = None
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        scale = torch.ones_like(data, requires_grad=False).mean(dim=self.dim, keepdim=self.keepdim)
        loc = torch.zeros_like(data, requires_grad=False).mean(dim=self.dim, keepdim=self.keepdim)
        return data, loc, scale
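A quick standalone check of the NOP scaler's contract: with `dim=1` and `keepdim=True` (the values passed in `AutoformerModel` below), the data comes back untouched while `loc` is all zeros and `scale` all ones, broadcastable against the `(batch, time, channels)` input:

```python
import torch

data = torch.randn(2, 5, 3)
scale = torch.ones_like(data).mean(dim=1, keepdim=True)  # (2, 1, 3), all 1.0
loc = torch.zeros_like(data).mean(dim=1, keepdim=True)   # (2, 1, 3), all 0.0
print(scale.shape, bool((scale == 1).all()), bool((loc == 0).all()))
```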
@@ -1435,11 +1433,11 @@ class AutoformerModel(AutoformerPreTrainedModel):
        super().__init__(config)

        if config.scaling == "mean" or config.scaling is True:
            self.scaler = AutoformerMeanScaler(config)
            self.scaler = AutoformerMeanScaler(dim=1, keepdim=True)
        elif config.scaling == "std":
            self.scaler = AutoformerStdScaler(config)
            self.scaler = AutoformerStdScaler(dim=1, keepdim=True)
        else:
            self.scaler = AutoformerNOPScaler(config)
            self.scaler = AutoformerNOPScaler(dim=1, keepdim=True)

        if config.num_static_categorical_features > 0:
            self.embedder = AutoformerFeatureEmbedder(
@@ -81,70 +81,71 @@ class InformerFeatureEmbedder(nn.Module):
        )


# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesStdScaler with TimeSeriesTransformer->Informer,TimeSeries->Informer
# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesStdScaler with TimeSeries->Informer
class InformerStdScaler(nn.Module):
    """
    Standardize features by calculating the mean and scaling along the first dimension, and then normalizes it by
    subtracting from the mean and dividing by the standard deviation.
    Standardize features by calculating the mean and scaling along some given dimension `dim`, and then normalizes it
    by subtracting from the mean and dividing by the standard deviation.

    Args:
        dim (`int`):
            Dimension along which to calculate the mean and standard deviation.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
        minimum_scale (`float`, *optional*, defaults to 1e-5):
            Default scale that is used for elements that are constantly zero along dimension `dim`.
    """

    def __init__(self, config: InformerConfig):
    def __init__(self, dim: int, keepdim: bool = False, minimum_scale: float = 1e-5):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.minimum_scale = config.minimum_scale if hasattr(config, "minimum_scale") else 1e-10
        if not dim > 0:
            raise ValueError("Cannot compute scale along dim = 0 (batch dimension), please provide dim > 0")
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale

    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
            observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                Calculating the scale on the observed indicator.
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        denominator = observed_indicator.sum(self.dim, keepdim=self.keepdim)
    @torch.no_grad()
    def forward(self, data: torch.Tensor, weights: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        denominator = weights.sum(self.dim, keepdim=self.keepdim)
        denominator = denominator.clamp_min(1.0)
        loc = (data * observed_indicator).sum(self.dim, keepdim=self.keepdim) / denominator
        loc = (data * weights).sum(self.dim, keepdim=self.keepdim) / denominator

        variance = (((data - loc) * observed_indicator) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        variance = (((data - loc) * weights) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        scale = torch.sqrt(variance + self.minimum_scale)
        return (data - loc) / scale, loc, scale


# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesMeanScaler with TimeSeriesTransformer->Informer,TimeSeries->Informer
# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesMeanScaler with TimeSeries->Informer
class InformerMeanScaler(nn.Module):
    """
    Computes a scaling factor as the weighted average absolute value along the first dimension, and scales the data
    Computes a scaling factor as the weighted average absolute value along dimension `dim`, and scales the data
    accordingly.

    Args:
        dim (`int`):
            Dimension along which to compute the scale.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
        default_scale (`float`, *optional*, defaults to `None`):
            Default scale that is used for elements that are constantly zero. If `None`, we use the scale of the batch.
        minimum_scale (`float`, *optional*, defaults to 1e-10):
            Default minimum possible scale that is used for any item.
    """

    def __init__(self, config: InformerConfig):
    def __init__(
        self, dim: int = -1, keepdim: bool = True, default_scale: Optional[float] = None, minimum_scale: float = 1e-10
    ):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.minimum_scale = config.minimum_scale if hasattr(config, "minimum_scale") else 1e-10
        self.default_scale = config.default_scale if hasattr(config, "default_scale") else None
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale
        self.default_scale = default_scale

    @torch.no_grad()
    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
            observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                Calculating the scale on the observed indicator.
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        # shape: (N, [C], T=1)
        ts_sum = (data * observed_indicator).abs().sum(self.dim, keepdim=True)
        num_observed = observed_indicator.sum(self.dim, keepdim=True)

@@ -172,29 +173,26 @@ class InformerMeanScaler(nn.Module):
        return scaled_data, torch.zeros_like(scale), scale


# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesNOPScaler with TimeSeriesTransformer->Informer,TimeSeries->Informer
# Copied from transformers.models.time_series_transformer.modeling_time_series_transformer.TimeSeriesNOPScaler with TimeSeries->Informer
class InformerNOPScaler(nn.Module):
    """
    Assigns a scaling factor equal to 1 along the first dimension, and therefore applies no scaling to the input data.
    Assigns a scaling factor equal to 1 along dimension `dim`, and therefore applies no scaling to the input data.

    Args:
        dim (`int`):
            Dimension along which to compute the scale.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
    """

    def __init__(self, config: InformerConfig):
    def __init__(self, dim: int, keepdim: bool = False):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.dim = dim
        self.keepdim = keepdim

    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor = None
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        scale = torch.ones_like(data, requires_grad=False).mean(dim=self.dim, keepdim=self.keepdim)
        loc = torch.zeros_like(data, requires_grad=False).mean(dim=self.dim, keepdim=self.keepdim)
        return data, loc, scale

@@ -1448,11 +1446,11 @@ class InformerModel(InformerPreTrainedModel):
        super().__init__(config)

        if config.scaling == "mean" or config.scaling is True:
            self.scaler = InformerMeanScaler(config)
            self.scaler = InformerMeanScaler(dim=1, keepdim=True)
        elif config.scaling == "std":
            self.scaler = InformerStdScaler(config)
            self.scaler = InformerStdScaler(dim=1, keepdim=True)
        else:
            self.scaler = InformerNOPScaler(config)
            self.scaler = InformerNOPScaler(dim=1, keepdim=True)

        if config.num_static_categorical_features > 0:
            self.embedder = InformerFeatureEmbedder(
@@ -1,66 +0,0 @@
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

# rely on isort to merge the imports
from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available


_import_structure = {
    "configuration_patchtst": [
        "PATCHTST_PRETRAINED_CONFIG_ARCHIVE_MAP",
        "PatchTSTConfig",
    ],
}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_patchtst"] = [
        "PATCHTST_PRETRAINED_MODEL_ARCHIVE_LIST",
        "PatchTSTModel",
        "PatchTSTPreTrainedModel",
        "PatchTSTForPrediction",
        "PatchTSTForPretraining",
        "PatchTSTForRegression",
        "PatchTSTForClassification",
    ]


if TYPE_CHECKING:
    from .configuration_patchtst import PATCHTST_PRETRAINED_CONFIG_ARCHIVE_MAP, PatchTSTConfig

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_patchtst import (
            PATCHTST_PRETRAINED_MODEL_ARCHIVE_LIST,
            PatchTSTForClassification,
            PatchTSTForPrediction,
            PatchTSTForPretraining,
            PatchTSTForRegression,
            PatchTSTModel,
            PatchTSTPreTrainedModel,
        )

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
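The `_LazyModule` indirection above is what keeps `import transformers` cheap: submodules are imported only when one of their exported names is first touched. A toy, self-contained version of the idea (not the actual `_LazyModule`):

```python
import importlib
import types


class LazyModule(types.ModuleType):
    """Toy sketch of the lazy-module pattern: resolve attributes on first access."""

    def __init__(self, name, import_structure):
        super().__init__(name)
        # Map each exported attribute to the module that really defines it.
        self._attr_to_module = {
            attr: mod for mod, attrs in import_structure.items() for attr in attrs
        }

    def __getattr__(self, attr):
        module = importlib.import_module(self._attr_to_module[attr])
        value = getattr(module, attr)
        setattr(self, attr, value)  # cache, so __getattr__ fires once per name
        return value


demo = LazyModule("demo", {"json": ["dumps"], "math": ["sqrt"]})
print(demo.sqrt(9.0))  # the `math` import happens here, not at construction
```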
|
@ -1,274 +0,0 @@
|
||||
# coding=utf-8
|
||||
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
"""PatchTST model configuration"""
|
||||
|
||||
from typing import List, Optional, Union
|
||||
|
||||
from transformers.configuration_utils import PretrainedConfig
|
||||
from transformers.utils import logging
|
||||
|
||||
|
||||
logger = logging.get_logger(__name__)
|
||||
|
||||
PATCHTST_PRETRAINED_CONFIG_ARCHIVE_MAP = {
|
||||
"ibm/patchtst-base": "https://huggingface.co/ibm/patchtst-base/resolve/main/config.json",
|
||||
# See all PatchTST models at https://huggingface.co/ibm/models?filter=patchtst
|
||||
}
|
||||
|
||||
|
||||
class PatchTSTConfig(PretrainedConfig):
|
||||
r"""
|
||||
This is the configuration class to store the configuration of an [`PatchTSTModel`]. It is used to instantiate an
|
||||
PatchTST model according to the specified arguments, defining the model architecture.
|
||||
[ibm/patchtst](https://huggingface.co/ibm/patchtst) architecture.
|
||||
|
||||
Configuration objects inherit from [`PretrainedConfig`] can be used to control the model outputs. Read the
|
||||
documentation from [`PretrainedConfig`] for more information.
|
||||
|
||||
Args:
|
||||
num_input_channels (`int`, *optional*, defaults to 1):
|
||||
The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of
|
||||
multivariate targets.
|
||||
context_length (`int`, *optional*, defaults to 32):
|
||||
The context length for the encoder.
|
||||
distribution_output (`str`, *optional*, defaults to `"student_t"`):
|
||||
The distribution emission head for the model when loss is "nll". Could be either "student_t", "normal" or
|
||||
"negative_binomial".
|
||||
loss (`str`, *optional*, defaults to `"mse"`):
|
||||
The loss function for the model corresponding to the `distribution_output` head. For parametric
|
||||
distributions it is the negative log likelihood ("nll") and for point estimates it is the mean squared
|
||||
error "mse".
|
||||
patch_length (`int`, *optional*, defaults to 1):
|
||||
Define the patch length of the patchification process.
|
||||
patch_stride (`int`, *optional*, defaults to 1):
|
||||
define the stride of the patchification process.
|
||||
encoder_layers (`int`, *optional*, defaults to 3):
|
||||
Number of encoder layers.
|
||||
d_model (`int`, *optional*, defaults to 64):
|
||||
Dimensionality of the transformer layers.
|
||||
encoder_attention_heads (`int`, *optional*, defaults to 4):
|
||||
Number of attention heads for each attention layer in the Transformer encoder.
|
||||
shared_embedding (`bool`, *optional*, defaults to `True`):
|
||||
Sharing the input embedding across all channels.
|
||||
channel_attention (`bool`, *optional*, defaults to `False`):
|
||||
Activate channel attention block in the Transformer to allow channels to attend each other.
|
||||
encoder_ffn_dim (`int`, *optional*, defaults to 256):
|
||||
Dimension of the "intermediate" (often named feed-forward) layer in encoder.
|
||||
norm (`str` , *optional*, defaults to `"BatchNorm"`):
|
||||
Normalization at each Transformer layer. Can be `"BatchNorm"` or `"LayerNorm"`.
|
||||
norm_eps (`float`, *optional*, defaults to 1e-05):
|
||||
A value added to the denominator for numerical stability of normalization.
|
||||
attention_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout probability for the attention probabilities.
|
||||
dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout probability for all fully connected layers in the encoder, and decoder.
|
||||
positional_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout probability in the positional embedding layer.
|
||||
dropout_path (`float`, *optional*, defaults to 0.0):
|
||||
The dropout path in the residual block.
|
||||
ff_dropout (`float`, *optional*, defaults to 0.0):
|
||||
The dropout probability used between the two layers of the feed-forward networks.
|
||||
bias (`bool`, *optional*, defaults to `True`):
|
||||
Consider bias in the feed-forward networks.
|
||||
activation_function (`str`, *optional*, defaults to `"gelu"`):
|
||||
            The non-linear activation function (string) in the encoder. `"gelu"` and `"relu"` are supported.
        pre_norm (`bool`, *optional*, defaults to `True`):
            Normalization is applied before self-attention if pre_norm is set to `True`. Otherwise, normalization is
            applied after the residual block.
        positional_encoding_type (`str`, *optional*, defaults to `"sincos"`):
            Type of positional encoding. `"zeros"`, `"normal"`, `"uniform"` and `"sincos"` are supported.
        learn_pe (`bool`, *optional*, defaults to `False`):
            Whether the positional encoding is updated during training.
        use_cls_token (`bool`, *optional*, defaults to `False`):
            Whether a cls token is used.
        init_std (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated normal weight initialization distribution.
        shared_projection (`bool`, *optional*, defaults to `True`):
            Whether to share the projection layer across different channels in the forecast head.
        seed_number (`int`, *optional*):
            Seed used for random masking. If unset, no seed is set.
        scaling (`str` or `bool`, *optional*, defaults to `"mean"`):
            Whether to scale the input targets via "mean" scaler, "std" scaler or no scaler if `None`. If `True`, the
            scaler is set to "mean".
        mask_input (`bool`, *optional*):
            Whether to apply masking during pretraining.
        mask_type (`str`, *optional*, defaults to `"random"`):
            Masking type. Only `"random"` and `"forecast"` are currently supported.
        random_mask_ratio (`float`, *optional*, defaults to 0.5):
            Masking ratio applied to the input data during random pretraining.
        forecast_mask_patches (`List[int]`, *optional*, defaults to `[2, 3]`):
            List of patch lengths to mask at the end of the data.
        forecast_mask_ratios (`List[int]`, *optional*, defaults to `[1, 1]`):
            List of weights to use for each patch length. For example, if `forecast_mask_patches` is `[5, 4]` and
            `forecast_mask_ratios` is `[1, 1]`, both patch lengths receive equal weight.
        channel_consistent_masking (`bool`, *optional*, defaults to `False`):
            If `True`, all channels share the same masking.
        unmasked_channel_indices (`list`, *optional*):
            Channels that are not masked during pretraining.
        mask_value (`int`, *optional*, defaults to 0):
            Value of the entries that are masked during pretraining.
        pooling_type (`str`, *optional*, defaults to `"mean"`):
            Pooling of the embedding. `"mean"`, `"max"` and `None` are supported.
        head_dropout (`float`, *optional*, defaults to 0.0):
            The dropout probability for the head.
        prediction_length (`int`, *optional*, defaults to 24):
            The prediction length for the encoder. In other words, the prediction horizon of the model.
        num_targets (`int`, *optional*, defaults to 1):
            Number of targets for regression and classification tasks. For classification, it is the number of
            classes.
        output_range (`list`, *optional*):
            Output range for the regression task. The range of output values can be set to enforce the model to
            produce values within a range.
        num_parallel_samples (`int`, *optional*, defaults to 100):
            The number of samples generated in parallel for probabilistic prediction.

    ```python
    >>> from transformers import PatchTSTConfig, PatchTSTModel

    >>> # Initializing a PatchTST configuration with 12 time steps for prediction
    >>> configuration = PatchTSTConfig(prediction_length=12)

    >>> # Initializing a model (with random weights) from the configuration
    >>> model = PatchTSTModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "patchtst"
    attribute_map = {
        "hidden_size": "d_model",
        "num_attention_heads": "encoder_attention_heads",
        "num_hidden_layers": "encoder_layers",
    }
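The `attribute_map` wires the generic `PretrainedConfig` attribute names to PatchTST's native parameter names, so framework code reading `config.hidden_size` keeps working. A minimal sketch of the effect, with hypothetical values, and assuming `PatchTSTConfig` is importable (this commit reverts the model, so it is not in released builds):

```python
from transformers import PatchTSTConfig

config = PatchTSTConfig(d_model=64, encoder_layers=3, encoder_attention_heads=4)

# The generic names resolve through attribute_map to the PatchTST-specific ones.
assert config.hidden_size == 64          # -> config.d_model
assert config.num_hidden_layers == 3     # -> config.encoder_layers
assert config.num_attention_heads == 4   # -> config.encoder_attention_heads
```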

    def __init__(
        self,
        # time series specific configuration
        num_input_channels: int = 1,
        context_length: int = 32,
        distribution_output: str = "student_t",
        loss: str = "mse",
        # PatchTST arguments
        patch_length: int = 1,
        patch_stride: int = 1,
        # Transformer architecture configuration
        encoder_layers: int = 3,
        d_model: int = 64,
        encoder_attention_heads: int = 4,
        shared_embedding: bool = True,
        channel_attention: bool = False,
        encoder_ffn_dim: int = 256,
        norm: str = "BatchNorm",
        norm_eps: float = 1e-5,
        attention_dropout: float = 0.0,
        dropout: float = 0.0,
        positional_dropout: float = 0.0,
        dropout_path: float = 0.0,
        ff_dropout: float = 0.0,
        bias: bool = True,
        activation_function: str = "gelu",
        pre_norm: bool = True,
        positional_encoding_type: str = "sincos",
        learn_pe: bool = False,
        use_cls_token: bool = False,
        init_std: float = 0.02,
        shared_projection: bool = True,
        seed_number: Optional[int] = None,
        scaling: Optional[Union[str, bool]] = "mean",
        # mask pretraining
        mask_input: Optional[bool] = None,
        mask_type: str = "random",
        random_mask_ratio: float = 0.5,
        forecast_mask_patches: List[int] = [2, 3],
        forecast_mask_ratios: List[int] = [1, 1],
        channel_consistent_masking: bool = False,
        unmasked_channel_indices: Optional[List[int]] = None,
        mask_value: int = 0,
        # head
        pooling_type: str = "mean",
        head_dropout: float = 0.0,
        prediction_length: int = 24,
        num_targets: int = 1,
        output_range: Optional[List] = None,
        # distribution head
        num_parallel_samples: int = 100,
        **kwargs,
    ):
        # time series specific configuration
        self.context_length = context_length
        self.num_input_channels = num_input_channels  # n_vars
        self.loss = loss
        self.distribution_output = distribution_output
        self.num_parallel_samples = num_parallel_samples

        # Transformer architecture configuration
        self.d_model = d_model
        self.encoder_attention_heads = encoder_attention_heads
        self.encoder_ffn_dim = encoder_ffn_dim
        self.encoder_layers = encoder_layers
        self.dropout = dropout
        self.attention_dropout = attention_dropout
        self.shared_embedding = shared_embedding
        self.channel_attention = channel_attention
        self.norm = norm
        self.norm_eps = norm_eps
        self.positional_dropout = positional_dropout
        self.dropout_path = dropout_path
        self.ff_dropout = ff_dropout
        self.bias = bias
        self.activation_function = activation_function
        self.pre_norm = pre_norm
        self.positional_encoding_type = positional_encoding_type
        self.learn_pe = learn_pe
        self.use_cls_token = use_cls_token
        self.init_std = init_std
        self.scaling = scaling

        # PatchTST parameters
        self.patch_length = patch_length
        self.patch_stride = patch_stride
        self.num_patches = self._num_patches()

        # Mask pretraining
        self.seed_number = seed_number
        self.mask_input = mask_input
        self.mask_type = mask_type
        self.random_mask_ratio = random_mask_ratio  # for random masking
        self.forecast_mask_patches = forecast_mask_patches  # for forecast masking
        self.forecast_mask_ratios = forecast_mask_ratios
        self.channel_consistent_masking = channel_consistent_masking
        self.unmasked_channel_indices = unmasked_channel_indices
        self.mask_value = mask_value

        # general head params
        self.pooling_type = pooling_type
        self.head_dropout = head_dropout

        # For prediction head
        self.shared_projection = shared_projection
        self.prediction_length = prediction_length

        # For prediction and regression head
        self.num_parallel_samples = num_parallel_samples

        # Regression
        self.num_targets = num_targets
        self.output_range = output_range

        super().__init__(**kwargs)

    def _num_patches(self):
        return (max(self.context_length, self.patch_length) - self.patch_length) // self.patch_stride + 1
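`_num_patches` is the standard strided-window count: the number of windows of size `patch_length` taken with step `patch_stride` from the (possibly padded) context. A quick hand check of the formula, as a standalone sketch with a hypothetical helper name, using the defaults above and the shapes exercised later in the tests:

```python
def num_patches(context_length: int, patch_length: int, patch_stride: int) -> int:
    # Same formula as PatchTSTConfig._num_patches.
    return (max(context_length, patch_length) - patch_length) // patch_stride + 1

print(num_patches(32, 1, 1))   # 32 -> one patch per time step (the config defaults)
print(num_patches(14, 5, 5))   # 2  -> non-overlapping patches of length 5 (the tester's shapes)
print(num_patches(32, 16, 8))  # 3  -> overlapping patches with stride 8
```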
File diff suppressed because it is too large
@ -83,66 +83,67 @@ class TimeSeriesFeatureEmbedder(nn.Module):


class TimeSeriesStdScaler(nn.Module):
    """
    Standardize features by calculating the mean and scaling along the first dimension, and then normalizes it by
    subtracting from the mean and dividing by the standard deviation.
    Standardize features by calculating the mean and scaling along some given dimension `dim`, and then normalizes it
    by subtracting from the mean and dividing by the standard deviation.

    Args:
        dim (`int`):
            Dimension along which to calculate the mean and standard deviation.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
        minimum_scale (`float`, *optional*, defaults to 1e-5):
            Default scale that is used for elements that are constantly zero along dimension `dim`.
    """

    def __init__(self, config: TimeSeriesTransformerConfig):
    def __init__(self, dim: int, keepdim: bool = False, minimum_scale: float = 1e-5):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.minimum_scale = config.minimum_scale if hasattr(config, "minimum_scale") else 1e-10
        if not dim > 0:
            raise ValueError("Cannot compute scale along dim = 0 (batch dimension), please provide dim > 0")
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale

    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
            observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                Calculating the scale on the observed indicator.
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        denominator = observed_indicator.sum(self.dim, keepdim=self.keepdim)
    @torch.no_grad()
    def forward(self, data: torch.Tensor, weights: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        denominator = weights.sum(self.dim, keepdim=self.keepdim)
        denominator = denominator.clamp_min(1.0)
        loc = (data * observed_indicator).sum(self.dim, keepdim=self.keepdim) / denominator
        loc = (data * weights).sum(self.dim, keepdim=self.keepdim) / denominator

        variance = (((data - loc) * observed_indicator) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        variance = (((data - loc) * weights) ** 2).sum(self.dim, keepdim=self.keepdim) / denominator
        scale = torch.sqrt(variance + self.minimum_scale)
        return (data - loc) / scale, loc, scale
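As a sanity check on the new interface, here is a hedged toy run of the same masked-standardization arithmetic with `dim=1` and `keepdim=True` (mirroring how the model constructs the scaler further down); the unobserved step contributes nothing to `loc` or `scale`:

```python
import torch

data = torch.tensor([[[1.0], [2.0], [3.0], [100.0]]])   # (batch=1, time=4, channels=1)
weights = torch.tensor([[[1.0], [1.0], [1.0], [0.0]]])   # last step unobserved

denominator = weights.sum(1, keepdim=True).clamp_min(1.0)     # 3 observed steps
loc = (data * weights).sum(1, keepdim=True) / denominator     # mean over observed values = 2.0
variance = (((data - loc) * weights) ** 2).sum(1, keepdim=True) / denominator
scale = torch.sqrt(variance + 1e-5)                           # std over observed values

print(loc.squeeze(), scale.squeeze())  # tensor(2.) tensor(0.8165); the outlier 100.0 is ignored
```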


class TimeSeriesMeanScaler(nn.Module):
    """
    Computes a scaling factor as the weighted average absolute value along the first dimension, and scales the data
    Computes a scaling factor as the weighted average absolute value along dimension `dim`, and scales the data
    accordingly.

    Args:
        dim (`int`):
            Dimension along which to compute the scale.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
        default_scale (`float`, *optional*, defaults to `None`):
            Default scale that is used for elements that are constantly zero. If `None`, we use the scale of the batch.
        minimum_scale (`float`, *optional*, defaults to 1e-10):
            Default minimum possible scale that is used for any item.
    """

    def __init__(self, config: TimeSeriesTransformerConfig):
    def __init__(
        self, dim: int = -1, keepdim: bool = True, default_scale: Optional[float] = None, minimum_scale: float = 1e-10
    ):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.minimum_scale = config.minimum_scale if hasattr(config, "minimum_scale") else 1e-10
        self.default_scale = config.default_scale if hasattr(config, "default_scale") else None
        self.dim = dim
        self.keepdim = keepdim
        self.minimum_scale = minimum_scale
        self.default_scale = default_scale

    @torch.no_grad()
    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
            observed_indicator (`torch.BoolTensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                Calculating the scale on the observed indicator.
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        # shape: (N, [C], T=1)
        ts_sum = (data * observed_indicator).abs().sum(self.dim, keepdim=True)
        num_observed = observed_indicator.sum(self.dim, keepdim=True)

@ -172,26 +173,23 @@ class TimeSeriesMeanScaler(nn.Module):


class TimeSeriesNOPScaler(nn.Module):
    """
    Assigns a scaling factor equal to 1 along the first dimension, and therefore applies no scaling to the input data.
    Assigns a scaling factor equal to 1 along dimension `dim`, and therefore applies no scaling to the input data.

    Args:
        dim (`int`):
            Dimension along which to compute the scale.
        keepdim (`bool`, *optional*, defaults to `False`):
            Controls whether to retain dimension `dim` (of length 1) in the scale tensor, or suppress it.
    """

    def __init__(self, config: TimeSeriesTransformerConfig):
    def __init__(self, dim: int, keepdim: bool = False):
        super().__init__()
        self.dim = config.scaling_dim if hasattr(config, "scaling_dim") else 1
        self.keepdim = config.keepdim if hasattr(config, "keepdim") else True
        self.dim = dim
        self.keepdim = keepdim

    def forward(
        self, data: torch.Tensor, observed_indicator: torch.Tensor = None
        self, data: torch.Tensor, observed_indicator: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Parameters:
            data (`torch.Tensor` of shape `(batch_size, sequence_length, num_input_channels)`):
                input for Batch norm calculation
        Returns:
            tuple of `torch.Tensor` of shapes
                (`(batch_size, sequence_length, num_input_channels)`,`(batch_size, 1, num_input_channels)`,
                `(batch_size, 1, num_input_channels)`)
        """
        scale = torch.ones_like(data, requires_grad=False).mean(dim=self.dim, keepdim=self.keepdim)
        loc = torch.zeros_like(data, requires_grad=False).mean(dim=self.dim, keepdim=self.keepdim)
        return data, loc, scale
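All three scalers share one contract: `forward` returns `(scaled, loc, scale)` with `loc` and `scale` broadcastable against the data, so predictions made in scaled space are mapped back uniformly as `pred * scale + loc`. A hedged round-trip sketch, assuming the classes above are in scope and that the mean scaler's full forward (truncated in this hunk) divides by the scale and returns a zero location:

```python
import torch

data = torch.randn(2, 8, 3)       # (batch, time, channels)
observed = torch.ones_like(data)  # everything observed

for scaler in (
    TimeSeriesStdScaler(dim=1, keepdim=True),
    TimeSeriesMeanScaler(dim=1, keepdim=True),
    TimeSeriesNOPScaler(dim=1, keepdim=True),
):
    scaled, loc, scale = scaler(data, observed)
    # De-scaling recovers the original series under full observation.
    assert torch.allclose(scaled * scale + loc, data, atol=1e-3)
```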
@ -1182,11 +1180,11 @@ class TimeSeriesTransformerModel(TimeSeriesTransformerPreTrainedModel):
        super().__init__(config)

        if config.scaling == "mean" or config.scaling is True:
            self.scaler = TimeSeriesMeanScaler(config)
            self.scaler = TimeSeriesMeanScaler(dim=1, keepdim=True)
        elif config.scaling == "std":
            self.scaler = TimeSeriesStdScaler(config)
            self.scaler = TimeSeriesStdScaler(dim=1, keepdim=True)
        else:
            self.scaler = TimeSeriesNOPScaler(config)
            self.scaler = TimeSeriesNOPScaler(dim=1, keepdim=True)

        if config.num_static_categorical_features > 0:
            self.embedder = TimeSeriesFeatureEmbedder(
@ -627,12 +627,6 @@ MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING = None
MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING = None


MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING = None


MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING = None


MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING = None

@ -6025,51 +6019,6 @@ class OwlViTVisionModel(metaclass=DummyObject):
        requires_backends(self, ["torch"])


PATCHTST_PRETRAINED_MODEL_ARCHIVE_LIST = None


class PatchTSTForClassification(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class PatchTSTForPrediction(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class PatchTSTForPretraining(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class PatchTSTForRegression(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class PatchTSTModel(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class PatchTSTPreTrainedModel(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])
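These placeholders follow the library's dummy-object pattern: the names still import when torch is absent, but any instantiation (or public attribute access, via the `DummyObject` metaclass) raises an informative error through `requires_backends`. A simplified sketch of the mechanism, not the exact `transformers.utils` implementation:

```python
class DummyObject(type):
    """Metaclass: public attribute access on the class triggers the backend check."""

    def __getattribute__(cls, key):
        if key.startswith("_"):
            return super().__getattribute__(key)
        requires_backends(cls, cls._backends)


def requires_backends(obj, backends):
    # Simplified: the real helper first verifies that each backend is importable.
    name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
    raise ImportError(f"{name} requires the {backends} backend(s). Please install them to use this class.")
```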


class PegasusForCausalLM(metaclass=DummyObject):
    _backends = ["torch"]
@ -1,353 +0,0 @@
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Testing suite for the PyTorch PatchTST model. """

import inspect
import random
import tempfile
import unittest

from huggingface_hub import hf_hub_download

from transformers import is_torch_available
from transformers.models.auto import get_values
from transformers.testing_utils import is_flaky, require_torch, slow, torch_device

from ...test_configuration_common import ConfigTester
from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor
from ...test_pipeline_mixin import PipelineTesterMixin


TOLERANCE = 1e-4

if is_torch_available():
    import torch

    from transformers import (
        MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING,
        MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING,
        PatchTSTConfig,
        PatchTSTForClassification,
        PatchTSTForPrediction,
        PatchTSTForPretraining,
        PatchTSTForRegression,
        PatchTSTModel,
    )


@require_torch
class PatchTSTModelTester:
    def __init__(
        self,
        parent,
        batch_size=13,
        prediction_length=7,
        context_length=14,
        patch_length=5,
        patch_stride=5,
        num_input_channels=1,
        num_time_features=1,
        is_training=True,
        hidden_size=16,
        num_hidden_layers=2,
        num_attention_heads=4,
        intermediate_size=4,
        hidden_act="gelu",
        hidden_dropout_prob=0.1,
        attention_probs_dropout_prob=0.1,
        lags_sequence=[1, 2, 3, 4, 5],
        distil=False,
        seed_number=42,
        num_targets=2,
        num_output_channels=2,
    ):
        self.parent = parent
        self.batch_size = batch_size
        self.prediction_length = prediction_length
        self.context_length = context_length
        self.patch_length = patch_length
        self.patch_stride = patch_stride
        self.num_input_channels = num_input_channels
        self.num_time_features = num_time_features
        self.lags_sequence = lags_sequence
        self.is_training = is_training
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
        self.hidden_act = hidden_act
        self.hidden_dropout_prob = hidden_dropout_prob
        self.attention_probs_dropout_prob = attention_probs_dropout_prob

        self.seed_number = seed_number
        self.num_targets = num_targets
        self.num_output_channels = num_output_channels
        self.distil = distil
        self.num_patches = (max(self.context_length, self.patch_length) - self.patch_length) // self.patch_stride + 1

    def get_config(self):
        return PatchTSTConfig(
            prediction_length=self.prediction_length,
            patch_length=self.patch_length,
            patch_stride=self.patch_stride,
            num_input_channels=self.num_input_channels,
            d_model=self.hidden_size,
            encoder_layers=self.num_hidden_layers,
            encoder_attention_heads=self.num_attention_heads,
            encoder_ffn_dim=self.intermediate_size,
            dropout=self.hidden_dropout_prob,
            attention_dropout=self.attention_probs_dropout_prob,
            context_length=self.context_length,
            activation_function=self.hidden_act,
            seed_number=self.seed_number,
            num_targets=self.num_targets,
            num_output_channels=self.num_output_channels,
        )

    def prepare_patchtst_inputs_dict(self, config):
        _past_length = config.context_length
        # bs, num_input_channels, num_patch, patch_len

        # [bs x seq_len x num_input_channels]
        past_values = floats_tensor([self.batch_size, _past_length, self.num_input_channels])

        future_values = floats_tensor([self.batch_size, config.prediction_length, self.num_input_channels])

        inputs_dict = {
            "past_values": past_values,
            "future_values": future_values,
        }
        return inputs_dict

    def prepare_config_and_inputs(self):
        config = self.get_config()
        inputs_dict = self.prepare_patchtst_inputs_dict(config)
        return config, inputs_dict

    def prepare_config_and_inputs_for_common(self):
        config, inputs_dict = self.prepare_config_and_inputs()
        return config, inputs_dict


@require_torch
class PatchTSTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    all_model_classes = (
        (
            PatchTSTModel,
            PatchTSTForPrediction,
            PatchTSTForPretraining,
            PatchTSTForClassification,
            PatchTSTForRegression,
        )
        if is_torch_available()
        else ()
    )
    all_generative_model_classes = (
        (PatchTSTForPrediction, PatchTSTForRegression, PatchTSTForPretraining) if is_torch_available() else ()
    )
    pipeline_model_mapping = {"feature-extraction": PatchTSTModel} if is_torch_available() else {}
    test_pruning = False
    test_head_masking = False
    test_missing_keys = False
    test_torchscript = False
    test_inputs_embeds = False
    test_model_common_attributes = False

    test_resize_embeddings = True
    test_resize_position_embeddings = False
    test_mismatched_shapes = True
    test_model_parallel = False
    has_attentions = False

    def setUp(self):
        self.model_tester = PatchTSTModelTester(self)
        self.config_tester = ConfigTester(
            self,
            config_class=PatchTSTConfig,
            has_text_modality=False,
            prediction_length=self.model_tester.prediction_length,
        )

    def test_config(self):
        self.config_tester.run_common_tests()

    def _prepare_for_class(self, inputs_dict, model_class, return_labels=False):
        inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels)

        # if PatchTSTForPretraining
        if model_class == PatchTSTForPretraining:
            inputs_dict.pop("future_values")
        # else if classification model:
        elif model_class in get_values(MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING):
            rng = random.Random(self.model_tester.seed_number)
            labels = ids_tensor([self.model_tester.batch_size], self.model_tester.num_targets, rng=rng)
            inputs_dict["target_values"] = labels
            inputs_dict.pop("future_values")
        elif model_class in get_values(MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING):
            rng = random.Random(self.model_tester.seed_number)
            target_values = floats_tensor(
                [self.model_tester.batch_size, self.model_tester.num_output_channels], rng=rng
            )
            inputs_dict["target_values"] = target_values
            inputs_dict.pop("future_values")
        return inputs_dict

    def test_save_load_strict(self):
        config, _ = self.model_tester.prepare_config_and_inputs()
        for model_class in self.all_model_classes:
            model = model_class(config)

            with tempfile.TemporaryDirectory() as tmpdirname:
                model.save_pretrained(tmpdirname)
                model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True)
            self.assertEqual(info["missing_keys"], [])

    def test_hidden_states_output(self):
        def check_hidden_states_output(inputs_dict, config, model_class):
            model = model_class(config)
            model.to(torch_device)
            model.eval()

            with torch.no_grad():
                outputs = model(**self._prepare_for_class(inputs_dict, model_class))

            hidden_states = outputs.hidden_states

            expected_num_layers = getattr(
                self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers
            )
            self.assertEqual(len(hidden_states), expected_num_layers)

            num_patch = self.model_tester.num_patches
            self.assertListEqual(
                list(hidden_states[0].shape[-2:]),
                [num_patch, self.model_tester.hidden_size],
            )

        config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()

        for model_class in self.all_model_classes:
            inputs_dict["output_hidden_states"] = True
            print("model_class: ", model_class)

            check_hidden_states_output(inputs_dict, config, model_class)

            # check that output_hidden_states also work using config
            del inputs_dict["output_hidden_states"]
            config.output_hidden_states = True

            check_hidden_states_output(inputs_dict, config, model_class)

    @unittest.skip(reason="we have no tokens embeddings")
    def test_resize_tokens_embeddings(self):
        pass

    def test_model_main_input_name(self):
        model_signature = inspect.signature(getattr(PatchTSTModel, "forward"))
        # The main input is the name of the argument after `self`
        observed_main_input_name = list(model_signature.parameters.keys())[1]
        self.assertEqual(PatchTSTModel.main_input_name, observed_main_input_name)

    def test_forward_signature(self):
        config, _ = self.model_tester.prepare_config_and_inputs_for_common()

        for model_class in self.all_model_classes:
            model = model_class(config)
            signature = inspect.signature(model.forward)
            # signature.parameters is an OrderedDict => so arg_names order is deterministic
            arg_names = [*signature.parameters.keys()]

            expected_arg_names = [
                "past_values",
                "past_observed_mask",
                "future_values",
            ]
            if model_class == PatchTSTForPretraining:
                expected_arg_names.remove("future_values")
            elif model_class in get_values(MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING) or model_class in get_values(
                MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING
            ):
                expected_arg_names.remove("future_values")
                expected_arg_names.remove("past_observed_mask")
                expected_arg_names.append("target_values")
                expected_arg_names.append("past_observed_mask")
            expected_arg_names.extend(
                [
                    "output_hidden_states",
                    "output_attentions",
                    "return_dict",
                ]
            )

            self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names)

    @is_flaky()
    def test_retain_grad_hidden_states_attentions(self):
        super().test_retain_grad_hidden_states_attentions()


# Note: Publishing of this dataset is under internal review. The dataset is not yet downloadable.
def prepare_batch(repo_id="ibm/etth1-forecast-test", file="train-batch.pt"):
    file = hf_hub_download(repo_id=repo_id, filename=file, repo_type="dataset")
    batch = torch.load(file, map_location=torch_device)
    return batch


# Note: Publishing of pretrained weights is under internal review. Pretrained model is not yet downloadable.
@require_torch
@slow
class PatchTSTModelIntegrationTests(unittest.TestCase):
    # Publishing of pretrained weights is under internal review. Pretrained model is not yet downloadable.
    def test_pretrain_head(self):
        model = PatchTSTForPretraining.from_pretrained("ibm/patchtst-etth1-pretrain").to(torch_device)
        batch = prepare_batch()

        torch.manual_seed(0)
        with torch.no_grad():
            output = model(past_values=batch["past_values"].to(torch_device)).prediction_output
        num_patch = (
            max(model.config.context_length, model.config.patch_length) - model.config.patch_length
        ) // model.config.patch_stride + 1
        expected_shape = torch.Size([64, model.config.num_input_channels, num_patch, model.config.patch_length])
        self.assertEqual(output.shape, expected_shape)

        expected_slice = torch.tensor(
            [[[-0.5409]], [[0.3093]], [[-0.3759]], [[0.5068]], [[-0.8387]], [[0.0937]], [[0.2809]]],
            device=torch_device,
        )
        self.assertTrue(torch.allclose(output[0, :7, :1, :1], expected_slice, atol=TOLERANCE))

    # Publishing of pretrained weights is under internal review. Pretrained model is not yet downloadable.
    def test_prediction_head(self):
        model = PatchTSTForPrediction.from_pretrained("ibm/patchtst-etth1-forecast").to(torch_device)

        batch = prepare_batch(file="test-batch.pt")

        torch.manual_seed(0)
        with torch.no_grad():
            output = model(
                past_values=batch["past_values"].to(torch_device),
                future_values=batch["future_values"].to(torch_device),
            ).prediction_outputs
        expected_shape = torch.Size([64, model.config.prediction_length, model.config.num_input_channels])
        self.assertEqual(output.shape, expected_shape)

        expected_slice = torch.tensor(
            [[0.3228, 0.4320, 0.4591, 0.4066, -0.3461, 0.3094, -0.8426]],
            device=torch_device,
        )
        self.assertTrue(torch.allclose(output[0, :1, :7], expected_slice, atol=TOLERANCE))
@ -185,8 +185,6 @@ IGNORE_NON_AUTO_CONFIGURED = PRIVATE_MODELS.copy() + [
    "TimeSeriesTransformerForPrediction",
    "InformerForPrediction",
    "AutoformerForPrediction",
    "PatchTSTForPretraining",
    "PatchTSTForPrediction",
    "JukeboxVQVAE",
    "JukeboxPrior",
    "SamModel",