Arthur
f5d45d89c4
🚨 Early-error 🚨 config will error out if output_attentions=True and the attn implementation is wrong ( #38288 )
...
* Protect ParallelInterface
* early error out on output attentions setting for no warning in modeling
* modular update
* fixup
* update model tests
* update
* oops
* set model's config
* more cases
* ??
* properly fix
* fixup
* update
* last ones
* update
* fix?
* fix wrong merge commit
* fix hub test
* nits
* wow I am tired
* updates
* fix pipeline!
---------
Co-authored-by: Lysandre <hi@lysand.re>
2025-05-23 17:17:38 +02:00
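Regarding the early-error commit above (#38288), here is a hedged sketch of the behavior the guard is meant to enforce; the checkpoint, the error type, and where the check fires are assumptions, not taken from the PR: asking for attention weights under a non-eager attention implementation such as SDPA should fail immediately instead of only warning inside the model.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: "sdpa" cannot return per-head attention weights, so combining it
# with output_attentions=True is the incompatible case the early error targets.
model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-bert", attn_implementation="sdpa")
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-bert")
inputs = tokenizer("hello world", return_tensors="pt")

try:
    with torch.no_grad():
        model(**inputs, output_attentions=True)
except ValueError as err:  # assumed error type and location of the check
    print("errored out early instead of warning:", err)
```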
Cyril Vallez
0cfbf9c95b
Force torch>=2.6 with torch.load to avoid vulnerability issue ( #37785 )
...
* fix all main files
* fix test files
* oops, forgot modular
* add link
* update message
2025-04-25 16:57:09 +02:00
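A minimal sketch of the kind of guard the torch>=2.6 commit above introduces; the function name and error message are illustrative, not the exact code from #37785: refuse to deserialize checkpoints with torch.load on older torch versions, and always pass weights_only=True.

```python
import torch
from packaging import version


def safe_torch_load(checkpoint_path: str):
    # Illustrative guard: require a torch version that ships the fixed torch.load
    # before touching the file, and never allow arbitrary unpickling.
    if version.parse(torch.__version__) < version.parse("2.6"):
        raise ValueError(
            "Due to a vulnerability in torch.load, loading this checkpoint requires torch>=2.6. "
            "Please upgrade torch."
        )
    return torch.load(checkpoint_path, map_location="cpu", weights_only=True)
```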
cyyever
1e6b546ea6
Use Python 3.9 syntax in tests ( #37343 )
...
Signed-off-by: cyy <cyyever@outlook.com>
2025-04-08 14:12:08 +02:00
cyyever
41a0e58e5b
Set weights_only in torch.load ( #36991 )
2025-03-27 14:55:50 +00:00
co63oc
996f512d52
Fix typos in tests ( #36547 )
...
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
2025-03-05 15:04:06 -08:00
ivarflakstad
7ec35bc3bd
Add missing atol to torch.testing.assert_close where rtol is specified ( #36234 )
2025-02-17 14:57:50 +01:00
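The reason the missing atol values were added in the commit above: torch.testing.assert_close requires rtol and atol to be given together, so a call that sets only rtol fails before it ever compares anything. A minimal example with both tolerances:

```python
import torch

expected = torch.tensor([1.0, 2.0])
actual = torch.tensor([1.0, 2.0001])

# assert_close rejects calls that pass only one of rtol/atol, which is why tests
# that specified rtol alone needed an explicit atol as well.
torch.testing.assert_close(actual, expected, rtol=1e-3, atol=1e-3)
```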
Joao Gante
62c7ea0201
CI: avoid human error, automatically infer generative models ( #33212 )
...
* tmp commit
* move tests to the right class
* remove ALL all_generative_model_classes = ...
* skip tf roberta
* skip InstructBlipForConditionalGenerationDecoderOnlyTest
* videollava
* reduce diff
* reduce diff
* remove on vlms
* fix a few more
* manual rebase bits
* more manual rebase
* remove all manual generative model class test entries
* fix up to ernie
* a few more removals
* handle remaining cases
* recurrent gemma
* it's better here
* make fixup
* tf idefics is broken
* tf bert + generate is broken
* don't touch tf :()
* don't touch tf :(
* make fixup
* better comments for test skips
* revert tf changes
* remove empty line removal
* one more
* missing one
2025-02-13 16:27:11 +01:00
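A hedged sketch of the inference that replaces the hand-maintained lists in the commit above; the exact mechanism in #33212 may differ: generative test classes can be derived by filtering the regular model list on PreTrainedModel.can_generate().

```python
from transformers import BertLMHeadModel, BertModel

all_model_classes = (BertModel, BertLMHeadModel)

# can_generate() is a PreTrainedModel classmethod; filtering on it is the assumed
# replacement for manually listing all_generative_model_classes in every tester.
all_generative_model_classes = tuple(c for c in all_model_classes if c.can_generate())
print(all_generative_model_classes)  # likely just (BertLMHeadModel,)
```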
Arthur
b912f5ee43
use torch.testing.assert_close instead to get more details about errors in CIs ( #35659 )
...
* use torch.testing.assert_close instead to get more details about errors in CIs
* fix
* style
* test_all
* revert for IBert
* fixes and updates
* more image processing fixes
* more image processors
* fix mamba and co
* style
* less strict
* ok I won't be strict
* skip and be done
* up
2025-01-24 16:55:28 +01:00
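What the switch in the commit above buys in practice, condensed: self.assertTrue(torch.allclose(...)) only reports that a boolean was False, while torch.testing.assert_close reports how many elements mismatched and the largest absolute/relative difference, which is the "more details in CIs".

```python
import torch

expected = torch.tensor([0.10, 0.20, 0.30])
actual = torch.tensor([0.10, 0.20, 0.31])

# Old style (inside a TestCase): self.assertTrue(torch.allclose(actual, expected, atol=1e-4))
# only tells CI that the assertion evaluated to False.
try:
    torch.testing.assert_close(actual, expected, rtol=0.0, atol=1e-4)
except AssertionError as err:
    print(err)  # lists mismatched elements and the greatest abs/rel difference
```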
amyeroberts
1de7dc7403
Skip tests properly ( #31308 )
...
* Skip tests properly
* [test_all]
* Add 'reason' as kwarg for skipTest
* [test_all] Fix up
* [test_all]
2024-06-26 21:59:08 +01:00
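A condensed sketch of the pattern the commit above standardizes: skip with an explicit reason so the run is reported as skipped, instead of returning early and being counted as a pass.

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_needs_optional_backend(self):
        backend_available = False  # stand-in condition for the real capability checks
        if not backend_available:
            # Before: "return" silently marked the test as passed.
            # After: an explicit skip with a reason shows up in the report.
            self.skipTest(reason="optional backend not available")
        self.assertTrue(backend_available)


if __name__ == "__main__":
    unittest.main()
```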
amyeroberts
25245ec26d
Rename test_model_common_attributes -> test_model_get_set_embeddings ( #31321 )
...
* Rename to test_model_common_attributes
The method name is misleading - it tests being able to get and set embeddings, not attributes common to all models
* Explicitly skip
2024-06-07 19:40:26 +01:00
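What the renamed test in the commit above actually exercises, condensed (the real implementation lives in test_modeling_common.py and runs over every model class): getting the input embeddings and swapping them for a new module.

```python
import torch.nn as nn
from transformers import BertConfig, BertModel

config = BertConfig(vocab_size=99, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=37)
model = BertModel(config)

# The check behind the new name: embeddings can be read and replaced.
assert isinstance(model.get_input_embeddings(), nn.Embedding)
new_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
model.set_input_embeddings(new_embeddings)
assert model.get_input_embeddings() is new_embeddings
```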
Arthur
673440d073
update ruff version ( #30932 )
...
* update ruff version
* fix research projects
* Empty
* Fix errors
---------
Co-authored-by: Lysandre <lysandre@huggingface.co>
2024-05-22 06:40:15 +02:00
Raushan Turganbay
8e64ba2890
Add tests for batching support ( #29297 )
...
* add tests for batching support
* Update src/transformers/models/fastspeech2_conformer/modeling_fastspeech2_conformer.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/fastspeech2_conformer/modeling_fastspeech2_conformer.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* fixes and comments
* use cosine distance for conv models
* skip mra model testing
* Update tests/models/vilt/test_modeling_vilt.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* finalize and make style
* check model type by input names
* Update tests/models/vilt/test_modeling_vilt.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixed batch size for all testers
* Revert "fixed batch size for all testers"
This reverts commit 525f3a0a05.
* add batch_size for all testers
* dict from model output
* do not skip layoutlm
* bring back some code from git revert
* Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* clean-up
* where did minus go in tolerance
* make whisper happy
* deal with consequences of losing minus
* deal with consequences of losing minus
* maskformer needs its own test for happiness
* fix more models
* tag flaky CV models from Amy's approval
* make codestyle
---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-03-12 17:46:19 +00:00
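The core idea of the batching test added above, condensed; the model and tolerances are illustrative, and the real test also falls back to cosine distance for conv models: a single sample run alone must match its row in a batched forward pass.

```python
import torch
from transformers import BertConfig, BertModel

torch.manual_seed(0)
config = BertConfig(vocab_size=99, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=37)
model = BertModel(config).eval()

batched_ids = torch.randint(0, config.vocab_size, (4, 8))
single_ids = batched_ids[:1]

with torch.no_grad():
    batched_out = model(batched_ids).last_hidden_state
    single_out = model(single_ids).last_hidden_state

# Row 0 of the batched run should match the single-sample run up to numerical noise.
torch.testing.assert_close(batched_out[:1], single_out, rtol=1e-4, atol=1e-4)
```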
Younes Belkada
f7ea959b96
[core / GC / tests] Stronger GC tests ( #27124 )
...
* stronger GC tests
* better tests and skip failing tests
* break down into 3 sub-tests
* break down into 3 sub-tests
* refactor a bit
* more refactor
* fix
* last nit
* credits contrib and suggestions
* credits contrib and suggestions
---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-10-30 19:53:46 +01:00
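The shape of the stronger gradient-checkpointing check above, condensed into one block (the real PR splits it into three sub-tests): enable gradient checkpointing, run a backward pass, and assert that trainable parameters actually received gradients, rather than only checking that backward() did not crash.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(vocab_size=99, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=37, num_labels=2)
model = BertForSequenceClassification(config)
model.gradient_checkpointing_enable()
model.train()

input_ids = torch.randint(0, config.vocab_size, (2, 8))
loss = model(input_ids, labels=torch.tensor([0, 1])).loss
loss.backward()

# The stronger assertion: every trainable parameter got a gradient.
assert all(p.grad is not None for p in model.parameters() if p.requires_grad)
```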
Younes Belkada
3ce3385c47
Revert "Fix gradient checkpointing + fp16 autocast for most models" ( #24420 )
...
Revert "Fix gradient checkpointing + fp16 autocast for most models (#24247 )"
This reverts commit 285a48011d.
2023-06-22 16:11:27 +02:00
Younes Belkada
285a48011d
Fix gradient checkpointing + fp16 autocast for most models ( #24247 )
...
* fix gc bug
* continue PoC on OPT
* fixes
* 🤯
* fix tests
* remove pytest.mark
* fixup
* forward contrib credits from discussions
* forward contrib credits from discussions
* reverting changes on untouched files.
---------
Co-authored-by: zhaoqf123 <zhaoqf123@users.noreply.github.com>
Co-authored-by: 7eu7d7 <7eu7d7@users.noreply.github.com>
2023-06-21 17:04:59 +02:00
Eli Simhayev
4b6a5a7caa
[Time-Series] Autoformer model ( #21891 )
...
* ran `transformers-cli add-new-model-like`
* added `AutoformerLayernorm` and `AutoformerSeriesDecomposition`
* added `decomposition_layer` in `init` and `moving_avg` to config
* added `AutoformerAutoCorrelation` to encoder & decoder
* removed canonical self attention `AutoformerAttention`
* added arguments in config and model tester. Init works! 😁
* WIP autoformer attention with autocorrelation
* fixed `attn_weights` size
* wip time_delay_agg_training
* fixing sizes and debug time_delay_agg_training
* aggregation in training works! 😁
* `top_k_delays` -> `top_k_delays_index` and added `contiguous()`
* wip time_delay_agg_inference
* finish time_delay_agg_inference 😎
* added resize to autocorrelation
* bug fix: added the length of the output signal to `irfft`
* `attention_mask = None` in the decoder
* fixed test: changed attention expected size, `test_attention_outputs` works!
* removed unnecessary code
* apply AutoformerLayernorm in final norm in enc & dec
* added series decomposition to the encoder
* added series decomp to decoder, with inputs
* added trend todos
* added autoformer to README
* added to index
* added autoformer.mdx
* remove scaling and init attention_mask in the decoder
* make style
* fix copies
* make fix-copies
* initial fix-copies
* fix from https://github.com/huggingface/transformers/pull/22076
* make style
* fix class names
* added trend
* added d_model and projection layers
* added `trend_projection` source, and decomp layer init
* added trend & seasonal init for decoder input
* AutoformerModel cannot be copied as it has the decomp layer too
* encoder can be copied from time series transformer
* fixed generation and made distribution output more robust
* use context window to calculate decomposition
* use the context_window for decomposition
* use output_params helper
* clean up AutoformerAttention
* subsequences_length off by 1
* make fix copies
* fix test
* added init for nn.Conv1d
* fix IGNORE_NON_TESTED
* added model_doc
* fix ruff
* ignore tests
* remove dup
* fix SPECIAL_CASES_TO_ALLOW
* do not copy due to conv1d weight init
* remove unused imports
* added short summary
* added label_length and made the model non-autoregressive
* added params docs
* better doc for `factor`
* fix tests
* renamed `moving_avg` to `moving_average`
* renamed `factor` to `autocorrelation_factor`
* make style
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fix configurations
* fix integration tests
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixing `lags_sequence` doc
* Revert "fixing `lags_sequence` doc"
This reverts commit 21e34911e3.
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* model layers now take the config
* added `layer_norm_eps` to the config
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* added `config.layer_norm_eps` to AutoformerLayernorm
* added `config.layer_norm_eps` to all layernorm layers
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix variable names
* added initial pretrained model
* added use_cache docstring
* doc strings for trend and use_cache
* fix order of args
* imports on one line
* fixed get_lagged_subsequences docs
* add docstring for create_network_inputs
* get rid of layer_norm_eps config
* add back layernorm
* update fixture location
* fix signature
* use AutoformerModelOutput dataclass
* fix pretrain config
* no need as default exists
* subclass ModelOutput
* remove layer_norm_eps config
* fix test_model_outputs_equivalence test
* test hidden_states_output
* make fix-copies
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* removed unused attr
* Update tests/models/autoformer/test_modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* use AutoFormerDecoderOutput
* fix formatting
* fix formatting
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-30 10:23:32 +02:00
Eli Simhayev
9eae4aa576
[Time-Series] fix past_observed_mask type ( #22076 )
...
added > 0.5 to `past_observed_mask`
2023-04-03 09:07:21 -04:00
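The one-line fix from the commit above in isolation (values are illustrative): the mask arrives as floats, so it is thresholded before being used as an observed/missing indicator.

```python
import torch

past_observed_mask = torch.tensor([1.0, 0.0, 1.0, 1.0])
# The commit's change: compare against 0.5 so a float mask behaves like a
# boolean observed/missing indicator.
observed = past_observed_mask > 0.5
print(observed)  # tensor([ True, False,  True,  True])
```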
Kashif Rasul
db979f7588
[time series] Add Time series inputs tests ( #21846 )
...
* intial test of inputs
* added test for generation
* remove asserts
* fixed test
* Update tests/models/time_series_transformer/test_modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
---------
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
2023-03-02 20:43:35 +01:00
Yih-Dar
871c31a6f1
🔥 Rework pipeline testing by removing PipelineTestCaseMeta 🚀 ( #21516 )
...
* Add PipelineTesterMixin
* remove class PipelineTestCaseMeta
* move validate_test_components
* Add for ViT
* Add to SPECIAL_MODULE_TO_TEST_MAP
* style and quality
* Add feature-extraction
* update
* raise instead of skip
* add tiny_model_summary.json
* more explicit
* skip tasks not in mapping
* add availability check
* Add Copyright
* A way to disable irrelevant tests
* update with main
* remove disable_irrelevant_tests
* skip tests
* better skip message
* better skip message
* Add all pipeline task tests
* revert
* Import PipelineTesterMixin
* subclass test classes with PipelineTesterMixin
* Add pipeline_model_mapping
* Fix import after adding pipeline_model_mapping
* Fix style and quality after adding pipeline_model_mapping
* Fix one more import after adding pipeline_model_mapping
* Fix style and quality after adding pipeline_model_mapping
* Fix test issues
* Fix import requirements
* Fix mapping for MobileViTModelTest
* Update
* Better skip message
* pipeline_model_mapping could not be None
* Remove some PipelineTesterMixin
* Fix typo
* revert tests_fetcher.py
* update
* rename
* revert
* Remove PipelineTestCaseMeta from ZeroShotAudioClassificationPipelineTests
* style and quality
* test fetcher for all pipeline/model tests
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-02-28 19:40:57 +01:00
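A sketch of the pattern the rework above introduces; the mixins live in the repository's tests package, so this snippet only resolves from inside a transformers checkout, and the ViT mapping is illustrative: each model test class subclasses PipelineTesterMixin and declares a pipeline_model_mapping instead of relying on the old PipelineTestCaseMeta metaclass.

```python
import unittest

from transformers import ViTForImageClassification, ViTModel
from transformers.testing_utils import require_torch

# These relative imports only resolve inside the transformers repository's test suite.
from ...test_modeling_common import ModelTesterMixin
from ...test_pipeline_mixin import PipelineTesterMixin


@require_torch
class ViTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
    # Maps pipeline tasks to the model classes that should back them in pipeline tests.
    pipeline_model_mapping = {
        "feature-extraction": ViTModel,
        "image-classification": ViTForImageClassification,
    }
```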
Kashif Rasul
ba0e370dc1
[time series] updated expected values for integration test. ( #21762 )
...
* updated expected
* prediction_length fix
* prediction_length default value
* default prediction_length 24
* revert back prediction_length default
* move prediction_length test
2023-02-24 12:36:54 +01:00
Kashif Rasul
df06fb1f0b
Time series transformer: input projection and Std scaler ( #21020 )
...
* added loc and scale outputs from scalers
* fix typo
* fix tests
* fixed formatting
* initial StdScaler
* move scaling to optional str
* calculate std feature for scalers
* undid change as it does not help
* added StdScaler with weights
* added input projection layer and d_model hyperparam
* use linear proj
* add back layernorm_embedding
* add sin-cos pos embeddings
* updated scalers
* formatting
* fix type
* fixed test
* fix repeated_past_values calculation
* fix when keepdim=false
* fix default_scale
* backward compatibility of scaling config
* update integration test expected output
* fix style
* fix docs
* use the actual num_static_real_features in feature_dim calculation
* clarified docs
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* prediction_length is not optional
* fix for reviewer
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* get rid of un-needed new lines
* fix doc
* remove unneeded new lines
* fix style
* static_categorical_features and static_real_features are optional
* fix integration test
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixing docs for multivariate setting
* documentation for generate
---------
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-02-22 07:50:13 +01:00
Sylvain Gugger
6f79d26442
Update quality tooling for formatting ( #21480 )
...
* Result of black 23.1
* Update target to Python 3.7
* Switch flake8 to ruff
* Configure isort
* Configure isort
* Apply isort with line limit
* Put the right black version
* adapt black in check copies
* Fix copies
2023-02-06 18:10:56 -05:00
Sylvain Gugger
209bec4636
Add a decorator for flaky tests ( #19498 )
...
* Add a decorator for flaky tests
* Quality
* Don't break the rest
* Address review comments
* Fix test name
* Fix typo and print to stderr
2022-10-12 14:00:17 -04:00
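Usage sketch for the decorator the commit above adds; is_flaky lives in transformers.testing_utils, and the max_attempts kwarg is assumed from the current signature: a failing run is retried a few times before the test is reported as failed.

```python
import random
import unittest

from transformers.testing_utils import is_flaky


class FlakyExampleTest(unittest.TestCase):
    @is_flaky(max_attempts=3)  # retried up to 3 times before counting as a failure
    def test_sometimes_unlucky(self):
        self.assertGreater(random.random(), 0.1)


if __name__ == "__main__":
    unittest.main()
```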
Kashif Rasul
5cd16f01db
time series forecasting model ( #17965 )
...
* initial files
* initial model via cli
* typos
* make a start on the model config
* ready with configuration
* remove tokenizer ref.
* init the transformer
* added initial model forward to return dec_output
* require gluonts
* update dependency version table and add as extra
* fixed typo
* add type for prediction_length
* use num_time_features
* use config
* more config
* typos
* oops, another typo
* freq can be none
* default via transformation is 1
* initial transformations
* fix imports
* added transform_start_field
* add helper to create pytorch dataloader
* added initial val and test data loader
* added initial distr head and loss
* training working
* remove TimeSeriesTransformerTokenizer
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed copyright
* removed docs
* remove time series tokenizer
* fixed docs
* fix text
* fix second
* fix default
* fix order
* use config directly
* undo change
* fix comment
* fix year
* fix import
* add additional arguments for training vs. test
* initial greedy inference loop
* fix inference
* comment out token inputs to enc dec
* Use HF encoder/decoder
* fix inference
* Use Seq2SeqTSModelOutput output
* return Seq2SeqTSPredictionOutput
* added default arguments
* fix return_dict true
* scale is a tensor
* output static_features for inference
* clean up some unused bits
* fixed typo
* set return_dict if none
* call model once for both train/predict
* use cache if future_target is none
* initial generate func
* generate arguments
* future_time_feat is required
* return SampleTSPredictionOutput
* removed unneeded classes
* fix when params is none
* fix return dict
* fix num_attention_heads
* fix arguments
* remove unused shift_tokens_right
* add different dropout configs
* implement FeatureEmbedder, Scaler and weighted_average
* remove gluonts dependency
* fix class names
* avoid _variable names
* remove gluonts dependency
* fix imports
* remove gluonts from configuration
* fix docs
* fixed typo
* move utils to examples
* add example requirements
* config has no freq
* initial run_ts_no_trainer
* remove from ignore
* fix output_attentions and removed unused getters/setters
* removed unused tests
* add dec seq len
* add test_attention_outputs
* set has_text_modality=False
* add config attribute_map
* make style
* make fix-copies
* add encoder_outputs to TimeSeriesTransformerForPrediction forward
* Improve docs, add model to README
* added test_forward_signature
* More improvements
* Add more copied from
* Fix README
* Fix remaining quality issues
* updated encoder and decoder
* fix generate
* output_hidden_states and use_cache are optional
* past key_values returned too
* initialize weights of distribution_output module
* fixed more tests
* update test_forward_signature
* fix return_dict outputs
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* removed commented out tests
* added neg. bin and normal output
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* move to one line
* Add docstrings
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* add try except for assert and raise
* try and raise exception
* fix the documentation formatting
* fix assert call
* fix docstring formatting
* removed input_ids from DOCSTRING
* Update input docstring
* Improve variable names
* Update order of inputs
* Improve configuration
* Improve variable names
* Improve docs
* Remove key_length from tests
* Add extra docs
* initial unittests
* added test_inference_no_head test
* added test_inference_head
* add test_seq_to_seq_generation
* make style
* one line
* assert mean prediction
* removed comments
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fix order of args
* make past_observed_mask optional as well
* added Amazon license header
* updated utils with new fieldnames
* make style
* cleanup
* undo position of past_observed_mask
* fix import
* typo
* more typo
* rename example files
* remove example for now
* Update docs/source/en/_toctree.yml
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update modeling_time_series_transformer.py
fix style
* fixed typo
* fix typo and grammar
* fix style
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-30 15:32:59 -04:00