Partho
692c5be74e
wrap forward passes with torch.no_grad() ( #19439 )
2022-10-10 14:54:36 -04:00
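The `wrap forward passes with torch.no_grad()` commits above (and the similar ones further down) all apply the same test pattern: inference-only forward passes are run without building an autograd graph. A minimal sketch of that pattern, assuming a tiny illustrative checkpoint and a dummy input rather than anything taken from the PRs themselves:

```python
import torch
from transformers import AutoModel  # any PyTorch model class works the same way

def test_inference_no_grad():
    # tiny checkpoint used only for illustration
    model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-bert")
    model.eval()
    input_ids = torch.tensor([[0, 1, 2, 3, 4]])  # dummy token ids

    # the change in these PRs: run the forward pass without tracking gradients
    with torch.no_grad():
        outputs = model(input_ids)

    # tensors produced under no_grad carry no gradient history
    assert outputs.last_hidden_state.requires_grad is False
```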
amyeroberts
e3f028f3af
Add TF whisper ( #19378 )
...
* simplify loop
* add feature extractor
* add model
* start conversion
* add dropout
* initial commit of test files
* conversion for all models
* update processor for correct padding
* update feature extraction
* update integration test logits match
* fmt: off for the logits
* on the fly mel bank
* small nit
* update test
* update tokenizer
* nit feature extraction
* update
* update tokenizer test
* adds logit processor and update tokenizer to get suppress tokens
* style
* clean convert
* revert to original modeling tf utils
* Update
* update
* nit
* clean convert file
* update tests and nits
* quality
* slow generation test
* ffn_dim to allow customization
* update readme
* add to toctree
* start fixing integration tests
* update tests and code
* fix feature extractor
* fix config tests common
* update code to fix tests
* fix feature extractor
* nit feature extraction
* update test for new feature extractor
* style
* add abstract
* large logits with custom decoder input ids
* wrap around is_torch_available
* fix feature extractor
* correct logits for whisper small.en
* nit
* fix encoder_attention_mask
* some fixes
* remove unnecessary inputs
* nits
* add normalizer file
* update test tokenization
* fix attention mask not defined
* fix generate
* remove useless encoder attention mask
* update test modeling whisper
* update config to add second non-suppress tokens
* nits on feature extractor
* nit for test tokenizers
* update tests
* update tests
* update tokenization test
* fixup
* invalidated hf token. Clean convert openai to whisper
* fix logit tests
* fixup
* Add model to README
* Fix doc tests
* clean merge
* revert toc_tree changes
* remove useless LogitProcessor
* Update whisper.mdx
* update config file doc
* update configuration docstring
* update test tokenization
* update test tokenization
* update tokenization whisper
Added copied from where needed
* update feature extraction
* nit test name
* style
* quality
* remove get suppress tokens and update non_speech tokens global variables
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* clean modeling whisper and test
Removed the attention mask arguments that are deprecated
* fix large test
* Add multilingual audio test, and translate test
* style
* fix large multilingual test
* nits
* add copied from for attention layer
* remove attention masks in doc
* add english normalizer
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update tokenization test
* remove copied from in whisper attention : no bias in k_proj only
* wrap around dependencies in english normalizer
* style
* correct import generation logits
* for now, wrap feature extractor with torch
* remove torch dependencies for feature extraction and style
* Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixup
* nit
* update logits
* style
* nit
* nits and fix final tests
* add `is_more_itertools_available` to utils
* quality
* add begin suppress tokens, suppress tokens to generate args and config
* clean suppressTokensLogitProcessor in generation logits
* Nit naming
* add suppressTokensAtBegin
* update tests, suppress tokens to None or correct values
* nit and style
* update RAG to fit test and generate_logit
* add copy-pasted statement on English normalizer
* add arguments to config_common_kwargs
* Update src/transformers/generation_utils.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/generation_logits_process.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* revert changes based on reviews
* update doc and nits
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* more nits
* last nits
* update test configuration common
* add BART name in decoder attention mask documentation
* Update src/transformers/models/whisper/modeling_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* style
* nit
* nit
* add english.json file to git
* nits on documentation
* nit
* nits
* last styling
* add main toctree file
* remove sentence piece dependency
* clean init file
* fix tokenizer that has no dependencies on sentencepiece
* update whisper init file, nit
* remove english.json file
* add get decoder prompt id
* All weights loading
* Remove hanging pdb
* Fixup and tidy up
* Use same copied from as PT model
* Remove whitespace changes
* Remove torch references
* Tie embeddings
* Remove logits processor input to generate
* Update logit values
* revert changes and add forced logit processor
* nit
* clean normalizer
* remove protected
* Add logit processors and update generation code & tests
* Some tidy up
* Update docstring
* update
* update based on review
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update to reflect changes on the PT model branch
* Tidy up
* Remove extra whitespace
* Fix test - make input ids small enough we can append
* Include upstream changes on main
* PR comments - add batch tests, remove comments & defaults
* Fix model output imports
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation_tf_logits_process.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/models/whisper/test_modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update docstring example
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Remove changes to adjust_logits_during_generation function
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Tidy up imports that don't require TF
* Update tests - skip and no more skip
* Update tests/generation/test_generation_tf_logits_process.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Add training flags
* Add (skipped) XLA generation tests
* Add embedding correctness test
* Add constant ids for generation tests
* Make logits finding a bit tidier
* Remove unused args
* xla generation enabled
* Don't skip XLA tests anymore
* Fix tests - add position ids to expected signature and update rag generation
* Undo method reorder
* Remove added whitespace
* Remove copy-paste gradient checkpoint ref
* Remove
* Trigger CI - (issue with refs when pulling)
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: Joao Gante <joao@huggingface.co>
2022-10-10 14:48:17 +01:00
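A rough sketch of what the new TF Whisper classes added above enable: transcribing an audio array with the on-the-fly log-mel feature extraction and the suppress-token generation logic mentioned in the bullets. The checkpoint name and the one-second silent input are placeholders, not taken from the PR:

```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# assumed checkpoint name, for illustration only
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

audio = np.zeros(16000, dtype=np.float32)  # 1 s of silence at 16 kHz as a stand-in for real speech

# the feature extractor computes log-mel input features on the fly
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")

# generate() applies the suppress-token logits processors configured for Whisper
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```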
APAVOU Clément
af69360bf9
Add OPTForQuestionAnswering ( #19402 )
...
* Add `OPTForQuestionAnswering`
- added `OPTForQuestionAnswering` class based on `BloomForQuestionAnswering`
- added `OPTForQuestionAnswering` in common tests
- all common tests pass
- make fixup done
* added docstrings for OPTForQuestionAnswering
* Fix docstrings for OPTForQuestionAnswering
2022-10-10 09:30:59 -04:00
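A hedged sketch of how the new `OPTForQuestionAnswering` head is meant to be used for extractive QA. The checkpoint name is illustrative, and the QA head is randomly initialized until fine-tuned, so the decoded span is not meaningful here:

```python
import torch
from transformers import AutoTokenizer, OPTForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForQuestionAnswering.from_pretrained("facebook/opt-350m")  # QA head untrained at this point

question = "Who maintains the library?"
context = "The transformers library is maintained by Hugging Face."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end positions and decode the span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```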
Matt
4107445a0f
Fix repo names for ESM tests ( #19451 )
2022-10-10 13:20:00 +01:00
Yih-Dar
cbb8a37929
Skip BloomEmbeddingTest.test_embeddings for PyTorch < 1.10 ( #19261 )
...
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-10 10:05:30 +02:00
Arthur
45e14038f2
Add WhisperModel to transformers ( #19166 )
...
* simplify loop
* add feature extractor
* add model
* start conversion
* add dropout
* initial commit of test files
* conversion for all models
* update processor for correct padding
* update feature extraction
* update integration test logits match
* fmt: off for the logits
* on the fly mel bank
* small nit
* update test
* update tokenizer
* nit feature extraction
* update
* update tokenizer test
* adds logit processor and update tokenizer to get suppress tokens
* style
* clean convert
* revert to original modeling tf utils
* Update
* update
* nit
* clean convert file
* update tests and nits
* quality
* slow generation test
* ffn_dim to allow customization
* update readme
* add to toctree
* start fixing integration tests
* update tests and code
* fix feature extractor
* fix config tests common
* update code to fix tests
* fix feature extractor
* nit feature extraction
* update test for new feature extractor
* style
* add abstract
* large logits with custom decoder input ids
* wrap around is_torch_available
* fix feature extractor
* correct logits for whisper small.en
* nit
* fix encoder_attention_mask
* some fixes
* remove unnecessary inputs
* nits
* add normalizer file
* update test tokenization
* fix attention mask not defined
* Add model to README
* Fix doc tests
* fix generate
* remove useless encoder attention mask
* update test modeling whisper
* update config to add second non-suppress tokens
* nits on feature extractor
* nit for test tokenizers
* update tests
* update tests
* update tokenization test
* fixup
* invalidated hf token. Clean convert openai to whisper
* fix logit tests
* fixup
* clean merge
* revert toc_tree changes
* remove useless LogitProcessor
* Update whisper.mdx
* update config file doc
* update configuration docstring
* update test tokenization
* update test tokenization
* update tokenization whisper
Added copied from where needed
* update feature extraction
* nit test name
* style
* quality
* remove get suppress tokens and update non_speech tokens global variables
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* clean modeling whisper and test
Removed the attention mask arguments that are deprecated
* fix large test
* Add multilingual audio test, and translate test
* style
* fix large multilingual test
* nits
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add copied from for attention layer
* remove attention masks in doc
* add english normalizer
* update tokenization test
* remove copied from in whisper attention : no bias in k_proj only
* wrap around dependencies in english normalizer
* style
* correct import generation logits
* for now, wrap feature extractor with torch
* Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* remove torch dependencies for feature extraction and style
* fixup
* nit
* update logits
* style
* nit
* nits and fix final tests
* add `is_more_itertools_available` to utils
* quality
* add begin suppress tokens, suppress tokens to generate args and config
* clean suppressTokensLogitProcessor in generation logits
* Nit naming
* add suppressTokensAtBegin
* update tests, suppress tokens to None or correct values
* nit and style
* update RAG to fit test and generate_logit
* add copy-pasted statement on English normalizer
* add arguments to config_common_kwargs
* Update src/transformers/generation_utils.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/generation_logits_process.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* revert changes based on reviews
* update doc and nits
* more nits
* last nits
* update test configuration common
* add BART name in decoder attention mask documentation
* Update src/transformers/models/whisper/modeling_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* style
* nit
* nit
* add english.json file to git
* nits on documentation
* nit
* nits
* last styling
* add main toctree file
* remove sentence piece dependency
* clean init file
* fix tokenizer that has no dependencies on sentencepiece
* update whisper init file, nit
* remove english.json file
* add get decoder prompt id
* revert changes and add forced logit processor
* nit
* clean normalizer
* remove protected
* update
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* update based on review
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add batched tests
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-05 22:28:31 +02:00
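A hedged sketch of using the PyTorch `WhisperModel`/`WhisperForConditionalGeneration` added above, showing the config-level suppress tokens mentioned in the bullets. The checkpoint name and the silent placeholder audio are assumptions, not taken from the PR:

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small.en")  # assumed checkpoint
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en")

audio = np.zeros(16000, dtype=np.float32)  # placeholder audio: 1 s at 16 kHz
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features

# the config now carries the begin_suppress_tokens / suppress_tokens used by generate()
print(model.config.begin_suppress_tokens)

with torch.no_grad():
    generated_ids = model.generate(input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```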
Alara Dirik
7598791c09
Fix MaskFormer failing postprocess tests ( #19354 )
...
Ensures post_process_instance_segmentation and post_process_panoptic_segmentation methods return a tensor of shape (target_height, target_width) filled with -1 values if no segment with score > threshold is found.
2022-10-05 23:25:58 +03:00
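A minimal sketch, in plain torch rather than the actual MaskFormer code, of the guarantee described above: when no segment clears the score threshold, the returned map still has shape (target_height, target_width) and every pixel is labeled -1:

```python
import torch

def empty_segmentation(target_height: int, target_width: int) -> torch.Tensor:
    """Fallback map when no segment scores above the threshold: every pixel is labeled -1."""
    return torch.full((target_height, target_width), -1, dtype=torch.long)

seg = empty_segmentation(480, 640)
assert seg.shape == (480, 640) and torch.all(seg == -1)
```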
Sylvain Gugger
c875a96eb1
Test failing test while we resolve the issue. ( #19355 )
2022-10-05 12:23:48 -04:00
r-terada
2f53ab5745
Add sudachi and jumanpp tokenizers for bert_japanese ( #19043 )
...
* add sudachipy and jumanpp tokenizers for bert_japanese
* use ImportError instead of ModuleNotFoundError in SudachiTokenizer and JumanppTokenizer
* put test cases of test_tokenization_bert_japanese in one line
* add require_sudachi and require_jumanpp decorator for testing
* add sudachi and pyknp(jumanpp) to dependencies
* remove sudachi_dict_small and sudachi_dict_full from dependencies
* empty commit for ci
2022-10-05 11:41:37 -04:00
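A hedged sketch of selecting the new word-level tokenizers described above. The checkpoint name is the usual cl-tohoku one and is only an assumption here, and the optional sudachipy (or pyknp for "jumanpp") dependency must be installed:

```python
from transformers import BertJapaneseTokenizer

# word_tokenizer_type now accepts "sudachi" and "jumanpp" in addition to "basic" and "mecab"
tokenizer = BertJapaneseTokenizer.from_pretrained(
    "cl-tohoku/bert-base-japanese", word_tokenizer_type="sudachi"
)
print(tokenizer.tokenize("外国人参政権"))
```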
Alara Dirik
07e94bf159
Maskformer post-processing fixes and improvements ( #19172 )
...
- Improves MaskFormer docs, corrects minor typos
- Restructures MaskFormerFeatureExtractor.post_process_panoptic_segmentation for better readability, adds target_sizes argument for optional resizing
- Adds post_process_semantic_segmentation and post_process_instance_segmentation methods.
- Adds a deprecation warning to post_process_segmentation method in favour of post_process_instance_segmentation
2022-10-05 15:27:15 +03:00
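A sketch of the reworked post-processing API described above, assuming a public MaskFormer checkpoint and a placeholder image; the `target_sizes` argument is the optional resizing added in this PR:

```python
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

ckpt = "facebook/maskformer-swin-base-coco"  # assumed checkpoint
feature_extractor = MaskFormerFeatureExtractor.from_pretrained(ckpt)
model = MaskFormerForInstanceSegmentation.from_pretrained(ckpt)

image = Image.new("RGB", (640, 480))  # placeholder image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# target_sizes resizes the predicted maps back to the original (height, width)
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[(480, 640)])[0]
print(result["segmentation"].shape, len(result["segments_info"]))
```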
Younes Belkada
587d84b178
Add BloomForQuestionAnswering ( #19310 )
...
* add bloom for question answering
- attempt to add Bloom for question answering
- adapted from `GPTJForQuestionAnswering`
- Fixed `num_labels` to `2` for common tests
- Added a bit of docstring
- All common tests pass
* Update src/transformers/models/bloom/modeling_bloom.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* revert changes related to `num_labels`
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-04 17:52:13 +02:00
Partho
a9782881a4
wrap forward passes with torch.no_grad() ( #19273 )
2022-10-04 16:13:22 +02:00
Partho
d6e920449e
wrap forward passes with torch.no_grad() ( #19274 )
2022-10-04 16:12:03 +02:00
Partho
2403dbd607
wrap forward passes with torch.no_grad() ( #19278 )
2022-10-04 16:09:23 +02:00
Partho
f134d38553
wrap forward passes with torch.no_grad() ( #19279 )
2022-10-04 16:08:29 +02:00
Kashif Rasul
5cd16f01db
time series forecasting model ( #17965 )
...
* initial files
* initial model via cli
* typos
* make a start on the model config
* ready with configuration
* remove tokenizer ref.
* init the transformer
* added initial model forward to return dec_output
* require gluonts
* update dep. ver table and add as extra
* fixed typo
* add type for prediction_length
* use num_time_features
* use config
* more config
* typos
* oops, another typo
* freq can be none
* default via transformation is 1
* initial transformations
* fix imports
* added transform_start_field
* add helper to create pytorch dataloader
* added initial val and test data loader
* added initial distr head and loss
* training working
* remove TimeSeriesTransformerTokenizer
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed copyright
* removed docs
* remove time series tokenizer
* fixed docs
* fix text
* fix second
* fix default
* fix order
* use config directly
* undo change
* fix comment
* fix year
* fix import
* add additional arguments for training vs. test
* initial greedy inference loop
* fix inference
* comment out token inputs to enc dec
* Use HF encoder/decoder
* fix inference
* Use Seq2SeqTSModelOutput output
* return Seq2SeqTSPredictionOutput
* added default arguments
* fix return_dict true
* scale is a tensor
* output static_features for inference
* clean up some unused bits
* fixed typo
* set return_dict if none
* call model once for both train/predict
* use cache if future_target is none
* initial generate func
* generate arguments
* future_time_feat is required
* return SampleTSPredictionOutput
* removed unneeded classes
* fix when params is none
* fix return dict
* fix num_attention_heads
* fix arguments
* remove unused shift_tokens_right
* add different dropout configs
* implement FeatureEmbedder, Scaler and weighted_average
* remove gluonts dependency
* fix class names
* avoid _variable names
* remove gluonts dependency
* fix imports
* remove gluonts from configuration
* fix docs
* fixed typo
* move utils to examples
* add example requirements
* config has no freq
* initial run_ts_no_trainer
* remove from ignore
* fix output_attentions and removed unsued getters/setters
* removed unused tests
* add dec seq len
* add test_attention_outputs
* set has_text_modality=False
* add config attribute_map
* make style
* make fix-copies
* add encoder_outputs to TimeSeriesTransformerForPrediction forward
* Improve docs, add model to README
* added test_forward_signature
* More improvements
* Add more copied from
* Fix README
* Fix remaining quality issues
* updated encoder and decoder
* fix generate
* output_hidden_states and use_cache are optional
* past key_values returned too
* initialize weights of distribution_output module
* fixed more tests
* update test_forward_signature
* fix return_dict outputs
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* removed commented out tests
* added neg. bin and normal output
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* move to one line
* Add docstrings
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* add try except for assert and raise
* try and raise exception
* fix the documentation formatting
* fix assert call
* fix docstring formatting
* removed input_ids from DOCSTRING
* Update input docstring
* Improve variable names
* Update order of inputs
* Improve configuration
* Improve variable names
* Improve docs
* Remove key_length from tests
* Add extra docs
* initial unittests
* added test_inference_no_head test
* added test_inference_head
* add test_seq_to_seq_generation
* make style
* one line
* assert mean prediction
* removed comments
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fix order of args
* make past_observed_mask optional as well
* added Amazon license header
* updated utils with new fieldnames
* make style
* cleanup
* undo position of past_observed_mask
* fix import
* typo
* more typo
* rename example files
* remove example for now
* Update docs/source/en/_toctree.yml
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update modeling_time_series_transformer.py
fix style
* fixed typo
* fix typo and grammar
* fix style
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-30 15:32:59 -04:00
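A minimal sketch of instantiating the new time series model; the concrete parameter values below are arbitrary, and the forward pass (past_values, past_time_features, past_observed_mask, ...) is left out because it needs a real GluonTS-style dataset as in the example utilities mentioned in the bullets:

```python
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

config = TimeSeriesTransformerConfig(
    prediction_length=24,             # horizon to forecast
    context_length=48,                # history fed to the encoder
    distribution_output="student_t",  # distribution head; "normal" and "negative_binomial" were added too
)
model = TimeSeriesTransformerForPrediction(config)
print(sum(p.numel() for p in model.parameters()))  # rough model size
```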
Yih-Dar
f33858d18a
Fix Encoder-Decoder testing issue about repo. names ( #19250 )
...
* Change "../gpt2" to "gpt2"
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-30 18:15:07 +02:00
Matt
368b649af6
Rebase ESM PR and update all file formats ( #19055 )
...
* Rebase ESM PR and update all file formats
* Fix test relative imports
* Add __init__.py to the test dir
* Disable gradient checkpointing
* Remove references to TFESM... FOR NOW >:|
* Remove completed TODOs from tests
* Convert docstrings to mdx, fix-copies from BERT
* fix-copies for the README and index
* Update ESM's __init__.py to the modern format
* Add to _toctree.yml
* Ensure we correctly copy the pad_token_id from the original ESM model
* Ensure we correctly copy the pad_token_id from the original ESM model
* Tiny grammar nitpicks
* Make the layer norm after embeddings an optional flag
* Make the layer norm after embeddings an optional flag
* Update the conversion script to handle other model classes
* Remove token_type_ids entirely, fix attention_masking and add checks to convert_esm.py
* Break the copied from link from BertModel.forward to remove token_type_ids
* Remove debug array saves
* Begin ESM-2 porting
* Add a hacky workaround for the precision issue in original repo
* Code cleanup
* Remove unused checkpoint conversion code
* Remove unused checkpoint conversion code
* Fix copyright notices
* Get rid of all references to the TF weights conversion
* Remove token_type_ids from the tests
* Fix test code
* Update src/transformers/__init__.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/__init__.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Add credit
* Remove _ args and __ kwargs in rotary embedding
* Assertively remove asserts
* Replace einsum with torch.outer()
* Fix docstring formatting
* Remove assertions in tokenization
* Add paper citation to ESMModel docstring
* Move vocab list to single line
* Remove ESMLayer from init
* Add Facebook copyrights
* Clean up RotaryEmbedding docstring
* Fix docstring formatting
* Fix docstring for config object
* Add explanation for new config methods
* make fix-copies
* Rename all the ESM- classes to Esm-
* Update conversion script to allow pushing to hub
* Update tests to point at my repo for now
* Set config properly for tests
* Remove the gross hack that forced loss of precision in inv_freq and instead copy the data from the model being converted
* make fixup
* Update expected values for slow tests
* make fixup
* Remove EsmForCausalLM for now
* Remove EsmForCausalLM for now
* Fix padding idx test
* Updated README and docs with ESM-1b and ESM-2 separately (#19221 )
* Updated README and docs with ESM-1b and ESM-2 separately
* Update READMEs, longer entry with 3 citations
* make fix-copies
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Tom Sercu <tsercu@fb.com>
Co-authored-by: Your Name <you@example.com>
2022-09-30 14:16:25 +01:00
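A hedged sketch of using the ported ESM-2 model for masked language modeling on a protein sequence. The checkpoint name is an assumption (the PR itself temporarily pointed tests at a personal repo):

```python
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

ckpt = "facebook/esm2_t6_8M_UR50D"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = EsmForMaskedLM.from_pretrained(ckpt)

sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, sequence length + special tokens, vocab size)
```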
NielsRogge
f3d2f7a6e0
Add MarkupLM ( #19198 )
...
* First draft
* Make basic test work
* Fix most tokenizer tests
* More improvements
* Make more tests pass
* Fix more tests
* Fix some code quality
* Improve truncation
* Implement feature extractor
* Improve feature extractor and add tests
* Improve feature extractor tests
* Fix pair_input test partly
* Add fast tokenizer
* Improve implementation
* Fix rebase
* Fix rebase
* Fix most of the tokenizer tests.
* propose solution for fast
* add: integration test for fasttokenizer, warning for decode, fix template in slow tokenizer
* add: modify markuplmconverter
* add: some modifications to the converter and fast tokenizer
* Fix style, copies
* Make fixup
* Update tokenization_markuplm.py
* Update test_tokenization_markuplm.py
* Update markuplm related
* Improve processor, add integration test
* Add processor test file
* Improve processor
* Improve processor tests
* Fix more processor tests
* Fix processor tests
* Update docstrings
* Add Copied from statements
* Add more Copied from statements
* Add code examples
* Improve code examples
* Add model to doc tests
* Adding dependency check
* Add dummy file
* Add requires_backends
* Add model to toctree
* Fix more things, disable dependency check for now
* Apply more suggestions
* Add soft dependency
* Add annotators to tests
* Fix style
* Remove from_slow=True
* Remove print statements
* Add sanity check
* Fix processor test
* Fix processor tests, add more docs
* Add doc tests for mdx file
* Add more tips
* Apply suggestions
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
Co-authored-by: lockon-n <45759388+lockon-n@users.noreply.github.com>
Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
Co-authored-by: lockon-n <dd098309@126.com>
2022-09-30 08:25:43 +02:00
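A hedged sketch of the processor added in this PR, which extracts nodes and XPaths from raw HTML and tokenizes them in one call (the checkpoint name is an assumption):

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")  # assumed checkpoint

html = "<html><body><h1>Hello world</h1><p>Welcome to MarkupLM.</p></body></html>"
encoding = processor(html, return_tensors="pt")
# besides input_ids, MarkupLM receives XPath tag/subscript sequences describing each node
print(list(encoding.keys()))
```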
Aritra Roy Gosthipaty
0dc7b3a785
[TensorFlow] Adding GroupViT ( #18020 )
...
* chore: initial commit
* chore: adding util methods
yet to work on the nn.functional.interpolate port with align_corners=True
* chore: refactor the utils
* used tf.compat.v1.image.resize to align the F.interpolate function
* added type hints to the method signatures
* added references to the gists where one-to-one alignment of torch and tf has been shown
* chore: adding the layers
* chore: porting all the layers from torch to tf
This is the initial draft, nothing is tested yet.
* chore: aligning the layers with reference to tf clip
* chore: aligning the modules
* added demarcation comments
* added copied and adapted from comments
* chore: aligning with CLIP
* chore: wrangling the layers to keep it tf compatible
* chore: aligning the names of the layers for porting
* chore: style changes
* chore: adding docs and inits
* chore: adding tfp dependencies
the code is taken from TAPAS
* chore: initial commit for testing
* chore: aligning the vision embeddings with the vit implementation
* chore: changing model prefix
* chore: fixing the name of the model and the layer normalization test case
* chore: every test passes but the slow ones
* chore: fix style and integration test
* chore: moving comments below decorators
* chore: make fixup and fix-copies changes
* chore: adding the Vision and Text Model to check_repo
* chore: modifying the prefix name to align it with the torch implementation
* chore: fix typo in configuration
* chore: changing the name of the model variable
* chore: adding segmentation flag
* chore: gante's review
* chore: style refactor
* chore: amy review
* chore: adding shape_list to parts that have been copied from other snippets
* chore: init batchnorm with torch defaults
* chore: adding shape_list to pass the tests
* test fix: adding seed as 0
* set seed
* chore: changing the straight-through trick to fix -ve dimensions
* chore: adding a dimension to the loss
* chore: adding reviewers and contributors names to the docs
* chore: added changes after review
* chore: code quality fixup
* chore: fixing the segmentation snippet
* chore: adding to the layer calls
* chore: changing int32 to int64 for inputs of serving
* chore: review changes
* chore: style changes
* chore: remove from_pt=True
* fix: repo consistency
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-29 10:48:04 +01:00
Gabriele Sarti
9d732fd2dd
XGLM - Fix Softmax NaNs when using FP16 ( #18057 )
...
* fix fp16 for xglm
* Removed misleading comment
* Fix undefined variable
Co-authored-by: Gabriele Sarti <gsarti@amazon.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2022-09-29 10:42:07 +02:00
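The usual way such FP16 softmax NaNs are avoided (a general pattern, not necessarily the exact patch in #18057) is to compute the softmax in float32 and cast back to the working dtype:

```python
import torch
import torch.nn.functional as F

def stable_softmax(attn_scores: torch.Tensor) -> torch.Tensor:
    # do the reduction in fp32 so large negative masks don't overflow to NaN under fp16,
    # then cast back to the original dtype for the rest of the attention computation
    if attn_scores.dtype == torch.float16:
        return F.softmax(attn_scores, dim=-1, dtype=torch.float32).to(torch.float16)
    return F.softmax(attn_scores, dim=-1)
```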
Sylvain Gugger
c20b2c7e18
Use repo_type instead of deprecated datasets repo IDs ( #19202 )
...
* Use repo_type instead of deprecated datasets repo IDs
* Add missing one in doc
2022-09-26 09:50:48 -04:00
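The change amounts to passing the explicit `repo_type` argument instead of the deprecated `datasets/<name>` repo ID prefix; a sketch with a hypothetical dataset repo and filename:

```python
from huggingface_hub import hf_hub_download

# before: hf_hub_download(repo_id="datasets/hf-internal-testing/fixtures", filename="sample.wav")
path = hf_hub_download(
    repo_id="hf-internal-testing/fixtures",  # hypothetical dataset repo
    filename="sample.wav",                   # hypothetical file
    repo_type="dataset",
)
```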
Yih-Dar
ea75e9f10e
Use assertAlmostEqual in BloomEmbeddingTest.test_logits ( #19200 )
...
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-26 14:56:41 +02:00
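The switch is from exact equality to `unittest`'s tolerance-based comparison; a generic sketch of the idiom, with made-up values:

```python
import unittest

class LogitsTest(unittest.TestCase):
    def test_logits_close(self):
        predicted, expected = 3.14160, 3.14159
        # compares up to `places` decimal places instead of requiring exact equality
        self.assertAlmostEqual(predicted, expected, places=3)
```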
Alara Dirik
7e84723fe4
Add semantic segmentation post-processing method to MobileViT ( #19105 )
...
* add post-processing method for semantic segmentation
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-23 16:24:28 +03:00
Sayak Paul
3a396c59b8
fix: ckpt paths. ( #19159 )
2022-09-22 11:03:01 -04:00
Sayak Paul
2d9853b226
MSN (Masked Siamese Networks) for ViT ( #18815 )
...
* feat: modeling and conversion scripts for msn.
* chore: change license year.
* chore: remove unneeded modules.
* feat: direct loading of state_dict from remote url.
* fix: import paths.
* add: rest of the files.
* add and fix rest of the files.
Co-authored-by: Niels <niels.rogge1@gmail.com>
* chore: formatting.
* code quality fix.
* chore: remove pooler.
* feat: add classification top.
* fix: configuration object.
* add: initial test cases (one failing).
* fix: basemodeloutput.
* add: caution on using the classification head.
* add: rest of the model related files.
* add: vit msn readme.
* fix: copied from statement.
* fix: dummy objects.
* add: ViTMSNPreTrainedModel to inits.
* fix: repo consistency.
* minor change in the model doc.
* fix: tests.
* Empty-Commit
* Update src/transformers/models/vit_msn/configuration_vit_msn.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* address PR comments.
* Update src/transformers/models/vit_msn/modeling_vit_msn.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* chore: put model in no_grad() and formatting.
Co-authored-by: Niels <niels.rogge1@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2022-09-22 07:15:03 -04:00
Younes Belkada
4d0f8c05f5
Add accelerate support for ViLT ( #18683 )
2022-09-22 13:14:39 +02:00
NielsRogge
9393f966bc
[fix] Add DeformableDetrFeatureExtractor ( #19140 )
...
* Add DeformableDetrFeatureExtractor
* Fix post_process
* Fix name
* Add tests for feature extractor
* Fix doc tests
* Fix name
* Address comments
* Apply same fix to DETR and YOLOS as well
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-09-22 09:45:24 +02:00
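A hedged sketch of the detection workflow the new feature extractor supports, pairing preprocessing with the post-processing fixed here. The checkpoint name and the blank placeholder image are assumptions:

```python
import torch
from PIL import Image
from transformers import DeformableDetrFeatureExtractor, DeformableDetrForObjectDetection

ckpt = "SenseTime/deformable-detr"  # assumed checkpoint
feature_extractor = DeformableDetrFeatureExtractor.from_pretrained(ckpt)
model = DeformableDetrForObjectDetection.from_pretrained(ckpt)

image = Image.new("RGB", (640, 480))  # placeholder image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post_process rescales predicted boxes back to the original (height, width)
target_sizes = torch.tensor([[480, 640]])
results = feature_extractor.post_process(outputs, target_sizes=target_sizes)
print(results[0]["scores"].shape, results[0]["boxes"].shape)
```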
DepuMeng
126a739058
Add support for conditional detr ( #18948 )
...
* added conditional_detr files
* checked copies
* checked copies
* fixed style and copies
* fixed style and copies
* fixed hub
* fixed style
* Update README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/_toctree.yml
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/index.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/convert_conditional_detr_original_pytorch_checkpoint_to_pytorch.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/conditional_detr.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed some doc issue
* changed prefix to ConditionalDetr
* fixed docs
* Update README_ko.md
* added spatial_model_name
* fixed fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added some copied from
* added some copied from
* added some copied from
* added some copied from
* fixed use_pretrained issue
* changed post-process
* added conditional_detr files
* checked copies
* checked copies
* fixed style and copies
* fixed style and copies
* fixed hub
* fixed style
* Update README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/_toctree.yml
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/index.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/convert_conditional_detr_original_pytorch_checkpoint_to_pytorch.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed some doc issue
* Update docs/source/en/model_doc/conditional_detr.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* changed prefix to ConditionalDetr
* fixed docs
* Update README_ko.md
* added spatial_model_name
* fixed fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added some copied from
* added some copied from
* added some copied from
* added some copied from
* fixed use_pretrained issue
* changed post-process
* fix style quality and copies
* fix style quality and copies
* fix style quality and copies
* fix style quality and copies
* add more fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed some variable names & added more fix-copies
* fixed some variable names & added more fix-copies
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added more copied from
* fixed quality
* changed pretrained config
* added more copied-from and fixed the issue in feature_extraction_auto
* added conditional_detr files
* checked copies
* checked copies
* fixed style and copies
* fixed style and copies
* fixed hub
* fixed style
* Update README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/_toctree.yml
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/index.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/convert_conditional_detr_original_pytorch_checkpoint_to_pytorch.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed some doc issue
* Update docs/source/en/model_doc/conditional_detr.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* changed prefix to ConditionalDetr
* fixed docs
* Update README_ko.md
* added spatial_model_name
* fixed fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added some copied from
* added some copied from
* added some copied from
* added some copied from
* fixed use_pretrained issue
* changed post-process
* added conditional_detr files
* checked copies
* fixed style and copies
* fixed some doc issue
* changed prefix to ConditionalDetr
* fixed docs
* added spatial_model_name
* fixed fix-copies
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added some copied from
* added some copied from
* added some copied from
* added some copied from
* fix style quality and copies
* fix style quality and copies
* fix style quality and copies
* add more fix-copies
* fixed some variable names & added more fix-copies
* fixed some variable names & added more fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added more copied from
* fixed quality
* changed pretrained config
* added more copied-from and fixed the issue in feature_extraction_auto
* fixed style
* added conditional_detr files
* checked copies
* checked copies
* fixed style and copies
* fixed style and copies
* fixed hub
* fixed style
* Update README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/_toctree.yml
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/index.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/convert_conditional_detr_original_pytorch_checkpoint_to_pytorch.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed some doc issue
* Update docs/source/en/model_doc/conditional_detr.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* changed prefix to ConditionalDetr
* fixed docs
* Update README_ko.md
* added spatial_model_name
* fixed fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added some copied from
* added some copied from
* added some copied from
* added some copied from
* fixed use_pretrained issue
* changed post-process
* added conditional_detr files
* checked copies
* fixed style and copies
* fixed some doc issue
* changed prefix to ConditionalDetr
* fixed docs
* added spatial_model_name
* fixed fix-copies
* Update src/transformers/models/conditional_detr/modeling_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added some copied from
* added some copied from
* added some copied from
* added some copied from
* fix style quality and copies
* fix style quality and copies
* fix style quality and copies
* add more fix-copies
* fixed some variable names & added more fix-copies
* fixed some variable names & added more fix-copies
* Update src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/conditional_detr/configuration_conditional_detr.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* added more copied from
* fixed quality
* changed pretrained config
* added more copied-from and fixed the issue in feature_extraction_auto
* rebased
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Depu Meng <depumeng@Depus-MacBook-Pro.local>
2022-09-22 09:45:04 +02:00
Alara Dirik
e7fdfc720a
Add post_process_semantic_segmentation method to DPTFeatureExtractor ( #19107 )
...
* add post-processing method for semantic segmentation
* add test for post-processing
2022-09-21 15:15:26 +03:00
Alara Dirik
9e95706648
Add post_process_semantic_segmentation method to SegFormer ( #19072 )
...
* add post_process_semantic_segmentation method to SegformerFeatureExtractor
* add test for semantic segmentation post-processing
2022-09-21 11:40:35 +03:00
Yih-Dar
18643ff29a
Skip test_export_to_onnx for LongT5 if torch < 1.11 ( #19122 )
...
* Skip if torch < 1.11
* fix quality
* fix import
* fix typo
* fix condition
* fix condition
* fix condition
* fix quality
* fix condition
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-20 21:52:18 +02:00
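A generic sketch of the kind of guard described in the title (not the literal diff), skipping a test when the installed torch is older than 1.11:

```python
import unittest

import torch
from packaging import version


class LongT5OnnxTest(unittest.TestCase):
    @unittest.skipIf(
        version.parse(torch.__version__) < version.parse("1.11"),
        "ONNX export of LongT5 needs torch >= 1.11",
    )
    def test_export_to_onnx(self):
        ...  # the actual export logic lives in the LongT5 test file
```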
Alara Dirik
36b9a99433
Fix BeitFeatureExtractor postprocessing ( #19119 )
...
* return post-processed segmentations as list, add test
* use torch to resize logits
* fix assertion error if no target_size is specified
2022-09-20 18:53:40 +03:00
Joao Gante
658010c739
TF: tests for (de)serializable models with resized tokens ( #19013 )
...
* resized models that we can actually load
* separate embeddings check
* add test for embeddings out of bounds
* add fake slows
2022-09-16 16:38:08 +01:00
Michael Benayoun
c603c80f46
FX support for ConvNext, Wav2Vec2 and ResNet ( #19053 )
...
* Support for ConvNext
* Support for Wav2Vec2
* Support for Resnet
* Fix small issue in test_modeling_convnext
2022-09-16 10:57:41 +02:00
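Enabling torch.fx support means these models can be traced with `transformers.utils.fx.symbolic_trace`; a hedged sketch for ConvNeXt, where the checkpoint name and the `input_names` value are assumptions:

```python
from transformers import ConvNextForImageClassification
from transformers.utils.fx import symbolic_trace

model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")  # assumed checkpoint
traced = symbolic_trace(model, input_names=["pixel_values"])
print(traced.graph)  # the captured computation graph, ready for further transformation
```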
Shijie Wu
f3d3863255
fix arg name in BLOOM testing and remove unused arg document ( #18843 )
2022-09-15 20:25:32 +02:00
Nicolas Patry
68bb33d770
Fixing OPT fast tokenizer option. ( #18753 )
...
* Fixing OPT fast tokenizer option.
* Remove dependency on `pt`.
* Move it to GPT2 tokenization tests.
* Added a few tests.
2022-09-15 17:12:58 +02:00
Yih-Dar
0a42b61ede
Fix test_save_load for TFViTMAEModelTest ( #19040 )
...
* Fix test_save_load for TFViTMAEModelTest
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-15 15:21:57 +02:00
SaulLu
0efbb6e93e
fix GPT2 token's special_tokens_mask when used with add_bos_token=True ( #19036 )
2022-09-14 19:32:12 +02:00
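A sketch of the behaviour the fix targets: with `add_bos_token=True`, the returned `special_tokens_mask` should flag the prepended BOS token. The standard gpt2 checkpoint and the expected mask shown in the comment are illustrative:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2", add_bos_token=True)
encoding = tokenizer("Hello world", return_special_tokens_mask=True)

print(encoding["input_ids"])
# after the fix, the first position (the BOS token) is marked as special
print(encoding["special_tokens_mask"])  # e.g. [1, 0, 0]
```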
Sylvain Gugger
4eb36f2921
Mark right save_load test as slow ( #19031 )
2022-09-14 10:38:39 -04:00
Shinya Otani
f5f430e5c8
Add support for Japanese GPT-NeoX-based model by ABEJA, Inc. ( #18814 )
...
* add gpt-neox-japanese model and tokenizer as new model
* Correction to PR's comment for GPT NeoX Japanese
- Fix to be able to use gpu
- Add comment # Copied... at the top of RotaryEmbedding
- Implement nn.Linear instead of original linear class
- Add generation test under @slow
* fix bias treatment for gpt-neox-japanese
* Modify gpt-neox-japanese following PR
- add doc for bias_dropout_add
- style change following a PR comment
* add document for gpt-neox-japanese
* remove unused import from gpt-neox-japanese
* fix README for gpt-neox-japanese
2022-09-14 10:17:40 -04:00
Sylvain Gugger
1207deb806
Typo fix
2022-09-14 10:02:14 -04:00
Sylvain Gugger
e1224a2a0f
Making save_load test slow as it times out
2022-09-14 10:01:22 -04:00
Yih-Dar
77b18783c2
Fix CI for PegasusX ( #19025 )
...
* Skip test_torchscript_output_attentions for PegasusXModelTest
* fix test_inference_no_head
* fix test_inference_head
* fix test_seq_to_seq_generation
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-14 14:45:00 +02:00
Sylvain Gugger
6f8f2f6a77
Make AutoProcessor a magic loading class for all modalities ( #18963 )
...
* Make AutoProcessor a magic loading class for all modalities
* Quality
2022-09-14 07:36:12 -04:00
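After this change `AutoProcessor.from_pretrained` resolves to whatever preprocessing class a checkpoint ships: a full processor, a tokenizer, or a feature extractor. A sketch with two well-known public checkpoints:

```python
from transformers import AutoProcessor

# multimodal checkpoint -> returns a full processor (tokenizer + feature extractor)
clip_processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

# text-only checkpoint -> falls back to the tokenizer
bert_processor = AutoProcessor.from_pretrained("bert-base-uncased")

print(type(clip_processor).__name__, type(bert_processor).__name__)
```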
NielsRogge
59407bbeb3
Add Deformable DETR ( #17281 )
...
* First draft
* More improvements
* Improve model, add custom CUDA code
* Import torch before
* Add script that imports custom layer
* Add everything in new ops directory
* Import custom layer in modeling file
* Fix ARCHIVE_MAP typo
* Creating the custom kernel on the fly.
* Import custom layer in modeling file
* More improvements
* Fix CUDA loading
* More improvements
* Improve conversion script
* Improve conversion script
* Make it work until encoder_outputs
* Make forward pass work
* More improvements
* Make logits match original implementation
* Make implementation also support single_scale model
* Add support for single_scale and dilation checkpoint
* Add support for with_box_refine model
* Support also two stage model
* Improve tests
* Fix more tests
* Make more tests pass
* Upload all models to the hub
* Clean up some code
* Improve decoder outputs
* Rename intermediate hidden states and reference points
* Improve model outputs
* Move tests to dedicated folder
* Improve model outputs
* Fix retain_grad test
* Improve docs
* Clean up and make test_initialization pass
* Improve variable names
* Add copied from statements
* Improve docs
* Fix style
* Improve docs
* Improve docs, move tests to model folder
* Fix rebase
* Remove DetrForSegmentation from auto mapping
* Apply suggestions from code review
* Improve variable names and docstrings
* Apply some more suggestions from code review
* Apply suggestion from code review
* better docs and variables names
* hint to num_queries and two_stage confusion
* remove asserts and code refactor
* add exception if two_stage is True and with_box_refine is False
* use f-strings
* Improve docs and variable names
* Fix code quality
* Fix rebase
* Add require_torch_gpu decorator
* Add pip install ninja to CI jobs
* Apply suggestion of @sgugger
* Remove DeformableDetrForObjectDetection from auto mapping
* Remove DeformableDetrModel from auto mapping
* Add model to toctree
* Add model back to mappings, skip model in pipeline tests
* Apply @sgugger's suggestion
* Fix imports in the init
* Fix copies
* Add CPU implementation
* Comment out GPU function
* Undo previous change
* Apply more suggestions
* Remove require_torch_gpu annotator
* Fix quality
* Add logger.info
* Fix logger
* Fix variable names
* Fix initialization
* Add missing initialization
* Update checkpoint name
* Add model to doc tests
* Add CPU/GPU equivalence test
* Add Deformable DETR to pipeline tests
* Skip model for object detection pipeline
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-09-14 11:45:21 +02:00
Yih-Dar
ad5045e3e3
add missing require_tf for TFOPTGenerationTest ( #19010 )
...
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-13 18:10:11 +02:00
Joao Gante
1182b945a6
TF: TF 2.10 unpin + related onnx test skips ( #18995 )
2022-09-12 19:30:27 +01:00
Matt
c126a239bc
Fix tflongformer int dtype ( #18907 )
...
* Use int64 throughout TFLongFormer
* make style
* Do some more fixed casting in TFLongFormer
* Fix some wonky "is None" conditionals
* Cast all the dtypes, salt the earth
* Fix copies to TFLED as well and do some casting there
* dtype fix in TFLongformer test
* Make fixup
* Expand tolerances on the LED tests too (I think this is a TF32 thing)
* Expand test tolerances for LED a tiny bit (probably a Tensorfloat thing again)
2022-09-12 17:51:10 +01:00
Yih-Dar
0b36970371
Remove decoder_position_ids from check_decoder_model_past_large_inputs ( #18980 )
...
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-09-12 15:19:48 +02:00