Saad Mahmud
5b62f8ea2b
Add to DeBERTa resources ( #20155 )
...
* Add to DeBERTa resources
* Fix mistakes with chapter number
* Add fill-mask pipeline
* Add sequence, token and QA pipeline
* Change token classification pipeline order
* Remove flax script and notebook links
2022-11-15 13:26:07 -05:00
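Since this entry adds pipeline snippets to the DeBERTa resources page, a minimal hedged sketch of the fill-mask usage it refers to; the `microsoft/deberta-base` checkpoint name is an assumption, and any DeBERTa checkpoint that ships a masked-LM head can be substituted.

```python
from transformers import pipeline

# Hypothetical example; assumes a DeBERTa checkpoint with a masked-LM head.
unmasker = pipeline("fill-mask", model="microsoft/deberta-base")
print(unmasker("Paris is the [MASK] of France."))
```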
Suraj Patil
7f74433814
[CLIP] allow loading projection layer in vision and text model ( #18962 )
...
* allow loading projection in text and vision model
* begin tests
* finish test for CLIPTextModelTest
* style
* add slow tests
* add new classes for projection heads
* remove with_projection
* add in init
* add in doc
* fix tests
* fix some more tests
* fix copies
* fix docs
* remove leftover from fix-copies
* add the head models in IGNORE_NON_AUTO_CONFIGURED
* fix docstr
* fix tests
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstr for models
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-11-15 17:50:07 +01:00
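A hedged sketch of the standalone projection-head classes this PR introduces, which return projected embeddings without loading the full dual-encoder; the `openai/clip-vit-base-patch32` checkpoint name is an assumption, not taken from the commit.

```python
import torch
from transformers import AutoTokenizer, CLIPTextModelWithProjection

# CLIPTextModelWithProjection loads only the text tower plus its projection layer
# (checkpoint name is illustrative).
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")

inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.text_embeds.shape)  # (batch_size, projection_dim)
```

`CLIPVisionModelWithProjection` can be used the same way for the image side.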
Younes Belkada
163ac3d3ee
Add Switch transformers ( #19323 )
...
* first commit
* add more comments
* add router v1
* clean up
- remove `tf` modeling files
* clean up
- remove `tf` modeling files
* clean up
* v0 routers
* added more router
- Implemented `ExpertsChooseMaskedRouter`
- added tests
- 2 more routers to implement
* last router
* improved docstring
- completed the docstring in `router.py`
- added more args in the config
* v0 sparse mlp
* replace wrong naming
* forward pass run
* update MOE layer
* small router update
* fixup
* consistency
* remove scatter router
* remove abstract layer
* update test and model for integration testing
* v1 conversion
* update
* hardcode hack
* all keys match
* add gin conversion, without additional libraries
* update conversion script
* delete router file
* update tests wrt router deletion
* fix router issues
* update expert code
* update, logits match, code needs refactoring
* Refactor code
Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
* add generate tests
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
* add support for router loss
Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
* fix forward error
* refactor a bit
* remove `FlaxSwitchTransformers` modules
* more tests pass
* Update code
Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
* fixup
* fix tests
* fix doc
* fix doc + tokenization
* fix tokenizer test
* fix test
* fix loss output
* update code for backward pass
* add loss support
* update documentation
* fix documentation, clean tokenizer
* more doc fixes, clean up example_switch
* fix failing test
* fix test
* fix test
* fix loss issue
* move layer
* update doc and fix router capacity usage
* fixup
* add sparse mlp index for documentation on hub
* fixup
* test sparse mix architecture
* Apply suggestions from code review
* Update docs/source/en/model_doc/switch_transformers.mdx
* fixup on update
* fix tests
* fix another test
* attempt fix
* Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/switch_transformers/convert_switch_transformers_original_flax_checkpoint_to_pytorch.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* try
* all tests pass
* fix jitter noise
* Apply suggestions from code review
* doc tests pass
* Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/switch_transformers/modeling_switch_transformers.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* remove assert
* change config order
* fix readme japanese
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* remove parallelizable tests + add one liners
* remove ONNX config
* fix nits
- add `T5Tokenizer` in auto mapping
- remove `Switch Transformers` from ONNX supported models
* remove `_get_router`
* remove asserts
* add check in test for `router_dtype`
* add `SwitchTransformersConfig` in `run_pipeline_test`
* Update tests/pipelines/test_pipelines_summarization.py
* add huge model conversion script
* fix slow tests
- add better casting for `Linear8bitLt`
- remove `torchscript` tests
* add make dir
* style on new script
* fix nits
- doctest
- remove `_keys_to_ignore_on_load_unexpected`
* Update src/transformers/models/switch_transformers/configuration_switch_transformers.py
* add google as authors
* fix year
* remove last `assert` statements
* standardize vertical spaces
* fix failing import
* fix another failing test
* Remove strange `authorized_keys`
* removing todo and padding that is never used
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: ybelkada <younes@huggingface.co>
Co-authored-by: Younes Belkada <younesbelkada@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Arthur Zucker <arthur@huggingface.co>
2022-11-15 13:06:45 +01:00
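A hedged smoke-test sketch for the new Switch Transformers model; the `google/switch-base-8` checkpoint name is an assumption, and the span-corruption prompt relies on the T5-style tokenizer noted in the commit.

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

# Sketch only; checkpoint name assumed. Switch Transformers reuse the T5 tokenizer
# and sentinel tokens, so a span-corruption prompt works as a quick check.
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

input_ids = tokenizer(
    "A <extra_id_0> walks into a bar and orders a <extra_id_1> with a pinch of salt.",
    return_tensors="pt",
).input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```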
Bartosz Szmelczynski
78a471ff71
Fix tapas scatter ( #20149 )
...
* First draft
* Remove scatter dependency
* Add require_torch
* update vectorized sum test, add clone call
* remove artifacts
* fix style
* fix style v2
* remove "scatter" mentions from the code base
* fix isort error
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-11-14 01:04:26 -05:00
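The fix above replaces the external torch-scatter dependency with native PyTorch ops. The snippet below is only a generic illustration of that kind of replacement (not the actual TAPAS code), summing values per segment with `index_add_`.

```python
import torch

# Illustrative only: a segment sum without torch-scatter, using native index_add_.
values = torch.tensor([1.0, 2.0, 3.0, 4.0])
segment_ids = torch.tensor([0, 0, 1, 1])
num_segments = int(segment_ids.max()) + 1

out = torch.zeros(num_segments).index_add_(0, segment_ids, values)
print(out)  # tensor([3., 7.])
```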
Matthijs Hollemans
f711d683b5
add MobileNetV2 model ( #17845 )
...
* add model files etc for MobileNetV2
* rename files for MobileNetV1
* initial implementation of MobileNetV1
* fix conversion script
* cleanup
* write docs
* tweaks
* fix conversion script
* extract hidden states
* fix test cases
* make fixup
* fixup it all
* rename V1 to V2
* fix checkpoints
* fixup
* implement first block + weight conversion
* add remaining layers
* add output stride and dilation
* fixup
* add tests
* add deeplabv3+ head
* a bit of fixup
* finish deeplab conversion
* add link to doc
* fix issue with JIT trace
in_height and in_width would be Tensor objects during JIT trace, which caused Core ML conversion to fail on the remainder op. By making them ints, the result of the padding calculation becomes a constant value.
* cleanup
* fix order of models
* fix rebase error
* remove main from doc link
* add image processor
* remove old feature extractor
* fix converter + other issues
* fixup
* fix unit test
* add to onnx tests (but these appear broken now)
* add post_process_semantic_segmentation
* use google org
* remove unused imports
* move args
* replace weird assert
2022-11-14 01:00:10 -05:00
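A hedged classification sketch for the new MobileNetV2 model and its image processor; the `google/mobilenet_v2_1.0_224` checkpoint name follows the "use google org" bullet above but is still an assumption.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2ForImageClassification

# Checkpoint name assumed; any MobileNetV2 classification checkpoint works the same way.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.0_224")
model = MobileNetV2ForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```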
Arthur
61a51f5f23
Add Jukebox model (replaces #16875 ) ( #17826 )
2022-11-10 21:05:27 +01:00
NielsRogge
9f0c72f93b
Add doc tests ( #20158 )
...
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
2022-11-10 15:25:30 +01:00
NielsRogge
93e14486d6
[CLIPSeg] Add resources ( #20118 )
...
* Add resource
* Add tag
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-11-09 18:31:22 +01:00
Joao Gante
f270b960d6
Generate: move generation_*.py src files into generation/*.py ( #20096 )
...
* move generation_*.py src files into generation/*.py
* populate generation.__init__ with lazy loading
* move imports and references from generation.xxx.object to generation.object
2022-11-09 15:34:08 +00:00
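After this move the generation utilities live in the `transformers.generation` subpackage, while the old top-level names keep working through the lazy `__init__`; a small sketch with classes I believe are exposed on both paths.

```python
# Both import paths resolve to the same objects after the restructuring.
from transformers import LogitsProcessorList as TopLevelLogitsProcessorList
from transformers.generation import LogitsProcessorList, MinLengthLogitsProcessor

assert TopLevelLogitsProcessorList is LogitsProcessorList
processors = LogitsProcessorList([MinLengthLogitsProcessor(min_length=5, eos_token_id=0)])
print(len(processors))
```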
amyeroberts
4eb918e656
AutoImageProcessor ( #20111 )
...
* AutoImageProcessor skeleton
* Update references
* Add mapping in init
* Add model image processors to __init__ for importing
* Add AutoImageProcessor tests
* Fix up
* Image Processor documentation
* Remove pdb
* Update docs/source/en/model_doc/mobilevit.mdx
* Update docs
* Don't add whitespace on json files
* Remove fixtures
* Move checking model config down
* Fix up
* Add check for image processor
* Remove FeatureExtractorMixin in docstrings
* Rename model_tmpfile to config_tmpfile
* Don't make None if not in image processor map
2022-11-08 19:54:41 +00:00
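A minimal sketch of the new auto class, which picks the right image processor from the checkpoint config in the same way `AutoTokenizer` and `AutoFeatureExtractor` do; the ViT checkpoint name and the blank stand-in image are illustrative.

```python
from PIL import Image
from transformers import AutoImageProcessor

# Checkpoint name is illustrative; the class is resolved from the checkpoint's config.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

image = Image.new("RGB", (224, 224))  # stand-in image
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)
```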
Weiwe Shi
efa889d2e4
Add RocBert ( #20013 )
...
* add roc_bert
* update roc_bert readme
* code style
* change name and delete unused file
* update model file
* delete unused log file
* delete tokenizer fast
* reformat code and change model file path
* add RocBertForPreTraining
* update docs
* delete wrong notes
* fix copies
* fix make repo-consistency error
* fix files are not present in the table of contents error
* change RocBert -> RoCBert
* add doc, add detail test
Co-authored-by: weiweishi <weiweishi@tencent.com>
2022-11-08 10:03:43 -05:00
NielsRogge
258963062b
Add CLIPSeg ( #20066 )
...
* Add first draft
* Update conversion script
* Improve conversion script
* Improve conversion script some more
* Add conditional embeddings
* Add initial decoder
* Fix activation function of decoder
* Make decoder outputs match original implementation
* Make decoder outputs match original implementation
* Add more copied from statements
* Improve model outputs
* Fix auto tokenizer file
* Fix more tests
* Add test
* Improve README and docs, improve conditional embeddings
* Fix more tests
* Remove print statements
* Remove initial embeddings
* Improve conversion script
* Add interpolation of position embeddings
* Finish addition of interpolation of position embeddings
* Add support for refined checkpoint
* Fix refined checkpoint
* Remove unused parameter
* Improve conversion script
* Add support for training
* Fix conversion script
* Add CLIPSegFeatureExtractor
* Fix processor
* Fix CLIPSegProcessor
* Fix conversion script
* Fix most tests
* Fix equivalence test
* Fix README
* Add model to doc tests
* Use better variable name
* Convert other checkpoint as well
* Update config, add link to paper
* Add docs
* Update organization
* Replace base_model_prefix with clip
* Fix base_model_prefix
* Fix checkpoint of config
* Fix config checkpoint
* Remove file
* Use logits for output
* Fix tests
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-11-08 10:55:47 +01:00
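A hedged zero-shot segmentation sketch for the new CLIPSeg model; the `CIDAS/clipseg-rd64-refined` checkpoint matches the "refined checkpoint" mentioned above but is still an assumption here.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegForImageSegmentation, CLIPSegProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# One image copy per text prompt; the model returns one low-resolution mask per prompt.
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)
```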
Tom Aarsen
3222fc645b
docs: Resolve many typos in the English docs ( #20088 )
...
* docs: Fix typo in ONNX parser help: 'tolerence' => 'tolerance'
* docs: Resolve many typos in the English docs
Typos found via 'codespell ./docs/source/en'
2022-11-07 09:19:04 -05:00
Jordan Clive
3bd0007e87
Update documentation on seq2seq models with absolute positional embeddings, to be in line with Tips section for BERT and GPT2 ( #20068 )
...
Co-authored-by: jordiclive <jordiclive19@imperial.ac.uk>
2022-11-04 11:32:44 -04:00
Sanchit Gandhi
06d488061f
[Whisper Tokenizer] Make more user-friendly ( #19921 )
...
* [Whisper Tokenizer] Make more user-friendly
* use property
* make indexing rigorous
* small clean-up
* tests
* skip seq2seq tests
* remove multilingual arg
* reorder args
* collapse to one function
Co-authored-by: ArthurZucker <arthur@huggingface.co>
* option to override attributes
Co-authored-by: ArthurZucker <arthur@huggingface.co>
* add to docs
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* make comment more clear
Co-authored-by: sgugger <sylvain@huggingface.co>
* don't add special tokens in get_decoder_prompt_ids
* add test for set_prefix_tokens
Co-authored-by: ArthurZucker <arthur@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: sgugger <sylvain@huggingface.co>
2022-11-03 14:22:40 +00:00
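A short sketch of the user-facing API this PR settles on, `get_decoder_prompt_ids`, which builds the language/task prefix for generation without adding special tokens; the `openai/whisper-small` checkpoint name is an assumption.

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Checkpoint name is illustrative.
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Builds the <|lang|><|task|> prefix as (position, token_id) pairs.
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
model.config.forced_decoder_ids = forced_decoder_ids
print(forced_decoder_ids)
```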
Yih-Dar
9ccea7acb1
Fix some doctests after PR 15775 ( #20036 )
...
* Add skip_special_tokens=True in some doctests
* For T5
* Fix for speech_to_text.mdx
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-11-03 14:18:45 +01:00
Steven Liu
ab74ac11e4
Add LayoutLMv3 resource ( #19932 )
...
* add layoutlmv3 resource
* add layoutlmv2 resources
* fix button
2022-11-01 11:10:46 -07:00
Steven Liu
dec8578e70
Add BERT resources ( #19852 )
...
* add resources for bert
* add course chapters
* apply reviews
* add pipeline icons and community resource
* fix buttons
2022-11-01 11:09:53 -07:00
Matt
7f9b7b3f0e
Add ESMFold ( #19977 )
...
* initial commit
* First draft that gets outputs without crashing!
* Add all the ported openfold dependencies
* testing
* Restructure config files for ESMFold
* Debugging to find output discrepancies
* Mainly style
* Make model runnable without extra deps
* Remove utils and merge them to the modeling file
* Use correct gelu and remove some debug prints
* More cleanup
* Update esm docs
* Update conversion script to support ESMFold properly
* Port some top-level changes from ESMFold repo
* Expand EsmFold docstrings
* Make attention_mask optional (default to all 1s)
* Add inference test for ESMFold
* Use config and not n kwargs
* Add modeling output class
* Remove einops
* Remove chunking in ESM FFN
* Update tests for ESMFold
* Quality
* Repo consistency
* Remove tree dependency from ESMFold
* make fixup
* Add an error in case my structure map function breaks later
* Remove needless code
* Stop auto-casting the LM to float16 so CPU tests pass
* Stop auto-casting the LM to float16 so CPU tests pass
* Final test updates
* Split test file
* Copyright and quality
* Unpin PyTorch to see built doc
* Fix config file to_dict() method
* Add some docstrings to the output
* Skip TF checkpoint tests for ESM until we reupload those
* make fixup
* More docstrings
* Unpin to get even with main
* Flag example to write
Co-authored-by: Sylvain Gugger <Sylvain.gugger@gmail.com>
2022-10-31 21:32:58 -04:00
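A hedged inference sketch for ESMFold; the `facebook/esmfold_v1` checkpoint name and the `positions` output field are assumptions based on the public model, and the weights are several gigabytes.

```python
import torch
from transformers import AutoTokenizer, EsmForProteinFolding

# Sketch only; checkpoint name assumed.
tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer([sequence], return_tensors="pt", add_special_tokens=False)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.positions.shape)  # predicted atom coordinates
```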
NielsRogge
0b294c2334
[Conditional, Deformable DETR] Add postprocessing methods ( #19709 )
...
* Add postprocessing methods
* Update docs
* Add fix
* Add test
* Add test for deformable detr postprocessing
* Add post processing methods for segmentation
* Update code examples
* Add post_process to make the pipeline work
* Apply updates
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-10-31 08:28:44 +01:00
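A hedged sketch of object-detection post-processing on Deformable DETR in the spirit of the methods added above, rescaling predicted boxes to the original image size; the `SenseTime/deformable-detr` checkpoint name and the `post_process_object_detection` signature are assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, DeformableDetrForObjectDetection

# Sketch; checkpoint name assumed.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale predictions back to the original (height, width) and keep confident boxes.
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=[image.size[::-1]]
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```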
Steven Liu
2e35bac4e7
Add wav2vec2 resources ( #19931 )
...
* add wav2vec2 resources
* apply review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2022-10-28 13:28:18 -07:00
Steven Liu
9d2788b46b
add resources for distilbert ( #19930 )
2022-10-28 13:16:07 -07:00
Steven Liu
b0a2c3a2d6
add resources for bart ( #19928 )
2022-10-28 13:15:43 -07:00
Steven Liu
e4132952a1
Add GPT2 resources ( #19879 )
...
* add resources for gpt2
* add pipeline icons and community resources
2022-10-27 11:34:00 -07:00
Steven Liu
d818dd3a41
Add BLOOM resources ( #19881 )
...
* add bloom resources
* add pipeline icon
2022-10-27 11:33:52 -07:00
Steven Liu
50f5266b2c
Add T5 resources ( #19878 )
...
* add resources for t5
* add pipeline icons and community resources
2022-10-27 11:33:37 -07:00
Steven Liu
536a8ae6ad
Add RoBERTa resources ( #19911 )
...
* add roberta resources
* fix typo
2022-10-27 11:33:15 -07:00
Younes Belkada
7a1c68a845
Add flan-t5 documentation page ( #19892 )
...
* add `flan-t5` documentation page
* Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add more content
* revert `_toctree` modif
* revert `toctree` modif - 2
* Update README.md
* Revert "Update README.md"
This reverts commit 5660714429.
* Update README_es.md
* Update README_zh-hans.md
* Update README_zh-hant.md
* Update README_ko.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-26 17:22:57 +02:00
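Since this entry only adds a documentation page, a small hedged usage sketch; the `google/flan-t5-small` checkpoint name is an assumption.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Instruction-following sketch; checkpoint name assumed.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small")

inputs = tokenizer("Translate English to German: How old are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```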
Lysandre Debut
eedaba682f
[Past CI] Vilt only supports PT >= v1.10 ( #19851 )
...
* Support for Vilt in v1.9
* Skip if not higher or equal than 1.10
* Move test :)
* I am bad at python
2022-10-25 15:59:35 +02:00
Yih-Dar
072ed01c38
Fix doctest for MarkupLM ( #19845 )
...
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2022-10-24 17:54:23 +02:00
NielsRogge
14fe3e0410
Add docs ( #19729 )
...
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-10-18 17:42:46 +02:00
NielsRogge
dd523da577
Add table transformer [v2] ( #19614 )
...
* First draft
* Add conversion script
* Make conversion work
* Upload checkpoints
* Add final fixes
* Revert changes of conditional and deformable detr
* Fix toctree, add and remove copied from
* Use model type
* Improve docs
* Improve code example
* Update copies
* Add copied from
* Don't update conditional detr
* Don't update deformable detr
2022-10-18 15:20:09 +02:00
Antonio Carlos Falcão Petri
af150e4a1c
Allow user-managed Pool in Wav2Vec2ProcessorWithLM.batch_decode ( #18351 )
...
* [Wav2Vec2] Allow user-managed Pool in Wav2Vec2ProcessorWithLM.batch_decode
* [Wav2Vec2] Add user-managed LM's pool tests and usage examples
* Improve styling
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* [Wav2Vec2] Fix hyperlink references
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-18 08:48:03 -04:00
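A hedged sketch of the user-managed pool added here; the checkpoint name, the random stand-in logits, and the `fork` start method are illustrative assumptions.

```python
from multiprocessing import get_context

import numpy as np
from transformers import Wav2Vec2ProcessorWithLM

# Checkpoint name assumed; it must ship a pyctcdecode/KenLM decoder.
processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

# Stand-in for real Wav2Vec2ForCTC logits of shape (batch, time, vocab).
logits = np.random.randn(2, 100, len(processor.tokenizer)).astype(np.float32)

# Reuse one pool across many batch_decode calls instead of spawning one per call.
with get_context("fork").Pool(processes=2) as pool:
    transcriptions = processor.batch_decode(logits, pool=pool).text
print(transcriptions)
```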
NielsRogge
90071fe42b
Improve DETR models ( #19644 )
...
* Improve DETR models
* Fix Deformable DETR loss and matcher
* Fixup
* Fix integration tests
* Improve variable names
* Apply suggestion
* Fix copies
* Fix DeformableDetrLoss
* Make Conditional DETR copy from Deformable DETR
* Copy from deformable detr's hungarian matcher
* Fix bug
2022-10-18 10:29:14 +02:00
NielsRogge
fd9a027aca
Fix docs ( #19687 )
...
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
2022-10-18 09:52:51 +02:00
Matt
3b3024da70
TF port of ESM ( #19587 )
...
* Partial TF port for ESM model
* Add ESM-TF tests
* Add the various imports for TF-ESM
* TF weight conversion almost ready
* Stop ignoring the decoder weights in PT
* Add tests and lots of fixes
* fix-copies
* Fix imports, add model docs
* Add get_vocab() to tokenizer
* Fix vocab links for pretrained files
* Allow multiple inputs with a sep
* Use EOS as SEP token because ESM vocab lacks SEP
* Correctly return special tokens mask from ESM tokenizer
* make fixup
* Stop testing unsupported embedding resizing
* Handle TF bias correctly
* Skip all models with slow tokenizers in the token classification test
* Fixing the batch/unbatcher of pipelines to accommodate the `None` being passed around.
* Fixing pipeline bug caused by slow tokenizer being different.
* Update src/transformers/models/esm/modeling_tf_esm.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/esm/modeling_tf_esm.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/esm/modeling_tf_esm.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update set_input_embeddings and the copyright notices
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2022-10-17 14:16:16 +01:00
Akash Mahajan
504cd71a6b
add a note to whisper docs clarifying support of long-form decoding ( #19497 )
2022-10-13 10:39:03 +02:00
Daniel van Strien
af539d6f0a
fix MarkupLMProcessor option flag ( #19526 )
2022-10-12 15:08:48 +02:00
Ritik Nandwal
e94384e4d8
Add depth estimation pipeline ( #18618 )
...
* Add initial files for depth estimation pipelines
* Add test file for depth estimation pipeline
* Update model mapping names
* Add updates for depth estimation output
* Add generic test
* Hopefully fixing the tests.
* Check if test passes
* Add make fixup and make fix-copies changes after rebase with main
* Rebase with main
* Fixing up depth pipeline.
* This is not used anymore.
* Fixing the test. `Image` is a module; `Image.Image` is the type.
* Update docs/source/en/main_classes/pipelines.mdx
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 08:54:20 -04:00
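A quick sketch of the new pipeline; the `Intel/dpt-large` checkpoint name is an assumption.

```python
from transformers import pipeline

# Checkpoint name assumed; any depth-estimation model on the Hub works.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")

depth_image = result["depth"]           # PIL image visualizing the predicted depth map
print(result["predicted_depth"].shape)  # raw tensor output
```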
NielsRogge
4d367a3c81
Add LiLT ( #19450 )
...
* First draft
* Fix more things
* Improve more things
* Remove some head models
* Fix more things
* Add missing layers
* Remove tokenizer
* Fix more things
* Fix copied from statements
* Make all tests pass
* Remove print statements
* Remove files
* Fix README and docs
* Add integration test and fix organization
* Add tips
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Make tests faster, improve docs
* Fix doc tests
* Add model to toctree
* Add docs
* Add note about creating new checkpoint
* Remove is_decoder
* Make tests smaller, add docs
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-12 10:11:20 +02:00
Mathieu Jouffroy
5ca131f3d4
[CvT] Tensorflow implementation ( #18597 )
...
* implemented TFCvtModel and TFCvtForImageClassification and modified relevant files, added an exception in convert_tf_weight_name_to_pt_weight_name, added quick testing file to compare with pytorch model
* added docstring + testing file in transformers testing suite
* added test in testing file, modified docs to pass repo-consistency, passed formatting test
* refactoring + passing all tests
* small refacto, removing unwanted comments
* improved testing config
* corrected import error
* modified access to pretrained model archive list, to pass tf_test
* corrected import structure in init files
* modified testing for keras_fit with cpu
* correcting PR issues + Refactoring
* Refactoring : improving readability and reducing the number of permutations
* corrected momentum value + cls_token initialization
* removed from_pt as weights were added to the hub
* Update tests/models/cvt/test_modeling_tf_cvt.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
2022-10-11 18:16:52 +01:00
amyeroberts
e3f028f3af
Add TF whisper ( #19378 )
...
* simplify loop
* add feature extractor
* add model
* start conversion
* add dropout
* initial commit of test files
* conversion for all models
* update processor for correct padding
* update feature extraction
* update integration test logits match
* fmt: off for the logits
* on the fly mel bank
* small nit
* update test
* update tokenizer
* nit feature extraction
* update
* update tokenizer test
* adds logit processor and update tokenizer to get suppress tokens
* style
* clean convert
* revert to original modeling tf utils
* Update
* update
* nit
* clean convert file
* update tests and nits
* quality
* slow generation test
* ffn_dim to allow customization
* update readme
* add to toctreee
* start fixing integration tests
* update tests and code
* fix feature extractor
* fix config tests common
* update code to fix tests
* fix feature extractor
* nit feature extraction
* update test for new feature extractor
* style
* add abstract
* large logits with custom decoder input ids
* wrap around is torch available
* fix feature extractor
* correct logits for whisper small.en
* nit
* fix encoder_attention_mask
* some fixes
* remove unnecessary inputs
* nits
* add normalizer file
* update test tokenization
* fix attention mask not defined
* fix generate
* remove useless encoder attention mask
* update test modeling whisper
* update config to add second non-suppress tokens
* nits on feature extractor
* nit for test tokenizers
* update tests
* update tests
* update tokenization test
* fixup
* invalidated hf token. Clean convert openai to whisper
* fix logit tests
* fixup
* Add model to README
* Fix doc tests
* clean merge
* revert toc_tree changes
* remove useless LogitProcessor
* Update whisper .mdx
* update config file doc
* update configuration docstring
* update test tokenization
* update test tokenization
* update tokenization whisper
Added copied from where needed
* update feature extraction
* nit test name
* style
* quality
* remove get suppress tokens and update non_speech tokens global variables
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* clean modeling whisper and test
Removed the attention mask arguments that are deprecated
* fix large test
* Add multilingual audio test, and translate test
* style
* fix large multilingual test
* nits
* add copied from for attention layer
* remove attention masks in doc
* add english normalizer
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update tokenization test
* remove copied from in whisper attention : no bias in k_proj only
* wrap around dependencies in english normalizer
* style
* correct import generation logits
* for now, wrap feature extractor with torch
* remove torch dependencies for feature extraction and style
* Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixup
* nit
* update logits
* style
* nit
* nits and fix final tests
* add `is_more_itertools_available` to utils
* quality
* add begin suppress tokens, suppress tokens to generate args and config
* clean SuppressTokensLogitsProcessor in generation logits
* Nit naming
* add suppressTokensAtBegin
* update tests, suppress tokens to None or correct values
* nit and style
* update RAG to fit test and generate_logit
* add copy-pasted statement on English normalizer
* add arguments to config_common_kwargs
* Update src/transformers/generation_utils.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/generation_logits_process.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* revert changes based on reviews
* update doc and nits
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* more nits
* last nits
* update test configuration common
* add BART name in decoder attention mask documentation
* Update src/transformers/models/whisper/modeling_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* style
* nit
* nit
* add english.json file to git
* nits on documentation
* nit
* nits
* last styling
* add main toctree file
* remove sentence piece dependency
* clean init file
* fix tokenizer that has no dependencies on sentencepiece
* update whisper init file, nit
* remove english.json file
* add get decoder prompt id
* All weights loading
* Remove hanging pdb
* Fixup and tidy up
* Use same copied from as PT model
* Remove whitespace changes
* Remove torch references
* Tie embeddings
* Remove logits processor input to generate
* Update logit values
* revert changes and add forced logit processor
* nit
* clean normalizer
* remove protected
* Add logit processors and update generation code & tests
* Some tidy up
* Update docstring
* update
* update based on review
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update to reflect changes on the PT model branch
* Tidy up
* Remove extra whitespace
* Fix test - make input ids small enough we can append
* Include upstream changes on main
* PR comments - add batch tests, remove comments & defaults
* Fix model output imports
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/generation_tf_logits_process.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update tests/models/whisper/test_modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update docstring example
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Remove changes to adjust_logits_during_generation function
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Tidy up imports that don't require TF
* Update tests - skip and no more skip
* Update tests/generation/test_generation_tf_logits_process.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Update src/transformers/models/whisper/modeling_tf_whisper.py
* Update src/transformers/models/whisper/modeling_tf_whisper.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* Add training flags
* Add (skipped) XLA generation tests
* Add embedding correctness test
* Add constant ids for generation tests
* Make logits finding a bit tidier
* Remove unused args
* xla generation enabled
* Don't skip XLA tests anymore
* Fix tests - add position ids to expected signature and update rag generation
* Undo method reorder
* Remove added whitespace
* Remove copy-paste gradient checkpoint ref
* Remove
* Trigger CI - (issue with refs when pulling)
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: Joao Gante <joao@huggingface.co>
2022-10-10 14:48:17 +01:00
APAVOU Clément
af69360bf9
Add OPTForQuestionAnswering ( #19402 )
...
* Add `OPTForQuestionAnswering`
- added `OPTForQuestionAnswering` class based on `BloomForQuestionAnswering`
- added `OPTForQuestionAnswering` in common tests
- all common tests pass
- make fixup done
* added docstrings for OPTForQuestionAnswering
* Fix docstrings for OPTForQuestionAnswering
2022-10-10 09:30:59 -04:00
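A hedged sketch of the new head class; the base `facebook/opt-350m` checkpoint is assumed, and its QA head is randomly initialized until fine-tuned, so the extracted span is meaningless here.

```python
import torch
from transformers import AutoTokenizer, OPTForQuestionAnswering

# Base checkpoint assumed; the span-classification head starts untrained.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForQuestionAnswering.from_pretrained("facebook/opt-350m")

question, context = "Who wrote the report?", "The report was written by Jane Doe last year."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```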
Amrit Sahu
e9a49babee
[WIP] Add ZeroShotObjectDetectionPipeline ( #18445 ) ( #18930 )
...
* Add ZeroShotObjectDetectionPipeline (#18445 )
* Add AutoModelForZeroShotObjectDetection task
This commit also adds the following
- Add explicit _processor method for ZeroShotObjectDetectionPipeline.
This is necessary as pipelines don't auto infer processors yet and
`OwlVitProcessor` wraps tokenizer and feature_extractor together, to
process multiple images at once
- Add auto tests and other tests for ZeroShotObjectDetectionPipeline
* Add batching for ZeroShotObjectDetectionPipeline
* Fix doc-string ZeroShotObjectDetectionPipeline
* Fix output format: ZeroShotObjectDetectionPipeline
2022-10-07 10:00:19 -04:00
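A short sketch of the pipeline added here; the OWL-ViT checkpoint name is an assumption.

```python
from transformers import pipeline

# Checkpoint name assumed (OWL-ViT backs this pipeline).
detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
predictions = detector(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    candidate_labels=["cat", "remote control", "couch"],
)
for pred in predictions:
    print(pred["label"], round(pred["score"], 2), pred["box"])
```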
Alara Dirik
ae3e3bc60a
fix docs example, add object_detection to DETR docs ( #19377 )
2022-10-07 00:02:26 +02:00
Arthur
45e14038f2
Add WhisperModel to transformers ( #19166 )
...
* simplify loop
* add feature extractor
* add model
* start conversion
* add dropout
* initial commit of test files
* conversion for all models
* update processor for correct padding
* update feature extraction
* update integration test logits match
* fmt: off for the logits
* on the fly mel bank
* small nit
* update test
* update tokenizer
* nit feature extraction
* update
* update tokenizer test
* adds logit processor and update tokenizer to get suppress tokens
* style
* clean convert
* revert to original modeling tf utils
* Update
* update
* nit
* clean convert file
* update tests and nits
* quality
* slow generation test
* ffn_dim to allow customization
* update readme
* add to toctreee
* start fixing integration tests
* update tests and code
* fix feature extractor
* fix config tests common
* update code to fix tests
* fix feature extractor
* nit feature extraction
* update test for new feature extractor
* style
* add abstract
* large logits with custom decoder input ids
* wrap around is torch available
* fix feature extractor
* correct logits for whisper small.en
* nit
* fix encoder_attention_mask
* some fixes
* remove unnecessary inputs
* nits
* add normalizer file
* update test tokenization
* fix attention mask not defined
* Add model to README
* Fix doc tests
* fix generate
* remove useless encoder attention mask
* update test modeling whisper
* update config to add second non-suppress tokens
* nits on feature extractor
* nit for test tokenizers
* update tests
* update tests
* update tokenization test
* fixup
* invalidated hf token. Clean convert openai to whisper
* fix logit tests
* fixup
* clean merge
* revert toc_tree changes
* remove useless LogitProcessor
* Update whisper .mdx
* update config file doc
* update configuration docstring
* update test tokenization
* update test tokenization
* update tokenization whisper
Added copied from where needed
* update feature extraction
* nit test name
* style
* quality
* remove get suppress tokens and update non_speech tokens global variables
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* clean modeling whisper and test
Removed the attention mask arguments that are deprecated
* fix large test
* Add multilingual audio test, and translate test
* style
* fix large multilingual test
* nits
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add copied from for attention layer
* remove attention masks in doc
* add english normalizer
* update tokenization test
* remove copied from in whisper attention : no bias in k_proj only
* wrap around dependencies in english normalizer
* style
* correct import generation logits
* for now, wrap feature extractor with torch
* Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/whisper.mdx
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* remove torch dependencies for feature extraction and style
* fixup
* nit
* update logits
* style
* nit
* nits and fix final tests
* add `is_more_itertools_available` to utils
* quality
* add begin suppress tokens, suppress tokens to generate args and config
* clean SuppressTokensLogitsProcessor in generation logits
* Nit naming
* add suppressTokensAtBegin
* update tests, suppress tokens to None or correct values
* nit and style
* update RAG to fit test and generate_logit
* add copy-pasted statement on English normalizer
* add arguments to config_common_kwargs
* Update src/transformers/generation_utils.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/generation_logits_process.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* revert changes based on reviews
* update doc and nits
* more nits
* last nits
* update test configuration common
* add BART name in decoder attention mask documentation
* Update src/transformers/models/whisper/modeling_whisper.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* style
* nit
* nit
* add english.json file to git
* nits on documentation
* nit
* nits
* last styling
* add main toctree file
* remove sentence piece dependency
* clean init file
* fix tokenizer that has no dependencies on sentencepiece
* update whisper init file, nit
* remove english.json file
* add get decoder prompt id
* revert changes and add forced logit processor
* nit
* clean normalizer
* remove protected
* update
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* update based on review
* Update src/transformers/models/whisper/configuration_whisper.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add batched tests
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-05 22:28:31 +02:00
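A hedged end-to-end transcription sketch for the new model; the `openai/whisper-tiny.en` checkpoint and the dummy LibriSpeech dataset are assumptions.

```python
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Checkpoint and dataset names assumed.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]
input_features = processor(
    audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt"
).input_features

predicted_ids = model.generate(input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```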
Alara Dirik
07e94bf159
Maskformer post-processing fixes and improvements ( #19172 )
...
- Improves MaskFormer docs, corrects minor typos
- Restructures MaskFormerFeatureExtractor.post_process_panoptic_segmentation for better readability, adds target_sizes argument for optional resizing
- Adds post_process_semantic_segmentation and post_process_instance_segmentation methods.
- Adds a deprecation warning to post_process_segmentation method in favour of post_process_instance_segmentation
2022-10-05 15:27:15 +03:00
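A hedged sketch of the new semantic-segmentation post-processing with the `target_sizes` resizing described above; the ADE20K checkpoint name is an assumption.

```python
import requests
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

# Checkpoint name assumed.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# target_sizes resizes the predicted map back to the original (height, width).
semantic_map = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)  # per-pixel class ids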
Younes Belkada
587d84b178
Add BloomForQuestionAnswering ( #19310 )
...
* add bloom for question answering
- attempt to add Bloom for question answering
- adapted from `GPTJForQuestionAnswering`
- Fixed `num_labels` to `2` for common tests
- Added a bit of docstring
- All common tests pass
* Update src/transformers/models/bloom/modeling_bloom.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* revert changes related to `num_labels`
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-10-04 17:52:13 +02:00
Alara Dirik
36f52e9593
Restructure DETR post-processing, return prediction scores ( #19262 )
...
* Restructure DetrFeatureExtractor post-processing methods
* Update post_process_instance_segmentation and post_process_panoptic_segmentation methods to return prediction scores
* Update DETR models docs
2022-10-03 12:02:51 +03:00
Kashif Rasul
5cd16f01db
time series forecasting model ( #17965 )
...
* initial files
* initial model via cli
* typos
* make a start on the model config
* ready with configuation
* remove tokenizer ref.
* init the transformer
* added initial model forward to return dec_output
* require gluonts
* update dep. ver table and add as extra
* fixed typo
* add type for prediction_length
* use num_time_features
* use config
* more config
* typos
* oops, another typo
* freq can be none
* default via transformation is 1
* initial transformations
* fix imports
* added transform_start_field
* add helper to create pytorch dataloader
* added initial val and test data loaders
* added initial distr head and loss
* training working
* remove TimeSeriesTransformerTokenizer
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/__init__.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fixed copyright
* removed docs
* remove time series tokenizer
* fixed docs
* fix text
* fix second
* fix default
* fix order
* use config directly
* undo change
* fix comment
* fix year
* fix import
* add additional arguments for training vs. test
* initial greedy inference loop
* fix inference
* comment out token inputs to enc dec
* Use HF encoder/decoder
* fix inference
* Use Seq2SeqTSModelOutput output
* return Seq2SeqTSPredictionOutput
* added default arguments
* fix return_dict true
* scale is a tensor
* output static_features for inference
* clean up some unused bits
* fixed typo
* set return_dict if none
* call model once for both train/predict
* use cache if future_target is none
* initial generate func
* generate arguments
* future_time_feat is required
* return SampleTSPredictionOutput
* removed unneeded classes
* fix when params is none
* fix return dict
* fix num_attention_heads
* fix arguments
* remove unused shift_tokens_right
* add different dropout configs
* implement FeatureEmbedder, Scaler and weighted_average
* remove gluonts dependency
* fix class names
* avoid _variable names
* remove gluonts dependency
* fix imports
* remove gluonts from configuration
* fix docs
* fixed typo
* move utils to examples
* add example requirements
* config has no freq
* initial run_ts_no_trainer
* remove from ignore
* fix output_attentions and removed unused getters/setters
* removed unused tests
* add dec seq len
* add test_attention_outputs
* set has_text_modality=False
* add config attribute_map
* make style
* make fix-copies
* add encoder_outputs to TimeSeriesTransformerForPrediction forward
* Improve docs, add model to README
* added test_forward_signature
* More improvements
* Add more copied from
* Fix README
* Fix remaining quality issues
* updated encoder and decoder
* fix generate
* output_hidden_states and use_cache are optional
* past key_values returned too
* initialize weights of distribution_output module
* fixed more tests
* update test_forward_signature
* fix return_dict outputs
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* removed commented out tests
* added neg. bin and normal output
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* move to one line
* Add docstrings
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* add try except for assert and raise
* try and raise exception
* fix the documentation formatting
* fix assert call
* fix docstring formatting
* removed input_ids from DOCSTRING
* Update input docstring
* Improve variable names
* Update order of inputs
* Improve configuration
* Improve variable names
* Improve docs
* Remove key_length from tests
* Add extra docs
* initial unittests
* added test_inference_no_head test
* added test_inference_head
* add test_seq_to_seq_generation
* make style
* one line
* assert mean prediction
* removed comments
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fix order of args
* make past_observed_mask optional as well
* added Amazon license header
* updated utils with new fieldnames
* make style
* cleanup
* undo position of past_observed_mask
* fix import
* typo
* more typo
* rename example files
* remove example for now
* Update docs/source/en/_toctree.yml
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update modeling_time_series_transformer.py
fix style
* fixed typo
* fix typo and grammar
* fix style
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2022-09-30 15:32:59 -04:00
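A minimal configuration sketch for the new time series model; the field values are illustrative, not taken from the commit, and real usage also needs past/future values and time features at forward time as described above.

```python
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

# Illustrative values only.
config = TimeSeriesTransformerConfig(prediction_length=24, context_length=48)
model = TimeSeriesTransformerForPrediction(config)
print(sum(p.numel() for p in model.parameters()))
```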