Arthur
799df10aef
[Umt5] Add google's umt5 to transformers ( #24477 )
...
* add tokenization template
* update conversion script
* update modeling code
* update
* update convert checkpoint
* update modeling
* revert changes on convert script
* new conversion script for new format
* correct position bias
* cleaning a bit
* Credit co authors
Co-authored-by: agemagician <ahmed.elnaggar@tum.de>
Co-authored-by: stefan-it <>
* styling
* Add docs
* fix copies
* add co author
* Other Author
* Merge branch 'main' of https://github.com/huggingface/transformers into add-umt5
* add testing
* nit
* Update docs/source/en/model_doc/umt5.mdx
Co-authored-by: Stefan Schweter <stefan@schweter.it>
* fix t5
* actual fix?
* revert wrong changes
* remove
* update test
* more fixes
* revert some changes
* add SPIECE_UNDERLINE
* add a common example
* update
* fix copies
* revert changes on t5 conversion script
* revert bytefallback changes since there was no addition yet
* fixup
* fixup
* ignore umt5 custom testing folder
* fix readmes
* revert T5 changes
* same outputs
* fixup
* update example
* Apply suggestions from code review
* style
* draft addition of all new files
* current update
* fix attention and stuff
* finish refactoring
* auto config
* fixup
* more nits
* add umt5 to init
* use md format
* Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* revert changes on mt5
* revert mt5 changes
* update test
* more fixes
* add to mapping
* fix-copies
* fix copies
* fix retain grad
* fix some tests
* nits
* done
* Update src/transformers/models/umt5/modeling_umt5.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update docs/source/en/model_doc/umt5.md
* Update src/transformers/models/umt5/__init__.py
* Update docs/source/en/model_doc/umt5.md
Co-authored-by: Stefan Schweter <stefan@schweter.it>
* Update src/transformers/models/umt5/modeling_umt5.py
* update conversion script + use google checkpoints
* nits
* update test and modelling
* stash slow convert
* update fixup
* don't change slow
---------
Co-authored-by: stefan-it <>
Co-authored-by: Stefan Schweter <stefan@schweter.it>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-07-03 07:38:21 +02:00
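A minimal usage sketch for the UMT5 addition above. The `google/umt5-small` checkpoint name is an assumption based on the "use google checkpoints" note; the snippet only illustrates the standard seq2seq API.

```python
from transformers import AutoTokenizer, UMT5ForConditionalGeneration

# Checkpoint name assumed; UMT5 generally needs fine-tuning before producing useful output.
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")

inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```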
Yih-Dar
c817bc44e2
Check all objects are equally present in the main __init__ file ( #24573 )
...
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-06-29 17:49:59 +02:00
amyeroberts
b324557aac
Remove deprecated vision methods and specify deprecation versions ( #24570 )
...
* Removal of deprecated methods and specify versions
* Fix tests
2023-06-29 15:09:51 +01:00
Sanchit Gandhi
1c1c90756d
Add Musicgen ( #24109 )
...
* Add Audiocraft
* add cross attention
* style
* add for lm
* convert and verify
* introduce t5
* split configs
* load t5 + lm
* clean conversion
* copy from t5
* style
* start pattern provider
* make generation work
* style
* fix pos embs
* propagate shape changes
* propagate shape changes
* style
* delay pattern: pad tokens at end
* audiocraft -> musicgen
* fix inits
* add mdx
* style
* fix pad token in processor
* override generate and add todos
* add init to test
* undo pattern delay mask after gen
* remove cfg logits processor
* remove cfg logits processor
* remove logits processor in favour of mask
* clean pos embs
* make fix copies
* update readmes
* clean pos emb
* refactor encoder/decoder
* make fix copies
* update conversion
* fix config imports
* update config docs
* make style
* send pattern mask to device
* pattern mask with delay
* recover prompted audio tokens
* fix docstrings
* laydown test file
* pattern edge case
* remove t5 ref
* add processing class
* config refactor
* better pattern comment
* check if mask is not present
* check if mask is not present
* refactor to auto class
* remove encoder configs
* fix processor
* processor import
* start updating conversion
* start updating tests
* make style
* convert t5, encodec, lm
* convert as composite
* also convert processor
* run generate
* classifier free gen
* comments and clean up
* make style
* docs for logit proc
* docstring for uncond gen
* start lm tests
* work tests
* let the lm generate
* refactor: reshape inside forward
* undo greedy loop changes
* from_enc_dec -> from_sub_model
* fix input id shapes in docstrings
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* undo generate changes
* from sub model config
* Update src/transformers/models/musicgen/modeling_musicgen.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make generate work again
* generate uncond -> get uncond inputs
* remove prefix allowed tokens fn
* better error message
* logit proc checks
* Apply suggestions from code review
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* make decoder only tests work
* composite fast tests
* make style
* uncond generation
* feat extr padding
* make audio prompt work
* fix inputs docstrings
* unconditional inputs: dict -> model output
* clean up tests
* more clean up tests
* make style
* t5 encoder -> auto text encoder
* remove comments
* deal with frames
* fix auto text
* slow tests
* nice mdx
* remove can generate
* todo - hub id
* convert m/l
* make fix copies
* only import generation with torch
* ignore decoder from tests
* don't wrap uncond inputs
* make style
* cleaner uncond inputs
* add example to musicgen forward
* fix docs
* ignore MusicGen Model/ForConditionalGeneration in auto mapping
* add doc section to toctree
* add to doc tests
* add processor tests
* fix push to hub in conversion
* tips for decoder only loading
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* fix conversion for s / m / l checkpoints
* import stopping criteria from module
* remove from pipeline tests
* fix uncond docstring
* decode audio method
* fix docs
* org: sanchit-gandhi -> facebook
* fix max pos embeddings
* remove auto doc (not compatible with shapes)
* bump max pos emb
* make style
* fix doc
* fix config doc
* fix config doc
* ignore musicgen config from docstring
* make style
* fix config
* fix config for doctest
* consistent from_sub_models
* don't automap decoder
* fix mdx save audio file
* fix mdx save audio file
* processor batch decode for audio
* remove keys to ignore
* update doc md
* update generation config
* allow changes for default generation config
* update tests
* make style
* fix docstring for uncond
* fix processor test
* fix processor test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-29 14:48:59 +01:00
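The entry above adds MusicGen end to end (text encoder, EnCodec audio codes, delay-pattern LM, processor). A sketch of text-conditional generation with the composite model; the `facebook/musicgen-small` checkpoint name is an assumption (the log only notes the move to the facebook org and the s/m/l conversions).

```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")  # checkpoint name assumed
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(text=["80s pop track with bassy drums and synth"], padding=True, return_tensors="pt")
# guidance_scale > 1 enables the classifier-free guidance handled via the logits mask above.
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
sampling_rate = model.config.audio_encoder.sampling_rate  # needed to save or play the waveform
```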
amyeroberts
ae454f41d4
Update old existing feature extractor references ( #24552 )
...
* Update old existing feature extractor references
* Typo
* Apply suggestions from code review
* Apply suggestions from code review
* Apply suggestions from code review
* Address comments from review - update 'feature extractor'
Co-authored by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
2023-06-29 10:17:36 +01:00
Sebastian
06910f5a76
[T5] Add T5ForQuestionAnswering and MT5ForQuestionAnswering ( #24481 )
...
* Adding T5ForQuestionAnswering
* Changed weight initialization that results in better initial loss when fine-tuning
* Update to class variables
* Running make fixup
* Running make fix-copies
* Remove model_parallel
* Adding MT5ForQuestionAnswering
* Adding docs
* Fix wrong doc
* Update src/transformers/models/mt5/modeling_mt5.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Update src/transformers/models/t5/modeling_t5.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* File formatting
* Undoing change
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2023-06-27 10:07:06 -04:00
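A sketch of the new extractive-QA head. With a plain `t5-small` checkpoint the QA head is randomly initialized, so this only illustrates the API until the model is fine-tuned; the question/context strings are placeholders.

```python
import torch
from transformers import AutoTokenizer, T5ForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForQuestionAnswering.from_pretrained("t5-small")  # QA head is untrained here

question, context = "Who wrote Hamlet?", "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Illustrative span decoding from start/end logits (meaningless until fine-tuned).
start, end = int(outputs.start_logits.argmax()), int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs.input_ids[0, start : end + 1])
```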
NielsRogge
868363abb9
Add InstructBLIP ( #23460 )
...
* Squash 88 commits
* Use markdown
* Remove mdx files due to bad rebase
* Fix modeling files due to bad rebase
* Fix style
* Update comment
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-06-26 11:23:57 +02:00
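A short sketch of the InstructBLIP addition; the `Salesforce/instructblip-vicuna-7b` checkpoint name and the image URL are assumptions used only for illustration.

```python
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")  # assumed checkpoint
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(images=image, text="What is unusual about this image?", return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())
```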
Sanchit Gandhi
ea91c2adca
[AutoModel] Add AutoModelForTextEncoding ( #24305 )
...
* [AutoModel] Add AutoModelForTextEncoding
* add mt5
* add other models
* add to docs
* fix tf imports
* add tf to docs / init
* up
* fix inits
* add to dummy objects
2023-06-23 10:01:37 +01:00
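The new auto class resolves encoder-decoder checkpoints to their encoder-only counterparts (e.g. T5 to its encoder stack). A minimal sketch:

```python
from transformers import AutoTokenizer, AutoModelForTextEncoding

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForTextEncoding.from_pretrained("t5-small")  # loads only the T5 encoder

inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
last_hidden_state = model(**inputs).last_hidden_state  # (batch, sequence_length, hidden_size)
```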
Steven Liu
ad78d9597b
[docs] Fix NLLB-MoE links ( #24388 )
...
fix broken links
2023-06-20 17:34:20 -07:00
Sylvain Gugger
eb849f6604
Migrate doc files to Markdown. ( #24376 )
...
* Rename index.mdx to index.md
* With saved modifs
* Address review comment
* Treat all files
* .mdx -> .md
* Remove special char
* Update utils/tests_fetcher.py
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
---------
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2023-06-20 18:07:47 -04:00
Vineel Pratap
7761b1893a
Update MMS integration docs ( #24311 )
...
* Update mms.mdx
* Update mms.mdx
* Update docs/source/en/model_doc/mms.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update mms.mdx
* Update docs/source/en/model_doc/mms.mdx
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
2023-06-19 14:49:01 +01:00
hitchhicker
c3ca346b49
[Docs] Fix the paper URL for MMS model ( #24302 )
...
Fix the paper URL for MMS model
2023-06-15 15:45:49 +01:00
Patrick von Platen
604a21b1e6
[Docs] Improve docs for MMS loading of other languages ( #24292 )
...
* Improve docs
* Apply suggestions from code review
* upload readme
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-06-15 14:29:32 +02:00
Matthijs Hollemans
0c3fdccf2f
[WIP] add EnCodec model ( #23655 )
...
* boilerplate stuff
* messing around with the feature extractor
* fix feature extractor
* unit tests for feature extractor
* rename speech to audio
* quick-and-dirty import of Meta's code
* import weights (sort of)
* cleaning up
* more cleaning up
* move encoder/decoder args into config
* cleanup model
* rename EnCodec -> Encodec
* RVQ parameters in config
* add slow test
* add lstm init and test_init
* Add save & load
* finish EncodecModel
* remove decoder_input_values as they are not used anywhere (not removed from doc yet)
* fix test feature extraction model name
* Add better slow test
* Fix tests
* some fixup and cleaning
* Improve further
* cleaning up quantizer
* fix up conversion script
* tests don't pass, _encode_frame does not work
* update tests with output per encode and decode
* more cleanup
* rename _codebook
* remove old config cruft
* ratios & hop_length
* use ModuleList instead of Sequential
* clean up resnet block
* update types
* update tests
* fixup
* quick cleanup
* fix padding
* more styling
* add patrick feedback
* fix copies
* fixup
* fix lstm
* fix shape issues
* fixup
* rename conv layers
* fixup
* fix decoding
* small conv refactoring
* remove norm_params
* simplify conv layers
* rename conv layers
* stuff
* Clean up
* Add padding logic
use padding mask
small conv refactoring
remove norm_params
simplify conv layers
rename conv layers
stuff
add batched test
update
Clean up
merge and update for padding
fix padding
fixup
* clean up more
* clean up more
* More clean ups
* cleanup convolutions
* typo
* fix typos
* fixup
* build PR doc?
* start refactoring docstring
* fix don't pad when no stride and chunk
* update docstring
* update docstring
* nits
* update going to lunch
* update config and model
* fix broken tests (because of the config changes)
* fix scale computation
* fixup
* only return dict if specified or if config returns it
* remove todos
* update defaults in config
* update conversion script
* fix doctest
* more docstring + fixup
* nits on batched_tests
* more nits
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update based on review
* fix update
* update tests
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* fixup
* add overlap and chunk_length_s
* cleanup feature extraction
* test edge cases: truncation and padding
* correct processor values
* update config encodec, nits
* fix tests
* fixup
* fix 24Hz test
* all tests are green
* fix fixup
* Apply suggestions from code review
* revert readme changes
* fixup
* add example
* use facebook checkpoints
* fix typo
* no pipeline tests
* use self.pad everywhere we can
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* update based on review
* update
* update mdx
* fix bug and tests
* fixup
* fix doctest
* remove comment
* more nits
* add more coverage for `test_truncation_and_padding`
* fixup
* add last test
* fix text
* nits
* Update tests/models/encodec/test_modeling_encodec.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* take care of the last comments
* typo
* fix test
* nits
* fixup
* Update src/transformers/models/encodec/feature_extraction_encodec.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: arthur.zucker@gmail.com <arthur.zucker@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-06-14 18:57:23 +02:00
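A minimal round-trip sketch for the new EnCodec model. The `facebook/encodec_24khz` repo name follows the "use facebook checkpoints" and 24 kHz notes above but is an assumption, and the dummy waveform stands in for real audio.

```python
import numpy as np
from transformers import AutoProcessor, EncodecModel

model = EncodecModel.from_pretrained("facebook/encodec_24khz")  # checkpoint name assumed
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

raw_audio = np.zeros(24000, dtype=np.float32)  # one second of silence at 24 kHz
inputs = processor(raw_audio=raw_audio, sampling_rate=processor.sampling_rate, return_tensors="pt")

# Encode to discrete codebook indices, then decode back to a waveform.
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
```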
Arthur
5af3a1aa48
[LlamaTokenizerFast] Update documentation ( #24132 )
...
* Update documentation
* nits
2023-06-09 16:30:20 +02:00
Elliott Wang
e2972dffdd
PLAM => PaLM ( #24129 )
2023-06-09 12:32:16 +01:00
Eli Simhayev
bacaab1629
Added time-series blogs to the models ( #23857 )
...
* added blogs to docs
* removed new-line
2023-06-02 12:32:34 -04:00
Patrick von Platen
dcb5e18c9e
add new mms functions to doc ( #23954 )
2023-06-02 11:35:52 +01:00
Shehan Munasinghe
07c54413ac
Add MobileViTv2 ( #22820 )
...
* generated code from add-new-model-like
* Add code for modeling, config, and weight conversion
* add tests for image-classification, update modeling and config
* add code, tests for semantic-segmentation
* make style, make quality, make fix-copies
* make fix-copies
* Update modeling_mobilevitv2.py
fix bugs
* Update _toctree.yml
* update modeling, config
fix bugs
* Edit docs - fix bug MobileViTv2v2 -> MobileViTv2
* Update mobilevitv2.mdx
* update docstrings
* Update configuration_mobilevitv2.py
make style
* Update convert_mlcvnets_to_pytorch.py
remove unused options
* Update convert_mlcvnets_to_pytorch.py
make style
* Add suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make style, make quality
* Add suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add suggestions from code review
Remove MobileViTv2ImageProcessor
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make style
* Add suggestions from code review
Rename MobileViTv2 -> MobileViTV2
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update modeling_mobilevitv2.py
make style
* Update serialization.mdx
* Update modeling_mobilevitv2.py
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-06-02 10:37:02 +01:00
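An image-classification sketch for the MobileViTV2 addition. The `apple/mobilevitv2-1.0-imagenet1k-256` checkpoint name is an assumption (the log only mentions a model-org update); the existing MobileViT image processor is reused since `MobileViTv2ImageProcessor` was removed during review.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"  # assumed checkpoint name
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])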
Patrick von Platen
5dfd407b37
[MMS] Scaling Speech Technology to 1,000+ Languages | Add attention adapter to Wav2Vec2 ( #23813 )
...
* add fine-tuned with adapter layer
* Add set_target_lang to tokenizer
* Implement load adapter
* add tests
* make style
* Apply suggestions from code review
* Update src/transformers/models/wav2vec2/tokenization_wav2vec2.py
* make fix-copies
* Apply suggestions from code review
* make fix-copies
* make style again
* make style again
* fix doc string
* Update tests/models/wav2vec2/test_tokenization_wav2vec2.py
* Apply suggestions from code review
* fix
* Correct wav2vec2 adapter
* make style
* Update src/transformers/models/wav2vec2/modeling_wav2vec2.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* add more nice docs
* finish
* finish
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
* all finish
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-06-02 10:30:24 +01:00
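The commits above add per-language adapter loading to Wav2Vec2 (`set_target_lang` on the tokenizer, `load_adapter` on the model). A sketch of switching languages, assuming an MMS-style checkpoint such as `facebook/mms-1b-all` (the name is not given in the log) and MMS's ISO language codes.

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"  # assumed MMS checkpoint name

# Load with the French adapter and vocabulary active.
processor = AutoProcessor.from_pretrained(model_id, target_lang="fra")
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang="fra", ignore_mismatched_sizes=True)

# Later, swap to English without reloading the whole model.
processor.tokenizer.set_target_lang("eng")
model.load_adapter("eng")
```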
Denisa Roberts
88f50a1e89
Add TensorFlow implementation of EfficientFormer ( #22620 )
...
* Add tf code for efficientformer
* Fix return dict bug - return last hidden state after last stage
* Fix corresponding return dict bug
* Override test tol
* Change default values of training to False
* Set training to default False X3
* Rm axis from ln
* Set init in dense projection
* Rm debug stuff
* Make style; all tests pass.
* Modify year to 2023
* Fix attention biases codes
* Update the shape list logic
* Add a batch norm eps config
* Remove extra comments in test files
* Add conditional attn and hidden states return for serving output
* Change channel dim checking logic
* Add exception for the WithTeacher model in training mode
* Revert layer count for now
* Add layer count for conditional layer naming
* Transpose for conv happens only in main layer
* Make tests smaller
* Make style
* Update doc
* Rm from_pt
* Change to actual expect image class label
* Remove stray print in tests
* Update image processor test
* Remove the old serving output logic
* Make style
* Make style
* Complete test
2023-05-31 10:43:12 +01:00
Eli Simhayev
4b6a5a7caa
[Time-Series] Autoformer model ( #21891 )
...
* ran `transformers-cli add-new-model-like`
* added `AutoformerLayernorm` and `AutoformerSeriesDecomposition`
* added `decomposition_layer` in `init` and `moving_avg` to config
* added `AutoformerAutoCorrelation` to encoder & decoder
* removed canonical self-attention `AutoformerAttention`
* added arguments in config and model tester. Init works! 😁
* WIP autoformer attention with autocorrelation
* fixed `attn_weights` size
* wip time_delay_agg_training
* fixing sizes and debug time_delay_agg_training
* aggregation in training works! 😁
* `top_k_delays` -> `top_k_delays_index` and added `contiguous()`
* wip time_delay_agg_inference
* finish time_delay_agg_inference 😎
* added resize to autocorrelation
* bug fix: added the length of the output signal to `irfft`
* `attention_mask = None` in the decoder
* fixed test: changed attention expected size, `test_attention_outputs` works!
* removed unnecessary code
* apply AutoformerLayernorm in final norm in enc & dec
* added series decomposition to the encoder
* added series decomp to decoder, with inputs
* added trend todos
* added autoformer to README
* added to index
* added autoformer.mdx
* remove scaling and init attention_mask in the decoder
* make style
* fix copies
* make fix-copies
* initial fix-copies
* fix from https://github.com/huggingface/transformers/pull/22076
* make style
* fix class names
* added trend
* added d_model and projection layers
* added `trend_projection` source, and decomp layer init
* added trend & seasonal init for decoder input
* AutoformerModel cannot be copied as it has the decomp layer too
* encoder can be copied from time series transformer
* fixed generation and made distribution output more robust
* use context window to calculate decomposition
* use the context_window for decomposition
* use output_params helper
* clean up AutoformerAttention
* subsequences_length off by 1
* make fix copies
* fix test
* added init for nn.Conv1d
* fix IGNORE_NON_TESTED
* added model_doc
* fix ruff
* ignore tests
* remove dup
* fix SPECIAL_CASES_TO_ALLOW
* do not copy due to conv1d weight init
* remove unused imports
* added short summary
* added label_length and made the model non-autoregressive
* added params docs
* better doc for `factor`
* fix tests
* renamed `moving_avg` to `moving_average`
* renamed `factor` to `autocorrelation_factor`
* make style
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* fix configurations
* fix integration tests
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixing `lags_sequence` doc
* Revert "fixing `lags_sequence` doc"
This reverts commit 21e34911e3.
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* model layers now take the config
* added `layer_norm_eps` to the config
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* added `config.layer_norm_eps` to AutoformerLayernorm
* added `config.layer_norm_eps` to all layernorm layers
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix variable names
* added initial pretrained model
* added use_cache docstring
* doc strings for trend and use_cache
* fix order of args
* imports on one line
* fixed get_lagged_subsequences docs
* add docstring for create_network_inputs
* get rid of layer_norm_eps config
* add back layernorm
* update fixture location
* fix signature
* use AutoformerModelOutput dataclass
* fix pretrain config
* no need as default exists
* subclass ModelOutput
* remove layer_norm_eps config
* fix test_model_outputs_equivalence test
* test hidden_states_output
* make fix-copies
* Update src/transformers/models/autoformer/configuration_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* removed unused attr
* Update tests/models/autoformer/test_modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/autoformer/modeling_autoformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* use AutoFormerDecoderOutput
* fix formatting
* fix formatting
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-30 10:23:32 +02:00
Arthur
8d28dba35d
[OPT] Doc nit, using fast is fine ( #23789 )
...
small doc nit
2023-05-26 14:30:32 +02:00
Matt
1c460a5273
TF port of the Segment Anything Model (SAM) ( #22970 )
...
* First commit
* Add auto-translation with GPT-4
* make fixup
* Add a functional layernorm for TF
* Add all the auxiliary imports etc.
* Add the extra processor and tests
* rebase to main
* Add all the needed fixes to the GPT code
* make fixup
* Make convolutions channels-last so they run on CPU
* make fixup
* Fix final issues
* Fix other models affected by test change
* Clarify comment on the sparse_prompt_embeddings check
* Refactor functional_layernorm, use shape_list in place of .shape in some places
* Remove deprecated torch-alike code
* Update tests/models/sam/test_modeling_tf_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/sam/test_modeling_tf_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Refactor processor with common methods and separated private methods
* make fixup
* Quietly delete the file that didn't do anything (sorry Sylvain)
* Refactor the processor tests into one file
* make fixup
* Clean up some unnecessary indirection
* Fix TF mask postprocessing
* Add more processor equivalence tests
* Refactor generate_crop_boxes to use framework-neutral np code
* Make the serving output correctly conditional
* Fix error message line length
* Use dict keys rather than indices internally in both TF and PT SAM call/forward
* Return dicts internally in the call/forward methods
* Revert changes to common tests and just override check_pt_tf_outputs
* Revert changes to other model tests
* Clarify comments for functional layernorm
* Add missing transpose from PT code
* Removed unused copied from in PT code
* Remove overrides for tests that don't exist in TF
* Fix transpose and update tests for PT and TF to check pred_masks
* Add training flag
* Update tests to use TF checkpoints
* Update index.mdx
* Add missing cross-test decorator
* Remove optional extra asterisks
* Revert return_dict changes in PT code
* Update src/transformers/models/sam/modeling_tf_sam.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Remove None return annotations on init methods
* Update tests/models/sam/test_processor_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fix input_boxes shapes
* make fixup
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-05-19 14:14:13 +01:00
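A TensorFlow sketch of the ported SAM model; `facebook/sam-vit-base` is an assumed checkpoint and the point prompt is arbitrary.

```python
import requests
from PIL import Image
from transformers import SamProcessor, TFSamModel

# Assumed checkpoint name; pass from_pt=True if only PyTorch weights are available.
model = TFSamModel.from_pretrained("facebook/sam-vit-base")
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) prompt for the single image

inputs = processor(image, input_points=input_points, return_tensors="tf")
outputs = model(**inputs)
print(outputs.iou_scores.shape, outputs.pred_masks.shape)
```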
Yih-Dar
21741e8c7e
Update test_batched_inference_image_captioning_conditioned ( #23391 )
...
* fix
* fix
* fix test + add more docs
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
2023-05-16 14:49:24 +02:00
richardachen
65b885027a
Typo suggestion ( #23360 )
...
Update graphormer.mdx
Typo suggestion
2023-05-15 12:04:16 +01:00
Shehan Munasinghe
c045249049
Add swiftformer ( #22686 )
...
* Commit the automatically generated code
using add-new-model-like
* Update description at swiftformer.mdx file
* remove autogenerated code for MaskedImageModeling
* update weight conversion scripts
* Update modeling_swiftformer.py
* update configuration_swiftformer.py
* Update test_modeling_swiftformer.py
* update modeling code - remove einops dependency
* Update _toctree.yml
* update modeling code - remove copied from comments
* update docs
* Revert "update docs"
This reverts commit c2e05e2998.
* update docs
* remove unused reference SwiftFormerImageProcessor
* update dependency_versions_table.py
* update swiftformer.mdx
* update swiftformer.mdx
* change model output type - no attentions
* update model org name
* Fix typo
* fix copies
* Update tests/models/swiftformer/test_modeling_swiftformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/auto/image_processing_auto.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/auto/feature_extraction_auto.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/swiftformer.mdx
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/swiftformer/configuration_swiftformer.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update modeling_swiftformer.py
fix-copies
* make style, make quality, fix-copies
* Apply suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Apply suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make style
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Add suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make fix-copies
* Update modeling_swiftformer.py
* Update modeling_swiftformer.py
* Add suggestions from code review
Co-Authored-By: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-12 11:52:31 +01:00
Sylvain Gugger
b4d4d6fe87
Add RWKV-4 ( #22797 )
...
* First draft of RWKV-4
* Add support for generate
* Style post-rebase
* Properly use state
* Write doc
* Fix doc
* More math
* Add model to README, dummies and clean config
* Fix init
* multiple fixes:
- fix common tests
- fix configuration default values
- add CI test for checking state computation
- fix some CI tests
* correct tokenizer
* some tweaks
- fix config docstring
- fix failing tests
* fix CI tests
- add output_attention / output_hidden_states
- override test_initialization
- fix failing CIs
* fix conversion script
- fix sharded case
- add new arguments
* add slow tests + more fixes on conversion script
* add another test
* final fixes
* change single name variable
* add mock attention mask for pipeline to work
* correct eos token id
* fix nits
* add checkpoints
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add `tie_word_embeddings` in docstring
* change tensor name
* fix final nits
* Trigger CI
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-09 13:04:10 -04:00
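A generation sketch for the RWKV-4 addition; the `RWKV/rwkv-4-169m-pile` checkpoint name is an assumption, since the log only says checkpoints were added.

```python
from transformers import AutoTokenizer, RwkvForCausalLM

checkpoint = "RWKV/rwkv-4-169m-pile"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = RwkvForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("In a shocking finding, scientists discovered", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```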
NielsRogge
431b04d8c4
[SAM] Add resources ( #23224 )
...
Add resources
2023-05-09 08:58:19 -04:00
Ashwin Mathur
ef0c380c12
Update LLaMA docs with arxiv link ( #23191 )
...
* Update docs with arxiv link
* Update llama model docs
2023-05-07 18:52:44 -04:00
raghavanone
312b104ff6
Add FlaxWhisperForAudioClassification model ( #23173 )
...
* Add FlaxWhisperForAudioClassification model
* Add models to init
* Add models to init
* Fix copies
* Fix automapping
* Fix failing test
2023-05-05 13:23:46 -04:00
Perry Huang
1b9c352e55
Add TrOCR resources ( #23142 )
...
* Add TrOCR resources
* Made fixes suggested by stevhliu
2023-05-05 11:29:20 -04:00
Sylvain Gugger
01734dba84
Revert "Add FlaxWhisperForAudioClassification model" ( #23154 )
...
Revert "Add FlaxWhisperForAudioClassification model (#22883 )"
This reverts commit c8f2c5c56e.
2023-05-04 13:47:07 -04:00
raghavanone
c8f2c5c56e
Add FlaxWhisperForAudioClassification model ( #22883 )
...
* Add FlaxWhisperForAudioClassification model
* Add models to init
* Add models to init
* Fix copies
* Fix automapping
2023-05-04 13:00:16 -04:00
peter-sk
83b38fbea8
GPTNeoXForQuestionAnswering ( #23059 )
...
* first draft - gives index error in question_answering.py
* maturing
* no labels
* pipeline should know about QA
* fixing checks
* formatting
* fixed docstring
* initial commit
* formatting
* adding the class to many places
* towards less unhappy checks
* nearly there
* and gpt neox for qa
* use right model
* forgot this one
* base_model_prefix is "gpt_neox" for GPTNeoX* models
* unnecessary stuff
* Update src/transformers/models/gpt_neox/modeling_gpt_neox.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* format
* Update src/transformers/models/gpt_neox/modeling_gpt_neox.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* removed gpt2 stuff
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-04 10:15:15 -04:00
peter-sk
78b7debf56
GPTNeoForQuestionAnswering ( #23057 )
...
* first draft - gives index error in question_answering.py
* maturing
* no labels
* pipeline should know about QA
* fixing checks
* formatting
* fixed docstring
* initial commit
* formatting
* adding the class to many places
* towards less unhappy checks
* nearly there
* Update src/transformers/models/gpt_neo/modeling_gpt_neo.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* avoid error
* moving to device of start/end_logits
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-03 15:59:19 -04:00
Julien Chaumond
ca7eb27ed5
[doc] Try a few ≠ ways of linking to Papers, users, and org profiles ( #22611 )
...
* [doc] Try a few ≠ ways of linking to Papers, users, and org profiles
* Empty commit
* Empty commit now that the backend is fixed
---------
Co-authored-by: Lysandre <lysandre@huggingface.co>
2023-05-03 18:23:09 +02:00
Samin Yasar
b53004fdce
Add resources for LayoutLmV2 and reformat documentation resources ( #23115 )
...
* add resources for layoutlmv2
* remove 🌎 from some resources
2023-05-03 09:53:00 -04:00
peter-sk
2b0c924568
GPT2ForQuestionAnswering ( #23030 )
...
* first draft - gives index error in question_answering.py
* maturing
* no labels
* pipeline should know about QA
* fixing checks
* formatting
* fixed docstring
* make sure legacy code executes
* comment
* like this
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
2023-05-02 09:25:46 -04:00
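A sketch of the decoder-only QA heads added in this and the GPTNeo/GPTNeoX entries above; the same pattern applies to all three. With a plain `gpt2` checkpoint the QA head is randomly initialized, so this only shows the wiring.

```python
import torch
from transformers import AutoTokenizer, GPT2ForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2ForQuestionAnswering.from_pretrained("gpt2")  # QA head is untrained here

question, context = "Who wrote Hamlet?", "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start, end = int(outputs.start_logits.argmax()), int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))  # meaningless until fine-tuned
```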
Ashwin Mathur
487f132a6f
Add BioGPTForSequenceClassification ( #22253 )
...
* added BioGptForSequenceClassification
* added source of copied code
* typo
* Format code with black
* Update comments for copied code
* Remove code copy comment
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Fix failing tests
* Update code copied from comments
* Fix code quality
* Update src/transformers/models/biogpt/modeling_biogpt.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Fix lint error
* Update src/transformers/models/biogpt/modeling_biogpt.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Rename model to biogpt for consistency
* Add PipelineTesterMixin to test_modeling_biogpt.py
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Resolve merge conflict
---------
Co-authored-by: Guillem García Subies <37592763+GuillemGSubies@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-05-01 09:17:27 -04:00
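A sketch of the new sequence-classification head on BioGPT; `microsoft/biogpt` is the assumed base checkpoint and the classification head is untrained here.

```python
import torch
from transformers import AutoTokenizer, BioGptForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")  # assumed base checkpoint
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", num_labels=2)

inputs = tokenizer("Aspirin reduces the risk of myocardial infarction.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels); head is untrained until fine-tuned
```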
s-JoL
c2c99dc7ef
add open-llama model with ckpt ( #22795 )
...
* update Open-Llama model
* update
* update format
* update doc
* update
* update stable embedding test
* update test case
* update format
* update readme
* fix typo
* update name
* remove tokenizer and update format
* remove convert_open_llama_weights_to_hf
* update warning and doc_string
---------
Co-authored-by: songliang.bayesian <songliang.bayesian@bytedance.com>
2023-04-28 11:01:32 -04:00
peter-sk
d65b14ed67
added GPTNeoForTokenClassification ( #22908 )
...
* added GPTNeoForTokenClassification
* add to top-level init
* fixup
* test
* more fixup
* add to gpt_neo.mdx
* repo consistency
* dummy copy
* fix copies
* optax >= 0.1.5 assumes jax.Array exists - which it doesn't for jax <= 0.3.6
* merge with main made this superfluous
* added classifier_dropout
* remove legacy code
* removed fmt:on/off
removed expected_outputs
* doc style fix
* classifier_dropout is always in config
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
2023-04-27 12:10:03 -04:00
peter-sk
614e191c4d
added GPTNeoXForTokenClassification ( #23002 )
...
* initial commit
* added GPTNeoXForTokenClassification
* typo
* doc
fixed extra comma that turned into a tuple
* unifying variable names
fixing forward call
* classifier_dropout is in config
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
---------
Co-authored-by: Prof. Peter Schneider-Kamp <jps@ordbogen.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2023-04-27 11:08:26 -04:00
Ritik Nandwal
20ac86c6f1
Add TensorFlow Wav2Vec2 for sequence classification ( #22073 )
...
* Add initial changes for TF wav2vec2 for sequence classification
* Add suggested changes
* Add serving and serving output methods
* Add serving_output implementation and fix layer_weights
* Add fixes
* Fixed test cases
* Fixing test and adding suggested changes
2023-04-26 13:35:30 +01:00
Daniel Levenson
4e1522d65a
Fix typo in mega.mdx ( #22998 )
...
MegaConfiig -> MegaConfig
2023-04-25 17:58:45 -04:00
Arthur
df017c3ccc
[CLAP] Doc nits ( #22957 )
...
clap nits
2023-04-24 14:00:29 +02:00
NielsRogge
3d3204c025
Add FocalNet ( #21532 )
...
Adds FocalNet by Microsoft to transformers
---------
Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
Co-authored-by: alaradirik <alaradirik@gmail.com>
2023-04-23 20:03:05 +03:00
Connor Henderson
b950c38565
tests: Fix flaky test for NLLB-MoE ( #22880 )
...
* add test update and docs edits
* docs edit suggestion
2023-04-21 17:09:40 +01:00
fxmarty
3d852da2db
Expose AutoModelForMaskGeneration ( #22910 )
...
* expose
* style
* add dummy object
* amazed by the quality of transformers CI
2023-04-21 10:04:45 -04:00
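With the auto class exposed, SAM-style checkpoints can be resolved generically; a minimal sketch, with the checkpoint name assumed.

```python
from transformers import AutoModelForMaskGeneration, AutoProcessor

model = AutoModelForMaskGeneration.from_pretrained("facebook/sam-vit-base")  # resolves to SamModel
processor = AutoProcessor.from_pretrained("facebook/sam-vit-base")
```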
Arthur
f143037789
Add automatic-mask-generation pipeline for Segment Anything Model (SAM) ( #22840 )
...
* cleanup
* updates
* more refactoring
* make style
* update inits
* support other inputs in base
* update based on review
Co-authored-by: Nicolas Patry <patry.nicolas@gmail.com>
* Update tests/pipelines/test_pipelines_automatic_mask_generation.py
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
* update
* fixup
* TODO x and y to refactor, _h _w refactored here
* update docstring
* more nits
* style on these
* more doc fix
* rename variables
* update
* updates
* style
* update
* fix `_mask_to_rle_pytorch`
* styling
* fix mask to rle, wrong outputs
* add device arg
* update
* more updates, fix tests
* update
* update docstrings
* styling
* fixup
* add notebook on the docs
* update original sizes
* fix docstring
* update condition on points_per_batch
* updates tests
* fix CI test
* extend is required, append does not work!
* fixup
* fix CI tests
* white pixels left
* address doc comments
* fix doc
* slow pipeline tests
* update auto init
* add revision
* make fixup
* update pipeline tag when calling tests
* alphabetical order in inits
* fix copies
* last style nits
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* reformat docstring
* more reformat
* address most of the comments
* Update src/transformers/pipelines/mask_generation.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* final refactor
* Update src/transformers/models/sam/image_processing_sam.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixup and fix slow tests
* revert
---------
Co-authored-by: Nicolas Patry <patry.nicolas@gmail.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2023-04-20 19:27:24 +02:00
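The task registered for this pipeline is `mask-generation`; a sketch of running it on an image URL, with the checkpoint name assumed and `points_per_batch` being the knob mentioned in the commits above.

```python
from transformers import pipeline

generator = pipeline("mask-generation", model="facebook/sam-vit-base")  # assumed checkpoint
outputs = generator(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    points_per_batch=64,  # trades memory for speed when prompting with a dense point grid
)
print(len(outputs["masks"]))  # one binary mask per detected object/region
```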