* first commit
* correct default value for non-causal
* update config and modeling code
* update converting checkpoint
* clean modeling and fix tests
* make style
* add new config parameters to docstring
* fix copied from statements
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* make position_embeddings_type docstrings clearer
* clean converting script
* remove unused function
* clean modeling file
* apply suggestion for test file + add convert script to not_doctested
* modify tests according to review - cleaner logic and more tests
* Apply nit suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add checker of valid position embeddings type
* instantiate new layer norm layer with the right eps
* fix freeze_feature_encoder since it can be None in some cases
* add test for same output in convert script
* restore wav2vec2conformer and add new model
* create processor and FE + clean
* add new model code
* fix convert script and set default config parameters
* correct model id paths
* make style
* make fix-copies and cleaning files
* fix copied from statements
* complete .md and fix copies
* clean convert script argument defaults
* fix config parameters docstrings
* fix config docstring
* add copied from and enrich FE tests
* fix copied from and repo-consistency
* add autotokenizer
* make test input length shorter and change docstring code
* fix docstrings and copied from
* add add_adapter to ASR training example
* make testing of adapters more robust
* adapt to multi adapter layers
* refactor input_values->input_features and remove w2v2-bert feature extractor
* remove pretraining model
* remove deprecated features and useless lines
* add copied from and ignore statements to modeling tests
* remove pretraining model #2
* change import in convert script
* change default in convert script
* update readme and remove useless line
* Update tests/models/wav2vec2_bert/test_processor_wav2vec2_bert.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* refactor BERT to Bert for consistency
* remove useless ignore copy statement
* add persistent to buffer in rotary
* add eps in LayerNorm init and remove copied from
* add adapter activation parameters and add copied from statements
* Fix copied from statements and add unittest.skip reasons
* add copied statement in test_processor
* refactor processor
* make style
* replace numpy random by torch rand
* remove CTC expected output
* improve converting script with processor class
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* remove gumbel class
* remove tests related to previously deleted class
* Update src/transformers/models/wav2vec2_bert/configuration_wav2vec2_bert.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* correct typos
* remove unused parameters
* update processor to take both text and audio
* update checkpoints
* update expected output and add ctc expected output
* add label_attention_mask
* replace pt with np in processor tests
* fix typo
* revert to behaviour with labels_attention_mask
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
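The commits above land the new Wav2Vec2-BERT model, a processor that takes both text and audio, and the input_values->input_features rename. A minimal sketch of the resulting CTC API, assuming a hypothetical CTC fine-tuned checkpoint id (with a tokenizer) and 16 kHz mono audio:

```python
# Minimal sketch only; the checkpoint id below is an assumption, not from the commits.
import torch
from transformers import AutoProcessor, Wav2Vec2BertForCTC

checkpoint = "facebook/w2v-bert-2.0"  # assumed id; a CTC head + tokenizer is required
processor = AutoProcessor.from_pretrained(checkpoint)
model = Wav2Vec2BertForCTC.from_pretrained(checkpoint)

audio = torch.randn(16000).numpy()  # 1 second of placeholder 16 kHz audio
inputs = processor(audio=audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # Note input_features, not input_values, per the refactor above.
    logits = model(input_features=inputs.input_features).logits
predicted_ids = logits.argmax(dim=-1)
transcription = processor.batch_decode(predicted_ids)
```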
* Improve the Training Performance and Scaling documentation by adding PEFT techniques to the suggestions for reducing memory requirements during training
* Update docs/source/en/perf_train_gpu_one.md
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
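The documentation change above recommends PEFT methods for cutting training memory. A minimal sketch of one such method (LoRA) via the peft library; the model id and hyperparameters are illustrative only:

```python
# Sketch under the assumption that the peft library is installed.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # illustrative model id
lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights now require grads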
* added args to the pipeline
* added test
* more sensible tests
* fixup
* docs
* typo
* docs
* made changes to support named args
* fixed test
* docs update
* styles
* docs
* docs
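The commits above don't spell out which pipeline gained named-argument support, so the following is only a generic sketch of forwarding keyword arguments at call time rather than at construction:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# Named arguments are passed per call and forwarded to the underlying generate().
out = generator("Hello world", max_new_tokens=20, do_sample=False)
print(out[0]["generated_text"])
```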
* Add first draft
* Use appropriate gelu function
* More improvements
* More improvements
* More improvements
* Convert checkpoint
* More improvements
* Improve docs, remove print statements
* More improvements
* Add link
* remove unused masking function
* begin tokenizer
* do_lower_case
* debug
* set split_special_tokens=True
* Remove script
* Fix style
* Fix rebase
* Use same design as CLIP
* Add fast tokenizer
* Add SiglipTokenizer to init, remove extra_ids
* Improve conversion script
* Use smaller inputs in conversion script
* Update conversion script
* More improvements
* Add processor to conversion script
* Add tests
* Remove print statements
* Add tokenizer tests
* Fix more tests
* More improvements related to weight initialization
* More improvements
* Make more tests pass
* More improvements
* More improvements
* Add copied from
* Add canonicalize_text
* Enable fast tokenizer tests
* More improvements
* Fix most slow tokenizer tests
* Address comments
* Fix style
* Remove script
* Address some comments
* Add copied from to tests
* Add more copied from
* Add more copied from
* Add more copied from
* Remove is_flax_available
* More updates
* Address comment
* Remove SiglipTokenizerFast for now
* Add caching
* Remove umt5 test
* Add canonicalize_text inside _tokenize, thanks Arthur
* Fix image processor tests
* Skip tests which are not applicable
* Skip test_initialization
* More improvements
* Compare pixel values
* Fix doc tests, add integration test
* Add do_normalize
* Remove causal mask and leverage ignore copy
* Fix attention_mask
* Fix remaining tests
* Fix dummies
* Rename temperature and bias
* Address comments
* Add copied from to tokenizer tests
* Add SiglipVisionModel to auto mapping
* Add copied from to image processor tests
* Improve doc
* Remove SiglipVisionModel from index
* Address comments
* Improve docs
* Simplify config
* Add first draft
* Make it like mistral
* More improvements
* Fix attention_mask
* Fix output_attentions
* Add note in docs
* Convert multilingual model
* Convert large checkpoint
* Convert more checkpoints
* Add pipeline support, correct image_mean and image_std
* Use padding=max_length by default
* Make processor like llava
* Add code snippet
* Convert more checkpoints
* Set keep_punctuation_string=None as in OpenCLIP
* Set normalized=False for special tokens
* Fix doc test
* Update integration test
* Add figure
* Update organization
* Happy new year
* Use AutoModel everywhere
---------
Co-authored-by: patil-suraj <surajp815@gmail.com>
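The commits above add SigLIP. A minimal usage sketch via AutoModel, with an assumed checkpoint id; note padding="max_length" (the default the model was trained with, per the commit above) and the sigmoid over the logits that gives SigLIP its name:

```python
# Sketch only; the checkpoint id is an assumption.
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModel

checkpoint = "google/siglip-base-patch16-224"  # assumed checkpoint id
model = AutoModel.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of 2 cats", "a photo of 2 dogs"]
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
# Sigmoid, not softmax: each text gets an independent match probability.
probs = torch.sigmoid(outputs.logits_per_image)
```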
* Sort es/_toctree.yml like en/_toctree.yml
* Run make style
* Add -Rendimiento y escalabilidad- section to es/_toctree.yml
* Run make style
* Add s to section
* Add translation of performance.md
* Add performance.md to es/_toctree.yml
* Run make style
* Fix docs links
* Run make style
* start - docs, SpeechT5 copy and rename
* add relevant code from FastSpeech2 draft, have tests pass
* make it an actual conformer, add a demo example
* matching inference with original repo, includes debug code
* refactor nn.Sequentials, start using more descriptive variable names
* more renaming
* more renaming
* vocoder scratchwork
* matching vocoder outputs
* hifigan vocoder conversion script
* convert model script, rename some config vars
* replace postnet with speecht5's implementation
* passing common tests, file cleanup
* expand testing, add output hidden states and attention
* tokenizer + passing tokenizer tests
* variety of updates and tests
* g2p_en package setup
* import structure edits
* docstrings and cleanup
* repo consistency
* deps
* small cleanup
* forward signature param order
* address comments except for masks and labels
* address comments on attention_mask and labels
* address second round of comments
* remove old unneeded line
* address comments part 1
* address comments pt 2
* rename auto mapping
* fixes for failing tests
* address comments part 3 (bart-like, train loss)
* make style
* pass config where possible
* add forward method + tests to WithHifiGan model
* make style
* address arg passing and generate_speech comments
* address Arthur comments
* address Arthur comments pt2
* lint changes
* Sanchit comment
* add g2p-en to doctest deps
* move up self.encoder
* ONNX-compatible tensor method
* fix is symbolic
* fix paper url
* move models to espnet org
* make style
* make fix-copies
* update docstring
* Arthur comments
* update docstring w/ new updates
* add model architecture images
* header size
* md wording update
* make style
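The commits above add FastSpeech2Conformer together with a HiFi-GAN vocoder. A minimal sketch of the WithHifiGan composite model mentioned above, assuming the espnet-org checkpoint ids implied by the "move models to espnet org" commit (the tokenizer needs the g2p-en package, per the deps commits):

```python
# Sketch only; checkpoint ids are assumed from the commits, not confirmed here.
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGan

tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
model = FastSpeech2ConformerWithHifiGan.from_pretrained(
    "espnet/fastspeech2_conformer_with_hifigan"
)

inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
output_dict = model(input_ids=inputs["input_ids"], return_dict=True)
waveform = output_dict["waveform"]  # raw audio from the HiFi-GAN vocoder
```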
* fix: minor enhancement and fix in bounding box visualization example
The example that visualizes bounding boxes did not handle an edge case: the bounding boxes may be un-normalized. As a result, the same code could not produce correct results on a dataset with un-normalized bounding boxes. This commit fixes that.
* run make clean
* add an additional note on the scenarios where the box viz code works
---------
Co-authored-by: Anindyadeep <anindya@pop-os.localdomain>
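A hypothetical sketch of the edge case described above: rescale a box to pixel coordinates only when its values look normalized. The helper name and the heuristic are illustrative, not the example's actual code:

```python
# Illustrative helper; not taken from the fixed example itself.
def to_pixel_coords(box, width, height):
    x_min, y_min, x_max, y_max = box
    if max(box) <= 1.0:  # heuristic: values in [0, 1] are treated as normalized
        return x_min * width, y_min * height, x_max * width, y_max * height
    return box  # already in pixel coordinates

print(to_pixel_coords((0.1, 0.2, 0.5, 0.8), 640, 480))  # normalized input
print(to_pixel_coords((64, 96, 320, 384), 640, 480))    # un-normalized input
```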
* update
* Update
* Update docs/source/ja/model_doc/deformable_detr.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/data2vec.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/cvt.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* add suggestions
* Toctree update
* remove git references
* Update docs/source/ja/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/decision_transformer.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Add glossary to es/_toctree.yml
* Add glossary.md to es/
* A section translated
* B and C section translated
* Fix typo in en/glossary.md C section
* D section translated | Add an extra line in en/glossary.md
* E and F section translated | Fix typo in en/glossary.md
* Fix the word preentrenado
* H and I section translated | Fix typo in en/glossary.md
* L section translated
* M and N section translated
* P section translated
* R section translated
* S section translated
* T section translated
* U and Z section translated | Fix TensorParallel link in both files
* Fix word
* add sdpa
* wip
* cleaning
* add ref
* yet more cleaning
* and more :)
* wip llama
* working llama
* add output_attentions=True support
* bigcode sdpa support
* fixes
* gpt-bigcode support, require torch>=2.1.1
* add falcon support
* fix conflicts falcon
* style
* fix attention_mask definition
* remove output_attentions from attnmaskconverter
* support whisper without removing any Copied from statement
* fix mbart default to eager renaming
* fix typo in falcon
* fix is_causal in SDPA
* check is_flash_attn_2_available in the model's init as well, in case the model is not initialized through from_pretrained
* add warnings when falling back on the manual implementation
* precise doc
* wip: replace _flash_attn_enabled with config.attn_implementation
* fix typo
* add tests
* style
* add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it inplace
* obey config.attn_implementation if a config is passed in from_pretrained
* fix is_torch_sdpa_available when torch is not installed
* remove dead code
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/bart/modeling_bart.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* remove duplicate pretraining_tp code
* add dropout in llama
* precise comment on attn_mask
* add fmt: off for _unmask_unattended docstring
* precise num_masks comment
* nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion
* cleanup modeling_utils
* backward compatibility
* fix style as requested
* style
* improve documentation
* test pass
* style
* add _unmask_unattended tests
* skip meaningless tests for idefics
* hard_check SDPA requirements when specifically requested
* standardize the use of XXX_ATTENTION_CLASSES
* fix SDPA bug with mem-efficient backend on CUDA when using fp32
* fix test
* rely on SDPA is_causal parameter to handle the causal mask in some cases
* fix FALCON_ATTENTION_CLASSES
* remove _flash_attn_2_enabled occurrences
* fix test
* add OPT to the list of supported flash models
* improve test
* properly test on different SDPA backends, on different dtypes & properly handle separately the pad tokens in the test
* remove remaining _flash_attn_2_enabled occurrence
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update docs/source/en/perf_infer_gpu_one.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* remove use_attn_implementation
* fix docstring & slight bug
* make attn_implementation internal (_attn_implementation)
* typos
* fix tests
* deprecate use_flash_attention_2=True
* fix test
* add back llama that was removed by mistake
* fix tests
* remove _flash_attn_2_enabled occurrences (second pass)
* add check & test that passed attn_implementation is valid
* fix falcon torchscript export
* fix device of mask in tests
* add tip about torch.jit.trace and move BetterTransformer doc below SDPA
* fix parameterized.expand order
* move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there
* update sdpaattention class with the new cache
* Update src/transformers/configuration_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/bark/modeling_bark.py
* address review comments
* WIP torch.jit.trace fix. left: test both eager & sdpa
* add test for torch.jit.trace for both eager/sdpa
* fix falcon with torch==2.0 that needs to use sdpa
* fix doc
* hopefully last fix
* fix key_value_length that has no default now in mask converter
* is it flaky?
* fix speculative decoding bug
* tests do pass
* fix following #27907
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
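With this PR merged, the attention backend is selected through attn_implementation, and use_flash_attention_2=True is deprecated in its favor, per the commits above. A minimal sketch, with an illustrative model id:

```python
from transformers import AutoModelForCausalLM

# Request the PyTorch scaled_dot_product_attention backend explicitly.
# Some models require a recent torch (e.g. gpt-bigcode needs torch>=2.1.1, per the commits).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # illustrative model id
    attn_implementation="sdpa",          # or "eager", or "flash_attention_2"
)
```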
* Add pad_truncation to es/_toctree.yml
* Add pad_truncation.md to es/
* Translated first two paragraph
* Translated padding argument section
* Translated truncation argument section
* Translated final paragraphs
* Translated table
* Fixed typo in the table of en/pad_truncation.md
* Run make style | Fix a word
* Add Padding (relleno) and Truncation (truncamiento) to the final paragraphs
* Fix relleno and truncamiento words
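The translated page documents the tokenizer's padding and truncation arguments. A minimal sketch of the behavior it describes, with an illustrative model id:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative model id
batch = tokenizer(
    ["a short sentence", "a much longer sentence that will be cut off"],
    padding="max_length",  # pad every sequence up to max_length
    truncation=True,       # truncate anything longer than max_length
    max_length=8,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # torch.Size([2, 8])
```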