* Add glossary to es/_toctree.yml
* Add glossary.md to es/
* A section translated
* B and C section translated
* Fix typo in en/glossary.md C section
* D section translated | Add an extra line in en/glossary.md
* E and F section translated | Fix typo in en/glossary.md
* Fix word "preentrenado"
* H and I section translated | Fix typo in en/glossary.md
* L section translated
* M and N section translated
* P section translated
* R section translated
* S section translated
* T section translated
* U and Z section translated | Fix TensorParallel link in both files
* Fix word
* add sdpa
* wip
* cleaning
* add ref
* yet more cleaning
* and more :)
* wip llama
* working llama
* add output_attentions=True support
* bigcode sdpa support
* fixes
* gpt-bigcode support, require torch>=2.1.1
* add falcon support
* fix conflicts falcon
* style
* fix attention_mask definition
* remove output_attentions from attnmaskconverter
* support whisper without removing any Copied from statement
* fix mbart default to eager renaming
* fix typo in falcon
* fix is_causal in SDPA
* check is_flash_attn_2_available in the model's init as well, in case the model is not initialized through from_pretrained
* add warnings when falling back on the manual implementation
* precise doc
* wip replace _flash_attn_enabled by config.attn_implementation
* fix typo
* add tests
* style
* add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it in place
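Roughly the idea, as a sketch (`resolve_attn_config` and `requested_implementation` are hypothetical names; the real logic lives inside `from_pretrained`):

```python
import copy

def resolve_attn_config(config, requested_implementation):
    # Deep-copy first so the caller's config object is never mutated in place.
    config = copy.deepcopy(config)
    config._attn_implementation = requested_implementation  # internal attribute, see below
    return config
```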
* obey config.attn_implementation if a config is passed in from_pretrained
* fix is_torch_sdpa_available when torch is not installed
* remove dead code
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/bart/modeling_bart.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* remove duplicate pretraining_tp code
* add dropout in llama
* precise comment on attn_mask
* add fmt: off for _unmask_unattended docstring
* precise num_masks comment
* nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion
* cleanup modeling_utils
* backward compatibility
* fix style as requested
* style
* improve documentation
* test pass
* style
* add _unmask_unattended tests
* skip meaningless tests for idefics
* hard_check SDPA requirements when specifically requested
* standardize the use of XXX_ATTENTION_CLASSES
* fix SDPA bug with mem-efficient backend on CUDA when using fp32
* fix test
* rely on SDPA is_causal parameter to handle the causal mask in some cases
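Sketch of the pattern this refers to (not the exact model code): when no padding mask is present, SDPA can build the causal mask itself instead of attending through a materialized additive mask:

```python
import torch.nn.functional as F

def sdpa_attention(query, key, value, attention_mask=None):
    # query/key/value: (batch, num_heads, seq_len, head_dim).
    # Ask SDPA for an implicit causal mask only when there is no padding mask
    # and we are attending over more than one query token.
    is_causal = attention_mask is None and query.shape[2] > 1
    return F.scaled_dot_product_attention(
        query, key, value, attn_mask=attention_mask, is_causal=is_causal
    )
```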
* fix FALCON_ATTENTION_CLASSES
* remove _flash_attn_2_enabled occurrences
* fix test
* add OPT to the list of supported flash models
* improve test
* properly test on different SDPA backends, on different dtypes & properly handle separately the pad tokens in the test
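One way such a test can pin each backend, using the torch 2.1-era `torch.backends.cuda.sdp_kernel` context manager (CUDA-only sketch; shapes and dtype are arbitrary):

```python
import torch
import torch.nn.functional as F

q = k = v = torch.rand(1, 8, 16, 64, device="cuda", dtype=torch.float16)
backends = {
    "flash":         dict(enable_flash=True,  enable_mem_efficient=False, enable_math=False),
    "mem_efficient": dict(enable_flash=False, enable_mem_efficient=True,  enable_math=False),
    "math":          dict(enable_flash=False, enable_mem_efficient=False, enable_math=True),
}
for name, flags in backends.items():
    # Restrict SDPA to a single backend so each kernel is exercised in isolation.
    with torch.backends.cuda.sdp_kernel(**flags):
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```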
* remove remaining _flash_attn_2_enabled occurrence
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/modeling_attn_mask_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update docs/source/en/perf_infer_gpu_one.md
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* remove use_attn_implementation
* fix docstring & slight bug
* make attn_implementation internal (_attn_implementation)
* typos
* fix tests
* deprecate use_flash_attention_2=True
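End-user view of the change (the checkpoint name is just an example):

```python
from transformers import AutoModelForCausalLM

# Before: use_flash_attention_2=True (now deprecated).
# After: a single argument selects the attention backend.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    attn_implementation="sdpa",  # or "eager" / "flash_attention_2"
)
```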
* fix test
* add back llama that was removed by mistake
* fix tests
* remove _flash_attn_2_enabled occurrences bis
* add check & test that passed attn_implementation is valid
* fix falcon torchscript export
* fix device of mask in tests
* add tip about torch.jit.trace and move bt doc below sdpa
* fix parameterized.expand order
* move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there
* update sdpaattention class with the new cache
* Update src/transformers/configuration_utils.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/bark/modeling_bark.py
* address review comments
* WIP torch.jit.trace fix. left: test both eager & sdpa
* add test for torch.jit.trace for both eager/sdpa
* fix falcon with torch==2.0 that needs to use sdpa
* fix doc
* hopefully last fix
* fix key_value_length that has no default now in mask converter
* is it flaky?
* fix speculative decoding bug
* tests do pass
* fix following #27907
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Add pad_truncation to es/_toctree.yml
* Add pad_truncation.md to es/
* Translated first two paragraph
* Translated paddig argument section
* Translated truncation argument section
* Translated final paragraphs
* Translated table
* Fixed typo in the table of en/pad_truncation.md
* Run make style | Fix a word
* Add Padding (relleno) and Truncation (truncamiento) in the final paragraphs
* Fix relleno and truncamiento words
* Draft version of new KV Caching
This should allow Attention Sinks (https://github.com/tomaarsen/attention_sinks)
/ StreamingLLM (https://arxiv.org/abs/2309.17453) to be easily implemented
in a third-party library or in transformers directly
* Address numerous PR suggestions
1. Move layer_idx from cache to ...Attention. Removes confusing set_layer_idx magic.
2. Always convert past_key_values to Cache instance at the start of ...Attention, removes all other isinstance calls.
3. Remove __bool__ and __getitem__ magic as they're confusing.
4. past_key_values.update(key, value, idx) now returns key, value.
5. Add use_legacy_cache flag, defaults to None, i.e. Falsey. This breaks generate for now, until 1) the cache is used in generate() or 2) use_legacy_cache is defaulted to True in generate() until we change it in another PR.
6. Separate key_cache and value_cache.
Some work is still needed to see if the SinkCache can conveniently be implemented with just one update method; a minimal sketch of the update contract follows.
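Illustrative only, not the exact transformers code: separate per-layer key/value lists (suggestions 4 and 6 above) and an update() that appends and returns the full keys/values:

```python
import torch

class DynamicCache:
    def __init__(self):
        self.key_cache: list[torch.Tensor] = []
        self.value_cache: list[torch.Tensor] = []

    def update(self, key_states, value_states, layer_idx):
        # First call for this layer starts the cache; later calls append along the
        # sequence dimension of (batch, num_heads, seq_len, head_dim) tensors.
        if len(self.key_cache) <= layer_idx:
            self.key_cache.append(key_states)
            self.value_cache.append(value_states)
        else:
            self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=-2)
            self.value_cache[layer_idx] = torch.cat([self.value_cache[layer_idx], value_states], dim=-2)
        # update() returns the keys/values the attention layer should attend over.
        return self.key_cache[layer_idx], self.value_cache[layer_idx]
```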
* Implement the SinkCache through backward+forward rotations
* Integrate (Sink)Cache with Llama FA2
* Set use_legacy_cache=True as default, allows for test passes
* Move from/to_legacy_cache to ...Model class
* Undo unnecessary newline change
* Remove copy utility from deprecated OpenLlama
* Match import style
* manual rebase with main
* Cache class working with generate (#1)
* Draft version of new KV Caching
This should allow Attention Sinks (https://github.com/tomaarsen/attention_sinks)
/ StreamingLLM (https://arxiv.org/abs/2309.17453) to be easily implemented
in a third-party library or in transformers directly
* Address numerous PR suggestions
1. Move layer_idx from cache to ...Attention. Removes confusing set_layer_idx magic.
2. Always convert past_key_values to Cache instance at the start of ...Attention, removes all other isinstance calls.
3. Remove __bool__ and __getitem__ magic as they're confusing.
4. past_key_values.update(key, value, idx) now returns key, value.
5. Add use_legacy_cache flag, defaults to None, i.e. Falsey. This breaks generate for now, until 1) the cache is used in generate() or 2) use_legacy_cache is defaulted to True in generate() until we change it in another PR.
6. Separate key_cache and value_cache.
Some work is still needed to see if the SinkCache can conveniently be implemented with just one update method.
* Integrate (Sink)Cache with Llama FA2
* Move from/to_legacy_cache to ...Model class
* Undo unnecessary newline change
* Match import style
* working generate
* Add tests; Simplify code; Apply changes to Mistral and Persimmon
* fix rebase mess
* a few more manual fixes
* last manual fix
* propagate changes to phi
* upgrade test
* add use_legacy_cache docstring; beef up tests
* reintroduce unwanted deletes
---------
Co-authored-by: Tom Aarsen <Cubiegamedev@gmail.com>
* move import
* add default to model_kwargs.get('use_legacy_cache')
* correct failing test
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* apply PR suggestions
* fix failing test
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Tom Aarsen <37621491+tomaarsen@users.noreply.github.com>
* PR comments
* tmp commit
* add docstrings
* more tests, more docstrings, add to docs
* derp
* tmp commit
* tmp dbg
* more dbg
* fix beam search bug
* cache can be a list of tuples in some models
* fix group beam search
* all but sinkcache integration tests
* fix sink cache and add hard integration test
* now also compatible with input_embeds input
* PR comments
* add Cache support to Phi+FA2
* make fixup
---------
Co-authored-by: Joao Gante <joao@huggingface.co>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Updates the Distributed CPU documentation to add a Kubernetes example
* Small edits
* Fixing link
* Adding missing new lines
* Minor edits
* Update to include Dockerfile snippet
* Add comment about tuning env var
* Updates based on review comments
* add model like
* logits match
* minor fixes
* fixes
* up
* up
* add todo
* llava processor
* keep the processor simple
* add conversion script
* fixup
* fix copies
* up
* add to index
* fix config + logits
* fix
* refactor
* more refactor
* more refactor
* fix copies
* add authors
* v1 tests
* add `LlavaProcessor` in init
* remove unneeded import
* up
* up
* docs
* up
* fix CI
* fix CI
* add attention mask in test
* make fixup
* remove the vision model
* that's the dirty way to do it
* nits
* nits
* updates
* add more tests
* add input tests
* fixup
* more styling
* nits
* updates and cleanup
* fixup the generation expected results
* fix the testing script
* some cleanup and simplification which does not work yet but almost there!
* make correct dispatch operations
* vectorize works for batch of images and text
* last todos
* nits
* update test and modeling code
* remove useless function for now
* fix few issues
* fix generation
* some nits
* add bakllava
* nits
* remove duplicated code
* finish merge
* cleanup
* missed this line
* fill the todos
* add left padding offset
* add left and right padding logic
* bool to properly index
* make sure
* more cleanups
* batch is fixed 😉
* add correct device for tensor creation
* fix some dtype mismatch
* ruff
* update conversion script
* Update src/transformers/__init__.py
* fa 2 support + fix conversion script
* more
* correct reshaping
* fix test dict
* fix copies by ignoring
* fix nit
* skip clip vision model
* fixup
* fixup
* LlavaForVisionText2Text -> LlavaForCausalLM
* update
* fix
* raise correct errors
* fix
* docs
* nuke for now
* nits here and there
* fixup
* fix remaining tests
* update LlavaForConditionalGeneration instead of CausalLM
* fixups
* pipeline support
* slow and pipeline tests
* supports batch
* nits
* cleanup
* fix first integration tests
* add pad token where needed
* correct tests
* fixups
* update pipeline test
* fix quality
* nits
* revert unneeded change
* nit
* use BatchFeature
* from ...feature_extraction_utils import BatchFeature
* nits
* nits
* properly update
* more f*** nits
* fix copies
* comment
* keep slow test slow
* Update src/transformers/models/llava/processing_llava.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* add pipeline example
* add pixel values in docstring
* update pr doctest
* fix
* fix slow tests
* remove hack
* fixup
* small note
* forward contrib credits from PR25789
* forward contrib credits from original implementation and work
* add arthur
* Update src/transformers/models/llava/processing_llava.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
* update docstring
* nit
* move to not doctested because of timeout issues
* fixup
* add description
* more
* fix-copies
* fix docs
* add beam search
* add more comments
* add typehints on processor
* add speedup plot
* update slow tests and docs
* push test
* push batched test
* fix batched generation with different number of images
* remove benchmark due to a bug
* fix test
* fix copies
* add gcolab demo
---------
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: shauray8 <shauray8@users.noreply.github.com>
Co-authored-by: haotian-liu <haotian-liu@users.noreply.github.com>
Co-authored-by: Lysandre Debut <hi@lysand.re>
* Copies `modeling_flax_gpt_neo.py` to start
* MLP Block. WIP Attention and Block
* Adds Flax implementation of `LlamaMLP`
Validated with in-file test.
Some slight numeric differences, but assuming it isn't an issue
* Adds `FlaxLlamaRMSNorm` layer
`flax.linen` includes `RMSNorm` layer but not necessarily in all
versions. Hence, we add in-file.
* Adds FlaxLlamaAttention
Copied from GPT-J as it has efficient caching implementation as well as
rotary embeddings.
Notice numerically different, but not by a huge amount. Needs
investigating
* Adds `FlaxLlamaDecoderLayer`
numerically inaccurate, debugging..
* debugging rotary mismatch
gptj uses interleaved whilst llama uses contiguous
i think they match now but still final result is wrong.
maybe drop back to just debugging attention layer?
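For context on the mismatch, a sketch of the two rotary conventions (function names are illustrative): GPT-J rotates interleaved (even, odd) channel pairs, while Llama rotates the two contiguous halves of the head dimension:

```python
import jax.numpy as jnp

def rotate_half_contiguous(x):
    # Llama-style: split the head dimension into two contiguous halves.
    x1, x2 = jnp.split(x, 2, axis=-1)
    return jnp.concatenate([-x2, x1], axis=-1)

def rotate_every_two_interleaved(x):
    # GPT-J-style: rotate interleaved (even, odd) pairs of channels.
    even, odd = x[..., ::2], x[..., 1::2]
    return jnp.stack([-odd, even], axis=-1).reshape(x.shape)
```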
* fixes bug with decoder layer
still somewhat numerically inaccurate, but close enough for now
* adds markers for what to implement next
the structure here diverges a lot from the PT version.
not a big fan of it, but just get something working for now
* implements `FlaxLlamaBlockCollection`
tolerance must be higher than expected, kinda disconcerting
* Adds `FlaxLlamaModule`
equivalent PyTorch model is `LlamaModel`
yay! a language model🤗
* adds `FlaxLlamaForCausalLMModule`
equivalent to `LlamaForCausalLM`
still missing returning dict or tuple, will add later
* start porting pretrained wrappers
realised it probably needs return dict as a prereq
* cleanup, quality, style
* readds `return_dict` and model output named tuples
* (tentatively) pretrained wrappers work 🔥
* fixes numerical mismatch in `FlaxLlamaRMSNorm`
seems `jax.lax.rsqrt` does not match `torch.sqrt`.
manually computing `1 / jax.numpy.sqrt` results in matching values.
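Sketch of the norm with that fix applied (names are illustrative):

```python
import jax.numpy as jnp

def rms_norm(hidden_states, weight, eps=1e-6):
    variance = jnp.mean(jnp.square(hidden_states.astype(jnp.float32)), axis=-1, keepdims=True)
    # jax.lax.rsqrt(variance + eps) showed small mismatches against the PyTorch
    # reference; computing 1 / jnp.sqrt(...) manually matched torch's values.
    hidden_states = hidden_states * (1.0 / jnp.sqrt(variance + eps))
    return weight * hidden_states
```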
* [WIP] debugging numerics
* numerical match
I think issue was accidental change of backend. forcing CPU fixes test.
We expect some mismatch on GPU.
* adds in model and integration tests for Flax Llama
summary of failing:
- mul invalid combination of dimensions
- one numerical mismatch
- bf16 conversion (maybe my local backend issue)
- params are not FrozenDict
* adds missing TYPE_CHECKING import and `make fixup`
* adds back missing docstrings
needs review on quality of docstrings, not sure what is required.
Furthermore, need to check if `CHECKPOINT_FOR_DOC` is valid. See TODO
* commenting out equivalence test as can just use common
* debugging
* Fixes bug where mask and pos_ids were swapped in pretrained models
This results in all tests passing now 🔥
* cleanup of modeling file
* cleanup of test file
* Resolving simpler review comments
* addresses more minor review comments
* fixing introduced pytest errors from review
* wip additional slow tests
* wip tests
need to grab a GPU machine to get real logits for comparison
otherwise, slow tests should be okay
* `make quality`, `make style`
* adds slow integration tests
- checking logits
- checking hidden states
- checking generation outputs
* `make fix-copies`
* fix mangled function following `make fix-copies`
* adds missing type checking imports
* fixes missing parameter checkpoint warning
* more finegrained 'Copied from' tags
avoids issue of overwriting `LLAMA_INPUTS_DOCSTRING`
* swaps import guards
??? how did these get swapped initially?
* removing `inv_freq` again as pytorch version has now removed
* attempting to get CI to pass
* adds doc entries for llama flax models
* fixes typo in __init__.py imports
* adds back special equivalence tests
these come from the gpt neo flax tests. there is special behaviour for these models that needs to override the common version
* overrides tests with dummy to see if CI passes
need to fill in these tests later
* adds my contribution to docs
* `make style; make quality`
* replaces random masking with fixed to work with flax version
* `make quality; make style`
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* updates `x`->`tensor` in `rotate_half`
* addresses smaller review comments
* Update docs/source/en/model_doc/llama.md
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* adds integration test class
* adds `dtype` to rotary embedding to cast outputs
* adds type to flax llama rotary layer
* `make style`
* `make fix-copies`
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* applies suggestions from review
* Update modeling_flax_llama.py
* `make fix-copies`
* Update tests/models/llama/test_modeling_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update src/transformers/models/llama/modeling_flax_llama.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* fixes shape mismatch in FlaxLlamaMLP
* applies some suggestions from reviews
* casts attn output logits to f32 regardless of dtype
* adds attn bias using `LlamaConfig.attention_bias`
* adds Copied From comments to Flax Llama test
* mistral and persimmon test change -copy from llama
* updates docs index
* removes Copied from in tests
it was preventing `make fix-copies` from succeeding
* quality and style
* ignores FlaxLlama input docstring
* adds revision to `_CHECKPOINT_FOR_DOC`
* repo consistency and quality
* removes unused import
* removes copied from from Phi test
now diverges from llama tests following FlaxLlama changes
* adds `_REAL_CHECKPOINT_FOR_DOC`
* removes refs from pr tests
* reformat to make ruff happy
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Add models
* Add models and update `_toctree.yml`
* Update docs/source/ja/model_doc/chinese_clip.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/camembert.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/bros.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/bros.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/blip-2.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/camembert.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* solve merge conflicts and update paper titles
* Update docs/source/ja/model_doc/bridgetower.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/canine.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/model_doc/chinese_clip.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update the authors' name in bros.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* v1 fusing modules
* add fused mlp support
* up
* fix CI
* block save_pretrained
* fixup
* small fix
* add new condition
* add v1 docs
* add some comments
* style
* fix nit
* adapt from suggestion
* add check
* change arg names
* change variables name
* Update src/transformers/integrations/awq.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* style
* split up into 3 different private methods
* more conditions
* more checks
* add fused tests for custom models
* fix
* fix tests
* final update docs
* final fixes
* fix importlib metadata
* Update src/transformers/utils/quantization_config.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* change it to `do_fuse`
* nit
* Update src/transformers/utils/quantization_config.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update src/transformers/utils/quantization_config.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update src/transformers/utils/quantization_config.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* few fixes
* revert
* fix test
* fix copies
* raise error if model is not quantized
* add test
* use quantization_config.config when fusing
* Update src/transformers/modeling_utils.py
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Create asr.md
* Create audio_classification.md
* Create document_question_answering.md
* Update document_question_answering.md
* add
* add
* ggg
* gg
* add masked_language_modeling.md
* add monocular_depth estimation
* new
* dd
* add
* add
* cl
* add
* Add Translation.md
* hgf
* Added docs to Toctree file
* Update docs/source/ja/tasks/asr.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/asr.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/image_classification.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/idefics.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/image_captioning.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Fix docs and revert changes
* Update docs/source/en/tasks/idefics.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/language_modeling.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/language_modeling.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/language_modeling.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/prompting.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/masked_language_modeling.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/masked_language_modeling.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/prompting.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/object_detection.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/semantic_segmentation.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/semantic_segmentation.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/token_classification.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/translation.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/visual_question_answering.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/summarization.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* changes in review 1 and 2
* add
* Update docs/source/ja/tasks/asr.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/tasks/translation.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* changes
* Update docs/source/ja/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update _toctree.yml
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* add working conversion script
* first non-working version of modeling code
* update modeling code (working)
* make style
* make fix-copies
* add config docstrings
* add config to ignore docstring formatting due to unconventional markdown
* fix copies
* fix generation num_return_sequences
* enrich docs
* add and fix tests beside integration tests
* update integration tests
* update repo id
* add tie weights and make style
* correct naming in .md
* fix imports and so on
* correct docstrings
* fix fp16 speech forward
* fix speechencoder attention
* make style
* fix copied from
* rename SeamlessM4Tv2-v2 to SeamlessM4Tv2
* Apply suggestions on configuration
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* remove useless public models
* fix private models + better naming for T2U models
* clean speech encoder relative position embeddings
* refactor chunk attention
* add docstrings to chunk attention method
* improve naming and docstrings
* rename some attention variables + add temperature sampling in T2U model
* rename DOCSTRINGS variable names
* make style + remove 2 useless config parameters
* enrich model card
* remove any attention_head reference + fix temperature in T2U
* new fmt and make style
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* rename spkr_id->speaker_id and change docstrings of get_char_input_ids
* simplify v2attention
* make style
* Update seamless_m4t_v2.md
* update code and tests with last update
* update repo ids
* fill article name, abstract and authors
* update not_doctested and slow_doc tests
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* add distribution head to forecasting
* formatting
* Add generate function for forecasting
* Add generate function to prediction task
* formatting
* use argsort
* add past_observed_mask ordering
* fix arguments
* docs
* add back test_model_outputs_equivalence test
* formatting
* cleanup
* formatting
* use ACT2CLS
* formatting
* fix add_start_docstrings decorator
* add distribution head and generate function to regression task
add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput and PatchTSTForRegressionOutput.
* add distribution head and generate function to regression task
add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput and PatchTSTForRegressionOutput.
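The sampling pattern behind these generate functions, sketched with plain torch.distributions (the Student-T head and all shapes are illustrative):

```python
import torch
from torch.distributions import StudentT

# Pretend the distribution head predicted df/loc/scale for 2 series x 24 future steps.
df = torch.full((2, 24), 3.0)
loc = torch.zeros(2, 24)
scale = torch.ones(2, 24)
distribution = StudentT(df, loc, scale)

# generate(): draw num_parallel_samples Monte Carlo trajectories; downstream code
# can then take medians or quantiles over the sample dimension.
num_parallel_samples = 100
samples = torch.stack([distribution.sample() for _ in range(num_parallel_samples)], dim=1)
print(samples.shape)  # torch.Size([2, 100, 24])
```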
* fix typos
* add forecast_masking
* fixed tests
* use set_seed
* fix doc test
* formatting
* Update docs/source/en/model_doc/patchtst.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* better var names
* rename PatchTSTTranspose
* fix argument names and docs string
* remove compute_num_patches and unused class
* remove assert
* renamed to PatchTSTMasking
* use num_labels for classification
* use num_labels
* use default num_labels from super class
* move model_type after docstring
* renamed PatchTSTForMaskPretraining
* bs -> batch_size
* more review fixes
* use hidden_state
* rename encoder layer and block class
* remove commented seed_number
* edit docstring
* Add docstring
* formatting
* use past_observed_mask
* doc suggestion
* make fix-copies
* use Args:
* add docstring
* add docstring
* change some variable names and add PatchTST before some class names
* formatting
* fix argument types
* fix tests
* change x variable to patch_input
* format
* formatting
* fix-copies
* Update tests/models/patchtst/test_modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move loss to forward
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* formatting
* fix a bug when pre_norm is set to True
* output_hidden_states is set to False as default
* set pre_norm=True as default
* format docstring
* format
* output_hidden_states is None by default
* add missing docs
* better var names
* docstring: remove default to False in output_hidden_states
* change labels name to target_values in regression task
* format
* fix tests
* change to forecast_mask_ratios and random_mask_ratio
* change mask names
* change future_values to target_values param in the prediction class
* remove nn.Sequential and make PatchTSTBatchNorm class
* black
* fix argument name for prediction
* add output_attentions option
* add output_attentions to PatchTSTEncoder
* formatting
* Add attention output option to all classes
* Remove PatchTSTEncoderBlock
* create PatchTSTEmbedding class
* use config in PatchTSTPatchify
* Use config in PatchTSTMasking class
* add channel_attn_weights
* Add PatchTSTScaler class
* add output_attentions arg to test function
* format
* Update doc with image patchtst.md
* fix-copies
* rename Forecast <-> Prediction
* change name of a few parameters to match with PatchTSMixer.
* Remove *ForForecasting class to match with other time series models.
* make style
* Remove PatchTSTForForecasting in the test
* remove PatchTSTForForecastingOutput class
* change test_forecast_head to test_prediction_head
* style
* fix docs
* fix tests
* change num_labels to num_targets
* Remove PatchTSTTranspose
* remove arguments in PatchTSTMeanScaler
* remove arguments in PatchTSTStdScaler
* add config as an argument to all the scaler classes
* reformat
* Add norm_eps for batchnorm and layernorm
* reformat.
* reformat
* edit docstring
* update docstring
* change variable name pooling to pooling_type
* fix output_hidden_states as tuple
* fix bug when calling PatchTSTBatchNorm
* change stride to patch_stride
* create PatchTSTPositionalEncoding class and restructure the PatchTSTEncoder
* formatting
* initialize scalers with configs
* edit output_hidden_states
* style
* fix forecast_mask_patches doc string
* doc improvements
* move summary to the start
* typo
* fix docstring
* turn off masking when using prediction, regression, classification
* return scaled output
* adjust output when using distribution head
* remove _num_patches function in the config
* get config.num_patches from patchifier init
* add output_attentions docstring, remove tuple in output_hidden_states
* change SamplePatchTSTPredictionOutput and SamplePatchTSTRegressionOutput to SamplePatchTSTOutput
* remove print("model_class: ", model_class)
* change encoder_attention_heads to num_attention_heads
* change norm to norm_layer
* change encoder_layers to num_hidden_layers
* change shared_embedding to share_embedding, shared_projection to share_projection
* add output_attentions
* more robust check of norm_type
* change dropout_path to path_dropout
* edit docstring
* remove positional_encoding function and add _init_pe in PatchTSTPositionalEncoding
* edit shape of cls_token and initialize it
* add a check on the num_input_channels.
* edit head_dim in the Prediction class to allow the use of cls_token
* remove some positional_encoding_type options, remove learn_pe arg, initialize pe
* change Exception to ValueError
* format
* norm_type is "batchnorm"
* make style
* change cls_token shape
* Change forecast_mask_patches to num_mask_patches. Remove forecast_mask_ratios.
* Bring PatchTSTClassificationHead on top of PatchTSTForClassification
* change encoder_ffn_dim to ffn_dim and edit the docstring.
* update variable names to match with the config
* add generation tests
* change num_mask_patches to num_forecast_mask_patches
* Add examples explaining the use of these models
* make style
* Revert "Revert "[time series] Add PatchTST (#25927)" (#27486)"
This reverts commit 78f6ed6c70.
* make style
* fix default std scaler's minimum_scale
* fix docstring
* close code blocks
* Update docs/source/en/model_doc/patchtst.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/patchtst/test_modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/configuration_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix tests
* add add_start_docstrings
* move examples to the forward's docstrings
* update prepare_batch
* update test
* fix test_prediction_head
* fix generation test
* use seed to create generator
* add output_hidden_states and config.num_patches
* add loc and scale args in PatchTSTForPredictionOutput
* edit outputs if if not return_dict
* use self.share_embedding to check instead checking type.
* remove seed
* make style
* seed is an optional int
* fix test
* generator device
* Fix assertTrue test
* swap order of items in outputs when return_dict=False.
* add mask_type and random_mask_ratio to unittest
* Update modeling_patchtst.py
* add add_start_docstrings for regression model
* make style
* update model path
* Edit the ValueError comment in forecast_masking
* update examples
* make style
* fix commented code
* update examples: remove config from from_pretrained call
* Edit example outputs
* Set default target_values to None
* remove config setting in regression example
* Update configuration_patchtst.py
* Update configuration_patchtst.py
* remove config from examples
* change default d_model and ffn_dim
* norm_eps default
* set has_attentions to True and define self.seq_length = self.num_patches
* update docstring
* change variable mask_input to do_mask_input
* fix blank space.
* change logger.debug to logger.warning.
* remove unused PATCHTST_INPUTS_DOCSTRING
* remove all_generative_model_classes
* set test_missing_keys=True
* remove undefined params in the docstring.
---------
Co-authored-by: nnguyen <nnguyen@us.ibm.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Nam Nguyen <namctin@gmail.com>
Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* docs: replace torch.distributed.run by torchrun
`transformers` now officially supports PyTorch >= 1.10.
The entrypoint `torchrun` is present from 1.10 onwards.
Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
* Update src/transformers/trainer.py
with @ArthurZucker's suggestion
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
---------
Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Change "convert predictions to logits" to "convert logits to
predictions" to fix semantic error in the evaluation section. Logits
need to be converted to predictions to evaluate the accuracy, not the
other way round.
* added flash attention for opt
* added to list
* fix use cache (#3)
* style fix
* fix text
* test fix2
* reverted until 689f599
* torch fx tests are working now!
* small fix
* added TODO docstring
* changes
* comments and .md file modification
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* initial commit
* Add initial testing files and modify __init__ files to add UnivNet imports.
* Fix some bugs
* Add checkpoint conversion script and add references to transformers pre-trained model.
* Add UnivNet entries for auto.
* Add initial docs for UnivNet.
* Handle input and output shapes in UnivNetGan.forward and add initial docstrings.
* Write tests and make them pass.
* Write docs.
* Add UnivNet doc to _toctree.yml and improve docs.
* fix typo
* make fixup
* make fix-copies
* Add upsample_rates parameter to config and improve config documentation.
* make fixup
* make fix-copies
* Remove unused upsample_rates config parameter.
* apply suggestions from review
* make style
* Verify and add reason for skipped tests inherited from ModelTesterMixin.
* Add initial UnivNetGan integration tests
* make style
* Remove noise_length input to UnivNetGan and improve integration tests.
* Fix bug and make style
* Make UnivNet integration tests pass
* Add initial code for UnivNetFeatureExtractor.
* make style
* Add initial tests for UnivNetFeatureExtractor.
* make style
* Properly initialize weights for UnivNetGan
* Get feature extractor fast tests passing
* make style
* Get feature extractor integration tests passing
* Get UnivNet integration tests passing
* make style
* Add UnivNetGan usage example
* make style and use feature extractor from hub in integration tests
* Update tips in docs
* apply suggestions from review
* make style
* Calculate padding directly instead of using get_padding methods.
* Update UnivNetFeatureExtractor.to_dict to be UnivNet-specific.
* Update feature extractor to support using model(**inputs) and add the ability to generate noise and pad the end of the spectrogram in __call__.
* Perform padding before generating noise to ensure the shapes are correct.
* Rename UnivNetGan.forward's noise_waveform argument to noise_sequence.
* make style
* Add tests to test generating noise and padding the end for UnivNetFeatureExtractor.__call__.
* Add tests for checking batched vs unbatched inputs for UnivNet feature extractor and model.
* Add expected mean and stddev checks to the integration tests and make them pass.
* make style
* Make it possible to use model(**inputs), where inputs is the output of the feature extractor.
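Roughly the resulting usage; the `dg845/univnet-dev` checkpoint, the 24 kHz sampling rate, and the `waveforms` output name are assumptions here, not guaranteed API:

```python
import numpy as np
import torch
from transformers import UnivNetFeatureExtractor, UnivNetModel

feature_extractor = UnivNetFeatureExtractor.from_pretrained("dg845/univnet-dev")  # assumed checkpoint
model = UnivNetModel.from_pretrained("dg845/univnet-dev")

audio = np.random.randn(24000)  # placeholder one-second waveform
# __call__ computes the log-mel features and the matching noise sequence...
inputs = feature_extractor(audio, sampling_rate=24000, return_tensors="pt")
with torch.no_grad():
    waveforms = model(**inputs).waveforms  # ...so its output feeds model(**inputs) directly
```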
* fix typo in UnivNetGanConfig example
* Calculate spectrogram_zero from other config values.
* apply suggestions from review
* make style
* Refactor UnivNet conversion script to use load_state_dict (following persimmon).
* Rename UnivNetFeatureExtractor to UnivNetGanFeatureExtractor.
* make style
* Switch to using torch.tensor and torch.testing.assert_close for testing expected values/slices.
* make style
* Use config in UnivNetGan modeling blocks.
* make style
* Rename the spectrogram argument of UnivNetGan.forward to input_features, following Whisper.
* make style
* Improving padding documentation.
* Add UnivNet usage example to the docs.
* apply suggestions from review
* Move dynamic_range_compression computation into the mel_spectrogram method of the feature extractor.
* Improve UnivNetGan.forward return docstring.
* Update table in docs/source/en/index.md.
* make fix-copies
* Rename UnivNet components to have pattern UnivNet*.
* make style
* make fix-copies
* Update docs
* make style
* Increase tolerance on flaky unbatched integration test.
* Remove torch.no_grad decorators from UnivNet integration tests to try to avoid flax/Tensorflow test errors.
* Add padding_mask argument to UnivNetModel.forward and add batch_decode feature extractor method to remove padding.
* Update documentation and clean up padding code.
* make style
* make style
* Remove torch dependency from UnivNetFeatureExtractor.
* make style
* Fix UnivNetModel usage example
* Clean up feature extractor code/docstrings.
* apply suggestions from review
* make style
* Add comments for tests skipped via ModelTesterMixin flags.
* Add comment for model parallel tests skipped via the test_model_parallel ModelTesterMixin flag.
* Add # Copied from statements to copied UnivNetFeatureExtractionTest tests.
* Simplify UnivNetFeatureExtractorTest.test_batch_decode.
* Add support for unbatched padding_masks in UnivNetModel.forward.
* Refactor unbatched padding_mask support.
* make style
* tvp model for video grounding
add tokenizer auto
fix param in TVPProcessor
add docs
clear comments and enable different torch dtype
add image processor test and model test and fix code style
* fix conflict
* fix model doc
* fix image processing tests
* fix tvp tests
* remove torch in processor
* fix grammar error
* add more details on tvp.md
* fix model arch for loss, grammar, and processor
* add docstring and do not regard TvpTransformer, TvpVisionModel as individual models
* use pad_image
* update copyright
* control first downsample stride
* reduce first only works for ResNetBottleNeckLayer
* fix param name
* fix style
* add testing
* fix style
* rm init_weight
* fix style
* add post init
* fix comments
* do not test TvpTransformer
* fix warning
* fix style
* fix example
* fix config map
* add link in config
* fix comments
* fix style
* rm useless param
* change attention
* change test
* add notes
* fix comments
* fix tvp
* import checkpointing
* fix gradient checkpointing
* Use a more accurate example in readme
* update
* fix copy
* fix style
* update readme
* delete print
* remove tvp test_forward_signature
* remove TvpTransformer
* fix test init model
* merge main and make style
* fix tests and others
* fix image processor
* fix style and model_input_names
* fix tests
* Enable large-v3 downloading and update language list
* Fix type annotation
* make fixup
* Export Whisper feature extractor
* Fix error after extractor loading
* Do not use pre-computed mel filters
* Save the full preprocessor properly
* Update docs
* Remove comment
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Add alignment heads consistent with each Whisper version
* Remove alignment heads calculation
* Save fast tokenizer format as well
* Fix slow to fast conversion
* Fix bos/eos/pad token IDs in the model config
* Add decoder_start_token_id to config
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Updated albert.md doc for ALBERT model
* Update docs/source/en/model_doc/albert.md
Fixed Resources heading
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update the ALBERT model doc resources
Fixed resource example for fine-tuning the ALBERT sentence-pair classification.
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/albert.md
Removed resource duplicate
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Updated albert.md doc with reviewed changes
* Updated albert.md doc for ALBERT
* Update docs/source/en/model_doc/albert.md
Removed duplicates from updated docs/source/en/model_doc/albert.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/albert.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* try to stylify using ruff
* might need to remove these changes?
* use ruff format and ruff check
* use isinstance instead of type comparison
* use # fmt: skip
* use # fmt: skip
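For reference, the kind of line these commits tag; the formatter leaves a statement marked `# fmt: skip` exactly as written:

```python
unicode_ranges = [0x0041, 0x005A, 0x0061, 0x007A]  # fmt: skip
```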
* nits
* some styling changes
* update ci job
* nits isinstance
* more files update
* nits
* more nits
* small nits
* check and format
* revert wrong changes
* actually use formatter instead of checker
* nits
* well docbuilder is overwriting this commit
* revert notebook changes
* try to nuke docbuilder
* style
* fix feature exrtaction test
* remove `indent-width = 4`
* fixup
* more nits
* update the ruff version that we use
* style
* nuke docbuilder styling
* leave the print for detected changes
* nits
* Remove file I/O
Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
* style
* nits
* revert notebook changes
* Add # fmt skip when possible
* Add # fmt skip when possible
* Fix
* More ` # fmt: skip` usage
* More ` # fmt: skip` usage
* More ` # fmt: skip` usage
* Nits
* more fixes
* fix tapas
* Another way to skip
* Recommended way
* Fix two more files
* Remove asynch
---------
Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>
* Update and reorder docs for chat templates
* Fix Mistral docstring
* Add section link and small fixes
* Remove unneeded line in Mistral example
* Add comment on saving memory
* Fix generation prompts link
* Fix code block languages
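The reordered docs center on `apply_chat_template`, roughly used like this (the checkpoint name is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Summarize chat templates in one line."},
]
# The template turns a role/content list into the exact prompt string the model was trained on.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```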
* Initial commit of PatchTST model classes
Co-authored-by: Phanwadee Sinthong <phsinthong@gmail.com>
Co-authored-by: Nam Nguyen <namctin@gmail.com>
Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
* Add PatchTSTForPretraining
* update to include classification
Co-authored-by: Phanwadee Sinthong <phsinthong@gmail.com>
Co-authored-by: Nam Nguyen <namctin@gmail.com>
Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
* clean up auto files
* Add PatchTSTForPrediction
* Fix relative import
* Replace original PatchTSTEncoder with ChannelAttentionPatchTSTEncoder
* temporary adding absolute path + add PatchTSTForForecasting class
* Update base PatchTSTModel + Unittest
* Update ForecastHead to use the config class
* edit cv_random_masking, add mask to model output
* Update configuration_patchtst.py
* add masked_loss to the pretraining
* add PatchEmbeddings
* Update configuration_patchtst.py
* edit loss which considers mask in the pretraining
* remove patch_last option
* Add commits from internal repo
* Update ForecastHead
* Add model weight initialization + unittest
* Update PatchTST unittest to use local import
* PatchTST integration tests for pretraining and prediction
* Added PatchTSTForRegression + update unittest to include label generation
* Revert unrelated model test file
* Combine similar output classes
* update PredictionHead
* Update configuration_patchtst.py
* Add Revin
* small edit to PatchTSTModelOutputWithNoAttention
* Update modeling_patchtst.py
* Updating integration test for forecasting
* Fix unittest after class structure changed
* docstring updates
* change input_size to num_input_channels
* more formatting
* Remove some unused params
* Add a comment for pretrained models
* add channel_attention option
add channel_attention option and remove unused positional encoders.
* Update PatchTST models to use HF's MultiHeadAttention module
* Update paper + github urls
* Fix hidden_state return value
* Update integration test to use PatchTSTForForecasting
* Adding dataclass decorator for model output classes
* Run fixup script
* Rename model repos for integration test
* edit argument explanation
* change individual option to shared_projection
* style
* Rename integration test + import cleanup
* Fix output_hidden_states return value
* removed unused mode
* added std, mean and nops scaler
* add initial distributional loss for prediction
* fix typo in docs
* add generate function
* formatting
* add num_parallel_samples
* Fix a typo
* copy weighted_average function, edit PredictionHead
* edit PredictionHead
* add distribution head to forecasting
* formatting
* Add generate function for forecasting
* Add generate function to prediction task
* formatting
* use argsort
* add past_observed_mask ordering
* fix arguments
* docs
* add back test_model_outputs_equivalence test
* formatting
* cleanup
* formatting
* use ACT2CLS
* formatting
* fix add_start_docstrings decorator
* add distribution head and generate function to regression task
add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput and PatchTSTForRegressionOutput.
* add distribution head and generate function to regression task
add distribution head and generate function to regression task. Also added PatchTSTForForecastingOutput and PatchTSTForRegressionOutput.
* fix typos
* add forecast_masking
* fixed tests
* use set_seed
* fix doc test
* formatting
* Update docs/source/en/model_doc/patchtst.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* better var names
* rename PatchTSTTranspose
* fix argument names and docs string
* remove compute_num_patches and unused class
* remove assert
* renamed to PatchTSTMasking
* use num_labels for classification
* use num_labels
* use default num_labels from super class
* move model_type after docstring
* renamed PatchTSTForMaskPretraining
* bs -> batch_size
* more review fixes
* use hidden_state
* rename encoder layer and block class
* remove commented seed_number
* edit docstring
* Add docstring
* formatting
* use past_observed_mask
* doc suggestion
* make fix-copies
* use Args:
* add docstring
* add docstring
* change some variable names and add PatchTST before some class names
* formatting
* fix argument types
* fix tests
* change x variable to patch_input
* format
* formatting
* fix-copies
* Update tests/models/patchtst/test_modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move loss to forward
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* formatting
* fix a bug when pre_norm is set to True
* output_hidden_states is set to False as default
* set pre_norm=True as default
* format docstring
* format
* output_hidden_states is None by default
* add missing docs
* better var names
* docstring: remove default to False in output_hidden_states
* change labels name to target_values in regression task
* format
* fix tests
* change to forecast_mask_ratios and random_mask_ratio
* change mask names
* change future_values to target_values param in the prediction class
* remove nn.Sequential and make PatchTSTBatchNorm class
* black
* fix argument name for prediction
* add output_attentions option
* add output_attentions to PatchTSTEncoder
* formatting
* Add attention output option to all classes
* Remove PatchTSTEncoderBlock
* create PatchTSTEmbedding class
* use config in PatchTSTPatchify
* Use config in PatchTSTMasking class
* add channel_attn_weights
* Add PatchTSTScaler class
* add output_attentions arg to test function
* format
* Update doc with image patchtst.md
* fix-copies
* rename Forecast <-> Prediction
* change name of a few parameters to match with PatchTSMixer.
* Remove *ForForecasting class to match with other time series models.
* make style
* Remove PatchTSTForForecasting in the test
* remove PatchTSTForForecastingOutput class
* change test_forecast_head to test_prediction_head
* style
* fix docs
* fix tests
* change num_labels to num_targets
* Remove PatchTSTTranspose
* remove arguments in PatchTSTMeanScaler
* remove arguments in PatchTSTStdScaler
* add config as an argument to all the scaler classes
* reformat
* Add norm_eps for batchnorm and layernorm
* reformat.
* reformat
* edit docstring
* update docstring
* change variable name pooling to pooling_type
* fix output_hidden_states as tuple
* fix bug when calling PatchTSTBatchNorm
* change stride to patch_stride
* create PatchTSTPositionalEncoding class and restructure the PatchTSTEncoder
* formatting
* initialize scalers with configs
* edit output_hidden_states
* style
* fix forecast_mask_patches doc string
---------
Co-authored-by: Gift Sinthong <gift.sinthong@ibm.com>
Co-authored-by: Nam Nguyen <namctin@gmail.com>
Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
Co-authored-by: Wesley M. Gifford <wmgifford@us.ibm.com>
Co-authored-by: nnguyen <nnguyen@us.ibm.com>
Co-authored-by: Ngoc Diep Do <diiepy@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* only dir not even init
* init
* tokenizer removed and reference of codegen added
* modeling file updated a lot; remaining: app_rotary_emb
* conversion script done
* conversion script fixed, a lot of refactoring done and most tests pass
* added token_clf and extractive_QA_head
* integration tests pass
* flash attn tests pass!
* config done
* more docs in modeling file
* some style fix
* style and others
* doc test error fix
* more doc fix
* some attention fixes
* most fixes
* style and other fixes
* docs fix and config
* doc fix
* some comments
* conversion script updated
* conversion script updated
* Revert "conversion script updated"
This reverts commit e92378c54084ec0747041b113083d1746ecb6c7f.
* final comments
* add Phi to language_modeling.md
* edit phi.md file
* rebase and fix
* removed phi-1.5 example
* changed model_type from 'phi'->'mixformer-sequential'
* small change
* small change
* revert small change
* changed mixformer-sequential->phi
* small change
* added phi-1.5 example instead of phi-1
* doc test might pass now
* rebase and small change
* added the dropout layer
* more fixes
* modified .md file
* very very small doc change
* init commit
* attention arch done except rotary emb
* rotary emb done
* text encoder working
* outputs matching
* arch first pass done
* make commands done, tests and docs remaining
* all tests passed, only docs remaining
* docs done
* doc-builder fix
* convert script removed(not relevant)
* minor comments done
* added ckpt conversion script
* tokenizer done
* very minor fix of index.md 2
* mostly make fixup related
* all done except fe and rotary emb
* very small change
* removed unidecode dependency
* style changes
* tokenizer removed require_backends
* added require_inflect to tokenizer tests
* removed VOCAB_FILES in tokenizer test
* inflect dependency removed
* added rotary pos emb cache and simplified the apply method
* style
* little doc change
* more comments
* feature extractor added
* added processor
* auto-regressive config added
* added CLVPConditioningEncoder
* comments done except the test one
* weights added successfully (NOT tested)
* tokenizer fix with numbers
* generate outputs matching
* almost all tests passing; integration tests not written
* Integ tests added
* major CUDA error fixed
* docs done
* rebase and multiple fixes
* fixed rebase overwrites
* generate code simplified and tests for AutoRegressive model added
* minor changes
* refactored gpt2 code in clvp file
* weights done and all code refactored
* mostly done except the fast_tokenizer
* doc test fix
* config file's doc fixes
* more config fix
* more comments
* tokenizer comments mostly done
* modeling file mostly refactored and can load modules
* ClvpEncoder tested
* ClvpDecoder, ClvpModel and ClvpForCausalLM tested
* integration and all tests passed
* more fixes
* docs almost done
* ckpt conversion refactored
* style and some failing tests fix
* comments
* temporary output fix but test_assisted_decoding_matches_greedy_search test fails
* majority changes done
* use_cache outputs same now, along with the assisted_greedy_decoding test fix
* more comments
* more comments
* prepare_inputs_for_generation fixed and _prepare_model_inputs added
* style fix
* clvp.md change
* moved clvpconditionalencoder norms
* add model to new index
* added tokenizer input_ids_with_special_tokens
* small fix
* config mostly done
* added config-tester and changed conversion script
* more comments
* comments
* style fix
* some comments
* tokenizer changed back to prev state
* small comments
* added output hidden states for the main model
* style fix
* comments
* small change
* revert small change
* .
* Update clvp.md
* Update test_modeling_clvp.py
* :)
* some minor change
* new fixes
* remove to_dict from FE
* Fix error in convert_openai_to_hf.py: "_download() missing 1 required positional argument: root"
* Fix error in convert_openai_to_hf.py: "TypeError: byte indices must be integers or slices, not str"
* Fix decoder_attention_heads value in convert_openai_to_hf.py.
Correct the assignment for `decoder_attention_heads` in the conversion script for the Whisper model.
* Black reformat convert_openai_to_hf.py file.
* Fix Whisper model configuration defaults (for Tiny).
- Correct encoder/decoder layers and attention heads count.
- Update model width (`d_model`) to 384.
* Add docstring to the convert_openai_to_hf.py script with a doctest
* Add shebang and +x permission to convert_openai_to_hf.py
* convert_openai_to_hf.py: reuse the read model_bytes in the _download() function
* Move convert_openai_to_hf.py doctest example to whisper.md
* whisper.md: Add an inference example to the Conversion section.
* whisper.md: remove `model.config.forced_decoder_ids` from examples (deprecated)
* whisper.md: Remove "## Format Conversion" section; not used by users
* whisper.md: Use librispeech_asr_dummy dataset and load_dataset()
I'm adding accelerate as one of the libraries to install because otherwise, when running the Trainer, the model errors out with:
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
Further context:
1. I've tried this across different environments so I believe that the environment is not the issue.
2. I had the latest transformers library version running.
3. Typically, even after installing accelerate and importing it, the issue wouldn't resolve until I restarted the notebook and tried again.
* first batch of structure improvements for model_docs
* second batch of structure improvements for model_docs
* more structure improvements for model_docs
* more structure improvements for model_docs
* structure improvements for cv model_docs
* more structural refactoring
* addressed feedback about image processors
* Add type annotations to TFConvNextDropPath
* Use tf.debugging.assert_equal for TFConvNextEmbeddings shape check
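The shape check in question, sketched (argument names are illustrative); unlike a bare Python assert, it also fires inside a traced graph:

```python
import tensorflow as tf

def check_num_channels(pixel_values, num_channels):
    # pixel_values: (batch, channels, height, width); graph-safe equality assertion.
    tf.debugging.assert_equal(
        tf.shape(pixel_values)[1],
        num_channels,
        message="Pixel value channel dimension does not match the configuration.",
    )
```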
* Add TensorFlow implementation of ConvNeXTV2
* check_docstrings: add TFConvNextV2Model to exclusions
TFConvNextV2Model and TFConvNextV2ForImageClassification have docstrings
which are equivalent to their PyTorch cousins, but a parsing issue prevents them
from passing the test.
Adding exclusions for these two classes as discussed in #25558.
* Add support for loading GPTQ models on CPU
Right now, we can only load the GPTQ Quantized model on the CUDA
device. The attribute `gptq_supports_cpu` checks if the current
auto_gptq version is the one which has the cpu support for the
model or not.
The larger variants of the model are hard to load/run/trace on
the GPU and that's the rationale behind adding this attribute.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
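Sketch of what this enables, assuming a recent enough auto-gptq is installed (the checkpoint is illustrative):

```python
from transformers import AutoModelForCausalLM

# With an auto-gptq version for which gptq_supports_cpu is True, a GPTQ
# checkpoint can be loaded on CPU instead of requiring a CUDA device.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GPTQ",
    device_map="cpu",
)
```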
* Update quantization.md
* Update quantization.md
* Update quantization.md
* add
* add
* add
* Add deepspeed.md
* Add
* add
* Update docs/source/ja/main_classes/callback.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/main_classes/output.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/main_classes/pipelines.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/main_classes/processors.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/main_classes/processors.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/main_classes/text_generation.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/ja/main_classes/processors.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update logging.md
* Update toctree.yml
* Update docs/source/ja/main_classes/deepspeed.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Add suggesitons
* m
* Update docs/source/ja/main_classes/trainer.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update toctree.yml
* Update Quantization.md
* Update docs/source/ja/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update toctree.yml
* Update docs/source/en/main_classes/deepspeed.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/main_classes/deepspeed.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>