* Add Arcee model support to transformers
- Add ArceeConfig and model mappings for all task types (CausalLM, SequenceClassification, QuestionAnswering, TokenClassification)
- Add auto-loading support through AutoModel, AutoConfig, and AutoTokenizer (see the sketch after this list)
- Use LlamaTokenizer for tokenization
- Add FX graph support for Arcee models
- Create lazy loading module structure for Arcee
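As a usage reference, a minimal sketch of the auto-loading path these mappings enable; the repo id below is a placeholder, not an official checkpoint:
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_id = "arcee-ai/placeholder-checkpoint"  # hypothetical repo id, for illustration only

config = AutoConfig.from_pretrained(repo_id)           # resolves to ArceeConfig
tokenizer = AutoTokenizer.from_pretrained(repo_id)     # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(repo_id)  # resolves to ArceeForCausalLM

inputs = tokenizer("Hello from Arcee", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```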
* feat: update YARN scaling and RoPE validation for Arcee model
* feat: add auto_docstring checkpoint config to Arcee model classes
* docs: add pre-trained model weights reference to Arcee configuration files
* refactor: move RoPE utilities to dedicated modeling_rope_utils module
* Add comprehensive test suite for Arcee model
- Add test_modeling_arcee.py following standard transformers test patterns
- Include tests for all model variants (CausalLM, SequenceClassification, QuestionAnswering, TokenClassification)
- Add specific test for ReLU² activation in ArceeMLP (see the sketch after this list)
- Add RoPE scaling tests including YARN support
- Follow CausalLMModelTest pattern used by similar models
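For readers unfamiliar with the activation being tested: ReLU² is simply relu(x) squared. The following is an illustrative sketch, not the actual ArceeMLP code:
```python
import torch
import torch.nn as nn

class ReluSquaredMLP(nn.Module):
    """Illustrative MLP with the ReLU² activation (relu(x) ** 2)."""
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # relu(x) ** 2 is zero for negative inputs like ReLU, but has a continuous derivative at 0
        return self.down_proj(torch.relu(self.up_proj(hidden_states)) ** 2)

mlp = ReluSquaredMLP(hidden_size=16, intermediate_size=64)
print(mlp(torch.randn(2, 4, 16)).shape)  # torch.Size([2, 4, 16])
```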
* Add documentation for Arcee model
- Add comprehensive model documentation with usage examples
- Include all model variants in autodoc
- Add to table of contents in proper alphabetical order
- Fixes documentation coverage for Arcee model classes
* Make style/fixup
* fix copyright year
* Sync modular conversion
* revert changes to legacy supported models in src/transformers/utils/fx
* cleaned redundant code in modular_arcee.py
* cleaned testing
* removed pretraining_tp
* fix styles
* integration testing
---------
Co-authored-by: Pranav <veldurthipranav@gmail.com>
Co-authored-by: Pranav <56645758+pranav4501@users.noreply.github.com>
* Typos
- corrected bf16 training argument
- corrected header for SDPA
* improved readability for SDPA suggested by @stevhliu
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* some fixes
* the pipeline can now take a list of tokens as input together with an is_split_into_words argument
* the pipeline can now take a list of tokens as input with an is_split_into_words argument, and batches of tokenized input are handled as well
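A hedged usage example of the behaviour described above, assuming the argument is passed straight through the pipeline call; the model id is just a common public NER checkpoint chosen for illustration:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="dslim/bert-base-NER")  # illustrative checkpoint

# single pre-tokenized sentence
words = ["Sarah", "lives", "in", "New", "York"]
print(ner(words, is_split_into_words=True))

# batch of pre-tokenized sentences
batch = [["Hugging", "Face", "is", "based", "in", "Paris"], ["John", "works", "at", "Google"]]
print(ner(batch, is_split_into_words=True))
```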
* solving test problems
* some fixes
* modify tests
* aligning start and end correctly
* adding tests
* some formatting
* some fixes
* resolve conflicts
* removing unimportant lines
* generalize to other languages
* fix: add __bool__ operator to tokenizer to avoid bloated asserts
When a user does 'assert tokenizer' to ensure that the tokenizer is not None, they inadvertently set off a rather expensive process in the '__len__()' operator. This fix adds a trivial '__bool__()' that returns True, so that a None tokenizer fails the assert and an actual tokenizer passes it, without calling the length operator.
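Roughly, the idea reduces to the sketch below (a stand-in class, not the exact tokenizer diff):
```python
class TokenizerLike:
    """Stand-in showing why __bool__ matters for `assert tokenizer`."""
    def __init__(self, vocab: dict):
        self.vocab = vocab

    def __len__(self) -> int:
        # stands in for the potentially expensive vocabulary-size computation
        return len(self.vocab)

    def __bool__(self) -> bool:
        # trivially truthy, so truth-testing no longer falls back to __len__
        return True

tokenizer = TokenizerLike({"hello": 0, "world": 1})
assert tokenizer  # passes via __bool__ without ever calling __len__
```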
* typo
* add working Idefics2 fast image processor and improvements for fast nested image processing
* add fast image processors for Idefics3 and SmolVLM
* cleanup tests
* fix doc for Idefics2
* PR review and fix issues after merge
* Force providing disable_grouping to group_images_by_shape
* simplify group_images_by_shape
* fix modular
* Fix nits after review
* Fix(time_series): Correct scaler tensor shape in base model
The create_network_inputs function in TimeSeriesTransformerModel
handled the scaler's loc and scale tensors inconsistently.
When input_size=1, the tensors were not squeezed, leading to
downstream dimension errors for models like Informer.
This commit refactors the logic to unconditionally apply .squeeze(1),
which correctly handles all input_size cases and fixes the bug at its source.
Fixes #38745
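A rough, self-contained illustration of the shape handling described here (made-up tensors, not the actual create_network_inputs code):
```python
import torch

batch_size, input_size = 4, 1  # the previously problematic univariate case
# scalers typically return loc/scale shaped (batch, 1, input_size)
loc = torch.zeros(batch_size, 1, input_size)
scale = torch.ones(batch_size, 1, input_size)

# unconditionally squeezing dim 1 yields (batch, input_size) for any input_size,
# so downstream features keep a consistent rank
static_feat = torch.cat([loc.squeeze(1), scale.squeeze(1).log()], dim=1)
print(static_feat.shape)  # torch.Size([4, 2]) when input_size == 1
```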
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
* remove it everywhere
* Update trainer_pt_utils.py
* style
* sort list in test
* CIs
* use recursion same way as before (for intermediate layer names)
* first commit
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* use release PyTorch
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* feat: add flexible Liger Kernel configuration to TrainingArguments
Add support for granular Liger Kernel configuration through a new
`liger_kernel_config` parameter in TrainingArguments. This allows users
to selectively enable/disable specific kernels (rope, swiglu, cross_entropy, etc.) instead of the current approach, which relies on the default configuration.
Features:
- Add `liger_kernel_config` dict parameter to TrainingArguments
- Support selective kernel application for all supported models
- Maintain full backward compatibility with existing `use_liger_kernel` flag
Example usage:
```python
TrainingArguments(
    use_liger_kernel=True,
    liger_kernel_config={
        "rope": True,
        "swiglu": True,
        "cross_entropy": False,
        "fused_linear_cross_entropy": True,
    },
)
```
Closes #38905
* Address comments and update Liger section in Trainer docs
* switch to device-agnostic autocast in nemotron to align XPU behavior with CUDA
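For context, the device-agnostic pattern looks roughly like this (illustrative, not the exact diff):
```python
import torch

# pick whichever accelerator is present; CPU autocast is used as a fallback
if hasattr(torch, "xpu") and torch.xpu.is_available():
    device_type = "xpu"
elif torch.cuda.is_available():
    device_type = "cuda"
else:
    device_type = "cpu"

# device-agnostic replacement for the CUDA-only torch.cuda.amp.autocast(...)
with torch.autocast(device_type=device_type, dtype=torch.bfloat16):
    x = torch.randn(8, 8, device=device_type)
    y = x @ x
print(y.dtype)  # torch.bfloat16 inside the autocast region on supported ops/devices
```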
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* fix issue
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* fix style
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* use torch.autocast as in other modeling code for decision_transformer, gpt2 and imagegpt
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* refine
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* update get_autocast_gpu_dtype to a device-agnostic one
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
* fix style
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
* fix comments
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix style
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
---------
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
Signed-off-by: Matrix YAO <matrix.yao@intel.com>
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
* Update bamba model card
* Update the doc for bamba
* Update docs/source/en/model_doc/bamba.md
Bamba paragraph
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
Bamba collection url
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
Update Padding-Free Training to Notes heading
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
update examples
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
Update additional info
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
consistent casing
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
simplify sentences
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Include pipeline and cli examples + fix formatting
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/bamba.md
update cli id
* Update quantization example
* Fix auto code formatter changes
* Update cli command + include BambaModel
* Update docs/source/en/model_doc/bamba.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* we need to check against the mapping to be safe
* need to check only when inferring from the image type, otherwise it messes with custom code
---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
* Update trocr.md
Docs: add community fine-tuning notebook link to TrOCR page
* apply suggested changes from PR review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/model_doc/trocr.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* log: Add logging when user uses split_batches and per_device_train_batch_size
* refactor: remove whitespace from blank line
* Update src/transformers/training_args.py
Change logging level to info
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>