* Add docs/source/ar/benchmarks.md to Add_docs_source_ar_benchmarks.md
* Update docs/source/ar/benchmarks.md
Co-authored-by: Abdullah Mohammed <554032+abodacs@users.noreply.github.com>
* Update _toctree.yml
* Update benchmarks.md
---------
Co-authored-by: Abdullah Mohammed <554032+abodacs@users.noreply.github.com>
* Initial draft
* Add .jinja file loading for processors
* Add processor saving of naked chat template files
* make fixup
* Add save-load test for tokenizers
* stash commit
* Try popping the file
* make fixup
* Pop the arg correctly
* Add processor test
* Fix processor code
* stash commit
* Processor clobbers child tokenizer's chat template
* make fixup
* Split processor/tokenizer files to avoid interactions
* fix test
* Expand processor tests
* Rename arg to "save_raw_chat_template" across all classes
* Update processor warning
* Move templates to single file
* Improve testing for processor/tokenizer clashes
* Extend saving test
* Test file priority correctly
* make fixup
* Don't pop the chat template file before the slow tokenizer gets a look
* Remove breakpoint
* make fixup
* Fix error
* change apply_rotary_pos_emb
* upload for glm-edge
* remove useless part
* follow the suggestion
* fix
* format
* test
* format again
* remove modular change
* does this apply_rotary_pos_emb need to be modified?
* fix with this
* format
* ruff check
* modifying modular_glm failed
* remove partial_rotary_factor from function partial_rotary_factor
* fix unintended change to examples/research_projects
* revert
* remove line 118
* use q_rot
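The glm-edge commits above revolve around `apply_rotary_pos_emb` with a `partial_rotary_factor`: only the first `rotary_dim` entries of the query/key are rotated (the `q_rot` split mentioned above) and the rest pass through. A pure-Python sketch of that split follows; the toy per-position angle and the exact function shape are illustrative, not the transformers code.

```python
import math


def rotate_half(x):
    """The [-x2, x1] half-split used by rotary embeddings."""
    half = len(x) // 2
    return [-v for v in x[half:]] + list(x[:half])


def apply_rotary_pos_emb(q, position, rotary_dim):
    """Rotate the first rotary_dim entries of q; pass the rest through.

    rotary_dim would come from int(head_dim * partial_rotary_factor); the
    angle here uses a toy frequency of 1.0 purely for illustration.
    """
    q_rot, q_pass = list(q[:rotary_dim]), list(q[rotary_dim:])
    cos = math.cos(position)
    sin = math.sin(position)
    rotated = [qr * cos + rh * sin for qr, rh in zip(q_rot, rotate_half(q_rot))]
    return rotated + q_pass
```

At position 0 the rotation is the identity, and the pass-through tail is never touched regardless of position, which is the property the partial factor buys.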
* fix test_tiny_timestamp_generation
* fix test_large_timestamp_generation
* fix test_whisper_shortform_single_batch_prev_cond
* fix test_whisper_shortform_multi_batch_hard_prev_cond
* return_timestamps is necessary with long-form generation
* fix test_default_multilingual_transcription_long_form
* fix test_tiny_token_timestamp_generation_longform
* fix test_whisper_longform_multi_batch_hard
* Update tests/models/whisper/test_modeling_whisper.py
Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
* fix typo
* do not expect special tokens
* fix test_whisper_longform_single_batch_beam
* fix test_whisper_longform_multi_batch_hard_prev_cond
* update test_whisper_longform_multi_batch_hard_prev_cond
* these tests do not make sense anymore
* this test does not make sense anymore
* make fixup
* suggested nits
* add test with forced_decoder_ids
* this test does not make sense anymore
* change assert for unittest test cases
* make fixup
* test with prompt_ids and task and language
* fix unittest test case call
* fix test_tiny_generation
* fix test_tiny_en_generation
* fix test_tiny_en_batched_generation
* fix test_tiny_longform_timestamps_generation
* fix test_tiny_timestamp_generation
* fix test_large_generation
* fix test_large_batched_generation
* fix test_large_generation_multilingual
* fix test_large_timestamp_generation
* fix test_tiny_token_timestamp_generation_longform
* fix test_tiny_en_batched_generation
* make fixup
* [run-slow] whisper
---------
Co-authored-by: Yoach Lacombe <52246514+ylacombe@users.noreply.github.com>
* Updated documentation and added conversion utility
* Update docs/source/en/tiktoken.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tiktoken.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Moved util function to integration folder + allow for str
* Update formatting
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Updated formatting
* style changes
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
The old AWQ version is failing with the latest (unreleased)
transformers, giving the error:
> ImportError: cannot import name 'shard_checkpoint' from
> 'transformers.modeling_utils'
This has been resolved in awq v0.2.7:
https://github.com/casper-hansen/AutoAWQ/pull/644
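Since the fix ships in awq v0.2.7, a natural guard is a minimum-version check before relying on AutoAWQ. A minimal sketch, assuming a plain dotted version string; the helper names and the comparison logic are illustrative, not part of either library.

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '0.2.7' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])


def is_awq_compatible(installed: str, minimum: str = "0.2.7") -> bool:
    """Return True if the installed autoawq version contains the fix."""
    return parse_version(installed) >= parse_version(minimum)
```

Comparing tuples rather than raw strings matters here: string comparison would rank "0.10.0" below "0.2.7".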
* allow unused parameter passthrough when chunking in asr pipelines
* format code
* format
* run fixup
* update tests
* update parameters passed to pipeline in test
* update parameters in tests
* change spelling in gitignore
* revert .gitignore to main
* add git ignore of devcontainer folder
* assert asr output follows expected inference output type
* run fixup
* Remove .devcontainer from .gitignore
* remove compliance check
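The passthrough change above lets the ASR pipeline hand keyword arguments it does not consume itself down to the per-chunk forward call instead of rejecting them. A minimal sketch of that pattern, with hypothetical names; the real pipeline's chunking is considerably more involved.

```python
def chunk_and_run(audio, chunk_length, process_chunk, **forward_params):
    """Split audio into chunks, forwarding unconsumed parameters untouched.

    Anything in **forward_params that this wrapper does not recognize is
    passed straight through to the per-chunk processing function, mirroring
    the unused-parameter passthrough described above.
    """
    outputs = []
    for start in range(0, len(audio), chunk_length):
        chunk = audio[start:start + chunk_length]
        outputs.append(process_chunk(chunk, **forward_params))
    return outputs
```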
Starting with version 2.4, PyTorch introduces a stricter check on the objects that
can be loaded with torch.load(). Starting with version 2.6, loading with weights_only=True
requires such objects to be allowlisted.
This commit allowlists some numpy objects used to load model checkpoints.
Usage is restricted by a context manager. Users can still call
torch.serialization.add_safe_globals() to add other objects to the safe-globals list.
The Accelerate library hit the same problem and addressed it with PR-3036.
Fixes: #34631
See: https://github.com/pytorch/pytorch/pull/137602
See: https://pytorch.org/docs/stable/notes/serialization.html#torch.serialization.add_safe_globals
See: https://github.com/huggingface/accelerate/pull/3036
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
* CI Skip EETQ tests while package is broken
EETQ tries to import the shard_checkpoint function from transformers, but
the function has been removed, so using EETQ currently raises an
ImportError. This fix skips the EETQ tests when that import error occurs.
The issue has been reported to EETQ:
https://github.com/NetEase-FuXi/EETQ/issues/34
* Raise helpful error when trying to use eetq
* Forgot to raise the error in the else clause
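The skip-plus-helpful-error idea above boils down to checking whether the package imports end to end, since an installed package can still fail at import time. A sketch, with illustrative helper names; the error message paraphrases the situation described above.

```python
import importlib


def package_importable(name: str) -> bool:
    """Return True only if importing the package succeeds end to end.

    A package can be installed yet still fail to import, e.g. EETQ trying
    to pull the removed shard_checkpoint from transformers.modeling_utils.
    """
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False


def require_eetq():
    """Raise a helpful error instead of a bare ImportError (illustrative)."""
    if not package_importable("eetq"):
        raise ImportError(
            "eetq could not be imported; it may be incompatible with this "
            "transformers version (see NetEase-FuXi/EETQ#34)."
        )
```

Tests would call package_importable() in a skip condition, while runtime code paths call require_eetq() to surface the actionable message.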
* skip nested deepspeed.zero.Init call
* make fixup
* solve conflict
* put back local
* use context managers instead of thread-local state
* Skip recursive calls to deepspeed.zero.Init
* back to old notebooks
* make style
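The deepspeed commits above skip recursive calls to deepspeed.zero.Init by guarding re-entry with a context manager rather than thread-local state. A minimal sketch of that re-entrancy guard, with illustrative names; the module-level flag stands in for whatever state the real fix tracks.

```python
import contextlib

_in_zero_init = False  # module-level guard against nested entry


@contextlib.contextmanager
def zero_init_guard(run_init):
    """Run run_init() only on the outermost entry; skip nested calls.

    Yields True when the init actually ran, False when it was skipped
    because an outer zero_init_guard is already active.
    """
    global _in_zero_init
    if _in_zero_init:
        yield False  # nested call: skip the expensive init
        return
    _in_zero_init = True
    try:
        run_init()
        yield True
    finally:
        _in_zero_init = False
```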
* add tensor processing system to separate logic for models
* format refactoring
* small fix
* make some methods private
* move custom methods to processors
* refactor tensor processing
* format fix
* Add Nemotron GGUF Loading Support
* fix the Nemotron architecture assignment
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>