transformers/utils
Arthur 25b7f27234
Add llama4 (#37307)
* remove one of the last deps

* update fast image processor after refactor

* styling

* more quality of life improvements

* nit

* update

* cleanups

* some cleanups

* vllm updates

* update fake image token

* [convert] Fix typo

* [convert] Strip extraneous bytes from shards

* [convert] Minor fixes

* [convert] Use num_experts

* multi-image fixes in modeling + processor

* fixup size

* 128 experts

* Use default rope

* Unfuse mlp

* greatly simplify inputs embeds merging

* remove .item() 👀

* fix from review

* Address feedback

* Use None "default" for rope_scaling. Add eot.

* set seed

* return aspect ratios and bug fixes

* Moe 128 rebased (#8)

* 128 experts

* Use default rope

* Unfuse mlp

* Address feedback

* Use None "default" for rope_scaling. Add eot.

* Meta/llama quant compat (#7)

* add quant compatible model & conversion code for llama4

* fix a few issues

* fix a few issues

* minor type mapping fix

---------

Co-authored-by: Lu Fang <fanglu@fb.com>

* use a new config parameter to determine which model definition to use for MoE

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Lu Fang <fanglu@fb.com>

* un-comment write_tokenizer in the conversion script

* remove unused imports

* [llama4] Pop aspect_ratios from image processor output in Llama4Processor

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* Fix parameter_count name

* Update src/transformers/models/llama4/configuration_llama4.py

* nit

* Add changes for no_rope, moe_layers, chunked attention. Just need to test all

* Update src/transformers/models/llama4/image_processing_llama4_fast.py

* nit

* fix post merge with main

* support flex attention

* fixes

* fix

* add layer

* small updates

* rebase and delete llm_compressor

* nit

* [llama4/mm] Add back <|image|> token that delimits global tile

* [llama4/mm] Fix Llama 4 image processing unit tests

* add explicit dtype

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* sdpa works

* comment todo small

* fix model loading

Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>

* revert

* nits

* small fix for TP on 1 node

* Read new params from config

* Add <|eom|>

* lol don't know how this got here

* adding fp8

* Save processor, fix chat template

* style

* Add boi/eoi tokens

We don't use them.

* fixes for now, flex seems to work :)

* updates

* nits

* updates

* missing keys

* add context parallel

* update

* update

* fix

* nits

* add world size and make eager attn work for vision

* Ignore new key present in base models

* add tp_plan

* fix nope

Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>

* minor fix

Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>

* Clean up Llama4 vision model

* current updates

* add support for `attn_temperature_tuning`

* add floor scale

* add missing attn scales

* push what works, dirty trick for the device synch

* oops

* Fix pad_token_id

See https://huggingface.co/ll-re/Llama-4-Scout-17B-16E/discussions/2/files; confirmed in the original codebase.

* fix causal LM loading

* rm

* fix tied-weights

* fix sdpa

* push current version

* should work with both short and long

* add compressed_tensors & fix fbgemm tp

* Fix flex impl

* style

* chunking

* try to revert the potentially breaking change

* fix auto factory

* fix shapes in general

* rm processing

* commit cache utils cleanup

* Fix context length

* fix

* allocate

* update tp_plan

* fix SDPA!

* Add support for sparse `Llama4TextMoe` layer from the kernel hub

* cleanup

* better merge

* update

* still broken, fixing now

* nits

* revert print

* Write max_position_embeddings and max_model_length

* Update modeling_llama4.py

* Save attention_chunk_size

* Sync eos terminators

* Read initializer_range

* style

* remove `dict`

* fix

* eager should use `chunked_attention_mask`

* revert

* fixup

* fix config

* Revert "Merge pull request #36 from huggingface/sparse-llama4-moe"

This reverts commit ccda19f050, reversing changes made to a515579aed.

* Fix typo and remove warning with compiled flex and chunked prefill

* Fix MoE vs FF (#41)

* fix

* Use correct no_rope_layers if the provided one is an empty list

* update tests

* fix

* skipping some tests

* fix fp8 loading

Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>

* fix text generation pipeline

Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>

* eager needs 4D mask

* fix

* Some cleanup

* fix

* update

* fix

* correctly replace module

* patch

* modulelist

* update

* update

* clean up

* Don't move to `cuda:0` in distributed mode

* restrict to compressed tensors for now

* rm print

* Docs!

* Fixes

* Update docs/source/en/model_doc/llama4.md

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Fixes

* cuda graph fix

* revert some stuff

* fixup

* styling

* Update src/transformers/models/llama4/modeling_llama4.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fixup

* commit license, cleanup here and there, and style

* more styling changes

* fix dummies

* fix and clean docstrings

* remove comment

* remove warning

* Only fast image processor is supported

* nit

* trigger CI

* fix issue with flex encoder

* fix dynamic cache

* Code quality

* Code quality

* fix more tests for now

* Code quality

* Code quality

* Nuke a bunch of failing stuff

* Code quality

* Code quality

* cleanup removal of slow image processor

* ruff fix fast image processor

* fix

* fix styling

* Docs

* Repo consistency

* Repo consistency

* fix sliding window issue

* separate llama cache

* styling

* Repo consistency

* Repo consistency

* push what works

* L4 Repo consistency

* Docs

* fix the last remaining issues

---------

Signed-off-by: Jon Swenson <jmswen@gmail.com>
Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>
Co-authored-by: yonigozlan <yoni.gozlan10@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Pablo Montalvo <pablo.montalvo.leroux@gmail.com>
Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
Co-authored-by: Keyun Tong <tongkeyun@gmail.com>
Co-authored-by: Zijing Liu <liuzijing2014@users.noreply.github.com>
Co-authored-by: Lu Fang <fanglu@fb.com>
Co-authored-by: Zijing Liu <liuzijing2014@gmail.com>
Co-authored-by: Jon Swenson <jmswen@gmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: MekkCyber <mekk.cyber@gmail.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: Mohit Sharma <mohit21sharma.ms@gmail.com>
Co-authored-by: Yong Hoon Shin <yhshin@meta.com>
Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: drisspg <drisspguessous@gmail.com>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
Co-authored-by: Daniël de Kok <me@danieldk.eu>
Co-authored-by: Lysandre <hi@lysand.re>
Co-authored-by: Ye (Charlotte) Qi <ye.charlotte.qi@gmail.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2025-04-05 22:02:22 +02:00
test_module AutoImageProcessor (#20111) 2022-11-08 19:54:41 +00:00
tf_ops Check TF ops for ONNX compliance (#10025) 2021-02-15 07:55:10 -05:00
add_pipeline_model_mapping_to_test.py update ruff version (#30932) 2024-05-22 06:40:15 +02:00
check_bad_commit.py Fix utils/check_bad_commit.py (#37272) 2025-04-04 12:18:20 +02:00
check_build.py Use deformable_detr kernel from the Hub (#36853) 2025-03-21 13:08:47 +01:00
check_config_attributes.py Add llama4 (#37307) 2025-04-05 22:02:22 +02:00
check_config_docstrings.py Adding Qwen3 and Qwen3MoE (#36878) 2025-03-31 09:50:49 +02:00
check_copies.py chore: fix typos in utils module (#36668) 2025-03-13 15:12:44 +00:00
check_doc_toc.py update ruff version (#30932) 2024-05-22 06:40:15 +02:00
check_docstrings.py Add llama4 (#37307) 2025-04-05 22:02:22 +02:00
check_doctest_list.py update ruff version (#30932) 2024-05-22 06:40:15 +02:00
check_dummies.py Add llama4 (#37307) 2025-04-05 22:02:22 +02:00
check_inits.py chore: fix typos in utils module (#36668) 2025-03-13 15:12:44 +00:00
check_model_tester.py Add a new script to check model testers' config (#22063) 2023-03-13 19:11:19 +01:00
check_modular_conversion.py [modular] Sort modular skips (#36304) 2025-03-20 10:55:12 +00:00
check_repo.py Add llama4 (#37307) 2025-04-05 22:02:22 +02:00
check_self_hosted_runner.py Tiny fix for check_self_hosted_runner.py (#24052) 2023-06-06 18:17:41 +02:00
check_tf_ops.py Check TF ops for ONNX compliance (#10025) 2021-02-15 07:55:10 -05:00
create_dependency_mapping.py Modular Conversion --fix_and_overwrite on Windows (#36583) 2025-03-06 13:12:30 +00:00
create_dummy_models.py CI: fix efficientnet pipeline timeout and prevent future similar issues due to large image size (#33123) 2024-08-27 11:58:27 +01:00
custom_init_isort.py chore: fix typos in utils module (#36668) 2025-03-13 15:12:44 +00:00
deprecate_models.py chore: fix typos in utils module (#36668) 2025-03-13 15:12:44 +00:00
download_glue_data.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
extract_warnings.py update github actions packages' version to suppress warnings (#30249) 2024-04-15 15:08:09 +02:00
fetch_hub_objects_for_ci.py Try to avoid/reduce some remaining CI job failures (#37202) 2025-04-02 14:39:57 +02:00
get_ci_error_statistics.py Add artifact name in job step to maintain job / artifact correspondence (#28682) 2024-01-31 15:58:17 +01:00
get_github_job_time.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
get_modified_files.py exclude deleted files in the fixup script (#21436) 2023-02-03 12:57:02 -05:00
get_previous_daily_ci.py Ping team members for new failed tests in daily CI (#34171) 2024-10-17 16:11:52 +02:00
get_test_info.py CI: fix efficientnet pipeline timeout and prevent future similar issues due to large image size (#33123) 2024-08-27 11:58:27 +01:00
important_models.txt ENH: [CI] Add new workflow to run slow tests of important models on push main if they are modified (#29235) 2024-04-12 10:01:28 +02:00
models_to_deprecate.py update ruff version (#30932) 2024-05-22 06:40:15 +02:00
modular_model_converter.py Introduce modular files for speech models (#35902) 2025-04-04 11:46:27 +02:00
not_doctested.txt Use deformable_detr kernel from the Hub (#36853) 2025-03-21 13:08:47 +01:00
notification_service_doc_tests.py Refactor doctest (#30210) 2024-04-15 13:20:36 +02:00
notification_service_quantization.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
notification_service.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
past_ci_versions.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
patch_helper.py [Patch helper] update to not have to checkout main (#34006) 2024-10-09 09:21:46 +02:00
pr_slow_ci_models.py notify new model merged to main (#36375) 2025-02-24 17:53:18 +01:00
print_env.py Print more library versions in CI (#17384) 2022-06-02 10:24:16 +02:00
process_bad_commit_report.py Tiny update after #34383 (#34404) 2024-10-28 12:01:05 +01:00
process_circleci_workflow_test_reports.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
process_test_artifacts.py fix the parallel number of CI nodes when it is smaller than number of tests (#33276) 2024-09-03 16:53:21 +02:00
release.py Remove research projects (#36645) 2025-03-11 13:47:38 +00:00
set_cuda_devices_for_ci.py Fix Cohere CI (#31263) 2024-06-10 15:16:58 +02:00
slow_documentation_tests.txt Update CodeLlama references (#30218) 2024-05-09 22:57:52 +02:00
sort_auto_mappings.py update ruff version (#30932) 2024-05-22 06:40:15 +02:00
split_doctest_jobs.py chore: fix typos in utils module (#36668) 2025-03-13 15:12:44 +00:00
split_model_tests.py consistent job / pytest report / artifact name correspondence (#30392) 2024-04-24 22:32:42 +02:00
tests_fetcher.py byebye CircleCI TF jobs (#36998) 2025-03-26 12:49:50 +01:00
update_metadata.py Update ruff to 0.11.2 (#36962) 2025-03-25 16:00:11 +01:00
update_tiny_models.py Mention model_info.id instead of model_info.modelId (#32106) 2024-07-22 14:14:47 +01:00