transformers/docs/source/en
Latest commit 163138a911 by Cyril Vallez
🚨🚨[core] Completely rewrite the masking logic for all attentions (#37866)
* start

* start having a clean 4d mask primitive

* Update mask_utils.py

* switch name

* Update masking_utils.py

* add a new AttentionMask tensor class

* fix import

* nits

* fixes

* use full and quadrants

* general sdpa mask for all caches

* style

* start some tests

* tests with sliding, chunked

* add styling

* test hybrid

* Update masking_utils.py

* small temp fixes

* Update modeling_gemma2.py

* compile compatible

* Update masking_utils.py

* improve

* start making it more general

* Update masking_utils.py

* generate

* make it work with flex style primitives!

* Update masking_utils.py

* improve

* Update cache_utils.py

* Update masking_utils.py

* simplify - starting to look good!

* Update masking_utils.py

* name

* Update masking_utils.py

* style

* Update masking_utils.py

* small fix for flex

* flex compile

* FA2

* Update masking_utils.py

* Escape for TGI/vLLM!

* Update masking_utils.py

* General case without cache

* rename

* full test on llama4

* small fix for FA2 guard with chunk

* Update modeling_gemma2.py

* post rebase cleanup

* FA2 supports static cache!

* Update modeling_flash_attention_utils.py

* Update flex_attention.py

* Update masking_utils.py

* Update utils.py

* override for export

* Update executorch.py

* Update masking_utils.py

* output attentions

* style

* Update masking_utils.py

* Update executorch.py

* Add docstring

* Add license and put mask visualizer at the end

* Update test_modeling_common.py

* fix broken test

* Update test_modeling_gemma.py

* Update test_modeling_gemma2.py

* Use fullgraph=False with FA2

* Update utils.py

* change name

* Update masking_utils.py

* improve doc

* change name

* Update modeling_attn_mask_utils.py

* more explicit logic based on the model's properties

* pattern in config

* extend

* fixes

* make it better

* generalize to other test models

* fix

* Update masking_utils.py

* fix

* do not check mask equivalence if layer types are different

* executorch

* Update modeling_gemma2.py

* Update masking_utils.py

* use layer_idx instead

* adjust

* Update masking_utils.py

* test

* fix imports

* Update modeling_gemma2.py

* other test models

* Update modeling_llama4.py

* Update masking_utils.py

* improve

* simplify

* Update masking_utils.py

* typos

* typo

* fix

* Update masking_utils.py

* default DynamicCache

* remove default cache

* simplify

* Update masking_utils.py

* simplify

* Update masking_utils.py

* export

* Update executorch.py

* Update flex_attention.py

* Update executorch.py

* upstream to modular gemma 1 & 2

* Update modular_mistral.py

* switch names

* use dict

* put it in the Layer directly

* update copy model source for mask functions

* apply modular to so many models (hopefully in 1 shot)

* use explicit dicts to keep make style happy

* protect import

* check docstring

* better default in hybrid caches

* qwens

* Update modular_qwen2.py

* simplify core logic!

* Update executorch.py

* qwen3 moe

* Update masking_utils.py

* greatly simplify the sdpa causal skip

* Update masking_utils.py

* post-rebase

* gemma3 finally

* style

* check it before

* gemma3

* More general with newer torch

* align gemma3

* Update utils.py

* Update masking_utils.py

* Update test_modeling_common.py

* Update flex_attention.py

* test

* executorch

* Update test_modeling_common.py

* Update masking_utils.py

* Update executorch.py

* Update test_modeling_common.py

* fix copies

* device

* sdpa can be used without a mask -> pass the torchscript tests in this case

* Use enum for check

* revert enum and add check instead

* remove broken test

* cohere2

* some doc & reorganize the Interface

* Update tensor_parallel.py

* doc and dummy

* Update test_modeling_paligemma2.py

* Update modeling_falcon_h1.py

* Update masking_utils.py

* executorch patch

* style

* CIs

* use register in executorch

* final comments!

---------

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
2025-05-22 11:38:26 +02:00
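
For readers skimming this log, the thread running through the commits above is: express every attention pattern (full causal, sliding-window, chunked) as a flex-attention style predicate over `(batch, head, q_idx, kv_idx)`, and only materialize it into the 4D additive mask when a backend such as SDPA needs one. The sketch below is a minimal reconstruction of that idea in plain PyTorch, not the code this PR added to `masking_utils.py`; the helper names (`sliding_window_mask`, `chunked_mask`, `to_4d_sdpa_mask`) are hypothetical.

```python
# Illustrative sketch only -- NOT the implementation added by #37866.
# Each attention pattern is a flex-attention style predicate over
# (batch, head, q_idx, kv_idx); it is only materialized into the 4D
# additive mask when a backend such as SDPA needs one.
import torch

def causal_mask_fn(b, h, q_idx, kv_idx):
    # Standard lower-triangular causal pattern.
    return kv_idx <= q_idx

def sliding_window_mask(window):  # hypothetical helper
    def mask_fn(b, h, q_idx, kv_idx):
        return (kv_idx <= q_idx) & (q_idx - kv_idx < window)
    return mask_fn

def chunked_mask(chunk_size):  # hypothetical helper: block-wise ("chunked") causal
    def mask_fn(b, h, q_idx, kv_idx):
        return (kv_idx <= q_idx) & (q_idx // chunk_size == kv_idx // chunk_size)
    return mask_fn

def to_4d_sdpa_mask(mask_fn, batch, q_len, kv_len, dtype=torch.float32):
    # Broadcast the predicate over a (batch, 1, q_len, kv_len) grid and turn
    # True/False into 0/-inf, the additive form SDPA consumes.
    b = torch.arange(batch).view(-1, 1, 1, 1)
    h = torch.zeros(1, dtype=torch.long).view(1, -1, 1, 1)
    q = torch.arange(q_len).view(1, 1, -1, 1)
    kv = torch.arange(kv_len).view(1, 1, 1, -1)
    keep = mask_fn(b, h, q, kv).expand(batch, 1, q_len, kv_len)
    return torch.zeros(keep.shape, dtype=dtype).masked_fill(~keep, torch.finfo(dtype).min)

mask = to_4d_sdpa_mask(sliding_window_mask(4), batch=1, q_len=8, kv_len=8)
q = k = v = torch.randn(1, 2, 8, 16)  # (batch, heads, seq, head_dim)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```

On PyTorch 2.5+, the same `(b, h, q_idx, kv_idx)` predicate signature is what `torch.nn.attention.flex_attention.create_block_mask` consumes, which is presumably what the "make it work with flex style primitives!" commits refer to.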

| Name | Last commit message | Last commit date |
|---|---|---|
| internal | 🚨🚨[core] Completely rewrite the masking logic for all attentions (#37866) | 2025-05-22 11:38:26 +02:00 |
| main_classes | 🔴 Video processors as a separate class (#35206) | 2025-05-12 11:55:51 +02:00 |
| model_doc | [Whisper] handle deprecation of forced_decoder_ids (#38232) | 2025-05-22 09:16:38 +00:00 |
| quantization | Support AOPerModuleConfig and include_embedding (#37802) | 2025-04-30 20:16:29 +02:00 |
| tasks | Enhance documentation to explain chat-based few-shot prompting (#37828) | 2025-04-30 11:00:10 -07:00 |
| _config.py | Add optimized PixtralImageProcessorFast (#34836) | 2024-11-28 16:04:05 +01:00 |
| _redirects.yml | Docs / Quantization: Redirect deleted page (#31063) | 2024-05-28 18:29:22 +02:00 |
| _toctree.yml | [MODEL] Add Falcon H1 (#38249) | 2025-05-21 10:43:11 +02:00 |
| accelerate.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| add_new_model.md | Transformers cli clean command (#37657) | 2025-04-30 12:15:43 +01:00 |
| add_new_pipeline.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| agents.md | [agents] remove agents 🧹 (#37368) | 2025-04-11 18:42:37 +01:00 |
| attention_interface.md | 🚨🚨[core] Completely rewrite the masking logic for all attentions (#37866) | 2025-05-22 11:38:26 +02:00 |
| attention.md | [Docs] Fix broken links and syntax issues (#28918) | 2024-02-08 14:13:35 -08:00 |
| auto_docstring.md | [AutoDocstring] Based on inspect parsing of the signature (#33771) | 2025-05-08 17:46:07 -04:00 |
| backbones.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| cache_explanation.md | Fix typos (#36910) | 2025-03-24 14:08:29 +00:00 |
| chat_extras.md | Update chat_extras.md with content correction (#36599) | 2025-03-07 13:09:02 +00:00 |
| chat_templating_multimodal.md | [chat-template] Unify tests and clean up 🧼 (#37275) | 2025-04-10 14:42:32 +02:00 |
| chat_templating_writing.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| chat_templating.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| community.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| contributing.md | Enable doc in Spanish (#16518) | 2022-04-04 10:25:46 -04:00 |
| conversations.md | [chat] generate parameterization powered by GenerationConfig and UX-related changes (#38047) | 2025-05-12 14:04:41 +01:00 |
| custom_models.md | Fix typos (#36910) | 2025-03-24 14:08:29 +00:00 |
| debugging.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| deepspeed.md | chore: Fix typos in docs and examples (#36524) | 2025-03-04 13:47:41 +00:00 |
| executorch.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| fast_tokenizers.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| feature_extractors.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| fsdp.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| generation_features.md | chore: Fix typos in docs and examples (#36524) | 2025-03-04 13:47:41 +00:00 |
| generation_strategies.md | [generate] Run custom generation code from the Hub (#36405) | 2025-05-15 10:35:54 +01:00 |
| gguf.md | Fix gguf docs (#36601) | 2025-03-11 15:29:14 +01:00 |
| glossary.md | Fix typos (#31819) | 2024-07-08 11:52:47 +01:00 |
| gpu_selection.md | Fix typos (#36910) | 2025-03-24 14:08:29 +00:00 |
| how_to_hack_models.md | [doc] fix bugs in how_to_hack_models.md (#38198) | 2025-05-19 10:37:54 -07:00 |
| hpo_train.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| image_processors.md | 🔴 Video processors as a separate class (#35206) | 2025-05-12 11:55:51 +02:00 |
| index.md | Adding Qwen3 and Qwen3MoE (#36878) | 2025-03-31 09:50:49 +02:00 |
| installation.md | byebye torch 2.0 (#37277) | 2025-04-07 15:19:47 +02:00 |
| kv_cache.md | fix link in kv_cache.md (#37652) | 2025-04-21 09:01:11 -07:00 |
| llm_optims.md | [CI] green llama tests (#37244) | 2025-04-03 14:15:53 +01:00 |
| llm_tutorial_optimization.md | fix typos in the docs directory (#36639) | 2025-03-11 09:41:41 -07:00 |
| llm_tutorial.md | [chat] generate parameterization powered by GenerationConfig and UX-related changes (#38047) | 2025-05-12 14:04:41 +01:00 |
| model_memory_anatomy.md | Enable BNB multi-backend support (#31098) | 2024-09-24 03:40:56 -06:00 |
| model_sharing.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| model_summary.md | model_summary.md - Restore link to Harvard's Annotated Transformer. (#29702) | 2024-03-23 18:29:39 -07:00 |
| models.md | [docs] minor fixes in models.md (#38193) | 2025-05-19 13:14:21 +00:00 |
| modular_transformers.md | Support custom dosctrings in modular (#36726) | 2025-03-18 14:00:54 -04:00 |
| notebooks.md | Enable doc in Spanish (#16518) | 2022-04-04 10:25:46 -04:00 |
| optimizers.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| pad_truncation.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| peft.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_hardware.md | chore: Fix typos in docs and examples (#36524) | 2025-03-04 13:47:41 +00:00 |
| perf_infer_cpu.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_infer_gpu_multi.md | Fix: make docs work better with doc builder (#38213) | 2025-05-20 08:23:03 +00:00 |
| perf_infer_gpu_one.md | Small typo lines 47 and 199 perf_infer_gpu_one.md (#37938) | 2025-05-06 14:32:55 +01:00 |
| perf_torch_compile.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_train_cpu_many.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_train_cpu.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_train_gaudi.md | Add Intel Gaudi doc (#37855) | 2025-04-29 13:28:06 -07:00 |
| perf_train_gpu_many.md | docs: fix typo (#37567) | 2025-04-17 14:54:44 +01:00 |
| perf_train_gpu_one.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_train_special.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perf_train_tpu_tf.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| perplexity.md | [docs] use device-agnostic API instead of cuda (#34913) | 2024-11-26 09:23:34 -08:00 |
| philosophy.md | [docs] fixed links with 404 (#27327) | 2023-11-06 19:45:03 +00:00 |
| pipeline_gradio.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| pipeline_tutorial.md | chore: Fix typos in docs and examples (#36524) | 2025-03-04 13:47:41 +00:00 |
| pipeline_webserver.md | fix and enhance pipeline_webserver.md (#36992) | 2025-04-15 08:35:05 -07:00 |
| pr_checks.md | Fixed Majority of the Typos in transformers[en] Documentation (#33350) | 2024-09-09 10:47:24 +02:00 |
| processors.md | [docs] add Audio import (#38195) | 2025-05-19 13:16:35 +00:00 |
| quicktour.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| run_scripts.md | Remove research projects (#36645) | 2025-03-11 13:47:38 +00:00 |
| serialization.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| serving.md | fix docs serving typos. (#37936) | 2025-05-06 14:32:44 +01:00 |
| task_summary.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00 |
| tasks_explained.md | fix: Wrong task mentioned in docs (#34757) | 2024-11-18 18:42:28 +00:00 |
| testing.md | chore: Fix typos in docs and examples (#36524) | 2025-03-04 13:47:41 +00:00 |
| tf_xla.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| tflite.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| tokenizer_summary.md | [docs] Spanish translation of tokenizer_summary.md (#31154) | 2024-06-03 16:52:23 -07:00 |
| tools.md | [agents] remove agents 🧹 (#37368) | 2025-04-11 18:42:37 +01:00 |
| torchscript.md | Fix wording in torchscript.md (#38004) | 2025-05-08 16:47:45 +01:00 |
| trainer.md | Update trainer.md (#38113) | 2025-05-14 12:40:00 +00:00 |
| training.md | [docs] Redesign (#31757) | 2025-03-03 10:33:46 -08:00 |
| troubleshooting.md | Update all references to canonical models (#29001) | 2024-02-16 08:16:58 +01:00 |
| video_processors.md | 🔴 Video processors as a separate class (#35206) | 2025-05-12 11:55:51 +02:00 |