transformers/tests
fxmarty 80377eb018
F.scaled_dot_product_attention support (#26572)
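
For context: this PR adds support for PyTorch's native `F.scaled_dot_product_attention` (SDPA) as an attention backend, selectable through the new `attn_implementation` argument. A minimal usage sketch (the checkpoint name is only a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM

# Opt into PyTorch's native scaled_dot_product_attention kernels
# (requires torch>=2.1.1 for some models, per the log below).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint
    torch_dtype=torch.float16,
    attn_implementation="sdpa",
)
```
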
* add sdpa

* wip

* cleaning

* add ref

* yet more cleaning

* and more :)

* wip llama

* working llama

* add output_attentions=True support

* bigcode sdpa support

* fixes

* gpt-bigcode support, require torch>=2.1.1

* add falcon support

* fix conflicts falcon

* style

* fix attention_mask definition

* remove output_attentions from AttentionMaskConverter

* support whisper without removing any Copied from statement

* fix mbart default to eager renaming

* fix typo in falcon

* fix is_causal in SDPA

* check is_flash_attn_2_available in the model's init as well, in case the model is not initialized through from_pretrained
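
A sketch of the idea behind this check (illustrative only, not the exact transformers code; `MyFlashModel` is hypothetical):

```python
from transformers import PreTrainedModel
from transformers.utils import is_flash_attn_2_available

class MyFlashModel(PreTrainedModel):  # hypothetical model class
    def __init__(self, config):
        super().__init__(config)
        # A model can be built directly from a config, bypassing
        # from_pretrained, so availability must be verified here too.
        if config._attn_implementation == "flash_attention_2" and not is_flash_attn_2_available():
            raise ImportError("flash_attention_2 was requested but flash-attn is not installed")
```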

* add warnings when falling back on the manual implementation

* make the doc more precise

* wip: replace _flash_attn_2_enabled with config.attn_implementation
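
Sketch of that migration (attribute names as used elsewhere in this log; the dispatch helpers are illustrative, not the real model code):

```python
def uses_flash_attention_old(config):
    # Before: a private boolean flag set on the config.
    return getattr(config, "_flash_attn_2_enabled", False)

def uses_flash_attention_new(config):
    # After: one string attribute selects eager / sdpa / flash_attention_2.
    return getattr(config, "_attn_implementation", "eager") == "flash_attention_2"
```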

* fix typo

* add tests

* style

* add a copy.deepcopy on the config in from_pretrained, as we do not want to modify it inplace
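
Roughly what this amounts to (a simplified sketch of the intent, not the actual from_pretrained code):

```python
import copy

def _resolve_config(config, attn_implementation=None):
    # Deep-copy first so the caller's config object is never mutated
    # in place when from_pretrained adjusts it.
    config = copy.deepcopy(config)
    if attn_implementation is not None:
        config._attn_implementation = attn_implementation
    return config
```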

* obey config.attn_implementation if a config is passed in from_pretrained

* fix is_torch_sdpa_available when torch is not installed
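
The guard plausibly boils down to something like this sketch (hedged; the real helper lives in transformers.utils):

```python
from packaging import version
from transformers.utils import is_torch_available

def is_torch_sdpa_available_sketch():
    # Do not assume torch is importable: check availability first.
    if not is_torch_available():
        return False
    import torch

    # SDPA support requires torch>=2.1.1 (see the earlier bullet).
    return version.parse(torch.__version__) >= version.parse("2.1.1")
```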

* remove dead code

* Update src/transformers/modeling_attn_mask_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/models/bart/modeling_bart.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* remove duplicate pretraining_tp code

* add dropout in llama

* make the comment on attn_mask more precise

* add fmt: off for _unmask_unattended docstring

* make the num_masks comment more precise

* nuke pretraining_tp in LlamaSDPAAttention following Arthur's suggestion

* cleanup modeling_utils

* backward compatibility

* fix style as requested

* style

* improve documentation

* test pass

* style

* add _unmask_unattended tests

* skip meaningless tests for idefics

* hard_check SDPA requirements when specifically requested

* standardize the use of XXX_ATTENTION_CLASSES

* fix SDPA bug with mem-efficient backend on CUDA when using fp32

* fix test

* rely on SDPA is_causal parameter to handle the causal mask in some cases
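
Concretely: when the mask is purely causal and there is no padding, SDPA can be called without materializing a mask at all. A minimal illustration:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(2, 8, 5, 64)

# No padding, purely causal: pass is_causal=True and no mask, letting
# SDPA's fused kernels handle causality.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, is_causal=True)

# With padding or custom masking, an explicit mask is still required.
causal = torch.ones(5, 5).tril().bool()
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=causal, is_causal=False)
```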

* fix FALCON_ATTENTION_CLASSES

* remove _flash_attn_2_enabled occurrences

* fix test

* add OPT to the list of supported flash models

* improve test

* properly test on different SDPA backends, on different dtypes & properly handle separately the pad tokens in the test
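
For reference, the torch 2.1-era way to pin SDPA to a single backend in such a test (this context manager was later superseded by torch.nn.attention.sdpa_kernel):

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(1, 8, 16, 64, device="cuda", dtype=torch.float16)

# Disable all but one backend so each kernel (flash / math /
# mem-efficient) can be exercised in isolation across dtypes.
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=False, enable_mem_efficient=True
):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```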

* remove remaining _flash_attn_2_enabled occurrence

* Update src/transformers/modeling_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/modeling_attn_mask_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update docs/source/en/perf_infer_gpu_one.md

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* remove use_attn_implementation

* fix docstring & slight bug

* make attn_implementation internal (_attn_implementation)

* typos

* fix tests

* deprecate use_flash_attention_2=True
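
Migration sketch for the deprecation ("some/model" is a placeholder):

```python
from transformers import AutoModelForCausalLM

# Deprecated:
model = AutoModelForCausalLM.from_pretrained("some/model", use_flash_attention_2=True)

# Preferred going forward:
model = AutoModelForCausalLM.from_pretrained(
    "some/model", attn_implementation="flash_attention_2"
)
```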

* fix test

* add back llama that was removed by mistake

* fix tests

* remove _flash_attn_2_enabled occurrences (again)

* add check & test that passed attn_implementation is valid
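
The check is presumably along these lines (a sketch; the real error message differs):

```python
def _check_attn_implementation(attn_implementation):
    # Fail fast on unknown values instead of silently falling back
    # to eager attention.
    valid = ("eager", "sdpa", "flash_attention_2")
    if attn_implementation is not None and attn_implementation not in valid:
        raise ValueError(
            f"attn_implementation={attn_implementation!r} is not valid; expected one of {valid}"
        )
```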

* fix falcon torchscript export

* fix device of mask in tests

* add tip about torch.jit.trace and move the BetterTransformer doc below sdpa

* fix parameterized.expand order

* move tests from test_modeling_attn_mask_utils to test_modeling_utils as a relevant test class is already there

* update sdpaattention class with the new cache

* Update src/transformers/configuration_utils.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/models/bark/modeling_bark.py

* address review comments

* WIP torch.jit.trace fix; still to do: test both eager & sdpa

* add test for torch.jit.trace for both eager/sdpa
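
Rough shape of that test (a sketch; "some/model" is a placeholder, and torchscript=True makes the model return traceable tuples):

```python
import torch
from transformers import AutoModelForCausalLM

for impl in ("eager", "sdpa"):
    model = AutoModelForCausalLM.from_pretrained(
        "some/model", attn_implementation=impl, torchscript=True
    )
    model.eval()
    input_ids = torch.randint(0, model.config.vocab_size, (1, 8))
    attention_mask = torch.ones_like(input_ids)
    # Tracing must succeed with either attention implementation.
    traced = torch.jit.trace(model, (input_ids, attention_mask))
```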

* fix falcon with torch==2.0, which needs to use sdpa

* fix doc

* hopefully last fix

* fix key_value_length, which no longer has a default in the mask converter

* is it flaky?

* fix speculative decoding bug

* tests do pass

* fix following #27907

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2023-12-09 05:38:14 +09:00
benchmark [Test refactor 1/5] Per-folder tests reorganization (#15725) 2022-02-23 15:46:28 -05:00
bettertransformer Fixed malapropism error (#26660) 2023-10-09 11:04:57 +02:00
deepspeed device-agnostic deepspeed testing (#27342) 2023-11-09 12:34:13 +01:00
extended Device agnostic trainer testing (#27131) 2023-10-30 18:16:40 +00:00
fixtures [WIP] add SpeechT5 model (#18922) 2023-02-03 12:43:46 -05:00
fsdp device agnostic fsdp testing (#27120) 2023-11-01 07:17:06 +01:00
generation Fix remaining issues in beam score calculation (#27808) 2023-12-08 14:14:16 +01:00
models F.scaled_dot_product_attention support (#26572) 2023-12-09 05:38:14 +09:00
optimization Make schedulers picklable by making lr_lambda fns global (#21768) 2023-03-02 12:08:43 -05:00
peft_integration [Peft] modules_to_save support for peft integration (#27466) 2023-11-14 10:32:57 +01:00
pipelines Fix 2 tests in FillMaskPipelineTests (#27889) 2023-12-08 14:55:29 +01:00
quantization Faster generation using AWQ + Fused modules (#27411) 2023-12-05 12:14:45 +01:00
repo_utils Allow # Ignore copy (#27328) 2023-12-07 10:00:08 +01:00
sagemaker Broken links fixed related to datasets docs (#27569) 2023-11-17 13:44:09 -08:00
tokenization [Styling] stylify using ruff (#27144) 2023-11-16 17:43:19 +01:00
tools Add support for for loops in python interpreter (#24429) 2023-06-26 09:58:14 -04:00
trainer Allow resume_from_checkpoint to handle auto_find_batch_size (#27568) 2023-12-08 11:51:02 -05:00
utils F.scaled_dot_product_attention support (#26572) 2023-12-09 05:38:14 +09:00
__init__.py GPU text generation: Moved the encoded_prompt to correct device 2020-01-06 15:11:12 +01:00
test_backbone_common.py [AutoBackbone] Add test (#26094) 2023-09-18 23:47:54 +02:00
test_cache_utils.py Generate: SinkCache can handle iterative prompts (#27907) 2023-12-08 20:02:20 +00:00
test_configuration_common.py [ PretrainedConfig] Improve messaging (#27438) 2023-11-15 14:10:39 +01:00
test_configuration_utils.py F.scaled_dot_product_attention support (#26572) 2023-12-09 05:38:14 +09:00
test_feature_extraction_common.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_feature_extraction_utils.py Remove-auth-token (#27060) 2023-11-13 14:20:54 +01:00
test_image_processing_common.py Input data format (#25464) 2023-08-16 17:45:02 +01:00
test_image_processing_utils.py Remove-auth-token (#27060) 2023-11-13 14:20:54 +01:00
test_image_transforms.py Normalize floating point cast (#27249) 2023-11-10 15:35:27 +00:00
test_modeling_common.py F.scaled_dot_product_attention support (#26572) 2023-12-09 05:38:14 +09:00
test_modeling_flax_common.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_modeling_flax_utils.py Default to msgpack for safetensors (#27460) 2023-11-13 15:17:01 +01:00
test_modeling_tf_common.py Deprecate TransfoXL (#27607) 2023-11-24 11:48:02 +01:00
test_modeling_tf_utils.py Default to msgpack for safetensors (#27460) 2023-11-13 15:17:01 +01:00
test_modeling_utils.py F.scaled_dot_product_attention support (#26572) 2023-12-09 05:38:14 +09:00
test_pipeline_mixin.py Shorten the conversation tests for speed + fixing position overflows (#26960) 2023-10-31 14:20:04 +00:00
test_sequence_feature_extraction_common.py Fix typo (#25966) 2023-09-05 10:12:25 +02:00
test_tokenization_common.py [Styling] stylify using ruff (#27144) 2023-11-16 17:43:19 +01:00
test_tokenization_utils.py Remove-auth-token (#27060) 2023-11-13 14:20:54 +01:00