transformers/docs/source
Latest commit b07770c5eb by Anton Vlasjuk: [GPT-NeoX] Add SDPA support (#31031)
* starting support for sdpa in `gptneox` models

* small comment on tests

* fix dropout

* documentation and style

* clarify concrete paths for reference

* generalise attn projections and rope application
  - added head mask check to sdpa mask creation
  - handle sdpa memory backend bug via own version flag

* update docs and style

* move dtype casting outside of general attn_projection_and_rope function

fix flash_attn_2 stuff

* more generic attn warning if output_attns or head_mask

* simplify head mask check by moving head mask creation to a later point

* remove copied llama artifact

* remove padding_mask from attention function signature

* removing unnecessary comments, only "save" attn implementation once

* [run_slow] gpt_neox
2024-06-26 13:56:36 +01:00
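The commit above wires GPT-NeoX into PyTorch's fused `scaled_dot_product_attention` (SDPA) kernel. As a rough reference for the math that kernel computes, here is a minimal NumPy sketch of scaled dot-product attention for a single head with a causal mask; the function name, shapes, and the `-1e9` masking constant are illustrative choices, not the library's implementation:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Reference attention for one head: q, k, v are (seq_len, head_dim)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (seq, seq) similarity scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block disallowed key positions
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                          # weighted sum of value vectors

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
causal = np.tril(np.ones((4, 4), dtype=bool))  # each token attends to itself and the past
out = scaled_dot_product_attention(q, k, v, mask=causal)
print(out.shape)  # (4, 8); row 0 can only attend to itself, so out[0] == v[0]
```

With this support merged, GPT-NeoX checkpoints can opt into the fused kernel via the standard `attn_implementation="sdpa"` argument to `from_pretrained`, the same switch used by other SDPA-enabled models in the library.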
de Docs / Quantization: Replace all occurences of load_in_8bit with bnb config (#31136) 2024-05-30 16:47:35 +02:00
en [GPT-NeoX] Add SDPA support (#31031) 2024-06-26 13:56:36 +01:00
es [docs] Spanish translation of tokenizer_summary.md (#31154) 2024-06-03 16:52:23 -07:00
fr Add missing French translation of tutoriel_pipeline.md (#31396) 2024-06-13 17:48:54 +02:00
hi More fixes for doctest (#30265) 2024-04-16 11:58:55 +02:00
it Docs / Quantization: Replace all occurences of load_in_8bit with bnb config (#31136) 2024-05-30 16:47:35 +02:00
ja docs: fix broken link (#31370) 2024-06-12 11:33:00 +01:00
ko docs: fix broken link (#31370) 2024-06-12 11:33:00 +01:00
ms Remove old TF port docs (#30426) 2024-04-23 16:06:20 +01:00
pt Use HF_HUB_OFFLINE + fix has_file in offline mode (#31016) 2024-05-29 11:55:43 +01:00
te docs: fix broken link (#31370) 2024-06-12 11:33:00 +01:00
tr Translate index.md to Turkish (#27093) 2023-11-08 08:35:20 -05:00
zh docs: fix broken link (#31370) 2024-06-12 11:33:00 +01:00
_config.py [#29174] ImportError Fix: Trainer with PyTorch requires accelerate>=0.20.1 Fix (#29888) 2024-04-08 14:21:16 +01:00