transformers/docs/source/en
Abhishek Maurya 65753d6065
Remove graph breaks for torch.compile() in flash_attention_forward when Llama Model is padding-free tuned (#33932)
* fix: fixes for graph breaks
* fix: formatting
* fix: import error
* fix: Add Fa2Kwargs
* fix: PR Changes
* PR changes
* PR changes
* PR changes
* PR changes
* Revert "PR changes" (reverts commit 39d2868e5c)
* PR changes
* fix: FlashAttentionKwarg
* fix: FlashAttentionKwarg
* PR Changes
* PR Changes
* PR Changes
* PR Changes
* PR Changes
* addition of documentation
* change in _flash_attention_forward
* make fix-copies
* revert make fix-copies
* fix copies
* style
* loss kwargs typing
* style and pull latest changes
---------

Signed-off-by: Abhishek <maurya.abhishek@ibm.com>
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
2024-10-24 11:02:54 +02:00
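
For context, the scenario this commit targets is compiling a padding-free (flattened, no pad tokens) Llama forward pass that uses FlashAttention-2. The sketch below is illustrative only and not taken from these docs: the checkpoint id and prompts are placeholders, and it assumes a CUDA device with the flash-attn package installed.

```python
# Minimal sketch of the padding-free + torch.compile scenario (illustrative, not from these docs).
# Assumes a CUDA GPU, the flash-attn package, and access to a Llama checkpoint; the model id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder; any Llama checkpoint should work
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="flash_attention_2",  # uses the flash_attention_forward path
    torch_dtype=torch.bfloat16,
).to("cuda")

# Padding-free: concatenate the sequences into a single row and pass per-sequence
# position_ids instead of an attention mask, so no pad tokens are introduced.
texts = ["Hello world", "The quick brown fox jumps over the lazy dog"]
seqs = [tokenizer(t, return_tensors="pt")["input_ids"][0] for t in texts]
input_ids = torch.cat(seqs).unsqueeze(0).to("cuda")
position_ids = torch.cat([torch.arange(len(s)) for s in seqs]).unsqueeze(0).to("cuda")

compiled_model = torch.compile(model)  # with #33932 this forward should no longer hit graph breaks
with torch.no_grad():
    out = compiled_model(input_ids=input_ids, position_ids=position_ids)
```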
internal Add SynthID (watermarking by Google DeepMind) (#34350) 2024-10-23 21:18:52 +01:00
main_classes Add SynthID (watermarking by Google DeepMind) (#34350) 2024-10-23 21:18:52 +01:00
model_doc Add post_process_depth_estimation to image processors and support ZoeDepth's inference intricacies (#32550) 2024-10-22 15:50:54 +02:00
quantization [Docs] Update compressed_tensors.md (#33961) 2024-10-10 15:22:41 +02:00
tasks Add post_process_depth_estimation to image processors and support ZoeDepth's inference intricacies (#32550) 2024-10-22 15:50:54 +02:00
_config.py [Doc]: Broken link in Kubernetes doc (#33879) 2024-10-04 11:20:56 +02:00
_redirects.yml Docs / Quantization: Redirect deleted page (#31063) 2024-05-28 18:29:22 +02:00
_toctree.yml add Glm (#33823) 2024-10-18 17:41:12 +02:00
accelerate.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
add_new_model.md Model addition timeline (#33762) 2024-09-27 17:15:13 +02:00
add_new_pipeline.md add push_to_hub to pipeline (#29172) 2024-04-16 15:34:04 +01:00
agents_advanced.md Decorator for easier tool building (#33439) 2024-09-18 11:07:51 +02:00
agents.md Fix method name which changes in tutorial (#34252) 2024-10-21 14:21:52 -03:00
attention.md [Docs] Fix broken links and syntax issues (#28918) 2024-02-08 14:13:35 -08:00
autoclass_tutorial.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
benchmarks.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
bertology.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
big_models.md [docs] Big model loading (#29920) 2024-04-01 18:47:32 -07:00
chat_templating.md Add a doc section on writing generation prompts (#34248) 2024-10-21 14:35:57 +01:00
community.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
contributing.md Enable doc in Spanish (#16518) 2022-04-04 10:25:46 -04:00
conversations.md [docs] change temperature to a positive value (#32077) 2024-07-23 17:47:51 +01:00
create_a_model.md Enable HF pretrained backbones (#31145) 2024-06-06 22:02:38 +01:00
custom_models.md Updated the custom_models.md changed cross_entropy code (#33118) 2024-08-26 13:15:43 +02:00
debugging.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
deepspeed.md Fix typos (#31819) 2024-07-08 11:52:47 +01:00
fast_tokenizers.md Migrate doc files to Markdown. (#24376) 2023-06-20 18:07:47 -04:00
fsdp.md [docs] Trainer docs (#28145) 2023-12-20 10:37:23 -08:00
generation_strategies.md Universal Assisted Generation: Assisted generation with any assistant model (by Intel Labs) (#33383) 2024-10-10 14:41:53 +02:00
gguf.md Add GGUF for starcoder2 (#34094) 2024-10-14 10:22:49 +02:00
glossary.md Fix typos (#31819) 2024-07-08 11:52:47 +01:00
how_to_hack_models.md [Docs] Add Developer Guide: How to Hack Any Transformers Model (#33979) 2024-10-07 10:08:20 +02:00
hpo_train.md Trainer - deprecate tokenizer for processing_class (#32385) 2024-10-02 14:08:46 +01:00
index.md add Glm (#33823) 2024-10-18 17:41:12 +02:00
installation.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
kv_cache.md Cache: don't show warning in forward passes when past_key_values is None (#33541) 2024-09-19 12:02:46 +01:00
llm_optims.md Remove graph breaks for torch.compile() in flash_attention_forward when Llama Model is padding-free tuned (#33932) 2024-10-24 11:02:54 +02:00
llm_tutorial_optimization.md Enable BNB multi-backend support (#31098) 2024-09-24 03:40:56 -06:00
llm_tutorial.md Fix: typo (#33880) 2024-10-02 09:12:21 +01:00
model_memory_anatomy.md Enable BNB multi-backend support (#31098) 2024-09-24 03:40:56 -06:00
model_sharing.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
model_summary.md model_summary.md - Restore link to Harvard's Annotated Transformer. (#29702) 2024-03-23 18:29:39 -07:00
modular_transformers.md Improve modular converter (#33991) 2024-10-08 14:53:58 +02:00
multilingual.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
notebooks.md Enable doc in Spanish (#16518) 2022-04-04 10:25:46 -04:00
pad_truncation.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
peft.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
perf_hardware.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
perf_infer_cpu.md [Docs] Fix spelling and grammar mistakes (#28825) 2024-02-02 08:45:00 +01:00
perf_infer_gpu_one.md Attn implementation for composite models (#32238) 2024-10-22 06:54:44 +02:00
perf_torch_compile.md fix(docs): Fixed a link in docs (#32274) 2024-07-29 10:50:43 +01:00
perf_train_cpu_many.md [Doc]: Broken link in Kubernetes doc (#33879) 2024-10-04 11:20:56 +02:00
perf_train_cpu.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
perf_train_gpu_many.md Update perf_train_gpu_many.md (#31451) 2024-06-18 11:00:26 -07:00
perf_train_gpu_one.md Corrected max number for bf16 in transformer/docs (#33658) 2024-09-25 19:20:51 +02:00
perf_train_special.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
perf_train_tpu_tf.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
performance.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
perplexity.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
philosophy.md [docs] fixed links with 404 (#27327) 2023-11-06 19:45:03 +00:00
pipeline_tutorial.md Docs: Fixed whisper-large-v2 model link in docs (#32871) 2024-08-19 09:50:35 -07:00
pipeline_webserver.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
pr_checks.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
preprocessing.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
quicktour.md [docs] fix typo (#34235) 2024-10-22 09:46:07 -07:00
run_scripts.md [docs] refine the doc for train with a script (#33423) 2024-09-12 10:16:12 -07:00
sagemaker.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
serialization.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
task_summary.md More fixes for doctest (#30265) 2024-04-16 11:58:55 +02:00
tasks_explained.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
testing.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
tf_xla.md fix(docs): Fixed a link in docs (#32274) 2024-07-29 10:50:43 +01:00
tflite.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
tiktoken.md Support reading tiktoken tokenizer.model file (#31656) 2024-09-06 14:24:02 +02:00
tokenizer_summary.md [docs] Spanish translation of tokenizer_summary.md (#31154) 2024-06-03 16:52:23 -07:00
torchscript.md Fixed Majority of the Typos in transformers[en] Documentation (#33350) 2024-09-09 10:47:24 +02:00
trainer.md Trainer - deprecate tokenizer for processing_class (#32385) 2024-10-02 14:08:46 +01:00
training.md Added the necessary import of module (#30804) 2024-05-14 18:45:06 +01:00
troubleshooting.md Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00