transformers/docs/source/en
Latest commit efe72fe21f by Mohamed Mekkouri
Adding FP8 Quantization to transformers (#36026)
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Committed: 2025-02-13 13:01:19 +01:00
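Since the commit heading this listing adds FP8 quantization, a minimal usage sketch may help orient readers. It assumes the `FineGrainedFP8Config` class associated with #36026, a checkpoint name chosen purely for illustration, and an FP8-capable GPU (e.g. Hopper/H100) at runtime:

```python
# A minimal sketch of loading a model with the fine-grained FP8 quantization
# added in #36026. FineGrainedFP8Config and its defaults are assumptions based
# on the commit title; the checkpoint name is purely illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer, FineGrainedFP8Config

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical checkpoint choice

# Quantize the weights to FP8 on the fly while loading.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=FineGrainedFP8Config(),
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("FP8 quantization reduces memory by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```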
Name | Last commit | Last updated
internal | Implement AsyncTextIteratorStreamer for asynchronous streaming (#34931) | 2024-12-20 12:08:12 +01:00
main_classes | Adding FP8 Quantization to transformers (#36026) | 2025-02-13 13:01:19 +01:00
model_doc | Helium documentation fixes (#36170) | 2025-02-13 12:20:53 +01:00
quantization | Adding FP8 Quantization to transformers (#36026) | 2025-02-13 13:01:19 +01:00
tasks | Move DataCollatorForMultipleChoice from the docs to the package (#34763) | 2025-02-13 12:01:28 +01:00
_config.py | Add optimized PixtralImageProcessorFast (#34836) | 2024-11-28 16:04:05 +01:00
_redirects.yml
_toctree.yml | Adding FP8 Quantization to transformers (#36026) | 2025-02-13 13:01:19 +01:00
accelerate.md
add_new_model.md
add_new_pipeline.md | [docs] Follow up register_pipeline (#35310) | 2024-12-20 09:22:44 -08:00
agents_advanced.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00
agents.md | Multiple typo fixes in Tutorials docs (#35035) | 2024-12-02 15:26:34 +00:00
attention.md
autoclass_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
bertology.md
big_models.md
chat_templating.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00
community.md
contributing.md
conversations.md
create_a_model.md
custom_models.md
debugging.md | DeepSpeed github repo move sync (#36021) | 2025-02-05 08:19:31 -08:00
deepspeed.md | DeepSpeed github repo move sync (#36021) | 2025-02-05 08:19:31 -08:00
fast_tokenizers.md
fsdp.md | Fix docs typos. (#35465) | 2025-01-02 11:29:46 +01:00
generation_strategies.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00
gguf.md | Add Gemma2 GGUF support (#34002) | 2025-01-03 14:50:07 +01:00
glossary.md
how_to_hack_models.md | Add utility for Reload Transformers imports cache for development workflow #35508 (#35858) | 2025-02-12 12:45:11 +01:00
hpo_train.md
index.md | Add Apple's Depth-Pro for depth estimation (#34583) | 2025-02-10 11:32:45 +00:00
installation.md | [docs] uv install (#35821) | 2025-01-27 08:49:28 -08:00
kv_cache.md | [docs] no hard-coding cuda (#36043) | 2025-02-05 08:22:33 -08:00
llm_optims.md | Update llm_optims docs for sdpa_kernel (#35481) | 2025-01-06 08:54:31 -08:00
llm_tutorial_optimization.md | [docs] add explanation to release_memory() (#34911) | 2024-11-27 07:47:28 -08:00
llm_tutorial.md | [docs] no hard coding cuda as bnb has multi-backend support (#35867) | 2025-02-05 08:20:02 -08:00
model_memory_anatomy.md
model_sharing.md | [docs] update not-working model revision (#34682) | 2024-11-11 07:09:31 -08:00
model_summary.md
modular_transformers.md | Improve modular documentation (#35737) | 2025-01-21 17:53:30 +01:00
multilingual.md
notebooks.md
pad_truncation.md
peft.md
perf_hardware.md
perf_infer_cpu.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
perf_infer_gpu_multi.md | Update doc re list of models supporting TP (#35864) | 2025-02-12 15:53:27 +01:00
perf_infer_gpu_one.md | Add Apple's Depth-Pro for depth estimation (#34583) | 2025-02-10 11:32:45 +00:00
perf_torch_compile.md | [docs] use device-agnostic instead of cuda (#35047) | 2024-12-03 10:53:45 -08:00
perf_train_cpu_many.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00
perf_train_cpu.md | [doc] use full path for run_qa.py (#34914) | 2024-11-26 09:23:44 -08:00
perf_train_gpu_many.md | DeepSpeed github repo move sync (#36021) | 2025-02-05 08:19:31 -08:00
perf_train_gpu_one.md | layernorm_decay_fix (#35927) | 2025-02-04 11:01:49 +01:00
perf_train_special.md
perf_train_tpu_tf.md
performance.md | Simplify Tensor Parallel implementation with PyTorch TP (#34184) | 2024-11-18 19:51:49 +01:00
perplexity.md | [docs] use device-agnostic API instead of cuda (#34913) | 2024-11-26 09:23:34 -08:00
philosophy.md
pipeline_tutorial.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
pipeline_webserver.md
pr_checks.md
preprocessing.md
quicktour.md | [chat] docs fix (#35840) | 2025-01-22 14:32:27 +00:00
run_scripts.md
sagemaker.md
serialization.md | [docs] fix model checkpoint name (#36075) | 2025-02-07 12:41:52 -08:00
task_summary.md | [doctest] Fixes (#35863) | 2025-01-26 15:26:38 -08:00
tasks_explained.md | fix: Wrong task mentioned in docs (#34757) | 2024-11-18 18:42:28 +00:00
testing.md | [tests] add XPU part to testing (#34778) | 2024-11-18 09:59:11 -08:00
tf_xla.md
tflite.md
tiktoken.md | Updated documentation and added conversion utility (#34319) | 2024-11-25 18:44:09 +01:00
tokenizer_summary.md
torchscript.md
trainer.md | Optim: APOLLO optimizer integration (#36062) | 2025-02-12 15:33:43 +01:00
training.md | [docs] Increase visibility of torch_dtype="auto" (#35067) | 2024-12-04 09:18:44 -08:00
troubleshooting.md