transformers/docs/source/en/perf_infer_gpu_many.md

# Efficient Inference on Multiple GPUs

This document contains information on how to run inference efficiently on multiple GPUs.

Note: A multi-GPU setup can use the majority of the strategies described in the [single-GPU section](perf_infer_gpu_one). You should be aware, though, of a few simple techniques that can improve GPU usage further.
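
As a quick illustration, the sketch below (not part of the original guide) shows one common strategy applied to a multi-GPU machine: loading a model with `device_map="auto"` so that Accelerate shards it across all visible GPUs. It assumes `accelerate` is installed, and the checkpoint name is only a placeholder.

```python
# Minimal sketch: shard a model across available GPUs with device_map="auto".
# Requires `pip install accelerate`; the checkpoint is only an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloom-3b"  # example checkpoint, replace with your model

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",          # spread the layers across all visible GPUs
    torch_dtype=torch.float16,  # half precision to reduce memory per GPU
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```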

## BetterTransformer for faster inference

We have recently integrated `BetterTransformer` for faster inference on multi-GPU for text, image, and audio models. Check the [Optimum documentation](https://huggingface.co/docs/optimum/bettertransformer/overview) about this integration for more details.
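
The sketch below illustrates how the Optimum `BetterTransformer` API is typically combined with a multi-GPU `device_map`; it is an assumption-laden example rather than the official recipe. It assumes `optimum` and `accelerate` are installed, the architecture is supported by BetterTransformer, and the checkpoint name is only a placeholder.

```python
# Minimal sketch: convert a sharded model to BetterTransformer fastpath kernels.
# Assumes `pip install optimum accelerate`; the checkpoint is only an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.bettertransformer import BetterTransformer

checkpoint = "facebook/opt-1.3b"  # example checkpoint, replace with your model

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",          # shard across the available GPUs
    torch_dtype=torch.float16,
)

# Swap supported layers for BetterTransformer's optimized implementations.
model = BetterTransformer.transform(model)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```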