
# Accelerator selection
During distributed training, you can specify the number and order of accelerators (CUDA, XPU, MPS, HPU, etc.) to use. This is useful when you have accelerators with different computing power and want to use the faster accelerator first, or when you only want to use a subset of the available accelerators. The selection process works for both `DistributedDataParallel` and `DataParallel`, and it does not require Accelerate or the DeepSpeed integration.
This guide will show you how to select the number of accelerators to use and the order to use them in.
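Before launching, it can help to check how many devices PyTorch can actually see. A minimal sketch, assuming a CUDA build of PyTorch (on Intel XPUs, `torch.xpu.device_count()` plays the same role):

```bash
# Print how many CUDA devices are visible to this process.
python -c "import torch; print(torch.cuda.device_count())"
```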
## Number of accelerators
For example, if there are 4 accelerators and you only want to use the first 2, run the command below.
With torchrun, use `--nproc_per_node` to select how many accelerators to use.

```bash
torchrun --nproc_per_node=2 trainer-program.py ...
```

With Accelerate, use `--num_processes` to select how many accelerators to use.

```bash
accelerate launch --num_processes 2 trainer-program.py ...
```

With DeepSpeed, use `--num_gpus` to select how many GPUs to use.

```bash
deepspeed --num_gpus 2 trainer-program.py ...
```
## Order of accelerators
To select specific accelerators and their order, use the environment variable appropriate for your hardware. This is often set on the command line for each run, but can also be added to your `~/.bashrc` or other startup config file.
For example, with CUDA GPUs, if there are 4 GPUs (0, 1, 2, 3) and you only want to run GPUs 0 and 2, set `CUDA_VISIBLE_DEVICES`:

```bash
CUDA_VISIBLE_DEVICES=0,2 torchrun trainer-program.py ...
```
Only GPUs 0 and 2 are "visible" to PyTorch and are mapped to `cuda:0` and `cuda:1` respectively.
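To confirm the mapping, you can print what a process sees under this setting. A minimal sketch, assuming a CUDA build of PyTorch:

```bash
# PyTorch reports 2 devices here; cuda:0 is physical GPU 0 and cuda:1 is physical GPU 2.
CUDA_VISIBLE_DEVICES=0,2 python -c "import torch; print(torch.cuda.device_count(), torch.cuda.get_device_name(0))"
```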
To reverse the order (use GPU 2 as `cuda:0` and GPU 0 as `cuda:1`):

```bash
CUDA_VISIBLE_DEVICES=2,0 torchrun trainer-program.py ...
```
To run without any GPUs:

```bash
CUDA_VISIBLE_DEVICES= python trainer-program.py ...
```
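You can verify that no GPU is picked up in this case. A minimal sketch, assuming a CUDA build of PyTorch:

```bash
# With an empty CUDA_VISIBLE_DEVICES, no CUDA device is available to the process.
CUDA_VISIBLE_DEVICES= python -c "import torch; print(torch.cuda.is_available())"
```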
You can also control the order of CUDA devices with `CUDA_DEVICE_ORDER`:

- Order by PCIe bus ID (matches `nvidia-smi`):

  ```bash
  export CUDA_DEVICE_ORDER=PCI_BUS_ID
  ```

- Order by compute capability (fastest first):

  ```bash
  export CUDA_DEVICE_ORDER=FASTEST_FIRST
  ```
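Both variables can be combined on the same command line. An illustrative sketch (which physical GPUs end up selected depends on your machine's topology):

```bash
# Enumerate devices in nvidia-smi order, then expose only GPUs 0 and 2 to the run.
CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,2 torchrun --nproc_per_node=2 trainer-program.py ...
```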
For Intel XPUs, use `ZE_AFFINITY_MASK` instead. For example, if there are 4 XPUs (0, 1, 2, 3) and you only want to run XPUs 0 and 2:

```bash
ZE_AFFINITY_MASK=0,2 torchrun trainer-program.py ...
```
Only XPUs 0 and 2 are "visible" to PyTorch and are mapped to `xpu:0` and `xpu:1` respectively.
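As with CUDA, you can confirm the mapping from inside a process. A minimal sketch, assuming a PyTorch build with XPU support (which exposes `torch.xpu`):

```bash
# PyTorch reports 2 XPU devices here; xpu:0 is physical XPU 0 and xpu:1 is physical XPU 2.
ZE_AFFINITY_MASK=0,2 python -c "import torch; print(torch.xpu.device_count())"
```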
To reverse the order (use XPU 2 as `xpu:0` and XPU 0 as `xpu:1`):

```bash
ZE_AFFINITY_MASK=2,0 torchrun trainer-program.py ...
```
You can also control the order of Intel XPUs with:

```bash
export ZE_ENABLE_PCI_ID_DEVICE_ORDER=1
```
For more information about device enumeration and sorting on Intel XPU, please refer to the Level Zero documentation.
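The two variables can also be combined on the same command line. An illustrative sketch:

```bash
# Enumerate XPUs by PCI ID, then expose only XPUs 0 and 2 to the run.
ZE_ENABLE_PCI_ID_DEVICE_ORDER=1 ZE_AFFINITY_MASK=0,2 torchrun --nproc_per_node=2 trainer-program.py ...
```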
Warning
Environment variables can be exported instead of being added to the command line. This is not recommended because it can be confusing if you forget how the environment variable was set up and you end up using the wrong accelerators. Instead, it is common practice to set the environment variable for a specific training run on the same command line.