Mirror of https://github.com/huggingface/transformers.git
fix link in performance docs (#17419)
parent: 284fc6c0bb
commit: 740a1574f1
@@ -30,7 +30,7 @@ Training transformer models efficiently requires an accelerator such as a GPU or
 
 Training large models on a single GPU can be challenging but there are a number of tools and methods that make it feasible. In this section methods such as mixed precision training, gradient accumulation and checkpointing, efficient optimizers, as well as strategies to determine the best batch size are discussed.
 
-[Go to single GPU training section](perf_train_gpu_single)
+[Go to single GPU training section](perf_train_gpu_one)
 
 ### Multi-GPU
 
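For context, the techniques the changed docs paragraph names (mixed precision, gradient accumulation, gradient checkpointing, memory-efficient optimizers) are all exposed through `transformers.TrainingArguments`. The following is a minimal sketch of how they combine on a single GPU; the `output_dir` and batch-size values are illustrative placeholders, not part of this commit:

```python
from transformers import TrainingArguments

# Sketch only: values are example choices, not recommendations from the docs.
args = TrainingArguments(
    output_dir="out",                # placeholder path
    per_device_train_batch_size=4,   # per-step batch that fits in GPU memory
    gradient_accumulation_steps=8,   # effective batch size = 4 * 8 = 32
    gradient_checkpointing=True,     # recompute activations to save memory
    fp16=True,                       # mixed precision training
    optim="adafactor",               # memory-efficient optimizer
)
```

Gradient accumulation lets the effective batch size grow beyond what fits in memory at once, while checkpointing and mixed precision reduce the memory each step needs; together they are the usual way to determine a workable batch size on one GPU, as the linked section (`perf_train_gpu_one`) discusses.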