examples/docs: caveat that PL examples don't work on TPU (#8309)
This commit is contained in:
parent 76e7a44dee
commit ebde57acac
@@ -63,6 +63,7 @@ very detailed [pytorch/xla README](https://github.com/pytorch/xla/blob/master/RE
 In this repo, we provide a very simple launcher script named [xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/xla_spawn.py) that lets you run our example scripts on multiple TPU cores without any boilerplate.
 Just pass a `--num_cores` flag to this script, then your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for torch.distributed).
+Note that this approach does not work for examples that use `pytorch-lightning`.
 
 For example for `run_glue`:
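The diff stops at the `For example for `run_glue`:` line, so the command itself is not shown in this view. As a rough sketch of how the launcher described above is invoked, assuming 8 TPU cores and the text-classification example script (the script path and `run_glue.py` arguments below are illustrative, not taken from this diff):

```bash
# Hypothetical invocation of the xla_spawn.py launcher (run from the examples/ directory);
# the exact command shown in the README may differ.
python xla_spawn.py --num_cores 8 \
  text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --output_dir /tmp/mnli_output
```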