Prevent excessive parallelism in PyTorch.

We're already using as many processes in parallel as we have CPU cores.
Furthermore, the number of cores may be incorrectly calculated as 36
(we've seen this with pytest-xdist), which compounds the problem.

PyTorch performance craters without this.
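
For reference, the same cap can also be applied from Python; the following is only a sketch of the technique, not part of this commit, which relies purely on the environment variable added in the CI config below:

    import os

    # The variable should be set before torch (and its OpenMP runtime) loads,
    # which is why the CI jobs export it in their environment; setting it here
    # only works because it happens before the import.
    os.environ.setdefault("OMP_NUM_THREADS", "1")

    import torch

    # Additionally cap PyTorch's intra-op thread pool so each of the parallel
    # test processes uses a single CPU thread instead of one per core.
    torch.set_num_threads(1)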
Aymeric Augustin 2019-12-20 20:56:59 +01:00
parent bb3bfa2d29
commit 80caf79d07

.circleci/config.yml

@@ -4,6 +4,8 @@ jobs:
     working_directory: ~/transformers
     docker:
       - image: circleci/python:3.5
+    environment:
+      OMP_NUM_THREADS: 1
     resource_class: xlarge
     parallelism: 1
     steps:
@@ -19,6 +21,8 @@ jobs:
     working_directory: ~/transformers
     docker:
       - image: circleci/python:3.5
+    environment:
+      OMP_NUM_THREADS: 1
     resource_class: xlarge
     parallelism: 1
     steps:
@@ -34,6 +38,8 @@ jobs:
     working_directory: ~/transformers
     docker:
       - image: circleci/python:3.5
+    environment:
+      OMP_NUM_THREADS: 1
     resource_class: xlarge
     parallelism: 1
     steps:
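
A quick sanity check (not part of the diff) that the variable is picked up inside the CI container:

    import torch

    # With OMP_NUM_THREADS=1 exported in the job environment, an OpenMP-backed
    # PyTorch build should report a single intra-op thread instead of one per
    # (possibly miscounted) core.
    print(torch.get_num_threads())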