Mirror of https://github.com/huggingface/transformers.git
We're already using as many processes in parallel as we have CPU cores. Furthermore, the number of cores may be incorrectly calculated as 36 (we've seen this with pytest-xdist), which compounds the problem. PyTorch performance craters without this.
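The issue described is thread oversubscription: if N test workers each let PyTorch spawn N CPU threads, the machine runs N² threads. A minimal sketch of one common mitigation, capping per-process thread counts before PyTorch initializes; the file name and the choice of a single thread per worker are illustrative assumptions, not necessarily the repository's actual fix:

```python
# conftest.py-style sketch (hypothetical): cap per-process threading so that
# pytest-xdist workers do not each spawn one thread per CPU core.
import os

# These environment variables must be set before torch (or numpy) loads its
# threading backend, hence module import time rather than inside a fixture.
os.environ.setdefault("OMP_NUM_THREADS", "1")  # OpenMP, used by PyTorch CPU ops
os.environ.setdefault("MKL_NUM_THREADS", "1")  # Intel MKL BLAS

import torch

# Belt and braces: also cap PyTorch's own intra-op thread pool.
torch.set_num_threads(1)
```

The same effect can be achieved by exporting the environment variables in the CI configuration itself, which avoids relying on import order.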
Files in this directory:

- config.yml
- deploy.sh