suggest a numerical limit of 50MB for determining @slow (#8824)

Stas Bekman 2020-11-27 13:04:54 -08:00 committed by GitHub
parent 0a921b6459
commit 00ea45659f


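The diff below adds a ~50MB size threshold for deciding when a test must be marked ``@slow``. In the ``transformers`` test suite such tests are gated by a ``slow`` decorator (from ``transformers.testing_utils``) together with the ``RUN_SLOW`` environment variable. The snippet below is an illustrative reimplementation of that gating, not the library's actual code; the decorator body and message strings are assumptions:

```python
import functools
import os
import unittest


def slow(test_case):
    """Skip a test unless RUN_SLOW=1 is set in the environment.

    Illustrative sketch only: the real decorator lives in
    transformers.testing_utils and may differ in detail.
    """
    @functools.wraps(test_case)
    def wrapper(*args, **kwargs):
        if os.environ.get("RUN_SLOW", "0") != "1":
            raise unittest.SkipTest("test is slow; set RUN_SLOW=1 to run it")
        return test_case(*args, **kwargs)
    return wrapper


class ExampleTests(unittest.TestCase):
    @slow
    def test_dataset_integration(self):
        # Hypothetical heavy test: it would download a dataset larger
        # than ~50MB, so per the guideline below it is marked @slow.
        self.assertTrue(True)

    def test_fast_unit(self):
        # Small and fast: stays in the default (non-slow) test suite.
        self.assertEqual(2 + 2, 4)
```

Running this module with ``python -m unittest`` skips the slow test unless ``RUN_SLOW=1`` is exported first, which mirrors how CI keeps the default suite fast while still exercising heavy tests in a scheduled job.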
@@ -909,9 +909,10 @@ pipelines), then we should run that test in the non-slow test suite. If it's foc
 such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine
 this approach we should have exceptions:
-* All tests that need to download a heavy set of weights (e.g., model or tokenizer integration tests, pipeline
-  integration tests) should be set to slow. If you're adding a new model, you should create and upload to the hub a
-  tiny version of it (with random weights) for integration tests. This is discussed in the following paragraphs.
+* All tests that need to download a heavy set of weights or a dataset that is larger than ~50MB (e.g., model or
+  tokenizer integration tests, pipeline integration tests) should be set to slow. If you're adding a new model, you
+  should create and upload to the hub a tiny version of it (with random weights) for integration tests. This is
+  discussed in the following paragraphs.
 * All tests that need to do a training not specifically optimized to be fast should be set to slow.
 * We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly slow, and set them to
   ``@slow``. Auto-modeling tests, which save and load large files to disk, are a good example of tests that are marked