Commit Graph

488 Commits

Author SHA1 Message Date
Stas Bekman
805a202e1a
[CIs] report slow tests: add --durations=0 to some pytest jobs (#7884)
* add --durations=50 to some pytest runs

* report all tests
2020-10-19 08:23:14 -04:00
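For reference, `--durations=N` is a standard pytest flag: `--durations=0` prints the runtime of every test, while `--durations=50` lists only the 50 slowest. A minimal sketch of the same invocation done programmatically (the actual jobs call pytest from workflow files not shown in this log):

```python
import sys
import pytest

# Report the runtime of every test; "--durations=50" would list only the 50 slowest.
sys.exit(pytest.main(["--durations=0", "tests/"]))
```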
Stas Bekman
4eb61f8e88
remove USE_CUDA (#7861) 2020-10-19 07:08:34 -04:00
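A hedged illustration only, not the commit's actual diff: with the USE_CUDA flag gone, device selection can fall back to plain runtime detection, with GPU visibility steered externally via CUDA_VISIBLE_DEVICES.

```python
import torch

# Pure runtime detection instead of a USE_CUDA environment flag (illustrative).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"running on {device}")
```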
Thomas Wolf
ba8c4d0ac0
[Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659)
* splitting fast and slow tokenizers [WIP]

* [WIP] splitting sentencepiece and tokenizers dependencies

* update dummy objects

* add name_or_path to models and tokenizers

* prefix added to file names

* prefix

* styling + quality

* splitting all the tokenizer files - sorting sentencepiece-based ones

* update tokenizer version up to 0.9.0

* remove hard dependency on sentencepiece 🎉

* and removed hard dependency on tokenizers 🎉

* update conversion script

* update missing models

* fixing tests

* move test_tokenization_fast to main tokenization tests - fix bugs

* bump up tokenizers

* fix bert_generation

* update and fix several tokenizers

* keep sentencepiece in deps for now

* fix funnel and deberta tests

* fix fsmt

* fix marian tests

* fix layoutlm

* fix squeezebert and gpt2

* fix T5 tokenization

* fix xlnet tests

* style

* fix mbart

* bump up tokenizers to 0.9.2

* fix model tests

* fix tf models

* fix seq2seq examples

* fix tests without sentencepiece

* fix slow => fast conversion without sentencepiece

* update auto and bert generation tests

* fix mbart tests

* fix auto and common test without tokenizers

* fix tests without tokenizers

* clean up tests; lighten up when tokenizers + sentencepiece are both off

* style, quality, and test fixes

* add sentencepiece to doc/examples reqs

* leave sentencepiece on for now

* style + quality; split herbert and fix pegasus

* WIP Herbert fast

* add sample_text_no_unicode and fix herbert tokenization

* skip FSMT example test for now

* fix style

* fix fsmt in example tests

* update following Lysandre and Sylvain's comments

* Update src/transformers/testing_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/testing_utils.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/tokenization_utils_base.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/tokenization_utils_base.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
2020-10-18 20:51:24 +02:00
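The net effect of the commit above is that neither sentencepiece nor tokenizers is a hard requirement: code paths check for them at import time and tests are skipped when they are missing. A hedged sketch of that pattern (helper names are illustrative, not necessarily the library's exact API):

```python
import importlib.util
import unittest

_sentencepiece_available = importlib.util.find_spec("sentencepiece") is not None
_tokenizers_available = importlib.util.find_spec("tokenizers") is not None


def require_sentencepiece(test_case):
    """Skip a test when sentencepiece is not installed."""
    return unittest.skipUnless(_sentencepiece_available, "test requires sentencepiece")(test_case)


def require_tokenizers(test_case):
    """Skip a test when the tokenizers package is not installed."""
    return unittest.skipUnless(_tokenizers_available, "test requires tokenizers")(test_case)
```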
Thomas Wolf
55cb2ee62e
Green tests: update torch-hub test dependencies (add protobuf and pin tokenizer 0.9.0-RC2) (#7658)
* pin torch-hub test

* add protobuf dep
2020-10-08 13:21:15 +02:00
Lysandre Debut
44a93c981f
Number of GPUs for multi-gpu (#7472) 2020-09-30 06:53:20 -04:00
Lysandre
35e94c68df Number of GPUs 2020-09-30 12:29:26 +02:00
Lysandre Debut
056723ad1d
Multi-GPU setup (#7453) 2020-09-30 05:53:34 -04:00
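The three commits above tune the multi-GPU CI jobs. As an illustrative sketch only (not the runner configuration itself), test code can branch on the number of GPUs the runner actually exposes:

```python
import torch

# torch.cuda.device_count() returns 0 on CPU-only machines, so this is safe everywhere.
n_gpu = torch.cuda.device_count()
if n_gpu < 2:
    print(f"only {n_gpu} GPU(s) visible; multi-GPU tests would be skipped")
else:
    print(f"multi-GPU tests can run on {n_gpu} GPUs")
```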
Sylvain Gugger
514486739c
Fix CI with change of name of nlp (#7054)
* nlp -> datasets

* More nlp -> datasets

* Woopsie

* More nlp -> datasets

* One last
2020-09-10 14:51:08 -04:00
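The rename tracked above is the `nlp` package becoming `datasets`; the install target and import change while usage stays essentially the same. A minimal sketch:

```python
# Before the rename: `import nlp; squad = nlp.load_dataset("squad")`
# After (assumes `pip install datasets`):
from datasets import load_dataset

squad = load_dataset("squad")
```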
Sam Shleifer
9d1b4db2aa
add nlp install (#6767) 2020-08-27 11:08:14 -04:00
Sylvain Gugger
64c7c2bc15
Install nlp for github actions test (#6728) 2020-08-25 14:58:38 -04:00
Funtowicz Morgan
ac9702c284
Fix ONNX test_quantize unittest (#6716) 2020-08-25 13:24:40 -04:00
Sam Shleifer
a99d09c6f9
add new line to make examples run (#6706) 2020-08-25 06:26:29 -04:00
Lysandre Debut
79588e6fdb
Ci GitHub caching (#6382)
* Cache Github Actions CI

* Remove useless file
2020-08-10 10:39:31 -04:00
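A hedged illustration of the caching idea above: the workflow keys its dependency cache on a hash of the dependency specification, so the cache is reused until the dependencies change. The real job presumably relies on actions/cache and hashFiles; this Python sketch only mirrors the key logic.

```python
import hashlib
from pathlib import Path


def cache_key(*dep_files: str) -> str:
    """Derive a pip cache key from dependency files, analogous to hashFiles() in a workflow."""
    digest = hashlib.sha256()
    for name in dep_files:
        digest.update(Path(name).read_bytes())
    return f"pip-{digest.hexdigest()[:16]}"


print(cache_key("setup.py"))  # stable until setup.py changes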
Sam Shleifer
1f8e826518
[CI] Self-scheduled runner also pins torch (#6332) 2020-08-07 18:40:21 -04:00
Lysandre
c72f9c90a1 Remove --no-cache-dir from github CI 2020-08-07 09:07:22 +02:00
Lysandre Debut
0d9328f2ef
Patch GPU failures (#6281)
* Pin to 1.5.0

* Patch XLM GPU test
2020-08-07 02:58:15 -04:00
Lysandre Debut
1d5c3a3d96
Test with --no-cache-dir (#6235) 2020-08-04 03:20:19 -04:00
Lysandre Debut
d740351f7d
Upgrade pip when doing CI (#6234)
* Upgrade pip when doing CI

* Don't forget Github CI
2020-08-04 02:37:12 -04:00
Sam Shleifer
ae67b2439f
[CI] Install examples/requirements.txt (#5956) 2020-07-21 21:07:48 -04:00
Sam Shleifer
ddd40b3211
[CI] self-scheduled runner tests examples/ (#5927) 2020-07-21 17:01:07 -04:00
Sam Shleifer
c3c61ea017
[Fix] github actions CI by reverting #5138 (#5686) 2020-07-13 17:12:18 -04:00
Sam Shleifer
23231c0f78
[GH Runner] fix yaml indent (#5412) 2020-06-30 16:17:12 -04:00
Sam Shleifer
ac61114592
[CI] gh runner doesn't use -v, cats new result (#5409) 2020-06-30 16:12:14 -04:00
Sam Shleifer
80aa4b8aa6
[CI] GH-runner stores artifacts like CircleCI (#5318) 2020-06-30 15:01:53 -04:00
Julien Chaumond
365d452d4d
[ci] Slow GPU tests run daily (#4465) 2020-05-25 17:28:02 -04:00
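A hedged sketch of the usual slow-test gate behind a daily job like the one above (names are illustrative): slow tests are skipped by default and only run when an environment variable opts in, which is what a scheduled GPU run can set.

```python
import os
import unittest

_run_slow = os.environ.get("RUN_SLOW", "0").lower() in {"1", "yes", "true"}


def slow(test_case):
    """Mark a test as slow; it only runs when RUN_SLOW is set."""
    return unittest.skipUnless(_run_slow, "test is slow; set RUN_SLOW=1 to run")(test_case)


class ExampleTests(unittest.TestCase):
    @slow
    def test_full_model_integration(self):
        self.assertTrue(True)
```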
Julien Chaumond
5e7fe8b585
Distributed eval: SequentialDistributedSampler + gather all results (#4243)
* Distributed eval: SequentialDistributedSampler + gather all results

* For consistency only write to disk from world_master

Close https://github.com/huggingface/transformers/issues/4272

* Working distributed eval

* Hook into scripts

* Fix #3721 again

* TPU.mesh_reduce: stay in tensor space

Thanks @jysohn23

* Just a small comment

* whitespace

* torch.hub: pip install packaging

* Add test scenarios
2020-05-18 22:02:39 -04:00
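A hedged sketch of the "gather all results" half of the distributed eval described above, not the library's exact code: each rank evaluates its own shard, then predictions are all-gathered so the main process can compute metrics on the full set.

```python
import torch
import torch.distributed as dist


def gather_predictions(local_preds: torch.Tensor) -> torch.Tensor:
    """Concatenate equally sized prediction tensors from every process.

    Assumes torch.distributed is already initialized and that every rank holds
    the same number of samples (a sequential distributed sampler pads the last
    shard so this holds).
    """
    buffers = [torch.empty_like(local_preds) for _ in range(dist.get_world_size())]
    dist.all_gather(buffers, local_preds)
    return torch.cat(buffers, dim=0)
```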
Funtowicz Morgan
b908f2e9dd
Attempt to unpin torch version for Github Action. (#4384) 2020-05-15 15:47:15 +02:00
Julien Chaumond
56e8ef632f
[ci] Restrict GPU tests to actual code commits 2020-05-11 20:40:41 -04:00
Julien Chaumond
ba6f6e44a8 [ci] Re-enable torch GPU tests 2020-05-12 00:05:36 +00:00
Julien Chaumond
97a375484c rm boto3 dependency 2020-04-27 11:17:14 -04:00
Julien Chaumond
d32585a304 Fix Torch.hub + Integration test 2020-04-21 14:13:30 -04:00
Julien Chaumond
88aecee6a2 [ci] GitHub-hosted runner has no space left on device 2020-04-17 20:16:00 -04:00
Julien Chaumond
a4c75f1492 [ci] last resort 2020-03-11 19:11:19 -04:00
Julien Chaumond
824e320d96 [ci] Fixup c6cf925 2020-03-11 18:52:10 -04:00
Julien Chaumond
c6cf925ff8 [ci] last resort
while looking for a fix to https://twitter.com/julien_c/status/1237864185821708291
2020-03-11 18:49:19 -04:00
Julien Chaumond
f169957d0c
TF GPU CI (#3085)
* debug env

* Restrict TF GPU memory

* Fixup

* One more test

* rm debug logs

* Fixup
2020-03-02 15:45:25 -05:00
Julien Chaumond
13afb71208 [ci] Ensure that TF does not preempt all GPU memory for itself
see https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth

Co-Authored-By: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
Co-Authored-By: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2020-03-02 11:56:45 -05:00
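The snippet below follows the TF guide linked above: enabling memory growth makes TensorFlow allocate GPU memory on demand instead of reserving it all up front, which is what lets the GPU tests coexist on a shared runner.

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    # Must be called before any GPU has been initialized.
    tf.config.experimental.set_memory_growth(gpu, True)
```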
Julien Chaumond
e36bd94345
[ci] Run all tests on (self-hosted) GPU (#3020)
* Create self-hosted.yml

* Update self-hosted.yml

* Update self-hosted.yml

* Update self-hosted.yml

* Update self-hosted.yml

* Update self-hosted.yml

* do not run slow tests, for now

* [ci] For comparison with circleci, let's also run CPU-tests

* [ci] reorganize

* clearer filenames

* [ci] Final tweaks before merging

* rm slow tests on circle ci

* Trigger CI

* On GPU this concurrency was way too high
2020-02-28 21:11:08 -05:00
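A hedged illustration of the final tweak above ("On GPU this concurrency was way too high"), assuming the concurrency in question is the pytest-xdist worker count: a worker count that is fine on CPU oversubscribes a single GPU, so the GPU job runs with far fewer workers.

```python
import sys

import pytest
import torch

# Requires pytest-xdist for the -n flag; the numbers are illustrative only.
workers = "1" if torch.cuda.is_available() else "8"
sys.exit(pytest.main(["-n", workers, "tests/"]))
```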