Lysandre Debut
0d9328f2ef
Patch GPU failures ( #6281 )
...
* Pin to 1.5.0
* Patch XLM GPU test
2020-08-07 02:58:15 -04:00
Lysandre Debut
1d5c3a3d96
Test with --no-cache-dir ( #6235 )
2020-08-04 03:20:19 -04:00
Lysandre Debut
d740351f7d
Upgrade pip when doing CI ( #6234 )
...
* Upgrade pip when doing CI
* Don't forget Github CI
2020-08-04 02:37:12 -04:00
Sam Shleifer
31a5486e42
github issue template suggests who to tag ( #5790 )
...
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Teven <teven.lescao@gmail.com>
2020-07-28 08:41:27 -04:00
Sylvain Gugger
3996041d0a
Fix question template ( #6014 )
2020-07-24 10:04:25 -04:00
Sam Shleifer
ae67b2439f
[CI] Install examples/requirements.txt ( #5956 )
2020-07-21 21:07:48 -04:00
Sam Shleifer
ddd40b3211
[CI] self-scheduled runner tests examples/ ( #5927 )
2020-07-21 17:01:07 -04:00
Sam Shleifer
c3c61ea017
[Fix] GitHub Actions CI by reverting #5138 ( #5686 )
2020-07-13 17:12:18 -04:00
Sylvain Gugger
281e394889
Update question template ( #5585 )
2020-07-08 08:46:35 -04:00
Sam Shleifer
23231c0f78
[GH Runner] fix yaml indent ( #5412 )
2020-06-30 16:17:12 -04:00
Sam Shleifer
ac61114592
[CI] gh runner doesn't use -v, cats new result ( #5409 )
2020-06-30 16:12:14 -04:00
Sam Shleifer
80aa4b8aa6
[CI] GH-runner stores artifacts like CircleCI ( #5318 )
2020-06-30 15:01:53 -04:00
Julien Chaumond
365d452d4d
[ci] Slow GPU tests run daily ( #4465 )
2020-05-25 17:28:02 -04:00
Julien Chaumond
5e7fe8b585
Distributed eval: SequentialDistributedSampler + gather all results ( #4243 )
...
* Distributed eval: SequentialDistributedSampler + gather all results
* For consistency only write to disk from world_master
Close https://github.com/huggingface/transformers/issues/4272
* Working distributed eval
* Hook into scripts
* Fix #3721 again
* TPU.mesh_reduce: stay in tensor space
Thanks @jysohn23
* Just a small comment
* whitespace
* torch.hub: pip install packaging
* Add test scenarios
2020-05-18 22:02:39 -04:00
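A hedged sketch of the approach the entry above describes: give each rank a contiguous, padded shard of the evaluation set, run prediction locally, then all-gather the partial outputs on every process and trim the padding. Class and function names here are illustrative, it assumes an already-initialized torch.distributed process group, and it is not the Trainer code itself.

```python
import torch
import torch.distributed as dist
from torch.utils.data import Sampler


class SequentialShardSampler(Sampler):
    """Hypothetical stand-in: each rank reads a contiguous slice of the dataset,
    padded so that every rank sees the same number of samples."""

    def __init__(self, dataset, num_replicas, rank):
        self.dataset = dataset
        self.num_replicas = num_replicas
        self.rank = rank
        self.num_samples = -(-len(dataset) // num_replicas)  # ceil division
        self.total_size = self.num_samples * num_replicas

    def __iter__(self):
        indices = list(range(len(self.dataset)))
        indices += indices[: self.total_size - len(indices)]  # pad by wrapping around
        return iter(indices[self.rank * self.num_samples:(self.rank + 1) * self.num_samples])

    def __len__(self):
        return self.num_samples


def gather_all_predictions(local_preds, num_total_examples):
    """All-gather per-rank prediction tensors on every process, then drop the padding."""
    buffers = [torch.zeros_like(local_preds) for _ in range(dist.get_world_size())]
    dist.all_gather(buffers, local_preds)
    return torch.cat(buffers, dim=0)[:num_total_examples]
```

Because every rank ends up with the full, ordered result, only the world master needs to write anything to disk, which is the consistency point made in the commit body.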
Funtowicz Morgan
b908f2e9dd
Attempt to unpin torch version for GitHub Actions ( #4384 )
2020-05-15 15:47:15 +02:00
Julien Chaumond
56e8ef632f
[ci] Restrict GPU tests to actual code commits
2020-05-11 20:40:41 -04:00
Julien Chaumond
ba6f6e44a8
[ci] Re-enable torch GPU tests
2020-05-12 00:05:36 +00:00
Julien Chaumond
211e130811
[github] Issue templates: populate some labels
...
cc @bramvanroy @stefan-it
2020-04-28 20:34:34 -04:00
Julien Chaumond
97a375484c
rm boto3 dependency
2020-04-27 11:17:14 -04:00
Julien Chaumond
d32585a304
Fix Torch.hub + Integration test
2020-04-21 14:13:30 -04:00
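The Torch.hub path exercised by the integration test above looks roughly like the following; the hub repo name and entry-point names follow the published hubconf and should be treated as assumptions here, not as the test's exact code.

```python
import torch

# Load the tokenizer and model through torch.hub rather than a pip-installed package.
tokenizer = torch.hub.load("huggingface/pytorch-transformers", "tokenizer", "bert-base-uncased")
model = torch.hub.load("huggingface/pytorch-transformers", "model", "bert-base-uncased")

inputs = tokenizer.encode("Hello, Torch Hub!", return_tensors="pt")
with torch.no_grad():
    outputs = model(inputs)
print(outputs[0].shape)  # last hidden state: (batch, sequence_length, hidden_size)
```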
Julien Chaumond
88aecee6a2
[ci] GitHub-hosted runner has no space left on device
2020-04-17 20:16:00 -04:00
Teven
f8208fa456
Correct transformers-cli env call
2020-04-09 09:03:19 +02:00
Julien Chaumond
a4c75f1492
[ci] last resort
2020-03-11 19:11:19 -04:00
Julien Chaumond
824e320d96
[ci] Fixup c6cf925
2020-03-11 18:52:10 -04:00
Julien Chaumond
c6cf925ff8
[ci] last resort
...
while looking for a fix to https://twitter.com/julien_c/status/1237864185821708291
2020-03-11 18:49:19 -04:00
Julien Chaumond
f169957d0c
TF GPU CI ( #3085 )
...
* debug env
* Restrict TF GPU memory
* Fixup
* One more test
* rm debug logs
* Fixup
2020-03-02 15:45:25 -05:00
Julien Chaumond
13afb71208
[ci] Ensure that TF does not preempt all GPU memory for itself
...
see https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
Co-Authored-By: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
Co-Authored-By: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
2020-03-02 11:56:45 -05:00
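The linked TensorFlow guide's pattern for keeping TF from reserving the whole GPU on a shared runner looks roughly like this on a TF 2.x setup; a sketch of the documented tf.config API, not the exact CI change.

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all at start-up,
# so other jobs on the same self-hosted runner can still use the GPU.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    try:
        tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as err:
        # Memory growth must be configured before any GPU has been initialized.
        print(err)
```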
Julien Chaumond
e36bd94345
[ci] Run all tests on (self-hosted) GPU ( #3020 )
...
* Create self-hosted.yml
* Update self-hosted.yml
* Update self-hosted.yml
* Update self-hosted.yml
* Update self-hosted.yml
* Update self-hosted.yml
* do not run slow tests, for now
* [ci] For comparison with circleci, let's also run CPU-tests
* [ci] reorganize
* clearer filenames
* [ci] Final tweaks before merging
* rm slow tests on circle ci
* Trigger CI
* On GPU this concurrency was way too high
2020-02-28 21:11:08 -05:00
Bram Vanroy
9773e5e0d9
CLI script to gather environment info ( #2699 )
...
* add "info" command to CLI
As a convenience, add the info directive to the CLI. Running `python transformers-cli info` returns a string containing the transformers version, platform, Python version, PT/TF version, and GPU support
* Swap f-strings for .format
Still supporting 3.5 so can't use f-strings (sad face)
* Add reference in issue to CLI
* Add the expected fields to issue template
This way, people can still add the information manually if they want. (Though I fear they'll just ignore it.)
* Remove heading from output
* black-ify
* order of imports
Should ensure isort test passes
* use is_X_available over import..pass
* style
* fix copy-paste bug
* Rename command info -> env
Also adds the command to CONTRIBUTING.md in the "Did you find a bug" section
2020-02-01 10:38:14 -05:00
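A rough sketch of the environment report the command above gathers, using `.format` rather than f-strings to match the Python 3.5 constraint mentioned in the commit body. The real command uses the library's own availability checks; this standalone version, with a hypothetical `format_env_info` helper, falls back to guarded imports instead.

```python
import platform
import sys


def format_env_info():
    """Collect platform, Python, and framework versions plus GPU availability."""
    try:
        import torch
        pt_version = torch.__version__
        pt_gpu = torch.cuda.is_available()
    except ImportError:
        pt_version, pt_gpu = "not installed", False

    try:
        import tensorflow as tf
        tf_version = tf.__version__
    except ImportError:
        tf_version = "not installed"

    info = {
        "Platform": platform.platform(),
        "Python version": sys.version.split()[0],
        "PyTorch version (GPU?)": "{} ({})".format(pt_version, pt_gpu),
        "TensorFlow version": tf_version,
    }
    return "\n".join("- {}: {}".format(key, value) for key, value in info.items())


if __name__ == "__main__":
    print(format_env_info())
```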
BramVanroy
9d87eafd11
Streamlining
...
- mostly stylistic streamlining
- removed 'additional context' sections. They seem to be rarely used and might cause confusion. If more details are needed, users can add them to the 'details' section
2020-01-28 10:41:10 -05:00
BramVanroy
a3b3638f6f
phrasing
2020-01-28 10:41:10 -05:00
BramVanroy
c96ca70f25
Update ---new-benchmark.md
2020-01-28 10:41:10 -05:00
BramVanroy
7b5eda32bb
Update --new-model-addition.md
...
Motivate users to @-tag authors of models to increase visibility and expand the community
2020-01-28 10:41:10 -05:00
BramVanroy
c63d91dd1c
Update bug-report.md
...
- change references to pytorch-transformers to transformers
- link to code formatting guidelines
2020-01-28 10:41:10 -05:00
BramVanroy
b2907cd06e
Update feature-request.md
...
- add 'your contribution' section
- add code formatting link to 'additional context'
2020-01-28 10:41:10 -05:00
BramVanroy
2fec88ee02
Update question-help.md
...
Prefer that general questions are asked on Stack Overflow
2020-01-28 10:41:10 -05:00
BramVanroy
7e03d2bd7c
update migration guide
...
Streamlines usages of pytorch-transformers and pytorch-pretrained-bert. Adds a link to the README for the migration guide.
2020-01-28 10:41:10 -05:00
Julien Chaumond
c301faa92b
Distributed or parallel setup
2020-01-06 18:41:08 -05:00
alberduris
81d6841b4b
GPU text generation: moved the encoded_prompt to the correct device
2020-01-06 15:11:12 +01:00
alberduris
dd4df80f0b
Moved the encoded_prompts to the correct device
2020-01-06 15:11:12 +01:00
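Both entries above amount to the same one-line fix: the tokenizer returns CPU tensors, so the encoded prompt has to be moved onto the model's device before generation. A minimal sketch with an illustrative GPT-2 setup, not the run_generation script itself:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)

# Encoding happens on CPU; move the prompt to the model's device before generate().
encoded_prompt = tokenizer.encode("Once upon a time", return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

output_sequences = model.generate(input_ids=encoded_prompt, max_length=30)
print(tokenizer.decode(output_sequences[0]))
```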
Clement
a44f112fb9
add authors for models
2019-11-05 08:48:26 -05:00
Lysandre Debut
8efc0ec91a
Add Benchmarks to issue templates
2019-10-18 10:45:44 -04:00
Lysandre Debut
bb464289ce
New model addition issue template
2019-10-04 16:41:26 -04:00
thomwolf
31c23bd5ee
[BIG] pytorch-transformers => transformers
2019-09-26 10:15:53 +02:00
thomwolf
077ad693e9
tweak issue templates wordings
2019-08-05 16:46:29 +02:00
thomwolf
7c524d631e
add issue templates
2019-08-05 16:25:54 +02:00
thomwolf
724eb45cef
add stale bot
2019-04-11 17:12:00 +02:00