Commit Graph

15053 Commits

Author SHA1 Message Date
thomwolf
58830807d1 indicate we only support pytorch 1.0.0+ now 2019-08-05 14:38:59 +02:00
thomwolf
328afb7097 cleaning up tokenizer tests structure (at last) - last remaining ppb refs 2019-08-05 14:08:56 +02:00
Thomas Wolf
0e918707dc
Merge pull request #907 from dhpollack/fix_convert_to_tf
Fix convert to tf
2019-08-05 12:55:04 +02:00
Julien Chaumond
cb9db101c7 Python 2 must DIE 2019-08-04 22:04:15 -04:00
Julien Chaumond
05c083520a [RoBERTa] model conversion, inference, tests 🔥 2019-08-04 21:39:21 -04:00
雷打不动!
d7fd10568c
Update modeling_bert.py 2019-08-05 08:58:19 +08:00
雷打不动!
84eb699082
Update modeling_xlnet.py 2019-08-05 08:57:09 +08:00
thomwolf
00132b7a7a updating docs - adding few tests to tokenizers 2019-08-04 22:42:55 +02:00
Ethan Perez
28ba345ecc
Fixing unused weight_decay argument
Currently the L2 regularization is hard-coded to "0.01", even though there is a --weight_decay flag implemented (that is unused). I'm making this flag control the weight decay used for fine-tuning in this script.
2019-08-04 12:31:46 -04:00
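A minimal sketch of the change this commit describes, assuming the grouped-parameter setup common to the example fine-tuning scripts (the stand-in `model` and `args` are assumptions, not the script's real objects):

```python
import argparse
import torch
from pytorch_transformers import AdamW

# Stand-ins for the sketch; the real script builds these from argparse/model loading.
model = torch.nn.Linear(10, 2)
args = argparse.Namespace(weight_decay=0.01, learning_rate=5e-5)

# Route the (previously unused) --weight_decay flag into the optimizer
# instead of the hard-coded 0.01.
no_decay = ['bias', 'LayerNorm.weight']
grouped_parameters = [
    {'params': [p for n, p in model.named_parameters()
                if not any(nd in n for nd in no_decay)],
     'weight_decay': args.weight_decay},  # was hard-coded to 0.01
    {'params': [p for n, p in model.named_parameters()
                if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0},  # biases and LayerNorm stay undecayed
]
optimizer = AdamW(grouped_parameters, lr=args.learning_rate)
```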
thomwolf
009273dbdd big doc update [WIP] 2019-08-04 12:14:57 +02:00
Saket Khandelwal
836e513698 Fixed small typo 2019-08-04 16:05:10 +10:00
wangfei
a24f830604 Fix comment typo 2019-08-03 12:17:06 +08:00
Julien Chaumond
44dd941efb link to swift-coreml-transformers 2019-08-01 09:50:30 -04:00
Anthony MOI
f2a3eb987e
Fix small typos 2019-07-31 11:05:06 -04:00
Pierric Cistac
97091acb8c
Small spelling fix 2019-07-31 10:37:56 -04:00
Grégory Châtel
769bb643ce Fixing a broken link. 2019-07-31 10:22:41 -04:00
David Pollack
c90119e543 spelling mistake 2019-07-29 16:56:02 +02:00
thomwolf
bfbe52ec39 cleaning up example docstrings 2019-07-27 20:25:39 +02:00
thomwolf
4cc1bf81ee typos 2019-07-27 12:08:21 +02:00
thomwolf
ac27548b25 fix unk_token test 2019-07-27 11:50:47 +02:00
thomwolf
c717d38573 dictionnary => dictionary 2019-07-26 23:30:48 +02:00
Thomas Wolf
6b763d04a9
Merge pull request #911 from huggingface/small_fixes
Small fixes
2019-07-26 21:36:21 +02:00
thomwolf
7b6e474c9a fix #901 2019-07-26 21:26:44 +02:00
thomwolf
632d711411 fix #908 2019-07-26 21:14:37 +02:00
Thomas Wolf
c054b5ee64
Merge pull request #896 from zijunsun/master
fix multi-gpu training bug when using fp16
2019-07-26 19:31:02 +02:00
thomwolf
27b0f86d36 clean up pretrained 2019-07-26 17:09:21 +02:00
thomwolf
57e54ec070 add unk_token to gpt2 2019-07-26 17:09:07 +02:00
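For the GPT-2 unk_token above, a hedged usage sketch; reusing '<|endoftext|>' as the value is an assumption based on GPT-2 having a single special token:

```python
from pytorch_transformers import GPT2Tokenizer

# GPT-2's byte-level BPE can encode any string, so the unk_token is mostly
# a formality; the value passed here is assumed, not taken from the commit.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', unk_token='<|endoftext|>')
```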
thomwolf
ac42049c08 add auto models and auto tokenizer 2019-07-26 17:08:59 +02:00
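The auto classes added here resolve the concrete architecture from the checkpoint name; a brief usage sketch:

```python
from pytorch_transformers import AutoModel, AutoTokenizer

# The Auto* classes pick the matching concrete class (here, BERT)
# from the pretrained model name or path.
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')
```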
David Pollack
09ecf225e9 fixed the fix. tf session madness. 2019-07-26 15:20:44 +02:00
David Pollack
edfd965ac8 fix convert_to_tf 2019-07-26 14:13:46 +02:00
zijunsun
f0aeb7a814 multi-gpu training also should be after apex fp16 (squad) 2019-07-26 15:23:29 +08:00
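This fix and its companion adb3ef6368 further down apply the same ordering pattern; a minimal sketch, assuming NVIDIA apex is installed and using a stand-in model:

```python
import torch
from apex import amp  # NVIDIA apex, assumed installed

model = torch.nn.Linear(10, 2).cuda()  # stand-in model for the sketch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# amp must patch the model/optimizer for fp16 *before* the model is wrapped
# in DataParallel; wrapping first is the multi-GPU bug these commits fix.
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
```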
Thomas Wolf
46cc9dd2b5
Merge pull request #899 from sukuya/master
Fixed import to use torchscript flag.
2019-07-25 15:03:21 +02:00
Thomas Wolf
6219ad7216
Merge pull request #888 from rococode/patch-1
Update docs for parameter rename
2019-07-25 15:01:22 +02:00
Thomas Wolf
0b6122e96a
Merge pull request #882 from Liangtaiwan/squad_v1_bug
fix squad v1 error (na_prob_file should be None)
2019-07-25 14:59:59 +02:00
Thomas Wolf
c244562cae
Merge pull request #893 from joelgrus/patch-2
make save_pretrained do the right thing with added tokens
2019-07-25 14:58:48 +02:00
Sukuya
e1e2ab3482
Merge pull request #1 from sukuya/sukuya-patch-1
Update torchscript.rst
2019-07-25 16:53:11 +08:00
Sukuya
35c52f2f3c
Update torchscript.rst
Import fixed to use pytorch_transformers; otherwise the torchscript flag can't be used.
2019-07-25 16:51:11 +08:00
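A hedged sketch of what the corrected doc snippet implies: the torchscript flag lives on the pytorch_transformers config, so the import must target that package rather than the old pytorch_pretrained_bert (the dummy input is an assumption):

```python
import torch
from pytorch_transformers import BertConfig, BertModel  # not pytorch_pretrained_bert

# torchscript=True configures the model so it can be traced.
config = BertConfig(torchscript=True)
model = BertModel(config)
model.eval()

dummy_input = torch.tensor([[1, 2, 3, 4, 5]])  # assumed toy token ids
traced_model = torch.jit.trace(model, dummy_input)
```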
zijunsun
adb3ef6368 multi-gpu training also should be after apex fp16 2019-07-25 13:09:10 +08:00
Joel Grus
ae152cec09
make save_pretrained work with added tokens
Right now it's dumping the *decoder* when it should be dumping the *encoder*. This fixes that.
2019-07-24 16:54:48 -07:00
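A hedged sketch of the bug: the tokenizer keeps `added_tokens_encoder` (token → id) and `added_tokens_decoder` (id → token), and saving must serialize the former. Attribute and file names follow the tokenizer base class of the time and should be treated as assumptions:

```python
import json
from pytorch_transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['[NEW1]', '[NEW2]'])

# The buggy version dumped added_tokens_decoder (id -> token); the fix dumps
# added_tokens_encoder (token -> id), which from_pretrained can reload.
with open('added_tokens.json', 'w', encoding='utf-8') as f:
    f.write(json.dumps(tokenizer.added_tokens_encoder, ensure_ascii=False))
```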
rococo // Ron
66b15f73f0
Update docs for parameter rename
OpenAIGPTLMHeadModel now accepts `labels` instead of `lm_labels`
2019-07-24 11:27:08 -07:00
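Usage after the rename, as a short sketch:

```python
import torch
from pytorch_transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
outputs = model(input_ids, labels=input_ids)  # formerly lm_labels
loss = outputs[0]  # the LM loss comes first when labels are given
```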
Chi-Liang Liu
a7fce6d917 fix squad v1 error (na_prob_file should be None) 2019-07-24 16:11:36 +08:00
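A hedged sketch of the guard this fix adds; variable names follow run_squad's style and are assumptions:

```python
import argparse
import os

args = argparse.Namespace(version_2_with_negative=False,  # SQuAD v1
                          output_dir='out')

# SQuAD v1 has no unanswerable questions, so no null-odds file should be
# written; always passing a path sent v1 evaluation down the v2 code path.
if args.version_2_with_negative:
    output_null_log_odds_file = os.path.join(args.output_dir, 'null_odds.json')
else:
    output_null_log_odds_file = None
```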
Thomas Wolf
067923d326
Merge pull request #873 from huggingface/identity_replacement
Add nn.Identity replacement for old PyTorch
2019-07-23 18:16:35 +02:00
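The merged shim follows a standard compatibility pattern; a minimal sketch, assuming the usual try/except import guard:

```python
import torch.nn as nn

try:
    from torch.nn import Identity
except ImportError:
    # nn.Identity only exists in PyTorch >= 1.1; provide a drop-in for 1.0.
    class Identity(nn.Module):
        """An argument-insensitive identity operator."""
        def __init__(self, *args, **kwargs):
            super(Identity, self).__init__()

        def forward(self, input):
            return input
```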
Thomas Wolf
368670ac31
Merge pull request #866 from xanlsh/master
Rework how PreTrainedModel.from_pretrained handles its arguments
2019-07-23 18:05:30 +02:00
thomwolf
1383c7b87a Fix #869 2019-07-23 17:52:20 +02:00
thomwolf
6070b55443 fix #868 2019-07-23 17:46:01 +02:00
thomwolf
2c9a3115b7 fix #858 2019-07-23 16:45:55 +02:00
Anish Moorthy
4fb56c7729 Remove unused *args parameter from PreTrainedConfig.from_pretrained 2019-07-23 10:43:01 -04:00
Anish Moorthy
e179c55490 Add docs for from_pretrained functions, rename return_unused_args 2019-07-23 10:43:01 -04:00
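A sketch of the renamed flag in use; the final name `return_unused_kwargs` is taken from the library around this time:

```python
from pytorch_transformers import BertConfig

# Ask from_pretrained to hand back any kwargs it did not consume.
config, unused_kwargs = BertConfig.from_pretrained(
    'bert-base-uncased', output_attentions=True, foo=False,
    return_unused_kwargs=True)
assert config.output_attentions is True
assert unused_kwargs == {'foo': False}  # 'foo' is not a config attribute
```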
Thomas Wolf
fec76a481d
Update readme 2019-07-23 16:05:29 +02:00
Thomas Wolf
859c441776
Merge pull request #872 from huggingface/saving_schedules
Updating schedules for state_dict saving/loading
2019-07-23 16:03:06 +02:00
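Since this change makes the schedules standard PyTorch LR schedulers, their state can round-trip through a checkpoint; a minimal sketch with a stand-in model and optimizer:

```python
import torch
from pytorch_transformers import AdamW, WarmupLinearSchedule

model = torch.nn.Linear(10, 2)  # stand-in model for the sketch
optimizer = AdamW(model.parameters(), lr=1e-5)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=1000)

# The schedules are now LambdaLR subclasses, so their state survives
# state_dict()/load_state_dict() across a save/restore cycle.
torch.save(scheduler.state_dict(), 'scheduler.pt')
scheduler.load_state_dict(torch.load('scheduler.pt'))
```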