| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| thomwolf | 3ea3b00e59 | merge squad example in single example | 2019-02-05 16:10:27 +01:00 |
| thomwolf | d8e3bdbb4c | moved up to current master | 2019-02-05 16:09:39 +01:00 |
| Thomas Wolf | 64ce900974 | Merge pull request #248 from JoeDumoulin/squad1.1-fix: fix prediction on run-squad.py example | 2019-02-05 16:00:51 +01:00 |
| thomwolf | 0ad9b239a1 | gitignore | 2019-02-05 15:43:11 +01:00 |
| Thomas Wolf | e9e77cd3c4 | Merge pull request #218 from matej-svejda/master: Fix learning rate problems in run_classifier.py | 2019-02-05 15:40:44 +01:00 |
| thomwolf | 1579c53635 | more explicit notation: num_train_step => num_train_optimization_steps | 2019-02-05 15:36:33 +01:00 |
| Thibault Fevry | f3bda2352a | Only keep the active part mof the loss for token classification | 2019-02-04 11:46:36 -05:00 |
| thomwolf | 6179f537a3 | clean up tokenization spaces | 2019-02-04 17:41:22 +01:00 |
| thomwolf | 850da1cc36 | strip decoded outputs | 2019-02-04 17:35:05 +01:00 |
| thomwolf | 01a3966bc6 | more options on special tokens | 2019-02-04 17:26:25 +01:00 |
| thomwolf | 05f961840b | logging | 2019-02-04 13:06:19 +01:00 |
| joe dumoulin | aa90e0c36a | fix prediction on run-squad.py example | 2019-02-01 10:15:44 -08:00 |
| Thomas Wolf | 8f8bbd4a4c | Merge pull request #244 from deepset-ai/prettify_lm_masking: Avoid confusion of inplace LM masking | 2019-02-01 12:17:50 +01:00 |
| Thomas Wolf | e2d53d95b0 | Merge pull request #242 from ksurya/argparse: Fix argparse type error | 2019-02-01 12:14:55 +01:00 |
| Thomas Wolf | 7e0b415ab4 | Merge pull request #240 from girishponkiya/patch-1: Minor update in README | 2019-02-01 12:14:05 +01:00 |
| tholor | ce75b169bd | avoid confusion of inplace masking of tokens_a / tokens_b | 2019-01-31 11:42:06 +01:00 |
| Surya Kasturi | 9bf528877e | Update run_squad.py | 2019-01-30 15:09:31 -05:00 |
| Surya Kasturi | af2b78601b | Update run_squad2.py | 2019-01-30 15:08:56 -05:00 |
| Girishkumar | 0dd2b750ca | Minor update in README: Update links to classes in `modeling.py` | 2019-01-30 23:49:15 +05:30 |
| Matej Svejda | 5169069997 | make examples consistent, revert error in num_train_steps calculation | 2019-01-30 11:47:25 +01:00 |
| thomwolf | 3a848111e6 | update config, docstrings and readme to switch to seperated tokens and position embeddings | 2019-01-29 11:00:11 +01:00 |
| thomwolf | 98c96fb1a7 | splitting position and tokens embeddings in OpenAI GPT - updating tf imports - tests | 2019-01-29 10:31:42 +01:00 |
| thomwolf | 5456d82311 | more versatile model loading | 2019-01-29 09:54:18 +01:00 |
| thomwolf | 9b2540b5a7 | update __init__ | 2019-01-29 09:54:08 +01:00 |
| thomwolf | bd3b3aee9c | update | 2019-01-28 17:47:29 +01:00 |
| thomwolf | a45a9cc0e1 | update tests | 2019-01-28 17:16:02 +01:00 |
| thomwolf | b12616fd8e | updating code organization to fix imports | 2019-01-28 17:03:39 +01:00 |
| thomwolf | d77dd62ff8 | directly load from TF checkpoints + code cleanup | 2019-01-28 16:50:23 +01:00 |
| Matej Svejda | 9c6a48c8c3 | fix learning rate/fp16 and warmup problem for all examples | 2019-01-27 14:07:24 +01:00 |
| Matej Svejda | 01ff4f82ba | learning rate problems in run_classifier.py | 2019-01-22 23:40:06 +01:00 |
| liangtaiwan | 4eb2a49d41 | Merge run_squad.py and run_squad2.py | 2019-01-19 10:18:10 +08:00 |
| Thomas Wolf | 0a9d7c7edb | Merge pull request #201 from Liangtaiwan/squad2_save_bug: run_squad2 Don't save model if do not train | 2019-01-18 09:28:11 +01:00 |
| liangtaiwan | be9fa192f0 | don't save if do not train | 2019-01-18 00:41:55 +08:00 |
| thomwolf | 9c35c132fa | apex LayerNorm | 2019-01-17 09:19:19 +01:00 |
| thomwolf | b9c77b98d5 | fix transposition in model conversion and memory initialization | 2019-01-17 00:33:21 +01:00 |
| Thomas Wolf | f040a43cb3 | Merge pull request #199 from davidefiocco/patch-1: (very) minor update to README | 2019-01-16 23:51:52 +01:00 |
| Davide Fiocco | 35115eaf93 | (very) minor update to README | 2019-01-16 21:05:24 +01:00 |
| thomwolf | 009101de12 | fix loading bug and check full conversion of model | 2019-01-16 12:16:20 +01:00 |
| thomwolf | fea15cc9f5 | update model conversion | 2019-01-16 11:54:54 +01:00 |
| thomwolf | a28dfc8659 | fix eval for wt103 | 2019-01-16 11:18:19 +01:00 |
| thomwolf | c03c12687f | fix __main__ entry script | 2019-01-16 10:55:22 +01:00 |
| thomwolf | 8831c68803 | fixing various parts of model conversion, loading and weights sharing | 2019-01-16 10:31:16 +01:00 |
| thomwolf | bcd4aa8fe0 | update evaluation example | 2019-01-15 23:32:34 +01:00 |
| thomwolf | a69ec2c722 | improved corpus and tokenization conversion - added evaluation script | 2019-01-15 23:17:46 +01:00 |
| thomwolf | 7d03c53718 | conversion working | 2019-01-15 16:07:25 +01:00 |
| thomwolf | 3a9c88377f | adding Transformer XL | 2019-01-15 12:59:38 +01:00 |
| Thomas Wolf | 647c983530 | Merge pull request #193 from nhatchan/20190113_global_step: Fix importing unofficial TF models | 2019-01-14 09:44:01 +01:00 |
| Thomas Wolf | 4e0cba1053 | Merge pull request #191 from nhatchan/20190113_py35_finetune: lm_finetuning compatibility with Python 3.5 | 2019-01-14 09:40:07 +01:00 |
| Thomas Wolf | c94455651e | Merge pull request #190 from nhatchan/20190113_finetune_doc: Fix documentation (missing backslashes) | 2019-01-14 09:39:03 +01:00 |
| Thomas Wolf | 25eae7b0ae | Merge pull request #189 from donglixp/patch-1: [bug fix] args.do_lower_case is always True | 2019-01-14 09:38:37 +01:00 |