thomwolf
|
bd3b3aee9c
|
update
|
2019-01-28 17:47:29 +01:00 |
|
thomwolf
|
a45a9cc0e1
|
update tests
|
2019-01-28 17:16:02 +01:00 |
|
thomwolf
|
b12616fd8e
|
updating code organization to fix imports
|
2019-01-28 17:03:39 +01:00 |
|
thomwolf
|
d77dd62ff8
|
directly load from TF checkpoints + code cleanup
|
2019-01-28 16:50:23 +01:00 |
|
Matej Svejda
|
9c6a48c8c3
|
fix learning rate/fp16 and warmup problem for all examples
|
2019-01-27 14:07:24 +01:00 |
|
Matej Svejda
|
01ff4f82ba
|
learning rate problems in run_classifier.py
|
2019-01-22 23:40:06 +01:00 |
|
liangtaiwan
|
4eb2a49d41
|
Merge run_squad.py and run_squad2.py
|
2019-01-19 10:18:10 +08:00 |
|
Thomas Wolf
|
0a9d7c7edb
|
Merge pull request #201 from Liangtaiwan/squad2_save_bug
run_squad2: don't save the model if not training
|
2019-01-18 09:28:11 +01:00 |
|
liangtaiwan
|
be9fa192f0
|
don't save the model if not training
|
2019-01-18 00:41:55 +08:00 |
|
thomwolf
|
9c35c132fa
|
apex LayerNorm
|
2019-01-17 09:19:19 +01:00 |
|
thomwolf
|
b9c77b98d5
|
fix transposition in model conversion and memory initialization
|
2019-01-17 00:33:21 +01:00 |
|
Thomas Wolf
|
f040a43cb3
|
Merge pull request #199 from davidefiocco/patch-1
(very) minor update to README
|
2019-01-16 23:51:52 +01:00 |
|
Davide Fiocco
|
35115eaf93
|
(very) minor update to README
|
2019-01-16 21:05:24 +01:00 |
|
thomwolf
|
009101de12
|
fix loading bug and check full conversion of model
|
2019-01-16 12:16:20 +01:00 |
|
thomwolf
|
fea15cc9f5
|
update model conversion
|
2019-01-16 11:54:54 +01:00 |
|
thomwolf
|
a28dfc8659
|
fix eval for wt103
|
2019-01-16 11:18:19 +01:00 |
|
thomwolf
|
c03c12687f
|
fix __main__ entry script
|
2019-01-16 10:55:22 +01:00 |
|
thomwolf
|
8831c68803
|
fixing various parts of model conversion, loading and weight sharing
|
2019-01-16 10:31:16 +01:00 |
|
thomwolf
|
bcd4aa8fe0
|
update evaluation example
|
2019-01-15 23:32:34 +01:00 |
|
thomwolf
|
a69ec2c722
|
improved corpus and tokenization conversion - added evaluation script
|
2019-01-15 23:17:46 +01:00 |
|
thomwolf
|
7d03c53718
|
conversion working
|
2019-01-15 16:07:25 +01:00 |
|
thomwolf
|
3a9c88377f
|
adding Transformer XL
|
2019-01-15 12:59:38 +01:00 |
|
Thomas Wolf
|
647c983530
|
Merge pull request #193 from nhatchan/20190113_global_step
Fix importing unofficial TF models
|
2019-01-14 09:44:01 +01:00 |
|
Thomas Wolf
|
4e0cba1053
|
Merge pull request #191 from nhatchan/20190113_py35_finetune
lm_finetuning compatibility with Python 3.5
|
2019-01-14 09:40:07 +01:00 |
|
Thomas Wolf
|
c94455651e
|
Merge pull request #190 from nhatchan/20190113_finetune_doc
Fix documentation (missing backslashes)
|
2019-01-14 09:39:03 +01:00 |
|
Thomas Wolf
|
25eae7b0ae
|
Merge pull request #189 from donglixp/patch-1
[bug fix] args.do_lower_case is always True
|
2019-01-14 09:38:37 +01:00 |
|
nhatchan
|
cd30565aed
|
Fix importing unofficial TF models
Importing unofficial TF models seems to be working well, at least for me.
This PR resolves #50.
|
2019-01-14 13:35:40 +09:00 |
|
nhatchan
|
8edc898f63
|
Fix documentation (missing backslashes)
This PR adds the missing backslashes in the LM Fine-tuning subsection of README.md.
|
2019-01-13 21:23:19 +09:00 |
|
nhatchan
|
6c65cb2492
|
lm_finetuning compatibility with Python 3.5
dicts are not ordered in Python 3.5 and earlier, which is a cause of #175.
This PR replaces the dict with a list to preserve its order.
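A minimal illustration of the underlying issue, with hypothetical names rather than the actual lm_finetuning code: on CPython 3.5 and earlier a plain dict does not guarantee insertion order, so anything that depends on iteration order should use a list (or an OrderedDict) instead.
```python
from collections import OrderedDict

# Hypothetical example: document ids mapped to token counts.
# On Python <= 3.5, iterating over a plain dict may not follow insertion order.
doc_lengths = {"doc_a": 128, "doc_b": 64, "doc_c": 256}

# Order-preserving alternatives that behave the same on every Python version:
doc_lengths_list = [("doc_a", 128), ("doc_b", 64), ("doc_c", 256)]
doc_lengths_ordered = OrderedDict(doc_lengths_list)

for doc_id, length in doc_lengths_list:
    print(doc_id, length)  # always doc_a, doc_b, doc_c in this order
```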
|
2019-01-13 21:09:13 +09:00 |
|
Li Dong
|
a2da2b4109
|
[bug fix] args.do_lower_case is always True
The "default=True" makes args.do_lower_case always True.
```python
parser.add_argument("--do_lower_case",
                    default=True,
                    action='store_true')
```
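A minimal sketch of the fix, assuming standard argparse semantics (the help text is illustrative, not the repository's exact wording): with `action='store_true'` the flag already defaults to False, so dropping `default=True` makes `--do_lower_case` opt-in.
```python
import argparse

parser = argparse.ArgumentParser()
# action='store_true' already implies a default of False when the flag is omitted.
parser.add_argument("--do_lower_case",
                    action='store_true',
                    help="Set this flag if you are using an uncased model.")

print(parser.parse_args([]).do_lower_case)                   # False
print(parser.parse_args(["--do_lower_case"]).do_lower_case)  # True
```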
|
2019-01-13 19:51:11 +08:00 |
|
Thomas Wolf
|
35becc6d84
|
Merge pull request #182 from deepset-ai/fix_lowercase_and_saving
add do_lower_case arg and adjust model saving for lm finetuning.
|
2019-01-11 08:50:13 +01:00 |
|
tholor
|
506e5bb0c8
|
add do_lower_case arg and adjust model saving for lm finetuning.
|
2019-01-11 08:32:46 +01:00 |
|
Thomas Wolf
|
e485829a41
|
Merge pull request #174 from abeljim/master
Added Squad 2.0
|
2019-01-10 23:40:45 +01:00 |
|
Thomas Wolf
|
7e60205bd3
|
Merge pull request #179 from likejazz/patch-2
Fix it to run properly even without the `--do_train` param.
|
2019-01-10 23:39:10 +01:00 |
|
Sang-Kil Park
|
64326dccfb
|
Fix it to run properly even without the `--do_train` param.
It was modified similarly to `run_classifier.py` and fixed to run properly even without the `--do_train` param.
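A minimal sketch of the guard pattern, using a toy model and hypothetical flags rather than the actual run_squad.py code: only save a checkpoint when `--do_train` is passed, and load an existing one for prediction otherwise.
```python
import argparse
import os
import torch
from torch import nn

parser = argparse.ArgumentParser()
parser.add_argument("--do_train", action="store_true")
parser.add_argument("--do_predict", action="store_true")
parser.add_argument("--output_dir", default="out")
args = parser.parse_args()

os.makedirs(args.output_dir, exist_ok=True)
weights_path = os.path.join(args.output_dir, "pytorch_model.bin")

model = nn.Linear(4, 2)  # toy stand-in for the real model

if args.do_train:
    # ... training loop would go here ...
    torch.save(model.state_dict(), weights_path)
elif args.do_predict:
    # Without --do_train, reuse the previously saved checkpoint instead of
    # overwriting it with an untrained model.
    model.load_state_dict(torch.load(weights_path))

if args.do_predict:
    model.eval()
    # ... evaluation loop would go here ...
```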
|
2019-01-10 21:51:39 +09:00 |
|
thomwolf
|
e5c78c6684
|
update readme and fix a few typos
|
2019-01-10 01:40:00 +01:00 |
|
thomwolf
|
fa5222c296
|
update readme
|
2019-01-10 01:25:28 +01:00 |
|
Thomas Wolf
|
0dd5f55ac8
|
Merge pull request #172 from WrRan/never_split
Never split some texts.
|
2019-01-09 13:44:09 +01:00 |
|
Unknown
|
b3628f117e
|
Added Squad 2.0
|
2019-01-08 15:13:13 -08:00 |
|
thomwolf
|
ab90d4cddd
|
adding docs and example for OpenAI GPT
|
2019-01-09 00:12:43 +01:00 |
|
thomwolf
|
dc5df92fa8
|
added LM head for OpenAI
|
2019-01-08 17:18:47 +01:00 |
|
thomwolf
|
3cf12b235a
|
added tests + fixed losses
|
2019-01-08 16:24:23 +01:00 |
|
thomwolf
|
eed51c5bdf
|
add OpenAI GPT
|
2019-01-08 12:26:58 +01:00 |
|
WrRan
|
3f60a60eed
|
text in never_split should not be lowercased
|
2019-01-08 13:33:57 +08:00 |
|
WrRan
|
751beb9e73
|
never split some text
|
2019-01-08 10:54:51 +08:00 |
|
thomwolf
|
793dcd236b
|
Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT into fifth-release
|
2019-01-07 13:37:55 +01:00 |
|
thomwolf
|
2e4db64cab
|
add do_lower_case tokenizer loading option in run_squad and fine-tuning examples
|
2019-01-07 13:06:42 +01:00 |
|
thomwolf
|
c9fd350567
|
remove default when action is store_true in arguments
|
2019-01-07 13:01:54 +01:00 |
|
thomwolf
|
93f563b8a8
|
adding OpenAI GPT
|
2019-01-07 12:55:36 +01:00 |
|
Thomas Wolf
|
e048c7f1c8
|
Merge pull request #171 from donglixp/patch-1
LayerNorm initialization
|
2019-01-07 12:44:46 +01:00 |
|