Commit Graph

410 Commits

Author SHA1 Message Date
tholor
4a450b25d5 removing unused argument eval_batch_size from LM finetuning #256 2019-02-07 10:06:38 +01:00
Thomas Wolf
58f0a2745c
Merge pull request #258 from BoeingX/master
Fix the undefined variable in squad example
2019-02-06 20:33:18 +01:00
Baoyang Song
7ac3311e48
Fix the undefined variable in squad example 2019-02-06 19:36:08 +01:00
thomwolf
822915142b fix docstring 2019-02-05 16:34:32 +01:00
Thomas Wolf
bd74632687
Merge pull request #251 from Iwontbecreative/active_loss_tok_classif
Only keep the active part of the loss for token classification
2019-02-05 16:33:45 +01:00
Thomas Wolf
fd223374f0
Merge pull request #208 from Liangtaiwan/mergesquad
Merge run_squad.py and run_squad2.py
2019-02-05 16:15:03 +01:00
thomwolf
d609ba24cb resolving merge conflicts 2019-02-05 16:14:25 +01:00
thomwolf
bde1eeebe0 rename 2019-02-05 16:11:22 +01:00
thomwolf
3ea3b00e59 merge squad example in single example 2019-02-05 16:10:27 +01:00
thomwolf
d8e3bdbb4c moved up to current master 2019-02-05 16:09:39 +01:00
Thomas Wolf
64ce900974
Merge pull request #248 from JoeDumoulin/squad1.1-fix
fix prediction on run-squad.py example
2019-02-05 16:00:51 +01:00
thomwolf
0ad9b239a1 gitignore 2019-02-05 15:43:11 +01:00
Thomas Wolf
e9e77cd3c4
Merge pull request #218 from matej-svejda/master
Fix learning rate problems in run_classifier.py
2019-02-05 15:40:44 +01:00
thomwolf
1579c53635 more explicit notation: num_train_step => num_train_optimization_steps 2019-02-05 15:36:33 +01:00
Thibault Fevry
f3bda2352a Only keep the active part of the loss for token classification 2019-02-04 11:46:36 -05:00
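A minimal sketch of the idea behind this change, with assumed tensor names (`logits`, `labels`, `attention_mask`) rather than the PR's exact code: compute the token-classification loss only over positions where the attention mask is 1, so padding tokens do not dilute it.

```python
from torch.nn import CrossEntropyLoss

def token_classification_loss(logits, labels, attention_mask, num_labels):
    """Cross-entropy loss computed only over active (non-padded) tokens."""
    loss_fct = CrossEntropyLoss()
    if attention_mask is not None:
        active = attention_mask.view(-1) == 1                 # positions of real tokens
        active_logits = logits.view(-1, num_labels)[active]
        active_labels = labels.view(-1)[active]
        return loss_fct(active_logits, active_labels)
    return loss_fct(logits.view(-1, num_labels), labels.view(-1))
```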
joe dumoulin
aa90e0c36a fix prediction on run-squad.py example 2019-02-01 10:15:44 -08:00
Thomas Wolf
8f8bbd4a4c
Merge pull request #244 from deepset-ai/prettify_lm_masking
Avoid confusion of inplace LM masking
2019-02-01 12:17:50 +01:00
Thomas Wolf
e2d53d95b0
Merge pull request #242 from ksurya/argparse
Fix argparse type error
2019-02-01 12:14:55 +01:00
Thomas Wolf
7e0b415ab4
Merge pull request #240 from girishponkiya/patch-1
Minor update in README
2019-02-01 12:14:05 +01:00
tholor
ce75b169bd avoid confusion of inplace masking of tokens_a / tokens_b 2019-01-31 11:42:06 +01:00
Surya Kasturi
9bf528877e
Update run_squad.py 2019-01-30 15:09:31 -05:00
Surya Kasturi
af2b78601b
Update run_squad2.py 2019-01-30 15:08:56 -05:00
Girishkumar
0dd2b750ca
Minor update in README
Update links to classes in `modeling.py`
2019-01-30 23:49:15 +05:30
Matej Svejda
5169069997 make examples consistent, revert error in num_train_steps calculation 2019-01-30 11:47:25 +01:00
Matej Svejda
9c6a48c8c3 fix learning rate/fp16 and warmup problem for all examples 2019-01-27 14:07:24 +01:00
Matej Svejda
01ff4f82ba learning rate problems in run_classifier.py 2019-01-22 23:40:06 +01:00
liangtaiwan
4eb2a49d41 Merge run_squad.py and run_squad2.py 2019-01-19 10:18:10 +08:00
Thomas Wolf
0a9d7c7edb
Merge pull request #201 from Liangtaiwan/squad2_save_bug
run_squad2: don't save the model if not training
2019-01-18 09:28:11 +01:00
liangtaiwan
be9fa192f0 don't save the model if not training 2019-01-18 00:41:55 +08:00
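A hedged sketch of the guard this fix implies (the argument and file names are assumptions, not the example's actual code): only write fine-tuned weights to disk when the run actually trained, so an eval-only run does not overwrite the checkpoint.

```python
import torch

def maybe_save(model, args, output_model_file):
    # Persist weights only if this run fine-tuned the model;
    # an eval-only run should not clobber the existing checkpoint.
    if args.do_train:
        model_to_save = model.module if hasattr(model, "module") else model  # unwrap DataParallel
        torch.save(model_to_save.state_dict(), output_model_file)
```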
Thomas Wolf
f040a43cb3
Merge pull request #199 from davidefiocco/patch-1
(very) minor update to README
2019-01-16 23:51:52 +01:00
Davide Fiocco
35115eaf93
(very) minor update to README 2019-01-16 21:05:24 +01:00
Thomas Wolf
647c983530
Merge pull request #193 from nhatchan/20190113_global_step
Fix importing unofficial TF models
2019-01-14 09:44:01 +01:00
Thomas Wolf
4e0cba1053
Merge pull request #191 from nhatchan/20190113_py35_finetune
lm_finetuning compatibility with Python 3.5
2019-01-14 09:40:07 +01:00
Thomas Wolf
c94455651e
Merge pull request #190 from nhatchan/20190113_finetune_doc
Fix documentation (missing backslashes)
2019-01-14 09:39:03 +01:00
Thomas Wolf
25eae7b0ae
Merge pull request #189 from donglixp/patch-1
[bug fix] args.do_lower_case is always True
2019-01-14 09:38:37 +01:00
nhatchan
cd30565aed Fix importing unofficial TF models
Importing unofficial TF models seems to be working well, at least for me.
This PR resolves #50.
2019-01-14 13:35:40 +09:00
nhatchan
8edc898f63 Fix documentation (missing backslashes)
This PR adds missing backslashes to the LM Fine-tuning subsection of README.md.
2019-01-13 21:23:19 +09:00
nhatchan
6c65cb2492 lm_finetuning compatibility with Python 3.5
dicts are not ordered in Python 3.5 or earlier, which is the cause of #175.
This PR replaces the dict with a list to keep its order.
2019-01-13 21:09:13 +09:00
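A small illustration of the underlying issue (illustrative only, not the PR's code): on CPython 3.5 a plain dict does not guarantee insertion order, so data that must stay ordered is safer as a list of pairs or an `OrderedDict`.

```python
from collections import OrderedDict

# On Python 3.5, iterating a plain dict may not follow insertion order.
plain = {"first": 1, "second": 2, "third": 3}

# A list of pairs (or an OrderedDict) preserves the intended order on every version.
ordered_pairs = [("first", 1), ("second", 2), ("third", 3)]
also_ordered = OrderedDict(ordered_pairs)

for key, value in ordered_pairs:
    print(key, value)
```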
Li Dong
a2da2b4109
[bug fix] args.do_lower_case is always True
The "default=True" makes args.do_lower_case always True.

```python
# Bug: action='store_true' already implies a default of False;
# forcing default=True means args.do_lower_case can never be False.
parser.add_argument("--do_lower_case",
                    default=True,
                    action='store_true')
```
2019-01-13 19:51:11 +08:00
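The usual correction, and what the later commit "remove default when action is store_true in arguments" points to, is to drop the explicit default so the flag can actually be False. A sketch, not the exact patch:

```python
import argparse

parser = argparse.ArgumentParser()
# action='store_true' already implies default=False, so the flag is
# False unless --do_lower_case is passed on the command line.
parser.add_argument("--do_lower_case", action='store_true')

args = parser.parse_args([])                     # no flag -> False
assert args.do_lower_case is False
args = parser.parse_args(["--do_lower_case"])    # flag given -> True
assert args.do_lower_case is True
```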
Thomas Wolf
35becc6d84
Merge pull request #182 from deepset-ai/fix_lowercase_and_saving
add do_lower_case arg and adjust model saving for lm finetuning.
2019-01-11 08:50:13 +01:00
tholor
506e5bb0c8 add do_lower_case arg and adjust model saving for lm finetuning. 2019-01-11 08:32:46 +01:00
Thomas Wolf
e485829a41
Merge pull request #174 from abeljim/master
Added Squad 2.0
2019-01-10 23:40:45 +01:00
Thomas Wolf
7e60205bd3
Merge pull request #179 from likejazz/patch-2
Fix it to run properly even without the `--do_train` param.
2019-01-10 23:39:10 +01:00
Sang-Kil Park
64326dccfb
Fix it to run properly even without the --do_train param.
It was modified similarly to `run_classifier.py` and fixed to run properly even without the `--do_train` param.
2019-01-10 21:51:39 +09:00
Thomas Wolf
0dd5f55ac8
Merge pull request #172 from WrRan/never_split
Never split some texts.
2019-01-09 13:44:09 +01:00
Unknown
b3628f117e Added Squad 2.0 2019-01-08 15:13:13 -08:00
WrRan
3f60a60eed text in never_split should not be lowercased 2019-01-08 13:33:57 +08:00
WrRan
751beb9e73 never split some text 2019-01-08 10:54:51 +08:00
thomwolf
2e4db64cab add do_lower_case tokenizer loading option in run_squad and fine_tuning examples 2019-01-07 13:06:42 +01:00
thomwolf
c9fd350567 remove default when action is store_true in arguments 2019-01-07 13:01:54 +01:00