thomwolf
2b56e98892
standardizing API across models - XLNetForSeqClass working
2019-06-28 16:35:09 +02:00
thomwolf
3a00674cbf
fix imports
2019-06-27 17:18:46 +02:00
Mayhul Arora
08ff056c43
Added option to use multiple workers to create training data for lm fine tuning
2019-06-26 16:16:12 -07:00
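For reference, multi-worker training-data creation can look roughly like the sketch below; `build_instances` and the shard layout are illustrative stand-ins, not the example script's actual API:

```python
from multiprocessing import Pool

def build_instances(shard):
    # Placeholder work: convert one shard of documents into
    # training instances for LM fine-tuning.
    return [doc.split() for doc in shard]

if __name__ == "__main__":
    shards = [["first doc", "second doc"], ["third doc"]]
    # Fan shards out to worker processes; one worker per shard here.
    with Pool(processes=2) as pool:
        instances = pool.map(build_instances, shards)
    print(instances)
```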
thomwolf
59cefd4f98
fix #726 - get_lr in examples
2019-06-26 11:28:27 +02:00
thomwolf
092dacfd62
changing is_regression to unified API
2019-06-26 09:54:05 +02:00
thomwolf
e55d4c4ede
various updates to conversion, models and examples
2019-06-26 00:57:53 +02:00
thomwolf
7334bf6c21
pad on left for xlnet
2019-06-24 15:05:11 +02:00
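Left padding keeps the real tokens at the end of the sequence, which suits XLNet's architecture. A minimal sketch of the idea (the helper name and pad id are illustrative, not the repo's actual code):

```python
def pad_on_left(token_ids, max_len, pad_id=0):
    # Prepend pad ids so the sequence ends with the real tokens.
    return [pad_id] * (max_len - len(token_ids)) + token_ids

assert pad_on_left([5, 6, 7], 5, pad_id=0) == [0, 0, 5, 6, 7]
```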
thomwolf
c888663f18
overwrite output directories if needed
2019-06-24 14:38:24 +02:00
thomwolf
62d78aa37e
updating GLUE utils for compatibility with XLNet
2019-06-24 14:36:11 +02:00
thomwolf
24ed0b9346
updating run_xlnet_classifier
2019-06-24 12:00:09 +02:00
thomwolf
f6081f2255
add XLNetForSequenceClassification and run_classifier example for XLNet
2019-06-24 10:01:07 +02:00
Rocketknight1
c7b2808ed7
Update LM finetuning README to include a literature reference
2019-06-22 15:04:01 +01:00
thomwolf
181075635d
updating model loading and adding special tokens ids
2019-06-21 23:23:37 +02:00
thomwolf
ebd2cb8d74
update from_pretrained to load XLNetModel as well
2019-06-21 21:08:44 +02:00
thomwolf
edfe91c36e
first version of bertology ok
2019-06-19 23:43:04 +02:00
thomwolf
7766ce66dd
update bertology
2019-06-19 22:29:51 +02:00
thomwolf
e4b46d86ce
update head pruning
2019-06-19 22:16:30 +02:00
thomwolf
0f40e8d6a6
debugger
2019-06-19 15:38:46 +02:00
thomwolf
0e1e8128bf
more logging
2019-06-19 15:35:49 +02:00
thomwolf
909d4f1af2
cuda again
2019-06-19 15:32:10 +02:00
thomwolf
14f0e8e557
fix cuda
2019-06-19 15:29:28 +02:00
thomwolf
34d706a0e1
pruning in bertology
2019-06-19 15:25:49 +02:00
thomwolf
dc8e0019b7
updating examples
2019-06-19 13:23:20 +02:00
thomwolf
68ab9599ce
small fix and updates to readme
2019-06-19 09:38:38 +02:00
thomwolf
f7e2ac01ea
update barrier
2019-06-18 22:43:35 +02:00
thomwolf
4d8c4337ae
test barrier in distrib training
2019-06-18 22:41:28 +02:00
thomwolf
3359955622
updating run_classifier
2019-06-18 22:23:10 +02:00
thomwolf
29b7b30eaa
updating evaluation on a single gpu
2019-06-18 22:20:21 +02:00
thomwolf
7d2001aa44
overwrite_output_dir
2019-06-18 22:13:30 +02:00
thomwolf
16a1f338c4
fixing
2019-06-18 17:06:31 +02:00
thomwolf
92e0ad5aba
no numpy
2019-06-18 17:00:52 +02:00
thomwolf
4e6edc3274
hop
2019-06-18 16:57:15 +02:00
thomwolf
f55b60b9ee
fixing again
2019-06-18 16:56:52 +02:00
thomwolf
8bd9118294
quick fix
2019-06-18 16:54:41 +02:00
thomwolf
3e847449ad
fix out_label_ids
2019-06-18 16:53:31 +02:00
thomwolf
aad3a54e9c
fix paths
2019-06-18 16:48:04 +02:00
thomwolf
40dbda6871
updating classification example
2019-06-18 16:45:52 +02:00
thomwolf
7388c83b60
update run_classifier for distributed eval
2019-06-18 16:32:49 +02:00
thomwolf
9727723243
fix pickle
2019-06-18 16:02:42 +02:00
thomwolf
9710b68dbc
fix pickles
2019-06-18 16:01:15 +02:00
thomwolf
15ebd67d4e
cache in run_classifier + various fixes to the examples
2019-06-18 15:58:22 +02:00
thomwolf
e6e5f19257
fix
2019-06-18 14:45:14 +02:00
thomwolf
a432b3d466
distributed training t_total
2019-06-18 14:39:09 +02:00
thomwolf
c5407f343f
split squad example in two
2019-06-18 14:29:03 +02:00
thomwolf
335f57baf8
only on main process
2019-06-18 14:03:46 +02:00
thomwolf
326944d627
add tensorboard to run_squad
2019-06-18 14:02:42 +02:00
thomwolf
d82e5deeb1
set find_unused_parameters=True in DDP
2019-06-18 12:13:14 +02:00
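Wrapping a model with that flag looks roughly like the sketch below; it assumes `torch.distributed.init_process_group` has already been called and that `local_rank` identifies this process's GPU:

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

local_rank = 0  # in a real launch this would come from --local_rank
model = nn.Linear(10, 2).to(local_rank)
# find_unused_parameters=True lets DDP tolerate parameters that receive
# no gradient in a given forward pass (e.g. an inactive head).
model = DDP(model, device_ids=[local_rank], output_device=local_rank,
            find_unused_parameters=True)
```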
thomwolf
a59abedfb5
DDP update
2019-06-18 12:06:26 +02:00
thomwolf
2ef5e0de87
switch to pytorch DistributedDataParallel
2019-06-18 12:03:13 +02:00
thomwolf
9ce37af99b
oops
2019-06-18 11:47:54 +02:00
thomwolf
a40955f071
no need to duplicate models anymore
2019-06-18 11:46:14 +02:00
thomwolf
382e2d1e50
splitting config and weight files for BERT as well
2019-06-18 10:37:16 +02:00
Thomas Wolf
cad88e19de
Merge pull request #672 from oliverguhr/master
Add vocabulary and model config to the finetune output
2019-06-14 17:02:47 +02:00
Thomas Wolf
460d9afd45
Merge pull request #640 from Barqawiz/master
Support latest multilingual BERT fine-tuning
2019-06-14 16:57:02 +02:00
Thomas Wolf
277c77f1c5
Merge pull request #630 from tguens/master
Update run_squad.py
2019-06-14 16:56:26 +02:00
Thomas Wolf
659af2cbd0
Merge pull request #604 from samuelbroscheit/master
Fixing issue "Training beyond specified 't_total' steps with schedule 'warmup_linear'" reported in #556
2019-06-14 16:49:24 +02:00
Meet Pragnesh Shah
e02ce4dc79
[hotfix] Fix frozen pooler parameters in SWAG example.
2019-06-11 15:13:53 -07:00
Oliver Guhr
5c08c8c273
adds the tokenizer + model config to the output
2019-06-11 13:46:33 +02:00
jeonsworld
a3a604cefb
Update pregenerate_training_data.py
apply the Whole Word Masking technique,
referring to [create_pretraining_data.py](https://github.com/google-research/bert/blob/master/create_pretraining_data.py)
2019-06-10 12:17:23 +09:00
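The idea behind Whole Word Masking: WordPiece subtokens (the "##"-prefixed pieces) are grouped with their head token, and each word is masked all-or-nothing. A rough sketch of the technique, not the script's exact implementation:

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    # Group token indices into whole words: "##" pieces attach to the
    # preceding token.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    out = list(tokens)
    # Mask each whole word all-or-nothing.
    for word in words:
        if random.random() < mask_prob:
            for i in word:
                out[i] = mask_token
    return out

print(whole_word_mask(["the", "puppet", "##eer", "smiled"], mask_prob=0.5))
```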
Ahmad Barqawi
c4fe56dcc0
support latest multilingual BERT fine-tuning
fix issue with bert-base-multilingual and add support for the uncased multilingual model
2019-05-27 11:27:41 +02:00
tguens
9e7bc51b95
Update run_squad.py
Indentation change so that the output "nbest_predictions.json" is not empty.
2019-05-22 17:27:59 +08:00
samuelbroscheit
94247ad6cb
Make num_train_optimization_steps int
2019-05-13 12:38:22 +02:00
samuel.broscheit
49a77ac16f
Clean up a little bit
2019-05-12 00:31:10 +02:00
samuel.broscheit
3bf3f9596f
Fixing the issues reported in https://github.com/huggingface/pytorch-pretrained-BERT/issues/556
The reason for the issue was that optimization steps were computed from the example count, which differs from the actual length of the dataloader when an example is chunked into multiple instances.
The solution in this pull request is to compute num_optimization_steps directly from len(data_loader).
2019-05-12 00:13:45 +02:00
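In other words, the fix derives the optimizer step count from the dataloader length rather than the raw example count. A sketch of that arithmetic (names are illustrative):

```python
def num_optimization_steps(dataloader_len, grad_accum_steps, num_epochs):
    # len(data_loader) already reflects chunked instances, so this count
    # matches the number of optimizer.step() calls actually performed.
    return dataloader_len // grad_accum_steps * num_epochs

# 1000 batches, accumulation over 4, 3 epochs -> 750 optimizer steps
assert num_optimization_steps(1000, 4, 3) == 750
```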
burcturkoglu
00c7fd2b79
Removed the division of global_step by num_train_optimizer in the lr_this_step computation.
2019-05-09 10:57:03 +03:00
burcturkoglu
fa37b4da77
Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT
2019-05-09 10:55:24 +03:00
burcturkoglu
5289b4b9e0
Removed the division of global_step by num_train_optimizer in the lr_this_step computation.
2019-05-09 10:51:38 +03:00
Thomas Wolf
0198399d84
Merge pull request #570 from MottoX/fix-1
Create optimizer only when args.do_train is True
2019-05-08 16:07:50 +02:00
MottoX
18c8aef9d3
Fix documentation typo
2019-05-02 19:23:36 +08:00
MottoX
74dbba64bc
Prepare optimizer only when args.do_train is True
2019-05-02 19:09:29 +08:00
Aneesh Pappu
365fb34c6c
small fix to remove shifting of LM labels during preprocessing of ROC Stories, as this shifting happens internally in the model
2019-04-30 13:53:04 -07:00
Thomas Wolf
2dee86319d
Merge pull request #527 from Mathieu-Prouveur/fix_value_training_loss
Update example files so that tr_loss is not affected by args.gradient_accumulation_step
2019-04-30 11:12:55 +02:00
Mathieu Prouveur
87b9ec3843
Fix tr_loss rescaling factor using global_step
2019-04-29 12:58:29 +02:00
Mathieu Prouveur
ed8fad7390
Update example files so that tr_loss is not affected by args.gradient_accumulation_step
2019-04-24 14:07:00 +02:00
thomwolf
d94c6b0144
fix training schedules in examples to match new API
2019-04-23 11:17:06 +02:00
Thomas Wolf
c36cca075a
Merge pull request #515 from Rocketknight1/master
Fix --reduce_memory in finetune_on_pregenerated
2019-04-23 10:30:23 +02:00
Matthew Carrigan
b8e2a9c584
Made --reduce_memory actually do something in finetune_on_pregenerated
2019-04-22 14:01:48 +01:00
Sangwhan Moon
14b1f719f4
Fix indentation weirdness in GPT-2 example.
2019-04-22 02:20:22 +09:00
Thomas Wolf
8407429d74
Merge pull request #494 from SudoSharma/patch-1
Fix indentation for unconditional generation
2019-04-17 11:11:36 +02:00
Ben Mann
87677fcc4d
[run_gpt2.py] temperature should be a float, not int
2019-04-16 15:23:21 -07:00
Abhi Sharma
07154dadb4
Fix indentation for unconditional generation
2019-04-16 11:11:49 -07:00
Thomas Wolf
3d78e226e6
Merge pull request #489 from huggingface/tokenization_serialization
Better serialization for Tokenizers and Configuration classes - Also fix #466
2019-04-16 08:49:54 +02:00
thomwolf
3571187ef6
fix saving models in distributed setting examples
2019-04-15 16:43:56 +02:00
thomwolf
2499b0a5fc
add ptvsd to run_squad
2019-04-15 15:33:04 +02:00
thomwolf
7816f7921f
clean up distributed training logging in run_squad example
2019-04-15 15:27:10 +02:00
thomwolf
1135f2384a
clean up logger in examples for distributed case
2019-04-15 15:22:40 +02:00
thomwolf
60ea6c59d2
added best practices for serialization in README and examples
2019-04-15 15:00:33 +02:00
thomwolf
179a2c2ff6
update example to work with new serialization semantics
2019-04-15 14:33:23 +02:00
thomwolf
3e65f255dc
add serialization semantics to tokenizers - fix transfo-xl tokenizer
2019-04-15 11:47:25 +02:00
Thomas Wolf
aff44f0c08
Merge branch 'master' into master
2019-04-15 10:58:34 +02:00
Thomas Wolf
bb61b747df
Merge pull request #474 from jiesutd/master
Fix tsv read error in Windows
2019-04-15 10:56:48 +02:00
Matthew Carrigan
dbbd6c7500
Replaced some randints with cleaner randranges, and added a helpful error for users whose corpus is just one giant document.
2019-04-12 15:07:58 +01:00
Thomas Wolf
616743330e
Merge pull request #462 from 8enmann/master
fix run_gpt2.py
2019-04-11 21:54:46 +02:00
Thomas Wolf
2cdfb8b254
Merge pull request #467 from yaroslavvb/patch-2
Update README.md
2019-04-11 21:53:23 +02:00
Jie Yang
c49ce3c722
fix tsv read error in Windows
2019-04-11 15:40:19 -04:00
thomwolf
4bc4c69af9
finetuning any BERT model - fixes #455
2019-04-11 16:57:59 +02:00
Yaroslav Bulatov
8fffba5f47
Update README.md
Fix for
```
04/09/2019 21:39:38 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False
Traceback (most recent call last):
File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 642, in <module>
main()
File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 502, in main
raise ValueError("Training is currently the only implemented execution option. Please set `do_train`.")
ValueError: Training is currently the only implemented execution option. Please set `do_train`.
```
2019-04-09 14:45:47 -07:00
Benjamin Mann
fd8a3556f0
fix run_gpt2.py
2019-04-08 17:20:35 -07:00
Dhanajit Brahma
6c4c7be282
Merge remote-tracking branch 'upstream/master'
2019-04-07 16:59:36 +05:30
Dhanajit Brahma
4d3cf0d602
removing some redundant lines
2019-04-07 16:59:07 +05:30