Commit Graph

19383 Commits

Author SHA1 Message Date
VictorSanh
2576a5c6db update hubconf for gpt2 torchhub compatibility 2019-06-01 15:28:01 -04:00
VictorSanh
a92b6dc3c1 add GPT2 torchhub compatibility 2019-06-01 15:27:43 -04:00
Thomas Wolf
2a329c6186
Merge pull request #651 from huggingface/gpt_torchhub
Add GPT* compatibility to torchhub
2019-05-31 14:44:52 +02:00
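The commits above wire GPT and GPT-2 entry points into the repository's hubconf so the models can be pulled through torch.hub. Below is a minimal sketch of what loading looks like; the entrypoint names ('gpt2Tokenizer', 'gpt2Model') are assumptions based on the hubconf naming convention and may differ from the exact names defined in the repo.

```python
import torch

# Minimal sketch of pulling GPT-2 through torch.hub once these hubconf entry
# points are in place. The entrypoint names ('gpt2Tokenizer', 'gpt2Model') are
# assumptions and may not match the names actually defined in the hubconf.
tokenizer = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'gpt2Tokenizer', 'gpt2')
model = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'gpt2Model', 'gpt2')
model.eval()

text = "Who was Jim Henson? Jim Henson was a"
tokens_tensor = torch.tensor([tokenizer.encode(text)])
with torch.no_grad():
    # GPT2Model returns the last-layer hidden states plus the key/value cache.
    hidden_states, presents = model(tokens_tensor)
```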
VictorSanh
45d21502f0 update doc 2019-05-31 01:04:16 -04:00
VictorSanh
98f5c7864f decorrelate dependencies + fix bug 2019-05-31 01:00:29 -04:00
VictorSanh
c8bd026ef6 move dependencies list to hubconf 2019-05-31 00:36:58 -04:00
VictorSanh
19ef2b0a66 Fix typo in hubconf 2019-05-31 00:33:33 -04:00
VictorSanh
d0f591051c gpt_hubconf 2019-05-31 00:28:10 -04:00
VictorSanh
4a210c9fc6 Move bert_hubconf to hubconfs 2019-05-31 00:28:00 -04:00
VictorSanh
0c5a4fe9c9 modify from_pretrained for OpenAIGPT 2019-05-31 00:27:18 -04:00
VictorSanh
372a5c1cee Hubconf doc - Special case loading 2019-05-30 16:06:21 -04:00
Victor SANH
96592b544b
default in __init__s for classification BERT models (#650) 2019-05-30 15:53:13 -04:00
VictorSanh
4cda86b08f Update hubconf for torchhub: paths+examples+doc 2019-05-30 18:38:00 +00:00
Colanim
1eba8b9d96
Fix link in README 2019-05-30 14:01:46 +09:00
Chris
314bc6bb4e added transposes to attention.self.[query,key,value] 2019-05-27 09:47:59 -04:00
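This commit belongs to the PyTorch-to-TensorFlow conversion work further down the log; the transposes are needed because nn.Linear stores weights as [out_features, in_features] while a TensorFlow dense kernel is [in_features, out_features]. A minimal sketch of the idea, not the actual conversion script:

```python
import numpy as np
import torch

# PyTorch nn.Linear keeps its weight as [out_features, in_features]; a
# TensorFlow dense kernel is [in_features, out_features], so weights such as
# attention.self.query/key/value must be transposed when exporting to TF.
pt_linear = torch.nn.Linear(768, 768)
pt_weight = pt_linear.weight.detach().numpy()   # shape (768, 768): [out, in]
tf_kernel = np.ascontiguousarray(pt_weight.T)   # transpose to [in, out]
tf_bias = pt_linear.bias.detach().numpy()       # biases need no transpose
```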
Ahmad Barqawi
c4fe56dcc0 support latest multilingual bert fine-tuning
fix issue with bert-base-multilingual and add support for uncased multilingual
2019-05-27 11:27:41 +02:00
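The commit above touches the multilingual BERT checkpoints. A minimal sketch of loading the cased and uncased multilingual models through the library's from_pretrained shortcuts; the key point is matching do_lower_case to the checkpoint.

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Cased multilingual checkpoint: keep the original casing.
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False)
model = BertModel.from_pretrained('bert-base-multilingual-cased')

# Uncased multilingual checkpoint: lower-case the input to match the vocabulary.
uncased_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=True)
uncased_model = BertModel.from_pretrained('bert-base-multilingual-uncased')
```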
Chris
8de1faea6f update to hf->tf args 2019-05-22 20:38:16 -04:00
Chris
d0adab2c39 fn change; pytorch_model_dir required=False 2019-05-22 20:24:04 -04:00
Chris
a309459b92 fn change; pytorch_model_dir required=False 2019-05-22 20:17:27 -04:00
tguens
9e7bc51b95
Update run_squad.py
Indentation change so that the output "nbest_predictions.json" is not empty.
2019-05-22 17:27:59 +08:00
Chris
69749f3fc3 update to hf->tf args 2019-05-18 17:16:01 -04:00
Chris
f1433db4f1 update to hf->tf args 2019-05-18 17:09:08 -04:00
Chris
077a5b0dc4 Merge remote-tracking branch 'upstream/master' into convert-back-to-tf
merging
2019-05-18 16:06:08 -04:00
Chris
2bcda8d00c update 2019-05-18 15:55:11 -04:00
samuelbroscheit
94247ad6cb Make num_train_optimization_steps int 2019-05-13 12:38:22 +02:00
samuel.broscheit
49a77ac16f Clean up a little bit 2019-05-12 00:31:10 +02:00
samuel.broscheit
3bf3f9596f Fixing the issues reported in https://github.com/huggingface/pytorch-pretrained-BERT/issues/556
The reason for the issue was that optimization steps were computed from the example count, which differs from the actual size of the dataloader when an example is chunked into multiple instances.

The solution in this pull request is to compute num_optimization_steps directly from len(data_loader).
2019-05-12 00:13:45 +02:00
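The fix described above counts optimizer updates from the dataloader itself rather than from the raw example count, so examples chunked into several instances are counted correctly. A minimal sketch of the corrected computation; the accumulation and epoch values mirror the usual example-script arguments and are assumptions here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustrative dataset: 25 training instances (e.g. 10 examples chunked into 25).
features = TensorDataset(torch.zeros(25, 128, dtype=torch.long))
train_dataloader = DataLoader(features, batch_size=8)

gradient_accumulation_steps = 1   # assumption: mirrors args.gradient_accumulation_steps
num_train_epochs = 3              # assumption: mirrors args.num_train_epochs

# Count optimizer updates from the dataloader (number of batches) instead of the
# example count; the int() cast matches the follow-up commit
# "Make num_train_optimization_steps int".
num_train_optimization_steps = int(
    len(train_dataloader) // gradient_accumulation_steps * num_train_epochs
)
print(num_train_optimization_steps)  # 12 with this toy setup: 4 batches * 3 epochs
```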
Thomas Wolf
3fc63f126d
Merge pull request #598 from burcturkoglu/master
Updating learning rate with special warm up in examples
2019-05-10 13:48:12 +02:00
burcturkoglu
00c7fd2b79 Division of global_step by num_train_optimizer in lr_this_step is removed. 2019-05-09 10:57:03 +03:00
burcturkoglu
fa37b4da77 Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT 2019-05-09 10:55:24 +03:00
burcturkoglu
5289b4b9e0 Division of global_step by num_train_optimizer in lr_this_step is removed. 2019-05-09 10:51:38 +03:00
thomwolf
275179a003 output attentions in GPT-2 2019-05-08 22:24:42 +02:00
thomwolf
366a3b0285 clean up in tokenization 2019-05-08 21:43:51 +02:00
Thomas Wolf
701bd59b8b
Merge pull request #585 from huntzhan/master
Make the epsilon of LayerNorm configurable.
2019-05-08 16:56:38 +02:00
Thomas Wolf
303b5e2b92
Merge pull request #545 from ailzhang/cache_dir
move pytorch_pretrained_bert cache folder under same path as torch
2019-05-08 16:55:27 +02:00
Thomas Wolf
0198399d84
Merge pull request #570 from MottoX/fix-1
Create optimizer only when args.do_train is True
2019-05-08 16:07:50 +02:00
Thomas Wolf
50fa92c026
Merge pull request #571 from MottoX/patch-1
Fix documentation typo
2019-05-08 16:06:13 +02:00
thomwolf
0efc4ab632 adding dropout to GPT-2 and embedding dropout to GPT 2019-05-08 10:41:35 +02:00
thomwolf
ea9dbea9d5 update GPT2 loss computation for more flexibility 2019-05-07 23:27:18 +02:00
thomwolf
ce86336545 add predict_special_tokens option to GPT also 2019-05-07 16:47:22 +02:00
thomwolf
d1b6979aa5 GPT-2 option to avoid predicting special tokens 2019-05-07 16:25:53 +02:00
huntzhan
101ab4dd8e Make the epsilon of LayerNorm configurable. 2019-05-06 00:26:21 +08:00
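PR #585 above makes the epsilon of LayerNorm configurable instead of hard-coded. A minimal sketch of the pattern, not the repository's exact class:

```python
import torch
from torch import nn

class LayerNorm(nn.Module):
    """Layer normalization with a configurable epsilon (variance floor)."""

    def __init__(self, hidden_size, eps=1e-12):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.bias = nn.Parameter(torch.zeros(hidden_size))
        self.variance_epsilon = eps  # configurable instead of hard-coded

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        var = (x - mean).pow(2).mean(-1, keepdim=True)
        x = (x - mean) / torch.sqrt(var + self.variance_epsilon)
        return self.weight * x + self.bias
```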
Chris
41089bc7d3 added file to convert pytorch->tf 2019-05-02 13:26:22 -04:00
Chris
0a8b4d65be added file to convert pytorch->tf 2019-05-02 13:20:59 -04:00
Chris
968c1b44cb added file to convert pytorch->tf 2019-05-02 13:19:56 -04:00
Chris
96c2b77f0f added file to convert pytorch->tf 2019-05-02 13:14:25 -04:00
thomwolf
e211785ada extract attention weights from GPT 2019-05-02 18:31:26 +02:00
MottoX
18c8aef9d3 Fix documentation typo 2019-05-02 19:23:36 +08:00
MottoX
74dbba64bc Prepare optimizer only when args.do_train is True 2019-05-02 19:09:29 +08:00
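The commit above guards optimizer construction in the example scripts so that evaluation-only runs do not build one. A minimal sketch of the guard; the Args class and placeholder model stand in for the argparse values and fine-tuning model of the real scripts.

```python
import torch
from torch import nn
from pytorch_pretrained_bert.optimization import BertAdam

# Illustrative stand-ins; in run_classifier.py these come from argparse and the
# loaded BERT model.
class Args:
    do_train = True
    learning_rate = 5e-5
    warmup_proportion = 0.1

args = Args()
model = nn.Linear(768, 2)  # placeholder for the fine-tuning model

# Build the optimizer only when training was requested; evaluation-only runs
# (--do_eval without --do_train) skip it entirely.
optimizer = None
if args.do_train:
    optimizer = BertAdam(model.parameters(),
                         lr=args.learning_rate,
                         warmup=args.warmup_proportion,
                         t_total=1000)  # would be num_train_optimization_steps
```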
thomwolf
db98a4a48b gpt-2 tokenizer 2019-05-01 11:40:48 +02:00
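The GPT-2 byte-level BPE tokenizer touched by this commit can be exercised directly. A minimal usage sketch, assuming the standard 'gpt2' shortcut name:

```python
from pytorch_pretrained_bert import GPT2Tokenizer

# Byte-level BPE tokenizer for GPT-2: text -> token ids -> text round trip.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
token_ids = tokenizer.encode("Hello world")
print(token_ids)                    # list of BPE token ids
print(tokenizer.decode(token_ids))  # "Hello world"
```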