Commit Graph

85 Commits

Author SHA1 Message Date
thomwolf
5456d82311 more versatile model loading 2019-01-29 09:54:18 +01:00
thomwolf
bd3b3aee9c update 2019-01-28 17:47:29 +01:00
thomwolf
b12616fd8e updating code organization to fix imports 2019-01-28 17:03:39 +01:00
thomwolf
d77dd62ff8 directly load from TF checkpoints + code cleanup 2019-01-28 16:50:23 +01:00
thomwolf
e5c78c6684 update readme and few typos 2019-01-10 01:40:00 +01:00
thomwolf
ab90d4cddd adding docs and example for OpenAI GPT 2019-01-09 00:12:43 +01:00
thomwolf
3cf12b235a added tests + fixed losses 2019-01-08 16:24:23 +01:00
thomwolf
eed51c5bdf add OpenAI GPT 2019-01-08 12:26:58 +01:00
thomwolf
793dcd236b Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT into fifth-release 2019-01-07 13:37:55 +01:00
thomwolf
93f563b8a8 adding OpenAI GPT 2019-01-07 12:55:36 +01:00
Thomas Wolf
e048c7f1c8 Merge pull request #171 from donglixp/patch-1: LayerNorm initialization 2019-01-07 12:44:46 +01:00
Thomas Wolf
bcd607542c Merge pull request #145 from wlhgtc/master: Correct the wrong note 2019-01-07 12:23:05 +01:00
Li Dong
d0d9b384f2 LayerNorm initialization 2019-01-07 15:51:33 +08:00
    The LayerNorm gamma and beta should be initialized by .fill_(1.0) and .zero_().
    Reference links:
    989e78c412/tensorflow/contrib/layers/python/layers/layers.py (L2298)
    989e78c412/tensorflow/contrib/layers/python/layers/layers.py (L2308)
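For context, a minimal sketch of what this initialization fix amounts to in PyTorch. The module and parameter names (weight for gamma, bias for beta) are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn as nn

class SketchLayerNorm(nn.Module):
    """Minimal LayerNorm sketch: gamma (weight) starts at 1.0, beta (bias) at 0.0."""
    def __init__(self, hidden_size, eps=1e-12):
        super().__init__()
        self.weight = nn.Parameter(torch.Tensor(hidden_size))  # gamma
        self.bias = nn.Parameter(torch.Tensor(hidden_size))    # beta
        self.eps = eps
        # The fix referenced in the commit: initialize with .fill_(1.0) and .zero_()
        self.weight.data.fill_(1.0)
        self.bias.data.zero_()

    def forward(self, x):
        # Normalize over the last dimension, then scale and shift.
        mean = x.mean(-1, keepdim=True)
        var = (x - mean).pow(2).mean(-1, keepdim=True)
        return self.weight * (x - mean) / torch.sqrt(var + self.eps) + self.bias
```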
wlhgtc
e626eecc25
Update modeling.py 2018-12-22 20:26:05 +08:00
Grégory Châtel
7176674849 Fixing various class documentations. 2018-12-20 13:11:17 +01:00
thomwolf
4a4b0e5783 remove logging.basicConfig from library code 2018-12-14 14:46:25 +01:00
thomwolf
ae88eb88a4 set encoding to 'utf-8' in calls to open 2018-12-14 13:48:58 +01:00
thomwolf
52c53f39d0 clean up apex integration 2018-12-13 13:02:17 +01:00
thomwolf
d23eed85bb model loading apex modification 2018-12-13 12:53:17 +01:00
thomwolf
93f335ef86 add pretrained loading from state_dict 2018-12-13 12:48:13 +01:00
Thomas Wolf
91aab2a6d3 Merge pull request #116 from FDecaYed/deyuf/fp16_with_apex: Change to use apex for better fp16 and multi-gpu support 2018-12-13 12:32:37 +01:00
Deyu Fu
3b0a14b761 add fallback path for apex used in modeling.py 2018-12-12 15:05:45 -08:00
Deyu Fu
c8ea286048 change to apex for better fp16 and multi-gpu support 2018-12-11 17:13:58 -08:00
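The "fallback path for apex" typically means trying apex's fused kernels and reverting to plain PyTorch when apex is not installed. A hedged sketch of that pattern, assuming apex's FusedLayerNorm import; the class name and eps value are illustrative, not necessarily the repository's exact code:

```python
import torch.nn as nn

try:
    # Prefer apex's fused LayerNorm when apex is installed (faster fp16 path).
    from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
except ImportError:
    # Fallback: plain PyTorch LayerNorm keeps the model usable without apex.
    class BertLayerNorm(nn.LayerNorm):
        def __init__(self, hidden_size, eps=1e-12):
            super().__init__(hidden_size, eps=eps)
```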
thomwolf
270fa2f20b add pretrained loading from state_dict 2018-12-11 11:50:38 +01:00
Grégory Châtel
fc5a38ac92 Adding the BertForMultipleChoiceClass. 2018-12-06 18:42:23 +01:00
thomwolf
511bce58bd update new token classification model 2018-11-30 22:56:02 +01:00
thomwolf
d787c6be8c improve docstrings and fix new token classification model 2018-11-30 22:55:26 +01:00
thomwolf
d6f06c03f4 fixed loading pre-trained tokenizer from directory 2018-11-30 14:09:06 +01:00
thomwolf
532a81d3d6 fixed doc_strings 2018-11-30 13:57:01 +01:00
thomwolf
296f006132 added BertForTokenClassification model 2018-11-30 13:56:53 +01:00
thomwolf
298107fed7 Added new bert models 2018-11-30 13:56:02 +01:00
thomwolf
63ae5d2134 added cache_dir option in from_pretrained 2018-11-26 10:21:56 +01:00
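A usage sketch of the cache_dir option, assuming the library's from_pretrained signature at the time; the model name and path are illustrative:

```python
from pytorch_pretrained_bert import BertModel

# cache_dir controls where downloaded pretrained weights are stored
# instead of the default cache location.
model = BertModel.from_pretrained('bert-base-uncased', cache_dir='/tmp/bert_cache')
```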
thomwolf
ebaacba38b fixing typo in docstring 2018-11-26 09:55:15 +01:00
thomwolf
870d71636e fixing target size in crossentropy losses 2018-11-26 09:51:34 +01:00
thomwolf
1de35b624b preparing for first release 2018-11-15 20:56:10 +01:00