Commit Graph

178 Commits

Author SHA1 Message Date
Ailing Zhang
bfd6f6b257 fix from_pretrained positional args 2019-04-17 16:31:40 -07:00
thomwolf
074c869bbe fix OpenAIGPTMultipleChoiceHead 2019-04-11 20:53:50 +02:00
thomwolf
991b8e65f4 Merge branch 'master' of https://github.com/huggingface/pytorch-pretrained-BERT 2019-04-11 11:43:15 +02:00
thomwolf
e99b2014cc fixes #471 2019-04-11 11:43:13 +02:00
Thomas Wolf
94980b529f Merge pull request #404 from CatalinVoss/fix_lm_loss: Fix Language Modeling Loss 2019-04-03 11:35:30 +02:00
Thomas Wolf
db4dccd1b5 Merge pull request #389 from lukovnikov/master: Fix cosine schedule 2019-04-03 11:21:43 +02:00
thomwolf
19666dcb3b Should fix #438 2019-04-03 11:01:01 +02:00
thomwolf
1d8c232324 Fix #436 2019-04-03 10:51:03 +02:00
Mike Arpaia
8b5c63e4de Fixes to the TensorFlow conversion tool 2019-04-01 13:17:54 -06:00
Catalin Voss
01520d5412 Remove my unhelpful comments :) 2019-03-27 10:45:28 -07:00
Ikuya Yamada
0401317b23 Remove padding_idx from position_embeddings and token_type_embeddings 2019-03-26 21:56:35 +09:00
Catalin Voss
fda2f62395 Fix test failures due to old torch issue with non-contiguous view 2019-03-24 14:37:13 -07:00
Catalin Voss
0dd796e359 Also fix loss function issue with the double head models 2019-03-24 14:35:55 -07:00
Catalin Voss
472857c47f Fix typo syntax err (sorry, c/p from my repo) 2019-03-24 14:14:49 -07:00
Catalin Voss
2e6f5ffb96 Fix GPT language model loss here as well 2019-03-24 14:14:44 -07:00
Catalin Voss
5938f31fa7 Fix c/p typo from my experiment code 2019-03-24 14:14:40 -07:00
Catalin Voss
7797d21b8d Fix GPT2 language modeling loss computation 2019-03-24 14:14:35 -07:00
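The run of Catalin Voss commits above all concern the same issue as PR #404 merged higher up: the GPT/GPT-2 language modeling loss has to score position i against token i+1, so logits and labels must be shifted by one before the cross-entropy. A minimal sketch of that kind of shifted next-token loss (tensor and function names are illustrative, not the repository's exact code):

```python
import torch
import torch.nn.functional as F

def shifted_lm_loss(lm_logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """Next-token prediction loss: the logit at position i is scored against token i+1.

    lm_logits: (batch, seq_len, vocab_size) raw scores from the LM head.
    input_ids: (batch, seq_len) token ids that were fed to the model.
    """
    # Drop the last logit (nothing follows the final token) and the first label
    # (no logit predicts it), then align the two shifted views.
    shift_logits = lm_logits[:, :-1, :].contiguous()  # .contiguous() avoids the old non-contiguous .view() failure
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```

The `.contiguous()` calls also relate to the "non-contiguous view" test fix in commit fda2f62395 above.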
lukovnikov
19cc2c084e same 2019-03-18 15:13:35 +01:00
lukovnikov
2283dcca5e import revert 2019-03-18 13:40:12 +01:00
lukovnikov
ef28b2c747 branches, optim cosine fix 2019-03-18 13:18:07 +01:00
lukovnikov
90430ae7ec Merge remote-tracking branch 'origin/master' (conflicts: pytorch_pretrained_bert/optimization.py) 2019-03-18 13:15:29 +01:00
lukovnikov
bed6408dcc branches, optim cosine fix 2019-03-18 13:09:55 +01:00
thomwolf
e5f2d9122c adding absolute imports to gpt2, openai and transfo-xl 2019-03-14 09:55:01 +01:00
lukovnikov
20e652209c relation classification: replacing entity mention with mask token 2019-03-13 16:13:37 +01:00
lukovnikov
eac039d21f changing docker 2019-03-12 13:45:12 +01:00
lukovnikov
471daf1b6c changing docker 2019-03-12 13:32:42 +01:00
lukovnikov
9024613337 changing docker 2019-03-12 13:23:58 +01:00
lukovnikov
baf66d1419 restart cosine lr schedule 2019-03-12 13:22:23 +01:00
Thomas Wolf
9b03d67b83 Merge pull request #362 from Bharat123rox/patch-1: Make the hyperlink of NVIDIA Apex clickable 2019-03-11 09:08:51 +01:00
Thomas Wolf
13aa13dbc0 Merge pull request #358 from cdjhz/patch-1: add 'padding_idx=0' for BertEmbeddings 2019-03-11 09:06:55 +01:00
Bharat Raghunathan
f91ce0b803 Make the hyperlink of NVIDIA Apex clickable 2019-03-09 20:05:39 +05:30
lukovnikov
51efde54a9 cos fix 2019-03-09 02:45:25 +01:00
lukovnikov
f113a2dfdc readme de 2019-03-09 02:29:57 +01:00
lukovnikov
90a41dbe14 BertAdam schedule objects 2019-03-09 02:23:20 +01:00
lukovnikov
88874f6cf0 BertAdam schedule objects 2019-03-08 19:08:30 +01:00
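The lukovnikov commits in this stretch ("restart cosine lr schedule", "cos fix", "BertAdam schedule objects") and the "Fix cosine schedule" PR #389 merged above all revolve around the learning-rate schedules used by BertAdam. A minimal sketch of a warmup-then-cosine multiplier of the kind these schedules compute (function signature and names are illustrative, not the library's API):

```python
import math

def warmup_cosine(progress: float, warmup: float = 0.002) -> float:
    """Learning-rate multiplier as a function of training progress in [0, 1].

    Linear warmup from 0 to 1 over the first `warmup` fraction of steps,
    then cosine decay from 1 down to 0 over the remaining steps.
    """
    if progress < warmup:
        return progress / warmup
    # Rescale post-warmup progress back to [0, 1] before applying the cosine,
    # so the decay starts at 1.0 exactly where warmup ends.
    progress = (progress - warmup) / (1.0 - warmup)
    return 0.5 * (1.0 + math.cos(math.pi * progress))
```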
Haozhe Ji
72fa8d03a7 add 'padding_idx=0' for BertEmbeddings 2019-03-07 20:02:55 +08:00
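This change, together with the later commit 0401317b23 that removes `padding_idx` from `position_embeddings` and `token_type_embeddings`, leaves only the word embedding treating id 0 as padding (that row is zero-initialized and receives no gradient). A rough sketch of the resulting layout, assuming BERT-base-like sizes:

```python
import torch.nn as nn

vocab_size, max_position, type_vocab, hidden = 30522, 512, 2, 768  # assumed BERT-base-like sizes

# Only the word embedding marks id 0 as padding.
word_embeddings = nn.Embedding(vocab_size, hidden, padding_idx=0)

# Position and token-type embeddings index real positions/segments,
# so index 0 is meaningful there and must not be treated as padding.
position_embeddings = nn.Embedding(max_position, hidden)
token_type_embeddings = nn.Embedding(type_vocab, hidden)
```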
Philipp Glock
6190e8ce4c Fix: use dropout layer 2019-03-07 10:12:45 +01:00
thomwolf
5c85fc3977 fix typo - logger info 2019-03-06 10:05:21 +01:00
Thomas Wolf
21c88a07b7 Merge pull request #341 from potatochip/patch-1: catch exception if pathlib not install 2019-03-06 09:48:01 +01:00
Thomas Wolf
477ec4b6cc Merge pull request #337 from CatalinVoss/patch-2: Allow tokenization of sequences > 512 for caching 2019-03-06 09:45:49 +01:00
Thomas Wolf
7b9e5a54b5 Merge pull request #327 from lukovnikov/master: Issue#324: warmup linear fixes 2019-03-06 09:44:56 +01:00
Catalin Voss
4a49c22584 Warn instead of raising in BERT and GPT-2 tokenizers as well, to allow for pre-caching of tokens 2019-03-05 12:31:45 -08:00
Aaron Mangum
0c970caa4a catch exception if pathlib not install 2019-03-04 14:30:19 -08:00
Catalin Voss
9775b2eb27 Allow tokenization of sequences > 512 for caching 2019-03-02 16:30:21 -08:00
For many applications requiring randomized data access, it's easier to cache the tokenized representations than the words. So why not turn this into a warning?
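The two tokenizer commits above (4a49c22584 and 9775b2eb27) share one idea: when a sequence tokenizes to more than the 512 tokens the pretrained model supports, log a warning instead of raising, so callers that only want to cache token ids can proceed and truncate later. A minimal sketch of that check (hypothetical helper, not the tokenizer's actual code):

```python
import logging

logger = logging.getLogger(__name__)

MAX_LEN = 512  # maximum sequence length supported by the pretrained positional embeddings

def check_length(token_ids: list) -> list:
    """Warn (rather than raise) when a tokenized sequence exceeds MAX_LEN.

    Useful when tokenization is only done to cache ids on disk; the caller
    can still truncate before feeding the ids to the model.
    """
    if len(token_ids) > MAX_LEN:
        logger.warning(
            "Token sequence length %d exceeds the model maximum of %d; "
            "running the model on it as-is will cause indexing errors.",
            len(token_ids), MAX_LEN,
        )
    return token_ids
```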
John Hewitt
4d1ad83236 update docstring of BERT tokenizer to reflect do_wordpiece_only 2019-02-27 14:50:41 -08:00
lukovnikov
35410da758 added warning 2019-02-27 17:11:42 +01:00
lukovnikov
4d79e0d386 added warning 2019-02-27 16:50:05 +01:00
lukovnikov
66a84b63b0 added warning 2019-02-27 16:38:00 +01:00
lukovnikov
070f3b21d8 added warning 2019-02-27 16:26:45 +01:00
lukovnikov
46ef646016 added warning 2019-02-27 16:22:27 +01:00