transformers/pytorch_pretrained_bert
Latest commit: 477ec4b6cc by Thomas Wolf, 2019-03-06 09:45:49 +01:00
Merge pull request #337 from CatalinVoss/patch-2: Allow tokenization of sequences > 512 for caching
File | Last commit message | Last commit date
__init__.py | forgot to add regex to requirements :( | 2019-02-18 11:54:51 +01:00
__main__.py | adding gpt2 | 2019-02-17 23:38:51 +01:00
convert_gpt2_checkpoint_to_pytorch.py | adding gpt2 | 2019-02-17 23:38:51 +01:00
convert_openai_checkpoint_to_pytorch.py | python 2 compatibility | 2019-02-06 00:07:46 +01:00
convert_tf_checkpoint_to_pytorch.py | splitting position and tokens embeddings in OpenAI GPT - updating tf imports - tests | 2019-01-29 10:31:42 +01:00
convert_transfo_xl_checkpoint_to_pytorch.py | add two transformer xl models | 2019-02-07 17:07:03 +01:00
file_utils.py | fix python 2.7 imports | 2019-02-11 10:35:36 +01:00
modeling_gpt2.py | finish updating docstrings | 2019-02-23 06:31:59 -08:00
modeling_openai.py | fix tests - bump up version | 2019-02-17 23:57:23 +01:00
modeling_transfo_xl_utilities.py | update transfo xl example | 2019-02-09 16:59:17 +01:00
modeling_transfo_xl.py | fix TransfoXLModel loading | 2019-02-13 09:32:46 +01:00
modeling.py | Update activation function docstring | 2019-02-16 12:17:52 -08:00
optimization_openai.py | added warning | 2019-02-27 17:11:42 +01:00
optimization.py | added warning | 2019-02-27 17:11:42 +01:00
tokenization_gpt2.py | Warn instead of raising in BERT and GPT-2 tokenizers as well, to allow for pre-caching of tokens | 2019-03-05 12:31:45 -08:00
tokenization_openai.py | Allow tokenization of sequences > 512 for caching | 2019-03-02 16:30:21 -08:00
tokenization_transfo_xl.py | typo | 2019-02-20 21:11:06 +08:00
tokenization.py | Merge pull request #337 from CatalinVoss/patch-2 | 2019-03-06 09:45:49 +01:00
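
The merge commit above (PR #337) and the commit messages on tokenization.py, tokenization_gpt2.py, and tokenization_openai.py describe a behaviour change: sequences longer than the 512-token model limit are now only warned about rather than rejected, so long documents can be tokenized once and cached before chunking. Below is a minimal sketch of how that surfaces when using the package's public tokenizer API; the exact point where the length check fires is an assumption inferred from the commit messages, not verified against the diff.

```python
# Minimal sketch, assuming the post-PR-#337 behaviour: converting a sequence
# longer than BERT's 512-token limit is expected to log a warning instead of
# raising, so the full token id list can still be cached and chunked later.
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # downloads the vocab on first use

long_text = "lorem ipsum dolor sit amet " * 200   # far more than 512 word pieces
tokens = tokenizer.tokenize(long_text)
ids = tokenizer.convert_tokens_to_ids(tokens)      # assumed to warn (not raise) past 512 tokens

print(len(ids))  # full sequence is kept; truncate to 512 before feeding the model
```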