Commit Graph

15053 Commits

Author SHA1 Message Date
Julien Chaumond
248aeaa842 Merge branch 'patch-1' of https://github.com/Rexhaif/transformers into Rexhaif-patch-1 2020-01-15 18:22:01 -05:00
Aditya Bhargava
c76c3cebed Add check for token_type_ids before tensorizing
Fix an issue where `prepare_for_model()` raises a `KeyError` when
`return_token_type_ids` is set to `False` and `return_tensors` is
enabled.
2020-01-15 12:31:43 -05:00
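
A minimal sketch of the guard this commit describes — the helper and key names here are assumptions for illustration, not the actual `transformers` source:

```python
import torch

def to_tensors(encoded):
    """Convert list-valued fields to tensors, skipping keys the caller disabled."""
    for key in ("input_ids", "token_type_ids", "attention_mask"):
        if key in encoded:  # guard: key is absent when return_token_type_ids=False
            encoded[key] = torch.tensor([encoded[key]])
    return encoded

# Without the `if key in encoded` guard, an encoding produced with
# return_token_type_ids=False would raise KeyError on "token_type_ids".
batch = to_tensors({"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1]})
```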
Julien Chaumond
eb59e9f705 Graduate sst-2 to a canonical one 2020-01-15 16:28:50 +00:00
Julien Chaumond
e184ad13cf Close #2392 2020-01-15 15:43:44 +00:00
Lysandre
dfe012ad9d Fix misleading RoBERTa token type ids 2020-01-14 17:47:28 -05:00
Lysandre
c024ab98df Improve padding side documentation 2020-01-14 17:44:23 -05:00
Lysandre
9aeb0b9b8a Improve padding side documentation 2020-01-14 17:43:00 -05:00
Julien Chaumond
715fa638a7 Merge branch 'master' into from_scratch_training 2020-01-14 18:58:21 +00:00
Lysandre
100e3b6f21 Bias should be resized with the weights
Created a link between the linear layer bias and the model attribute bias. This changes nothing for the user or for the conversion scripts, but it allows the `resize_token_embeddings` method to resize the bias as well as the weights of the decoder.

Added a test.
2020-01-14 13:43:45 -05:00
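
A toy sketch of the weight/bias link described above — module and method names are assumed, not the library's code:

```python
import torch
import torch.nn as nn

class LMHead(nn.Module):
    """Toy LM head: the decoder bias is the same Parameter as `self.bias`."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.decoder = nn.Linear(hidden_size, vocab_size, bias=False)
        self.bias = nn.Parameter(torch.zeros(vocab_size))
        self.decoder.bias = self.bias  # the link: one tensor, two references

    def resize_bias(self, new_vocab_size):
        new_bias = torch.zeros(new_vocab_size)
        n = min(new_vocab_size, self.bias.numel())
        new_bias[:n] = self.bias.data[:n]  # keep the overlapping entries
        self.bias = nn.Parameter(new_bias)
        self.decoder.bias = self.bias  # re-link after resizing
```

Because the two attributes reference one Parameter, a resize that goes through the model-level attribute cannot silently leave the decoder with a stale, wrongly sized bias.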
Lysandre
6c32d8bb95 Size > Dimensionality + Remove final TODOs 2020-01-14 14:09:09 +01:00
Lysandre
760164d63b RoBERTa example 2020-01-14 14:09:09 +01:00
Lysandre
387217bd3e Added example usage 2020-01-14 14:09:09 +01:00
Lysandre
7d1bb7f256 Add missing XLNet and XLM models 2020-01-14 14:09:09 +01:00
Lysandre
a1cb100460 Wrap up configurations 2020-01-14 14:09:09 +01:00
Lysandre
c11b6fd393 Update links in all configurations 2020-01-14 14:09:09 +01:00
Lysandre Debut
632682726f Updated Configurations 2020-01-14 14:09:09 +01:00
Thomas Wolf
2b566c182e Merge pull request #2384 from dimagalat/master
Releasing file lock
2020-01-14 13:19:01 +01:00
Julien Chaumond
764f836d52 Update test_tokenization_auto.py 2020-01-13 22:50:34 -05:00
Julien Chaumond
d5831acb07 Update test_tokenization_auto.py 2020-01-13 22:47:33 -05:00
Julien Chaumond
ed6cd597cc Update test_tokenization_auto.py 2020-01-13 22:46:35 -05:00
Julien Chaumond
5cb463a714 Update test_tokenization_auto.py 2020-01-13 22:38:29 -05:00
Julien Chaumond
afc24ea5d4 In a parallel setup this could fail 2020-01-13 23:44:08 +00:00
Julien Chaumond
894812c652 Fixup mapping 2020-01-13 23:34:19 +00:00
Julien Chaumond
b20f11d4ca 🔫 Python35 2020-01-13 23:20:44 +00:00
Julien Chaumond
0304628590 Map configs to models and tokenizers 2020-01-13 23:11:44 +00:00
Julien Chaumond
1fc855e456 [tests] Safety checks on CONFIG_MAPPING 2020-01-13 21:52:55 +00:00
Julien Chaumond
3c86b6f3c5 Py35 doesn't like inline variable types 2020-01-13 20:44:33 +00:00
Julien Chaumond
b803b067bf Config to Model mapping 2020-01-13 20:05:20 +00:00
Thomas Wolf
896a0eb1fd Merge pull request #2459 from Perseus14/patch-4
Update pipelines.py
2020-01-13 16:02:54 +01:00
Morgan Funtowicz
0d6c17fc1b black formatting 2020-01-13 11:18:27 +01:00
IWillPull
a3085020ed Added repetition penalty to PPLM example (#2436)
* Added repetition penalty

* Default PPLM repetition_penalty to neutral

* Minor modifications to comply with reviewer's suggestions. (j -> token_idx)

* Formatted code with `make style`
2020-01-10 23:00:07 -05:00
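
For context, the common CTRL-style formulation of a repetition penalty, where 1.0 is the neutral default mentioned above; this is an illustrative sketch, not necessarily the exact code added to the PPLM example:

```python
import torch

def apply_repetition_penalty(logits, generated_ids, repetition_penalty=1.0):
    """CTRL-style penalty: 1.0 is neutral (a no-op), >1.0 discourages repeats."""
    for token_idx in set(generated_ids):
        score = logits[token_idx]
        # Dividing positive scores and multiplying negative ones both lower
        # the probability of tokens that were already generated.
        logits[token_idx] = score / repetition_penalty if score > 0 else score * repetition_penalty
    return logits

logits = apply_repetition_penalty(torch.randn(50257), generated_ids=[50, 50, 7], repetition_penalty=1.2)
```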
Julien Chaumond
cf8a70bf68 More AutoConfig tests 2020-01-11 03:43:57 +00:00
Julien Chaumond
6bb3edc300 Serialize model_type if it exists 2020-01-11 03:18:56 +00:00
Julien Chaumond
c6f682c1eb flake 2020-01-11 03:18:31 +00:00
Julien Chaumond
4d1c98c012 AutoConfig + other Auto classes honor model_type 2020-01-11 02:46:17 +00:00
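
An illustrative sketch of the dispatch idea behind these mapping commits — the mapping contents and function name here are made up, not the real tables:

```python
# Illustrative mapping, not the library's actual CONFIG_MAPPING.
CONFIG_MAPPING = {
    "bert": "BertConfig",
    "roberta": "RobertaConfig",
    "t5": "T5Config",
}

def resolve_config_class(config_dict):
    """Prefer the serialized model_type over pattern-matching the model name."""
    model_type = config_dict.get("model_type")
    if model_type in CONFIG_MAPPING:
        return CONFIG_MAPPING[model_type]
    raise ValueError(f"Unrecognized model_type: {model_type!r}")

print(resolve_config_class({"model_type": "roberta"}))  # -> RobertaConfig
```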
Julien Chaumond
2f32dfd33b Convention: name mixins mixins 2020-01-11 01:24:29 +00:00
VictorSanh
e83d9f1c1d cleaning - change ' to " (black requirements) 2020-01-10 19:34:25 -05:00
VictorSanh
ebba9e929d minor spring cleaning - missing configs + processing 2020-01-10 19:14:58 -05:00
Julien Chaumond
055e80cfad rm old ConfigTester 2020-01-10 21:36:18 +00:00
Thomas Wolf
b1e1a9f9b2 Merge pull request #2495 from mschrimpf/patch-1
T5: move rp_bucket to relative_attention_bias' device
2020-01-10 22:18:54 +01:00
Julien Chaumond
fd8423321f keep list sorted 2020-01-10 20:36:46 +00:00
Julien Chaumond
0cd81fb99f [isort] declare more third-parties in case no tf install 2020-01-10 20:35:45 +00:00
Martin Schrimpf
90d3b787f6 move rp_bucket to relative_attention_bias' device
Otherwise, `rp_bucket` will always be on CPU and will fail if `self.relative_attention_bias` is on CUDA.
2020-01-10 15:09:10 -05:00
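
A minimal sketch of the device pattern this fix applies (shapes and sizes here are arbitrary):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
relative_attention_bias = nn.Embedding(32, 8).to(device)

rp_bucket = torch.randint(0, 32, (16, 16))  # fresh tensors default to CPU
rp_bucket = rp_bucket.to(relative_attention_bias.weight.device)  # the fix
values = relative_attention_bias(rp_bucket)  # now safe on CPU and CUDA alike
```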
Julien Chaumond
84c0aa1868 num_parameters helper 2020-01-10 17:40:02 +00:00
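
A sketch of what such a helper typically computes — the signature here is an assumption for illustration, not a statement of the library's API:

```python
import torch.nn as nn

def num_parameters(model: nn.Module, only_trainable: bool = False) -> int:
    """Count parameters, optionally restricted to those with requires_grad."""
    return sum(
        p.numel()
        for p in model.parameters()
        if p.requires_grad or not only_trainable
    )

print(num_parameters(nn.Linear(768, 768)))  # 768*768 + 768 = 590592
```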
Victor SANH
331065e62d missing import 2020-01-10 11:42:53 +01:00
Victor SANH
414e9e7122 indents test 2020-01-10 11:42:53 +01:00
Victor SANH
3cdb38a7c0 indents 2020-01-10 11:42:53 +01:00
Victor SANH
ebd45980a0 Align with run_squad + fix some errors 2020-01-10 11:42:53 +01:00
Victor SANH
45634f87f8 fix Sampler in distributed training - evaluation 2020-01-10 11:42:53 +01:00
Victor SANH
af1ee9e648 Move torch.nn.utils.clip_grad_norm_ 2020-01-10 11:42:53 +01:00
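
For reference, the placement such a move is usually about: `clip_grad_norm_` belongs after `backward()` (so gradients exist) and before `optimizer.step()`. A generic sketch, not the repository's training loop:

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
# Clipping must happen after backward() and before step(),
# otherwise it either sees no gradients or acts too late.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```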