Commit Graph

15053 Commits

Author SHA1 Message Date
Stefan Schweter
0b3d45eb64 camembert: add implementation for save_vocabulary method 2019-11-18 15:49:44 +01:00
Julien Chaumond
3916b334a8 [camembert] Acknowledge the full author list 2019-11-18 09:29:11 -05:00
Sebastian Stabinger
44455eb5b6 Adds CamemBERT to Model architectures list 2019-11-18 09:23:14 -05:00
Stefan Schweter
33753d9139 module: import CamembertForTokenClassification 2019-11-18 14:14:54 +01:00
Stefan Schweter
d32ce2c8df camembert: add wrapper for CamembertForTokenClassification 2019-11-18 14:14:19 +01:00
Yohei Tamura
d08a338c3b modified: transformers/modeling_utils.py 2019-11-16 18:47:37 +09:00
Julien Chaumond
0477b307c7 [camembert] tokenizer: use additional_special_tokens 2019-11-16 00:11:07 -05:00
Julien Chaumond
f9abf73e31 [camembert] realign w/ recent changes 2019-11-16 00:11:07 -05:00
Julien Chaumond
26858f27cb [camembert] Upload to s3 + rename script 2019-11-16 00:11:07 -05:00
Louis MARTIN
035fea5315 Add CamemBERT to auto files and docs 2019-11-16 00:11:07 -05:00
Louis MARTIN
694d4fcbb6 Add CamemBERT classes to __init__.py 2019-11-16 00:11:07 -05:00
Louis MARTIN
3e20c2e871 Update demo_camembert.py with new classes 2019-11-16 00:11:07 -05:00
Louis MARTIN
f12e4d8da7 Move demo_camembert.py to examples/contrib 2019-11-16 00:11:07 -05:00
Louis MARTIN
fb6c70a91d Update tokenization_camembert.py with urls 2019-11-16 00:11:07 -05:00
Louis MARTIN
e44b939e71 Add configuration_camembert.py and modeling_camembert.py 2019-11-16 00:11:07 -05:00
Louis MARTIN
6e72fd094c Add demo_camembert.py 2019-11-16 00:11:07 -05:00
Louis MARTIN
14b3aa3b3c Add tokenization_camembert.py 2019-11-16 00:11:07 -05:00
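
Taken together, the CamemBERT commits above add the model end to end: tokenizer, configuration, model classes, auto-class and doc entries, plus weights uploaded to S3. A rough usage sketch, assuming the camembert-base checkpoint and the tuple-returning API of that era:

    import torch
    from transformers import CamembertTokenizer, CamembertForMaskedLM

    tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
    model = CamembertForMaskedLM.from_pretrained("camembert-base")

    # Encode a French sentence with special tokens and score each position.
    input_ids = torch.tensor([tokenizer.encode("J'aime le camembert !", add_special_tokens=True)])
    outputs = model(input_ids)
    prediction_scores = outputs[0]  # (batch, sequence_length, vocab_size)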
Xu Hongshen
ca99a2d500 Update example readme 2019-11-15 14:55:26 +08:00
Xu Hongshen
7da3ef24cd add is_impossible tensor to model inputs during fine-tuning xlnet on squad2.0 2019-11-15 14:18:53 +08:00
Thomas Wolf
74ce8de7d8 Merge pull request #1792 from stefan-it/distilbert-for-token-classification
DistilBERT for token classification 2019-11-14 22:47:53 +01:00
Thomas Wolf
05db5bc1af added small comparison between BERT, RoBERTa and DistilBERT 2019-11-14 22:40:22 +01:00
Thomas Wolf
9629e2c676 Merge pull request #1804 from ronakice/master
fix multi-gpu eval in torch examples 2019-11-14 22:24:05 +01:00
Thomas Wolf
5b322a36db Merge pull request #1811 from huggingface/special-tokens
Fix special tokens addition in decoder #1807 2019-11-14 22:17:24 +01:00
Thomas Wolf
1a237d7f42 Merge pull request #1831 from iedmrc/gpt2-tokenization-sum-func-replacement
sum() is replaced by itertools.chain.from_iterable() 2019-11-14 22:11:54 +01:00
Thomas Wolf
df99f8c5a1 Merge pull request #1832 from huggingface/memory-leak-schedulers
replace LambdaLR scheduler wrappers by function 2019-11-14 22:10:31 +01:00
Thomas Wolf
0be9ae7b3e Merge pull request #1833 from huggingface/max-length-warning
Token indices sequence length is longer than the specified maximum sequence length for this model 2019-11-14 22:04:49 +01:00
Lysandre
be7f2aacce [CI][DOC] Don't rebuild if folder exists - Correct directory. 2019-11-14 14:54:44 -05:00
Lysandre
8f8d69716a [CI][DOC] Don't rebuild if folder exists. 2019-11-14 14:48:21 -05:00
Rémi Louf
2276bf69b7 update the examples, docs and template 2019-11-14 20:38:02 +01:00
Lysandre
d7929899da Specify checkpoint in saved file for run_lm_finetuning.py 2019-11-14 10:49:00 -05:00
Lysandre
a67e747889 Reorganized max_len warning 2019-11-14 10:30:22 -05:00
Lysandre
e18f786cd5 Quickstart example showcasing past 2019-11-14 10:06:00 -05:00
Rémi Louf
022525b003 replace LambdaLR scheduler wrappers by function
Custom schedulers are currently implemented by wrapping PyTorch's LambdaLR
class and passing a method of the wrapping class to the __init__
function of LambdaLR. This approach is not appropriate for several
reasons:

1. there is no need to define a class when it only defines an
__init__() method;
2. instantiating the parent class with a method of the child class
creates a cyclical reference, which leads to memory leaks. See issues #1742 and #1134.

In this commit we replace the wrapper classes with functions that
instantiate `LambdaLR` with a custom learning rate function. We use a
closure to specify the parameter of the latter. We also do a bit of
renaming within the function to make the behaviour explicit, and remove
docstrings that are no longer necessary.
2019-11-14 15:39:08 +01:00
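
A minimal sketch of the pattern this commit describes (the names are illustrative, not necessarily the exact ones in the repo): the old wrapper class hands one of its own bound methods to the parent __init__, creating a self-referencing cycle, while the replacement is a plain function whose closure captures the schedule's parameter.

    from torch.optim.lr_scheduler import LambdaLR

    # Before: the wrapper class passes a bound method to the parent
    # constructor, so the LambdaLR instance holds a reference back to
    # itself (self -> lr_lambdas -> bound method -> self), a cycle that
    # can leak memory.
    class WarmupConstantSchedule(LambdaLR):
        def __init__(self, optimizer, warmup_steps, last_epoch=-1):
            self.warmup_steps = warmup_steps
            super().__init__(optimizer, self.lr_lambda, last_epoch=last_epoch)

        def lr_lambda(self, step):
            if step < self.warmup_steps:
                return float(step) / float(max(1.0, self.warmup_steps))
            return 1.0

    # After: a function builds the LambdaLR directly; warmup_steps is
    # captured by the closure, so no cycle is created.
    def get_constant_schedule_with_warmup(optimizer, warmup_steps, last_epoch=-1):
        def lr_lambda(step):
            if step < warmup_steps:
                return float(step) / float(max(1.0, warmup_steps))
            return 1.0
        return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch)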
İbrahim Ethem Demirci
7627dde1f8 sum() is a concise but slow way to flatten a string list, so it's been replaced by itertools.chain.from_iterable() 2019-11-14 17:06:15 +03:00
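
For context, a small illustration of the trade-off: sum() concatenates the lists pairwise and copies the accumulator on every step, which is quadratic in the total number of elements, while itertools.chain.from_iterable() is linear.

    import itertools

    nested = [["a", "b"], ["c"], ["d", "e"]]

    # Quadratic: each + builds a brand-new list from the growing accumulator.
    flat_slow = sum(nested, [])

    # Linear: elements are yielded lazily, with a single list built at the end.
    flat_fast = list(itertools.chain.from_iterable(nested))

    assert flat_slow == flat_fast == ["a", "b", "c", "d", "e"]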
Lysandre
74d0bcb6ff Fix special tokens addition in decoder 2019-11-12 15:27:57 -05:00
Julien Chaumond
155c782a2c [inputs_embeds] All TF models + tests 2019-11-12 11:29:21 -05:00
Julien Chaumond
2aef2f0bbc [common attributes] Fix previous commit for transfo-xl 2019-11-12 11:29:21 -05:00
Julien Chaumond
2f17464266 [common attributes] Slightly sharper test coverage 2019-11-12 11:29:21 -05:00
Julien Chaumond
9d2398fd99 Ooopsie 2019-11-12 11:29:21 -05:00
Julien Chaumond
70d97ddd60 [TF models] Common attributes as per #1721 2019-11-12 11:29:21 -05:00
Julien Chaumond
872403be1c This is not a @property after all 2019-11-12 11:29:21 -05:00
Julien Chaumond
dd6b2e05e1 whitespace 2019-11-12 11:29:21 -05:00
Lysandre
d409aca326 Clarify the use of past in GPT2 and CTRL 2019-11-12 10:59:37 -05:00
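
Roughly, "past" is the cache of key/value states these models return, so generation can feed only the newly sampled token instead of re-encoding the whole prefix. A hedged sketch against the tuple-returning API of that era:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = torch.tensor([tokenizer.encode("The quick brown fox")])
    with torch.no_grad():
        logits, past = model(input_ids)[:2]  # past caches key/value states

        # Next step: feed only the new token plus the cache.
        next_token = torch.argmax(logits[:, -1, :], dim=-1).unsqueeze(-1)
        logits, past = model(next_token, past=past)[:2]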
Michael Watkins
7246d3c2f9 Consider do_lower_case in PreTrainedTokenizer
As pointed out in #1545, when using an uncased model and adding
a new uncased token, the tokenizer does not correctly identify the
token when the input text contains it in a cased form.

For instance, if we load bert-base-uncased into BertTokenizer and
then use .add_tokens() to add "cool-token", we get the expected
result for .tokenize('this is a cool-token'). However,
.tokenize('this is a cOOl-Token') gives a possibly unexpected
result, which in fact mirrors what the former returned before the
new token was added.

This commit adds
- functionality to PreTrainedTokenizer to handle this
situation when a tokenizer (currently Bert, DistilBert,
and XLNet) has the do_lower_case=True kwarg, by:
    1) lowercasing tokens added with .add_tokens()
    2) lowercasing text at the beginning of .tokenize()
- a new common test case for tokenizers

https://github.com/huggingface/transformers/issues/1545
2019-11-12 13:08:30 +02:00
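
A minimal sketch of the behaviour being fixed, assuming the API described in the commit:

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    tokenizer.add_tokens(["cool-token"])

    # Expected: the added token is matched as a single unit.
    print(tokenizer.tokenize("this is a cool-token"))

    # Before this commit, the cased spelling bypassed the added token and
    # fell back to wordpiece splitting; with the fix, do_lower_case
    # tokenizers lowercase both added tokens and input text, so the two
    # calls now agree.
    print(tokenizer.tokenize("this is a cOOl-Token"))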
ronakice
2e31176557 fix multi-gpu eval 2019-11-12 05:55:11 -05:00
thomwolf
8aba81a0b6 fix #1789 2019-11-12 08:52:43 +01:00
Stefan Schweter
94e55253ae tests: add test case for DistilBertForTokenClassification implementation 2019-11-11 16:20:15 +01:00
Stefan Schweter
2b07b9e5ee examples: add DistilBert support for NER fine-tuning 2019-11-11 16:19:34 +01:00
Stefan Schweter
1806eabf59 module: add DistilBertForTokenClassification import 2019-11-11 16:18:48 +01:00
Stefan Schweter
1c7253cc5f modeling: add DistilBertForTokenClassification implementation 2019-11-11 16:18:16 +01:00
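
The four commits above wire DistilBERT into token classification (NER). A rough usage sketch, with num_labels=9 as an illustrative placeholder for a CoNLL-style tag set:

    import torch
    from transformers import DistilBertTokenizer, DistilBertForTokenClassification

    tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
    model = DistilBertForTokenClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=9
    )

    input_ids = torch.tensor([tokenizer.encode("Hugging Face is based in New York")])
    logits = model(input_ids)[0]  # (batch, sequence_length, num_labels)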