Commit Graph

19383 Commits

Author SHA1 Message Date
LysandreJik
fbc5bf10cf v2.6.0 release: isort un-pinned 2020-03-24 11:52:02 -04:00
Manuel Romero
b88bda6af3 Add right model and tokenizer path in example 2020-03-24 11:30:12 -04:00
Stefan Schweter
b31ef225cf [model_cards] 🇹🇷 Add new (uncased, 128k) BERTurk model 2020-03-24 11:29:06 -04:00
Stefan Schweter
b4009cb001 [model_cards] 🇹🇷 Add new (cased, 128k) BERTurk model 2020-03-24 11:29:06 -04:00
Stefan Schweter
d3283490ef [model_cards] 🇹🇷 Add new (uncased) BERTurk model 2020-03-24 11:29:06 -04:00
Mohamed El-Geish
e279a312d6
Model cards for CS224n SQuAD2.0 models (#3406)
* Model cards for CS224n SQuAD2.0 models

* consistent spacing
2020-03-24 11:28:33 -04:00
Gabriele Sarti
7372e62b2c
Added details to the SciBERT-NLI model card (#3410) 2020-03-24 11:01:56 -04:00
LysandreJik
471cce24b3 Release: v2.6.0 2020-03-24 10:37:32 -04:00
Patrick von Platen
e392ba6938
Add camembert integration tests (#3375)
* add integration tests for camembert

* use jplu/tf-camembert for the moment

* make style
2020-03-24 10:18:37 +01:00
Julien Chaumond
a8e3336a85 [examples] Use AutoModels in more examples 2020-03-23 20:11:14 -04:00
Julien Chaumond
ec6766a363 [deps] scikit-learn's transient issue was fixed 2020-03-23 18:38:09 -04:00
Julien Chaumond
f7dcf8fcea [BertAbs] Move files around for more consistent naming 2020-03-23 13:58:49 -04:00
Julien Chaumond
e25c4f4027 [ALBERT] move things around for more consistent naming
see #3359

cc @lysandrejik
2020-03-23 13:58:21 -04:00
Manuel Romero
85b324bee5 Add comparison table with the bigger model in the family 2020-03-23 12:11:20 -04:00
Manuel Romero
b7aa077a63 Create card for the model 2020-03-23 12:10:41 -04:00
Manuel Romero
f740177c87 Add comparison table with new models 2020-03-23 12:10:23 -04:00
LysandreJik
e52482909b Correct order for dev/quality dependencies
cc @julien-c
2020-03-23 12:01:23 -04:00
Gabriele Sarti
28424906c2 Added scibert-nli model card 2020-03-23 11:55:41 -04:00
Julien Chaumond
18eec3a984 [ci] simpler way to load correct version of isort
hat tip @bramvanroy
2020-03-23 10:03:22 -04:00
Julien Chaumond
cf72479bf1 One last reorder of {scheduler,optimizer}.step() 2020-03-20 18:05:50 -04:00
Elijah Rippeth
634bf6cf7e fixes lr_scheduler warning
For more details, see https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
2020-03-20 18:03:50 -04:00
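For context, the ordering these two commits settle on follows the PyTorch recommendation that optimizer.step() runs before scheduler.step(). A minimal sketch with a placeholder model and schedule, not code from the repository:

```python
import torch

model = torch.nn.Linear(10, 2)                                   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: 1.0)

for _ in range(3):                                               # dummy training loop
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()       # update the weights first...
    scheduler.step()       # ...then advance the learning-rate schedule (avoids the warning)
    optimizer.zero_grad()
```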
Travis McGuire
265709f5cd New model, new model cards 2020-03-20 18:01:01 -04:00
Bram Vanroy
115abd2166 Handle pinned version of isort
The CONTRIBUTING file pins to a specific version of isort, so we might as well install that in `dev`. This makes it easier for contributors, who then don't have to install that specific commit manually.
2020-03-20 18:00:04 -04:00
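A hypothetical sketch of what such a pin looks like in a setup.py `dev` extra (the commit reference is a placeholder, not the one the repository actually used):

```python
# Hypothetical setup.py excerpt: pin isort in the "dev" extra to the same VCS
# commit that CONTRIBUTING.md asks contributors to use, so that
# `pip install -e ".[dev]"` pulls it automatically.
extras_require = {
    "dev": [
        "isort @ git+https://github.com/timothycrosley/isort.git@<pinned-commit>",
    ],
}
```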
Patrick von Platen
95e00d0808
Clean special token init in modeling_....py (#3264)
* make style

* fix conflicts
2020-03-20 21:41:04 +01:00
Nitish Shirish Keskar
8becb73293
removing torch.cuda.empty_cache() from TF function (#3267)
torch.cuda.empty_cache() was being called from a TF function (even when torch is unavailable).
Not sure any replacement is needed if TF OOMs.
2020-03-19 23:25:30 +01:00
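An illustrative sketch of the guard that avoids this class of problem (the function and flag names here are hypothetical, not the library's own):

```python
try:
    import torch
    _torch_available = True
except ImportError:
    _torch_available = False

def free_gpu_memory():
    # A TensorFlow-only code path must not assume torch is installed;
    # only flush the CUDA cache when torch is actually there.
    if _torch_available and torch.cuda.is_available():
        torch.cuda.empty_cache()
```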
Julien Chaumond
ecfd336318
Simpler Error message when loading config/model with .from_pretrained() (#3341) 2020-03-19 23:23:03 +01:00
Kyeongpil Kang
8eeefcb576
Update 01-training-tokenizers.ipynb (typo issue) (#3343)
I found two grammatical errors/typos in the explanation of the encoding properties.

The original sentences:
If your was made of multiple "parts" such as (question, context), then this would be a vector with for each token the segment it belongs to
If your has been truncated into multiple subparts because of a length limit (for BERT for example the sequence length is limited to 512), this will contain all the remaining overflowing parts.

I think "input" should be inserted after the phrase "If your".
2020-03-19 23:21:49 +01:00
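As a rough illustration of the two properties being described, here using the transformers tokenizer API of that era rather than the notebook's own code (argument names differ in later versions):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# token_type_ids: which "part" (question vs. context) each token belongs to.
enc = tokenizer.encode_plus("Who wrote it?", "The book was written in 1990.")
print(enc["token_type_ids"])        # 0s for the question, 1s for the context

# overflowing_tokens: what was cut off when the input exceeds the length limit.
enc = tokenizer.encode_plus(
    "a rather long sentence " * 200,
    max_length=32,
    return_overflowing_tokens=True,
)
print(len(enc["overflowing_tokens"]))
```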
Patrick von Platen
bbf26c4e61
Support T5 Generation (#3228)
* fix conflicts

* update bart max length test

* correct spelling mistakes

* implemented model specific encode function

* fix merge conflicts

* better naming

* save intermediate state -> need to rethink structure a bit

* leave tf problem as it is for now

* current version

* add layers.pop

* remove ipdb

* make style

* clean return cut decoding

* remove ipdbs

* Fix restoring layers in the decoders that don't exist.

* push good intermediate solution for now

* fix conflicts

* always good to refuse to merge conflicts when rebasing

* fix small bug

* improve function calls

* remove unused file

* add correct scope behavior for t5_generate

Co-authored-by: Morgan Funtowicz <funtowiczmo@gmail.com>
2020-03-19 23:18:23 +01:00
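A minimal sketch of the kind of usage this enables, written against the current transformers API; treat the model class name and checkpoint as illustrative, since the API at the time of this commit differed slightly:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode(
    "translate English to German: How old are you?", return_tensors="pt"
)
outputs = model.generate(input_ids, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```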
Julien Chaumond
656e1386a2 Fix #3305: run_ner only possible on ModelForTokenClassification models 2020-03-19 16:41:28 -04:00
husein zolkepli
0c44b11917 add bert bahasa readme 2020-03-19 15:08:19 -04:00
Manuel Romero
e99af3b17b Create model card for bert-small-finetuned-squadv2 2020-03-19 15:07:55 -04:00
Manuel Romero
39db055268
Merge pull request #3348 from mrm8488/patch-28
Create card for BERT-Mini finetuned on SQuAD v2
2020-03-19 15:07:39 -04:00
Manuel Romero
dedc7a8fdb Create card for BERT-Tiny fine-tuned on SQuAD v2
- Only 17 MB of model weights!
2020-03-19 15:07:22 -04:00
Manuel Romero
676adf8625 Created card for spanbert-finetuned-squadv1 2020-03-19 15:06:35 -04:00
Antti Virtanen
11d8bcc9d7
Add model cards for FinBERT. (#3331)
* Add a model card for FinBERT

This is a copy of https://github.com/TurkuNLP/FinBERT/blob/master/README.md.

* Added a file for uncased.

* Add metadata for cased.

* Added metadata for uncased.
2020-03-19 15:06:01 -04:00
Lysandre Debut
f049be7ad4
Export ALBERT main layer in TensorFlow (#3354) 2020-03-19 13:53:05 -04:00
Kyeongpil Kang
3bedfd3347
Fix wrong link for the notebook file (#3344)
In the tutorial "How to generate text", the link was wrong (it pointed to the tutorial "How to train a language model").

I fixed the URL.
2020-03-19 17:22:47 +01:00
Serkan Karakulak
b2c2c31c60
Minor Bug Fix for Running Roberta on Glue (#3240)
* added return_token_type_ids argument for tokenizers which do not generate token_type_ids by default

* fixed styling

* Style

Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
2020-03-19 12:08:31 -04:00
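A sketch of the argument in question, assuming the current tokenizer API: RoBERTa's tokenizer does not emit token_type_ids unless asked, so code that indexes them needs the explicit flag.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

encoding = tokenizer.encode_plus(
    "first sentence", "second sentence", return_token_type_ids=True
)
print(encoding["token_type_ids"])   # all zeros for RoBERTa, but the key now exists
```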
Sam Shleifer
4e4403c9b4
[BART] torch 1.0 compatibility (#3322)
* config.activation_function
2020-03-19 11:56:54 -04:00
mataney
c44a17db1b
[FIX] not training when epoch is small (#3006)
* solving a bug where, for small epochs and large gradient_accumulation_steps, we never train

* black formatting

* no need to change these files
2020-03-19 11:21:21 -04:00
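A hedged sketch of the failure mode described above (variable names are illustrative): when an epoch contains fewer batches than gradient_accumulation_steps, the modulo check never fires and the optimizer step is never reached.

```python
num_batches_in_epoch = 10          # "small epoch"
gradient_accumulation_steps = 16   # larger than the epoch

optimizer_steps = 0
for step in range(num_batches_in_epoch):
    # ... forward/backward, gradients accumulate ...
    if (step + 1) % gradient_accumulation_steps == 0:
        optimizer_steps += 1       # optimizer.step() would go here
print(optimizer_steps)             # 0 -> the model is never updated
```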
Sam Shleifer
ad7233fc01
[BART] cleanup: remove redundant kwargs, improve docstrings (#3319) 2020-03-19 11:16:51 -04:00
Mohamed El-Geish
cd21d8bc00
Typo in warning message (#3219)
`T5Tokenizer` instead of `XLNetTokenizer`
2020-03-19 09:49:25 -04:00
Matthew Goldey
8d3e218ea6
fix typo in docstring demonstrating usage (#3213) 2020-03-19 09:47:54 -04:00
Patrick von Platen
cec3cdda15
Fix input ids can be none attn mask (#3345)
* fix issue 3289

* fix attention mask if input_ids None behavior
2020-03-19 09:55:17 +01:00
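A sketch of the intended fallback (a hypothetical helper, not the repository's code): when only inputs_embeds is passed, the default attention mask has to be built from its shape rather than from input_ids.

```python
import torch

def default_attention_mask(input_ids=None, inputs_embeds=None):
    # Derive the (batch, seq_len) shape from whichever input was provided.
    if input_ids is not None:
        shape = input_ids.shape
    else:
        shape = inputs_embeds.shape[:-1]    # drop the hidden-size dimension
    return torch.ones(shape, dtype=torch.long)
```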
Junyi_Li
f6d813aaaa Create README.md 2020-03-18 23:45:02 -04:00
Junyi_Li
939328111b Create README.md
roberta_chinese_base card
2020-03-18 23:44:12 -04:00
Junyi_Li
29442d2edf Create README.md
albert_chinese_tiny card
2020-03-18 23:43:49 -04:00
Kyle Lo
20139b7c8d
Added model cards for SciBERT models uploaded under AllenAI org (#3330)
* Create README.md

* model card

* add model card for cased
2020-03-18 15:45:11 -04:00
Morgan Funtowicz
cae334c43c Improve fill-mask pipeline example in 03-pipelines notebook.
Remove hardcoded mask_token and use the value provided by the tokenizer.
2020-03-18 17:11:42 +01:00
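A small sketch of the pattern the notebook moves to (the model name here is just an example): ask the pipeline's tokenizer for its mask token instead of hardcoding one.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilroberta-base")

# Different models use different mask strings ("[MASK]" vs. "<mask>"),
# so take it from the tokenizer rather than hardcoding it.
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Paris is the {mask} of France."))
```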
Branden Chan
4b1970bb4c Create README.md 2020-03-18 11:37:17 -04:00