Morgan Funtowicz | a305067f2d | Removed __main__ | 2019-12-19 19:41:48 +01:00
Morgan Funtowicz | 3492a6ec17 | Addressing Thom's comments. | 2019-12-19 19:06:44 +01:00
Lysandre | 33adab2b91 | Fix albert example | 2019-12-19 12:40:43 -05:00
Lysandre | a1f1dce0ae | Correct max position for SQuAD and TFDS | 2019-12-19 12:25:55 -05:00
Francesco | 62c1fc3c1e | Removed duplicate XLMConfig, XLMForQuestionAnswering and XLMTokenizer from the import statement of the run_squad.py script | 2019-12-19 09:50:56 -05:00
Ejar | 284572efc0 | Fixed a typo in the link; updated documentation accordingly | 2019-12-19 09:36:43 -05:00
patrickvonplaten | ed6ba93912 | Corrected typo in the example for the T5 model input argument | 2019-12-19 09:34:55 -05:00
Morgan Funtowicz | 81a911cce5 | Doc, doc, ... doc. | 2019-12-19 15:12:06 +01:00
Morgan Funtowicz | faef6f6191 | Fix logic order for USE_TF/USE_TORCH | 2019-12-19 12:28:17 +01:00
Morgan Funtowicz | 5664327c24 | Hide train command for now. | 2019-12-19 12:27:54 +01:00
Morgan Funtowicz | 3b29322d4c | Expose all the pipeline arguments on the serve command. | 2019-12-19 12:24:17 +01:00
Morgan Funtowicz | fc624716aa | Renaming framework env variable flags from NO_ to USE_ | 2019-12-19 11:49:06 +01:00
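The two USE_TF/USE_TORCH commits above change how a backend is picked at import time. A minimal sketch, assuming the renamed flags are ordinary environment variables that are read when `transformers` is first imported (the accepted values shown are an assumption):

```python
import os

# Assumption: USE_TF / USE_TORCH are read from the environment when the
# `transformers` package is imported, so they must be set beforehand.
os.environ["USE_TORCH"] = "1"  # keep the PyTorch backend
os.environ["USE_TF"] = "0"     # skip importing TensorFlow

import transformers  # import deliberately placed after the flags are set

print(transformers.__version__)
```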
Morgan Funtowicz | f516cf3956 | Allow pipeline to write output in binary format | 2019-12-19 11:42:33 +01:00
Morgan Funtowicz | d72fa2a0f6 | Fix inputs_for_model call in QuestionAnsweringPipeline accessing __dict__ on list. | 2019-12-19 10:54:10 +01:00
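For context on the QuestionAnsweringPipeline fix above, a minimal usage sketch; the default model choice is left to the library and the question/context strings are illustrative only:

```python
from transformers import pipeline

# Illustrative inputs; the pipeline downloads its default QA model.
qa = pipeline("question-answering")
result = qa(
    question="Which pipeline handles extractive question answering?",
    context="The transformers library ships a QuestionAnsweringPipeline "
            "that extracts answer spans from a context paragraph.",
)
print(result)  # expected: a dict with answer, score, start and end (assumption)
```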
Morgan Funtowicz | bcc99fd92e | Fix wrong automatic config allocation through AutoConfig | 2019-12-19 10:32:21 +01:00
Stefan Schweter | a26ce4dee1 | examples: add XLM-RoBERTa to glue script | 2019-12-19 02:23:01 +01:00
Morgan Funtowicz | ec5d6c6a70 | Addressing issue with NER task omitting the first and last word. | 2019-12-19 00:12:10 +01:00
Stefan Schweter | fe9aab1055 | tokenization: use S3 location for XLM-RoBERTa model | 2019-12-18 23:47:48 +01:00
Stefan Schweter | 5c5f67a256 | modeling: use S3 location for XLM-RoBERTa model | 2019-12-18 23:47:00 +01:00
Stefan Schweter | db90e12114 | configuration: use S3 location for XLM-RoBERTa model | 2019-12-18 23:46:33 +01:00
Morgan Funtowicz | d0724d0794 | Add PipedPipelineDataFormat | 2019-12-18 23:27:26 +01:00
Morgan Funtowicz | 7711403bbd | Expose config through the CLI arguments | 2019-12-18 22:59:51 +01:00
Morgan Funtowicz | 8bb166db5d | Expose more information in the output of TextClassificationPipeline | 2019-12-18 22:53:19 +01:00
Stefan Schweter | f09d999641 | docs: fix numbering 😅 | 2019-12-18 19:49:33 +01:00
Stefan Schweter | dd7a958fd6 | docs: add XLM-RoBERTa to pretrained model list (incl. all parameters) | 2019-12-18 19:45:46 +01:00
Stefan Schweter | d35405b7a3 | docs: add XLM-RoBERTa to index page | 2019-12-18 19:45:10 +01:00
Stefan Schweter | 3e89fca543 | readme: add XLM-RoBERTa to model architecture list | 2019-12-18 19:44:23 +01:00
Stefan Schweter | 128cfdee9b | tokenization: add XLM-RoBERTa base model | 2019-12-18 19:28:16 +01:00
Stefan Schweter | e778dd854d | modeling: add XLM-RoBERTa base model | 2019-12-18 19:27:34 +01:00
Morgan Funtowicz | 04b602f96f | Put module import at the top of the module. | 2019-12-18 18:28:39 +01:00
Stefan Schweter | 64a971a915 | auto: add XLM-RoBERTa to auto tokenization | 2019-12-18 18:24:32 +01:00
Stefan Schweter | 036831e279 | auto: add XLM-RoBERTa to auto modeling | 2019-12-18 18:23:42 +01:00
Stefan Schweter | 41a13a6375 | auto: add XLM-RoBERTa to auto configuration | 2019-12-18 18:20:27 +01:00
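The three `auto:` commits above wire XLM-RoBERTa into the Auto* classes. A minimal sketch, assuming the `xlm-roberta-base` checkpoint name used elsewhere in this log resolves through them:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Assumption: "xlm-roberta-base" is registered with the Auto* mappings,
# so each factory dispatches to the XLM-RoBERTa classes.
config = AutoConfig.from_pretrained("xlm-roberta-base")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

print(type(config).__name__, type(tokenizer).__name__, type(model).__name__)
```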
Morgan Funtowicz | 0c88c856d5 | Unnest QuestionAnsweringArgumentHandler | 2019-12-18 18:18:16 +01:00
Lysandre | 8efc6dd544 | fix #2214 | 2019-12-18 10:47:59 -05:00
Gunnlaugur Thor Briem | a2978465a2 | Merge branch 'master' into patch-1 | 2019-12-18 14:54:46 +00:00
Stefan Schweter | 01b68be34f | converter: remove XLM-RoBERTa-specific script (can be done with the script for RoBERTa now) | 2019-12-18 12:24:46 +01:00
thomwolf | 3d2096f516 | further cleanup | 2019-12-18 11:50:54 +01:00
Stefan Schweter | ca31abc6d6 | tokenization: *align* fairseq and spm vocab to fix some tokenization errors | 2019-12-18 11:36:54 +01:00
thomwolf | 8e5587fb79 | few fixes on sampling | 2019-12-18 11:32:37 +01:00
Stefan Schweter | cce3089b65 | Merge remote-tracking branch 'upstream/master' into xlmr | 2019-12-18 11:05:16 +01:00
thomwolf | 641a8decdc | clean up code and add support for an arbitrary number of return sequences | 2019-12-18 10:43:48 +01:00
Morgan Funtowicz | e347725d8c | More fine-grained control over pipeline creation with config argument. | 2019-12-18 10:41:24 +01:00
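A sketch of the finer-grained pipeline creation mentioned in the commit above, assuming `pipeline()` accepts an explicit `config` alongside the model name; the checkpoint used here is illustrative only:

```python
from transformers import AutoConfig, pipeline

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint

# Assumption: the config object can be built (and tweaked) separately,
# then handed to pipeline() instead of being derived from the model name.
config = AutoConfig.from_pretrained(model_name)
nlp = pipeline("sentiment-analysis", model=model_name, config=config)

print(nlp("Transformers pipelines are easy to use."))
```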
Julien Chaumond | 94c99db34c | [FinBERT] fix incorrect URL | 2019-12-17 20:35:25 -05:00
Julien Chaumond | 7ffa817390 | [s3] mv files and update links | 2019-12-17 20:35:25 -05:00
Antti Virtanen | c5f35e61db | Uploaded files to AWS. | 2019-12-17 20:35:25 -05:00
Antti Virtanen | abc43ffbff | Add pretrained model documentation for FinBERT. | 2019-12-17 20:35:25 -05:00
Antti Virtanen | 8ac840ff87 | Adding Finnish BERT. | 2019-12-17 20:35:25 -05:00
Julien Chaumond | a0d386455b | Fix outdated tokenizer doc | 2019-12-17 20:07:39 -05:00
Julien Chaumond | ea636440d1 | [roberta.conversion] Do not hardcode vocab size; add support for fairseq 0.9+ | 2019-12-17 18:12:22 -05:00