Author | Commit | Message | Date
Morgan Funtowicz | d7c62661a3 | Provide serving dependencies for tensorflow and pytorch (serving-tf, serving-torch) | 2019-12-17 11:23:39 +01:00
Stefan Schweter | f349826a57 | model: fix cls and sep token for XLM-RoBERTa documentation | 2019-12-17 10:36:04 +01:00
Thomas Wolf | f061606277 | Merge pull request #2164 from huggingface/cleanup-configs: [SMALL BREAKING CHANGE] Cleaning up configuration classes - Adding Model Cards | 2019-12-17 09:10:16 +01:00
erenup | 805c21aeba | tried to fix the failed checks | 2019-12-17 11:36:00 +08:00
erenup | d000195ee6 | add comment for example_index and unique_id in single process | 2019-12-17 11:28:34 +08:00
erenup | 3c6efd0ca3 | updated usage example in modeling_roberta for question answering | 2019-12-17 11:18:12 +08:00
Julien Chaumond | 3f5ccb183e | [doc] Clarify uploads; cf 855ff0e91d (commitcomment-36452545) | 2019-12-16 18:20:29 -05:00
thomwolf | 3cb51299c3 | Fix #2109 | 2019-12-16 16:58:44 -05:00
Lysandre | 18a879f475 | fix #2180 | 2019-12-16 16:44:29 -05:00
Lysandre | d803409215 | Fix run squad evaluate during training | 2019-12-16 16:31:38 -05:00
thomwolf | a468870fd2 | refactoring generation | 2019-12-16 22:22:30 +01:00
Julien Chaumond | 855ff0e91d | [doc] Model upload and sharing; ping @lysandrejik @thomwolf: Is this clear enough? Anything we should add? | 2019-12-16 12:42:22 -05:00
Stefan Schweter | d064009b72 | converter: fix vocab size | 2019-12-16 17:23:25 +01:00
Stefan Schweter | a701a0cee1 | configuration: fix model name for large XLM-RoBERTa model | 2019-12-16 17:17:56 +01:00
Stefan Schweter | 59a1aefb1c | tokenization: add support for new XLM-RoBERTa model. Add wrapper around fairseq tokenization logic | 2019-12-16 17:00:55 +01:00
Stefan Schweter | 69f4f058fa | model: add support for new XLM-RoBERTa model | 2019-12-16 17:00:12 +01:00
Stefan Schweter | a648ff738c | configuration: add support for XLM-RoBERTa model | 2019-12-16 16:47:39 +01:00
Stefan Schweter | 9ed09cb4a3 | converter: add conversion script for original XLM-RoBERTa weights to Transformers-compatible weights | 2019-12-16 16:46:58 +01:00
Stefan Schweter | d3549b66af | module: add support for XLM-RoBERTa (__init__) | 2019-12-16 16:38:39 +01:00
Morgan Funtowicz | a096e2a88b | WIP serving through HTTP internally using pipelines. | 2019-12-16 16:38:02 +01:00
Stefan Schweter | 71b4750517 | examples: add support for XLM-RoBERTa to run_ner script | 2019-12-16 16:37:27 +01:00
Morgan Funtowicz | 43a4e1bbe4 | Addressing issue in varargs handling for question answering. | 2019-12-16 16:00:41 +01:00
Morgan Funtowicz | 46ccbb42fc | Make CLI run command use integer mapping for device argument. | 2019-12-16 15:49:41 +01:00
Morgan Funtowicz | bbc707cf39 | Fix non-keyworded varargs handling in DefaultArgumentHandler for pipeline. | 2019-12-16 15:49:09 +01:00
Morgan Funtowicz | 9c391277cc | Allow tensor placement on a specific device through CLI and pipeline. | 2019-12-16 15:19:13 +01:00
thomwolf | 1bbdbacd5b | update __init__ and saving | 2019-12-16 14:38:20 +01:00
Morgan Funtowicz | 955d7ecb57 | Refactored Pipeline with dedicated argument handler. | 2019-12-16 14:34:54 +01:00
thomwolf | 031ad4eb37 | improving JSON error messages (for model card and configurations) | 2019-12-16 14:20:57 +01:00
thomwolf | db0a9ee6e0 | adding albert to TF auto models cc @LysandreJik | 2019-12-16 14:08:08 +01:00
thomwolf | a4d07b983a | dict of all config and model files cc @LysandreJik | 2019-12-16 14:00:32 +01:00
thomwolf | d3418a94ff | update tests | 2019-12-16 13:52:41 +01:00
thomwolf | 56e98ba81a | add model cards cc @mfuntowicz | 2019-12-16 11:07:27 +01:00
thomwolf | 8669598abd | update t5 tf | 2019-12-16 09:59:36 +01:00
thomwolf | 1b8613acb3 | updating t5 config class | 2019-12-16 09:51:42 +01:00
Morgan Funtowicz | 8e3b1c860f | Added FeatureExtraction pipeline. | 2019-12-15 01:37:52 +01:00
Morgan Funtowicz | f1971bf303 | Binding pipelines to the cli. | 2019-12-15 01:37:16 +01:00
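Commits 8e3b1c860f and f1971bf303 above add a FeatureExtraction pipeline and expose pipelines through the CLI. As a hedged usage sketch of the feature-extraction task via the Python API (assuming a transformers version where the pipeline factory and this task name exist; the default backing model is downloaded on first use):

```python
# Sketch: pull per-token hidden states out of a model with the feature-extraction task.
import numpy as np
from transformers import pipeline

extractor = pipeline("feature-extraction")
features = extractor("Pipelines make quick experiments easy.")

# features is a nested Python list of token embeddings; the exact nesting
# (with or without a leading batch dimension) varies across library versions.
print(np.array(features).shape)
```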
Pascal Voitot | cc0135134b | :zip: #2106 basic tokenizer.tokenize global speed improvement (3-8x) by simply caching added_tokens in a Set | 2019-12-14 15:25:13 +01:00
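The 3-8x speed-up in cc0135134b comes from replacing repeated membership checks against a list of added tokens with lookups in a cached set. The snippet below only illustrates that general idea; it is not the library's tokenizer code, and the names added_tokens_set and split_on_added_tokens are hypothetical.

```python
# Simplified illustration of the caching idea: membership tests against a set are O(1),
# while scanning a list is O(n), which adds up when tokenize() runs once per example.
# All names here are hypothetical; this is not transformers' tokenizer implementation.
added_tokens = ["<special_1>", "<special_2>", "[EXTRA_MASK]"]
added_tokens_set = set(added_tokens)  # built once, reused on every tokenize() call


def split_on_added_tokens(text: str) -> list:
    """Whitespace-split the text and flag pieces that are registered added tokens."""
    pieces = []
    for piece in text.split():
        is_added = piece in added_tokens_set  # O(1) set lookup instead of a list scan
        pieces.append((piece, is_added))
    return pieces


print(split_on_added_tokens("hello <special_1> world"))
```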
thomwolf | dc667ce1a7 | double check cc @LysandreJik | 2019-12-14 09:56:27 +01:00
thomwolf | 7140363e09 | update bertabs | 2019-12-14 09:44:53 +01:00
Thomas Wolf | a52d56c8d9 | Merge branch 'master' into cleanup-configs | 2019-12-14 09:43:07 +01:00
Thomas Wolf | e92bcb7eb6 | Merge pull request #1739 from huggingface/t5: [WIP] Adding Google T5 model | 2019-12-14 09:40:43 +01:00
thomwolf | cbb368ca06 | distilbert tests | 2019-12-14 09:31:18 +01:00
Julien Chaumond | b6d4284b26 | [cli] Uploads: fix + test edge case | 2019-12-13 22:44:57 -05:00
erenup | a1faaf9962 | deleted useless file | 2019-12-14 08:57:13 +08:00
erenup | c7780700f5 | Merge branch 'refs/heads/squad_roberta' (conflicts: transformers/data/processors/squad.py) | 2019-12-14 08:53:59 +08:00
erenup | 76f0d99f02 | Merge remote-tracking branch 'refs/remotes/huggingface/master' | 2019-12-14 08:45:17 +08:00
erenup | 8e9526b4b5 | add multiple processing | 2019-12-14 08:43:58 +08:00
Lysandre | 7bd11dda6f | Release: v2.2.2 | 2019-12-13 16:45:30 -05:00
LysandreJik | c3248cf122 | Tests for all tokenizers | 2019-12-13 16:41:44 -05:00
Pascal Voitot | f2ac50cb55 | better for python2.x | 2019-12-13 16:41:44 -05:00