mirror of
https://github.com/huggingface/transformers.git
synced 2025-07-24 14:58:56 +06:00

* Implemented fast version of tokenizers
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Bumped tokenizers version requirements to latest 0.2.1
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Added matching tests
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Matching OpenAI GPT tokenization!
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Matching GPT2 on tokenizers
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Expose add_prefix_space as constructor parameter for GPT2
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
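A minimal usage sketch for the new parameter (the checkpoint name is illustrative):

    from transformers import GPT2TokenizerFast

    # add_prefix_space prepends a space to the input so the first word is
    # encoded the same way it would be mid-sentence.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2", add_prefix_space=True)
    ids = tokenizer.encode("Hello world")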
* Matching Roberta tokenization!
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Removed fast implementation of CTRL.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Binding TransformerXL tokenizers to Rust.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Updating tests accordingly.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Added tokenizers as top-level modules.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Black & isort.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Rename LookupTable to WordLevel to match Rust side.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Black.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Use "fast" suffix instead of "ru" for rust tokenizers implementations.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Introduce tokenize() method on fast tokenizers.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
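Usage sketch, mirroring what the Python tokenizers already offer:

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    # Returns the list of token strings, just like the slow tokenizers do.
    tokens = tokenizer.tokenize("Hello world")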
* encode_plus dispatches to batch_encode_plus
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* batch_encode_plus now dispatches to encode if there is only one input element.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Bind all the encode_plus parameters to the forwarded batch_encode_plus call.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
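A simplified sketch of the dispatch described above, as it might look inside PreTrainedTokenizerFast (hypothetical simplification, not the actual implementation):

    def encode_plus(self, text, text_pair=None, **kwargs):
        # Wrap the single input and forward every parameter to the batched path.
        batched_input = [(text, text_pair)] if text_pair is not None else [text]
        return self.batch_encode_plus(batched_input, **kwargs)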
* Bump tokenizers dependency to 0.3.0
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Formatting.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Fix tokenization_auto with support for new (python, fast) mapping schema.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Give correct fixtures path in test_tokenization_fast.py for the CLI.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Expose max_len_ properties on BertTokenizerFast
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Move max_len_ properties to PreTrainedTokenizerFast and override in specific subclasses.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* _convert_encoding should keep the tensor's batch axis even if there is only one sample in the batch.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Add warning message for RobertaTokenizerFast if used for MLM.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
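The warning exists because RoBERTa's BPE is space-sensitive; a hedged example of opting into the prefix-space behavior when building masked inputs:

    from transformers import RobertaTokenizerFast

    # Without add_prefix_space=True, a mid-sentence <mask> may not align with
    # the tokenization RoBERTa saw during pretraining.
    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)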
* Added use_fast (bool) parameter on AutoTokenizer.from_pretrained().
This makes it easy to enable/disable the Rust-based tokenizer instantiation.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
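Example:

    from transformers import AutoTokenizer

    # use_fast=True selects the Rust-backed tokenizer when one exists for the
    # checkpoint; use_fast=False keeps the pure-Python implementation.
    fast = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
    slow = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)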
* Let tokenizers handle all the truncation and padding logic.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Allow providing tokenizer arguments during pipeline creation.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
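As the tests below illustrate, the tokenizer can now be given as an (identifier, kwargs) tuple:

    from transformers import pipeline

    # Disable the fast tokenizer for this pipeline only.
    nlp = pipeline(
        task="fill-mask",
        model="distilroberta-base",
        tokenizer=("distilroberta-base", {"use_fast": False}),
    )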
* Update test_fill_mask pipeline to not use fast tokenizers.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Fix too many parameters passed to convert_encoding.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* When enabling padding, max_length should be set to None.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Avoid returning nested tensors of length 1 when calling encode_plus
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Ensure output is padded when return_tensors is not None.
Tensor creation requires the initial list inputs to be of exactly the same size.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Disable transfoxl unittest if PyTorch is not available (it is required to load the model)
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* encode_plus should not remove the leading batch axis if return_tensors is set
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Temporarily disable fast tokenizers on QA pipelines.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Fix formatting issues.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Update tokenizers to 0.4.0
* Update style
* Enable truncation + stride unit test on fast tokenizers.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Add unittest ensuring special_tokens set match between Python and Rust.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Ensure special_tokens are correctly set during construction.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Give more warning feedback to the user in case of padding without pad_token.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
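A sketch of the usual remedy, assuming a checkpoint that ships without a padding token:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    if tokenizer.pad_token is None:
        # Register a pad token before requesting padded batches, otherwise the
        # warning introduced here is emitted.
        tokenizer.add_special_tokens({"pad_token": "[PAD]"})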
* quality & format.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Added the possibility to add a single token as a str
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Added unittest for add_tokens and add_special_tokens on fast tokenizers.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
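Both forms now work:

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    tokenizer.add_tokens("new_token")             # single token as a plain str
    tokenizer.add_tokens(["token_a", "token_b"])  # list form, as before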
* Fix rebase mismatch on the pipelines QA default model.
QA requires cased input, while the tokenizers would be uncased.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Using offset mapping relative to the original string + unittest.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: save_vocabulary requires folder and file name
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Simplify import for Bert.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: truncate_and_pad disables padding according to the same heuristic as the one enabling padding.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Remove private member access in tokenize()
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Bump tokenizers dependency to 0.4.2
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* format & quality.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Use named arguments when applicable.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Add Github link to Roberta/GPT2 space issue on masked input.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Move max_len_single_sentence / max_len_sentences_pair to PreTrainedTokenizerFast + tests.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
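For reference, a hedged illustration of the relationship these properties encode, assuming the max_len attribute exposed by the library at this point:

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    # Usable length = model max length minus the special tokens added around
    # one sequence or a pair of sequences.
    assert tokenizer.max_len_single_sentence == tokenizer.max_len - tokenizer.num_special_tokens_to_add(pair=False)
    assert tokenizer.max_len_sentences_pair == tokenizer.max_len - tokenizer.num_special_tokens_to_add(pair=True)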
* Addressing review comment: Relax type checking to include tuple and list object.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Addressing review comment: Document the truncate_and_pad manager behavior.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
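A simplified sketch of the documented manager behavior, written against the tokenizers-library backend API (hypothetical helper, not the actual implementation): it applies the requested settings for the duration of the encoding call, then restores a clean state.

    from contextlib import contextmanager

    @contextmanager
    def truncate_and_pad(backend_tokenizer, max_length=None, pad_id=None):
        # Enable truncation/padding on the Rust backend, yield for encoding,
        # then undo both so the tokenizer state is left untouched.
        if max_length is not None:
            backend_tokenizer.enable_truncation(max_length)
        if pad_id is not None:
            backend_tokenizer.enable_padding(pad_id=pad_id)
        try:
            yield backend_tokenizer
        finally:
            backend_tokenizer.no_truncation()
            backend_tokenizer.no_padding()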
* Raise an exception if return_offsets_mapping is not available with the current tokenizer.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
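Offsets are only available on the fast tokenizers, so requesting them from a Python tokenizer now raises instead of failing silently. Usage on a fast tokenizer:

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    encoding = tokenizer.encode_plus("Hello world", return_offsets_mapping=True)
    # Each entry is a (start, end) character span into the original string.
    print(encoding["offset_mapping"])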
* Ensure padding is set on the tokenizers before setting any padding strategy + unittest.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* On PyTorch we need to stack tensors to get a proper new axis.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Generalize tests to different frameworks, removing hard-coded return_tensors="..."
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Bump tokenizers dependency for num_special_tokens_to_add
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Overflowing tokens in batch_encode_plus are now stacked over the batch axis.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Improved error message for padding strategy without pad token.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Bumping tokenizers dependency to 0.5.0 for release.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Optimizing convert_encoding: around a 4x improvement. 🚀
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Expose pad_to_max_length in encode_plus to avoid duplicating the parameter in kwargs
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
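Example:

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    # pad_to_max_length is now an explicit parameter instead of living in **kwargs.
    encoding = tokenizer.encode_plus("Hello world", max_length=16, pad_to_max_length=True)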
* Generate a proper overflow_to_sampling_mapping when return_overflowing_tokens is True.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
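A hedged sketch of the mapping (argument names follow the API as of this PR): when long inputs overflow into several chunks, each chunk records the index of the sample it came from.

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    encoding = tokenizer.batch_encode_plus(
        ["a very long sentence " * 50, "short one"],
        max_length=32,
        stride=8,
        return_overflowing_tokens=True,
        pad_to_max_length=True,
    )
    # e.g. [0, 0, ..., 0, 1]: chunk i of input_ids came from the sample at
    # index overflow_to_sampling_mapping[i].
    print(encoding["overflow_to_sampling_mapping"])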
* Fix unittests for overflow_to_sampling_mapping not being returned as tensor.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Format & quality.
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Remove perfect alignment constraint for Roberta (allowing 1% difference max)
Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Triggering final CI
Co-authored-by: MOI Anthony <xn1t0x@gmail.com>
307 lines
12 KiB
Python
import unittest
from typing import Iterable, List, Optional

from transformers import pipeline
from transformers.pipelines import Pipeline

from .utils import require_tf, require_torch

QA_FINETUNED_MODELS = [
    (("bert-base-uncased", {"use_fast": False}), "bert-large-uncased-whole-word-masking-finetuned-squad", None),
    (("bert-base-cased", {"use_fast": False}), "bert-large-cased-whole-word-masking-finetuned-squad", None),
    (("bert-base-cased", {"use_fast": False}), "distilbert-base-cased-distilled-squad", None),
]

TF_QA_FINETUNED_MODELS = [
    (("bert-base-uncased", {"use_fast": False}), "bert-large-uncased-whole-word-masking-finetuned-squad", None),
    (("bert-base-cased", {"use_fast": False}), "bert-large-cased-whole-word-masking-finetuned-squad", None),
    (("bert-base-cased", {"use_fast": False}), "distilbert-base-cased-distilled-squad", None),
]

TF_NER_FINETUNED_MODELS = {
    (
        "bert-base-cased",
        "dbmdz/bert-large-cased-finetuned-conll03-english",
        "dbmdz/bert-large-cased-finetuned-conll03-english",
    )
}

NER_FINETUNED_MODELS = {
    (
        "bert-base-cased",
        "dbmdz/bert-large-cased-finetuned-conll03-english",
        "dbmdz/bert-large-cased-finetuned-conll03-english",
    )
}

FEATURE_EXTRACT_FINETUNED_MODELS = {
    ("bert-base-cased", "bert-base-cased", None),
    # ('xlnet-base-cased', 'xlnet-base-cased', None), # Disabled for now as it crashes on TF2
    ("distilbert-base-cased", "distilbert-base-cased", None),
}

TF_FEATURE_EXTRACT_FINETUNED_MODELS = {
    ("bert-base-cased", "bert-base-cased", None),
    # ('xlnet-base-cased', 'xlnet-base-cased', None), # Disabled for now as it crashes on TF2
    ("distilbert-base-cased", "distilbert-base-cased", None),
}

TF_TEXT_CLASSIF_FINETUNED_MODELS = {
    (
        "bert-base-uncased",
        "distilbert-base-uncased-finetuned-sst-2-english",
        "distilbert-base-uncased-finetuned-sst-2-english",
    )
}

TEXT_CLASSIF_FINETUNED_MODELS = {
    (
        "bert-base-uncased",
        "distilbert-base-uncased-finetuned-sst-2-english",
        "distilbert-base-uncased-finetuned-sst-2-english",
    )
}

FILL_MASK_FINETUNED_MODELS = [
    (("distilroberta-base", {"use_fast": False}), "distilroberta-base", None),
]

TF_FILL_MASK_FINETUNED_MODELS = [
    (("distilroberta-base", {"use_fast": False}), "distilroberta-base", None),
]

class MonoColumnInputTestCase(unittest.TestCase):
    def _test_mono_column_pipeline(
        self,
        nlp: Pipeline,
        valid_inputs: List,
        invalid_inputs: List,
        output_keys: Iterable[str],
        expected_multi_result: Optional[List] = None,
        expected_check_keys: Optional[List[str]] = None,
    ):
        self.assertIsNotNone(nlp)

        mono_result = nlp(valid_inputs[0])
        self.assertIsInstance(mono_result, list)
        self.assertIsInstance(mono_result[0], (dict, list))

        if isinstance(mono_result[0], list):
            mono_result = mono_result[0]

        for key in output_keys:
            self.assertIn(key, mono_result[0])

        multi_result = [nlp(input) for input in valid_inputs]
        self.assertIsInstance(multi_result, list)
        self.assertIsInstance(multi_result[0], (dict, list))

        if expected_multi_result is not None:
            for result, expect in zip(multi_result, expected_multi_result):
                for key in expected_check_keys or []:
                    self.assertEqual(
                        set([o[key] for o in result]), set([o[key] for o in expect]),
                    )

        if isinstance(multi_result[0], list):
            multi_result = multi_result[0]

        for result in multi_result:
            for key in output_keys:
                self.assertIn(key, result)

        self.assertRaises(Exception, nlp, invalid_inputs)

    @require_torch
    def test_ner(self):
        mandatory_keys = {"entity", "word", "score"}
        valid_inputs = ["HuggingFace is solving NLP one commit at a time.", "HuggingFace is based in New-York & Paris"]
        invalid_inputs = [None]
        for tokenizer, model, config in NER_FINETUNED_MODELS:
            nlp = pipeline(task="ner", model=model, config=config, tokenizer=tokenizer)
            self._test_mono_column_pipeline(nlp, valid_inputs, invalid_inputs, mandatory_keys)

    @require_tf
    def test_tf_ner(self):
        mandatory_keys = {"entity", "word", "score"}
        valid_inputs = ["HuggingFace is solving NLP one commit at a time.", "HuggingFace is based in New-York & Paris"]
        invalid_inputs = [None]
        for tokenizer, model, config in TF_NER_FINETUNED_MODELS:
            nlp = pipeline(task="ner", model=model, config=config, tokenizer=tokenizer, framework="tf")
            self._test_mono_column_pipeline(nlp, valid_inputs, invalid_inputs, mandatory_keys)

    @require_torch
    def test_sentiment_analysis(self):
        mandatory_keys = {"label", "score"}
        valid_inputs = ["HuggingFace is solving NLP one commit at a time.", "HuggingFace is based in New-York & Paris"]
        invalid_inputs = [None]
        for tokenizer, model, config in TEXT_CLASSIF_FINETUNED_MODELS:
            nlp = pipeline(task="sentiment-analysis", model=model, config=config, tokenizer=tokenizer)
            self._test_mono_column_pipeline(nlp, valid_inputs, invalid_inputs, mandatory_keys)

    @require_tf
    def test_tf_sentiment_analysis(self):
        mandatory_keys = {"label", "score"}
        valid_inputs = ["HuggingFace is solving NLP one commit at a time.", "HuggingFace is based in New-York & Paris"]
        invalid_inputs = [None]
        for tokenizer, model, config in TF_TEXT_CLASSIF_FINETUNED_MODELS:
            nlp = pipeline(task="sentiment-analysis", model=model, config=config, tokenizer=tokenizer, framework="tf")
            self._test_mono_column_pipeline(nlp, valid_inputs, invalid_inputs, mandatory_keys)

    @require_torch
    def test_feature_extraction(self):
        valid_inputs = ["HuggingFace is solving NLP one commit at a time.", "HuggingFace is based in New-York & Paris"]
        invalid_inputs = [None]
        for tokenizer, model, config in FEATURE_EXTRACT_FINETUNED_MODELS:
            nlp = pipeline(task="feature-extraction", model=model, config=config, tokenizer=tokenizer)
            self._test_mono_column_pipeline(nlp, valid_inputs, invalid_inputs, {})

    @require_tf
    def test_tf_feature_extraction(self):
        valid_inputs = ["HuggingFace is solving NLP one commit at a time.", "HuggingFace is based in New-York & Paris"]
        invalid_inputs = [None]
        for tokenizer, model, config in TF_FEATURE_EXTRACT_FINETUNED_MODELS:
            nlp = pipeline(task="feature-extraction", model=model, config=config, tokenizer=tokenizer, framework="tf")
            self._test_mono_column_pipeline(nlp, valid_inputs, invalid_inputs, {})

    @require_torch
    def test_fill_mask(self):
        mandatory_keys = {"sequence", "score", "token"}
        valid_inputs = [
            "My name is <mask>",
            "The largest city in France is <mask>",
        ]
        invalid_inputs = [None]
        expected_multi_result = [
            [
                {"sequence": "<s> My name is:</s>", "score": 0.009954338893294334, "token": 35},
                {"sequence": "<s> My name is John</s>", "score": 0.0080940006300807, "token": 610},
            ],
            [
                {
                    "sequence": "<s> The largest city in France is Paris</s>",
                    "score": 0.3185044229030609,
                    "token": 2201,
                },
                {
                    "sequence": "<s> The largest city in France is Lyon</s>",
                    "score": 0.21112334728240967,
                    "token": 12790,
                },
            ],
        ]
        for tokenizer, model, config in FILL_MASK_FINETUNED_MODELS:
            nlp = pipeline(task="fill-mask", model=model, config=config, tokenizer=tokenizer, topk=2)
            self._test_mono_column_pipeline(
                nlp,
                valid_inputs,
                invalid_inputs,
                mandatory_keys,
                expected_multi_result=expected_multi_result,
                expected_check_keys=["sequence"],
            )

    @require_tf
    def test_tf_fill_mask(self):
        mandatory_keys = {"sequence", "score", "token"}
        valid_inputs = [
            "My name is <mask>",
            "The largest city in France is <mask>",
        ]
        invalid_inputs = [None]
        expected_multi_result = [
            [
                {"sequence": "<s> My name is:</s>", "score": 0.009954338893294334, "token": 35},
                {"sequence": "<s> My name is John</s>", "score": 0.0080940006300807, "token": 610},
            ],
            [
                {
                    "sequence": "<s> The largest city in France is Paris</s>",
                    "score": 0.3185044229030609,
                    "token": 2201,
                },
                {
                    "sequence": "<s> The largest city in France is Lyon</s>",
                    "score": 0.21112334728240967,
                    "token": 12790,
                },
            ],
        ]
        for tokenizer, model, config in TF_FILL_MASK_FINETUNED_MODELS:
            nlp = pipeline(task="fill-mask", model=model, config=config, tokenizer=tokenizer, framework="tf", topk=2)
            self._test_mono_column_pipeline(
                nlp,
                valid_inputs,
                invalid_inputs,
                mandatory_keys,
                expected_multi_result=expected_multi_result,
                expected_check_keys=["sequence"],
            )

class MultiColumnInputTestCase(unittest.TestCase):
    def _test_multicolumn_pipeline(self, nlp, valid_inputs: list, invalid_inputs: list, output_keys: Iterable[str]):
        self.assertIsNotNone(nlp)

        mono_result = nlp(valid_inputs[0])
        self.assertIsInstance(mono_result, dict)

        for key in output_keys:
            self.assertIn(key, mono_result)

        multi_result = nlp(valid_inputs)
        self.assertIsInstance(multi_result, list)
        self.assertIsInstance(multi_result[0], dict)

        for result in multi_result:
            for key in output_keys:
                self.assertIn(key, result)

        self.assertRaises(Exception, nlp, invalid_inputs[0])
        self.assertRaises(Exception, nlp, invalid_inputs)

    @require_torch
    def test_question_answering(self):
        mandatory_output_keys = {"score", "answer", "start", "end"}
        valid_samples = [
            {"question": "Where was HuggingFace founded ?", "context": "HuggingFace was founded in Paris."},
            {
                "question": "In what field is HuggingFace working ?",
                "context": "HuggingFace is a startup based in New-York founded in Paris which is trying to solve NLP.",
            },
        ]
        invalid_samples = [
            {"question": "", "context": "This is a test to try the empty question edge case"},
            {"question": None, "context": "This is a test to try the empty question edge case"},
            {"question": "What does it do with an empty context ?", "context": ""},
            {"question": "What does it do with an empty context ?", "context": None},
        ]

        for tokenizer, model, config in QA_FINETUNED_MODELS:
            nlp = pipeline(task="question-answering", model=model, config=config, tokenizer=tokenizer)
            self._test_multicolumn_pipeline(nlp, valid_samples, invalid_samples, mandatory_output_keys)

    @require_tf
    @unittest.skip("This test is failing intermittently. Skipping it until we resolve.")
    def test_tf_question_answering(self):
        mandatory_output_keys = {"score", "answer", "start", "end"}
        valid_samples = [
            {"question": "Where was HuggingFace founded ?", "context": "HuggingFace was founded in Paris."},
            {
                "question": "In what field is HuggingFace working ?",
                "context": "HuggingFace is a startup based in New-York founded in Paris which is trying to solve NLP.",
            },
        ]
        invalid_samples = [
            {"question": "", "context": "This is a test to try the empty question edge case"},
            {"question": None, "context": "This is a test to try the empty question edge case"},
            {"question": "What does it do with an empty context ?", "context": ""},
            {"question": "What does it do with an empty context ?", "context": None},
        ]

        for tokenizer, model, config in TF_QA_FINETUNED_MODELS:
            nlp = pipeline(task="question-answering", model=model, config=config, tokenizer=tokenizer, framework="tf")
            self._test_multicolumn_pipeline(nlp, valid_samples, invalid_samples, mandatory_output_keys)