Mirror of https://github.com/huggingface/transformers.git, synced 2025-08-02 03:01:07 +06:00

* simplify loop
* add feature extractor
* add model
* start conversion
* add dropout
* initial commit of test files
* conversion for all models
* update processor for correct padding
* update feature extraction
* update integration test logits match
* fmt: off for the logits
* on the fly mel bank
* small nit
* update test
* update tokenizer
* nit feature extraction
* update
* update tokenizer test
* adds logit processor and updates tokenizer to get suppress tokens
* style
* clean convert
* revert to original modeling tf utils
* Update
* update
* nit
* clean convert file
* update tests and nits
* quality
* slow generation test
* ffn_dim to allow customization
* update readme
* add to toctree
* start fixing integration tests
* update tests and code
* fix feature extractor
* fix config tests common
* update code to fix tests
* fix feature extractor
* nit feature extraction
* update test for new feature extractor
* style
* add abstract
* large logits with custom decoder input ids
* wrap around is_torch_available
* fix feature extractor
* correct logits for whisper small.en
* nit
* fix encoder_attention_mask
* some fixes
* remove unnecessary inputs
* nits
* add normalizer file
* update test tokenization
* fix attention mask not defined
* Add model to README
* Fix doc tests
* fix generate
* remove useless encoder attention mask
* update test modeling whisper
* update config to add second non-suppress tokens
* nits on feature extractor
* nit for test tokenizers
* update tests
* update tests
* update tokenization test
* fixup
* invalidated hf token; clean convert openai to whisper
* fix logit tests
* fixup
* clean merge
* revert toc_tree changes
* remove useless LogitProcessor
* Update whisper.mdx
* update config file doc
* update configuration docstring
* update test tokenization
* update test tokenization
* update tokenization whisper; added "Copied from" where needed
* update feature extraction
* nit test name
* style
* quality
* remove get suppress tokens and update non_speech tokens global variables
* Update src/transformers/models/whisper/feature_extraction_whisper.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* clean modeling whisper and tests; removed the deprecated attention mask arguments
* fix large test
* Add multilingual audio test, and translate test
* style
* fix large multilingual test
* nits
* Update docs/source/en/model_doc/whisper.mdx Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add copied from for attention layer
* remove attention masks in doc
* add english normalizer
* update tokenization test
* remove copied from in whisper attention: no bias in k_proj only
* wrap around dependencies in english normalizer
* style
* correct import generation logits
* for now, wrap feature extractor with torch
* Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update docs/source/en/model_doc/whisper.mdx Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* remove torch dependencies for feature extraction and style
* fixup
* nit
* update logits
* style
* nit
* nits and fix final tests
* add `is_more_itertools_available` to utils
* quality
* add begin suppress tokens and suppress tokens to generate args and config
* clean supressTokensLogitProcessor in generation logits
* Nit naming
* add supressTokensAtBegin
* update tests, suppress tokens to None or correct values
* nit and style
* update RAG to fit test and generate_logit
* add copy-pasted statement on english normalizer
* add arguments to config_common_kwargs
* Update src/transformers/generation_utils.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/generation_logits_process.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* revert changes based on reviews
* update doc and nits
* more nits
* last nits
* update test configuration common
* add BART name in decoder attention mask documentation
* Update src/transformers/models/whisper/modeling_whisper.py Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* style
* nit
* nit
* add english.json file to git
* nits on documentation
* nit
* nits
* last styling
* add main toctree file
* remove sentencepiece dependency
* clean init file
* fix tokenizer so it has no dependency on sentencepiece
* update whisper init file, nit
* remove english.json file
* add get decoder prompt id
* revert changes and add forced logit processor
* nit
* clean normalizer
* remove protected
* update
* Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* update based on review
* Update src/transformers/models/whisper/configuration_whisper.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* add batched tests

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
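For reference, here is a minimal usage sketch of the pieces this PR adds (feature extractor, processor, and generation with the suppress-token ids stored in the config). The checkpoint name `openai/whisper-tiny.en` is an assumption for illustration; the component names are the ones from the commit list above:

```python
# Hedged sketch: turn raw audio into log-Mel input features and transcribe.
# `generate()` picks up the begin/suppress token ids stored in the model config.
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")  # assumed checkpoint
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```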
226 lines
9.3 KiB
Python
# coding=utf-8
# Copyright 2022 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import itertools
import os
import random
import tempfile
import unittest

import numpy as np

from transformers import is_speech_available
from transformers.testing_utils import check_json_file_has_correct_format, require_torch, require_torchaudio
from transformers.utils.import_utils import is_torch_available

from ...test_sequence_feature_extraction_common import SequenceFeatureExtractionTestMixin


if is_speech_available():
    from transformers import WhisperFeatureExtractor

if is_torch_available():
    import torch

global_rng = random.Random()


def floats_list(shape, scale=1.0, rng=None, name=None):
    """Creates a random float32-valued list of lists with the given (batch, length) shape"""
    if rng is None:
        rng = global_rng

    values = []
    for batch_idx in range(shape[0]):
        values.append([])
        for _ in range(shape[1]):
            values[-1].append(rng.random() * scale)

    return values


@require_torch
@require_torchaudio
class WhisperFeatureExtractionTester(unittest.TestCase):
    def __init__(
        self,
        parent,
        batch_size=7,
        min_seq_length=400,
        max_seq_length=2000,
        feature_size=10,
        hop_length=160,
        chunk_length=8,
        padding_value=0.0,
        sampling_rate=4_000,
        return_attention_mask=True,
        do_normalize=True,
    ):
        self.parent = parent
        self.batch_size = batch_size
        self.min_seq_length = min_seq_length
        self.max_seq_length = max_seq_length
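        # step between consecutive sequence lengths, so a batch spans
        # [min_seq_length, max_seq_length] in equal increments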
        self.seq_length_diff = (self.max_seq_length - self.min_seq_length) // (self.batch_size - 1)
        self.padding_value = padding_value
        self.sampling_rate = sampling_rate
        self.return_attention_mask = return_attention_mask
        self.do_normalize = do_normalize
        self.feature_size = feature_size
        self.chunk_length = chunk_length
        self.hop_length = hop_length

    def prepare_feat_extract_dict(self):
        return {
            "feature_size": self.feature_size,
            "hop_length": self.hop_length,
            "chunk_length": self.chunk_length,
            "padding_value": self.padding_value,
            "sampling_rate": self.sampling_rate,
            "return_attention_mask": self.return_attention_mask,
            "do_normalize": self.do_normalize,
        }

    def prepare_inputs_for_common(self, equal_length=False, numpify=False):
        def _flatten(list_of_lists):
            return list(itertools.chain(*list_of_lists))

        if equal_length:
            speech_inputs = [floats_list((self.max_seq_length, self.feature_size)) for _ in range(self.batch_size)]
        else:
            # make sure that inputs increase in size
            speech_inputs = [
                floats_list((x, self.feature_size))
                for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff)
            ]
        if numpify:
            speech_inputs = [np.asarray(x) for x in speech_inputs]
        return speech_inputs


@require_torch
@require_torchaudio
class WhisperFeatureExtractionTest(SequenceFeatureExtractionTestMixin, unittest.TestCase):

    feature_extraction_class = WhisperFeatureExtractor if is_speech_available() else None

    def setUp(self):
        self.feat_extract_tester = WhisperFeatureExtractionTester(self)

    def test_feat_extract_from_and_save_pretrained(self):
        feat_extract_first = self.feature_extraction_class(**self.feat_extract_dict)

        with tempfile.TemporaryDirectory() as tmpdirname:
            saved_file = feat_extract_first.save_pretrained(tmpdirname)[0]
            check_json_file_has_correct_format(saved_file)
            feat_extract_second = self.feature_extraction_class.from_pretrained(tmpdirname)

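        # `mel_filters` is a float array, so pop it out and compare it with
        # `np.allclose`; the remaining entries are compared with plain equality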
        dict_first = feat_extract_first.to_dict()
        dict_second = feat_extract_second.to_dict()
        mel_1 = dict_first.pop("mel_filters")
        mel_2 = dict_second.pop("mel_filters")
        self.assertTrue(np.allclose(mel_1, mel_2))
        self.assertEqual(dict_first, dict_second)

    def test_feat_extract_to_json_file(self):
        feat_extract_first = self.feature_extraction_class(**self.feat_extract_dict)

        with tempfile.TemporaryDirectory() as tmpdirname:
            json_file_path = os.path.join(tmpdirname, "feat_extract.json")
            feat_extract_first.to_json_file(json_file_path)
            feat_extract_second = self.feature_extraction_class.from_json_file(json_file_path)

        dict_first = feat_extract_first.to_dict()
        dict_second = feat_extract_second.to_dict()
        mel_1 = dict_first.pop("mel_filters")
        mel_2 = dict_second.pop("mel_filters")
        self.assertTrue(np.allclose(mel_1, mel_2))
        self.assertEqual(dict_first, dict_second)

    def test_call(self):
        # Tests that all calls wrap to encode_plus and batch_encode_plus
        feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        # create three inputs of length 800, 1000, and 1200
        speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)]
        np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs]

        # Test feature size
        input_features = feature_extractor(np_speech_inputs, padding="max_length", return_tensors="np").input_features
        self.assertTrue(input_features.ndim == 3)
        self.assertTrue(input_features.shape[-1] == feature_extractor.nb_max_frames)
        self.assertTrue(input_features.shape[-2] == feature_extractor.feature_size)

        # Test not batched input
        encoded_sequences_1 = feature_extractor(speech_inputs[0], return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs[0], return_tensors="np").input_features
        self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3))

        # Test batched
        encoded_sequences_1 = feature_extractor(speech_inputs, return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs, return_tensors="np").input_features
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

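        # WhisperFeatureExtractor pads/truncates every input to a fixed window of
        # `n_samples` (= chunk_length * sampling_rate) samples, so inputs longer
        # than that must encode identically to their truncated versions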
        # Test truncation required
        speech_inputs = [floats_list((1, x))[0] for x in range(200, (feature_extractor.n_samples + 500), 200)]
        np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs]

        speech_inputs_truncated = [x[: feature_extractor.n_samples] for x in speech_inputs]
        np_speech_inputs_truncated = [np.asarray(speech_input) for speech_input in speech_inputs_truncated]

        encoded_sequences_1 = feature_extractor(np_speech_inputs, return_tensors="np").input_features
        encoded_sequences_2 = feature_extractor(np_speech_inputs_truncated, return_tensors="np").input_features
        for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2):
            self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3))

    def test_double_precision_pad(self):
        import torch

        feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict())
        np_speech_inputs = np.random.rand(100, 32).astype(np.float64)
        py_speech_inputs = np_speech_inputs.tolist()

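        # whether the input arrives as float64 numpy data or as plain Python
        # lists, `pad` should always down-cast the returned features to float32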
        for inputs in [py_speech_inputs, np_speech_inputs]:
            np_processed = feature_extractor.pad([{"input_features": inputs}], return_tensors="np")
            self.assertTrue(np_processed.input_features.dtype == np.float32)
            pt_processed = feature_extractor.pad([{"input_features": inputs}], return_tensors="pt")
            self.assertTrue(pt_processed.input_features.dtype == torch.float32)

    def _load_datasamples(self, num_samples):
        from datasets import load_dataset

        ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        # automatic decoding with librispeech
        speech_samples = ds.sort("id").select(range(num_samples))[:num_samples]["audio"]

        return [x["array"] for x in speech_samples]

    def test_integration(self):
        # fmt: off
        EXPECTED_INPUT_FEATURES = torch.tensor(
            [
                0.1193, -0.0946, -0.1098, -0.0196, 0.0225, -0.0690, -0.1736, 0.0951,
                0.0971, -0.0817, -0.0702, 0.0162, 0.0260, 0.0017, -0.0192, -0.1678,
                0.0709, -0.1867, -0.0655, -0.0274, -0.0234, -0.1884, -0.0516, -0.0554,
                -0.0274, -0.1425, -0.1423, 0.0837, 0.0377, -0.0854
            ]
        )
        # fmt: on

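        # compare the first 30 frames of the first mel bin of the first sample
        # against the precomputed reference values above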
        input_speech = self._load_datasamples(1)
        feature_extractor = WhisperFeatureExtractor()
        input_features = feature_extractor(input_speech, return_tensors="pt").input_features
        self.assertTrue(torch.allclose(input_features[0, 0, :30], EXPECTED_INPUT_FEATURES, atol=1e-4))