
Whisper

Overview

The Whisper model was proposed in *Robust Speech Recognition via Large-Scale Weak Supervision* by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever.

The abstract from the paper is the following:

We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.

This model was contributed by Arthur Zucker. The TensorFlow version of this model was contributed by amyeroberts. The original code can be found here.

Usage tips

  • The model usually performs well without requiring any fine-tuning.

  • The architecture follows a classic encoder-decoder design, which means that it relies on the [`~generation.GenerationMixin.generate`] function for inference.

  • Inference is currently only implemented for short-form audio, i.e. audio that is pre-segmented into clips of at most 30 seconds. Long-form transcription (including timestamps) will be implemented in a future release; see the pipeline sketch at the end of these tips for a way to handle longer recordings in the meantime.

  • One can use [`WhisperProcessor`] to prepare audio for the model, and to decode the predicted IDs back into text.

  • To convert the tokenizer, we recommend using the following:

```bash
python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path "" --pytorch_dump_folder_path "Arthur/whisper-3" --convert_tokenizer True --whisper_version 3 --multilingual True
```

Here, the `whisper_version` argument sets the number of languages to 100 to account for Cantonese, which was added in whisper-large-v3.
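
Until long-form generation is supported natively, recordings longer than 30 seconds can still be transcribed by letting the `automatic-speech-recognition` pipeline chunk the audio and stitch the pieces back together. The snippet below is a minimal sketch of that approach; the `openai/whisper-tiny.en` checkpoint, the 30-second chunk length, and the short LibriSpeech sample are illustrative stand-ins rather than requirements:

```python
>>> from datasets import load_dataset
>>> from transformers import pipeline

>>> # chunk_length_s makes the pipeline split long audio into 30-second windows,
>>> # transcribe them separately, and merge the resulting text.
>>> asr = pipeline(
...     "automatic-speech-recognition",
...     model="openai/whisper-tiny.en",
...     chunk_length_s=30,
... )

>>> # Any audio input works here; this short sample only stands in for a longer recording.
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> output = asr(ds[0]["audio"], return_timestamps=True)
>>> transcription = output["text"]  # output["chunks"] additionally holds per-segment timestamps
```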

Inference

Here is a step-by-step guide to transcribing an audio sample using a pre-trained Whisper model:

```python
>>> from datasets import load_dataset
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration

>>> # Select an audio file and read it:
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> audio_sample = ds[0]["audio"]
>>> waveform = audio_sample["array"]
>>> sampling_rate = audio_sample["sampling_rate"]

>>> # Load the Whisper model in Hugging Face format:
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

>>> # Use the model and processor to transcribe the audio:
>>> input_features = processor(
...     waveform, sampling_rate=sampling_rate, return_tensors="pt"
... ).input_features

>>> # Generate token ids
>>> predicted_ids = model.generate(input_features)

>>> # Decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)

>>> transcription[0]
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
```
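
Whisper checkpoints trained on multilingual data can also transcribe other languages and translate speech into English. Below is a minimal sketch of steering the decoder with [`WhisperProcessor.get_decoder_prompt_ids`]; the `openai/whisper-tiny` checkpoint and the silent placeholder waveform are illustrative assumptions, so substitute a real French recording in practice:

```python
>>> import numpy as np

>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration

>>> # A multilingual checkpoint is required for non-English audio (the English-only *.en checkpoints won't do).
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

>>> # Placeholder input: one second of silence at 16 kHz; replace it with a real French waveform.
>>> waveform = np.zeros(16000, dtype=np.float32)
>>> input_features = processor(waveform, sampling_rate=16000, return_tensors="pt").input_features

>>> # Force French transcription; task="translate" would translate the speech into English instead.
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```

Recent versions of `generate` also accept `language` and `task` arguments directly, which avoids building `forced_decoder_ids` by hand.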

WhisperConfig

autodoc WhisperConfig

WhisperTokenizer

autodoc WhisperTokenizer - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary - batch_decode - decode

WhisperTokenizerFast

autodoc WhisperTokenizerFast - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary - batch_decode - decode

WhisperFeatureExtractor

autodoc WhisperFeatureExtractor - __call__

WhisperProcessor

autodoc WhisperProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode

WhisperModel

autodoc WhisperModel - forward - _mask_input_features

WhisperForConditionalGeneration

autodoc WhisperForConditionalGeneration - forward - generate

WhisperForCausalLM

autodoc WhisperForCausalLM - forward

WhisperForAudioClassification

autodoc WhisperForAudioClassification - forward

TFWhisperModel

autodoc TFWhisperModel - call

TFWhisperForConditionalGeneration

autodoc TFWhisperForConditionalGeneration - call

FlaxWhisperModel

autodoc FlaxWhisperModel - __call__

FlaxWhisperForConditionalGeneration

autodoc FlaxWhisperForConditionalGeneration - __call__

FlaxWhisperForAudioClassification

autodoc FlaxWhisperForAudioClassification - __call__