
# Whisper
## Overview
The Whisper model was proposed in [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever.
The abstract from the paper is the following:
*We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.*
Tips:

- The model usually performs well without requiring any fine-tuning.
- The architecture follows a classic encoder-decoder architecture, which means that it relies on the [`~generation.GenerationMixin.generate`] function for inference (see the sketch after this list).
- Inference is currently only implemented for short-form audio, i.e. audio pre-segmented into segments of at most 30 seconds. Long-form inference (including timestamps) will be implemented in a future release.
- One can use [`WhisperProcessor`] to prepare audio for the model and to decode the predicted IDs back into text.
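As a quick end-to-end illustration, here is a minimal sketch, not taken from this page, that pairs [`WhisperProcessor`] with [`WhisperForConditionalGeneration`] and `generate`; the `openai/whisper-tiny` checkpoint and the dummy LibriSpeech split from `datasets` are assumptions chosen for illustration:

```python
# A minimal transcription sketch (assumptions: the openai/whisper-tiny
# checkpoint and a dummy LibriSpeech sample for test audio).
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Load one 16 kHz sample; Whisper's feature extractor expects 16 kHz audio.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]

# Convert the raw waveform into log-Mel input features (padded to the 30 s window).
input_features = processor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
).input_features

# Inference goes through `generate`; the predicted IDs are then decoded back to text.
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```

Note that the processor pads or truncates every clip to the 30-second window mentioned in the tips before converting it to log-Mel features.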
This model was contributed by Arthur Zucker. The TensorFlow version of this model was contributed by amyeroberts. The original code can be found [here](https://github.com/openai/whisper).
## WhisperConfig

[[autodoc]] WhisperConfig

## WhisperTokenizer

[[autodoc]] WhisperTokenizer
    - set_prefix_tokens
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
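To show what `set_prefix_tokens` does in practice, here is a hedged sketch, assuming the `openai/whisper-tiny` checkpoint, that pins the tokenizer's forced prefix to French transcription:

```python
# A minimal sketch (assumption: the openai/whisper-tiny checkpoint) showing
# how set_prefix_tokens steers the forced decoder prefix.
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

# Pin the prefix to French transcription without timestamps.
tokenizer.set_prefix_tokens(language="french", task="transcribe", predict_timestamps=False)

# The prefix now reads <|startoftranscript|><|fr|><|transcribe|><|notimestamps|>.
print(tokenizer.decode(tokenizer.prefix_tokens))
```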
## WhisperTokenizerFast

[[autodoc]] WhisperTokenizerFast
    - set_prefix_tokens
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## WhisperFeatureExtractor

[[autodoc]] WhisperFeatureExtractor
    - __call__

## WhisperProcessor

[[autodoc]] WhisperProcessor
    - __call__
    - from_pretrained
    - save_pretrained
    - batch_decode
    - decode

## WhisperModel

[[autodoc]] WhisperModel
    - forward
    - _mask_input_features

## WhisperForConditionalGeneration

[[autodoc]] WhisperForConditionalGeneration
    - forward

## WhisperForAudioClassification

[[autodoc]] WhisperForAudioClassification
    - forward
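[`WhisperForAudioClassification`] puts a classification head on the encoder only; a common use is spoken language identification. The hedged sketch below assumes a fine-tuned checkpoint name (`sanchit-gandhi/whisper-medium-fleurs-lang-id`) and the same dummy dataset as above, both chosen for illustration:

```python
# A hedged sketch of encoder-only classification (assumptions: the
# fine-tuned checkpoint name and the dummy LibriSpeech sample).
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, WhisperForAudioClassification

ckpt = "sanchit-gandhi/whisper-medium-fleurs-lang-id"
feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
model = WhisperForAudioClassification.from_pretrained(ckpt)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
inputs = feature_extractor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
)

# The classification head returns one logit per candidate label.
with torch.no_grad():
    logits = model(inputs.input_features).logits
print(model.config.id2label[logits.argmax(-1).item()])
```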
## TFWhisperModel

[[autodoc]] TFWhisperModel
    - call

## TFWhisperForConditionalGeneration

[[autodoc]] TFWhisperForConditionalGeneration
    - call

## FlaxWhisperModel

[[autodoc]] FlaxWhisperModel
    - __call__

## FlaxWhisperForConditionalGeneration

[[autodoc]] FlaxWhisperForConditionalGeneration
    - __call__

## FlaxWhisperForAudioClassification

[[autodoc]] FlaxWhisperForAudioClassification
    - __call__