<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Speech Encoder Decoder Models

The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-sequence-to-text-sequence model
with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert)) and any pretrained autoregressive model as the decoder.
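
For example, a minimal sketch of warm-starting such a model by pairing a Wav2Vec2 encoder with a BERT decoder (the two checkpoint names here are illustrative choices, not the only compatible pair):

```python
from transformers import SpeechEncoderDecoderModel

# Warm-start a speech-to-text model from two pretrained checkpoints.
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/wav2vec2-base-960h",  # speech encoder
    "bert-base-uncased",            # text decoder
)

# The decoder's cross-attention weights are newly initialized, so the
# combined model should be fine-tuned on a downstream task before use.
model.save_pretrained("./wav2vec2-bert")
```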

The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
recognition and speech translation has been demonstrated, for example, in [Large-Scale Self- and Semi-Supervised Learning for Speech
Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli,
Alexis Conneau.

An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in
[Speech2Text2](speech_to_text_2).
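
As a rough sketch, inference with an already fine-tuned checkpoint might look as follows (this assumes the `facebook/s2t-wav2vec2-large-en-de` speech-translation checkpoint released with the paper above and a small dummy audio dataset):

```python
from datasets import load_dataset
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

# Load a short example utterance (16 kHz audio).
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

# Autoregressively generate target-language token ids and decode them to text.
generated_ids = model.generate(inputs["input_values"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(translation)
```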

## SpeechEncoderDecoderConfig

[[autodoc]] SpeechEncoderDecoderConfig

## SpeechEncoderDecoderModel

[[autodoc]] SpeechEncoderDecoderModel
    - forward
    - from_encoder_decoder_pretrained

## FlaxSpeechEncoderDecoderModel

[[autodoc]] FlaxSpeechEncoderDecoderModel
    - __call__
    - from_encoder_decoder_pretrained