..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Hubert
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hubert was proposed in `HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units
<https://arxiv.org/abs/2106.07447>`__ by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan
Salakhutdinov, Abdelrahman Mohamed.

The abstract from the paper is the following:

*Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are
multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training
phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we
propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an
offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our
approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined
acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised
clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means
teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the
state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h,
10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER
reduction on the more challenging dev-other and test-other evaluation subsets.*
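
The masked-prediction objective described in the abstract can be illustrated with a short sketch. This is not the
library's training code, only a toy example of the idea: offline k-means cluster IDs serve as targets, and a BERT-like
cross-entropy loss is computed over the masked frames only. All shapes and names below are illustrative assumptions.

.. code-block:: python

    import torch
    import torch.nn.functional as F

    batch, frames, num_clusters = 2, 50, 100  # toy sizes, not HuBERT's real ones

    # Per-frame predictions over the k-means "hidden units" and their offline
    # cluster labels (in HuBERT these come from a clustering step run before
    # training, not from random data as here).
    logits = torch.randn(batch, frames, num_clusters)
    targets = torch.randint(num_clusters, (batch, frames))

    # Boolean mask marking which frames were masked out of the input.
    mask = torch.rand(batch, frames) < 0.5

    # Apply the prediction loss over the masked regions only, forcing the
    # model to infer the hidden units from the surrounding context.
    loss = F.cross_entropy(logits[mask], targets[mask])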

Tips:

- Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- The Hubert model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be
  decoded using :class:`~transformers.Wav2Vec2CTCTokenizer`, as shown in the sketch below.

This model was contributed by `patrickvonplaten <https://huggingface.co/patrickvonplaten>`__.
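
A minimal decoding sketch for the CTC tip above, assuming the publicly available
``facebook/hubert-large-ls960-ft`` checkpoint and a placeholder waveform standing in for real 16 kHz audio:

.. code-block:: python

    import numpy as np
    import torch
    from transformers import HubertForCTC, Wav2Vec2Processor

    processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
    model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

    # Raw waveform as a float array sampled at 16 kHz. Random data is used here
    # so the snippet is self-contained; real audio would come from a file
    # loaded with e.g. soundfile or the datasets library.
    speech = np.random.randn(16_000).astype(np.float32)
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

    with torch.no_grad():
        logits = model(inputs.input_values).logits

    # Greedy CTC decoding: take the argmax over the vocabulary at each frame;
    # the tokenizer then collapses repeated tokens and removes CTC blanks.
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)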

HubertConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.HubertConfig
    :members:

HubertModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.HubertModel
    :members: forward

HubertForCTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.HubertForCTC
    :members: forward

HubertForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.HubertForSequenceClassification
    :members: forward
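
A hedged usage sketch for :class:`~transformers.HubertForSequenceClassification`; the
``superb/hubert-base-superb-ks`` keyword-spotting checkpoint used here is an assumption, and any Hubert checkpoint
with a classification head would work the same way:

.. code-block:: python

    import numpy as np
    import torch
    from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor

    model_id = "superb/hubert-base-superb-ks"  # assumed checkpoint; substitute your own
    feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
    model = HubertForSequenceClassification.from_pretrained(model_id)

    # One second of placeholder 16 kHz audio standing in for a real recording.
    speech = np.random.randn(16_000).astype(np.float32)
    inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

    with torch.no_grad():
        logits = model(inputs.input_values).logits

    # A single label is predicted for the whole utterance.
    predicted_id = int(torch.argmax(logits, dim=-1))
    print(model.config.id2label[predicted_id])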

TFHubertModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFHubertModel
    :members: call

TFHubertForCTC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFHubertForCTC
    :members: call