small improvements for wav2vec2 info script (#10829)
@@ -131,25 +131,46 @@ Awesome you have successfully trained a XLSR-Wav2Vec2 model 😎. Now you can ju
### Local machine
We have provided the `run_common_voice.py` script to run fine-tuning on a local machine. The script is similar to the colab, but it allows you to launch training from the command line, save and continue training from previous checkpoints, and launch training on multiple GPUs.

For bigger datasets, we recommend training Wav2Vec2 locally instead of in a Google Colab.
1. To begin with, we should clone `transformers` locally and install all the required packages.

First, you need to clone the `transformers` repo with:

```
$ git clone https://github.com/huggingface/transformers.git
```

Second, head over to the `examples/research_projects/wav2vec2` directory, where the `run_common_voice.py` script is located.

```
$ cd transformers/examples/research_projects/wav2vec2
```

Third, install the required packages. The packages are listed in the `requirements.txt` file and can be installed with

```
$ pip install -r requirements.txt
```

**Note**: Installing the latest version of `torchaudio` will also upgrade `torch` to its latest stable version. If you are using a specific version of `torch`, make sure to use the `torchaudio` version compatible with your version of `torch`. By default, the `requirements.txt` will install the latest version of `torchaudio`.
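For example, if you need to stay on a particular `torch` release, you can pin a matching `torchaudio` explicitly instead of relying on `requirements.txt`. The version pair below is only an illustration; check the compatibility table in the `torchaudio` repository for the pair that matches your setup:

```
$ pip install torch==1.8.1 torchaudio==0.8.1
```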
2. Next, take a look at the `run_common_voice.py` script to get an understanding of how it works. In short, the script does the following (a rough sketch of these steps is given after the list):

    - Load the given common voice dataset
    - Create vocab for the language
    - Load the model with given hyperparameters
    - Pre-process the dataset to input into the model
    - Run training
    - Run evaluation
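The snippet below is a minimal, simplified sketch of that flow, not the actual contents of `run_common_voice.py`; the Turkish config, file names, and hyperparameters are placeholders that the real script receives via command-line arguments (and the real script also cleans the transcriptions before building the vocab).

```python
import json

from datasets import load_dataset
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# 1. Load the given Common Voice dataset (Turkish as an example).
common_voice_train = load_dataset("common_voice", "tr", split="train+validation")

# 2. Create a vocab for the language from the characters in the transcriptions,
#    using "|" as the word delimiter instead of a space.
chars = set("".join(common_voice_train["sentence"]).replace(" ", "|"))
vocab_dict = {c: i for i, c in enumerate(sorted(chars))}
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
with open("vocab.json", "w") as f:
    json.dump(vocab_dict, f)

# 3. Load the processor and the model with the given hyperparameters.
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16_000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# 4. Pre-processing (resampling the audio to 16 kHz and encoding the labels) and
#    the actual training/evaluation are then handled with the `Trainer` API.
```

In the actual script, all of these choices are driven by the command-line arguments shown in the examples below.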
3. The following examples show how you can launch fine-tuning for the common voice dataset.
Here we will run the script on the *Turkish* Common Voice dataset for demonstration purposes.

**To launch fine-tuning on a single GPU:**
```bash
python run_common_voice.py \
@@ -209,7 +230,7 @@ packages are listed in the `requirements.txt` file. To install run `pip install
4. The script also allows you to resume training from the last saved checkpoint. To resume from the last saved checkpoint, remove the `--overwrite_output_dir` option and run the same command again. To continue training from a specific checkpoint instead, keep the `--overwrite_output_dir` option and pass the path of the checkpoint as `--model_name_or_path`.
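For example, assuming the launch command above wrote checkpoints to a hypothetical `./wav2vec2-large-xlsr-turkish-demo` output directory, continuing from a specific checkpoint could look roughly like this (the directory name and step number are placeholders, and the remaining dataset/training arguments from the original command still need to be passed):

```bash
# Continue from a specific checkpoint; keep --overwrite_output_dir as described above.
python run_common_voice.py \
    --model_name_or_path="./wav2vec2-large-xlsr-turkish-demo/checkpoint-400" \
    --output_dir="./wav2vec2-large-xlsr-turkish-demo" \
    --overwrite_output_dir
```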
As the script is based on the `Trainer` API, refer to the [Trainer docs](https://huggingface.co/transformers/main_classes/trainer.html) for more information about `Trainer` and `TrainingArguments`.
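As a rough illustration only (not the script's exact code), the command-line flags of `run_common_voice.py` are essentially forwarded into a `TrainingArguments`/`Trainer` setup along the lines below; the values are placeholders, and `model`, `processor`, and `common_voice_train` refer to the objects built in the sketch after step 2:

```python
from datasets import load_dataset
from transformers import Trainer, TrainingArguments

# The evaluation split is loaded the same way as the training split above.
common_voice_test = load_dataset("common_voice", "tr", split="test")

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-turkish-demo",  # placeholder output directory
    group_by_length=True,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=30,
    fp16=True,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=3e-4,
    warmup_steps=500,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,                        # Wav2Vec2ForCTC from the earlier sketch
    args=training_args,
    train_dataset=common_voice_train,   # pre-processed train split
    eval_dataset=common_voice_test,     # pre-processed test split
    tokenizer=processor.feature_extractor,
    # a CTC data collator that pads inputs and labels would also be passed here
)

trainer.train()
trainer.evaluate()
```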
## How to upload my trained checkpoint
@@ -281,7 +302,7 @@ tags:
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: {human_readable_name} #TODO: replace {human_readable_name} with a name of your model as it should appear on the leaderboard. It could be something like `Elgeish XLSR Wav2Vec2 Large 53`
  results:
  - task:
      name: Speech Recognition