Mirror of https://github.com/huggingface/transformers.git, synced 2025-07-03 12:50:06 +06:00
Multiple typo fixes in NLP, Audio docs (#35181)
Fixed multiple typos in Tutorials, NLP, and Audio sections
This commit is contained in:
parent 425af6cdc2
commit 52d135426f
@@ -112,7 +112,7 @@ The next step is to load a Wav2Vec2 processor to process the audio signal:
 >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
 ```
 
-The MInDS-14 dataset has a sampling rate of 8000kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16000kHz to use the pretrained Wav2Vec2 model:
+The MInDS-14 dataset has a sampling rate of 8000Hz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16000Hz to use the pretrained Wav2Vec2 model:
 
 ```py
 >>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
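For context on this fix: MInDS-14 ships at 8 kHz (8000 Hz, not 8000 kHz), while Wav2Vec2 expects 16 kHz audio. A minimal sketch of the resampling step the hunk refers to, assuming the `datasets` library and the `en-US` split used elsewhere in the guide:

```py
from datasets import Audio, load_dataset

minds = load_dataset("PolyAI/minds14", name="en-US", split="train")

# cast_column re-decodes audio lazily at the requested rate; each example's
# "audio" column is resampled from 8000 Hz to 16000 Hz on access
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
print(minds[0]["audio"]["sampling_rate"])  # 16000
```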
@@ -419,7 +419,7 @@ Get the class with the highest probability:
 ```py
 >>> predicted_class = logits.argmax().item()
 >>> predicted_class
-'0'
+0
 ```
 </pt>
 <tf>
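The corrected output reflects that `Tensor.item()` returns a plain Python number, so the predicted class prints without quotes. A quick sketch with a made-up logits tensor:

```py
import torch

# hypothetical logits for a 3-class classifier
logits = torch.tensor([[2.5, -1.0, 0.3]])

predicted_class = logits.argmax().item()
print(predicted_class)        # 0 (an int, not the string '0')
print(type(predicted_class))  # <class 'int'>
```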
@@ -448,7 +448,7 @@ Get the class with the highest probability:
 ```py
 >>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
 >>> predicted_class
-'0'
+0
 ```
 </tf>
 </frameworkcontent>
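The TensorFlow branch gets the same correction: wrapping the argmax in `int()` yields a Python integer, so `0` rather than `'0'` is the accurate output. A sketch with the same made-up logits:

```py
import tensorflow as tf

logits = tf.constant([[2.5, -1.0, 0.3]])

# int() converts the scalar tensor to a plain Python int
predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
print(predicted_class)  # 0
```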
@@ -325,7 +325,7 @@ or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/no
 
 Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.
 
-If have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#post-processing) chapter from the 🤗 Hugging Face Course!
+If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#post-processing) chapter from the 🤗 Hugging Face Course!
 
 ## Inference
@@ -397,7 +397,7 @@ Tokenize the text and return TensorFlow tensors:
 >>> from transformers import AutoTokenizer
 
 >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
->>> inputs = tokenizer(question, text, return_tensors="tf")
+>>> inputs = tokenizer(question, context, return_tensors="tf")
 ```
 
 Pass your inputs to the model and return the `logits`:
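The rename from `text` to `context` matches the variable the guide defines for the passage being searched. A minimal sketch of encoding the question/context pair, using a public SQuAD checkpoint as a stand-in for the guide's `my_awesome_qa_model`:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")

question = "How many programming languages does BLOOM support?"
context = "BLOOM can generate text in 46 natural languages and 13 programming languages."

# the tokenizer joins the pair into one sequence: [CLS] question [SEP] context [SEP]
inputs = tokenizer(question, context, return_tensors="tf")
print(inputs["input_ids"].shape)  # (1, sequence_length)
```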
@@ -283,7 +283,7 @@ Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
 ```py
 >>> from transformers.keras_callbacks import KerasMetricCallback
 
->>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
+>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
 ```
 
 Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
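For context, `KerasMetricCallback` runs `metric_fn` against `eval_dataset` at the end of each epoch; the fix points it at the `tf_test_set` the guide actually builds. A sketch of how the callback slots into `model.fit`, assuming `model`, `tokenizer`, `compute_metrics`, `tf_train_set`, and `tf_test_set` are defined as in the earlier steps of the guide:

```py
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

# compute_metrics unpacks a (predictions, labels) tuple and returns a dict of scores
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)

# pushes checkpoints and the tokenizer to the Hub as training progresses
push_to_hub_callback = PushToHubCallback(output_dir="my_awesome_model", tokenizer=tokenizer)

# KerasMetricCallback evaluates on tf_test_set at the end of every epoch
model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3,
          callbacks=[metric_callback, push_to_hub_callback])
```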
@@ -290,7 +290,7 @@ Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
 ```py
 >>> from transformers.keras_callbacks import KerasMetricCallback
 
->>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
+>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
 ```
 
 Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
@@ -108,7 +108,7 @@ class PeftAdapterMixin:
 </Tip>
 
 token (`str`, `optional`):
-    Whether to use authentication token to load the remote folder. Userful to load private repositories
+    Whether to use authentication token to load the remote folder. Useful to load private repositories
     that are on HuggingFace Hub. You might need to call `huggingface-cli login` and paste your tokens to
     cache it.
 device_map (`str` or `Dict[str, Union[int, str, torch.device]]` or `int` or `torch.device`, *optional*):
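The docstring being fixed documents the `token` argument of `PeftAdapterMixin.load_adapter`, which authenticates Hub requests when the adapter lives in a private repository. A hedged sketch (requires `peft` installed; the repo id and token value below are placeholders, not real identifiers):

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# token authenticates against the Hugging Face Hub; it is only needed for
# private adapter repos. Both arguments here are placeholders.
model.load_adapter("your-username/private-lora-adapter", token="hf_xxx")
```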