
# Moonshine
Moonshine is an encoder-decoder speech recognition model optimized for real-time transcription and voice command recognition. Instead of traditional absolute position embeddings, Moonshine uses Rotary Position Embedding (RoPE) to handle speech segments of varying lengths without padding. This makes inference more efficient and makes the model well suited for resource-constrained devices.
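As background, rotary embeddings encode position by rotating query/key feature pairs by a position-dependent angle rather than adding a learned position vector, so no maximum length is baked into the model. Below is a minimal sketch of the generic RoPE idea (not Moonshine's exact implementation).

```python
import torch

def apply_rope(x, base=10000.0):
    # x: (..., seq_len, dim) with even dim. Rotate each feature pair by a
    # position-dependent angle instead of adding a position embedding.
    seq_len, dim = x.shape[-2], x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)  # same shape as x, works for any seq_len

q = torch.randn(1, 8, 120, 64)  # (batch, heads, frames, head_dim)
q_rot = apply_rope(q)           # no fixed maximum length required
```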
You can find all the original Moonshine checkpoints under the [Useful Sensors](https://huggingface.co/UsefulSensors) organization.
> [!TIP]
> Click on the Moonshine models in the right sidebar for more examples of how to apply Moonshine to different speech recognition tasks.
The example below demonstrates how to transcribe speech into text with [`Pipeline`] or the [`AutoModel`] class.
```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="automatic-speech-recognition",
    model="UsefulSensors/moonshine-base",
    torch_dtype=torch.float16,
    device=0
)
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
```
```python
# pip install datasets
import torch
from datasets import load_dataset
from transformers import AutoProcessor, MoonshineForConditionalGeneration

processor = AutoProcessor.from_pretrained(
    "UsefulSensors/moonshine-base",
)
model = MoonshineForConditionalGeneration.from_pretrained(
    "UsefulSensors/moonshine-base",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

# This dataset requires a config name; use the "clean" subset.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]

input_features = processor(
    audio_sample["array"],
    sampling_rate=audio_sample["sampling_rate"],
    return_tensors="pt"
)
input_features = input_features.to("cuda", dtype=torch.float16)

predicted_ids = model.generate(**input_features, cache_implementation="static")
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
```
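To transcribe your own recording instead of the dataset sample, load it as a 16 kHz mono array first. A minimal sketch using librosa (an extra dependency, not installed with Transformers; `"audio.wav"` is a hypothetical path):

```python
import librosa

# Moonshine's feature extractor expects 16 kHz mono audio.
array, sampling_rate = librosa.load("audio.wav", sr=16000, mono=True)

input_features = processor(array, sampling_rate=sampling_rate, return_tensors="pt")
input_features = input_features.to("cuda", dtype=torch.float16)
predicted_ids = model.generate(**input_features, cache_implementation="static")
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```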
## MoonshineConfig

[[autodoc]] MoonshineConfig

## MoonshineModel

[[autodoc]] MoonshineModel
    - forward
    - _mask_input_features

## MoonshineForConditionalGeneration

[[autodoc]] MoonshineForConditionalGeneration
    - forward
    - generate