# Model Card for Lite-Whisper large-v3

Lite-Whisper is a compressed version of OpenAI Whisper, produced with LiteASR, which applies low-rank approximation to the encoder while leaving the decoder unchanged. See our GitHub repository and paper for details.

Here's a code snippet to get started:

```python
import librosa
import torch
from transformers import AutoProcessor, AutoModel

device = "cuda:0"
dtype = torch.float16

# load the compressed Whisper model
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-large-v3",
    trust_remote_code=True,
)
model.to(dtype).to(device)

# we use the same processor as the original model
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")

# set the path to your audio file
path = "path/to/audio.wav"
audio, _ = librosa.load(path, sr=16000)

# extract log-mel features and move them to the model's dtype and device
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
input_features = input_features.to(dtype).to(device)

# generate token ids and decode them to text
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(
    predicted_ids,
    skip_special_tokens=True,
)[0]

print(transcription)
```
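The same pipeline extends to batches: the processor pads (or truncates) each clip to Whisper's 30-second window, so several files can be transcribed in a single `generate` call. Below is a minimal sketch, assuming the compressed model supports batched generation the same way the original Whisper does; the file paths are placeholders.

```python
import librosa
import torch
from transformers import AutoProcessor, AutoModel

device = "cuda:0"
dtype = torch.float16

model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-large-v3",
    trust_remote_code=True,
).to(dtype).to(device)
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")

# hypothetical file list; each clip is padded/truncated to 30 s by the processor
paths = ["clip1.wav", "clip2.wav"]
audios = [librosa.load(p, sr=16000)[0] for p in paths]

# one batched feature tensor, one forward pass
input_features = processor(audios, sampling_rate=16000, return_tensors="pt").input_features
input_features = input_features.to(dtype).to(device)

predicted_ids = model.generate(input_features)
for path, text in zip(paths, processor.batch_decode(predicted_ids, skip_special_tokens=True)):
    print(f"{path}: {text}")
```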

## Benchmark Results

The following table shows the average word error rate (WER) evaluated on the ESB datasets:

| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|-----------------|--------------|--------------|
| whisper-large-v3 | 10.1 | 635M | 907M |
| lite-whisper-large-v3-acc | 10.1 | 429M | 907M |
| lite-whisper-large-v3 | 10.2 | 377M | 907M |
| lite-whisper-large-v3-fast | 11.3 | 308M | 907M |
| whisper-large-v3-turbo | 10.1 | 635M | 172M |
| lite-whisper-large-v3-turbo-acc | 10.2 | 421M | 172M |
| lite-whisper-large-v3-turbo | 12.6 | 374M | 172M |
| lite-whisper-large-v3-turbo-fast | 20.1 | 313M | 172M |
| whisper-medium | 14.8 | 306M | 457M |
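WER counts the word-level substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the number of reference words. As an illustration of how such a score is computed, here is a minimal sketch using the `jiwer` package; the transcript strings are placeholders, and a real evaluation (as in ESB) would apply text normalization before scoring.

```python
import jiwer

# hypothetical reference/hypothesis pairs; real evaluation normalizes
# casing and punctuation first
references = ["the quick brown fox", "hello world"]
hypotheses = ["the quick brown fox", "hello word"]

# WER = (substitutions + insertions + deletions) / reference word count
wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer:.3f}")
```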