---
license: apache-2.0
language:
- th
- en
base_model:
- openai/whisper-medium
pipeline_tag: automatic-speech-recognition
library_name: transformers
metrics:
- wer
---
# Pathumma Whisper Medium (Th)
## Model Description
Additional information is needed
## Quickstart
You can transcribe audio files using the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class with the following code snippet:
```python
import torch
from transformers import pipeline
device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
lang = "th"
task = "transcribe"
pipe = pipeline(
    task="automatic-speech-recognition",
    model="nectec/Pathumma-whisper-th-medium",
    torch_dtype=torch_dtype,
    device=device,
)
pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task=task)
text = pipe("audio_path.wav")["text"]
print(text)
```
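For recordings longer than Whisper's 30-second receptive field, the same pipeline can transcribe in chunks and return segment timestamps. The sketch below is an assumption-based extension of the Quickstart snippet (the `chunk_length_s`, `batch_size`, and `return_timestamps` arguments are standard `pipeline` options, but the chosen values are illustrative); the `format_timestamp` helper is our own and not part of the model or library:

```python
import torch
from transformers import pipeline


def format_timestamp(seconds: float) -> str:
    """Render a float second offset as an SRT-style HH:MM:SS,mmm string."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def transcribe_long(audio_path: str) -> None:
    """Transcribe a long Thai recording chunk by chunk, printing timed segments."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
    pipe = pipeline(
        task="automatic-speech-recognition",
        model="nectec/Pathumma-whisper-th-medium",
        torch_dtype=torch_dtype,
        device=device,
        chunk_length_s=30,  # split long audio into 30-second windows
        batch_size=8,       # decode several windows per forward pass
    )
    pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(
        language="th", task="transcribe"
    )
    result = pipe(audio_path, return_timestamps=True)
    for chunk in result["chunks"]:
        start, end = chunk["timestamp"]
        print(f"{format_timestamp(start)} --> {format_timestamp(end)}  {chunk['text']}")


# Usage: transcribe_long("audio_path.wav")
```

Larger `batch_size` values trade GPU memory for throughput; lower it if you hit out-of-memory errors on long files.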
## Limitations and Future Work
Additional information is needed
## Acknowledgements
We extend our appreciation to the research teams engaged in the creation of the open speech models, including AIResearch, BiodatLab, Looloo Technology, SCB 10X, and OpenAI. We thank Dr. Titipat Achakulwisut of BiodatLab for the evaluation pipeline, and ThaiSC, the NSTDA Supercomputer Centre, for providing the LANTA supercomputer used for model training, fine-tuning, and evaluation.
## Pathumma Audio Team
*Pattara Tipaksorn*, Wayupuk Sommuang, *Kwanchiva Thangthai*
## Citation
```bibtex
@misc{tipaksorn2024PathummaWhisper,
title = { {Pathumma Whisper Medium (TH)} },
author = { Pattara Tipaksorn and Wayupuk Sommuang and Kwanchiva Thangthai },
url = { https://huggingface.co/nectec/Pathumma-whisper-th-medium },
publisher = { Hugging Face },
year = { 2024 },
}
```