---
license: mit
language:
- ko
metrics:
- wer
- cer
tags:
- transcribe
- whisper
---

# Fine-tuning Whisper-small for Korean Speech Recognition on Sample Data (PoC)

Fine-tuning was performed on sample voices recorded according to the metadata in this CSV file: https://github.com/hyeonsangjeon/job-transcribe/blob/main/meta_voice_data_3922.csv.
The sample voices themselves are not published, so if you want to fine-tune from scratch, please record your own audio or use a public dataset.

The fine-tuning procedure is based on the guide at https://huggingface.co/blog/fine-tune-whisper.
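
Assuming the CSV supplies an audio file path and a transcript per row, a minimal data-preparation sketch with the `datasets` library might look like the following; the column name `audio_path` is an assumption, not confirmed by the CSV:

```python
from datasets import load_dataset, Audio

# Load the metadata CSV; the column name below is hypothetical.
ds = load_dataset("csv", data_files="meta_voice_data_3922.csv")["train"]

# Cast the path column to an Audio feature resampled to 16 kHz,
# the sampling rate Whisper expects.
ds = ds.cast_column("audio_path", Audio(sampling_rate=16000))
```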

## Training

### Base model

OpenAI's `whisper-small` (https://huggingface.co/openai/whisper-small)

### Parameters
We used heuristic parameters without separate hyperparameter tuning. The sampling rate is set to 16,000 Hz. The values map onto `transformers` training arguments as sketched below the list.
- learning_rate = 2e-5
- epochs = 5
- gradient_accumulation_steps = 4
- per_device_train_batch_size = 4
- fp16 = True
- gradient_checkpointing = True
- generation_max_length = 225
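
For reference, a minimal sketch of how these values could be passed to `Seq2SeqTrainingArguments` from `transformers` (the `output_dir` and any arguments not listed above are hypothetical placeholders):

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the list above; output_dir is a hypothetical placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ko-finetuned",  # hypothetical
    learning_rate=2e-5,
    num_train_epochs=5,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=4,
    fp16=True,
    gradient_checkpointing=True,
    generation_max_length=225,
    predict_with_generate=True,  # assumption: needed to generate text for WER/CER evaluation
)
```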

## Usage
You need the `librosa` package to load the WAV file; the Whisper processor then converts the waveform to a log-Mel spectrogram. Install it with `pip install librosa`.

### inference.py

```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the audio file and resample to 16 kHz, the sampling rate Whisper expects
file = "nlp-voice-3922/data/0002d3428f0ddfa5a48eec5cc351daa8.wav"
arr, sampling_rate = librosa.load(file, sr=16000)

# Load the processor (feature extractor + tokenizer) and the fine-tuned model
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("daekeun-ml/whisper-small-ko-finetuned-single-speaker-3922samples")

# Convert the waveform to log-Mel spectrogram input features
input_features = processor(arr, return_tensors="pt", sampling_rate=sampling_rate).input_features

# Force Korean transcription tokens at the start of generation
forced_decoder_ids = processor.get_decoder_prompt_ids(language="ko", task="transcribe")
predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)

print(transcription)
```
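
Since the card lists WER and CER as metrics, here is a minimal sketch for scoring a transcription against a reference with the `evaluate` library (`pip install evaluate jiwer`); the reference string is a hypothetical placeholder, not part of the dataset:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical ground-truth transcript for the sample clip above.
references = ["안녕하세요"]
predictions = transcription  # output of processor.batch_decode(...) above

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```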