---
library_name: transformers
tags:
- automatic-speech-recognition
license: apache-2.0
datasets:
- reazon-research/reazonspeech
language:
- ja
metrics:
- cer
base_model:
- reazon-research/japanese-wav2vec2-large
---
# `japanese-wav2vec2-large-rs35kh`
This model is a [wav2vec 2.0 Large](https://huggingface.co/reazon-research/japanese-wav2vec2-large) model fine-tuned on the large-scale Japanese ASR corpus [ReazonSpeech v2.0](https://huggingface.co/datasets/reazon-research/reazonspeech).
## Usage
You can use this model through the `transformers` library:
```python
import librosa
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "reazon-research/japanese-wav2vec2-large-rs35kh",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
processor = AutoProcessor.from_pretrained("reazon-research/japanese-wav2vec2-large-rs35kh")

# Load and resample the audio to 16 kHz.
audio, _ = librosa.load(audio_filepath, sr=16_000)
# Padding 0.5 s of silence on both sides before inference is recommended.
audio = np.pad(audio, pad_width=int(0.5 * 16_000))

input_values = processor(
    audio,
    return_tensors="pt",
    sampling_rate=16_000,
).input_values.to("cuda").to(torch.bfloat16)

with torch.inference_mode():
    logits = model(input_values).logits.cpu()

predicted_ids = torch.argmax(logits, dim=-1)[0]
transcription = processor.decode(predicted_ids, skip_special_tokens=True)
```
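If a CUDA GPU or FlashAttention-2 is not available, the checkpoint also loads with the `transformers` defaults. A minimal CPU fallback sketch (an assumed environment, not the authors' recommended setup; it reuses `processor` and `audio` from above):

```python
# CPU fallback: default attention implementation and float32 weights.
model = Wav2Vec2ForCTC.from_pretrained("reazon-research/japanese-wav2vec2-large-rs35kh")

input_values = processor(audio, return_tensors="pt", sampling_rate=16_000).input_values
with torch.inference_mode():
    logits = model(input_values).logits
transcription = processor.decode(torch.argmax(logits, dim=-1)[0], skip_special_tokens=True)
```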
## Test Results
We report the Character Error Rate (CER) of our model alongside other wav2vec 2.0-based Japanese models. In all tables, ⬇ means lower is better; a minimal sketch of how CER is computed follows the table.
| Model                                            | #Parameters⬇ | AVERAGE⬇   | JSUT-BASIC5000⬇ | Common Voice⬇ | TEDxJP-10K⬇ |
| :---------------------------------------------- | :---------: | :--------: | :-------------: | :-----------: | :---------: |
| reazon-research/japanese-wav2vec2-large-rs35kh | 319M | **16.25%** | 11.00% | 18.23% | **19.53%** |
| reazon-research/japanese-wav2vec2-base-rs35kh | 96.7M | 20.40% | 13.22% | 23.76% | 24.23% |
| Ivydata/wav2vec2-large-xlsr-53-japanese | 318M | 24.23% | 13.83% | **18.15%** | 40.72% |
| jonatasgrosman/wav2vec2-large-xlsr-53-japanese | 317M | 31.82% | 4.25% | 40.58% | 50.63% |
| vumichien/wav2vec2-large-xlsr-japanese | 318M | 39.87% | **4.21%** | 53.29% | 62.12% |
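CER is the character-level Levenshtein (edit) distance between the hypothesis and the reference transcript, divided by the reference length. A minimal pure-Python sketch of the metric (illustrative only; not the evaluation script used for these tables):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character-level edit distance / reference length."""
    prev = list(range(len(hypothesis) + 1))
    for i, rc in enumerate(reference, start=1):
        curr = [i]
        for j, hc in enumerate(hypothesis, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (rc != hc),  # substitution
            ))
        prev = curr
    return prev[-1] / len(reference)

print(cer("こんにちは", "こんにんは"))  # 0.2: one substitution over five characters
```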
We also report the CER on long-form speech; a sketch of chunked inference for long recordings follows the table.
| Model                                            | #Parameters⬇ | JSUT-BOOK⬇ |
| :---------------------------------------------- | :---------: | :--------: |
| reazon-research/japanese-wav2vec2-large-rs35kh | 319M | **30.98%** |
| reazon-research/japanese-wav2vec2-base-rs35kh | 96.7M | 82.84% |
| Ivydata/wav2vec2-large-xlsr-53-japanese | 318M | 65.60% |
| jonatasgrosman/wav2vec2-large-xlsr-53-japanese | 317M | 46.20% |
| vumichien/wav2vec2-large-xlsr-japanese | 318M | 46.52% |
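Long recordings may not fit in a single forward pass within GPU memory. One simple approach is to split the waveform into fixed-size windows, transcribe each window, and concatenate the results. The sketch below reuses `model`, `processor`, and `audio` from the Usage section; the 30-second window is an assumed value, and naive chunking like this can garble characters at window boundaries (it is not necessarily the setup behind the table above):

```python
# Naive fixed-window chunking for long-form audio (illustrative sketch).
chunk_len = 30 * 16_000  # 30-second windows at 16 kHz (assumed value)
texts = []
for start in range(0, len(audio), chunk_len):
    chunk = np.pad(audio[start:start + chunk_len], pad_width=int(0.5 * 16_000))
    inputs = processor(chunk, return_tensors="pt", sampling_rate=16_000)
    input_values = inputs.input_values.to("cuda").to(torch.bfloat16)
    with torch.inference_mode():
        logits = model(input_values).logits.cpu()
    texts.append(processor.decode(torch.argmax(logits, dim=-1)[0], skip_special_tokens=True))
transcription = "".join(texts)  # Japanese text joins without spaces
```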
## Citation
```bibtex
@misc{reazon-research-japanese-wav2vec2-large-rs35kh,
  title  = {japanese-wav2vec2-large-rs35kh},
  author = {Sasaki, Yuta},
  url    = {https://huggingface.co/reazon-research/japanese-wav2vec2-large-rs35kh},
  year   = {2024}
}
```
## License
[Apache License 2.0](https://choosealicense.com/licenses/apache-2.0/)