
wav2vec2-btb-cv-ft-cv-cy

This model is a version of techiaith/wav2vec2-xlsr-53-ft-btb-cv-cy that has been fine-tuned, with its encoder frozen, on the commonvoice_cy_18 training set.
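
The exact training recipe is not published in this card, so the following is only a minimal sketch of how an encoder can be frozen with transformers before CTC fine-tuning; which parts were frozen for this checkpoint, and the surrounding training setup, are assumptions:

from transformers import Wav2Vec2ForCTC

# start from the base checkpoint named above
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xlsr-53-ft-btb-cv-cy")

# option 1: freeze only the convolutional feature encoder
model.freeze_feature_encoder()

# option 2 (assumption): also freeze the transformer encoder layers
for param in model.wav2vec2.encoder.parameters():
    param.requires_grad = False

# ...continue with the usual CTC fine-tuning (e.g. via Trainer) on commonvoice_cy_18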

It achieves the following results on the Welsh Common Voice version 18 standard test set:

  • WER: 24.93
  • CER: 6.55

However, when the accompanying KenLM language model is used, it achieves the following results on the same test set (see the evaluation sketch after the list):

  • WER: 15.30
  • CER: 4.57
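
These scores can be reproduced with the evaluate library once the test-set transcriptions have been collected; a minimal sketch, with placeholder prediction and reference lists (both metrics also require the jiwer package):

import evaluate

# illustrative placeholders: model transcriptions and reference sentences
predictions = ["example transcription one", "example transcription two"]
references = ["example reference one", "example reference two"]

wer = evaluate.load("wer")
cer = evaluate.load("cer")

print("WER:", 100 * wer.compute(predictions=predictions, references=references))
print("CER:", 100 * cer.compute(predictions=predictions, references=references))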

Usage

wav2vec2 acoustic model only...

import torch
import librosa

from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")

# replace with the path to a speech recording; librosa resamples it to the 16 kHz the model expects
audio_file = "speech.wav"
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# greedy decoding: pick the most likely token at each time step
predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
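
Greedy decoding simply takes the most likely token at each time step; the language-model decoding below combines the CTC output with the accompanying KenLM model during beam search and gives the lower WER reported above.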

with language model...

import torch
import librosa

from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")

# replace with the path to a speech recording; librosa resamples it to the 16 kHz the model expects
audio_file = "speech.wav"
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# CTC beam-search decoding boosted by the KenLM language model
print("Prediction:", processor.batch_decode(logits.numpy()).text[0])
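
Note that Wav2Vec2ProcessorWithLM requires the pyctcdecode and kenlm packages to be installed alongside transformers (e.g. pip install pyctcdecode kenlm).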
