wav2vec2-large-xls-r-300m-bashkir-cv7_opt

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset. It achieves the following results on the evaluation set:

  • Training Loss: 0.268400
  • Validation Loss: 0.088252
  • WER without LM: 0.085588
  • WER with LM: 0.044408
  • CER with LM: 0.010491
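
For reference, WER/CER values of this kind can be computed with the Hugging Face evaluate library. This is a generic sketch, not the exact evaluation script used for this model; the placeholder lists would normally come from decoding the Common Voice 7.0 Bashkir test split:

```python
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

# Placeholder lists; in practice these are the decoded hypotheses and the
# reference transcriptions from the test split.
predictions = ["..."]
references = ["..."]

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```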

Model description

Trained with this Jupyter notebook.
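
As a minimal usage sketch, the model can be loaded with the Transformers library for greedy (no-LM) decoding. This assumes the repository provides a standard Wav2Vec2Processor and that the input is a 16 kHz mono recording; "sample.wav" is a placeholder path:

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono waveform ("sample.wav" is a placeholder path).
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)  # text in the reduced alphabet described below
```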

Intended uses & limitations

To reduce the size of the character set, the following letters were replaced or removed:

  • 'я' -> 'йа'
  • 'ю' -> 'йу'
  • 'ё' -> 'йо'
  • 'е' -> 'йэ' when it is the first letter of a word
  • 'е' -> 'э' in all other positions
  • 'ъ' -> deleted
  • 'ь' -> deleted

Therefore, to recover the correct text, you need to apply the reverse transformation and use the language model; a best-effort sketch of the reverse mapping is shown below.
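
This is a minimal sketch of that reverse transformation. It is a hypothetical helper (not shipped with the model), and it only handles the unambiguous cases; restoring the deleted 'ъ'/'ь' and other ambiguities is left to the language model:

```python
import re

def restore_orthography(text: str) -> str:
    """Partially invert the character reduction described above (best effort)."""
    # Unambiguous multi-character sequences first.
    text = text.replace("йа", "я").replace("йу", "ю").replace("йо", "ё")
    # Word-initial 'йэ' came from 'е'.
    text = re.sub(r"\bйэ", "е", text)
    # Assumption: a non-initial 'э' almost always came from 'е' in Bashkir text.
    text = re.sub(r"(?<=\w)э", "е", text)
    # Deleted 'ъ'/'ь' cannot be restored here; the language model handles that.
    return text

print(restore_orthography("йул"))  # -> "юл"
```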

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • num_epochs: 50
  • mixed_precision_training: Native AMP
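
For illustration only, these settings correspond roughly to the following Transformers TrainingArguments; this is a sketch, and the actual configuration lives in the notebook linked above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-bashkir-cv7_opt",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # effective train batch size of 64
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_steps=300,
    seed=42,
    fp16=True,                       # mixed precision (Native AMP)
)
```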

Framework versions

  • Transformers 4.16.1
  • Pytorch 1.10.0+cu113
  • Datasets 1.18.2
  • Tokenizers 0.10.3