
wav2vec2-large-xls-r-300m-dsb-with-hsb-pretraining

This model continues training from a model based on facebook/wav2vec2-xls-r-300m that was fine-tuned on the Upper Sorbian subset of the common_voice_11_0 dataset. It is therefore a fine-tuned version of TiMauzi/wav2vec2-large-xls-r-300m-hsb, further trained on the "maminorěcna dolnoserbšćina" (native Lower Sorbian) corpus. The rights to this dataset are reserved by the Institute for the Study of the Language, History and Culture of the Lusatian Sorbs/Wends and Comparative Minority Research. In case of any copyright issues, feel free to contact me so that I can take this model offline.
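For orientation, a minimal inference sketch, assuming the checkpoint ships a CTC tokenizer compatible with Wav2Vec2Processor; the audio file name is a placeholder, and XLS-R models expect 16 kHz mono input:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load the fine-tuned checkpoint and its processor from the Hub.
model_id = "TiMauzi/wav2vec2-large-xls-r-300m-dsb-with-hsb-pretraining"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Hypothetical input file; resampled to the 16 kHz the model expects.
speech, _ = librosa.load("example.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```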

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 12
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 3
  • total_train_batch_size: 36
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 1
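
A minimal sketch, assuming the standard transformers Trainer API, of how the values above map onto TrainingArguments; the output directory is hypothetical, and this reconstructs the listed configuration rather than the exact training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-dsb-with-hsb-pretraining",  # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=3,  # effective train batch size: 12 * 3 = 36
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
)
```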

Training results

| Step | Training Loss | Validation Loss | WER      |
|-----:|--------------:|----------------:|---------:|
|  200 | 1.603500      | 1.160496        | 0.542402 |
|  400 | 0.826000      | 1.221802        | 0.542985 |
|  600 | 0.522700      | 1.348839        | 0.467980 |
|  800 | 0.401200      | 1.178484        | 0.443043 |
| 1000 | 0.280700      | 1.428626        | 0.438385 |
| 1200 | 0.216700      | 1.300642        | 0.433631 |
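
WER denotes the word error rate on the validation split. A minimal sketch of computing it with Hugging Face's evaluate library, using placeholder transcripts:

```python
import evaluate

# Placeholder transcripts; in practice, predictions come from model decoding
# and references from the validation split's ground-truth texts.
predictions = ["example transcript"]
references = ["example transcript"]

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.6f}")
```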

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3