---
license: gpl
task_categories:
  - translation
language:
  - fa
tags:
  - grapheme-to-phoneme
  - g2p
  - persian
  - farsi
  - phoneme-translation
  - polyphone
  - mana-tts
  - commonvoice
  - sentence-bench
pretty_name: SentenceBench
size_categories:
  - n<1K
---

Sentence-Bench: A Sentence-Level Benchmarking Dataset for Persian Grapheme-to-Phoneme (G2P) Tasks

Introduction

Sentence-Bench is the first sentence-level benchmarking dataset for evaluating grapheme-to-phoneme (G2P) models in Persian. To the best of our knowledge, no other Persian dataset provides phoneme-annotated sentences, and Sentence-Bench is designed specifically to address two significant challenges in sentence-level G2P:

  1. Polyphone Word Pronunciation: Predicting the correct pronunciation of polyphone words within a sentence.
  2. Context-Sensitive Phonemes: Predicting context-sensitive phonemes, such as Ezafe, which requires consideration of sentence context.

This dataset allows comprehensive sentence-level evaluation of G2P tools using the following metrics (a minimal PER sketch follows the list):

  • Phoneme Error Rate (PER): The conventional evaluation metric for phoneme-level tasks.
  • Polyphone Word Accuracy: Accuracy in predicting the correct pronunciation of polyphone words.
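PER is conventionally computed as the Levenshtein (edit) distance between the predicted and reference phoneme sequences, normalized by the reference length. The snippet below is a minimal sketch of that computation, not the benchmark's official evaluation code; the function name and the character-level tokenization (each symbol in this dataset's scheme is a single character) are illustrative assumptions.

```python
def phoneme_error_rate(reference: str, prediction: str) -> float:
    """Levenshtein distance between two phoneme strings, normalized by reference length."""
    ref, hyp = list(reference), list(prediction)
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(phoneme_error_rate("zohr", "zehr"))  # one substitution over four symbols -> 0.25
```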

The dataset comprises 400 sentences, split into three parts:

  • 200 sentences manually constructed using approximately 100 polyphone words selected from the Kaamel [1] dictionary, each word appearing in various contexts to showcase multiple pronunciations.
  • 100 randomly selected sentences from the unpublished ManaTTS [2] dataset.
  • 100 of the most upvoted sentences from CommonVoice [3].

Each sentence is annotated with its corresponding phoneme sequence, and sentences containing polyphone words (the first part) include an additional annotation for the correct pronunciation of the polyphone within that sentence.

Dataset Structure

The dataset is provided as a CSV file with the following columns (a loading sketch follows the list):

  • dataset: The source of the sentence, which is one of mana-tts, commonvoice, or polyphone.
  • grapheme: The sentence in Persian script.
  • phoneme: The phonetic transcription of the sentence.
  • polyphone word: The Persian word with ambiguous pronunciation (only for sentences with polyphones).
  • pronunciation: The correct pronunciation of the polyphone word within the sentence (only for sentences with polyphones).
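As a quick-start illustration, the rows can be loaded with the Hugging Face `datasets` library (or pandas). The snippet below is only a sketch, assuming the data is hosted under the repository id `MahtaFetrat/SentenceBench` with a single `train` split; adjust both if the actual layout differs.

```python
from datasets import load_dataset

# Repository id and split name are assumptions; adjust if the layout differs.
bench = load_dataset("MahtaFetrat/SentenceBench", split="train")

# Rows from the polyphone part carry the extra 'polyphone word' / 'pronunciation' annotations.
polyphone_rows = bench.filter(lambda row: row["dataset"] == "polyphone")
print(f"{len(bench)} sentences total, {len(polyphone_rows)} with polyphone annotations")

example = polyphone_rows[0]
print(example["grapheme"], "->", example["phoneme"],
      "|", example["polyphone word"], "=", example["pronunciation"])
```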

Phoneme Representation

The phonetic symbols used in this dataset correspond to Persian phonemes. Below is a reference table for the specific symbols and their IPA equivalents:

| Symbol | Persian Sound | IPA Equivalent | Example |
|--------|---------------|----------------|---------|
| A | آ, ا (long vowel) | ɒː | ماه: mAh |
| a | َ (short vowel) | æ | درد: dard |
| u | او (long vowel) | uː | دوست: dust |
| i | ای (long vowel) | iː | میز: miz |
| o | ُ (short vowel) | o | ظهر: zohr |
| e | ِ (short vowel) | e | ذهن: zehn |
| S | ش (consonant) | ʃ | شهر: Sahr |
| C | چ (consonant) | tʃʰ | چتر: Catr |
| Z | ژ (consonant) | ʒ | ژاله: ZAle |
| q | غ، ق (consonant) | ɣ, q | غذا: qazA, قند: qand |
| x | خ (consonant) | x | خاک: xAk |
| r | ر (consonant) | ɾ | روح: ruh |
| y | ی (consonant) | j | یار: yAr |
| j | ج (consonant) | dʒ | نجات: nejAt |
| v | و (consonant) | v | ورم: varam |
| ? | ع، ء، ئ (consonant) | ʔ | عمر: ?omr, آینده: ?Ayande |

The Ezafe phones are annotated as -e or -ye, depending on the context.
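To make the mapping concrete, the table above can be turned into a small lookup that converts the dataset's phoneme strings to an approximate IPA form. This is only a sketch: consonants not listed in the table (e.g. m, n, b, d) are assumed to pass through unchanged, ق is rendered with the ɣ variant, whitespace and the Ezafe hyphen are left as-is, and the second example sentence is hypothetical.

```python
# Dataset symbol -> IPA, following the reference table above.
SYMBOL_TO_IPA = {
    "A": "ɒː", "a": "æ", "u": "uː", "i": "iː", "o": "o", "e": "e",
    "S": "ʃ", "C": "tʃʰ", "Z": "ʒ", "q": "ɣ",  # ق may also be realized as [q]
    "x": "x", "r": "ɾ", "y": "j", "j": "dʒ", "v": "v", "?": "ʔ",
}

def to_ipa(phoneme_string: str) -> str:
    """Map a dataset phoneme string to IPA; unlisted characters (plain consonants, spaces, '-') pass through."""
    return "".join(SYMBOL_TO_IPA.get(ch, ch) for ch in phoneme_string)

print(to_ipa("mAh"))          # ماه -> mɒːh
print(to_ipa("ketAb-e xub"))  # hypothetical Ezafe example -> ketɒːb-e xuːb
```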

License

This dataset is released under the GNU General Public License (GPL), in accordance with the licenses of its source components.

References

The source datasets can be cited as follows:

@article{ardila2019common,
  title={Common voice: A massively-multilingual speech corpus},
  author={Ardila, Rosana and Branson, Megan and Davis, Kelly and Henretty, Michael and Kohler, Michael and Meyer, Josh and Morais, Reuben and Saunders, Lindsay and Tyers, Francis M and Weber, Gregor},
  journal={arXiv preprint arXiv:1912.06670},
  year={2019}
}

@article{fetrat2024manatts,
  title={ManaTTS Persian: a recipe for creating TTS datasets for lower resource languages},
  author={Mahta Fetrat Qharabagh and Zahra Dehghanian and Hamid R. Rabiee},
  journal={arXiv preprint arXiv:2409.07259},
  year={2024}
}

Contact

For any questions or inquiries, feel free to open an issue or contact the author at [[email protected]].

Citation

Please cite the following paper if you use this dataset:

@misc{fetrat2024llmpoweredg2p,
  title={LLM-Powered Grapheme-to-Phoneme Conversion: Benchmark and Case Study},
  author={Mahta Fetrat Qharabagh and Zahra Dehghanian and Hamid R. Rabiee},
  year={2024},
  eprint={2409.08554},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.08554}
}