Dataset Creation
Great work on this dataset! I'm particularly interested in understanding the methodology behind its creation. Given that this is a well-structured dataset with approximately 1 million records, I'm curious about the following:
Was any Large Language Model (LLM) used in the creation process?
How did you handle the specific challenges of Persian text processing, particularly:
The Ezafe construction (اضافه)
Word boundary issues and concatenation rules
Could you share details about the validation process that ensured such high accuracy in the phonetic transcriptions?
Thanks for your interest.
One of the biggest challenges in pre-processing Persian is that the language is not written with diacritics (there is no اعراب‌گذاری, i.e. vowel marking). There is also the problem of the linking ye (ی میانجی), which I addressed by using Hazm to tag the role of each word in a sentence and then developing an algorithm to insert those markers. It's not as accurate as I'd like, mostly because Hazm's POS tagger doesn't do a good job on out-of-domain text, but it's better than nothing.
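Roughly, the tagging step looks something like the sketch below. This is a minimal sketch, not my original script; the model filename and the exact tag labels are assumptions and depend on which hazm release and pretrained model you have.

```python
# Rough sketch of the tagging step (not the original script).
# Assumes hazm is installed and a pretrained POS tagger model is available;
# the model filename and tag labels vary between hazm releases.
from hazm import Normalizer, word_tokenize, POSTagger

normalizer = Normalizer()
tagger = POSTagger(model="pos_tagger.model")  # model path is an assumption

def tag_sentence(sentence: str):
    """Normalize, tokenize, and POS-tag a Persian sentence."""
    tokens = word_tokenize(normalizer.normalize(sentence))
    return tagger.tag(tokens)  # list of (word, tag) pairs

if __name__ == "__main__":
    for word, tag in tag_sentence("آسمان آبی زیباست"):
        print(word, tag)
```

The (word, tag) pairs are then fed into the rule that decides where the ezafe or linking ye goes, which is where most of the noise comes from.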
I have trained an ALBERT model, and also an auto-regressive LLM that handles phoneme-to-grapheme conversion, using this dataset. I have found that converting from phonemes to graphemes is a much easier task for LLMs to model than the other direction. My ultimate goal is to create a proper G2P system, because without one the entire field of Persian speech processing will remain stagnant. (The good news is, I have almost solved this problem.)
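One way to frame the P2G direction for an auto-regressive model is as plain prompt → completion pairs. This is a minimal, illustrative sketch (not my actual training code), and the column names `phoneme` / `grapheme` are placeholders:

```python
# Simplified sketch: turn (phoneme, grapheme) rows into prompt/completion
# pairs for causal-LM fine-tuning. Column names are placeholders.
import json

def to_training_example(row: dict) -> dict:
    prompt = f"Phonemes: {row['phoneme']}\nGraphemes:"
    completion = f" {row['grapheme']}"
    return {"prompt": prompt, "completion": completion}

rows = [
    {"phoneme": "ɒsemɒne ɒbi", "grapheme": "آسمان آبی"},  # illustrative row
]

with open("p2g_train.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(to_training_example(row), ensure_ascii=False) + "\n")
```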
Overall, I wouldn't call this dataset "accurate" at all. It's a legacy artifact that I decided to upload on the off chance that it might help some people, so I wouldn't recommend using it for any downstream task.
Thanks for Your Explanation
Sorry to bother you. Could you please share the code used to generate this?
Hi. Unfortunately I don't have easy access to the scripts anymore; I generated this dataset a long time ago.
Thanks for your understanding.
Yes, I understand it's hard to find old code. Thank you for trying. If you have time, could you possibly share the overall structure of the dataset? For example, any linguistic patterns like مضاف and مضاف الیه (possessive constructions in Arabic/Persian), or any other implementation details you might remember? This would be a helpful guide. If you don't remember, that's completely fine.
Thanks :)
If you're talking about the linking ye (ی میانجی) and the ezafe marker (نقش‌نمای اضافه), it depends on the role of the word in the sentence. For example, a noun followed by an adjective gets a kasra (آسمانِ آبی, "blue sky"), and the same can happen when there are two nouns and one of them indicates possession. This rule relies heavily on the quality of your POS tagger, and overall your dataset will be very noisy with a rule-based approach like this.
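To make that concrete, a rough sketch of that kind of rule over (word, tag) pairs could look like this. The tag names ("N", "AJ") are placeholders, real tagsets differ, and this is exactly the kind of rule that breaks whenever the tagger is wrong:

```python
# Rough sketch of a rule-based ezafe inserter over (word, tag) pairs.
# Tag names ("N" for noun, "AJ" for adjective) are placeholders and depend
# on which POS tagger / tagset you use.
KASRA = "\u0650"  # the kasra diacritic ( ِ )

def add_ezafe(tagged):
    """Append a kasra to a noun when the next word is an adjective or
    another noun in a possessive-like construction."""
    out = []
    for i, (word, tag) in enumerate(tagged):
        next_tag = tagged[i + 1][1] if i + 1 < len(tagged) else None
        if tag == "N" and next_tag in ("AJ", "N"):
            word += KASRA  # e.g. آسمان + آبی -> آسمانِ آبی
        out.append(word)
    return " ".join(out)

print(add_ezafe([("آسمان", "N"), ("آبی", "AJ")]))  # آسمانِ آبی
```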
May I ask what your specific use case is?
I am trying to create a Persian medical TTS system. My plan is to first convert Persian text to phonemes and then pass it to algorithms like VITS or XTTS or other text-to-speech models.
Awesome! Speech processing is kind of my moat. The approach you describe may work if you want something that produces... speech... but it's going to leave a lot to be desired.
You probably shouldn't use XTTS; as far as I remember it uses its own tokenizer (similar to GPT-2's?) and those are grapheme-based. But training VITS from scratch can give you something.
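If you do go the VITS-from-scratch route, one option is to skip the built-in phonemizer and register your phoneme symbols as the model's character set. A rough sketch with Coqui TTS's VitsConfig follows; it is not a full training recipe, and the phoneme inventory, punctuation list, and special-token values are assumptions you would adjust to your own data:

```python
# Rough sketch (not a full training recipe): point Coqui TTS's VITS at
# pre-phonemized text by registering the phoneme symbols as its character set.
# The phoneme inventory and punctuation list below are placeholders.
from TTS.tts.configs.shared_configs import CharactersConfig
from TTS.tts.configs.vits_config import VitsConfig

PERSIAN_PHONEMES = "ɒaeioupbtdkgɢʔfvszʃʒxhʧʤmnlrj"  # placeholder inventory

config = VitsConfig(
    run_name="vits_persian_phonemes",
    text_cleaner="basic_cleaners",
    use_phonemes=False,  # the text is already phonemized upstream
    characters=CharactersConfig(
        characters=PERSIAN_PHONEMES,
        punctuations="!?.,؟، ",
        pad="<PAD>",
        eos="<EOS>",
        bos="<BOS>",
        blank="<BLNK>",
    ),
)
```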
Unfortunately I don't have much more to offer than this. Making a Persian speech generation model as a service that solves all of these problems is definitely on my roadmap, but it won't happen for a while.
Good luck! It's my passion to see Persian people excel in the field of AI.