Update README.md #1
by elnazrahmati - opened
README.md CHANGED

@@ -172,19 +172,11 @@ If efforts were made to anonymize the data, describe the anonymization process.

### Social Impact of Dataset

-
-
-The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
-
-Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
+Farsi is a language used by millions of people and has been in use for thousands of years, so numerous resources exist for it. However, no one has previously published a large, easy-to-use corpus of Farsi text. Our dataset eases the path of pre-training and fine-tuning Farsi language models (LMs) in a self-supervised manner, which can lead to better tools for the preservation and development of Farsi. In addition, the informal portion of naab contains several under-represented languages and dialects, including Turkish and Luri. Although this portion is comparatively small, it can still be helpful for training a multilingual tokenizer that covers Farsi variations. As mentioned before, some parts of our dataset are crawled from social media, which means it contains ethnic, religious, and gender biases.

### Discussion of Biases

-
-
-For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.
-
-If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
+During Exploratory Data Analysis (EDA), we found samples containing biased opinions about race, religion, and gender. Based on the samples we inspected, only a small portion of the informal data can be considered biased, so we do not expect it to noticeably affect a language model trained on this data. We decided to keep this small portion, as it may be helpful for training models that classify harmful and hateful text.

### Other Known Limitations

@@ -216,4 +208,4 @@ If the dataset has a [DOI](https://www.doi.org/), please provide it here.

### Contributions

-Thanks to [@sadra](https://github.com/sadra) for adding this dataset.
+Thanks to [@sadra](https://github.com/sadra) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.