---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-to-text
- text-to-image
task_ids:
- image-captioning
pretty_name: ShahNegar
---

# ShahNegar (A Plotted version of The Shahnameh)

This dataset is a plotted version of Ferdowsi's Shahnameh (a highly regarded ancient collection of Farsi poems) generated using DALL-E mini (aka [craiyon](https://www.craiyon.com/)). You can load this dataset with the code below:

```python
from datasets import load_dataset

dataset = load_dataset("sadrasabouri/ShahNegar")
```

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Paper:**
- **Point of Contact:** [Sadra Sabouri](mailto:[email protected])

### Dataset Summary

This dataset contains more than 30K images with their corresponding text from the Shahnameh. For each Shahnameh paragraph we generated at most 9 images; images corresponding to the same paragraph share the same `id` field. A human annotation post-process removed harmful/private generated images from the dataset. In the end we were left with more than 30K images, each 256 × 256 pixels.
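Since images from the same paragraph share an `id`, rows can be grouped back into paragraphs. Here is a minimal sketch with toy rows standing in for real dataset records (in the real dataset the `image` values are PIL images):

```python
from collections import defaultdict

# Toy rows mimicking the dataset schema; in the real dataset the
# "image" values are PIL images and "text" is a Shahnameh sentence.
rows = [
    {"id": 0, "text": "first paragraph", "image": "img_0_a"},
    {"id": 0, "text": "first paragraph", "image": "img_0_b"},
    {"id": 1, "text": "second paragraph", "image": "img_1_a"},
]

# Rows sharing an id come from the same source paragraph.
images_by_paragraph = defaultdict(list)
for row in rows:
    images_by_paragraph[row["id"]].append(row["image"])

print(dict(images_by_paragraph))
# {0: ['img_0_a', 'img_0_b'], 1: ['img_1_a']}
```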

### Supported Tasks and Leaderboards

We open-sourced this dataset mainly for its artistic value, but it can also be used for the following tasks:
+ text-to-image
+ image-to-text (image captioning)

### Languages

The Shahnameh was originally written in Farsi (Persian), but the translated version we used for this dataset - [sattor](https://www.sattor.com/english/Shahnameh.pdf) - is entirely in English, with no alignment to the corresponding Farsi poems. We plan to add a field with the corresponding Farsi poem to each entry as soon as possible.

## Dataset Structure

### Data Fields

Here is an instance of our dataset:

```json
{
    "image": <PIL Image Bytes>,
    "id": 0,
    "text": "He took up his abode in the mountains, and clad himself and his people in tiger-skins, and from him sprang all kindly nurture and the arts of clothing, till then unknown."
}
```
+ `image`: the image for the given text.
+ `id`: the id of the source text (**not** of the image).
+ `text`: the English text corresponding to the image.


### Data Splits

This dataset has a single split, `train`.

## Dataset Creation

The translated English version of the Shahnameh was taken from the [sattor](https://www.sattor.com/english/Shahnameh.pdf) website. We first extracted the text from the PDF, then split each paragraph into sentences and fed each sentence to the DALL-E mini model through its online API, which generated nine images per sentence. After an annotation pass we ended up with more than 30,000 images.
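The paragraph-to-sentence step is not specified in detail; the sketch below shows one plausible splitting rule (splitting on sentence-ending punctuation), which is an assumption on our part rather than the documented procedure:

```python
import re

# Hypothetical sentence splitter; the exact rule used for the dataset
# is not documented, so this simple punctuation-based split is an assumption.
paragraph = (
    "He took up his abode in the mountains. "
    "He clad himself and his people in tiger-skins. "
    "From him sprang all kindly nurture."
)

# Split after '.', '!' or '?' followed by whitespace, keeping the punctuation.
sentences = [s for s in re.split(r"(?<=[.!?])\s+", paragraph.strip()) if s]
print(len(sentences))  # 3
```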

### Annotations

#### Annotation process

During image generation we noticed a bias in the DALL-E mini model towards the word `iran`: sentences containing it tended to produce pictures of Iranian political figures, which were usually completely irrelevant to the context. The annotation process focused mainly on these pictures. We removed images that seemed harmful to those figures and/or irrelevant to the context.
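As an illustration only (the actual review was done by hand), rows prone to this bias could be flagged for manual inspection like so:

```python
# Flag sentences mentioning "iran" for manual review, since generations
# for them were prone to the bias described above.
texts = [
    "The hero returned to Iran in triumph.",
    "He took up his abode in the mountains.",
]

flagged = [t for t in texts if "iran" in t.lower()]
print(flagged)  # ['The hero returned to Iran in triumph.']
```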

#### Who are the annotators?

Mahsa Namdar and Sadra Sabouri were the annotators of this dataset.

### Personal and Sensitive Information

Since the textual data is publicly downloadable and the images were generated by an image generation model, there should not be any personal information in this dataset. If you nevertheless find anything harmful or violating someone's personal information, please let us know; we will take appropriate action as soon as possible.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

MIT

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset.