sadrasabouri committed
Commit 9b131ab
2 parents: e2a514f 3c7e34c

Merge branch 'main' of https://huggingface.co/datasets/SLPL/naab into main

Files changed (3):
  1. README.md +48 -11
  2. dataset_info.json +3 -3
  3. naab.py +21 -10
README.md CHANGED
@@ -6,7 +6,7 @@ license:
  multilinguality:
  - monolingual
  size_categories:
- - 200M<n<300M
+ - 100M<n<1B
  task_categories:
  - language-modeling
  - masked-language-modeling
@@ -44,7 +44,7 @@ _[If you want to join our community to keep up with news, models and datasets fr
  ## Dataset Description
 
  - **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- - **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
+ - **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
  - **Point of Contact:** [Sadra Sabouri](mailto:[email protected])
 
  ### Dataset Summary
@@ -56,8 +56,6 @@ from datasets import load_dataset
 
  dataset = load_dataset("SLPL/naab")
  ```
- _Note: be sure that your machine has at least 130 GB free space, also it may take a while to download._
-
  You may need to download parts/splits of this corpus too, if so use the command below (You can find more ways to use it [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
  ```python
  from datasets import load_dataset
 
  dataset = load_dataset("SLPL/naab", split="train[:10%]")
64
  ```
65
 
66
+ **Note: be sure that your machine has at least 130 GB free space, also it may take a while to download. If you are facing disk or internet shortage, you can use below code snippet helping you download your costume sections of the naab:**
67
+
68
+ ```python
69
+ from datasets import load_dataset
70
+
71
+ # ==========================================================
72
+ # You should just change this part in order to download your
73
+ # parts of corpus.
74
+ indices = {
75
+ "train": [5, 1, 2],
76
+ "test": [0, 2]
77
+ }
78
+ # ==========================================================
79
+
80
+
81
+ N_FILES = {
82
+ "train": 126,
83
+ "test": 3
84
+ }
85
+ _BASE_URL = "https://huggingface.co/datasets/SLPL/naab/resolve/main/data/"
86
+ data_url = {
87
+ "train": [_BASE_URL + "train-{:05d}-of-{:05d}.txt".format(x, N_FILES["train"]) for x in range(N_FILES["train"])],
88
+ "test": [_BASE_URL + "test-{:05d}-of-{:05d}.txt".format(x, N_FILES["test"]) for x in range(N_FILES["test"])],
89
+ }
90
+ for index in indices['train']:
91
+ assert index < N_FILES['train']
92
+ for index in indices['test']:
93
+ assert index < N_FILES['test']
94
+ data_files = {
95
+ "train": [data_url['train'][i] for i in indices['train']],
96
+ "test": [data_url['test'][i] for i in indices['test']]
97
+ }
98
+ print(data_files)
99
+ dataset = load_dataset('text', data_files=data_files, use_auth_token=True)
100
+ ```
101
+
102
  ### Supported Tasks and Leaderboards
103
 
104
  This corpus can be used for training all language models which can be trained by Masked Language Modeling (MLM) or any other self-supervised objective.
 
@@ -165,17 +199,20 @@ mit?
 
  ### Citation Information
 
- Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example:
  ```
- @article{article_id,
-   author = {Author List},
-   title = {Dataset Paper Title},
-   journal = {Publication Venue},
-   year = {2525}
+ @misc{https://doi.org/10.48550/arxiv.2208.13486,
+   doi = {10.48550/ARXIV.2208.13486},
+   url = {https://arxiv.org/abs/2208.13486},
+   author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
+   keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+   title = {naab: A ready-to-use plug-and-play corpus for Farsi},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
  }
  ```
 
- If the dataset has a [DOI](https://www.doi.org/), please provide it here.
+ DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)
 
  ### Contributions
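The README's Supported Tasks context above says the corpus can train any masked-language-modeling objective. As a rough, hedged sketch of what that looks like in practice (not part of this commit; the checkpoint `HooshvareLab/bert-fa-base-uncased`, the 1% slice, and all hyperparameters are illustrative assumptions):

```python
# Hypothetical MLM fine-tuning sketch on a small slice of naab.
# Checkpoint name and all hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "HooshvareLab/bert-fa-base-uncased"  # assumed Persian BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# A 1% slice keeps the sketch runnable without the full ~130 GB download.
dataset = load_dataset("SLPL/naab", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# The collator randomly masks tokens, producing the MLM training signal.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="naab-mlm", per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```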
dataset_info.json CHANGED
@@ -1,8 +1,8 @@
  {
    "description": "naab: A ready-to-use plug-and-play corpus in Farsi",
-   "citation": "",
-   "homepage": "",
-   "license": "",
+   "citation": "@misc{https://doi.org/10.48550/arxiv.2208.13486, doi = {10.48550/ARXIV.2208.13486}, url = {https://arxiv.org/abs/2208.13486}, author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {naab: A ready-to-use plug-and-play corpus for Farsi}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}}",
+   "homepage": "https://huggingface.co/SLPL",
+   "license": "mit",
    "features": {
      "text": {
        "dtype": "string",
naab.py CHANGED
@@ -24,6 +24,17 @@ import datasets
  # TODO: Add BibTeX citation
  # Find for instance the citation on arxiv or on the dataset repo/website
  _CITATION = """\
+ @misc{https://doi.org/10.48550/arxiv.2208.13486,
+   doi = {10.48550/ARXIV.2208.13486},
+   url = {https://arxiv.org/abs/2208.13486},
+   author = {Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
+   keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+   title = {naab: A ready-to-use plug-and-play corpus for Farsi},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
+ }
+
  """
 
  # You can copy an official description
@@ -33,7 +44,6 @@ Huge corpora of textual data are always known to be a crucial need for training
 
  _HOMEPAGE = "https://huggingface.co/datasets/SLPL/naab"
 
- # TODO: ?
  _LICENSE = "mit"
 
  N_FILES = {
@@ -96,24 +106,25 @@ class Naab(datasets.GeneratorBasedBuilder):
              datasets.SplitGenerator(
                  name=datasets.Split.TRAIN,
                  gen_kwargs={
-                     "filepath": train_downloaded_files,
+                     "filepaths": train_downloaded_files,
                      "split": "train"
                  }
              ),
              datasets.SplitGenerator(
                  name=datasets.Split.TEST,
                  gen_kwargs={
-                     "filepath": test_downloaded_files,
+                     "filepaths": test_downloaded_files,
                      "split": "test"
                  }
              ),
          ]
 
 
-     def _generate_examples(self, filepath, split):
-         with open(filepath, encoding="utf-8") as f:
-             for key, row in enumerate(f):
-                 if row.strip():
-                     yield idx, {"text": row}
-                 else:
-                     yield idx, {"text": ""}
+     def _generate_examples(self, filepaths, split):
+         for filepath in filepaths:
+             with open(filepath, encoding="utf-8") as f:
+                 for key, row in enumerate(f):
+                     if row.strip():
+                         yield key, {"text": row}
+                     else:
+                         yield key, {"text": ""}
 
 
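For context on the fix above: `datasets` unpacks each `SplitGenerator`'s `gen_kwargs` dict as keyword arguments to `_generate_examples`, so the keys must match the method's parameter names. This commit renames `filepath` to `filepaths` (the value is really a list of files) and replaces the undefined `idx` with `key`. One remaining subtlety is that `enumerate(f)` restarts at 0 for every file, while recent `datasets` releases verify that example keys are unique within a split. A hedged sketch of a compound-key variant (not part of this commit):

```python
# Hypothetical variant of _generate_examples with keys that stay
# unique across files; not part of this commit.
def _generate_examples(self, filepaths, split):
    for file_idx, filepath in enumerate(filepaths):
        with open(filepath, encoding="utf-8") as f:
            for line_idx, row in enumerate(f):
                # Compound key: unique even though line numbers
                # restart at 0 in every file.
                key = f"{file_idx}-{line_idx}"
                yield key, {"text": row if row.strip() else ""}
```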