Modalities: Text
Formats: csv
Libraries: Datasets, pandas
kargaranamir committed
Commit c3e2446
1 Parent(s): e9ed955

Update README.md

Files changed (1):
  1. README.md +11 -18
README.md CHANGED
@@ -10,18 +10,17 @@ language:
  pretty_name: Perso-Arabic Corpus
  ---

- # Perso-Arabic Corpus
+ # GlotSparse Corpus

  - Balochi (bal)
  - Gilaki (glk)
  - Brahui (brh)
- - Kashmiri (kas)
  - Southern Kurdish (sdh)
  - Gorani (hac)

  # Usage
  ```
- dataset = load_dataset('kargaranamir/perso-arabic')
+ dataset = load_dataset('kargaranamir/GlotSparse')
  print(dataset['train'][0]) # First row
  ```

@@ -29,12 +28,11 @@ print(dataset['train'][0]) # First row
  If you are not a fan of HF dataloader, download each dataset directly:

  ```
- ! wget https://huggingface.co/datasets/kargaranamir/perso-arabic/resolve/main/balochi.csv
- ! wget https://huggingface.co/datasets/kargaranamir/perso-arabic/resolve/main/gilaki.csv
- ! wget https://huggingface.co/datasets/kargaranamir/perso-arabic/resolve/main/brahui.csv
- ! wget https://huggingface.co/datasets/kargaranamir/perso-arabic/resolve/main/kashmiri.csv
- ! wget https://huggingface.co/datasets/kargaranamir/perso-arabic/resolve/main/southern-kurdish.csv
- ! wget https://huggingface.co/datasets/kargaranamir/perso-arabic/resolve/main/gorani.csv
+ ! wget https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/balochi.csv
+ ! wget https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/gilaki.csv
+ ! wget https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/brahui.csv
+ ! wget https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/southern-kurdish.csv
+ ! wget https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/gorani.csv

  ```

@@ -53,9 +51,6 @@ If you are not a fan of HF dataloader, download each dataset directly:
  - Brahui (brh)
  - News: https://talarbrahui.com/category/news/ and https://talarbrahui.com/category/articles/

- - Kashmiri (kas)
- - Article: NllbSeed (https://github.com/facebookresearch/flores/blob/main/nllb_seed/README.md)
-
  - Southern Kurdish (sdh)
  - News: https://shafaq.com/ku/ (Feyli)

@@ -69,6 +64,7 @@ If you are a website/dataset owner and do not want your dataset to be included i
  ## Code

  Crawler code is available here under MIT license: https://github.com/kargaranamir/persoarabic-corpus
+ Script checking and cleaning in terms of scripts is done using [GlotScript](https://arxiv.org/abs/2309.13320).


  ## Ethical Considerations
@@ -86,18 +82,15 @@ If you use any part of this code and data in your research, please cite it using
  All the sources related to news, social media, and without mentioned datasets are crawled and compiled in this work.
  ```

- @misc{persoarabic-corpus,
+ @misc{GlotSparse,
  author = {Kargaran, Amir Hossein},
- title = {Perso-Arabic Corpus},
+ title = {GlotSparse Corpus},
  year = {2023},
  publisher = {Github},
  journal = {Github Repository},
- howpublished = {\url{https://github.com/kargaranamir/persoarabic-corpus}},
+ howpublished = {\url{https://github.com/kargaranamir/GlotSparse}},
  }
  ```

- If the corpus is just hosted here, you must provide proper citations for the original work, List:
- - [NLLB](https://arxiv.org/abs/2207.04672) if you use Kashmiri (kas) from the NllbSeed source.


- Script checking and cleaning in terms of scripts is done using [GlotScript](https://arxiv.org/abs/2309.13320).
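The Usage snippet in the updated README omits the import it relies on. A minimal self-contained sketch of the same call, assuming the `datasets` library is installed and the repository id `kargaranamir/GlotSparse` from the diff above:

```python
# Minimal sketch of the README's Usage snippet, with the import made explicit.
# Assumes `pip install datasets`; the repo id is taken from the updated README.
from datasets import load_dataset

dataset = load_dataset('kargaranamir/GlotSparse')
print(dataset['train'][0])  # first row of the train split
```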
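The direct-download route can be exercised the same way. A sketch that fetches one of the CSVs listed in the diff and inspects it with pandas; the CSV schema is not documented here, so the sketch only reports what it finds:

```python
# Sketch of the direct-download route: fetch one CSV from the repo and look at it.
# The column layout is not described in this diff, so we just list columns and rows.
import urllib.request

import pandas as pd

URL = "https://huggingface.co/datasets/kargaranamir/GlotSparse/resolve/main/balochi.csv"
urllib.request.urlretrieve(URL, "balochi.csv")  # equivalent of the README's `wget`

df = pd.read_csv("balochi.csv")
print(df.columns.tolist())  # inspect the (undocumented) column names
print(df.head())
print(f"{len(df)} rows")
```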