holylovenia committed
Commit f1bbf19 · verified · 1 Parent(s): c2dafb4

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+61, -27)
README.md CHANGED
@@ -1,41 +1,77 @@
 
  ---
- license: mit
- tags:
- - self-supervised-pretraining
- language:
  - ind
  - jav
  - sun
  ---
 
- # cc100
-
- This corpus is an attempt to recreate the dataset used for training
-
  XLM-R. This corpus comprises monolingual data for 100+ languages and
-
  also includes data for romanized languages (indicated by *_rom). This
-
  was constructed using the URLs and paragraph indices provided by the
-
  CC-Net repository by processing January-December 2018 Commoncrawl
-
  snapshots. Each file comprises documents separated by
-
  double-newlines and paragraphs within the same document separated by a
-
  newline. The data is generated using the open source CC-Net repository.
-
  No claims of intellectual property are made on the work of preparation
-
  of the corpus.
 
  ## Dataset Usage
 
- Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
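For reference, a minimal sketch of the older NusaCrowd-era loading flow described in the removed line above; the Hub repository id used below is an assumption for illustration, not something this README states.

```
# Hypothetical sketch of the pre-SEACrowd instructions above.
# Assumes `pip install nusacrowd datasets` has been run; the repository id
# "indonlp/cc100" is an assumption for illustration only.
from datasets import load_dataset

dset = load_dataset("indonlp/cc100", trust_remote_code=True)
```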
 
  ## Citation
 
  ```
  @inproceedings{conneau-etal-2020-unsupervised,
  title = "Unsupervised Cross-lingual Representation Learning at Scale",
@@ -105,16 +141,14 @@ Run `pip install nusacrowd` before loading the dataset through HuggingFace's `lo
  language = "English",
  ISBN = "979-10-95546-34-4",
  }
- ```
-
- ## License
-
- MIT
-
- ## Homepage
 
- [https://data.statmt.org/cc-100/](https://data.statmt.org/cc-100/)
 
- ### NusaCatalogue
 
- For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
 
+
  ---
+ language:
  - ind
  - jav
  - sun
+ - mya
+ - lao
+ - khm
+ - tgl
+ - vie
+ - tha
+ - zlm
+ pretty_name: Cc100
+ task_categories:
+ - self-supervised-pretraining
+ tags:
+ - self-supervised-pretraining
  ---
 
+ This corpus is an attempt to recreate the dataset used for training
  XLM-R. This corpus comprises monolingual data for 100+ languages and
  also includes data for romanized languages (indicated by *_rom). This
  was constructed using the URLs and paragraph indices provided by the
  CC-Net repository by processing January-December 2018 Commoncrawl
  snapshots. Each file comprises documents separated by
  double-newlines and paragraphs within the same document separated by a
  newline. The data is generated using the open source CC-Net repository.
  No claims of intellectual property are made on the work of preparation
  of the corpus.
 
+
+ ## Languages
+
+ ind, jav, sun, mya, mya_zaw, lao, khm, tgl, vie, tha, zlm
+
+ ## Supported Tasks
+
+ Self Supervised Pretraining
+
  ## Dataset Usage
+ ### Using `datasets` library
+ ```
+ from datasets import load_dataset
+ dset = load_dataset("SEACrowd/cc100", trust_remote_code=True)
+ ```
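A quick sanity check after the call above could look like the following sketch (not part of the commit; the "train" split and "text" field names are assumptions):

```
# Hypothetical inspection snippet; the "train" split and "text" field are assumptions.
print(dset)                             # list the available splits and row counts
print(dset["train"][0]["text"][:200])   # preview the first document
```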
+ ### Using `seacrowd` library
+ ```
+ import seacrowd as sc
+ # Load the dataset using the default config
+ dset = sc.load_dataset("cc100", schema="seacrowd")
+ # Check all available subsets (config names) of the dataset
+ print(sc.available_config_names("cc100"))
+ # Load the dataset using a specific config
+ dset = sc.load_dataset_by_config_name(config_name="<config_name>")
+ ```
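A hedged sketch of how the two calls above fit together (not part of the commit; it assumes `available_config_names` returns an indexable list):

```
# Hypothetical follow-up: pick one of the reported configs and load only that subset.
config_names = sc.available_config_names("cc100")
dset = sc.load_dataset_by_config_name(config_name=config_names[0])
print(dset)
```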
+
+ More details on how to load datasets with the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).
+
 
+ ## Dataset Homepage
+
+ [https://data.statmt.org/cc-100/](https://data.statmt.org/cc-100/)
+
+ ## Dataset Version
+
+ Source: 2018.12.01. SEACrowd: 2024.06.20.
+
+ ## Dataset License
+
+ MIT
 
  ## Citation
 
+ If you are using the **Cc100** dataloader in your work, please cite the following:
  ```
  @inproceedings{conneau-etal-2020-unsupervised,
  title = "Unsupervised Cross-lingual Representation Learning at Scale",
 
  language = "English",
  ISBN = "979-10-95546-34-4",
  }
 
+ @article{lovenia2024seacrowd,
+ title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages},
+ author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
+ year={2024},
+ eprint={2406.10118},
+ journal={arXiv preprint arXiv:2406.10118}
+ }
 
+ ```