---
license: mit
tags:
- self-supervised-pretraining
language:
- ind
- jav
- sun
---
# cc100

This corpus is an attempt to recreate the dataset used for training XLM-R. It comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by `*_rom`). It was constructed from the URLs and paragraph indices provided by the CC-Net repository, processing the January-December 2018 Common Crawl snapshots. Each file consists of documents separated by double newlines, with paragraphs within the same document separated by a single newline. The data is generated using the open-source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.
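As an illustration of this layout, the short Python sketch below splits one raw downloaded dump into documents and paragraphs. The file name `id.txt` is only a placeholder for whichever per-language file you have downloaded from the homepage; it is not part of the dataset itself.

```python
# Minimal sketch for reading a raw cc100 dump. Documents are separated by a
# blank line (double newline); paragraphs within a document by a single newline.
# "id.txt" is a placeholder file name, not something shipped with this card.
from pathlib import Path

raw = Path("id.txt").read_text(encoding="utf-8")

# Split into documents, dropping empty chunks caused by trailing newlines.
documents = [doc for doc in raw.split("\n\n") if doc.strip()]

# Split each document into its paragraphs.
paragraphs = [doc.split("\n") for doc in documents]

print(f"{len(documents)} documents; first one has {len(paragraphs[0])} paragraphs")
```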
|
|
## Dataset Usage

Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
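The sketch below shows one way to load the data. The `"cc100"` path and the `lang="id"` argument follow the upstream Hugging Face cc100 loader and are used here only for illustration; check the NusaCatalogue entry below for the exact NusaCrowd dataloader and configuration names.

```python
# Hedged loading sketch: "cc100" and lang="id" follow the upstream Hugging Face
# cc100 loader; the NusaCrowd-specific dataloader or config name may differ,
# so treat these identifiers as placeholders.
# Prerequisite: `pip install nusacrowd` as noted above (the `datasets` library
# must be available).
from datasets import load_dataset

# Stream the Indonesian portion so the full dump is not downloaded up front.
dataset = load_dataset("cc100", lang="id", split="train", streaming=True)

for i, example in enumerate(dataset):
    print(example["text"])
    if i == 2:  # peek at the first few paragraphs only
        break
```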
|
|
## Citation

```
@inproceedings{conneau-etal-2020-unsupervised,
    title = "Unsupervised Cross-lingual Representation Learning at Scale",
    author = "Conneau, Alexis  and
      Khandelwal, Kartikay  and
      Goyal, Naman  and
      Chaudhary, Vishrav  and
      Wenzek, Guillaume  and
      Guzm{\'a}n, Francisco  and
      Grave, Edouard  and
      Ott, Myle  and
      Zettlemoyer, Luke  and
      Stoyanov, Veselin",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.747",
    doi = "10.18653/v1/2020.acl-main.747",
    pages = "8440--8451",
    abstract = "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{%} average accuracy on XNLI, +13{%} average F1 score on MLQA, and +2.4{%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{%} in XNLI accuracy for Swahili and 11.4{%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.",
}

@inproceedings{wenzek-etal-2020-ccnet,
    title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
    author = "Wenzek, Guillaume  and
      Lachaux, Marie-Anne  and
      Conneau, Alexis  and
      Chaudhary, Vishrav  and
      Guzm{\'a}n, Francisco  and
      Joulin, Armand  and
      Grave, Edouard",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.lrec-1.494",
    pages = "4003--4012",
    abstract = "Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
```
## License

MIT

## Homepage

[https://data.statmt.org/cc-100/](https://data.statmt.org/cc-100/)

### NusaCatalogue

For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)