---
license: other
license_name: other
license_link: LICENSE
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 100B<n<1T
---

# NanoData

## Dataset Description

To help researchers use NanoLM for comparative analysis across different model designs, we built a curated pre-training dataset drawing on the data of existing large-scale models (e.g., Llama, Falcon, GPT-3). It covers diverse domains to improve the generalization capabilities of the resulting models.
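For quick inspection, a minimal loading sketch with the 🤗 `datasets` library is shown below. The repo id `jasonfang3900/NanoData` is an assumption based on this repository's path; streaming avoids downloading the full ~100B-token corpus up front.

```python
from datasets import load_dataset

# "jasonfang3900/NanoData" is the presumed Hub repo id; adjust if needed.
# streaming=True yields examples lazily instead of downloading everything.
ds = load_dataset("jasonfang3900/NanoData", split="train", streaming=True)

for example in ds:
    print(example)  # inspect one record, e.g. {"text": "..."}
    break
```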

## Dataset Creation

The data is mainly post-processed and filtered from RedPajama and RedPajama-V2. We developed a series of cleaning steps to remove redundant formatting, garbled characters, formula errors, duplicated paragraphs, low-quality text, and other unwanted content. After document-level deduplication within each independent subset, we obtain a high-quality dataset.
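The exact cleaning pipeline is not reproduced here; the following is only a minimal sketch of document-level exact deduplication within a single subset. The `normalize` step and SHA-256 hashing are illustrative choices, not the authors' implementation.

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies collide.
    return " ".join(text.lower().split())

def dedup_documents(docs):
    """Drop exact duplicates at the document level within one subset."""
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(normalize(doc["text"]).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc

# Toy usage: the second document collapses onto the first after normalization.
subset = [{"text": "Hello  world"}, {"text": "hello world"}, {"text": "Other"}]
print(list(dedup_documents(subset)))
```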

## Dataset Summary

| Dataset        | Num Tokens (B) |
|----------------|----------------|
| CommonCrawl    | 67.00          |
| C4             | 15.00          |
| Wikipedia (En) | 5.14           |
| Books          | 4.48           |
| ArXiv          | 2.50           |
| StackExchange  | 2.00           |
| **Total**      | **97.12**      |

We release approximately 100B tokens of data. To strengthen the resulting model's performance on code and reasoning, we also recommend mixing in a code dataset such as StarCoder (StarCoderData) or The Stack v2, as sketched below.
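One way to follow that recommendation is to interleave this dataset with a code corpus via `datasets.interleave_datasets`. In this sketch the repo ids, the `content` column name, and the 9:1 mixing ratio are all assumptions to adapt to your setup.

```python
from datasets import load_dataset, interleave_datasets

# Both repo ids are illustrative; check availability and licenses on the Hub.
text = load_dataset("jasonfang3900/NanoData", split="train", streaming=True)
code = load_dataset("bigcode/starcoderdata", split="train", streaming=True)

# Align schemas before interleaving (column names here are assumptions).
code = code.rename_column("content", "text").select_columns(["text"])
text = text.select_columns(["text"])

# Sample ~90% natural-language text and ~10% code; the ratio is a free choice.
mixed = interleave_datasets([text, code], probabilities=[0.9, 0.1], seed=42)
```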

## Citation

To cite NanoLM, please use:


```bibtex
@misc{yao2024nanolm,
      title={nanoLM: an Affordable LLM Pre-training Benchmark via Accurate Loss Prediction across Scales},
      author={Yiqun Yao and Siqi Fan and Xiusheng Huang and Xuezhi Fang and Xiang Li and Ziyi Ni and Xin Jiang and Xuying Meng and Peng Han and Shuo Shang and Kang Liu and Aixin Sun and Yequan Wang},
      year={2024},
      eprint={2304.06875},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Acknowledgement

The data is mainly curated and filtered from RedPajama and RedPajama-V2. We extend our gratitude to the original authors for their innovative work and for making these datasets available to the community.

## License

The NanoLM code used for dataset processing and loss prediction is licensed under the Apache 2.0 license.

For the curated data, please refer to the licenses of the original source datasets.