---
license: other
license_name: other
license_link: LICENSE
task_categories:
- text-generation
language:
- en
size_categories:
- 100B<n<1T
---

### Dataset Description

To help researchers use [NanoLM](https://github.com/cofe-ai/nanoLM?tab=readme-ov-file) for comparative analysis across different model designs, we build a curated pre-training dataset from the pre-training corpora of existing large-scale models (i.e., Llama, Falcon, GPT-3). It covers diverse domains to improve the generalization capabilities of the resulting models.
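
A minimal usage sketch with the Hugging Face `datasets` library is shown below; the repository id is a placeholder for this dataset's actual path on the Hub, and the `text` field name is an assumption.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual path on the Hub.
REPO_ID = "cofe-ai/nanoLM-pretrain-data"

# Streaming avoids downloading the full ~100B-token corpus before iterating.
dataset = load_dataset(REPO_ID, split="train", streaming=True)

for example in dataset.take(3):
    # Assumes each record exposes its raw text under a "text" field.
    print(example["text"][:200])
```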

#### Dataset Creation

The data is mainly post-processed and filtered from [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [RedPajamaV2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).
We develop a series of cleaning steps to remove redundant formatting, garbled characters, formula errors, duplicated paragraphs, low-quality text, and other unwanted content. After interleaved deduplication at the document level within each independent subset, we finally obtain a high-quality dataset.
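
As an illustration of the document-level deduplication step, the sketch below drops exact duplicates within a subset by hashing normalized document text. It is a simplified, hypothetical example, not the exact pipeline used to build this dataset (which also handles near-duplicates and quality filtering).

```python
import hashlib

def dedup_documents(documents):
    """Keep the first occurrence of each document, comparing hashes of normalized text."""
    seen = set()
    kept = []
    for doc in documents:
        # Normalize whitespace and case so trivially different copies collide.
        normalized = " ".join(doc.split()).lower()
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

print(len(dedup_documents(["Hello  world", "hello world", "Another doc"])))  # 2
```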

#### Dataset Summary

| Dataset        | Num Tokens (B) |
| -------------- | -------------- |
| CommonCrawl    | 67.00          |
| C4             | 15.00          |
| Wikipedia (En) | 5.14           |
| Books          | 4.48           |
| ArXiv          | 2.50           |
| StackExchange  | 2.00           |
| Total          | 97.12          |

We release approximately 100B tokens of data. Furthermore, we recommend adding code datasets such as [Starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) and [The Stack V2](https://huggingface.co/datasets/bigcode/the-stack-v2-dedup) to enrich the model's performance on code and reasoning.
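
A minimal sketch of such a mixture with `datasets.interleave_datasets` is shown below. The repository id for this dataset, the column names, and the 90/10 sampling probabilities are illustrative assumptions, not settings from our experiments.

```python
from datasets import interleave_datasets, load_dataset

# Placeholder id for this dataset; bigcode/starcoderdata is the code corpus suggested above.
text_data = load_dataset("cofe-ai/nanoLM-pretrain-data", split="train", streaming=True)
code_data = load_dataset("bigcode/starcoderdata", split="train", streaming=True)

# Align the raw-text column names before interleaving (check each dataset card).
code_data = code_data.rename_column("content", "text").select_columns(["text"])
text_data = text_data.select_columns(["text"])

# Illustrative 90/10 text-to-code mixture; tune the probabilities to your token budget.
mixed = interleave_datasets([text_data, code_data], probabilities=[0.9, 0.1], seed=42)
```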

### Citation

To cite NanoLM, please use:

```
@misc{yao2024nanolm,
      title={nanoLM: an Affordable LLM Pre-training Benchmark via Accurate Loss Prediction across Scales},
      author={Yiqun Yao and Siqi Fan and Xiusheng Huang and Xuezhi Fang and Xiang Li and Ziyi Ni and Xin Jiang and Xuying Meng and Peng Han and Shuo Shang and Kang Liu and Aixin Sun and Yequan Wang},
      year={2024},
      eprint={2304.06875},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Acknowledgement

The data is mainly curated and filtered from [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [RedPajamaV2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2). We extend our gratitude to the original authors for their innovative work and for making it available to the community.

### License

The NanoLM code used for dataset processing and loss prediction is licensed under the Apache 2.0 license.

For the curated data, please refer to the licenses of the original datasets:

* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* Books: [the_pile_books3 license](https://huggingface.co/datasets/defunct-datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/deepmind/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/legacy-datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)