|
--- |
|
license: cc-by-4.0
|
task_categories: |
|
- text-generation |
|
language: |
|
- en |
|
tags: |
|
- web |
|
- common crawl |
|
size_categories: |
|
- 100B<n<1T |
|
--- |
|
|
|
# 📚 DCLM-pro
|
|
|
<p align="center"> |
|
<img src="prox-teaser.png"> |
|
</p> |
|
|
|
[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX) |
|
|
|
DCLM-pro is refined from [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) using the **ProX** refining framework. |
|
It contains over 500B high-quality tokens, ready for general language model pre-training.
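
To inspect a few documents without downloading the full corpus, here is a minimal sketch using the 🤗 `datasets` library in streaming mode (the repository id `gair-prox/DCLM-pro` and a `text` column are assumptions, not confirmed by this card):

```python
# Minimal sketch: stream DCLM-pro instead of materializing >500B tokens on disk.
# Assumptions: repo id "gair-prox/DCLM-pro" and a "text" column for the documents.
from datasets import load_dataset

ds = load_dataset("gair-prox/DCLM-pro", split="train", streaming=True)

for example in ds:
    print(example["text"][:200])  # preview the first 200 characters of one document
    break
```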
|
|
|
## License |
|
DCLM-pro is based on DCLM, which is made available under a CC-BY-4.0 license.
|
|
|
## Citation
|
```
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
```