---
license: mit
language:
  - en
tags:
  - regmix
pretty_name: regmix-data-sample
size_categories:
  - 100K<n<1M
---

RegMix Data Sample

Dataset Description

The RegMix Data Sample is a curated dataset derived from Pile-Uncopyrighted and built specifically for the RegMix paper (https://huggingface.co/papers/2407.01492). It is intended to support the automatic identification of high-performing data mixtures for language model pre-training by formulating mixture selection as a regression task.
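
To make the regression framing concrete, the sketch below is a minimal illustration only, not the released RegMix implementation: fit a regressor from the domain weights of small proxy runs to a target metric, then rank unseen candidate mixtures by the prediction. All numbers here are synthetic placeholders.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
num_domains, num_proxy_runs = 17, 64

# Synthetic stand-ins: each row is a domain-weight vector (summing to 1)
# paired with the validation loss of a small proxy model trained on it.
mixtures = rng.dirichlet(np.ones(num_domains), size=num_proxy_runs)
proxy_losses = rng.normal(loc=3.0, scale=0.1, size=num_proxy_runs)

# Fit the regressor, then rank a large pool of unseen candidate mixtures
# by predicted loss; the best-ranked mixture would drive the full-scale run.
reg = LinearRegression().fit(mixtures, proxy_losses)
candidates = rng.dirichlet(np.ones(num_domains), size=10_000)
best_mixture = candidates[np.argmin(reg.predict(candidates))]
print(best_mixture.round(3))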

Key Features:

  • Size: Approximately 20 GB on disk, about 5B tokens
  • Distribution: Follows the natural token distribution of examples across domains
  • Organization: Examples from different domains are separated into individual files

Dataset Structure

The dataset is organized into two main directories: train and valid, each containing domain-specific JSONL files. The file naming convention is as follows:

[domain]-[identifier]-[number].jsonl

For example: arxiv-10-74305611.jsonl

Domains Included:

arxiv, gutenberg_pg_19, pubmed_central, dm_mathematics, hackernews, stackexchange, enron_emails, nih_exporter, ubuntu_irc, europarl, philpapers, uspto_backgrounds, freelaw, pile_cc, wikipedia_en, github, pubmed_abstracts
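
As a quick, hypothetical illustration of the naming convention above (this helper is not part of the dataset or the RegMix code), the domain can be recovered from a filename like so:

from pathlib import Path

def domain_of(filename: str) -> str:
    # "arxiv-10-74305611.jsonl" -> "arxiv"; domain names use underscores,
    # so splitting off the last two hyphen-separated fields is safe.
    return Path(filename).stem.rsplit("-", 2)[0]

print(domain_of("arxiv-10-74305611.jsonl"))  # arxiv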

Usage

We recommend downloading the entire dataset snapshot instead of using the traditional load_dataset function, as the RegMix code is integrated with the TinyLlama framework.

To download the dataset:

from huggingface_hub import snapshot_download

LOCAL_DIR = "regmix-data-sample"

# Download every file in the dataset repository into LOCAL_DIR
# (materialized as regular files rather than symlinks into the cache).
snapshot_download(repo_id="sail/regmix-data-sample",
                  repo_type='dataset',
                  local_dir=LOCAL_DIR,
                  local_dir_use_symlinks=False)

This will download the entire snapshot, containing 34 JSONL files (17 for train and 17 for valid), to your specified local directory.
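
If you want to sanity-check the download, a small sketch like the following (assuming the LOCAL_DIR used above) lists the files per split:

from pathlib import Path

local_dir = Path("regmix-data-sample")
for split in ("train", "valid"):
    files = sorted((local_dir / split).glob("*.jsonl"))
    print(split, len(files), "files")  # 17 files expected per split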

Data Preprocessing

Our code preprocesses these domain files into a binary format with domain prefixes, which allows the dataset to be randomly sampled according to user-defined data mixtures (i.e., domain weights).
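
As a rough, illustrative sketch only (the actual preprocessing lives in the RegMix/TinyLlama code and operates on the binary format, not the raw JSONL), mixture-weighted sampling of the raw files could look like this; the domain weights below are hypothetical:

import json
import random
from pathlib import Path

# Hypothetical domain weights; in RegMix these come from the regression step.
weights = {"arxiv": 0.2, "github": 0.3, "pile_cc": 0.5}

train_dir = Path("regmix-data-sample") / "train"
rng = random.Random(0)

def sample_mixture(n_total: int):
    examples = []
    for domain, w in weights.items():
        n = round(n_total * w)
        # Each domain may span several files; one is enough for this toy example.
        path = next(train_dir.glob(f"{domain}-*.jsonl"))
        with open(path) as f:
            examples.extend(json.loads(line) for line, _ in zip(f, range(n)))
    rng.shuffle(examples)
    return examples

print(len(sample_mixture(1000)))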

Acknowledgements

We extend our gratitude to the creators of the Pile-Uncopyrighted dataset for their efforts in removing copyrighted content from the original Pile dataset, making this work possible.

Citation

If you use this dataset in your research, please cite the RegMix paper:

@article{liu2024regmix,
  title={RegMix: Data Mixture as Regression for Language Model Pre-training},
  author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min},
  journal={arXiv preprint arXiv:2407.01492},
  year={2024}
}

For more information about the RegMix methodology and its applications, please refer to the original paper.