---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 49226679381.27992
      num_examples: 8018993
  download_size: 27058112765
  dataset_size: 49226679381.27992
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

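The resulting subset can be loaded like any other Hub dataset (a minimal example; this downloads the ~27 GB of parquet shards to the local cache):

```python
from datasets import load_dataset

# single "train" split with a single "text" column, 8,018,993 documents
dataset = load_dataset("gngdb/subset_the_pile_deduplicated", split="train")
print(dataset[0]["text"][:200])  # peek at the first document
```
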
The subset was generated with the following script:

```python
from datasets import DatasetDict, load_dataset  # huggingface datasets

# number of parallel workers for dataset processing
# a good value is roughly half the number of CPU cores
num_proc = 8

# number of workers in the load_dataset() call
# the best value may differ from num_proc above since it also depends on network speed,
# but it is usually better than 1
num_proc_load_dataset = num_proc

# takes 450GB+ in huggingface .cache dir, about 134M documents (134318121)
dataset = load_dataset("EleutherAI/the_pile_deduplicated", num_proc=num_proc_load_dataset, split=None)

# this results in:
# >>> dataset
# DatasetDict({
#     train: Dataset({
#         features: ['text'],
#         num_rows: 134318121
#     })
# })

# we want to reduce it to the same size as openwebtext:
# by documents: 8M / 134M = 0.05970149254
# by tokens:    9B / 800B = 0.01125
# to be safe I'll take the bigger number
dataset = dataset['train'].train_test_split(test_size=0.05970149254, seed=42, shuffle=True)
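# the 'test' split is the ~6% sample we want to keep; rename it back to 'train'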
dataset = DatasetDict({'train': dataset['test']})
dataset.push_to_hub("gngdb/subset_the_pile_deduplicated")
```
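
Not part of the generation script, but a quick way to sanity-check the sizing argument above (documents vs. tokens) is to extrapolate a token count from a small random sample of the subset. This is a rough sketch that assumes the GPT-2 BPE tokenizer from `tiktoken` (as used in nanoGPT) and a 1,000-document sample:

```python
import tiktoken
from datasets import load_dataset

enc = tiktoken.get_encoding("gpt2")  # GPT-2 BPE

subset = load_dataset("gngdb/subset_the_pile_deduplicated", split="train")
sample = subset.shuffle(seed=42).select(range(1000))  # small sample for speed

# extrapolate from the sample to the full ~8M documents
tokens_in_sample = sum(len(enc.encode_ordinary(t)) for t in sample["text"])
est_total_tokens = tokens_in_sample / len(sample) * len(subset)
print(f"estimated tokens in subset: {est_total_tokens:,.0f}")
```

Because the subset was sized by document fraction (~6%) rather than token fraction (~1.1%), this estimate should come out well above OpenWebText's ~9B tokens.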