---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 49226679381.27992
    num_examples: 8018993
  download_size: 27058112765
  dataset_size: 49226679381.27992
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
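The subset can be loaded straight from the Hub; a minimal sketch, assuming only that the `datasets` library is installed (streaming avoids the ~27 GB download):

```python
from datasets import load_dataset

# Stream the single "train" split of the default config so the parquet
# shards are not downloaded just to peek at a few documents.
ds = load_dataset("gngdb/subset_the_pile_deduplicated", split="train", streaming=True)
for example in ds.take(3):
    print(example["text"][:200])
```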
Generated with the following script:
```python
import os
from tqdm import tqdm
import numpy as np
import tiktoken
from datasets import DatasetDict, load_dataset # huggingface datasets
# number of workers in .map() call
# a good number to use is roughly the number of CPU cores // 2
num_proc = 8
# number of workers in the load_dataset() call
# the best number may differ from num_proc above, as it also depends on network speed;
# it is usually better than 1, though
num_proc_load_dataset = num_proc
# takes 450GB+ in huggingface .cache dir, about 134M documents (134318121)
dataset = load_dataset("EleutherAI/the_pile_deduplicated", num_proc=num_proc_load_dataset, split=None)
# this results in:
# >>> dataset
# DatasetDict({
# train: Dataset({
# features: ['text'],
# num_rows: 134318121
# })
# })
# we want to reduce this to roughly the same size as OpenWebText
# by documents: 8M / 134M = 0.05970149254
# by tokens:    9B / 800B = 0.01125
# to be safe, take the bigger fraction
dataset = dataset['train'].train_test_split(test_size=0.05970149254, seed=42, shuffle=True)
# keep only the ~6% "test" slice and rename it back to "train"
dataset = DatasetDict({'train': dataset['test']})
dataset.push_to_hub("gngdb/subset_the_pile_deduplicated")
```
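As a quick sanity check after pushing, the split metadata on the Hub can be compared against the numbers in the YAML header above; a minimal sketch using `load_dataset_builder`, which fetches only metadata:

```python
from datasets import load_dataset_builder

# Fetch dataset metadata only (no data download) and confirm the train
# split size matches the num_examples recorded in the card header.
builder = load_dataset_builder("gngdb/subset_the_pile_deduplicated")
train_split = builder.info.splits["train"]
print(train_split.num_examples)  # expected: 8018993
```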