---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: source
      dtype: string
    - name: filtering_features
      dtype: string
    - name: source_other
      dtype: string
  splits:
    - name: train
      num_examples: 1594197267
  download_size: 3.3TB
license: odc-by
pretty_name: Zyda
task_categories:
  - text-generation
language:
  - en
size_categories:
  - n>1T
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*/*/*
  - config_name: zyda_no_starcoder
    data_files:
      - split: train
        path: data/zyda_no_starcoder/*/*
  - config_name: zyda_arxiv_only
    data_files:
      - split: train
        path: data/zyda_no_starcoder/zyda_arxiv/*
  - config_name: zyda_c4-en_only
    data_files:
      - split: train
        path: data/zyda_no_starcoder/c4_en/*
  - config_name: zyda_peS2o_only
    data_files:
      - split: train
        path: data/zyda_no_starcoder/zyda_peS2o/*
  - config_name: zyda_pile-uncopyrighted_only
    data_files:
      - split: train
        path: data/zyda_no_starcoder/zyda_pile-uncopyrighted/*
  - config_name: zyda_refinedweb_only
    data_files:
      - split: train
        path: data/zyda_no_starcoder/zyda_refinedweb/*
  - config_name: zyda_slimpajama_only
    data_files:
      - split: train
        path: data/zyda_no_starcoder/zyda_slimpajama/*
  - config_name: zyda_starcoder_only
    data_files:
      - split: train
        path: data/zyda_starcoder/*/*
---

# Dataset Card for Zyda

Zyda is a 1.3T-token language-modelling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think Zyda is best used either as a standalone dataset for language-model training up to the 1T-token scale, or in combination with FineWeb or Dolma for multi-trillion-token training.

## How to download

Full dataset: `datasets.load_dataset("Zyphra/Zyda", split="train")`

Full dataset without StarCoder: `datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")`

To download an individual component, pass its name via the `name` argument of `load_dataset()`; the available config names are listed below, and a streaming sketch follows the list:

- `zyda_arxiv_only`
- `zyda_c4-en_only`
- `zyda_peS2o_only`
- `zyda_pile-uncopyrighted_only`
- `zyda_refinedweb_only`
- `zyda_slimpajama_only`
- `zyda_starcoder_only`
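
Since the full dataset and several of its components are very large, streaming can be preferable to a full download. A minimal sketch, assuming the Hugging Face `datasets` library is installed; `zyda_arxiv_only` is just one of the config names from the list above:

```python
import datasets

# Stream a single Zyda component so nothing is downloaded up front.
ds = datasets.load_dataset(
    "Zyphra/Zyda",
    name="zyda_arxiv_only",  # any config name from the list above works
    split="train",
    streaming=True,
)

# Peek at the first document.
example = next(iter(ds))
print(example["text"][:200])
```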

## Dataset Description

- Curated by: Zyphra
- Language(s) (NLP): Primarily English
- License: Open Data Commons Attribution License (ODC-By)

## Dataset Structure

Dataset fields:

- `text`: the actual text used for training
- `source`: the component dataset the text comes from
- `filtering_features`: precomputed values of the features used for filtering, serialized as a JSON string
- `source_other`: metadata carried over from the source dataset, serialized as a JSON string
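
Since `filtering_features` and `source_other` are serialized JSON, they need to be decoded before use. A minimal sketch of reading them, assuming `example` is a single row obtained as in the streaming snippet above; the specific keys inside each field vary by component, so none are assumed here:

```python
import json

# Both fields are stored as JSON strings; decode them into dictionaries.
filtering_features = json.loads(example["filtering_features"])
source_metadata = json.loads(example["source_other"])

print(example["source"])                  # which component the row came from
print(sorted(filtering_features.keys()))  # names of the precomputed filter features
print(sorted(source_metadata.keys()))     # metadata keys carried over from the source
```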

## Source Data

- Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
- C4-en: https://huggingface.co/datasets/allenai/c4
- peS2o: https://huggingface.co/datasets/allenai/peS2o
- RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
- SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
- arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
- StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata

## Data Collection and Processing

[More Information Needed]

## Personal and Sensitive Information

As a language-modelling dataset, Zyda likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.

## Bias, Risks, and Limitations

As a dataset composed of open web scrapes, Zyda likely contains biased and toxic content.

## Citation

If you use our dataset to train a model, please cite us:

(-/TODO)