BerenMillidge committed · commit c46ef38 · verified · 1 parent: 0d494c3

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -56,7 +56,7 @@ configs:
 
 Zyda is a 1.3T-token language-modelling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and is at least comparable to, and potentially better than, the best openly available datasets, thanks to our meticulous post-processing pipeline. We think the best use of Zyda is either as a standalone dataset for language-model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
 
-Zyda is the primary dataset used in phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a dataset.
+An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a dataset.
 
 Models trained on Zyda significantly outperform parameter-matched models from the Pythia suite trained on the Pile across 300B tokens.
 
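
Since the card recommends Zyda as a standalone pretraining corpus, a quick way to inspect it is to stream it with the Hugging Face `datasets` library. The sketch below is illustrative rather than part of the card: the repo id `Zyphra/Zyda` and the presence of a default config are assumptions, so check the card's `configs:` section if a config name is required.

```python
from datasets import load_dataset  # pip install datasets

# Minimal sketch. Assumptions: the dataset lives at the repo id "Zyphra/Zyda"
# and exposes a default config; if load_dataset asks for a config name, pass
# one from the card's `configs:` list via the name= argument.
# Streaming avoids downloading the full ~1.3T-token corpus up front.
zyda = load_dataset("Zyphra/Zyda", split="train", streaming=True)

# Peek at a few documents to see the available fields.
for example in zyda.take(3):
    print(list(example.keys()))
```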