BerenMillidge
committed on
Update README.md
README.md CHANGED
@@ -56,7 +56,7 @@ configs:
 
 Zyda is a 1.3T language modelling dataset created by collecting open, high-quality datasets, combining them, and performing a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and is at least comparable to, and potentially better than, the best openly available datasets, due to our meticulous post-processing pipeline. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T scale, or in combination with Fineweb or Dolma for multi-trillion token training.
 
-Zyda
+An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a dataset.
 
 Models trained on Zyda significantly outperform parameter-matched models of the Pythia suite trained on the Pile across 300B tokens.
 
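
For reference, a minimal sketch of how one might load Zyda with the Hugging Face `datasets` library; the repository path `Zyphra/Zyda` and the presence of a `train` split are assumptions, not confirmed by this commit:

```python
from datasets import load_dataset

# Stream the dataset instead of downloading the full ~1.3T-token corpus locally.
# "Zyphra/Zyda" is the assumed Hub path for this dataset.
zyda = load_dataset("Zyphra/Zyda", split="train", streaming=True)

# Inspect the first example to see which fields each record exposes.
example = next(iter(zyda))
print(example.keys())
```

Streaming mode avoids materialising the dataset on disk, which is the practical option at this scale; a non-streaming `load_dataset` call would attempt a full download first.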