Datasets:
Deduped dataset across all CC dumps or within each dump?
I want to know whether the deduplication and filtering pipeline was run on all dumps combined, or separately on each dump.
Say I want to train a model on all 15T tokens: should I use the dataset as-is, or should I first re-run the dedupe pipeline across all dumps combined and train on that?
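To make the distinction concrete, here is a minimal sketch of the two strategies being asked about. It uses exact content hashing as a stand-in for whatever fuzzy/MinHash method the actual pipeline uses; the function names and the toy data are my own, not from the dataset's code.

```python
import hashlib

def dedupe(docs):
    """Keep the first occurrence of each document (exact-match by hash)."""
    seen, kept = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(doc)
    return kept

def dedupe_per_dump(dumps):
    """Per-dump strategy: dedupe each crawl independently, then concatenate.
    Duplicates that span two different dumps survive."""
    out = []
    for dump in dumps:
        out.extend(dedupe(dump))
    return out

def dedupe_global(dumps):
    """Global strategy: merge all crawls first, then dedupe across everything.
    Cross-dump duplicates are removed as well."""
    merged = [doc for dump in dumps for doc in dump]
    return dedupe(merged)

dumps = [["a", "b", "a"], ["b", "c"]]
print(dedupe_per_dump(dumps))  # ['a', 'b', 'b', 'c'] -- cross-dump dup 'b' kept
print(dedupe_global(dumps))    # ['a', 'b', 'c']
```

So the question boils down to: was the released dataset produced like `dedupe_per_dump` (in which case documents repeated across crawls are still present) or like `dedupe_global`?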
Following the thread. I'm very interested in this statement:
While we originally intended to deduplicate the dataset as a whole, our ablations showed that training on a sampling of individually deduplicated dumps/crawls outperformed training on a sampling of all the dumps/crawls deduplicated together. We will discuss this further in our technical report.
I was not able to find the mentioned technical report linked from the dataset card.