Datasets:
Simple exact deduplication removes 2/3 of the data.
I downloaded the latest version of the dataset on Aug 5th, tokenized it with the gpt-4o tokenizer, and ran the simplest dedup imaginable with Spark: df.dropDuplicates("text"), where the "text" column is the tokenized version of the text. This reduced the dataset from 15T tokens to 5T. This is consistent with prior experiments I've done deduplicating the whole family of fineweb datasets, with people complaining about duplicates in the fineweb-edu dataset, and with the numbers you get from deduplicating on just the url field.
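For reference, a minimal PySpark sketch of the kind of job I ran (the paths and session config here are illustrative, not my exact setup):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fineweb-exact-dedup").getOrCreate()

# Read the downloaded dumps (path is illustrative).
df = spark.read.parquet("fineweb/data/*/*.parquet")

# Exact dedup on the tokenized "text" column.
deduped = df.dropDuplicates(["text"])

print(f"rows before: {df.count()}, rows after: {deduped.count()}")
deduped.write.parquet("fineweb-deduped/")
```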
This observation seems so... strange that I want to make sure I'm not missing something before proceeding.
Can I get a comment from the HuggingFace folks?
Do you want me to publish my deduped version? I see no point in people downloading 3x the data they need and ending up with duplicates in their data. If you need to upsample, you can do it on the final leg of building the training dataset.
We discuss how we perform deduplication in the blogpost. In particular, we do not deduplicate across different dumps. Our experiments show that simply repeating the entire fully deduplicated dataset gives worse performance than this "natural upsampling" version.
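Concretely, the difference between the two setups could be sketched like this (assuming a column such as "dump" identifies which CommonCrawl dump each row comes from; this is a sketch, not the actual pipeline):

```python
# Per-dump dedup as described in the blogpost: duplicates that recur across
# dumps survive, which is what produces the "natural upsampling".
per_dump_deduped = df.dropDuplicates(["dump", "text"])

# Global dedup across all dumps, as in the experiment reported above.
global_deduped = df.dropDuplicates(["text"])
```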
Ok, then why not publish a deduped version with an additional repetition-factor column? People who need the deduped version would have it, and people who want this upsampling could reconstruct it on the fly (a sketch of the idea follows).
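Roughly what I mean, assuming the same Spark setup as above (the column name repetition_count and the output path are mine, not anything official):

```python
from pyspark.sql import functions as F

# Count how many times each text occurs across all dumps, then keep one copy
# per text together with that count.
counts = df.groupBy("text").agg(F.count("*").alias("repetition_count"))
deduped = df.dropDuplicates(["text"]).join(counts, on="text", how="left")
deduped.write.parquet("fineweb-deduped-with-counts/")

# Anyone who wants the "natural upsampling" back can re-expand on the fly:
restored = (
    deduped
    .withColumn("copy", F.explode(F.sequence(F.lit(1), F.col("repetition_count"))))
    .drop("copy", "repetition_count")
)
```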
"simply repeating the entire fully deduplicated dataset provides worse performance than this "natural upsampling" version" - I believe that. But why would you need 15T tokens at any cost including just repeating dataset? You need very very big cluster to train very big model to need 15T tokens. And if you have such compute resources you wouldn't just upsample fineweb, you'll have resources for much more data processing. If you would have had some tricky upsample of Wiki I would understand that, but I disagree with the idea that there is a value in build in upsample of staff in dataset of this size at the cost of having to work with additional tens of terrabytes of data