Datasets:
Sample dataset?
Would it be possible for you all to release a sample of this dataset? I realize that it's broken up into CC crawl dumps which can be downloaded individually, but any one of those dumps won't be representative of the whole dataset.
Dolma has a "v1_6-sample" which is 16.4GB gzipped and around 10b tokens. I've found it very helpful. Could a sample around that size be made for FineWeb?
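In the meantime, here's a minimal sketch of the single-dump workaround mentioned above, streaming one crawl so nothing has to be downloaded up front (the `CC-MAIN-2024-10` config name and the `text` column are assumptions based on the dataset's layout):

```python
# Minimal sketch: stream a single CC dump and peek at a few documents.
# Assumes per-dump config naming ("CC-MAIN-2024-10") and a "text" column.
from datasets import load_dataset

dump = load_dataset(
    "HuggingFaceFW/fineweb",
    name="CC-MAIN-2024-10",  # one crawl dump only, not representative
    split="train",
    streaming=True,          # iterate without downloading the shards
)

for i, doc in enumerate(dump):
    if i >= 1000:            # just peek at the first documents
        break
    print(doc["text"][:200])
```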
Comment from @karpathy on X on this topic:
https://twitter.com/karpathy/status/1786502899343970700
Oh, awesome. Great minds think alike!
I am working on 100m, 300m, and 1b token samples as well, controlling for diversity in document length and cluster quality after I run embeddings.
I personally don't feel any sampling with clustering or document length is necessary for this. Simple random subsamples of different sizes (with one around 10b) would suffice (for me).
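For illustration, a rough sketch of what such a random subsample could look like via streaming (the buffer shuffle is only approximately uniform, and whitespace splitting is a crude stand-in for real token counts):

```python
# Rough sketch: draw an approximately uniform ~10B-token subsample
# from the full streamed dataset.
from datasets import load_dataset

TARGET_TOKENS = 10_000_000_000

stream = load_dataset(
    "HuggingFaceFW/fineweb", split="train", streaming=True
).shuffle(seed=42, buffer_size=10_000)  # approximate shuffle

total = 0
for doc in stream:
    total += len(doc["text"].split())   # crude proxy for tokenizer counts
    # ... append doc to your sample shards here ...
    if total >= TARGET_TOKENS:
        break
```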
Hi, we've just added subsets randomly sampled from the whole dataset with 10, 100 and 350 billion tokens: https://huggingface.co/datasets/HuggingFaceFW/fineweb#smaller-sample-versions
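For anyone who wants to try them, the sample subsets can be loaded by config name (names as listed on the dataset card linked above):

```python
from datasets import load_dataset

# Config names taken from the dataset card.
sample = load_dataset(
    "HuggingFaceFW/fineweb",
    name="sample-10BT",   # also available: "sample-100BT", "sample-350BT"
    split="train",
    streaming=True,       # optional: iterate without a full download
)
print(next(iter(sample))["text"][:200])
```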
Thank you so much for this sample set! However, if I want to pre-train my model on only ~2T tokens, are there any suggestions on how to choose the data subset among the different dumps?
We plan to release more details on this later, but I would generally avoid dumps 2021-49 to 2023-14 inclusive. 2019-26 to 2021-43 are quite strong, and the last two (2023-50 and 2024-10) are the best in our testing.
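As a sketch of how one might act on that advice (assuming the per-dump configs follow the CC-MAIN-YYYY-WW naming used on the dataset card; the mix below is illustrative, not an official recipe):

```python
from datasets import interleave_datasets, load_dataset

# Favor the dumps called out above; skip 2021-49 through 2023-14.
strong_dumps = ["CC-MAIN-2023-50", "CC-MAIN-2024-10"]

streams = [
    load_dataset("HuggingFaceFW/fineweb", name=d, split="train", streaming=True)
    for d in strong_dumps
]

# Even 50/50 mix of the two strongest dumps; widen the list and adjust
# probabilities as needed to reach your ~2T-token budget.
mixed = interleave_datasets(streams, probabilities=[0.5, 0.5], seed=42)
```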
.....