What is the total # of tokens after applying the sampling proportions? 1.7T or 1.65T

#36 opened by ivanzhouyq

Hi! Thanks for sharing the dataset and sampling proportion!

I noticed a discrepancy in the token counts. The data card says:

> A subset of total data was used for training of OLMo 7B-v1.7. The token counts are based on the full dataset, whereas taking into account sampling proportion gives the final actual token counts used for training --- 1.715 trillion tokens.

However, when I sum the token counts weighted by the listed sampling proportions, the total comes to 1.65T. That leaves a gap of about 70B tokens, which is roughly the size of C4 after its listed sampling proportion.
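For reference, here is a rough back-of-the-envelope check of that gap, assuming a full C4 size of ~138.4B tokens and the 50% sampling proportion listed on the data card:

```python
# Rough check of the ~70B gap, assuming the 138.4B full-size token count
# for C4 and the 50% sampling proportion listed on the data card.
c4_tokens = 138.4                    # billion tokens, full C4 subset
dropped = c4_tokens * (1 - 0.5)      # tokens excluded if C4 is sampled at 50%

print(f"dropped: {dropped:.1f}B")            # ~69.2B -- roughly the 70B gap
print(f"total:   {1715.1 - dropped:.1f}B")   # ~1645.9B, i.e. ~1.65T
```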

Am I missing anything? Is my calculation correct, or does the sampling proportion need to be updated?


Thanks!

Ai2 org (@soldni)

Hello! Sorry for the late reply. In short: we sampled C4 at 100%, not 50%.

Exact counts are shown below:

| source | billion tokens | type | upsample | final |
|---|---|---|---|---|
| dolma: gutenberg books | 5.3 | REF | 100% | 5.3 |
| dolma: pes2o | 57.2 | REF | 100% | 57.2 |
| dolma: wikipedia & wikibooks | 3.7 | REF | 200% | 7.4 |
| redpajama: stackexchange | 19.6 | REF | 100% | 19.6 |
| redpajama: arxiv | 28.0 | REF | 100% | 28.0 |
| proofpile2: algebraic stack | 12.6 | REF | 100% | 12.6 |
| proofpile2: openwebmath | 12.7 | REF | 100% | 12.7 |
| tulu: flan v1 (v1-decontaminated-60M-shots_all-upweight_1-dialog_true-sep_newline) | 16.5 | REF | 100% | 16.5 |
| CC News | 14.3 | REF | 100% | 14.3 |
| dolma: c4 | 138.4 | HQW | 100% | 138.4 |
| dolma: reddit | 79.9 | HQW | 100% | 79.9 |
| refinedweb | 456.4 | HQW | 100% | 456.4 |
| megawika v1 (refs from wikipedia) | 4.6 | REF | 100% | 4.6 |
| starcoder | 263.8 | C | 100% | 263.8 |
| dolma: cc high | 356.8 | W | 50.2% | 179.2 |
| dolma: cc middle | 452.4 | W | 50.4% | 227.8 |
| dolma: cc low | 386.3 | W | 49.6% | 191.4 |
| **total** | | | | **1715.1** |
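For anyone double-checking the arithmetic, here is a quick sum of the final column from the table above:

```python
# Final (post-sampling) token counts from the table above, in billions.
final_counts = [
    5.3,    # dolma: gutenberg books
    57.2,   # dolma: pes2o
    7.4,    # dolma: wikipedia & wikibooks (3.7B upsampled to 200%)
    19.6,   # redpajama: stackexchange
    28.0,   # redpajama: arxiv
    12.6,   # proofpile2: algebraic stack
    12.7,   # proofpile2: openwebmath
    16.5,   # tulu: flan v1
    14.3,   # CC News
    138.4,  # dolma: c4 (sampled at 100%, not 50%)
    79.9,   # dolma: reddit
    456.4,  # refinedweb
    4.6,    # megawika v1
    263.8,  # starcoder
    179.2,  # dolma: cc high (356.8B at ~50.2%)
    227.8,  # dolma: cc middle (452.4B at ~50.4%)
    191.4,  # dolma: cc low (386.3B at ~49.6%)
]

print(f"{sum(final_counts):.1f}B tokens")  # 1715.1B, i.e. ~1.715T
```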

Thanks for clarifying, @soldni! That makes sense.

I got the 50% sampling proportion for C4 from this page: https://huggingface.co/datasets/allenai/dolma#summary-statistics-v17
Should it be corrected?

