How many CommonCrawl snapshots did this work use?

#4
by kimcando - opened

Hi, thank you in advance! I have two questions.

  1. How many snapshots does this work use? So far, roughly 90 CommonCrawl snapshots are available. Did you use all of them and run the full pipeline (URL filtering, text extraction, language identification, repetition removal, and further deduplication; reference link [1]: https://the-decoder.com/falconlm-open-source-language-model-beats-metas-llama/)?

  2. Which data format does this work use (e.g., WET or WARC)? The data description says you use "trafilatura", which suggests the WARC format. However, the figure in reference link [1] shows that text extraction rarely discards text, whereas starting from WARC a comparable amount of text should be discarded. Could you specify which format you chose?

Thank you!

Technology Innovation Institute org

Hey!

  1. To keep the data homogeneous and include both older and recent content, we sample from all available CommonCrawl dumps, but only keep a few segments from each. The `dump` and `segment` fields on each instance contain the relevant information.
  2. We indeed start from .WARC files, as we found the text extraction used in .WET files to be of relatively poor quality. The figure is a bit misleading (we will add a clarification in the paper): at this stage it only measures the removal rate in number of documents, so it corresponds to the ~2% of docs for which we failed to extract content. Expressed in tokens/words, the removal rate at this stage would be much higher.
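The document-vs-token distinction in point 2 can be made concrete with a small calculation. All numbers below are toy figures chosen for illustration, not the actual corpus statistics:

```python
# Toy corpus: extraction fails on 1 of 50 documents (2%, matching the
# document-level figure above), but that document happens to be long,
# so the word-level removal rate is far higher. Illustrative only.
docs = [{"words": 100, "extracted": True} for _ in range(49)]
docs.append({"words": 900, "extracted": False})

def removal_rates(docs):
    """Removal rate counted per document vs. weighted by word count."""
    failed = [d for d in docs if not d["extracted"]]
    doc_rate = len(failed) / len(docs)
    word_rate = sum(d["words"] for d in failed) / sum(d["words"] for d in docs)
    return doc_rate, word_rate

doc_rate, word_rate = removal_rates(docs)
# doc_rate  -> 0.02   (2% of documents)
# word_rate -> ~0.155 (over 15% of words)
```

The same per-document removal rate can thus hide a much larger share of removed text whenever extraction failures skew toward long documents.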
FalconLLM changed discussion status to closed
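The sampling strategy described in point 1 above (keep every dump, but only a few segments from each) can be sketched as follows. Only the `dump` and `segment` field names come from the description; the record structure and `sample_segments` helper are hypothetical:

```python
import random
from collections import defaultdict

# Hypothetical corpus: records carrying the dump and segment fields
# mentioned above. Everything else here is illustrative.
records = [
    {"dump": f"CC-MAIN-2023-{week:02d}", "segment": seg, "text": "..."}
    for week in (6, 14, 23)
    for seg in range(10)
]

def sample_segments(records, per_dump=3, seed=0):
    """Keep only per_dump segments from each dump, dropping the rest."""
    rng = random.Random(seed)
    by_dump = defaultdict(set)
    for r in records:
        by_dump[r["dump"]].add(r["segment"])
    kept = {
        dump: set(rng.sample(sorted(segs), min(per_dump, len(segs))))
        for dump, segs in by_dump.items()
    }
    return [r for r in records if r["segment"] in kept[r["dump"]]]

sampled = sample_segments(records)
# Every dump is still represented, but only per_dump segments of each survive.
```

This keeps the mix of crawl dates homogeneous while bounding how much of any single dump is ingested.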

@FalconLLM Could you provide more details about the lower quality you observed in WET files? Are you suggesting that the text extraction used to produce .wet files is itself flawed?
