  data_files:
  - split: train
    path: wikimedia_others/train-*
task_categories:
- text-classification
- text-generation
language:
- it
size_categories:
- 100B<n<1T
---
WARNING: THIS "README" IS JUST A STUB; IT WILL BE IMPROVED DURING THE NEXT FEW DAYS AND FILLED WITH MUCH MORE INFORMATION AND DETAILED STATISTICS.

**Testimole -- A multi-billion-token Italian text corpus**

The goal of this work is to create a huge linguistic resource for the Italian language that can be used for several NLP applications, [...]
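As a usage note for the configuration declared in the YAML header above, any split can be streamed with the `datasets` library. A minimal sketch; the repository id and config name below are placeholders, not the confirmed ones:

```python
from datasets import load_dataset

# "USER/testimole" and "wikimedia_others" are placeholders: substitute the
# actual repository id and one of the configs declared in the YAML header.
# streaming=True avoids downloading hundreds of GB up front.
ds = load_dataset("USER/testimole", "wikimedia_others",
                  split="train", streaming=True)

for example in ds.take(3):  # peek at the first few records
    print(example)
```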

This is the project that started the entire work: the goal was to collect the largest possible amount of Usenet posts published in the hierarchies it.\* and italia.\* \[3\], as they were listed on "www.eternal-september.org", gathered mainly from the Google Groups archive.

This split contains 19.395.579.455 tokens. Texts were not checked for language, but it is a safe assumption that most of the text contained is in Italian, as the selected Usenet hierarchies target only Italian users.

Detailed statistics, already computed, will follow very soon. For now, here are general stats about this part of the dataset:

    {
      "chars": 59389804791,
      "tokens": 19395579455,
      "sentences": 519535427,
      "posts": 89499446,
      "threads": 14521548
    }
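Since the references cite tiktoken \[2\], counts in this shape can presumably be reproduced along these lines. A minimal sketch, assuming an OpenAI BPE encoding such as cl100k_base (the exact encoding used for the official counts is not stated here):

```python
import tiktoken

# Assumption: cl100k_base is a guess; the README cites tiktoken [2] but
# does not name the encoding used for the official counts.
enc = tiktoken.get_encoding("cl100k_base")

def text_stats(text: str) -> dict:
    """Per-document counts in the same shape as the stats blocks here."""
    return {"chars": len(text), "tokens": len(enc.encode(text))}

print(text_stats("Questo è un esempio di post in italiano."))
```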

This split amounts to 83 GB of JSONL files before the conversion to a HuggingFace dataset.
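The conversion step itself is not described; a minimal sketch of one way such a JSONL dump could be turned into the Parquet layout referenced by the YAML header (file and directory names are placeholders):

```python
import os
from datasets import load_dataset

# Assumption: the raw dump is newline-delimited JSON and "usenet.jsonl" is
# a placeholder file name; adjust paths to the real dump and target layout.
raw = load_dataset("json", data_files={"train": "usenet.jsonl"})["train"]
os.makedirs("usenet", exist_ok=True)
raw.to_parquet("usenet/train-00000-of-00001.parquet")
```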

[...] general stats about this part of the dataset:

    {
      "chars": 199436329709,
      "tokens": 62415825978,
      "sentences": 1673025712,
      "posts": 468391746,
      "threads": 25280745,
      "hasImage": 46071
    }

[...] contain html code.

*Work in progress: this section will contain statistics on token, character, and sentence lengths for each diachronic resource (Usenet newsgroup, forum post, blog) for each month of each year.*
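As a hint of what such per-month statistics could look like, a minimal sketch that aggregates token counts by month, assuming hypothetical "date" (ISO format) and "text" record fields:

```python
from collections import defaultdict
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding, see [2]

def monthly_token_counts(records):
    """Sum token counts per "YYYY-MM" bucket; field names are placeholders."""
    counts = defaultdict(int)
    for rec in records:
        counts[rec["date"][:7]] += len(enc.encode(rec["text"]))
    return dict(counts)
```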

**Conclusions**

*Work in progress*

**References (partial)**

\* \[1\] <https://pdai.info/>

\* \[2\] <https://github.com/openai/tiktoken>

\* \[3\] <https://xmau.com/usenet/>