Update README.md

README.md (changed):

configs:
  [...]
      path: wikimedia_others/train-*
---

WARNING: THIS "README" IS JUST A STUB; IT WILL BE IMPROVED DURING THE NEXT FEW DAYS AND FILLED WITH MUCH MORE INFORMATION AND DETAILED STATISTICS.

**Testimole -- A multi-billion-token Italian text corpus**

The goal of this work is to create a huge linguistic resource for the Italian language that can be used for several NLP applications, [...]

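As a minimal usage sketch (the repository id below is a placeholder, and `wikimedia_others` is simply the config name visible in the front-matter fragment above), the individual configs can be loaded with the `datasets` library, preferably in streaming mode given the size of the corpus:

```python
from itertools import islice

from datasets import load_dataset

# Placeholder repository id: substitute the actual Hub id of this dataset.
REPO_ID = "<user>/testimole"

# "wikimedia_others" is one of the configs declared in the YAML front matter above.
ds = load_dataset(REPO_ID, "wikimedia_others", split="train", streaming=True)

# Peek at a few rows without downloading the whole split.
for row in islice(ds, 3):
    print(row)
```

Streaming avoids downloading an entire multi-billion-token split just to inspect it.
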
[...] cl100k_base model [2] was used for tokenization. This dataset is composed of several sub-datasets, each with different types of data and goals.

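Token counts throughout this card refer to the cl100k_base encoding; here is a minimal sketch of how such counts can be reproduced, assuming the `tiktoken` library that ships this encoding (the card itself only names the encoding, not the tooling):

```python
import tiktoken

# cl100k_base is the encoding the card reports its token statistics in.
enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Number of cl100k_base tokens in a string."""
    return len(enc.encode(text))

print(count_tokens("Questo è un esempio di testo italiano."))
```
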
**Conversational (~85 billion tokens):**

**UsenetArchiveIT**

This is the project that started the entire work: the goal was to collect the largest possible amount of Usenet posts published in the [...]

[...] here are general stats about this part of the dataset:

83 GB of JSONL files before the conversion to a Hugging Face dataset

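As a generic illustration of that conversion step (the path and field layout below are hypothetical, not the actual structure of the raw dump), JSONL files can be turned into a Hugging Face dataset with the built-in `json` loader:

```python
from datasets import load_dataset

# Hypothetical path: the actual layout of the raw JSONL dump is not documented here.
usenet = load_dataset("json", data_files={"train": "usenet/*.jsonl"})

print(usenet["train"])
# usenet.push_to_hub("<user>/testimole-usenet")  # optional upload step
```
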
**Forum**

The second part of the project is the one that produced the largest amount of data: 62.415.825.978 tokens. A list of Italian message boards based on [...]

[...] Regarding multimodality, in short: this feature is not very well implemented. More details will follow, but do not expect too much on this point.

**General notes on conversational datasets:**

The data contained in the "usenet" and "forums" splits were generated by Italian users of the Internet between 1995 and 2024. For this reason, [...]

[...] The posts should not contain personal information: the internal rules of all the forums asked users not to share personal information, as it would have been publicly available on the web.

**General**

**OJS**

This split of the dataset contains articles published as Open Access on the OJS platform. It comprises mainly academic journals from [...]

[...] dataset. All the articles are published under Creative Commons licenses, and the license used for each article can be retrieved from the metadata.

**Blogs**

This resource was gathered by scraping blogs written in Italian. The project started with a collection of blogs about [...]

[...] annotated so it can be used for interesting diachronic analysis. Finally, the blog split also contains an annotation for the language used, as identified by the FastText library.

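A minimal sketch of that kind of language annotation, assuming fastText's off-the-shelf `lid.176.bin` identification model (the card only says the FastText library was used):

```python
import fasttext

# lid.176.bin is fastText's publicly released language-identification model;
# download it from the fastText website before running this.
model = fasttext.load_model("lid.176.bin")

text = "Questo blog parla di matematica e di fisica."
labels, probs = model.predict(text.replace("\n", " "))  # predict() rejects newlines
print(labels[0], probs[0])  # e.g. __label__it with a high probability
```
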
**Wikimedia**

This split doesn't need much explanation, as it is simply a dump of the Wikimedia resources in Italian (Wikipedia, Wikibooks, Wikinews, [...]

[...] included in this split are: eml (emilian e rumagno), fur (furlan), la (sicilianu), sc (sardu) and vec (veneto). Using this data could, depending on the goal of the project, produce very interesting results.

**Books**

This collection mainly contains the books coming from LiberLiber's project "Manuzio" [2]. The books were downloaded from the website in [...]

[...] sources, such as the Creative Commons licensed school books of "Matematicamente" [3] and Oilproject-Weschool [4], as well as some other CC- and PD-licensed books found online.

**Websites**

I created a very generic script that is able to extract all the text of a website, as well as the text contained in Office, PDF and TeX [...]

[...] that we will discuss in the appropriate section. Despite these two points, users are encouraged to use this section, as it is composed of medium-high and high quality content.

**Reddit**

It contains a small subset (4.192.672 messages) of conversations from some Italian subreddits.

**Italatex**

Still work in progress. A collection of materials written in LaTeX.

**DEDUPLICATION**

The presence of duplicate text can be, depending on the use case, a big problem for several machine learning tasks. I tried to avoid as much [...]

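The card's own deduplication procedure is cut off in this excerpt, so the following is only a generic sketch of exact-duplicate filtering by hashing normalized text (the `text` field name is hypothetical), not necessarily the method used here:

```python
import hashlib

def normalize(text: str) -> str:
    """Crude normalization before hashing: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def drop_exact_duplicates(records):
    """Yield only records whose normalized 'text' field has not been seen before."""
    seen = set()
    for rec in records:
        digest = hashlib.sha1(normalize(rec["text"]).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield rec

sample = [{"text": "Ciao a tutti!"}, {"text": "ciao  a  tutti!"}, {"text": "Un altro post."}]
print(list(drop_exact_duplicates(sample)))  # the second record is dropped
```
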
[...] in the form of 1) the header of the website, 2) lists of links, and 3) the footer of the website. All the HTML was converted using html2text, so the text should not contain HTML code.

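A minimal sketch of that HTML-to-text conversion with the `html2text` library; the specific options below are illustrative, as the card does not list the settings that were used:

```python
import html2text

converter = html2text.HTML2Text()
converter.ignore_links = True   # illustrative settings only
converter.ignore_images = True
converter.body_width = 0        # do not hard-wrap the extracted text

html = "<h1>Titolo</h1><p>Un paragrafo con <a href='https://example.org'>un link</a>.</p>"
print(converter.handle(html))
```
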
**Detailed statistics**

*Work in progress; this section will contain statistics on tokens, characters and sentence lengths for each diachronic resource (Usenet newsgroup, post, blog), for each month of each year.*

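A small sketch of how such per-month statistics could be computed (the `date` and `text` field names are hypothetical, and the token count again assumes the cl100k_base encoding via `tiktoken`):

```python
from collections import defaultdict

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def monthly_stats(records):
    """Aggregate document, token and character counts per "YYYY-MM" key."""
    stats = defaultdict(lambda: {"docs": 0, "tokens": 0, "chars": 0})
    for rec in records:
        key = rec["date"][:7]  # ISO date string -> "YYYY-MM"
        stats[key]["docs"] += 1
        stats[key]["tokens"] += len(enc.encode(rec["text"]))
        stats[key]["chars"] += len(rec["text"])
    return dict(stats)

example = [
    {"date": "1999-04-12", "text": "Primo post del newsgroup."},
    {"date": "1999-04-20", "text": "Una risposta."},
]
print(monthly_stats(example))
```
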
**Conclusions**

**References (partial)**

* [1] <https://pdai.info/>