---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: score
      dtype: float64
    - name: embedding
      sequence: float64
    - name: dataset
      dtype: string
  splits:
    - name: train
      num_bytes: 1199742546
      num_examples: 110000
  download_size: 856443525
  dataset_size: 1199742546
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - text-classification
language:
  - pt
tags:
  - portuguese
  - language-modeling
pretty_name: GigaVerbo Text-Filter
size_categories:
  - 100K<n<1M
---

GigaVerbo Text-Filter

Dataset Description

Dataset Summary

GigaVerbo Text-Filter is a dataset of 110,000 randomly selected samples from nine subsets of GigaVerbo (specifically, the subsets that are not synthetic). It was used to train the text-quality filters described in "Tucano: Advancing Neural Text Generation for Portuguese". The text embeddings were created with sentence-transformers/LaBSE, and all quality scores were generated by GPT-4o.

Supported Tasks and Leaderboards

This dataset can be used for text classification and regression tasks in Portuguese.
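
As a minimal sketch of such a use (assuming scikit-learn and a plain ridge regression, not the filter implementation from the Tucano paper), one can fit a regressor on the precomputed LaBSE embeddings to predict the GPT-4o scores:

import numpy as np
from datasets import load_dataset
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Load the precomputed LaBSE embeddings and GPT-4o scores
dataset = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split="train")
X = np.array(dataset["embedding"], dtype=np.float32)
y = np.array(dataset["score"], dtype=np.float32)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=42
)

# A simple ridge regressor as a stand-in text-quality filter
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)
print(f"R^2 on held-out samples: {model.score(X_test, y_test):.3f}")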

Languages

Portuguese

Dataset Structure

Data Instances

{
  "text": "A inteligência artificial (de sigla: IA; do inglês: artificial intelligence, de sigla: AI) é um campo de estudo multidisciplinar que abrange varias áreas do conhecimento ...",
  "score": 0.85,
  "embedding": [0.313, 0.716, 0.897, 0.571, 0.061, 0.712, 0.265, 0.092, 0.816, 0.998, ...],
  "dataset": "brwac"
}

Data Fields

The dataset consists of the following features:

  • text: a string of text in Portuguese.
  • score: the quality score attributed by GPT-4o to the corresponding string of text.
  • embedding: the embedding vector generated by sentence-transformers/LaBSE for the corresponding string of text.
  • dataset: the subset of GigaVerbo from which the corresponding text string originated.

Data Splits

The only available split is train. You can load the dataset with the Hugging Face Datasets library:

from datasets import load_dataset

dataset = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split='train')

# If you don't want to download the entire dataset, set streaming to `True`
dataset = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split='train', streaming=True)

Dataset Creation

Curation Rationale

This dataset was developed as part of the study "Tucano: Advancing Neural Text Generation for Portuguese". In short, GigaVerbo Text-Filter is a dataset of 110,000 randomly selected samples from nine subsets of GigaVerbo.

Source Data

Initial Data Collection and Normalization

GigaVerbo Text-Filter was scored by GPT-4o. Text embeddings were generated with sentence-transformers/LaBSE.
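
As a sketch of how compatible embeddings can be produced for new text, the same encoder can be loaded through the sentence-transformers library (the example sentences below are made up; LaBSE returns 768-dimensional vectors):

from sentence_transformers import SentenceTransformer

# The same encoder used to produce the "embedding" column
encoder = SentenceTransformer("sentence-transformers/LaBSE")

texts = [
    "A inteligência artificial é um campo de estudo multidisciplinar.",
    "Outro exemplo curto de texto em português.",
]

# One vector per input text
embeddings = encoder.encode(texts)
print(embeddings.shape)  # (2, 768)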

Who are the source language producers?

All text samples are either originally in Portuguese or translated into Portuguese from other languages (slight contamination from other languages should also be expected).

Annotations

Annotation process

The quality scores that serve as annotations were generated by GPT-4o for each of the 110,000 randomly selected samples drawn from nine non-synthetic subsets of GigaVerbo. All text samples are either originally in Portuguese or translated into Portuguese from other languages (slight contamination from other languages should also be expected).

Who are the annotators?

Nicholas Kluge Corrêa.

Personal and Sensitive Information

This dataset can potentially contain personal and sensitive information, along with offensive, toxic, and disturbing language.

Considerations for Using the Data

Social Impact of Dataset

The presence of personal and sensitive information within the dataset raises concerns about privacy and data protection, potentially leading to breaches of individuals' confidentiality and security. Furthermore, the inclusion of offensive, toxic, and disturbing language in the dataset poses risks of perpetuating harmful behaviors and attitudes, contributing to the normalization of hate speech and online toxicity. Therefore, careful handling and ethical considerations are essential to mitigate these potential social impacts and promote responsible dataset use.

Discussion of Biases

The inclusion of offensive, toxic, and disturbing language in the dataset poses risks of perpetuating harmful behaviors and attitudes, contributing to the normalization of hate speech and online toxicity.

Other Known Limitations

A significant portion of the dataset's data has been translated using translation engines, potentially resulting in corrupted samples of both language and code. While useful for quickly converting text between languages, translation engines often struggle with accurately preserving the syntax, semantics, and context of programming languages. As a result, the translated code may contain errors, syntax inconsistencies, or even introduce vulnerabilities, rendering it unreliable or unusable for its intended purpose.

Additional Information

Dataset Curators

Nicholas Kluge Corrêa.

Licensing Information

GigaVerbo Text-Filter is licensed under the Apache License, Version 2.0. The text samples drawn from GigaVerbo retain the licenses of their respective source datasets (only training splits are a part of the corpus).

Citation Information


@misc{correa2024tucanoadvancingneuraltext,
      title={{Tucano: Advancing Neural Text Generation for Portuguese}}, 
      author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
      year={2024},
      eprint={2411.07854},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.07854}, 
}

Acknowledgments

We gratefully acknowledge the granted access to the Marvin cluster hosted by the University of Bonn, along with the support provided by its High Performance Computing & Analytics Lab.

Contributions

If you want to contribute, contact me at [email protected]!