---
dataset_info:
features:
- name: text
dtype: string
- name: score
dtype: float64
- name: embedding
sequence: float64
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 1199742546
num_examples: 110000
download_size: 856443525
dataset_size: 1199742546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-classification
language:
- pt
tags:
- portuguese
- language-modeling
pretty_name: GigaVerbo Text-Filter
size_categories:
- 100K<n<1M
---
# GigaVerbo Text-Filter
<img src="./logo-gigaverbo.png" height="200">
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
  - [Acknowledgments](#acknowledgments)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter
- **Repository:** https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter
- **Paper:** [Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)
- **Point of Contact:** [Nk-correa](mailto:[email protected])
### Dataset Summary
GigaVerbo Text-Filter is a dataset with 110,000 randomly selected samples from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo) (i.e., specifically those that were not synthetic). This dataset was used to train the text-quality filters described in "_[Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)_". To create the text embeddings, we used [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). All scores were generated by GPT-4o.
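For reference, a LaBSE embedding like the ones stored in the `embedding` column can be reproduced with the `sentence-transformers` library. The sketch below is illustrative only; the exact encoding setup used by the authors may differ.

```python
from sentence_transformers import SentenceTransformer

# Load LaBSE, the same encoder used to generate the `embedding` column.
model = SentenceTransformer("sentence-transformers/LaBSE")

# Encode an arbitrary Portuguese string; LaBSE returns a 768-dimensional vector.
text = "A inteligência artificial é um campo de estudo multidisciplinar."
embedding = model.encode(text)
print(embedding.shape)  # (768,)
```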
### Supported Tasks and Leaderboards
This dataset can be used for text classification and regression tasks in Portuguese.
### Languages
Portuguese
## Dataset Structure
### Data Instances
The dataset consists of the following features:
- **text:** a string of text in Portuguese.
- **score:** the quality score attributed by GPT-4o to the corresponding string of text.
- **embedding:** the embedding vector generated by [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) for the corresponding string of text.
- **dataset:** the subset of GigaVerbo from which the corresponding string of text originated.
### Data Fields
```python
{
"text": "A inteligência artificial (de sigla: IA; do inglês: artificial intelligence, de sigla: AI) é um campo de estudo multidisciplinar que abrange varias áreas do conhecimento ...",
"score": 0.85,
"embedding": [0.313, 0.716, 0.897, 0.571, 0.061, 0.712, 0.265, 0.092, 0.816, 0.998, ...],
"name" : "brwac"
}
```
### Data Splits
The only available split is `train`.
```python
from datasets import load_dataset
dataset = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split='train')
# If you don't want to download the entire dataset, set streaming to `True`
dataset = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split='train', streaming=True)
```
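As a rough illustration of the classification/regression use case (not the filter architecture described in the Tucano paper), the precomputed `embedding` column can be used as features to predict the GPT-4o `score`. The sketch below assumes scikit-learn and streams only a small sample:

```python
import numpy as np
from datasets import load_dataset
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Stream a small subset so the full (~856 MB) download is not required.
stream = load_dataset("TucanoBR/GigaVerbo-Text-Filter", split="train", streaming=True)
rows = list(stream.take(5000))

X = np.array([r["embedding"] for r in rows], dtype=np.float32)  # precomputed LaBSE embeddings
y = np.array([r["score"] for r in rows], dtype=np.float32)      # GPT-4o quality scores

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
regressor = Ridge().fit(X_train, y_train)
print(f"Held-out R^2: {regressor.score(X_test, y_test):.3f}")
```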
## Dataset Creation
### Curation Rationale
This dataset was developed as part of the study "[Tucano: Advancing Neural Text Generation for Portuguese](https://arxiv.org/abs/2411.07854)". In short, GigaVerbo Text-Filter is a dataset with 110,000 randomly selected samples from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo).
### Source Data
#### Initial Data Collection and Normalization
The samples in GigaVerbo Text-Filter were scored by GPT-4o. Text embeddings were generated by [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
#### Who are the source language producers?
All text samples were originally written in Portuguese or translated into Portuguese from other languages (slight contamination from other languages should also be expected).
### Annotations
#### Annotation process
The 110,000 samples were randomly selected from 9 subsets of [GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo). Each sample was then scored by GPT-4o, and its text embedding was generated with [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
#### Who are the annotators?
[Nicholas Kluge Corrêa](mailto:[email protected]).
### Personal and Sensitive Information
This dataset can potentially contain personal and sensitive information, along with offensive, toxic, and disturbing language.
## Considerations for Using the Data
### Social Impact of Dataset
The presence of personal and sensitive information within the dataset raises concerns about privacy and data protection, potentially leading to breaches of individuals' confidentiality and security. Furthermore, the inclusion of offensive, toxic, and disturbing language in the dataset poses risks of perpetuating harmful behaviors and attitudes, contributing to the normalization of hate speech and online toxicity. Therefore, careful handling and ethical considerations are essential to mitigate these potential social impacts and promote responsible dataset use.
### Discussion of Biases
The inclusion of offensive, toxic, and disturbing language in the dataset poses risks of perpetuating harmful behaviors and attitudes, contributing to the normalization of hate speech and online toxicity.
### Other Known Limitations
A significant portion of the dataset's data has been translated using translation engines, potentially resulting in corrupted samples of both language and code. While useful for quickly converting text between languages, translation engines often struggle with accurately preserving the syntax, semantics, and context of programming languages. As a result, the translated code may contain errors, syntax inconsistencies, or even introduce vulnerabilities, rendering it unreliable or unusable for its intended purpose.
## Additional Information
### Dataset Curators
[Nicholas Kluge Corrêa](mailto:[email protected]).
### Licensing Information
GigaVerbo is built from the following datasets, under their respective licenses (only the training splits are part of the corpus):
- [HPLT-PT](https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2) (License: [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))
- [CC-2023](https://huggingface.co/datasets/dominguesm/CC-MAIN-2023-23) (License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/deed.en))
- [CC100](https://huggingface.co/datasets/eduagarcia/CrawlPT_dedup) (License: [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/))
- [MC4-PT](https://huggingface.co/datasets/thegoodfellas/mc4-pt-cleaned) (License: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html))
- [Blogset-BR](https://huggingface.co/datasets/thegoodfellas/blogset-br) (License: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html))
- [BrWaC](https://huggingface.co/datasets/UFRGS/brwac) (License: Unknown)
- [Wikipedia](https://huggingface.co/datasets/graelo/wikipedia) (License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/))
- [Corpus Carolina](https://huggingface.co/datasets/carolina-c4ai/corpus-carolina) (License: [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en))
- [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) (License: [ODC-By](https://opendatacommons.org/licenses/by/1-0/), [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))
- [OSCAR](https://huggingface.co/datasets/eduagarcia/CrawlPT_dedup) (License: [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))
- [Legal Portuguese](https://huggingface.co/datasets/eduagarcia/LegalPT_dedup) (License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/deed.en))
- [Xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum) (License: [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en))
- [Roots Wikiquote](https://huggingface.co/datasets/bigscience-data/roots_pt_wikiquote) (License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/))
- [Roots Ted Talks](https://huggingface.co/datasets/bigscience-data/roots_pt_ted_talks_iwslt) (License: [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en))
### Citation Information
```latex
@misc{correa2024tucanoadvancingneuraltext,
title={{Tucano: Advancing Neural Text Generation for Portuguese}},
author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
year={2024},
eprint={2411.07854},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2411.07854},
}
```
### Acknowledgments
We gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.
### Contributions
If you want to contribute, contact me at [[email protected]](mailto:[email protected])!