---
dataset_info:
- config_name: journalistic
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1172734772.6569607
num_examples: 1742725
- name: valid
num_bytes: 1345863.2574352932
num_examples: 2000
- name: test
num_bytes: 28294
num_examples: 36
download_size: 787050993
dataset_size: 1174108929.914396
- config_name: legal
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 146574307
num_examples: 466434
download_size: 89418636
dataset_size: 146574307
- config_name: literature
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 29744489.161964517
num_examples: 88522
- name: valid
num_bytes: 672024.7884585644
num_examples: 2000
- name: test
num_bytes: 12767
num_examples: 36
download_size: 21126825
dataset_size: 30429280.95042308
- config_name: politics
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 7970329
num_examples: 5810
download_size: 4605661
dataset_size: 7970329
- config_name: social_media
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 265857455
num_examples: 2020928
download_size: 188356429
dataset_size: 265857455
- config_name: web
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 278541298
num_examples: 140887
download_size: 165251198
dataset_size: 278541298
configs:
- config_name: journalistic
data_files:
- split: train
path: journalistic/train-*
- split: valid
path: journalistic/valid-*
- split: test
path: journalistic/test-*
- config_name: legal
data_files:
- split: train
path: legal/train-*
- config_name: literature
data_files:
- split: train
path: literature/train-*
- split: valid
path: literature/valid-*
- split: test
path: literature/test-*
- config_name: politics
data_files:
- split: train
path: politics/train-*
- config_name: social_media
data_files:
- split: train
path: social_media/train-*
- config_name: web
data_files:
- split: train
path: web/train-*
---
# PtBrVId
PtBrVId is a corpus for identifying the variety of Portuguese (European, PT-PT, vs. Brazilian, PT-BR) of a document. It is composed of pre-existing datasets that were originally created for other NLP tasks and are distributed under permissive licenses. The first release of the corpus is available on [Huggingface](https://huggingface.co/datasets/Random-Mary-Smith/port_data_random).
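Each configuration exposes a `text` column and an integer `label` column (see the metadata above). A minimal loading sketch, assuming the Hugging Face `datasets` package is installed; the helper name `load_config` is our own:

```python
# The six available configurations, as declared in the dataset card above.
CONFIGS = ["journalistic", "legal", "literature", "politics", "social_media", "web"]

def load_config(name: str, split: str = "train"):
    """Download and return one PtBrVId configuration (requires network access)."""
    # Imported lazily so the sketch can be inspected without the package installed.
    from datasets import load_dataset  # pip install datasets
    return load_dataset("Random-Mary-Smith/port_data_random", name, split=split)

# Example: ds = load_config("politics")
# Each row then has a "text" (str) and a "label" (int) field.
```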
#### Data Sources
The corpus consists of the following datasets:
<p align="center">
<table>
<tr>
<th>Domain</th>
<th>Variety</th>
<th>Dataset</th>
<th>Original Task</th>
<th># Docs</th>
<th>License</th>
<th>Silver Labeled</th>
</tr>
<tr>
<td rowspan="5">Literature</td>
<td rowspan="3">PT-PT</td>
<td><a href="http://arquivopessoa.net/">Arquivo Pessoa</a></td>
<td>-</td>
<td>~4k</td>
<td>CC</td>
<td>✔</td>
</tr>
<tr>
<td><a href="https://www.gutenberg.org/ebooks/bookshelf/99">Gutenberg Project</a></td>
<td>-</td>
<td>6</td>
<td>CC</td>
<td>✔</td>
</tr>
<tr>
<td><a href="https://www.clul.ulisboa.pt/recurso/corpus-de-textos-literarios">LT-Corpus</a></td>
<td>-</td>
<td>56</td>
<td>ELRA END USER</td>
<td>✘</td>
</tr>
<tr>
<td rowspan="2">PT-BR</td>
<td><a href="https://www.kaggle.com/datasets/rtatman/brazilian-portuguese-literature-corpus">Brazilian Literature</a></td>
<td>Author Identification</td>
<td>81</td>
<td>CC</td>
<td>✘</td>
</tr>
<tr>
<td>LT-Corpus</td>
<td>-</td>
<td>8</td>
<td>ELRA END USER</td>
<td>✘</td>
</tr>
<tr>
<td rowspan="2">Politics</td>
<td>PT-PT</td>
<td><a href="http://www.statmt.org/europarl/">Koehn (2005) Europarl</a></td>
<td>Machine Translation</td>
<td>~10k</td>
<td>CC</td>
<td>✘</td>
</tr>
<tr>
<td>PT-BR</td>
<td>Brazilian Senate Speeches</td>
<td>-</td>
<td>~5k</td>
<td>CC</td>
<td>✔</td>
</tr>
<tr>
<td rowspan="2">Journalistic</td>
<td>PT-PT</td>
<td><a href="https://www.linguateca.pt/CETEMPublico/">CETEM Público</a></td>
<td>-</td>
<td>1M</td>
<td>CC</td>
<td>✘</td>
</tr>
<tr>
<td>PT-BR</td>
<td><a href="https://www.linguateca.pt/CETEMFolha/">CETEM Folha</a></td>
<td>-</td>
<td>272k</td>
<td>CC</td>
<td>✘</td>
</tr>
<tr>
<td rowspan="3">Social Media</td>
<td>PT-PT</td>
<td><a href="https://www.aclweb.org/anthology/2021.ranlp-1.37/">Ramalho (2021)</a></td>
<td>Fake News Detection</td>
<td>2M</td>
<td>MIT</td>
<td>✔</td>
</tr>
<tr>
<td rowspan="2">PT-BR</td>
<td><a href="https://www.aclweb.org/anthology/2022.lrec-1.322/">Vargas (2022)</a></td>
<td>Hate Speech Detection</td>
<td>5k</td>
<td>CC-BY-NC-4.0</td>
<td>✘</td>
</tr>
<tr>
<td><a href="https://www.aclweb.org/anthology/2021.wlp-1.72/">Cunha (2021)</a></td>
<td>Fake News Detection</td>
<td>2k</td>
<td>GPL-3.0 license</td>
<td>✔</td>
</tr>
<tr>
<td>Web</td>
<td>BOTH</td>
<td><a href="https://www.aclweb.org/anthology/2020.lrec-1.451/">Ortiz-Suarez (2020)</a></td>
<td>-</td>
<td>10k</td>
<td>CC</td>
<td>✔</td>
</tr>
</table>
</p>
<p align="center">
<em>Table 1: Data Sources</em>
</p>
Note: The "Brazilian Senate Speeches" dataset was created by the authors of this paper by crawling the Brazilian Senate website; it is available in the Huggingface repository.
#### Annotation Schema & Data Preprocessing Pipeline
We leveraged our knowledge of the Portuguese language to identify data sources that guaranteed mono-variety documents. However, this first release lacks any kind of supervision, so we cannot guarantee that all documents are mono-variety. In the future, we plan to release a second version of the corpus with a more robust annotation schema, combining automatic and manual annotation.
To improve the quality of the corpus, we applied a preprocessing pipeline to all documents. The pipeline consists of the following steps:
1. Remove all NaN values.
2. Remove all empty documents.
3. Remove all duplicated documents.
4. Apply the [clean_text](https://github.com/jfilter/clean-text) library to strip information that is not relevant for language identification (e.g., URLs and e-mail addresses) from the documents.
5. Remove all documents whose length is more than two standard deviations above the mean document length in the corpus.
The pipeline is illustrated in Figure 1.
<p align="center">
<img src="assets/pipeline_lid.jpg" alt="Data Pre-Processing Pipeline">
</p>
<p align="center">
<em>Figure 1: Data Pre-Processing Pipeline</em>
</p>
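The five steps above can be sketched with the standard library alone; this is a simplified illustration, with step 4's call to clean-text replaced by a plain `str.strip` placeholder:

```python
import statistics

def preprocess(docs):
    """Sketch of the five-step cleaning pipeline described above."""
    # 1-2. Drop NaN/None values and empty documents.
    docs = [d for d in docs if isinstance(d, str) and d.strip()]
    # 3. Drop duplicated documents, preserving order.
    seen, unique = set(), []
    for d in docs:
        if d not in seen:
            seen.add(d)
            unique.append(d)
    # 4. Normalise each document (placeholder for clean-text's `clean` function).
    unique = [d.strip() for d in unique]
    # 5. Drop documents whose length is more than two standard
    #    deviations above the mean document length.
    lengths = [len(d) for d in unique]
    mean, std = statistics.mean(lengths), statistics.pstdev(lengths)
    return [d for d in unique if len(d) <= mean + 2 * std]
```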
#### Class Distribution
The class distribution of the corpus is presented in Table 2. The corpus is highly imbalanced, with the majority of the documents coming from the journalistic domain. In the future, we plan to release a second version of the corpus with a more balanced distribution across the six domains.

Depending on how imbalanced each textual domain is, we used different strategies to perform the train-validation-test splits. For the heavily imbalanced domains, we ensured a minimum of 100 documents for validation and 400 for testing; in the other domains, we applied a stratified split.
<p align="center">
<table>
<tr>
<th>Domain</th>
<th># PT-PT</th>
<th># PT-BR</th>
<th>Stratified</th>
</tr>
<tr>
<td>Politics</td>
<td>6500</td>
<td>4894</td>
<td>&#10003;</td>
</tr>
<tr>
<td>Web</td>
<td>7960</td>
<td>21592</td>
<td>&#10003;</td>
</tr>
<tr>
<td>Literature</td>
<td>18282</td>
<td>2772</td>
<td>&#10003;</td>
</tr>
<tr>
<td>Law</td>
<td>392839</td>
<td>5766</td>
<td>&#10005;</td>
</tr>
<tr>
<td>Journalistic</td>
<td>1494494</td>
<td>354180</td>
<td>&#10003;</td>
</tr>
<tr>
<td>Social Media</td>
<td>2013951</td>
<td>6222</td>
<td>&#10005;</td>
</tr>
</table>
</p>
<p align="center">
<em>Table 2: Class Balance across the six textual domains in both varieties of Portuguese.</em>
</p>
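The stratified strategy used for the balanced domains can be sketched as follows; this is a simplified illustration with hypothetical split fractions, and `stratified_split` is our own helper name:

```python
import random

def stratified_split(docs, labels, valid_frac=0.1, test_frac=0.1, seed=42):
    """Stratified split: every label keeps the same proportion in each split."""
    rng = random.Random(seed)
    splits = {"train": [], "valid": [], "test": []}
    # Group documents by their variety label.
    by_label = {}
    for doc, label in zip(docs, labels):
        by_label.setdefault(label, []).append(doc)
    # Split each label group independently, so proportions are preserved.
    for label, group in by_label.items():
        rng.shuffle(group)
        n_valid = int(len(group) * valid_frac)
        n_test = int(len(group) * test_frac)
        splits["valid"] += [(d, label) for d in group[:n_valid]]
        splits["test"] += [(d, label) for d in group[n_valid:n_valid + n_test]]
        splits["train"] += [(d, label) for d in group[n_valid + n_test:]]
    return splits
```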
#### Future Releases & How to Contribute
We plan to release a second version of this corpus considering more textual domains and extending the scope to other Portuguese varieties. If you want to contribute to this corpus, please [contact us]().