---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2590923950
    num_examples: 1103446
  download_size: 1516857634
  dataset_size: 2590923950
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- pt
pretty_name: Wikipedia-PT
size_categories:
- 1M<n<10M
tags:
- portuguese
---
# Wikipedia-PT

## Dataset Summary

The Portuguese portion of the Wikipedia dataset.
## Supported Tasks and Leaderboards

The dataset is generally used for language modeling.
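For language-modeling use, the card's metadata (a single `default` config with a `train` split and one `text` field) suggests loading along these lines with the `datasets` library. The repository id below is a placeholder assumption; substitute the id under which this card is actually hosted on the Hugging Face Hub:

```python
from datasets import load_dataset

# NOTE: "user/wikipedia-pt" is a hypothetical repo id for illustration.
# streaming=True avoids downloading the full ~1.5 GB archive up front.
ds = load_dataset("user/wikipedia-pt", split="train", streaming=True)

# Each record exposes a single "text" field with the article content.
for example in ds:
    print(example["text"][:100])
    break
```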
## Languages

Portuguese (`pt`).
## Dataset Structure

### Data Instances

An example looks as follows:

```python
{
    'text': 'Abril é o quarto mês...'
}
```
### Data Fields

- `text` (`str`): the text content of the article.
### Data Splits

All configurations contain a single `train` split.
## Dataset Creation

### Initial Data Collection and Normalization

The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org

You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html

The articles have been parsed using the `mwparserfromhell` tool.

When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain the dump of this date for the "bbc", "dga", or "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761
## Licensing Information

Copyright licensing information: https://dumps.wikimedia.org/legal.html

All original textual content is licensed under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License. Some text may be available only under the Creative Commons license; see their Terms of Use for details. Text written by some authors may be released under additional licenses or into the public domain.
## Citation Information

```bibtex
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```