---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: corpus
    dtype: string
  - name: original_id
    dtype: int64
  splits:
  - name: train
    num_bytes: 141807806497
    num_examples: 50336214
  download_size: 84893303434
  dataset_size: 141807806497
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-nc-sa-4.0
language:
- tr
---
# Dataset Card for vngrs-web-corpus
vngrs-web-corpus is a mixed dataset made of the cleaned Turkish sections of OSCAR-2201 and mC4. It was originally created for training VBART and was later used for training TURNA. The cleaning procedure is explained in Appendix A of the VBART paper. The dataset consists of 50.3M pages and 25.33B tokens when tokenized with the VBART tokenizer.
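If the corpus is hosted on the Hugging Face Hub, it can be streamed with the `datasets` library. A minimal loading sketch; the repository id `vngrs-ai/vngrs-web-corpus` is an assumption, substitute the actual id if it differs:

```python
from datasets import load_dataset

# Streaming avoids the ~85 GB download when only a sample is needed.
# The repository id below is an assumption.
ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)

for example in ds.take(3):
    print(example["corpus"], example["original_id"], example["text"][:80])
```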
## Dataset Details
### Dataset Description
- Curated by: VNGRS-AI
- Language(s) (NLP): Turkish
- License: cc-by-nc-sa-4.0
## Uses
vngrs-web-corpus is mainly intended for pretraining language models and learning word representations.
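A minimal sketch of preparing the corpus for language-model pretraining: stream pages, tokenize, and pack into fixed-length blocks. The tokenizer checkpoint name is a placeholder, not a confirmed VBART artifact, and the repository id is the same assumption as above:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-tokenizer")  # placeholder checkpoint
ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)  # assumed repo id

block_size = 512
buffer = []
for example in ds:
    # Concatenate token ids across pages, then emit fixed-length blocks.
    buffer.extend(tokenizer(example["text"])["input_ids"])
    while len(buffer) >= block_size:
        block, buffer = buffer[:block_size], buffer[block_size:]
        # feed `block` to the pretraining data pipeline
```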
## Dataset Structure
- text (string): main text content of the page
- corpus (string): the source corpus, OSCAR-2201 or mC4 (see the filtering sketch after this list)
- original_id (int64): index of the example in its source corpus
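A minimal sketch of working with these fields, e.g. selecting only the portion that came from OSCAR-2201. The repository id and the exact label stored in `corpus` are assumptions:

```python
from datasets import load_dataset

ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)  # assumed repo id

# Keep only pages originating from OSCAR-2201; the exact label string is an assumption.
oscar_only = ds.filter(lambda ex: ex["corpus"] == "OSCAR-2201")
print(next(iter(oscar_only))["original_id"])
```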
## Bias, Risks, and Limitations
This dataset contains content crawled from the open web. It was cleaned with a set of rules and heuristics that do not account for the semantics of the content, so irrelevant or inappropriate content may remain; such content should be flagged and removed accordingly. The dataset is intended for research purposes only and should not be used for any other purpose without prior consent from the relevant authorities.
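One way to act on flagged content is a rule-based filter. A purely illustrative sketch with placeholder terms; keyword matching alone is not sufficient moderation:

```python
import re
from datasets import load_dataset

ds = load_dataset("vngrs-ai/vngrs-web-corpus", split="train", streaming=True)  # assumed repo id

# Placeholder blocklist; real removal criteria would be project-specific.
BLOCKLIST = re.compile("|".join(map(re.escape, ["example-term"])), re.IGNORECASE)

def is_clean(example):
    return BLOCKLIST.search(example["text"]) is None

filtered = ds.filter(is_clean)
```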
## Citation
All attributions should be made to the VBART paper.
```bibtex
@article{turker2024vbart,
  title={VBART: The Turkish LLM},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  journal={arXiv preprint arXiv:2403.01308},
  year={2024}
}
```