---
dataset_info:
features:
- name: text
dtype: string
- name: corpus
dtype: string
- name: original_id
dtype: int64
splits:
- name: train
num_bytes: 141807806497
num_examples: 50336214
download_size: 84893303434
dataset_size: 141807806497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-sa-4.0
language:
- tr
---
# Dataset Card for vngrs-web-corpus
vngrs-web-corpus is a mixed dataset made of the cleaned Turkish sections of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4).
This dataset was originally created for training [VBART](https://arxiv.org/abs/2403.01308) and later used for training [TURNA](https://arxiv.org/abs/2401.14373).
The cleaning procedures applied to this dataset are explained in Appendix A of the [VBART paper](https://arxiv.org/abs/2403.01308).
It consists of 50.3M pages and 25.33B tokens when tokenized by the VBART tokenizer.
## Dataset Details
### Dataset Description
- **Curated by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Language (NLP):** Turkish
- **License:** cc-by-nc-sa-4.0
## Uses
vngrs-web-corpus is mainly intended to pretrain language models and word representations.
## Dataset Structure
- **text** [str]: the main text content of the page
- **corpus** [str]: the source corpus the page was taken from
- **original_id** [int]: the index of the page in the source corpus
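The schema above can be illustrated with a small stdlib-only sketch. The field names and types come from the card; the sample values and the `validate_record` helper are hypothetical, purely for illustration:

```python
# A hypothetical record following the card's declared schema.
# The values below are made up, not taken from the dataset.
sample = {
    "text": "Örnek Türkçe metin.",  # main text content (string)
    "corpus": "OSCAR-2201",         # source corpus (string)
    "original_id": 42,              # index in the source corpus (int64)
}

def validate_record(record: dict) -> bool:
    """Check that a record matches the declared feature types."""
    return (
        isinstance(record.get("text"), str)
        and isinstance(record.get("corpus"), str)
        and isinstance(record.get("original_id"), int)
    )

print(validate_record(sample))  # prints True for a well-formed record
```

In practice the same three fields are what the Hugging Face `datasets` library exposes per example once the parquet shards under `data/train-*` are loaded.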
## Bias, Risks, and Limitations
This dataset contains content crawled from the open web. It was cleaned using a set of rules and heuristics that do not account for the semantics of the content.
Content found to be irrelevant or inappropriate should be flagged and removed accordingly.
The dataset is intended for research purposes only and should not be used for any other purpose without prior consent from the relevant authorities.
## Citation
All attributions should be made to the VBART paper.
```
@article{turker2024vbart,
  title={VBART: The Turkish LLM},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  journal={arXiv preprint arXiv:2403.01308},
  year={2024}
}
```