---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  splits:
  - name: train
    num_bytes: 49605042096
    num_examples: 48253932
  - name: valid
    num_bytes: 595216112
    num_examples: 579004
  download_size: 24336775144
  dataset_size: 50200258208
---
# Dataset Card for "turkish_corpus_tokenized"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
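A minimal loading sketch with the 🤗 `datasets` library, based on the metadata above. The card does not state the hosting namespace, so the repository id below is a placeholder to be replaced with the actual one.

```python
from datasets import load_dataset

# Placeholder repo id — the card does not give the namespace;
# substitute the actual "<namespace>/turkish_corpus_tokenized" id.
ds = load_dataset("user/turkish_corpus_tokenized")

# Per the dataset_info block: two splits, train (48,253,932 examples)
# and valid (579,004 examples), each row holding a single `input_ids`
# field — a sequence of int32 token ids.
print(ds)
print(ds["train"][0]["input_ids"][:10])
```

Because the examples are already tokenized, rows can be fed to a model after padding/batching, with no tokenizer pass needed at training time.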