---
dataset_info:
- config_name: telugu_asr
  features:
  - name: sentence
    dtype: string
  splits:
  - name: train
    num_bytes: 47887486
    num_examples: 209270
  download_size: 20219871
  dataset_size: 47887486
- config_name: telugu_nlp
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 387671180
    num_examples: 47415
  download_size: 150012515
  dataset_size: 387671180
- config_name: wikipedia
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 710613522
    num_examples: 87854
  download_size: 209754217
  dataset_size: 710613522
configs:
- config_name: telugu_asr
  data_files:
  - split: train
    path: telugu_asr/train-*
- config_name: telugu_nlp
  data_files:
  - split: train
    path: telugu_nlp/train-*
- config_name: wikipedia
  data_files:
  - split: train
    path: wikipedia/train-*
---
# Dataset

This repository contains a combined Telugu dataset built from several public resources. The primary datasets used to construct it are:

- [Telugu NLP Dataset from Kaggle](https://www.kaggle.com/datasets/sudalairajkumar/telugu-nlp)
- [Telugu ASR Corpus from HuggingFace](https://huggingface.co/datasets/parambharat/telugu_asr_corpus)
- [Wikipedia Telugu Dataset from Wikimedia on HuggingFace](https://huggingface.co/datasets/wikimedia/wikipedia)

These datasets have been combined into a comprehensive resource for Telugu Natural Language Processing (NLP) tasks. Each source is exposed as its own configuration (`telugu_asr`, `telugu_nlp`, and `wikipedia`) with a single `train` split; a minimal loading sketch is shown below.
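The sketch below shows how the three configurations could be loaded with the `datasets` library. The repository id used here is a placeholder assumption, not the dataset's actual Hub path; replace it with the id of this repository.

```python
from datasets import load_dataset

# NOTE: placeholder repository id; replace with this dataset's actual Hub path.
REPO_ID = "your-username/telugu-combined-dataset"

# Each configuration has a single "train" split (see the YAML metadata above).
for config in ("telugu_asr", "telugu_nlp", "wikipedia"):
    ds = load_dataset(REPO_ID, config, split="train")
    # telugu_asr exposes a "sentence" column; telugu_nlp and wikipedia expose "text".
    column = "sentence" if config == "telugu_asr" else "text"
    print(config, len(ds), ds[0][column][:80])
```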