---
language:
- fa
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
pretty_name: Persian ASR Youtube (30 Second Chunk)
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: video_id
    dtype: string
  - name: segment_id
    dtype: int64
  - name: title
    dtype: string
  - name: transcription
    dtype: string
  - name: youtube_url
    dtype: string
  splits:
  - name: train
    num_bytes: 15011333947.12248
    num_examples: 32746
  - name: test
    num_bytes: 1868480465.95316
    num_examples: 4094
  - name: val
    num_bytes: 1876553690.74436
    num_examples: 4093
  download_size: 18614667732
  dataset_size: 18756368103.82
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
---
# How To Use
```python
from datasets import load_dataset

train = load_dataset('pourmand1376/asr-farsi-youtube-chunked-30-seconds', split='train+val')
test = load_dataset('pourmand1376/asr-farsi-youtube-chunked-30-seconds', split='test')
```
A 300+ hour Persian ASR dataset generated from [this Kaggle dataset](https://www.kaggle.com/datasets/amirpourmand/asr-farsi-youtube-chunked-30-seconds/).