---
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
- ur
license: cc-by-4.0
size_categories:
- 1M<n<10M
pretty_name: Pralekha
dataset_info:
  features:
  - name: n_id
    dtype: string
  - name: doc_id
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: aligned
    num_bytes: 10274361211
    num_examples: 1566404
  - name: unaligned
    num_bytes: 4466506637
    num_examples: 783197
  download_size: 5812005886
  dataset_size: 14740867848
configs:
- config_name: default
  data_files:
  - split: aligned
    path: data/aligned-*
  - split: unaligned
    path: data/unaligned-*
tags:
- data-mining
- document-alignment
- parallel-corpus
---
# Pralekha: An Indic Document Alignment Evaluation Benchmark
<div style="display: flex; gap: 10px;">
<a href="https://arxiv.org/abs/2411.19096">
<img src="https://img.shields.io/badge/arXiv-2411.19096-B31B1B" alt="arXiv">
</a>
<a href="https://huggingface.co/datasets/ai4bharat/Pralekha">
<img src="https://img.shields.io/badge/huggingface-Pralekha-yellow" alt="HuggingFace">
</a>
<a href="https://github.com/AI4Bharat/Pralekha">
<img src="https://img.shields.io/badge/github-Pralekha-blue" alt="GitHub">
</a>
<a href="https://creativecommons.org/licenses/by/4.0/">
<img src="https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey" alt="License: CC BY 4.0">
</a>
</div>
**PRALEKHA** is a large-scale benchmark for evaluating document-level alignment techniques. It includes over 2.3M documents covering 11 Indic languages and English, with a mix of aligned and unaligned documents.
---
## Dataset Description
**PRALEKHA** covers 12 languages—Bengali (`ben`), Gujarati (`guj`), Hindi (`hin`), Kannada (`kan`), Malayalam (`mal`), Marathi (`mar`), Odia (`ori`), Punjabi (`pan`), Tamil (`tam`), Telugu (`tel`), Urdu (`urd`), and English (`eng`). It includes a mixture of high- and medium-resource languages, covering 11 different scripts. The dataset spans two broad domains: **news bulletins** and **podcast scripts**, offering both written and spoken forms of data. All the data is human-written or human-verified, ensuring high quality.
The dataset contains aligned and unaligned documents in a **2:1 ratio** (see the statistics below), making it well suited for benchmarking cross-lingual document alignment techniques.
### Data Fields
Each data sample includes:
- **`n_id`:** Unique identifier for aligned document pairs.
- **`doc_id`:** Unique identifier for individual documents.
- **`lang`:** Language of the document (ISO-3 code).
- **`text`:** The textual content of the document.
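
As a quick sanity check, this schema can be inspected without downloading any data by loading only the dataset metadata. A minimal sketch using the `datasets` library; the printed output is illustrative:

```python
from datasets import load_dataset_builder

# Fetch only the dataset metadata (no document data is downloaded).
builder = load_dataset_builder("ai4bharat/pralekha")

# All four fields are plain strings.
print(builder.info.features)
# Roughly: {'n_id': Value('string'), 'doc_id': Value('string'),
#           'lang': Value('string'), 'text': Value('string')}
```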
### Data Sources
1. **News Bulletins:** Data was custom-scraped from the [Indian Press Information Bureau (PIB)](https://pib.gov.in) website. Documents were aligned by matching bulletin IDs, which interlink bulletins across languages.
2. **Podcast Scripts:** Data was sourced from [Mann Ki Baat](https://www.pmindia.gov.in/en/mann-ki-baat), a radio program hosted by the Indian Prime Minister. This program, originally spoken in Hindi, was manually transcribed and translated into various Indian languages.
### Dataset Size Statistics
| Split | Number of Documents | Size (bytes) |
|---------------|---------------------|--------------------|
| **Aligned** | 1,566,404 | 10,274,361,211 |
| **Unaligned** | 783,197 | 4,466,506,637 |
| **Total** | 2,349,601 | 14,740,867,848 |
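
These split-level counts can be cross-checked from the same metadata object, again without downloading the data (assuming the split metadata exported to the Hub is populated):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("ai4bharat/pralekha")

# Each entry reports num_examples and num_bytes for the split, e.g.:
#   aligned   1566404 10274361211
#   unaligned  783197  4466506637
for name, split in builder.info.splits.items():
    print(name, split.num_examples, split.num_bytes)
```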
### Language-wise Statistics
| Language (`ISO-3`) | Aligned Documents | Unaligned Documents | Total Documents |
|---------------------|-------------------|---------------------|-----------------|
| Bengali (`ben`) | 95,813 | 47,906 | 143,719 |
| English (`eng`) | 298,111 | 149,055 | 447,166 |
| Gujarati (`guj`) | 67,847 | 33,923 | 101,770 |
| Hindi (`hin`) | 204,809 | 102,404 | 307,213 |
| Kannada (`kan`) | 61,998 | 30,999 | 92,997 |
| Malayalam (`mal`) | 67,760 | 33,880 | 101,640 |
| Marathi (`mar`) | 135,301 | 67,650 | 202,951 |
| Odia (`ori`) | 46,167 | 23,083 | 69,250 |
| Punjabi (`pan`) | 108,459 | 54,229 | 162,688 |
| Tamil (`tam`) | 149,637 | 74,818 | 224,455 |
| Telugu (`tel`) | 110,077 | 55,038 | 165,115 |
| Urdu (`urd`) | 220,425 | 110,212 | 330,637 |
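
If you have already downloaded a split (see Usage below), the per-language counts above can be reproduced directly from the `lang` column. A sketch; note that loading the full `aligned` split fetches roughly 10 GB of text:

```python
from collections import Counter

from datasets import load_dataset

# Load the aligned split (this downloads the full split).
aligned = load_dataset("ai4bharat/pralekha", split="aligned")

# Count documents per ISO-3 language code,
# e.g. Counter({'eng': 298111, 'urd': 220425, ...}).
print(Counter(aligned["lang"]))
```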
---
## Usage
You can use the following snippets to download and explore the dataset:
### Downloading the Entire Dataset
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/pralekha")
```
### Downloading a Specific Split (`aligned` or `unaligned`)
```python
from datasets import load_dataset
dataset = load_dataset("ai4bharat/pralekha", split="<split_name>")
# For example: dataset = load_dataset("ai4bharat/pralekha", split="aligned")
```
### Loading a Specific Language from a Split
```python
from datasets import load_dataset

# The `aligned` and `unaligned` splits are not partitioned by language,
# so load a split and filter on the `lang` column (ISO-3 codes).
dataset = load_dataset("ai4bharat/pralekha", split="aligned")
dataset_ben = dataset.filter(lambda x: x["lang"] == "ben")
```
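### Streaming the Dataset
For quick exploration without downloading the full ~5.8 GB of data files, you can also stream the dataset. A minimal sketch using the standard `streaming=True` mode of the `datasets` library:

```python
from datasets import load_dataset

# Stream the aligned split: records are fetched lazily instead of downloaded up front.
stream = load_dataset("ai4bharat/pralekha", split="aligned", streaming=True)

for i, doc in enumerate(stream):
    print(doc["doc_id"], doc["lang"], doc["text"][:80])
    if i == 4:  # look at the first five documents only
        break
```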
---
## License
This dataset is released under the [**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/) license.
---
## Contact
For any questions or feedback, please contact:
- Raj Dabre ([[email protected]](mailto:[email protected]))
- Sanjay Suryanarayanan ([[email protected]](mailto:[email protected]))
- Haiyue Song ([[email protected]](mailto:[email protected]))
- Mohammed Safi Ur Rahman Khan ([[email protected]](mailto:[email protected]))
Please get in touch with us for any copyright concerns.