---
language:
- fa
pretty_name: Persian Historical Documents Handwritten Characters
size_categories:
- 1K<n<10K
tags:
- ocr
- character-recognition
- persian
- historical
- handwritten
- nastaliq
- character
---
# Persian Historical Documents Handwritten Characters

## Dataset Description
- Repository: https://github.com/iarata/persian-docs-ocr
- Paper: https://doi.org/10.1007/978-3-031-53969-5_20
- Point of Contact: hajebrahimi.research [at] gmail [dot] com
### Summary

This dataset contains pre-processed images of the contextual forms of Persian characters (except the letter گ) from 5 handwritten Persian historical books written in Nastaliq script. It contains 2,775 images across 111 classes. The images are black-and-white TIFF files with a resolution of 72 dpi and a size of 395 × 395 pixels.
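As a quick sanity check, the snippet below opens one image and prints the properties listed above. This is a minimal sketch assuming Pillow is installed; the file path is illustrative and follows the naming pattern described under Dataset Structure.

```python
from PIL import Image

# Illustrative path: first image of the class named by hex code 06a9
img = Image.open("data/06a9_01.tif")

print(img.size)             # expected: (395, 395)
print(img.info.get("dpi"))  # expected: (72, 72), if DPI metadata is stored in the TIFF
```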
### Languages
Persian
## Dataset Structure
The dataset is structured as follows:
```
├── data
│   ├── 06a9_01.tif
│   ├── 06a9_02.tif
│   ├── 06a9_03.tif
│   ├── 06a9_04.tif
│   ├── 06a9_05.tif
│   ├── ...
│   ├── 06a9_25.tif
│   │
│   ├── 06cc_01.tif
│   ├── 06cc_02.tif
│   ├── 06cc_03.tif
│   ├── 06cc_04.tif
│   ├── 06cc_05.tif
│   ├── ...
│   ├── 06cc_25.tif
│   ├── ...
```
The file name of each image is the UTF-16 hexadecimal code of the character's contextual form followed by the image number (e.g. 06a9_01.tif). In the numbering, every group of 5 consecutive images comes from a different book. The contextual form of every character is treated as a separate class, resulting in 111 classes.
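The helper below, a minimal sketch, decodes a file name into its character class and source-book index under this naming scheme; the function name and the `data/` path are illustrative.

```python
from pathlib import Path

def parse_filename(path: str) -> tuple[str, int]:
    """Map e.g. 'data/06a9_01.tif' to its character class and source-book number."""
    hex_code, index = Path(path).stem.split("_")
    char = chr(int(hex_code, 16))     # '06a9' -> 'ک' (UTF-16 hex code of the contextual form)
    book = (int(index) - 1) // 5 + 1  # images 01-05 -> book 1, 06-10 -> book 2, ...
    return char, book

print(parse_filename("data/06cc_07.tif"))  # -> ('ی', 2)
```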
## Dataset Creation

For building this dataset, 5 handwritten historical Persian books from the Library of Congress were used.
### Source Data
The data was collected from 5 historical Persian books from the Library of Congress.
The images were pre-processed using the following steps:

1. The images were first normalized to reduce noise from the background of the characters.
2. The normalized images were converted to single-channel grayscale images.
3. Thresholding was applied to the grayscale images to remove the characters' background.
4. The thresholded images were binarized so that pixel values greater than 0 become 255 (white) and pixels with a value of 0 (black) remain unchanged.
5. The binarized images were inverted.
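A rough re-implementation of these steps is sketched below, assuming OpenCV and NumPy are available; the threshold value and function name are assumptions for illustration, not the authors' exact parameters.

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Approximate the pre-processing steps described above (illustrative only)."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    # Normalize to reduce noise from the background of the characters
    norm = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)
    # Convert to a single-channel grayscale image
    gray = cv2.cvtColor(norm, cv2.COLOR_BGR2GRAY)
    # Threshold the grayscale image to remove the characters' background
    # (the threshold value 200 is an assumption)
    _, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # Binarize: pixel values greater than 0 become 255 (white); 0 (black) stays unchanged
    binary = np.where(thresh > 0, 255, 0).astype(np.uint8)
    # Invert the binarized image
    return cv2.bitwise_not(binary)
```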
### Annotations
Before pre-processing, the characters were cropped from the books and saved under the UTF-16 hexadecimal code of their contextual form plus the image number (e.g. 06a9_01.tif).
Annotators:
## Citation Information
Hajebrahimi, A., Santoso, M.E., Kovacs, M., Kryssanov, V.V. (2024). Few-Shot Learning for Character Recognition in Persian Historical Documents. In: Nicosia, G., Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P.M., Umeton, R. (eds) Machine Learning, Optimization, and Data Science. LOD 2023. Lecture Notes in Computer Science, vol 14505. Springer, Cham. https://doi.org/10.1007/978-3-031-53969-5_20
BibTeX:

```bibtex
@InProceedings{10.1007/978-3-031-53969-5_20,
author="Hajebrahimi, Alireza
and Santoso, Michael Evan
and Kovacs, Mate
and Kryssanov, Victor V.",
editor="Nicosia, Giuseppe
and Ojha, Varun
and La Malfa, Emanuele
and La Malfa, Gabriele
and Pardalos, Panos M.
and Umeton, Renato",
title="Few-Shot Learning for Character Recognition in Persian Historical Documents",
booktitle="Machine Learning, Optimization, and Data Science",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="259--273",
abstract="Digitizing historical documents is crucial for the preservation of cultural heritage. The digitization of documents written in Perso-Arabic scripts, however, presents multiple challenges. The Nastaliq calligraphy can be difficult to read even for a native speaker, and the four contextual forms of alphabet letters pose a complex task to current optical character recognition systems. To address these challenges, the presented study develops an approach for character recognition in Persian historical documents using few-shot learning with Siamese Neural Networks. A small, novel dataset is created from Persian historical documents for training and testing purposes. Experiments on the dataset resulted in a 94.75{\%} testing accuracy for the few-shot learning task, and a 67{\%} character recognition accuracy was observed on unseen documents for 111 distinct character classes.",
isbn="978-3-031-53969-5"
}
```