---
configs:
- config_name: default
task_categories:
- image-to-text
size_categories:
- 1M<n<10M
language:
- en
- fr

splits:
- name: train
  num_examples: 9357567

---

<h1 style="color: #2c3e50; background-color: #ecf0f1; padding: 10px; border-left: 5px solid #3498db;">
  <span style="font-weight: bold;">Dataset Card for Finance Commons AMF OCR dataset (FC-AMF-OCR)</span>
</h1>

## Dataset Description

- **Contact at LightOn:** [Said Taghadouini](mailto:[email protected])

### Dataset Summary


The FC-AMF-OCR dataset is a document collection derived from the [AMF-PDF](https://huggingface.co/datasets/PleIAs/AMF-PDF) dataset, part of the Finance Commons collection. It comprises 9.3 million page images, each processed with Optical Character Recognition (OCR) using the [docTR](https://github.com/mindee/doctr) library. Native text annotations are available in the [AMF-Text](https://huggingface.co/datasets/PleIAs/AMF-Text) dataset, but they suffer from imperfections such as missing spaces, extra spaces, and extraction artifacts. Additionally, their format, a single continuous block of text without page demarcations, limits their utility for image-to-text tasks.

The FC-AMF-OCR dataset addresses these limitations by providing:

- Full bounding box information for each element
- Confidence scores for individual words, lines, and text blocks
- Per-page annotations instead of a single block of text per document
- Corrected spacing compared to the native text annotations

<center>
    <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/sample0.png" alt="An example from the FC-AMF-OCR dataset" width="1100" height="600">
    <p><em>An example page from a PDF document, showing the existing text annotation (red) and the OCR annotation (green). For simplicity, text is ordered left to right and top to bottom.</em></p>
</center>


Most existing large-scale OCR datasets, such as the Industry Documents Library (IDL) or the PDF Association dataset (PDFA), suffer from a number of issues:
- Time coverage: these datasets consist primarily of older documents or PDFs from specific periods, which might not reflect current trends or developments.
- OCR engines: they rely on outdated or inconsistent OCR technologies, affecting the accuracy and reliability of text extraction.
- Content coverage: some annotations are limited to what can be extracted directly from the PDF; text drawn in images and present only as bitmap renditions is missed entirely.

FC-AMF-OCR enhances existing datasets by offering detailed OCR annotations for a recent collection of text-rich documents from the French Authority for Financial Markets (AMF). It leverages the excellent open-source [docTR](https://github.com/mindee/doctr) OCR engine to extract text from various elements, including images and logos. By utilizing an open-source solution, FC-AMF-OCR ensures stability against API changes and allows users to implement custom filtering as needed. This approach provides researchers and developers with a reliable and transparent tool for comprehensive document understanding and analysis.

Following most large-scale OCR datasets such as [IDL](https://huggingface.co/datasets/pixparse/idl-wds), this dataset is distributed in [webdataset](https://github.com/webdataset/webdataset/) `.tar` format and can be used seamlessly with the `webdataset` library. Concretely, each document exists as a pair of a `pdf` file and a `json.gz` file containing the OCR annotation.
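A minimal sketch of reading a shard with the `webdataset` library is shown below; the shard path is a placeholder and should point to a locally downloaded shard.

```python
import gzip
import json

import webdataset as wds  # pip install webdataset

# Placeholder path: adjust to wherever the shards were downloaded.
dataset = wds.WebDataset("fc-amf-ocr/fc-amf-train-0000.tar")

for sample in dataset:
    pdf_bytes = sample["pdf"]  # raw bytes of the original PDF
    annotation = json.loads(gzip.decompress(sample["json.gz"]))  # docTR-style OCR output
    print(sample["__key__"], len(annotation["pages"]), "pages")
    break
```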

### Load the dataset with `datasets`

This dataset can be used with Hugging Face datasets. Here is an example of how to stream the dataset directly from Hugging Face so you don't have to download the dataset locally.

<div class="alert alert-info">
<b>Note:</b> We do recommend downloading the dataset to speed up the processing.
</div>

```python
from datasets import load_dataset

dataset = load_dataset('lightonai/fc-amf-ocr', streaming=True)
print(next(iter(dataset['train'])).keys())
# dict_keys(['__key__', '__url__', 'pdf', 'json.gz'])
```

You can download the dataset using the following command:

```python
import os

from huggingface_hub import HfApi

# HF_HUB_ENABLE_HF_TRANSFER speeds up downloads but requires `pip install hf_transfer`.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
api = HfApi()
api.snapshot_download("lightonai/fc-amf-ocr", repo_type="dataset", local_dir_use_symlinks=False)
```

### Approach

We start from the original dataset, a collection of 633,244 PDF files, and apply simple filters to remove files that are not relevant for training. The main goal is a dataset that is ready to use for large-scale training. We use the following filters (a sketch follows the list):

* Corrupted files: we remove files that fail to be decoded correctly or that take too long to load.
* Page count: we remove files that have more than 500 pages. Large files take too long to load and render.
* Keep original quality: we apply no compression or rendering that would degrade the quality of the original PDF.
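As an illustration only, the corruption and page-count filters could look like the sketch below; the original filtering code is not published here, and the use of `pypdf` is an assumption.

```python
from pypdf import PdfReader  # assumption: any library that can count pages works

MAX_PAGES = 500

def keep_file(path: str) -> bool:
    """Drop PDFs that fail to parse or exceed the page limit."""
    try:
        return len(PdfReader(path).pages) <= MAX_PAGES
    except Exception:
        return False  # corrupted or undecodable files are filtered out
```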

The basic filtering removes less than 1% of the original dataset. After the basic filtering:
* We select the best-performing models from the [docTR](https://github.com/mindee/doctr) library and keep them in full precision (FP32) for maximum accuracy (see the sketch below):
  - detection model: [DBNet with a ResNet-50 backbone](https://mindee.github.io/doctr/latest/modules/models.html#doctr.models.detection.db_resnet50)
  - recognition model: [CRNN with a VGG-16 backbone](https://mindee.github.io/doctr/latest/modules/models.html#doctr.models.recognition.crnn_vgg16_bn)
* We use data parallelism to spread the OCR workload over multiple GPUs, splitting the dataset into shards and processing each shard in parallel.
* The recognition model is compiled with `torch.compile` to speed up inference.
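The setup above corresponds roughly to the following docTR sketch; this is not the exact production script, and the compilation call on the recognition backbone is an assumption.

```python
import torch
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

# Detection + recognition models used for the annotations, kept in FP32.
model = ocr_predictor(
    det_arch="db_resnet50",
    reco_arch="crnn_vgg16_bn",
    pretrained=True,
)
# Assumption: compile only the recognition backbone, as described above.
model.reco_predictor.model = torch.compile(model.reco_predictor.model)

pages = DocumentFile.from_pdf("document.pdf")  # one numpy array per rendered page
result = model(pages)
annotation = result.export()  # nested dict: pages -> blocks -> lines -> words
```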

By default, pages are rendered at 144 DPI for all processing steps, but we provide the original PDFs so users can render them at their preferred quality. Having access to the full PDF quality is important for training robust models.

The dataset's page distribution is represented in the following histogram. On average, documents contain approximately 15 pages, while the median page count is about 2.

<center>
    <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/page_distribution.png" alt="." width="600" height="300">
    <p><em>The distribution of number of pages in the FC-AMF-OCR dataset. </em></p>
</center>

We also show the year distribution of the dataset, which contains documents from 2008 to 2024. The dataset is therefore relatively recent and covers a wide range of years, which complements previous datasets.

<center>
    <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/year_distribution.png" alt="." width="600" height="300">
    <p><em>The distribution of years in the FC-AMF-OCR dataset. </em></p>
</center>

### How to visualize a page from the dataset?

PDF files are sourced from a variety of origins and are typically rendered in RGB. Each file can consist of multiple pages, which can be rendered using different tools or engines according to your needs. One recommended option is [pdf2image](https://github.com/Belval/pdf2image), a tool that converts PDF pages into images. To use pdf2image, you need to install the poppler-utils package, which provides the rendering backend. This approach makes it easy to extract and manipulate content from multi-page documents.

```bash
apt-get install poppler-utils
pip install pdf2image
```

```python
from pdf2image import convert_from_bytes

# `sample` is one record from the streaming example above; `sample['pdf']` holds the raw PDF bytes.
page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)[0]
page
```

<center>
    <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/first_page.png" alt="." width="600" height="300">
    <p><em>A page from the FC-AMF-OCR dataset. </em></p>
</center>

Each `pdf` is paired with a `json.gz` file with the structure shown below. This structure is that of docTR outputs; you can learn more [here](https://mindee.github.io/doctr/using_doctr/using_models.html#what-should-i-do-with-the-output). We deliberately avoid any OCR post-processing to derive an approximate reading order. There are multiple ways of deriving a reading order from bounding boxes, and users can apply their own heuristics to extract it.

```json
{
    "pages": [{
        "page_idx": 0,
        "dimensions": [1684, 1191],
        "blocks": [{
            "geometry": [[0.2514, 0.0712], [0.4144, 0.0986]],
            "lines": [{
                "geometry": [[0.2515, 0.0713], [0.4144, 0.0850]],
                "words": [
                    {
                        "value": "European",
                        "confidence": 0.9998,
                        "geometry": [[0.2515, 0.0732], [0.3150, 0.0850]]
                    },
                    {
                        "value": "Securities",
                        "confidence": 0.9986,
                        "geometry": [[0.3164, 0.0723], [0.3840, 0.0830]]
                    },
                    {
                        "value": "and",
                        "confidence": 0.9998,
                        "geometry": [[0.3854, 0.0713], [0.4144, 0.0840]]
                    }
                ]
            },
            {
                "geometry": [[0.2515, 0.0840], [0.3730, 0.0986]],
                "words": [
                    {
                        "value": "Markets",
                        "confidence": 0.9977,
                        "geometry": [[0.2515, 0.0850], [0.3053, 0.0967]]
                    },
                    {
                        "value": "Authority",
                        "confidence": 0.8129,
                        "geometry": [[0.3067, 0.0840], [0.3730, 0.0986]]
                    }
                ]
            }]
        }]
    }]
}
```

## Document Structure
Each document is organized hierarchically into words, lines, blocks, and pages, as follows.

| Element    | Description |
|------------|-------------|
| **Word**   | A Word is an uninterrupted sequence of characters. |
| **Line**   | A collection of Words aligned spatially and meant to be read together. |
| **Block**  | A collection of Lines. |
| **Page**   | A collection of Blocks that were on the same physical page. |

The top-level key, `pages`, is a list containing each page in the document. In this example, only one page is shown. 

- **Page**: 
  - `page_idx`: The index of the page in the document (starts at 0).
  - `dimensions`: The dimensions of the page in pixels, formatted as `[height, width]`.

- **Blocks**:
  - A page consists of several `blocks`, each containing lines.
  - `geometry`: Defines the bounding box of the block using normalized coordinates relative to the page size.

- **Lines**:
  - Each block contains a list of `lines`, where a line is a sequence of words grouped together.
  - `geometry`: Bounding box of the line in normalized coordinates relative to the page size.

- **Words**:
  - Each line is composed of individual `words` (continuous sequences of characters).
  - `value`: The text content of the word.
  - `confidence`: The confidence score of the OCR engine for the word.
  - `geometry`: Bounding box of the word in normalized coordinates relative to the page size.

For each page, the structure includes:
- **Blocks**: Grouped lines within a page.
- **Lines**: Sequences of words within a block.
- **Words**: Individual words detected within each line, along with their confidence scores and positions.
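As a minimal usage sketch (the annotation path below is a hypothetical placeholder), this nested structure can be walked to reconstruct plain text in a naive top-to-bottom reading order:

```python
import gzip
import json

def extract_text(json_gz_path: str, page_idx: int = 0) -> str:
    """Walk pages -> blocks -> lines -> words and join the word values."""
    with gzip.open(json_gz_path, "rt", encoding="utf-8") as f:
        annotation = json.load(f)

    lines = []
    for block in annotation["pages"][page_idx]["blocks"]:
        for line in block["lines"]:
            lines.append(" ".join(word["value"] for word in line["words"]))
    return "\n".join(lines)

# `doc.json.gz` is a placeholder for any annotation file extracted from a shard.
print(extract_text("doc.json.gz"))
```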

### Bounding box visualization

You can visualize the bounding boxes of the dataset using the following code snippet. This code uses the [pdf2image](https://github.com/Belval/pdf2image) library to convert the PDF files to images.

```python
import gzip
import json

import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
from pdf2image import convert_from_path


def visualize_bounding_boxes(pdf_path, json_path, page_num=0):
    with gzip.open(json_path, 'rt', encoding='utf-8') as f:
        json_data = json.load(f)

    image = convert_from_path(pdf_path)[page_num]
    img_width, img_height = image.size

    fig, ax = plt.subplots(1, figsize=(20, 20))
    ax.imshow(image)

    patches_list = []

    for block in json_data['pages'][page_num]['blocks']:
        for line in block['lines']:
            for word in line['words']:
                bbox = word['geometry']
                x1, y1 = bbox[0]
                x2, y2 = bbox[1]

                # geometry is normalized to [0, 1]; convert to pixel coordinates
                x1, y1 = x1 * img_width, y1 * img_height
                x2, y2 = x2 * img_width, y2 * img_height

                width = x2 - x1
                height = y2 - y1

                rect = patches.Rectangle((x1, y1), width, height, linewidth=1, edgecolor='r', facecolor='none')
                patches_list.append(rect)

    patch_collection = PatchCollection(patches_list, match_original=True)
    ax.add_collection(patch_collection)

    plt.axis('off')
    plt.tight_layout()
    plt.show()
```
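For example, with a PDF and its annotation extracted from a shard to local files (hypothetical paths):

```python
visualize_bounding_boxes("doc.pdf", "doc.json.gz", page_num=0)
```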

Visualizing all bounding boxes on a given page, we obtain the following:
<center>
    <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/bboxes.png" alt="." width="600" height="300">
    <p><em>An example page with bounding box annotations in the FC-AMF-OCR dataset. </em></p>
</center>

### Data Splits

There is only a single train split for this dataset.

#### Train
* `fc-amf-train-{0000..0838}.tar`
* 838 shards (each shard is around 500 MB)
* 605,438 PDF files or samples
* 9.3M pages

## Additional Information

### Compute 

Compute was carried out on an HPE Cray node with 8x NVIDIA H100 GPUs, hosted on Orange Business Cloud Avenue.

### Note

This dataset is intended for OCR-heavy pre-training of vision-language models or specialized OCR models. The current version contains multilingual data, with English and French as the most represented languages. The OCR annotations may be less reliable for other languages due to OCR engine limitations. Filtering on word confidence scores can be used as a heuristic to subsample the dataset for higher quality, as sketched below. The same pipeline can be scaled further to larger datasets with more languages and more diverse content, making it a reliable way to obtain multimodal data for documents.
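A minimal sketch of such confidence-based filtering is shown below; the 0.9 threshold and the annotation path are assumptions for illustration, not values used to build the dataset.

```python
import gzip
import json

def mean_word_confidence(json_gz_path: str) -> float:
    """Average word-level confidence over all pages of one document."""
    with gzip.open(json_gz_path, "rt", encoding="utf-8") as f:
        annotation = json.load(f)
    scores = [
        word["confidence"]
        for page in annotation["pages"]
        for block in page["blocks"]
        for line in block["lines"]
        for word in line["words"]
    ]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical threshold: keep documents whose average confidence is at least 0.9.
keep = mean_word_confidence("doc.json.gz") >= 0.9
```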

### Licensing Information

The data has been OCRed from the original dataset. As a consequence, it carries the same [AMF-PDF](https://huggingface.co/datasets/PleIAs/AMF-PDF) license.

<div style="font-size: 0.8em; color: #666; background-color: #f0f0f0; padding: 5px; border-left: 3px solid #1E90FF; margin-top: 10px;">
<small><i>Note:</i> This dataset card template was inspired by the PDFA/IDL dataset cards.</small>
</div>


To reference this publication in your work, please use the following BibTeX entry:

```
@misc{FC-AMF-OCR, 
title={FC-AMF-OCR Dataset : LightOn releases a 9.3 million images OCR dataset to improve real world document parsing}, 
author={Taghadouini, Said}, 
organization={LightOn},  
url={https://www.lighton.ai/lighton-blogs/fc-amf-ocr-dataset}, 
year={2024}
}
```