staghado committed
Commit 4d79b3f
1 Parent(s): 88839a5

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +152 -118
README.md CHANGED
@@ -21,26 +21,38 @@ splits:

  ### Dataset Summary

- FC-AMF-OCR dataset is a document dataset OCRed from the [AMF-PDF](https://huggingface.co/datasets/PleIAs/AMF-PDF) dataset part of the Finance Commons collection. The dataset comes with native text annotation available here [AMF-Text](https://huggingface.co/datasets/PleIAs/AMF-Text) but these annotations are imperfect and contain many errors(missing spaces, OCR artifacts, etc.). The goal of this dataset is to provide a more accurate and complete OCR annotation for the Finance Commons AMF dataset with full bounding box information and confidence scores for each word, line and block.

- Most existing large scale OCR datasets like the Industry Documents Library (IDL) or the PDF Association dataset (PDFA) suffer from a number of issues:
- - Time Coverage: These datasets consist primarily of older documents or PDFs from specific periods, which might not reflect current trends or developments.
- - OCR Engines: They use outdated or inconsistent OCR technologies, affecting the accuracy and reliability of text extraction.
- - Further, some of these annotations are limited to what can be extracted and is readily available - text drawn in images and only present as a bitmap rendition might be missed entirely.

- FC-AMF-OCR aims to address these issues by providing comprehensive OCR annotations of a recent collection of text-heavy documents and use the excellent [docTR](https://github.com/mindee/doctr) open source OCR engine. Using an open source OCR engine makes it less subject to API changes and users know what to expect and apply specific filtering logic if need be.

  <center>
  <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/sample0.png" alt="An example from the FC-AMF-OCR dataset" width="600" height="300">
- <p><em>An example page of one pdf document with existing text annotation(red) and the OCR annotation(green). </em></p>
  </center>

- Following most large scale OCR datasets like [IDL](https://huggingface.co/datasets/pixparse/idl-wds), this dataset is in [webdataset](https://github.com/webdataset/webdataset/) .tar format and can be used with derived forms of the `webdataset` library.

- ### Usage with `datasets`

- This dataset can be used with webdataset library or Hugging Face datasets. Here is an example of how to stream the dataset directly from Hugging Face so you don't have to download the dataset locally.
- > Note: We do recommend downloading the dataset to save bandwidth.

  ```python
  from datasets import load_dataset
@@ -56,33 +68,42 @@ You can download the dataset using the following command:
  import os
  from huggingface_hub import HfApi

-
  os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
  api = HfApi()
  api.snapshot_download("lightonai/fc-amf-ocr", repo_type="dataset", local_dir_use_symlinks=False)

  ```

-
- #### Approach
-
-
-
- #### Filtering process

  We start from the original dataset, which is a collection of 633,244 PDF files and apply some simple filters to remove files that are not relevant for training. The main goal is to have a dataset that is ready to use for large-scale training. We use the following filters:

  * Corrupted files: we remove files that fail to be decoded correctly or that take too long to load.
- * Page count: we remove files that have more than 500 pages.
  * Keep original quality: we apply no compression or rendering that would degrade the quality of the original PDF.

- Here is a histogram of the number of pages in the dataset.

  <center>
  <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/page_distribution.png" alt="." width="600" height="300">
- <p><em>The distribution of number pf pages in the FC-AMF-OCR dataset. </em></p>
  </center>

- At the end, each document exists as a pair of a `pdf` and a `json.gz` file containing extensive OCR annotation. The packaging in webdataset format makes it easy to use in image-to-text tasks at scale.

  ### How to visualize a page from the dataset?

@@ -95,107 +116,64 @@ apt-get install poppler-utils
  ```python
  from pdf2image import convert_from_bytes

- pdf_first_page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)[0]
  ```

  <center>
- <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/pdf_first_page.png" alt="" width="400" height="600">
  </center>

- Each `pdf` is paired with a `json.gz` file with the following structure. These strucure is that of docTR outputs. Learn more [here](https://mindee.github.io/doctr/using_doctr/using_models.html#what-should-i-do-with-the-output). We explicitly avoid applying any OCR post-processing to get an approximate reading order. Users can use their own heuristics to extract the reading order from the boinding boxes.

  ```json
- {'pages': [{'page_idx': 0,
- 'dimensions': [1684, 1191],
- 'blocks': [{'geometry': [[0.11751876049538201, 0.0478515625],
- [0.2390290459697733, 0.126953125]],
- 'lines': [{'geometry': [[0.14513473446683461, 0.0478515625],
- [0.15618112405541562, 0.0556640625]],
- 'words': [{'value': '*',
- 'confidence': 0.9999570846557617,
- 'geometry': [[0.14513473446683461, 0.0478515625],
- [0.15618112405541562, 0.0556640625]]}]},
- {'geometry': [[0.1258035526868178, 0.060546875],
- [0.13823074097397148, 0.0703125]],
- 'words': [{'value': '*',
- 'confidence': 0.9995156526565552,
- 'geometry': [[0.1258035526868178, 0.060546875],
- [0.13823074097397148, 0.0703125]]}]},
- {'geometry': [[0.11751876049538201, 0.0751953125],
- [0.2390290459697733, 0.09375]],
- 'words': [{'value': '*',
- 'confidence': 0.9917723536491394,
- 'geometry': [[0.11751876049538201, 0.078125],
- [0.13270754617968095, 0.08984375]]},
- {'value': 'esma',
- 'confidence': 0.9842223525047302,
- 'geometry': [[0.1506579292611251, 0.0751953125],
- [0.2390290459697733, 0.09375]]}]},
- {'geometry': [[0.12442275398824515, 0.09765625],
- [0.13823074097397148, 0.107421875]],
- 'words': [{'value': '*',
- 'confidence': 0.9726816415786743,
- 'geometry': [[0.12442275398824515, 0.09765625],
- [0.13823074097397148, 0.107421875]]}]},
- {'geometry': [[0.21693626679261124, 0.0986328125],
- [0.23074425377833752, 0.107421875]],
- 'words': [{'value': '*',
- 'confidence': 0.9999707937240601,
- 'geometry': [[0.21693626679261124, 0.0986328125],
- [0.23074425377833752, 0.107421875]]}]},
- {'geometry': [[0.14375393576826195, 0.1123046875],
- [0.15756192275398823, 0.12109375]],
- 'words': [{'value': '*',
- 'confidence': 0.9999815225601196,
- 'geometry': [[0.14375393576826195, 0.1123046875],
- [0.15756192275398823, 0.12109375]]}]},
- {'geometry': [[0.1989858837111671, 0.1123046875],
- [0.21279387069689337, 0.1220703125]],
- 'words': [{'value': '*',
- 'confidence': 0.9999063014984131,
- 'geometry': [[0.1989858837111671, 0.1123046875],
- [0.21279387069689337, 0.1220703125]]}]},
- {'geometry': [[0.17275070843828716, 0.119140625],
- [0.18379709802686817, 0.126953125]],
- 'words': [{'value': '*',
- 'confidence': 0.9989649057388306,
- 'geometry': [[0.17275070843828716, 0.119140625],
- [0.18379709802686817, 0.126953125]]}]}],
- 'artefacts': []},
- {'geometry': [[0.251456234256927, 0.0712890625],
- [0.41439048068849704, 0.0986328125]],
- 'lines': [{'geometry': [[0.251456234256927, 0.0712890625],
- [0.41439048068849704, 0.0849609375]],
- 'words': [{'value': 'European',
- 'confidence': 0.9998014569282532,
- 'geometry': [[0.251456234256927, 0.0732421875],
- [0.3149729743912678, 0.0849609375]]},
- {'value': 'Securities',
- 'confidence': 0.9985648989677429,
- 'geometry': [[0.3163537730898405, 0.072265625],
- [0.38401290931989923, 0.0830078125]]},
- {'value': 'and',
- 'confidence': 0.9997856020927429,
- 'geometry': [[0.3853937080184719, 0.0712890625],
- [0.41439048068849704, 0.083984375]]}]},
- {'geometry': [[0.251456234256927, 0.083984375],
- [0.3729665197313182, 0.0986328125]],
- 'words': [{'value': 'Markets',
- 'confidence': 0.9976556301116943,
- 'geometry': [[0.251456234256927, 0.0849609375],
- [0.3053073835012594, 0.0966796875]]},
- {'value': 'Authority',
- 'confidence': 0.8129342794418335,
- 'geometry': [[0.30668818219983207, 0.083984375],
- [0.3729665197313182, 0.0986328125]]}]}],
- 'artefacts': []},
- ...
- {'page_idx': ...},
  }
  ```

  ## Document Structure
- The structural organization of the documents, including words, lines, blocks, pages, and the overall document.

  | Element | Description |
  |------------|-------------|
@@ -203,7 +181,6 @@ The structural organization of the documents, including words, lines, blocks, pa
  | **Line** | A collection of Words aligned spatially and meant to be read together. |
  | **Block** | A collection of Lines. |
  | **Page** | A collection of Blocks that were on the same physical page. |
- > Artefacts are not used here.

  The top-level key, `pages`, is a list containing each page in the document. In this example, only one page is shown.

@@ -230,23 +207,80 @@ For each page, the structure includes:
  - **Lines**: Sequences of words within a block.
  - **Words**: Individual characters or words detected within each line, along with their confidence scores and positions.

  ### Data Splits

  There is only a single train split for this dataset.

  #### Train
  * `fc-amf-train-{0000..0838}.tar`
- * 838 shards (each shard is 500 MB for ease of use)
- * 605,438 samples
  * 9.3M pages

  ## Additional Information

- ### Note

- This dataset is intended as an OCR-heavy pre-training basis for vision-language models or specialized OCR models. The current version contains multilingual data with English and French as the most represented languages. The OCR annotation might not work well for other languages. Filtering based on word confidence scores can be used as a heuristic to subsample the dataset.

- In a future release, we will add language information per pdf.

  ### Licensing Information

 

  ### Dataset Summary

+ The FC-AMF-OCR dataset is a comprehensive document collection derived from the [AMF-PDF](https://huggingface.co/datasets/PleIAs/AMF-PDF) dataset, which is part of the Finance Commons collection. This extensive dataset comprises 9.3 million images, each processed through Optical Character Recognition (OCR) using the [docTR](https://github.com/mindee/doctr) library. While native text annotations are available in the [AMF-Text](https://huggingface.co/datasets/PleIAs/AMF-Text) dataset, these annotations suffer from imperfections and inaccuracies, mainly missing spaces, extra spaces, and OCR artifacts. Additionally, these annotations are presented as a single continuous block of text without page demarcations, which limits their utility for image-to-text tasks.

+ The FC-AMF-OCR dataset aims to address these limitations by providing:
+
+ - Full bounding box information for each element
+ - Confidence scores for individual words, lines, and text blocks
+ - Per-page annotations instead of a single block of text per document
+ - Corrected spacing, addressing the inaccuracies of the native text annotations

  <center>
  <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/sample0.png" alt="An example from the FC-AMF-OCR dataset" width="600" height="300">
+ <p><em>An example page of a PDF document with the existing text annotation (red) and the OCR annotation (green). For simplicity, we order text from left to right and top to bottom.</em></p>
  </center>

+ Most existing large-scale OCR datasets like the Industry Documents Library (IDL) or the PDF Association dataset (PDFA) suffer from a number of issues:
+ - Time Coverage: These datasets consist primarily of older documents or PDFs from specific periods, which might not reflect current trends or developments.
+ - OCR Engines: They use outdated or inconsistent OCR technologies, affecting the accuracy and reliability of text extraction.
+ - Annotation Coverage: Some annotations are limited to what can be extracted and is readily available; text drawn in images and present only as a bitmap rendition is missed entirely.
+
+ FC-AMF-OCR enhances existing datasets by offering detailed OCR annotations for a recent collection of text-rich documents from the French Authority for Financial Markets (AMF). It leverages the excellent open-source [docTR](https://github.com/mindee/doctr) OCR engine to extract text from various elements, including images and logos. By utilizing an open-source solution, FC-AMF-OCR ensures stability against API changes and allows users to implement custom filtering as needed. This approach provides researchers and developers with a reliable and transparent tool for comprehensive document understanding and analysis.
+
+ Following most large-scale OCR datasets like [IDL](https://huggingface.co/datasets/pixparse/idl-wds), this dataset is also in [webdataset](https://github.com/webdataset/webdataset/) .tar format and can be used seamlessly with the `webdataset` library. Concretely, each document exists as a pair of a `pdf` and a `json.gz` file containing the OCR annotation.
+
+ ### Load the dataset with `datasets`
+
+ This dataset can be used with the Hugging Face `datasets` library. Here is an example of how to stream the dataset directly from Hugging Face, so you don't have to download the dataset locally.

+ <div class="alert alert-info">
+ <b>Note:</b> We do recommend downloading the dataset to speed up processing.
+ </div>

  ```python
  from datasets import load_dataset

  import os
  from huggingface_hub import HfApi

  os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
  api = HfApi()
  api.snapshot_download("lightonai/fc-amf-ocr", repo_type="dataset", local_dir_use_symlinks=False)

  ```
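+
+ A minimal streaming sketch (hedged: this assumes `datasets` auto-detects the WebDataset shards and that each sample exposes the `pdf` bytes and the gzipped `json.gz` annotation used later in this card):
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the train split without downloading all shards first.
+ ds = load_dataset("lightonai/fc-amf-ocr", split="train", streaming=True)
+ sample = next(iter(ds))
+ print(sample.keys())  # inspect the available fields, e.g. the PDF bytes and the OCR annotation
+ ```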
+ ### Approach

  We start from the original dataset, which is a collection of 633,244 PDF files and apply some simple filters to remove files that are not relevant for training. The main goal is to have a dataset that is ready to use for large-scale training. We use the following filters:

  * Corrupted files: we remove files that fail to be decoded correctly or that take too long to load.
+ * Page count: we remove files that have more than 500 pages. Large files take too long to load and render.
  * Keep original quality: we apply no compression or rendering that would degrade the quality of the original PDF.

+ The basic filtering removes less than 1% of the original dataset. After the basic filtering:
+ * We selected the best-performing models from the [docTR](https://github.com/mindee/doctr) library (a usage sketch follows this list). For maximum accuracy, we keep all models in full precision (FP32).
+   - detection model: [DBNet with a ResNet-50 backbone](https://mindee.github.io/doctr/latest/modules/models.html#doctr.models.detection.db_resnet50)
+   - recognition model: [CRNN with a VGG-16 backbone](https://mindee.github.io/doctr/latest/modules/models.html#doctr.models.recognition.crnn_vgg16_bn)
+ * We use data parallelism to spread the OCR process over multiple GPUs: the dataset is split into multiple shards and each shard is processed in parallel.
+ * The recognition model is compiled with `torch.compile` to speed up inference.
+
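+ For reference, a minimal hedged sketch of running this docTR model pair on a single PDF (illustration only; the actual pipeline adds sharding across GPUs, batching, and `torch.compile`):
+
+ ```python
+ from doctr.io import DocumentFile
+ from doctr.models import ocr_predictor
+
+ # Detection: DBNet (ResNet-50); recognition: CRNN (VGG-16). Weights stay in FP32 by default.
+ model = ocr_predictor(det_arch="db_resnet50", reco_arch="crnn_vgg16_bn", pretrained=True)
+
+ doc = DocumentFile.from_pdf("example.pdf")  # hypothetical input file
+ result = model(doc)
+ annotation = result.export()  # nested pages/blocks/lines/words dict, like the json.gz payloads
+ ```
+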
+ By default, the images are rendered at a DPI of 144 for all the processing steps, but we provide the original PDFs so users can render them at their preferred quality. Having access to the full PDF quality is very important for training robust models.
+
+ The dataset's page distribution is represented in the following histogram. On average, documents contain approximately 15 pages, while the median page count is about 2.
+
  <center>
  <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/page_distribution.png" alt="." width="600" height="300">
+ <p><em>The distribution of the number of pages in the FC-AMF-OCR dataset.</em></p>
  </center>

+ We also show the year distribution of the dataset. The dataset contains documents from 2008 to 2024, so it is relatively recent and covers a wide range of years, which complements previous datasets.
+
+ <center>
+ <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/year_distribution.png" alt="." width="600" height="300">
+ <p><em>The distribution of years in the FC-AMF-OCR dataset.</em></p>
+ </center>

  ### How to visualize a page from the dataset?

  ```python
  from pdf2image import convert_from_bytes

+ page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)[0]
+ page  # a PIL image; displays inline in a notebook
  ```

  <center>
+ <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/first_page.png" alt="." width="600" height="300">
+ <p><em>A page from the FC-AMF-OCR dataset.</em></p>
  </center>

+ Each `pdf` is paired with a `json.gz` file with the following structure. The structure is that of docTR outputs. Learn more [here](https://mindee.github.io/doctr/using_doctr/using_models.html#what-should-i-do-with-the-output). We explicitly avoid applying any post-processing that would impose an approximate reading order; users can apply their own heuristics to recover the reading order from the bounding boxes (see the sketch after the example below).

  ```json
+ {
+   'pages': [{
+     'page_idx': 0,
+     'dimensions': [1684, 1191],
+     'blocks': [{
+       'geometry': [[0.2514, 0.0712], [0.4144, 0.0986]],
+       'lines': [{
+         'geometry': [[0.2515, 0.0713], [0.4144, 0.0850]],
+         'words': [
+           {
+             'value': 'European',
+             'confidence': 0.9998,
+             'geometry': [[0.2515, 0.0732], [0.3150, 0.0850]]
+           },
+           {
+             'value': 'Securities',
+             'confidence': 0.9986,
+             'geometry': [[0.3164, 0.0723], [0.3840, 0.0830]]
+           },
+           {
+             'value': 'and',
+             'confidence': 0.9998,
+             'geometry': [[0.3854, 0.0713], [0.4144, 0.0840]]
+           }
+         ]
+       },
+       {
+         'geometry': [[0.2515, 0.0840], [0.3730, 0.0986]],
+         'words': [
+           {
+             'value': 'Markets',
+             'confidence': 0.9977,
+             'geometry': [[0.2515, 0.0850], [0.3053, 0.0967]]
+           },
+           {
+             'value': 'Authority',
+             'confidence': 0.8129,
+             'geometry': [[0.3067, 0.0840], [0.3730, 0.0986]]
+           }
+         ]
+       }]
+     }]
+   }]
  }
  ```
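+
+ A hedged sketch of one such reading-order heuristic, sorting words by the top-left corner of their box (top to bottom, then left to right); the file name is hypothetical:
+
+ ```python
+ import gzip
+ import json
+
+ def naive_reading_order(page):
+     # Flatten blocks -> lines -> words and sort by (y, x) of the top-left corner.
+     words = [
+         (word['geometry'][0][1], word['geometry'][0][0], word['value'])
+         for block in page['blocks']
+         for line in block['lines']
+         for word in line['words']
+     ]
+     words.sort()
+     return " ".join(value for _, _, value in words)
+
+ with gzip.open("example.json.gz", "rt", encoding="utf-8") as f:
+     annotation = json.load(f)
+
+ print(naive_reading_order(annotation['pages'][0]))
+ ```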
  ## Document Structure
+ The documents are organized hierarchically into words, lines, blocks, and pages, as follows.

  | Element | Description |
  |------------|-------------|

  | **Line** | A collection of Words aligned spatially and meant to be read together. |
  | **Block** | A collection of Lines. |
  | **Page** | A collection of Blocks that were on the same physical page. |

  The top-level key, `pages`, is a list containing each page in the document. In this example, only one page is shown.

  - **Lines**: Sequences of words within a block.
  - **Words**: Individual characters or words detected within each line, along with their confidence scores and positions.

+ ### Bounding box visualization
+
+ You can visualize the bounding boxes of the dataset using the following code snippet. This code uses the [pdf2image](https://github.com/Belval/pdf2image) library to convert the PDF files to images.
+
+ ```python
+ import gzip
+ import json
+
+ import matplotlib.pyplot as plt
+ import matplotlib.patches as patches
+ from matplotlib.collections import PatchCollection
+ from pdf2image import convert_from_path
+
+
+ def visualize_bounding_boxes(pdf_path, json_path, page_num=0):
+     # Load the gzipped docTR annotation.
+     with gzip.open(json_path, 'rt', encoding='utf-8') as f:
+         json_data = json.load(f)
+
+     # Render the requested page and get its pixel size.
+     image = convert_from_path(pdf_path)[page_num]
+     img_width, img_height = image.size
+
+     fig, ax = plt.subplots(1, figsize=(20, 20))
+     ax.imshow(image)
+
+     patches_list = []
+
+     for block in json_data['pages'][page_num]['blocks']:
+         for line in block['lines']:
+             for word in line['words']:
+                 bbox = word['geometry']
+                 x1, y1 = bbox[0]
+                 x2, y2 = bbox[1]
+
+                 # Geometries are relative; scale them to pixel coordinates.
+                 x1, y1 = x1 * img_width, y1 * img_height
+                 x2, y2 = x2 * img_width, y2 * img_height
+
+                 width = x2 - x1
+                 height = y2 - y1
+
+                 rect = patches.Rectangle((x1, y1), width, height, linewidth=1, edgecolor='r', facecolor='none')
+                 patches_list.append(rect)
+
+     patch_collection = PatchCollection(patches_list, match_original=True)
+     ax.add_collection(patch_collection)
+
+     plt.axis('off')
+     plt.tight_layout()
+     plt.show()
+ ```
+
+ Visualizing all bounding boxes on a given page, we obtain the following:
+ <center>
+ <img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/bboxes.png" alt="." width="600" height="300">
+ <p><em>An example page with bounding box annotations in the FC-AMF-OCR dataset.</em></p>
+ </center>
+
  ### Data Splits

  There is only a single train split for this dataset.

  #### Train
  * `fc-amf-train-{0000..0838}.tar`
+ * 838 shards (each shard is around 500 MB; see the loading sketch below)
+ * 605,438 samples (PDF files)
  * 9.3M pages
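+
+ Once the shards are downloaded, a minimal `webdataset` loading sketch could look like this (hedged: the local directory is hypothetical, and the manual gzip/JSON decoding assumes the `pdf`/`json.gz` pair layout described above):
+
+ ```python
+ import gzip
+ import json
+
+ import webdataset as wds
+
+ # Brace expansion matches the shard names listed above; adjust the local path as needed.
+ urls = "fc-amf-ocr/fc-amf-train-{0000..0838}.tar"
+
+ def decode_sample(sample):
+     # Each sample pairs the raw PDF bytes with a gzipped docTR annotation.
+     annotation = json.loads(gzip.decompress(sample["json.gz"]))
+     return {"pdf": sample["pdf"], "annotation": annotation}
+
+ dataset = wds.WebDataset(urls).map(decode_sample)
+ first = next(iter(dataset))
+ print(len(first["annotation"]["pages"]))
+ ```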
  ## Additional Information

+ ### Compute

+ The compute was carried out on an HPE Cray node with 8x NVIDIA H100 GPUs, hosted on Orange Business Cloud Avenue.
+
+ ### Note

+ This dataset is intended as an OCR-heavy pre-training resource for vision-language models or specialized OCR models. The current version contains multilingual data, with English and French as the most represented languages. The OCR annotation might not work well for other languages due to OCR engine limitations. Filtering based on word confidence scores can be used as a heuristic to subsample the dataset for higher quality; a sketch of such a filter is given below. This approach can be scaled further to larger collections with more languages and more diverse content, making it a reliable way to obtain multimodal data for documents.
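+
+ A minimal sketch of such confidence-based filtering (the threshold is illustrative, not part of the dataset):
+
+ ```python
+ def page_mean_confidence(page):
+     # Average word-level confidence over a docTR-style page dict.
+     scores = [
+         word["confidence"]
+         for block in page["blocks"]
+         for line in block["lines"]
+         for word in line["words"]
+     ]
+     return sum(scores) / len(scores) if scores else 0.0
+
+ def keep_document(annotation, threshold=0.9):
+     # Keep a document only if every page clears the confidence threshold.
+     return all(page_mean_confidence(p) >= threshold for p in annotation["pages"])
+ ```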
  ### Licensing Information