---
task_categories:
- image-to-text
size_categories:
- 1M<n<10M
language:
- en
- fr

splits:
- name: train
  num_examples: 9357567

---
# Dataset Card for Finance Commons AMF OCR dataset (FC-AMF-OCR)

## Dataset Description

- **Contact at LightOn:** [Said Taghadouini](mailto:[email protected])

### Dataset Summary

The FC-AMF-OCR dataset is a document dataset OCRed from the [AMF-PDF](https://huggingface.co/datasets/PleIAs/AMF-PDF) dataset, part of the Finance Commons collection. The collection comes with native text annotations, available in [AMF-Text](https://huggingface.co/datasets/PleIAs/AMF-Text), but these annotations are imperfect and contain many errors (missing spaces, OCR artifacts, etc.). The goal of this dataset is to provide more accurate and complete OCR annotations for the Finance Commons AMF dataset, with full bounding box information and confidence scores for each word, line, and block.

Most existing large-scale OCR datasets, such as the Industry Documents Library (IDL) or the PDF Association dataset (PDFA), suffer from a number of issues:
- Time coverage: these datasets consist primarily of older documents or PDFs from specific periods, which might not reflect current trends or developments.
- OCR engines: they use outdated or inconsistent OCR technologies, affecting the accuracy and reliability of text extraction.
- Completeness: some annotations are limited to what is readily extractable; text drawn in images and only present as a bitmap rendition might be missed entirely.

FC-AMF-OCR aims to address these issues by providing comprehensive OCR annotations for a recent collection of text-heavy documents, produced with the excellent [docTR](https://github.com/mindee/doctr) open-source OCR engine. Using an open-source OCR engine makes the annotations less subject to API changes; users know what to expect and can apply their own filtering logic if need be.

<center>
<img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/sample0.png" alt="An example from the FC-AMF-OCR dataset" width="600" height="300">
<p><em>An example page of a PDF document with the existing text annotation (red) and the OCR annotation (green).</em></p>
</center>

Following most large-scale OCR datasets like [IDL](https://huggingface.co/datasets/pixparse/idl-wds), this dataset is distributed in the [webdataset](https://github.com/webdataset/webdataset/) .tar format and can be used with the `webdataset` library or tools derived from it, as sketched below.

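For illustration, the shards can be streamed directly with `webdataset`. This is only a sketch: it assumes the shards sit at the repository root and follow the `fc-amf-train-{0000..0838}.tar` naming described in the Data Splits section.

```python
import webdataset as wds

# Brace notation expands to the training shards listed in the Data Splits section.
url = "https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/fc-amf-train-{0000..0838}.tar"
dataset = wds.WebDataset(url)

for sample in dataset:
    pdf_bytes = sample["pdf"]          # raw PDF bytes
    annotation_gz = sample["json.gz"]  # gzipped docTR annotation
    break
```
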
### Usage with `datasets`

This dataset can be used with the `webdataset` library or with Hugging Face `datasets`. Here is an example of how to stream the dataset directly from Hugging Face so you don't have to download it locally.
> Note: We do recommend downloading the dataset to save bandwidth.

```python
from datasets import load_dataset

dataset = load_dataset('lightonai/fc-amf-ocr', streaming=True)
print(next(iter(dataset['train'])).keys())
# dict_keys(['__key__', '__url__', 'pdf', 'json.gz'])
```

You can download the full dataset with the following snippet:

```python
import os

from huggingface_hub import HfApi

# Optional: faster downloads via the `hf_transfer` package (pip install hf_transfer).
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

api = HfApi()
api.snapshot_download("lightonai/fc-amf-ocr", repo_type="dataset", local_dir_use_symlinks=False)
```

#### Filtering process

We start from the original dataset, a collection of 633,244 PDF files, and apply a few simple filters to remove files that are not relevant for training. The main goal is to have a dataset that is ready to use for large-scale training. We use the following filters (a sketch of the page-count check follows the list):

* Corrupted files: we remove files that fail to decode correctly or that take too long to load.
* Page count: we remove files that have more than 500 pages.
* Keep original quality: we apply no compression or re-rendering that would degrade the quality of the original PDFs.

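The page-count check can be illustrated with a short sketch. This is not the exact pipeline used to build the dataset, just one way such a filter might be written, here with the `pypdf` library (an assumed dependency for this example):

```python
from pypdf import PdfReader

MAX_PAGES = 500  # threshold from the filtering rules above

def keep_pdf(path: str) -> bool:
    """Return True if the PDF decodes correctly and has at most MAX_PAGES pages."""
    try:
        reader = PdfReader(path)
        return len(reader.pages) <= MAX_PAGES
    except Exception:
        # Treat unreadable files as corrupted and drop them.
        return False
```
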
Here is a histogram of the number of pages per document in the dataset:

<center>
<img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/page_distribution.png" alt="Histogram of page counts" width="600" height="300">
<p><em>The distribution of the number of pages in the FC-AMF-OCR dataset.</em></p>
</center>

In the end, each document is stored as a pair of files: a `pdf` and a `json.gz` file containing the full OCR annotation. The webdataset packaging makes it easy to use the data for image-to-text tasks at scale.

### How to visualize a page from the dataset?

PDF files are sourced from a variety of origins and are typically stored in RGB format. These files can consist of multiple pages, each of which can be rendered using different tools or engines according to your needs. One recommended option is [pdf2image](https://github.com/Belval/pdf2image), a tool that converts PDF pages into images. To use pdf2image, you need to install the poppler-utils package, which provides the backend needed to render and process PDF files efficiently. This approach allows for flexible handling of PDFs, making it easier to extract and manipulate content from multi-page documents.

```bash
apt-get install poppler-utils
```

```python
from pdf2image import convert_from_bytes

# `sample` is one record from the dataset, e.g. next(iter(dataset['train'])) from the streaming example above.
pdf_first_page = convert_from_bytes(sample['pdf'], dpi=300, first_page=1, last_page=1)[0]
```

<center>
<img src="https://huggingface.co/datasets/lightonai/fc-amf-ocr/resolve/main/docs/pdf_first_page.png" alt="The first page of a sample document rendered with pdf2image" width="400" height="600">
</center>

Each `pdf` is paired with a `json.gz` file with the following structure. This structure is that of docTR outputs; learn more [here](https://mindee.github.io/doctr/using_doctr/using_models.html#what-should-i-do-with-the-output). We explicitly avoid applying any post-processing to recover an approximate reading order; users can apply their own heuristics to extract a reading order from the bounding boxes.

```json
{'pages': [{'page_idx': 0,
'dimensions': [1684, 1191],
'blocks': [{'geometry': [[0.11751876049538201, 0.0478515625],
[0.2390290459697733, 0.126953125]],
'lines': [{'geometry': [[0.14513473446683461, 0.0478515625],
[0.15618112405541562, 0.0556640625]],
'words': [{'value': '*',
'confidence': 0.9999570846557617,
'geometry': [[0.14513473446683461, 0.0478515625],
[0.15618112405541562, 0.0556640625]]}]},
{'geometry': [[0.1258035526868178, 0.060546875],
[0.13823074097397148, 0.0703125]],
'words': [{'value': '*',
'confidence': 0.9995156526565552,
'geometry': [[0.1258035526868178, 0.060546875],
[0.13823074097397148, 0.0703125]]}]},
{'geometry': [[0.11751876049538201, 0.0751953125],
[0.2390290459697733, 0.09375]],
'words': [{'value': '*',
'confidence': 0.9917723536491394,
'geometry': [[0.11751876049538201, 0.078125],
[0.13270754617968095, 0.08984375]]},
{'value': 'esma',
'confidence': 0.9842223525047302,
'geometry': [[0.1506579292611251, 0.0751953125],
[0.2390290459697733, 0.09375]]}]},
{'geometry': [[0.12442275398824515, 0.09765625],
[0.13823074097397148, 0.107421875]],
'words': [{'value': '*',
'confidence': 0.9726816415786743,
'geometry': [[0.12442275398824515, 0.09765625],
[0.13823074097397148, 0.107421875]]}]},
{'geometry': [[0.21693626679261124, 0.0986328125],
[0.23074425377833752, 0.107421875]],
'words': [{'value': '*',
'confidence': 0.9999707937240601,
'geometry': [[0.21693626679261124, 0.0986328125],
[0.23074425377833752, 0.107421875]]}]},
{'geometry': [[0.14375393576826195, 0.1123046875],
[0.15756192275398823, 0.12109375]],
'words': [{'value': '*',
'confidence': 0.9999815225601196,
'geometry': [[0.14375393576826195, 0.1123046875],
[0.15756192275398823, 0.12109375]]}]},
{'geometry': [[0.1989858837111671, 0.1123046875],
[0.21279387069689337, 0.1220703125]],
'words': [{'value': '*',
'confidence': 0.9999063014984131,
'geometry': [[0.1989858837111671, 0.1123046875],
[0.21279387069689337, 0.1220703125]]}]},
{'geometry': [[0.17275070843828716, 0.119140625],
[0.18379709802686817, 0.126953125]],
'words': [{'value': '*',
'confidence': 0.9989649057388306,
'geometry': [[0.17275070843828716, 0.119140625],
[0.18379709802686817, 0.126953125]]}]}],
'artefacts': []},
{'geometry': [[0.251456234256927, 0.0712890625],
[0.41439048068849704, 0.0986328125]],
'lines': [{'geometry': [[0.251456234256927, 0.0712890625],
[0.41439048068849704, 0.0849609375]],
'words': [{'value': 'European',
'confidence': 0.9998014569282532,
'geometry': [[0.251456234256927, 0.0732421875],
[0.3149729743912678, 0.0849609375]]},
{'value': 'Securities',
'confidence': 0.9985648989677429,
'geometry': [[0.3163537730898405, 0.072265625],
[0.38401290931989923, 0.0830078125]]},
{'value': 'and',
'confidence': 0.9997856020927429,
'geometry': [[0.3853937080184719, 0.0712890625],
[0.41439048068849704, 0.083984375]]}]},
{'geometry': [[0.251456234256927, 0.083984375],
[0.3729665197313182, 0.0986328125]],
'words': [{'value': 'Markets',
'confidence': 0.9976556301116943,
'geometry': [[0.251456234256927, 0.0849609375],
[0.3053073835012594, 0.0966796875]]},
{'value': 'Authority',
'confidence': 0.8129342794418335,
'geometry': [[0.30668818219983207, 0.083984375],
[0.3729665197313182, 0.0986328125]]}]}],
'artefacts': []},
...
{'page_idx': ...},
}
```

## Document Structure

The documents follow docTR's structural organization: words make up lines, lines make up blocks, and blocks make up pages, which together form the overall document.

| Element | Description |
|------------|-------------|
| **Word** | A Word is an uninterrupted sequence of characters. |
| **Line** | A collection of Words aligned spatially and meant to be read together. |
| **Block** | A collection of Lines. |
| **Page** | A collection of Blocks that were on the same physical page. |

> Artefacts are not used here.

The top-level key, `pages`, is a list containing each page in the document. In this example, only one page is shown.

- **Page**:
  - `page_idx`: The index of the page in the document (starts at 0).
  - `dimensions`: The dimensions of the page in pixels, formatted as `[height, width]`.

- **Blocks**:
  - A page consists of several `blocks`, each containing lines.
  - `geometry`: Defines the bounding box of the block using normalized coordinates relative to the page size.

- **Lines**:
  - Each block contains a list of `lines`, where a line is a sequence of words grouped together.
  - `geometry`: Bounding box of the line in normalized coordinates relative to the page size.

- **Words**:
  - Each line is composed of individual `words` (continuous sequences of characters).
  - `value`: The text content of the word.
  - `confidence`: The confidence score of the OCR engine for the word.
  - `geometry`: Bounding box of the word in normalized coordinates relative to the page size.

For each page, the structure includes:
- **Blocks**: Grouped lines within a page.
- **Lines**: Sequences of words within a block.
- **Words**: Individual words detected within each line, along with their confidence scores and positions.

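To make the structure concrete, here is a minimal sketch of how one might walk a `json.gz` annotation to rebuild the plain text of a page. The field names follow the docTR export shown above; the top-to-bottom sort and the confidence threshold are only example heuristics, and the snippet assumes the `json.gz` field still holds raw gzipped bytes (adapt it if your loader already decodes them).

```python
import gzip
import json

def page_to_text(page, min_confidence=0.0):
    """Concatenate the words of one docTR page, skipping low-confidence words."""
    out = []
    # Naive reading order: sort blocks and lines top-to-bottom by their y_min coordinate.
    for block in sorted(page["blocks"], key=lambda b: b["geometry"][0][1]):
        for line in sorted(block["lines"], key=lambda l: l["geometry"][0][1]):
            words = [w["value"] for w in line["words"] if w["confidence"] >= min_confidence]
            if words:
                out.append(" ".join(words))
    return "\n".join(out)

# `sample` is one record from the dataset, e.g. next(iter(dataset['train'])).
annotation = json.loads(gzip.decompress(sample["json.gz"]))
print(page_to_text(annotation["pages"][0], min_confidence=0.5))
```
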
### Data Splits

There is only a single train split for this dataset.

#### Train
* `fc-amf-train-{0000..0838}.tar`
* 838 shards (each shard is 500 MB for ease of use)
* 605,438 samples
* 9.3M pages

## Additional Information

### Note

This dataset is intended as an OCR-heavy pre-training corpus for vision-language models or specialized OCR models. The current version contains multilingual data, with English and French as the most represented languages. The OCR annotation might not work as well for other languages. Filtering on word confidence scores can be used as a heuristic to subsample the dataset, as sketched below.

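For instance, a document-level filter on mean word confidence might look like the sketch below. The 0.9 threshold is arbitrary, and the snippet again assumes the `json.gz` field holds raw gzipped bytes and that `dataset` is the streaming dataset from the usage example above.

```python
import gzip
import json

def mean_word_confidence(annotation):
    """Average word-level OCR confidence over all pages of one document."""
    scores = [
        word["confidence"]
        for page in annotation["pages"]
        for block in page["blocks"]
        for line in block["lines"]
        for word in line["words"]
    ]
    return sum(scores) / len(scores) if scores else 0.0

def keep_example(example, threshold=0.9):
    annotation = json.loads(gzip.decompress(example["json.gz"]))
    return mean_word_confidence(annotation) >= threshold

# Lazily drop low-confidence documents while streaming.
filtered = dataset["train"].filter(keep_example)
```
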
In a future release, we will add language information per PDF.

### Licensing Information

The data has been OCRed from the original dataset; as a consequence, it carries the same license as [AMF-PDF](https://huggingface.co/datasets/PleIAs/AMF-PDF).