dataset_info:
features:
- name: file_name
dtype: string
- name: url_id
dtype: int64
- name: country
dtype: string
- name: lang
dtype: string
- name: pdf_bytes_base64
dtype: binary
- name: hosts
struct:
- name: country
dtype: string
- name: host
dtype: string
- name: ip_address
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: tld
dtype: string
- name: provenance
struct:
- name: cc_detected_mime
dtype: string
- name: cc_digest
dtype: string
- name: cc_http_mime
dtype: string
- name: cc_truncated
dtype: string
- name: cc_warc_end
dtype: int64
- name: cc_warc_file_name
dtype: string
- name: cc_warc_start
dtype: int64
- name: fetched_digest
dtype: string
- name: fetched_length
dtype: int64
- name: fetched_status
dtype: string
- name: url
dtype: string
- name: info
struct:
- name: created
dtype: string
- name: creator
dtype: string
- name: custom_metadata
dtype: string
- name: exit_value
dtype: int64
- name: form
dtype: string
- name: javascript
dtype: string
- name: metadata_stream
dtype: string
- name: modified
dtype: string
- name: optimized
dtype: string
- name: page_rotation
dtype: float64
- name: page_size
dtype: string
- name: pages
dtype: float64
- name: parse_time_millis
dtype: int64
- name: pdf_version
dtype: float64
- name: producer
dtype: string
- name: stderr
dtype: string
- name: tagged
dtype: string
- name: timeout
dtype: string
- name: user_properties
dtype: string
- name: tika
struct:
- name: attachment_count
dtype: float64
- name: container_exception
dtype: string
- name: created
dtype: string
- name: encrypted
dtype: string
- name: has_collection
dtype: string
- name: has_marked_content
dtype: string
- name: has_signature
dtype: string
- name: has_xfa
dtype: string
- name: has_xmp
dtype: string
- name: location
dtype: string
- name: macro_count
dtype: float64
- name: mime
dtype: string
- name: modified
dtype: string
- name: num_pages
dtype: float64
- name: parse_status
dtype: string
- name: parse_time_millis
dtype: int64
- name: pdf_contains_damaged_font
dtype: string
- name: pdf_contains_non_embedded_font
dtype: string
- name: pdf_has_acroform_fields
dtype: string
- name: pdf_incremental_updates
dtype: float64
- name: pdf_num_3d_annotations
dtype: float64
- name: pdf_overall_unmapped_unicode_chars
dtype: float64
- name: pdf_producer
dtype: string
- name: pdf_version
dtype: float64
- name: pdfa_version
dtype: string
- name: pdfuaid_part
dtype: float64
- name: pdfvt_version
dtype: float64
- name: pdfx_conformance
dtype: string
- name: pdfx_version
dtype: string
- name: pdfxid_version
dtype: string
- name: tika_eval_lang
dtype: string
- name: tika_eval_num_alpha_tokens
dtype: float64
- name: tika_eval_num_tokens
dtype: float64
- name: tika_eval_oov
dtype: float64
- name: xmp_creator_tool
dtype: string
- name: tika_attachments
list:
- name: container_exception
dtype: string
- name: created
dtype: string
- name: emb_depth
dtype: int64
- name: embedded_exception
dtype: float64
- name: embedded_id
dtype: float64
- name: embedded_id_path
dtype: string
- name: embedded_resource_type
dtype: string
- name: encrypted
dtype: string
- name: has_collection
dtype: string
- name: has_marked_content
dtype: string
- name: has_signature
dtype: string
- name: has_xfa
dtype: string
- name: has_xmp
dtype: string
- name: location
dtype: string
- name: mime
dtype: string
- name: modified
dtype: string
- name: num_pages
dtype: float64
- name: parse_status
dtype: string
- name: parse_time_millis
dtype: int64
- name: pdf_contains_damaged_font
dtype: string
- name: pdf_contains_non_embedded_font
dtype: string
- name: pdf_has_acroform_fields
dtype: string
- name: pdf_incremental_updates
dtype: float64
- name: pdf_num_3d_annotations
dtype: float64
- name: pdf_overall_unmapped_unicode_chars
dtype: float64
- name: pdf_producer
dtype: string
- name: pdf_version
dtype: float64
- name: pdfa_version
dtype: string
- name: pdfuaid_part
dtype: float64
- name: pdfvt_version
dtype: float64
- name: pdfx_conformance
dtype: string
- name: pdfx_version
dtype: string
- name: pdfxid_version
dtype: string
- name: tika_eval_lang
dtype: string
- name: tika_eval_num_alpha_tokens
dtype: float64
- name: tika_eval_num_tokens
dtype: float64
- name: tika_eval_oov
dtype: float64
- name: xmp_creator_tool
dtype: string
splits:
- name: train
num_bytes: 1840299303
num_examples: 1000
download_size: 1756624272
dataset_size: 1840299303
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- PDF
size_categories:
- 1K<n<10K
Dataset Card for SAFEDOCS 1k PDF
This dataset is a subset (first 1,000 PDFs) of the original SAFEDOCS (CC-MAIN-2021-31-PDF-UNTRUNCATED) merged with the corresponding metadata.
The PDFs in this dataset are found in the 0000.zip
file available here. The metadata files are available here.
Each file corresponds to a feature in the dataset:
- hosts: cc-hosts-20230324-1k.csv
- provenance: cc-provenance-20230324-1k.csv
- info: pdfinfo-20230324-1k.csv
- tika: tika-20230714-1k.csv
- tika_attachments: tika-with-attachments-20230714-1k.csv
The PDFs were encoded in base64 and added as the pdf_bytes_base64 feature.
The keys country (from hosts) and lang (from tika, as tika_eval_lang) were replicated as top-level features to make it easier to filter the data by origin and language.
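Filtering on those replicated keys amounts to a simple predicate over each example. A minimal sketch over toy records (hypothetical values, not actual rows; with the real dataset you would pass the same predicate to Dataset.filter):

```python
# Toy records mimicking the top-level `country` and `lang` features
# (hypothetical values, not actual rows from the dataset).
records = [
    {"file_name": "0000000.pdf", "country": "US", "lang": "eng"},
    {"file_name": "0000001.pdf", "country": "DE", "lang": "deu"},
    {"file_name": "0000002.pdf", "country": "US", "lang": "spa"},
]

def keep(example):
    # The same predicate can be passed to `datasets.Dataset.filter`.
    return example["country"] == "US" and example["lang"] == "eng"

subset = [r for r in records if keep(r)]
print([r["file_name"] for r in subset])  # ['0000000.pdf']
```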
You can load it easily and quickly:

from datasets import load_dataset

dataset = load_dataset("dvgodoy/SAFEDOCS_1k", split="train")

Dataset({
    features: ['file_name', 'url_id', 'country', 'lang', 'pdf_bytes_base64', 'hosts', 'provenance', 'info', 'tika', 'tika_attachments'],
    num_rows: 1000
})
Decoding PDFs
To handle the PDFs, you will need to decode the pdf_bytes_base64 feature and load it into a PDF object using your favorite PDF library (e.g. pdfplumber):

import base64
import io

import pdfplumber

# `mini_batch` is a slice of examples, e.g. mini_batch = dataset[:8]
# load the bytes into your favorite PDF library, e.g. `pdfplumber`
for encoded in mini_batch['pdf_bytes_base64']:
    bytes_content = io.BytesIO(base64.b64decode(encoded))
    pdf_obj = pdfplumber.open(bytes_content)
    # process the PDFs
    # ...
    # CLOSE the objects after you've used them
    pdf_obj.close()
    bytes_content.close()

You can use any other library or package to load the PDF; just make sure it can open a PDF from bytes.
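A library-agnostic sanity check is to look for the %PDF magic header before handing the buffer to a parser. A stdlib-only sketch (the encoded payload below is a hypothetical stand-in for one row's pdf_bytes_base64 value, not real data):

```python
import base64
import io

# Hypothetical stand-in for one row's `pdf_bytes_base64` value:
# a minimal byte string carrying the PDF magic header, base64-encoded.
encoded = base64.b64encode(b"%PDF-1.7\n%%EOF\n")

raw = base64.b64decode(encoded)
assert raw.startswith(b"%PDF"), "payload does not look like a PDF"

# Any library that opens PDFs from bytes can consume this buffer.
buffer = io.BytesIO(raw)
print(raw[:8])  # b'%PDF-1.7'
```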
Dataset Description
- Homepage: SAFEDOCS (CC-MAIN-2021-31-PDF-UNTRUNCATED)
- Paper: Making more sense of PDF structures in the wild at scale
Dataset Summary
The original SAFEDOCS (CC-MAIN-2021-31-PDF-UNTRUNCATED) consists of nearly 8,000,000 PDFs gathered from across the web in July/August of 2021.
This "1k" version contains only the first 1,000 PDFs from the original dataset.
Dataset Structure
Data Instances
A sample from the training set is provided below:
{
'file_name': '0000000.pdf',
'url_id': 6368476,
'country': 'US',
'lang': 'eng',
'pdf_bytes_base64': b'JVBERi0xLjcKJe...',
'hosts': {
'host': 'augustaarchives.com',
'tld': 'com',
'ip_address': '66.228.60.224',
'country': 'US',
'latitude': 33.7485,
'longitude': -84.3871
},
'provenance': {
'url': 'http://augustaarchives.com/cgi-bin/content/view.php?data=ib_math_sl_practice_questions_and_answers&filetype=pdf&id=14310d60ac64f9023c7dbc458f0f7b38',
'cc_digest': 'H66IGSGFXBJLJIKKN7LSHQSM3YTX4TDF',
'cc_http_mime': 'application/pdf',
'cc_detected_mime': 'application/pdf',
'cc_warc_file_name': 'crawl-data/CC-MAIN-2021-31/segments/1627046154042.23/warc/CC-MAIN-20210731011529-20210731041529-00143.warc.gz',
'cc_warc_start': 3724499,
'cc_warc_end': 3742341,
'cc_truncated': '',
'fetched_status': 'ADDED_TO_REPOSITORY',
'fetched_digest': '00000836a41510292765a86e57a80f1e182360ca5c6ff0841482b31c8f25ab84',
'fetched_length': 29690
},
'info': {
'parse_time_millis': 385,
'exit_value': 0,
'timeout': 'f',
'stderr': '',
'pdf_version': 1.7,
'creator': 'Acrobat 9.0.0',
'producer': 'Adobe InDesign CC 2015 (Macintosh)|x|Adobe PDF Library 15.0; modified using iText 2.1.7 by 1T3XT',
'created': '2021-07-31T03:13:33Z',
'modified': '2021-07-31T03:13:33Z',
'custom_metadata': 'no',
'metadata_stream': 'yes',
'tagged': 'no',
'user_properties': 'no',
'form': 'none',
'javascript': 'no',
'pages': 12.0,
'page_size': '297.638 x 419.528 pts',
'page_rotation': 0.0,
'optimized': 'no'
},
'tika': {
'parse_status': 'OK',
'parse_time_millis': 1430,
'mime': 'application/pdf',
'macro_count': nan,
'attachment_count': nan,
'created': '2021-07-31 03:13:33',
'modified': '2021-07-31 03:13:33',
'encrypted': 'f',
'has_xfa': 'f',
'has_xmp': 't',
'has_collection': 'f',
'has_marked_content': 'f',
'num_pages': 12.0,
'xmp_creator_tool': 'Acrobat 9.0.0',
'pdf_producer': 'Microsoft Word|x|DocuCom PDF Driver 4.61 for NT',
'pdf_version': 1.7,
'pdfa_version': '',
'pdfuaid_part': nan,
'pdfx_conformance': '',
'pdfx_version': '',
'pdfxid_version': '',
'pdfvt_version': nan,
'pdf_num_3d_annotations': 0.0,
'pdf_has_acroform_fields': '',
'pdf_incremental_updates': 0.0,
'pdf_overall_unmapped_unicode_chars': 0.0,
'pdf_contains_damaged_font': 'f',
'pdf_contains_non_embedded_font': 't',
'has_signature': '',
'location': '',
'tika_eval_num_tokens': 1489.0,
'tika_eval_num_alpha_tokens': 1344.0,
'tika_eval_lang': 'eng',
'tika_eval_oov': 0.2961309552192688,
'container_exception': ''
},
'tika_attachments': [{
'parse_status': 'OK',
'parse_time_millis': 1430,
'mime': 'application/pdf',
'emb_depth': 0,
'embedded_id': nan,
'embedded_id_path': '',
'embedded_resource_type': '',
'created': '2021-07-31 03:13:33',
'modified': '2021-07-31 03:13:33',
'encrypted': 'f',
'has_xfa': 'f',
'has_xmp': 't',
'has_collection': 'f',
'has_marked_content': 'f',
'num_pages': 12.0,
'xmp_creator_tool': 'Acrobat 9.0.0',
'pdf_producer': 'Microsoft Word|x|DocuCom PDF Driver 4.61 for NT',
'pdf_version': 1.7,
'pdfa_version': '',
'pdfuaid_part': nan,
'pdfx_conformance': '',
'pdfx_version': '',
'pdfxid_version': '',
'pdfvt_version': nan,
'pdf_num_3d_annotations': 0.0,
'pdf_has_acroform_fields': '',
'pdf_incremental_updates': 0.0,
'pdf_overall_unmapped_unicode_chars': 0.0,
'pdf_contains_damaged_font': 'f',
'pdf_contains_non_embedded_font': 't',
'has_signature': '',
'location': '',
'tika_eval_num_tokens': 1489.0,
'tika_eval_num_alpha_tokens': 1344.0,
'tika_eval_lang': 'eng',
'tika_eval_oov': 0.2961309552192688,
'container_exception': '',
'embedded_exception': nan
}]
}
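Once loaded, the nested structs above are plain Python dicts, so quality filters over the Tika metadata are straightforward. A minimal sketch using a trimmed copy of the sample instance (the 0.5 threshold is a hypothetical choice, not part of the dataset):

```python
# Trimmed copy of the sample instance above (only the fields used here).
sample = {
    "file_name": "0000000.pdf",
    "tika": {
        "tika_eval_lang": "eng",
        "tika_eval_num_tokens": 1489.0,
        "tika_eval_num_alpha_tokens": 1344.0,
        "tika_eval_oov": 0.2961309552192688,
    },
}

def looks_clean(example, max_oov=0.5):
    # Keep documents whose extracted text has a low out-of-vocabulary
    # ratio (hypothetical threshold; tune for your own use case).
    oov = example["tika"]["tika_eval_oov"]
    return oov is not None and oov <= max_oov

print(looks_clean(sample))  # True
```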
Dataset Creation
Curation Rationale
PDF is a ubiquitous format used across many industrial and research domains. Many existing corpora focusing on extant data (such as GovDocs1) are now quite old and no longer reflect current changes and trends in either the PDF file format itself or in PDF-creating and authoring applications. Advances in machine learning have also increased the demand for larger datasets. This corpus is thus helpful for:
- PDF technology and software testing, assessment, and evaluation
- Information privacy research
- Document understanding, text extraction, table identification, OCR/ICR, formula identification, document recognition and analysis, and related document engineering domains
- Malware and cyber-security research
- ML/AI applications and research (document classification, document content, text extraction, etc.)
- Preservation and archival research
- Usability and accessibility research
- Software engineering research (parsing, formal methods, etc.)
Source Data
Initial Data Collection and Normalization
The PDF files were initially identified by Common Crawl as part of their July/August 2021 crawl (identified as CC-MAIN-2021-31) and subsequently updated and collated as part of the DARPA SafeDocs program.
Additional Information
Credits
The original dataset was gathered by a team at NASA’s Jet Propulsion Laboratory (JPL), California Institute of Technology while supporting the Defense Advanced Research Projects Agency (DARPA)’s SafeDocs Program. The JPL team included Chris Mattmann (PI), Wayne Burke, Dustin Graf, Tim Allison, Ryan Stonebraker, Mike Milano, Philip Southam and Anastasia Menshikova.
The JPL team collaborated with Peter Wyatt, the Chief Technology Officer of the PDF Association and PI on the SafeDocs program, in the design and documentation of this corpus.
The JPL team and PDF Association would like to thank Simson Garfinkel and Digital Corpora for taking ownership of this dataset and publishing it. Our thanks are extended to the Amazon Open Data Sponsorship Program for enabling this large corpus to be free and publicly available as part of Digital Corpora initiative.
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology.
The research was carried out at the NASA (National Aeronautics and Space Administration) Jet Propulsion Laboratory, California Institute of Technology under a contract with the Defense Advanced Research Projects Agency (DARPA) SafeDocs program. Government sponsorship acknowledged.