|
--- |
|
license: odc-by |
|
task_categories: |
|
- text-generation |
|
dataset_info: |
|
- config_name: doc |
|
features: |
|
- name: section |
|
dtype: string |
|
- name: filename |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 2258177156.7461534 |
|
num_examples: 38094 |
|
- name: validation |
|
num_bytes: 59397635.08845607 |
|
num_examples: 1002 |
|
- name: test |
|
num_bytes: 59456914.165390655 |
|
num_examples: 1003 |
|
download_size: 938691731 |
|
dataset_size: 2377031706.0 |
|
- config_name: docx |
|
features: |
|
- name: section |
|
dtype: string |
|
- name: filename |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 4605598.853503184 |
|
num_examples: 141 |
|
- name: validation |
|
num_bytes: 261310.57324840763 |
|
num_examples: 8 |
|
- name: test |
|
num_bytes: 261310.57324840763 |
|
num_examples: 8 |
|
download_size: 1788590 |
|
dataset_size: 5128220 |
|
- config_name: logs |
|
features: |
|
- name: section |
|
dtype: string |
|
- name: filename |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 2350324475.916881 |
|
num_examples: 9223 |
|
- name: validation |
|
num_bytes: 61924411.541559376 |
|
num_examples: 243 |
|
- name: test |
|
num_bytes: 61924411.541559376 |
|
num_examples: 243 |
|
download_size: 718096901 |
|
dataset_size: 2474173298.9999995 |
|
- config_name: pptx |
|
features: |
|
- name: section |
|
dtype: string |
|
- name: filename |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 9517778 |
|
num_examples: 963 |
|
- name: validation |
|
num_bytes: 513930 |
|
num_examples: 53 |
|
- name: test |
|
num_bytes: 436852 |
|
num_examples: 54 |
|
download_size: 5314310 |
|
dataset_size: 10468560 |
|
- config_name: rtf |
|
features: |
|
- name: section |
|
dtype: string |
|
- name: filename |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 61558658.13180516 |
|
num_examples: 942 |
|
- name: validation |
|
num_bytes: 3398142.4871060173 |
|
num_examples: 52 |
|
- name: test |
|
num_bytes: 3463491.3810888254 |
|
num_examples: 53 |
|
download_size: 22547280 |
|
dataset_size: 68420292 |
|
- config_name: txt |
|
features: |
|
- name: section |
|
dtype: string |
|
- name: filename |
|
dtype: string |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 1358006724.1432111 |
|
num_examples: 41393 |
|
- name: validation |
|
num_bytes: 35727522.10740843 |
|
num_examples: 1089 |
|
- name: test |
|
num_bytes: 35760329.749380335 |
|
num_examples: 1090 |
|
download_size: 608912009 |
|
dataset_size: 1429494576 |
|
configs: |
|
- config_name: doc |
|
data_files: |
|
- split: train |
|
path: doc/train-* |
|
- split: validation |
|
path: doc/validation-* |
|
- split: test |
|
path: doc/test-* |
|
- config_name: docx |
|
data_files: |
|
- split: train |
|
path: docx/train-* |
|
- split: validation |
|
path: docx/validation-* |
|
- split: test |
|
path: docx/test-* |
|
- config_name: logs |
|
data_files: |
|
- split: train |
|
path: logs/train-* |
|
- split: validation |
|
path: logs/validation-* |
|
- split: test |
|
path: logs/test-* |
|
- config_name: pptx |
|
data_files: |
|
- split: train |
|
path: pptx/train-* |
|
- split: validation |
|
path: pptx/validation-* |
|
- split: test |
|
path: pptx/test-* |
|
- config_name: rtf |
|
data_files: |
|
- split: train |
|
path: rtf/train-* |
|
- split: validation |
|
path: rtf/validation-* |
|
- split: test |
|
path: rtf/test-* |
|
- config_name: txt |
|
data_files: |
|
- split: train |
|
path: txt/train-* |
|
- split: validation |
|
path: txt/validation-* |
|
- split: test |
|
path: txt/test-* |
|
--- |
|
# govdocs1: by file extension |
|
|
|
Markdown-parsed versions of the documents in [govdocs1](https://digitalcorpora.org/corpora/file-corpora/files/), grouped by file extension, with light filtering.
|
|
|
## usage |
|
|
|
This loads all the `.doc` files (parsed to markdown with `pandoc`): |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
# If the dataset is gated/private, make sure you have run huggingface-cli login |
|
dataset = load_dataset("BEE-spoke-data/govdocs1-by-extension", "doc") |
|
``` |
|
|
|
## details |
|
|
|
Most documents retain formatting from the original `.doc` (or similar) file, such as tables, which is cool:
|
|
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/dXJWcIUSPKN2y0b9RFT2a.png) |
|
|
|
## info |
|
|
|
https://digitalcorpora.org/corpora/file-corpora/files/ |
|
|
|
|
|
```bibtex
|
@inproceedings{garfinkel2009bringing, |
|
title={Bringing Science to Digital Forensics with Standardized Forensic Corpora}, |
|
author={Garfinkel, Simson and Farrell, Paul and Roussev, Vassil and Dinolt, George}, |
|
booktitle={Digital Forensic Research Workshop (DFRWS) 2009}, |
|
year={2009}, |
|
address={Montreal, Canada}, |
|
url={https://digitalcorpora.org/corpora/file-corpora/files/} |
|
} |
|
``` |