arxiv:2401.00908

DocLLM: A layout-aware generative language model for multimodal document understanding

Published on Dec 31, 2023
· Submitted by akhaliq on Jan 3
#1 Paper of the day

Abstract

Enterprise documents such as forms, invoices, receipts, reports, contracts, and other similar records often carry rich semantics at the intersection of textual and spatial modalities. The visual cues offered by their complex layouts play a crucial role in comprehending these documents effectively. In this paper, we present DocLLM, a lightweight extension to traditional large language models (LLMs) for reasoning over visual documents, taking into account both textual semantics and spatial layout. Our model differs from existing multimodal LLMs by avoiding expensive image encoders and focusing exclusively on bounding box information to incorporate the spatial layout structure. Specifically, the cross-alignment between text and spatial modalities is captured by decomposing the attention mechanism in classical transformers into a set of disentangled matrices. Furthermore, we devise a pre-training objective that learns to infill text segments. This approach allows us to address irregular layouts and heterogeneous content frequently encountered in visual documents. The pre-trained model is fine-tuned using a large-scale instruction dataset, covering four core document intelligence tasks. We demonstrate that our solution outperforms SotA LLMs on 14 out of 16 datasets across all tasks, and generalizes well to 4 out of 5 previously unseen datasets.
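For readers wondering what the "disentangled matrices" amount to in practice, here is a minimal sketch of layout-aware attention in the spirit the abstract describes: the attention score is a sum of text-to-text, text-to-box, box-to-text, and box-to-box terms computed with separate projections. This is an illustration under assumptions (the layer names, mixing weights, and dimensions are invented here), not the authors' implementation.

```python
import torch
import torch.nn as nn

class DisentangledSpatialAttention(nn.Module):
    """Single-head sketch: attention scores decomposed into text and bounding-box terms."""

    def __init__(self, d_model: int, d_box: int,
                 lam_ts: float = 1.0, lam_st: float = 1.0, lam_ss: float = 1.0):
        super().__init__()
        # Separate projections for the text hidden states ...
        self.q_text = nn.Linear(d_model, d_model)
        self.k_text = nn.Linear(d_model, d_model)
        self.v_text = nn.Linear(d_model, d_model)
        # ... and for the encoded bounding boxes (spatial modality).
        self.q_box = nn.Linear(d_box, d_model)
        self.k_box = nn.Linear(d_box, d_model)
        # Scalar weights mixing the cross-modal score terms (hyperparameters here).
        self.lam = (lam_ts, lam_st, lam_ss)

    def forward(self, text_hidden, box_embed, causal_mask):
        qt, kt, vt = self.q_text(text_hidden), self.k_text(text_hidden), self.v_text(text_hidden)
        qs, ks = self.q_box(box_embed), self.k_box(box_embed)
        scale = qt.size(-1) ** 0.5
        # Four disentangled score matrices: text-text, text-box, box-text, box-box.
        scores = (qt @ kt.transpose(-2, -1)
                  + self.lam[0] * (qt @ ks.transpose(-2, -1))
                  + self.lam[1] * (qs @ kt.transpose(-2, -1))
                  + self.lam[2] * (qs @ ks.transpose(-2, -1))) / scale
        scores = scores.masked_fill(causal_mask == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ vt
```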

Community

Sorry I dozed off when I heard "enterprise documents" but I'm awake again now. What about, for instance, service repair manuals? These are a lot more challenging to address than invoices for your NIPS hotel.

What 16 datasets? The link to the PDF doesn't work. Why are these 16 datasets considered SoA? Why are dataset benchmarks the only indicator of "progress" in NLP?

What is a "textual modality"? What is your model of "textual semantics"? What is "textual semantics"? Are you talking about natural language semantics? How do you integrate linguistic knowledge with "spatial modalities"?

Are code and model weights available for this model?

Any plans to release a space demo?

Waiting for the model weights and a model card so I can use it. If there's a release plan, please share it soon.

Hey folks, where is the model? How can we try it?

Does anyone know what OCR engine they used, or what the best one is for commercial use? I feel like GPT-4 is bottlenecked by the OCR.

Looks like they are using Tesseract.
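For anyone who wants to reproduce that kind of input, below is a minimal sketch of extracting words plus bounding boxes with Tesseract via pytesseract; the file path is a placeholder, and the paper's actual OCR pipeline and settings are not documented here, so treat this only as an example of getting text + layout pairs.

```python
import pytesseract
from pytesseract import Output
from PIL import Image

# OCR a page image and keep each word together with its bounding box.
# "page.png" is a placeholder path, not a file from the paper.
img = Image.open("page.png")
data = pytesseract.image_to_data(img, output_type=Output.DICT)

tokens = []
for word, x, y, w, h, conf in zip(data["text"], data["left"], data["top"],
                                  data["width"], data["height"], data["conf"]):
    if word.strip() and float(conf) > 0:   # skip empty and unrecognized boxes
        tokens.append({"text": word, "bbox": (x, y, x + w, y + h)})

print(tokens[:5])
```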

Where can I access the code for this model?

I read the paper, then came to HuggingFace for the model, disappointed not to find it here.

model and code please?

Great tease.. model and code?

Tease... (っ Β°Π” Β°;)っ

EDIT: discussed via email; the authors will update the arXiv version to reflect that the current results are on the validation set, and they will make an effort to add their results to the public leaderboard for the test set ^^

Hi, main author of the DUDE dataset here. How did you generate results on DUDE without submitting predictions for evaluation on the test set to the RRC platform? (https://rrc.cvc.uab.es/?ch=23&com=evaluation&task=1)

Similarly, how did you generate the results for GPT-4?

If you used the validation set for evaluation, please contact me for help in getting unbiased results on the test set ;)

The reference baselines are too weak; outperforming them does not mean this paper's model is better.

so still no code yet?

Any updates on when this model will become available?

C'mon Big Banking! The people have spoken! Give the people what they want! Give them the modalities!

I have reimplemented the model architecture on top of baichuan2-7b; it is available at https://huggingface.co/JinghuiLuAstronaut/DocLLM_baichuan2_7b. However, the newly added parameters are randomly initialized, so you will need to continue pre-training or fine-tune it yourself.
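In case it helps others, here is a minimal loading sketch for that checkpoint. The Baichuan2-based repo ships custom modeling code, so trust_remote_code is needed; the exact classes and any extra layout inputs are assumptions on my part, so check the repository files for the real interface.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JinghuiLuAstronaut/DocLLM_baichuan2_7b"

# trust_remote_code pulls in the repo's custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

# The newly added layout parameters are randomly initialized, so this checkpoint
# is a starting point for continued pre-training or fine-tuning, not a ready model.
print(sum(p.numel() for p in model.parameters()) / 1e9, "B parameters")
```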

This seems superseded by DocOwl1.5, which just came out a few days ago. https://huggingface.co/mPLUG/DocOwl1.5

DocLLM: Revolutionizing Document Understanding with Layout-Aware AI

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix
