
Tony Wu

tonywu71

AI & ML interests

RAG, LLMs, ASR

Recent Activity

Liked a model about 6 hours ago: answerdotai/ModernBERT-large
Liked a model about 6 hours ago: answerdotai/ModernBERT-base
Updated a model 5 days ago: vidore/colpali-v1.3

Organizations

Illuin Technology, Blog-explorers, ILLUIN Vidore, EVEIL, PDFPages

Posts (1)

ColPali: A new approach to efficient and intelligent document retrieval 🚀

Our latest research paper, "ColPali: Efficient Document Retrieval with Vision Language Models," introduces a new approach to large-scale visual document retrieval. By leveraging Vision Language Models (VLMs), we have built a document retrieval framework that is both powerful and efficient.

Key Insights:
💡 ColPali combines ColBERT's multi-vector late-interaction strategy with the document-understanding capabilities of VLMs (a minimal scoring sketch follows this list)
⚙️ ColPali is based on PaliGemma-3B (SigLIP + Gemma-2B) with a linear projection layer, and is trained to maximize the similarity between document and query embeddings
📊 The Vision Document Retrieval benchmark (ViDoRe) is a challenging benchmark that spans various industry topics and aims to match real-life retrieval scenarios
🏆 ColPali outperforms existing models on all datasets in ViDoRe (average NDCG@5 of 81.3% vs. 67.0% for the best baseline model)
⚡ ColPali embeds documents faster than traditional PDF-parser pipelines, making it viable for industrial use
🔍 ColPali is highly interpretable thanks to patch-based similarity maps

Dive deeper into ColPali and explore our resources:
📑 Full paper: arxiv.org/abs/2407.01449
🛠️ Datasets, model weights, evaluation code, leaderboard, demos: huggingface.co/vidore

Shoutout to my amazing co-authors Manuel Faysse (@manu) and Hugues Sibille (@HugSib). We are grateful for the invaluable feedback from Bilel Omrani, Gautier Viaud, Celine Hudelot, and Pierre Colombo. This work is sponsored by ILLUIN Technology. ✨

Datasets

None public yet