BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions
Abstract
We introduce BLIP3-KALE, a dataset of 218 million image-text pairs that bridges the gap between descriptive synthetic captions and factual web-scale alt-text. KALE augments synthetic dense image captions with web-scale alt-text to generate factually grounded image captions. Our two-stage approach leverages large vision-language models and language models to create knowledge-augmented captions, which are then used to train a specialized VLM for scaling up the dataset. We train vision-language models on KALE and demonstrate improvements on vision-language tasks. Our experiments show the utility of KALE for training more capable and knowledgeable multimodal models. We release the KALE dataset at https://huggingface.co/datasets/Salesforce/blip3-kale
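For reference, below is a minimal sketch of streaming a few records from the released dataset with the Hugging Face `datasets` library. The repository id comes from the release link above; the `train` split name and the per-record fields are assumptions not stated here, so the loop simply prints each record's keys rather than hard-coding column names.

```python
# Minimal sketch: stream a handful of KALE records without downloading the
# full 218M-pair dataset. The split name "train" and the record schema are
# assumptions; inspect the printed keys to see the actual fields.
from datasets import load_dataset

kale = load_dataset("Salesforce/blip3-kale", split="train", streaming=True)

for i, example in enumerate(kale):
    print(f"record {i} fields: {list(example.keys())}")
    if i == 2:  # stop after a few records
        break
```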
Community
The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Precision or Recall? An Analysis of Image Captions for Training Text-to-Image Generation Model (2024)
- TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives (2024)
- Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models (2024)
- NeKo: Toward Post Recognition Generative Correction Large Language Models with Task-Oriented Experts (2024)
- NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training (2024)