arxiv:2406.08487

Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models

Published on Jun 12 · Submitted by yifanzhang114 on Jun 13
Abstract

Seeing clearly at high resolution is foundational for Large Multimodal Models (LMMs) and has proven vital for visual perception and reasoning. Existing works usually employ a straightforward resolution-upscaling method in which the image is processed through global and local branches, the latter consisting of sliced image patches resized to the same resolution as the former. This means that higher resolution requires more local patches, resulting in exorbitant computational expense, while the dominance of local image tokens may diminish the global context. In this paper, we dive into these problems and propose a new framework along with an elaborate optimization strategy. Specifically, we extract contextual information from the global view using a mixture of adapters, based on the observation that different adapters excel at different tasks. For local patches, learnable query embeddings are introduced to reduce the number of image tokens, and the tokens most relevant to the user question are further selected by a similarity-based selector. Our empirical results demonstrate a "less is more" pattern, where utilizing fewer but more informative local image tokens leads to improved performance. Besides, a significant challenge lies in the training strategy, as simultaneous end-to-end training of the global mining block and the local compression block does not yield optimal results. We thus advocate an alternating training scheme, ensuring balanced learning between global and local aspects. Finally, we also introduce a challenging dataset with high requirements for image detail, enhancing the training of the local compression layer. The proposed method, termed LMM with Sophisticated Tasks, Local image compression, and Mixture of global Experts (SliME), achieves leading performance across various benchmarks with only 2 million training samples.
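The abstract describes compressing local patches with learnable queries and then keeping only the tokens most relevant to the user question. As a rough illustration of that selection step (not the authors' implementation; the function name, the cosine-similarity measure, and the `top_k` budget are assumptions), a minimal PyTorch sketch might look like:

```python
import torch
import torch.nn.functional as F

def select_local_tokens(local_tokens: torch.Tensor,
                        question_emb: torch.Tensor,
                        top_k: int = 64) -> torch.Tensor:
    """Keep the local image tokens most relevant to the user question.

    local_tokens: (num_tokens, dim) compressed local-patch tokens
    question_emb: (dim,) pooled embedding of the user question
    top_k:        number of tokens to retain (the "less is more" budget)
    """
    # Cosine similarity between each local token and the question embedding
    sims = F.cosine_similarity(local_tokens, question_emb.unsqueeze(0), dim=-1)
    # Indices of the top-k most question-relevant tokens
    idx = sims.topk(k=min(top_k, local_tokens.size(0))).indices
    # Return the selected tokens in their original order
    return local_tokens[idx.sort().values]
```

The selected tokens would then be concatenated with the global-branch tokens before being fed to the language model; in this sketch the similarity is computed against a single pooled question embedding, which is an assumption for simplicity.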

Community

Paper author and submitter:

A novel framework addressing high-resolution challenges while optimizing computational efficiency for Large Multimodal Models (LMMs).

Very cool! The models and data are at https://huggingface.co/collections/yifanzhang114/slime-665bcb2d0d71762b86fdbd2d

Don't hesitate to add the paper url to the model cards so they are linked automatically from here

