LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding
Abstract
Large multimodal models (LMMs) are processing increasingly longer and richer inputs. Despite this progress, few public benchmarks are available to measure such development. To mitigate this gap, we introduce LongVideoBench, a question-answering benchmark that features video-language interleaved inputs up to an hour long. Our benchmark includes 3,763 web-collected videos of varying length with their subtitles across diverse themes, designed to comprehensively evaluate LMMs on long-term multimodal understanding. To achieve this, we interpret the primary challenge as accurately retrieving and reasoning over detailed multimodal information from long inputs. Accordingly, we formulate a novel video question-answering task termed referring reasoning: each question contains a referring query that references related video contexts, called the referred context, and the model is required to reason over relevant video details from that referred context. Following the paradigm of referring reasoning, we curate 6,678 human-annotated multiple-choice questions across 17 fine-grained categories, establishing one of the most comprehensive benchmarks for long-form video understanding. Evaluations suggest that LongVideoBench presents significant challenges even for the most advanced proprietary models (e.g., GPT-4o, Gemini-1.5-Pro, GPT-4-Turbo), while their open-source counterparts show an even larger performance gap. In addition, our results indicate that model performance on the benchmark improves only when models are capable of processing more frames, positioning LongVideoBench as a valuable benchmark for evaluating future-generation long-context LMMs.
Community
Official website: https://longvideobench.github.io
HF dataset: https://huggingface.co/datasets/longvideobench/LongVideoBench
GitHub repo: https://github.com/longvideobench/LongVideoBench
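Quick start (a minimal sketch, not the official loader): the snippet below assumes the benchmark's question metadata can be loaded with the Hugging Face `datasets` library, and the split name and field names (`question`, `candidates`, `correct_choice`) are illustrative assumptions; see the dataset card and GitHub repo for the released schema and for how to pair questions with video frames and subtitles.

```python
# Minimal sketch under stated assumptions: load LongVideoBench question metadata
# from the Hugging Face Hub and print one multiple-choice item.
# NOTE: the split name and the field names "question", "candidates", and
# "correct_choice" are assumed for illustration and may differ from the release.
from datasets import load_dataset

ds = load_dataset("longvideobench/LongVideoBench", split="validation")

item = ds[0]
print(item["question"])  # referring query + question text
for idx, choice in enumerate(item["candidates"]):
    print(f"  ({chr(ord('A') + idx)}) {choice}")
print("answer:", item["correct_choice"])
```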
Thank you for sharing this interesting paper on long-context video-language understanding. It's great to see more research in this important area.
We recently published a related paper, LVBench: An Extreme Long Video Understanding Benchmark, which also explores challenges in long-form video understanding.
Thank you. We hope our benchmarks can help LMMs improve their long-context abilities in the future.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding (2024)
- Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis (2024)
- Towards Event-oriented Long Video Understanding (2024)
- MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding (2024)
- Needle In A Video Haystack: A Scalable Synthetic Framework for Benchmarking Video MLLMs (2024)