Abstract
Answering complex, long-context questions remains a major challenge for large language models (LLMs), as it requires effective question clarification and context retrieval. We propose Agentic Long-Context Understanding (AgenticLU), a framework designed to enhance an LLM's understanding of such queries by integrating targeted self-clarification with contextual grounding within an agentic workflow. At the core of AgenticLU is Chain-of-Clarifications (CoC), in which the model refines its understanding through self-generated clarification questions and corresponding contextual groundings. By scaling inference as a tree search, where each node represents a CoC step, we achieve 97.8% answer recall on NarrativeQA with a search depth of up to three and a branching factor of eight. To amortize the high cost of this search process into training, we collect the per-step preference pairs produced by the CoC workflow and perform two-stage model finetuning: (1) supervised finetuning to learn effective decomposition strategies, and (2) direct preference optimization to enhance reasoning quality. This enables AgenticLU models to generate clarifications and retrieve relevant context effectively and efficiently in a single inference pass. Extensive experiments across seven long-context tasks demonstrate that AgenticLU significantly outperforms state-of-the-art prompting methods and specialized long-context LLMs, achieving robust multi-hop reasoning while sustaining consistent performance as context length grows.
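To make the CoC tree search concrete, here is a minimal Python sketch of the data-generation loop: each node holds the clarification/grounding trace so far, each expansion samples several candidate next steps, and a trace is kept once its final answer matches the reference. The helpers `query_llm` and `answer_is_correct`, and all prompt wording, are hypothetical placeholders rather than the authors' implementation; only the depth (3) and branching factor (8) come from the abstract.

```python
# Minimal sketch (not the authors' code) of Chain-of-Clarifications (CoC) tree search.
from dataclasses import dataclass, field

BRANCHING_FACTOR = 8   # candidate clarifications expanded per node (from the abstract)
MAX_DEPTH = 3          # maximum CoC steps (from the abstract)

def query_llm(prompt: str, n: int = 1) -> list[str]:
    """Placeholder for sampling n completions from the base model."""
    return [f"<completion {i}>" for i in range(n)]

def answer_is_correct(answer: str, gold: str) -> bool:
    """Placeholder recall check against the reference answer."""
    return gold.lower() in answer.lower()

@dataclass
class CoCNode:
    trace: list[str] = field(default_factory=list)  # clarification Q/A + grounding so far

def coc_tree_search(question: str, context: str, gold: str):
    """Expand self-clarification steps as a tree; return the first trace whose
    final answer matches the reference (later reused as training data)."""
    frontier = [CoCNode()]
    for _ in range(MAX_DEPTH):
        next_frontier = []
        for node in frontier:
            prompt = f"{context}\n\nQuestion: {question}\n" + "\n".join(node.trace)
            # Each branch adds a self-generated clarification plus its contextual grounding.
            for step in query_llm(prompt + "\nNext clarification + grounding:", n=BRANCHING_FACTOR):
                child = CoCNode(node.trace + [step])
                answer = query_llm(prompt + f"\n{step}\nFinal answer:")[0]
                if answer_is_correct(answer, gold):
                    return child.trace, answer  # successful trace becomes a "chosen" example
                next_frontier.append(child)
        frontier = next_frontier
    return None, None  # no trace recovered the answer within the depth budget
```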
Community
LLMs struggle with long-context reasoning: retrieving key info & clarifying complex queries. We introduce Agentic Long-Context Understanding (AgenticLU), an agentic framework that:
✅ Uses Chain-of-Clarifications (CoC) to iteratively refine queries & retrieve relevant evidence.
✅ Scales training data generation as a tree search, achieving 97.8% answer recall on NarrativeQA.
✅ Amortizes the data generation cost into training with two-stage finetuning (SFT + DPO) for efficient single-pass inference (sketched below).
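As a rough illustration of the last point, the sketch below (not the released training code) pairs a successful CoC trace with a failed sibling trace at the same step to form DPO preference records, and keeps the successful steps alone as SFT targets. All field names are assumptions, not the actual schema of the released dataset.

```python
# Sketch only: turning tree-search traces into SFT and DPO training records.
# Field names ("prompt", "completion", "chosen", "rejected") are assumed.

def build_training_records(question, context, good_trace, bad_trace):
    """good_trace / bad_trace: lists of CoC steps from the same search tree,
    where good_trace led to the correct answer and bad_trace did not."""
    sft_records, dpo_records = [], []
    for step_idx, chosen_step in enumerate(good_trace):
        prompt = (
            f"{context}\n\nQuestion: {question}\n"
            + "\n".join(good_trace[:step_idx])
            + "\nNext clarification + grounding:"
        )
        # Stage 1 (SFT): imitate the successful clarification step.
        sft_records.append({"prompt": prompt, "completion": chosen_step})
        # Stage 2 (DPO): prefer the successful step over a failed sibling, if one exists.
        if step_idx < len(bad_trace):
            dpo_records.append(
                {"prompt": prompt, "chosen": chosen_step, "rejected": bad_trace[step_idx]}
            )
    return sft_records, dpo_records
```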
📜 Read the paper: https://arxiv.org/pdf/2502.15920
🔗 Code & data: https://github.com/EvanZhuang/AgenticLU
🤗 Dataset: https://huggingface.co/datasets/yzhuang/Agentic-Long-Context-Understanding-QA