Abstract
Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more efficient strategy: drafting concise intermediate thoughts that capture only essential information. In this work, we propose Chain of Draft (CoD), a novel paradigm inspired by human cognitive processes, where LLMs generate minimalistic yet informative intermediate reasoning outputs while solving tasks. By reducing verbosity and focusing on critical insights, CoD matches or surpasses CoT in accuracy while using as little as 7.6% of the tokens, significantly reducing cost and latency across various reasoning tasks.
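To make the contrast concrete, here is a minimal sketch of how CoD can be elicited purely through prompting. The `openai` client and the `gpt-4o` model name are assumptions, and the system prompts paraphrase the CoT/CoD instructions described in the paper:

```python
# Minimal sketch: eliciting Chain of Draft (CoD) vs. Chain of Thought (CoT)
# through the system prompt alone. Assumes the `openai` Python client
# (reads OPENAI_API_KEY from the environment); prompt wording paraphrases
# the instructions described in the paper.
from openai import OpenAI

client = OpenAI()

COT_SYSTEM = (
    "Think step by step to answer the following question. "
    "Return the answer at the end of the response after a separator ####."
)

COD_SYSTEM = (
    "Think step by step, but only keep a minimum draft for each thinking "
    "step, with 5 words at most. Return the answer at the end of the "
    "response after a separator ####."
)

def solve(question: str, system_prompt: str) -> str:
    """Send one question to the model under the given reasoning style."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any capable chat model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = (
    "Jason had 20 lollipops. He gave Denny some lollipops. "
    "Now Jason has 12 lollipops. How many did he give to Denny?"
)
draft = solve(question, COD_SYSTEM)
print(draft)                            # e.g. "20 - 12 = 8. #### 8"
print(draft.split("####")[-1].strip())  # final answer parsed after the separator
```

The `####` separator makes the final answer trivially parseable, so the terse draft never needs to be shown to the user.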
Community
Here is an article on ajithp.com featuring this paper: https://ajithp.com/2025/03/02/chain-of-draft-llm-prompting/
An update of the paper is coming soon, in which we will discuss the limitations of CoD.
Update: the updated paper is now live on arXiv (see Section 4.5).
The paper has gotten a lot of attention recently, so I want to make it clear that CoD is not a replacement for CoT, as it has its own limitations. However, it provides a low-cost, low-latency alternative to CoT. As many have pointed out, the best solution might be a router model that decides whether to use CoD or CoT based on the specific use case.
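As a purely illustrative sketch of that routing idea (the heuristic and the `route` helper are hypothetical, not from the paper; `solve`, `COT_SYSTEM`, and `COD_SYSTEM` are reused from the snippet above):

```python
# Hypothetical router: send routine questions through cheap CoD and fall
# back to full CoT for ones that look harder. This length/multi-part
# heuristic is illustrative only; the paper does not specify a router.
def route(question: str) -> str:
    hard = len(question.split()) > 80 or question.count("?") > 1
    return COT_SYSTEM if hard else COD_SYSTEM

answer = solve(question, route(question))
```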
We also want to point out that current models are not trained on CoD-style data; we anticipate that if CoD data were added to the training process, we would see much more consistent CoD performance across tasks and models.
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs (2025)
- IAO Prompting: Making Knowledge Flow Explicit in LLMs through Structured Reasoning Templates (2025)
- Inference-Time Computations for LLM Reasoning and Planning: A Benchmark and Insights (2025)
- Understanding Before Reasoning: Enhancing Chain-of-Thought with Iterative Summarization Pre-Prompting (2025)
- Reasoning on a Spectrum: Aligning LLMs to System 1 and System 2 Thinking (2025)
- Efficient Reasoning with Hidden Thinking (2025)
- Meta-Reasoner: Dynamic Guidance for Optimized Inference-time Reasoning in Large Language Models (2025)