arxiv:2502.18600

Chain of Draft: Thinking Faster by Writing Less

Published on Feb 25
· Submitted by Q-bert on Mar 3
#2 Paper of the day

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more efficient strategy: drafting concise intermediate thoughts that capture only essential information. In this work, we propose Chain of Draft (CoD), a novel paradigm inspired by human cognitive processes, where LLMs generate minimalistic yet informative intermediate reasoning outputs while solving tasks. By reducing verbosity and focusing on critical insights, CoD matches or surpasses CoT in accuracy while using as little as 7.6% of the tokens, significantly reducing cost and latency across various reasoning tasks.
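To make the contrast concrete, here is a minimal sketch of how a CoD-style prompt might be issued alongside a standard CoT prompt, assuming an OpenAI-style chat API. The instruction wording, the model name, the `####` separator, and the GSM8K-style example question are illustrative assumptions, not quoted from the paper.

```python
# Sketch: issuing the same question with a CoT-style vs. a CoD-style instruction.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

COT_SYSTEM = (
    "Think step by step to answer the question. "
    "Return the final answer after the separator ####."
)
COD_SYSTEM = (
    "Think step by step, but keep each reasoning step to a short draft of a few words. "
    "Return the final answer after the separator ####."
)

QUESTION = (
    "Jason had 20 lollipops. He gave Denny some lollipops. "
    "Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"
)

def ask(system_prompt: str, question: str) -> str:
    """Send one question with the given reasoning-style instruction and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; the paper evaluates several models
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print("CoT:", ask(COT_SYSTEM, QUESTION))
print("CoD:", ask(COD_SYSTEM, QUESTION))
```

The only difference between the two calls is the system instruction: the CoD variant asks the model to keep each intermediate step to a terse draft, which is where the token savings come from.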

Community

Here is an article on ajithp.com featuring this paper: https://ajithp.com/2025/03/02/chain-of-draft-llm-prompting/

Paper submitter

An update of the paper is coming soon, in which we will discuss the limitations of CoD.

Update:
The updated paper is now live on arXiv (see Section 4.5).
The paper has received a lot of attention recently, so I want to make it clear that CoD is not a replacement for CoT, as it has its own limitations. However, it provides a low-cost, low-latency alternative to CoT. As many have pointed out, the best solution might be a router model that decides whether to use CoD or CoT based on the specific use case (a rough sketch of this idea appears below).
We also want to point out that current models are not trained on CoD-style data; we anticipate that if CoD-style data were added to the training process, we would see much more consistent CoD performance across tasks and models.
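As a rough sketch of the router idea mentioned above: a small component could choose a CoD-style prompt for routine questions and fall back to a CoT-style prompt when the task looks like it needs more elaborate reasoning. The keyword/length heuristic, prompt wording, and thresholds below are purely illustrative assumptions, not from the paper.

```python
# Illustrative router: pick a CoD-style prompt for simple questions,
# a CoT-style prompt for questions that look harder. Heuristic only.
COD_PROMPT = "Think step by step, but keep each step to a short draft. Answer after ####."
COT_PROMPT = "Think step by step in detail. Answer after ####."

HARD_MARKERS = ("prove", "derive", "explain why", "compare and contrast")

def pick_prompt(question: str) -> str:
    """Return a CoD-style prompt for short, routine questions, a CoT-style prompt otherwise."""
    looks_hard = len(question.split()) > 80 or any(m in question.lower() for m in HARD_MARKERS)
    return COT_PROMPT if looks_hard else COD_PROMPT

print(pick_prompt("What is 17 + 25?"))                                   # -> CoD-style prompt
print(pick_prompt("Prove that the sum of two even numbers is even."))    # -> CoT-style prompt
```

In practice such a router could also be a small learned classifier rather than a fixed heuristic; the point is simply that CoD and CoT can be selected per request rather than one replacing the other.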


